AI chatbots are already widely used by businesses to greet customers and answer their questions – either over the phone or on websites. Some companies have found that they can, to some extent, replace humans with machines in call centre roles.
However, the available evidence suggests there are sectors – such as healthcare and human resources – where extreme care needs to be taken regarding the use of these frontline tools, and ethical oversight may be necessary.
A recent, and highly publicised, example is that of a chatbot called Tessa, which was used by the National Eating Disorder Association (NEDA) in the US. The organisation had initially maintained a helpline operated by a combination of salaried employees and volunteers. This had the express goal of assisting vulnerable people suffering from eating disorders.
However, this year, the organisation disbanded its helpline staff, announcing that it would replace them with the Tessa chatbot. The reasons for this are disputed. Former workers claim that the shift followed a decision by helpline staff to unionise. The vice president of NEDA cited an increased number of calls and wait times, as well as legal liabilities around using volunteer staff.
Whatever the case, after a very brief period of operation, Tessa was taken offline over reports that the chatbot had issued problematic advice that could have exacerbated the symptoms of people seeking help for eating disorders.
It was also reported that Dr Ellen Fitzsimmons-Craft and Dr C Barr Taylor, two highly qualified researchers who assisted in the creation of Tessa, had stipulated that the chatbot was never intended as a replacement for an existing helpline or to provide immediate assistance to those experiencing intense eating disorder symptoms.
Significant upgrade
So what was Tessa designed for? The researchers, alongside colleagues, had published an observational study highlighting the challenges they faced in designing a rule-based chatbot to interact with users who are concerned about eating disorders. It is quite a fascinating read, illustrating design choices, operations, pitfalls and amendments.
The original version of Tessa was a traditional rule-based chatbot, albeit a highly refined one: it followed a pre-defined structure based on logic and could not deviate from the standardised, pre-programmed responses calibrated by its creators.
Their conclusion included the following point: “Rule-based chatbots have the potential to reach large populations at low cost in providing information and simple interactions but are limited in understanding and responding appropriately to unanticipated user responses”.
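To make the distinction concrete, a rule-based chatbot of this kind can be sketched in a few lines of Python. This is an illustrative toy, not Tessa's actual code: the rules and messages here are invented, and a real system would have far more rules and safety checks. The key property is the one the researchers describe: every reply is pre-programmed, and any input that matches no rule falls through to a fixed fallback.

```python
# A minimal sketch of a rule-based chatbot (illustrative only, not Tessa's code).
# Every response is pre-programmed; the bot cannot generate new text.

RULES = {
    "hello": "Hi! How can I help you today?",
    "goodbye": "Take care, and thank you for reaching out.",
}

FALLBACK = "Sorry, I don't understand. Could you rephrase that?"


def reply(user_message: str) -> str:
    """Return a canned response, or the fallback for unanticipated input."""
    # Normalise the input so simple variations still match a rule.
    key = user_message.strip().lower().rstrip("?!.")
    # The bot cannot deviate from its calibrated responses.
    return RULES.get(key, FALLBACK)
```

An input like "Hello" matches a rule and gets its canned answer, while anything unanticipated, however important, only ever receives the fallback. A generative chatbot, by contrast, composes a new response to every message, which is precisely what makes its output harder to constrain.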
[Image caption: AI chatbots are already widely used to engage with customers or users of a service. Tero Vesalainen / Shutterstock]
This might appear to limit the uses for which Tessa was suitable. So how did it end up replacing the helpline previously used by NEDA? The exact chain of events is disputed, with differing accounts, but, according to NPR, the hosting company of the chatbot changed Tessa from a rules-based chatbot with pre-programmed responses to one with an “enhanced questions and answers feature”.
The later version of Tessa was one employing generative AI, much like ChatGPT and similar products. These advanced AI chatbots are designed to simulate human conversational patterns with the intention of giving more realistic and useful responses. Generating these customised answers relies on large databases of information, which the AI models are trained to “comprehend” through a variety of technological processes: machine learning, deep learning and natural language processing.
Learning lessons
Ultimately, the chatbot generated what have been described as potentially harmful answers to some users’ questions. Ensuing discussions have shifted the blame from one institution to another. However, the point remains that these circumstances could potentially have been avoided if there had been a body providing ethical oversight, a “human in the loop” and an adherence to the clear purpose of Tessa’s original design.
It’s important to learn lessons from cases such as this against the background of a rush towards the integration of AI in a variety of systems. And while these events took place in the US, they contain lessons for those seeking to do the same in other countries.
The UK would appear to have a somewhat fragmented approach to this issue. The advisory board to the Centre for Data Ethics and Innovation (CDEI) was recently dissolved and its seat at the table was taken up by the newly formed Frontier AI Taskforce. There are also reports that AI systems are already being trialled in London as tools to aid workers – though not as a replacement for a helpline.
Both of these examples highlight a potential tension between ethical considerations and business interests. We must hope that the two will eventually align, balancing the wellbeing of individuals with the efficiency and benefits that AI could provide.
However, in some areas where organisations interact with the public, AI-generated responses and simulated empathy may never be enough to replace genuine humanity and compassion – particularly in the areas of medicine and mental health.

