Cautioning people about Dr Chatbot

Many of us have engaged with chatbots, the computerized conversational programs meant to help with things like a banking or utility company query. The recent introduction of large language models (LLMs) has turbocharged these bots and their abilities, as neural network language models can now use patterns of speech from vast numbers of sources to generate human-like interactive conversations.

The use of chatbots in medicine, specifically, has a much longer history than one might think. In 1966, Joseph Weizenbaum at the Massachusetts Institute of Technology developed the ELIZA program, which used natural language processing to mimic a psychotherapist by offering questions or reflections based on text input from a “patient.”

As one might imagine, given the early stages of computer programming, this was a simple program. However, for some of those who used it, it proved to be emotionally engaging in a powerful way. As the story goes, when Weizenbaum first trialed the system with his secretary, she asked him to leave the room after the first couple of interactions, so that she could continue the discussions in private.

Today, the use of chatbots in health care is widespread — most of us already research our symptoms online with Dr Google before seeking “analogue” medical advice. But when we consider the remarkable conversational ability of the new LLMs and their potential to provide health solutions, we need to be mindful of both the benefits and the risks the technologies can bring.

In March of this year, another chatbot named ELIZA gained media prominence in Belgium. This ELIZA was an OpenAI-powered chatbot available on an app called Chai, which is marketed as a general-purpose chatbot. After six weeks of discussing his fears about the environment with this new ELIZA, a man in his 30s took his own life. Media reports noted the app had encouraged him to do so after he proposed sacrificing himself to save the planet.

If one asks this ELIZA whether it's a medical device, it will say it's not; however, it does have “tools built into my system that can help users manage and overcome certain mental health challenges.” Furthermore, if it detects that a user has “serious mental health issues,” it will encourage them to seek professional help, bringing the software under the definition of a “medical device.” This tragic case demonstrates the new types of risks, and the much greater potential for manipulation, that can come with algorithms able to feign sincerity or compassion in a completely new and compelling way.

The regulation of MedTech has always lagged behind the technologies it's trying to regulate. When the first ELIZA was created in the 1960s, medical implants like pacemakers were already available; however, it would be a further three decades before regulations for medical device technologies were introduced in Europe. And when first introduced, these rules didn't address software; specific provisions only came when the system was updated in 2017.

Moreover, these rules for regulating MedTech were largely developed with physical products in mind, and a lot of the work needed to meet the requirements for things like pacemakers involves documenting and controlling the risks that come with such a device. However, identifying all the risks associated with generative models simply isn't possible, as LLMs work in ways that even their developers don't completely understand, which makes managing them a real challenge.

Still, with the Artificial Intelligence Act likely to enter into force in the coming years, regulation in Europe will increase. But even so, it remains to be seen how we'll ensure these technologies incorporate appropriate safety guardrails. Crucially, LLM-based chatbots that provide medical advice currently fall under the medical device regulations; however, their unreliability precludes their approval as such. Despite this, they remain available.

Society expects physicians to be rigorously trained and continually evaluated, and to apply that knowledge with expertise, compassion and ethical standards. Dr Chatbot, meanwhile, can provide convincing advice, but it can't tell you where that advice came from, why it's giving it or how ethical considerations have been weighed.

To earn their place in the medical armory, chatbots will need to demonstrate not only that they're safe and effective but also that they can prevent harmful manipulation. Until the technology and the regulation achieve this, we all need to be aware of both the great potential and the real risks they bring when relied on for medical advice. POLITICO.eu
