Why are medical researchers advocating for increased vigilance around GenAI?

A team of medical researchers from Flinders University is advocating for increased vigilance around generative AI (GenAI) after witnessing the technology’s alarming potential for whipping up medical disinformation.

The team used this rapidly evolving form of artificial intelligence in a study to test how false information about health and medical issues might be created and spread.

Using generative AI tools for text, image and video creation, the team attempted to create disinformation about vaping and vaccines. They used publicly available generative AI platforms such as OpenAI’s GPT Playground for text, and DALL-E 2 and HeyGen to produce image and video content.

In just over an hour, the researchers produced more than 100 misleading blog posts, 20 deceptive images and a convincing deep-fake video presenting health disinformation. Disturbingly, the video could be adapted into more than 40 languages, amplifying its potential harm.

The study’s first author, Bradley Menz, a registered pharmacist and Flinders University researcher, said he had serious concerns about the findings, given prior examples of disinformation pandemics that have led to fear, confusion and harm.

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” Menz said.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles.”

Menz suggested that the key pillars of pharmacovigilance — including transparency, surveillance and regulation — could serve as valuable examples for managing these risks and safeguarding public health amid rapidly advancing AI technologies.

Senior author Dr Ashley Hopkins, from the College of Medicine and Public Health, said that there is a clear need for AI developers to collaborate with healthcare professionals to ensure that AI vigilance structures focus on public safety and wellbeing.

“We have proven that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Now there is an urgent need for transparent processes to monitor, report and patch issues in AI tools,” Hopkins said.
