Trends
Why lack of clear regulation limits advancement of AI in healthcare
The healthcare industry has been at the forefront of the AI revolution. The benefits of AI are easy to see in healthcare – helping patients overcome health literacy barriers, speeding up R&D and the completion of clinical trials, diagnosing rare diseases more effectively, reducing errors in patient data exchange across organizations and computing systems, and automating time-consuming business processes – all feeding into the utopian goals of reducing costs and risks while strengthening regulatory compliance.
AI regulations – Current state & gaps
A lack of clear regulation governing the use of AI in healthcare is limiting its advancement. Healthcare has long been one of the most heavily regulated industries, with frameworks such as the FDA's GxP guidelines and the electronic-records requirements of 21 CFR Part 11. These regulations encourage good practices, increase transparency, and make the various stakeholders more accountable for managing patient health. Unfortunately, regulatory regimes have failed to keep pace with technological advancement, particularly AI.
Exacerbating the problem is the mushrooming of startups focused on applying AI to different aspects of the healthcare value chain. With the support of angel investors and venture capital funds, more than 4,000 AI startups launched in the last two years, and analysts expect 10,000 more in the coming years. With so much innovation and investment under way, it is even more imperative to address valid concerns about bias in training data, hallucination and the risk of incorrect results, data privacy and information security, the lack of consistent ethical standards across regions, and the interoperability of data between healthcare systems, among others.
In the absence of federal standards, many states have proposed separate AI bills and regulations, which, if passed, would create a patchwork rather than the holistic approach essential to drive industry adoption. California, one of the biggest tech hubs in America, is trying to address these issues with SB 1047 and AB 2013, AI regulation bills that currently sit on the desk of Governor Gavin Newsom. SB 1047 is an ambitious bill designed to enforce safety standards for the development of large AI models, and it could also act as a bellwether for future AI regulation and shape the future of healthcare in the United States.
In the global arena, the European Union has taken the lead in defining AI regulations with the AI Act, adopted in March 2024. Under the Act, any AI system that is a Class IIa (or higher) medical device, or that uses an AI system as a safety component, is designated as “high risk.” Such systems will need to comply with a raft of additional requirements, many of which overlap with the already rigorous conformity-assessment requirements under the EU MDR and IVDR. However, the AI Act does not cover the use of AI in many critical areas, such as drug discovery, clinical trial recruitment, pharmacovigilance, and member enrollment.
There is also a lack of clear guidance on how the audit and computer system validation (CSV) requirements of 21 CFR Part 11, such as the IQ/OQ/PQ processes, apply to AI systems in healthcare.
Bias & incorrect data sets for training
Another key challenge is inherent bias and incorrect information in the datasets available for training.
One of the most visible cases highlighting this challenge is research by Derek Driggs, an ML researcher at the University of Cambridge. Driggs built an ML model for disease diagnosis that appeared to be more accurate than physicians. Further investigation, however, revealed that the model was flawed: it had been trained on a dataset that included scans of patients both lying down and standing up. Because patients scanned lying down were much more likely to be seriously ill, the algorithm learned to infer disease risk from the person’s position in the scan rather than from clinically relevant features.
This case sounds eerily like another famous experiment in which an AI model learned to tell wolves from huskies by searching for snow: the training dataset included a disproportionate number of wolf images taken in wintry settings, so the model keyed on the background instead of the animals and their differences.
These examples are not one-off cases. Bias and incorrect datasets are intrinsic, constant risks for every AI model. A robust AI framework would help build the transparency and predictability that patients and regulators need to trust AI models.
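To make the risk concrete, here is a minimal sketch of this kind of shortcut learning on synthetic, hypothetical data. It assumes scikit-learn and NumPy; the feature names and numbers are illustrative only and are not taken from the Driggs study.

```python
# A minimal sketch of shortcut learning on synthetic, hypothetical data.
# Assumes scikit-learn and NumPy; feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
severe = rng.integers(0, 2, n)                       # 1 = seriously ill
# Spurious correlate: seriously ill patients are usually scanned lying down.
lying_down = np.where(rng.random(n) < 0.9, severe, 1 - severe)
# A genuinely predictive, but noisier, clinical measurement.
clinical = severe + rng.normal(0.0, 1.5, n)

X = np.column_stack([lying_down, clinical])
model = LogisticRegression().fit(X, severe)
print(dict(zip(["lying_down", "clinical"], model.coef_[0].round(2))))
# The weight on 'lying_down' dwarfs the clinical signal: the model has learned
# the scanning position, not the disease.
```

In a sketch like this, the spurious feature carries far more weight than the clinical signal, which is exactly the failure mode that documentation, validation, and transparency requirements are meant to surface.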
AI-driven diagnostics, automation, drug discovery
Many other use cases have emerged in which AI is applied to patient care, disease diagnosis, workflow automation, drug discovery, and more. These use cases primarily fall into areas where AI is used to increase the speed and accuracy of existing work patterns. Let’s look at a few of them:
- Diagnosing Patients: AI algorithms analyze medical imaging data, such as X-rays, MRIs, and CT scans, to assist healthcare professionals in accurate and swift diagnoses.
- Transcribing Medical Documents: Automatic Speech Recognition (ASR) technology employs advanced algorithms and machine learning models to convert spoken language into written text, providing a more efficient and accurate method for documenting medical information (a brief sketch follows this list).
- Drug Discovery and Development: AI accelerates the drug discovery process by analyzing vast datasets to identify potential drug candidates and predict their efficacy.
- Administrative Efficiency: AI streamlines administrative tasks, such as billing and scheduling, reducing paperwork and improving overall operational efficiency within healthcare organizations.
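As a concrete illustration of the transcription use case above, the snippet below is a minimal sketch using the open-source openai-whisper package; the model size and the audio file name are placeholders, and any real deployment would need clinician review and privacy safeguards.

```python
# Minimal speech-to-text sketch using the open-source openai-whisper package.
# "base" and "dictation.mp3" are illustrative placeholders, not a recommended setup.
import whisper

model = whisper.load_model("base")           # load a pretrained ASR model
result = model.transcribe("dictation.mp3")   # transcribe a recorded dictation
print(result["text"])                        # draft note text for clinician review
```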
While the number of use cases and their adoption keep increasing, a fundamental paradox sits at their core.
The current approach to healthcare regulation is predicated on predictability. Regulators like the FDA review approval requests for drugs and devices (NDA/BLA/PMA submissions) based on the safety and efficacy data generated in controlled trials.
The world of AI, however, is predicated on continuous training with new datasets. Limiting an AI model to a controlled dataset or constrained environment defeats the purpose of the machine learning capability that makes AI effective. AI models are expected to change their responses to the same question as they learn from even the smallest differences in their surrounding environment.
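The sketch below illustrates this tension with synthetic numbers and scikit-learn’s incremental (partial_fit) training: as new batches arrive, the model’s answer to the very same input can shift, which is exactly what a validate-once-and-freeze regime struggles to accommodate.

```python
# Illustrative sketch of incremental learning on synthetic data: the model's
# prediction for the same query may shift as it keeps learning from new batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
query = np.array([[0.6, 0.6]])

# Initial training batch: low values -> class 0, high values -> class 1.
X1, y1 = np.array([[0.0, 0.0], [0.2, 0.2], [0.8, 0.8], [1.0, 1.0]]), [0, 0, 1, 1]
model.partial_fit(X1, y1, classes=[0, 1])
print("before new data:", model.predict(query))

# New data near the query arrives, all labeled 0, shifting the boundary.
X2, y2 = np.array([[0.55, 0.55], [0.6, 0.65], [0.65, 0.6], [0.7, 0.55]]), [0, 0, 0, 0]
model.partial_fit(X2, y2)
print("after new data: ", model.predict(query))
```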
These two approaches are at odds with each other. Without a clear regulatory framework, the sponsors and users of an AI use case assume the liability and brand risk themselves. This is truly a new frontier, and we need breakthroughs in health policy as dramatic as AI itself.
Conclusion
The World Health Organization’s recent publication listing key regulatory considerations on artificial intelligence (AI) for health can be a good starting point for untangling these complex issues. The 18 regulatory considerations discussed in the publication fall under six broad categories: documentation and transparency, risk management, intended use and validation, data quality, privacy and data protection, and engagement and collaboration.
It is up to the industry, regulators, academia, legislators, and patient organizations to come together to ensure that AI delivers on the promise of a better (and healthier) world.