Radiology Partners, Aidoc talk AI adoption, handling bias, FDA actions

Artificial intelligence and machine learning have gained popularity in the medical technology industry in recent years. Some of the top players in the space are developing systems in-house or buying their way into the competition.

Medtronic, GE, and Philips have all invested in AI and machine learning, claiming the technologies can better diagnose and treat patients. The wave of investment and adoption has created new challenges for the healthcare industry and prompted the FDA to consider introducing new regulatory review procedures specifically for AI and machine learning technologies.

Rich Whitney, CEO of Radiology Partners, a U.S.-based radiology practice, said that while interest in AI has increased recently, obstacles in healthcare such as fragmentation and high levels of regulation have prevented widespread adoption.

One obstacle is low acceptance among doctors. However, Whitney said Radiology Partners' recent partnership with medical imaging AI company Aidoc should help address the problem.

Nina Kottler, Associate CMO, Clinical AI at Radiology Partners, believes a key benefit of using AI systems in radiology is to improve patient care by identifying health concerns more quickly. Kottler said one of the key features of the Aidoc algorithms is a triage system that can be used to flag critical exam results for radiologists to prioritize.

“If your patient has an intracranial hemorrhage or a pulmonary embolism, these are findings that can be catastrophic if you don’t discover them,” said Kottler. “The sooner you get to these things, the better it is for patient outcomes.”
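The triage idea described above can be sketched as a priority worklist: exams the AI flags as critical jump ahead of routine reads. This is a minimal, hypothetical illustration (the class and field names are invented for this sketch and are not Aidoc's actual system):

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of an AI-assisted triage worklist: exams flagged
# as critical (e.g. suspected hemorrhage) are read before routine exams,
# with FIFO order preserved within each priority tier.

@dataclass(order=True)
class Exam:
    priority: int                       # 0 = AI-flagged critical, 1 = routine
    arrival_order: int                  # tiebreaker: first in, first out
    study_id: str = field(compare=False)

class TriageWorklist:
    def __init__(self):
        self._heap = []
        self._counter = 0

    def add(self, study_id, ai_flagged_critical):
        priority = 0 if ai_flagged_critical else 1
        heapq.heappush(self._heap, Exam(priority, self._counter, study_id))
        self._counter += 1

    def next_exam(self):
        """Return the study the radiologist should read next."""
        return heapq.heappop(self._heap).study_id

worklist = TriageWorklist()
worklist.add("CT-001", ai_flagged_critical=False)
worklist.add("CT-002", ai_flagged_critical=True)   # flagged critical finding
worklist.add("CT-003", ai_flagged_critical=False)

print(worklist.next_exam())  # CT-002 is read first
```

The point of the design is that the radiologist still reads every exam; the AI only reorders the queue so time-sensitive findings surface sooner.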

The algorithms are also used for oncological exams, which Kottler says can be used to determine whether diseases are getting better or worse.

However, watchdogs fear the pendulum could swing toward AI too quickly. ECRI, for example, warns the technology can be unreliable and underrepresent some patient populations, which can lead to misdiagnosis and inappropriate care decisions.

In an interview with MedTech Dive, Whitney, Kottler and Aidoc CEO Elad Walach discuss how interest in AI has grown and address potential bias and regulatory changes at the FDA.

This interview has been edited for clarity and brevity.

MEDTECH DIVE: How have you seen AI technology change in recent years as this space has received more attention?

RICH WHITNEY: The technology is moving very, very quickly. However, we have not yet reached the stage of development where there is a significant amount of usage and actual impact. The partnership with Aidoc really creates the prospect of much more widespread use of AI and really moves us toward the future we all envision: radiologists who are empowered by AI and able to add significantly more value to the healthcare system.

NINA KOTTLER: The technology has improved a lot and there are many more options in terms of the algorithms available. However, technology alone is not enough. I think what was missing was the connection with the radiologists. The technology is meant to be used in a clinical setting, and since not much of it was actually deployed, there weren't many lessons on how to do that properly.

AI systems need to be deployed with direct assistance from radiologists to ensure they understand how these clinical systems work. We need to make sure it's built into their workflow, and then we need to figure out how to monitor these systems over time to make sure that both the AI and the clinician are working together to improve patient care. And that's not easy.

Do you see vendors prioritizing and investing more in AI systems today than they were two or three years ago?

ELAD WALACH: I can definitely say yes. COVID, while difficult, has incidentally accelerated the trend of healthcare executives seeing value and ROI from software-based solutions. They know there is value that can be captured by using the right technical infrastructure and software. So in terms of prioritization, absolutely. But I also think doctors and radiologists using this technology are building a lot of momentum, understanding that there is value, and analyzing what that value is.

The FDA is considering how best to regulate AI — for example, whether algorithms should be allowed to update without re-review or whether they should remain “locked.” How would a change to this review process affect the industry?

KOTTLER: Instead of locking one algorithm at a time and then repeating the assessment each time that algorithm is improved, the FDA is reviewing the vendor and its practices to see whether the way the vendor updates things is itself good enough. If the vendor's processes are good enough, the output should be good enough. The agency doesn't have to review the output; it can review the vendor's processes. They are only at the beginning. I think they are currently testing it in beta with a couple of large groups, so it will take a while. But I find it pretty fascinating.

WALACH: It's a tough problem the FDA is facing, and much of it is the deluge of products coming onto the market. The question the agency is charged with is: how do we maintain safety and efficacy while ensuring that we can bring innovations to market? The agency has moved quite quickly in terms of creating new processes and new pathways and communicating with companies. So you asked: are we waiting for the FDA to do something? In a sense, yes, but it's a very active process, and there is active engagement with the agency. I think some exciting regulatory changes are coming.

A problem repeatedly raised with AI is bias built into algorithms. How can you prevent it from happening, and how do you fix the problem if it is detected?

WALACH: On the one hand, you want to make sure you've trained the algorithm on a very robust, diverse data set that doesn't skew toward any particular population. On the other hand, even after a product is launched, you can encounter biases that were initially unexpected, so we want to make sure we are monitoring performance over time. For me, it's about fighting bias with data. That data protects against bias in every phase of the product life cycle.
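Monitoring performance over time, as described above, often means tracking a metric like sensitivity separately for each patient subgroup and flagging any group that falls well behind the overall rate. A minimal, hypothetical sketch (the function names, the `site` grouping, and the 10-point threshold are all illustrative assumptions, not any vendor's actual pipeline):

```python
# Hypothetical post-deployment bias check: compare the model's
# sensitivity (true-positive rate) per subgroup against its overall
# sensitivity, and flag subgroups that fall behind by more than a
# chosen gap. All names and thresholds here are illustrative.

def sensitivity(results):
    """Fraction of truly positive cases the model caught, or None if no positives."""
    positives = [r for r in results if r["truth"]]
    if not positives:
        return None
    caught = sum(1 for r in positives if r["prediction"])
    return caught / len(positives)

def subgroup_gaps(results, group_key, max_gap=0.10):
    """Return overall sensitivity plus subgroups lagging it by more than max_gap."""
    overall = sensitivity(results)
    groups = {}
    for r in results:
        groups.setdefault(r[group_key], []).append(r)
    flagged = {}
    for name, rows in groups.items():
        s = sensitivity(rows)
        if s is not None and overall - s > max_gap:
            flagged[name] = s
    return overall, flagged

# Toy deployment log: ground truth vs. model prediction per exam.
results = [
    {"site": "A", "truth": True, "prediction": True},
    {"site": "A", "truth": True, "prediction": True},
    {"site": "B", "truth": True, "prediction": False},
    {"site": "B", "truth": True, "prediction": False},
    {"site": "B", "truth": True, "prediction": True},
]
overall, flagged = subgroup_gaps(results, "site")
print(flagged)  # site B's sensitivity trails the overall rate and is flagged
```

The same check can be grouped by any attribute available in the deployment data (scanner model, demographics, referring site), which is the sense in which data "protects against bias" after launch.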

KOTTLER: Eventually we could go in the opposite direction. For now, we're trying to have data that is as generalizable as possible so the same algorithm can be applied everywhere. Ultimately, though, that means the specificity and value for a given patient decrease, even if only a little.

As the FDA evolves and these AI algorithms evolve, we could have an AI algorithm tailored to a particular population, which means it will be much more accurate for that population.

Where do you see AI heading?

KOTTLER: The next area we get into is predictive medicine. While medicine has always been about treating disease, we really need to be more concerned with disease prevention. AI can help us prevent disease because it detects things that we as humans may not be able to detect. When we combine this information with the other information we have as humans, we can predict which patients are at higher risk.

We will combine this with information from patient systems that are increasingly being used, such as wearables, to enable a more holistic view of the patient.

 
