International Circuit

European CRMC: Experts deliberate on AI and medical devices

With European legislation on how to regulate the integration of artificial intelligence (AI) into daily life on the horizon, it’s time to consider how its use will impact the medical device industry, experts said at the 2023 RAPS European Clinical and Risk Management Conference.

The clinical evaluation of medical devices with AI is an “ever-changing landscape” that regulators will increasingly receive applications for, observed Rachel Mead, clinical regulatory lead at BSI Group. “This is a rapidly expanding field. We’re hearing more and more about how it can bring about transformation in the medical device field in terms of disease prevention, detection and diagnosis,” she added.

“What we need to think about is what the impact is on the regulatory framework; how we actually manage these sorts of devices moving forward from a compliance point of view,” Mead said, adding that the aim of the pending EU AI Act is not to stifle innovation as AI-linked systems hold a lot of promise for being able to adapt treatment protocols to meet individual patient needs. The problem is showing how research results translate into clinical practice and how they might provide a benefit in the long term.

While there are certainly knowledge gaps to fill, help for regulators may come from the act, which is currently under negotiation by EU member states and is expected to be finalized soon. The act is based on the findings of a white paper published by the European Commission in 2020, which concluded that regulation was warranted and that a risk-based approach was preferred among the respondents to a public consultation.

“At the moment, it looks as though medical devices will be classed as high-risk devices and they will be subject to conformity assessment,” Mead said, “but then, they are subject to conformity assessment under the [EU Medical Device Regulation (MDR)] and [EU In Vitro Diagnostic Medical Devices Regulation (IVDR)] legislation, so I think we wait and see exactly how this is going to pan out.”

Mead pointed out how the MDR specified the need for sufficient clinical evidence – determined and justified by the manufacturer – and for a clear and unambiguous statement about the intended purpose of the device. The latter is the crucial first step in a solid clinical evaluation plan, and it needs to be linked to the intended clinical benefit.

“It’s actually surprising how often, with these types of devices, that we’re seeing that the manufacturers don’t necessarily understand how to properly define the intended purpose of the device,” said Mead. “They’re good at explaining what it does, but not necessarily what it means for the patients, what the outcomes are actually going to achieve in clinical practice.”

It’s important to specify the intended clinical use and benefit of the AI-based medical device and to do so in the context of the state-of-the-art and what it is doing above and beyond what is usual for current practice, Mead said. Moreover, she advised manufacturers to be mindful of three key components: proving a valid clinical association or scientific validity, detailing the technical or analytical performance, and validating the clinical performance.

The clinical evaluation report (CER) needs to provide a description of the entire device, Mead said, and since AI is an umbrella term, it’s not sufficient to say that the technology is based on machine learning. Give details and be specific, she advised. While there is a provision in MDR for claiming equivalence, Mead suggested that the ability to do this could be limited.

After highlighting the need to comply with the General Data Protection Regulation (GDPR) and some common shortcomings of CERs submitted for AI-based devices, Natascha J. Cuper outlined what notified bodies and EU regulators want to see in clinical evaluation plans (CEPs) and CERs.

“What we want to see, of course, is the selection criteria and the sample size of your training and validation data sets, and how representative they are of the intended use,” said Cuper, internal clinician and product reviewer at Kiwa Dare Services BV, a notified body in the Netherlands. “Have any potential biases in the datasets been considered, and what are the actual demographics – which should include all intended medical indications or applications? We also want to see any pre-processing of the data, for example, labeling, the ground truth, and how you decided on those processes.”

It is also important to consider what happens postmarket and how you will ensure that the AI model continues to perform well and that there is no degradation in performance, she added. Collection of real-world prospective clinical data will also be expected, especially if the clinical evaluation was based on retrospective datasets, said Cuper.

Plan for model maintenance, she advised. “You may have to retrain the model, so make sure that you follow your own [quality management system (QMS)] procedures for change management.” And don’t forget that you may have to inform the relevant notified body as it could change the application, she said. RAPS.org

Copyright © 2024 Medical Buyer
