
What if AI In Health Care Is The Next Asbestos?

Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures for diseases that have confounded doctors and to make health care more efficient, personalized, and accessible.

But what if it turns out to be poison?

Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. He used an alarming metaphor to explain his concerns, likening AI in health care to asbestos.

In health care, Zittrain said, AI is particularly problematic because of how easily it can be duped into reaching false conclusions. As an example, he showed an image of a cat that a Google algorithm had correctly categorized as a tabby cat. The next slide showed a nearly identical picture of the cat, with only a few pixels changed, and this time the algorithm was 100 percent positive that the image on the screen was guacamole.

“This is a frontline system … installed across the world for image recognition, and it can be tricked that easily,” Zittrain said. “OK, so now let’s put this in the world of medicine: How do you feel when the (algorithm) spits out with 100 percent confidence that guacamole is what you need to cure what ails you?”
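How little it takes to produce such a flip is easy to demonstrate. Below is a minimal sketch of the kind of gradient-based pixel perturbation (a fast-gradient-sign step) behind examples like the guacamole cat; it assumes a toy, randomly initialized classifier and random pixels rather than the production image-recognition system Zittrain described, so it illustrates the mechanism, not the actual attack.

```python
# Minimal FGSM-style sketch, assuming a toy stand-in classifier (not Google's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for an image classifier: 3x32x32 input, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)  # stand-in for the "tabby cat" photo

with torch.no_grad():
    true_label = model(image).argmax(dim=1)  # treat the model's own top class as "tabby cat"

# The gradient of the loss with respect to the pixels tells us which tiny
# changes push the model hardest toward a wrong answer.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.05  # an imperceptibly small change per pixel
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

with torch.no_grad():
    before = F.softmax(model(image), dim=1).max().item()
    after = F.softmax(model(adversarial), dim=1)
    print(f"original top-class confidence: {before:.2f}")
    print(f"perturbed prediction: class {after.argmax().item()} "
          f"with confidence {after.max().item():.2f}")
```

The point of the sketch is that the perturbation is computed directly from the model's own gradients, which is why a confident system can be steered toward confident nonsense.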

He was part of a panel that explored the pitfalls of applying AI in medicine and the many ethical, political, and scientific questions that must be addressed to ensure its safety and effectiveness. Here’s a look at the key points discussed during the event at Harvard Medical School.

Data from wearables can’t be de-identified

Algorithms have shown an ability to analyze vast amounts of data from wearables to flag health problems such as irregular heart rhythms, or tremors that could indicate the onset of Parkinson’s disease.

But wearable data are not the same as numbers on a spreadsheet; they can’t be easily anonymized, said Andy Coravos, chief executive of Elektra Labs, a company seeking to identify biomarkers in digital data to improve clinical trials.

“How many people here think you could de-identify your genome?” she asked. “Probably not, because your genome is unique to you. It’s the same with most of the biospecimens coming off a lot of wearables and sensors — I am uniquely identifiable with 30 seconds of walk data.”
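To make the re-identification worry concrete, here is an illustrative sketch, not Elektra Labs’ method, of how a short “anonymized” accelerometer trace could be matched back to a named enrollment record; the gait simulation and the names are invented for the example.

```python
# Illustrative sketch (invented data): re-identifying an "anonymized" walk trace
# by nearest-neighbor matching against traces collected with names attached.
import numpy as np

rng = np.random.default_rng(42)

def walk_signature(person_seed: int, seconds: int = 30, hz: int = 50) -> np.ndarray:
    """Simulate a person's gait as a noisy but individually distinct periodic signal."""
    r = np.random.default_rng(person_seed)
    t = np.arange(seconds * hz) / hz
    freq, amp, phase = 1.5 + r.random(), 0.5 + r.random(), r.random() * np.pi
    return amp * np.sin(2 * np.pi * freq * t + phase) + 0.05 * rng.normal(size=t.size)

# "Enrollment" database: traces collected with identities known.
people = {name: walk_signature(seed) for seed, name in enumerate(["ana", "bo", "cy", "di"])}

# A new 30-second trace arrives stripped of all identifiers.
anonymous_trace = walk_signature(2)  # secretly belongs to "cy"

# Re-identification: pick whoever's enrolled gait is closest.
match = min(people, key=lambda name: np.linalg.norm(people[name] - anonymous_trace))
print("re-identified as:", match)  # -> cy
```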

That poses a privacy dilemma that is playing out on a daily basis, as health tech companies compile more data on their customers. Coravos said few, if any, meaningful regulations have been developed surrounding the collection of these data or the algorithms being used to analyze them for health care.

But if algorithms are the new drugs, she said, shouldn’t they be regulated with the same rigor?

“If you think about digital therapeutics, they all have a certain mechanism of action,” she said. “Is there an argument, with what we’ve learned in health care, to look at (digital treatments) in the same way we look at drugs?”

It is a question that will be answered by entrepreneurs until and unless it is taken up by regulators.

Bias isn’t just in people. It’s in the data they keep

AI is often discussed as a tool for eliminating bias in health care by helping doctors to standardize the way they care for patients. If a computer could provide objective advice on the best treatments for patients, then variations in care would diminish, and everyone would get the most effective care.

But Kadija Ferryman, a fellow at the Data & Society Research Institute in New York, said AI is just as likely to perpetuate bias as it is to eliminate it. That’s because bias is embedded in the data being fed to algorithms, whose outputs could be skewed as a result.

She cited an article in The Atlantic magazine that highlighted an algorithm used to identify skin cancer that was less effective in people with darker skin. In mental health care, data kept in electronic medical records have been shown to be infused with bias against women and people of color.

The inequity in the data doesn’t just translate to unequal treatment; it can lead to ineffective care, said Ferryman, who is leading a research study on fairness in the application of precision medicine.

“Using AI has the potential to advance medical insights through the collection and analysis of large volumes and types of health data,” she said. “However, we must keep our focus on the potential for these technologies to exacerbate and extend unfair outcomes.”
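A small synthetic experiment, not the skin-cancer study itself, shows the mechanism Ferryman described: when one group dominates the training data, an otherwise ordinary model can be markedly less accurate for the under-represented group. The groups, features, and accuracy gap below are all simulated assumptions.

```python
# Synthetic sketch of training-data bias: a model fit mostly on group A
# performs worse on the under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two features whose relationship to the label differs slightly by group."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: accuracy drops for group B.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```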

Confusing correlation and causation

AI is excellent at finding correlations within data that are difficult for humans to detect, a skill that can be used to home in on the causes of disease and help to develop more effective medicines.

But Zittrain, the Harvard law professor, devoted much of his talk to spurious correlations that AI has been known to surface. He noted one such correlation between the number of suicides by hanging or strangulation in North Carolina and the number of lawyers in the state.

In another example, the shape of a graph of opium production by year in Afghanistan correlated almost exactly with a silhouette of Mount Everest. The point, he said, is that a correlation is just a correlation — not a cause. And AI is not so great at distinguishing between the two.

That means it could advise you to take certain medicines, or change your diet, to remedy a medical problem based on associations that, in fact, have nothing to do with its cause.
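The statistical trap is easy to reproduce. In the sketch below, two unrelated series that merely trend upward over the same years come out almost perfectly correlated; the figures are invented for illustration and are not the North Carolina data Zittrain cited.

```python
# Toy spurious correlation: two invented, unrelated series that both trend upward.
import numpy as np

years = np.arange(2000, 2010)
suicides_by_hanging = np.array([310, 325, 341, 352, 370, 388, 402, 419, 433, 450])          # invented
practicing_lawyers = np.array([18.1, 18.6, 19.0, 19.7, 20.2, 20.9, 21.4, 22.0, 22.8, 23.5])  # invented, thousands

r = np.corrcoef(suicides_by_hanging, practicing_lawyers)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to 1.0, yet neither series causes the other
```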

It will take human logic and collaboration, Zittrain said, to reach meaningful conclusions.

“One hopes that various academic departments could use these associations to set agendas for research and say, ‘Cool, what’s going on here?’” Zittrain said. “Another future is one in which everybody in each department is just running a different machine learning model that spits out answers specific to their zone.” – STAT News

