How hospitals are trying AI to communicate with patients

Health systems across the country are exploring how to blend artificial intelligence into their communications with patients, from billing to after-hours messages about medication or symptoms. But how best to talk to patients about the technology and its risks is still an open question.

STAT asked six health system leaders how they are disclosing their use of AI to the patients the technology is supposed to benefit. They said they’re facing a delicate messaging challenge: being transparent without confusing or alarming patients.

“We don’t want to scare patients [into thinking] that there was some autonomous AI that is communicating with them, but we want to ensure that they’re clear that their physician is fully in control of the communication,” said Brent Lamm, chief information officer at UNC Health, which is rolling out an Epic tool using generative AI to help craft MyChart responses beginning in August. Physicians must review messages before they’re dispatched.

None of the health systems STAT interviewed are using generative AI to respond to patients directly, nor are they considering it in the short term for diagnostic purposes. Most say they are testing the new technology in closed “sandboxes” before slowly rolling it out to providers or administrators, especially to help them draft responses to patient questions.

“We’re not having the conversation with patients yet, because of the expectation that it will be more confusing than anything else,” said Vanderbilt’s health data science center director Brad Malin. Vanderbilt is exploring using generative AI to streamline medical note-taking, but hasn’t deployed anything yet. Once it does, it may consider establishing a team to field patient questions.

“We need to figure out under what conditions you need to tell somebody that generative AI is on the table,” he said. “If you change from a scribe to a computer, but the physician has the ability to look over the notes, do you need to tell them that’s ChatGPT?…Those types of questions I don’t think have been answered yet.”

In the absence of federal guidance, health systems are looking to each other to set new industry standards for disclosure.

“I think the collective wisdom of Epic, and all the other institutions working on this, taking a consensus and a collaborative view, and working with everybody, is where we want to land,” Lamm said.

Epic customers piloting the AI feature can configure the messages as they see fit, a spokesperson told STAT. While some have chosen to automatically include a disclosure that the message was drafted with the help of AI, others have opted not to include any additional information. Epic is still figuring out how best to manage patient and provider education before rolling out the tool more broadly, the spokesperson said in an email.

Legal and regulatory disclosure requirements still aren’t clear and likely vary depending on the use case; if AI was more directly involved in patient care, health systems might be required to obtain informed consent, for instance, said I. Glenn Cohen, faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics. Being deceptive about AI use could make health systems the target of Federal Trade Commission investigations, but those cases would hinge on patients’ expectations. And if they’re framed as quality improvement projects, the pilots may also be exempt from institutional review board oversight, Cohen said.

While health systems do risk overwhelming patients with lengthy and confusing explanations they’re not legally required to give, they should ask themselves, “if you think it would really matter to patients, what justification do you have for withholding this information?” Cohen said.

Still, there’s a lot health systems don’t know about how AI can impact care. That includes how big a risk a phenomenon known as automation bias — when providers are so used to AI being accurate that they miss when it makes mistakes — poses to patients.

UC San Diego Health’s chief digital officer, Christopher Longhurst, said the system is already using the Epic AI-based MyChart tool to help physicians manage the 50 to 70 non-emergency queries they receive each day about issues like antibiotic prescriptions or elevated heart rates. The responses always include a disclaimer that the note was automatically generated but reviewed by a human.

“We are being maximally transparent with our patients,” he said.

At Penn Medicine, a natural language processing tool (separate from generative AI, but still a branch of artificial intelligence) automates follow-up texts about postpartum symptoms to thousands of patients after delivery. Kirstin Leitner, the obstetrics clinical lead on the project, said the health system published a video explaining to patients how the tool worked and that it was largely automated, but that they could still reach human providers by responding “text me.” If questions relate to symptoms that could be either benign or more worrisome, the bot is designed to ask follow-up questions instead of offering reassurance or minimizing issues.

The tool, sold by Memora Health, is designed to project warmth with friendly reminders. While patients seem to like it, it can create confusion, Leitner said; some have replied along the lines of “I don’t know if this is a real person” in their text conversations.

What to communicate may depend on how different an AI-based approach is from the status quo. Physicians use tools and templates all the time, said UC Davis Health’s chief medical information officer Yauheni Solad. UC Davis is in the process of deploying AI to help triage messages in providers’ inboxes.

But health systems are still gauging patient expectations, said Kash Patel, chief digital information officer at Hackensack Meridian Health, which is experimenting with generative AI in a closed environment, including to codify medical notes for further machine learning analysis or to enrich an existing chatbot for scheduling and non-emergency requests.

As it gets closer to deploying the technology, the health system will canvass its patient focus group and an ethics panel about topics like disclosure, Patel said. “I do think our advocacy team, our patient-facing teams, our community health workers need to be more educated and aware…that’s where we have to play a bigger role.”

STAT News
