Artificial intelligence (AI) is supposed to make errors less common. But what happens if it doesn’t?
While AI can reduce medical errors, experts and malpractice attorneys say it could also cause them. And when a clinician using AI makes a mistake, it’s tough to answer the question of who’s to blame. Though there haven’t been any notable AI malpractice suits yet, doctors and legal experts are watching the issue closely as AI spreads through healthcare: 3 in 5 physicians used it in 2024, according to an American Medical Association survey.
“If these systems are inaccurate, biased, or poorly integrated into clinical workflows, they can contribute to diagnostic errors or inappropriate treatment decisions, which could lead to malpractice claims,” Jeff Easley, general manager of the nonprofit Responsible AI Institute, told Healthcare Brew.
Malpractice mystery
On one hand, Jared Kaplan, co-founder and CEO of medical liability insurance company Indigo, told Healthcare Brew he believes AI’s benefits will outweigh its risks. He said his company may one day offer discounted premiums to practices that use AI, assuming certain AI apps are proven to reduce medical errors.
He likened it to how some homeowners can get insurance discounts for protective measures like burglar alarms, which reduce the odds of a break-in or other events that lead to home insurance claims.
Kaplan said AI is a “net positive” he believes will “drive down medical malpractice rates in the future.”
Legal limbo
But AI also exposes “a new threat” legally, he added.
If and when a clinician using the tech does make a medical mistake, “there’s no black-and-white answer” on whether the AI’s creator or the clinician will be deemed at fault—it will likely be judged on a case-by-case basis, Kaplan said.
The problem is how that’s decided.
The Federation of State Medical Boards suggested in April 2024 that its member medical boards—the groups that regulate, license, and discipline physicians in each state—should hold clinicians, not AI makers, liable if the tech makes a medical error.
“As with any other tool or differential used to diagnose or treat a condition, medical professionals are responsible for ensuring accuracy and veracity of evidence-based conclusions,” the recommendations read.
This issue of medical professionals’ responsibility when using technology has come up with electronic health records, too. Some clinicians have been sued, disciplined, or lost privileges for copying and pasting past patient observations instead of writing up new assessments for each patient, resulting in reports that lacked updated information, Medscape reported in 2024.
Other groups, like the American Medical Association, agree clinicians should double-check AI’s work, especially because the tech has been flagged for unconscious bias and hallucinations. Best practice, according to the Responsible AI Institute, is for AI to “support, not replace, medical judgment,” Easley said.
These suggestions are not binding, however. No federal laws or regulations have officially established how clinicians should use this technology. Some states have proposed bills, but only Georgia has passed legislation to govern AI in healthcare, Stateline reported.
The blame game
It’s easy to say clinicians should carefully check AI’s suggestions, but amid staff shortages and profit pressures, they don’t always have time to do so, Mark McNabola, personal injury and medical malpractice attorney and partner at McNabola Law Group, told Healthcare Brew.
“The objective [of medicine] always was accuracy of diagnoses, treatment, and improving conditions. It has now been overshadowed by this business model to see as many patients in the shortest period of time possible,” McNabola said.
Asking physicians to judge for themselves when to trust AI and when to make their own call places “an immense, almost superhuman, burden” on them, two Johns Hopkins researchers and a physician wrote in a March 21 JAMA Health Forum article.
Avoiding errors
To protect patients and build a defense in the event of a lawsuit, it’s important for clinicians to document everything they do, Peter Kolbert, SVP for claim and litigation services for Healthcare Risk Advisors, said in a February white paper for the Doctors Company, a medical malpractice insurer.
In the absence of federal regulations, it’s crucial for healthcare organizations to establish AI use guidelines for their staff, Easley said.
“Realizing [AI’s] benefit requires more than just good intentions; it demands robust, end-to-end governance of AI systems in healthcare,” he said.