
AMA adopts proposal to protect patients from false and misleading AI-generated medical advice

Agencies like the FTC and FDA could step in to regulate AI models in healthcare.


The American Medical Association (AMA) voted on Tuesday to adopt a proposal to help protect patients against false or misleading medical information from artificial intelligence (AI) tools such as generative pre-trained transformers (GPTs).

The proposal was prompted by concerns AMA-affiliated organizations raised in April about the lack of regulatory oversight for GPTs, especially as the technology becomes more prevalent in healthcare settings and more patients turn to GPT tools as a medical resource. The AMA House of Delegates adopted the proposal, which directs the AMA to support policies regulating AI-generated health information, during the organization’s annual meeting this week.

“We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation,” AMA trustee Alexander Ding said in a statement.

The AMA will work with the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), and other regulatory agencies to mitigate medical misinformation from GPTs, according to the resolution, which was proposed by the American Society for Surgery of the Hand and the American Association for Hand Surgery. At next year’s annual meeting, the AMA will propose state and federal regulations for AI tools, according to a statement.

“AI took a huge leap with [large language model tools] and generative models, so all of the work that has been done up to this point in terms of regulatory and governance frameworks will have to be treated or at least reviewed with this new lens,” Baptist Health South Florida Chief Data Officer Sha Edathumparampil, who was not involved in the effort, told Healthcare Brew.

The proposal vote comes weeks after the World Health Organization recommended caution when implementing AI tools in healthcare, warning that such tools could produce responses that “appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses.”


Other reports have highlighted AI’s challenges with providing medical information to everyday users.

When asked to produce a differential diagnosis for a postpartum hemorrhage, ChatGPT provided scientific evidence from sources that did not exist, Stat reported. Separately, ChatGPT “provided critical resources” to users asking questions about suicide and addiction only 22% of the time, CNN reported.

Unless a GPT’s medical advice is filtered through healthcare providers, it can endanger patients with inaccurate and misleading information, according to the proposal.

Some AI models, such as Google’s Med-PaLM 2, can provide medical advice about as accurately as a doctor, scoring 85% on a US Medical Licensing Examination practice test while other models scored around 50%. Google’s AI model, however, is not widely available for consumer use.

Regulating the use of AI tools in healthcare can be complex, according to a 2021 brief from the Pew Charitable Trusts. Many healthcare laws are not applicable to consumer contexts; for example, HIPAA does not apply to patient data collected by certain health apps.

But there are ways the FTC and FDA could provide some oversight of AI tools, according to the proposal.

The FDA has already considered regulations for AI and machine learning medical devices, and the proposal suggests that the agency could “hold software developers responsible if GPT makes false medical claims.” The proposal also recommends that the FTC label any inaccurate or misleading AI-generated medical advice as “unfair or deceptive business practices,” which would violate the FTC Act.

“Moving toward creation of consensus principles, standards, and regulatory requirements will help ensure safe, effective, unbiased, and ethical AI technologies, including large language models (LLMs) and generative pre-trained transformers (GPT) [are developed] to increase access to health information and scale doctors’ reach to patients and communities,” Ding said in the AMA’s statement.
