
WHO raises concerns over potential data biases in AI health tools

The data used to train tools like ChatGPT may have racial biases, for example.

Laurence Dutton/Getty Images



Maybe think twice before using ChatGPT in a healthcare setting.

The World Health Organization (WHO) recommends caution when implementing artificial intelligence (AI)-generated large language model (LLM) tools in routine healthcare. Concerns such as biased training data and a lack of protection for patient health data highlight the need for rigorous oversight to ensure that AI is used safely and ethically in healthcare.

“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” according to the WHO.

AI has already made its way into healthcare, and has the potential to save the industry $360 billion annually. More healthcare organizations are utilizing AI for administrative and clinical tasks, such as optimizing operating room schedules or interpreting medical scans. Some patients at Philadelphia-based Jefferson Health are already turning to AI instead of physicians for diagnoses.

Still, there are some problems health systems need to contend with if administrators use AI.

Data used to train these AI technologies “may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness,” the WHO found.

“Bias in data used to develop AI tools can reduce their safety and effectiveness for patients who differ—whether genetically or in socioeconomic status, general health status, or other characteristics—from the population whose data were used to develop the tool,” according to a 2020 report from the Government Accountability Office (GAO).

For example, data from electronic health records (EHR) can carry racial biases. Providers were 2.5 times more likely to note a negative descriptor in a Black patient’s EHR than in a white patient’s, according to a 2022 University of Chicago study that analyzed more than 40,000 EHRs.

Patient privacy risks may also grow as AI becomes more common in clinical settings, the GAO report found. LLMs may be trained on health data that patients did not consent to provide, or the technologies may fail to protect patients’ sensitive data, according to the WHO.

“While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support healthcare professionals, patients, researchers, and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs,” the WHO said.
