WHO Raises Concerns About Using AI for Healthcare

It seems that people everywhere are talking about and experimenting with ChatGPT. Indeed, tools built on artificial intelligence (AI)-generated large language models (LLMs) are becoming more popular every day. When it comes to using this technology for health-related purposes, however, caution should be exercised, according to an alert issued on May 16 by the World Health Organization (WHO). The organization also calls for rigorous oversight to ensure that AI is being used in safe, effective, and ethical ways.

“While the WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support healthcare professionals, patients, researchers, and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs,” said the organization. It draws attention to the “precipitous adoption of untested systems,” which “could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.”

Several Concerns

The WHO lists several concerns that it says call for oversight. The first addresses the data used to train AI: they “may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness.”

The second is about responses generated by LLMs. While they “can appear authoritative and plausible to an end user,” they “may be completely incorrect or contain serious errors, especially for health-related responses.”

The processing of personal data by AI tools raises questions about privacy and consent, since “LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response.” LLMs also could be “misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.”

WHO Recommendations

In the May 16 alert, “the WHO recommends that policymakers ensure patient safety and protection while technology firms work to commercialize LLMs.”

Several years before that alert was released, and more than a year before these tools exploded in popularity, Ethics and Governance of Artificial Intelligence for Health: WHO Guidance was published.

This 2021 report pointed out risks related to the unregulated use of AI in clinical settings — risks that could see the rights and interests of patients and communities “subordinated to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.”

The report also highlights the need to provide healthcare professionals with digital skills training, ensuring that they have the knowledge necessary to use these new technologies when making clinical decisions.

Given the number of user-friendly and intuitive AI platforms that have been made available to the public in the past 6 months, it is no surprise that so many people are talking about ways that these tools can be incorporated into clinical practice.

This article was translated from Medscape’s Portuguese edition.