Generative artificial intelligence (AI) could revolutionize health care, for example by facilitating drug development or speeding up the detection of diseases, but the World Health Organization (WHO) says more attention must be paid to the risks.
In a paper published Thursday, the WHO analyzes the dangers and benefits of using large multimodal models (LMMs) - a fast-growing type of generative AI technology - in health.
These LMMs can use multiple types of data, including text, images, and video, and generate results that are not limited to the type of data fed into the algorithm.
Screening, research, teaching and administration
“LMMs are expected to be widely used and applied in health care, scientific research, public health and drug development,” says the WHO.
The organization identifies five areas in which this technology could be applied: screening, for example to respond to patients' written requests; scientific research and drug development; medical and nursing education; administrative tasks; and use by patients, for example to review symptoms.
Although this technology has great potential, the WHO points out that LMMs have also been shown to produce false, inaccurate, biased, or incomplete results, which could have serious consequences.
“As LMMs are increasingly used in health care and medicine, errors, misuse and, ultimately, harm to individuals are inevitable,” notes the WHO.
Ethics and governance
The document also presents new guidance on the ethics and governance of LMMs, with more than 40 recommendations for governments, technology companies and healthcare providers on how to use this technology safely.
“Generative AI technologies have the potential to improve health care, but only if those who develop, regulate and use these technologies fully identify and take into account the associated risks,” underlines WHO chief scientist Jeremy Farrar. “We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and to overcome persistent health inequities.”
The WHO calls for the establishment of rules on liability to “ensure that users harmed by an LMM are properly compensated or have other forms of recourse”.
It also notes that the compliance of LMMs with existing regulations, particularly on data protection, raises concerns.
Regulation and cybersecurity
Furthermore, the fact that LMMs are often developed and deployed by technology giants risks entrenching the dominance of these companies, according to the WHO.
The organization therefore recommends that LMMs be developed not only by scientists and engineers, but also by healthcare professionals and patients.
The WHO also warns that LMMs are vulnerable to cybersecurity risks, which could jeopardize patient information and even the reliability of health care itself.
Finally, it concludes that governments should task regulatory authorities with approving the use of LMMs in health care, and calls for audits to assess the impact of this technology.