
How to prevent artificial intelligence from failing women more in medical diagnoses

2024-02-01

Highlights: Artificial intelligence tools are becoming common in diagnostic imaging tests and in scheduling. Biases can lead to differences in healthcare based on gender, ethnicity, or demographic group. The European artificial intelligence law requires that such tools be developed under ethical, transparent and bias-free criteria. Experts agree that ChatGPT should not be used for medical purposes, but say the tool would evolve positively if the chatbot queried medical databases and validated the sources of the data. One way to do this is to train on the patient database with the OpenAI system.


Healthcare is more productive with models like ChatGPT, but this new technology makes mistakes due to gender, race and age biases


Bored in a New Jersey hospital, Diane Camacho told ChatGPT about the symptoms she was suffering and asked it to draw up a list of possible medical diagnoses.

She had difficulty breathing, chest pain, and the feeling that her heart was “stopping and starting.”

The OpenAI chatbot told her that anxiety was the most likely diagnosis.

Camacho then asked for the diagnosis for a man with the same symptoms and, to her surprise, the artificial intelligence warned of possible pulmonary embolism, acute coronary syndrome or cardiomyopathy, with no trace of anxiety.

Camacho published the exchange a few weeks ago on X (formerly Twitter).
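Camacho's comparison is easy to reproduce. Below is a minimal sketch, assuming the `openai` Python package, an API key in the environment, and an arbitrary chat model; the article does not say which model or exact wording she used, so the prompt and model name here are illustrative.

```python
# Minimal sketch of Camacho's comparison: identical symptoms, with only the
# stated gender changed. Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYMPTOMS = ("difficulty breathing, chest pain, and the feeling "
            "that the heart keeps stopping and starting")

def possible_diagnoses(patient: str) -> str:
    """Ask the chatbot for a list of possible diagnoses for the given patient."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not necessarily the one Camacho used
        messages=[{
            "role": "user",
            "content": f"List possible medical diagnoses for {patient} "
                       f"with these symptoms: {SYMPTOMS}.",
        }],
    )
    return response.choices[0].message.content

# Any systematic gap between the two answers is the kind of bias described above.
print("Woman:", possible_diagnoses("a woman"))
print("Man:", possible_diagnoses("a man"))
```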

Generative AI, like ChatGPT, combines large amounts of data with algorithms and makes decisions through machine learning.

If the data is incomplete or unrepresentative, the algorithms may be biased.

When sampling, algorithms can make systematic errors and favor some responses over others.
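A toy experiment makes the mechanism visible. The sketch below, on purely synthetic data, trains a classifier on a sample in which one group is heavily under-represented and then scores it separately per group; the features and numbers are invented, only the effect is the point.

```python
# Synthetic illustration: a model trained on a skewed sample is less accurate
# for the under-represented group. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two features whose relationship to the label differs between groups."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training sample: 950 cases from group A, only 50 from group B.
Xa, ya = make_group(950, shift=+1.0)
Xb, yb = make_group(50, shift=-1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap the unrepresentative training data created.
for name, shift in [("group A", +1.0), ("group B", -1.0)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```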

Faced with these problems, the European artificial intelligence law, approved last December, requires that such tools be developed under ethical, transparent and bias-free criteria.

Under the regulation, medical devices are considered high risk and must meet strict requirements: use high-quality data, log their activity, keep detailed documentation of the system, give clear information to the user, operate under human supervision, and offer a high level of robustness, security and accuracy, as the European Commission explains.


The startup of Pol Solà de los Santos, president of Vincer.Ai, audits companies so that they can comply with the European requirements.

“We do this through a quality management system of algorithms, models and artificial intelligence systems.

A diagnosis of the language model is made, and the first thing is to see whether there is harm and how to correct it.”

Additionally, if a company has a biased model, he recommends that it warn users with a disclaimer.

“If we wanted to distribute a drug not suitable for 7-year-old children, it would be unthinkable not to warn,” explains Solà de los Santos.
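A minimal sketch of that kind of warning, with an assumed audit threshold and wording (the article does not describe Vincer.Ai's actual criteria): if an audit finds a large accuracy gap between groups, the model ships with a disclaimer.

```python
# Hypothetical disclaimer rule: threshold and wording are assumptions,
# not Vincer.Ai's actual methodology.
def audit_disclaimer(accuracy_by_group: dict[str, float], max_gap: float = 0.05) -> str | None:
    """Return a warning if audited accuracy differs too much across groups."""
    gap = max(accuracy_by_group.values()) - min(accuracy_by_group.values())
    if gap <= max_gap:
        return None
    worst = min(accuracy_by_group, key=accuracy_by_group.get)
    return (f"Warning: audited accuracy differs by {gap:.0%} across groups; "
            f"results for '{worst}' patients may be less reliable.")

print(audit_disclaimer({"women": 0.78, "men": 0.91}))
```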

In healthcare, artificial intelligence (AI) tools are beginning to become common in diagnostic imaging tests and in scheduling.

They help healthcare workers speed up work and be more precise.

In radiology they are “support systems,” says Josep Munuera, head of Radiodiagnostics at the Sant Pau Hospital in Barcelona and an expert in digital technologies applied to health.

“The algorithms are inside magnetic resonance devices and reduce the time it takes to obtain the image,” explains Munuera.

Thus, an MRI that would last 20 minutes can be shortened to just seven minutes, thanks to the introduction of algorithms.
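The arithmetic behind that speed-up, with assumed sequence parameters (the article only gives the 20- and 7-minute figures): conventional scan time grows with the number of phase-encoding lines acquired, so acquiring roughly one line in three and letting a reconstruction algorithm fill in the rest shortens the exam proportionally.

```python
# Back-of-the-envelope sketch; the sequence parameters are illustrative assumptions.
tr_seconds = 2.5      # repetition time per phase-encoding line
lines = 240           # phase-encoding lines in a full acquisition
averages = 2          # signal averages

full_scan_min = tr_seconds * lines * averages / 60      # = 20 minutes
acceleration = 20 / 7                                    # ~2.9x undersampling
print(f"Full scan: {full_scan_min:.0f} min")
print(f"Accelerated scan: {full_scan_min / acceleration:.0f} min")
```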

Biases can lead to differences in healthcare based on gender, ethnicity, or demographic group.

An example occurs in chest x-rays, as explained by Luis Herrera, solutions architect at Databricks Spain: “The algorithms used have shown differences in accuracy depending on gender, which has led to differences in care.

Specifically, the accuracy in diagnosing women was much lower.”
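The audit Herrera describes amounts to reporting accuracy per group rather than a single overall figure. A minimal sketch, assuming a hypothetical `predictions.csv` with one row per x-ray and columns `sex`, `label` and `prediction`:

```python
# Disaggregated evaluation: score the classifier separately per gender.
# `predictions.csv` and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("predictions.csv")
df["correct"] = df["label"] == df["prediction"]

accuracy_by_sex = df.groupby("sex")["correct"].mean()
print(accuracy_by_sex)
print("accuracy gap:", round(accuracy_by_sex.max() - accuracy_by_sex.min(), 3))
```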

Gender bias, Munuera points out, is a classic: “It has to do with population biases and databases.

Algorithms are fed with, or query, databases, and if the historical databases are gender-biased, the response will be biased.”

However, he adds: “Gender bias in health exists, regardless of artificial intelligence.”

How to avoid bias

How is the database retrained to avoid biases?

Arnau Valls, coordinating engineer of the Innovation department of the Sant Joan de Deu Hospital in Barcelona, explains how it was done in a case of covid detection in Europe, using an algorithm developed with the Chinese population: “The accuracy of the algorithm fell by 20% and false positives appeared.

A new database had to be created, and images of the European population were added to retrain the algorithm.”
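In the same synthetic style as the earlier sketch, this is roughly what that remediation looks like: evaluate per population, add data from the population where the model fails, retrain, and evaluate again. The data and numbers are invented; this is not the hospital's actual pipeline.

```python
# Synthetic before/after: adding data from the new population and retraining
# recovers accuracy on it. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(n, shift):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

orig_X, orig_y = make_population(1000, +1.0)   # population the model was built on
new_X, new_y = make_population(1000, -1.0)     # population it is now deployed to
tests = {"original": make_population(500, +1.0), "new": make_population(500, -1.0)}

for stage, (X, y) in {
    "before adding new data": (orig_X, orig_y),
    "after adding new data": (np.vstack([orig_X, new_X]),
                              np.concatenate([orig_y, new_y])),
}.items():
    model = LogisticRegression().fit(X, y)
    print(stage, {p: round(model.score(*t), 2) for p, t in tests.items()})
```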

To confront a biased model as users, we must be able to contrast the answers the tool gives us, says Herrera: “We must promote awareness about biases in AI and promote the use of critical thinking, as well as demand transparency from companies and validate the sources.”

Experts agree not to use ChatGPT for medical purposes.

But José Ibeas, director of the Nephrology group at the Research and Innovation Institute of the Parc Taulí University Hospital in Sabadell (Barcelona), suggests that the tool would evolve positively if the chatbot queried medical databases.

“We are starting to work on it.

The way to do this is to train on the patient database with the OpenAI system, using its own algorithms and engineers.

In this way, data privacy is protected,” explains Ibeas.
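A minimal sketch of that direction: retrieve records from a curated, locally held medical database and make the chatbot answer only from them. The database lookup, the record text and the model name below are placeholders, not the hospital's actual system.

```python
# Retrieval sketch: the model answers only from records supplied by a local,
# validated medical database. The lookup function and its contents are placeholders.
from openai import OpenAI

client = OpenAI()

def fetch_records(query: str) -> list[str]:
    """Placeholder for a query against a locally held, validated medical database."""
    return ["Example record: dyspnea with chest pain warrants ruling out "
            "pulmonary embolism in all patients."]

def answer_from_sources(question: str) -> str:
    context = "\n".join(f"- {r}" for r in fetch_records(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided records and cite them. "
                        "If they are insufficient, say so."},
            {"role": "user", "content": f"Records:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_from_sources("Possible causes of chest pain with shortness of breath?"))
```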

ChatGPT technology is useful in the medical environment in certain cases, Ibeas acknowledges: “Its ability to generate structures, anatomical or mathematical, is total.

The training it has in molecular structures is very good.

It invents very little there.”

Agreeing with the rest of the experts, Ibeas warns that artificial intelligence will never replace a doctor, but points out: “The doctor who does not know about artificial intelligence will be replaced by the one who does know.”



Source: EL PAÍS
