
Algorithms in medicine at risk of bias

2021-05-18, 23:30 UTC


With more and more applications of artificial intelligence in medicine, there is a risk that the discrimination already seen in other fields, for example by gender or race, will be repeated. The alarm is raised by two editorials, one in JAMA Network Open and the other in EBioMedicine, calling for 'algorithmovigilance' to assess such risks, analogous to the pharmacovigilance used for therapies and vaccines. The JAMA Network Open commentary, by Peter Embi of Indiana University, starts from a study by IBM researchers, published in the same journal, that analyzed an algorithm used to determine the risk of postpartum depression in a sample of women.

The analysis revealed that the algorithm, if left uncorrected, discriminates against women of color, because it operates on data collected from a predominantly white population. "The performance of algorithms changes when they are applied to different data or settings, and with different human-machine interactions," Embi writes. "These factors can turn a beneficial tool into one that potentially causes harm."
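To make the idea concrete, here is a minimal sketch, in Python, of the kind of per-group audit that 'algorithmovigilance' implies: comparing a model's error rates across demographic groups. It is not the IBM analysis; the synthetic cohort, the group labels, and the 0.5 thresholds are all hypothetical.

```python
# Sketch of a per-group performance audit, in the spirit of 'algorithmovigilance'.
# All data, labels, and thresholds below are hypothetical illustrations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: 85% group A (the majority in training), 15% group B.
group = rng.choice(["A", "B"], size=n, p=[0.85, 0.15])
true_risk = rng.uniform(0, 1, size=n)

# Hypothetical model: roughly calibrated on group A, but systematically
# underestimating risk for group B (mimicking a skewed training set).
score = np.where(group == "A", true_risk, true_risk * 0.7)
score = np.clip(score + rng.normal(0, 0.05, size=n), 0, 1)

df = pd.DataFrame({
    "group": group,
    "flagged": score >= 0.5,        # model flags the patient for follow-up
    "high_risk": true_risk >= 0.5,  # ground truth
})

# False-negative rate per group: high-risk patients the model misses.
for g, sub in df.groupby("group"):
    high = sub[sub.high_risk]
    fnr = 1.0 - high.flagged.mean()
    print(f"group {g}: false-negative rate = {fnr:.2%}")
```

Run on this synthetic cohort, the audit shows the model missing far more high-risk patients in the under-represented group, which is exactly the kind of disparity a one-number accuracy metric would hide.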

Artificial intelligence is already in use in medicine, two Stanford University researchers, Londa Schiebinger and James Zou, point out in EBioMedicine, with algorithm-based tools for diagnosing tumors, heart disease, and eye problems, and many more to come. "The white body and the male body have always been the norm in the search for therapies," they stress, "so it is important that artificial-intelligence devices do not follow this same pattern."

In the medical field, algorithms help with diagnoses, with the choice of therapies, and in research, and can act as 'additional eyes' for doctors, for example in reading reports. However, bias can arise during development, especially if the data used to train the models that 'instruct' the algorithms are not representative because they fail to account for factors such as race or gender; a crude first check for this is sketched below. One example, the Stanford experts explain, is the pulse oximeter, widely used for monitoring Covid-19 patients.
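One simple way to catch the representativeness problem early is to compare the demographic mix of the training cohort with that of the population the model will serve. The counts and population shares below are hypothetical.

```python
# Crude representativeness check: compare the demographic composition of a
# training cohort with the target population. All figures are hypothetical.
training_counts = {"white": 8_400, "black": 600, "asian": 700, "other": 300}

# Hypothetical share of each group in the population the model will serve.
population_share = {"white": 0.60, "black": 0.13, "asian": 0.06, "other": 0.21}

total = sum(training_counts.values())
for grp, count in training_counts.items():
    train_share = count / total
    ratio = train_share / population_share[grp]
    flag = "  <-- under-represented" if ratio < 0.8 else ""
    print(f"{grp:6s} training {train_share:5.1%} vs population "
          f"{population_share[grp]:5.1%} (ratio {ratio:.2f}){flag}")
```

A check like this cannot prove a model is fair, but a ratio well below 1 for any group is an early warning that its performance there should be validated separately before deployment.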

The technology of these devices is based on the different absorption of light by red blood cells when they are oxygenated; however, melanin, the pigment that colors the skin, also absorbs light. The result is that oximeters in use today are three times more likely to report an inaccurate reading when used on Black patients, and measurement errors are also more frequent in women, so a patient who falls into one of these categories risks not receiving oxygen when it is needed.
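A deliberately simplified toy model can show how an extra absorber skews the reading. Real oximeters map a 'ratio of ratios' of pulsatile to baseline light absorption at red and infrared wavelengths to an oxygen-saturation estimate through an empirically calibrated curve; in the idealized textbook model a purely static absorber such as melanin would cancel out of that ratio, so the sketch below lumps the real-world second-order effects into a single extra attenuation of the red pulsatile signal. Every constant here is illustrative, not a device specification.

```python
# Toy model of how an extra absorber can skew a pulse-oximeter estimate.
# Oximeters compare pulsatile (AC) to baseline (DC) absorption at red and
# infrared wavelengths; the 'ratio of ratios' R maps to SpO2 via an
# empirically calibrated curve. All constants below are illustrative only.

def spo2_from_ratio(r: float) -> float:
    """Widely cited linear approximation of the empirical calibration curve."""
    return 110.0 - 25.0 * r

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    return (ac_red / dc_red) / (ac_ir / dc_ir)

# Hypothetical signals corresponding to a true SpO2 of about 96%.
ac_red, dc_red = 0.020, 1.00
ac_ir,  dc_ir  = 0.036, 1.00

r = ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir)
print(f"baseline:            R={r:.3f}, estimated SpO2 = {spo2_from_ratio(r):.1f}%")

# Melanin absorbs red light more strongly than infrared; model the net
# effect as a 10% extra attenuation of the red pulsatile component.
r_dark = ratio_of_ratios(ac_red * 0.90, dc_red, ac_ir, dc_ir)
print(f"with extra absorber: R={r_dark:.3f}, estimated SpO2 = {spo2_from_ratio(r_dark):.1f}%")
```

Note the direction of the error in this sketch: the estimate drifts upward, so the device reports better oxygenation than the patient actually has, which is precisely the failure mode that can delay supplemental oxygen.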

Source: ANSA
