Sexism in algorithms: an underestimated discrimination

July 12, 2020


Despite the rise of feminism in recent years, the widespread and negative effects of sexism on artificial intelligence are often underestimated.

Far from being a marginal phenomenon, sexism and the discrimination it generates now permeate the operation of artificial intelligence algorithms. This is a problem because we increasingly use algorithms to make crucial decisions about our lives: for example, who can and who cannot get a job interview or a mortgage.

Sexism in algorithms

The scientific literature studying the presence of biases and errors in machine learning algorithms is still in its early stages, but the results are very worrying.

Algorithms have been shown to inherit the gender biases that prevail in our society. As we will see below, human biases translate into systematic errors in algorithms. Moreover, these biases tend to be amplified by the large amounts of data the algorithms handle and by their widespread use.

For example, in a study in which machine learning techniques were used to train an artificial intelligence on text from Google News, the system was asked to complete the analogy "man is to computer programmer as woman is to x". Its automatic answer was "x = homemaker".
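
To make the mechanics concrete, here is a minimal sketch of that analogy test. It assumes the gensim library and the publicly released Google News word2vec vectors; neither the file name nor the exact queries appear in this article, so treat them as illustrative.

```python
# A minimal sketch of the analogy test described above. Assumes gensim and
# the publicly released Google News word2vec vectors (a large download).
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Solve "man is to computer programmer as woman is to x" by vector
# arithmetic: x is the word closest to computer_programmer - man + woman.
answers = kv.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=3
)
print(answers)  # the 2016 study reported "homemaker" as the top answer
```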

Similarly, another disturbing finding came from an algorithm trained on text taken from the Internet. It associated feminine names such as Sarah with family-related words, such as "parents" and "marriage". Masculine names such as John, in contrast, had stronger associations with work-related words, such as "professional" and "salary".
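
A toy version of that association test might look as follows, again assuming gensim and the same pretrained vectors as in the previous sketch; the attribute word lists here are illustrative, not the exact lists used in the study.

```python
# A toy name/attribute association test over word embeddings. Assumes the
# same pretrained Google News vectors as the previous sketch; the attribute
# word lists below are illustrative.
import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def mean_similarity(word, attributes):
    # Average cosine similarity between one word and a set of attribute words.
    return float(np.mean([kv.similarity(word, a) for a in attributes]))

family = ["parents", "marriage", "home", "wedding"]
career = ["professional", "salary", "office", "career"]

for name in ["Sarah", "John"]:
    print(name,
          "family:", round(mean_similarity(name, family), 3),
          "career:", round(mean_similarity(name, career), 3))
# The reported pattern: feminine names sit closer to the family words,
# masculine names closer to the career words.
```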

Amazon also had to scrap its recruitment algorithm because it showed a strong gender bias, penalizing CVs that contained the word "women's".

Sexism also creeps into image search algorithms. For example, research showed that Bing retrieves photos of women more often for searches using words denoting warmth traits, such as "sensitive" or "emotional". Conversely, words denoting competence traits, such as "smart" or "rational", are more often represented by pictures of men. What is more, a search for the word "person" retrieves photos of men more often than photos of women.

In another study, an image-labeling algorithm was observed to associate pictures of shopping and kitchens with women. From this it deduced, most of the time, that "if someone is in the kitchen, it is a woman". Conversely, it associated images of physical training with men.

Beyond text and images, user input and interactions also feed and reinforce the biases that algorithms learn. One paper, for example, observed that topics related to family and romantic relationships are discussed much more often in Wikipedia articles about women than in those about men. Furthermore, women's biographies tend to link to men's biographies more often than vice versa.

Algorithmic bias in languages with grammatical gender

Studies examining gender bias have done so almost exclusively by analyzing how algorithms work in English. However, English has no grammatical gender.

In English, the nice male teacher and the nice female teacher are referred to in exactly the same way: "the nice teacher". It is therefore worth asking what happens in languages, such as Spanish, that do have grammatical gender.

Research in this area has found gender biases when translating from English into languages with grammatical gender, such as Spanish. For example, one study showed that when translating the word "lawyer" from English into Spanish, there was a stronger automatic association with the masculine "abogado" than with the feminine "abogada". Conversely, "nurse" was more strongly associated with the feminine "enfermera" than with the masculine "enfermero". In principle, both translations should have been produced with the same probability.
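
As a concrete illustration, here is a minimal sketch of how one might probe this effect, assuming the Hugging Face transformers library and the open Helsinki-NLP English-to-Spanish model as a stand-in for the commercial translators discussed in this article.

```python
# A minimal sketch probing translation gender bias, using the open
# Helsinki-NLP English-to-Spanish MarianMT model as a stand-in for the
# commercial systems discussed in the article.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# English leaves the gender of "lawyer" and "nurse" unspecified; a biased
# model will tend to output "el abogado" (masc.) and "la enfermera" (fem.).
sentences = ["The lawyer is reading.", "The nurse is reading."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
for src, out in zip(sentences, outputs):
    print(src, "->", tokenizer.decode(out, skip_special_tokens=True))
```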

Despite numerous criticisms in recent years, the biases that arise when translating from a language without grammatical gender, such as English, into one with grammatical gender, such as Spanish, still occur today in some machine translators, for example DeepL (see Figure 1).

Screenshot of the DeepL algorithm showing gender bias (May 14, 2020).

Some translators, such as Google Translate, have made corrections. They still translate lists of words using the generic masculine (see Figure 2), but they have also incorporated separate feminine and masculine translations of single words and even short sentences (see Figure 3).

Screenshot of Google Translate showing the generic masculine in the translation of a list of words (May 14, 2020).

Screenshot of Google Translate showing both feminine and masculine translations of a single word (May 14, 2020).

What is the solution?

Initiatives and standards to address the problem of algorithmic bias are currently being developed. For the moment, however, most artificial intelligence systems remain biased.

Research suggests that we underestimate the biases present in machines: we even tend to consider them fairer than humans and to prefer the recommendations of algorithms over those of people. But do we really want to delegate our decisions to algorithms that associate women with housewives? IBM predicts that "only artificial intelligence that is free from bias will survive."

Naroa Martinez is a postdoctoral researcher at the University of Deusto, and Helena Matute is a professor of psychology at the same university.

This article was originally published in The Conversation.

