
An algorithm will clean AI of gender biases - Society and Rights

2024-02-01T13:31:02.538Z

Highlights: An algorithm will clean AI of gender biases. The algorithm works on a large quantity of administrative texts to "clean" them of any term with a discriminatory character. The project is a PRIN (Project of Significant National Interest) financed by the Ministry of University and Research, on which three Italian universities are working. The system allows the end user to identify the segments of a sentence that can create discrimination, and not only gender discrimination but also, for example, so-called 'ageism'.


Artificial intelligence risks being a new engine for the propagation of gender discrimination and of the linguistic sexism that permeates communication, including institutional communication. (ANSA)

The same "feeding" mechanism of the algorithms that generate content-generating products is, in fact, by its very nature a vehicle for the reproduction and perpetuation of those stereotypes that permeate society and which, even unconsciously, thus end up multiplying the effects of a sexist representation of society.


However, a tool is arriving that will be able to intervene to "correct" all those linguistic distortions that are a vehicle of discrimination.


It is an algorithm that works on a large quantity of administrative texts to "clean" them of any term that has a discriminatory character.


The project, a PRIN (Project of Significant National Interest) financed by the Ministry of University and Research, involves three Italian universities: the Polytechnic of Turin, the University of Bologna and the University of Tor Vergata. Its objective is to build a model that intervenes on the 'textual corpora' used to test the algorithms, modifying them in the direction of inclusion.


The project, called Empowering Multilingual Inclusive Communication (E-MIMIC), "tries to reformulate the administrative text in a non-discriminatory way: this means that the system allows the end user to identify the segments of the sentence that can create discrimination, and not only gender discrimination but also, for example, so-called 'ageism', the form of discrimination based on age, or discrimination against disabled or visually impaired people.


Then, once the erroneous segments have been identified, in a second phase it proposes a new text that eliminates these discriminations," explains Rachele Raus, a scholar of French at the University of Bologna who is working on the project.

The user can then choose whether or not to intervene, but does so after having gained awareness of the language used.

"Our team, with the Polytechnic of Turin, has essentially created neural networks that are capable of carrying out an encoding process in the first phase, a classifier capable of identifying non-inclusive tools and, in the second decoding phase , to generate a new, clean text" explains Professor Raus.



Reproduction reserved © Copyright ANSA

Source: ansa
