
Why AI must be transparent and explainable

2023-01-16T18:39:54.728Z




by Alessio Jacona*

«The user of an automatic system created for decision support must be able to fully understand the suggestions they receive».

So says Fosca Giannotti, one of the pioneering scientists of mobility data mining, social network analysis and privacy-preserving data mining (the set of techniques for extracting information from data while protecting privacy): «In short, artificial intelligence must not only give an answer to our questions,» the professor continues, «but it must also explain how that answer was generated, and why».

Fosca Giannotti holds the chair of Computer Science at the Scuola Normale di Pisa (established for the first time in 2021 and immediately entrusted to her) and is director of the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory, a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the first data mining research laboratories.

Transparency and explainability of AI are central to her work, especially since 2019, when the European Research Council (ERC) awarded her a grant of 2.5 million euros over five years to support the research project entitled "Science and technology for explaining the decision-making process of AI".

«The theme of explainability emerges when human beings and machine learning systems are brought together to support their decisions,» explains Giannotti, «because when this happens, two possible scenarios arise: taking everything the AI says as true, as if it were an oracle, or not trusting it at all and disputing the response».

Two extreme reactions, irreconcilable and equally wrong, which can be overcome «only if we find a way to explain why and how the system reaches its decisions».

The example the professor gives in this regard is that of software used for risk assessment when granting a loan: when the system rejects an application, a bare "no" not only harms the applicant, it also denies them any chance to change things, because it does not indicate what is wrong and what could still be changed.

Much the same applies to AI-based software used to screen the résumés of job applicants, and the examples are many.
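To make the loan example concrete, here is a minimal sketch in Python of the kind of explanation the professor is calling for. It is a hypothetical illustration, not the software actually used by banks or by the professor's project: a simple logistic-regression scorer trained on invented data, where each rejection is reported together with the per-feature contributions that weighed against the applicant.

```python
# Hypothetical illustration: a toy loan scorer whose rejection is accompanied
# by per-feature contributions, so the applicant can see what weighed against them.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [monthly income (k EUR), existing debt (k EUR), years employed]
X = np.array([[4.0, 5, 10], [2.5, 20, 1], [6.0, 2, 8], [2.0, 15, 0],
              [5.5, 10, 12], [1.8, 25, 2], [7.0, 1, 15], [2.2, 18, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = loan granted, 0 = rejected

model = LogisticRegression().fit(X, y)

applicant = np.array([3.0, 22.0, 3.0])
granted = bool(model.predict(applicant.reshape(1, -1))[0])

# For a linear model, coefficient * feature value is a simple, faithful
# per-feature contribution to the decision score.
features = ["income", "debt", "years_employed"]
contributions = model.coef_[0] * applicant

print("decision:", "granted" if granted else "rejected")
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    # The most negative terms point to what the applicant could try to change.
    print(f"  {name}: {c:+.3f}")
```

Real systems are rarely this simple, and for opaque models the role played here by the coefficients is taken over by post-hoc explanation methods such as feature attributions or counterfactual examples, which is precisely the research area the ERC project addresses.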

Not only that: the lack of transparency is a problem also, and above all, because automatic systems can make unjust or wrong decisions when they are influenced by "bias", prejudices about race, sex, religion and so on, absorbed through the massive amounts of data they are trained on.

Data that, it should not be forgotten, we human beings produce, and which therefore inevitably reflect our society with its contradictions and, precisely, its prejudices.

«Transparency allows us to defend ourselves if necessary, and this is why the issue is also covered by the GDPR», recalls Giannotti.

* Journalist, innovation expert and curator of the Artificial Intelligence Observatory

Source: ANSA
