Arthur Grimonpont is an engineer and consultant specializing in transition issues in the face of global crises. He is the author of Algocratie (Actes Sud, 2022).
With each new milestone reached by artificial intelligence (AI), we redouble our enthusiasm, or our concern.
Proponents of laissez-faire technology argue that algorithms have no will of their own and only obey the will of their creators.
They see AI as a tool: neither good nor bad in itself.
Like a hammer, the algorithm merely serves the hand that wields it. The fact that a hammer can be used to crack a neighbor's skull is, after all, not sufficient reason to ban its sale.
Read also: Aurélie Jean, "Why algocracy does not exist"
Technocritics retort that a technology is never neutral, and that some technologies lend themselves more easily than others to hostile uses.
Typically, in the United States, supporters of gun ownership see it as the best way to ensure their personal safety, while their ideological opponents observe that this freedom gives their country the highest number of gun homicides among rich countries.
So, artificial intelligence: hammer or gun?
No analogy seems satisfactory, as there are so many different AIs.
Several AIs largely absent from public debate have a major impact on our human societies.
This is the case, for example, of Aladdin, the AI used by the American company BlackRock, the world's leading asset manager, to invest its clients' fortunes. Capable of assessing financial risks better than any group of traders, Aladdin has controlled up to $20 trillion in financial assets, or 10% of the global bond market (an amount equivalent to the GDP of the United States).
Another example: social media recommendation AIs.
Half of humanity interacts with them daily, yet they remain almost entirely absent from public debate. They shape the news feeds and suggest the videos watched by more than four billion people.
In these two examples, AI serves human aims: optimizing financial investments or generating advertising revenue. In this sense, Aurélie Jean is right to say that AI is at the service of human wills, or more precisely here, of economic interests. However, while pursuing the objectives assigned to them, AIs sometimes escape the will of their creators.
This is even very common.
On social networks, the main objective of recommendation algorithms is to capture our attention.
To do this, they favor certain content over others, most often outside of any human control.
On YouTube, where humanity watches 120,000 years' worth of video every day, three quarters of viewing results from algorithmic recommendation.
Yet many studies have shown that recommendation AIs tend to favor extreme content, polarize opinion, and promote conspiracy theories.
Who would accuse YouTube's boss of having wanted to help Donald Trump get elected in 2016?
Who would blame Facebook engineers for willfully stoking civil wars in Myanmar or Ethiopia?
In these two cases, no human will directly presided over the decisions of the algorithms.
We are not quite in algocracy, but we are not quite out of it either.
Algorithms are taking an increasing number of decisions of public importance instead of humans.
It is up to us to put them at the service of democracy and the common good.