
Daniel Innerarity: “Algorithms are conservative and our freedom depends on being allowed to be unpredictable”

July 5, 2022


The philosopher launches a new Chair in Artificial Intelligence and Democracy to design "a dialogue in which humans and machines negotiate acceptable scenarios" and take advantage of the best of both worlds


Almost 20 years have passed since he won the National Essay Award for The Transformation of Politics, and since then Daniel Innerarity has become an essential analyst for understanding, as he himself puts it, that "democracy is not up to the complexity of the world."

A few years ago, after reading a biography of Thomas Jefferson and his vocation for science, he began to wonder what a founder of revolutionary democracy like Jefferson would do "if, instead of trivial artifacts and machines, he had artificial intelligence and the most powerful automated algorithms in front of him."

That concern is what led this 62-year-old from Bilbao to investigate the new complexity derived from technology.

Now Innerarity is laying the groundwork for the new Chair in Artificial Intelligence and Democracy that he directs at the European University Institute in Florence, with the support of the Secretary of State for Digitization and the Institute of Democratic Governance.

With this chair, the Ikerbasque researcher intends to renew our concepts, because Innerarity is convinced that the ones we use no longer serve to frame the new realities derived from thinking machines, and that this mismatch produces frictions that also threaten democracy.

"We are at a time in the history of humanity in which you can still negotiate, disagree, reflect on these technologies," he warns in a conversation from Florence, before they have solidified and make decisions on which society has not been able. argue.

Q.

How do you perceive the daily controversies about artificial intelligence, like that of the Google researcher who believes that one of the company's machines is already conscious?

A.

The strategy of defining artificial intelligence in human terms seems to me a big mistake: if humans have rights, then machines must have them too; if we pay taxes, so must they; if we tell jokes, so do they.

Because we are talking about two completely different intelligences.

My hypothesis is that there is not going to be a substitution.

And the same applies to democracy: we are not going to leave it in the hands of machines, simply because machines do some things very well, but politics is not one of them.

Politics is an activity carried out in the midst of great ambiguity.

And machines work well where things can be measured and computed, but they do not work well where there is context, ambiguity, uncertainty.

Instead of thinking about the emulation of humans by machines or fearing that machines will replace us, we must think about what things we humans do well and what things machines do well, and design ecosystems that get the best performance from both

Q.

Your diagnosis is far from technophilia and technophobia.

A.

We have gone in a year from thinking that artificial intelligence is going to save politics to, after Cambridge Analytica, thinking that it is going to destroy democracy.

Why, in such a short period of time, have we swung from over-enthusiasm to the opposite, as happened with the Arab Spring?

The wave of democratization that we expected from the internet has not occurred.

And now we associate the word internet with hate speech, disinformation, etc.

When we have such different attitudes towards a technology, that means that we are not understanding it well.

Because it is true that the internet makes public space horizontal; it ends the verticality that made us citizens mere spectators or subordinates.

But it is not true that it democratizes by itself.

The philosopher Daniel Innerarity poses next to the Congress of Deputies in Madrid. Andrea Comas

Q.

And you are launching this chair to try to bring some order to those ideas.

A.

A renewal of concepts is needed.

And it is here that we philosophers have a role to play.

For example, a question we hear a lot now: whose data is it?

It seems to me that property is a very inadequate concept to apply to data; more than a public good, data is a common good, something that cannot be appropriated, especially because the level of collection that I tolerate greatly conditions that of others.

And we are now handling an idea of privacy that we have never had before, along with the concepts of sovereignty and power... A philosophical reflection is needed on concepts that are being used inappropriately and deserve revision.

And there are many centers in the world reflecting on this from an ethical and legal point of view.

Q.

In your recent book The Society of Ignorance you speak of algorithms as a new printing press that is coming to revolutionize everything.

A.

The turning point occurs from the moment when we humans design machines that have a life of their own, that are no longer merely instrumental.

When we produce artificial intelligence we enter quite unknown territory.

The division of the world we had drawn, in which humans are subjects of rights and obligations while the technology we design is merely passive and subject to our control, is an idea that no longer works.

There is a break.

I compare it to the moment when Darwin put an end to the idea of the God who designed creation: he forced us to think in a different way.

I think that when you talk about controlling technology you are, in that sense, in a pre-Darwinian attitude.

Obviously, algorithms, machines, robots, artificial intelligence must have a human design; we have to discuss that.

But the idea of control, like the one we have classically had over trivial technologies, seems to me completely inadequate.

What we have to do is establish a kind of dialogue in which humans and machines negotiate acceptable scenarios, thinking about equality, the impact on the environment, democratic values.

The idea of control is not going to work when we talk about learning machines.

The property concept is inadequate to refer to data, which is a common good

Q.

But it is very difficult to propose that negotiation when we do not know what is inside the black box, when we do not know how things work within the algorithm.

A.

It is a problem that today does not have an easy solution for several reasons.

First, because of the complexity of the matter.

Second, because the algorithm has a life of its own and is therefore also opaque to its designer.

And third, because we understand the idea of auditing algorithms, of having transparency, as something individual, let's say, like someone signing a document.

And I think we have to move toward public systems that allow us to establish trust in machines.

There is this idea that these artifacts are black boxes, as if humans weren't black boxes too.

The algorithms used to decide prison policies create many problems, but it is sometimes implied that an algorithm has biases while humans do not.

Aren't the heads of judges also black boxes?

Q.

But we humans know that we have these biases and we let the machine intervene because we want it to be less biased.

A.

Probably because we are more demanding about the objectivity of technology.

We expect objectivity from technology, and the moment it fails us, even slightly, we find it much harder to tolerate than a human failure.

The clearest case is that of autonomous car accidents.

They make us much more uneasy than the accidents we have every day on the roads; when technology is involved, we are deeply uncomfortable.

But the famous accident in Arizona, in which an autonomous car ran over a pedestrian, would have happened exactly the same if a human had been driving.

Behind apparently automated processes there are people intervening without our knowledge

Q.

A scenario that raises the problem of responsibility: who holds the reins when intelligent machines act?

A.

It is completely inappropriate to think that we control our rulers; at most we monitor them, we revalidate their mandate... But we do not control them at every moment of the political process.

There are many institutions over which we have no electoral control; there are independent bodies.

Well, just as in the world of political institutions, in the world of technology we should arrive at an idea of dialogue with the machine rather than control.

We increasingly drive cars over which we have less control, but they are safer: my car does not let me fall asleep at the wheel, overtake without signaling, or brake however I want.

The end result of the technology in the car is that I no longer have complete control, but in return it offers me a general supervision of the process so that I don't kill myself.

It is like when states give up sovereignty in Europe.

If we share political sovereignty, why not share technological sovereignty?

Q.

There are many political decisions to be made before letting machines make them on their own.

A.

I call for philosophical reflection on these concepts.

What kind of society do we want?

Technology has the character of a means to an end.

The underlying question that should interest us is what values and what democracy we want.

Q.

In the book you quote Nick Seaver: "If you don't see a human in the process, you have to look at a broader process."

When we interact with Alexa, we don't see the human, but the human is there: collecting lithium in the mine in Bolivia, working in the click farms in Asia.

A.

One of the most important things for framing this issue well is to think less in terms of oppositions.

Technology has a lot more humanity, if you will allow me the provocation, than ethicists usually claim.

Contrary to those who conceive of technology as something immaterial, virtual, intangible, existing in cyberspace, it is actually much more material, with a brutal environmental impact.

And that material part is often outside our scope of attention.

And there have to be humans in the process: behind apparently automated processes there are people intervening without our knowledge.

In this labor transition, it is possible that we are going to have a new type of social conflict

Q.

These technological processes lead to a polarization less well known than political polarization: labor polarization. The labor market is going to be divided between qualified, well-paid jobs and others that are very basic and poorly paid.

A.

Which points to a paradox: technology's promise to free us from mechanical work has not been fulfilled.

And the other paradox may be that this is indicating that the machines are not going to fully replace us.

The expectation or fear of being replaced is completely unrealistic.

And that has to do with a very important distinction between tasks and jobs: machines do tasks, but not whole jobs.

And in that transition it is possible that we are going to have a new type of social conflict.

We are seeing it now with the digital divide and the revolt of the elderly: people who feel expelled from this space.

And deep down we know that we are heading toward a more digitized world, and that we will save a great deal, because it is more efficient, with less wasted time.

But instead of thinking in terms of substitution, we have to think about what tasks can and should be carried out by a robot and what aspects of the human are unrealizable by a robot.

Not so much whether this is good or bad.

So-called artificial intelligence serves to solve certain types of political problems, but not others.

Rather than being so afraid that machines will take over all the tasks of government, let's facilitate the governance tasks they can do better than us.

Q.

And what worries you the most in that area?

A.

What worries me most is the lack of reflexivity.

That the algorithmic environment accustoms us to certain things being decided in a way that we have not thought about or discussed enough.

That we take it for granted, that we do not debate it.

Are we moving toward algorithmic, automated environments?

Perfect, but let us be aware that there is some kind of authority behind them.

Let's see what authority it is, and let's do what we humans have always done with all authority: subject it to review.

If the algorithm does not contemplate an open scenario, it will not allow innovation and will rob us of the future

Q.

However, artificial intelligence is being developed almost exclusively under the impetus of the big technology companies.

It is Facebook, Amazon, Google, etc., that are deciding what intelligent machines we have, and with the sole objective of maximizing profits.

A.

We are at a time in the history of humanity in which it is still possible to negotiate, disagree and reflect on these technologies.

It is very possible that in not many years these technologies will have solidified into institutions, into processes, into algorithms that are much more difficult to discuss.

That is why this work of philosophical reflection is important.

Many people are considering technological regulation, but we are not going to regulate properly a technology that we do not understand, because the concepts fail us.

Technological reflection and philosophical reflection must go hand in hand as supports for any regulatory activity.

Q.

You criticize the implicit conservatism in big data, which you compare to serving a reheated dish.

A.

Big data, and all predictive analytics, are very conservative because they are based on the assumption that our future behavior will be in continuity with our past behavior.

Which is not completely false, because humans are very automatic and very conservative; we repeat ourselves, we resist change.

But in the history of humanity there are moments of rupture, of change, of transformation.

And if the algorithm is not capable of contemplating an indeterminate, open scenario, it will not allow that element of innovation, of novelty, that exists in history.

As Shoshana Zuboff says, it will steal our future.

I quote another philosopher, Hannah Arendt, who says that human beings are highly repetitive animals with great habits and inertia, but we are also capable of performing miracles, of giving rise to new things, of doing something unusual.

Not every day, but from time to time.

Revolutions, transformations, innovations, breaking with tradition, questions, etc.

And at such a unique moment in the history of humanity, in which we are very aware that we have to make great changes because of the climate crisis, the digital transformation, equality... At a time like this we should have a technology capable of anticipating a truly unknown, open, indeterminate, democratic future.

Otherwise, we have to accept that these technologies have certain limitations and define well what those limitations are and what the scope of application is for these basically conservative algorithms.

And leaving indeterminate and open spaces for the free and unpredictable human dimension seems to me to be a fundamental democratic question.

Humans are unpredictable beings: we owe a good part of our freedom to that, and machines must reflect it well.

You can write to jsalas@elpais.es, follow EL PAÍS TECNOLOGÍA on Facebook and Twitter, or sign up here to receive our weekly newsletter.


Source: El País

