
Artificial intelligence and democracy

2021-06-09


We must keep trying to invent procedures and institutions that work in the new digital constellation, just as our ancestors did at other moments in history.



The effects of artificial intelligence on so many aspects of our lives have raised all kinds of expectations and concerns. They have prompted a regulatory effort that, in the European Union, has translated into ethical codes, a regulation to protect privacy and a recent proposal on the precautions we must take with automated decision-making systems. The perspectives from which the issue has been approached are fundamentally those of private law, administrative reform, cybersecurity and ethical recommendations, but we have hardly thought about it from the point of view of democracy, except for a few catastrophist essays or, at the other extreme, entirely illusory promises of democratization.

Digitization has great political relevance that is not only a matter of its being an object of politics (of there being digital policies); digitization itself has to be understood as a political process. In the debates about artificial intelligence, much is said about its ethical, legal and economic dimensions, but very little about its political dimension.

It is necessary to think about what democratic self-government means and what meaning free political decision has in this new constellation. This would involve developing a theory of democratic decision in an environment mediated by artificial intelligence, a critical theory of automated reason. We need a political philosophy of artificial intelligence, an approach that cannot be covered by technological reflection or by ethical codes.

We have to pay greater attention to the disruptions that this new constellation (increasingly intelligent systems, a more integrated technology and a more quantified society) is going to produce in our form of democratic organization. Certain decisions are no longer made solely by human beings but are entrusted, in whole or in part, to systems that process data and yield a result that was not fully predictable. What becomes of free decision, which is the normative core of democracy, in automated environments? Who decides when an algorithm decides? The new digital environment is going to force us to rethink some of the basic categories of politics and to govern this world with other instruments. We are talking about particularly sophisticated and complex technologies, for which generic appeals to their "humanization" or ethical codes that seem to ignore their nature are of little use. Learning machines, data analysis on a gigantic scale and the current proliferation of automated decision systems are not devices that can be regulated with simple intervention procedures, but that is not an excuse for doing nothing; it is a reason for regulatory institutions to act with at least the same intelligence as what they are obliged to regulate.

The fact that we are witnessing a brutal change in our technological environment, with largely unpredictable consequences, explains why we do not know very well how to diagnose the situation, and the scene has filled with extreme, barely nuanced assessments, whether of excessive enthusiasm or of apocalyptic overtones, formulated even by intellectuals from whom we have the right to expect a more serene judgment. These assessments have evolved in a very short period of time. Relatively recently we were celebrating the democratizing potential of the web in what became known as the Arab Spring and universal access to the public space, while now we are terrified of bots, electoral interference and misinformation. The September 2018 issue of the MIT Technology Review was dedicated to the question of whether technology was threatening our democracy, and on December 18, 2019, The Economist was already speaking of an "aithoritarianism", an authoritarianism of artificial intelligence that could destroy democratic institutions. This explains why there are such conflicting descriptions of the situation in which we find ourselves: while some celebrate the arrival of a politics without ideological prejudices, others warn us about the end of democracy. There are those who assure us that the new technology will solve the problems that old politics faced; others hold the new technological environment responsible for the loss of governing capacity over social processes and the de-democratization of political decisions.

The fundamental question posed to us is what place political decision occupies in an algorithmic democracy. Democracy is free decision, popular will, self-government. To what extent is this possible, and does it make sense, in the hyper-automated, algorithmic environments heralded by artificial intelligence? Representative democracy is a way of articulating political power that attributes it to a specific body in accordance with a chain of responsibility and legitimacy in which the principle that all power comes from the people is upheld. From this perspective, the introduction of autonomous intelligent systems appears problematic.

The general trend towards the automated management of human affairs is not just a quantitative increase in the instruments at our disposal but a qualitative transformation of our being in the world, a world at whose center we no longer stand. With automation we could be programming our own obsolescence. Marvin Minsky said that we should consider ourselves lucky if, in the future, intelligent machines keep us as pets. How can we ensure that this sinister prophecy is not fulfilled and that human beings retain a certain sovereignty in these new technological environments?

When we talk about human-centered and democratic artificial intelligence, there are basically two strategies that allow us to think about a reappropriation of automated decision-making processes: the design of the human-machine ecosystem and transparency.

In the first place, it is a matter of designing the best presence of humans in processes characterized by enormous complexity, bearing in mind that this balance inevitably involves a certain tension: we have to think about this ecosystem in such a way that humans are not subordinated (something incompatible with our ideal of self-determination), while at the same time intervening in the machines without ruining their performance. With this I am not proposing a solution but calling attention to a problem that is sometimes overlooked by ethical and humanistic proposals that amount to little more than exhortations.

The other strategy for humanizing technology is transparency, understood as the possibility for humans to explain, understand and demand accountability from artificial intelligence. Here, too, there are solutions and demands that seem not to take into account the complexity of the systems or the subjective limits of understanding. The great task in this regard revolves around notions more realistic than transparency, such as explainability, the generation of trust, or the idea that understanding is not so much a subjective matter as a collective one, which has to be facilitated and institutionally regulated.

Human beings have been able to invent, with greater or lesser success, democratic procedures and institutions for very different realities: for Greek cities and for Renaissance city-states, for nation-states and for some of our global institutions, such as the European Union.

Are we so sure that this cannot be achieved in the new digital constellation?

I think we have no right to stop trying until it is proven to be an impossible goal.

Daniel Innerarity is a professor of Political Philosophy and an Ikerbasque researcher at the University of the Basque Country.

@daniInnerarity

Source: El País
