
Slavoj Zizek on Artificial Intelligence: 'The danger is not taking a chatbot for a person, but people talking like chatbots'


The Slovenian philosopher fears that the rise of powerful chatbots spells the death of irony and nuance in human thought. He talks about 'artificial idiocy'.

There is nothing new about “chatbots” that are able to carry on a conversation in natural language, understand the basic intent of a user, and give responses based on pre-established rules and data.

But the capacity of these chatbots has increased dramatically in recent months, which has created great concern and panic in many circles.

Much has been said about chatbots heralding the end of traditional student essays.

But one issue that warrants more attention is how chatbots should respond when human interlocutors make aggressive, sexist, or racist comments to prompt the bot to respond with its own gross fantasies.

Should AIs be programmed to answer at the same level as the questions posed to them?

If we decide that some kind of regulation is expedient, we must determine how far the censorship should go.

Will political positions that some cohorts consider “offensive” be prohibited?

What about expressions of solidarity with the Palestinians in the West Bank or the claim that Israel is an apartheid state (which former US President Jimmy Carter once put in the title of a book)?

Will they be banned for being “anti-Semitic”?



The problem does not end here.

As artist and writer James Bridle warns, the new AIs “rely on the indiscriminate appropriation of existing culture”, and to believe that they are “really adept or meaningful is actively dangerous”.

Therefore, we must be very cautious with the new AI image generators.

“In their attempt to understand and reproduce the totality of human visual culture,” observes Bridle,

“they seem to have also recreated our darkest fears.

Perhaps this is just a sign that these systems are very adept at mimicking human consciousness, even going as far as the terror that lurks in the depths of existence: our fear of filth, death, and corruption.”

But how capable are the new AIs of approximating human consciousness?

Consider the case of the bar that recently advertised a drink special in the following terms: “Buy one beer for the price of two and get a second beer totally free!”

To any human being, this is obviously a joke.

The classic “buy one, get one free” offer is reformulated so as to cancel itself out. It is a display of cynicism that is expected to be appreciated as comic honesty, all in the service of boosting sales.

Would a chatbot understand any of this?

The word “fuck” presents a similar problem. While it designates something that most people like to do (copulate), it also often takes on a negative value (“We got fucked!”).

Language and reality are complicated.

Is AI prepared to discern these differences?

In his 1805 essay “On the Gradual Formation of Thoughts in the Process of Speaking” (first published posthumously in 1878), the German poet Heinrich von Kleist inverts the well-known advice that one should not open one's mouth to speak unless one has a clear idea of what one is going to say: “If, therefore, a thought is expressed in a confused way, it by no means follows that that thought was conceived in a confused way. On the contrary, it is quite possible that the ideas that are expressed most confusedly are precisely those that were thought most clearly.”

The relationship between language and thought is extraordinarily complicated.

In a passage from one of his speeches in the early 1930s, Stalin proposed radical measures to “detect and mercilessly combat even those who oppose collectivization only in their thoughts... yes, that's what I mean, we must fight even the thoughts of people.”

One can say with certainty that this passage had not been prepared in advance. Having let the phrase slip out, Stalin immediately realized what he had just said. But rather than backtrack, he decided to press the hyperbole further.

As Jacques Lacan later put it, this was a case of truth emerging by surprise through the act of enunciation.

Louis Althusser identified a similar phenomenon in the interplay between prise and surprise: someone who suddenly grasps (“prise”) an idea will be surprised at what they have achieved.

I repeat, can any chatbot do this?


Artificial idiocy

The problem is not that chatbots are stupid; it is that they are not “stupid” enough. It is not that they are naive (missing the irony and reflexivity); it is that they are not naive enough (failing to realize when naivety masks insight).

The real danger, then, is not that people will mistake a chatbot for a real person; it is that communicating with chatbots makes real people talk like chatbots, skipping over all the nuances and ironies, obsessively saying precisely what they think they mean.

When I was young, a friend went to see a psychoanalyst for treatment after a traumatic experience.

This friend had a commonplace idea of what analysts expect of their patients, and so he spent the entire first session producing false “free associations” about how he hated his father and wanted to see him dead.

The analyst's reaction was witty: he adopted a naive “pre-Freudian” stance and reproached my friend for not respecting his father (“How can you talk like that about the person who has made you what you are?”).

This feigned naivety conveyed a clear message: I don't believe your false “associations.”

Could a chatbot pick up on that subtext?

Most likely it wouldn't, because its position resembles that of Prince Myshkin in Dostoevsky's The Idiot, as read by Rowan Williams.

According to the traditional reading, Myshkin, “the idiot,” is a “positively good and beautiful” holy man driven into insane isolation by the harsh brutalities and passions of the real world. But in Williams's radical rereading, Myshkin represents the eye of the storm: good and holy as he is, it is he who unleashes the chaos and death he witnesses, owing to his role in the complex web of relationships that surrounds him.

It's not just that Myshkin is a naive goofball.

It's that his particular kind of foolishness makes him unaware of the disastrous effects he has on others.

He is a flat person who literally talks like a chatbot.

His “goodness” lies in the fact that, much like a chatbot, he reacts to challenges without irony, offering unthinking platitudes, taking everything literally, and resorting to mental auto-completion instead of genuine idea formation.

For this reason, the new chatbots will get along very well with ideologues of all stripes, from the “woke” public to “Make America Great Again” nationalists who prefer to stay asleep.

*Slavoj Žižek is Professor of Philosophy at the European Graduate School, International Director of the Birkbeck Institute for the Humanities at the University of London, and the author of “Heaven in Disorder” (OR Books, 2021).

Copyright: Project Syndicate, 2023. From Ljubljana, Slovenia

Translation: Elisa Carnelli


Source: Clarín

