The Limited Times


Who's afraid of ChatGPT?

2023-03-28T09:24:46.865Z


The buzz around ChatGPT has its raison d'être in its extraordinary ability to carry out tasks such as generating texts and simulating a conversation.

It is a clear advance of that generative artificial intelligence (AI) that aims to emulate the logic of human thought in its communicative and creative form.

In this game of imitation of human faculties, this algorithm gets its power from digesting what is available on the internet: the millions of texts produced by humans are combined to adapt to the dialogue established with the user.

It seems that we have just discovered that tasks of substantial intellectual content can be carried out by a machine, but is this really so?

One of the criticisms levelled at this application draws attention to its limitations: it relies on data that is not sufficiently up to date, and it would be more accurate if it were trained on the most recent data.

Now, this very inevitability of operating with existing data reveals its insurmountable limits.

ChatGPT is very powerful when it comes to processing a large amount of pre-existing data, but not in the production of new insights and knowledge or in recommendations about new phenomena about which data or information is lacking.

A so-called intelligence that emulates humans will not truly be one so long as it fails to grasp the world as a whole and cannot generate novelty.

The errors of this device are due not to a lack of data but to a poor understanding of the world, which is logical if we take into account the nature of its operations.

Computational power means fast calculation and the processing of ever larger amounts of data, but it is not intelligence.

A neural network doesn't even know that words represent things.

ChatGPT and the other artifacts that will follow it are products incredibly capable of processing information and language without knowing what they are about; that is, they work only up to the limit at which understanding of the world begins.

The other property of human intelligence is its ability to deal with novelty in its various forms: innovation, questioning and breaking with what exists, critical capacity, managing uncertainty or contributing new ideas.

In all these fields, artificial intelligence devices can be of great help to us, but they always come up against an insurmountable limit.

Instead of being afraid that superintelligence will end up nullifying ours, we would do well to ask ourselves what is specific and insurmountable about our intelligence, and dedicate ourselves to cultivating it, in a way analogous to how the mechanization of work promoted the most creative trades.

We have to use our brains more than ever, not despite technological innovation but because of it.

The more intelligent artificial intelligence becomes, the more we will be forced to redefine the concept of intelligence.

A good part of human progress is due to the fact that we have concentrated our efforts on tasks that demanded greater talent and mechanized the rest as much as possible.

A concrete case of this dispute over demarcations has become evident during the discussion about the presence of ChatGPT in the world of education.

Instead of prohibiting or trivializing, we should take advantage of this opportunity to rethink what learning consists of in the new digital environment.

Does the educational aspiration still make sense when information is not only instantly accessible (as it already was thanks to traditional search engines) but also coherently composed?

There are two dimensions of learning, at what we could call its two extremes, that are affected by AI: the most personal thinking and the most mechanical.

In both cases it seems reasonable to consider to what extent we should limit the use of such instruments.

If GPS navigation or smartphones have led us to rely excessively on technology (to the point of weakening our sense of spatial orientation or of forgetting dates and numbers), ChatGPT can lead us to settle for the information it provides and to stop regarding the ordering of that information, its exposition and its sense-making as a personal task in which we are not fully replaceable.

We should also consider to what extent it makes sense to underestimate the learning of certain cognitive processes due to the fact that artificial intelligence can do them.

The Internet has not meant that people no longer need to know anything because everything can be found through Google, if only because you have to know beforehand what it is you want to know.

Who teaches us how to search, and how do we know that we have found what we were really looking for?

You can only understand the information on Wikipedia if you have prior knowledge.

The existence of calculators has not made it unnecessary for people to learn to calculate.

At least some idea of arithmetic operations is essential to interpret the results.

Pilots must also be able to manually land a plane if the technology fails.

Regardless of the help it can give us, we should still be able to write texts autonomously, without artificial intelligence.

You have to have been able to do certain things that no longer make sense to do.

Of course there will be changes in the objectives of learning: presumably, in the future it will not be necessary to master, in all their subtleties and special cases, the basic competences that the devices at our disposal can take over.

However, anyone who fears, or demands, that people no longer need to learn to formulate texts independently because cognitive tools like ChatGPT exist has not understood that such tools can generally only be used meaningfully by people who have some idea of the cognitive processes from which they free them.

Professor of Political Philosophy, University of the Basque Country (UPV)


Copyright La Vanguardia, 2023.

