
What is the 'black box' of artificial intelligence, a mystery that baffles experts

2023-05-25T10:19:50.335Z

Highlights: Google engineers sounded alarms after new software learned a language without prior instruction. While deep learning trains machines to process data the way the human brain does, the so-called "black box" that conceals how artificial intelligence (AI) algorithms reach their predictions increasingly worries experts in the field. Google CEO Sundar Pichai said this ability of AI programs to develop skills or offer answers in unexpected ways is what is known as the "black box." The phenomenon of "black boxes" in AI is disturbing because of the lack of understanding and control over how these systems acquire skills or provide answers.


Google engineers sounded alarms after new software learned a language without prior instruction.


While deep learning trains machines to process data the way the human brain does, the so-called "black box" that conceals how artificial intelligence (AI) algorithms reach their predictions increasingly worries experts in this field.

The "black box" appears when the developers of these systems stop understanding what happens in the alleys that the AI opens in its logical path, something that hinders the control of its actions.

Many even fear that this lack of transparency will lead to irreversible consequences, especially if this artificial intelligence gains skills it was not designed for or acquires total autonomy.

The alarm went off weeks ago, when a group of Google engineers programming AI software were surprised to discover that, without any prior training, it had learned a new language.

The rebellion of artificial intelligence, the great fear of experts. Photo: REUTERS

Google CEO Sundar Pichai said this ability of AI programs to generate skills or offer answers in unexpected ways is what's known as the "black box."

Far from panicking, Pichai added, "I also don't think we fully understand how the human mind works." And he called on specialists from different areas to join the discussion, in order to make the process less nebulous.

A path of no return

For some, artificial intelligence has reached a point of no return. Photo: REUTERS

Some theorists argue that a tipping point has been reached at which certain types of AI have already surpassed the human mind. The problem, they say, is that the finite human condition cannot encompass the infinite: advanced AI.

An example of this "infinite" AI would be ChatGPT, which can write functional code and college-level essays. It can also venture medical diagnoses, create text-based games and explain scientific concepts at multiple levels of difficulty.

"Machine Learning models are tested to determine if they work correctly and what degree of error they have. As systems with AI are not infallible: the machine suggests a decision or solution and the human is the one who decides if it is fulfilled, "warns Marcela Riccillo, doctor in computer science, expert in AI and robotics.

Unlike more traditional programming, which is based on implementing explicit instructions to obtain a result, in AI development engineers work to build a system that imitates the "neural networks" of human intelligence.

Symbolic logic

Engineers are looking for a system that mimics the brain's neural networks.

In fact, deep learning algorithms are trained in much the same way a teacher explains a new concept to a child, repeating it until the child finally grasps the idea.

The training usually relies on examples of things the system can already recognize and, before long, the "neural network" it has built can categorize things it has never encountered before.

"Some of the mathematical techniques of machine learning break down their results, such as Decision Trees. On the other hand, neural networks, due to their enormous complexity, do not. In both cases it is known how their structure is, how they are internally and what is the learning method. In addition, in neural networks the path to their conclusions is ignored and the results cannot be justified, "warns Riccillo.

As with human intelligence, little is understood about how a deep learning system reaches its conclusions. As Yoshua Bengio, a pioneer in this field, points out, "as soon as you have a machine complicated enough, it is almost impossible to explain what it does."

The phenomenon of "black boxes" in AI is disturbing because of the lack of understanding and control over how these systems acquire skills or provide answers in unexpected ways.

This situation raises ethical questions about the potential risks associated with the technology and its potential effects on a society that is defenseless in the face of these cyber advances.

Pandora's box of algorithms

The fear that algorithms will become a Pandora's box.

The great challenge in this field is developing techniques that justify the decision made by a machine learning algorithm without opening Pandora's box.

But explaining AI decisions after they happen can have dangerous implications, argues Cynthia Rudin, a professor of computer science at Duke University.

"The Neural Networks -in particular Deep Learning- that is used in ChatGPT, are questioned for not explaining the results. Several companies are trying to achieve this. But if an application with Machine Learning does not learn well, in each technique you can try to improve the model, although it is not always achieved. Whether or not you explain your results," says Riccillo.

Appealing to ChatGPT's mechanical candor, this reporter asked it about the biases that the black box conceals in generative AI.

"The black box is beyond human reach in AI systems that use complex algorithms, such as deep neural networks. Sometimes, it can be difficult for people to understand how a specific decision was made, as there can be multiple layers of processing and calculation that are difficult to follow."

These opaque models are gaining ground in some work environments, and their side effects are already leaving a trail of consequences: approving a biopsy for a possible tumor, granting bail, bombing a military zone or approving a credit application.

Currently, some 581 models involved in medical decisions have received clearance from the Food and Drug Administration. Nearly 400 are intended to help radiologists detect abnormalities in medical imaging, such as malignant tumors or signs of a stroke.



Source: clarin
