
Artificial intelligence: who controls the AI and the algorithms?

2021-01-18


Artificial intelligence will fundamentally change our everyday life. But not nearly as fast as many assume. Time enough to ask yourself: who controls the algorithms?



Amazon devices chief Dave Limp: Not every action of the Alexa software is traceable.

Artificial intelligence is still in the early stages

Photo: Andrej Sokolow / DPA

If you get bored in the home office, enter the search term "Alexa fails" on YouTube.

There you can experience the entertaining reality of artificial intelligence in its early phase:

Male voice: "Alexa, I urgently need medical help."

Alexa: "I urgently put medical help in your shopping basket." 


Children's voice: "Alexa, how do you spell book?"

Alexa: "It's spelled FUCK."


Also entertaining are the difficulties of the Amazon voice assistant with a Scottish accent or the endless loop that the artificially intelligent devices from Google and Amazon get into when you place them next to each other and tell them: "Repeat that!"

Jens-Uwe Meyer

Dr. Jens-Uwe Meyer is managing director of Innolytics GmbH, an author and an international keynote speaker. With 13 books (including "Digital Winners" and "Digital Disruption") and more than 250 articles, he is one of Europe's pioneers in digitization and innovation.

www.jens-uwe-meyer.de

These video clips are funny gimmicks, but they vividly illustrate the challenge companies face when using applications based on artificial intelligence (AI).

Yes, there is a huge potential for efficiency and growth in AI, and the smart algorithms will definitely change our everyday lives.

But the technology is currently still at the very beginning; all the new miracle solutions that we are staring at with great expectations are still immature.

Luckily.

How much autonomy do we want to give up?

Because let's imagine for a moment that the digital mishaps mentioned at the beginning were not verbal gimmicks in the living room at home, but the reactions of an autonomous car at 120 km/h on the highway.

Probably nobody would laugh and say, "Google at the wheel, what do you expect?"

And what happens if the planned AI at the Rhineland-Palatinate State Criminal Police Office is wrong?

"Please excuse your arrest, our algorithm is still learning ..."

Artificial intelligence doesn't work quickly and easily.

On the contrary: like a small child, it has to learn everything with great effort.

And it is this learning phase that will keep companies busy in the coming years.

After all, which authority checks whether the machine's decisions actually correspond to what the developers intended?

Has the input (images, language, data) been interpreted correctly?

Did the AI derive the correct reactions from this?

And if not, did the system notice that it was wrong?

In theory, a lot of people might find it funny that "book" is spelled FUCK and might rate that answer positively.

Alexa would then learn from this that this is obviously the correct spelling.

So what?

Algorithms control algorithms

Amazon reacted to the initial errors, set up an internal quality management system - and promptly caused a scandal: It turned out that the supposedly self-learning machines did not actually learn everything themselves, but that conversations were recorded and evaluated by people.

Data protection advocates and consumers may be outraged by this, but from the point of view of AI development, this approach is correct.

Because when AI applications are deployed, they have to be monitored at least as strictly as any other business process.

With every new employee in a company, the work results are examined very carefully at the beginning.

Why should it be any different with a newly developed AI?

Companies that work according to recognized quality management standards are even explicitly obliged to constantly monitor the results and improve their AI.

This task becomes more and more extensive as the amount of data increases.
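To make this kind of ongoing result monitoring concrete, here is a minimal sketch in Python of what such a check could look like: predictions with low confidence are routed to human reviewers rather than acted on, roughly analogous to the human evaluation of Alexa conversations described above. The function names, the threshold and the toy classifier are purely illustrative assumptions, not a description of any real vendor's system.

```python
# Illustrative sketch of result monitoring for an AI service.
# All names (classify, CONFIDENCE_THRESHOLD, MonitoringLog) are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off below which a human must check

@dataclass
class MonitoringLog:
    reviewed: List[Tuple[str, str, float]] = field(default_factory=list)
    accepted: List[Tuple[str, str, float]] = field(default_factory=list)

def classify(utterance: str) -> Tuple[str, float]:
    """Stand-in for the deployed model: returns (intent, confidence)."""
    if "medical help" in utterance:
        return ("add_to_shopping_basket", 0.55)  # the kind of error quoted above
    return ("answer_question", 0.95)

def monitor(utterances: List[str]) -> MonitoringLog:
    log = MonitoringLog()
    for text in utterances:
        intent, confidence = classify(text)
        if confidence < CONFIDENCE_THRESHOLD:
            # Low confidence: flag for human review instead of acting blindly.
            log.reviewed.append((text, intent, confidence))
        else:
            log.accepted.append((text, intent, confidence))
    return log

print(len(monitor(["Alexa, I urgently need medical help.",
                   "Alexa, how do you spell book?"]).reviewed),
      "utterance(s) flagged for human review")
```

As the amount of data grows, even maintaining such a review queue quickly exceeds what human reviewers can handle, which is exactly the scaling problem described next.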

Imagine a self-driving car driving through an average residential area.

The system has to process millions of pieces of information within a few seconds.

Who controls this system?

It would take a person years to evaluate the data from a single car trip.

So it will again be algorithms that monitor the functioning of an artificial intelligence.

However, according to researchers at the Max Planck Society, even these would not be able to fully control super-intelligent machines.

In the end, it takes an AI to control the AI. 

The European Union's cybersecurity agency ENISA is already warning of the "unpredictable risks" that the use of AI would bring.

What happens if cyber criminals manage to change the behavior of an AI? If they penetrate systems, manipulate data and, for example, alter the behavior of networked autonomous vehicles?

That may sound like science fiction, but it is spelled out in ENISA's report "Artificial Intelligence Cybersecurity Challenges" from December 2020.

The use of AI is overrated

None of this is doom-mongering, just a sober look at reality.

But it is also a fact that the introduction of AI applications is taking place more slowly than expected and is significantly more complex than was imagined in the euphoria of the past decade.

According to a Bitkom study, the use of AI is significantly overestimated: 77 percent of all Germans assume that companies use AI for personalized advertising - but only four percent of companies actually do so.

55 percent of those surveyed think that customer inquiries are now answered automatically.

In reality, only three percent of companies use this option.


This is because AI is by no means the plug-and-play solution to all problems that many managers would like it to be.

Even if researchers are already working on the development of an artificial brain, the reality is currently still different.

AI is a combination of different artificial "island talents" - narrow skills that exist in isolation from one another.

The system does what it was trained to do.

No more.

If a system is taught to differentiate between different types of gin, it does just that. Put a vodka in front of it, and the AI is stumped.

Recognizing vodka is a new island talent that has to be learned with great effort.

Approaches such as transfer learning are intended to enable AI applications to learn new things on the basis of what has already been learned.

But even that will - as of today - not work fully automatically.
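To illustrate the transfer-learning idea mentioned here, the following sketch (assuming PyTorch and torchvision are available; the number of classes and the training data are hypothetical) reuses a network pretrained on a generic image task and retrains only its final layer for a new, narrow task, instead of learning everything from scratch:

```python
# Illustrative transfer-learning sketch (assumes torch and torchvision).
import torch
import torch.nn as nn
from torchvision import models

# 1. Start from a model pretrained on a large, generic image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the pretrained feature extractor so its weights are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the final layer for the new, narrow task
#    (e.g. three hypothetical classes of spirits instead of 1000 ImageNet classes).
num_new_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# 4. Train only the new layer; the previously learned features are carried over.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)   # dummy batch standing in for real data
labels = torch.tensor([0, 1, 2, 0])    # dummy labels for the new classes

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```

Even in this form, the new skill still needs curated training data and human checks on the results; nothing about it is fully automatic.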

Artificial intelligence already offers astounding applications.

Some of them can be admired in action on the YouTube channel #SogehtYouTube.

But before AI can really be used meaningfully, error-free and safely by a majority of companies for a variety of use cases, many years will pass.

We should use the time to determine who makes the rules for AI applications and how they are controlled and enforced.

To get back to the funny failures of the Amazon voice assistant: Alexa needs a boss.

And it should not be allowed to decide for itself who that is.

Source: Spiegel
