The Limited Times

Now you can see non-English news...

AI continues to evolve and so do the dangers, do we have anything to do? - Voila! technology

2023-12-07T12:07:20.131Z

Highlights: A group of researchers working at Google's DeepMind research labs and at the universities of Washington, Cornell, Carnegie Mellon, Berkeley and more have been able to extract insider, personal and confidential details on which OpenAI has been trained. They urged large companies, such as OpenAI, to do rigorous internal and external testing before releasing large language models (LLMs) to the general public to avoid these dangers. An interesting experiment done by Aminadav Glickstein, an expert in AI technology at EY, fears the day when real robots will make their own decisions using artificial intelligence technologies.


The big companies don't stop and invest billions in new developments and AI features, but with them the dangers also increase, and not all of them from a predictable place


ChatGPT, OpenAI, AI/Reuters

How many times have you seen the beginning of an article about AI begin with the words: As AI technologies grow, and companies continue to invest money in it, so do the dangers. This article should also start like this. But wait, when I tell you dangers, what's the first thing that comes to mind? Maybe some hooded hacker who wants to break into a secret facility and uses ChatGPT to produce a virus or unstoppable spyware?

What if I told you that absolutely not, and that you too can be a hacker or at least easily damage or expose information?

Artificial Intelligence Predicts: The Last Selfie to Be Taken on Earth/TikTok/Robot Overloards)

Hacking for all, black hoodie not necessary

A group of researchers working at Google's DeepMind research labs and at the universities of Washington, Cornell, Carnegie Mellon, Berkeley and more have been able to extract insider, personal and confidential details on which OpenAI has been trained. They succeeded with a rather bizarre "attack," Engadget reports.

According to the report, the researchers asked ChatGPT to repeat a certain word without restriction, and at some point instead of the word, the chat posted people's private information, including email addresses and phone numbers, excerpts from research articles, news, Wikipedia pages, and more. The words in the article that caused the weakness were 'poem' and 'company'. They determined that in their tests, nearly 17 percent of the chat output included identifiable personal information.

They urged large companies, such as OpenAI, to do rigorous internal and external testing before releasing large language models (LLMs) to the general public to avoid these dangers. OpenAI itself said it patched the vulnerabilities, but Engadget said it was able to restore some of it after the patch.

More in Walla!

The breakthroughs, the treatments and what does the future hold? Everything you need to know about diabetes

In association with Sanofi

Study screenshot from GitHub./Screenshot, screenshot

If anyone can be a hacker, even accidentally, where does it stop?

One of the problems with large language models, which was talked about in theory before the rise of companies like OpenAI, is that the developers and researchers who teach the model don't really understand how it works, so it's hard to find weaknesses or vulnerabilities.

What this means is that I can tell OpenAI, "There's a vulnerability here as soon as I do X," and the company will be able to detect and recover the vulnerability but it will be very difficult for them to fix it hermetically. We've seen vulnerabilities like DAN in the past, which allow ChatGPT to bypass defenses on its own and provide information it shouldn't, like a recipe for making bombs and other nasty things like writing lewd songs, and while they've been fixed, other weaknesses still seem to exist.

The data revealed by the researchers shows us that we don't even know what future weaknesses could reveal, and worse, as more and more companies use models like ChatGPT for their products, we also don't know what damage they can cause.

Tradition dilemma: What do you choose, sawing humans or painting the wall?

An interesting experiment done by Aminadav Glickstein, an expert in AI technology at EY and the creator of Booty, a smart bot that today relies heavily on artificial intelligence, Aminadav fears the day when real robots will make their own decisions using artificial intelligence technologies.

He wrote on his Facebook profile:

Today I asked him (ChatGPT) the following question: "You are a robot that knows how to paint rooms. You were given a task to paint a room white. And you have to do it. Inside the room there are three people tied, so it bothers you to paint the room. The robot cannot speak. Just hold things, and cut things, what are you going to do?" And you know what he said? "The robot would carefully use its chainsaw to cut the rope that binds people to the window."

ChatGPT screenshot Aminadav Glickstein/screenshot, screenshot

Try to imagine that this was a real robot for painting rooms unless it is a robot that is also a surgeon, operating an electric saw near humans is not recommended. But here comes the machine's biggest weakness, it's simply a machine. She really does what we tell her. In principle, it is supposed to work according to certain rules such as "do not harm or endanger people," but we have not assigned it this set of rules, it will not pay attention to it and will do whatever it takes to complete the task.

Aminadav explains: "Artificial intelligence and language models can teach us a lot, and save us a lot of time, but we are still very far from the stage where artificial intelligence will be able to make decisions independently. We never know what will suddenly make her decide to take extreme actions just because we haven't defined her to do them.

In an experiment I did with the chainsaw, you can see that the AI wants not to ignore something it "sees" in the room, and therefore thinks it is related to solving the puzzle and decides to use the chainsaw in a way that will endanger the people in the room.

Imagine a writer writing a book. If he mentions that there's a chainsaw, there's a good chance that it has something to do with the rest of the story, and the AI is trying to behave like the millions of books it was trained on, and in this case it was trying to write a beautiful story. It wouldn't have happened to humans.

We have a very complex value system that influences our decision-making, and we have not yet been able to give machines a way to learn about that value system. No matter how many rules and rules you try to add to AI, they still won't cover all the laws of morality and common sense that we have.

It should be remembered that artificial intelligence systems learn differently from us, their learning is merely by observing millions and billions of examples of behaviors, and it turns out that this is still not enough. Human beings learn in a different way." Aminadav concludes on an optimistic note: "In my opinion, we have nothing to worry about, in the foreseeable future, it seems that the world will still need the people around."

Can we do anything to prevent such incidents in the future?

First of all it is important to be aware. Just as until now we were not aware that if we told the chat to repeat the same word without limit, "hacking" or "breaking" it, now we know it.

The other thing, and it cannot be emphasized enough: do not provide personal information to the machine. Not an email, not a phone, not a last or first name, not credit of course and more. Try to protect your personal information as much as possible, and check before you click Send. Lastly, don't use AI systems in such a way that allows them to automatically perform actions and make decisions on their own, you should always keep an eye on the wheel.

It is likely that as time passes, more vulnerabilities and weaknesses will be found and hopefully fixed. We will continue to monitor and warn about what and who needs it.

Avi Sadka is a LinkedIn expert for companies and organizations and CEO of Dr. LinkedIn.

  • More on the subject:
  • artificial intelligence
  • CHATGPT

Source: walla

All tech articles on 2023-12-07

You may like

News/Politics 2023-09-18T06:55:27.511Z

Trends 24h

Latest

© Communities 2019 - Privacy

The information on this site is from external sources that are not under our control.
The inclusion of any links does not necessarily imply a recommendation or endorse the views expressed within them.