This is the second part of a two-part article. In the first part of the interview that Sam Altman, the founder of OpenAI, the company behind ChatGPT, gave to the "New York Times", he referred to the technology his company is developing as a modern equivalent of the "Manhattan Project" - the project that developed the nuclear bomb.
Altman is aware of the dangers of artificial intelligence, and he himself says that the product his company aims to develop - artificial general intelligence - could indeed launch humanity to the stars (both metaphorically and literally), but could also destroy it; it all depends on how the technology develops.
Just last week, a group of scientists and technologists signed an open letter calling on artificial intelligence labs around the world to pause development, because of the problems that may arise with intelligent computers and the possible danger that such computers will try to take over or destroy the world, Terminator's Skynet-style - especially if someone comes up with the idea of connecting artificial intelligence to weapons systems.
The call was indeed heard in Western countries, but those who responded to the open letter posed a logical and reasonable question: will China also stop its efforts to develop artificial general intelligence?
It is clear that authoritarian, power-hungry countries will not stop these efforts, and therefore the West should not stop either - a logic similar to the deterrence principle of mutually assured destruction from the days of the Cold War.
Are we at the beginning of a new arms race similar to the nuclear arms race, but this time with artificial intelligence?
But long before we need to fear Skynet, or murderous robots sent back in time to hunt Sarah Connor, artificial intelligence today still has plenty of problems.
One of the fundamental problems of GPT (the model behind ChatGPT - it is important to note the distinction) and similar tools, such as Google's Bard, is what Google calls "information hallucinations".
The models tend to "fill in" and improvise over gaps in their knowledge, simply inventing whatever is missing to complete the requested task; like a digital Baron Munchausen, the machines make up details and facts that do not exist or are incorrect. These are "information hallucinations".
The problem is that people who lack prior knowledge in the field the AI is answering about, and cannot examine its text with a critical eye - high school students or undergraduates, for example - may take these for solid facts, because artificial intelligence often produces texts that at first glance look or sound coherent and knowledgeable, but do not hold up to closer examination.
Another problem concerns the identification of objects in images.
In the training of AI systems built, for example, to identify cancerous skin tumors better and faster, strange biases can arise - biases that we as humans know how to spot, but the artificial intelligence cannot.
For example, an artificial intelligence trained to identify potentially cancerous skin growths and moles tended to classify any photo containing a ruler or tape measure near the mole as a dangerous mole, simply because in its training data many images of cancerous moles included a tape measure or ruler, used to measure the mole's size and circumference...
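The ruler bias is an instance of what machine-learning practitioners call a spurious correlation: a shortcut feature that predicts the label in the training data but not in the real world. A minimal sketch of the idea - the features, data and naive "learner" here are all invented for illustration, not a real dermatology model:

```python
# Toy illustration of a spurious-correlation bias: a naive learner that
# picks whichever single feature best predicts the label will latch onto
# the shortcut. All features and data below are invented.

def train_single_feature(dataset):
    """Return the binary feature that best predicts the label on the
    training set - a deliberately simplistic 'model'."""
    features = dataset[0][0].keys()
    best, best_acc = None, -1.0
    for f in features:
        acc = sum(x[f] == y for x, y in dataset) / len(dataset)
        if acc > best_acc:
            best, best_acc = f, acc
    return best

# Cancerous moles (label 1) were all photographed next to a ruler, so
# 'has_ruler' correlates with the label even better than a real symptom.
train = [
    ({"has_ruler": 1, "irregular_border": 1}, 1),
    ({"has_ruler": 1, "irregular_border": 0}, 1),
    ({"has_ruler": 0, "irregular_border": 1}, 0),
    ({"has_ruler": 0, "irregular_border": 0}, 0),
]

chosen = train_single_feature(train)   # picks "has_ruler", the shortcut
# A benign mole that happens to be photographed next to a ruler is now
# misclassified as cancerous:
prediction = {"has_ruler": 1, "irregular_border": 0}[chosen]   # -> 1
```

The fix in practice is the one the article hints at: curate the training data (or augment it) so the shortcut feature stops correlating with the label.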
Another bias, one that very much exists and cannot be ignored, is social bias.
For example, artificial intelligence has had more difficulty recognizing the faces of dark-skinned people - including AI used in autonomous cars - simply because the original data set, that is, the set of images on which the AI was trained to recognize faces, contained mostly images of white people.
Similarly, AI systems that screen resumes learned to infer a person's skin color from data such as home neighborhood or schools attended.
It is worth noting that in these cases the artificial intelligence is not to blame. It is simply a reflection of racist notions and prejudices that already exist in human society, but have made their way into the machine's code.
Of course, today there is already more awareness of the issue and efforts to correct or neutralize such biases, but there is still a long way to go.
And the third very prominent problem of artificial intelligence, besides biases and hallucinations, is the black box problem.
Sometimes - and here the problem lies precisely in artificial intelligence's advanced capabilities and its ability to draw its own conclusions - AI systems arrive at answers without the scientists and engineers who created them being able to explain how the system reached that result or conclusion.
This is a problem when introducing artificial intelligence into regulated areas such as resume screening, insurance policies or autonomous driving: there is an understandable need to be able to explain how the machine reached a given result, and that answer is not always available.
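One family of techniques for peeking into such black boxes treats the model purely as input/output and probes it: nudge each input feature slightly and measure how much the output moves. A minimal sketch - the "model" below is an invented stand-in, and real explainability tools (LIME- or SHAP-style methods, for instance) are far more sophisticated:

```python
# Probing a black-box model by perturbing its inputs. The scoring
# function is a made-up stand-in; the probe only sees its in/out behavior.

def black_box_score(features):
    # Invented opaque model: callers do not get to read this formula.
    return 0.8 * features["years_experience_norm"] + 0.2 * features["typo_count_norm"]

def sensitivity(model, features, delta=0.01):
    """Approximate each feature's influence on the output by nudging it
    and observing how the model's score changes."""
    base = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] += delta
        influence[name] = (model(perturbed) - base) / delta
    return influence

inf = sensitivity(black_box_score,
                  {"years_experience_norm": 0.5, "typo_count_norm": 0.1})
# The probe recovers that experience drives the score far more than typos,
# without ever looking inside the model.
```

Such a probe yields an after-the-fact explanation, which is exactly what regulators in areas like hiring and insurance increasingly demand.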
Intelligence and work
Like other technologies before it, one of the biggest concerns that artificial intelligence raises is the threat to existing professions.
When ChatGPT, DALL-E, Midjourney and other generative AI tools stormed into our lives, there were "experts" who were quick to declare that professions such as journalism, graphic design and teaching are dead, and that those who practice them will soon be heading to the unemployment office.
Reality, however, is a little different, and the history of technology also suggests this is not the case.
"Professions will change, not necessarily disappear," says Uri Eliabiev, an artificial intelligence consultant.
"The need for journalists, designers, doctors, still exists. The intention is that all those professionals will take these tools and understand how they can use them for their needs. I will give an example: if once a designer was required to make an illustration of a certain character. Today a customer will demand - do I have the same character, at every stage of life, in every season, and with 20 different facial expressions. The demands on these people (professionals) will change accordingly. And the skills that those professionals need to train, should also change accordingly," says Eliabiev.
"Will these people lose their jobs? There's a chance. But to protect yourself, you need to know how to use the tools. There's the well-worn saying - 'Artificial intelligence won't take anyone's job. But people who know how to use it will'... if you If you want to be relevant in the future job market, understand how you can use it (artificial intelligence), because otherwise your profession is really in danger," says the artificial intelligence consultant.
"Perhaps the immediate threat is to content creators and designers, but from my knowledge of the history of the technology world, technology does not make people redundant, it simply increases the appetite. As Wix entered our lives, the website building market in the world, I estimate, did not shrink. It grew, And there are other players - but there's just differentiation. For simple sites like a small online store, Wix might close the corner for you. But if you're a big store and have tens of thousands of products, and need your own special customization, then it probably won't work for you.
How to make a stop sign disappear?
The brave new world of artificial intelligence, integrated into every area of life, also brings with it a series of dangers and new types of cyberattacks we have not known before - and these are perhaps its more immediate dangers.
Beyond the ability, for example, to destroy a public figure's reputation by creating a fake video of their likeness - the so-called deepfake, now accessible to almost anyone with a computer (we recently saw this in the story of the Pope's coat, which was completely fake) - Eliabiev tells us about the ability, for example, to hide traffic signs from the artificial intelligence that deciphers them in autonomous driving:
It turns out that placing a few stickers in strategic locations on a stop sign can cause the artificial intelligence to completely ignore the sign and keep driving... the result could be devastating at the very least, if not fatal.
Another example: adding static noise that humans cannot hear, but voice assistants can, may cause the assistants to hear completely different things.
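Both the sticker attack and the inaudible noise are what researchers call adversarial examples: small, targeted changes to an input that flip a model's decision. A toy sketch of the principle, using an invented linear classifier rather than a real vision or speech model (real attacks, such as the fast gradient sign method, target deep networks the same way):

```python
# Toy adversarial perturbation against an invented linear classifier.
# The numbers are made up; only the principle matters: nudge each input
# slightly AGAINST the direction of its weight, and the decision flips.

def classify(x, w):
    # Linear decision: positive score -> "stop sign detected".
    return sum(wi * xi for wi, xi in zip(w, x)) > 0

w = [0.5, -0.25, 0.75, -0.5]    # toy model weights
x = [0.1, 0.1, 0.1, 0.1]        # input the model correctly flags (score 0.05)

eps = 0.2                       # small perturbation budget
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

sign_seen = classify(x, w)        # True  - sign detected
sign_after = classify(x_adv, w)   # False - to the model, the sign is "gone"
```

In a real image, each coordinate would be a pixel, and a perturbation this small would be nearly invisible to a human - which is exactly what makes the stop-sign stickers so dangerous.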
And to finish with the subject we opened with: what is the chance of humanity's destruction if someone decides to connect artificial intelligence as the operator of weapons of mass destruction?
This is an entire field dealing with artificial intelligence on the battlefield.
"As of today, there are no developments in the field that are completely autonomous," reassures Eliabiev.
"They still talk today about the person in the loop (meaning that the weapon activation is ultimately decided by a person and not a machine - N.L.), and even though this is technically possible, I still don't know any companies that have done it," he adds.
And to my question of whether a computer like Skynet might arise tomorrow, Eliabiev answers: "Is it theoretically possible? It is. Is it realistic? I don't see it happening in the foreseeable future, for a variety of reasons. First, the technology today is simply not good enough for that. Even with a roadmap to artificial general intelligence, we are not yet at the point where there is a real danger. We are approaching the threshold where models will be able to develop independent capabilities; it is a critical point and the threshold can be crossed, but by today's existing and common practice we are still far from there," Eliabiev concludes.