The Limited Times


Of artificial ethics and morality

2023-05-18T09:51:26.817Z

Highlights: Gabriel Zurdo is CEO of BTR Consulting and a specialist in technological and business risk. He argues that the real challenge lies not in the intelligence that algorithms and machines could achieve, but in the ethics with which we humans apply and use this ability to think artificially. The legal vacuum is enormous, regulations will surely arrive late, and the underlying discussion is ethics in the use of Artificial Intelligence. The Italian government has decided to block ChatGPT in the wake of a possible data breach.

ChatGPT and its unusual popularity keep exposing its weak side: the absence of controls and definitions regarding the ethical management of these resources by an innovation industry that never stops generating astonishment and controversy.

At the same time, this tool creates a wealth of opportunities for the cybercrime industry to take much bigger and faster steps than its opponents. Yet even this could come to seem trivial, almost banal, next to the problem we must necessarily and unavoidably address from now on: artificial ethics and morality.

Artificial Intelligence (AI) is in the spotlight and has just been discovered by those who, for years, have served as its data-generation instrument for algorithms while barely knowing anything about it.

The absurdity of the situation is that, these days, there are already dozens of misleading imitations seeking to catch the curious. Beyond the plainest and most mundane cases, there are already several fake versions that harm users as enthusiastic as they are unprepared, deploying endless campaigns that spread malware and phishing in an attempt to capture credit card numbers, user IDs, passwords, and so on. These are new variants of fake ChatGPT browser extensions, present since the beginning of the year alongside several other impersonations of the brand.

OpenAI, the company behind the tool, previously announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to view other users' chat history information.

The firm appears to have no legal basis for massively collecting and processing personal data, a fact that users worldwide seem to ignore as they compulsively hand their data over. That data is used to train algorithms, with the risk that ChatGPT could generate and store false information about users.

The reality is that these kinds of resources could combine two of the most sensitive risks and threats in cybersecurity: AI and an extraordinary capacity to generate and disseminate disinformation.

But there are even more reasons to worry. This technology could help an ordinary person design and execute a cyberattack without any prior knowledge, and at the same time expose our privacy, as suggested by the Italian government, which has decided to block ChatGPT in the wake of a possible data breach.

Schools and universities around the world have followed the same path, restricting the application on their local networks for fear that a new and unexpected problem will become recurrent and normalized: student plagiarism.

Added to this, there is unfortunately no way to verify users' age, which exposes minors to answers entirely inappropriate for their age and level of knowledge.

This happens on many other platforms, but in this case the possibility of affecting people's self-determination seems radical: the legal vacuum is enormous, regulations will surely arrive late, and the underlying discussion is ethics in the application and use of this ability to think artificially.

The real challenge lies not in the intelligence that algorithms and machines could achieve, but in our own ethical development as humans.

Technology will continue to evolve until it reaches a point beyond people's capabilities. We could end up responsible for creating an AI that does not share "human values", which would make it clear that we have not been smart enough to control our own creations.

Gabriel Zurdo is CEO of BTR Consulting, a specialist in technological and business risk.

Source: clarin
