The Limited Times


How far will AI weapons be tolerated? Guidelines stop short of banning the weapons themselves

2019-08-29T04:52:38.179Z


Guidelines were adopted at an international conference held in Switzerland this month stating that weapons equipped with artificial intelligence (AI) must not be allowed to select targets and attack at their own discretion. Although not legally binding, they amount to a de facto international norm pointing to the future of such weapons. The discussions, however, reflected a tangle of national interests, and items creating gray zones for new technologies such as "explainable AI" were slipped into the guidelines at the last minute.

It was approaching 3 a.m. local time on the 22nd in Geneva, Switzerland. In a small conference room at the United Nations' European headquarters, its air conditioning already switched off, a report summarizing the meeting was unanimously adopted. The debate over "lethal autonomous weapons systems" (LAWS), which has continued since 2014, had at last been compiled into an outcome document.

The meeting was a gathering of government experts under the Convention on Certain Conventional Weapons (CCW), ratified by 125 countries and territories to regulate inhumane weapons, and was attended by more than 90 government delegations as well as NGOs. LAWS do not yet exist, but development is said to be under way in the United States, South Korea, Israel and elsewhere, and human rights organizations have called for banning LAWS outright.

Eleven guidelines were adopted this time. For future LAWS built on new AI technologies such as machine learning, they put the wartime protection of civilians first and confirm compliance with international law, centered on international humanitarian law. They emphasize that even if AI-equipped weapons become technically capable of selecting targets and deciding to attack, humans remain responsible for their use. They also state the need for human involvement at every stage, from development through deployment and use.

At the same time, the guidelines were written so as not to impede the development or peaceful use of "intelligent autonomous technology" such as AI itself. This is because AI software differs little between civilian and military applications. Indeed, some countries, including Russia, have flatly opposed imposing strong restrictions on development while LAWS, a weapon of the future, has yet to appear.

The guidelines are expected to be endorsed at the CCW meeting of states parties scheduled for November, giving them greater weight, but it remains unclear whether the parties will aim for a treaty.

Bonnie Docherty of Harvard Law School, who advises human rights organizations, told the Asahi Shimbun after the adoption: "The language has become so broad that it imposes no real restraint."

By contrast, Professor Heigo Sato of Takushoku University (security studies), who joined the Japanese government delegation, said: "Each country that adopted the guidelines is now expected to reflect them in its own policies on its own responsibility. It does not mean that new institutions or treaties will be created."

Explainable AI: the US focus on development

One item that was not in the original draft was added to the guidelines at the initiative of the United States just before adoption. The item on "human-machine interaction" holds that if technology enabling sufficient communication between humans and AI is established, lawful LAWS consistent with international humanitarian law could become possible.

Current AI resembles an extremely complex mathematical formula built on mechanisms that mimic the brain's neural circuits. The basis for an AI's answer is not evident to humans; its "thinking process" is a black box. If the answer is a foreign-language translation, a mistake does little harm, but a weapon that selects whom to kill on grounds no one can explain cannot be permitted.

The US Department of Defense likewise stipulates in internal regulations that humans must be involved in decisions to attack. After the September 11 terrorist attacks, however, the United States entered an era of deploying large numbers of drones for missile strikes and robots for clearing improvised explosive devices in Afghanistan and elsewhere. Japan has said it will not develop fully autonomous weapons, but it expects AI to be "significant for reducing human error and saving labor."

The idea, then, is that if new technology lets humans see into the AI's thinking process, and the AI can be said to be in genuine dialogue with humans, then the AI's judgment could be treated as equivalent to a human's.

The US Department of Defense has recently been pouring effort into "explainable AI" (XAI). At a symposium held at Stanford University in May, a Defense Department AI technical officer, with LAWS in mind, said that "XAI is one of the areas the Department of Defense is focusing on most."

Laura Nolan, a former Google programmer who attended the symposium, countered: "XAI development is still in its early stages. It does not ease the fear of such systems being hijacked." The standoff between human rights groups and the countries that want to press ahead with autonomous weapons development looks set to continue.

Still, Associate Professor Yasuhito Fukui (international law) of Hiroshima City University, who has analyzed the LAWS debate, assessed the guidelines as "a kind of soft law." Soft law is not legally binding the way a treaty is, but it carries a normative force: states that ignore it can suffer significant disadvantages. "For regulating LAWS, whose final form no one can yet predict, I think this was the best available choice. These guidelines are a de facto international standard," Fukui says. (Ichiro Matsuo)

Source: asahi
