New Technologies Demand New Laws And Ethics – Analysis

By Vijay Sakhuja

The Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) met in Geneva earlier this month and emphasised the critical need to ban “fully autonomous weapon systems,” urging states to move towards negotiating a legally binding instrument on the issue. The International Committee for Robot Arms Control (ICRAC) has placed questions of ethics and public conscience at the forefront of the debate and reiterated the “Principle of Non-Delegation of the Authority to Kill” to non-human mechanisms.

Fully autonomous weapon systems are a product of the Fourth Industrial Revolution (4IR), centred on disruptive technologies such as Artificial Intelligence (AI), deep learning, robotics, and drones. These have opened the floodgates to new opportunities in nearly all facets of human activity, including warfare. Autonomous and intelligent machines such as robots have brought enormous advantages, freeing human operators from routine and mechanical tasks. These are exciting technological advances; however, they also carry the potential for frightening outcomes, amid fears that humans are being relegated to a secondary role or even sidelined in decision-making. Further, delegating complex tasks, including decision-making, to devices, sensors, and algorithms could engender severe complications.

Warfare is not immune to the use of unmanned systems and devices, which have found relevance in tasks such as search and rescue, bomb disposal, and firefighting, making both warfare and emergency response more efficient and accurate, with less collateral damage. Some militaries may be exploring the weaponisation of autonomous technologies in the belief that soldiers and civilians will be at less risk, and that soldiers will be freed from the moral consequences of killing, whether in offence or in self-defence.

A number of NGOs have been agitating against the use of LAWS and have called for an international ban on ‘killer robots’, including “a treaty for emerging weapons.” The Campaign to Stop Killer Robots, a coalition led by Professor Noel Sharkey of the University of Sheffield, claims the support of about 64 NGOs. Sharkey has expressed concern over the growing competition among companies and militaries to build and acquire ever more autonomous and smarter weapons, labelling the trend “a new arms race.” The campaign has also noted that “inanimate machines cannot understand or respect the value of life” and that “machines should never be permitted to take human life on the battlefield or in policing, border control, or any circumstances.”

Similar sentiments have been expressed elsewhere: some Google staffers have written a letter urging the company to “suspend work on a US military project that involved drones and artificial intelligence capability.” Likewise, scientists and academics have threatened to boycott the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon over projects developing AI for military use. Elon Musk heads a group of 116 robotics and AI specialists from 26 countries urging the UN to ban the development and use of ‘killer robots’.

Although no country has openly declared that it is pursuing LAWS, several states have made their views on the issue public. China’s submission to the GGE, for instance, notes that although LAWS have not been clearly defined, the issue merits the attention of the international community; it supports the use of such systems in operations involving nuclear, biological, and chemical weapons environments. The submission also observes that LAWS lack the capability to distinguish between combatants and civilians, cannot determine the proportionality of the use of force, and make accountability difficult to establish, which could lead to the killing or maiming of non-combatants. It is therefore necessary, in China’s view, to formulate general legal norms for LAWS.

The UK has made public its policy not to “develop and use fully autonomous weapons, or weapons that can make decisions independent from human oversight.” Mark Lancaster, Minister of State for the Armed Forces, has approached the issue from a doctrinal perspective, stating that it is “absolutely right that our weapons are operated by real people capable of making incredibly important decisions, and we are guaranteeing that vital oversight.”

Russia’s submission to the GGE, unlike those of China and the UK, argues that “preventive, prohibitive or restrictive measures against LAWS” are complex and that, given the limited human understanding of these technologies, a ban may not serve its purpose. Because it is difficult to distinguish between civilian and military developments in autonomous systems, a ban on LAWS could also preclude peaceful uses of these technologies.

While that may be so, legal and ethical issues are gaining greater salience, particularly in the context of warfare, where surrendering unbridled control to machines to decide whom to kill, injure, or destroy could challenge international law, the ethics of warfighting, and human values.

IPCS

IPCS (Institute for Peace and Conflict Studies) conducts independent research on conventional and non-conventional security issues in the region and shares its findings with policy makers and the public. It provides a forum for discussion with the strategic community on strategic issues and strives to explore alternatives. Moreover, it works towards building capacity among young scholars for greater refinement of their analyses of South Asian security.
