The Weaponization Of AI: A Frontier Risk For Humanity – OpEd

By Noureen Akhtar

Artificial intelligence (AI) is rapidly becoming the center of a global power play as nations compete to develop and deploy the automated weapons systems that AI makes possible. These weapons, known as lethal autonomous weapon systems (LAWS), are machines that can identify, engage, and destroy targets without human control. They pose a frontier risk: a low-likelihood, high-impact threat that could arise as humans explore new realms of technology.

LAWS are a type of autonomous military system that can independently search for and engage targets based on programmed constraints and descriptions. They may operate in the air, on land, or at sea. Examples currently in use include the Patriot missile system, the AEGIS naval weapons system, the Phalanx close-in weapons system, and the Israeli Harpy.

The use of LAWS is controversial and has been debated by many countries around the world. The United Nations has been discussing the issue since 2013 and has called for a ban on LAWS. The issue also concerns human rights organizations such as Amnesty International and Human Rights Watch, which have called for a ban on LAWS because of the potential for these machines to be used in ways that violate international humanitarian law (IHL) and human rights law (HRL).

AI weaponization means using AI to deliberately inflict harm on human beings by integrating it into the systems and tools of national militaries. It involves using AI algorithms to enhance the capabilities of existing weapons, such as drones, missiles, tanks, and submarines, or to create new types of weapons, such as swarms of micro-robots, cyberattacks, and bio-weapons. AI-based decision-making also enables more efficient use of conventional weapons in the air, on land, at sea, and in space. The weaponization of AI, particularly involving nuclear materials, toxins, and chemical agents, is documented and has also been considered in the context of climate manipulation and space usage.

Why is AI weaponization a frontier risk? Because it has the potential to cause catastrophic consequences for humanity and the planet. AI weapons could lower the threshold for initiating and escalating conflicts, since they reduce the human and financial costs of warfare. They could also increase the speed and complexity of warfare, making it harder for humans to intervene or de-escalate. Acting autonomously, without human oversight or intervention, these weapons make it difficult to assign responsibility and accountability for their actions. They could malfunction or be hacked by adversaries, leading to unintended or malicious outcomes, and they raise serious ethical and moral questions about the value of human life, dignity, and rights. They also challenge the existing legal and humanitarian frameworks that govern the use of force and protect civilians. Finally, AI weapons could pose an existential threat to humanity if they develop superintelligence, the ability to surpass human intelligence and capabilities; they could then turn against their creators or pursue goals incompatible with human values and interests.

AI weaponization is a complex and urgent challenge that requires global cooperation and regulation. The international community should establish norms and standards for the development and use of AI weapons, such as banning certain types of weapons or requiring human supervision and control. These norms and standards should be based on existing humanitarian law and ethical principles. There must also be mechanisms to verify and enforce compliance, such as inspections, sanctions, and dispute resolution, and these mechanisms should be transparent, impartial, and effective. The stakeholders involved in the development and deployment of AI weapons, including governments, militaries, industry, academia, and civil society, should adopt responsible innovation and governance practices, such as conducting risk assessments, ensuring accountability and oversight, engaging in dialogue and collaboration, and fostering a culture of ethics and trust.

AI weaponization is a frontier risk that must be regulated before it becomes a reality that can no longer be ignored. It is a challenge that requires collective action from all actors across the human ecosystem in cyberspace, geospace, and space (CGS). By working together, we can ensure that AI is used for good, not evil.

*The author is a PhD scholar (SPIR-QAU) and has worked on various public policy issues as a Policy Consultant in the National Security Division (NSD), Prime Minister's Office (PMO). Currently, she is working at the Islamabad Policy Research Institute (IPRI) as a Policy Consultant. Her work has been published in local and international publications. She can be reached at [email protected]. Twitter: @NoureenAkhtar16

