The Dark Side Of AI: Israel’s Controversial Use Of Data And Algorithms In Gaza – OpEd

By Altaf Moti
AI is a broad term that refers to the ability of machines or software to perform tasks that normally require human intelligence, such as perception, reasoning, learning, decision-making, and problem-solving. AI has many applications and implications for modern warfare, both on and off the battlefield. Here, we will explore how AI helps the Israeli army select bombing targets in Gaza, and the implications and challenges of using AI in warfare.

The Gospel: Israel’s AI-based target creation system

According to reports, Israel has been using an AI-based system called “the Gospel” to generate and prioritize targets for its airstrikes in Gaza. The Gospel is a platform that collects and analyzes data from various sources, such as satellite imagery, drone footage, human intelligence, and social media, to identify potential targets linked to Hamas.

The Gospel then assigns each target a score based on its importance, urgency, and collateral damage risk, and sends the list to IDF commanders for approval and execution. The IDF claims that the Gospel has significantly increased the speed and accuracy of its target creation process.
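The reported pipeline above (score each candidate on several criteria, then rank and forward a prioritized list) is, in the abstract, a weighted-scoring ranker. The sketch below is purely illustrative: the field names, weights, and formula are invented for this example and do not describe the Gospel's actual (non-public) method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    importance: float      # 0..1, operational value (hypothetical scale)
    urgency: float         # 0..1, time sensitivity (hypothetical scale)
    collateral_risk: float # 0..1, estimated risk to civilians (hypothetical)

def score(c: Candidate) -> float:
    # Hypothetical weighted score: importance and urgency raise it,
    # collateral risk lowers it. The weights are invented for illustration.
    return 0.5 * c.importance + 0.3 * c.urgency - 0.2 * c.collateral_risk

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # Highest score first, matching the article's "prioritized list".
    return sorted(candidates, key=score, reverse=True)
```

The ethical objections discussed below apply directly to a design like this: the weights encode value judgments (how much civilian risk "offsets" operational value), yet they are buried inside a function no outside reviewer sees.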

The implications and challenges of using AI in warfare

However, there are also concerns and criticisms about the use of AI in Israel’s bombing campaign. Some of the issues include:

– The lack of transparency and accountability of the Gospel and its algorithms. It is unclear how the Gospel collects, verifies, and processes data, or how it determines the score and ranking of targets. There is also no independent oversight or review of the Gospel’s decisions and actions.

– The possibility of errors and biases in the data and the algorithms. The data sources used by the Gospel may be incomplete, inaccurate, outdated, or manipulated, and the algorithms may have inherent or learned biases that affect their judgments. For example, the Gospel may rely on social media posts that are false or misleading, or it may favor certain types of targets over others based on the data it receives.

– The ethical and legal implications of delegating human decisions to machines. The use of AI in warfare raises questions about the moral and legal responsibility of human operators and commanders, and about respect for the principles of international humanitarian law, such as distinction, proportionality, and precaution. For example, who is accountable if the Gospel makes a mistake or causes excessive harm? How can the IDF ensure that the Gospel respects the rights and dignity of civilians in Gaza? How can the IDF verify that the targets are legitimate and proportionate to the threat?

The use of AI in warfare also affects international relations and diplomacy, as it puts new topics on the international agenda, such as the ethical, legal, and social implications of AI, the development of standards and norms for AI, and the governance and regulation of AI. These topics require multilateral cooperation and dialogue among different stakeholders, such as governments, international organizations, civil society, and the private sector.

The use of AI in warfare also challenges geostrategic relations, as AI can enhance the military, economic, and societal power of countries with advanced AI capabilities, and create asymmetries and inequalities among countries at different levels of AI development and adoption. AI can also create new opportunities and threats for national security and international stability, such as cyberattacks, autonomous weapons, and disinformation campaigns.

Beyond the battlefield, AI also serves as a tool for diplomats and negotiators: it can help collect and analyze data, provide decision support, draft and translate documents, and facilitate communication and collaboration. AI can also help monitor and implement international agreements and treaties, such as the Paris Agreement on climate change or the Iran nuclear deal.

The use of AI in warfare also faces some opposition and resistance, both from within and outside Israel. Some of the sources and arguments of opposition include:

– Human rights organizations and activists, such as Amnesty International, Human Rights Watch, and B’Tselem, who have documented and denounced violations of human rights and international law committed by the IDF in Gaza, and have called for an investigation and accountability for the use of AI in warfare.

– International bodies and governments, such as the United Nations, the European Union, and some Arab and Muslim countries, who have expressed concern about and condemnation of the IDF’s attacks on Gaza, and have urged a ceasefire and a peaceful resolution of the conflict.

– Some Israeli citizens and groups, such as conscientious objectors, peace activists, and journalists, who have opposed and protested against the IDF’s military operations in Gaza, and have questioned the morality and legality of the use of AI in warfare.

The future of AI in modern warfare

The future of AI in modern warfare is a topic that has been widely discussed and debated by experts, scholars, and policymakers. As noted above, AI has many applications and implications for modern warfare, both on and off the battlefield. Some of the possible roles and impacts of AI in future warfare are:

– AI can enhance the command and control (C2) of military forces by collecting and analyzing data from various sources such as sensors, satellites, drones, and human intelligence, and providing situational awareness, threat assessment, and decision support to commanders and operators.

– AI can enable the development and deployment of lethal autonomous systems (LAS), such as robots, drones, missiles, and guns, that can operate independently or in coordination with human operators, and that can select and engage targets without human intervention.

– AI can improve the performance and accuracy of small arms and light weapons, such as rifles, pistols, and grenades, by incorporating features such as smart scopes, biometric locks, and self-guided projectiles that can increase hit probability and reduce collateral damage.

– AI can facilitate the use of three-dimensional (3D) printing, a technology that creates physical objects from digital models, by enabling the design and production of customized and complex military equipment, such as weapons, vehicles, and drones, that can be rapidly deployed and adapted to different scenarios.

– AI can also be used for data analysis and cybersecurity, applying machine learning and natural language processing to process large amounts of information, such as intelligence reports, social media posts, and enemy communications, and to detect and counter cyber threats such as malware, hacking, and spoofing.

– AI can also pose challenges and risks for modern warfare, such as ethical, legal, and moral dilemmas; cyber and physical vulnerabilities; and strategic and operational uncertainties, which require careful consideration and regulation by military and civilian authorities.

The use of AI in warfare is a complex and sensitive issue involving many factors and perspectives. AI is a powerful and transformative technology that can significantly affect the conduct and outcome of war, and it requires responsible and accountable use by all actors involved. Israel’s use of AI to select bombing targets in Gaza raises many important questions and debates about the future of AI in war.

Altaf Moti

Altaf Moti writes on diverse topics such as politics, economics, and society.
