The Future Of War: Less Fantastic, More Practical – Analysis

Despite the hype, some of the biggest impacts from the use of AI and applied machine learning in the defence context are mundane.

By Lindsey R. Sheppard

Artificial intelligence (AI) is a defining element in a societal transition from the Information Age to one dominated by data, information, and cyber-physical systems. As states now compete in “the gray zone” or through hybrid measures — tactics intended to remain below the threshold of armed conflict [i] — leveraging the massive amounts of information and data at hand is of strategic importance. [ii] Kinetic effect will no doubt remain crucial in armed conflict. However, securing advantage in a world of artificial intelligence, data analytics, and cloud computing requires mastery of data and information awareness — that is, of the non-kinetic and digital.

The AI toolbox

AI is an umbrella term that often includes various disciplines of computer science, [iii] learning strategies, applications, and use cases. AI has experienced a surge of excitement, research, and application in the past decade, driven by an increased availability of data and computing power, advances in machine learning (as distinguished from rules-based systems [iv]), and electronics miniaturisation. It has gone from largely residing in the realm of academia and research to widespread application across the public and private sectors. [v]

Despite the hype, some of the biggest impacts from the use of AI and applied machine learning in the defence context are mundane. Areas like logistics, predictive maintenance, and sustainment are ripe for computational innovation: AI can assist humans with repetitive tasks and enable the processing of large volumes of data. The operational reality in the near term may simply be optimised resource allocation and management that allows actors to operate efficiently inside their opponent’s decision loop.
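To make concrete how unglamorous this can look, consider a minimal predictive-maintenance sketch in Python. Everything here (the sensor features, the synthetic failure rule, and the thresholds) is invented for illustration and reflects no fielded system:

```python
# A minimal, hypothetical sketch of "mundane" defence AI: predicting component
# failure from routine sensor logs. All features, thresholds, and data are
# synthetic and invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2_000

# Synthetic maintenance records: engine hours, vibration level, oil temperature.
hours = rng.uniform(0, 5_000, n)
vibration = rng.normal(1.0, 0.3, n) + hours / 10_000
oil_temp = rng.normal(90, 10, n)

# Invented ground truth: wear and heat drive failures, plus noise.
risk = 0.0004 * hours + 0.5 * vibration + 0.02 * (oil_temp - 90)
failed = (risk + rng.normal(0, 0.3, n) > 2.2).astype(int)

X = np.column_stack([hours, vibration, oil_temp])
X_train, X_test, y_train, y_test = train_test_split(
    X, failed, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The output is deliberately unexciting: a ranked list of components to
# inspect first, not a battlefield decision.
print(classification_report(y_test, model.predict(X_test)))
```

The payoff of a system like this is measured in maintenance hours saved and equipment availability, not in anything resembling the popular image of AI at war.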

While the pursuit of AI for national defence and military systems raises concerns about the role of technology in warfare, many of these concerns are not inherently new. The international law of war, as well as nation-specific law [vi] on the use of force and military action, applies whether or not AI is incorporated into systems. Principles of military necessity, distinction, and proportionality, along with existing frameworks, structures, and institutions, remain relevant and regulate the development and deployment of any advanced technology for armed conflict. [vii] Military legal advisors, ethicists, and policymakers, to name a few, continue to work to identify potential gaps in existing law and guidance and have yet to reach consensus that such gaps exist. What relevant stakeholders do agree on is the necessity of system attributes such as robustness, safety, transparency, and traceability.

Further, an AI system exists within an AI ecosystem [viii] that includes not only the algorithms but also the data from which the algorithms “learn,” the computing infrastructure, the governance structures, and the many humans who design, interact with, deploy, and are impacted by the technology. The development of the AI ecosystem will be critical to the success of AI systems in future conflicts. Many nations face an underdeveloped AI ecosystem and are working rapidly to invest the requisite time, attention, and financial support for growth. [ix]

For learning-based solutions, we must also address the necessity and availability of data and computing resources. Data quality and security are hurdles to AI application in the relatively data-scarce environment of national security. While defence-related data does exist, it is often unstructured and poorly suited to statistical analysis, let alone machine learning. In most cases, foundational computing infrastructure and networking require significant upgrades, even as nations simultaneously address the necessity of “compute.” [x]
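As a small illustration of the “unstructured data” hurdle, the sketch below parses a hypothetical free-text maintenance log into structured fields; the log format and field names are invented, and real defence data would be far messier:

```python
# Hypothetical sketch: free-text maintenance logs must be parsed into
# structured records before statistical analysis or machine learning is
# possible. The log format and fields are invented for illustration.
import re

raw_log = "2020-01-04 UNIT A-12 vibration high after 3512 hrs; oil temp 97C"

pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) UNIT (?P<unit>\S+) .*?"
    r"(?P<hours>\d+) hrs; oil temp (?P<temp>\d+)C"
)

match = pattern.search(raw_log)
if match:
    record = {
        "date": match.group("date"),
        "unit": match.group("unit"),
        "engine_hours": int(match.group("hours")),
        "oil_temp_c": int(match.group("temp")),
    }
    print(record)
# In practice much data resists even this: formats vary across units and
# systems, fields go missing, and labels for supervised learning rarely exist.
```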

In effect, the one aspect of AI that is genuinely an arms race is the competition for talent. [xi] Developing an educated and skilled workforce is critical to the success of highly capable machines. This includes professional education, university programmes in science, technology, engineering, and math (STEM), and the incorporation of computer science into early education for children. Not only must nations contend with the significant time required to grow capable talent; they will also continue to compete for existing talent. For example, both Russia and China recognise the imperative to retain or bring home technical talent and expertise, [xii] in addition to pursuing STEM education initiatives. [xiii]

Mitigating risk and managing expectations

Even with all the puzzle pieces in place, the capabilities of AI and machine learning remain relatively limited and are expected to stay so for the foreseeable future. [xiv] AI is always purpose-built, problem-specific, and context-dependent. It operates effectively on discrete tasks over well-bounded problems. Further, machine learning requires large volumes of labelled data that are time-intensive to create and maintain, and the broad access it demands challenges the defence sector’s traditional approach of securing sensitive data through silos and restricted access.

A misunderstanding of the limitations of AI, in part through mismanaged expectations about the promise of intelligent machines, exacerbates risk and increases the potential for accidents. AI introduces new vulnerabilities and failure modes into systems. [xv] While system failure in warfare is not unique to AI, failure in machine learning may look different, possibly in unrecognisable, new, and unexpected ways. Further, it may be difficult to verify that a system is behaving as intended. More challenging still for applied machine learning is classifying an unwanted behaviour and ensuring a system does not exhibit that behaviour again. Deploying machine learning in the context of warfare thus requires an assessment of the consequences of failure. Even in the most well-known defence applications, like drone video analysis, [xvi] the technical maturity and capability of AI currently make complete reliance on machines an unacceptable risk.
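A toy illustration of how such failure can look, built entirely on synthetic data: the classifier below performs well in the context it was trained for, then degrades without raising any error when the operating context shifts (here, an invented sensor offset standing in for real-world distribution shift).

```python
# Toy sketch of a machine-learning failure mode: a model that works on data
# resembling its training set but degrades silently under a context shift.
# All data is synthetic; the "sensor offset" is an invented stand-in for
# real-world distribution shift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n, sensor_offset):
    # Two-feature synthetic readings; the true class depends on the
    # un-shifted signal, mimicking sensors that drift after deployment.
    X = rng.normal(0, 1, (n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X + sensor_offset, y

X_train, y_train = make_data(5_000, sensor_offset=0.0)
model = LogisticRegression().fit(X_train, y_train)

# In the familiar context the system looks trustworthy.
X_test, y_test = make_data(1_000, sensor_offset=0.0)
print("familiar context:", accuracy_score(y_test, model.predict(X_test)))

# In the shifted context no error is raised; accuracy simply collapses
# toward chance, and nothing in the output flags the problem.
X_shift, y_shift = make_data(1_000, sensor_offset=1.5)
print("shifted context: ", accuracy_score(y_shift, model.predict(X_shift)))
```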

The decisions men and women face in combat are uniquely human. While the applicability of AI to security challenges holds promise in areas with repetitive, well-defined tasking, we should resist the temptation to blindly apply AI to our hardest problems of how and when humans wage war. AI will not make the difficult choices inherent in armed conflict any less difficult. Nor is the use of AI, machine learning, and analytic support tools a mechanism by which humans can abdicate responsibility for decisions.

Two conclusions become clear. One, AI is likely to be one tool of many in the digital toolbox where it is applicable. Given its technical maturity, learning-based systems may not be the most appropriate solution for many problems. At the same time, the defence enterprise is full of areas ripe for the application of AI, and these limiting considerations should not be lost in discussions of lethal force.

Two, investing in people may be the best safeguard against missteps and misuse. From senior leaders to developers to end users, people must understand the capabilities as well as the limitations of these systems in order to guide the development and deployment of AI and machine learning capability. In our increasingly digital future, featuring increasingly digitised warfare, we cannot afford to under- or over-estimate the applicability and potential of AI.


[i] Melissa Dalton, Kathleen H. Hicks, Megan Donahoe, Lindsey Sheppard, Alice Hunt Friend, Michael Matlaga, Joseph Federici, Matthew Conklin, Joseph Kiernan, By Other Means Part II: Adapting to Compete in the Gray Zone (Washington, DC: CSIS, 2019).

[ii] Lindsey Sheppard and Matthew Conklin, “Warning for the Gray Zone”, By Other Means Part II: Adapting to Compete in the Gray Zone, August 13, 2019.

[iii] Machine learning, natural language processing, knowledge representation, automated reasoning, computer vision, and robotics, as identified in Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Harlow, UK: Pearson Education Limited, 2014).

[iv] Also known as “expert systems,” rules-based systems are those in which functionality is implemented through hard-coded rules or specified relationships programmed by humans. At a fundamental level, rules consist of an “IF this condition” and a “THEN that output or action.” In contrast to machine learning, a rules-based system does not learn or improve over time with new contexts unless new functionality is programmed by a human.
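A toy sketch of the IF/THEN pattern described above; the rule and its threshold values are invented for illustration:

```python
# Toy rules-based ("expert system") check following the IF/THEN pattern.
# The rule and thresholds are hard-coded by a human and never change with
# new data; the values are invented for illustration.
def flag_for_inspection(engine_hours: float, vibration: float) -> bool:
    # IF this condition ...
    if engine_hours > 3_000 and vibration > 1.5:
        # ... THEN that output or action.
        return True
    return False

print(flag_for_inspection(engine_hours=3_500, vibration=1.8))  # True
print(flag_for_inspection(engine_hours=1_000, vibration=1.8))  # False
```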

[v] Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Malhotra, Notes from the AI Frontier: Insights from Hundreds of Use Cases, McKinsey Global Institute, April 2018.

[vi] An example of “nation-specific law” is the US Department of Defense’s Law of War Manual.

[vii] For the United States perspective on the significance of Law of War to Artificial Intelligence, see: Defense Innovation Board, AI Principles: Recommendation on the Ethical Use of Artificial Intelligence by the Department of Defense, Supporting Document, October 31, 2019, pages 22-24, 53-58.

[viii] Lindsey Sheppard, Robert Karlen, Andrew Hunter, and Leonard Balieiro, Artificial Intelligence and National Security: The Importance of the AI Ecosystem (Washington, DC: CSIS, 2018).

[ix] Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, “The AI Index 2019 Annual Report”, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, December 2019.

[x] Meredith Whittaker (@mer__edith), “Only ~5 companies in the West have the resources needed to develop AI. AI startups and academic AI research labs license (or are gifted) computational resources from these Big Tech companies”, Twitter, November 29, 2019.

[xi] Elsa Kania, “China’s AI talent ‘arms race’”, The Strategist, ASPI, April 23, 2018.

[xii] Samuel Bendett, “Russia’s National AI Center Is Taking Shape”, Defense One, September 27, 2019; Don Weinland, “China in push to lure overseas tech talent back home”, Financial Times, February 11, 2018.

[xiii] Dawn Liu, “China ramps up tech education in bid to become artificial intelligence leader”, NBC News, January 4, 2020.

[xiv] Rodney Brooks, “My Dated Predictions”, Rodney Brooks: Robots, AI and other Stuff (blog), January 1, 2018.

[xv] Ram Shankar Siva Kumar, David O’Brien, Jeffrey Snover, Kendra Albert, Salome Viljoen, “Failure Modes in Machine Learning”, Microsoft, November 10, 2019.

[xvi] Colin Clark, “Air Combat Commander Doesn’t Trust Project Maven’s Artificial Intelligence – Yet”, Breaking Defense, August 21, 2019.

