The Stanley Foundation
Courier


The Critical Human Element in the Machine Age of Warfare
Originally published in the Bulletin of the Atomic Scientists

In 1983, Stanislav Petrov helped to prevent the accidental outbreak of nuclear war by recognizing that a false alarm in Soviet early warning systems was not a real report of an imminent US attack. In retrospect, it was a remarkable call made under enormous stress, based on a guess and gut instinct. If another officer had been in his place that night—an officer who simply trusted the early warning system—there could have been a very different outcome: worldwide thermonuclear war.

As major militaries progress toward the introduction of artificial intelligence (AI) into intelligence, surveillance, and reconnaissance, and even command systems, Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not just likely but nearly inevitable. Certainly, the use of big data analytics and machine learning can resolve key problems for militaries that are struggling to process a flood of text and numerical data, video, and imagery. The introduction of algorithms to process data at speed and scale could enable a critical advantage in intelligence and command decision-making. Consequently, the US military is seeking to accelerate its integration of big data and machine learning through Project Maven, and the Chinese military is similarly pursuing research and development that leverage these technologies to enable automated data and information fusion, enhance intelligence analysis, and support command decision-making. Russian President Vladimir Putin, meanwhile, has suggested, “Artificial intelligence is the future, not only for Russia, but for all humankind.... Whoever becomes the leader in this sphere will become the ruler of the world.”

To date, such military applications of AI have provoked less debate and concern about current capabilities than fears of “killer robots” that do not yet exist. But even though Terminators aren’t in the immediate future, the trend toward greater reliance upon AI systems could nonetheless result in risks of miscalculation caused by technical error. Although Petrov’s case illustrates the issue in extremis, it also offers a general lesson about the importance of human decision-making in the machine age of warfare.

It is clear that merely having a human notionally “in the loop” is not enough, since the introduction of greater degrees of automation tends to degrade human decision-making. In Petrov’s situation, another officer may very well have trusted the early warning system and reported an impending US nuclear strike up the chain of command. Only Petrov’s willingness to question the system—based on his understanding that an actual US strike would not involve just a few missiles, but a massive fusillade—averted catastrophe that day.

Today, however, the human in question might be considerably less willing to question the machine. Humans tend to place greater reliance on computer-generated or automated recommendations from intelligent decision-support systems, and that tendency can compromise decision-making. This dynamic, known as automation bias (overreliance on automation that results in complacency), may become more pervasive as people grow accustomed to relying on algorithmic judgment in day-to-day life.

In some cases, the introduction of algorithms could reveal and mitigate human cognitive biases. However, the risks of algorithmic bias have become increasingly apparent. In a societal context, “biased” algorithms have resulted in discrimination; in military applications, the effects could be lethal. In this regard, the use of autonomous weapons necessarily carries operational risk. Even greater degrees of automation—such as the introduction of machine learning in systems not directly involved in decisions of lethal force (e.g., early warning and intelligence)—could contribute to a range of risks.

Friendly Fire—and Worse

As multiple militaries have begun to use automation and AI to enhance their capabilities on the battlefield, several deadly mistakes have shown the risks of automated and semi-autonomous systems, even when human operators are notionally in the loop. In 1988, the USS Vincennes shot down an Iranian passenger jet in the Persian Gulf after the ship’s Aegis radar and fire-control system incorrectly identified the civilian airliner as a military fighter jet. The crew responsible for decision-making failed to recognize this inaccuracy—in part because of the complexities of the user interface—and trusted the Aegis targeting system too much to challenge its determination. Similarly, in 2003, the US Army’s Patriot air defense system, which is highly automated and complex, was involved in two incidents of fratricide. In these instances, naïve trust in the system and inadequate preparation of its operators resulted in fatal, unintended engagements.

As the US, Chinese, and other militaries seek to leverage AI to support applications that include early warning, automatic target recognition, intelligence analysis, and command decision-making, it is critical that they learn from such prior errors, close calls, and tragedies. In Petrov’s successful intervention, his intuition and willingness to question the system averted a nuclear war. In the case of the USS Vincennes and the Patriot system, human operators placed too much trust in and relied too heavily on complex, automated systems. It is clear that the mitigation of errors associated with highly automated and autonomous systems requires a greater focus on this human dimension.

Nuclear missiles are displayed September 3, 2015, during a parade in Beijing. As the US, Chinese, and other militaries seek to leverage artificial intelligence to support applications that include early warning, automatic target recognition, intelligence analysis, and command decision-making, it is critical that they learn from earlier errors, close calls, and tragedies. (Xinhua/Pan Xu via Getty Images)


There continues, however, to be a lack of clarity about human control of weapons that incorporate AI. Former Secretary of Defense Ash Carter has said that the US military will never pursue “true autonomy,” meaning humans will always be in charge of lethal force decisions and have mission-level oversight. Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, used the phrase “Terminator Conundrum” to describe dilemmas associated with autonomous weapons and has reiterated his support for keeping humans in the loop because he doesn’t “think it’s reasonable to put robots in charge of whether we take a human life.” To date, however, the US military has not formally defined what being in the loop entails, nor what is necessary to exercise the “appropriate levels of human judgment over the use of force” required by the 2012 Defense Department directive “Autonomy in Weapon Systems.”

The concepts of positive or meaningful human control have started to gain traction as ways to characterize the threshold for giving weapon system operators adequate information to make deliberate, conscious, timely decisions. Beyond the moral and legal dimensions of human control over weapons systems, however, lies the difficult question of whether and under what conditions humans can serve as an effective failsafe in exercising supervisory weapons control, given the reality of automation bias.

Crew members monitor equipment in the combat information center of the nuclear-powered aircraft carrier USS Abraham Lincoln in the Caribbean Sea. Former Secretary of Defense Ash Carter has said that the US military will never pursue “true autonomy,” meaning humans will always be in charge of lethal force decisions and have mission-level oversight. (Photo by Corbis via Getty Images)


When War Is Too Fast for Humans to Keep Up

Moreover, it remains to be seen whether keeping human operators directly involved in decision-making will even be feasible for a number of military missions and functions, and different militaries will likely take divergent approaches to issues of automation and autonomy.

Already, air and missile defense has transitioned to greater degrees of automation, driven by the inability of human operators to react quickly enough to defend against a saturation attack. Similar dynamics may come into play in future cyber operations, which impose comparable requirements of speed and scale. Looking to the future potential of AI, certain Chinese military thinkers even anticipate the approach of a battlefield “singularity,” at which human cognition could no longer keep pace with the speed of decision and tempo of combat in future warfare. Keeping a human fully in the loop may thus become a major liability in a number of contexts, and the type and degree of human control that is feasible or appropriate in various conditions will remain a critical issue.

Looking forward, it will be necessary to think beyond binary notions of a human in the loop versus full autonomy for an AI-controlled system. Instead, efforts will of necessity shift to the challenges of mitigating risks of unintended engagement or accidental escalation by military machines.

Inherently, these issues require a dual focus on the human and technical dimensions of warfare. As militaries incorporate greater degrees of automation into complex systems, it could be necessary to introduce new approaches to training and specialized career tracks for operators. For instance, the Chinese military appears to recognize the importance of strengthening the “levels of thinking and innovation capabilities” of its officers and enlisted personnel, given the greater demands resulting from the introduction of AI-enabled weapons and systems. Those responsible for leveraging autonomous or “intelligent” systems may require a greater degree of technical understanding of the functionality and likely sources of fallibility or dysfunction in the underlying algorithms.

In this context, there is also the critical human challenge of creating an AI-ready culture. To take advantage of the potential utility of AI, human operators must trust and understand the technology enough to use it effectively, but not so much as to become overreliant on automated assistance. The decisions made in system design will be a major factor in this regard. For instance, it could be advisable to build redundancies into AI-enabled intelligence, surveillance, and reconnaissance systems, such that multiple methods ensure consistency with actual ground truth. Such safeguards are especially important because deep neural networks, such as those used for image recognition, have demonstrated vulnerability to being fooled or spoofed through adversarial examples, a weakness an opponent could deliberately exploit. The potential development of counter-AI capabilities that poison data or take advantage of flaws in algorithms will introduce risks that systems malfunction in ways that may be unpredictable and difficult to detect.
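The fragility behind adversarial examples can be illustrated with a toy sketch. The snippet below is not drawn from any real military system; it applies a gradient-sign-style perturbation (the idea underlying many adversarial attacks on neural networks) to a simple logistic classifier with made-up weights, showing how a small, bounded change to the input can flip the model's judgment:

```python
import numpy as np

# Toy logistic classifier: scores an input as "hostile" (1) vs. "civilian" (0).
# Weights and inputs are purely illustrative.
w = np.array([2.0, -1.5, 0.5])
b = -0.2

def predict(x):
    """Probability that the input is classified as 'hostile'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.1, 0.9, 0.4])   # an input the model sees as "civilian"
p_clean = predict(x)            # well below 0.5

# Gradient-sign perturbation: nudge each feature a small amount (at most eps)
# in the direction that most increases the "hostile" score.
eps = 0.4
x_adv = x + eps * np.sign(w)
p_adv = predict(x_adv)          # now above 0.5, despite a small change
```

A deep image classifier is vastly more complex than this toy model, but the same mechanism, small input changes aligned with the model's gradient, is what lets an adversary spoof it.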

In cases in which direct human control may prove infeasible, such as cyber operations, technical solutions to unintended engagements may have to be devised in advance. For instance, it may be advisable to create an analogue to circuit breakers that might prevent rapid or uncontrollable escalation beyond expected parameters of operation.
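One minimal form such a circuit breaker could take is a rate limiter that trips when an automated system requests actions faster than its expected operating parameters allow, locking out further action until a human resets it. The sketch below is hypothetical; the class name, thresholds, and interface are invented for illustration, not taken from any fielded system:

```python
import time

class EngagementCircuitBreaker:
    """Hypothetical safeguard: trips if action requests exceed an expected
    ceiling within a sliding time window, then blocks all further requests
    until a human operator resets it."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = []
        self.tripped = False

    def request_action(self, now=None):
        """Return True if the action is permitted, False if blocked."""
        if self.tripped:
            return False  # locked out pending human review
        now = time.monotonic() if now is None else now
        # Keep only requests that fall inside the sliding window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window]
        self.timestamps.append(now)
        if len(self.timestamps) > self.max_actions:
            self.tripped = True  # escalation beyond expected parameters
            return False
        return True

    def human_reset(self):
        """A deliberate human decision is required to restore operation."""
        self.tripped = False
        self.timestamps = []
```

The design choice worth noting is that the breaker fails closed: once tripped, nothing short of an explicit human decision restores autonomous operation.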

While a ban on AI-enabled military capabilities is improbable, and treaties or regulations may be too slow to develop, nations might still mitigate the risks that AI-driven systems pose to military and strategic stability through a prudent approach focused on pragmatic practices and parameters in the design and operation of automated and autonomous systems, including adequate attention to the human element.


Elsa B. Kania is an adjunct fellow with the Technology and National Security Program at the Center for a New American Security, where she focuses on Chinese defense innovation and emerging technologies, particularly artificial intelligence. Her research interests include Chinese military modernization, information warfare, and defense science and technology. She is an independent analyst, consultant, and cofounder of the China Cyber and Intelligence Studies Institute, which seeks to become the premier venue for analysis and insights on China’s use of cyber and intelligence capabilities as instruments of national power.

Her prior professional experience includes working at the US Department of Defense, the Long Term Strategy Group, FireEye Inc., and the Carnegie-Tsinghua Center for Global Policy. Kania is a graduate of Harvard College. She was awarded the James Gordon prize for her thesis on the Chinese People’s Liberation Army and its strategic thinking on information warfare. While at Harvard, she worked as a research assistant at the Belfer Center for Science and International Affairs and the Weatherhead Center for International Affairs. Kania was a Boren Scholar in Beijing, China, and she is fluent in Mandarin Chinese.


— Elsa B. Kania