The [AI]nformation battlespace

Artificial intelligence is coming to the battlefield: how will we prepare?

Under the cover of darkness, a team of soldiers from the United States Special Operations Command quietly makes its way into an unnamed East African city. The soldiers are clad in specialized clothing and makeup that prevent facial recognition software from matching their faces against freely available online databases of hacked Department of Defense files, and they carry secured cell phones to communicate with their home base — a converted cargo ship anchored offshore in international waters. A team on the ship flies a drone overhead to watch their surroundings, relaying live video back to mission control stateside, which will alert the team to any potential threats.

The team’s mission is to track down an individual who stole a piece of technology from a defense contractor. Various US intelligence services intercepted the thief’s communications with foreign intelligence officials, and learned that the handoff is planned for this particular city. It’s a safe place for one: a decade earlier, the Chinese government wired the city up with an advanced surveillance network, allowing it to autonomously keep eyes on residents, foreign intelligence and military officials, and anyone else who might be passing through.

The soldiers not only have to find their target, but have to do so without tipping off local officials and their Chinese allies, so as to avoid kicking off an international incident. Once they have their quarry in hand, they have to make it back out of the city to their ship and into safe territory.

It’s a scenario that sounds a bit like the plot of a thriller from Tom Clancy (or from authors like August Cole and P.W. Singer, or Linda Nagata), but it’s actually an idea that came out of a group session during a conference on artificial intelligence held at the United States Army War College’s Center for Strategic Leadership in Carlisle, Pennsylvania, which has a particular focus on strategic wargaming.

Because I’d written about military affairs (real-world and science fictional), I was one of the people invited to take part. When I arrived at the conference last year, I found a diverse group: military reporters like Breaking Defense’s Sydney J. Freedberg Jr.; authors like Linda Nagata (The Red, The Last Good Man) and Andrew Groen (Empires of EVE: A History of the Great Wars of EVE Online); and a number of officers from the Army and Marines, ranging from active-duty captains to retirees.

Our task for the session was to figure out the parameters for a war game that would simulate a conflict between two adversaries, one of them armed with an artificial intelligence in charge of its command and control systems. It’s a prospect that military strategists have begun to think about more seriously as battlefield technology has changed drastically over the last decade.

A number of questions hung over the conference: how would a military put artificial intelligence to use, and if such a system were fielded, how would the United States combat an adversary that utilized it? And would this particular gathering be a useful stepping stone toward a working war game, or just an intriguing thought experiment?

“We’re not talking about killer robots,” Dr. Andrew Hill, the Chair of Strategic Leadership for the school, told the assembled group in his opening statements. There’s a general wariness in defense circles of James Cameron’s 1984 film The Terminator, which set the public image of murderous military robots.

Many within the military community believe that artificial intelligence will be the next revolution in the history of warfare, seeing it as an inevitable development that will permeate every part of the battlefield. If AI can give a military an edge over its adversaries, it’s something that soldiers and planners will need to take into consideration. One way to do that is a time-tested training method: wargaming.


Photo: U.S. Navy / Andrew Liptak

Technological change is a constant that militaries around the world have had to cope with and adapt to throughout history. With every new advance, opponents work to counter its effects, lest they succumb to an enemy attack. Armies introduced formations of pikemen to fend off horse-bound knights. Builders made castle walls shorter and thicker to counter cannonballs. The introduction of the machine gun prompted armies to dig trenches, and then to build armored tanks to crawl over them. Airplanes allowed strategists to bomb their enemies from afar, and missiles from even further away. It’s a blood-soaked history of evolutionary change.

In his book Army of None: Autonomous Weapons and the Future of War, Center for a New American Security Senior Fellow Paul Scharre notes that armies around the world have increasingly begun to bring robots into their ranks, and that those machines can operate in environments or conduct tasks that humans can’t. “Automation is already used for a variety of functions in weapons today,” he writes. “But in most cases it is still humans choosing the targets and pulling the trigger. Whether that will continue is unclear.”

That future feels as though it’s coming up fast. In recent years, the United States military (and various allies) has begun fielding systems that could have been ripped right out of a science fiction novel, from laser and microwave systems designed to take down swarms of drones or missiles, to hypersonic missiles designed to fly at 15 times the speed of sound toward their targets.

The future that Hill and other strategists envision is a far cry from skeletal robots bent on wiping out humanity: they see a military that not only has its share of robotic systems operating on the battlefield, but one whose automation extends far behind the front lines, with advanced systems that streamline the flow of information to decision-makers, optimize the vast logistical apparatus that supplies the military, or play a supporting role in helping mechanics perform preventative maintenance on vehicles and equipment.

In his opening comments before the working group, Hill noted that the present moment of technological change shares some similarities to another recent era: the inter-war period between the First and Second World Wars. That lull between conflicts was one of rapid technological progress and transformation. Not only did airplanes, submarines, and tanks evolve into effective fighting vehicles, but militaries figured out the best way to utilize them on the battlefield, changing the speed and firepower available to commanders.

As the technology evolved, strategists devised new methods for either incorporating it into their doctrine or countering it. One tool at their disposal was wargaming: a simulated battle, fought on a tabletop under set parameters, that lets strategists work through complicated problems without mobilizing hundreds or thousands of soldiers.

Hill highlighted a series of exercises that the Navy and Army conducted between the two conflicts. Strategists understood that the capabilities of aircraft would quickly evolve, and extrapolated. They gave naval aircraft additional “speed, range, payload, and targeting capabilities well beyond those of aircraft of the time,” allowing them to imagine how such aircraft might be employed, and to come up with new tactics for using them.

By the time the United States entered the Second World War, the Navy had conducted 136 such war games, most of them against Japan. Admiral Chester W. Nimitz, who oversaw the Pacific campaign during World War II, later noted that they had practiced so many scenarios “that nothing that happened during the war was a surprise,” save for Japan’s kamikaze tactics. Simulating those hypothetical scenarios proved to be a major reason why the United States prevailed in the Pacific: preparation is essential.

Many within the military and defense industries feel that artificial intelligence could have a similarly transformative effect on how war is fought, and with his conference, Hill wanted to follow the interwar example: anticipate the capabilities of artificial intelligence, and work out how the military might use such a system or counter an adversary armed with one.

In 2019 and 2020, the threat that such technology poses is still largely theoretical, but its appearance on a battlefield seems likely. When it arrives, strategists want to be ready, ensuring that the military has the technology and the mindset for dealing with such a threat.


Photo: U.S. Army / Andrew Liptak

In order to address a problem, one has to first identify and understand it. After a handful of briefings covering the principles of warfare and the nature of artificial intelligence, Hill presented the assembled group with a straightforward opening task: examine the principles of warfare as understood by the United States military, and identify how artificial intelligence would impact them.

Principles of warfare are the fundamental values, collective teachings, and shared understandings that guide how the military carries out its missions. The military defines its principles across nine broad categories:

  1. Objective (the goal of operations)
  2. Offensive (seizing, retaining, and exploiting the initiative)
  3. Mass (concentrating the effects of combat power in time and space)
  4. Economy of force (allocating minimum essential combat power to secondary efforts)
  5. Maneuver (placing the enemy at a disadvantage through the flexible application of combat power)
  6. Unity of command (a single commander directing forces toward a common objective)
  7. Security (protection for the forces going into combat)
  8. Surprise (BOO!)
  9. Simplicity (clear, uncomplicated planning)

Throughout military history, technology has impacted each of those principles, and artificial intelligence promises to add an extra layer of complexity to existing doctrine. To fully understand how AI would affect the military, we conference participants would have to figure out how it interacts with each principle.

Breaking into working groups, we were each given a principle to consider. In some cases, artificial intelligence brings obvious advantages: AI-driven information systems could help commanders distribute intelligence effectively, giving personnel more and better data for deciding how to utilize the forces at their disposal. Such a system could conceivably collect vast amounts of raw information and sift it into actionable, useful intelligence for leaders in the field.

For others, the answers are more nebulous: how does AI help with a military force’s mass or its economy of force?

My group was assigned to identify problems around “unity of command,” and began to discuss how an advanced system might have access to the training and operational background of every soldier in a force, and assign them to units where they’d be most effective, not just in their individual roles, but alongside their fellow soldiers. We imagined how such a system might tap into vast reams of intelligence, such as weather reports, social media posts, or micro-financial updates, to glean insights into an enemy’s disposition and make recommendations to commanders for the best course of action.
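
We didn’t sketch any actual software, but the matching problem at the heart of that idea is a classic one. Below is a minimal illustration in Python, under heavy assumptions: the fit scores are invented placeholders for the training and operational data such a system would actually learn from, and scipy’s Hungarian-algorithm solver stands in for whatever a real personnel system would use.

```python
# Hypothetical sketch: assign soldiers to unit slots to maximize
# overall "fit." The scores below are invented; a real system would
# derive them from training records and team-composition data.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are soldiers, columns are open slots; entries are notional
# 0-1 fit scores for each soldier in each slot.
fit = np.array([
    [0.9, 0.4, 0.3],
    [0.2, 0.8, 0.5],
    [0.6, 0.7, 0.9],
])

# linear_sum_assignment minimizes total cost, so negate the scores
# to maximize total fit instead.
soldiers, slots = linear_sum_assignment(-fit)
for s, t in zip(soldiers, slots):
    print(f"soldier {s} -> slot {t} (fit {fit[s, t]:.1f})")
# Optimal matching here: 0->0, 1->1, 2->2, for a total fit of 2.6.
```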

Other groups looked at how AI might help a military figure out how to fight, position its forces, protect itself from electronic threats, and more. But a broad consensus emerged across all of those groups: the biggest asset that AI could bring to the battlefield is the speed at which it can process information. Any war game would have to provide a simulacrum of such a system. But how do you simulate something that, by design, does tasks that people can’t easily accomplish?

Furthermore, the principles of warfare that the US military adheres to have been developed by generations of human strategists and theorists. AI is decidedly inhuman, and because it can digest vast amounts of data, its decisions, recommendations, and observations might look completely different from anything its human counterparts would come up with.


Photo: U.S. Army / Andrew Liptak

While it seems likely that artificial intelligence is coming down the pipeline, how useful will a series of war games really be for the military? Certainly, the practice has proven useful in the past, but it’s not a surefire way to identify every potential problem: the US was still caught off guard when Japan launched its surprise attack on Pearl Harbor in 1941. And advances in aviation technology and artificial intelligence don’t line up comfortably: they’re two very different fields, with extraordinarily different uses. Given how strange AI can be, can it be adequately simulated by a war game at all?

Beyond the challenges that AI poses to planners, there are other potential problems. One participant was retired Lt. General Edward Cardon, who served as the commander of US Army Cyber Command and helped found Army Futures Command. He explained to me that AI is just one particularly large issue that the military has to deal with, and that it has to compete with a number of other priorities.

Chief amongst those? “You have the geopolitical realities of the present,” he explained — the everyday problems that crop up when nations around the world find themselves in violent disagreement. Those include the various conflicts in the Middle East, and potential threats from places like China, Iran, North Korea, Russia, and Venezuela. Those geopolitical issues take up a considerable amount of strategists’ attention.

Moreover, the sense of an impending artificial intelligence shakeup isn’t universal amongst military planners at the Pentagon, and it’s far from the most pressing concern. Cardon notes that the DoD has a lot of other technologies before it to consider. “You have the thinking on future war and what that looks like and making sure we’re prepared,” he says. “Then you have all the competing technologies, which the department [has to] sort out.”

Hypersonic weapons are one such example. “Are hypersonics more important than AI, or is AI more important than hypersonics?” he asked. “These are the debates that are going on.”

Despite those competing priorities, AI has begun to spill over into conversations about those other technologies. The Air Force wants drones to fly alongside F-15 and F-35 fighter jets, the Army is looking into using machine learning to identify when vehicles might need new parts, and the Pentagon is seriously thinking about how to use swarms of drones to attack and overwhelm an enemy. DARPA is planning to invest nearly $2 billion in artificial intelligence research. AI plays a role in each of those systems, and each brings its own advantages and complications to the battlefield.

The inevitable result is a military that will become increasingly automated. That shift brings some real opportunities: attendees spoke about how soldiers aided by technology could become more lethal, given more information about their surroundings, their targets, and more, and able to wage war effectively with a smaller force.

Cardon explains that the military’s interest in artificial intelligence grew as military leaders and policymakers watched China make its own strides in the field, along with the advances coming out of the private sector. “What I would argue is driving a lot of artificial intelligence right now is the commercial sector. When you start to look at what’s going on commercially,” he explains, the conversation then shifts to “what are the applications of war?”

He says that he feels we’re now in a time that resembles not just the interwar period, but also the years prior to the First World War, when new technologies were available but militaries lacked an understanding of how to use them effectively. He notes, though, that the parallels to those two periods only go so far. “I tend to think that it’s different because [AI] is doing things that a human can’t do. It’s a speed thing, and it’s going to change how decisions are made.”

That leads to a much greater question: “You’re starting to see this big debate: are we looking at an evolution or revolution in warfare?” Cardon notes. On one hand, the character of war will change with the tools being introduced, but the fundamental nature of war will remain constant — the principles that guide it will remain the same.

On the other hand, “if machines are making decisions, doesn’t that change the nature of war, because they aren’t bound by fear or honor? Would machines conduct war differently?”


Photo: U.S. Army / Andrew Liptak

While substantial obstacles stand between the military and battlefield artificial intelligence, the goal of this particular conference wasn’t to predict specific future threats: it was an exercise in getting comfortable with the future and the change it will bring.

Throughout the two-day workshop, Hill reminded participants that they shouldn’t feel constrained by the current state of artificial intelligence and warfare. We were to make broad assumptions and take real leaps of faith about what could be possible. The goal for this particular project wasn’t to produce a scenario that would be 100 percent realistic, but to get commanders and leadership thinking about how to conduct warfare from very different angles. Rather than merely thinking outside the box, he was essentially asking participants to shake the box up and see what could happen down the road.

And that future promises to be strange. Prior to the conference, several attendees toured an Amazon warehouse to see the company’s systems at work in person. One takeaway was that the company has developed a very different way to store items: randomly, with its software directing inventory and employees so that all of the available space in a given warehouse is used effectively.
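
For the curious, here’s a toy sketch of that “random stow” idea, assuming only what’s publicly described: items go into whatever bin has room, chosen at random, and a central index, rather than any physical organization, is what makes retrieval fast. The bin counts and item names are made up.

```python
# Toy model of chaotic storage: random placement plus a central index.
import random
from collections import defaultdict

BINS = 8          # hypothetical number of storage bins
CAPACITY = 3      # items each bin can hold

bins = defaultdict(list)   # bin id -> items stored there
index = defaultdict(list)  # item name -> bin ids holding a copy

def stow(item: str) -> int:
    """Place an item in a random bin with free space; record it in the index."""
    open_bins = [b for b in range(BINS) if len(bins[b]) < CAPACITY]
    b = random.choice(open_bins)
    bins[b].append(item)
    index[item].append(b)
    return b

def locate(item: str) -> list[int]:
    """The index, not the shelf layout, answers 'where is it?'"""
    return index[item]

for thing in ["batteries", "socks", "batteries", "radio"]:
    stow(thing)
print(locate("batteries"))  # e.g. [5, 2]: identical items, scattered bins
```

Copies of the same item end up scattered across the building, which looks like chaos on the floor but means a picker is rarely far from whatever the next order needs.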

The working groups came back with a range of answers. Artificial intelligence systems can crunch enormous amounts of data, and could be put to use for all nine principles. AI could disseminate orders across a chain of command, providing each link with vital information quickly and effectively. Computers could pull in vast amounts of data and analyze it to give leaders accurate battlefield information and intelligence, letting them more accurately utilize the assets under their control. Time-consuming and resource-intensive operations, like planning troop movements or logistics, could be left up to computers, which could accomplish those tasks faster.

That kind of random, machine-managed organization might not appear on its face to be effective or even logical, and attendees drew a comparison: an AI crunching through vast amounts of data might make connections that human personnel could never make on their own. The results might be effective, but they might also appear counterintuitive or baffling if the computer doesn’t supply the reasoning behind its decisions.

One example that came up: we’ve used artificial intelligence to play games like chess and Go, and we’ve seen it make moves that no human player has ever made. Amazon’s warehouse systems, which organize inventory in ways that are positively inhuman, are another example of that strange decision-making. Insert artificial intelligence into a role where it oversees command and control for a military, and we’d likely see the same thing play out on the battlefield.

That led to a key topic amongst the working groups: trust. If the military begins using such a system, the people relying on it will need to be able to trust whatever suggestions it spits out, whether it’s a direction to preemptively change a part on a vehicle, or to send soldiers after an objective that could result in casualties.

For that reason, Hill says, simulations are important, and standing up a simulation is an effective way for leaders to begin thinking about and internalizing the changes that artificial intelligence would bring to the table.

For the conference’s second day, the participants shifted focus to laying the groundwork for how one would simulate such a system.

My group meshed those concepts with a story, providing a conceptual framework for the various advantages that the red team’s artificial intelligence might have, such as widespread surveillance and traffic monitoring, and the efforts that a blue team might make to counter it. The threat we imagined being most pressing for a team of soldiers was the overwhelming amount of data they’d be up against. How would such a system utilize all of the information it might have on the soldiers opposing it? Absent killer robots at its disposal, how could it turn its resources and environment against the players on the other team?

Any AI-equipped military would have access to considerably more resources and would be able to process information far more quickly than the other team: its players would have more time and more information than their counterparts, or the ability to listen in on what the human team is planning. They might be able to call on greater, or unexpected, resources that the other team wouldn’t have access to.
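
One simple way a designer might bake that edge into a game’s mechanics, offered here as my own illustration rather than anything the group formally adopted, is to give the red player a much larger simulation budget when evaluating the same uncertain action, so its decisions rest on better estimates.

```python
# Notional sketch: model the AI's processing edge as extra rollouts.
import random

def rollout() -> bool:
    """One random resolution of a contested action (True = success)."""
    return random.random() < 0.62  # hidden "ground truth" odds

def estimate(budget: int) -> float:
    """Estimate the odds of success from `budget` simulated outcomes."""
    return sum(rollout() for _ in range(budget)) / budget

random.seed(1)
red = estimate(budget=5000)  # AI-assisted staff: tight estimate
blue = estimate(budget=20)   # human staff: noisy estimate
print(f"red:  {red:.2f}")    # lands close to 0.62
print(f"blue: {blue:.2f}")   # can be badly off either way
```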

The opposing blue team would have far less time and far more limited information, and would likely face an uphill battle. But they wouldn’t be out of tools. They could deliberately poison the enemy’s information well, planting misinformation or viruses. They could try to mess with the underlying infrastructure itself, or find weaknesses in how the artificial intelligence processes information.
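
Poisoning a learning system’s training data is a real, well-studied attack, and a toy experiment shows why it appealed to us as a blue-team move. The sketch below is my own illustration, not something built at the conference: it flips a growing fraction of training labels for a generic scikit-learn classifier and watches test accuracy fall.

```python
# Illustration of data poisoning: corrupt training labels, measure damage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A generic, synthetic binary-classification task stands in for
# whatever the red team's AI actually learns from.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for rate in (0.0, 0.2, 0.4):
    poisoned = y_tr.copy()
    n = int(rate * len(poisoned))
    flip = np.random.default_rng(0).choice(len(poisoned), n, replace=False)
    poisoned[flip] = 1 - poisoned[flip]  # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)
    print(f"poisoned {rate:.0%} -> test accuracy {model.score(X_te, y_te):.2f}")
```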

The desired output for the conference wasn’t a fully polished, deliverable game concept, but a variety of options that could be refined further into something workable.

However, it’s unlikely that this particular project will ever be fully realized: Hill left the Army War College for the private sector last year, leaving the task of conceptualizing and planning for AI to other thinkers within the US military.

But while Hill’s plans for a series of war games won’t come to fruition, the conference highlights the earliest efforts underway to conceptualize the future. While various parts of the military are turning to science fiction as a way to imagine what’s to come on the battlefield, the specter of artificial intelligence casts a long shadow over the people trying to point the military in the right direction to address potential future threats.

Ultimately, the lesson at the heart of the issue seems to be one of resilience and improvisation. To face artificial intelligence in the future, the military will need to organize and train its people to quickly adapt to changing scenarios and technologies, and to be open to the strange possibilities and directions that could be coming their way.

In our scenario, a team is sent into an enemy city, and knowing that they’ll face an AI-equipped adversary, they do everything in their power to stay out of sight: messing with its vision to slip under the radar, and presenting as low an information profile as possible. Will they succeed? Maybe we’ll find out in a couple of decades.


This is a piece that I’ve been working on for a while, ever since I was invited to take part in the war gaming session last year. While it doesn’t seem like this particular initiative has gone anywhere, it still seems like a useful early step for various planners to take cues from. It’s certainly an interesting field to follow, and undoubtedly, there’ll be more developments in the years to come.

I’ve written a bit about the future of warfare and science fiction in recent months. Here are a couple of posts to check out if you liked this one:

As always, thanks for reading. I’ll be back in a couple of days with a regular roundup. To those who served, thank you for your service.

Andrew