The Future of War, Part 2. Going Full Auto
In my previous article, The Future of War, Part 1, I wrote about how the future battlefield will include large numbers of robots, which will drive significant levels of AI-driven autonomy. That, in turn, will lead to the automation of warfare, as one side or another falls to the temptation to take the “man” out of the loop. This isn’t an argument for doing so, but rather a confident assertion that it will happen. You can bet on it.
Why Flip the Switch to Full-Auto?
The primary reason is speed. If all the new technologies introduced in the past three decades can be summed up in one way, it is that they have made speeding up the so-called OODA (Observe, Orient, Decide, Act) loop critical in battle. The idea is that winning may require getting the jump on the adversary; it may be a question of who has the fastest draw; it can also be about which side can rapidly and seamlessly get all the relevant actors rowing together, with precision and coordination. The most consequential constraint on OODA loop speed now is the human sitting somewhere in the loop. He or she has to take precious time to identify and take in the most pressing information, then decide rapidly how to act, where to send whom, and to do what. It stands to reason that a more thoroughly automated force would have a significant advantage over a force with less automation or none at all.
Moreover, automation is essential for making the most of collaborative warfare. In collaborative warfare, someone or some system detects a threat. The information is shared on the network, and an AI determines which weapon system has the best shot, perhaps taking into account pre-programmed parameters related to rules of engagement or a desire to minimize collateral damage. That is, the weapon system with the best shot might be a vehicle with an anti-tank missile or a 30mm cannon, as opposed to a circling F-16 with a 500-pound bomb that would deal with the threat but also take out a city block. So the information gets passed to an armored vehicle, the turret of which automatically “slews” to target the threat with its cannon, or perhaps the info is fed into the vehicle’s missile. This technology exists and is in the field, but the systems are all rigged to keep a man in the loop. Somewhere there is usually a human being who has to pull the trigger, perhaps only after authorization from headquarters. That’s the feature that will be skipped when the systems are set on full-auto.
Collaborative warfare. Source: https://www.thalesgroup.com/en/markets/defence-and-security/naval-forces/underwater-warfare/collaborative-anti-submarine-warfare
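To make the selection step concrete, here is a minimal, purely illustrative sketch of how a “best shot” might be scored. The unit names, ranges, and weights are all hypothetical; a fielded system would weigh far more factors, but the basic logic of trading off reach against collateral damage is the same.

```python
from dataclasses import dataclass

@dataclass
class Shooter:
    name: str
    weapon: str
    range_km: float              # maximum effective range
    distance_to_target_km: float
    collateral_radius_m: float   # rough blast/fragmentation footprint
    roe_cleared: bool            # pre-programmed rules-of-engagement check

def score(shooter: Shooter) -> float:
    """Higher is better: in range, cleared to fire, small footprint, close."""
    if not shooter.roe_cleared or shooter.distance_to_target_km > shooter.range_km:
        return float("-inf")  # cannot legally or physically take the shot
    # Penalize large collateral footprints and long engagement distances.
    return -(shooter.collateral_radius_m * 10 + shooter.distance_to_target_km)

def best_shot(shooters: list[Shooter]) -> Shooter | None:
    candidates = [s for s in shooters if score(s) != float("-inf")]
    return max(candidates, key=score, default=None)

# Hypothetical example: the armored vehicle's cannon beats the 500-pound bomb.
shooters = [
    Shooter("IFV-2", "30mm cannon", 3.0, 1.2, collateral_radius_m=15, roe_cleared=True),
    Shooter("F-16", "500 lb bomb", 30.0, 12.0, collateral_radius_m=200, roe_cleared=True),
]
print(best_shot(shooters).name)  # -> IFV-2
```

In this toy version, both platforms can make the shot, but the vehicle’s far smaller footprint wins; flip the scoring weights or the rules-of-engagement flag and the answer changes, which is exactly the kind of parameter a commander would set before flipping to full-auto.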
We can also imagine collaborative warfare in an offensive context: certain assets are tasked with certain objectives and given leeway to figure out for themselves how to accomplish them. Autonomous systems can select targets, ideally skipping civilian targets or low-value military targets in favor of high-value military ones. They can perform services to assist other assets: refueling a plane, or protecting it with lures or electronic warfare capabilities. Maybe their job is to suppress a threat so the plane can perform its assigned task untroubled. On the ground, the automated turrets on armored vehicles, or separate robots carrying weapons systems, might follow the guidance and targeting information fed to them by artificial intelligence software rather than the cues of vehicle pilots or other human operators, all of whom can be expected to have limited vision and a fragmentary understanding of the larger operational picture, especially compared to a computer networked to all the sensors present in a unit and beyond.
Urban Warfare
The most obvious context that might induce a commander to go “full-auto” is urban warfare, where poor visibility and short distances mean forces have fractions of a second to react to threats they might not see. Imagine a man with an advanced shoulder-fired guided missile popping into someone’s view, though perhaps not into the view of any of his intended targets. If a shot can be made, it must be made as soon as possible. A fully automated system could do the job within seconds of the threat being detected: the AI would confirm the threat, select the best weapon for the job, feed it targeting data, and fire.
The Israel Defense Forces are at the cutting edge of automating as much as possible at the platoon level, so as to empower the kind of small ground units the IDF prefers for urban operations to make full use of available support, even if the resources in question belong to other services. Unit leaders can pinpoint targets on a tablet and have accurate targeting information shared automatically. The AI then determines who should take the shot, in a manner often described as similar to ride-sharing apps like Uber, which use algorithms to assign ride requests to specific drivers. The Israeli system is not fully automated: weapons systems don’t fire instantly at targets designated for them by a computer. But they could, with policy rather than technology holding the Israelis back. Sooner rather than later, the Israelis will find that in a given situation, setting systems on full automatic might be precisely what’s required.
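A toy sketch of that Uber-like tasking, assuming nothing about the actual Israeli system: designated targets are greedily matched to the nearest free shooter, much as a ride-hailing dispatcher matches requests to nearby drivers. All call signs and coordinates are invented.

```python
import math

# Purely illustrative: assign each designated target to the nearest free shooter.
# Positions are (x, y) grid coordinates in kilometers; all data is hypothetical.

def dispatch(targets: dict[str, tuple[float, float]],
             shooters: dict[str, tuple[float, float]]) -> dict[str, str]:
    free = dict(shooters)
    assignments = {}
    for target, t_pos in targets.items():
        if not free:
            break  # more targets than shooters; the rest wait for the next cycle
        nearest = min(free, key=lambda s: math.dist(free[s], t_pos))
        assignments[target] = nearest
        del free[nearest]  # each shooter handles one fire mission at a time
    return assignments

print(dispatch(
    targets={"ATGM team": (2.0, 1.0), "sniper": (9.0, 4.0)},
    shooters={"UGV-1": (1.0, 1.0), "mortar": (8.0, 8.0), "IFV-2": (9.5, 3.0)},
))
# -> {'ATGM team': 'UGV-1', 'sniper': 'IFV-2'}
```

The point is not the matching rule, which here is deliberately crude, but that nothing in such a loop requires a human once targets are designated; the trigger pull is a policy choice, not a technical one.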
Robotic remote turrets like this Elbit system are already proliferating. They can be slaved to information networks that tell them what to shoot, where, and when.
In air and naval combat, potentially great distances allow for some delay, but rarely more than minutes. Hypersonic missiles will only shrink that time. We have already seen how difficult it can be to respond effectively to high-speed threats. In the Falklands War, British ships had little time to detect and respond to attacks by Argentine aircraft, some armed with sea-skimming Exocets. Argentine pilots were adept at flying under the radar to avoid detection, popping up only when their Exocets were within range to allow the missiles to acquire their targets before firing. In this manner, Exocet-carrying Argentine Super-Étendards made a run at the British fleet so rapid that those most in danger, three destroyers on picket duty (HMS Coventry, Glasgow, and Sheffield), had little time to respond effectively. By the time an Exocet had locked on to Sheffield, there was almost nothing her commander could do. The Étendards, moreover, were never at risk.
The U.S. military is now developing unmanned “loyal wingmen” to escort manned aircraft.
Now imagine a far more dangerous risk environment, with far more elements to track and a crowded battle space. What if, instead of at most five Super-Étendards armed with one Exocet each, Argentina had operated hard-to-detect, stealthy next-generation aircraft working in tandem with unmanned drones of various types, many armed with additional anti-ship missiles? What if, at the same time, salvos of long-range ship-based and ground-based missiles fired from the Argentine mainland were inbound? What if Argentina’s missiles could identify targets autonomously and, for example, decide to pass on HMS Sheffield so as to hunt down something more vital to British naval operations? What if Argentine electronic warfare capabilities had enabled them to disrupt British information networks?
Imagining Automatic Battle
We can safely assume that in this century, aircraft will fly escorted by potentially large numbers of drones, some of them capable of releasing other drones. There might be dozens of planes, each with multiple drones operating alongside it. The drones might perform a variety of functions, all designed to protect the plane and extend or complement its capabilities. They might have electronic warfare capabilities, or be armed with lures, or themselves be lures designed to draw missiles away from their target. They might have air-to-air or air-to-ground capabilities. They might be built to combat submarines, surface ships, or satellites. They might be early-warning platforms. The same applies to manned naval vessels, which increasingly will have robotic escorts with diverse capabilities. The net result is to complicate both sides’ operating picture, significantly increase the complexity and magnitude of threats, boost the flood of incoming data that has to be processed, and create profound challenges for anyone attempting to prioritize threats and conceive and execute an optimal response.
Robust electronic warfare capabilities would be crucial. They are needed to protect one’s own networks while disrupting or hacking the adversary’s systems. This is another lesson from Ukraine, where both sides use jammers and counter-jammers to great effect and have been able to hack each other’s systems. The more networked systems are involved, the greater their dependence on information networks, which themselves will be contested. This only heightens the usefulness of automated systems, which, because they think for themselves, need not depend on maintaining tethers to other systems.
Textron’s M5, an unmanned ground “loyal wingman” that can perform numerous tasks within a ground unit in support of infantry and manned platforms.
Hubin, Again
Perhaps the best vision of a future battlefield I’ve found remains French Army General Guy Hubin’s book Perspectives Tactiques, published in 2000. I wrote about Hubin’s views at length here. Hubin calls for abandoning what he refers to as the homothetic force, which he associates with contemporary armies. To Hubin, homothetic forces, their units nestled within one another like Russian dolls, rely on vertical communications chains and tend to be arrayed in ways that reflect a linear form of movement in physical space. There is a “front” and a “rear” and a clear direction of movement. Future forces, per Hubin, will involve small, dispersed, and decentralized units that often overlap or are interspersed with enemy units. There will be large gaps between units, and they will need to be able to maneuver in any direction rather than heed an artificial notion of forward and backward. To be survivable despite their small numbers and dispersion, units must have access to combined arms and joint capabilities at the lowest echelon, so that fire teams can call in fires from distant platforms that may belong to other services. Hubin is also keen on avoiding linear supply lines in favor of “pulsing” just the right resources to the right place at the right time.
Hubin believed that instead of units working with habitual partners, or drawing on resources organized per a top-to-bottom command structure, maneuvering units would link up to pertinent networks and platforms as they moved. Combat support units, for example, would connect seamlessly with combat units moving within a relevant range. This would make the fullest use of scarce resources. It also means units remain functional even as their component parts increasingly disperse. Those dispersed units also need to be able to concentrate, though only when necessary, and ideally while preventing the adversary from seeing where they are doing it. One can easily recognize all the ways in which AI and automated systems could go a long way toward making Hubin’s vision real. Slowly but surely, modern militaries are already drifting in his direction.
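A toy sketch of that “link up as you move” idea, with all names, positions, and ranges invented: a maneuvering unit queries the network for whatever support assets currently have it within reach, rather than relying on a fixed parent formation.

```python
import math

# Illustrative only: as a fire team moves, its menu of supporting fires changes
# automatically, with no habitual or hierarchical relationship required.

SUPPORT_ASSETS = {
    "155mm battery": {"pos": (10.0, 2.0),  "reach_km": 30.0},
    "armed drone":   {"pos": (4.0, 6.0),   "reach_km": 15.0},
    "EW team":       {"pos": (40.0, 40.0), "reach_km": 10.0},
}

def available_support(unit_pos: tuple[float, float]) -> list[str]:
    """Return the assets whose reach covers the unit's current position."""
    return [name for name, asset in SUPPORT_ASSETS.items()
            if math.dist(asset["pos"], unit_pos) <= asset["reach_km"]]

print(available_support((5.0, 5.0)))    # -> ['155mm battery', 'armed drone']
print(available_support((35.0, 35.0)))  # -> ['EW team']
```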
Downsides
The downsides to automation are obvious and discussed at length elsewhere. Suffice it to say, automation raises important legal and ethical questions regarding who is responsible for lethal actions. Automated systems can make mistakes as well as be tricked. The same can be said of humans, but when automated systems do terrible things, who is to blame? Besides hacking into systems, adversaries can game each other’s systems, perhaps to exploit weaknesses or to manipulate how and when they decide to fire. Wily adversaries can make systems miss their shots, or perhaps take shots at precisely the wrong people. Militaries are wise to resist full automation and insist on having a man at least “on” the loop, but the argument here is that sooner or later the comparative advantages of switching to “full auto” will be deemed irresistible, at least in certain contexts.
Conclusions
Future battles may well be determined by who has the best software, or simply by the number of robotic assets each side can integrate into its force. How weapons systems function in a degraded information space may also be crucial. Obviously, how well the technology is integrated into the systems people actually use will be an important determinant. There will be new doctrines regarding when to go full-auto and how best to integrate autonomous systems. Most militaries can be expected to insert humans somewhere, if not because of the utility of doing so then because of enduring romantic ideas about leaders and military chiefs. But that choice will increasingly resemble the decision made in the 1950s to place mostly useless astronauts atop rockets flown and guided by computers and ground control. This made the rockets more costly and ensured that the space program would get people killed.