Military robots have featured in science fiction for decades, but how practical are they in the real world, fighting alongside or even replacing humans?
Military robots have featured in science fiction at least since Ray Bradbury set a mechanical hound hunting in 1953’s Fahrenheit 451 – arguably since Rossum’s Universal Robots rebelled against their creators in Karel Čapek’s play R.U.R., which introduced the word ‘robot’ to the English language.
Logan’s Run featured robotic eagles that hunted and killed citizens who refused to die at the appointed age. Weapon has an android soldier as its protagonist, as does the Cassandra Kresnov series – in both cases the androids are, in fact, deserters – and Bolos, artificially intelligent super-tanks, were created by Keith Laumer but have been written about by a variety of authors including SM Stirling, David Weber and Mercedes Lackey. Even those with only a passing familiarity with science fiction have heard of Skynet, the overarching Artificial Intelligence of James Cameron’s Terminator, which conquers the Earth and tries to eradicate humanity with T-101s and airborne Hunter-Killer units before dispatching cyborg assassins back in time. Cameron’s Aliens also featured Bishop, an android attached to a platoon of Colonial Marines as technical support, and the Separatists of the Star Wars prequel trilogy deploy an almost exclusively robot army.
Television had Battlestar Galactica’s Cylons, robot servants created by the people of the Twelve Colonies, who rebel against their masters and virtually eradicate humanity before hunting the last few thousand survivors across space with machine intensity.
‘A Taste of Armageddon’, a Star Trek: TOS episode, featured a pair of alien planets waging war by computer simulation; whilst these are not precisely AI or robotic systems, the episode foreshadows the danger of ‘clean’ wars. In the episode, the computer simulates strikes and assigns casualty figures, which the belligerents, in accordance with their treaties, then cull from their own populations – cleanly, and without physical suffering. Consequently, the war has become acceptable, and no real effort has been made to end it.
This could be compared to the modern role of drones which carry out high-risk missions without the fear of friendly casualties; war and military operations play better with democratic electorates when they don’t involve dead soldiers being shipped home to grieving families. This is also partially responsible for the increase in the use of Private Military Contractors, whose deaths don’t show up in military casualty figures.
Combat robots in computer games include the darkly comical HK-47 of Knights of the Old Republic, the anachronistic Liberty Prime of Fallout 3 and the highly sexualised EDI of the Mass Effect series. Robots and androids are a staple of future fictional battlefields. Aside from their obvious futurism, and in some cases the fact that the machines themselves are the technological conceit upon which the story rests, the reasons for the use of robots in these fictional worlds generally revolve around the avoidance of human casualties and their lack of human frailties; robots do not lose effectiveness or efficiency due to fatigue or human emotions such as boredom, fear and compassion. Nor do they stop merely because they’ve been damaged. As Kyle Reese says, ‘That terminator is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.’
Most devotees of Carl von Clausewitz’s On War would scoff at these fictional robots – until recently. Von Clausewitz entered service with the Prussian Army aged 12 as a lance corporal and rose to the rank of Major General, serving in the Rhine Campaigns and the Napoleonic Wars before reforming the Prussian Army. He wrote On War between 1816 and 1830, and it has become required reading for professional military officers everywhere, having proved remarkably resistant to the advance of technology and relevant to warfare to this day. When the tank replaced the horse as the mount of the cavalry, fuel replaced grain as the limiting logistical factor in their deployment. When airpower became the definitive tool of power projection, it merely displaced artillery.
When Vice Admiral Arthur Cebrowski ushered in the age of Network-Centric Warfare, he claimed that it would provide ‘total information awareness’ and dispel von Clausewitz’s fog of war; deployed for the first time in Afghanistan and Iraq, Cebrowski’s paradigm found the fog of war still essentially impenetrable.
And yet one of von Clausewitz’s most self-evident truisms – ‘war is fought by human beings’ – may no longer stand up to scrutiny.
Predators and Pack Bots
Although the first examples of battlefield automation appeared as early as WWI, with the ‘electric dog’ designed to carry supplies through the trenches, the first purpose-built modern military robot was the British Army’s ‘Wheelbarrow,’ developed for use by Explosive Ordnance Disposal units. The reasoning was obvious: if the bomb-disposal specialist cut the wrong wire, only the robot would be blown up.
Today, drones and robots are used for roles considered too dirty, dangerous or dull for humans; robots can investigate terrain rendered lethal by radiological, chemical or biological weapons. Unmanned Ground Vehicles such as the Talon can roll into a firefight that would be certain death for humans without incurring casualties; they’ve even continued to function after repeated hits from .50 calibre rounds. To contextualise this: the .50 round was originally designed for anti-aircraft use, influenced by the German 13.2mm TuF, which was developed as an anti-tank round. Fired from machine guns, .50 cal can demolish houses in minutes. And surveillance systems such as Global Hawk can orbit a specified area and watch it for up to thirty hours with unblinking vigilance.
Human pilots require a wide variety of life-support equipment, including oxygen systems, ejector seats and G-suits – and yet a sustained 7-9 G turn will still make them pass out. At 20 G their internal membranes tear and they die. Unmanned aircraft need neither oxygen nor G-suits to function through nine-G turns – or even 20-G turns. The squishy organic components of manned fighter jets are their primary limiting factor. This is also true at sea; robotic vessels can operate in Sea State Six, in which humans cannot: the motion of the waves is violent enough to toss people about with bone-breaking force.
‘They don’t get hungry,’ Gordon Johnson of the Pentagon’s Joint Forces Command points out. ‘They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.’ His words are uncannily reminiscent of Kyle Reese’s description of the T-101 hunting Sarah Connor in The Terminator.
Speed is another key factor in the deployment of unmanned systems. Defensive robot systems can recognise threats and respond to them in the time a human operator would take to utter the first syllable of whatever fear-induced expletive came to mind. Until recently, the only sane response to coming under mortar bombardment was to get under cover as quickly as possible; by contrast, the C-RAM (Counter Rocket, Artillery and Mortar) system – affectionately referred to as ‘R2-D2’ – shoots the incoming ordnance down before it poses a threat. The Sea Viper air defence system fitted to the Royal Navy’s Type 45 destroyers can track over 2,000 targets out to 400km and operate multiple defensive systems, ranging from Aster missiles to the Phalanx gun, to engage supersonic targets from high altitude to the sea’s surface and from 120km to point-blank range – simultaneously, and autonomously. The limiting factor is human response time – giving the systems authorisation to engage their targets. One officer stated, ‘The trend towards the future will be robots responding to robot attack, especially when operating at technologic speed… As the loop gets shorter and shorter, there won’t be any time in it for humans.’
This is problematic, given the West’s literary tradition of our creations rising up to destroy us – a tradition stretching from Mary Shelley’s Frankenstein to the present day – hence the Western public’s insistence on ‘keeping the human in the loop.’ We’re uncomfortable with giving machines agency in deciding who lives and who dies, so we keep a human in control of even the most autonomous systems; they are unable to prosecute targets unless those targets have themselves opened fire (counter-sniper systems are permitted a ‘quick-draw’ response in the interests of speed) or a human has authorised the use of lethal force. But does this genuinely keep a human finger on the trigger, or is it an illusory fail-safe?
Autonomy – or not to be: Systems Operating Humans and the Case for Machine Agency
On July 3rd, 1988, the USS Vincennes was patrolling the Persian Gulf when its Aegis radars detected an Iranian Airbus squawking a transponder code that identified it as a passenger jet, flying on a consistent heading, altitude and speed. There was nothing to suggest that the aircraft was military or posed any sort of threat – but Aegis had been designed to defend US ships from a storm of Soviet air threats in the North Atlantic waters of World War III; target discrimination wasn’t its forte. It dropped a hash on the Combat Information Centre’s displays identifying the contact as an Iranian F-14 Tomcat, a supersonic fighter with surface-attack capability. The Aegis system was in semi-automatic mode and required human authorisation for the release of weapons – but of the eighteen humans who made up the ship’s command crew, none was willing to override the machine’s assessment. As requested by the system, they authorised it to engage.
That they could do so was another indication of the status of their systems; as the only vessel fitted with Aegis, the Vincennes was the only ship authorised to open fire without seeking permission from a higher authority. The US Navy trusted the system’s judgement more than that of its own human captains. Aegis selected a surface-to-air missile from its list of available munitions, launched it, and killed two hundred and ninety passengers and crew, including sixty-six children.
Incidents such as this make it easy to see why many want to keep the humans in the loop – but also demonstrate that assuming humans are at the top of the loop is erroneous. Consider modern air-to-air combat; BVR (Beyond Visual Range) missile engagements are conducted at a minimum of 37km. The MBDA Meteor missile about to enter service with the RAF can engage targets at 100km or more. The pilot is physically incapable of seeing the target. All his assessments are based on information fed to him by his aircraft’s systems – in effect, the machines tell the organics what to do. Add to this the fact that the pilot is probably flying a fighter designed for instability to enhance its agility in dogfights, and that without computer-assisted flying it would be so unstable as to plummet to the ground…
The whole purpose of human control of Unmanned Systems is to prevent machines from having agency in kill/don’t kill decisions – but those decisions are made based on information gathered, analysed and assessed by more machines, whether a human makes the decision or not. The human makes the decision more slowly than a machine would in an increasingly fast-paced environment. Worse, if the systems can only prosecute targets when given specific authority, all an enemy has to do is disrupt communications between the drone and the operator; the drone is left in the battlespace, impotently requesting permission to fire.
We can also consider expense. Since the Joint Strike Fighter program began in 1996, the F-35’s unit cost has almost doubled; figures of $229 million per aircraft have been reported, with total lifetime costs for the fleet of $1.5 trillion. These aircraft will be ready for operational deployment between 2015 and 2019. By contrast, the unmanned X-47B program started in 2006, has cost $813 million to develop and is expected to deliver battle-ready drones by 2019 – despite being the first drone capable of taking off from and landing on an aircraft carrier, essentially a tiny moving airstrip that even elite pilots find challenging. Additionally, requiring a human to run each Unmanned System prevents militaries from making personnel savings in an era when the enormous budgets they are used to are increasingly hard to secure; they only save on personnel costs when a single operator can run multiple machines or there is no operator at all. Consider that it costs the US military $6 million to train each fighter pilot. There are persistent rumours in the US military that the USAF cancelled the X-45 – the X-47B’s sister project – before it even entered testing because its lower unit cost and lack of need for a fully-trained pilot made it more attractive to the Pentagon than the F-35A, whose budget was already beginning to balloon. Adoption of the X-45 could have rendered fighter pilots irrelevant overnight – not a prospect that the USAF, whose commanding generals are largely ex-fighter pilots, found appealing.
Relocating the pilot from the cockpit to a remote station doesn’t eliminate the potential for psychological casualties, either. The ‘Cubicle Warriors’ who operate drones in combat overseas from bases in California and Nevada do not assume the same physical risks as the troops on the ground or flying manned systems, and as such are often derided not only by their enemies but also by their own forces. However, Colonel Michael Downs of the USAF recalls an incident in which the crew of a Predator drone could only watch as a team of Special Forces were surrounded and killed by insurgents: ‘You see Americans killed in front of your eyes and then have to go to a PTA meeting.’ Colonel Gary Fabicus, commander of a Predator squadron, states, ‘You are going to war for twelve hours, shooting weapons at targets, directing kills on enemy combatants, and then you get in the car, drive home and within twenty minutes you are sitting at the dinner table talking to your kids about their homework.’
The shift in mindset is total – from life-and-death circumstances to the banality of everyday living with a couple of keystrokes and a brief commute. Often the drone operators can’t cope. Far from suffering less stress than the crews of manned systems actually based in the war zone, drone crews in one survey reported ‘significantly increased fatigue, emotional exhaustion and burnout.’ They were also more likely to suffer from ‘impaired domestic relationships.’ Humans, it appears, are the weak link in the warfighting system.
‘Evolving’ Better Robots
But where are the T-101s and ED-209s? Where are the humanoid robot cops of Elysium? Just around the corner, according to the United States’ Defense Advanced Research Projects Agency (DARPA). In 2004, a DARPA study on optimal robot forms concluded that ‘humanoid robots should be fielded – the sooner, the better.’ Wheeled systems can operate on only 30% of the Earth’s terrain and tracked systems on 50%; legs can operate on close to 100% of terrain types. Further, the primary human battleground of the modern age is the city – designed for humanoid forms to move around in.
This is far from the only example of biomimetic robot design. One DARPA employee describes himself as a ‘Combat Zoologist.’ His role is ‘getting robots to jump, run, crawl, do things that nature does well. We’re evolving our machines to be more like animals.’
An example is Boston Dynamics’ Cheetah, a quadruped robot with an organic gait capable of outrunning Usain Bolt. Another is their ‘BigDog’ pack robot; watching it slip on ice and struggle to regain its balance is disturbing – its biomimetic movements are organic enough to trigger an ‘uncanny valley’ response in many viewers. Other examples include ornithopter drones that fly by flapping their wings and disguise themselves as insects or birds, and systems that mimic the swimming abilities of fish or snakes. The same study that advocated the creation of humanoid robots also advocated centauroid forms – quadruped robots with a humanoid torso, head and arms mounted on top.
Biomimetics even extends to coding AI. Designers programming aerial systems that operate in close proximity model their behaviour on insect swarms. Some AI researchers claim that simply writing an Artificial Intelligence line by line is impossible; they advocate using a biological template – either by directly modelling an organic brain neuron by neuron to create a virtual brain, functioning much as a virtual machine emulates a physical computer, or by creating an AI kernel that will assimilate information and ‘grow’ into a fully-fledged AI. In both cases, researchers have studied the brains of cats, which show curiosity and a desire to play – both essential to the learning process. This is somewhat counter-intuitive; most of us do not think of machine intelligence as being emotional, but emotion may be not merely possible but essential to Artificial Intelligence.
War Was Fought By Human Beings
So will we see autonomous systems choosing who they kill? It seems very likely. What are a few hundred fatalities and a literary tradition of our artificial servants becoming our machine overlords and eradicating us, compared to the demand for faster, cheaper, more durable, more deadly forces that are politically easier to deploy during a resource-scarce century in which war is apparently the norm rather than the exception? Not only is autonomy on the drawing boards, it is being built. In some cases it is already in the air. In 2013, BAE Systems’ Taranis Unmanned Combat Air Vehicle made its first test flights, announced publicly the following year. An unmanned stealth aircraft with a projected intercontinental range and the capability to carry weapons, Taranis’s airframe was found to perform beyond expectations. It is designed to have global reach through air-to-air refuelling, to respond to attacks against it autonomously – presumably a ‘quick-draw’ response requiring no human input – and to penetrate an enemy’s airspace invisibly before going on to engage and destroy air or ground targets. BAE stresses that it has a human operator; given the sophistication of the AI being developed for it, that in all likelihood means it will be able to do everything by itself except engage a non-attacking target without specific human authorisation – but that human will almost certainly make the determination to engage based on machine-supplied information, possibly provided by Taranis itself. Further, Taranis isn’t expected to be operational until 2030, by which time the fiction of humans having a meaningful place in the loop may have passed.
It would appear that the science fiction writers were correct about the place of robot systems on the battlefield – and the reasoning behind it. Bradbury’s mechanical hound is uncannily echoed in Boston Dynamics’ WildCat robot. Nolan and Johnson’s hunter-killer robot eagles can be heard in the flapping of a DARPA ornithopter’s wings. The swarms of Michael Crichton’s Prey, Cameron’s Terminator, even the beautiful female android Cassandra Kresnov of Joel Shepherd’s Breakaway have real-world analogues on the drawing board, in development or actually deployed to the battlespace.
Sci-fi writers are also basing their work on real-life experience of Unmanned Systems. In David Weber’s In Fury Born, Alicia DeVries, an Imperial Marine hundreds of years in the future, uses an aerial drone to reconnoitre a street and identify enemy positions; she then uses the drone’s systems to highlight targets and fires on them without ever actually seeing them. A Solarian League Navy ship uses a drone to direct orbital bombardment projectiles in Shadow of Freedom. Chris Moriarty’s Spin Control has a different take on ‘keeping a human in the loop’: in a futuristic West Bank, Israeli troops have no conscious volition over their actions. They are puppeted by a military Artificial Intelligence that controls their every movement – the machine operates human drones. Worse, the AI thinks it’s playing a game; every time it realises real people are being killed, it goes mad and has to be replaced.
The Post-Human Battlespace
An analysis of current experience suggests that there is no place for humans in the battlespace of the future. We’re obsolescent: too slow, too imprecise, too riven with human frailties. Yet the science-fiction writers disagree. Humans still roam the war zones of their futures, working with, against and even for the machines. There is an obvious literary conceit behind this – it is hard to invite a reader to invest emotionally in the fate of robots – yet their predictions may also prove true. Alicia DeVries becomes a member of the Imperial Cadre, the Empire’s legendary Drop Commandos, in part because of her neurological ability to handle three implanted processor nodes that let her operate her own cybernetic systems and wirelessly control external ones. Joshua Calvert of Peter F Hamilton’s Night’s Dawn trilogy is implanted with neural nanonics that enable him to operate his spacecraft during high-gravity manoeuvres without lifting a finger, just by thinking about it; his internal membranes have been reinforced to cope with the extremes of spaceflight. Chris Moriarty’s Catherine Li has ceramisteel implants that make her stronger, faster and tougher than any human. It appears that all that is necessary to keep a corner of the fictional battlefield human is for humans to become post-human, enhancing their own bodies and minds with machines – or perhaps with genetic modifications, as in Richard Morgan’s Black Man.
But is this also predictive? BrainGate technology has already enabled Matthew Nagle, a quadriplegic, to operate computers, play games and change TV channels just by thinking about it. Speaking of the BrainGate chip, he said, ‘I do feel like it was a part of me… They plugged me in and it was go, go, go. It was cool, man.’ Soldiers who lose limbs are being fitted with powered prosthetics that respond to their thoughts – and even provide sensation. The US military reports that some 167 amputees have returned to full active duty, some deploying to the battlefield. Gene therapy is already a reality, with dysfunctional genes being altered or replaced for therapeutic purposes; the same techniques can be applied to enhance a healthy genome. Bioengineers recently produced synthetic human skin, with gene-modified spider silk woven in, that is bullet-resistant. And of all DARPA’s bioengineering efforts, its brain-interface project is ‘the most lavishly funded,’ with cybernetic devices integrated into the soldier’s body, thought-controlled machines and vehicles, and even network-enabled telepathy all being discussed as serious possibilities.
Perhaps von Clausewitz’s dictum doesn’t need to be entirely scrapped, but merely amended. Perhaps in the future, war will be fought by post-human beings…