
Lethal Autonomous Weapon Systems and AI: Developments and Resistance in 2018

Introduction

Drones and robots that act independently and without human control are staple components of dystopian fiction. Besides their iconic appearances in the action movie classics of the “Terminator” series, they can also be found in a variety of other works such as “Blade Runner”, the Takeshi Kovacs trilogy and its spin-off series “Altered Carbon”, the “Deus Ex” series, and numerous other fictional worlds. When it comes to autonomous weapon systems, emotionally charged terms like “killing machines” or even “killer robots” come up quickly. Accordingly, the reception and classification of these systems and their development usually focuses primarily on the risks and the associated uncertainties.

This article does not contain any ethical assessment of autonomous weapon systems on my part. There are already more than enough media doomsayers who devote themselves extensively and loudly to various technological horror scenarios in which machines rebel against their creators. Instead, this is a sober attempt to provide you with an overview of the topic of autonomous weapon systems – from their definition and historical development to the projects and systems that have become known to the public so far.

At the same time, the activities of various opponents of these developments will be examined. They seek to point out ethical and moral problems in the use of these systems and to influence the legal and technological developments related to them.

This article will be further developed and updated as new developments occur. As usual, I will flag these updates as notes at the end of the article.

 

BAE Systems Autonomous Drone “Taranis”, © BAE Systems / Defense Advanced Research Projects Agency

 

What are Lethal Autonomous Weapon Systems?

Lethal Autonomous Weapon Systems (“LAWS”) can identify, select, attack and eliminate targets without human intervention. They can analyze and classify sensor data independently, navigate on their own and control various weapons such as machine guns, cannons or missiles. LAWS can move around on wheels, tracks or artificial legs. Depending on their intended purpose, they can also be stationed at fixed positions, e.g. on naval ships or along borders.

The immense advances in Artificial Intelligence (AI) have been fundamental to these high levels of autonomy. Likewise, the considerable advances in video and sensor technology, data transmission, processing and analysis, as well as numerous other technological areas are essential. All of these factors combined form the basis for the development of lethal autonomous weapon systems – their evolution from fictional actors into real-world operational military applications and products.
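To make this identify-and-select pipeline a bit more tangible, here is a minimal, purely illustrative sketch in Python. Every name in it (Detection, select_target, the labels and the confidence threshold) is a hypothetical simplification; real systems rely on far more complex sensor-fusion and classification stacks.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    """One classifier output for a single sensor frame (hypothetical)."""
    label: str         # e.g. "vehicle", "person", "structure"
    confidence: float  # classifier certainty, 0.0 .. 1.0
    position: tuple    # sensor-relative coordinates

def select_target(detections: List[Detection],
                  hostile_labels: set,
                  threshold: float = 0.9) -> Optional[Detection]:
    """Pick the highest-confidence hostile detection, or none at all."""
    hostiles = [d for d in detections
                if d.label in hostile_labels and d.confidence >= threshold]
    return max(hostiles, key=lambda d: d.confidence) if hostiles else None

# Hypothetical classifier output for one frame:
frame = [Detection("vehicle", 0.97, (120, 44)),
         Detection("person", 0.55, (80, 12))]
print(select_target(frame, hostile_labels={"vehicle"}))
```

The point of the sketch is merely to show where the critical decision lives: in a few lines of selection logic operating on classifier output.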

 

Degrees of Autonomy of Lethal Autonomous Weapon Systems (HITL vs. HOTL vs. HOOTL)

The gradations and distinctions between autonomous and semi-autonomous weapon systems are also important for classifying these lethal systems. In particular, the (sometimes fluid) transition between “human in the loop” (HITL), “human on the loop” (HOTL) and “human out of the loop” (HOOTL) systems is important to note.1 These categories describe the degree of autonomy regarding decision-making and chains of command (see the sketch after this list):

  • HITL: The weapon system identifies potential targets independently, but then awaits orders for further action from a human user.
  • HOTL: The execution of actions is mostly autonomous. The weapon system can detect, classify and engage targets independently and without human input. However, human intervention remains possible at any time.
  • HOOTL: The system operates completely autonomously within the respective combat and deployment situation. From target acquisition to offensive actions, no input, approval or decision-making by human users is necessary anymore.
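The difference between the three modes ultimately comes down to where a human sits in the decision chain. The following sketch – with entirely hypothetical interfaces, since no real command-and-control system is this simple – models that gate:

```python
from enum import Enum

class AutonomyMode(Enum):
    HITL = "human in the loop"       # awaits a human order per target
    HOTL = "human on the loop"       # acts on its own, human may veto
    HOOTL = "human out of the loop"  # no human input at any stage

class OperatorConsole:
    """Hypothetical stand-in for a real command-and-control interface."""
    def approves(self, target: str) -> bool:
        return input(f"Engage {target}? [y/N] ").strip().lower() == "y"
    def has_vetoed(self, target: str) -> bool:
        return False  # placeholder: a real system would poll an abort channel

def engagement_permitted(target: str, mode: AutonomyMode,
                         console: OperatorConsole) -> bool:
    """The decision gate between identifying a target and releasing a weapon."""
    if mode is AutonomyMode.HITL:
        return console.approves(target)        # nothing happens without an order
    if mode is AutonomyMode.HOTL:
        return not console.has_vetoed(target)  # proceeds unless a human aborts
    return True                                # HOOTL: no human gate remains
```

Note how HITL blocks by design until a human decides, HOTL merely offers a veto window, and HOOTL removes the gate entirely – the “fluid transition” mentioned above is essentially a question of how this gate is configured.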

 

What do nation states expect from the use of Lethal Autonomous Weapon Systems?

Beyond the advances in weapons technology, many nation states that promote the use of these systems and deploy them in their armed forces expect significant cost savings. The production and operation of unmanned systems is generally considered much cheaper in the long term than classic manned systems. Long training periods can be shortened in the case of semi-autonomous weapon systems and even bypassed completely in autonomous systems. In addition, improvements in combat skills and abilities can often simply be delivered via software update.2

The accompanying reduction of human losses on one’s own side is another motivation for the introduction of autonomous weapon systems that should not be underestimated. The training of human soldiers consumes time and many resources. Almost every modern conflict is simultaneously fought in the media, and hardly anything has as demoralizing an effect on warring populations as the sight of soldiers returning home in coffins. Accordingly, the media impact of conflicts can be softened more easily with (semi-)autonomous reconnaissance and combat systems – wrong decisions have less drastic consequences for the lives of one’s own human troops. In case of loss, combat units can be quickly and easily replaced and put back into use.3

Another perceived benefit is the absence of human emotions and psychological unpredictability: An autonomous robot does not indulge in revenge and is not prone to disproportionate actions. It does not suffer from the psychological consequences of war, nor does it make fatal misjudgments due to mental and/or physical fatigue.4 Critics of LAWS might point out that this also applies to positive human emotional reactions such as compassion – a point we will take a closer look at later.

 

Brief Overview of the Historical Development of (Semi-)Autonomous Weapon Systems

The development of autonomous weapon systems is characterized by the entanglement of two strands: the development of guided munitions and that of unmanned vehicles in general. In order to better understand and classify these systems, it is worth taking a brief look at the historical developments in these areas.

As early as 1898, the inventor and scientist Nikola Tesla – always good for a technological surprise – presented a remote-controlled boat, effectively a guided torpedo.5 However, these developments did not really get going until World War II: the early German rocket program (Vergeltungswaffe 1 / Fieseler Fi 103 & Vergeltungswaffe 2 / Aggregat 4) is not only the precursor of the space programs in East and West but also the grandfather of various rocket projects of the Cold War.6 Self-guided cruise missiles are the long-term results of these complex development processes.

 

 

In the World War II era, remote-controlled vehicles also entered the battlefield on the ground: While the Germans experimented with explosive-carrying tanks like “Goliath” (Sd.Kfz. 302/303a/303b), “Springer” (Sd.Kfz. 304) and “Borgward B IV” (Sd.Kfz. 301), the Soviets tested unmanned vehicles in combat with the so-called “Teletanks” (“Телетанк”). Even if all of these tank projects were still very far away from semi-autonomous functions and were questionable at best in terms of effectiveness and cost/benefit, they can certainly be seen as the technological forerunners of today’s unmanned ground vehicles.7

 

 

Apart from the missile projects mentioned earlier, there was further progress for unmanned systems in the air. The British “Fairey IIIF Floatplane” and its derivatives were used for air defense training.8 The same was true for the mass-produced US-American “Radioplane OQ-2”.9

During the 1950s, these early guided airplanes were used as target and decoy aircraft to divert enemy fire from manned bombers.10 During the Vietnam War, their successor models shifted to serving as flying platforms for risk-free reconnaissance flights and spy missions – a role that remains an important task for today’s unmanned systems.11

Unmanned vehicles were already used as semi-autonomous weapon systems in first successful test runs in the 1970s12; however, it took until October 2001 for the first actual combat mission of a drone: a CIA-led air strike with a Predator drone against a Taliban leader in the Afghan war.13

 

Effects of Brimstone Missile fired by Reaper Drone, © MBDA Systems

 

Modern Systematics of (Semi-)Autonomous Weapon Systems & Current Examples

In the nearly two decades since this first official drone mission, a great deal has happened: Today, there is a large number of different weapon systems with different levels of autonomy and a wide variety of areas and fields of application. At the same time, the number of countries and international corporations that are able to implement their own programs for the construction and operation of such systems continues to grow.

Currently, unmanned weapon systems are primarily classified by their area of application or the medium of their operating environment. Thus, among other things, a distinction can be made between unmanned aerial, ground, surface and underwater vehicles. In this section of the article, these different areas are considered and each is illustrated with specific examples.

This section features only a few selected armed systems. The number of systems used primarily for reconnaissance or other purposes is even more extensive than the already broad selection of (semi-)autonomous weapon systems. Accordingly, the following list does not claim to be complete (it is far from that). The aim is rather to give a first impression of the current possibilities and circumstances.

 

Unmanned Aerial Vehicles (UAV)

Due to the historical developments and the medium of their operations, Unmanned Aerial Vehicles are currently the most advanced area of (semi-)autonomous weapon systems.

The two reference drones of General Atomics are downright iconic: the MQ-1A/B “Predator” and the MQ-9 “Reaper”. They are the pioneers of all modern drones of the early 21st century. Further developments can be found, for example, in the form of Northrop Grumman’s X-47B drone and the “Taranis” system developed by BAE Systems on behalf of the UK Ministry of Defence.

Northrop Grumman’s MQ-8 “Fire Scout” drone, in turn, is based on the design and flight characteristics of classic helicopters.

 

 

Unmanned Ground Vehicles (UGV)

In the field of Unmanned Ground Vehicles, the systems are mainly based on classic tracked and wheeled vehicles.

The Foster-Miller / QinetiQ “TALON SWORDS” & “MAARS” systems fall into this category, as does Milrem’s “THeMIS” from Estonia. Russia is also developing such autonomous tracked vehicles; the “Platforma-M” drone is a well-known project in this regard.

The Israeli “Guardium” system is also worth mentioning. It was developed by G-NIUS, a collaboration between Israel Aerospace Industries and Elbit Systems, and has been in use by the Israeli army for several years. It is mainly used for border security and offers both semi-autonomous and fully autonomous modes of operation.14

 

 

Unmanned Surface Vehicles (USV)

Unmanned Surface Vehicles are currently still a comparatively small segment of the systems in use. Nevertheless, developments have also accelerated in this area.

One of the most popular models so far is the Israeli “Protector USV” by Rafael Advanced Defense Systems, which already exists in numerous versions for different fields of application such as coastal protection. Among other things, it is currently being considered to deploy the small drone boats in swarms, supported by larger manned “motherships”.15

Recent developments include the “Sea Hunter” craft developed by Vigor Industrial in cooperation with DARPA, which is currently still in sea trials. The trimaran-design drone is testing new concepts of navigability, and its applications will range from mine clearing to active submarine hunting.16

 

 

Other Types & New Developments

In the future, autonomous systems are also to be expected in space, and these systems will most likely be upgraded for combat missions as well. A first taste of this are the test flights of the X-37B space drone by Boeing Defense. Over the years, the space drone has sparked both international fascination and skepticism with its top secret cargo and its many completed missions, some of which lasted for years uninterrupted in orbit.17

 

X-37B back on earth after 20 months in space, © Boeing Defense / U.S. Air Force / DoD

 

Apart from the more classic vehicle variants and deployment models, however, there are also projects trying out novel concepts for drones and robots – for example, the kind of movement most familiar to us: on two or more legs. Boston Dynamics’ “BigDog” caused a lot of astonishment around the world a few years ago, especially with its natural-looking movements, even in adverse conditions such as walking on ice. Meanwhile, Boston Dynamics has created the “SpotMini” system, a new and significantly evolved generation of four-legged friends. At the same time, the “Atlas” system is in the starting blocks with a more humanoid approach to design and movement.18

 

 

Apart from that, spider-like robots that move on several legs are now being considered in research and development.19 The locomotion of snakes20 and worms21 also makes sense for some scenarios.

Much as the appearances and musculoskeletal systems of sea creatures and birds once inspired people to conquer the oceans and the skies, we can now apply proven concepts from nature to novel machine designs – thanks to advances in material sciences and robotics.

 

 

Resistance to AI-based Autonomous Weapon Systems

The opponents of the military use of autonomous weapon systems are as multifaceted as their respective arguments. In addition to NGOs and state actors, resistance has recently formed in science and research as well as within the global corporate world.

The arguments of these factions against the development and deployment of lethal autonomous weapon systems include:

  • Ethical concerns: Since fully autonomous weapon systems may no longer require human input, this could lead to an uneven and immoral form of warfare in which machines and human troops are pitted against each other. At the same time, technological disparities between conflicting parties could become even more serious than today.
  • Increase of conflicts: By transferring risk from human soldiers to more easily replaceable machines, the social inhibition against waging armed conflicts is significantly reduced. This could lead to more frequent and faster military missions by nation states that use such weapon systems, since war fatigue among their populations may be kept low for longer.
  • Uncertain allocation of responsibility & accountability: The different levels of autonomy of these systems may make it increasingly difficult to assign clear responsibilities in scenarios in which, for example, the civilian population gets hit or international law gets violated – making it harder to detect and prosecute war crimes.
  • Loss of control: Fears that fully autonomous or AI-enhanced weapons may tend to become unmanageable. This includes, but is not limited to, “Terminator”-like scenarios in which “killer robots” try to seize control. Critics also point to the possibility that despots, adversaries or unauthorized third parties may manipulate access to these systems and/or their programming and could cause massive damage.

 

Demands for International Regulations

Since 2014, the United Nations has also had the issue of lethal autonomous weapon systems on its agenda. For example, experts at the UN Institute for Disarmament Research (UNIDIR) deal with the subject every year, analyze developments and seek to create an internationally valid framework. In mid-April 2018, however, China brought new momentum to the LAWS debate at one of the UN meetings in Geneva: In a call to all nations, China proposed “to negotiate and conclude a succinct protocol to ban the use of fully autonomous weapon systems”.22

Another 22 countries have joined this call for a ban and are pushing for an agreement by the end of 2019. Other large countries such as Germany and France have also expressed their concerns, but are for now advocating non-binding directives as interim solutions.23

An unusually rapid change of positions by UN standards – especially when you consider that just 4 years ago only 5 states (Cuba, Ecuador, Egypt, the Vatican and Pakistan) had explicitly spoken out in favor of such a ban, while otherwise there was only cautious skepticism among the most influential states.24

 

AI Researchers boycott Research Institution

The international research community has also recently been plagued by doubts. The South Korean university KAIST is currently facing a boycott by scientists: More than 50 researchers from the field of AI have refused to cooperate with the institution since the beginning of April 2018. The reason lies in its long-standing and intensive cooperation with the defense corporation Hanwha Techwin (formerly Samsung Techwin).25 The application of AI research in this collaboration causes discomfort among the scientists, as they consider fully autonomous lethal weapon systems without human users or supervision to be a mistake.26

 

Open Letter from the Global Robotics & AI Scene

Together with more than a hundred other CEOs of robotics and AI companies, mainly from the US and the UK, high-tech entrepreneur Elon Musk published an open letter warning of a global arms race with lethal autonomous weapon systems. The letter was published at the beginning of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne.27

The signatories complain about the lack of international regulations and feel particularly responsible because global corporations are in danger of developing autonomous weapons themselves in order to stay technologically competitive. At the same time – in addition to the risks of abuse by hackers and despots – they see the danger that warfare will change in scale and speed in a way that would be hardly comprehensible to people in the near future.28

 

Google Employees criticize Defense Cooperation “Project Maven”

Criticism is also forming within the corporate world, as the example of Google shows: Several thousand employees at different levels signed an open letter to CEO Sundar Pichai in April 2018, voicing their displeasure with the company’s cooperation with the U.S. Department of Defense in the context of the so-called “Project Maven”.29

“Dear Sundar, we believe that Google should not be in the business of war”, the staff letter begins. “We request that you: 1. Cancel this project immediately and 2. Draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology”.

The spicy detail here: Google’s parent company Alphabet has long been represented on the “Defense Innovation Board” of the U.S. military, where Milo Medin, a vice president of Google, serves as a consultant.30 The pacifist signatories of the letter should therefore not be all that surprised, and the project they are now criticizing will certainly be neither the first nor the last of its kind at Google to connect research on AI and autonomous weapon systems.

 

Microsoft’s Ethical AI Committee

In contrast to the open letter from the Google employees, Microsoft seems to favor a top-down approach: Eric Horvitz, Head of Microsoft Research, said at a conference on Ethics and Artificial Intelligence in Pittsburgh on April 9th, 2018, that the company wants to ensure that its own technological developments and research in the area of AI are used only in an “ethical way” and not for military or otherwise “unethical” purposes.31

For this purpose, a number of guidelines have been formulated, overseen by a special committee called “Aether” (“AI and Ethics in Engineering and Research”). This committee examines how customers use the company’s technologies and whether and with whom the company should maintain business relationships at all. According to Horvitz, Microsoft has already rejected and canceled “significant deals” and established restrictions in other business relationships. For example, this affected the use of various developments in facial recognition technology: Microsoft offers cloud-based “cognitive services” that incorporate various algorithms for recognizing faces and even emotions – functions that also play a significant role in the context of the targeting systems of autonomous weapons.32
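To illustrate the kind of face-detection building block such services are built on – emphatically not Microsoft’s Cognitive Services API, but the open-source OpenCV library as a stand-in – here is a minimal sketch; the input file name is a placeholder:

```python
import cv2  # pip install opencv-python

# Pre-trained Haar cascade for frontal faces, shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("sensor_frame.jpg")          # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale

# Returns (x, y, width, height) bounding boxes for every detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")
```

A detector like this only answers “where is a face?”; identifying whose face it is – the step relevant for targeting – requires an additional recognition model, which is precisely the kind of capability the cloud services described above provide.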

 

X-47B at dusk, © Joe McNally / National Geographic

 

Comment & Conclusion

With regard to lethal autonomous weapon systems, things are currently simmering on the international level. Skepticism seems to be growing among various states, in the educational and research sectors, and in the global corporate world alike. Resistance against military and security applications of lethal autonomous weapon systems in combination with AI seems to be increasing at a pace as rapid as the astonishing progress of the systems themselves.

If even politically sluggish and otherwise rather long-term-thinking countries like China change their position so rapidly, one should keep one’s eyes open. And if a company like Microsoft thinks it necessary to start its own committee on the subject and, as a result, passes on some juicy deals, while leading AI researchers start to boycott formerly well-paying institutions, then we are seeing clear signs of a big change.

It is clear that the topic is more relevant than ever due to rapid technological developments and pressure from entire industries. Concerns about the multitude of uncertainties and risks associated with the use of such systems appear to be the driving force for most of the parties involved – not just for humanitarian-pacifist reasons, but also out of fear of being technologically left behind by competitors.

To what extent this resistance could lead to practicable, significant and internationally binding measures is, however, quite questionable. As with so many new technologies, the cat has been out of the bag for some time. What is technically possible and what benefits players in the global competition for technological hegemony and power will become reality sooner or later. Greater transparency in how these weapon systems are used could definitely help to prevent abuse, or could at least allow the international community to respond more quickly to some serious BS going down.

While additional international actions and regulations could help to monitor, channel and mitigate global concerns, it would be naive to rely fully on the binding nature of international agreements. For the most part, the sword of the United Nations is dull, as the many global flashpoints of past and present and the general dealings of various states with UN resolutions show.

This raises the question of on which current and future battlefields, and in which forms, we will see more intensive applications of lethal autonomous weapon systems: whether in the complex turmoil of the Syrian war, on the tense Korean border, or in old as well as new (civil) war scenarios around the globe. It remains to be seen if and in which way AI-enhanced autonomous weapons will become reality and whether we will subsequently enter our own dystopian cyberpunk world.

JHS

 

You want to voice your opinion & thoughts on this topic?
You got ideas on how to improve this article?
Feel free to comment & discuss below.

 

  1. Bonnie Docherty: “Losing Humanity” Cambridge, MA: HRW, November 2012.
  2. David Francis: How a New Army of Robots Can Cut the Defense Budget. Fiscal Times, 2013
  3. Gary E. Marchant et al.: “International Governance of Autonomous Military Robots”. Columbia Science & Technology Law Review 12, June 2011, p. 275f.
  4. Ronald C. Arkin: “The Case for Ethical Autonomy in Unmanned Systems”. Journal of Military Ethics 9, no. 4, 2010, p. 332-341.
  5. Jon Turi: “Tesla’s toy boat: A drone before its time”. Article on Engadget, 2014
  6. John M. Logsdon et al.: “Space Exploration”. Article on Britannica, March 2018, p. 3.
  7. Jeffrey L. Caton: “Autonomous Weapon Systems: A Brief Survey of Developmental, Operational, Legal, and Ethical Issues”. U.S. Army War College Press, Strategic Studies Institute, Carlisle, PA, December 2015, p. 5f.
  8. Ron Bartsch, James Coyne, Katherine Gray: “Drones in Society: Exploring the Strange New World of Unmanned Aircraft”. Routledge, New York, 2017. p. 25.
  9. “Radioplane RP-5A Target Drone”. Article / Post of the Western Museum of Flight, 2010.
  10. “1950s & 1960s”. Article on UAV Universe, 2017.
  11. “1960s & 1970s”. Article on UAV Universe, 2017.
  12. Thomas P. Ehrhard: “Air Force UAVs – The Secret History”. Mitchell Institute for Airpower Studies, Arlington, VA, July 2010, p. 12ff.
  13. Chris Woods: “The Story of America’s Very First Drone Strike”. Article on The Atlantic, May 2015.
  14. “Enguard! Introducing the Guardium UGV”. Article on Defense-Update
  15. Article on Naval Drones, October 2012
  16. Katherine Owens: “Navy anti-submarine drone-ship conducts minehunting testing”. Article on Defense Systems, May 2017.
  17. Kiona Smith-Strickland: “Now We Know at Least Two Payloads on the X-37B”. Article on Smithsonian Air & Space, May 2015.
  18. Overview of Boston Dynamics’ different projects, 2018.
  19. Dani Deahl: “This bionic spider can curl up and do somersaults”. Article on The Verge, March 2018.
  20. Modular Snake Robots, Biorobotics Lab Carnegie Mellon University, 2010.
  21. Softworm, Case Western Reserve University Center for Biologically Inspired Robotics Research, 2013.
  22. Sean Welsh: “China’s shock call for ban on lethal autonomous weapon systems”. Article on IHS Jane’s Defence Weekly, April 2018.
  23. Bonnie Docherty: “We’re running out of time to stop killer robot weapons”. Article on The Guardian, April 2018.
  24. Adrianne Jeffries: “Only five countries actually want to ban killer robots”. Article on The Verge, May 2014.
  25. For example, the autonomous Sentry-Gun “Samsung SGR-A1” was developed jointly by KAIST & Techwin and has been in operation for years on the border with North Korea.
  26. Matthew Hutson: “South Korean university’s AI work for defense contractor draws boycott”. Article on ScienceMag, April 2018.
  27. Press release of the Future of Life Institute.
  28. James Vincent: “Elon Musk and AI leaders call for a ban on killer robots”. Article on The Verge, 2018.
  29. Google and the Pentagon are using “Project Maven” to improve video evaluation via the use of AI, which includes improving the accuracy of AI-enhanced autonomous weapon systems. See the press release of the U.S. Department of Defense from July 2017.
  30. Milo Medin’s bio on the Defense Innovation Board, 2018.
  31. Matt Weinberger: “Microsoft has given up ‘significant sales’ over concerns that the customer will use AI for evil, says a top scientist”. Article on Business Insider, April 2018.
  32. Alan Boyle: “Microsoft is turning down some sales over AI ethics, top researcher Eric Horvitz says”. Article on Geekwire, April 2018.
