Techno-Utopianism, Coded Ethics and Some Confusion on the Three Laws

A recent installment of The Inquiry asked "Can we teach robots ethics?" The discussion in the episode provides some really interesting food for thought. Anyway, it reminded me that, for a while, I've wanted to write about something that troubles me in discussions of AI, and particularly military AI uses: the idea that coding ethics is either possible or desirable.

This is a fairly quick stab at the topic, and is mainly framed around Arkin's Governing Lethal Behavior from 2008. It's in no way intended to be an exhaustive treatment of the topic – or indeed, of Arkin's work… But hopefully, this kind of criticism can generate useful further discussion.

This is Part One. Part Two to follow.

Ronald Arkin's "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture" (Arkin, 2008) is widely regarded as a keystone text in the field of ethics and autonomous military (non)lethal technology. Arkin states his aim thus:

“[To] provide the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement.” (Arkin, 2008)

When taken on its own terms, it is probably fair to conclude that the work succeeds in attaining this objective; however, this is not to say that it is without fault. The work does present a coherent and timely review of many of the issues, both current and potential, facing scholars, policy makers and military leaders concerning autonomous technology, yet it is also limited by a surprisingly uncritical ontology of this technology – which at times verges on techno-utopian idealism. Similarly, whilst Arkin does provide the reader with a full and, at times, thought-provoking account of the recent history and future potentialities of autonomous warfare, this too is limited by his exclusion of all non-technical-efficacy based considerations, and by an occasional reliance on reductionist logics in order to advance its central thesis.

This review seeks to address these limitations, whilst also providing a discussion of the range and depth of contribution that this paper makes to the field more broadly. Ultimately, it will be shown that, for students of International Relations hoping to understand how and why autonomous military technologies are being developed and deployed, Arkin's is an invaluable, if somewhat flawed, work.

 

Laws of War / Laws of Robotics

The central contention of Arkin’s work is that autonomous machines will not only be capable of ethical conduct on the battlefield, but that they will in fact be able to operate in a more controlled, more ethical – indeed, more compassionate – manner than their human counterparts. Arkin links the ethical conduct of war explicitly to the legal rules and norms that are codified in universal Laws of War[1] (LoW) and in theatre specific Rules of Engagement (RoE). Robots, the argument goes, will be better able to see through the fog of war, will not be hamstrung by emotional responses such as fear or anger, and thus will be better equipped to deliver requisite lethality within a strictly defined legal-ethical framework.

There is, of course, a fairly substantial body of existing literature which criticises the notion that ethical conduct can be accurately engendered by codification – either legally in convention or digitally on a circuit board. Schwarz (Schwarz, 2013) and Sparrow (Sparrow, 2007), for example, have criticised the predominance of algorithmic calculation in military and strategic discourse and practice. Similarly, Bousquet, Schwarz, Rappert and Der Derian (Bousquet, 2009; Der Derian, 2007; Rappert, 2007; Schwarz, 2013) have undertaken work that explores some of the problems with emerging techno-managerial practices in military/strategic planning which seek to justify modes of violence as universally ethical or technically "correct."

It is beyond the scope of this review to make meaningful headway into these wider philosophical debates. Rather, the present work takes them as a starting point in order to foreground the absolute centrality of codes – whether legal or digital – to Arkin’s work, and to introduce some of the potential limitations of his approach.

As something of an afterthought to his initial discussion of the ethical behaviour of robots, Arkin confronts his reader with a rather scant dismissal of Asimov's Three Laws of Robotics. Stating that such a discussion "would be incomplete without some reference to [them]" (Arkin, 2008), he goes on to note that:

“…while they are elegant in their simplicity and have served a useful fictional purpose by bringing to light a whole range of issues surrounding robot ethics and rights, they are at best a strawman to bootstrap the ethical debate and as such serve no useful practical purpose beyond their fictional roots.” (Arkin, 2008)

Finally, he quotes Anderson (Anderson, 2008, p. 7) in summarising:

“Asimov’s ‘Three Laws of Robotics’ are an unsatisfactory basis for Machine Ethics, regardless of the status of the machine”.

Whilst such an assertion may be disagreeable to some,[2] it is not, in and of itself, particularly problematic. Indeed, few, if any, would attempt a serious argument that a current or future ethical code or legal paradigm for military robotics should be based on foundational laws sourced from a decades-old science-fiction universe. The issue arises – and this is the justification for such exhaustive treatment of this point in this review – when this dismissal of Asimovian[3] law is considered as an aspect of the broader treatment of law, of codes and of technology throughout the text.[4] What emerges from this undertaking is the observation that Arkin's misapprehension of The Three Laws becomes somewhat paradigmatic of a more general misalignment between his legal and techno-utopianism and the model of ethical coding he seeks to prescribe.
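It is perhaps worth pausing to see why Anderson's verdict is so easy to reach. By way of illustration only – this is a deliberately naive sketch of my own, not a serious proposal or anyone's actual implementation – the Three Laws might be rendered as a prioritised rule check along the following lines (in Python; all parameter names are hypothetical):

```python
# A purely illustrative, naive rendering of Asimov's Three Laws as a
# prioritised veto chain. All parameter names here are hypothetical.
def permitted_by_three_laws(harms_human: bool,
                            inaction_harms_human: bool,
                            disobeys_order: bool,
                            endangers_self: bool) -> bool:
    """Naive prioritised reading of the Three Laws.

    Note what this signature demands: someone (or something) upstream
    must already have decided what counts as "harm", "a human", "an
    order" and "inaction". The Laws are elegant precisely because they
    leave all of that out.
    """
    if harms_human or inaction_harms_human:   # First Law
        return False
    if disobeys_order:                        # Second Law (subordinate to First)
        return False
    if endangers_self:                        # Third Law (subordinate to both)
        return False
    return True
```

The control flow here is trivial; the entire difficulty has simply been exported into the four boolean inputs. As will be argued below, Arkin's treatment of the Laws of War risks a structurally similar sleight of hand.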

The argument will proceed as follows: Firstly, the work will provide some brief background on the technology, before moving on to discuss in more detail how, in Arkin's work, lethal autonomous machines are encoded with the ability to behave ethically through a hardwiring of LoW and RoE, and the capabilities this is alleged to give them as ethical combatants. It will then return to the ideas of Asimov, explaining both what the Laws of Robotics are and what can be garnered from Arkin's treatment of them. The final section draws this discussion together, comparing the nature and effect of encoded LoW with that of Asimov's laws, and suggesting that they may share similar limitations. Finally, it will be shown that the expectation of ethical conduct for Arkin's robots is somewhat out of step with the capabilities he argues they ought to hold.

Increasingly Autonomous Lethal Technology

Recent advances in unmanned and semi-autonomous military technology have been well documented, both in the work presently being reviewed and elsewhere.[5] Arkin himself identifies some current key technologies which operate in a semi-autonomous supervised capacity, but which have the capability built in for fully autonomous operation. He also points to the developments taking place in several militaries towards Armed Robotic Vehicles (ARVs), Unmanned Aerial Vehicles (UAVs) and various other tactical and strategic unmanned systems. The sophistication and intended deployment of these systems vary: from a static platform deployed on the Korean DMZ, with a detect-and-engage range of up to 4km in daylight, and a similar "automated kill-zone" platform deployed by the Israeli Defence Force (IDF), through to a VTOL tactical UAV developed for the US Navy.

In each instance, current practice and discourse focuses on the need to maintain an element of human control and supervision in the decision loop. Even in instances where these weapons can independently identify, monitor and lock on to targets, there is still a necessity for a human operative to give a command to fire or engage. However, Arkin also makes clear that there is a strong trend, indeed a sense of near-inevitability, towards increasing levels of automation and, ultimately, fully autonomous, unmanned operation. For example, Arkin quotes an IDF division commander as saying that "at least in the initial phases of deployment, we're going to have to keep a man in the loop." Arkin, probably quite rightly, takes this to imply "the potential for more autonomous operations in the future" (Arkin, 2008).

As further evidence in support of the trend towards increasing autonomy and lethality in military technology, Arkin quotes a 2007 US Army Solicitation for Proposals, stating:

“Armed UMS [Unmanned Systems] are beginning to be fielded in the current battlespace, and will be extremely common in the Future Force Battlespace… This will lead directly to the need for the systems to be able to operate autonomously for extended periods, and also to be able to collaboratively engage hostile targets within specified rules of engagement… with final decision on target engagement being left to the human operator…. Fully autonomous engagement without human intervention should also be considered, under user-defined conditions, as should both lethal and non-lethal engagement and effects delivery means.” [Boldface added for emphasis by Arkin]

For Arkin, then, the question of fully autonomous lethal robots is not so much if, but when. He is certainly not alone in holding such a belief. In fact, despite the observable presence of restraint in the development and deployment of fully autonomous, unmanned lethal technology, many policy and military debates are equally characterised by a techno-rationalist positivity towards advances in autonomy and lethality. In these discourses, the enhanced utility of force, the reduced risk to human personnel and an increased comparative advantage in capability are intermingled with arguments emphasising the precision and ethicality of technology capable of advanced targeting and exact strikes.[6]

The advantages of autonomous lethal robots that are selected for attention in Arkin's work are typical of those highlighted by their proponents. In focussing on the potential for robots to be programmed to act ethically, however, Arkin notes the following key points:

  1. The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. UxVs do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate without reservation by a commanding officer.
  2. The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.
  3. They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events. In addition, “Fear and hysteria are always latent in combat, often real, and they press us toward fearful measures and criminal behaviour” [Walzer 77, p. 251]. Autonomous agents need not suffer similarly.
  4. Avoidance of the human psychological problem of "scenario fulfilment" is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the USS Vincennes in 1988 [Sagan 91]. This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that only fit their pre-existing belief patterns, a form of premature cognitive closure. Robots need not be vulnerable to such patterns of behaviour.
  5. They can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real-time. This can arise from multiple remote sensors and intelligence (including human) sources, as part of the Army’s network-centric warfare concept and the concurrent development of the Global Information Grid.
  6. When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behaviour in the battlefield by all parties and reporting infractions that might be observed. This presence alone might possibly lead to a reduction in human ethical infractions.

It is worth noting that robots, as described above, will not only be capable of operating more ethically than their human counterparts, but are also deemed capable of monitoring and reporting on human behaviour in order to improve the ethical conduct of human soldiers. They will be able to better assimilate and act upon battlefield information, they are free from the compromising imperative of self-preservation, and they will be capable of operating with cool efficiency regardless of circumstance.

There are, of course, several criticisms or counter-arguments that can be levelled at such a positive outlook on the likely potentials and implications of autonomous lethal machines. Scholars from within International Relations have exhibited a general tendency to characterise technology in purely instrumental terms, yet even within this community, there has been a notable hesitation to discuss lethal autonomous robots (LARs) purely in terms of their efficacy. Furthermore, there is a burgeoning sub-field within International Relations scholarship that looks to engage with thought, methods and philosophy from outside the discipline in order to better understand the important role that technology, and technological development, plays in world politics (Aradau, 2007; Dillon & Reid, 2001; Henriksen & Ringsmose, 2015; Linden, 2015; Schwarz, 2013; Smith, 2002).

Nick Srnicek (Srnicek, 2011), Michael Bourne (Bourne, 2012), Columba Peoples (Peoples, 2009) and Antoine Bousquet (Bousquet, 2009) (as well as a number of others) have made valuable contributions to this relatively novel body of literature, and their work is instructive not only in demonstrating the enlightening potential of such interdisciplinary work, but also in directing future scholars towards rich seams of thought worthy of further exploration. The present work hopes to make a small contribution to this emerging tradition, borrowing most heavily from the Science and Technology Studies (STS) derived notion of Sociotechnical Imaginaries (Jasanoff & Kim, 2015).

This review intends, at least in part, to point towards how imaginaries and conceptualisations of technology – and of its relation to social, political and legal ideas – impact upon the kinds of futures that can be imagined and the types of worlds that can be created. Specifically, it hopes to demonstrate – in a somewhat shorthand fashion – how Arkin's generously positive understanding of the reasoning capabilities and infallibility of autonomous robotics interacts with his faith in the ethical reach and purchase of the Laws of War to create an almost utopian image of robotic capabilities in carrying out ethical warfare.

I would also note that the present work is likely to offer far more questions than answers.

Arkin’s Robots and the Laws of War

The ability of robots to behave ethically in combat is reliant, in part, on their apparently superior ability to gather and process information, and consequently, to make better decisions resulting in more ethical actions. However, the capacity for this decision making process to deliver ethical outcomes is also predicated on the embedding of an ethical code or framework into the decision making capabilities of the technology. That is to say that lethal autonomous robots must be provided with data, code and decision making algorithms that recognise what constitutes "ethical action" and "ethical decision."

For Arkin, this ethical decision making capability can be achieved through a successful embedding of existing Laws of War into the decision making architecture of autonomous robots. The issue, in its simplest articulation, is whether this can be engineered to a satisfactory level of precision. Once again, there is a considerable body of literature which challenges, at a very fundamental level, whether such an endeavour is capable of providing an ethical means to engage in conflict – some of which Arkin actually alludes to, but is quick to dismiss.
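To make the shape of the proposal concrete, consider what the simplest possible form of such an embedding might look like. The sketch below is mine, not Arkin's – his actual architecture, the "ethical governor", is considerably more elaborate – and every name, rule and threshold in it is a hypothetical assumption:

```python
# A deliberately minimal, hypothetical sketch of a "coded ethics" veto
# layer for a lethal autonomous system. This is NOT Arkin's ethical
# governor: all names, rules and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TargetAssessment:
    is_combatant: bool                # discrimination judgement -- assumed given
    identification_confidence: float  # 0.0 to 1.0, from sensors/intelligence
    expected_civilian_harm: float     # proportionality input -- assumed quantifiable
    expected_military_advantage: float

def engagement_permitted(t: TargetAssessment,
                         roe_authorises_lethal_force: bool) -> bool:
    """Return True only if every encoded constraint is satisfied."""
    if not roe_authorises_lethal_force:     # theatre-specific Rules of Engagement
        return False
    if not t.is_combatant:                  # principle of distinction
        return False
    if t.identification_confidence < 0.95:  # "act conservatively" under uncertainty
        return False
    # Crude proportionality test: expected civilian harm must not exceed
    # the anticipated military advantage.
    if t.expected_civilian_harm > t.expected_military_advantage:
        return False
    return True
```

As with the Three Laws sketch above, the control flow is the easy part; every input the function consumes – combatant status, confidence, harm, advantage – presupposes precisely the contested judgements of distinction and proportionality that the literature cited above problematises.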

Refusing to acknowledge the substantive effects of increasingly autonomous and lethal technology doubtlessly weakens Arkin’s case. For example, it seems naïve to neglect the importance of dramatically reducing the pooling of risk (by removing considerations of friendly mortality for one side) for both jus ad bellum and jus in bello calculations of force and conduct. Similarly, critics have highlighted the dangers of lowering the bar for entry into conflict by making war a riskless endeavour, and others have pointed out the problem of responsibility and accountability in the conduct of autonomous warfare (Anderson, 2008; Ben-Naftali & Triger, 2013; Schwarz, 2013; Sharkey, 2008). Others have gone further still and discussed whether it is possible, let alone desirable, to measure ethics in terms of universal or technical conventions of good or, indeed, if a means/end algorithmic calculation can ever truly comprehend what it means to act ethically.[7]

It is clear that Arkin does believe that an effective autonomous ethical decision making architecture is likely to be possible in the near future. It is also clear that this architecture should be designed in such a way that it can govern behaviour which corresponds to, and is limited by, the existing legal and normative framework of international law (the Laws of War). From a critical perspective, it is necessary to consider (at least) three further questions at this point: Firstly, what understanding of technology and what understanding of law makes this embedding of LoW possible and likely to be effective? Secondly, how will autonomous technology interact with the legal/normative codes of LoW, and are these codes suitable for the governing of autonomous robot behaviour? Finally, what capabilities does a robot need to possess, in this understanding, in order to effectively carry out the proposed ethical mandate?

The Laws of War

The Laws of War, often referred to as international humanitarian law (IHL), are generally understood as the legal protections afforded to certain people during wartime, the general rules governing acceptable means of combat and, in a broad reading, the laws prohibiting certain types of weapon in certain scenarios. The main bodies of international humanitarian law are the Geneva Laws and the Hague Laws. Whilst these laws have frequently been cast in a tragic light – "if international law is, in some ways, at the vanishing-point of law, the law of war is, perhaps even more conspicuously, at the vanishing-point of international law" (Lauterpacht, 1946) – it is also true that their direct invocation has become ever more commonplace.

International Law, in Arkin’s argument, is non-political, non-partisan and inelastic. He is certainly not alone in this view. The relative isolation of legal and strategic analyses from each other has been observed by a number of authors (Ku, Diehl, Simmons, Dallmeyer, & Jacobson, 2001; Ratner & Slaughter, 1999; Smith, 2002). Whilst Arkin does not go so far as to characterise IHL as entirely static and Archimedean, it is certainly fair to say that his conception of the Laws of War is somewhat anaemic. In much the same way a machine may be imagined to operate based on a simple, incorruptible code, Arkin argues that combatants and their leaders are subject to the code of IHL. The failure of the machine to carry out its assigned task may be a result of mechanical failure or a broken electrical cable being unable to deliver an instruction, but the code itself cannot fail. Similarly, if LoW is imagined as an infallible and entirely impartial code, any humanitarian or ethical failure becomes a problem of effective communication or implementation.

This narrow understanding of IHL and its workings is demonstrated in Arkin's work both by the brevity it receives in his discussion and by his contrasting emphasis on the statistical failure of human combatants to properly adhere to its regulations. Such a paucity of real engagement with the discursive, political and normative constitution of IHL does facilitate the argument that these laws, if effectively coded into autonomous lethal machines, could improve the ethical conduct of war; however, it also serves to entirely neglect its complexity, asymmetry and political disposition.

Though contrasting in both focus and approach, the work of both Thomas Smith and Claudia Aradau is particularly instructive in highlighting the mutually constitutive nature of global politics and international law (Aradau, 2007; Smith, 2002). Smith focuses, for example, on how the notion of military necessity has been deployed in an ever more expansive manner in high-tech warfare, alongside an increasingly sophisticated legal-technical discourse, often in order to justify an erosion of the distinction between military and civilian targets. He emphasises that, rather than international law remaining silent in these scenarios, it is employed more and more frequently, but that its new ubiquity has come at the cost of "artificiality and elasticity" (Smith, 2002, p. 357). Aradau, in discussing the use of extraordinary rendition, torture and extra-normal internment in the war on terror, explores the problems that arise when international law (or, at least, its interpretation) becomes contingent upon the perceived political necessities of a given moment (Aradau, 2007).

These considerations are more than just interesting academic musings. That the same international actors with the capability to develop and deploy advanced weaponry – including autonomous lethal robots – are also able to influence and interpret the course of IHL in a manner that can be used to endorse and legitimise certain weapons and modes of conflict, while condemning and proscribing others, is surely of great significance. In fact, similar arguments have been made by a number of academics and experts, with varying degrees of optimism about the potential for ethical conduct being guided and/or embedded in IHL.[8],[9] A particularly critical summary is provided by af Jochnick and Normand, who argue that legal warfare has never been necessarily more humane or ethical than illegal warfare: "The development of a more elaborate legal regime has proceeded apace with the increasing savagery and destructiveness of modern war" (Jochnick & Normand, 1994).

The effectiveness and uneven application of weapons prohibition regimes also show the partiality of international law. The strength of the prohibitions on anti-personnel landmines and biological weapons, or the restricted use of certain types of conventional weapons,[10] for example, sits in stark contrast to the political and legal credibility afforded to hi-tech weaponry such as drones and precision bombs. In the more critical assessments of this trend, some commentators have argued that this demonstrates the willingness of hi-tech capable states to prohibit or condemn those weapons that are accessible to poorer states, but to create legal-technical justifications for those weapons that maintain their military advantage (Jochnick & Normand, 1994; Smith, 2002).

The point made here, perhaps at length, is that the relationship between international humanitarian law and the practice of ethical warfare is far more complex than Arkin articulates. The law is never just a code, and never exists in isolation from the world it purports to regulate. It is instructive to recall Clifford Geertz' observation that "law… is a distinctive way of imagining the real." The law may make certain acts appear concrete or legitimate, but it is also very much a discursive and ideational construction that exists in relation to – not abstraction from – the political and the particular. Whilst this does not preclude the legal from intersecting with the ethical, it certainly demonstrates that they are not necessarily symbiotic.

Lethal Autonomous Technology and Coded Ethics

In much the same way as Arkin’s conceptualisation of international law represents a simplification of the complex reality, it is also true that his imaginary of advanced autonomous lethal technology is rather too neat. Sheila Jasanoff, in her landmark work, Dreamscapes of Modernity, has argued that “truthfulness in the social sciences today… demands simultaneous attention to more forms of agency, more pathways of change, and more narratives of causation” (Jasanoff & Kim, 2015, p. 16). For scholars hoping to critically engage with the transformative capacity of autonomous lethal technology, it is doubtless the case that a more nuanced approach – one that is capable of comprehending the interconnectivity between ideas, law, technology and the conduct of war – is necessary.

Arkin appears to consciously frame technology in instrumental terms, seeing autonomous robots as tools. Autonomous lethal robots exist, in this imagining, as discrete units. They are not seen as having substantive or transformative capacity, and they may be largely understood in terms of their application and efficacy. They do not do anything by virtue of their existence; it is only through interaction with, or manipulation by, a human agent that the technology may act. It is this framing of technology as unproblematic and instrumental that makes it possible for ethical conduct to be governed by the embedding of LoW into the machine's hardware.

This section will make two arguments: firstly, that the relationship between lethal autonomous robots, LoW and ethical conduct is likely to be mutually constitutive and subjective; and, secondly, that the technological or algorithmic negotiation of ethical moments is a deeply flawed practice.

[ more to follow…]

*****

[1] The Laws of War, for this purpose, are (relatively) universally accepted international legal apparatus such as the Geneva Conventions.

[2] Asimov’s work has no shortage of devoted, vocal proponents – from Nobel Prize winner Paul Krugman to actor Robin Williams to a number of prominent academics, including J.L. Gaddis

[3] Yes, this is a word. Collins English Dictionary defines Asimovian as "referring to or reminiscent of the work of Isaac Asimov". Sourced 31/01/2017 at: https://www.collinsdictionary.com/dictionary/english/asimovian

[4] The present author is also aware that, in choosing to home in so closely on one paragraph containing a somewhat tertiary argument, they are leaving themselves open to a charge of not seeing the wood for the trees, or indeed, of making mountains out of molehills. It is, of course, my hope that the worth of this endeavour will be demonstrated through the course of the work that follows.

[5] For other useful reviews of this technology, please see, for example, (Sparrow, 2007; Bousquet, 2009; Alston, 2010; Garcia, 2015)

[6] A full discussion of the discourse surrounding precision/ethical deployment of autonomous technology is beyond the scope of this review, but for more information, the following texts are a good place to start: (Bourne, 2012; IHCR, 2012; Rappert, 2007; Schwarz, 2013; Sparrow, 2007)

[7] The present work does not take up this line of criticism, despite its obvious importance. A full engagement with this debate is beyond the scope of this work; however, there is a relatively small, but rich, body of literature in this field, drawing on, for example, the work of Walter Benjamin or Martin Heidegger. Interesting contributions have been made by (Ceyhan, 2008; Schwarz, 2013; Schwarz, 2016)

[8] Again, a thorough-going discussion of this field is beyond the scope of the present work. For good examples of some of the work in this field, see (Gathii, 1998; Jochnick & Normand, 1994)

[9] It is probably also worth noting that Hedley Bull, in 1976, said of the development of international regulations and agreements for the control of (nuclear) arms: "… The concrete meaning they have acquired serves to rationalize the existing distribution of power… [while] Soviet-American cooperation in arms control serves universal purposes it inevitably serves special or bilateral purposes also. These special or bilateral purposes reflect the preference of the two great powers for a world order in which they continue to enjoy a privileged position" (Bull, 1976)

[10] The Convention on the Prohibition of Anti-Personnel Mines, the Biological Weapons Convention and the United Nations Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, respectively.
