UK Signals New Autonomous Weapons Doctrine – But What Has Become of the LAWs Verification Debate?

In September 2017, the UK media, along with much of the international technology and defence press, announced that the UK had laid out a new doctrine on autonomous weaponry. The Guardian reported, on 10th September, that Britain’s military would “commit to ensuring that drones and other remote weaponry are always under human control, as part of a new doctrine designed to calm concerns about the development of killer robots.”[1]

The UK’s announced position on autonomous weaponry was released as part of the August 2017 “Joint Doctrine Publication 0-30.2 – Unmanned Aircraft Systems.”[2] As well as catching the attention of the relatively mainstream press, the document unsurprisingly provoked renewed discussion of the role of autonomous robots in the military, and of the feasibility of a potential pre-emptive ban.

The press framed the Ministry of Defence’s new doctrine as an almost direct response to the August 2017 open letter from leading robotics and AI pioneers to the United Nations, urging a ban on the development and use of killer robots.[3] Meanwhile, many of those involved in the campaign against autonomous weaponry commented that the doctrine broke little new ground and, while viewed as an improvement, was much the same as the document it superseded.[4]

One area that has received relatively little renewed interest, however, is the potential role for – and likely form of – verification mechanisms in enforcing either a global ban treaty or simply in holding states accountable to their doctrinal commitments. The attention currently focused on autonomous weapons perhaps suggests that this would be an ideal time to re-engage with the issue of verification in relation to LAWs.

There are, of course, a number of impediments to the implementation of any such verification mechanism. While many of these have been documented in the past, now is an ideal time to remind ourselves of the ways such verification could work, and of how its potential pitfalls are cited as reasons for not supporting a global ban. Moreover, the accelerating pace of technological advance in artificial intelligence and robotics makes this a timely moment for proponents of a ban to reconsider how verification regimes might be implemented before further development.

Definition

The UK’s new doctrine highlights that one of the major obstacles facing any effort to verify a ban is a straightforward linguistic one. The potentially high ceiling for any definition of autonomy is demonstrated by the JDP, which defines autonomous machines as those “with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control.”[5],[6]

The doctrine classes all technologies beneath this ceiling as “automation” rather than “autonomy” – essentially rendering the UK’s commitment not to develop or use autonomous weaponry a fairly hollow one. Thus, while the JDP announces that:

“The UK does not own, and has no intention of developing, autonomous weapon systems”

the definition of autonomous weapons it uses places no restrictions on current or planned developments in autonomy or AI as many other organisations understand them.[7] Any verification regime, then, would have to be capable of satisfying signatories with varying preferred definitions of autonomy.

Monitoring and Openness

These categorical issues also present an initial barrier to the imposition of monitoring for verification. How, for example, in a system that relied on both human and AI decision-making, would it be possible to determine when the AI was responsible for an action, and when a human operator was accountable? At the level of legal enforcement, clarity is required here; otherwise such grey areas will doubtless be open to exploitation.

The UK JDP again provides an opportunity to consider the implications of this at a practical level. For example, the JDP states that:

“[…] the authorised entity that holds legal responsibility will be required to exercise some level of supervision throughout. If so, this implies that any fielded system employing weapons will have to maintain a two-way data link between the aircraft and its controlling authority.”[8]

This, intentionally or otherwise, could provide the basis for domestic or international legal monitoring of autonomy in weapons systems: the “two-way data link” could be continuously monitored. However, guaranteeing the continuous operation of this link, and the access to software and hardware checks necessary for its verification, presents a further challenge. Indeed, in a footnote, the JDP caveats this suggestion by indicating the data link may “not need to be continuous.”[9],[10]
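To make this concrete, a verification monitor could treat the data link itself as the auditable artefact. The sketch below is purely illustrative and rests on assumptions the JDP does not make – that link status is periodically sampled and logged, and that policy sets a maximum permissible gap for “continuous” supervision – but it shows how mechanically simple such a check could be.

```python
from dataclasses import dataclass

# Illustrative sketch only: the JDP specifies none of these structures.
# We assume link status is sampled and logged during a mission, and that
# a policy defines the longest lapse compatible with "continuous" control.

MAX_GAP_SECONDS = 5.0  # assumed policy threshold, not a JDP figure


@dataclass
class LinkSample:
    timestamp: float   # seconds since mission start
    uplink_ok: bool    # controller -> aircraft channel alive
    downlink_ok: bool  # aircraft -> controller channel alive


def link_was_continuous(samples: list[LinkSample],
                        mission_start: float,
                        mission_end: float) -> bool:
    """Check that the two-way link never lapsed longer than the threshold."""
    good_times = sorted(s.timestamp for s in samples
                        if s.uplink_ok and s.downlink_ok)
    if not good_times:
        return False
    checkpoints = [mission_start] + good_times + [mission_end]
    return all(later - earlier <= MAX_GAP_SECONDS
               for earlier, later in zip(checkpoints, checkpoints[1:]))
```

The hard part, of course, is not this check but securing tamper-proof access to the logs it consumes – precisely the access question raised above.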

As the difference between autonomous and automated capabilities could, in effect, be no more than a few lines of code, many have been quick to point out the extreme difficulty of carrying out any currently proposed means of verification and inspection.[11] Existing approaches to formal verification are likely to be inadequate for monitoring even current iterations of semi-autonomous systems, and as machines with learning or planning capabilities are developed, new formal verification procedures will also be needed.
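Just how small that difference can be is worth illustrating. The toy fragment below is invented for this piece – no real weapon system or codebase is implied – but it shows how a single flag can separate a system that defers to a human from one that authorises itself, which is exactly what makes source-level inspection such a fragile basis for verification.

```python
# A deliberately toy example: all names and logic here are invented to
# show how little code separates "automated" from "autonomous" behaviour.

class OperatorConsole:
    def request_authorisation(self, target: str) -> bool:
        # In a real system this would block on a human decision.
        print(f"Authorisation requested for {target}")
        return False  # default-deny in this sketch


def fire_at(target: str) -> None:
    print(f"(simulated) engaging {target}")


REQUIRE_HUMAN_AUTH = True  # flipping one constant changes the category


def engage(target: str, console: OperatorConsole) -> bool:
    if REQUIRE_HUMAN_AUTH and not console.request_authorisation(target):
        return False  # the human declined, so the machine may not act
    fire_at(target)
    return True
```

An inspector who reads such code before deployment has no guarantee the constant is not flipped afterwards, which is one reason verification proposals tend to focus on logged behaviour rather than source inspection.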

New Technologies and Opportunity

There is a consensus among those who oppose the development and use of LAWs that the best opportunity for a verification regime likely lies in establishing meaningful and continuous human oversight or control of unmanned systems. While the complexities of instituting any such verification measure are not in dispute, it is worth taking stock of present opportunities in this area.

Directing efforts towards monitoring human involvement in decision-making allows for the development of a verification regime that aligns more closely with existing, effective bans and with the laws of war. As far back as 2014, the International Committee for Robot Arms Control made a case for effective verification of this measure. At a technical level, such verification would be implementable with existing technologies – essentially requiring an open “glass box” that records all mission activity, decision exchanges and the information presented to the human supervisor during each exchange.[12]
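What such a “glass box” might look like in software is not hard to imagine. The sketch below is an assumption-laden illustration rather than ICRAC’s actual design: the field names and chaining scheme are mine, but the idea – hash-chaining each logged decision exchange so an inspector can later detect whether the record was altered – uses only commodity techniques.

```python
import hashlib
import json
import time

# An illustrative "glass box" recorder. Field names and the chaining
# scheme are assumptions made for this sketch, not ICRAC's specification.

GENESIS = "0" * 64  # starting value for the hash chain


class GlassBox:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = GENESIS

    def record(self, event_type: str, detail: dict) -> None:
        """Append one decision exchange, chained to everything before it."""
        entry = {
            "time": time.time(),
            "type": event_type,   # e.g. "target_proposed", "human_decision"
            "detail": detail,     # what the supervisor saw and decided
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)

    def verify_chain(self) -> bool:
        """An inspector recomputes the chain; any tampering breaks it."""
        prev = GENESIS
        for entry in self._entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return prev == self._last_hash
```

Nothing here is beyond existing technology; the difficulty lies in political agreement on what must be recorded and who may inspect it.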

The issue of verification has, it seems, received relatively little attention in recent years. This short summary piece argues that the expert and disarmament community must re-engage with this debate, lest the current opportunities for implementable monitoring fall behind the pace of technological advance. At heart, the issue is still one largely determined by political will, but it is also true that a rigorous demonstration of effective and implementable verification would strongly challenge current arguments against the possibility of such a ban.

****

[1] The Guardian, https://www.theguardian.com/politics/2017/sep/09/drone-robot-military-human-control-uk-ministry-defence-policy

[2] https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/640299/20170706_JDP_0-30.2_final_CM_web.pdf – henceforth JDP

[3] The Guardian, https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war

[4] Sharkey, https://www.theverge.com/2017/9/12/16286580/uk-government-killer-robots-drones-weapons

[5] JDP, p. 13

[6] For a good discussion of this issue, see Gubrud, http://gubrud.net/?p=596

[7] See, for example: EU paper, ICRAC, etc.

[8] JDP, p. 44

[9] JDP, p. 44

[10] For more on this, see also: https://www.armscontrol.org/ACT/2016_10/Features/Stopping-Killer-Robots-Why-Now-Is-the-Time-to-Ban-Autonomous-Weapons-Systems#note13

[11] https://cacm.acm.org/magazines/2017/5/216318-toward-a-ban-on-lethal-autonomous-weapons/fulltext

[12] https://icrac.net/wp-content/uploads/2016/03/Gubrud-Altmann_Compliance-Measures-AWC_ICRAC-WP2-2.pdf. See also https://standards.ieee.org/develop/indconn/ec/ead_reframing%20autonomous%20weapons.pdf or https://intelligence.org/2014/05/09/michael-fisher/
