Responsibility Gap? It's kind of a thing.
Let's disentangle accountability and responsibility. And start thinking about joint development.
Apologies for the delay in KRCP this week – I am on a road trip with a toddler, so there are few opportunities to post. But I found Wi-Fi, and the baby is asleep, so let's get into it.
Brad, wow – you've got some great points here. I completely agree that alignment is more of a process than a destination. And your point clearly demonstrates why it is critical to take that lifecycle perspective for these systems. There is a 'many hands' issue when it comes to alignment because so many people participate. But that raises the question – is human intent consistent throughout that process?
We keep referring to 'human intent' – but whose intent? Whenever I read or engage in discussions around intent, it is usually commander intent that takes priority. And this makes sense. But does an AI developer necessarily know what their software will be used for? Can they even begin to envision commander intent, or how their product would (or could) contribute to it?
On my last trip to the Pentagon to chat with research & engineering, the issue of programming and maintaining commander intent kept surfacing as the core concern in our discussions. The process of calibrating intent (or alignment) is at the forefront of research and engineering priorities. While I agree that a process of alignment and risk management is a useful way forward for embedding human interaction and intent into autonomous systems, there are uncertainties to address. One big one may be aligning (see what I did there?) the conversation with the meaningful human control (MHC) folks and altering the discourse on human influence and interaction.
As for responsibility, this is a related issue. But I am not sure I agree with you that lawyers are not asking these questions – I think there is significant disagreement in the legal community (and certainly in legal academia, but I suppose that's their job). The general discussion about the 'responsibility gap' has always confused me, though, because there are so many different forms of responsibility and accountability – and I understand the 'gap' to be largely about criminal responsibility. I have never heard the argument that lesser, non-legal sanctions were at issue. And since lawyers love the precision of terms, let's disentangle this a bit.
Stockton Center Professor James Kraska's article about this is extremely useful because it delineates commander accountability and command responsibility as they relate to the deployment of AWS. To (over)simplify, “[t]he commander is accountable for battlefield action regardless of whether subordinates made and compounded errors, machines performed unexpectedly, or an incident arises as an unforeseeable consequence of pure happenstance or the fog of war.” This accountability need not take the form of criminal liability (though it could); it also encompasses non-judicial or non-legal mechanisms. It is a distinct concept from command responsibility.
Command responsibility, by contrast, is a concept in international criminal law in which “the commander may face legal jeopardy for failure to exercise control over forces under command that violate LOAC” (446). To establish command (or superior) responsibility, the commander must have “effective control” in a superior-subordinate relationship. Can a commander have effective control over an AWS? Is an AWS a subordinate? Some great work has been done on these questions in the legal community recently. [Yes, lawyers reading, I understand this is an oversimplification.]
Clearly, there is disagreement in the legal community about command accountability for AWS and how these concepts apply to such systems. For example, a Human Rights Watch report argues it is “arguably unjust” to hold commanders accountable for machines “over which they could not have sufficient control.” I encounter this sentiment in private conversations with some legal-military folks as well. In these conversations, the view emerges that “those sitting in air-conditioned boardrooms have the time to make sure these systems operate as they should rather than placing that responsibility on those in the field.” This position certainly exists.
To me, the issues of human intent and responsibility are distinct but intertwined. The focus on commander intent at the engineering stages of AWS development makes sense because command and control, as well as ultimate accountability, rest with the commander. Despite the disagreement in the legal community, I agree with Kraska's position that accountability falls to the commander – as with any other type of weapon. But lawyers are paid to disagree – so, of course, there are those who push back.
Nevertheless, it is worth acknowledging the counterpoint. Human intent at the design/development stage is reflected in machine performance, and it is not always clear how the 'many hands and intents' at multiple stages of the lifecycle align; that uncertainty can problematize traditional concepts of accountability and responsibility. This is also why, a few years ago, there was a flood of legal articles exploring whether AI developers could be charged with war crimes (spoiler alert: they could, but it would be REALLY difficult to prove and extremely unlikely).
All of this is to say that, whether we talk about intent or responsibility, it falls on the shoulders of commanders who may (but likely will not) have technical training or experience with autonomous systems. This is also why a lifecycle perspective is important to keep in mind: even with the emphasis on command accountability/intent, there are many others involved in the timeline. And not just developers, but acquisition/procurement officials, testing & evaluation personnel, and (in a very different way) other nations and allies that cooperate or collaborate on technology development.
Which brings me to something I have been researching lately. New technologies (especially dual-use emerging and disruptive technologies) are not developed in isolation. They are often created for civilian applications, and innovation is not spearheaded by military development. Nations also work together through various arrangements to create these technologies.
We've seen a lot of these partnerships in the news – AUKUS, NATO's DIANA projects, and a recent War on the Rocks post about Chinese-Russian technological cooperation. The US also spearheads the AI Partnership for Defense, and of course there are the ongoing GGE discussions in Geneva. But when it comes to weapons development specifically (and I realize much of the IR/tech literature is trying to move away from the weapons discussion), how will AWS alter or otherwise influence multinational operations?