Responsibility Gap? That's not a thing.
Does a gap in responsibility prevent the employment of autonomous machines? Unlikely.
Lena, thanks for giving me an opening to take this conversation in a new direction: the responsibility gap. Before I jump into that, though, the last thing I’ll mention on MHC and alignment concerns your suggestion that alignment is assumed. What I find interesting is that you’re probably right, but I think the assumption is a mistake. I prefer to conceive of alignment as a continuous process rather than a destination. What worries me about MHC and the assumption that machines are either aligned or not is the tendency to want to solve a problem and then consider it solved. My reading of most of the writing on MHC for autonomous weapons is that once MHC is established, the weapon will not and should not be modified, because MHC would no longer be certain. If we do the same with alignment and lump it into the binary, on/off model of MHC, we are ignoring the reality of how autonomous systems work and how they will be employed.
Instead, we should design a framework that requires continuous alignment of action to human intent throughout a system’s life cycle. This transforms a philosophical discussion into an engineering and risk-tolerance trade-off. Instead of debating what we mean by meaningful control and when it can be applied to eliminate risk, we focus on improving alignment performance and on the framework for accepting the residual risk of employing an imperfect system within complexity. That lets us accept or reject risk more deliberately, because that calculation is a precondition for employment. We can meet the requirements of International Humanitarian Law knowing there will be failures, confident that the alignment process will address those failures continuously within operational imperatives and legal constraints.
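To make that concrete, here is a minimal sketch of what a life-cycle alignment loop could look like: the system’s behaviour is scored against commander’s intent on every cycle, residual risk is re-estimated from the running record, and authority to employ is withheld whenever that estimate exceeds the risk the commander has formally accepted. Everything here, the names, the scoring, and the thresholds, is an illustrative assumption on my part, not any fielded system’s interface.

```python
# A minimal, self-contained sketch of "alignment as a continuous process."
# All names, scores, and thresholds are illustrative placeholders, not any
# real weapon-system interface or doctrine.

from dataclasses import dataclass, field


@dataclass
class AlignmentMonitor:
    """Re-evaluates residual risk on every engagement cycle."""
    accepted_risk: float                 # risk the commander has formally accepted
    history: list = field(default_factory=list)

    def record_cycle(self, actions_matching_intent: int, total_actions: int) -> None:
        # Alignment score for this cycle: fraction of actions matching intent.
        score = actions_matching_intent / total_actions if total_actions else 1.0
        self.history.append(score)

    def residual_risk(self) -> float:
        # Toy estimate: recent misalignment, weighted toward the latest cycles.
        if not self.history:
            return 1.0  # no evidence yet, so assume the worst case
        weights = range(1, len(self.history) + 1)
        weighted = sum(w * (1.0 - s) for w, s in zip(weights, self.history))
        return weighted / sum(weights)

    def employment_authorized(self) -> bool:
        # Authority is re-checked every cycle, never granted once and forgotten.
        return self.residual_risk() <= self.accepted_risk


if __name__ == "__main__":
    monitor = AlignmentMonitor(accepted_risk=0.05)
    monitor.record_cycle(actions_matching_intent=98, total_actions=100)
    monitor.record_cycle(actions_matching_intent=90, total_actions=100)
    print(f"residual risk: {monitor.residual_risk():.3f}")
    print(f"employment authorized: {monitor.employment_authorized()}")
```

The point of the sketch is the shape of the process, not the arithmetic: alignment is a quantity that is measured and re-accepted continuously, rather than a box checked once before fielding.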
Where I think this conversation often breaks down is when the question of responsibility pops up, because in the minds of many, MHC is closely tied to closing the responsibility gap. The idea of a responsibility gap with autonomous weapons was first popularized by Robert Sparrow in 2007, when he suggested that “as machines become more autonomous a point will be reached where those who order their deployment can no longer be properly held responsible for their actions,” because “Where an agent acts autonomously, then, it is not possible to hold anyone else responsible for its actions.” Really? I’m married to a lawyer and I can confidently tell you that this is absolute nonsense. I am at least 80% responsible for my daughter riding her skateboard down the staircase, believe me.
The responsibility gap argument makes three key mistakes. First, that moral responsibility and legal responsibility are the same thing. Second, that an artificial agent, in this case an autonomous system, has the same legal status as a human agent. Spoiler: it doesn’t, and we have control over that. Third, that autonomy in an agent like an autonomous system negates negligence in other agents. It doesn’t.
Let’s deal with moral vs. legal responsibility first. The responsibility gap argument holds that an artificial agent cannot be held morally responsible for something because it is not a moral agent. Now, there is a huge discussion about whether an artificial agent can be a moral agent, and I’m inclined to think that it can if it can act morally. But there are some good arguments that if an artificial agent doesn’t know what it is like to be moral, then it cannot experience morality in a way that makes it a moral agent. Big discussion. I’m going to leave it there. But the salient point is that something that is not a moral agent can still be held legally responsible. Think about large corporations that are punished for misbehaving.
Second, an artificial agent does not have the same legal status as a human agent. This means we can apply whatever rules suit our purposes without being burdened by conflicting philosophical positions on morality. And this is already being articulated, with the Group of Governmental Experts (GGE) under the CCW declaring that “accountability cannot be transferred to machines.”
Third, the concepts of negligence and responsibility are not tests of autonomy; they are tests of a reasonable standard of care. The requirement for a reasonable standard of care can be spread across many different agents in the legal analysis of any kind of mishap. Even something as simple as a car accident usually finds every participant at fault to some degree. And here is the real kicker: sometimes no one failed to exercise a reasonable standard of care. Even when deaths are involved.
So, let’s apply this to an autonomous weapon that is employed in war and mistakenly strikes a target, killing people who did not need to be killed. What are we to do? Want to know who never asks this question? Lawyers. My friend Jerry Kaplan once quipped that lawyers never ask him questions about responsibility when a robot harms someone. In my own travels I have found this to be absolutely true. Why? Because the law has mechanisms for determining responsibility, liability, and negligence. These processes may not help with the question of moral responsibility, but I think they close the responsibility gap enough to focus on the actual issue, which is how to build a policy and legal framework that ensures a reasonable standard of care at all points of an autonomous system’s life cycle.
Let me get my stick out so I can beat on this horse again before you run me off: the best course to a routine and acceptable reasonable standard of care is alignment and risk management.
Thanks, Brad. Excellent post. This is the kind of discussion we are having in a couple of AI Track II dialogues. Your points are generally accepted by the participants from other nations.
There needs to be an accounting, however, for the seemingly inevitable introduction of true online learning systems. That is, systems well beyond today’s ML systems, which remain largely deterministic.
Since nobody has ever fielded that kind of system, it is important to address these same kinds of responsibility-gap concerns sooner rather than later.
A risk-based framework for AI-enabled military systems could lead states to conclude that true online learning LAWS could fit into the ‘unacceptable risk’ category.
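As a purely illustrative example of what such a risk-tiered assignment might look like, here is a small sketch. The tiers and the classification rules are my own assumptions for discussion, not anything a state or the CCW/GGE has agreed to.

```python
# Purely illustrative: the risk tiers and classification rules below are
# assumptions for discussion, not any agreed national or CCW/GGE standard.

from enum import Enum


class RiskTier(Enum):
    ACCEPTABLE = "acceptable with controls"
    HIGH = "high risk - additional review required"
    UNACCEPTABLE = "unacceptable risk - do not field"


def classify_system(online_learning: bool, human_on_the_loop: bool,
                    lethal_effects: bool) -> RiskTier:
    # A system that keeps learning after fielding, delivers lethal effects,
    # and has no human on the loop lands in the most restrictive tier.
    if online_learning and lethal_effects and not human_on_the_loop:
        return RiskTier.UNACCEPTABLE
    if lethal_effects and (online_learning or not human_on_the_loop):
        return RiskTier.HIGH
    return RiskTier.ACCEPTABLE


print(classify_system(online_learning=True, human_on_the_loop=False, lethal_effects=True))
```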