To kick off the KRCP discussion, I want to draw attention to the issue I grapple with in my research: meaningful human control (MHC), also known as appropriate human judgment. The scope of human involvement in future autonomous weapon systems (AWS) is central to discussions surrounding responsible military artificial intelligence (AI); yet our understanding of MHC in practice is marginal at best. Currently, the discussion is largely about language. Is the control truly meaningful? Is it better characterized as appropriate? Can a human ever have control over one, or multiple, autonomous systems? I am not suggesting the words don't matter; they do. But circular debates over terminology can only go so far without operationalizing MHC. This post offers one step in that direction.
A useful way to operationalize MHC is to examine it at different stages of the weapon system lifecycle. This perspective reveals MHC as a process with different manifestations at different stages, and it raises the possibility that MHC embedded at an early stage could be sufficient to satisfy policy standards requiring MHC (or appropriate human judgment). Three stages merit close examination here: design and development, operational planning, and tactical planning and engagement.
AI developers create intelligent systems that are capable of learning, analyzing, and predicting. Developers create the software architecture, or the system boundaries, that define the parameters for system behavior. System designers thus have the paradoxical role of imagining all the things they cannot imagine: they must establish design principles that govern how the system responds to unexpected environmental stimuli, including stimuli that occur simultaneously. The whole purpose of employing an autonomous weapon system (or any autonomous system) is to gain advantages in speed and accuracy for processes too mundane or overwhelming for human cognition. Can we assume that developers take over a certain degree of decision-making traditionally exercised by a commander or operator in theater? In defining the system architecture, does the developer exercise MHC to a degree sufficient to comply with policies or other requirements for MHC?
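To make the design-stage claim concrete, consider a minimal sketch in Python of what developer-embedded control could look like. Everything here is hypothetical (the class, fields, and thresholds are my own illustrative assumptions, not any fielded system's design); the point is simply that the developer fixes a hard envelope of permissible behavior that no downstream operator can widen.

```python
from dataclasses import dataclass

# A purely hypothetical sketch of design-stage control. The developer fixes
# an "envelope" of permissible behavior long before any mission exists; all
# names, fields, and thresholds here are illustrative assumptions.

@dataclass(frozen=True)
class DesignEnvelope:
    permitted_target_classes: frozenset  # classes the system may ever engage
    geofence_radius_km: float            # hard spatial bound on any engagement
    min_confidence: float                # below this, the system must hold fire

def within_envelope(env: DesignEnvelope, target_class: str,
                    distance_km: float, confidence: float) -> bool:
    """Return True only if a proposed engagement falls inside the
    developer-set bounds. Commanders and operators act later in the
    lifecycle, but they cannot widen these limits: that slice of
    decision-making was exercised at design time."""
    return (target_class in env.permitted_target_classes
            and distance_km <= env.geofence_radius_km
            and confidence >= env.min_confidence)

# A hypothetical envelope restricting the system to radar installations.
envelope = DesignEnvelope(frozenset({"radar"}), geofence_radius_km=25.0,
                          min_confidence=0.95)
assert not within_envelope(envelope, "vehicle", 10.0, 0.99)  # excluded by design
```

On this view, the developer's control is real but frozen in advance: it cannot respond to battlefield context, which is precisely why its sufficiency for MHC remains an open question.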
The second stage is operational planning. Despite its implications for MHC, this stage has received limited attention. It encompasses a range of decisions that exhibit MHC, perhaps the most important being the decision to use an AWS in a particular operational environment. Much of the controversy surrounding AWS concerns the risks they pose in urban environments, where civilians are most exposed. While there is reason for this concern, other environments (the high seas, deserts, forests, etc.) do not pose the same risk. Is this assessment an exercise of MHC? I would argue that it is. A responsible commander has the capacity to weigh the risks associated with deploying an AWS in a particular environment and ultimately decide whether those risks are worth the operational or strategic gain.
The third stage, tactical planning and engagement, has received the most attention in the MHC debate. This is likely because it is the most straightforward operationalization of MHC and certainly where MHC has traditionally resided. Typically, MHC is understood as supervising an AWS with the ability to intervene if the system is malfunctioning or engaging in otherwise unlawful behavior. Certainly, this intervention scenario qualifies as control, though not necessarily meaningful (or even appropriate) control. It is possible that the speed of machine decision-making will render human intervention either unlikely or premature: think of a machine performing in an efficient but unexpected way, prompting the operator to intervene and thereby cause mission failure.
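As a thought experiment, here is a minimal sketch of that intervention scenario, assuming a hypothetical fixed veto window (no real system's interface or timings are implied): the system pauses before each engagement and proceeds only if no veto arrives in time.

```python
import queue
import threading
import time

# A minimal, hypothetical sketch of "human-on-the-loop" supervision: the
# system waits out a veto window before each engagement. The window length
# is the crux of the paragraph above; the value here is an assumption.

VETO_WINDOW_S = 2.0  # assumed window; a machine-speed loop may allow far less

veto_channel: queue.Queue = queue.Queue()

def propose_engagement(target_id: str) -> bool:
    """Announce a proposed engagement, then wait for a possible human veto.

    Returns True if the engagement proceeds, False if the operator
    intervened within the window.
    """
    print(f"Proposing engagement of {target_id}; veto window {VETO_WINDOW_S}s")
    try:
        veto_channel.get(timeout=VETO_WINDOW_S)
        return False  # the human intervened in time
    except queue.Empty:
        # The window elapsed: control was nominally available, but was it
        # meaningful if no human could realistically react this fast?
        return True

# A simulated operator who notices the proposal only after the window closes.
def slow_operator() -> None:
    time.sleep(VETO_WINDOW_S + 1.0)
    veto_channel.put("veto")  # arrives too late to matter

threading.Thread(target=slow_operator, daemon=True).start()
print("engaged" if propose_engagement("target-7") else "vetoed")
```

Sizing that window is the entire dilemma: set it for machine speed and the veto channel exists only on paper; set it for human speed and the system forfeits the very advantage that justified autonomy.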
At the other end of the spectrum, there are situations in which the pace is too slow. Kate Devitt recently released a chapter exploring not just the challenge of speed but also what happens when an AWS spends hours in idle processing, given the (very) human tendency toward short attention spans in the face of boredom. This is often cited as one of the greatest benefits of an AWS: unlike its human counterparts, it does not tire or grow bored. Does a human operator monitoring such fast-paced, or slow-paced, scenarios exhibit MHC? I argue it depends. It is a form of human control, but it may not satisfy the threshold of meaningful or, perhaps, even appropriate control. Maybe the better phrase is necessary but not sufficient.
Meaningful human control is a challenging concept to operationalize and embed in future autonomous weapons. Nevertheless, it is a necessary component of assuring the lawful, responsible use of autonomous weapons. There is, however, room for flexibility and creativity in what constitutes MHC.
Ultimately, MHC is about maintaining the benefits of human judgment while harnessing the advantages of autonomy. It is a balance that will be difficult to strike, but necessary to find. As the lifecycle perspective suggests, there are many moments at which human judgment can be embedded in AWS performance. Certainly, the risks of an AWS failing or making mistakes are a separate issue, one I intend to cover in KRCP. But on the role of the human, we need to creatively explore the dimensions of human-machine interaction from the very beginning.