John is absolutely right. Except we’re talking about different things.
First, thanks to those who commented. Please do more of that, it’s great to see some debate. And thanks to people like Michael Kanaan for sharing this substack.
I’ve taken a while to respond mostly because I’m teaching in Spring quarter and preparing my two classes has totally absorbed my attention. But that delay has also been good for me to consider how to better describe what I’m talking about, and what my concerns are. When people like Jack Shanahan and Colin Carroll are like, “you’re wrong on this one”, it’s best to take some time to think. Especially when I agree with everything they said. Even in the face of new reporting on Lavender and Gospel, those mistakes appear to be within the bounds we can expect from automating target identification. The failures appear to be the result of automation bias and action bias on the part of humans at the action stage of targeting rather than machines intruding into the nomination process.
So, what exactly am I trying to get at here?
To me the fundamental questions are: What are we willing to automate, or maybe more specifically, how much agency are we willing to give machines and what are the costs? These questions make me think about why we would be willing to automate some things and not others. Why would we want machines to find targets but not invent them?
There are some viewpoints out there that support automating everything so long as performance is better. The idea of optimizing outcomes to preserve safety or prosperity seems a noble pursuit. Using machines also enables humans to focus on things we do well, avoiding things we don’t do well or don’t want to do. And in some areas, machines are just better than us and we may need to get out of the way. Because of this, the roles of machines in our lives are evolving, including within decision-making structures like targeting. As my friend Jerry Kaplan said to my class last week, today we are discussing the need for a human in the loop, but in the future we may be discussing the need for a machine in the loop.
Is that ok? Mostly I agree with using machines to optimize outcomes, including when autonomously striking targets with lethal force. Where I start to hedge a little is in circumstances where machine agency begins to infringe on human cognitive autonomy.
Cognitive autonomy means having as much control over our subjective experience as possible. Meaning we get to decide how things should be and why they should be that way. If you want to pull on this thread, try here and here to start. The idea of cognitive autonomy is not the same as free will. Free will is binary. You either have it or you don’t. Philosophers have been arguing about free will forever, and it’s a fun conversation, but I’m not going into it for the moment. Instead, I’m focusing on cognitive autonomy to distance my idea from the free will discussion and get at something more relevant. So rather than consider cognitive autonomy to be binary like free will, as something to have or have not, I’m considering it a quest. There is no on or off; there is simply the human imperative of trying to maximize cognitive autonomy to control our own subjective experiences.
This is not an easy quest. Clearly, our cognitive autonomy is influenced by many things. There is constant, sometimes malign, sometimes benign, pressure from our environment and other humans trying to shape what we think, what we do, and how we do it. This is an old game, sometimes beneficial and sometimes not, and it has been a human game for most of our history. However, roughly 500 years ago, that changed.
We can go all the way back to the Gutenberg printing press, its effect on the Protestant Reformation, and the start of the Thirty Years’ War for how machines have affected human cognitive autonomy. Here’s a whole book on it. Suffice it to say, it was a big deal, and the scale of the effect increased over the centuries. But things really picked up in the last few decades with the arrival of the internet, cheap memory, increasing compute power, and of course, AI. Herb Lin and I wrote on this a few years ago if you want to dig into that more.
The point is that machines have dramatically gained in their ability to influence human cognition and thus cognitive autonomy. Is this bad? Well, it depends on how you think about several questions. Here are a few.
Is cognitive autonomy essential to being human, and is the freedom to make mistakes part of that? If you’re trying to maximize cognitive autonomy, doesn’t that mean you get to decide to do dangerous or stupid things? Is there even an objective concept of what is dangerous or stupid? Doesn’t cognitive autonomy demand that we get to decide for ourselves what is dangerous or stupid? Does the fact we exist in a world with other beings mean we have an obligation to optimize “good” decisions? If we could make a tool to help us make good decisions, do we have an obligation to build it and use it?
Those are interesting questions and hard to answer. Perhaps an easier, more relevant, and more productive way to start is to pick a point in human decision-making where machine agents are acceptable. So, where in a decision cycle do we want machine agents to assist us in optimizing outcomes? Is it at the cognitive phase or in the action phase? Do we want machines thinking for us or acting for us? Does it matter? I think it does.
Debating what happens when machines start to think for us is not new, of course. One of the best examples is The Matrix. I’m reluctant to make a sci-fi reference in what is intended to be a practical discussion, but I’m going to anyway because I think it will help.
When Agent Smith has captured Morpheus, he starts monologuing, and makes an important distinction between what came before and what is now. The short version:
“The matrix was re-designed to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you it really became our civilization, which is, of course, what this is all about.”
I’m not worried about machines taking over and putting us in the Matrix, though some people think that might be great (looking at you, Zuckerberg), even if that is a consequence of what Agent Smith is talking about. His bigger point is that cognitive autonomy is Excalibur. Whoever gets to decide what happens, and why, owns the future.
So, let’s take a breath and ask: If I want to maximize human cognitive autonomy in war, can I do that through the targeting process? I think so. And the answer I gave before on how to do that was to co-opt some terms and say humans nominate, machines identify.
Now, Colin had a good insight on LinkedIn, and I agree. Targeting terms are not terribly well-defined and it’s hard to know what, exactly, we mean by using terms like nominating and identifying. I’m going to continue to use target nomination as a term to delineate the point where I think machine agency may begin infringing on human cognitive autonomy. I’m happy to adjust and would be interested if people want to throw out better terms in the comments.
So, specifically, when I say machines nominating targets could infringe on human cognitive autonomy, how does that work? I think it begins with the idea of choice architecture. What I’m referring to is the concept from Richard Thaler and Cass Sunstein in their book “Nudge”.
Thaler is an economist and Sunstein a legal scholar, and they suggest that human behavior can be dramatically influenced by shaping the architecture of choices, or more simply, by controlling which choices are available for a given decision. You cannot choose something that is not an option. This idea is useful in many areas because it gets to the heart of decision-making. A narrow choice architecture, say of three choices, means that whoever, or whatever, is operating inside that architecture can only choose among those available options. Let’s assume that doing nothing is always an option, for you Dead Poets Society people out there.
Constraining choice architecture is a great way to ensure that autonomous behavior, whether human or machine, occurs only within acceptable boundaries, and it is baked into military organizations and processes.
The military, by organization and equipment, has a very specific choice architecture, from the STRATCOM commander down to a rifleman, reflected in the tools we give them to operate. What is a submarine commander going to do with his torpedoes? He’s going to sink stuff at sea. He’s not going to shoot down a satellite or build a school. Which is OK; we have other tools and organizations for that. So, when that sub commander is going through his version of the targeting process, he’s considering targets within his choice architecture. His staff may prioritize enemy combatants over enemy shipping, but that’s not the nominating I’m talking about because it is already built into the menu of choices. Both are authorized targets that the crew is equipped, trained, and expected to engage.
When we develop software to identify targets, we are essentially doing the same thing that happened to the sub commander. We’ve provided an architecture of choices whose individual selections may have some debate associated with them (which ship, where, when, etc.), but the outcome stays within acceptable parameters for the overall effort. This is why I agree with John and others in the scenarios they describe.
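To make that distinction concrete, here is a toy Python sketch of what identification inside a human-defined choice architecture might look like. The names (AUTHORIZED_TARGET_CLASSES, Candidate, identify_targets) are invented for illustration, not drawn from any real targeting system: humans write the menu of target classes, and the machine only ranks and filters candidates that are already on it.

```python
# Toy sketch: machine identification inside a human-defined choice architecture.
# All names here (AUTHORIZED_TARGET_CLASSES, Candidate, identify_targets) are
# hypothetical illustrations, not any real targeting system's API.

from dataclasses import dataclass

# Humans define the choice architecture: the only classes of things the
# machine is allowed to treat as potential targets.
AUTHORIZED_TARGET_CLASSES = {"enemy_surface_combatant", "enemy_submarine"}

@dataclass
class Candidate:
    track_id: str
    target_class: str   # what the sensor/classifier thinks this track is
    confidence: float   # classifier confidence, 0.0 to 1.0

def identify_targets(candidates: list[Candidate], threshold: float = 0.9) -> list[Candidate]:
    """Return candidates worth engaging, drawn only from human-authorized classes.

    The machine exercises agency over which item on the menu gets attention,
    never over what belongs on the menu in the first place.
    """
    return [
        c for c in candidates
        if c.target_class in AUTHORIZED_TARGET_CLASSES and c.confidence >= threshold
    ]

if __name__ == "__main__":
    tracks = [
        Candidate("T-001", "enemy_surface_combatant", 0.97),
        Candidate("T-002", "fishing_vessel", 0.99),   # not on the menu: never selectable
        Candidate("T-003", "enemy_submarine", 0.65),  # on the menu, but low confidence
    ]
    for t in identify_targets(tracks):
        print(f"Identified for human review: {t.track_id} ({t.target_class})")
```

The machine’s agency here is real but bounded: it decides which tracks rise to the top, while the boundaries of what can ever be selected remain a human product.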
So, where’s the problem? I suspect things will get squirrely when we start to let machines determine the choice architecture itself, because those determinations are so closely tied to who is doing the thinking. That is what I mean by nominating, and why I consider nomination the line we should draw to maintain human cognitive autonomy.
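In the same toy terms, the worrying case is when the machine starts writing the menu itself. Again, this is a hypothetical sketch with invented names, meant only to show the pattern rather than describe any real system.

```python
# Toy contrast: the machine determines the choice architecture itself.
# Hypothetical names only; this illustrates the pattern being flagged.

def propose_target_classes(observed_activity: list[str]) -> set[str]:
    """A machine inferring, on its own, which *classes* of things should count
    as targets at all. Whatever it returns becomes the menu humans choose from,
    which is where machine agency starts to crowd out human cognitive autonomy."""
    proposed = set()
    for activity in observed_activity:
        if "resupply" in activity:
            proposed.add("civilian_logistics_vessel")  # a class no human put on the menu
        if "jamming" in activity:
            proposed.add("commercial_comms_relay")
    return proposed

# If this output feeds the identification step unreviewed, the machine is no
# longer choosing from our menu; it is writing the menu.
```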
Determining what is and what is not a target is a fundamental component of the reasons behind a war and of ownership of the war, for example when weighing military necessity to decide what is or is not a target before using force. Machine control of the choice architecture for targeting means that machines are thinking for us about the fundamental character of the war. Does that mean the war becomes their war and not our war? I don’t know, but I think it is a reasonable question to explore.
“Machine control of the choice architecture for targeting means that machines are thinking for us about the fundamental character of the war. Does that mean the war becomes their war and not our war? I don’t know, but I think it is a reasonable question to explore.”
That last point helps me tremendously to frame my thinking on the morality of the issue.
If the war is to be our war, where we remain fully accountable for the overall outcomes, the decisions about which people, what things, and under what circumstances they are targeted need to remain soundly in the human domain. That said, as we narrow the decision authority down to simply finding, fixing, and engaging (with human consent) specific targets within the “grand list,” we may find that automation is justifiable and ethical.
Thanks Brad for the very thoughtful reply to the comments on your earlier post.
Well stated, even though I continue to view the use of AI-enabled automation for target nomination as reasonable, legitimate, and, at least in principle, ethical and moral. That assertion must be qualified, as you write, by the manner in which humans design the applicable choice architectures.
Yet why can't humans shape those choice architectures in ways that still adhere to IHL/LOAC, ROE, SPINS, and all other constraints and restraints that are always placed on military operations? At least to me, this is the essence of our different views on this nomination question (beyond the semantic differences between target identification and nomination).
I'm not qualified to weigh in on the broad philosophical questions you raise (I hope you can convince Heather Roff to do so). Like you, I focus more on the practical side of the question. Until we see evidence of a true online-learning, AI-enabled autonomous system, I remain optimistic about the ability of the US military to adopt and adapt.
The most successful militaries will be those that, all else being equal, learn how to optimize the roles and responsibilities of, and interdependencies between, humans and machines. In other words, maximizing the benefits of emerging technologies while not becoming subordinate to them.
It's possible that AI and similar disruptive technologies will place an overwhelming technical and cognitive burden on military personnel. The potential to do so will always exist, of course. Yet similar arguments have been made about many other new technologies in the past. A few of those technologies ultimately proved to be so complex that they failed operationally. But many more did not, and military personnel learned how to use even the most complex of them in ways that complemented their skills rather than being cognitively overwhelmed by them.
I argue that military personnel can adapt equally well to an AI-enabled autonomous future, with the same focus on academics, training, simulators, exercises, experimentation, and steadily increasing levels of complexity that the US military has brought to every previous technology. Granted, AI-enabled automation demands even more attention to the design of human-machine interfaces and human-machine teaming. In this new environment we should insist on the brutally candid critiques of individual and organizational performance that places like the US Weapons School are known for, to ensure continual improvement in processes, procedures, the technology itself (to include HMI), and even choice architectures.
Despite the technological trepidation expressed by many, I am convinced that the current generation of men and women entering the military now will be well prepared for a digital future, even as that future becomes increasingly more autonomous.