“Machine control of the choice architecture for targeting means that machines are thinking for us about the fundamental character of the war. Does that mean the war becomes their war and not our war? I don’t know, but I think it is a reasonable question to explore.”

That last point helps me tremendously to frame my thinking on the morality of the issue.

If the war is to be our war, where we remain fully accountable for the overall outcomes, the decisions about which people and things are targeted, and under what circumstances, need to remain squarely in the human domain. That said, as we narrow the decision authority down to simply finding, fixing, and engaging (with human consent) specific targets within the “grand list,” we may find that automation is justifiable and ethical.


Agreed, thanks for jumping in Lane.


Thanks Brad for the very thoughtful reply to the comments on your earlier post.

Well stated, even though I continue to view the use of AI-enabled automation for target nomination as reasonable, legitimate, and, at least in principle, ethical and moral. It is an assertion that must be qualified, as you write, by the manner in which humans design the applicable choice architectures.

Yet why can't humans shape those choice architectures in ways that still adhere to IHL/LOAC, ROE, SPINS, and all other constraints and restraints that are always placed on military operations? At least to me, this is the essence of our different views on this nomination question (beyond the semantic differences between target identification and nomination).

I'm not qualified to weigh in on the broad philosophical questions you raise (I hope you can convince Heather Roff to do so). Like you, I focus more on the practical side of the question. Until we see evidence of a true online-learning AI-enabled autonomous system, I remain optimistic about the ability of the US military to adopt and adapt.

The most successful militaries will be those that, all else being equal, learn how to optimize the roles and responsibilities of, and interdependencies between, humans and machines. In other words, maximizing the benefits of emerging technologies while not becoming subordinate to them.

It's possible that AI and similar disruptive technologies will place an overwhelming technical and cognitive burden on military personnel. The potential to do so will always exist, of course. Yet similar arguments have been made about many other new technologies in the past. A few of those technologies ultimately proved to be so complex that they failed operationally. But many more did not, and military personnel learned how to use even the most complex of them in ways that complemented their skills rather than being cognitively overwhelmed by them.

I argue that military personnel can adapt equally well to an AI-enabled autonomous future, with the same focus on academics, training, simulators, exercises, experimentation, and steadily increasing levels of complexity that the US military has applied to every previous technology. Granted, AI-enabled automation demands even more attention to the design of human-machine interfaces and human-machine teaming. In this new environment we should insist on the brutally candid critiques of individual and organizational performance that places like the US Weapons School are known for, to ensure continual improvement in processes, procedures, the technology itself (to include HMI), and even choice architectures.

Despite the technological trepidation expressed by many, I am convinced that the generation of men and women entering the military now will be well prepared for a digital future, even as that future becomes increasingly autonomous.


Love this sir, thanks.


BTW, this new paper presents exactly the kind of cautionary tale you outline in your posts. (I admit I've only gotten through the executive summary.)

https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/ethics-of-advanced-ai-assistants/the-ethics-of-advanced-ai-assistants-2024-i.pdf


Haha! It's 274 pages! Looks really interesting though. I'll start cranking on it as well.
