Discussion about this post

Jack Shanahan

Thanks, John. Brad and I see eye-to-eye on almost everything related to AI, but I fall squarely in your camp on this one. I was waiting for someone to weigh in with a different opinion.

I have quite a bit of experience with targeting, as both a consumer (fighter aircraft, AOC) and a producer (Intel Group Commander, with the Group comprising two active-duty and several National Guard targeting squadrons...almost the entire Air Force's targeting expertise in one Group).

I agree with everything you say! I'm glad Brad quoted the Air Force targeting 'bible', AFDP 3-60, Targeting. However, I believe he didn't quite capture the point you so rightly highlight: the issue isn't the nomination; it's the leap from nomination to target approval and strike. The entire purpose of the AOC Target Effects Team (TET) is to review target nominations and propose a targeting list (JIPTL) to the C/JFACC for approval. That process will always involve humans reviewing target noms, with lawyers right there alongside the targeteers/TEA.

Time-sensitive targeting (TST) is a different animal, of course. But even in that case, the ROE and SPINS will be clear about the level of automation allowed before a target may be struck. I remain confident that AI will *not* be allowed to nominate and approve kinetic attacks without a human in the process (where that human resides is a legitimate question, one that leads directly to some of the concerns Brad highlighted. But that's for a different post).

As you so rightly underscore, IHL/LOAC + ROE/SPINS still apply, regardless of the level of automation in the targeting cycle. Your list of questions is excellent. The only thing I can add is that all of those questions distill down to one: "What is the risk (including risk to mission and risk to force) of striking, or not striking, this nominated target?" The follow-on is determining who accepts that risk, and at what level.

All that said, we need to start thinking now about the implications of using GenAI in the targeting cycle. Despite the incredible potential of LLMs, integrating GenAI into the targeting process will be fraught. The DoD will need to tread carefully in deciding how much GenAI is used, and exactly what it's used for.

I'm many years removed from being Gen Z, but I'm with you on this one!

Michael M. Smith, LL.M.

Great post!

(Fellow JAG here - Canadian)

What we're talking about, short of fully delegating use-of-force decision-making to autonomous systems, is human-machine teaming (HMT). Inherent in HMT, particularly as regards lethal autonomous weapons systems (LAWS), is the concept of meaningful human control (MHC).

I, and I'm sure many others, have come to loathe this amorphous term, but it's really just a collective exercise in trying to effectively describe the arcs of responsibility in HMT.

This gets us to Trust vs. Trustworthiness.

Trustworthiness and trust are intertwined yet distinct concepts, with significant implications for LAWS development and deployment. While trustworthiness (as expressed through reliability) is vital if human operators are to rely on LAWS, thus establishing MHC, trust can be dangerous: an operator acting as nothing more than a rubber stamp removes the necessary meaningfulness from control. This tension is sure to be heightened by the ever-increasing speed of the battlespace.

I agree that AI could, and should, play an important role in deliberate targeting. Dynamic targeting will present a real challenge, however. I also agree that it will be interesting to see how future end-users interact with systems in HMT environments.
