5 Comments

Jack Shanahan:

Thanks, Brad. Excellent post. This is the kind of discussion we are having in a couple of AI Track II dialogues, and your points are generally accepted by the participants from other nations.

There needs to be an accounting, however, for the seemingly inevitable introduction of true online learning systems: systems that go well beyond today's ML models, which remain largely deterministic.

Since nobody has ever fielded that kind of system, it's important to address the same kinds of responsibility-gap concerns sooner rather than later.

A risk-based framework for AI-enabled military systems could lead states to conclude that true online learning LAWS fall into the 'unacceptable risk' category.

Caitlin Lee:

Gen. Shanahan, when you mention "online learning" LAWS, are you talking about learning how to use weapons via LLMs? I'm researching the proliferation of battlefield knowledge, so I wanted to ask. If you can recommend any reading, I'd be grateful. Caitlin Lee

Jack Shanahan:

Not in this context. What I was referring to is that even the most advanced AI systems used in militaries today are largely deterministic. Even though they are called probabilistic models, their performance once fielded will reflect how they were trained, preferably to a 95+% probability level.

Their behavior ‘in the wild’ is subject to the same stochastic errors you would get for any other sensor. But the systems would still perform within the boundaries of the conditions under which they were trained.

It's highly likely, if not inevitable, that future systems will be capable of true self-learning: acting in ways that the human developer and operator may not be able to fully anticipate, well beyond the system's training boundary conditions.
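
As a rough sketch of that distinction (assuming scikit-learn's SGDClassifier and synthetic data purely for illustration), the snippet below contrasts a model whose weights are frozen after offline training with an online learner whose weights keep shifting with whatever data and labels it encounters after fielding:

```python
# Illustrative sketch only: frozen offline-trained model vs. online learner.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Offline training data: two well-separated clusters (the "training boundary conditions").
X_train = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

# Frozen model: trained once, weights never change after fielding.
frozen = SGDClassifier(random_state=0).fit(X_train, y_train)

# Online learner: same starting point, but it keeps updating in the field.
online = SGDClassifier(random_state=0)
online.partial_fit(X_train, y_train, classes=[0, 1])

# Deployment: the data drifts well outside the training clusters, with noisy labels.
X_field = rng.normal(5, 1, (200, 2))
y_field = rng.integers(0, 2, 200)

probe = np.array([[2.0, 2.0]])  # a point squarely inside the training distribution

print("frozen prediction:", frozen.predict(probe))   # fixed by how it was trained
for x, y in zip(X_field, y_field):
    online.partial_fit(x.reshape(1, -1), [y])         # weights keep moving after fielding
print("online prediction:", online.predict(probe))   # depends on what it absorbed in the field
```

The point of the sketch is simply that the frozen model's behavior stays bounded by its training conditions, while the online learner's behavior depends on data its developer and operator never reviewed.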

In a risk-based framework, states may elect to place that kind of capability in the "unacceptable" category, at least if used within a LAWS, because it would not only take a human out of any loop; the system could also decide to take lethal actions that a human would not otherwise have permitted.

The NASEM AI T&E report published recently has a little more on the online learning piece (but essentially it’s what I wrote above).

Jack Shanahan:

I suppose it’s roughly analogous to LLMs, though for weapon systems rather than the current ways in which LLMs are being used.

Brad Boyd:

Great comment and thanks for joining!
