I’m in the “always against” camp. My litmus test is to imagine the extreme circumstances and picture that issue making front-page news. Here’s one.
Given enough time, an AI will make a bad nomination that gets through all the vetting and approvals to a strike. Say this error becomes widely publicized right away because of the unintended civilian deaths and injuries. When this happens, we will be forced into a broader public debate about the general morality of the idea.
Will the general public say, “AI nominated the wrong target, but the humans in the chain should have stopped it; watch that thing more closely in the future”? They might take this view if they see it as something like a technical failure in a smart weapon, or another machine-level failure.
Alternatively, could a more negative public view emerge, something along the lines of, “How did we ever get to the point of asking machines for recommendations on whom to target? Let’s stop this now”? That reaction would indicate something deeper at play in our value system.
My personal objection is not based on the chance of a technical failure. That could happen with other systems too; machines fail sometimes, and if I couldn’t deal with that I would be anti-technology generally, which I am far from. My objection is based on a visceral discomfort with outsourcing certain extremely consequential tasks to AI, even with human checks and balances. Other things I would be uncomfortable having AI make recommendations on include legal prosecution decisions, prioritization of organ-donation lists, determinations of the constitutionality of laws, and emergency triage decisions. Some things just need to stay in the human domain.
Lane, thanks for your comment and for describing your visceral, intuitive discomfort with AI in targeting. I also appreciate the list of other things you think should stay in the human domain. I have a different intuitive response, but I think it's important to challenge our visceral reactions to technology proposals and to remain open-minded. I was recently debating my son about whether "calling balls and strikes" in Major League Baseball should be automated or whether, for historical and visceral reasons, it should remain in the human domain. He and I have different "gut" reactions to it. I suspect that by the time my son (11) is my age (39), many, many things will have changed...including whether umps are still calling balls and strikes.
I appreciate this discussion; it’s very important. There is going to be a lot of healthy debate about which process steps in various domains are appropriate to outsource to AI, and about what level of human control and oversight is sufficient. Both will depend heavily on the consequences of the specific process. Bringing it back to targeting, an obviously high-consequence activity, I’m more conservative and want to see more constrained use of AI. Eventually my view that only humans should nominate targets may be seen as unnecessarily old-fashioned, but I feel I’m on solid ground there. I’m fine that others see it differently, and I’m willing to respectfully offer my opposing view, as I’ve done here. Thanks again for the excellent analysis and writing.
Check this article out.
https://link.springer.com/article/10.1007/s10676-018-9494-0
Great post!
(Fellow JAG here - Canadian)
What we're talking about, short of fully delegating use-of-force decision-making to autonomous systems, is human-machine teaming (HMT). Inherent in this concept, particularly as it regards lethal autonomous weapons systems (LAWS), is the concept of meaningful human control (MHC).
I, and I'm sure many others, have come to loathe this amorphous term, but it's really just a collective exercise in trying to effectively describe the arcs of responsibility in HMT.
This gets us to Trust vs. Trustworthiness.
Trustworthiness and trust are intertwined yet distinct concepts, with significant implications for LAWS development and deployment. While trustworthiness (as expressed through reliability) is vital for human operators to rely on LAWS, and thus for establishing MHC, trust can be dangerous: a human who acts as nothing more than a rubber stamp removes the necessary meaningfulness from control. This tension is sure to be heightened by the ever-increasing speed of the battlespace.
I agree that AI could, and should, play an important role in deliberate targeting. Dynamic targeting will present a real challenge, however. I also agree that it will be interesting to see how future end-users interact with systems in HMT environments.
Michael, thanks for your comments. I like your description of "trust vs. trustworthiness" as distinct concepts, and I agree that trust can be dangerous. I would only offer that there is a difference between "general trust" in technology (e.g., AI/ML) and "specific trust" in a particular application of that technology.
I agree. The balance between Trust and Trustworthiness will have to be struck on a capability-specific basis.
I also cannot take credit for this idea. It was originated by my thesis advisor, Dr. Ian Kerr at the University of Ottawa (who has since passed away) along with Dr. Jason Millar.
Check out this paper by Drs. Kerr and Millar that isn't LAWS-specific, but raises many of the questions we'll inevitably have to answer. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2234645
I'm also a big fan of Dr. Rebecca Crootof's work in the field. This article presents some interesting ways to think about HMT: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2234645
Thanks John. Brad and I see eye-to-eye on almost everything related to AI, but I fall squarely in your camp on this one. I was waiting for someone to weigh in with a different opinion.
I have quite a bit of experience with targeting, as both a consumer (fighter aircraft, AOC) and producer (Intel Group Commander, with the Group comprising two active duty and several National Guard targeting squadrons...almost the entire Air Force's targeting expertise in one Group).
I agree with everything you say! I'm glad Brad quoted the Air Force targeting 'bible', AFDP 3-60, Targeting. However, I believe he didn't quite capture the point you so rightly highlight -- it's not the nomination that's the issue, it's the leap from nomination to target approval and strike. The entire purpose of the AOC Target Effects Team (TET) is to review target nominations and propose a joint integrated prioritized target list (JIPTL) to the C/JFACC for approval. That process will always involve humans reviewing target noms, with lawyers right there alongside the targeteers/TEA.
Time-sensitive targeting (TST) is a different animal, of course. But even in that case, the ROE and SPINS will be clear about the level of automation permitted before a target can be struck. I remain confident that AI will *not* be allowed to nominate and approve kinetic attacks without a human in the process (where that human resides is a legitimate question, one which leads directly to some of the concerns Brad highlighted, but that's for a different post).
As you so rightly underscore, IHL/LOAC and ROE/SPINS still apply, regardless of the level of automation in the targeting cycle. Your list of questions is excellent. The only thing I can add is that all of those questions distill down to one: "What is the risk (including risk to mission and risk to force) of striking, or not striking, this nominated target?" That is followed by determining who accepts that risk, and at what level.
All that said, we need to start thinking now about the implications of using generative AI (GenAI) in the targeting cycle. Despite the incredible potential of LLMs, integrating GenAI into the targeting process will be fraught. The DoD will need to tread carefully when deciding how much GenAI is used, and exactly what it's used for.
I'm many years removed from being Gen Z, but I'm with you on this one!
Sir, thanks for your comments. I appreciate your feedback. Many of those who've reached out to me have expressed a similar sentiment about dynamic targeting (i.e., as you said, TST is different). I haven't read much about (or really considered) how militaries might use generative AI in the targeting process, but I will.