Thanks to Lena for getting us back on track. We want to generate more content for everyone so we’re taking a look at some different ways to do that. Coming up we’re going to facilitate a discussion on fielding autonomous weapon systems. Why aren’t we seeing them yet? What conditions need to be met for them to appear on the battlefield? Technological, developmental, manufacturing, legal, policy, ethical, and so on. But before we get there, let’s put a final series of thoughts on this cognitive autonomy idea and then we can move on.
Lena already summarized some issues nicely. Are we actually sacrificing cognitive autonomy when we deploy automated systems? Does human-machine teaming compromise cognitive autonomy?
The short answer is yes. We sacrifice cognitive autonomy when we automate something, especially reasoning. This includes human-machine teaming. And you know what? Sometimes that’s good.
Clearly we don't want to have to think about when to add water and soap when we wash our clothes. Nor do we want to have to plot every updated target location on a map when we receive new reports. If we did those functions ourselves, could we better ensure that each action was optimally tied to our intent and goals for our life or mission? Yes, but the energy we'd expend on that could be better used elsewhere, and the consequences of a clothes-washing failure are fairly minor, so let's just let the machine do it.
What about the consequences of an improper target location? That may be another story. In this case the process of validating a target as properly identified and properly located may cross over into a level of reasoning where the machine's action fails to align with human intent. The machine's "thinking" has violated MHC or AHJ.
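To make that distinction concrete, here's a minimal, purely illustrative sketch in Python. The names (`TargetReport`, `validate_for_engagement`) and the confidence threshold are mine, not anyone's actual system; the point is just where automation stays benign versus where the decision should remain gated by a human.

```python
from dataclasses import dataclass


@dataclass
class TargetReport:
    """A hypothetical incoming report with an estimated location and a confidence score."""
    target_id: str
    lat: float
    lon: float
    confidence: float  # 0.0-1.0, from some upstream sensor/fusion step


def plot_update(report: TargetReport, map_layer: dict) -> None:
    """Low-consequence automation: just update the map. The washing-machine case."""
    map_layer[report.target_id] = (report.lat, report.lon)


def validate_for_engagement(report: TargetReport, human_confirms) -> bool:
    """Higher-consequence step: the machine may recommend, but a human decides.

    `human_confirms` stands in for whatever MHC/AHJ process is actually used;
    the machine never closes this loop on its own.
    """
    machine_recommendation = report.confidence >= 0.9  # illustrative threshold
    return machine_recommendation and human_confirms(report)
```

The first function is the kind of cognitive autonomy we're happy to give up; the second is where giving it up starts to cost us.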
I don't want to rehash examples from previous posts, but I wanted to reference these scenarios quickly because ultimately cognitive autonomy shouldn't be considered an objective to reach. We aren't trying to get to full cognitive autonomy. Nor do I think anyone is trying to push humanity to zero cognitive autonomy.
Why are those bad? Well, let’s start with the easy one. Zero cognitive autonomy would mean being a simple machine. A toaster perhaps. Absolutely no influence on the conditions of your existence. Not a great life, but perhaps you wouldn’t even notice, happily toasting along.
Full cognitive autonomy wouldn't be much better. In this scenario we're probably talking about a disembodied brain with no external influences that could affect its thinking. Completely autonomous. At that point you're probably right back to being a toaster with a better user interface.
So if those are the two poles of cognitive autonomy, where is the optimal place to be between them?
That depends on context.
Let’s visit Lena’s Danish supermarket scenario.
While I think having many cereal options is great (Count Chocula, Fruity Pebbles, Cinnamon Toast Crunch, mmmm the list goes on), clearly the Danes only want Fiber O’s or Colon Flakes. And that’s cool. The key is that their architecture for that choice was knowingly limited, in this case perhaps outsourced to a machine. If the Danes don’t get to enjoy the delights of monster-themed cereal, well that’s on them. No harm done. What we want to avoid is unwittingly allowing a machine to bound or control our pursuit of our goals.
If the Danes were wistfully looking across the Atlantic wondering why Americans seem so joyful at breakfast, they may have no idea that monster-themed cereal is the source of that tranquility. Then we have a problem. Their lives are not what they want them to be due to constraints placed on them by a system not of their own making. Begin the first human-machine war!
Obviously this is absurd, but to sum up: the first scenario, while not complete cognitive autonomy, seems to be about the right amount, while the second, though certainly not zero cognitive autonomy, is perhaps not quite enough.
So a few key questions emerge to frame this inquiry:
What is the right amount of cognitive autonomy in any given situation?
How do you know how much you have or how much you have surrendered?
How do you adjust your cognitive autonomy?
When should we sacrifice our cognitive autonomy?
The answers to these questions are not well understood, and yet we are developing advanced decision support systems without considering all the variables. In some cases, researchers are assuming that humans cannot decide for themselves how much cognitive autonomy they should have or when it should be reduced.
At the University of New Mexico researchers are working on drone systems that will reduce human control depending on how well the drones think the human’s cognition is working. As the human becomes more stressed and less capable, the system will increase its decision autonomy and reduce human decision control.
This shift in autonomy is meant to help the human, but it maximizes action at the expense of the human's understanding of what is happening. Is that the right answer? I'm not sure, though it is a tempting feature when considering the rigors of warfare.
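I don't know the internals of the UNM work, but the basic pattern is easy to sketch. Something like the following, where the mode names, thresholds, and the load estimate itself are all invented for illustration: a controller shifts decision authority away from the human as an estimated cognitive-load signal rises.

```python
def autonomy_mode(estimated_load: float) -> str:
    """Map an estimated human cognitive load (0 = relaxed, 1 = overwhelmed)
    to a decision-authority mode. Thresholds are invented for illustration."""
    if estimated_load < 0.3:
        return "human_decides"    # machine proposes, human approves each action
    elif estimated_load < 0.7:
        return "human_vetoes"     # machine acts unless the human objects in time
    else:
        return "machine_decides"  # machine acts and reports afterwards


# The worry stated above, in code form: at high load the human learns what
# happened after the fact, so action is maximized exactly when understanding
# is lowest.
for load in (0.2, 0.5, 0.9):
    print(load, autonomy_mode(load))
```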
My forthcoming paper will consider the questions above and attempt to provide a framework for thinking about how much cognitive autonomy we might want in any given situation, how we can develop systems that maximize automation and autonomy while retaining alignment to human intent, and how we can ensure that human cognition remains the master of its own destiny. When it’s published I’ll post a link here if you want to marinate in this discussion some more.
In the meantime we’ll get ready to host some guest authors to start a larger series on what it takes to field autonomous weapons on the modern battlefield. Spoiler: it’s a lot harder than people think, evidenced by the lack of any meaningful lethal autonomy in Ukraine and Gaza (that we know of).
Until then…