First, let me apologize for the radio silence on KRCP. Brad had a lot of teaching obligations this past semester, and I was in the middle of an international move that somehow took a whopping three months.
But we’re back, so let’s jump into this. We left off by discussing cognitive autonomy and the risks of machines “thinking for us” in the targeting cycle.
I’ve been exercising my own cognitive autonomy and thinking about this sentence from Brad’s last post:
“… cognitive autonomy is Excalibur. Whoever gets to decide what happens, and why, owns the future.”
There is a lot to unpack here… but let me begin by asking: are you sure? Are we overplaying how much cognitive autonomy we really have? Now, it has been many years since I last took a philosophy class (there’s a reason I chose international relations instead of philosophy), but so many questions come to mind here:
Are we actually sacrificing a certain degree of cognitive autonomy when using AI-enabled autonomous systems?
Haven’t other factors already chipped away at the decision space, or choice architecture, making this idea of cognitive autonomy a bit of an illusion – or, at the very least, already constrained? In other words, do other factors limit our cognitive autonomy that we simply don’t consider limiting or problematic because a competent authority was responsible for the constraint?
I’m thinking of geopolitical realities, the national political climate, individual leaders, international law, etc.
I guess by the time we get to the targeting cycle… isn’t the decision space already so delineated by the other factors that have shaped it that there is little cognitive autonomy left to exercise? Does that really mean the machine is “thinking” for us at this point?
Who needs to do the thinking? Any human – as long as it’s human?
Are there circumstances where a constrained choice architecture would lead to better outcomes? Should we consider situations where sacrificing a degree of cognitive autonomy is appropriate or even necessary?
Does human-machine teaming compromise cognitive autonomy?
Individuals react differently to choices – some treasure having the ability to choose, while others dread the prospect of making a choice. It makes me think of a conversation I had with a former Danish colleague over lunch. I was complaining about Danish supermarkets (IYKYK), and she told me, “I much prefer our supermarkets to American ones. We get two choices of cereal; why do you need more? I would rather choose between two than twenty.” I struggled to understand – what if you don’t want those two options? What if they are both bad options?
I realize this is a superficial example, but it may be worth considering how cognitive autonomy is perceived, utilized, and preserved at an individual level. Perhaps my colleague would have preferred that a machine picked her cereal and sent it to her home so she did not have to make a choice at all. I can think of folks who would be more than fine with that. Not to mention, as my Danish anecdote might suggest, the potential for national differences in this discussion. Perhaps preserving cognitive autonomy is not as high a priority in other parts of the world (not necessarily Denmark!) as it is in the US and the American military.
This discussion also makes me think about how quickly humans succumb to cognitive fatigue. I know this has been a fruitful research area at MIT, where researchers are developing AI assistants that can sense cognitive fatigue and either offer advice to address it or step in to replace the human if the fatigue is bad enough. The project, already a few years old now, explores the question: What if the machine is working perfectly, and the human is struggling? Could this be a situation where it would be better for a machine to “think” for us? It is difficult to answer because so much depends on the circumstances. But one of the researchers explains the project this way:
"In the area of human-machine teaming, we often think about the technology, for example, how do we monitor it, understand it, make sure it's working right. But teamwork is a two-way street, and these considerations aren't happening both ways. What we're doing is looking at the flipside, where the machine is monitoring and enhancing the other side — the human.”
The system would “suggest interventions, or even take action in dire scenarios, to help the individual recover or to prevent harm.”
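To make the idea more concrete, here is a minimal sketch of what such a monitoring loop might look like. To be clear, this is my own toy illustration and not the MIT team’s design; the fatigue signal, the thresholds, and the responses are all invented for the example.

```python
# Toy illustration only: the fatigue signal, thresholds, and responses are
# invented placeholders, not anything from the MIT project.
from dataclasses import dataclass
import random


@dataclass
class OperatorState:
    """Stand-in for whatever physiological or behavioral signals a real system might fuse."""
    fatigue: float  # 0.0 = fully alert, 1.0 = completely exhausted


def read_operator_state() -> OperatorState:
    # A real system might estimate this from eye tracking, reaction times, etc.
    # Here we simply simulate a noisy fatigue estimate.
    return OperatorState(fatigue=random.random())


def monitor_step(state: OperatorState) -> str:
    """Decide how the machine 'teammate' responds to the human's current state."""
    if state.fatigue < 0.4:
        return "no action: human retains full decision authority"
    elif state.fatigue < 0.8:
        return "suggest intervention: recommend a break or a second set of eyes"
    else:
        # The 'dire scenario' branch: the machine steps in, which is exactly
        # where the cognitive-autonomy question gets uncomfortable.
        return "take action: machine assumes the task until the human recovers"


if __name__ == "__main__":
    for _ in range(5):
        state = read_operator_state()
        print(f"fatigue={state.fatigue:.2f} -> {monitor_step(state)}")
```

Even in this cartoon version, the uncomfortable part is the last branch: someone decided, in advance, the threshold at which the machine takes over, and that someone is not necessarily the operator whose autonomy is on the line.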
I know other research projects are taking more drastic steps, such as withholding information from human operators to keep stress levels low and performance optimal—is that really cognitive autonomy if the human does not have all the information? Perhaps others can share their knowledge of ongoing research in the comments so we can create a record of this really interesting work.
I realize this is not quite what you are talking about with the targeting cycle—this is far from an AI-enabled autonomous system that can ‘choose’ targets. But I guess these questions and some of this research on human-machine teaming remind me to take a pause when we make an argument like “humans need to do the thinking.” I am not sure it comes down to such a simple truth, and I am not sure which humans may be best suited for the job. Yes, I know – training and making sure the right folks are in the right job – there are many steps one can take to ensure the right people are making these incredibly difficult decisions. But still, each person has a different level of cognitive capacity (on any given day), and our thresholds for fatigue vary.
I also think it is equally possible to get the right people to design systems guided by laws, rules, policies, and values that humans have developed and authorized, and that delineate the choice architecture for machines. It is hard, for sure. I am working on a project grappling with these very questions, but I think it is possible (a rough sketch of what I mean follows below). What do you all think?
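To make that a bit more concrete, here is a minimal sketch of how human-authorized constraints could delineate a machine’s choice architecture. The rules and candidate options are entirely invented for illustration and do not reflect real policy, doctrine, or the project I mentioned.

```python
# Toy illustration only: the rules and candidate options below are invented
# placeholders, not real policy or doctrine.
from typing import Callable

# "Laws, rules, policies, and values" encoded as simple predicates,
# written and authorized by humans ahead of time.
HUMAN_AUTHORIZED_RULES: list[Callable[[dict], bool]] = [
    lambda option: option["collateral_risk"] <= 0.1,       # policy ceiling on risk
    lambda option: option["target_type"] != "protected",   # legal prohibition
    lambda option: not option["requires_human_review"] or option["reviewed"],
]


def constrain_choices(candidate_options: list[dict]) -> list[dict]:
    """The machine may only choose among options that pass every human-authorized rule."""
    return [
        option for option in candidate_options
        if all(rule(option) for rule in HUMAN_AUTHORIZED_RULES)
    ]


if __name__ == "__main__":
    candidates = [
        {"name": "A", "collateral_risk": 0.05, "target_type": "military",
         "requires_human_review": False, "reviewed": False},
        {"name": "B", "collateral_risk": 0.30, "target_type": "military",
         "requires_human_review": False, "reviewed": False},
        {"name": "C", "collateral_risk": 0.02, "target_type": "protected",
         "requires_human_review": True, "reviewed": True},
    ]
    allowed = constrain_choices(candidates)
    print("machine may choose among:", [o["name"] for o in allowed])  # -> ['A']
```

The content of the rules is the hard part, of course, but the structure is the point: the cognitive autonomy that matters most is exercised upstream, by the humans who write and authorize the constraints the machine must operate within.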
I love this format of an ongoing intellectual conversation. I don’t mind the wait between responses either because you are both real professionals in this area, not full-time social media influencers. That makes each response more of an investment with more value attached to it. Keep it up!
Well said, Lena! Your discussion of how our cognitive autonomy is already constrained by myriad internal and external factors, not directly related to AI or autonomous systems, is excellent.
Along related lines, there is a belief that more AI and autonomy will remove crucial decision space from humans. It's well founded, and Brad laid out his case in ways that make a lot of sense. Yet I take the view that just as AI is considered magic until it's merely another app or embedded software, humans are capable of adapting very well to a world of increasing autonomy.
As you noted, this will of course require much more attention to human-machine integration: delineating roles, responsibilities, dependencies, and interdependencies between humans and 'smart' machines. If we don't get that right, then Brad's scenarios become increasingly probable.
Francois Chollet's posting on the differences between cognitive automation, cognitive assistance, and cognitive autonomy is one of the best summaries I've read. I agree with his argument that what we need is more cognitive assistance, not cognitive autonomy (and it's also the more likely future).
https://fchollet.substack.com/p/ai-is-cognitive-automation-not-cognitive