Thanks Brad. Today I submitted my 3,500-word piece on AI and geopolitics to the Italian geopolitics journal Limes. I hope to be able to share it once published. I would like to believe it will be a useful contribution to this dialogue.
I made a few points on LI about TX Hammes' AWS article. While I agree with many aspects of it, I disagree strongly with his decision to drag DoDD 3000.09 into his argument. A non-sequitur as far as I'm concerned.
I would also like to hear Lena's (and Heather Roff's and Dave Barnes') views on it. While I understand his "we have a moral obligation to build AWS" argument, I'm still somewhat uncomfortable with that framing. You could just as easily counter-argue that we have a moral obligation not to kill fellow humans.
Can't wait to see it! I'm with you on the 3000.09 take; it's not an obstacle. Though I'm a bit skeptical of its efficacy too.
When I read articles like Hammes', I'm always struck by how many different technologies and concepts get lumped together, especially when citing examples from Ukraine. There just isn't a unifying concept of what is or isn't autonomous, including in DoDD 3000.09, hence all the exceptions. The term is really shorthand for any system whose output is not wholly reliable without direct supervision (and that reliability is itself an illusion). Which means that as system complexity increases, the sense of autonomy increases. The problem is that the concept of autonomy carries many other connotations we've talked about here regarding responsibility and agency. And oftentimes AI and autonomy are used interchangeably, which is also wrong. We need to abandon the term in serious discussions and move to the kind of risk-based analysis Jane Pinelis and I have been talking about for a while, which appears to have popped up in a recent report by SCSP and Johns Hopkins: https://www.scsp.ai/wp-content/uploads/2023/11/HCAI-Placemat.pdf
This framework focuses on AI, which is fine, but I think if we use a similar structure we can better regulate complex systems that automate lethal action on the battlefield. I need to hit Jane up again to restart our talks for an article.
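To make the contrast with the binary autonomous/not-autonomous label concrete, here is a minimal sketch of what a risk-based classification could look like in practice. Every risk dimension, score, and tier threshold below is a hypothetical illustration for discussion purposes; none of it is drawn from DoDD 3000.09 or the SCSP/JHU report.

```python
# Hypothetical sketch of a risk-based analysis: grade a system along several
# risk dimensions and map the aggregate to a review tier, rather than asking
# the ill-defined question "is it autonomous?". Dimensions, weights, and
# thresholds are all illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SystemRiskProfile:
    # Each factor scored 0 (low) to 3 (high); the dimensions are made up here.
    consequence_of_failure: int   # severity if the system errs (e.g., lethal effects)
    supervision_latency: int      # how long the system acts without human review
    environment_complexity: int   # cluttered urban terrain vs. open sea
    output_unpredictability: int  # how hard behavior is to characterize in test

    def risk_tier(self) -> str:
        """Map the aggregate score to a review tier (thresholds are arbitrary)."""
        score = (self.consequence_of_failure
                 + self.supervision_latency
                 + self.environment_complexity
                 + self.output_unpredictability)
        if score >= 9:
            return "Tier 1: senior-level review, strictest test and evaluation"
        if score >= 5:
            return "Tier 2: enhanced oversight and operational constraints"
        return "Tier 3: standard acquisition and safety review"


# Example: a loitering munition with lethal effects, long unsupervised
# windows, cluttered terrain, and hard-to-characterize behavior.
profile = SystemRiskProfile(3, 2, 3, 2)
print(profile.risk_tier())  # -> Tier 1
```

The point of the sketch is only that the regulatory question shifts from a definitional one ("autonomous or not?") to a graded one ("how much risk, and what oversight does that level of risk demand?"), which sidesteps the terminology fights entirely.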