Why having “humans in the loop” in an AI war is an illusion
April 17, 2026

(MIT Technology Review) – We don’t really understand AI’s inner workings, so we’re effectively flying blind.
Most of the public conversation about AI-driven lethal autonomous weapons centers on how much humans should remain “in the loop.” Under the Pentagon’s current guidelines, human oversight supposedly provides accountability, context, and nuance while reducing the risk of hacking.
But the debate over “humans in the loop” is a comforting distraction. The immediate danger is not that machines will act without human oversight; it is that human overseers have no idea what the machines are actually “thinking.” (Read More)