Ayyyyyy Eyeeeee
The lie that raced around the world before the truth got its boots on.
--
Tomorrow (June 5) at 7:15PM, I’m in London at the British Library with my novel Red Team Blues, hosted by Baroness Martha Lane Fox.
On Tuesday (June 6), I’m on a RightsCon panel about interoperability.
On Wednesday (June 7), I’m keynoting the Re:publica conference in Berlin.
On Thursday (June 8) at 8PM, I’m at Otherland Books in Berlin.
It didn’t happen.
The story you heard, about a US Air Force AI drone warfare simulation in which the drone resolved the conflict between its two priorities (“kill the enemy” and “obey its orders, including orders not to kill the enemy”) by killing its operator?
It didn’t happen.
The story was widely reported on Friday and Saturday, after Col. Tucker “Cinco” Hamilton, USAF Chief of AI Test and Operations, included the anecdote in a speech to the Future Combat Air System (FCAS) Summit.
But once again: it didn’t happen:
“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.
The story got a lot more play than the retraction, naturally. “A lie is halfway round the world before the truth has got its boots on.”
Why is this lie so compelling? Why did Col. Hamilton tell it?
Because it’s got a business model.
There are plenty of things to worry about when it comes to “AI.” Algorithmic decision-support systems and classifiers can replicate the bias in their training data to reproduce racism, sexism, and other forms of discrimination at scale.
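To see the mechanism, here’s a toy sketch (hypothetical, not from any real lender’s system; the data, group labels, and approval rates are all fabricated for illustration): a “model” that learns per-group approval rates from a biased historical record will faithfully automate that bias for every future applicant.

```python
# Hypothetical illustration only: a trivial "classifier" that learns
# approval rates from historical loan decisions. If the history encodes
# discrimination, the model reproduces it -- at scale, automatically.

from collections import defaultdict

# Fabricated training data: (group, approved) pairs from a biased record.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def fit(records):
    """Learn per-group approval rates -- the simplest possible model."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = fit(history)
print(model)  # {'A': 0.8, 'B': 0.4} -- the historical bias, now automated
```

Swap the frequency table for a million rows and a fancier learner and the dynamic is the same, only harder to audit.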
When a bank officer practices racial discrimination in offering a loan, or when an HR person passes over qualified women for senior technical roles, or a parole board discriminates against Black inmates, or a Child…