
Facebook’s genocide filters are really, really bad

An AI that can’t recognize its own training data is a very, very bad AI.

Cory Doctorow
5 min read · Mar 23, 2022
[Image: The Three Wise Monkeys; two of their faces have been replaced by the menacing red eye of HAL 9000 and Mark Zuckerberg; the hand of the third has been replaced by a Facebook thumbs-up icon. Behind them is a collage of Facebook ads attributed to ‘Gen O. Cide,’ whose image is a battery of Myanmar anti-aircraft guns emblazoned with the phrase ‘We Need To Kill More’ in dripping red letters. Credit: Anthony Quintano (modified) https://commons.wikimedia.org/wiki/File:Mark_Zuckerberg_F8_2019_Keynote_(3]

In the fall of 2020, Facebook went to war against Ad Observatory, an NYU-hosted crowdsourcing project that lets FB users capture the paid political ads they see through a browser plugin that sanitizes them of personal information and then uploads them to a portal that disinformation researchers can analyze.

https://pluralistic.net/2020/10/25/musical-chairs/#son-of-power-ventures

Facebook’s attacks were truly shameless. The company told easily disproved lies — for example, claiming that the plugin gathered sensitive personal data, despite publicly available, audited source code proving this was absolute bullshit.

Why was Facebook so desperate to prevent a watchdog from auditing its political ads? Well, the company had promised to curb the rampant paid political disinformation on its platform as part of a settlement with regulators. Facebook said that its own disinfo research portal showed it was holding up its end of the bargain, and the company hated that Ad Observatory showed that this portal was a bad joke:

https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program
