Facebook’s genocide filters are really, really bad
An AI that can’t recognize its own training data is a very, very bad AI.
In the fall of 2020, Facebook went to war against Ad Observatory, an NYU-hosted crowdsourcing project that lets FB users capture the paid political ads they see through a browser plugin that sanitizes them of personal information and then uploads them to a portal that disinformation researchers can analyze.
https://pluralistic.net/2020/10/25/musical-chairs/#son-of-power-ventures
Facebook’s attacks were truly shameless. The company told easily disproved lies — for example, claiming that the plugin gathered sensitive personal data, despite publicly available, audited source code that proved this was absolute bullshit.
Why was Facebook so desperate to prevent a watchdog from auditing its political ads? Well, the company had promised to curb the rampant paid political disinformation on its platform as part of a settlement with regulators. Facebook pointed to its own disinfo research portal as proof that it was holding up its end of the bargain, and the company hated that Ad Observatory showed this portal to be a bad joke:
https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program