How copyright filters lead to wage-theft
Last week, “Marina” — a piano teacher who publishes free lessons on her Piano Keys Youtube channel — celebrated her fifth anniversary by announcing that she was quitting Youtube because her meager wages were being stolen by fraudsters.
Marina posted a video with a snatch of her performance of Beethoven’s “Moonlight Sonata,” composed in 1801. The composition is firmly in the public domain, and the copyright in the performance is firmly Marina’s, but it still triggered Youtube’s automated copyright filter.
A corporate entity — identified only by an alphabet soup of initialisms and cryptic LLC names — had claimed Ole Ludwig Van’s masterpiece as their own, identifying it as “Wicca Moonlight.”
Content ID, the automated Youtube filter, flagged Marina’s track as an unauthorized performance of this “Wicca Moonlight” track. Marina appealed the automated judgment, which triggered a message to this shadowy LLC asking if it agreed that no infringement had taken place.
But the LLC renewed its claim of infringement. Marina now faces several unpleasant choices:
- She can allow the LLC to monetize her video, stealing the meager wages she receives from the ads that appear on it
- She can take down her video
- She can provide her full name and address to Youtube in order to escalate the claim, risking that her attackers will get her contact details and that, if she loses the claim, she could lose her Youtube channel
The incident was a wake-up call for Marina, who is quitting Youtube altogether, noting that it has become a place that favors grifters over creators. She’s not wrong, and it’s worth looking at how that happened.
Content ID was created to mollify the entertainment industry after Google acquired Youtube. Google would spend $100m on filtering tech that would allow rightsholders to go beyond the simple “takedown” permitted by law, and instead share in revenues from creative uses.
But it’s easy to see how this system could be abused. What if people falsely asserted copyright over works to which they had no claim? What if rightsholders rejected fair uses, especially criticism?
In a world where the ownership of creative works can take years to untangle in the courts and where judges’ fair use rulings are impossible to predict in advance, how could Google hope to get it right, especially at the vast scale of Youtube?
The impossibility of automating copyright judgments didn’t stop Google from trying to perfect its filter, adding layers of complexity until Content ID’s appeal process turned into a cod-legal system whose flowchart looks like a bowl of spaghetti.
The resulting mess firmly favors attackers (wage stealers, fraudsters, censors, bullies) over defenders (creators, critics). Attackers don’t need to waste their time making art, which leaves them with the surplus capacity to master the counterintuitive “legal” framework.
You can’t fix a system broken by complexity by adding more complexity to it. Attempts to do so only make the system more exploitable by bad actors, like blackmailers who use fake copyright claims to extract ransoms from working creators.
But it would be a mistake to think that filterfraud was primarily a problem of shadowy scammers. The most prolific filter scammers and wage-thieves are giant music companies, like Sony Music, who claim nearly *all* classical music:
The Big Tech companies argue that they have an appeals process that can reverse these overclaims, but that process is a joke. Instagram takedowns take a few seconds to file, but *28 months* to appeal.
The entertainment industry are flagrant filternet abusers. Take Warner Chappell, whose subsidiary demonetizes videos that include the numbers “36” and “50”:
Warner Chappell are prolific copyfraudsters. For decades, they fraudulently claimed ownership over “Happy Birthday” (!):
They’re still at it — in 2020, they used a fraudulent claim to nuke a music theory video, and then a human being working on behalf of the company renewed the claim *after* being informed that they were mistaken about which song was quoted in the video:
The fact that automated copyright claims can remove material from the internet leads to a lot of sheer fuckery. In 2019, anti-fascists toyed with blaring copyrighted music at far right rallies to prevent their enemies from posting video of the rallies online.
At the time, I warned that this would end badly. Just a month before, there had been a huge scandal because critics of extremist violence found that automated filters killed their videos because they featured clips of that violence:
Since then, it’s only gotten worse. The Chinese Communist Party uses copyfraud to remove critical videos from Youtube:
and so does the Beverly Hills Police Department:
But despite all that, the momentum is for *more* filtering, to remove far fuzzier categories of content. The EU’s Terror Regulation has just gone into effect, giving platforms just *one hour* to remove “terrorist” content:
The platforms have pivoted from opposing filter rules to endorsing them. Mark Zuckerberg says that he’s fine with removing legal protections for online platforms unless they have hundreds of millions of dollars to install filters.
The advocates for a filternet insist that all these problems can be solved if geeks just *nerd harder* to automate good judgment, fair appeals, and accurate attributions. This is pure wishful thinking. As is so often the case in tech policy, “wanting it badly is not enough.”
In 2019, the EU passed the Copyright Directive, whose Article 17 is a “notice and staydown” rule requiring platforms to do instant takedowns on notice of infringement *and* to prevent content from being re-posted.
There’s no way to do this without filters, but there’s no way to make filters without violating the GDPR. The EU is still trying to figure out how to make it work, and the people who said this wouldn’t require filters are now claiming that filters are fine.
Automating subtle judgment calls is impossible, not just because copyright’s limitations — fair use and others — are grounded in subjective factors like “artistic intent,” but because automating a flawed process creates flaws at scale.
Remember when Jimmy Fallon broadcast himself playing a video game? NBC automatically claimed the whole program as its copyrighted work, and thereafter, gamers who streamed themselves playing that game got automated takedowns from NBC.
The relentless expansion of proprietary rights over our virtual and physical world raises the stakes for filter errors. The new Notre Dame spire will be a copyrighted work — will filters block videos of protests in front of the cathedral?
And ever since the US’s 1976 Copyright Act abolished the registration requirement, it’s gotten harder to figure out who controls the rights to any given work — so hard that even the “royalty free” music provided for Youtubers to use safely turned out to be copyrighted:
We need a new deal for content removal, one that favors working creators over wage-thieves who have the time and energy to master the crufty, complex private legal systems each platform grows for itself.
Back in 2019, Slate Future Tense commissioned me to write an sf story about how this stuff might work out in the coming years. The result, “Affordances,” is sadly still relevant today:
Here’s a podcast of the story as well:
Meanwhile, governments from Australia to the UK to Canada are adopting “Harmful Content” rules that are poised to vastly expand the filternet, insisting that it’s better than the alternative.
Cory Doctorow (craphound.com) is a science fiction author, activist, and blogger. He has a podcast, a newsletter, a Twitter feed, a Mastodon feed, and a Tumblr feed. He was born in Canada, became a British citizen and now lives in Burbank, California. His latest nonfiction book is How to Destroy Surveillance Capitalism. His latest novel for adults is Attack Surface. His latest short story collection is Radicalized. His latest picture book is Poesy the Monster Slayer. His latest YA novel is Pirate Cinema. His latest graphic novel is In Real Life. His forthcoming books include The Shakedown (with Rebecca Giblin), a book about artistic labor markets and excessive buyer power; Red Team Blues, a noir thriller about cryptocurrency, corruption and money-laundering; and The Lost Cause, a utopian post-GND novel about truth and reconciliation with white nationalist militias.