On Monday, the European Union’s member states approved a package of controversial reforms to the bloc’s copyright laws, known as the Copyright Directive, that the European Parliament passed last month. It came just after Australia implemented a new law to police certain content on social media following the mass shooting at two mosques in New Zealand, which the attacker had livestreamed on Facebook. And last week, the United Kingdom entered the fray, releasing a widely anticipated white paper on “online harms” about keeping citizens safe online.
Together, these three developments represent ways that democratic governments are building out content-filtering regimes on the internet to confront the spread of hate speech, disinformation and other scourges online. But they also underscore a critical trend with major international implications for internet freedom: the need for adequate checks and balances on these kinds of policies, both in what content governments filter and in how they do that filtering.
The EU’s Copyright Directive, which essentially aims to modernize copyright rules for the internet, was approved by the European Parliament in late March by a vote of 348 to 274. Protests, in person and online, continued right up to the vote, focused mainly on fears that the new rules would lead to more censorship online. Article 11 of the new law could require companies like Google to pay for linking to news content; critics have warned that the rules could, among other things, sweep in free, open-access Creative Commons websites, which would have no way of opting out of the new system. Article 13 is arguably even more controversial: it essentially aims to establish a filtering system that would automatically analyze social media posts and check them against databases of copyrighted material. A who’s who of early internet pioneers has criticized it as automated internet surveillance.