In late March, as many governments were enforcing lockdowns and restrictions on movement to slow the spread of the coronavirus, Brazilian President Jair Bolsonaro posted a series of videos to social media that showed him strolling around a crowded marketplace near Brasilia. In one clip, Bolsonaro questioned the quarantine measures that were being implemented by local and regional officials in Brazil, which now has the second-worst outbreak of COVID-19 in the Western Hemisphere, with more than 87,000 confirmed cases and 6,000 deaths. He also touted the benefits of hydroxychloroquine, an anti-malarial drug, despite a lack of scientific evidence that it is effective against COVID-19.
The next day, Facebook, Twitter and YouTube made the surprising decision to delete some of Bolsonaro’s videos from their platforms, citing new policies against coronavirus misinformation that could endanger the public. The new rules represent a shift in how social media companies, which had previously been hesitant to regulate posts from politicians and elected officials, counteract the spread of false or misleading news on their platforms.
To many observers, it was cause for optimism. Finally, tech giants like Facebook, Twitter and Google were taking the dangers of misinformation seriously and acting aggressively to limit its spread. But as the pandemic continues to escalate—along with what World Health Organization officials have called a corresponding “infodemic”—some experts are questioning the new policies’ effectiveness.