This past week, Meta CEO Mark Zuckerberg decided to end Meta's fact-checking program, reasoning that "it's time to get back to our roots around free expression on Facebook and Instagram." Because "fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US," Zuckerberg has also decided to replace fact-checkers with a community notes feature similar to the one on X.
Zuckerberg explicitly said in his announcement that these changes are a direct response to Donald Trump's reelection in the US – his appointment of UFC CEO Dana White to Meta's board and his installation of Republican operative Joel Kaplan as the company's chief emissary in Washington show that the policy is meant to curry favor with the President-Elect. The unspoken truth is that laying off tens of thousands of content moderators will also save Meta as much as $5 billion, which may produce a short-term stock bump and more money in Zuckerberg's pocket.
There are many speech-related reasons that Zuckerberg's policy is misguided. Contrary to what some pundits claim, there is no evidence that conservative speech in the US has been disproportionately policed on Meta platforms. And, contrary to Zuckerberg's remarks, his own content moderators are not part of a "legacy media" conspiracy to "censor" any type of speech.
Moreover, although Justice Louis Brandeis famously argued that the remedy for bad speech is more speech, that principle does not hold in the world of the algorithm, where the worst speech gets a megaphone in the name of increased engagement and attention. Marginalized communities will feel less safe online, and members of those communities will now be responsible for protecting themselves.
However, the worst part of Meta's policy change is that content moderation is not meant only to police the conversations that happen online – it is meant to prevent offline harm as well. There are myriad examples of offline harms fueled by online speech, with January 6th, 2021 being the most prominent example in the US. But the Rohingya genocide in Myanmar, incited on Facebook, Facebook Messenger, and WhatsApp, shows the most dire consequences of the kind of content moderation regime Zuckerberg is reinstituting.
From 2012 to 2017, human rights groups and Facebook users in Myanmar were banging down the company's door. At that time, Myanmar security forces were ethnically cleansing Rohingya Muslims in the country's western Rakhine State. Thousands were murdered, raped, and sexually violated, and by the end of the genocide, more than 700,000 Rohingya had been pushed into Bangladesh, where they remain today. Much of the propaganda and government-sponsored hate speech fueling the violence originated on the Facebook platform, which was functioning as the country's de facto internet.
According to a 2022 Amnesty International report, some users in Myanmar reported hate speech against the Rohingya as many as 100 times. The Muslim Rohingya were referred to as "dogs" and "Bengali invaders." One viral post shared in 2013, but not removed until five years later, read, "We must fight them the way Hitler did the Jews." Facebook did not take action until after most of the violence had subsided, either because reports were ignored or because the offending content was judged not to violate community standards. At the time of the Rohingya genocide, Facebook employed only one content moderator for 18 million Burmese speakers.
To compensate for its dearth of content moderators, Facebook created a country-specific sticker set known as "Panzagar," or "flower speech," with which users could mark posts containing hate speech to promote peace and harmony on the platform. However, the Facebook algorithm merely treated these stickers as another form of engagement, similar to a "like" or a "share," which further amplified the hateful speech.
In 2018, Facebook released a report accepting responsibility, acknowledging that it "wasn't doing enough to prevent [the] platform from being used to foment division and incite offline violence." It continued, "We agree that we can and should do more." The head of the United Nations fact-finding mission agreed, saying that Facebook had played a "determining role" in the genocide and had "substantively contributed" to the deaths of thousands. In response, Facebook modestly updated some of its policies and hired a few additional fact-checkers.
Meta's record has been far from clean since the Myanmar genocide – internal Facebook and third-party investigations have shown that disinformation on Facebook has driven offline violence around the world since 2018, including on January 6th.
However, content moderation against hate speech and misinformation remains the best corporate defense, albeit an imperfect one, against offline violence. The reason: social media algorithms amplify the most incendiary content on the internet for the sake of engagement. Social media companies can never stop people from writing hateful things or spreading falsehoods, but they do not have to hand the worst offenders a megaphone to galvanize the most dangerous elements of society.
Meta’s new policies do just that. In the name of protecting online speech – the right of people to offend without shame – Zuckerberg’s change is going to cost lives. The new Meta “community notes” feature is just 2025’s version of “flower speech.” Just as in Myanmar, hate speech will be allowed to run wild on Meta properties. A sample of the content that is now permissible under Meta’s new rules: “Gays are not normal,” “Trans people are freaks,” and “A trans person isn’t a he or she, it’s an it.” It is easy to see how such dehumanizing language could lead to violence.
In 2017, Facebook's crimes against the Rohingya of Myanmar were crimes of indifference – a judgment that political turmoil in a faraway country was not worth addressing with money and attention. Meta's complicity in future acts of violence arising from unmoderated speech on its various platforms will be the result of a cynical political decision to put discriminatory online speech in the US ahead of people's real lives.