Does Facebook Polarize the American Electorate?
What a recent study co-authored by Meta does and does not say about social media news feeds
Last week, Meta and a host of academic co-authors released a set of studies in the journals Nature and Science, including one entitled “Like-minded sources on Facebook are prevalent but not polarizing.”1 Using Facebook data from the run-up to the 2020 US presidential election, the study shows that although the Facebook news feed does tend to show like-minded sources to its users, the platform does not further polarize the average user based on the content it displays. From the abstract:
“We found that [removing ‘like-minded’ sources] increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.”
Before we jump to the conclusion that the echo chambers inherent to social media news feeds do not increase political polarization in the US, it is important to understand what the study actually tells us about Facebook’s role in today’s political discourse — as well as its limitations in explaining social media’s polarizing influence in the past.
First, a history lesson to set some context.
There are two ways that social media helped polarize the American electorate.
The first element of online polarization occurred when social media changed how readers and viewers consumed information. Before the days of social media, people got their news from television and newspapers. Although conspiracy theories and niche opinions existed, they were not prevalent on the public platforms where people got their news each day, even on Fox News. Ordinary citizens formed their beliefs based on what these outlets reported. Although certain outlets were more liberal or conservative than others — see, again, Fox — they were not the primary force polarizing the American electorate.2
Social media’s rise around 2010 created distribution platforms for news content alongside content that users themselves could create, no matter how far-fetched or incendiary. Now, rather than deriving their beliefs from a limited number of sources, readers could find any source to justify their pre-held views, regardless of those sources’ grounding in real-world facts. All of those content sources were then placed in competition with one another for views and clicks within social media feeds. Outlets and content creators quickly learned that readers were most likely to click on the stories that conformed to their prior beliefs and made it seem like the world was on fire.
The second element of online polarization, pouring gasoline on the dynamics created by social media, was the prevalence of algorithms built to promote user engagement and ad impressions by delivering the most addictive content straight to users rather than making them scroll or search. Facebook’s business model turned it, essentially, into a confirmation bias machine, regardless of the facts on any particular topic or any notion of publisher “quality.”3 The business incentive to drive user engagement created what we now know as “echo chambers,” with Democrats and Republicans consuming what Facebook calls “like-minded content.”
It has been well-documented, including in Frances Haugen’s “Facebook Papers,” that algorithmic amplification and the related filter bubbles have radicalized people online and created online-first hate groups, including QAnon.4 It is also clear, especially leading up to the 2016 election, that social media further polarized people who were already self-identified partisans and silenced moderate voices.5 Less clear, however, is whether merely using social media polarized the average American user, who did not—and still does not—care much about politics or consume significant amounts of news on the platform. There is also limited public information on how the safeguards Facebook implemented leading up to the 2018 and 2020 elections affected polarization.
So, with that context, we return to the current Meta study. The study convincingly demonstrates that the 2020 iteration of Facebook did not further polarize the American electorate or change the political views of the average Facebook user. Even though most Facebook users today consume “like-minded” content, the news diet of the average American user does not contain much politics or news, the kinds of “like-minded” content most likely to polarize. Additionally, Facebook deployed controls beyond the contours of the study to limit exposure to content that might spread disinformation or incite violence.
Yet the study does not turn back the clock to social media before 2020, when the composition of the news feed may have been weighted more heavily toward political and news content. It does not cover the period before 2016, when there were limited quality safeguards against misinformation and untrustworthy content, or the period before news feeds were boosted by algorithms, when readers were introduced to a variety of new sources. It does not break out the polarizing effects of the news feed on readers who consume a disproportionate amount of political content today — potentially the most radical users and the ones most likely to tip further toward extremism. Moreover, it does not discuss potential offline causes of polarization, a discussion that would have helped refocus the spotlight away from Facebook.
So, given the study’s findings and the questions that remain, what should we take away? I recommend you give the study a read and conclude for yourself (or at least read the Washington Post coverage). Nevertheless, here are my top three takeaways:
Facebook’s current form, which does not promote as much politics or news, does not polarize the average user. However, past forms of the platform and its algorithm most likely did, especially as they introduced sources to users that conformed with and amplified their partisan biases.
Content diet matters. If a user consumes a high level of politics or news on social media, chances are that content is coming from like-minded sources and is likely to polarize them. In other words, if someone is looking at political content that might tip them into a conspiracy theory (e.g., Jordan Peterson), they are more likely to see content that will take them farther (e.g., QAnon).
Algorithms that are optimized for engagement are more likely to promote low-quality, like-minded content that is more likely to polarize. If your company has an engagement-based algorithm, be sure to impose limits on content quality, type, and sources to manage the news diet of your users and hedge against polarization. Meta did not impose those limits in 2016, but it clearly does now, likely with positive effects.6
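The kind of guardrail described in this takeaway can be sketched as a re-ranking pass layered on top of an engagement-sorted feed. This is purely illustrative: the item fields (engagement_score, quality_score, is_political) and the cap and threshold values are hypothetical, not anything Meta has disclosed.

```python
# Hypothetical sketch: cap low-quality and political items in an
# engagement-ranked feed. Field names and thresholds are illustrative.

def rerank_feed(items, max_political_share=0.5, min_quality=0.5):
    """Re-rank an engagement-sorted feed, demoting items that fall below a
    quality score or exceed a per-feed budget of political content."""
    # Start from the pure engagement ranking (highest predicted engagement first).
    ranked = sorted(items, key=lambda it: it["engagement_score"], reverse=True)
    kept, demoted = [], []
    political_budget = int(len(ranked) * max_political_share)
    for item in ranked:
        if item["quality_score"] < min_quality:
            demoted.append(item)      # low-quality sources sink to the bottom
        elif item["is_political"] and political_budget <= 0:
            demoted.append(item)      # political share of the feed is capped
        else:
            if item["is_political"]:
                political_budget -= 1
            kept.append(item)
    return kept + demoted

feed = [
    {"id": 1, "engagement_score": 0.9, "quality_score": 0.2, "is_political": True},
    {"id": 2, "engagement_score": 0.8, "quality_score": 0.9, "is_political": True},
    {"id": 3, "engagement_score": 0.7, "quality_score": 0.8, "is_political": False},
    {"id": 4, "engagement_score": 0.6, "quality_score": 0.9, "is_political": True},
]
print([it["id"] for it in rerank_feed(feed)])  # → [2, 3, 4, 1]
```

Note that the highest-engagement item (id 1) is demoted for low quality despite topping the engagement ranking, which is exactly the tradeoff a pure engagement optimizer would never make on its own.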
Moving forward, there will likely be more studies discussing the impact of social media on polarization, and this newsletter is here to address the big ones. As they say on the app formerly known as Twitter…watch this space.
https://www.nature.com/articles/s41586-023-06297-w
Before social media, gerrymandering and the emergence of the Southern Republican Party after the Civil Rights Movement contributed more to polarization than partisan media consumption did. See Why We’re Polarized, Klein, and The Rise of Southern Republicans, Black and Black.
https://www.niemanlab.org/2021/07/at-first-facebook-was-happy-that-i-and-other-journalists-were-finding-its-tool-useful-but-the-mood-shifted/
https://www.washingtonpost.com/technology/2021/10/25/what-are-the-facebook-papers/
Breaking the Social Media Prism, Bail
https://www.facebook.com/business/news/News-Feed-FYI-Showing-More-High-Quality-Content