This week we, of course, saw a former Facebook employee testify to the Senate that the social-network company places profits above the safety of its users — the latest round in a seemingly endless back-and-forth between big tech companies and U.S. lawmakers. At the risk of editorializing, it’s a debate that could be much more satisfying — and much less a waste of words and effort — if we had more access to something very valuable: solid research on the specific mechanisms by which these platforms work.
Facebook is part of a huge information ecosystem we’re still trying to wrap our minds around (and that includes people working inside Facebook, too), and one way of getting a better sense of the big picture is to focus in on a very small part of it. The old anthropology student in me hungers for research akin to ethnographies that studied, say, how the transfer of ownership of small plots of land works in one specific village in northeast Brazil. Narrowing in on one small slice of social interaction can teach you a lot about a society at large. Same goes for social networks.
“The Big Lie and Big Tech”
That’s why it’s exciting to see a new 69-page paper out today from the Carter Center — founded in the 1980s by former President Jimmy Carter and former First Lady Rosalynn Carter. Called “The Big Lie and Big Tech,” it’s authored by Michael Baldassaro, senior advisor for the Digital Threats to Democracy and Elections Project at the Carter Center; Katie Harbath, until recently a director of public policy at Facebook; and Michael Scholtens, data analyst for the Digital Threats to Democracy and Elections Project at the Carter Center.
The report focuses specifically on “The Big Lie” — if it’s not totally obvious, that’s the idea that then-President Donald Trump lost the 2020 election only because of fraud — and the role of the people, publications, and other organizations that have an outsized impact when it comes to spreading objectively false information, which the report brands “repeat offenders.”
Here’s the gist of the report, emphasis added:
Myriad forces—politicians, influencers, hyper-partisan media, and ordinary citizens—coalesced to advance “The Big Lie” and other harmful narratives. But “repeat offenders”—media sources known to repeatedly publish misinformation—provided critical connective tissue in the multidirectional spread of narratives on social media. Misinformation repeat offender sources decontextualized, spliced, and reframed individual data points and out-of-context information into broader narrative frames. These narratives were advanced by authentic domestic media sources in a concerted manner, conferring a veneer of legitimacy on the content.
One of the researchers’ findings has to do with the most-shared domains in right-leaning Facebook groups during the months between Joe Biden’s election and his inauguration as president:
Overall, 10 of the top 15 domains with the highest increase in link appearances were known misinformation repeat offenders. Specifically, we found a 540% increase in the appearance of links from newsmax.com; a 520% increase for beforeitsnews.com; a 350% increase for oann.com; a 250% increase for gellerreport.com; and a 210% increase for infowars.com. Meanwhile, appearances of theepochtimes.com and ntd.com—another Falun Gong-backed media outlet—increased 450% and 555%, respectively. In contrast, we found a 50% decrease in the number of links from foxnews.com—the most widespread news domain found in the pre-election period.
Relatedly:
Like findings from right-leaning Facebook groups, repeat offender content was consistently found—albeit at significantly lower levels—during the lead-up to the 2020 election in left-leaning Facebook groups. However, unlike in right-leaning Facebook groups, no significant increases in repeat offender content occurred in left-leaning groups after Election Day.
Another way of thinking about this is as the “Bad Apple Theory” of misinformation. That is, rather than defaulting to the idea that everyone on Facebook you disagree with is spreading low-quality information — a seemingly overwhelming problem that’s tempting to confront with a rant from the dais before throwing up your hands and moving on — the theory holds that a smaller segment of users is “spoiling” an otherwise redeemable experience for everyone else.
Who cares? Understanding the patterns really at work is key if everyone involved — the platforms, the media, regulators — is to make sense of how best to change those patterns. To that end, the report offers a handful of possible fixes, like:
Limit functionality on repeat offender accounts. In addition to applying warning labels and having their content downranked, repeat offenders should lose additional functionality on their accounts, such as the ability to run ads, share live video/audio, post videos and/or share content and engage in groups.
Is that the right solution? Debatable. But at least work like this is fodder for debate. The full report is worth a read.
And in case you missed it, the Carter Center work makes a nice companion to the recently discussed Pew Research Center report on how the election-season use of Twitter and Facebook by members of Congress shifted from 2016 to 2020, and in particular the differing information sources from which Republicans and Democrats pulled.
Throwback read
This isn’t the Carter world’s first rodeo when it comes to the impact of cutting-edge technology on electing a president of the United States:
“When Jimmy Carter was the computer-driven candidate,” Garance Franke-Ruta, The Atlantic.
(By the way, no matter your politics, Carter’s “An Hour Before Daylight: Memories of a Rural Boyhood” is a marvelous book, not least for its attention to the impact of technology. Writes Carter: “The event that transformed our family’s lives most profoundly came in 1938, when the Rural Electrification Administration brought electricity to some of the most conveniently located farms in the community.” Recommended.)
Correction: This post previously implied that the Carter Center was founded about 140 years ago, which of course is not right.