Political identity, with Lilliana Mason

“Most political behavior is happening online and we can't see it,” says the Johns Hopkins University political scientist.

Lilliana Mason (@LilyMasonPhD) is an associate professor of government and politics at Johns Hopkins University, author of the book “Uncivil Agreement: How Politics Became Our Identity,” and one of the country’s top thinkers about how the way we Americans understand our places in the social order shapes our political lives.

Mason’s parents are archaeologists, and she spent her early life moving up and down the east coast of the U.S. for their work. Even after her father settled down as an anthropology professor, she didn’t think her own future was in academia. After graduating from Princeton, she took the LSAT, she says, but skipped a line. “Basically the universe told me I couldn’t go to law school,” she jokes. 

Mason was working as a paralegal during the 2004 election that put President George W. Bush back in office when she stumbled across a headline in Britain’s Daily Mirror that asked, “How can 59,054,087 people be so DUMB?” Intrigued, she began exploring the sub-field of political psychology and ended up in a PhD program at Long Island’s State University of New York at Stony Brook, which specialized in the topic.

Mason was recently a guest on the “Ezra Klein Show” podcast and it was a fascinating conversation about the great hardening of “Democrat” and “Republican” into not just political labels but social ones. But I wanted to know more about the role technology is playing in both creating and perpetuating those identities — given that, as Mason put it when we spoke by Zoom, right now “most political behavior is happening online and we can’t see it because it’s proprietary.” 

(This interview has been edited for length and clarity.)

Scola: How did you come to study the power of political identity in American politics?

Mason: When my first daughter was born in 2009, I had written and passed my dissertation prospectus, but I basically spent a year not looking at it. When I came back to it, I hated it. I asked my advisor whether I could rewrite my dissertation proposal, and he said okay. I started reading and I came across this article by [the social psychologist] Marilynn Brewer. For decades people have been studying social identity, but very few people have looked at how multiple identities interact with each other. 

It was like this lightning bolt. I was also reading a book by [University of Pennsylvania political scientist] Matt Levendusky called “The Partisan Sort,” which was talking about ideological sorting. There used to be conservative Democrats and liberal Republicans. What Levendusky had found was that Republicans were increasingly conservative and Democrats were increasingly liberal, which means they don’t have cross-cutting identities any more. So Democrats and Republicans are going to dislike each other more. 

What Brewer was saying was, if you have two identities that are really strongly related to each other and really overlapping, those people tend to be more intolerant of people who are not in their groups than people who don’t have overlapping identities. The way I think about it is if you’re Irish and Catholic, you basically only know lots of Irish and lots of Catholic people, and you’re more likely to be intolerant of people who are not Irish or Catholic. But if you’re Irish and Jewish, then you probably know a lot of non-Irish people and non-Jewish people. Your view of the world and of yourself is more complex, and it’s easier to be tolerant of people who are unlike you. 

Then I looked at [Democrats’ and Republicans’] other identities, and it turns out that they’re also very socially different from each other. It’s a gradual process that started happening in the 1960s and it basically culminated by the end of the 1990s. Those other social identities are things like race and religion, and those are really potent identities that matter a lot to people. And those identities had moved into alignment with party. This theory from psychology about overlapping identities could potentially explain, even without thinking about policy preferences, why Democrats and Republicans might hate each other. 

And that turned into my book. And that’s why the title is “Uncivil Agreement” — we don’t have to disagree on policy in order to hate each other. We don’t have to disagree about what the federal government’s role is to hate somebody. 

“Political leadership matters, and political rhetoric matters. We found you can actually reduce people’s approval of violence if they read a message from Joe Biden saying ‘violence is bad’ – even Republicans.”

Why have Americans’ identities become more individually overlapping?

The civil rights legislation of 1964 and 1965 alienated white conservative southern Democrats, and they gradually left the Democratic Party because of that. An easy way to say it is that there used to be white supremacists in both parties, and after the civil rights legislation, they moved into the Republican Party, until the Republican Party became much more white, Christian, rural, and male. And the Democratic Party is sort of just everybody else. If you pick two random Republicans out of the population, the chance that they share a race and religion is quite high. If you pick two Democrats, the chance they share a race and religion is quite low. 

That gives Republicans particular reasons to be intolerant of Democrats, but Democrats don’t have cross-cutting identities with Republicans. Decades ago, Democrats and Republicans were going to church together, so they had that one identity in common. But those cross-cutting identities had disappeared by the early 2000s. If you have a set of identities consistent with the Democratic Party — you live in an urban area, you have high education — you’re also going to dislike Republicans more because you don’t have any exposure to them. 

Your book doesn’t wrestle much with media, including social media. It came out in 2018; were you not thinking of it as much of a factor at the time?

I explicitly avoided talking about media in the first book, because I basically said, that’s a whole other book. 

The way I think about it was that there was the perfect storm of this sorting process and then partisan cable news and the Internet happening all at the same time. Partisan cable news made it a lot easier for people to learn who’s ‘us’ and who’s ‘them.’ The Internet allowed people to find each other, or just see information they really wanted to see. It almost facilitated motivated reasoning. We’re all looking for information that makes us feel good, and makes our groups look good. It makes us uncomfortable to see information that makes our groups look bad.

And it was sort of a self-reinforcing cycle where news coverage is so focused on horserace winning and losing. Instead of, ‘Here’s this bill people are voting on in Congress, these are the implications of the bill, and this is what we’d have to give up for it,’ it’s ‘This bill’s a win for Democrats.’ That makes Democrats feel good and Republicans feel bad, because we now have all these identities wrapped around our party. 

Elections are regularly scheduled status competitions, and as the parties have become associated with white people vs. black people, then those elections become a lot more dire, right? It’s no longer, ‘Do you believe in a more active federal government or a less active federal government?’ When people think about race, they immediately think about partisanship. And I think that partially happened because of the media linking those things. 


So in the period after the book came out, from 2018 to 2021, we started paying a lot of attention to the role of social media. At that point, is it, ‘We already know that these forces are at work in American politics, and now it’s just more on display because we have a lens on how people are thinking’? Or is it that social media exacerbated the sort of sorting we’ve talked about? 

We look at this in the next book, which is called “Radical American Partisanship.”  [Note: Mason’s co-author on that book, due out next year, is Louisiana State University’s Nathan P. Kalmoe.] That one’s actually looking at more extreme forms of partisanship, like Democrats and Republicans approving of violence for political ends and morally disengaging from people in the other party — saying they’re evil, they’re not human — which is a precursor to violence.  

And those types of attitudes can be manipulated by social pressure online. If people read something online that endorses violence, they’ll endorse violence a little bit more. If they’re exposed to violence — if events occur and something goes viral — people’s approval of violence increases. It’s a very quick decay after that, but it does bump up. 

Political leadership matters, and political rhetoric matters. We found you can actually reduce people’s approval of violence if they read a message from Joe Biden saying ‘violence is bad’ – even Republicans. Partisans in both parties vastly overestimate the degree to which people in the other party are violent. When you tell them the correct number, their own approval of violence decreases. Basically, we think that Republicans listening to Biden say that, he’s correcting a misperception in their mind that all Democrats are violent. 

That’s where the media can come in, either adding fuel to the fire of the other party being violent, or they technically have the capacity to dampen it, by saying, ‘You know, it’s only this percentage of them that approve of it’ — just correcting misperceptions. 

There was once the idea that with the Internet, people are going to be exposed to a wider range of people and opinions than they would in their day-to-day lives, and it would expand our minds about the world around us. Why isn’t it, ‘Okay, 90% of the Republicans on my Facebook feed aren’t encouraging violence’? Why doesn’t it have that dampening effect? It seems to go the other direction, to get people riled up.

But you’re also seeing information on your Facebook feed that tells you Republicans are approving of violence, right? It’s not, like, the Republicans I know, but probably most of the rest of them. You can make an exception for the people you know and still maintain your stereotypes against the group. 

“I don’t think it’s a mystery. You want to expose people to as many sympathetic portrayals of people that are not like them as possible.” 

Okay, you’re Mark Zuckerberg or Jack Dorsey. What do you do? 

I’ve actually been to Facebook a few times, and they’re a really weird company, because they have this research group that’s trying to figure out what problems are going on — that’s what got leaked out — and how to fix those problems. 

So they brought in a whole bunch of academics that study affective polarization, which is just Republicans and Democrats hating each other, to get advice on how to reduce it or at least how to detect it. Clearly the recommendations aren’t necessarily going through, but there are plenty of ideas. 

Facebook did it right after the election — a kinder, happier Facebook algorithm. But they only did it for a month, and then they went back to the original one. I don’t think it’s a mystery. You want to expose people to as many sympathetic portrayals of people that are not like them as possible. 

This is sort of the contact theory from 1950s social psychologists — you just put people together in a non-controversial way. During the Korean War, they desegregated the armed forces because they didn’t have enough people in enough places. They brought in a bunch of sociologists to study what happened, and they found that just being on a desegregated ship decreased the levels of racism among the white servicemen. If you have contact with this other group in a space where you’re all on kind of the same status level, and you’re working towards a common goal, you can learn to like those people. It’s certainly harder to dehumanize them. 

I find myself struggling with keeping some diversity in the people I follow on social media. This might sound ridiculous, but I get some measure of it by following fitness influencers on Instagram, who sometimes share different thinking on, say, sexuality than a lot of the people I’m friends with on Facebook. But then I sometimes feel guilty about following them.

We follow who we want to follow, we create these worlds we want to live in, and we replicate them on social media. And we usually don’t expose ourselves to things that make us feel uncomfortable — unless you’re talking about something where you have a cross-pressure, like the fitness part of it, and then you’ll expose yourself to something unpleasant. 

We also get really sucked into things that are negative. It makes sense evolutionarily. If something bad is happening, we need to pay attention to it right now because it could hurt us. If something good has happened, we can get to it later. On a psychological level, we’re built to pay attention to conflict and negativity because it keeps us alive. 

That’s one of the things Trump did so well, and I think it was partly because of his experience on reality television, because reality television is about conflict. As long as there’s something outrageous happening, people pay attention. That’s really what his campaign was, and how he governed. 

So if you’re running one of these social-media platforms, you almost have to overcorrect to reduce people’s attention? 

It would lose money, is the thing, right? The problem is that the financial incentives are absolutely opposite to good public incentives. The more public good you do, the less conflict you have on your site, the fewer clicks you get, so you get less money. Those goals are completely in conflict with each other, so you have to make a choice. 

A lot of these platforms are trying to take it half-way, like, ‘We’ll ban Trump. We’ll ban COVID misinformation. But we still want a lot of conflict on here because it keeps people paying attention.’ 

Facebook doesn’t want to give people information that they don’t really want to see but need to see. That’s how you reduce all of this animosity: present people with information that’s calming and informational. That’s not about who wins or loses, about who’s evil and who’s a threat to the United States, but that just gives them information. People don’t want to read that. And so you either force them to eat broccoli — and they leave your platform — or you give them what they instinctively crave. And you keep them on the platform, but you also subvert democracy. 

Isn’t there, like, a happy medium? A hamburger and fries, where people will eat it up but it’s not going to kill them right away?

I think that’s what they’re trying to do — to figure out how to tweak the algorithm so the candy doesn’t come right at the top, but it’s sprinkled in there to keep people’s attention. 

“What would be helpful is for academics to have access to data about Facebook users, to see what’s really happening with political behavior on Facebook. How are political conversations evolving over time? How are people getting their information? How are they interacting with information they received?

“If we knew that, academics could come up with prescriptions for ways to make things better. But we can’t know that.” 

One overlapping identity right now in Washington is being a member of Congress and being angry at Facebook. But between Democrats and Republicans, and even within parties, there’s still not consensus on policy solutions. Could you build on that shared identity to find some way of moving forward? 

It’s possible. The problem is that they don’t understand what Facebook is. Most of the people in Congress don’t actually have the knowledge of what social media is, how it works. So the solutions that they’re going to come up with are really limited. 

Nate Persily at Stanford actually just wrote an op-ed saying, one of the main problems is that Facebook will ask academics, ‘What are the problems in American democracy,’ so that they can fix it, right? But they won’t show us what’s happening on their site. It’s just a black box for us. For those of us who study political behavior, it’s really wild that most of political behavior is happening online and we can’t see it because it’s proprietary. 

What would be helpful is for academics to have access to data about Facebook users, to see what’s really happening with political behavior on Facebook. How are political conversations evolving over time? How are people getting their information? How are they interacting with information they received?

If we knew that, academics could come up with prescriptions for ways to make things better. But we can’t know that. 

One of the reports that leaked out was about how people interact with articles they agree with versus articles they disagree with. They’re responding emotionally, and potentially responding with threats or violence — things that are remarkable by the standards of what we think of as normal political discourse. We can’t observe that in the real world. But it’s happening on Facebook. We know it’s happening, but we just can’t measure it.