Dangerous Narratives: How Harmful Media Messaging and Hateful Rhetoric Fuel the Divide
Samantha Owens, Regional Director of Over Zero's U.S. branch, joins us to unpack the differences between hate speech, dangerous speech, and mis- and disinformation, and how certain messages have played a major role in the hate crimes and identity-based violence we continue to see within our society.
Listen in to learn what we can and should be doing as responsible media consumers once we become aware of the messaging tactics we are consuming.
[00:00:01 – 00:10:02]
Noelle: What uuuuup!
Miranda: Welcome to the Unpacked Project.
Noelle: We're your hosts, I’m Noelle.
Miranda: And I’m Miranda.
Noelle: We're here to explore all things social justice. It's through casual conversations, interviews, and storytelling that we hope to inspire others to take action towards a more compassionate and equitable world.
Miranda: Because honestly it kind of sucks here sometimes.
Noelle: For real, we can do better people.
Miranda: Alright, let's start unpacking.
Noelle: Samantha Owens is Regional Director of Over Zero's US branch. They're an organization that partners with community leaders, civil society and researchers to harness the power of communication to prevent, resist, and rise above identity-based violence and other forms of group-targeted harm. She's an expert in leveraging strategic communication for social change, with extensive experience working across various issue areas and contexts. She has led and managed communication campaigns to prevent extremism among youth in Bosnia and to prevent youth homelessness in the United States, and has overseen art initiatives for conflict transformation globally. She holds a Master's in Human Rights from University College London and a BA in Religious Studies and International Studies from Northwestern University. Sam, thank you so much for being here today. Can you just tell us a little bit more about yourself and really how you got into this work that you're doing?
Samantha: Yeah, absolutely. That is a great question and it's great to be here with you today. I guess, from a young age, I’ve always been interested in stories and in particular, the stories we tell about other people and the stories we tell about how they're different, how they're like us. In my extended family, I had quite a few family members who struggled with homelessness, struggled with addiction, and I saw the way that society dehumanized them, and really, the stories that were told about them didn't match up with my experience recognizing their full humanity and recognizing all the wonderful parts of them as well. So, I guess, from a young age, I’ve always been really interested in how we see other people and the sort of mental gymnastics we do to justify why certain people are suffering or why it's okay to exclude certain people from certain parts of life. So, I guess, if I’m going really deeply into the psychology of why I’m interested in this, I would say that really is what sowed the seeds. In terms of my studies, originally, I was a scene kid. It was the early 2000s, I thought I was going to go into the music business. I was a little punk rocker. I wanted to work in the music industry somehow. Had an internship, hated it. So then I really got back to what I felt most passionate, most excited about, and that was these stories about how we talk about other people, how societies tell stories about people. And that was the impetus to study religious studies and international studies: what are the guiding principles that different societies have in terms of how they interact with each other, in terms of how they interact with the world? And then from there, I, you know, I went to London, I studied international human rights and worked in homelessness. That was what I was really passionate about.
Realized that my interests were a bit broader and I was really interested in things on an international scale, so I worked on projects around the world. I was lucky enough to work in Sarajevo for a number of years, and really, while I was in Sarajevo, and actually before that, I became kind of obsessed with this idea of the US as a post-conflict society that had never really undergone any meaningful transformation. So, what are the stories that we tell about that, what are the narratives that we tell about other people within the US as it relates to a post-conflict society that, in my opinion, has never really healed?
Noelle: Yeah, and, you know, I think it's great having you on at this point after some of the episodes that we've done. In season one, we talked a lot about bias and our own biases and kind of how those form, and in season two, we were able to start talking about how social media starts feeding those biases, how the narratives that are created on social media, in the news, just all within our society are creating these situations where people aren't able to identify what's true, what's misinformation and just kind of navigating all of that. And as a society, like you said, not really healing from our history and as a result of the, you know, where are we today in the stories that we tell about different communities, especially our marginalized communities, so thank you so much for being here to start digging a little deeper into this topic with us. Over Zero has a lot on their website. So many resources. There was a guide titled ‘Counteracting dangerous narratives during times of uncertainty’ and it was discussed that certain patterns of speech have played a large part in hate crimes and identity-based violence within our society at various points in time, and in fact, it's discussed that these patterns actually increase people's acceptance of discriminatory policies and even violence. So, can you just explain for our listeners a little bit about what hate speech is, what is dangerous speech and what are the differences between them?
Samantha: Yeah, absolutely. So, many of us are familiar with the term ‘hate speech’, which refers mostly to the intent behind the speech. So, this is speech, you know, that's targeting another group with vitriol because of their identity, because the speaker or the speakers feel a negative way about said group because of their identity. When we're looking at the role of communication, when we're looking at the stories we tell in conflict and identity-based violence, it can often be more useful to look at the impact of speech, though, specifically the likelihood that the piece of speech in question will increase support for violence or other harms, or will drive people to commit them. So, what's the real-world impact? And that's where the term ‘dangerous speech’ comes in. Dangerous speech is a framework that was developed, a phrase that was coined, by Susan Benesch and the Dangerous Speech Project, and it comes out of the atrocity prevention world. So, this framework really looks specifically at the risk of speech leading to mass violence, things like genocide and mass atrocities. And it's really been tested and interrogated in this space, in the atrocity prevention space. We also see it, though, as really immensely valuable when looking at group-targeted harm more broadly. So things like systemic violence, discrimination, harmful policies toward marginalized groups. Because many of the driving, you know, psychological and social mechanisms behind those things are the same. It's the impact of speech that makes us willing to treat another group badly based on their identity. Just to go into the dangerous speech framework a little bit, it looks at all the different elements of a piece of communication. And this is really the framework that we use when looking at whether or not a piece of speech is actually dangerous. So, first we look at the speaker. Is the speaker influential?
Second, we look at the medium or the channel that's being used to spread the message. So, is the message being shared through a platform that reaches many people? Is it being spread through a platform people see as credible, or through word of mouth? Third, we look at the message itself. So, is the message tapping into existing fears or grievances or some other resonant experiences, and does it, you know, does it show kind of the hallmarks of a lot of really particularly harmful rhetoric? Fourth, we look at the audience. So, is the audience already primed to mobilize? Does the audience have the capacity to commit violence? That's a really important part. Looking at power differentials, histories of harm. Is the audience that's being spoken to, that's being mobilized, capable of committing harm, you know, on a large scale? And finally, we look at the broader context. Specifically, histories of conflict and oppression and power differentials, as I’ve said.
[00:10:03 – 00:20:16]
Samantha: So, you need to analyze all these things together to go beyond saying ‘this piece of communication is hateful’ to instead saying ‘this piece of communication is dangerous, in that it's likely to really mobilize people’. Mobilize people to commit violence, increase support for violence, increase support for discrimination, systemic violence, really, what's it going to drive people to do?
Noelle: No, and what comes to mind when you talk about that, really, just in the past six months, obviously the Capitol and then all of these trans bans in sports, right? And so, this language and rhetoric around students participating in sports, and so, it's interesting to see, I think, kind of this dynamic of hate speech versus dangerous speech. So, thank you for sharing that. I think that's, you know, really important for listeners to acknowledge and understand. So, these types of speech patterns, they tend to create an ‘Us versus them’ mentality, right, which is not only dangerous as you mentioned, but it also just continues to divide people. So, in our first season, we actually discussed the relationship between personal biases and this in-group and out-group identification. So, what do you find are some specific narrative patterns that target out-groups?
Samantha: Yeah, that's a great question. So, we… there's a framework developed by Jonathan Leader Maynard, who's a political scientist at Oxford University. He's done a survey of propaganda narrative patterns before political violence and mass atrocities throughout history, and the patterns he has identified, his framework, provide a really useful jumping-off point for analysis of how identity-based violence and other harms are justified across groups. So, we like to use his framework. Again, it's this jumping-off point to talk about the different kinds of patterns we see and why they resonate. So, in his work, he's identified three distinct… excuse me, but related themes in the messages that are spread about the, you know, the “them” to justify discrimination, violence, other harms. The first of these is ‘collective blame’, which is the idea that an entire group of people is somehow guilty. They're guilty of some sort of transgression, whether that's hurting members of “our in-group” or some sort of moral transgression, they've broken some sort of moral code or moral law. And when we're talking about collective blame, you know, these narratives, it doesn't matter whether this transgression was committed by a few members of the group, by, you know, by group leadership, or even if it's been entirely fabricated; the idea here is simply that being a part of this group or having this shared identity makes someone guilty. You're guilty by association. And what this idea of shared guilt does is it shifts the boundaries of what's considered okay to do to a group. So, if a group is guilty of a wrong within this mode of thinking, it's not only okay but may even be necessary, or the moral or the right thing to do, to punish them, to take punitive action and strike out. Second, we see what Jonathan labels ‘threat construction’, which is the idea that an entire group, simply by existing, simply by being who they are, is a threat to our group.
And this type of threat, the type of threat, can be different. Whether it's a threat to life or physical safety, to a way of life, to culture, money, power. You know, the list goes on and on. The threat can vary. The common thread here is that an entire group and all of its members are threats simply by existing. And what this pattern does is it sets up a case for self-defense. It creates this justification that is ‘as long as they exist, we are somehow unsafe’, and we know that, you know, as people, we're really attuned to threat. It's baked in as a survival mechanism, it's baked into our kind of lizard brain, and we respond in this really knee-jerk way. So, if it's a physical threat, we fight, we flee, we freeze, we feel fear that prompts us to one of those responses. It kind of overtakes all of our other functions. If the threat is a threat to resources, assets, power, we feel angry and we respond in kind. We lash out. Anger makes us lash out. If it's a threat to purity, you know, whether in a really literal sense, like a threat of disease, which we've seen a lot of narratives in the last year that have painted groups as a threat of disease, or a threat to some sort of symbolic moral purity. Purity threats evoke a disgust response that makes us want to distance ourselves from the “disgusting thing”. So, we respond to threats in these unconscious, knee-jerk ways, which can even make us act in ways that aren't in accordance with our own values. They can override our own moral values, our own codes, our own ethics, and threat is really powerful in that way. Third, within this framework, we see what Jonathan labels “de-identification”, which is really… it's really two things. So, it's this idea that every member of a group is somehow the same. They're essentialized. They are born with something in their essence that makes them bad, makes them rotten from the start. And this essence is somehow “less than” or different than, you know, the ‘Us’.
And this often goes hand in hand with dehumanization, so comparisons to vermin, to disease, to insects. All things we register, again, on that visceral level as disgusting and threatening and that we need to stamp out. And like the other patterns, this changes the moral grounds of engagement and makes it, in this mode of thinking, somehow okay to do terrible things to these people, because they aren't really, you know, they aren't really human like us anyway, they aren't really people anyway. So, just as a quick review of these themes, you can see how they're mutually reinforcing as well. So, within these patterns, the narrative goes that not only is this group a threat to our group, they also committed this wrong against us, and beyond that, they aren't even human in the way that we are. And so, you can see that at their core, these narratives provide a sense of justification for doing awful things to other human beings while still kind of, you know, being the good guy in your own story, so they remove a sense of moral responsibility.
Noelle: It's just so interesting. As you're talking and explaining all these things, I have this, like, picture of social media and posts and news headlines and things that we see all around us all the time. I think it's so useful for our listeners to hear this, so that they can start maybe being more conscious, right, of the messaging that's around them. We've talked in previous episodes about how we can just be such passive consumers of all this information around us, and really it's about educating ourselves about how there are these intentional tactics, really, right, trying to create an ‘other’ out of people. And then on the flip side of that, you know, there are also patterns that bolster this in-group identity and help create the justification for violence and harm. So, can you describe some of those types of messaging that are commonly found for the ‘Us group’?
Samantha: Yeah. And that's a really important point. This is really important. So, it's often easy to focus on how groups are other-ed when we talk about us-and-them divides. Like, it's really easy to focus on the messages about them, but actually, the creation of a strong identification within ‘Us’ and a mobilization of that ‘Us’ toward violence is essential as well. It's a key piece of the puzzle. So, if you're telling a story about how your group is the good guys and the other group is the bad guys, you need narratives not just about why they are bad but also how and why we are good, so you need messages about ‘Us’. So, similar to the messages about the ‘them’ that we just talked about, these messages exist really primarily to remove any sense of culpability or agency or moral responsibility, and instead paint committing violence, supporting harm, as acceptable or necessary or sometimes even heroic. So, the first of these narratives, you know, within Jonathan's framework again, is called ‘destruction of alternatives’, which basically is academic for ‘we have no choice’. It's this idea of ‘it's us or it's them’ and that violence, hurting another group, striking out against another group, is the only way for our group to stay safe, to stay prosperous, to stay pure, and so on. And this removes a sense of agency by saying, you know, ‘I had to, it was them or me, it was self-defense’.
[00:20:17 – 00:30:08]
Samantha: And when we see this happening, it can become such a strong part of a group's collective mentality that the people who speak out against violence toward another group are painted as weak or naive or even traitorous, and can be targeted themselves. And, you know, one example that comes to mind thinking about this is, in Rwanda, in the genocide in 1994, moderate Hutus were targeted by Hutu militias, you know, alongside Tutsis, as Hutu extremists painted moderate Hutus as traitors and, you know, argued that they needed to be wiped out as well. So, in terms of these patterns, we also see a pattern called ‘valorization’, which is the conflation of the willingness to be violent, to strike out against another group, with being a good and loyal group member. And this goes back to, you know, it speaks to a message about in-group love, right? It's saying, you know, I’m not bad, I just love my group so much that I need to protect them. We see a lot of narratives about protecting women and children, protecting the, you know, the “vulnerable” members of a group, with this pattern. And this is often also tied to ideas of masculinity. What it means to be a good man, a good protector, and traits like bravery and courage are really applied to a willingness to be violent. So, when this pattern really starts to resonate, it makes violence not just acceptable but really something to aspire to. It becomes, you know, a badge of honor or a signal for how much you love your group. And third, again, within Jonathan's framework, we see what he calls ‘futurization’. And futurization is interesting because it's a bit different than the other narrative patterns we've seen. A lot of the other narrative patterns really have primarily to do with fear, right? Futurization has to do with hope and aspiration.
And these messages say, you know, our group could have this amazing future, things could be perfect for us, but this other group is in the way. So, there are these kind of utopian designs in these messages. We could have this incredible future in which we all have great jobs, our children can play safely, we can live without fear. And this type of messaging is actually very often employed by extremist groups, from Daesh, or the Islamic State, to white nationalist groups in the US. Hope and aspiration are incredibly powerful motivators and drivers of behavior, and they shouldn't be underestimated in terms of driving people toward violence or toward harm. And I’ll just say, you know, quickly, looking at these patterns, you can see again how these messages can build off each other and be mutually reinforcing. So, if we have no other options than to commit violence against this group to protect ourselves, to ensure a beautiful future, and committing violence will prove that you're a good group member, you can see how all those things work together to create a really strong mobilizing mindset for violence. And you can also see how these messages about the ‘Us’ really interact with and reinforce messages about ‘Them’. So, a group being painted as threatening, for instance, not only lays the groundwork for us to say, ‘hey, it's us or them’, it also means that we turn more inward. If we're feeling threatened, we feel closer and more bonded to our own groups, which then increases how strongly we feel bound to these messages and the social pressures of what a good group member will do or should do. It really turns up the pressure and increases the resonance of these messages about the ‘Us’.
Miranda: It is. I think as you were talking, really what resonated was just the sense of belonging and the language that we can use to either make people feel as if they belong or that if they don't do something, they won't belong, right? And when that's your entire world, your community is your entire world, like, what are you going to do? So very interesting that folks will really go against their values in defense of the group. It's just… honestly, it's very mind-blowing. It's all self-evident but, you know, to hear it, I’m like, ‘okay’. So we've discussed in previous interviews, really how damaging misinformation is and then how rapidly it spreads obviously with social media, you know, in our given time. So you have a guide that actually explains the difference between misinformation and disinformation. So, can you elaborate more on that for us?
Samantha: Yeah. Yeah, definitely. So, I’ll start by saying that, you know, mis and disinformation have always been a challenge for any society. From rumor spreading by word of mouth to propaganda pamphlets being printed and handed out, this is not a new issue for humanity, and, you know, we've seen throughout history that when new forms of communication emerge, be it the printing press or the radio or cell phones, they tend to become platforms for sharing and spreading misinformation at new rates, new speed, new breadth, and create new challenges that must be addressed. So, as you said, mis and disinformation is now spreading faster and farther than ever with the widespread use of social media, and it's also able to spread in really emotionally resonant forms, with pictures and videos. And understanding the distinction between mis and disinformation can be a helpful starting point in figuring out what action to take, both on a macro level and interpersonally. So, misinformation is the spread of false information by people, and it can be individuals or groups, who believe that they're sharing something true or something that might be true. Which, you know, I think is something that we've all done at one point or another, you know, participated in. You know, this is everything from spreading relatively innocuous rumors about, say, a celebrity sighting, to more serious things like an unfounded allegation of fraud by a public figure, to really harmful rumors like baseless claims of voter fraud. You know, the thread here is that this information is being spread by people who believe or may believe it, who think that this is or could be true. Disinformation, on the other hand, is intentionally misleading. So, disinformation campaigns are carried out by people or groups, often bad actors, looking to sow confusion, looking to sow mistrust, looking to create some sort of chaos in society.
And sometimes even with the explicit aim to drive people toward violence or to support violence against different groups of people. And one thing I want to note when talking about mis and disinformation is that it can be, and is, really damaging in so many areas of life. I think we've seen the damage it's done in terms of public health, certainly. But mis and disinformation play a really specific role when we're looking at dangerous speech and calls for harming people based on their identity. So, looking back at the patterns we just discussed, you know, all the different narrative patterns about the ‘Us’, about ‘them’, people need lies for those patterns. They need to spread information that isn't true to support the idea that an entire group of people is somehow rotten in their essence or guilty or deserving of harm, and that's what makes mis and disinformation so intertwined with dangerous speech, and why individual pieces of mis and disinformation that may seem relatively small or inconsequential can, you know, in reality be quite problematic if they're feeding into these broader narratives about the ‘them’ being bad or threatening or, you know, whatever it may be. And again, this is something we've seen throughout history and across contexts: mis and disinformation, rumors, propaganda, they've consistently been used as part of dangerous speech to really fuel harms against targeted groups.
Miranda: Yeah, and so, let me ask you, once folks are able to start recognizing this difference between misinformation and disinformation, how should they handle that? Like, how can they act on that?
Samantha: That's a great question. And I think this is something everyone is still grappling with. Research on mis and disinformation, especially in the, you know, in the internet age, is still an emerging field, and we're getting new insights all the time.
[00:30:09 – 00:40:03]
Samantha: But there are a few different ways to look at this. So, I think there are actually a few different questions wrapped up in the question that you just posed, and so, I think we can first talk about how to handle mis and disinformation on a more macro level, so society-wide, you know, existentially and in terms of practical implications, and then how we handle it in a more immediate, interpersonal way. So, on a macro level, I think we can look at the content of the mis and disinformation that's spreading and parse apart why it's resonating, and we can do this to better understand the underlying dynamics of our own context. So, prevalent threads, rumors, conspiracy theories can provide important insights into what people are looking for, what their grievances are, what underlying beliefs, attitudes or fears might be. So, for example, I think with the rise of QAnon, we might ask why these specific conspiracy theories are resonating and use that to create longer-term structural interventions. So, for instance, if this content is resonating with some audience segments because they deeply distrust public institutions, we can ask how we might rebuild trust in institutions among these disaffected populations where QAnon, you know, is really spreading. And if mis and disinformation that supports the idea that specific groups pose an existential threat is spreading, how can we address the underlying belief systems, the broader social narratives, that have enabled these stories to be relevant? So, you know, as practitioners, as people who work on creating healthier communication ecosystems, we can look past the surface of the misinformation, the conspiracy theory, the rumor, whatever it may be, and ask ourselves, ‘why is this resonating?’ and then work to address those underlying dynamics.
Also, speaking on a macro level, in terms of addressing mis and disinformation on a more tactical, logistical level, you know, there are many different angles from which to approach this, and that is largely in the hands of different decision makers and policy makers. One thing that I will note is that an understanding of the intentions of the people sharing the false information should be taken into account in terms of solutions. So, as we've discussed, for instance, someone sharing something false on Facebook in good faith is going to require a different intervention approach, a different mechanism, than, say, a public official knowingly spreading conspiracy theories. On an interpersonal level, for each of us, you know, as we navigate the world and are either trying to correct pieces of misinformation or get other people to question information we know is objectively false, there are a few important things to know. And the first is that misinformation is really sticky. Meaning, the more we hear something, the more likely we are to believe it. Our brains are just built that way. The associations stick. Second, we know that a lot of intuitive approaches to correcting misinformation can be really ineffective or actually even backfire. So, if someone really believes a piece of misinformation and the broader narrative it's supporting, and it's central to their worldview, just telling them they're wrong and challenging them by flooding them with facts is typically pretty ineffective and can even backfire and make people hold on even tighter to an incorrect belief, because they feel their worldview is under attack. And, you know, this is because when our beliefs, when our worldviews, are challenged, our brains can perceive and process that in the same way as a threat, making us shut down and go into defensive mode.
So, with pieces of misinformation that have to do with contentious issues, or issues that are really emotional for people, issues that they hold close to their sense of self or that are important for a group they're part of, this can be especially true. So, we need to find ways to reach people without triggering this type of defensive threat reaction. Ways that we can help them be open to considering new, correct information, and ways to make the correct information stick. So, what can we do individually? What are some tools? Offering a simple correction. So, say you're issuing a public statement as an organization or, you know, you're correcting something as an individual, there are a few best practices that can defuse the stickiness of the misinformation that I just talked about. So, first is, if possible, avoid repeating the misinformation itself. Second, use positive framing. So, instead of saying ‘John isn't a thief’, say, ‘John is incredibly honest and trustworthy and respects other people's property’. And third, if you can give an alternative explanation for the piece of misinformation in question, do. People really like to have answers for things. People don't like uncertainty. And finally, you can prompt people to question the source of the misinformation. Again, as long as that won't put them in a defensive mode. So, get them to engage thoughtfully with the piece of misinformation without telling them that their favorite, you know, pundit is wrong and out to get them and a spreader of conspiracy theories, which, again, is probably going to put them in that defensive position, right? So, thinking long term, interpersonally, on an individual level, if you have a friend or a family member who seems to be believing or sharing a lot of misinformation, you can, and this is if you have the capacity and the desire to do so, apply some of the macro-level existential questions that I just mentioned to this person, to this relationship.
So, what about the misinformation is resonating with or appealing to this person? Is this person feeling lonely or scared or angry? Are you in a position to speak to or help address some of those underlying attitudes or emotions? And again, that's if you have the capacity, if you have the desire, if it's safe for you to do so. If this is happening and it seems to be more of a media literacy issue, can you somehow model thoughtful consumption of information? For example, can you share an anecdote, share a story about how you saw an outrageous claim on Facebook or Twitter and clicked on the profile of the person only to discover it was a troll or a bot? You know, have some sort of personal anecdote like that. Also, shedding light on how information ecosystems are structured, as well as, you know, educating people about how algorithms are built to really silo people into different information streams, can help people understand that just because they're seeing something frequently doesn't mean it's true. Dr. Whitney Phillips, who's great and does research out of Syracuse University, is doing a lot of really interesting work on the effects of pulling back sort of the proverbial curtain on online and offline media strategies for getting people to engage with information more critically. You know, when you're doing any of these things, when you're engaging with someone who's sharing misinformation (probably not someone who's sharing disinformation; someone behind a disinformation campaign is probably going to require quite different strategies, as I just said), engaging with empathy, sharing your own stories, you know, your own lessons about consuming media, rather than shaming someone, is probably going to be a lot more effective.
Noelle: Yeah, so it just sounds like we're up against a lot with this topic. You know, all the messaging with the in-group and, you know, the ‘Us versus them’, in-group, out-group, which really just plays a lot into people's biases and plays on systems that we have in these societies too that, like you said way in the beginning, we've never really reconciled the histories of. All of this interacts, and then this misinformation and disinformation is just so hard to sometimes figure out, I mean, even for us. Like, Miranda and I, when we're researching stuff, we go to throw something up on a post and it's like, you know, sometimes you have to think twice and really research, and who's actually doing that in their everyday lives, seeing things and actually checking sources and seeing if this is true? So, we're just seemingly up against a lot, and then on top of that, people feel like they have the right to say certain things, you know, and they want to use the defense of freedom of speech as their First Amendment right and, you know, while fighting words and true threats are generally found to fall outside of First Amendment protection, hate speech and offensive speech are usually and unfortunately protected. So, what are the implications of this, especially for marginalized groups, and how can we kind of address some of that?
Samantha: That's a really good question and a really tricky question.
[00:40:04 – 00:50:00]
Samantha: You know, books upon books upon books have been written about this very issue, so I will not and would not ever claim to have any sort of definitive answers about freedom of speech versus hate speech versus dangerous speech but, you know, this is something we think about a lot, so I guess I’ll share some of the important questions that I ask myself, the things that we consider when grappling with these questions. So, first, when we're looking at regulating speech, are we talking about government regulations or are we talking about private companies’ moderation policies? The two often get lumped in together but are really two separate questions. So, one is about whether or not the state has the right to censor you, the other is if private companies like social media platforms, right, have a duty to have responsible moderation policies and algorithms that don't lead people to harmful content. And then, also when we're talking about freedom of speech, are we talking about freedom from all types of accountability or are we talking about, you know, just freedom from prosecution? So, on the first topic, you know, government-based censorship is, again, a really complicated topic with countless implications and one thing to consider is, if censorship laws are passed under one administration, for instance, and then the following administration has a widespread policy of targeting activists or political opposition, can they then use those laws and policies to actually fuel harms toward marginalized or targeted groups? You know, we've seen… around the world, we've seen journalists and political dissidents be targeted by laws that on paper are supposed to protect people from harmful speech. We've seen those laws weaponized.
A second thing to consider is that government censorship often feeds into martyrdom narratives that violent actors really thrive on, and they can point to being, you know, prosecuted and censored and say, ‘see, I’m willing to suffer for my commitment to this cause, they're locking me up because I’m telling the truth’. They can also use some of the, you know, sort of in-group love narratives that we talked about, like, ‘I love you all, I care for you all so much that I’m willing to tell the truth and get locked up because, you know, that's how much I care about my group’. In the same vein, it can also drive conversations further underground and strengthen the resolve of people who believe these messages, and can strengthen their resolve to hold on to them even tighter. So, their beliefs may become more radical even, as echo chambers become somewhat, you know, clandestine and isolated. And third, outlawing dangerous speech, you know, does little to change the underlying dynamics. As we've discussed, dangerous speech is dangerous in the first place because it taps into existing fault lines in a given context. With censorship, you're playing a game of whack-a-mole. You're putting band-aids on instances of dangerous speech while the underlying conditions, grievances, anger, fear, distrust remain unchanged. And again, this is complex and is not something that I would claim to have any sort of definitive answer to. These are just points to consider when grappling with this question. Thinking about private companies and their responsibilities though, those are different questions. What is the responsibility of private companies to participate in building a healthy society? What, at least as a baseline, is the responsibility of private companies such as social media platforms?
You know, not to aid violent actors in organizing and spreading their messaging, and I think this is largely a norms-based question, you know, we as the public, what are we willing to accept or demand? You know, as consumers, as voters, as residents of this country, how can we pressure companies to put responsible boundaries around how they're hosting content, how they're disseminating content, what kind of messages they're spreading, how they're engaging with mis and disinformation? And, you know, while we may have a right to stand on the corner and say whatever we want, private companies have no obligation to allow that on their platforms, and some research has shown that de-platforming, so removing people spreading dangerous speech from their various platforms, can be an effective strategy in defusing calls for harm and related rhetoric, especially if people are de-platformed across platforms. And finally, you know, thinking about this question, when we're talking about censorship, when we're talking about, you know, regulating this kind of speech, there seems to be this narrative out there that accountability in any form is censorship. You know, so we've already talked at length about the different ways communication and speech can be dangerous, and if someone is calling for violence against a group of people, you know, thinking beyond the legal framework, this person should be held accountable. Particularly if they're in a public position, if they're in a position of power or if they're an elected official. Freedom of speech does not mean freedom from any consequences. Be that criticism, loss of financial support, loss of political support, etc. And, you know, again, referring back to norms, we can think about, as a public, how can we push for accountability for people who are using speech toward dangerous and harmful ends?
And obviously, it's easy to speak in the theoretical when it comes to these questions, but I want to go back to your point about the impact on people, particularly the people being targeted, the groups being targeted by hateful and dangerous speech. We know that encountering hate speech has negative impacts on the individuals exposed to it, you know. It's a poison. As far as dangerous speech, we have seen the emotional, the structural, the economic harms and the physical violence that come as a result, you know, in this country and around the world. These things are very real and they have real impact on people's lives. So, what do we do about that? And this, again, I just want to come back to the importance of accountability, and holding people accountable for spreading hateful and dangerous messages will not only hopefully shift their behavior but will also, you know, build and strengthen a social norm that you don't get to say and do these things without consequence. And social norms, as, you know, you mentioned earlier, are incredibly strong drivers of behavior, and research is showing more and more that perhaps the strongest driver of human behavior is what we think our group will accept. So, this brings me to a broader point about solutions, which is that to defuse the impact of dangerous speech, we need to work to transform the underlying dynamics of our own context. And we can use communications to transform what dangerous speech is tapping into in the first place. This means shifting norms, as I’ve mentioned. This means building a broader ‘We’, redefining the ‘Us’, redefining, you know, what our values are and calling on people to live into them.
This means articulating a hope for, you know, a just, equitable, peaceful future that people can work toward. And in the same way that dangerous speech can mobilize people toward violence, you know, using communication to shape norms, to change frames of who people see as the ‘Us’ or the ‘We’, can transform underlying dynamics to create a society where groups who have been or are being marginalized are safe and are no longer being targeted by dangerous speech, because the dangerous messages have had their power taken out of them, they've been defused, right?
Noelle: And I think, just thinking of being in the state of Florida and every… this is scary. And even just thinking of everything happening [Inaudible 00:48:56 – 00:48:58] right now, like trying to sign this bill or signing the bill, barring social media companies from being able to block political candidates and, you know, it comes on the heels, right, of Trump being de-platformed from Facebook and Twitter and all this, and like you said, it's just kind of meshing together the private and the government, and I think we're starting to have these really muddied waters where we're not really having, I don't know, like, clear guidance, you know, in terms of what is responsible and what is accountability, and it just winds up being about what people's motives are. Like, the political motives and the financial gain of these social media companies, and that's, like, a whole other episode we will probably talk about, but I just think we have it going on right now in Florida.
Miranda: And I think, in addition to what you said, that's on the heels of the anti-protest bill.
[00:50:01 – 01:00:15]
Miranda: And so, there's this ‘Us versus them’ and Sam, you kind of talked about what are we willing to accept and, like, really, you know, protesting is a form of going against the things that we're not willing to accept, you know, and here we are being told that we can't, right? So, it's an interesting and scary place to be in, quite honestly. I think, you know, honestly, you've shared so much valuable information today. I super appreciate everything that you brought to our attention. While I always remain hopeful and we get to celebrate our wins, right, we still definitely have a long way to go. You know, I think back on everything that you said, right? I think back to slavery, the Holocaust, like, all this messaging that's happened over the course of US history, taking land, we've stolen land from people, right? Like, all based off of this messaging, right? So, how would you reimagine media? And this is maybe a very large question but in its simplest form, how would you reimagine media messaging, so that organizations and communities are building an inclusive ‘We’ rather than fueling this divide that we continue to see?
Samantha: Yeah, and this is obviously what I think about all day, every day. This is the question that's constantly on my mind and, you know, as you've said, it feels like we're up against so much and it can be… it can feel really hard not to be totally discouraged, but framing is an interesting thing and… I want to start with a non-conflict-related example just to show, I guess, how malleable people's minds are and how powerful framing can be and how much power we have when we frame things, so just go down this rabbit hole with me for a second and I promise I’ll bring it back to answer your question. So, I want to talk briefly about jaywalking. I think this is such a fascinating case study. When cars were becoming more and more prevalent in the 1920s, streets then were public spaces. And there was this new machine. Infrastructure wasn't necessarily designed for it and there were, predictably, a lot of deaths. A lot of people were hit by cars. Automakers, you know, were aware of the bad publicity they were getting. There were even political cartoons with cars drawn as the grim reaper. And they, automakers, saw that the public was turning on automobiles and decided that they needed to shift the frame. They needed to do something about the way people understood these accidents. So, if someone got hit by a car, they decided, it was no longer going to be the driver's fault, it was going to be the pedestrian's fault. And they came up with the term ‘jaywalking’, which comes from a slang word of the 1920s, a ‘jay’, meaning someone who's kind of bumbling, kind of careless, doesn't really know what they're doing. And the auto industry undertook a huge national campaign with posters, with press releases following auto accidents that framed the pedestrians as at fault, pedestrian safety training programs, and a whole host of city ordinances and local laws that created fines for jaywalking.
And within the span of a decade, jaywalking was recognized as a public nuisance, you know, with people crossing the street being seen as a hazard to themselves and the city's drivers. And I share this example because I think it shows so powerfully how the way we frame things really takes root in how we understand problems and solutions. Right? Who is responsible? You know, how many times are you driving now, or you're on a bike, and you think, ‘look at that person jaywalking, that's so dangerous’ or, you know, how many times are you jaywalking, crossing the street…
Noelle: I was just going to say.
Noelle: It just happened. We had an argument about it, like, two weeks ago, like, if you get hit, you won't get any insurance money, you got to stay in the crosswalk.
Samantha: Yeah, see, it's, like… it has totally… it has just totally, like, informed the way everybody sees this one aspect of life. So, the solution in the 1920s could have easily been something along the lines of how do we make automobiles safe for our streets, or how can we, you know, make everyone use our streets safely, how can we keep this a public space and, you know, make it work for everyone, but instead, there's these made-up categories that create groups of people and responsibility and culpability that still stand today, and they were completely, completely made up. My point with this is that, I mean, I find this to be an upsetting example but also, strangely, an inspiring one, in that it shows that we have the power to challenge existing frames and the existing boxes that people are put into. So we can shift language, we can change frames and, you know, people will catch on, communities will catch on, society will catch on. So what this looks like in terms of conflict, in terms of identity-based violence, in terms of group targeting is: speaking to people's many different identities, especially unifying identities, such as being residents of a town, or cross-cutting identities, such as being a parent or a caregiver, can start to loosen some of the current frames we're operating in that veer into zero-sum thinking, and, you know, geographic-based identities can have particularly strong resonance. So, can you talk to residents of a town about a shared value, for instance? You know, everyone wants to see that schools are safe and happy and fruitful places for children, and how can you use that shared value and shared town identity to begin work toward common goals? And this isn't about getting people to give up their identities, it's about making stronger points of connection, new points of connection that ultimately make our broader social fabric more tightly woven.
And it's about getting people to connect along different lines. When people have different identity groups to turn to, it can also reduce the social pressure to go along with, you know, really harmful narratives, to go along with harmful action, if they understand that this identity group isn't their only option.
Noelle: Well, Sam, thank you so much for coming on today. Like Miranda said, you just shared so much great information and, like I said in the beginning, I think it's just really timely for where we are in our episodes, because it just brought context to so much we've spoken about and kind of really brings a lot of meaning to understanding some of our previous topics and just the role social media is playing now, and just messaging in general, not even necessarily just on social media but news outlets and articles we read. Even trusted sources now, you're like ‘Mm-hmm’. I don't know.
Noelle: So, thank you so much for coming on. It's so psychological that I feel like I totally geeked out. Like, it's just so interesting. And I think it can go either way, right? Like, shifting the frames, we can be responsible with it or we can be really harmful with it, and hopefully we get to a space as a society where we're being really responsible, so that all of us are safe, and we talk about inclusion and equity, where we're just working towards that version of society that we're hoping for. So thank you so much for coming on today. We really appreciate it. Listeners, stay tuned, we're going to have a lot of resources posted related to this episode and even some activities where we'll be going through different headlines and media outlets and just trying to help build more media literacy and use some of this problem solving to really kind of dig through all the information that we're inundated with on a daily basis. So, thanks for coming on today, Sam. Bye.
Noelle: Show the unpacked project some love and be sure to like, subscribe and review our podcast. You can also check us out on Instagram at the underscore unpack project.
Miranda: And if you enjoyed today's episode, visit our website at theunpackedproject.com where you can make a donation that supports the research, production and operating costs of this work.
Noelle: Shout out to all of our listeners who unpack with us today.
Miranda: See you next week.