Cliff Lampe: Regulating Social Media

Cliff Lampe: Why is it so hard to regulate extremism on Social Media?
0.8
Hi, this is Cliff Lampe from the School of Information. And part of what I want to talk to you about in this lecture is why it is so hard to regulate extremism on social media. There are a few different reasons, and some of them are technical and some of them are social. The first one is very much a legal issue. There is a law in the United States called Section 230 of the Communications Decency Act of 1996, and often people just refer to this as Section 230. This is an old and well-tested policy in the United States that indemnifies social media platforms and all telecommunication platforms from content that they carry.
39.9
Here’s the actual text of it, and it’s a lot of legalese, but what it basically means is that if you are a platform like Facebook or Twitter or AT&T, you’re not responsible for what anybody says on your platform. Whether they’re committing a crime or saying hateful things or whatever it happens to be, you can’t be sued or held criminally liable for anything that occurs on your platform. Section 230 has been tested many times in court and has always proven bulletproof, which means there is basically no legal recourse. Social media platforms have to be willing to moderate and regulate themselves, because there isn’t really a law that says anything about moderation.
79.6
The problem becomes that platforms don’t want to be the moderators of free speech, right? They don’t want to tell us what is true or not true, and they don’t want to be in that business. Often when they do, they get a lot of false positives. So they’ll moderate out somebody who has, you know, a non-moderate viewpoint but not an extremist viewpoint, and that always makes people angry when they get moderated out when they shouldn’t have been.
105.5
In particular, conservatives in the United States have been especially frustrated with social media companies, because they feel like the companies’ attempts to moderate on their own have often fallen unfairly on the shoulders of conservative voices and conservative thinkers. So there’s been a lot of attempted regulation, a lot of thought about amending Section 230, and a lot of meetings and discussion about how we can fairly moderate speech without subjecting to moderation a particular set of viewpoints that are not extremist but are still not necessarily in accord with the social media companies.
145.7
Another problem that we have with actually regulating extremist activity on social media sites is the scale of social media. It’s just too much. Facebook has 40,000 human moderators, and it’s a drop in the bucket. As we can see here, 300 hours of video are uploaded to YouTube every single minute, right? Five billion videos are watched on YouTube every single day, and 4,000 videos are uploaded to Facebook in any given second, right? If you scale that up and look at the number of posts occurring across all social media platforms all the time, it is an immense amount of information that no human moderation can really deal with.
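To get a feel for why 40,000 human moderators is "a drop in the bucket," here is a quick back-of-envelope calculation using only the figures quoted above. This is a sketch for illustration; the eight-hour reviewing day is an assumption, not a figure from the lecture.

```python
# Back-of-envelope check of the scale argument, using the figures quoted
# in the lecture (taken as stated; real numbers vary by year).

UPLOAD_HOURS_PER_MINUTE = 300      # hours of video uploaded to YouTube per minute
MODERATOR_HOURS_PER_DAY = 8        # assumed full-time review capacity per person
FACEBOOK_MODERATORS = 40_000       # human moderators cited for Facebook

# Hours of new YouTube video arriving every day.
upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 432,000 hours/day

# People needed just to watch that video once, in real time.
moderators_needed = upload_hours_per_day / MODERATOR_HOURS_PER_DAY

print(f"New video per day: {upload_hours_per_day:,} hours")
print(f"Reviewers needed to watch it all once: {moderators_needed:,.0f}")
print(f"Facebook's cited moderator count:      {FACEBOOK_MODERATORS:,}")
```

Even under these generous assumptions, simply watching one platform's daily uploads once would take roughly 54,000 full-time reviewers, more than the entire moderation staff cited for Facebook, and that is before anyone makes a judgment call about any of it.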
186.7
So platforms depend on two tricks to deal with that. First, they depend on us reporting content, right? Most of what they do is depend on other users to identify non-normative or hateful content and report it, and if it violates the site’s policies, then they can remove the content. So it’s almost like using human attention as a way of sorting through that massive pile of data. The other thing that they do is depend on what are known as machine learning algorithms, which is another way of saying artificial intelligence. So it’s software that’s continuously searching for content on these sites that violates the policies of the social media companies and then will attempt to remove that content.
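As a rough illustration of how those two mechanisms can work together, here is a minimal sketch of a triage step that combines user reports with an automated score. Everything here is hypothetical: the thresholds, the keyword list, and the helper names are made up for illustration, and the keyword score is only a stand-in for a real trained classifier.

```python
# Minimal sketch: combining user reports with an automated policy score.
# All thresholds, names, and the keyword "classifier" are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0   # how many users have flagged this post


def policy_violation_score(text: str) -> float:
    """Stand-in for a machine learning model: a trivial keyword score.

    A real system would use a trained classifier, which is much harder for
    categories like hate speech, where even human raters disagree.
    """
    flagged_terms = {"scam", "spam-link"}          # hypothetical term list
    words = text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 10)


def triage(post: Post) -> str:
    score = policy_violation_score(post.text)
    if score > 0.8:
        return "auto-remove"       # high-confidence automated removal
    if post.reports >= 3 or score > 0.4:
        return "human-review"      # user reports route content to people
    return "leave-up"


# Example usage
queue = [
    Post(1, "Check out this scam scam scam", reports=0),
    Post(2, "I disagree with this policy", reports=5),
    Post(3, "Nice photo of my dog", reports=0),
]
for p in queue:
    print(p.post_id, triage(p))
```

The design point the lecture is making shows up even in this toy version: the automated score only works well when the category is easy to define (copyrighted files, obvious spam), while the reporting path pushes the genuinely ambiguous cases back onto scarce human attention.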
228.1
Sometimes that’s very easy and successful; for instance, YouTube is able to track copyrighted content and remove it relatively readily. And sometimes it’s very hard to do, partially because humans don’t even agree on some of these things, right? What is hate speech or not hate speech? What is harassment or not harassment? When it’s hard for humans to agree on what something is, robots find it even harder to do.
252.3
And then finally, and probably most importantly, extremist viewpoints aren’t illegal, right? Being a hate group isn’t necessarily an illegal thing in the United States. Actions are illegal, right? Conspiracy to commit a crime is illegal, slander and libel are illegal, but hate isn’t. So platforms have a really hard time figuring out how to moderate and where to draw the line between what is hateful and what they don’t want on their platform, even though they’re not legally required to remove anything. Of course, they also don’t want to run kind of a hate-filled place, so they do moderate a bunch of things.
287.6
But what that line is and how they draw it is an extremely sensitive and very delicate topic, because, you know, we have widely different beliefs in the United States about what constitutes hate speech and what doesn’t, and the platforms are right in the middle of that. So at the end of the day, the biggest problem we have is how do we actually define what we’re talking about?
311
So what do we do about all of this if the law and the technology make it very hard to moderate extremist activity in online spaces? What are we actually able to do about it? There are a few options. One is that social media does make it possible to track what people are doing, and law enforcement has been very active about tracking extremist groups in social media spaces. The platforms are also getting more aggressive about moderating certain types of content, especially what they think of as false information, or false information that can lead to wide-scale harm, so for instance information about vaccines or pandemics or things like that.
344.5
And finally, if we think of this as a public health problem instead of as a technology problem, the issue isn’t necessarily that social media is there; the issue is that the hate is there. So how do we use social media as a solution, as a way to address the underlying causes of the tension, the hatred, the extremism that we see in the United States, instead of thinking of it as a cause of those issues? That’s probably our surest pathway to using social media to address extremist or radicalized hate groups in the U.S.

Can we regulate?

Professor Cliff Lampe explains the legal, social, and technical reasons why moderating social media is so challenging. The scale of social media adds a layer of complexity to making moderation effective. Adding to the complexity within the United States, extremist speech remains ill-defined and is technically not illegal.

Do you think that humans will be able to rewrite their relationships with social media to foster hope and correct information as a solution, or do we need to be held in check by regulation and moderation?

This article is from the free online course Understanding and Addressing Extremism Teach-Out.