
Media Literacy

It may sound obvious, but if we only have a small learning network, our opportunities for learning may be correspondingly smaller.

Therefore, we need to invest proactively in growing our network. We can do this in whatever way suits us, but there are some things to keep in mind whatever actions we take.

First, it is wise to maintain a critical attitude towards the news, information and stories flowing around our network. Not only are there traditional media biases to think about, but there are also some very important issues and debates to consider, and developing our media literacy can help.

According to the Center for Media Literacy,

“Media Literacy is a 21st century approach to education. It provides a framework to access, analyze, evaluate and create messages in a variety of forms - from print to video to the Internet. Media literacy builds an understanding of the role of media in society as well as essential skills of inquiry and self-expression necessary for citizens of a democracy” (Thoman and Jolls, 2005, p. 190).

So what should we be aware of when interacting with online media?

Echo chambers

Traditionally, an echo chamber is an enclosed space where the noises we make are fed directly back to us. This phenomenon applies equally in the virtual world, because we tend to interact with people who are “like us” and have similar views.

So within our own learning networks we may not be exposed to much disagreement or alternative perspectives on a topic, even if digital tools now permit our networks to be geographically diverse. We need to grow our networks widely and make connections to information we may not always agree with.
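
To make this concrete, here is a toy simulation in Python (an illustrative sketch with made-up numbers, not a model of any real social network) showing how following only people who are “like us” collapses the range of views that actually reaches us:

```python
# A toy illustration of an online echo chamber (hypothetical numbers,
# not a model of any real network). Opinions sit on a 0-1 scale, and we
# choose to follow only people whose views are within 0.1 of our own.
import random

random.seed(1)
our_view = 0.8
population = [random.random() for _ in range(100)]

# Homophily: we connect only to people who are "like us".
followed = [p for p in population if abs(p - our_view) < 0.1]

print(f"Views in the population: {min(population):.2f} to {max(population):.2f}")
print(f"Views we actually see:   {min(followed):.2f} to {max(followed):.2f} "
      f"(from {len(followed)} of 100 people)")
```

The whole spectrum of opinion is out there, but the slice that reaches us simply echoes what we already think.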

Filter bubbles

A filter bubble results from personalisation: machine learning algorithms deployed by platforms such as Facebook select the information presented to us based on our past behaviour. As a result, we can unknowingly become isolated from information that conflicts with our “world view”.
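
As a rough illustration of the mechanism, the sketch below (hypothetical scoring and data; no real platform’s ranking works exactly like this) orders a feed by similarity to a user’s past clicks, so items unlike anything we have engaged with before quietly drop out of view:

```python
# A minimal sketch of engagement-based personalisation (hypothetical,
# not any real platform's ranking algorithm). Items resembling what a
# user clicked before score higher, so unfamiliar views sink from sight.

def similarity(item_topics, history_topics):
    """Jaccard similarity between an item's topics and past clicks."""
    if not item_topics or not history_topics:
        return 0.0
    return len(item_topics & history_topics) / len(item_topics | history_topics)

def personalised_feed(items, click_history, top_n=2):
    """Rank items by similarity to the user's click history."""
    history_topics = set().union(*click_history)
    ranked = sorted(items,
                    key=lambda item: similarity(item["topics"], history_topics),
                    reverse=True)
    return ranked[:top_n]

items = [
    {"title": "Party A rally draws crowds", "topics": {"politics", "party_a"}},
    {"title": "Party B policy analysis",    "topics": {"politics", "party_b"}},
    {"title": "Local sports results",       "topics": {"sport"}},
    {"title": "Party A fundraiser",         "topics": {"party_a", "events"}},
]

# A user who has only ever clicked Party A stories...
click_history = [{"politics", "party_a"}, {"party_a", "events"}]

for item in personalised_feed(items, click_history):
    print(item["title"])
# Only the two Party A stories make the feed; Party B coverage never appears.
```

Real recommender systems use far richer signals than this, but the isolating effect is the same in kind: the more we click, the narrower the feed becomes.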

This TED Talk by Eli Pariser provides a clear warning of the danger of filter bubbles. It is now several years old, but the role played by social media in influencing the outcome of recent political events has only emphasised its importance. In this short video, Bill Gates notes that we have underestimated the scale of the problem.

On the positive side, we now also have the capability to check facts for ourselves in ways that were difficult and time-consuming in the past: for example, through easy access to legal proceedings or to the source data behind research papers.

Filter bubbles are a symptom of the wider issue of algorithmic bias. As machine learning algorithms become more common in our everyday world, and increasingly vital for making use of the vast amounts of digital data generated daily, the biases which may creep into these systems must be carefully considered.

As a result, the Royal Society has recently been investigating the ethics of machine learning algorithms and robotics, and the Horizon Digital Economy Network ‘UnBias’ Project has been researching the fairness of information filtering algorithms.

It is important for us all to understand the effects of algorithms on what we are exposed to when we search the Web, access Facebook or are fed targeted advertising, for example, so that we can remain critical thinkers when using our personal learning networks (PLNs).

Fake news and our ‘post-truth’ world

In November 2016, Oxford Dictionaries announced that it had selected ‘post-truth’ as the word which best reflected “the passing year in language”. It defined the term as

“relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”.

Fake news (a report that is factually wrong) ought to be easy enough to detect and correct. But in a world where a single news item can potentially reach 2 billion people via Facebook, the damage can be done quickly and easily.

Sometimes a fake news item is not immediately obvious. It may be satirical (see The Onion for a good example) or it might contain some elements of truth that make the story seem believable.

Fake news headlines may also be deliberately appealing, serving as clickbait designed to make you follow links or view advertising.

Any of these fake stories might be shared across our networks by someone we trust, because they are very enticing, even when we suspect they must be fake!

There are also ‘bots’: automated processes which can write news stories without involving any humans at all. The Oxford Internet Institute has conducted research into the use of bots to generate and spread fake political propaganda during the 2017 UK General Election. It found that for every 4 professional political news stories shared on social media, users also shared 1 automated fake (or junk) news story.
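
To put that ratio another way, here is a quick back-of-the-envelope calculation using the figures quoted above:

```python
# Oxford Internet Institute figure quoted above: 4 professional political
# stories shared for every 1 automated fake (junk) story.
professional, junk = 4, 1
junk_share = junk / (professional + junk)
print(f"Junk news as a share of political stories shared: {junk_share:.0%}")
# -> 20%, i.e. one in every five political stories shared was junk.
```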

Worryingly, an investigation by BuzzFeed News indicated that fake US election news stories generated more total engagement on Facebook than top election stories from 19 major mainstream news outlets combined.

Recent research and allegations also suggest that national-level actors have been involved in Information Warfare, using social media bots, or fake accounts run by actual people, to undermine societies around the world by posting fake and/or inflammatory statements at critical points, including during elections and after terrorist attacks.

In response to the growing criticism of the platform’s role in spreading mis/disinformation, Facebook now plans to flag stories of questionable legitimacy with an alert that says “Disputed by 3rd party fact-checkers”.

Facebook, in conjunction with Full Fact, has also recently released a guide on how to spot fake news.

However, it is likely to be difficult to find a perfect ‘technical fix’ to the issue of fake news and bots. In the end, it will come down to us to be critical about what we see and read on social media as well as in the traditional media.

Therefore, when growing our network we need to take a rational and questioning approach to the people, news and stories we come across.

Are you in a ‘chamber’ or a ‘bubble’? Or have you ever been misled or tricked by fake news?


This article is from the free online course ‘Learning in the Network Age’ by the University of Southampton.
