
Media Literacy

Explore some of the contemporary debates concerning online media and their implications for our learning networks.
[Image: a stylised word cloud featuring ‘Misinformation and the Spread of Fake News’ among many other words. © University of Southampton]

It may sound obvious, but if we have only a small learning network, our opportunities for learning may be correspondingly limited.

Therefore, we have to invest proactively in growing our network. We can choose to do this in whatever way we prefer. However, there are some things to keep in mind, whatever actions we decide to take.

Firstly, it is wise to maintain a critical attitude towards the news, information and stories we find flowing around our network. Not only are there traditional media biases to think about, but there are also some very important issues and debates to consider, and developing our media literacy can help.

According to the Center for Media Literacy,

“Media Literacy is a 21st century approach to education. It provides a framework to access, analyze, evaluate and create messages in a variety of forms – from print to video to the Internet. Media literacy builds an understanding of the role of media in society as well as essential skills of inquiry and self-expression necessary for citizens of a democracy” (Thoman and Jolls, 2005, p. 190).
So what should we be aware of when interacting with online media?

Echo chambers

Traditionally, an echo chamber is an enclosed space where the noises we make are fed directly back to us. This phenomenon applies equally in the virtual world, because we tend to interact with people who are “like us” and have similar views.
So within our own learning networks we may not be exposed to much disagreement or alternative perspectives on a topic, even if digital tools now permit our networks to be geographically diverse. We need to grow our networks widely and make connections to information we may not always agree with.

Filter bubbles

A filter bubble results from personalisation, in which machine learning algorithms deployed by platforms such as Facebook select the information presented to us based on our past behaviour. As a result, we can unknowingly become isolated from information that disagrees with our “world view”.
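To make the mechanism concrete, here is a minimal sketch, in Python, of how such a personalisation loop might behave. Everything in it (the stories, topics, weighting scheme and function names) is invented for illustration; it is not any real platform’s algorithm, which will be far more sophisticated.

```python
from collections import Counter

def personalised_feed(stories, click_history, top_n=3):
    """Rank stories by how well they match the topics a user has
    clicked on before. A toy stand-in for personalisation, not any
    real platform's algorithm."""
    # Count how often each topic appears in the user's click history.
    topic_weights = Counter(topic
                            for story in click_history
                            for topic in story["topics"])

    def score(story):
        # A story scores higher the more its topics match past clicks;
        # topics the user has never clicked on contribute nothing.
        return sum(topic_weights[topic] for topic in story["topics"])

    return sorted(stories, key=score, reverse=True)[:top_n]

# Invented data: a user who has only ever clicked on one viewpoint.
click_history = [
    {"title": "Why policy X works", "topics": ["policy-x", "pro"]},
    {"title": "Policy X: a success story", "topics": ["policy-x", "pro"]},
]
stories = [
    {"title": "More praise for policy X", "topics": ["policy-x", "pro"]},
    {"title": "Serious doubts about policy X", "topics": ["policy-x", "con"]},
    {"title": "Unrelated science news", "topics": ["science"]},
]

for story in personalised_feed(stories, click_history):
    print(story["title"])
# The agreeable story ranks first; the dissenting and unrelated ones
# sink towards the bottom of the feed.
```

Each click feeds back into the weighting, so material we already agree with is ranked ever higher, while dissenting stories quietly disappear from view.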
This TED Talk by Eli Pariser provides a clear warning of the danger of filter bubbles. It is now several years old, but the role played by social media in influencing the outcome of recent political events has only emphasised its importance. And in this short video, Bill Gates notes that we have underestimated the scale of the problem.
On the positive side, we now also have the capability to check the facts for ourselves in a way that was difficult and time-consuming in the past, for example through easy access to legal proceedings or to the source data of research papers.
Filter bubbles are a symptom of the wider issue of algorithmic bias. As machine learning algorithms become more common in our everyday world, and increasingly vital for enabling us to use the vast amounts of digital data generated daily, the biases which may creep into these systems must be carefully considered.
As a result, the Royal Society has recently been investigating the ethics of machine learning algorithms and robotics, and the Horizon Digital Economy Network ‘UnBias’ project has also been researching the fairness of information filtering algorithms.
It is important for us all to understand the effects of algorithms on what we are exposed to when we search the Web, access Facebook or are fed targeted advertising, for example, so that we can remain critical thinkers when using our personal learning networks (PLNs).

Fake news and our ‘post-truth’ world

In November 2016, Oxford Dictionaries announced that it had selected ‘post-truth’ as the word which best reflected “the passing year in language”. It defined the term as
“relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”.
Fake news (a report that is factually wrong) ought to be easy enough to detect and correct. But in a world where a single news item could potentially reach 2 billion people via Facebook, the damage can be done easily and quickly.
Sometimes a fake news item is not immediately obvious. It may be satirical (see The Onion for a good example) or it might contain some elements of truth that make the story seem believable.
Fake news headlines may also be deliberately appealing, serving as clickbait aimed at making you follow links or view advertising.
Any of these fake stories might be shared across our network by someone we trust, because they are very enticing, even when we suspect they must be fake!
There are also ‘bots’ which can write news stories as an automated process, without involving any humans at all. The Oxford Internet Institute has conducted research into the use of bots to automatically generate and spread fake political propaganda news during the UK General Election 2017. They found that for every four professional political news stories shared on social media, users also shared one automated fake (or junk) news story; in other words, about one in five of the political news stories shared was junk.
Worryingly, an investigation by BuzzFeed News indicated that fake US election news stories generated more total engagement on Facebook than top election stories from 19 major mainstream news outlets combined. For example, Facebook announced that during the 2016 US elections about 80,000 posts dealing with “divisive social and political” subjects were generated by accounts backed by bots and trolls in an attempt to influence the outcome of the election. Few users critically analysed these posts, and hence they were reposted many times. Facebook estimates that up to 126 million US citizens saw the posts (over a third of the entire population).
Recent research and allegations also suggest that national-level actors have been involved in Information Warfare, using social media bots, or fake accounts run by actual people, to undermine societies around the world by posting fake and/or inflammatory statements at critical points, including during elections and after terrorist attacks.
In response to growing criticism of the platform’s role in spreading misinformation and disinformation, Facebook now flags stories of questionable legitimacy with an alert that says “Disputed by 3rd party fact-checkers”.
Facebook, in conjunction with Full Fact, has also recently released a guide on how to spot fake news, and has undertaken other activities to address the situation.
However, it is very difficult to find a perfect ‘technical fix’ for the issue of fake news and bots. For example, during the Covid-19 crisis Facebook still had to take action to stop the spread of fake news concerning remedies, treatments and general misinformation about the virus.
What this means is that, in the end, it will come down to developing our own digital literacies and critical attitudes to what we see and read on social (and traditional) media.
Therefore, when growing our network we need to take an informed, rational and questioning approach to the people, news and stories we come across.
Are you in a ‘chamber’ or a ‘bubble’? Or have you ever been misled or tricked by fake news?
© University of Southampton
This article is from the free online course, Learning in the Network Age.
