
Data, society, and you

In this article, Siobhan Dunlop discusses how data, algorithms, and machine learning can impact society, equality, and the self.

The data we can gather and interpret opens up a lot of opportunities. We make decisions based on other people’s reviews (even when those people are halfway around the world), track our use of utilities or spending habits with the click of a button, and find new ways to improve people’s lives using information that previously couldn’t be measured.

Last week, we looked at algorithms and how they affect the information we consume. Algorithms don’t just affect which news we see, though. Frequently, they are combined with data – data that has been measured for decades and data we can now gather thanks to technological advances – and machine learning to make decisions and automate processes. Machine learning is a subset of artificial intelligence and sounds complicated, but simply put, it is a range of methods that use computers to spot patterns and infer information without being given exact instructions. Typically, computer programs need very precise instructions to work, but with machine learning, the computer ‘learns’ how to group data together or to spot patterns in new data based on previous data.
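To make that idea a little more concrete, here is a minimal sketch in Python (the data, the “screen time” values, and the labels are all invented for illustration; this is a toy nearest-neighbour learner, not a real machine learning system). Notice that the program is never given a rule like “values above 5 are high usage”: it infers the label of a new observation from the labelled examples it has already seen.

# A toy nearest-neighbour "learner": it is never given an explicit rule,
# only past examples, and it labels new data by similarity to those examples.
# The numbers and labels below are invented purely for illustration.

training_data = [
    (1.0, "low usage"),   # (hours of screen time, label)
    (1.5, "low usage"),
    (6.0, "high usage"),
    (7.5, "high usage"),
]

def predict(new_value):
    # Find the past example closest to the new observation and reuse its
    # label - the "pattern" comes from the data, not from a hand-written rule.
    closest = min(training_data, key=lambda item: abs(item[0] - new_value))
    return closest[1]

print(predict(2.0))  # -> "low usage"
print(predict(6.5))  # -> "high usage"

Real machine learning methods are far more sophisticated, but the principle is the same: the behaviour of the program is shaped by the previous data it has been given rather than by explicit instructions.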

It may not be immediately clear how machine learning, algorithms, and data relate to our own identity and our wellbeing, but they have implications for society and inequality. This technology is being used to automate decisions previously made by human beings, such as who should get which kind of insurance policy, who can access welfare support, and where the police should target their patrols. An algorithm – a set of instructions for how a decision is made – may sound neutral, especially compared with humans, whose judgement can be coloured by personal prejudices and experiences. However, there is a growing body of research on the ethics of algorithms and on algorithmic bias.

As we saw in the previous step, measuring data involves implicit judgements about what is worth measuring. Data is not free from bias and simplifications, but it can be used to highlight bias in society as well as to perpetuate it. This becomes particularly apparent in one example of using data and algorithms to shape society: predictive policing.

Predictive policing involves using data-based computer methods to identify where crimes might happen next or who might perpetrate them. Predictive policing techniques have been designed to distribute police resources where they are most needed according to crime data. However, a lot of opposition has been raised to these methods, based both on the crime data itself and on what using this data does to inequality.

The crime statistics used in predictive policing are only a representation of where crimes have been reported, rather than where crimes are actually occurring. This is generally true of crime statistics, but in this case it causes police to focus patrols on areas where they have already investigated crimes. As Hannah Fry writes in her book Hello World, “[b]y sending police into an area to fight crime on the back of the algorithm’s predictions, you can risk getting into a feedback loop” (p. 186). Officers are sent to the area, so they detect more crime. That data is then fed into the algorithm’s calculations, so more officers are sent to the same area.
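The feedback loop Fry describes can be seen in a deliberately simplified simulation (a sketch with invented numbers, not a model of any real policing system). Two areas have exactly the same underlying level of crime, but one happens to start with slightly more patrols; because patrols only record crime where they are present, the detected figures – and therefore the next round of patrol allocations – drift further apart each period.

# Toy illustration of a predictive-policing feedback loop.
# Both areas have the SAME true crime rate; the only difference is the
# initial allocation of patrols. All numbers are invented.

true_crime_rate = [100, 100]   # actual incidents per period in areas A and B
patrols = [11, 9]              # area A happens to start with more patrols

for period in range(5):
    # Detection depends on presence: patrols only record the crime they encounter.
    detected = [rate * p / 20 for rate, p in zip(true_crime_rate, patrols)]

    # A simple hotspot rule: shift one patrol towards the area that recorded
    # more crime. This is where the loop closes - yesterday's deployment shapes
    # today's data, which shapes tomorrow's deployment.
    if detected[0] > detected[1]:
        patrols = [patrols[0] + 1, patrols[1] - 1]
    elif detected[1] > detected[0]:
        patrols = [patrols[0] - 1, patrols[1] + 1]

    print(f"period {period}: detected crime = {detected}, patrols = {patrols}")

Running this, the recorded crime in area A climbs from 55 to 75 incidents while area B falls from 45 to 25, even though both areas genuinely experience 100 incidents each period. The data reflects where the police looked, not where the crime was.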

Fry points out another issue: systemic inequality being replicated by these algorithms. By feeding crime data into the system and using that as a method of police decision making, people are judged on the “all-too-predictable consequences of America’s historically deeply unbalanced society” (p. 80). In other words, inequality is perpetuated by these systems because they are based on data containing this inequality.

This is only one example of how data and algorithms can have unintended effects. In her book Algorithms of Oppression, Safiya Umoja Noble looks at how search engines reinforce racism and racist stereotypes in society, which in turn shapes how people see their own identity and the identities of others when they search online. What someone sees when they search for their own name, or for people like them, can affect their wellbeing and sense of identity, whether positively or negatively.

Data-driven approaches bring a wealth of benefits and allow us to discover patterns in information that can benefit people and society. We must ensure that data is used to highlight inequality so that it can be addressed, rather than to perpetuate it. With all this information at our fingertips and the power of machine learning to help us explore it, surely we can put data to good use?

References:

  • Hannah Fry (2018). Hello World: How to Be Human in the Age of the Machine. London: Transworld.
  • Safiya Umoja Noble (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
© University of York (author: Siobhan Dunlop)
This article is from the free online course Digital Wellbeing.
