
Gender and Technology


As well as the negative aspects of the technology industry, it’s important to be aware of the potentially harmful impacts of the technologies themselves.

New technologies are emerging all the time, and unless they are developed with inclusivity in mind they can replicate the biases and prejudices found in society. Here are a few examples of the negative impacts technologies have had on women and on trans and non-binary people:

Gender and Facial Recognition Software

The National Institute of Standards and Technology published research discussing the development of facial recognition technology designed to identify whether a man is trying to access a women’s bathroom (1). This was heavily criticised for reinforcing the idea that there is an easily identifiable set of characteristics that makes up a female or male face, further increasing the discrimination trans people experience in ‘single sex’ environments (2).

In 2019 the University of Colorado published a study that analysed 2,450 images of faces from Instagram using four of the largest providers of facial analysis services – IBM, Amazon, Microsoft and Clarifai. They noted that “On average, the systems were most accurate with photos of cisgender women, getting their gender right 98.3% of the time. They categorised cisgender men accurately 97.6% of the time. But trans men were wrongly identified as women up to 38% of the time. And those who identified as agender, genderqueer or non-binary were misgendered 100% of the time.” (3)
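
To make this kind of evaluation concrete, here is a minimal, hypothetical Python sketch. It is not code or data from the study; the records are invented purely to show how accuracy can be reported separately for each identity subgroup, which is what exposes failures that a single headline accuracy figure would hide.

```python
# Illustrative sketch only: disaggregated accuracy for a gender classifier.
# The records below are invented; they are not data from the study above.
from collections import defaultdict

# Each record: (how the person identifies, label the service returned,
# whether that label matched the person's gender).
results = [
    ("cisgender woman", "woman", True),
    ("cisgender woman", "woman", True),
    ("cisgender man", "man", True),
    ("trans man", "woman", False),   # misgendered
    ("trans man", "man", True),
    ("non-binary", "woman", False),  # a binary-only system cannot return a correct label
    ("non-binary", "man", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for subgroup, _label, is_correct in results:
    total[subgroup] += 1
    correct[subgroup] += is_correct

# Reporting accuracy per subgroup reveals failures that a single overall
# figure (here 4 correct out of 7) would hide.
for subgroup in total:
    print(f"{subgroup}: {correct[subgroup] / total[subgroup]:.0%} "
          f"correct across {total[subgroup]} images")
```

Services that only ever return ‘man’ or ‘woman’ will, by construction, misgender everyone outside that binary, which is exactly the 100% error rate the study reported for agender, genderqueer and non-binary participants.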

Facial recognition software has also been sold by tech companies to American police forces through government contracts. A group of 450 Amazon employees wrote an open letter calling on Jeff Bezos to stop selling the company’s facial recognition software, Rekognition, to law enforcement, with one employee noting that the software is a “flawed technology that reinforces existing bias” (4). They cited a test in which the software compared photos of every member of Congress against a series of publicly available mugshots. It produced 28 false matches, with the incorrect results falling disproportionately on people of colour (5). Similar issues arose at Google, where AI ethics researcher Timnit Gebru, whose earlier research had found facial recognition software to be less accurate at identifying women and people of colour, was forced out of her role at the company following a dispute over a paper she had co-authored (6). All of these issues show that when we design technologies without an intersectional approach, failing to account for a broad spectrum of gender and racial identities, the consequences can be harmful.

Gender and Deepfakes

Deepfakes are a type of synthetic media in which a form of artificial intelligence maps the likeness of an individual onto pre-existing images or video, creating media that can convincingly make people appear to be doing or saying things they have not. A video of Mark Zuckerberg gained particular attention in 2019, as it depicted him saying incriminating things about Facebook (7). While the video, and others like it, were discussed for their contribution to fake news, their origins are far more insidious and rooted in cultures of misogyny.

The term ‘deepfake’ can be traced back to around 2017, when it was coined on Reddit, although the software used to create deepfakes had been around for some time. It gained traction when deepfake videos and images that mixed female celebrities’ faces with pornography began to circulate on the site’s forums in 2018, forcing Reddit to ban the deepfake subreddit.

In September 2019, AI firm Deeptrace noted that of the approximately 15,000 deepfake videos online, 96% were pornographic and 99% of those mapped the faces of female celebrities onto porn stars (8). The use of the software for personal gain grew as people used it to create fabricated intimate images. Notable cases include a deepfake bot circulating on the messaging platform Telegram, which created pornographic images of more than 100,000 women, some of whom were under 18 (9), and the story of Noelle Martin, whose struggle with deepfake pornographic images, circulating on the internet as early as 2016, almost cost her her career (10).

Gender and Amazon’s Recruitment Algorithm

In 2018 Amazon was forced to scrap a recruitment algorithm it had developed to review applicants’ resumes and mechanise the search for top talent. The tool used AI to give candidates scores ranging from one to five stars, in much the same way that shoppers rate products. Upon further inspection by the company’s machine learning experts, it was discovered that the system was not rating candidates in a gender-neutral way.

This was because the algorithm had been trained to observe patterns in resumes submitted to the company over a 10-year period, most of which came from men, a further reflection of male dominance in the tech industry. As a result, Amazon’s system taught itself that male candidates were preferable and penalised resumes that included the word “women’s”, covering things such as membership of women’s sports groups or attendance at women’s colleges and schools.
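
To illustrate the mechanism only (this is not Amazon’s system, and all of the resumes and outcomes below are invented), here is a small Python sketch. A toy scorer learns word weights from a historical dataset in which most successful candidates were men, and the token “women’s” ends up with a negative weight purely because of who was hired in the past.

```python
# Illustrative sketch only: how a CV scorer trained on historically
# male-skewed hiring data can learn to penalise the token "women's".
# This is not Amazon's system; the data below is invented.
from collections import defaultdict

# (tokens appearing in a historical resume, whether that candidate was hired)
historical_resumes = [
    ({"captain", "chess", "club"}, True),
    ({"software", "engineer", "rugby"}, True),
    ({"software", "engineer", "women's", "chess", "club"}, False),
    ({"captain", "women's", "rugby"}, False),
    ({"software", "engineer"}, True),
    ({"women's", "college", "software"}, False),
]

seen_with = defaultdict(int)
hired_with = defaultdict(int)
for tokens, hired in historical_resumes:
    for token in tokens:
        seen_with[token] += 1
        hired_with[token] += hired

overall_hire_rate = sum(hired for _, hired in historical_resumes) / len(historical_resumes)

# A token's learned weight is how much its presence shifted the historical hire rate.
weights = {
    token: hired_with[token] / seen_with[token] - overall_hire_rate
    for token in seen_with
}

def score(resume_tokens):
    """Score a new resume by summing the learned token weights."""
    return sum(weights.get(token, 0.0) for token in resume_tokens)

womens_weight = weights["women's"]
print(f"learned weight for the token women's: {womens_weight:+.2f}")  # negative
print("candidate without the token:", score({"software", "engineer", "chess"}))
print("candidate with the token:   ", score({"software", "engineer", "women's", "chess"}))
```

Nothing in the sketch refers to gender directly; the penalty emerges entirely from the skew in the historical outcomes, which is why auditing training data matters as much as auditing the model itself.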

What are your thoughts on these examples? Were the issues raised something you were already aware of?

References:

  1. Face recognition vendor test (FRVT) – performance of automated gender classification algorithms, National Institute of Standards and Technology.
  2. Os Keyes, 2018. The misgendering machines: trans/HCI implications of automatic gender recognition.
  3. Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker, 2019. How computers see gender: an evaluation of gender classification in commercial facial analysis and image labeling services.
  4. Anonymous Amazon employee, 2018. I’m an Amazon employee. My company shouldn’t sell facial recognition tech to police.
  5. Jacob Snow, 2018. Amazon’s face recognition falsely matched 28 members of Congress with mugshots, ACLU.
  6. Karen Hao, 2020. We read the paper that forced Timnit Gebru out of Google. Here’s what it says, MIT Technology Review.
  7. Artists create ‘Zuckerberg’ deepfake video.
  8. Ian Sample, 2020. What are deepfakes and how can you spot them? The Guardian.
  9. Matt Burgess, 2020. A deepfake bot is being used to abuse thousands of women, Wired.
  10. Noelle Martin, 2020. Deepfake porn almost ruined my life, Elle.
© Creative Computing Institute
This article is from the free online course Gender-Inclusive Approaches in Technology.
