Ethics of generative AI

In this video, Dr Caitlin Bentley explores a range of ethical considerations and debates surrounding generative AI.

This activity explores the ethics of applying generative AI in education and the importance of considering social, ethical and sustainability dimensions while utilising this technology.

It contributes to your achievement of the course learning outcome: discuss the importance of considering social, ethical, inclusivity and sustainability dimensions while utilising generative AI in education. More specifically, by the end of this activity, you will be able to:

  • Identify key issues and ethical considerations for academic staff, students and wider society.
  • Explain access and inclusion issues and potentials.
  • Identify sustainability considerations in discussions about generative AI.

In this step, you will develop an awareness of a broad range of ethical issues with AI and some emerging responses.

In the video for this step, Caitlin lists and describes several complex ethical considerations regarding generative AI. She maps these onto the five layers of AI that you learned about in Activity 1.

Given these wide-ranging ethical challenges, it is essential to bring together a diverse group of experts and community, industry and government representatives to address these issues collectively. In the UK, initiatives such as the Trustworthy Autonomous Systems Hub and Responsible AI UK are fostering collaboration among these voices to develop suitable solutions.

The goal of these initiatives is to develop AI that is responsible and trustworthy in principle and by design.

Thus far, this has involved supporting research and innovation around:

  • Explainability, accountability or understandability for diverse users – explainability in generative AI refers to whether the AI system can provide clear, understandable explanations of its outputs and of how it arrived at them. Accountability refers to being able to hold AI systems and their developers or owners responsible for the system’s outputs or behaviours.
  • Verification and validation – these are distinct processes used in the design, testing and operation of AI systems. Verification confirms that the AI system correctly implements its intended purpose and that the underlying code is correct; validation confirms that the AI meets the needs and expectations of its users.
  • Robustness – in the context of generative AI, robustness refers to the extent to which the AI system can generate good outputs under a variety of conditions, especially if someone is trying to misuse it or if it is being used for something it wasn’t trained to do.
  • Safety and security – similarly, we’d expect the AI system to be safe to use and that defences against attacks on the system, its users or its operational environment have been addressed.
  • Confidence over time – these aspects, combined with the way AI systems can adapt and change, mean we also need to think about how our confidence in an AI system may change as it evolves.
  • Governance and regulation – policymakers and regulators need to govern AI well, taking into account the human values and ethics of their citizens.

In the video for this step, Caitlin stresses that it is crucial for all of us to play a part in making AI systems responsible and trustworthy. A good starting point is to develop your understanding of AI’s wide-ranging effects on society and individuals from a variety of perspectives.

The various ethical issues she introduces include labour and environmental concerns, biases arising from the data used to train AI systems and the potential misuse of AI technology. Although the research community is making good progress, we all need to take responsibility for our own learning on these matters.

When you have completed this step, you will have learned about the key ethical issues and considerations for using generative AI. In the next step, you will explore the discussion on access and inclusion, focusing on the uses of generative AI as assistive technology in higher education.

Join the conversation

Which of the ethical issues presented concerns you the most, and why? Ask your friends or colleagues what they believe should be done about it. Do you agree? Why or why not? Where do you sit on the spectrum of views you collected, and why?

This article is from the free online course Generative AI in Higher Education.
