Existential thinking: harmful but realistic?

Discussing the potential value of existential thinking about AI.

The AI-induced existential fears of some multimillionaires are not necessarily unfounded, and may sometimes align with the views of trusted experts and voices of reason.


Take the example of Professor Geoffrey Hinton and what we learnt from his CBC Radio interview.

  • Geoffrey Hinton is a Nobel laureate in Physics, known for his groundbreaking work on artificial neural networks, which earned him the title of the “Godfather of AI”.
  • He is acutely aware of how quickly AI has advanced; he believes it may become “smarter” than humans, which makes safety a prime concern in its development.
  • He describes the end of humanity at the hands of AI as “conceivable”.

In other words, the fears of Bostrom and Musk are echoed by some leading figures in the field of AI.

With that in mind, existential fears regarding AI are harmful when they are perpetuated merely to spread fear, but they are not universally unhelpful. It is beneficial to be aware of the risks of AI and, accordingly, to try to make AI development safer and more ethical.


TASK: Skip to the timestamp 7:42 for Geoffrey Hinton’s thoughts on “existential threat” regarding AI in the Amanpour & Co PBS interview (2023).

This article is from the free online course AI Ethics, Inclusion & Society.

Created by
FutureLearn - Learning For Life
