Investigation – different perspectives on AI

Pick a topic below that interests you. Spend a few minutes researching, discussing, or just thinking about it, and share your thoughts in the comments.

1.

If a machine convinces you it’s human, does that mean it understands you? Would passing a Turing-like test equate to actual intelligence or merely advanced mimicry?

2.

Are personalised algorithms helping you decide or deciding for you?
AI systems shape the content we see on social media, news platforms, and streaming services. In light of past controversies (think Cambridge Analytica), how might generative AI escalate manipulation and reshape autonomy in digital spaces?

3.

Does AI amplify global knowledge or just echo the loudest voices?
AI models are trained on vast amounts of internet data, which come disproportionately from certain regions, cultures, and languages. Do these systems democratise access to knowledge, or reinforce dominant worldviews?

4.

If AI reflects our past, how do we stop it from repeating our worst mistakes?
Because AI learns from historical data, it risks inheriting the biases, exclusions, and injustices of previous generations. Is it possible to design systems that don’t reproduce the inequalities embedded in their training data?

5.

If an AI system can outperform experts in complex tasks, but lacks understanding, is it still intelligent?
AI models can now surpass humans in diverse domains, yet they don’t “know” what they’re doing. Is intelligence about outcomes alone, or must it include awareness and reasoning?

6.

When AI models hallucinate false information, who is responsible – the system, the designer, or the user?
LLMs often produce confident but incorrect outputs, known as hallucinations. When these outputs cause harm, where should accountability lie?

7.

If AI systems learn from data shaped by human bias, can they ever be truly neutral?
From hiring tools to criminal justice software, AI systems often absorb and reproduce the prejudices in their training data. Can neutrality be engineered, or is every system shaped by these biases?

8.

If an AI can generate realistic human faces, voices, and texts, what happens to the concept of “real”?
Advances in generative AI blur the boundary between authentic and synthetic (generated) content. In a world where “deepfakes” are increasingly convincing, how do we define and validate trust, evidence, or originality?

9.

If access to powerful AI requires massive resources, who gets to shape the future of intelligence?
Training leading AI models requires enormous computational power and therefore capital. What are the implications of this for democracy, innovation, and whose priorities are embedded in the systems?

10.

If a system is designed to predict, not understand, why do we trust it to reason?
Large language models succeed by identifying statistical patterns. Should we rethink our expectations of these systems, or rethink our conception of intelligence itself?

This article is from the free online

AI Ethics, Inclusion & Society

Created by
FutureLearn - Learning For Life