Hallucination, trust, and reliability
“chatting with a[n] omniscient, eager-to-please intern who sometimes lies to you.” [1]
The dangers of being unaware of this problem can be far-reaching. In one case, a lawyer submitted AI-generated case precedents to court that simply did not exist. There are also many examples of hallucinated references appearing in published journal articles, suggesting this is an issue not only for students but also for their teachers!
The baby peacock problem
A complicating factor is that AI outputs containing false information can further pollute content on the internet, profoundly undermining the reliability of available information by reinforcing fallacious content or adding to pre-existing inaccuracies. It is often in image outputs that the consequences of this are clearest; take, for example, images purporting to be of baby peacocks! Try it yourself: search for “baby peacock” and see how many of the results on the first page look like an actual baby peacock, as in the image below:
By Rolf Dietrich Brecher via Flickr, available under CC BY-SA 2.0.
Then compare this to how many hits look something like the one below, which is an image we generated with the prompt “photo realistic baby peacock” using Midjourney.
Image generated in Midjourney by Martin Compton on 28 November 2024.
For educators, this reinforces the vital importance of teaching comprehensive source-evaluation skills from an early age. Primary and secondary school pupils should learn to question not just AI-generated content, but all information sources, including traditional media, social networks, textbooks, and even teachers. The goal isn’t to breed cynicism, but to develop healthy scepticism and verification habits.
Practical strategies
Raising information literacy
Regular testing and refinement of AI systems using human feedback and error analysis helps identify patterns of hallucination. It is important to note here that much of the early reinforcement learning from human feedback was carried out by workers in the Global South, often under exploitative conditions and involving exposure to traumatic content.
Although the skill is not new, students need to continue learning how to trace information to its original source. By teaching students about AI hallucinations, educators can open up deeper discussions about information literacy, critical thinking, and the nature of knowledge itself: Why do we trust certain sources? How do we verify information? What makes a source reliable?
If we have access to tools that enable us to localise the source material, or ‘ground’ responses in verified data using retrieval-augmented generation (RAG), this enables much greater trust and reliability. Many schools, colleges, and universities are building and using bots grounded in this way, but many others lack the resources or expertise to do so.
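To make the idea concrete, here is a minimal, illustrative Python sketch of the RAG pattern: retrieve relevant passages from a store of verified material, then instruct the model to answer only from those passages. The document snippets, the keyword-overlap retriever, and the prompt wording are all hypothetical simplifications; production systems typically use vector embeddings, a proper search index, and a hosted model API.

```python
# Minimal, illustrative RAG sketch. The snippets below and the naive
# keyword-overlap retriever are hypothetical; a real system would use
# vector embeddings and pass the grounded prompt to a model API.

# A small store of "verified" material, e.g. extracts from institutional documents.
DOCUMENTS = [
    "Coursework extensions of up to two weeks can be requested via the student portal.",
    "The library runs referencing workshops every Tuesday during term time.",
    "Peachicks (baby peafowl) are brown and grey, not blue and iridescent.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap and return the top k."""
    q_words = set(question.lower().split())
    return sorted(
        DOCUMENTS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved sources, reducing hallucination."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say you do not know.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What do baby peacocks look like?"))
```

The key design choice is the final instruction: rather than drawing on whatever its training data suggests, the model is told to refuse when the verified sources do not cover the question, which is what makes grounded bots more trustworthy.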
Understanding that ‘everybody makes mistakes’
Of course, as with all things, there are other ways to look at this issue. In contexts such as education and academia, accepting the potential for generative AI to hallucinate could be both pragmatic and beneficial, especially when framed alongside the fallibility of human outputs. Too often the critique of AI hallucination carries with it the assumption that humans do not make mistakes, never make things up, or are not fallible in any way! Just as individuals may misremember or inaccurately convey information, so, the counterargument goes, generative AI’s inaccuracies should not overshadow its immense potential to support open-ended and exploratory learning processes.
A compelling analogy is to see AI as a junior researcher: someone with access to an extensive library but who is still learning how to access the right sources. Similarly, generative AI, while powerful, may occasionally draw from the wrong ‘book’ or misinterpret the context, much like a human navigating vast amounts of information.
The key is to approach AI with the same critical faculties applied to human work: checking, verifying, and cross-referencing outputs for reliability. Generative AI thrives not as a fact-retrieval tool but as a ‘co-intelligence’ capable of reframing concepts, generating analogies, and fostering creative thinking. In this sense, its ‘mistakes’ can often spark dialogue and deeper understanding, aligning with human learning as an iterative, error-tolerant process. By holding generative AI to appropriate standards, educators could unlock capacity to enrich personalised and active engagement, rather than only focusing on its imperfections.
Note on language: many commentators and teachers dislike the term ‘hallucination’ because it further anthropomorphises AI tech. Alternative suggestions include ‘confabulation’ or ‘delusions’, but these do not have the common currency of hallucination. One that may find its way into the common discourse is ‘AI mirage’, which is the term settled on by Anna Mills and Nate Angell in their endeavour to find a better alternative [2].
Now that you have completed this step, you have considered why and in what ways AI tools can hallucinate. In the next step, you will consider other issues of trust and reliability by looking at disinformation and misinformation.
References
- Schmitz R, Mollick E. Has AI reached the point where a software program can do better work than you? [interview on the Internet]. US: NPR; 2022 Dec 16 [cited 2025 Apr 23]. Available from: https://www.npr.org/2022/12/16/1143330582/has-ai-reached-the-point-where-a-software-program-can-do-better-work-than-you
- Mills A, Angell N. Are We Tripping? The Mirage of AI Hallucinations [Internet]. United States: [publisher unknown]; 2025 Feb 6 [updated 2025 Feb 13; cited 2025 Mar 4]. Available from: https://ssrn.com/abstract=5127162
Try it out
Try this bot, which will always hallucinate. How could you use something like this to support your teaching or learning?
Join the conversation
How far can we trust AI to provide accurate information? Is ‘good enough’ ever enough?
Hallucination is just one example of how AI can generate false or misleading information. For this discussion, think about your experiences with AI tools like ChatGPT, Google Translate, or recommendation systems. Have you encountered any instances where the information provided was incorrect or surprising?