But "hallucination" is the standard term in the field of artificial intelligence for a language model making things up, so it's worth knowing.
One useful feature of hallucinations is that they tend to vary from one output to the next. So one check you can run is to ask the same question multiple times and see whether the answers are consistent.
You could also try the same conversation in different models (Claude, Gemini, Perplexity) and compare the answers. If they are all very different (not just the same information worded differently), that’s a clue that hallucination is happening.
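If you're comfortable with a little programming, you can automate the "ask multiple times" check. Below is a minimal sketch using the OpenAI Python library; the model name and the question are placeholders chosen for illustration, not part of this tutorial, and the same idea works with any chat model's API.

```python
# A minimal sketch of the "ask the same thing multiple times" check.
# Assumes the OpenAI Python library is installed and an API key is
# set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical question used only for illustration.
question = "Who wrote the 1923 novel 'Example Title'?"

# Ask the same question several times and collect the answers.
answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        messages=[{"role": "user", "content": question}],
    )
    answers.append(response.choices[0].message.content)

# If the answers disagree on the core facts, treat the output with
# suspicion: inconsistency is a common sign of hallucination.
for i, answer in enumerate(answers, 1):
    print(f"Answer {i}: {answer}")
```

Comparing the printed answers by eye is usually enough: if the core facts (names, dates, titles) shift between runs, that is the same warning sign described above.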
This tutorial is licensed under a Creative Commons Attribution 4.0 International License.