
Tutorial on ChatGPT and Generative AI: Prevent Harmful Uses

These tutorials contain short videos (3 minutes or less) and quiz questions for self-review of what you learned.

How Does ChatGPT Aim to Prevent Harmful Use?

How to make hallucination less likely


A language model is not a person, so it can't actually "hallucinate."

But "hallucination" is the standard term in the field of artificial intelligence for a language model making things up, so it is worth knowing.

A language model that can search the web is less likely to make things up than a stand-alone model without web search.


Detecting hallucination

One feature of hallucinations is that they tend to vary from one output to the next. So one thing you can do is ask the same question several times and check whether the answers are consistent.

You could also try the same conversation in different models (Claude, Gemini, Perplexity) and compare the answers. If they are all very different (not just the same information worded differently), that’s a clue that hallucination is happening.
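If you want to automate the repeat-the-question check, here is a minimal sketch in Python. It assumes the OpenAI Python library (openai 1.x) is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name and the placeholder question are illustrative only, not recommendations.

# Ask a model the same question several times and compare the answers.
# Assumptions (illustrative only): openai 1.x is installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is used as an example model name.
from difflib import SequenceMatcher
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_repeatedly(question: str, times: int = 3) -> list[str]:
    """Send the same question several times and collect the answers."""
    answers = []
    for _ in range(times):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,      # default sampling, so answers can vary
        )
        answers.append(response.choices[0].message.content)
    return answers

def rough_agreement(answers: list[str]) -> float:
    """Average pairwise text similarity: near 1.0 means the answers are nearly
    identical; a low score means they disagree, which is a hallucination clue."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

QUESTION = "Replace this with the question you want to double-check."
answers = ask_repeatedly(QUESTION)
print(f"Agreement score: {rough_agreement(answers):.2f}")
for answer in answers:
    print("-", answer)

Keep in mind that the similarity score only measures how alike the wording is, so it is a rough proxy: the real check is still reading the answers yourself and asking whether they give the same information, not just the same phrasing.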


Even though AI models that can search are less likely to make up facts, it can still happen.

This tutorial is licensed under a Creative Commons Attribution 4.0 International License.
