
Generative AI image and video generators often output biased results. For example, if you ask for an image of a group of doctors or lawyers, the generator will often portray them as white men.
[Image: AI-generated result for the prompt "A group of lawyers having a conversation."]
[Image: AI-generated result for the prompt "People eating lunch outdoors."]
These models are typically trained on large amounts of data from the Internet, and that data tends to overrepresent particular countries, languages, and cultures rather than the entire world. The model then learns what to output from that skewed data.
Developers of large AI models have implemented some guardrails to address this problem. But those developers may not have thought of every type of bias to address, since, like all of us, they see the world from their own viewpoint.
Keep an eye out for possible biased outputs and modify your prompts to correct for them. Be very specific in your prompts; see the example below.
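The same advice applies if you generate images through an API rather than a chat interface. Here is a minimal sketch using the OpenAI Python SDK (the model name and both prompts are illustrative assumptions, not recommendations); the point is simply that the specific prompt spells out the representation you want instead of leaving it to the model's defaults.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt that may fall back on the model's biased defaults.
vague_prompt = "A group of doctors talking in a hospital hallway."

# A more specific prompt that spells out the diversity you want to see.
specific_prompt = (
    "A group of doctors of varied genders, ages, and ethnicities "
    "talking in a hospital hallway."
)

# Generate one image from the more specific prompt.
response = client.images.generate(
    model="dall-e-3",
    prompt=specific_prompt,
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # link to the generated image
```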
To help set new digital standards of representation, Dove has created the Real Beauty Prompt Playbook (PDF download). The playbook focuses on unrealistic beauty standards for women and offers tips on creating images that represent a more diverse range of people on the most popular generative AI models. See pages 63-70 for an inclusive prompting glossary.
This tutorial is licensed under a Creative Commons Attribution 4.0 International License.