MAVIS
LLM Hallucinations

In the context of a Large Language Model (LLM), a hallucination occurs when the model generates information that is factually incorrect, fabricated, or not grounded in its input or training data, even though the output may sound plausible.

Example:

Prompt: “In what year did Einstein discover oxygen?”
Hallucinated Answer: “Albert Einstein discovered oxygen in 1920.”
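
In reality, oxygen was discovered by Carl Wilhelm Scheele and Joseph Priestley in the 1770s, not by Einstein, so the answer above is ungrounded. The Python sketch below illustrates one crude way to flag such an answer: checking whether the content words of a claim all appear in a trusted reference passage. The reference text, stopword list, and word-overlap test are illustrative assumptions, not a real hallucination detector.

# Minimal sketch: flag an answer as a possible hallucination when its
# content words are not all supported by a trusted reference text.
# The REFERENCE passage and STOPWORDS set are illustrative assumptions.

REFERENCE = (
    "Oxygen was discovered independently by Carl Wilhelm Scheele and "
    "Joseph Priestley in the 1770s. Albert Einstein developed the theory "
    "of relativity and received the 1921 Nobel Prize in Physics."
)

STOPWORDS = {"the", "a", "an", "in", "of", "by", "was", "did", "and"}

def is_grounded(claim: str, reference: str) -> bool:
    # Very rough word-overlap check, not genuine fact verification.
    words = [w.strip(".,?!\"'").lower() for w in claim.split()]
    content = [w for w in words if w and w not in STOPWORDS]
    return all(w in reference.lower() for w in content)

answer = "Albert Einstein discovered oxygen in 1920."
if not is_grounded(answer, REFERENCE):
    print("Possible hallucination: claim is not supported by the reference.")

In practice, hallucination detection usually relies on retrieval-augmented generation, entailment or fact-checking models, or human review rather than word overlap; the sketch only illustrates the idea of checking model output against grounded sources.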

Causes:

- Gaps, errors, or outdated information in the training data, leaving the model with no reliable fact to draw on.
- The next-token prediction objective, which rewards fluent, plausible text rather than verified facts.
- Prompts that embed a false premise (as in the example above), which the model tends to accept rather than challenge.
- Randomness introduced during decoding, for example sampling at higher temperatures.

Types:

- Factual hallucinations: statements that contradict real-world knowledge.
- Faithfulness hallucinations: output that contradicts, or is unsupported by, the provided input or source documents.

Hallucinations remain a key limitation when using LLMs for tasks that require high accuracy or trust.