Module 1 · ~5 minutes · Beginner · Any AI Platform

Lesson 05: Understanding AI Hallucinations

AI can confidently lie to you — and not even know it's doing it. This lesson is your wake-up call: learn why hallucinations happen, see real-world examples of the damage they cause, and start building your verification instincts.

Lesson Video — Understanding AI Hallucinations


Key Takeaways

  • AI hallucinations are not glitches — they're a fundamental feature of how language models work
  • Real lawyers were fined $5,000 for submitting AI-fabricated court cases, and Air Canada was held liable for a discount policy its chatbot invented
  • AI would rather give a confident wrong answer than admit it doesn't know
  • You can start catching hallucinations by asking AI about topics where you're already an expert
  • The sources feature in Gemini is your first line of defense — but not your last

Your Wake-Up Call

Before we go further, here's a quick reality check. These are real things that actually happened:

Lawyers fined $5,000 for submitting legal briefs to federal court containing multiple completely fabricated case citations — all generated by ChatGPT. The AI invented the case names, courts, judges, and legal reasoning. The lawyers didn't know. The citations sounded real and authoritative.
Air Canada lost a legal case after their AI chatbot invented a bereavement discount policy that didn't exist. A customer relied on the chatbot's advice, booked travel, and then the airline refused to honor the discount. A tribunal ruled Air Canada responsible for their chatbot's hallucination.
Tax professors and journalists found that GenAI tools give tax advice that's wrong roughly half the time — delivered with complete confidence. The IRS doesn't care whether your tax errors came from AI. You're still responsible.

These aren't edge cases. This is what happens when people trust AI without verification. And here's the scariest part: the AI doesn't know it's wrong. It isn't lying to you. It genuinely doesn't know that the information is fabricated.

What Actually Is an AI Hallucination?

An AI hallucination is when GenAI confidently makes something up — presenting invented information as fact.

Think of GenAI not as a supercomputer that knows things, but as an incredibly fast, eager-to-please intern. This intern has read a massive amount of internet content, but struggles to distinguish between a peer-reviewed research paper and a satirical article from The Onion.

When you ask the AI intern a question, it rapidly stitches together fragments of things it has read to give you the most plausible-sounding answer. It doesn't know if it's true. It just knows the words fit together correctly.

Real example of AI hallucination: Someone asked GenAI, "How many rocks should I eat?" The AI gave a detailed answer about eating small pebbles for their mineral content. It had read a satirical article from The Onion and treated it as legitimate health advice. Even smart interns miss obvious jokes sometimes.

Here's another example of how pattern-matching creates hallucinations:

  • The model learns that New York City is called "the Big Apple."
  • The model also learns that apples are healthy fruits.
  • The model proudly announces: "New York City is a healthy city because it's named after a fruit."

Sounds plausible. Completely wrong. That's a hallucination — finding a pattern that isn't actually there.

Why AI Hallucinates (The Real Reason)

Language models work by predicting the most statistically likely next word or phrase, given what came before. They're pattern finders operating at massive scale. They don't have a truth database. They don't know what's real and what's fiction.
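The "most statistically likely next word" idea can be sketched with a deliberately tiny toy: a bigram model that only knows which word tends to follow another word in its training text. This is a made-up miniature stand-in, not how a real LLM is built, but it shows the core point — the model always produces the most probable continuation, with no truth check and no way to say "I don't know."

```python
from collections import Counter, defaultdict

# Hypothetical training text for the toy model (assumption: invented for
# illustration, echoing the "Big Apple" example below).
corpus = (
    "new york is called the big apple . "
    "an apple is a healthy fruit . "
    "an apple is a tasty fruit ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word. There is no truth
    # database consulted here -- just frequency. The function always
    # answers with something; it cannot decline.
    return following[word].most_common(1)[0][0]

print(predict_next("apple"))    # -> "is"
print(predict_next("healthy"))  # -> "fruit"
```

Notice that the model "knows" apples are associated with health purely because the words co-occur, which is exactly the kind of pattern-matching that lets a real model stitch "Big Apple" and "healthy fruit" into a confident, wrong claim.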

This creates a fundamental problem: AI would rather give you a confident wrong answer than disappoint you with "I don't know."

Its training rewarded generating helpful-sounding responses, so it does exactly that, even when it shouldn't. This people-pleasing tendency amplifies hallucinations: the model is optimized to give you an answer, any answer.

Even on platforms that do try to weight credible sources more heavily, AI can still be fooled by:

  • Convincing-looking misinformation repeated across many sites
  • Outdated information from otherwise credible sources
  • Myths and misconceptions treated as fact because they're widely repeated online
  • Satire mistaken for reporting

The key insight: This isn't a bug that will be fully fixed. It's inherent to how language models work. Even the most advanced models hallucinate. The skill you're building here — verification — never becomes obsolete.

Your First Line of Defense: The Sources Feature

Gemini (and many other modern AI platforms) has a built-in tool to help you start verifying output: the sources feature.

When you ask Gemini a factual question, look for:

  • Source buttons after individual claims in the response
  • A sources button at the end of the full response
  • A "Double-check response" option (click the three dots at the bottom)

When you use double-check, you'll see:

  • Green highlighting — Gemini found supporting evidence online
  • Orange highlighting — It couldn't verify the claim, or found contradicting information. This is your red flag.
  • No highlighting — Not enough information to check, or the statement wasn't meant as a fact

Important caveat: Green highlighting means Gemini found similar statements online — not that the claim is definitely true. The double-check can "confirm" a falsehood if it's repeated widely enough across the web. Green is better than orange, but it's not a guarantee. Always apply your own judgment.

And critically: always click the source links and read them yourself. Don't just trust that a source exists. I've personally found cases where AI cited a source, I clicked the link, and the fact it mentioned wasn't even in the linked article. Hallucinations can extend to the sources themselves.

Use the sources feature as a smoke detector for obvious mistakes — not as a certification of truth.

When Verification Matters Most

Not every AI output needs to be fact-checked at the same level. Use your judgment:

Lower risk — verify lightly

  • Creative brainstorming
  • Writing assistance
  • Format suggestions
  • Learning basic concepts you can test yourself

Higher risk — verify thoroughly

  • Legal information
  • Medical / health advice
  • Financial / tax information
  • Facts you'll present publicly
  • Academic citations

Six months from now, you'll watch someone confidently share AI-generated information that's completely wrong, while everyone else nods along. You'll quietly know better. That's not luck — that's the skill you're building right now.

Your Turn — Be the Hallucination Detective

This is the most powerful way to build your verification instincts: test AI on a topic where you're already an expert.

Pick something you know deeply — your hometown, your favorite sport, your professional field, your hobby, a game you've mastered. Start asking AI increasingly specific questions about it. Watch what happens.

Pay attention to three things:

  • When does AI start making things up?
  • How confident does it sound when it's wrong?
  • What types of details does it fabricate? (Names? Dates? Statistics?)

Then try using Gemini's double-check feature on a factual question you care about and see which claims turn orange.

Reflection Questions
  1. What did you notice about how confident AI sounded when it gave you wrong information?
  2. Did you find any orange-highlighted claims in the double-check? What were they about?
  3. How has this lesson changed how you'll use AI going forward?

Key Prompts from This Lesson

Hallucination Test Prompt
What is the link between multitasking and productivity?

[After getting the response, use Gemini's double-check feature to verify the claims. Click the three dots below the response and select "Double-check response."]

Expert Knowledge Test
[Choose a topic you know deeply — your profession, hobby, hometown, etc.]

Ask AI increasingly specific questions about that topic. For example, if you're from Chicago:
"What are the exact hours and admission prices for the Art Institute of Chicago?"
"What year did the Chicago Bulls win their first championship, and who was the head coach?"

Test it until you catch a mistake, then note how confident it sounded.