Getting started

How to tell when AI is wrong: a beginner's fact-checking guide

16 March 2026 - 7 min read
James, co-founder of Smash Your AI - 18 years in education, now helping businesses and individuals get real results from AI.

AI gets things wrong. I have been caught out by it myself, more than once. And the tricky part is that when AI is wrong, it does not hesitate or flag any uncertainty. It states incorrect information with exactly the same confidence as correct information.

This is the single most important thing to understand about AI tools. They are incredibly useful, but they are not reliable sources of truth. Once you accept that, and learn how to check, you can use them confidently without getting burned.

What is hallucination? (in plain English)

You will hear the word "hallucination" a lot in conversations about AI. It sounds dramatic, but it simply means the AI made something up.

AI does not understand truth the way you and I do. It generates text by predicting what words are most likely to come next, based on patterns it learned during training. Most of the time, those predictions align with reality. But sometimes, the AI produces something that sounds completely plausible but is entirely fabricated.

It is not lying. It does not have an intention to deceive. It just does not know the difference between a real fact and a convincing-sounding sentence. Think of it like a very confident person at a dinner party who always has an answer, even when they are not entirely sure. They are not trying to mislead you. They just cannot help themselves.

Real examples of AI getting things wrong

These are not hypothetical. These are things I have personally encountered or seen from people I have trained.

  • Making up statistics. I once asked AI for data on AI adoption rates in UK businesses. It gave me a very specific figure, attributed to a 2025 report by the Office for National Statistics. The number was plausible. The report did not exist. It had invented the source and the statistic entirely.
  • Inventing academic papers. This is a well-known problem. Ask AI to cite sources and it will sometimes create convincing-looking references, complete with author names, journal titles, and publication dates, that do not correspond to any real paper.
  • Confident but wrong on specifics. I once used AI to generate exam revision content and it confidently referenced a specification point that did not exist. It looked completely plausible. The format was right, the language matched the exam board's style, but the actual content point was fabricated.
  • Getting dates and numbers wrong. AI frequently gets specific dates, ages, distances, and quantities slightly wrong. Close enough to sound right, wrong enough to cause problems if you rely on them.
  • Inventing URLs. Ask AI for a link to a specific resource and it will often generate a URL that looks legitimate but leads to a 404 page. The domain might be real, but the specific page path is made up. (A short script below shows one way to test links in bulk.)

The common thread is that all of these errors sound completely convincing. There is nothing in the AI's tone or phrasing that signals "I am guessing here." It delivers wrong answers with the same authority as right ones.
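
If you are comfortable running a little code, the link problem is the easiest one to automate away. Here is a minimal Python sketch, assuming the third-party requests library (pip install requests); the example URLs are placeholders, so swap in whatever links the AI gave you.

import requests

def url_resolves(url: str, timeout: int = 10) -> bool:
    """Return True if the URL loads successfully (status code under 400)."""
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        # Covers DNS failures, timeouts, and refused connections.
        return False

# Placeholder links: replace with the URLs the AI actually gave you.
suspect_links = [
    "https://example.com/",
    "https://example.com/report-that-may-not-exist",
]

for link in suspect_links:
    status = "looks live" if url_resolves(link) else "broken or unreachable"
    print(f"{link} -> {status}")

A live page is not the same as a correct citation, of course. The script only rules out dead links; you still need to check that the page says what the AI claims it says.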

What is AI most likely to get wrong?

Not everything AI produces needs the same level of scrutiny. Some types of information are much more error-prone than others. Here is what to watch out for.

High risk of errors:

  • Specific statistics and numbers
  • Dates, especially for recent events
  • Citations and references to sources
  • URLs and links
  • Niche or specialist facts (the more obscure the topic, the higher the risk)
  • Information about very recent events (within the last few months)
  • Legal, medical, or financial specifics
  • Biographical details about less famous people

Lower risk of errors:

  • Well-known general knowledge
  • Widely documented processes and concepts
  • Common definitions
  • General advice and best practices
  • Creative content (where there is no "wrong" answer)
  • Structural tasks like formatting, summarising your own text, or brainstorming ideas

The pattern is clear. The more specific and verifiable a claim is, the more important it is to check. General concepts are usually fine. Exact numbers, dates, and sources need verification.

A simple 5-step fact-checking process

You do not need to verify every word AI produces. That would defeat the purpose of using it. But for anything you are going to publish, share, or rely on, use this process.

  1. Read it with scepticism, not trust. The biggest mistake people make is reading AI output the same way they would read a trusted website or textbook. Shift your mindset. Assume the AI might be wrong about specific claims and read with a critical eye.

  2. Flag anything specific. Circle or highlight any specific statistics, dates, names, quotes, or claims that the content depends on. These are your fact-checking targets. General statements like "exercise is good for health" do not need checking. "A 2024 NHS study found that 73% of adults..." absolutely does. (The sketch at the end of this section shows a simple way to automate this flagging.)

  3. Verify the key claims. For each flagged item, do a quick search. Google the statistic. Check the date. Look up the source. This does not take long. A 30-second search can confirm or disprove most claims. If you cannot find any evidence that a claim is true, it probably is not.

  4. Check any sources or links. If the AI cited a report, paper, or website, click the link or search for the source by name. Does it actually exist? Does it actually say what the AI claims it says? If the AI gave you a URL, test it. Broken links and fake citations are common.

  5. Ask the AI to verify itself. This sounds odd, but it works. If you are unsure about something, ask the AI directly: "Are you certain that statistic is accurate? Can you tell me exactly where it comes from?" AI will sometimes admit uncertainty when challenged, or change its answer, which tells you it was not confident in the first place.

This process adds maybe two to five minutes to your workflow. It is a small investment that prevents embarrassing or costly mistakes.
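
Step 2 is also easy to semi-automate if you draft with code. The rough Python pass below flags the kinds of specifics named above (percentages, years, money amounts, URLs); the patterns are illustrative rather than exhaustive, and every flagged item still needs a human check.

import re

# Rough patterns for the error-prone specifics named in step 2.
PATTERNS = {
    "percentage": r"\b\d{1,3}(?:\.\d+)?%",
    "year": r"\b(?:19|20)\d{2}\b",
    "money": r"[£$€]\s?\d[\d,]*(?:\.\d+)?",
    "url": r"https?://\S+",
}

def flag_specifics(text: str) -> list[tuple[str, str]]:
    """Return (label, matched text) pairs worth verifying by hand."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in re.finditer(pattern, text):
            hits.append((label, match.group()))
    return hits

draft = "A 2024 NHS study found that 73% of adults prefer... See https://example.org/report"
for label, value in flag_specifics(draft):
    print(f"CHECK [{label}]: {value}")

Nothing here verifies anything. It just builds your list of fact-checking targets faster than a highlighter does.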

Want a printable fact-checking checklist?

Download our free guide that includes a one-page fact-checking checklist you can keep next to your desk. Plus tips for getting more accurate results from AI in the first place.

Get the free checklist

Tools that help with fact-checking

Some AI tools are better than others when it comes to accuracy and verifiability.

Perplexity

This is my go-to for any question where accuracy matters. Perplexity gives you answers with numbered citations to real sources. You can click through and verify. It is essentially an AI-powered search engine that shows its working. The free version is excellent. Available at perplexity.ai.

NotebookLM (by Google)

If you need AI to work with your own trusted documents, NotebookLM is brilliant. You upload your source material (a textbook, a spec, a report) and it only answers based on what is in those documents. This massively reduces the hallucination problem: instead of generating answers from scratch, it pulls from content you have provided and trust.

Gemini with web search

Google Gemini can search the web in real time and often provides links alongside its answers. This makes it easier to verify claims without leaving the conversation. Not perfect, but better than tools that generate answers without any source material.

When to trust AI more vs less

Not every task requires the same level of caution. Here is a practical guide.

Trust more (lower stakes, verification less critical):

  • Brainstorming and idea generation. There is no "wrong" brainstorm.
  • Drafting emails and messages. You are reviewing before sending anyway.
  • Summarising text you provided. It is working from your source material.
  • Formatting and restructuring content. Low risk of factual errors.
  • Creative writing and content ideation. Accuracy is not the goal.

Trust less (higher stakes, always verify):

  • Medical, legal, or financial information. Always verify with a qualified professional.
  • Statistics and data for reports or presentations.
  • Content you are publishing under your name or your company's name.
  • Educational content for students. Getting a fact wrong in a classroom resource is a big deal.
  • Anything involving specific people, companies, or events.
  • Technical instructions where errors could cause harm or damage.

The simplest rule of thumb: the higher the cost of being wrong, the more carefully you should check. A brainstorming list with a dud idea costs you nothing. A published article with a fake statistic could cost you credibility.

How to get more accurate results in the first place

You cannot eliminate errors entirely, but you can reduce them significantly with better prompting.

  • Tell it to flag uncertainty. Add "If you are not confident about any specific fact, say so" to your prompt. AI will not always comply, but it often will, and that gives you useful signals about what to double-check.
  • Ask it not to make things up. "Only include statistics you are confident are accurate. If you do not know a specific number, say so rather than guessing." This simple instruction reduces hallucination noticeably. (The sketch after this list shows how to bake instructions like this into an API request.)
  • Provide your own source material. The more context and reference material you give the AI, the less it needs to generate from its training data. Upload the relevant document, paste in the key information, or use a tool like NotebookLM that is designed for this.
  • Ask for sources. "Cite your sources for any statistics or specific claims." The AI may still hallucinate sources, but it makes them easier to check, and asking seems to improve accuracy overall.
  • Cross-reference between tools. If something important comes from ChatGPT, ask the same question to Claude or Gemini. If they all agree, it is more likely to be accurate. If they disagree, that is a red flag worth investigating.
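
If you reach AI through an API rather than a chat window, those first two instructions can be baked into every request. Here is a minimal sketch using OpenAI's official Python client (pip install openai, with an API key set in your environment); the model name and prompts are illustrative, and the same pattern works with any chat-style API.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Guardrail instructions from this section, sent with every request.
GUARDRAIL = (
    "Only include statistics you are confident are accurate. "
    "If you are not confident about any specific fact, say so explicitly "
    "rather than guessing, and cite a source for every claim."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": "Summarise AI adoption trends among UK small businesses."},
    ],
)

print(response.choices[0].message.content)

Wrapping calls like this also makes the cross-referencing tip cheap: send the identical question to a second provider and compare the answers before you trust either.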

The bottom line

AI is one of the most useful tools I have ever used. I rely on it every single day for my work. But I also treat it the way I would treat a very capable but occasionally unreliable research assistant. I trust the general direction. I verify the specifics.

The people who get burned by AI are the ones who treat it as infallible. The people who get the most from AI are the ones who understand its limitations and work with them.

Use AI boldly. Check it carefully. Publish it confidently.

If you want to learn more about using AI effectively and safely, our online course includes a full module on working with AI output, including advanced fact-checking techniques and strategies for getting more reliable results.

Learn to use AI with confidence

Our online course teaches you how to get reliable, high-quality results from AI tools, including how to spot and avoid errors.

View the course