When AI Gets It Wrong

AI makes mistakes. Knowing what kinds of mistakes it makes, why it makes them, and how to catch them will save you from embarrassing moments and wrong decisions.

This page covers three topics that are really one subject: understanding AI well enough to use it confidently.

Why AI makes mistakes

AI is not a database. It does not look up facts. It generates responses by predicting what words should come next, based on patterns it learned from billions of text examples. Most of the time this produces excellent, accurate output. Sometimes it produces something that sounds completely confident and is completely wrong.

AI is probabilistic, not factual

When you ask a question, Claude does not search the internet (unless you are in Research mode) or look up a verified answer. It generates an answer based on patterns. For most questions about language, writing, and general knowledge, this works extremely well. For specific facts, recent events, prices, and local information, it can be wrong.

What this means for Adam Bike: Do not trust specific prices, dates, competitor information, or Dubai-specific facts without verifying them. Claude knows Dutch bikes exist and roughly how they work. It does not know that a specific Gazelle model is AED 5,999 at your store unless you tell it in your context file.

How to spot bad output

Most bad AI output falls into one of these categories:

Hallucinated specifics

AI confidently states a specific fact that is wrong. Examples: a specific price, a product spec, an address, a phone number, a statistic.

How to catch it: any time AI gives you a specific number, date, or fact, verify it. Do not use a price Claude gives you unless it came from your context file. Do not publish a statistic without checking the source.

Outdated information

AI has a training cutoff date. Events after that date are unknown to it. Prices, regulations, competitor offerings, and market conditions change.

How to catch it: when you are not sure whether information is current, ask Claude directly: “Is this information likely to be current, or could it be outdated?” Enable Research mode for questions where up-to-date information matters.

Off-brand language

Sometimes the output is technically correct but does not sound like Adam Bike. Too sporty, too formal, too generic.

How to catch it: read the output before using it. If it sounds like it could be from any bike shop anywhere, it needs editing. Your context file reduces this significantly.

Misunderstanding your question

If the output is not what you wanted, the problem is usually the question, not Claude. Vague questions get vague answers.

How to catch it: if you are unhappy with output, before trying again, ask yourself: “Did my prompt include enough context? Was the task clear? Did I specify the format I wanted?”

The 80/20 rule

A useful way to think about AI output: Claude gets roughly 80% right on most tasks. The remaining 20% needs your attention.

For writing tasks (captions, product descriptions, emails), the structure, tone, and ideas are usually good. The specific details (your exact prices, your specific product names, Dubai-specific references) sometimes need adjustment.

Your job is not to check everything. Your job is to read the output with attention on the 20%:

  • Are any specific facts correct?
  • Does it sound like Adam Bike?
  • Is anything missing that the customer needs to know?

Review takes 30 seconds. You are not proofreading every word. You are scanning for anything that needs to change.

Fixing vague prompts

The most common cause of bad output is a vague prompt. Here is how to fix it:

Instead of: “Write a caption about our bikes”

Try: “Write an Instagram caption about the Gazelle e-bike (AED 5,999). Target buyer: Dubai expat parent doing school runs. Tone: warm, not sporty. Under 100 words. End with a question about their commute.”

Instead of: “Reply to this customer”

Try: “Customer asked: [paste message]. Draft a reply from Adam Bike. Under 70 words. Friendly. If they are asking about pricing, mention our BNPL option.”

The more specific you are, the less editing you need to do.

When NOT to trust AI

Some tasks require human judgment and verified sources. Do not use AI output directly for:

Legal matters: Contract terms, liability questions, consumer protection regulations in the UAE. AI can explain concepts but cannot give you legal advice. Use a lawyer.

Medical questions: If a customer or team member has a health question. Refer them to a professional.

Financial and tax decisions: VAT calculations, customs duties, accounting decisions. Verify with your accountant.

Any claim you will make publicly: If you say something in an ad, on your website, or in a press release, verify every specific claim. “Voted number one bike shop in Dubai” requires a source. Do not let AI invent credentials for you.

Context window limits explained

Claude can read a lot of text in one conversation, but there is a limit. This limit is called the context window. Think of it like a whiteboard: Claude can work with everything written on the whiteboard, but the whiteboard has a fixed size.

In practice, this matters for:

Very long documents: If you upload an extremely long file (hundreds of pages), Claude may not process all of it equally. It is better with the beginning and end of a document than the middle.

Very long conversations: After many back-and-forth exchanges, Claude may start to lose track of things mentioned earlier. Starting a fresh conversation resets the whiteboard.

Your context file: Keep your Adam Bike context file concise. Bullet points and short paragraphs work better than long narrative paragraphs. Claude processes dense, scannable text better than flowing prose.

You will rarely hit context limits in normal use. If Claude seems to forget something you mentioned earlier in a long conversation, start a new session.

The practical checklist

Before using any AI output for something public-facing:

  1. Did Claude invent any specific numbers or facts? Verify them.
  2. Does the tone sound like Adam Bike, or does it need adjusting?
  3. Are the prices, product names, and Dubai locations correct?
  4. Is anything missing that a customer would need to know?
  5. Does it make any claims you cannot back up?

This review takes under a minute. Make it a habit.

When something goes wrong

If Claude gives you something clearly wrong or unhelpful, do not try the same prompt again. Instead:

  1. Tell Claude what was wrong: “This is too generic, it does not sound like our brand.”
  2. Add more context: “Here is a caption I wrote myself that has the right tone: [your example].”
  3. Be more specific about what you need: “Rewrite this, keeping the structure but making it sound warmer and more specific to Dubai.”

Bad output is almost always fixable. The answer is better input.