When Not to Use AI: Understanding AI's Limitations
Know when AI helps and when it hurts. Learn the specific situations where AI tools fail, mislead, or waste your time—and what to do instead.
By Marcin Piekarski • Founder & Web Developer • builtweb.com.au
AI-Assisted by: Prism AI (a collaborative AI assistant used in content creation)
Last Updated: 7 December 2025
TL;DR
AI is powerful but not universal. It fails at current events, precise facts, personal decisions, legal/medical advice, emotional support, and tasks requiring accountability. Knowing these limitations prevents costly mistakes and disappointment.
Why it matters
AI hype can make every problem look like an AI opportunity. But using AI inappropriately wastes time, produces bad results, and can cause real harm. This guide helps you develop judgment about when AI helps and when it doesn't.
When AI fails: The major categories
1. Current events and recent information
The problem: AI models have training cutoffs. They don't know what happened last week, last month, or even last year (depending on the model).
Example fails:
- "Who won the election?" → May give outdated info
- "What's the latest iPhone?" → Training data may be old
- "What's Tesla's stock price?" → No real-time access
What to do instead:
- Use Google, Perplexity, or Microsoft Copilot (formerly Bing Chat) for current info
- Ask AI to help you form search queries
- Verify any dates/events AI mentions
2. Precise facts and statistics
The problem: AI "hallucinates"—it generates plausible-sounding but false information with complete confidence.
Example fails:
- "What's the population of Sweden?" → Often wrong
- "Who wrote [specific book]?" → May invent authors
- "What studies support X?" → May fabricate citations
What to do instead:
- Verify facts through authoritative sources
- Use AI for explanation, not fact retrieval
- Ask AI for sources, then verify those sources exist
3. Complex math and calculations
The problem: Language models do pattern matching, not actual math. They're surprisingly bad at arithmetic and multi-step calculations.
Example fails:
- Multi-digit multiplication
- Word problems with multiple steps
- Anything requiring precision
What to do instead:
- Use a calculator or spreadsheet
- Have AI write code to calculate (then verify)
- For simple estimates, AI is fine; for precision, use proper tools
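The second tip above can be made concrete. Here's a minimal Python sketch (the numbers are arbitrary examples, not from this guide) of letting code do the arithmetic instead of the model:

```python
# Delegate arithmetic to code: the Python interpreter evaluates exactly,
# while a language model only predicts likely-looking digits.

# Multi-digit multiplication -- a classic failure mode for chat models.
product = 48_731 * 92_647
print(product)  # 4514780957

# A multi-step word problem: $2,500 at 4.1% annual interest,
# compounded monthly for 7 years.
principal = 2_500
rate = 0.041
years = 7
balance = principal * (1 + rate / 12) ** (12 * years)
print(f"${balance:,.2f}")
```

If you ask an AI to produce a snippet like this, still run it yourself and sanity-check the result; the point is that the interpreter, not the model, does the arithmetic.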
4. Legal, medical, or financial advice
The problem: AI doesn't know your specific situation or the current regulations that apply to it, it can't examine you or review your documents, and it can't be held accountable for bad advice.
Example fails:
- "Is this contract fair?" → Can't review your specific terms
- "What should I do about this symptom?" → Not a doctor, can't examine you
- "How should I invest?" → Doesn't know your situation
What to do instead:
- Use AI for general education ("what is a 401k?")
- Consult actual professionals for specific advice
- Use AI to prepare questions for professionals
5. Personal decisions and life advice
The problem: AI doesn't know you—your values, relationships, history, or what you truly want. It gives generic advice.
Example fails:
- "Should I take this job?"
- "Should I end this relationship?"
- "What should I do with my life?"
What to do instead:
- Use AI to explore pros/cons
- Talk to people who know you
- AI can help you think, but decisions are yours
6. Emotional support and therapy
The problem: AI can simulate empathy but doesn't actually understand or care. For mental health, this can be harmful.
Example fails:
- Processing trauma
- Dealing with grief
- Mental health crises
What to do instead:
- Talk to real humans—friends, family, counselors
- Use proper mental health resources
- AI can help you draft thoughts, not process feelings
7. Tasks requiring accountability
The problem: When mistakes have consequences, someone needs to be responsible. AI can't be held accountable.
Example fails:
- Medical diagnoses
- Legal documents
- Safety-critical decisions
- Anything you'd need to defend in court
What to do instead:
- Have qualified humans review and take responsibility
- Use AI as a starting point, not final answer
- Document that a human verified the work
The "faster to just do it" test
Sometimes AI is overkill. Skip AI when:
- Task takes <2 minutes — Writing the prompt takes longer
- You know exactly what to write — Just write it
- Simple lookups — Google is faster
- Highly personal content — Your voice matters most
- Quick decisions — Overthinking wastes time
Red flags that AI isn't the right tool
| Sign | What it means |
|---|---|
| You need 100% accuracy | AI can't guarantee this |
| Stakes are very high | Get professional help |
| Information must be current | AI's knowledge stops at its training cutoff |
| You need someone accountable | AI has no accountability |
| The task is highly personal | AI doesn't know you |
| You keep getting wrong answers | AI may be the wrong tool |
The hybrid approach
Often the best solution combines AI and other methods:
- AI drafts, you verify — Use AI for speed, yourself for accuracy
- AI brainstorms, you decide — Use AI for options, your judgment to choose
- AI explains, professionals advise — Use AI for education, experts for action
- AI assists, you remain accountable — Use AI as a tool, not a replacement
Building good judgment
Over time, develop intuition for:
- What AI is good at: Drafting, brainstorming, explaining, transforming
- What AI is bad at: Facts, current events, precision, accountability
- What requires humans: Decisions, relationships, professional advice
The goal isn't to avoid AI—it's to use it wisely.
What's next
Learn to use AI effectively within its strengths:
- When to Use AI Tools — The flip side
- AI Safety Basics — Staying safe with AI
- Common AI Mistakes — Errors to avoid
Frequently Asked Questions
Will AI get better at these limitations?
Some limitations will improve (current events, math). Others are fundamental (accountability, truly knowing you). Don't assume future improvements; work with current capabilities.
What if AI gives me wrong information and I use it?
You're responsible for verifying AI output, especially for anything important. Treat AI output like advice from an enthusiastic but sometimes mistaken colleague.
Is it okay to use AI for things on this list if I verify everything?
Yes, with caveats. AI can help you draft things that experts will review. But if you're spending more time verifying than creating, AI might not be adding value.
About the Authors
Marcin Piekarski • Founder & Web Developer
Marcin is a web developer with 15+ years of experience, specializing in React, Vue, and Node.js. Based in Western Sydney, Australia, he's worked on projects for major brands including Gumtree, CommBank, Woolworths, and Optus. He uses AI tools, workflows, and agents daily in both his professional and personal life, and created Field Guide to AI to help others harness these productivity multipliers effectively.
Credentials & Experience:
- 15+ years web development experience
- Worked with major brands: Gumtree, CommBank, Woolworths, Optus, Nestlé, M&C Saatchi
- Founder of builtweb.com.au
- Daily AI tools user: ChatGPT, Claude, Gemini, AI coding assistants
- Specializes in modern frameworks: React, Vue, Node.js
Prism AI • AI Research & Writing Assistant
Prism AI is the AI ghostwriter behind Field Guide to AI—a collaborative ensemble of frontier models (Claude, ChatGPT, Gemini, and others) that assist with research, drafting, and content synthesis. Like light through a prism, human expertise is refracted through multiple AI perspectives to create clear, comprehensive guides. All AI-generated content is reviewed, fact-checked, and refined by Marcin before publication.
Capabilities:
- Powered by frontier AI models: Claude (Anthropic), GPT-4 (OpenAI), Gemini (Google)
- Specializes in research synthesis and content drafting
- All output reviewed and verified by human experts
- Trained on authoritative AI documentation and research papers
Transparency Note: All AI-assisted content is thoroughly reviewed, fact-checked, and refined by Marcin Piekarski before publication. AI helps with research and drafting, but human expertise ensures accuracy and quality.