Why you need this
AI is everywhere, but that doesn't mean it should be used for everything. Teams waste thousands of hours applying AI to tasks it can't handle or, worse, deploying AI in high-stakes situations where mistakes have serious consequences.
The problem: Without clear criteria, people either over-rely on AI (using it for critical decisions it can't safely make) or under-utilize it (manually doing tasks AI could handle perfectly). Both extremes waste resources and create risk.
This framework solves that. It provides a systematic decision tree and evaluation criteria to determine when AI is appropriate, when it's risky, and when humans should stay in control.
Perfect for:
- Individuals deciding whether to use AI for specific tasks
- Managers setting AI usage policies for teams
- Product teams evaluating AI feature ideas
- Organizations developing responsible AI guidelines
What's inside
Decision Tree: Should I Use AI?
Start with these questions; a code sketch of the full tree follows the list:
1. What are the consequences if AI gets it wrong?
- Low stakes (typo in draft email): AI-friendly
- Medium stakes (customer-facing content): AI with human review
- High stakes (medical advice, legal decisions, financial transactions): Human-led with AI assistance only
2. Can you verify the output?
- Easily verifiable (code that can be tested, claims that can be fact-checked): AI-friendly
- Difficult to verify (highly specialized knowledge, subjective judgment): Use with extreme caution
- Impossible to verify (predictions about unknowable futures): Avoid AI
3. Is creativity or originality required?
- No (formatting, summarization, data extraction): Perfect for AI
- Somewhat (first draft, brainstorming, variations on themes): Great AI use case
- High originality needed (breakthrough innovation, distinctive brand voice): AI assists, humans lead
4. Are there compliance or ethical concerns?
- Regulated data (HIPAA, GDPR, attorney-client privilege): Review data handling policies
- Potential for bias (hiring, lending, criminal justice): Extensive testing and human oversight required
- Public safety impact: Rigorous validation mandatory
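The four questions chain naturally into code. Below is a minimal Python sketch of the tree, useful if you want to embed the same logic in an intake form or internal tool. The function name, the string-valued answers, and the "first restrictive answer wins" ordering are all illustrative assumptions, not part of the framework itself.

```python
from enum import Enum

class Verdict(Enum):
    AI_FRIENDLY = "AI-friendly"
    AI_WITH_REVIEW = "AI with human review"
    HUMAN_LED = "Human-led, AI assists only"
    AVOID = "Avoid AI"

def triage(stakes: str, verifiable: str, originality: str, regulated: bool) -> Verdict:
    """Walk the four questions in order; the first restrictive answer wins.

    Parameter names and the ordering are illustrative assumptions.
    """
    # Q1: what are the consequences if AI gets it wrong?
    if stakes == "high":
        return Verdict.HUMAN_LED
    # Q2: can you verify the output?
    if verifiable == "impossible":
        return Verdict.AVOID
    # Q3: is high originality required?
    if originality == "high":
        return Verdict.HUMAN_LED
    # Q4: compliance or ethical concerns force human review at minimum
    if regulated:
        return Verdict.AI_WITH_REVIEW
    if stakes == "medium" or verifiable == "difficult":
        return Verdict.AI_WITH_REVIEW
    return Verdict.AI_FRIENDLY

# Example: customer-facing content that is easy to fact-check,
# with no regulated data involved.
print(triage(stakes="medium", verifiable="easy",
             originality="somewhat", regulated=False))
# Verdict.AI_WITH_REVIEW
```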
Risk Assessment Framework
Evaluate each dimension (a scoring sketch follows the list):
- Accuracy requirements: What error rate is acceptable? (AI rarely achieves 100%)
- Explainability needs: Must you explain how the decision was made? (AI is often a black box)
- Data sensitivity: Are you sharing confidential or personal information?
- Reversibility: Can mistakes be easily corrected, or are they permanent?
- Scale of impact: Does one error affect one person or thousands?
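One way to make these five dimensions comparable is a simple weighted score. The 1–5 scale, the weights, and the escalation threshold in this sketch are placeholders to calibrate against your own risk appetite; the framework doesn't prescribe any of them.

```python
# Rate each dimension 1 (low risk) to 5 (high risk). Weights and the
# escalation threshold are illustrative placeholders, not framework values.
WEIGHTS = {
    "accuracy": 0.25,        # how tight is the acceptable error rate?
    "explainability": 0.20,  # must the decision be explainable?
    "sensitivity": 0.20,     # confidential or personal data involved?
    "reversibility": 0.20,   # can mistakes be undone?
    "scale": 0.15,           # one person affected, or thousands?
}

def risk_score(ratings: dict[str, int]) -> float:
    """Weighted average of the five dimension ratings (1-5 scale)."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

ratings = {"accuracy": 2, "explainability": 1, "sensitivity": 4,
           "reversibility": 2, "scale": 3}
score = risk_score(ratings)
print(f"risk score: {score:.2f} / 5")
if score >= 3.5:  # placeholder threshold
    print("escalate: keep the task human-led or require formal review")
```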
Cost-Benefit Analysis
AI Benefits:
- Time savings (how many hours per week?)
- Cost reduction (vs. hiring, outsourcing, manual labor)
- Scalability (handle more volume without adding headcount)
- Speed (faster turnaround times)
- Consistency (reduced human error)
AI Costs:
- Subscription/API fees
- Time to learn and integrate
- Quality review and fact-checking time
- Risk of errors and remediation costs
- Potential privacy or security incidents
Break-even calculation: When do benefits exceed costs?
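Here is that break-even question worked as a short sketch. Every input (hours saved, loaded hourly cost, review overhead, fees, onboarding time) is a made-up illustration to show the arithmetic; substitute your own estimates, and add an expected-loss term for error risk if it's material.

```python
# All inputs are illustrative assumptions; replace with your own estimates.
hours_saved_per_week = 6.0     # time AI saves on the task
loaded_hourly_cost = 75.0      # fully loaded cost of an hour of work
review_hours_per_week = 1.5    # human review / fact-checking overhead
subscription_per_month = 60.0  # tool or API fees
onboarding_hours = 10.0        # one-time learning and integration cost

weekly_benefit = hours_saved_per_week * loaded_hourly_cost
weekly_cost = (review_hours_per_week * loaded_hourly_cost
               + subscription_per_month / 4.33)  # ~4.33 weeks per month
weekly_net = weekly_benefit - weekly_cost

onboarding_cost = onboarding_hours * loaded_hourly_cost
if weekly_net > 0:
    breakeven_weeks = onboarding_cost / weekly_net
    print(f"net benefit ${weekly_net:.0f}/week; "
          f"breaks even after {breakeven_weeks:.1f} weeks")
else:
    print("costs exceed benefits at these estimates")
```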
15-Question Checklist
Before using AI for a task, ask:
1. Can I verify the output myself?
2. Could a confidently wrong answer cause real harm?
3. Am I sharing sensitive data?
4. Is this task repetitive rather than one-of-a-kind?
5. Do I need to explain the reasoning?
6. Is speed more important than perfection?
7. Can I catch and correct errors easily?
8. Does this require genuine creativity?
9. Are there legal/compliance implications?
10. Will this decision affect others significantly?
11. Do I have time to review and refine?
12. Is the AI tool trained for this specific use case?
13. Can I fall back to manual if AI fails?
14. Is there potential for bias in outputs?
15. Would a human do this better or faster?
Scoring: More "yes" answers to questions 1, 4, 6, 7, 11, 12, 13 → AI is appropriate
More "yes" answers to questions 2, 3, 5, 8, 9, 10, 14, 15 → Proceed with caution or avoid (the sketch below automates this tally)
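Because the scoring rule is a simple tally, it's easy to automate. This sketch hard-codes the two question groups above; treating a tie as "proceed with caution" is an added assumption, not something the framework specifies.

```python
# Question numbers refer to the 15-question checklist above.
AI_FRIENDLY_QS = {1, 4, 6, 7, 11, 12, 13}
CAUTION_QS = {2, 3, 5, 8, 9, 10, 14, 15}

def score_checklist(yes_answers: set[int]) -> str:
    """Compare 'yes' counts per group; ties default to caution (assumption)."""
    friendly = len(yes_answers & AI_FRIENDLY_QS)
    caution = len(yes_answers & CAUTION_QS)
    return ("AI is appropriate" if friendly > caution
            else "proceed with caution or avoid")

# Example: a verifiable, repetitive, reviewable task that touches
# sensitive data (questions 1, 4, 7, 11 yes; question 3 yes).
print(score_checklist({1, 4, 7, 11, 3}))  # AI is appropriate
```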
Red Flags: When NOT to Use AI
- ❌ Life-or-death decisions (medical diagnosis, safety systems)
- ❌ Legal advice or binding contracts
- ❌ Financial decisions you can't afford to get wrong
- ❌ Situations requiring empathy and human judgment (grief counseling, conflict resolution)
- ❌ Tasks involving confidential data without proper controls
- ❌ When you can't verify accuracy yourself
- ❌ High-stakes situations with potential for discrimination
Case Studies
Good AI Decisions:
- ✅ Draft meeting notes → AI generates, human reviews (low stakes, verifiable)
- ✅ Code suggestions → Developer evaluates and tests (verifiable, expertise available)
- ✅ Email subject line ideas → Marketer selects and refines (creative assist, low risk)
Bad AI Decisions:
- ❌ AI-generated legal contracts without lawyer review (high stakes, unverifiable for most users)
- ❌ Automated hiring decisions without human oversight (bias risk, high impact)
- ❌ Medical diagnosis without physician verification (life-or-death, requires expertise)
How to use this framework
- Before each AI task — Run through the decision tree (takes 2 minutes)
- Team policy setting — Use checklist to define approved vs. prohibited AI uses
- Training new employees — Teach the framework as part of AI onboarding
- Evaluating AI tools — Assess whether vendor claims match your use case
Want to go deeper?
This framework helps you decide when to use AI. For implementing AI safely:
- Guide: AI Safety Basics — Understanding AI limitations and risks
- Guide: When to Use AI Tools — Detailed guidance on AI applications
- Resource: AI Risk Assessment Template — Evaluate AI system risks
License & Attribution
This resource is licensed under Creative Commons Attribution 4.0 (CC BY 4.0). You're free to:
- Adapt for your organization's policies
- Share with teams and colleagues
- Use in training materials
Just include this attribution:
"AI Decision Framework" by Field Guide to AI (fieldguidetoai.com) is licensed under CC BY 4.0
Access now
Ready to explore? View the complete resource online—no signup or email required.