User Experience for AI Products
Design UX for AI features. Manage expectations, handle failures, and build trust.
Learning Objectives
- ✓ Design AI-first UX patterns
- ✓ Manage user expectations
- ✓ Handle AI errors gracefully
- ✓ Build user trust
Why AI UX Needs Its Own Playbook
Traditional software is deterministic — click a button, get the same result every time. AI is probabilistic — ask the same question twice and you might get two different answers, and neither might be perfectly correct. This fundamental difference means the UX rules you've learned for traditional software don't fully apply.
The best AI products acknowledge this uncertainty and design around it. The worst pretend AI is infallible and leave users confused when it inevitably makes mistakes.
Designing for Uncertainty: The Confidence Spectrum
Every AI output exists somewhere on a confidence spectrum. At one end, the AI is very confident and probably right. At the other end, it's essentially guessing. Your UX should reflect where on this spectrum each response falls.
High confidence, low stakes: Auto-complete suggestions in email (like Gmail's Smart Compose). The AI is usually right, and if it's wrong, the user just keeps typing. You can apply these suggestions subtly — greyed-out text that appears inline.
Medium confidence, medium stakes: Content summarisation or classification. Show the result but make it easy to edit or override. Notion AI does this well — it generates text and inserts it in a clearly marked block that users can accept, edit, or discard.
Low confidence, high stakes: Anything involving decisions that matter — medical information, financial advice, legal content. Here you must show the AI's output as a suggestion, display confidence indicators, and make human review mandatory before any action is taken.
The mistake many teams make is treating all AI outputs the same way. Match your UI treatment to the confidence level and the consequences of being wrong.
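One way to keep this discipline is to make the mapping explicit in code. The sketch below is a hypothetical `uiTreatment` helper, not a standard API; the thresholds and treatment names are illustrative assumptions.

```typescript
type Stakes = "low" | "medium" | "high";
type Treatment = "apply-inline" | "show-editable" | "require-review";

// Hypothetical helper: choose a UI treatment from model confidence
// (0..1) and the stakes of being wrong. Thresholds are illustrative.
function uiTreatment(confidence: number, stakes: Stakes): Treatment {
  if (stakes === "high") {
    // High-stakes outputs always go through human review,
    // regardless of how confident the model claims to be.
    return "require-review";
  }
  if (stakes === "low" && confidence >= 0.9) {
    // e.g. greyed-out inline autocomplete the user can ignore.
    return "apply-inline";
  }
  // Default: show the result in an editable, dismissable block.
  return "show-editable";
}
```

Centralising the decision in one place keeps every AI feature in the product on the same confidence spectrum, instead of each team inventing its own treatment.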
Loading States and Streaming
AI responses take time — often several seconds. In a world where users expect sub-second page loads, you need to manage this wait carefully.
Streaming responses are the gold standard, and there's a reason ChatGPT popularised this pattern. Instead of showing a spinner for five seconds and then dumping a wall of text, streaming shows the response word by word as the AI generates it. This works because users start reading immediately (reducing perceived wait time), the progressive reveal feels natural (like watching someone type), and users can start evaluating the response before it's complete.
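The core of the pattern is appending each chunk to the visible text as it arrives. A minimal sketch, assuming the provider exposes the response as an async iterable of tokens (in production this would typically wrap a `fetch()` `ReadableStream`):

```typescript
// Append each streamed chunk to the rendered text as it arrives,
// invoking onUpdate so the UI can repaint with the partial response.
async function renderStream(
  chunks: AsyncIterable<string>,
  onUpdate: (partial: string) => void
): Promise<string> {
  let text = "";
  for await (const chunk of chunks) {
    text += chunk;
    onUpdate(text); // e.g. set the textContent of the response node
  }
  return text;
}

// Test double simulating a token stream; a real integration would
// iterate over the provider's streaming API instead.
async function* fakeStream(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) {
    yield t;
  }
}
```

The caller never blocks on the full response, which is exactly why perceived latency drops: the first token is on screen in a fraction of the total generation time.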
Progress indicators matter when streaming isn't possible. If your AI feature processes a document and returns a result, show what's happening: "Analysing document... Extracting key points... Generating summary..." This transforms a mysterious wait into a transparent process.
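A staged pipeline like this can be sketched as a list of steps that each announce themselves before running. `runWithProgress` and the step shape are hypothetical names for illustration:

```typescript
// Hypothetical staged-progress runner: each step reports a
// human-readable status before doing its work, so the user sees
// "Analysing document..." instead of a blank spinner.
type Step<T> = { status: string; run: (input: T) => Promise<T> };

async function runWithProgress<T>(
  input: T,
  steps: Step<T>[],
  onStatus: (status: string) => void
): Promise<T> {
  let value = input;
  for (const step of steps) {
    onStatus(step.status);         // update the status line in the UI
    value = await step.run(value); // then do the actual work
  }
  return value;
}
```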
Skeleton screens work well for structured outputs. If you know the AI will return a table with three columns, show the empty table structure immediately and fill it in as results arrive.
Perplexity does this exceptionally well. When you ask a question, it immediately shows "Searching..." with the sources it's checking, then streams the answer with numbered citations appearing in real time. The user never feels like they're waiting because something is always happening on screen.
Setting User Expectations
Users need to understand what they're interacting with. This isn't about legal disclaimers buried in footnotes — it's about building trust through transparency.
Label AI-generated content clearly. A simple "AI-generated" or "Written by AI" badge sets the right expectation. Google's AI Overviews and Microsoft's Copilot in Bing both label their AI responses prominently.
Describe limitations upfront. When a user first encounters your AI feature, tell them what it's good at and what it's not. "I can answer questions about our product documentation. I might occasionally get details wrong, so please verify important information." This isn't weakness — it's honesty, and users respect it.
Avoid false confidence. If the AI isn't sure about something, don't present the response as definitive fact. Phrases like "Based on the available information..." or "This appears to be..." signal appropriate uncertainty.
Feedback Mechanisms
Users are your best quality assurance team, but only if you make it easy for them to report problems.
Thumbs up/down is the simplest feedback pattern, and it works. ChatGPT uses this on every response. It takes one click, creates no friction, and gives you a massive signal about response quality.
Specific feedback options give you more useful data. When a user clicks thumbs down, ask why: "Was this inaccurate? Off-topic? Unhelpful? Offensive?" This tells you what to fix.
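The two patterns above fit in a small feedback record: a one-click rating, plus an optional reason collected only on thumbs-down. The type and helper below are a hypothetical sketch, not a specific product's API:

```typescript
// Hypothetical feedback record: one click for the rating, an
// optional follow-up reason prompted only on negative ratings.
type FeedbackReason = "inaccurate" | "off-topic" | "unhelpful" | "offensive";

interface Feedback {
  responseId: string;
  rating: "up" | "down";
  reason?: FeedbackReason;
}

function buildFeedback(
  responseId: string,
  rating: "up" | "down",
  reason?: FeedbackReason
): Feedback {
  if (rating === "up" && reason !== undefined) {
    // Don't ask "what went wrong?" on positive feedback.
    throw new Error("reason is only collected for thumbs-down");
  }
  return reason !== undefined ? { responseId, rating, reason } : { responseId, rating };
}
```

Keeping the reason optional preserves the one-click path for thumbs-up while still capturing the "what to fix" signal when users are unhappy.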
Regeneration lets users try again without rewriting their input. A simple "Regenerate response" button acknowledges that the AI might not get it right the first time and puts the user in control.
Edit and refine gives users the ability to modify the AI's output before accepting it. GitHub Copilot does this well — it suggests code inline, and the developer can accept it fully, partially, or modify it before committing.
Graceful Failure
AI will fail. The API might go down, the model might return nonsense, or the request might hit a content filter. How your product handles these moments defines the user experience more than how it handles the happy path.
Never show raw error messages. "Error 429: Rate limit exceeded" means nothing to most users. Instead, say "We're experiencing high demand right now. Please try again in a moment."
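In practice this means translating raw API failures at one boundary in the app. A minimal sketch, assuming the provider returns conventional HTTP status codes:

```typescript
// Hypothetical mapping from raw API status codes to messages a
// user can actually act on. Wording here is illustrative.
function friendlyError(status: number): string {
  switch (status) {
    case 429: // rate limit exceeded
      return "We're experiencing high demand right now. Please try again in a moment.";
    case 503: // service unavailable
      return "The assistant is temporarily unavailable. Please try again shortly.";
    default:
      return "Something went wrong on our end. Please try again.";
  }
}
```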
Offer alternatives. If the AI feature fails, what can the user do instead? Maybe they can search your help documentation, contact a human support agent, or try a simpler version of the request.
Degrade gracefully. If your full AI response fails, can you show a partial result? If the summary feature is down, can you show the first paragraph of the document instead? Something useful is always better than an error screen.
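The summary fallback mentioned above can be written as a small try/catch chain. `summariseWithFallback` is a hypothetical name, and the "first paragraph" heuristic assumes paragraphs separated by blank lines:

```typescript
// Hypothetical fallback chain: try the full AI summary; if it
// fails, show the document's first paragraph instead of an error
// screen, and flag the result as degraded so the UI can say so.
async function summariseWithFallback(
  doc: string,
  aiSummarise: (doc: string) => Promise<string>
): Promise<{ text: string; degraded: boolean }> {
  try {
    return { text: await aiSummarise(doc), degraded: false };
  } catch {
    const firstParagraph = doc.split("\n\n")[0];
    return { text: firstParagraph, degraded: true };
  }
}
```

Returning a `degraded` flag matters: the UI should still be honest that the user is seeing a fallback, not a summary.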
Save user input. There's nothing more frustrating than typing a long prompt, getting an error, and losing everything you typed. Always preserve the user's input so they can retry without starting over.
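A sketch of that idea: stash the prompt before sending, restore it after a failure, and clear it only on success. `DraftStore` is a hypothetical class; in a browser it would typically sit on top of `localStorage`, but a plain `Map` is enough to show the shape:

```typescript
// Hypothetical draft store: keep the user's prompt around so a
// failed request never costs them their typing.
class DraftStore {
  private drafts = new Map<string, string>();

  // Call before submitting the request.
  save(key: string, prompt: string): void {
    this.drafts.set(key, prompt);
  }

  // Call after a failure to refill the input box for retry.
  restore(key: string): string | undefined {
    return this.drafts.get(key);
  }

  // Call only after the request succeeds.
  clear(key: string): void {
    this.drafts.delete(key);
  }
}
```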
Real-World UX Patterns Worth Studying
ChatGPT: Streaming responses, conversation history, regeneration, model selection, thumbs up/down feedback. The benchmark for conversational AI UX.
Perplexity: Shows sources while generating, numbered citations in the response, follow-up question suggestions. Excellent at building trust through transparency.
Notion AI: Inline AI that works within your existing workflow. Select text, choose an action (summarise, translate, explain), see the result, accept or discard. Minimal disruption to the user's flow.
GitHub Copilot: Inline code suggestions that appear as you type. Tab to accept, keep typing to dismiss. The AI integrates so smoothly into the existing workflow that it feels like a natural extension of the editor, not a separate tool.
Each of these products succeeds because the AI feature fits naturally into how users already work, rather than forcing users to adapt to a new interface.
Key Takeaways
- → Always show when AI is processing
- → Let users edit AI outputs easily
- → Provide confidence levels for predictions
- → Enable feedback and regeneration
- → Fail gracefully with clear next steps
Practice Exercises
Apply what you've learned with these practical exercises:
1. Design AI feature mockups
2. Create error state designs
3. Implement streaming responses
4. Add confidence indicators