TL;DR

Autocorrect fixes your typos by comparing what you typed against a dictionary and picking the most likely word. Predictive text goes further -- it uses a small AI language model to guess what word you will type next based on context. Both learn from your habits over time, which is why they get better (and sometimes worse) the more you use them.

Why it matters

Autocorrect and predictive text are the most widely used AI features on the planet. Billions of people interact with them every single day without thinking about it. They are your first real encounter with language models -- the same fundamental technology that powers ChatGPT and other AI assistants. Understanding how they work gives you a practical foundation for understanding much larger AI systems.

Beyond the tech angle, these tools genuinely change how we communicate. They speed up messaging, reduce errors, and make small-screen typing bearable. But they also introduce problems: embarrassing autocorrect fails, privacy questions about what your keyboard learns, and subtle ways they can shape the words you use.

How autocorrect actually works

Early autocorrect (think flip phones in the 2000s) was simple: it had a dictionary of valid words and used basic rules to match what you typed against that list. If "teh" was not in the dictionary, the system would check for words that were one or two letter swaps away -- "the," "tea," "ten" -- and pick the most commonly used one.

Modern autocorrect is more sophisticated. It uses a combination of:

  1. Edit distance: How many single-character edits (insertions, deletions, substitutions, or adjacent swaps) separate your input from a real word. "Teh" is just one swap from "the" (distance of 1), making it a strong match.
  2. Language model context: The system looks at surrounding words. If you typed "I went to teh store," the model knows "the" fits better than "tea" because "the store" is a far more common phrase than "tea store."
  3. Your personal history: If you frequently type the word "teriyaki," the system learns to stop correcting it to "territory."
  4. Touch patterns: On touchscreens, the keyboard knows which keys are next to each other. Typing "goid" probably means "good" because "i" and "o" are adjacent keys.

This layered approach is why modern autocorrect is dramatically better than the old dictionary-lookup systems, though it still makes mistakes when the context is ambiguous.
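The edit-distance idea in step 1 can be sketched in a few lines of Python. This is a toy Damerau-Levenshtein implementation for illustration -- real keyboards combine it with the context, history, and touch signals above, and the sample dictionary here is made up:

```python
def edit_distance(a, b):
    """Damerau-Levenshtein distance: insertions, deletions,
    substitutions, and adjacent swaps each cost 1."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting all of a
    for j in range(n + 1):
        d[0][j] = j  # inserting all of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            # adjacent swap, e.g. "teh" -> "the"
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

def suggest(typo, dictionary):
    """Rank candidate words by how close they are to the typo."""
    return sorted(dictionary, key=lambda w: edit_distance(typo, w))

print(edit_distance("teh", "the"))                     # 1
print(suggest("teh", ["the", "tea", "ten", "banana"])) # "the" ranks first
```

A real system would then break ties using word frequency and the surrounding context rather than dictionary order.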

How predictive text works

Predictive text is a step beyond autocorrect. Instead of fixing what you already typed, it tries to guess what you will type next.

When you see three word suggestions above your keyboard, here is what is happening behind the scenes:

  1. The keyboard reads your recent words. If you typed "Are you free for," the model processes this entire phrase.
  2. A small language model calculates probabilities. Based on patterns learned from billions of text messages, emails, and web pages, it estimates the likelihood of every possible next word. "Dinner" might have a 25% probability, "lunch" might be at 20%, and "coffee" at 10%.
  3. The top three predictions appear. You tap one to insert it instantly.

The language model on your keyboard is a smaller cousin of the models powering ChatGPT. It uses the same core idea -- predicting the next word based on context -- but it is compressed to run efficiently on your phone without draining your battery or requiring an internet connection.
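The probability step above can be illustrated with a toy bigram model -- counting which word follows which. Real keyboards use compressed neural models trained on far more data; the corpus and function names here are invented for the sketch:

```python
from collections import Counter, defaultdict

# Tiny stand-in for the billions of messages a real model is trained on.
corpus = [
    "are you free for dinner tonight",
    "are you free for lunch tomorrow",
    "are you free for dinner on friday",
    "free for coffee later",
]

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict(prev_word, k=3):
    """Return the k most likely next words with their probabilities."""
    counts = following[prev_word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(predict("for"))  # "dinner" ranks first at probability 0.5
```

Your keyboard does the same thing with a neural network instead of a lookup table, which lets it generalize to phrases it has never seen verbatim.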

Why it sometimes goes hilariously wrong

Everyone has an autocorrect horror story. You meant to type "I'll be there in a sec" and your phone sent "I'll be there in a sex." These failures happen for specific reasons:

  • Ambiguous context. When multiple words are equally plausible, the model guesses. Sometimes it guesses wrong, especially with short or common words.
  • Learned bad habits. If you accidentally accepted a wrong correction several times, your phone now thinks you prefer that word. This creates a feedback loop where the system confidently suggests the wrong thing.
  • Names and uncommon words. Autocorrect has no way to know that "Krzysztof" is your friend's name and not a typo. It will aggressively try to replace unfamiliar words with common ones.
  • Profanity filters. Many keyboards are trained to avoid suggesting certain words, which leads to the infamous "ducking" problem where the system refuses to learn a common expletive.
  • Lack of real understanding. The model does not actually understand what you mean. It operates on statistical patterns. Sometimes the statistically likely word is socially disastrous.

How your keyboard learns your patterns

Most modern keyboards maintain a personal dictionary that adapts to you. Here is what it typically tracks:

  • Words you type frequently that are not in the standard dictionary (names, slang, technical terms)
  • Word pairs and phrases you use often ("on my way," "sounds good," custom greetings)
  • Corrections you accept or reject -- if you always undo a correction, the system learns to stop making it
  • App-specific patterns -- some keyboards notice that you type differently in work email versus group chats

This personalization happens on your device. Apple's keyboard, for example, stores your learned vocabulary locally and does not send it to Apple's servers. Google's Gboard uses a technique called federated learning, where your phone helps improve the global model by sharing patterns (not actual text) in an anonymized way.
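A rough sketch of how a learned dictionary might behave, assuming a simple "promote after N sightings" rule. The threshold and class design are invented for illustration, not how Apple or Google actually implement it:

```python
from collections import Counter

class PersonalDictionary:
    """Toy on-device personalization: promote a word into the user's
    dictionary once it has been typed (and kept) enough times."""

    PROMOTE_AFTER = 3  # assumed threshold, not any vendor's real value

    def __init__(self, base_words):
        self.base = set(base_words)   # the standard dictionary
        self.learned = set()          # words learned from this user
        self.counts = Counter()

    def observe(self, word):
        """Call when the user types a word and does not correct it."""
        if word in self.base or word in self.learned:
            return
        self.counts[word] += 1
        if self.counts[word] >= self.PROMOTE_AFTER:
            self.learned.add(word)    # stop "correcting" this word

    def is_known(self, word):
        return word in self.base or word in self.learned

pd = PersonalDictionary({"the", "store"})
for _ in range(3):
    pd.observe("teriyaki")
print(pd.is_known("teriyaki"))  # True
```

Note that everything lives in plain local data structures -- the same shape a keyboard can keep entirely on-device.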

Privacy: what does your keyboard know about you?

Your keyboard sees everything you type: passwords, private messages, search queries, financial details. This makes keyboard privacy an important concern.

What to know:

  • Apple (iOS): Learned words stay on your device. Third-party keyboards must declare if they request network access.
  • Google (Gboard): Uses federated learning to improve models without sending your raw text to Google. You can opt out of data sharing in settings.
  • Third-party keyboards (SwiftKey, Grammarly, etc.): Policies vary. Some sync your learned dictionary to the cloud for cross-device use. Read their privacy policies before granting "full access" permissions.

Practical tip: Be cautious with third-party keyboards that request network access. If a keyboard can connect to the internet, it could transmit what you type. Stick to well-known, reputable options.

The connection to larger language models

Your phone's predictive text is genuinely related to systems like ChatGPT and Claude. They share the same fundamental principle: predict the next word based on what came before.

The difference is scale. Your keyboard model might have a few million parameters and predict one word at a time from a short context. GPT-4 is reported to have hundreds of billions of parameters and can work with thousands of words of context to generate entire essays.

But the core idea -- learning statistical patterns in language to predict what comes next -- is identical. When you tap a predictive text suggestion on your phone, you are using the same technology family that powers the most advanced AI assistants in the world, just in miniature.

Common mistakes

  • Never resetting a broken dictionary. If autocorrect has learned dozens of wrong words, the fix is simple: reset your keyboard dictionary in Settings. On iPhone, go to Settings > General > Transfer or Reset > Reset Keyboard Dictionary. On Android, find it in your keyboard's settings.
  • Fighting autocorrect instead of training it. If you frequently type a technical term or name, manually type it correctly a few times and accept it. The system will learn. Constantly deleting and retyping teaches it nothing.
  • Assuming autocorrect caught everything. Autocorrect fixes non-words, but it will not catch real words used in the wrong context ("their" vs. "there"). Always proofread important messages.
  • Ignoring keyboard permissions. Granting "full access" to a third-party keyboard means it can potentially access your network. Only grant this to keyboards you trust.

What's next?

  • AI in Smartphones -- the bigger picture of AI features in your pocket
  • AI Writing Assistance -- how AI tools help with longer writing, not just texting
  • AI Privacy Basics -- understanding what data AI tools collect and how to protect yourself
  • Natural Language Processing Basics -- the technology behind how computers understand human language