
Chain-of-Thought

Also known as: CoT, Chain-of-Thought Prompting, Step-by-Step Reasoning

In one sentence

A prompting technique that asks an AI model to write out its reasoning step by step, improving accuracy on complex tasks like maths, logic, and multi-step problem-solving.

Explain like I'm 12

When your teacher says 'show your working' on a maths test, you get better marks because you think through each step instead of jumping to the answer. Chain-of-thought does the same thing for AI — asking it to think step-by-step makes it get the right answer more often.

In context

Chain-of-thought prompting was introduced by Google researchers in 2022 and quickly became one of the most widely used prompting techniques. In its simplest form, adding 'Let's think step by step' to a prompt can improve accuracy on reasoning tasks. More advanced variants include few-shot CoT (providing worked examples with their reasoning spelled out), self-consistency (sampling multiple reasoning paths and taking the most common final answer), and tree-of-thought (exploring branching solution paths). In 2025, dedicated reasoning models such as OpenAI's o1 and o3 are built around chain-of-thought, generating internal reasoning before producing an answer.
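
To make the basic variants concrete, here is a minimal sketch in Python. The `call_model` function and the example questions are hypothetical placeholders standing in for whatever LLM client you use, not any particular API; the point is how the prompts are constructed and how self-consistency takes a majority vote over sampled answers.

```python
from collections import Counter

# Hypothetical stand-in for a real LLM call: takes a prompt string and
# returns the model's text completion. Plug in your own client here.
def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM client")

QUESTION = (
    "A shop sells pens in packs of 12. Maya buys 7 packs and "
    "gives away 15 pens. How many pens does she have left?"
)

# Zero-shot CoT: append a cue that elicits step-by-step reasoning.
zero_shot_prompt = f"{QUESTION}\nLet's think step by step."

# Few-shot CoT: prepend a worked example whose reasoning is written out,
# so the model imitates the format before answering the real question.
few_shot_prompt = (
    "Q: A train has 4 carriages with 20 seats each. 35 seats are taken. "
    "How many seats are free?\n"
    "A: 4 carriages x 20 seats = 80 seats. 80 - 35 = 45. The answer is 45.\n\n"
    f"Q: {QUESTION}\nA:"
)

def extract_answer(completion: str) -> str:
    # Naive parse: take whatever follows the last 'answer is' marker.
    return completion.rsplit("answer is", 1)[-1].strip(" .")

def self_consistency(prompt: str, samples: int = 5) -> str:
    """Sample several reasoning paths and return the most common final answer."""
    answers = [extract_answer(call_model(prompt)) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```

Note that self-consistency only helps if the sampled completions actually differ, so it assumes the model is sampled with a nonzero temperature rather than decoded greedily.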
