Chain-of-Thought (CoT) prompting, introduced by Wei et al. in 2022, is the seminal technique showing that asking a sufficiently large LLM to reason step by step before answering dramatically improves its performance. Wei et al.'s original formulation used few-shot exemplars whose answers spell out intermediate reasoning; Kojima et al. (2022) later showed that even the zero-shot trigger 'Let's think step by step' substantially boosts large models on math, logic and multi-hop reasoning. CoT laid the conceptual groundwork for Tree-of-Thought, ReAct and the modern reasoning-model paradigm: OpenAI's o1 and DeepSeek's R1 are essentially this idea internalised at training time. It remains the most reached-for and easiest-to-deploy reasoning technique in the toolbox.
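A minimal sketch of both variants, assuming a hypothetical `call_model(prompt) -> str` wrapper around whichever LLM client you use (the wrapper, its stub, and the question strings are illustrative, not part of any specific API); the tennis-ball exemplar is the canonical worked example from Wei et al.'s paper:

```python
# Few-shot CoT exemplar (Wei et al., 2022): the answer demonstrates the
# intermediate reasoning steps the model should imitate.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""


def few_shot_cot(question: str, call_model) -> str:
    """Few-shot CoT: prepend worked examples whose answers show reasoning."""
    return call_model(FEW_SHOT_COT.format(question=question))


def zero_shot_cot(question: str, call_model) -> str:
    """Zero-shot CoT (Kojima et al., 2022): no exemplars, just append
    the trigger phrase that elicits step-by-step reasoning."""
    return call_model(f"Q: {question}\nA: Let's think step by step.")


if __name__ == "__main__":
    # Stand-in for a real API call; swap in your own client here.
    def call_model(prompt: str) -> str:
        print("--- prompt sent to model ---")
        print(prompt)
        return "<model completion>"

    zero_shot_cot(
        "A train travels 60 km in 45 minutes. What is its speed in km/h?",
        call_model,
    )
```

In practice the zero-shot variant is the one-line change most people reach for first; the few-shot variant tends to help more when the task has a consistent reasoning format the exemplars can demonstrate.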