What is chain of thought (CoT)?

Contributors: Vrunda Gadesha, Eda Kavlakoglu

Chain of thought (CoT) mirrors human reasoning, facilitating systematic problem-solving through a coherent series of logical deductions.

 

Chain of thought prompting is an approach in artificial intelligence that simulates human-like reasoning processes by decomposing complex tasks into a sequence of logical steps that lead to a final resolution. This methodology reflects a fundamental aspect of human intelligence, offering a structured mechanism for problem-solving. In other words, CoT is predicated on the cognitive strategy of breaking down elaborate problems into manageable, intermediate thoughts that sequentially lead to a conclusive answer.1

 

Difference between prompt chaining and chain of thought (CoT)

Prompt chaining can be seen as a more rudimentary form of CoT prompting, in which the AI is simply prompted to generate a response to a given context or question. In contrast, CoT prompting goes beyond merely generating coherent and relevant responses by requiring the AI to construct an entire logical argument, including premises and a conclusion, from scratch. While prompt chaining focuses on refining individual responses, CoT prompting aims to create a comprehensive and logically consistent argument, pushing the boundaries of AI's problem-solving capabilities.

Consider an AI asked, "What color is the sky?" It would generate a simple, direct response such as "The sky is blue." However, if asked to explain why the sky is blue using CoT prompting, the AI would first define what "blue" means (light at the shorter-wavelength end of the visible spectrum), then explain that the sky appears blue because the atmosphere scatters these shorter wavelengths of sunlight more strongly than longer ones (Rayleigh scattering). This response demonstrates the AI's ability to construct a logical argument.


How does chain of thought prompting work?

Chain of thought prompting leverages large language models (LLMs) to articulate a succession of reasoning steps, guiding the model towards generating analogous reasoning chains for novel tasks. This is achieved through exemplar-based prompts that illustrate the reasoning process, thus enhancing the model's capacity for addressing complex reasoning challenges.2 Let’s understand the flow of this prompting technique by addressing the classic math word problem—solving a polynomial equation.
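
As a minimal sketch of what such an exemplar-based prompt can look like, the snippet below prepends one worked example (the well-known tennis-ball problem from the original CoT paper) so that the model imitates the same step-by-step style on a new question. The llm.generate call is a placeholder for whatever LLM client is in use.

```python
# A minimal sketch of exemplar-based (few-shot) CoT prompting.
# The worked example shows the model what a reasoning chain looks like;
# the actual LLM call is left as a commented-out placeholder.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)


def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model produces a reasoning chain."""
    return f"{FEW_SHOT_EXEMPLAR}\nQ: {question}\nA:"


prompt = build_cot_prompt("Solve the quadratic equation: x^2 - 5x + 6 = 0")
print(prompt)
# response = llm.generate(prompt)  # placeholder for any chat/completions client
```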

Example: How does chain of thought prompting work for solving polynomial equations?

Chain of thought (CoT) prompting can significantly aid in solving polynomial equations by guiding a large language model (LLM) to follow a series of logical steps, breaking down the problem-solving process.2 Let's examine how CoT prompting can tackle a polynomial equation.

Consider the example of solving a quadratic equation.

Input prompt: Solve the quadratic equation: x² - 5x + 6 = 0

When we give this prompt to IBM watsonx.ai chat, the assistant replies with a step-by-step solution rather than just the roots. The final answer of the chain of thought is: "The solutions to the equation x² − 5x + 6 = 0 are x = 3 and x = 2."
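
Since the original conversation appears in the article only as a screenshot, here is a sketch of the intermediate steps such a reasoning chain typically contains, reconstructed for illustration:

```latex
% Reconstructed chain-of-thought steps for x^2 - 5x + 6 = 0 (illustrative).
\[
\begin{aligned}
x^2 - 5x + 6 &= 0 \\
(x - 2)(x - 3) &= 0
  && \text{two numbers with product } 6 \text{ and sum } {-5} \text{ are } {-2} \text{ and } {-3} \\
x = 2 \quad &\text{or} \quad x = 3
  && \text{set each factor to zero}
\end{aligned}
\]
```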


Chain of thought variants

Chain of thought (CoT) prompting has evolved into various innovative variants, each tailored to address specific challenges and enhance the model's reasoning capabilities in unique ways. These adaptations not only extend the applicability of CoT across different domains but also refine the model's problem-solving process.3

Zero-shot chain of thought

The zero-shot chain of thought variant leverages the knowledge already embedded within models to tackle problems without prior task-specific examples or fine-tuning. This approach is particularly valuable when dealing with novel or diverse problem types where tailored training data may not be available.4 Rather than supplying worked exemplars as in few-shot prompting, the model is given a standard prompt together with a simple instruction to reason step by step.

For example, when addressing the question "What is the capital of a country that borders France and has a red and white flag?", a model using zero-shot CoT would draw on its embedded geographic and flag knowledge to deduce, step by step, that the country is Switzerland and that its capital, Bern, is the answer, despite not being explicitly trained on such queries.
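
A minimal sketch of zero-shot CoT is shown below: no worked exemplar is supplied, and the prompt simply ends with a reasoning trigger such as "Let's think step by step." The llm.generate call is a placeholder.

```python
# Zero-shot CoT: no exemplar, only a trigger phrase that elicits reasoning.

def zero_shot_cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."


prompt = zero_shot_cot_prompt(
    "What is the capital of a country that borders France and has a red and white flag?"
)
print(prompt)
# answer = llm.generate(prompt)  # the chain should identify Switzerland and answer "Bern"
```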

Automatic chain of thought

Automatic chain of thought (auto-CoT) aims to minimize the manual effort in crafting prompts by automating the generation and selection of effective reasoning paths. This variant enhances scalability and accessibility of CoT prompting for a broader range of tasks and users.5, 8

For example, to solve a math problem like "If you buy 5 apples and already have 3, how many do you have in total?", an auto-CoT system could automatically generate intermediate steps, such as "Start with 3 apples" and "Add 5 apples to the existing 3," culminating in "Total apples = 8," streamlining the reasoning process without human intervention.
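
As a rough sketch of this pipeline, the snippet below groups a pool of questions by embedding similarity, generates a zero-shot reasoning chain for one representative question per cluster, and reuses those chains as few-shot demonstrations. The embed_fn and generate_fn parameters are placeholders for an embedding model and an LLM client supplied by the caller.

```python
# A rough sketch of the auto-CoT idea: cluster questions, answer one
# representative per cluster with a zero-shot trigger, and reuse the
# resulting chains as demonstrations for new questions.

from sklearn.cluster import KMeans


def build_auto_cot_demos(questions, embed_fn, generate_fn, k=4):
    # Embed every question and group them into k clusters of similar questions.
    vectors = [embed_fn(q) for q in questions]
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
    demos = []
    for cluster in range(k):
        # Take the first question assigned to this cluster as its representative.
        rep = next(q for q, label in zip(questions, labels) if label == cluster)
        # Generate its reasoning chain with a zero-shot trigger phrase.
        chain = generate_fn(f"Q: {rep}\nA: Let's think step by step.")
        demos.append(f"Q: {rep}\nA: {chain}")
    # The concatenated demonstrations are prepended to new questions as few-shot context.
    return "\n\n".join(demos)
```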

Multimodal chain of thought

Multimodal chain of thought extends the CoT framework to incorporate inputs from various modalities, such as text and images, enabling the model to process and integrate diverse types of information for complex reasoning tasks.6

For example, when presented with a picture of a crowded beach scene and asked, "Is this beach likely to be popular in summer?", a model employing multimodal CoT could analyze visual cues (including beach occupancy, weather conditions and more) along with its textual understanding of seasonal popularity to reason out a detailed response, such as "The beach is crowded, indicating high popularity, likely increasing further in summer."
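
A schematic sketch of how such a multimodal prompt might be assembled is shown below; the multimodal_llm client and the image file name are hypothetical, since real vision-language APIs differ in how they accept images.

```python
# Schematic multimodal CoT: pair an image with a question and explicitly
# ask for intermediate reasoning before the final answer.

from pathlib import Path

image_bytes = Path("beach_scene.jpg").read_bytes()  # example image (assumed to exist)
prompt = (
    "Is this beach likely to be popular in summer?\n"
    "First list the relevant visual cues (crowding, weather, facilities),\n"
    "then reason step by step, then state a final answer."
)
# answer = multimodal_llm.generate(image=image_bytes, prompt=prompt)  # hypothetical call
```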

These variants of chain of thought prompting not only showcase the flexibility and adaptability of the CoT approach but also hint at the vast potential for future developments in AI reasoning and problem-solving capabilities.

Advantages and limitations

CoT prompting is a powerful technique for enhancing the performance of large language models (LLMs) on complex reasoning tasks, offering significant benefits in various domains such as improved accuracy, transparency, and multi-step reasoning abilities. However, it is essential to consider its limitations, including the need for high-quality prompts, increased computational cost, susceptibility to adversarial attacks, and challenges in evaluating qualitative improvements in reasoning or understanding. By addressing these limitations, researchers and practitioners can ensure responsible and effective deployment of CoT prompting in diverse applications.10

Advantages of chain of thought prompting

Chain of thought prompting offers users a number of advantages, including:

  • Improved prompt outputs: CoT prompting improves LLMs' performance on complex reasoning tasks by breaking them down into simpler, logical steps.
  • Transparency and understanding: The generation of intermediate reasoning steps offers transparency into how the model arrives at its conclusions, making the decision-making process more understandable for users.
  • Multi-step reasoning: By systematically tackling each component of a problem, CoT prompting often leads to more accurate and reliable answers, particularly in tasks requiring multi-step reasoning. Multi-step reasoning refers to the ability to perform complex logical operations by breaking them down into smaller, sequential steps. This cognitive skill is essential for solving intricate problems, making decisions, and understanding cause-and-effect relationships. 
  • Attention to detail: The step-by-step explanation model is akin to teaching methods that encourage understanding through detailed breakdowns, making CoT prompting useful in educational contexts.
  • Diversity: CoT can be applied across a broad range of tasks, including but not limited to, arithmetic reasoning, commonsense reasoning, and complex problem-solving, demonstrating its flexible utility.
     
Limitations of chain of thought prompting

Here are some limitations encountered when adopting chain of thought prompting:
 
  • Quality control: The effectiveness of CoT prompting is highly reliant on the quality of the prompts provided, necessitating carefully crafted examples to guide the model accurately.
  • High computational cost: Generating and processing multiple reasoning steps requires more computational power and time than standard single-step prompting, making the technique more costly for organizations to adopt.
  • Misleading reasoning: There is a risk of generating reasoning paths that are plausible yet incorrect, leading to misleading or false conclusions.
  • Expensive and labor-intensive: Designing effective CoT prompts can be more complex and labor-intensive, requiring a deep understanding of the problem domain and the model's capabilities.
  • Models overfitting: There is a potential risk of models overfitting to the style or pattern of reasoning in the prompts, which could reduce their generalization capabilities on varied tasks.
  • Evaluation and validation: While CoT can enhance interpretability and accuracy, measuring qualitative improvements in reasoning or understanding is challenging, owing to the inherent complexity of human cognition and the subjective nature of evaluating linguistic expressions. Several approaches can nonetheless be employed to assess the effectiveness of CoT prompting. For instance, comparing the model's responses to those of a baseline model or human experts can provide insight into relative performance gains. Additionally, analyzing the intermediate reasoning steps generated by the LLM can offer valuable insight into the decision-making process, even if the improvements in reasoning or understanding are difficult to measure directly.
     
Advances in chain of thought

The evolution of chain of thought (CoT) is a testament to synergistic advancements across several domains, notably natural language processing (NLP), machine learning and the burgeoning field of generative AI. These strides have not only propelled CoT to the forefront of complex problem-solving but also underscored its utility across a spectrum of applications. Here, we delve into the key developments that paint a comprehensive picture of CoT's progress.

Prompt engineering and the original prompt

Innovations in prompt engineering have significantly enhanced models' comprehension and interaction with the original prompt, leading to more nuanced and contextually aligned reasoning paths. This development has been critical in refining CoT's effectiveness.2

Symbolic reasoning and logical reasoning

The integration into symbolic reasoning tasks and logical reasoning tasks has improved models' capacity for abstract thinking and deduction, marking a significant leap in tackling logic-based challenges with CoT.7

An example of symbolic reasoning is evaluating an arithmetic expression such as 2 + 3 = 5: the problem is broken down into its constituent parts (the numbers and the addition operation), and the model deduces the correct answer from its learned knowledge and inference rules. Logical reasoning, on the other hand, involves drawing conclusions from premises or assumptions, such as "All birds can fly, and a penguin is a bird." From these premises the model would conclude that a penguin can fly; the deduction is valid even though the first premise is false in reality, illustrating that logical reasoning operates on the information provided. The integration of CoT prompting into symbolic and logical reasoning tasks has allowed LLMs to demonstrate improved abstract thinking and deduction capabilities, enabling them to tackle more complex and diverse problems.
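
To make the logical-reasoning case concrete, the penguin syllogism can be written formally; the conclusion follows from the stated premises regardless of whether the first premise holds in the real world:

```latex
% The penguin syllogism in first-order form; the conclusion follows from
% the premises even though the first premise is false in the real world.
\[
\forall x\,\big(\mathrm{Bird}(x) \rightarrow \mathrm{CanFly}(x)\big),\quad
\mathrm{Bird}(\mathrm{penguin})
\;\vdash\;
\mathrm{CanFly}(\mathrm{penguin})
\]
```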

Enhanced creativity

The application of generative AI and transformer architectures has revolutionized CoT, enabling the generation of sophisticated reasoning paths that exhibit creativity and depth. This synergy has broadened CoT's applicability, influencing both academic and practical domains.9

Smaller models and self-consistency

Advances enabling smaller models to engage effectively in CoT reasoning have democratized access to sophisticated reasoning capabilities. Self-consistency complements CoT by sampling multiple reasoning paths for the same problem and selecting the most consistent final answer, enhancing the reliability of the conclusions drawn by models.11
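
A minimal sketch of the self-consistency idea is shown below, assuming placeholder generate_fn and extract_answer_fn helpers for sampling one reasoning chain and parsing its final answer.

```python
# Self-consistency: sample several independent reasoning chains and
# majority-vote over the extracted final answers.

from collections import Counter


def self_consistent_answer(prompt, generate_fn, extract_answer_fn, n_samples=5):
    # generate_fn should sample at non-zero temperature so the chains differ.
    answers = [extract_answer_fn(generate_fn(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```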

AI assistant

Integrating CoT within chatbots and leveraging state-of-the-art NLP techniques has transformed conversational AI, enabling chatbots to conduct more complex interactions that require a deeper level of understanding and problem-solving proficiency.12


These advancements collectively signify a leap forward in the capabilities of CoT and underscore the significance of integrating chatbots with CoT models, highlighting their potential to revolutionize AI-driven decision-making and problem-solving processes. By combining the conversational abilities of chatbots with the advanced reasoning capabilities of CoT models, we can create more sophisticated and effective AI systems capable of handling a broader range of tasks and applications.

Furthermore, the integration of various applications and CoT models can enhance the overall user experience by enabling AI systems to better understand and respond to user needs and preferences. By integrating natural language processing (NLP) techniques into CoT models, we can enable chatbots to understand and respond to user inputs in a more human-like manner, creating more engaging, intuitive, and effective conversational experiences.

Use cases for chain of thought

The chain of thought (CoT) methodology, with its ability to decompose complex problems into understandable reasoning steps, has found applications across a wide array of fields. These use cases demonstrate not only CoT's versatility but also its potential to transform how systems approach problem-solving and decision-making tasks. Below, we explore several prominent use cases where CoT has been effectively applied.

Customer service chatbots

Advanced chatbots utilize CoT to better understand and address customer queries. By breaking down a customer's problem into smaller, manageable parts, chatbots can provide more accurate and helpful responses, improving customer satisfaction and reducing the need for human intervention.

Research and innovation

Researchers employ CoT to structure their thought process in solving complex scientific problems, facilitating innovation. This structured approach can accelerate the discovery process and enable the formulation of novel hypotheses.

Content creation and summarization

In content creation, CoT aids in generating structured outlines or summaries by logically organizing thoughts and information, enhancing the coherence and quality of written content.

Education and learning

CoT is instrumental in educational technology platforms, aiding in the generation of step-by-step explanations for complex problems. This is particularly valuable in subjects like mathematics and science, where understanding the process is as crucial as the final answer. CoT-based systems can guide students through problem-solving procedures, enhancing their comprehension and retention.

AI ethics and decision making

CoT is crucial for elucidating the reasoning behind AI-driven decisions, especially in scenarios requiring ethical considerations. By providing a transparent reasoning path, CoT ensures that AI decisions align with ethical standards and societal norms.


These use cases underscore the transformative potential of CoT across diverse sectors, offering a glimpse into its capacity to redefine problem-solving and decision-making processes. As CoT continues to evolve, its applications are expected to expand, further embedding this methodology in the fabric of technological and societal advancements.

Chain of thought prompting signifies a leap forward in AI's capability to undertake complex reasoning tasks, emulating human cognitive processes. By elucidating intermediate reasoning steps, CoT not only amplifies LLMs' problem-solving acumen but also enhances transparency and interpretability. Despite inherent limitations, ongoing explorations into CoT variants and applications continue to extend AI models' reasoning capacities, heralding future enhancements in AI's cognitive functionalities.

Footnotes

1 Boshi Wang, S. M. (2022). Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. 2717-2739. https://doi.org/10.48550/arXiv.2212.10001.

2 Jason Wei, X. W. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

3 Zheng Chu, J. C. (2023). A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future. ArXiv, abs/2309.15402.

4 Omar Shaikh, H. Z. (2022, December). On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning. ArXiv, abs/2212.08061. https://doi.org/10.48550/arXiv.2212.08061.

5 Zhuosheng Zhang, A. Z. (2022). Automatic Chain of Thought Prompting in Large Language Models. ArXiv, abs/2210.03493. https://doi.org/10.48550/arXiv.2210.03493.

6 Zhuosheng Zhang, A. Z. (2023). Multimodal Chain-of-Thought Reasoning in Language Models. ArXiv, abs/2302.00923. https://doi.org/10.48550/arXiv.2302.00923.

7 Yao, Z. L. (2023). Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models. ArXiv, abs/2305.16582. https://doi.org/10.48550/arXiv.2305.16582.

8 Kashun Shum, S. D. (2023). Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data. ArXiv, abs/2302.12822. https://doi.org/10.48550/arXiv.2302.12822.

9 A. Vaswani, N. S. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems.

10 Zhengyan Zhang, Y. G. (2021). CPM-2: Large-scale Cost-effective Pre-trained Language Models. AI Open, 2, 216-224.

11 L. Zheng, N. G. (2021). When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset of 53,000+ Legal Holdings. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, 159-168.

12 S. Roller, E. D. (2020). Recipes for Building an Open-Domain Chatbot. arXiv preprint arXiv:2004.13637.