What is prompt chaining?

Published: 23 April 2024
Contributors: Vrunda Gadesha, Eda Kavlakoglu

Prompt chaining is a natural language processing (NLP) technique that uses large language models (LLMs) to generate a desired output by following a series of prompts. In this process, a sequence of prompts is provided to the model, guiding it toward the desired response. The model uses the context and relationships between the prompts to generate coherent, consistent, and contextually rich text[1].

The concept is an advanced application of prompt engineering and has gained significant attention in NLP for its ability to improve the quality and controllability of text generation. An effective prompt chain can be layered on top of other approaches, such as zero-shot, few-shot, or fine-tuned custom models[2]. By providing clear direction and structure, prompt chaining helps the model better understand the user's intentions and produce more accurate and relevant responses.

Prompt chaining can enhance the effectiveness of AI assistance in various domains. By breaking down complex tasks into smaller prompts and chaining them together, developers can create more personalized and accurate responses tailored to individual users' needs. This approach not only improves the overall user experience but also allows for greater customization and adaptability in response to changing user requirements or application scenarios[3].


Types of prompts

There are two main types of prompts used when working with LLMs:

Simple prompts

These are basic prompts that contain a single instruction or question for the model to respond to. They are typically used to initiate a conversation or to request information. An example of a simple prompt would be: "What is the weather like today?"

 

Complex prompts

These prompts contain multiple instructions or questions that require the model to perform a series of actions or provide a detailed response. They are often used to facilitate more advanced tasks or to engage in deeper conversations. An example of a complex prompt would be: "I'm looking for a restaurant that serves vegan food and is open until 10 pm. Can you recommend one?"

How to simplify complex prompts

Converting a complex prompt into a series of simple prompts can help break down a complex task into smaller sub-tasks. This approach can make it easier for users to understand the steps required to complete a request and reduce the risk of errors or misunderstandings.  

An example: language translation

Consider a scenario where we have information written in Spanish, a language we do not understand. First, we need to translate the text from Spanish to English. Then we ask a question to extract the information, and finally we translate the extracted information back into Spanish. This is a complex task; if we try to combine all these steps into one prompt, the prompt becomes too complex, increasing the likelihood of errors in the response. It is therefore best to convert a complex prompt into a sequence of simple prompts. The steps to do this include:

  1. Identify the main goal or objective of the prompt. 
  2. Break down the main goal into sub-tasks, that is, more specific actions or tasks.
  3. Create a prompt for each specific action or task.
  4. Ensure that each prompt is clear, concise, and unambiguous.
  5. Test the prompts to ensure that they are easy to understand and comprehensive.

Here our complex prompt is: "Consider the given text in Spanish. Translate it into English. Find all the statistics and facts used in this text and list them as bullet points. Translate them again into Spanish."

To convert this complex prompt into simple prompts, we can break down the main goal into smaller actions or tasks, and we can create a chain of prompts as below:

  1. “Read the given Spanish text.”
  2. “Translate the text into English.”
  3. “Fetch the statistics and facts from the text.”
  4. “Create a bullet-point list of all these facts.”
  5. “Translate the list into Spanish.”

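The chain above can be sketched as a simple loop that feeds each prompt's output into the next. This is a minimal sketch: `call_llm` is a hypothetical stand-in for a real model API, stubbed here so the chain runs end to end, and the "read" step is represented by supplying the source text as the initial input.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in a real LLM client here.
    Stubbed to echo the prompt so the sketch is runnable."""
    return f"[model output for: {prompt}]"

def run_chain(steps, source_text):
    """Run each simple prompt in order, passing the previous
    step's output forward as the next step's input."""
    context = source_text
    for step in steps:
        context = call_llm(f"{step}\n\nInput:\n{context}")
    return context

steps = [
    "Translate the text into English.",
    "Fetch the statistics and facts from the text.",
    "Create a bullet-point list of all these facts.",
    "Translate the list into Spanish.",
]
result = run_chain(steps, "…texto original en español…")
```

Because each call receives only the previous step's output plus one simple instruction, every prompt in the chain stays small and unambiguous.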
How to build a prompt chain

A structured prompt chain is a pre-defined set of prompts or questions designed to guide a user through a specific conversation or series of actions, ensuring a consistent and controlled flow of information[4]. It is often used in customer support, tutoring, and other interactive systems to maintain clarity, accuracy, and efficiency in the interaction. The prompts in a chain are typically linked together, allowing the system to build upon previous responses and maintain context. This approach can help reduce ambiguity, improve user satisfaction, and enable more effective communication between humans and machines.

Build a reference library of prompt templates

Start by gathering a collection of pre-written prompts that can be customized for various scenarios. These templates should cover common tasks, requests, and questions that users might encounter.

Define the primary prompts

Identify the core questions or instructions that need to be conveyed in the prompt chain. These prompts should be simple, clear, and direct, and should be able to stand alone as individual prompts.

Identify the inputs and outputs for the sequence of prompts

Determine the specific pieces of information or actions that the user needs to provide in response to each prompt. These inputs should be clearly defined and easy to understand, and should be linked to the corresponding prompts in the prompt chain.

Implement the whole prompt chain

Use the reference library and primary prompts to build the complete prompt chain. Ensure that each prompt is logically linked to the next one, and that the user is prompted for the necessary inputs at the appropriate points in the sequence.

Test the prompt chain

Once the prompt chain has been built, test it thoroughly to ensure that it is easy to understand and complete. Ask a sample of users to complete the prompt chain and gather feedback on any areas for improvement.

Iterate and refine the prompt chain

Based on the feedback received during testing, make any necessary adjustments or improvements to the prompt chain. This might include rewriting certain prompts, adding or removing prompts, or changing the order in which the prompts are presented.

By following these steps, customer service representatives and programmers can build effective and efficient prompt chains that help guide users through a series of actions or tasks.
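The steps above can be sketched as a small chain builder: a reference library of templates, primary prompts with named inputs and outputs, and a linked sequence. The `PromptChain` class and the template names are illustrative, not a real library API, and the model is stubbed so the sketch runs without an API key.

```python
class PromptChain:
    def __init__(self, templates):
        self.templates = templates   # reference library of templates
        self.steps = []              # ordered (template_name, output_key) pairs

    def add_step(self, name, output_key):
        """Link one more prompt into the chain."""
        self.steps.append((name, output_key))
        return self

    def run(self, model, **inputs):
        """Fill each template from the inputs and prior outputs,
        call the model, and store the result under output_key."""
        state = dict(inputs)
        for name, output_key in self.steps:
            prompt = self.templates[name].format(**state)
            state[output_key] = model(prompt)
        return state

templates = {
    "greet": "Address the customer as {name} in a friendly tone.",
    "answer": "Answer this question: {question}. Context: {greet_out}",
}

# Stub model so the sketch runs; replace with a real LLM call.
echo_model = lambda prompt: f"<reply to: {prompt[:30]}>"

chain = (PromptChain(templates)
         .add_step("greet", "greet_out")
         .add_step("answer", "answer_out"))
state = chain.run(echo_model, name="Ada", question="Reset my password?")
```

Each step's output is stored under a named key, so later templates can reference earlier results and the chain maintains context across prompts.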

Advantages of prompt chaining

Prompt chaining offers several advantages over traditional prompt engineering methods. By guiding the model through a series of prompts, prompt chaining enhances coherence and consistency in text generation, leading to more accurate and engaging outputs.

Consistency

By requiring the model to follow a series of prompts, prompt chaining helps maintain consistency in the text generation. This is particularly important in applications where maintaining a consistent tone, style, or format is crucial, such as in customer support or editorial roles [5].

In customer support, prompt chaining can be used to ensure consistent communication with users. For example, the bot might be prompted to address the user using their preferred name or follow a specific tone of voice throughout the conversation.

Enhanced control

Prompt chaining provides greater control over the text generation, allowing users to specify the desired output with precision. This is especially useful in situations where the input data is noisy or ambiguous, as the model can be prompted to clarify or refine the input before generating a response[6].

In a text summarization system, prompt chaining allows users to control the level of detail and specificity in the generated summary. For instance, the user might first be prompted to provide the content that they're interested in summarizing, such as a research paper. A subsequent prompt could follow to format that summary in a specific format or template.
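That two-step summarization chain might look like the following sketch, where `call_llm` is a hypothetical, stubbed model call and the prompt wording is illustrative.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical model call, stubbed (echoes the instruction line)
    to keep the example runnable without an API key."""
    return f"(output of: {prompt.splitlines()[0]})"

source = "…full text of a research paper…"

# Step 1: the user supplies the content they want summarized.
summary = call_llm(
    f"Summarize the following paper in three sentences:\n{source}"
)

# Step 2: a follow-up prompt applies a specific template to that summary.
formatted = call_llm(
    f"Reformat this summary as bullet points with a one-line title:\n{summary}"
)
```

Splitting summarization and formatting into separate prompts lets the user adjust either the level of detail or the output template independently.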

Reduced error rate

Prompt chaining helps reduce error rates by giving the model clearer context and more focused input. A structured prompt chain also reduces manual effort, making it faster to validate code and other outputs. By breaking the input into smaller, manageable prompts, the model can better understand the user's intentions and generate more accurate and relevant responses[7].

In a machine translation system, before translating a sentence, the system might first prompt the user to specify the source language, target language, and any relevant context or terminology. This helps the model to better understand the source text and generate an accurate translation.
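Such a chain can be sketched as a prompt builder that folds the user's earlier answers (source language, target language, and terminology) into the final translation prompt. The `build_translation_prompt` helper and the glossary contents are hypothetical.

```python
def build_translation_prompt(source_lang, target_lang, glossary, text):
    """Combine the context gathered in earlier prompts into one
    focused translation prompt for the model."""
    terms = "; ".join(f"{src} -> {dst}" for src, dst in glossary.items())
    return (
        f"Translate from {source_lang} to {target_lang}.\n"
        f"Use this terminology: {terms}\n"
        f"Text: {text}"
    )

prompt = build_translation_prompt(
    "Spanish", "English",
    {"tasa de interés": "interest rate"},  # user-supplied terminology
    "La tasa de interés subió.",
)
```

Because the languages and terminology are collected in prior steps, the translation prompt itself stays short and unambiguous.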

By leveraging these advantages, prompt chaining has the potential to significantly improve the performance and effectiveness of NLP models in various applications, from customer support to streamlined editorial and language translation.

Use cases of prompt chaining

Prompt chaining is a versatile technique that can be applied to a wide range of use cases, primarily falling into two categories: question answering and multi-step tasks.

 

Question answering

As its name suggests, question answering tasks provide answers to frequently asked questions posed by humans. The model automates the response based on context from documents typically found in a knowledge base. Common applications include:

  • Customer Service/Support: Prompt chaining can help users query against a company's knowledge base to find the most relevant answer, improving user experience and efficiency[8].
  • Educational Platforms: Instructors can create interactive learning experiences by prompting students with questions based on their progress, enabling personalized and adaptive learning [9].
  • Research Assistance: Researchers can use prompt chaining to automate the process of searching and analyzing relevant literature, saving time and resources[3][10].
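A knowledge-base chain of this kind can be sketched in two steps: retrieve relevant passages, then build a grounded answering prompt. The toy keyword retrieval and the `KB` contents are purely illustrative.

```python
# Toy knowledge base mapping topics to passages.
KB = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> list[str]:
    """Step 1: pick KB passages whose topic appears in the question
    (a real system would use semantic search instead)."""
    return [text for topic, text in KB.items() if topic in question.lower()]

def answer_prompt(question: str) -> str:
    """Step 2: build the grounded prompt that is passed to the model."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = answer_prompt("What is your returns policy?")
```

Chaining retrieval before answering keeps the model's response anchored to the company's own documents rather than its general training data.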
Multi-step tasks

As one might expect, multi-step tasks consist of a sequence of steps to achieve a given goal. Some examples include:

  • Content Creation: Prompt chaining can streamline various stages of the content creation process, such as researching a topic, creating an outline, writing an article, validating the content, editing, and more[11][12].
  • Programming Development: Prompt chaining can guide developers through a series of steps, starting with basic logic, progressing to pseudo code, and finally implementing specific code in a given language, while also ensuring code validation[3][13].
  • Personalized Recommendations: This use case is applicable across various industries, where prompt chaining can help tailor recommendations based on user preferences, behavior, and historical data[14].

Prompt chaining is a powerful technique that can be used in a variety of real-world applications to guide users and professionals through a series of actions or tasks. By breaking complex tasks into a series of simpler prompts, prompt chaining helps ensure that users understand the steps required to complete a request and enjoy a better overall experience. Whether it's used in customer service, programming, or education, prompt chaining can simplify complex processes and improve efficiency and accuracy.

Related resources What is LangChain?

Learn about LangChain, an open source framework, which is commonly used for app development with LLMs.

Use watsonx and LangChain to make a series of calls to a language model

Learn how to chain models to generate a sequence for a generic question-answer system.

What is generative AI, what are foundation models, and why does it matter?

Learn how generative AI is transforming businesses and how to prepare your organization for the future.

Developing system and instruction prompts for prompt engineering Llama 2

Best practices for prompt engineering using Llama 2.

Footnotes

[1] Pengfei Liu et al. (2021). "Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing." ACM Computing Surveys.

[2] Gunwoo Yong et al. (2022). "Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model."

[3] O. Marchenko et al. (2020). "Improving Text Generation Through Introducing Coherence Metrics." Cybernetics and Systems Analysis.

[4] Zhifang Guo et al. (2022). "PromptTTS: Controllable Text-To-Speech With Text Descriptions."

[5] Jason Wei et al. (2022). "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models."

[6] Mero, J. (2018). "The effects of two-way communication and chat service usage on consumer attitudes in the e-commerce retailing sector." Electronic Markets.

[7] Yu Cheng et al. (2023). "Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains." ACM Transactions on Software Engineering and Methodology.

[8] Tongshuang Sherry Wu et al. (2022). "PromptChainer: Chaining Large Language Model Prompts through Visual Programming." CHI Conference on Human Factors in Computing Systems Extended Abstracts.

[9] Shwetha Sridharan et al. (2021). "Adaptive learning management expert system with evolving knowledge base and enhanced learnability." Education and Information Technologies.

[10] Boshi Wang et al. (2022). "Iteratively Prompt Pre-trained Language Models for Chain of Thought." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.

[11] M. Rice et al. (2018). "Evaluating an augmented remote assistance platform to support industrial applications." IEEE 4th World Forum on Internet of Things (WF-IoT).

[12] Cynthia A. Thompson et al. (2011). "A Personalized System for Conversational Recommendations." Journal of Artificial Intelligence Research.

[13] Qing Huang et al. (2023). "PCR-Chain: Partial Code Reuse Assisted by Hierarchical Chaining of Prompts on Frozen Copilot." IEEE/ACM 45th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion).

[14] Yafeng Gu et al. (2023). "APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation." Proceedings of the 14th Asia-Pacific Symposium on Internetware.