Published: 23 April 2024
Contributors: Vrunda Gadesha, Eda Kavlakoglu
Prompt chaining is a natural language processing (NLP) technique in which a large language model (LLM) is guided toward a desired output through a series of prompts. In this process, a sequence of prompts is provided to the model, with each prompt building on the one before it. The model uses the context and relationships between the prompts to generate coherent, consistent, and contextually rich text[1].
The concept is an advanced application of prompt engineering and has gained significant attention in NLP for its ability to improve the quality and controllability of text generation. An effective prompt chain can be layered on top of other approaches, such as zero-shot, few-shot, or fine-tuned custom models[2]. By providing clear direction and structure, prompt chaining helps the model better understand the user's intentions and produce more accurate and relevant responses.
Prompt chaining can enhance the effectiveness of AI assistance in various domains. By breaking down complex tasks into smaller prompts and chaining them together, developers can create more personalized and accurate responses tailored to individual users' needs. This approach not only improves the overall user experience but also allows for greater customization and adaptability in response to changing user requirements or application scenarios[3].
There are two main types of prompts generated when working with LLMs:
Simple prompts: These are basic prompts that contain a single instruction or question for the model to respond to. They are typically used to initiate a conversation or to request information. An example of a simple prompt is: "What is the weather like today?"
Complex prompts: These prompts contain multiple instructions or questions that require the model to perform a series of actions or provide a detailed response. They are often used for more advanced tasks or deeper conversations. An example of a complex prompt is: "I'm looking for a restaurant that serves vegan food and is open until 10 pm. Can you recommend one?"
Converting a complex prompt into a series of simple prompts can help break down a complex task into smaller sub-tasks. This approach can make it easier for users to understand the steps required to complete a request and reduce the risk of errors or misunderstandings.
Consider a scenario where we have information written in Spanish, a language we do not understand. First, we need to translate the text from Spanish to English. Then, we need to ask a question to extract the information, and finally translate the extracted information back into Spanish. If we try to combine these steps into one prompt, the task becomes too complex, increasing the likelihood of errors in the response. As a result, it is best to convert the complex prompt into a sequence of simple prompts. Some steps to do this include:
Here our complex prompt is: "Consider the given text in Spanish. Translate it into English. Find all the statistics and facts used in this text and list them as bullet points. Translate them again into Spanish."
To convert this complex prompt into simple prompts, we can break the main goal down into smaller actions or tasks and create a chain of prompts as below:
1. Translate the given Spanish text into English.
2. List all the statistics and facts used in the text as bullet points.
3. Translate the listed statistics and facts back into Spanish.
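The translate-extract-translate chain described above can be sketched in Python. Here `call_model` is a hypothetical stand-in for any real LLM API call (it simply echoes its prompt so the chain's structure can be inspected), and the exact prompt wording is an illustrative assumption:

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    It echoes the prompt so the chain's structure can be tested;
    in practice this would send the prompt to an actual model.
    """
    return f"[model response to: {prompt}]"

def extract_facts_chain(spanish_text: str) -> str:
    # Step 1: translate the Spanish source text into English.
    english = call_model(
        f"Translate the following Spanish text into English:\n{spanish_text}"
    )
    # Step 2: list the statistics and facts in the translation as bullet points.
    facts = call_model(
        f"List all statistics and facts in the following text as bullet points:\n{english}"
    )
    # Step 3: translate the extracted bullet points back into Spanish.
    return call_model(
        f"Translate the following bullet points into Spanish:\n{facts}"
    )
```

Each call receives the previous call's output as part of its prompt, which is the defining feature of a chain: no single prompt has to carry the whole task.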
A structured prompt chain is a pre-defined set of prompts or questions designed to guide a user through a specific conversation or series of actions, ensuring a consistent and controlled flow of information[4]. It is often used in customer support, tutoring, and other interactive systems to maintain clarity, accuracy, and efficiency in the interaction. The prompts in a chain are typically linked together, allowing the system to build upon previous responses and maintain context. This approach can help reduce ambiguity, improve user satisfaction, and enable more effective communication between humans and machines.
Start by gathering a collection of pre-written prompts that can be customized for various scenarios. These templates should cover common tasks, requests, and questions that users might encounter.
Identify the core questions or instructions that need to be conveyed in the prompt chain. These prompts should be simple, clear, and direct, and should be able to stand alone as individual prompts.
Determine the specific pieces of information or actions that the user needs to provide in response to each prompt. These inputs should be clearly defined and easy to understand, and should be linked to the corresponding prompts in the prompt chain.
Use the reference library and primary prompts to build the complete prompt chain. Ensure that each prompt is logically linked to the next one, and that the user is prompted for the necessary inputs at the appropriate points in the sequence.
Once the prompt chain has been built, test it thoroughly to ensure that it is easy to understand and complete. Ask a sample of users to complete the prompt chain and gather feedback on any areas for improvement.
Based on the feedback received during testing, make any necessary adjustments or improvements to the prompt chain. This might include rewriting certain prompts, adding or removing prompts, or changing the order in which the prompts are presented.
By following these steps, customer service representatives and programmers can build effective and efficient prompt chains that help guide users through a series of actions or tasks.
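The template-library and linked-prompt steps above can be sketched as a small data structure. The template names, placeholders, and customer-support wording here are illustrative assumptions, not part of any real API:

```python
# Illustrative sketch of a structured prompt chain for customer support.
# Step 1-2: a reference library of reusable templates with defined user inputs.
TEMPLATE_LIBRARY = {
    "greet": "Hello {name}, how can we help you today?",
    "describe": "Please describe the issue you are seeing with {product}.",
    "confirm": "To confirm: you reported '{issue}'. Is that correct?",
}

def build_chain(step_names, user_inputs):
    """Step 3-4: fill each template in order, producing the prompt sequence.

    str.format ignores unused keyword arguments, so one pool of user inputs
    can serve every template in the chain.
    """
    return [TEMPLATE_LIBRARY[name].format(**user_inputs) for name in step_names]

chain = build_chain(
    ["greet", "describe", "confirm"],
    {"name": "Ana", "product": "the mobile app", "issue": "login fails"},
)
```

Testing and refinement (the last two steps) then amount to running sample inputs through `build_chain` and adjusting the templates or their order based on user feedback.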
Prompt chaining offers several advantages over traditional methods used in prompt engineering. By guiding the model through a series of prompts, prompt chaining enhances coherence and consistency in text generation, leading to more accurate and engaging outputs.
By requiring the model to follow a series of prompts, prompt chaining helps maintain consistency in text generation. This is particularly important in applications where maintaining a consistent tone, style, or format is crucial, such as in customer support or editorial roles[5].
In customer support, prompt chaining can be used to ensure consistent communication with users. For example, the bot might be prompted to address the user using their preferred name or follow a specific tone of voice throughout the conversation.
Prompt chaining provides greater control over the text generation, allowing users to specify the desired output with precision. This is especially useful in situations where the input data is noisy or ambiguous, as the model can be prompted to clarify or refine the input before generating a response[6].
In a text summarization system, prompt chaining allows users to control the level of detail and specificity in the generated summary. For instance, the user might first be prompted to provide the content that they're interested in summarizing, such as a research paper. A subsequent prompt could follow to format that summary in a specific format or template.
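A two-stage summarization chain like this can be expressed as a list of steps, each mapping the previous output to the next prompt. The step wording and the `call_model` stub are assumptions for illustration only:

```python
def call_model(prompt: str) -> str:
    # Hypothetical LLM stub: returns a tagged echo so the pipeline is testable.
    return f"OUT({prompt})"

# Each step is a function from the previous output to the next prompt.
summarization_steps = [
    lambda paper: f"Summarize this research paper:\n{paper}",
    lambda summary: f"Rewrite the summary as three bullet points:\n{summary}",
]

def run_chain(steps, text: str) -> str:
    """Feed each step's model output into the next step's prompt."""
    for make_prompt in steps:
        text = call_model(make_prompt(text))
    return text
```

Because the chain is just a list, a user can control the output by adding, removing, or reordering steps, for example inserting a step that enforces a specific summary template.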
Prompt chaining helps reduce error rates by providing the model with better context and more focused input. A structured prompt chain also reduces human effort and makes it faster to validate code and outputs. By breaking down the input into smaller, manageable prompts, the model can better understand the user's intentions and generate more accurate and relevant responses[7].
In a machine translation system, before translating a sentence, the system might first prompt the user to specify the source language, target language, and any relevant context or terminology. This helps the model to better understand the source text and generate an accurate translation.
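Such a clarifying chain might look like the sketch below. The `ask` callable is a stand-in for however the system collects user answers, and the question wording is illustrative:

```python
def gather_translation_prompt(text: str, ask) -> str:
    """Chain of context questions that builds the final translation prompt.

    `ask` maps a question to the user's answer; here it is a plain function
    so the flow can be tested without an interactive session.
    """
    source = ask("What is the source language?")
    target = ask("What is the target language?")
    terms = ask("Is there any domain terminology to preserve?")
    # The gathered context is folded into one focused translation prompt.
    return (
        f"Translate the text from {source} to {target}, "
        f"keeping these terms unchanged: {terms}.\n{text}"
    )

answers = {
    "What is the source language?": "German",
    "What is the target language?": "English",
    "Is there any domain terminology to preserve?": "Drehmoment",
}
prompt = gather_translation_prompt("Das Drehmoment beträgt 50 Nm.", answers.get)
```

The final prompt the model sees is unambiguous about languages and terminology, which is exactly the error-reducing context the preceding questions were chained to collect.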
By leveraging these advantages, prompt chaining has the potential to significantly improve the performance and effectiveness of NLP models in various applications, from customer support to streamlined editorial and language translation.
Prompt chaining is a versatile technique that can be applied to a wide range of use cases, primarily falling into two categories: question answering and multi-step tasks.
As its name suggests, question answering tasks provide answers to frequently asked questions posed by humans. The model automates the response based on context from documents typically found in a knowledge base. Common applications include:
As one might expect, multi-step tasks consist of a sequence of steps to achieve a given goal. Some examples include:
Prompt chaining is a powerful technique that can be used in a variety of real-time applications to help guide users and professionals through a series of actions or tasks. By breaking down complex tasks into a series of simpler prompts, prompt chaining can help ensure that users and professionals understand the steps required to complete a request and provide a better overall experience. Whether it's used in customer service, programming, or education, prompt chaining can help simplify complex processes and improve efficiency and accuracy.
Pengfei Liu, W. Y. (2021). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys.
Gunwoo Yong, K. J. (2022). Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model.
O. Marchenko, O. R. (2020). Improving Text Generation Through Introducing Coherence Metrics. Cybernetics and Systems Analysis.
Zhifang Guo, Y. L. (2022). PromptTTS: Controllable Text-To-Speech With Text Descriptions.
Jason Wei, X. W. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models.
Mero, J. (2018). The effects of two-way communication and chat service usage on consumer attitudes in the e-commerce retailing sector. Electronic Markets.
Yu Cheng, J. C. (2023). Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains. ACM Transactions on Software Engineering and Methodology.
Tongshuang Sherry Wu, E. J. (2022). PromptChainer: Chaining Large Language Model Prompts through Visual Programming. CHI Conference on Human Factors in Computing Systems Extended Abstracts.
Shwetha Sridharan, D. S. (2021). Adaptive learning management expert system with evolving knowledge base and enhanced learnability. Education and Information Technologies.
Boshi Wang, X. D. (2022). Iteratively Prompt Pre-trained Language Models for Chain of Thought. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
M. Rice, K. M. (2018). Evaluating an augmented remote assistance platform to support industrial applications. IEEE 4th World Forum on Internet of Things (WF-IoT).
Cynthia A. Thompson, M. G. (2011). A Personalized System for Conversational Recommendations. J. Artif. Intell. Res.
Qing Huang, J. Z. (2023). PCR-Chain: Partial Code Reuse Assisted by Hierarchical Chaining of Prompts on Frozen Copilot. IEEE/ACM 45th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion).
Yafeng Gu, Y. S. (2023). APICom: Automatic API Completion via Prompt Learning and Adversarial Training-based Data Augmentation. Proceedings of the 14th Asia-Pacific Symposium on Internetware.