(Optional) Managing the default prompt templates

You can change the LLM query prompt template or the LLM document summary prompt template for the Content Assistant in an object store.
IBM® Content Assistant supports the following configurable prompt templates:
  • Query prompt template - This template is used when the watsonx.ai LLM processes user questions that are within the context of the selected document content in the repository. When you use an IBM Granite LLM model, the Content Assistant uses the IBM Granite version of the prompt template. Alternatively, when you use an open source LLM that watsonx.ai hosts, the Content Assistant uses the generic version of the prompt template.
  • Summary prompt template - This template is used when the watsonx.ai LLM generates a summary of one or more documents from the repository. When you use an IBM Granite LLM model, the Content Assistant uses the IBM Granite version of the prompt template. Alternatively, when you use an open source LLM that watsonx.ai hosts, the Content Assistant uses the generic version of the prompt template.
  • Document comparison prompt template - This template is used when you ask the Content Assistant to compare two document versions. When you use an IBM Granite LLM model, the Content Assistant uses the IBM Granite version of the prompt template. Alternatively, when you use an open source LLM that watsonx.ai hosts, the Content Assistant uses the generic version of the prompt template.

Default Granite query prompt template

The default Granite query prompt template is as follows:
<|start_of_role|>system<|end_of_role|>\n
You are Granite, an AI language model developed by IBM.  You function as a specialized Retrieval Augmented Generation (RAG) assistant.  You are helpful and harmless and you follow ethical guidelines and promote positive behavior.\n
Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.  If an explanation is needed, first provide the explanation or reasoning, and then give the final answer. \n
Do not show any PII data.  Do not repeat statements within your response.\n
<|end_of_text|>\n
<|start_of_role|>documents<|end_of_role|>\n
{chunk_data}\n
<|end_of_text|>\n
<|start_of_role|>user<|end_of_role|>\n
Question: {orig_prompt}\n
Please provide your response in HTML format only. Where appropriate, use the HTML h4 tag to generate headers.  Use bullet points, bold/italic styling, and code blocks where relevant. Use proper line breaks so that bulleted (-, *) and numbered (1., 2.) lists render correctly, with each item on a new line. Include spacing as needed for readability.\n
<|end_of_text|>\n
where,
{chunk_data}
{chunk_data} is a substitution parameter for the default query prompt template. The Content Assistant replaces this parameter with document chunks that are selected from the vector database and are most relevant to the user question.
{orig_prompt}
{orig_prompt} is a substitution parameter for the default query prompt template. The Content Assistant replaces this parameter with the question that the user entered in the Content Assistant widget within IBM Content Navigator.
Note: The default query prompt template is stored in the Content Assistant SaaS service and is subject to change at any time.
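The substitution mechanism for the query prompt template can be illustrated with a short sketch. The function and variable names here are hypothetical, not the product's actual API; the template is an abbreviated form of the default shown above.

```python
# Hypothetical sketch of how the {chunk_data} and {orig_prompt} substitution
# parameters might be filled in; not the actual Content Assistant code.
query_template = (
    "<|start_of_role|>documents<|end_of_role|>\n"
    "{chunk_data}\n"
    "<|end_of_text|>\n"
    "<|start_of_role|>user<|end_of_role|>\n"
    "Question: {orig_prompt}\n"
    "<|end_of_text|>\n"
)

def fill_query_prompt(template: str, chunk_data: str, orig_prompt: str) -> str:
    # Plain textual replacement, mirroring how the parameters are
    # described in this topic.
    return (template
            .replace("{chunk_data}", chunk_data)
            .replace("{orig_prompt}", orig_prompt))

prompt = fill_query_prompt(
    query_template,
    chunk_data="Chunk 1: Invoices are due within 30 days of receipt.",
    orig_prompt="When are invoices due?")
print(prompt)
```

In practice the chunk data is assembled from the vector-database results described above before substitution.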

Default Granite summary prompt template

The default Granite summary prompt template is as follows:
<|start_of_role|>system<|end_of_role|>\n
You are Granite chat, an AI language model developed by IBM.  You are a cautious assistant.  You carefully follow instructions.  You are helpful and harmless and you follow ethical guidelines and promote positive behavior.\n
Do not show any PII data.  Do not repeat statements within your response.\n
<|end_of_text|>\n
<|start_of_role|>documents<|end_of_role|>\n
{content}\n
<|end_of_text|>\n
<|start_of_role|>user<|end_of_role|>\n
Very concisely summarize the documents.  Please format your response in plain text.  Do not start with a Document summary header.  \n
Response length: {max_summary_words} words or less.\n
<|end_of_text|>\n
where,
{content}
{content} is a substitution parameter for the default summary prompt template. The Content Assistant replaces this parameter with the document content that needs to be summarized. For large documents, this content may be truncated.
{max_summary_words}
{max_summary_words} is a substitution parameter for the default summary prompt template. The Content Assistant replaces this parameter with the maximum number of words that need to be included in the generated response. You can edit this parameter by updating the Configuration object for the Content Assistant in Content Platform Engine. For more information, see the topic Setting the maximum summary words.
Note: The default summary prompt template is stored in the Content Assistant SaaS service and is subject to change at any time.
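The summary template substitution works the same way, with the added detail that {max_summary_words} is a number and so must be rendered as text. The following sketch is illustrative only; the names are not the product's actual API.

```python
# Illustrative sketch of summary-prompt substitution; {max_summary_words}
# is numeric, so it is converted to a string before replacement.
summary_template = (
    "<|start_of_role|>documents<|end_of_role|>\n"
    "{content}\n"
    "<|end_of_text|>\n"
    "<|start_of_role|>user<|end_of_role|>\n"
    "Very concisely summarize the documents.\n"
    "Response length: {max_summary_words} words or less.\n"
    "<|end_of_text|>\n"
)

def fill_summary_prompt(template: str, content: str,
                        max_summary_words: int) -> str:
    return (template
            .replace("{content}", content)
            .replace("{max_summary_words}", str(max_summary_words)))

prompt = fill_summary_prompt(summary_template,
                             "Quarterly report: revenue grew 4 percent.",
                             100)
print(prompt)
```

As noted above, for large documents the content is truncated before it is substituted into the template.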

Default Granite document comparison prompt template

The default Granite document comparison prompt template is as follows:
<|start_of_role|>system<|end_of_role|>
You are Granite chat, an AI language model developed by IBM.  You are a cautious assistant.  You carefully follow instructions.  You are helpful and harmless and you follow ethical guidelines and promote positive behavior.
Do not show any PII data.  Do not repeat statements within your response.
<|end_of_text|>
<|start_of_role|>document<|end_of_role|>
{content1}
<|end_of_text|>
<|start_of_role|>document<|end_of_role|>
{content2}
<|end_of_text|>
<|start_of_role|>user<|end_of_role|>
Please analyze the two above documents and provide a detailed comparison, highlighting the key differences between them.
{additional_instructions}
Please provide your response in HTML format only. Never respond in markdown format. Always begin your answer with <html> or <h4>. Where appropriate, use the HTML h4 tag to generate headers.  Use bullet points, bold/italic styling, and code blocks where relevant. Use proper line breaks so that bulleted (-, *) and numbered (1., 2.) lists render correctly, with each item on a new line. Include spacing as needed for readability.
<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>
where,
{content1}
{content1} is a substitution parameter. The Content Assistant replaces this parameter with the extracted text from the first document version that is to be compared. For large documents, this content may be truncated.
{content2}
{content2} is a substitution parameter. The Content Assistant replaces this parameter with the extracted text from the second document version that is to be compared. For large documents, this content may be truncated.
{additional_instructions}
{additional_instructions} is a substitution parameter. The Content Assistant replaces this parameter with additional instructions provided by the user, specifying how watsonx.ai should perform the comparison.
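The comparison template follows the same substitution pattern, with the wrinkle that the user's extra instructions are optional. A minimal sketch, assuming an empty string is substituted when no instructions are given (the names here are hypothetical, not the product API):

```python
# Illustrative sketch of document-comparison prompt substitution.
# {additional_instructions} may be empty when the user supplies none.
comparison_template = (
    "<|start_of_role|>document<|end_of_role|>\n{content1}\n<|end_of_text|>\n"
    "<|start_of_role|>document<|end_of_role|>\n{content2}\n<|end_of_text|>\n"
    "<|start_of_role|>user<|end_of_role|>\n"
    "Please analyze the two above documents and provide a detailed "
    "comparison, highlighting the key differences between them.\n"
    "{additional_instructions}\n"
    "<|end_of_text|>\n"
)

def fill_comparison_prompt(template: str, content1: str, content2: str,
                           additional_instructions: str = "") -> str:
    return (template
            .replace("{content1}", content1)
            .replace("{content2}", content2)
            .replace("{additional_instructions}", additional_instructions))

prompt = fill_comparison_prompt(
    comparison_template,
    "Version 1: payment is due within 30 days.",
    "Version 2: payment is due within 45 days.",
    "Focus on the payment terms.")
print(prompt)
```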

Dealing with new line characters in prompt templates

Large language models are sensitive to whitespace in prompts. Prompt templates should contain new line characters at the end of each line and after substitution parameters. However, the ACCE tool does not display the new line characters in a prompt template, so you cannot view them in ACCE.

The default prompt templates have new line characters that appear as \n characters at the end of each line. The following example shows how a text editor displays a prompt template:
<|start_of_role|>system<|end_of_role|>
My Prompt
<|end_of_text|>
The same prompt template, when sent over the network, is stored in the database in the following way:
<|start_of_role|>system<|end_of_role|>\nMy Prompt\n<|end_of_text|>
However, ACCE displays the same prompt template in the following way:
<|start_of_role|>system<|end_of_role|>My Prompt<|end_of_text|>

Therefore, when you enter a prompt template in ACCE, you must enter the \n character at the end of each line. The template, including the new line characters, is then saved in the Content Engine database and sent to the Content Assistant service, so the prompt that reaches the watsonx hosted LLM includes the new line characters. After you save the template, ACCE no longer displays the \n characters, so when you edit the same prompt template, you must add the new line characters again.
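The three views of the same template described above can be sketched as follows. This is an illustration of the escaping behavior only, not of how ACCE or the Content Assistant service actually process templates.

```python
# The text-editor view: the template contains real newline characters.
editor_view = "<|start_of_role|>system<|end_of_role|>\nMy Prompt\n<|end_of_text|>"

# Escaping the real newlines approximates the form that is sent over the
# network and stored in the database.
wire_form = editor_view.replace("\n", "\\n")
print(wire_form)

# Stripping the newlines approximates what ACCE displays.
acce_view = editor_view.replace("\n", "")
print(acce_view)
```

This is why a template that looks correct in ACCE can still be missing the new line characters that the LLM needs.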