Choosing a foundation model in watsonx.ai

To determine which models might work well for your project, find a model that supports the task you need to complete for your use case or that can be tuned to better suit your task. Look for a foundation model that supports the language of the text you need to process. Also, consider model attributes, such as license, pretraining data, model size, and how the model was fine-tuned. After you have a short list of models that best fit your use case, you can test the models to see which ones consistently return the results you want.

Foundation models that support your use case

To get started, find foundation models that can do the type of task that you want to complete.

The following table shows the types of tasks that the foundation models in IBM watsonx.ai support. A checkmark (✓) indicates that the task that is named in the column header is supported by the foundation model. For some of the models, you can click See sample to see a sample prompt that can be used for the task. Alternatively, see Sample prompts to review many prompt samples that are grouped by task type.

Table 1. Foundation model task support
Task columns: Classification, Extraction, Generation, Question-answering, Retrieval-augmented generation, Summarization, Coding, Translation
Models listed: granite-13b-chat-v2, granite-13b-instruct-v2, granite-7b-lab, granite-8b-japanese, granite-20b-multilingual, granite-3b-code-instruct, granite-8b-code-instruct, granite-20b-code-instruct, granite-34b-code-instruct, allam-1-13b-instruct, codellama-34b-instruct-hf, elyza-japanese-llama-2-7b-instruct, flan-t5-xl-3b, flan-t5-xxl-11b, flan-ul2-20b, jais-13b-chat, llama-3-8b-instruct, llama-3-70b-instruct, llama-2-13b-chat, llama-2-70b-chat, llama2-13b-dpo-v7, merlinite-7b, mixtral-8x7b-instruct-v01, mixtral-8x7b-instruct-v01-q, mt0-xxl-13b
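
After you narrow the list from Table 1, a quick way to compare candidates is to send the same task prompt to each shortlisted model and judge the responses side by side. The following is a minimal sketch, not code from the product documentation; it assumes the ibm_watsonx_ai Python SDK, and the API key, project ID, and the two example model IDs are placeholders that you can swap for any models in the table.

```python
# Minimal sketch: send one task prompt to several shortlisted models and compare.
# Assumes the ibm_watsonx_ai SDK; the API key, project ID, and model IDs are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # your regional endpoint
    api_key="YOUR_IBM_CLOUD_API_KEY",         # placeholder
)

prompt = (
    "Summarize the following support ticket in one sentence:\n"
    "The customer reports that PDF invoices exported last week are missing the tax column."
)

# Any model IDs from Table 1 can be listed here.
for model_id in ("ibm/granite-13b-instruct-v2", "mistralai/mixtral-8x7b-instruct-v01"):
    model = ModelInference(
        model_id=model_id,
        credentials=credentials,
        project_id="YOUR_PROJECT_ID",          # placeholder
        params={"decoding_method": "greedy", "max_new_tokens": 100},
    )
    print(model_id, "->", model.generate_text(prompt=prompt))
```

Running the same prompt across models in one loop makes it easier to spot which candidates consistently return output in the form you want before you commit to one.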

Foundation models that support your language

Many foundation models work well only in English. However, some model creators include multiple languages in their pretraining data sets, fine-tune their model on tasks in different languages, and test their model's performance in multiple languages. If you plan to build a solution for a global audience or one that handles translation tasks, look for models that were created with multilingual support in mind.

The following table lists natural languages that are supported in addition to English by foundation models in watsonx.ai. For more information about the languages that are supported for multilingual foundation models, see the model card for the foundation model.

Table 2. Foundation models that support natural languages other than English
Model Languages other than English
granite-8b-japanese Japanese
granite-20b-multilingual German, Spanish, French, and Portuguese
allam-1-13b-instruct Arabic
elyza-japanese-llama-2-7b-instruct Japanese
flan-t5-xl-3b Multilingual (See model card)
flan-t5-xxl-11b French, German
jais-13b-chat Arabic
llama2-13b-dpo-v7 Korean
mixtral-8x7b-instruct-v01 French, German, Italian, Spanish
mixtral-8x7b-instruct-v01-q French, German, Italian, Spanish
mt0-xxl-13b Multilingual (See model card)

Foundation models that you can tune

Some of the foundation models that are available in watsonx.ai can be tuned to better suit your needs.

The following tuning method is supported:

  • Prompt tuning: Runs tuning experiments that adjust the prompt vector that is included with the foundation model input. After several runs, the experiment finds the prompt vector that best guides the foundation model to return output that suits your task. A brief code sketch follows Table 3.

The following table shows the methods for tuning foundation models that are available in IBM watsonx.ai. A checkmark (✓) indicates that the tuning method that is named in the column header is supported by the foundation model.

Table 3. Available tuning methods
Model name Prompt tuning
flan-t5-xl-3b ✓
granite-13b-instruct-v2 ✓
llama-2-13b-chat ✓

For more information, see Tuning Studio.
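
If you prefer to script a prompt-tuning experiment rather than use the Tuning Studio interface, the following is a minimal sketch based on the ibm_watsonx_ai Python SDK. The class names, task enum, and parameters shown are assumptions drawn from that SDK and should be checked against the SDK documentation; the credentials, project ID, and training-data asset ID are placeholders.

```python
# Minimal sketch of starting a prompt-tuning experiment with the ibm_watsonx_ai SDK.
# Names and parameters are assumptions to illustrate the flow; verify them against
# the SDK and Tuning Studio documentation. Credentials and IDs are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.experiment import TuneExperiment
from ibm_watsonx_ai.helpers import DataConnection

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key="YOUR_IBM_CLOUD_API_KEY",          # placeholder
)

experiment = TuneExperiment(credentials, project_id="YOUR_PROJECT_ID")

# Configure a prompt-tuning run for one of the tunable models in Table 3.
prompt_tuner = experiment.prompt_tuner(
    name="classification prompt tuning",
    task_id=experiment.Tasks.CLASSIFICATION,
    base_model="ibm/granite-13b-instruct-v2",
    num_epochs=10,
)

# The training data is referenced as a project data asset with input/output pairs.
training_data = [DataConnection(data_asset_id="YOUR_TRAINING_DATA_ASSET_ID")]

# background_mode=False waits for the tuning runs to finish before returning details.
tuning_details = prompt_tuner.run(
    training_data_references=training_data,
    background_mode=False,
)
print(tuning_details)
```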

More considerations for choosing a model

Table 4. Considerations for choosing a foundation model in IBM watsonx.ai
Model attribute Considerations
Context length Sometimes called context window length, context window, or maximum sequence length, the context length is the maximum number of tokens that are allowed in the input prompt plus the tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter (see the sketch that follows this table). For some models, the output token length is further limited by a model-specific maximum when you use a Lite plan.
Cost The cost of using foundation models is measured in resource units. The price of a resource unit is based on the rate of the billing class for the foundation model.
Fine-tuned After a foundation model is pretrained, many foundation models are fine-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that is fine-tuned on tasks similar to your planned use typically does better with zero-shot prompts than a model that is not fine-tuned in a way that fits your use case. One way to improve results with a fine-tuned model is to structure your prompt in the same format as the prompts in the data sets that were used to fine-tune that model (see the sketch that follows this table).
Instruction-tuned Instruction-tuned means that the model was fine-tuned with prompts that include an instruction. When a model is instruction-tuned, it typically responds well to prompts that include an instruction, even if those prompts don't include examples.
IP indemnity In addition to license terms, review the intellectual property indemnification policy for the model. Some foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models. For information about contractual protections related to IBM watsonx.ai, see the IBM watsonx.ai service description.
License In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution.
Model architecture The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures:
Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction.
Decoder-only: Generates output text word-by-word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions.
Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization.
Regional availability You can work with models that are available in the same IBM Cloud regional data center as your watsonx services.
Supported programming languages Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case.
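
Two of these considerations, matching the prompt format of a fine-tuned model and capping output with the Max tokens parameter, are easiest to see in code. The following is a minimal sketch, not documentation code; it assumes the ibm_watsonx_ai Python SDK and placeholder credentials, and it formats the prompt with the published Llama 2 chat markers that llama-2-13b-chat was fine-tuned on.

```python
# Minimal sketch: format a zero-shot instruction prompt in the Llama 2 chat style
# ([INST] / <<SYS>> markers) and cap the generated output with the Max tokens setting.
# Assumes the ibm_watsonx_ai SDK; credentials and the project ID are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

prompt = (
    "[INST] <<SYS>>\n"
    "You are a concise assistant that answers in one word.\n"
    "<</SYS>>\n\n"
    "Classify the sentiment of this review as Positive or Negative: "
    "\"The checkout process was slow and the support chat never answered.\" [/INST]"
)

# Input tokens plus MAX_NEW_TOKENS must fit within the model's context length.
params = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.MAX_NEW_TOKENS: 20,   # the Max tokens limit on generated output
    GenParams.MIN_NEW_TOKENS: 1,
}

model = ModelInference(
    model_id="meta-llama/llama-2-13b-chat",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_IBM_CLOUD_API_KEY",   # placeholder
    ),
    project_id="YOUR_PROJECT_ID",           # placeholder
    params=params,
)
print(model.generate_text(prompt=prompt))
```

Because the context length covers both the input prompt and the generated tokens, lowering MAX_NEW_TOKENS leaves more room for a long prompt, and a long prompt leaves less room for generated output.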

Parent topic: Supported foundation models