Choosing a foundation model in watsonx.ai

There are many factors to consider when you choose a foundation model to use for inferencing in a generative AI project.

For example, for a solution that summarizes call center problem reports, you might want a foundation model with these characteristics:

  • Scores well on benchmarks for summarization tasks
  • Handles large amounts of text, which requires a large context window length
  • Can interpret images of damaged items, so accepts inputs in both text and image modalities

Determine which factors are most important for you and your organization.

After you have a short list of models that best fit your needs, you can test the models to see which ones consistently return the results you want.
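
To compare the shortlisted models, you can send the same prompt with the same parameters to each one. The following is a minimal sketch that assumes the ibm-watsonx-ai Python SDK; the endpoint URL, API key, project ID, and candidate model IDs are placeholders to replace with your own values:

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  credentials = Credentials(
      url="https://us-south.ml.cloud.ibm.com",  # your watsonx.ai endpoint
      api_key="YOUR_API_KEY",                   # placeholder
  )

  prompt = "Summarize the following call center report:\n<report text>"
  candidates = ["ibm/granite-13b-instruct-v2", "mistralai/mixtral-8x7b-instruct-v01"]

  for model_id in candidates:
      model = ModelInference(
          model_id=model_id,
          credentials=credentials,
          project_id="YOUR_PROJECT_ID",  # placeholder
      )
      # Use identical parameters so the outputs are comparable
      output = model.generate_text(prompt=prompt, params={"max_new_tokens": 200})
      print(f"--- {model_id} ---\n{output}\n")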

Foundation models that support your use case

To get started, find foundation models that can do the type of task that you want to complete.

The following table shows the types of tasks that the foundation models in IBM watsonx.ai support. For each foundation model, the supported capabilities are listed. For some of the tasks, you can click a link to go to a sample prompt for the task.

Table 1a. Foundation model task support

granite-13b-chat-v2
  • Conversation: Chat from Prompt Lab (Sample chat)
  • RAG: RAG from Prompt Lab; RAG from AutoAI

granite-13b-instruct-v2
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab
  • Samples: Generation

granite-7b-lab
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab; RAG from AutoAI
  • Samples: Summarization

granite-8b-japanese
  • Samples: Q&A, Translation

granite-20b-multilingual
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab
  • Samples: Translation

granite-3-2b-instruct
  • Conversation: Chat from Prompt Lab; From Chat API (Sample)
  • Samples: Code

granite-3-8b-instruct
  • Conversation: Chat from Prompt Lab; From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample
  • Samples: Code

granite-guardian-3-2b
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab

granite-guardian-3-8b
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab

granite-3b-code-instruct
  • Conversation: Chat from Prompt Lab
  • Samples: Code

granite-8b-code-instruct
  • Conversation: Chat from Prompt Lab
  • Samples: Code

granite-20b-code-instruct
  • Conversation: Chat from Prompt Lab; From Chat API (Sample)
  • Samples: Code

granite-20b-code-base-schema-linking
  • Samples: Code

granite-20b-code-base-sql-gen
  • Samples: Code

granite-34b-code-instruct
  • Conversation: Chat from Prompt Lab; From Chat API (Sample)
  • Samples: Code

allam-1-13b-instruct
  • Conversation: Chat from Prompt Lab
  • Samples: Classification, Translation

codellama-34b-instruct-hf
  • Samples: Code

codestral-22b
  • Samples: Code

codestral-2501
  • Samples: Code

elyza-japanese-llama-2-7b-instruct
  • Samples: Classification, Translation

flan-t5-xl-3b
  • RAG: RAG from Prompt Lab

flan-t5-xxl-11b
  • RAG: RAG from Prompt Lab
  • Samples: Q&A, Classification, Summarization

flan-ul2-20b
  • RAG: RAG from Prompt Lab; RAG from AutoAI
  • Samples: Q&A, Classification, Extraction, Summarization

jais-13b-chat
  • Conversation: Chat from Prompt Lab (Sample chat)

llama-3-3-70b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat); From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab

llama-3-2-1b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab
  • Samples: Code

llama-3-2-3b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat)
  • RAG: RAG from Prompt Lab
  • Samples: Code

llama-3-2-11b-vision-instruct
  • Conversation: Chat from Prompt Lab (Chat with image example); From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab

llama-3-2-90b-vision-instruct
  • Conversation: Chat from Prompt Lab (Chat with image example); From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab

llama-guard-3-11b-vision
  • Conversation: Chat from Prompt Lab (Chat with image example); From Chat API (Sample)
  • RAG: RAG from Prompt Lab
  • Samples: Classification

llama-3-1-8b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat)
  • Tool interaction from Chat API: Tool-calling sample (Multitenant)
  • RAG: RAG from Prompt Lab; RAG from AutoAI

llama-3-1-70b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat); From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample (Multitenant)
  • RAG: RAG from Prompt Lab; RAG from AutoAI

llama-3-405b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab

llama-3-8b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat); From Chat API (Sample)
  • RAG: RAG from Prompt Lab

llama-3-70b-instruct
  • Conversation: Chat from Prompt Lab (Sample chat); From Chat API (Sample)
  • RAG: RAG from Prompt Lab; RAG from AutoAI

llama-2-13b-chat
  • Conversation: Chat from Prompt Lab (Sample chat)
  • RAG: RAG from Prompt Lab

llama2-13b-dpo-v7
  • Conversation: Chat from Prompt Lab (Sample chat)
  • RAG: RAG from Prompt Lab
  • Samples: Summarization

ministral-8b-instruct
  • Conversation: Chat from Prompt Lab
  • Samples: Classification, Extraction, Summarization, Translation

mistral-large
  • Conversation: Chat from Prompt Lab; From Chat API (Sample)
  • Tool interaction from Chat API: Tool-calling sample
  • RAG: RAG from Prompt Lab; RAG from AutoAI
  • Samples: Classification, Extraction, Summarization, Code, Translation

mistral-large-instruct-2411
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab
  • Samples: Classification, Extraction, Summarization, Code, Translation

mistral-small-instruct
  • Conversation: Chat from Prompt Lab
  • Samples: Classification, Extraction, Summarization, Code, Translation

mixtral-8x7b-instruct-v01
  • Conversation: Chat from Prompt Lab
  • RAG: RAG from Prompt Lab; RAG from AutoAI
  • Samples: Classification, Extraction, Generation, Summarization, Code, Translation

mt0-xxl-13b
  • RAG: RAG from Prompt Lab
  • Samples: Classification, Q&A

pixtral-12b
  • Conversation: Chat from Prompt Lab (Chat with image example)
  • RAG: RAG from Prompt Lab
  • Samples: Classification, Extraction, Summarization

pixtral-large-instruct-2411
  • Conversation: Chat from Prompt Lab (Chat with image example)
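
If you prefer to check task support programmatically, you can also retrieve the model specifications from the service. A minimal sketch, assuming the ibm-watsonx-ai Python SDK; the credentials are placeholders, and field names such as task_ids reflect the specification format at the time of writing:

  from ibm_watsonx_ai import APIClient, Credentials

  client = APIClient(
      credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
      project_id="YOUR_PROJECT_ID",  # placeholder
  )

  # List each available foundation model with the task IDs that it supports
  specs = client.foundation_models.get_model_specs()
  for entry in specs["resources"]:
      print(entry["model_id"], entry.get("task_ids", []))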

Multimodal foundation models

Multimodal foundation models are capable of processing and integrating information from many modalities or types of data. These modalities can include text, images, audio, video, and other forms of sensory input.

The multimodal foundation models that are available from watsonx.ai can do the following types of tasks:

Image-to-text generation
Useful for visual question answering, interpretation of charts and graphs, captioning of images, and more.

The following table lists the available foundation models that support modalities other than text-in and text-out. A sketch after the table shows one way to send an image to a multimodal model.

Table 1b. Supported multimodal foundation models
Model Input modalities Output modalities
granite-vision-3-2-2b image, text text
llama-4-maverick-17b-128e-instruct-fp8 image, text text
llama-4-scout-17b-16e-instruct image, text text
llama-3-2-11b-vision-instruct image, text text
llama-3-2-90b-vision-instruct image, text text
llama-guard-3-11b-vision image, text text
pixtral-12b image, text text


Foundation models that support your language

Many foundation models work well only in English. However, some model creators include multiple languages in their pretraining data sets, fine-tune their models on tasks in different languages, and test their models' performance in multiple languages. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind.

The following table lists natural languages that are supported in addition to English by foundation models in watsonx.ai. For more information about the languages that are supported for multilingual foundation models, see the model card for the foundation model.

Table 2. Foundation models that support natural languages other than English
Model Languages other than English
granite-8b-japanese Japanese
granite-20b-multilingual German, Spanish, French, and Portuguese
allam-1-13b-instruct Arabic
elyza-japanese-llama-2-7b-instruct Japanese
flan-t5-xl-3b Multilingual (See model card)
flan-t5-xxl-11b French, German
jais-13b-chat Arabic
llama2-13b-dpo-v7 Korean
ministral-8b-instruct Multilingual (See model card)
mistral-large Multilingual (See model card)
mixtral-8x7b-instruct-v01 French, German, Italian, Spanish
mt0-xxl-13b Multilingual (See model card)
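
One quick way to verify that a model handles your target language is to send it a small test prompt in that language. A minimal sketch, assuming the ibm-watsonx-ai Python SDK; the credentials are placeholders:

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  model = ModelInference(
      model_id="ibm/granite-20b-multilingual",
      credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
      project_id="YOUR_PROJECT_ID",  # placeholder
  )

  # French is one of the languages that Table 2 lists for this model
  prompt = (
      "Translate the following text from English to French.\n"
      "Text: The package arrived damaged and the customer wants a refund.\n"
      "Translation:"
  )
  print(model.generate_text(prompt=prompt, params={"max_new_tokens": 100}))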


Foundation models that you can tune

Some of the foundation models that are available in watsonx.ai can be tuned to better suit your needs.

The following tuning methods are supported:

  • Fine tuning: Runs tuning experiments that change the parameter weights of the underlying foundation model to guide the model to generate output that is optimized for a task.

  • Prompt tuning: Runs tuning experiments that adjust the prompt vector that is included with the foundation model input. After several runs, the experiment finds the prompt vector that best guides the foundation model to return output that suits your task, as shown in the sketch after this list.

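To illustrate how a prompt-tuning experiment might be started, here is a minimal sketch that assumes the TuneExperiment class from the ibm-watsonx-ai Python SDK; the credentials, experiment name, data asset ID, and parameter values are placeholders:

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.experiment import TuneExperiment
  from ibm_watsonx_ai.helpers import DataConnection

  experiment = TuneExperiment(
      credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
      project_id="YOUR_PROJECT_ID",  # placeholder
  )

  # Configure a prompt-tuning run against a prompt-tunable base model
  prompt_tuner = experiment.prompt_tuner(
      name="complaint classification tuning",  # placeholder name
      task_id="classification",
      base_model="google/flan-t5-xl",
      num_epochs=10,
  )

  # Point at the training data asset in your project (placeholder ID)
  training_data = DataConnection(data_asset_id="YOUR_DATA_ASSET_ID")
  tuning_details = prompt_tuner.run(
      training_data_references=[training_data],
      background_mode=False,  # wait for the experiment to finish
  )
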
The following table shows foundation models that you can tune by using the tuning methods that are available in IBM watsonx.ai. A checkmark (✓) indicates that the tuning method that is named in the column header is supported by the foundation model that is named at the start of the row.

Table 3. Available tuning methods
Model name Prompt tuning Fine tuning
allam-1-13b-instruct
flan-t5-xl-3b
granite-13b-instruct-v2
granite-3b-code-instruct
granite-8b-code-instruct
granite-20b-code-instruct
llama-2-13b-chat
llama-3-1-8b-instruct

For more information, see Tuning Studio.

Model types and IP indemnification

Review the intellectual property indemnification policy for the foundation model that you want to use. Some third-party foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models.

IBM-developed foundation models that are available from watsonx.ai have standard intellectual property protection, similar to what IBM provides for hardware and software products.

IBM extends its standard intellectual property indemnification to the output that is generated by covered models. Covered Models include IBM-developed and some third-party foundation models that are available from watsonx.ai. Third-Party Covered Models are identified in table 4.

The following table describes the different foundation model types and their indemnification policies. See the reference materials for full details.

Table 4. Indemnification policy details

IBM Covered Model
  • Indemnification policy: Uncapped IBM indemnification
  • Foundation models: IBM Granite, IBM Slate
  • Details: IBM-developed foundation models that are available from watsonx.ai.
  • Reference materials: License information

Third-Party Covered Model
  • Indemnification policy: Capped IBM indemnification
  • Foundation models: Mistral Large
  • Details: Third-party covered models that are available from watsonx.ai.
  • Reference materials: License information

Non-IBM Product
  • Indemnification policy: No IBM indemnification
  • Foundation models: Various
  • Details: Third-party models that are available from watsonx.ai and are subject to their respective license terms, including associated obligations and restrictions.
  • Reference materials: See model information.

Custom Model
  • Indemnification policy: No IBM indemnification
  • Foundation models: Various
  • Details: Foundation models that you import to use in watsonx.ai are Client content. Client is solely responsible for the selection and use of the model and output and compliance with third-party license terms, obligations, and restrictions.

For more information about third-party model license terms, see Third-party foundation models.

More considerations for choosing a model

Table 5. Considerations for choosing a foundation model in IBM watsonx.ai

Context length
Sometimes called context window length, context window, or maximum sequence length, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter. (See the sketch after this table for one way to budget tokens.)

Fine-tuned
After a foundation model is pretrained, many foundation models are fine tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that undergoes fine tuning on tasks similar to your planned use typically does better with zero-shot prompts than models that are not fine tuned in a way that fits your use case. One way to improve results for a fine-tuned model is to structure your prompt in the same format as the prompts in the data sets that were used to fine tune that model.

Instruction-tuned
Instruction-tuned means that the model was fine tuned with prompts that include an instruction. When a model is instruction tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples.

IP indemnity
In addition to license terms, review the intellectual property indemnification policy for the model. For more information, see Model types and IP indemnification.

License
In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution.

Model architecture
The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures:
  • Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction.
  • Decoder-only: Generates output text word by word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions.
  • Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization.

Supported programming languages
Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case.
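
For example, before you submit a long prompt, you can count its tokens and budget the remainder of the context window for output. A minimal sketch, assuming the tokenize method of the ibm-watsonx-ai Python SDK; the credentials and the context length value are placeholders, and the response field names reflect the API at the time of writing:

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  model = ModelInference(
      model_id="ibm/granite-13b-instruct-v2",
      credentials=Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="YOUR_API_KEY"),
      project_id="YOUR_PROJECT_ID",  # placeholder
  )

  CONTEXT_LENGTH = 8192  # placeholder: look up the actual limit in the model card
  prompt = "Summarize the following call center report:\n<report text>"

  # Count the tokens that the prompt consumes
  token_count = model.tokenize(prompt=prompt)["result"]["token_count"]

  # Whatever remains in the context window is the budget for generated output
  budget = CONTEXT_LENGTH - token_count
  print(f"Prompt uses {token_count} tokens; up to {budget} tokens remain for output.")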

Parent topic: Supported foundation models