Foundation model lifecycle
To help you discover and use the latest and best foundation models, the list of foundation models that are available for prompting in watsonx.ai is updated regularly.
Foundation models that are built by IBM are continuously updated and improved. As new versions of IBM foundation models are introduced, older versions of these models move to a legacy state in the model lifecycle. Legacy models are deprecated and eventually removed as they progress through the lifecycle over time.
Similarly, as newer and more effective third-party foundation models are released, older models progress through the same lifecycle in watsonx.ai so that the latest versions of a model family can be featured.
Modifications to IBM foundation models
IBM foundation models are periodically modified by IBM to improve the foundation model performance or security. A modification is a model refresh that might include new capabilities or fixes, but does not meet IBM's criteria to warrant a version update.
After a modification to an IBM foundation model is introduced, the modified version of the model automatically replaces the earlier version of the model when you upgrade your deployment.
Foundation model deprecation
During the deprecation period, you can continue to inference the deprecated foundation model. However, a message is returned with the foundation model output to notify you that the model is deprecated and a new version is available.
A deprecated foundation model can also be constricted. When a deprecated model is in the constricted state, the model can be inferenced, but cannot be tuned, trained, or deployed.
When a foundation model is deprecated, the following steps are taken to inform you about the deprecation:
- The foundation model is highlighted in the product user interface with a warning icon. A tooltip indicates that the deprecated model is scheduled for withdrawal.
- A warning is issued when you inference a deprecated foundation model.
- The Deprecated and withdrawn foundation models table is updated to show foundation models that are deprecated, the releases in which those models are deprecated, and a suitable alternative foundation model for you to consider as a replacement.
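The warning that accompanies output from a deprecated model can also be detected programmatically before you migrate an asset. The following sketch is illustrative only: it assumes a response payload in which warnings are returned under a `system` field alongside the generated results. The exact field names, warning IDs, and message text are assumptions, not a documented contract, so verify them against the responses in your watsonx.ai release.

```python
def find_deprecation_warnings(response: dict) -> list[str]:
    """Collect deprecation-related warning messages from a
    text-generation response.

    The ``system.warnings`` layout used here is an assumed shape,
    not a documented contract; adjust it to match your release.
    """
    warnings = response.get("system", {}).get("warnings", [])
    return [
        w.get("message", "")
        for w in warnings
        if "deprecat" in w.get("message", "").lower()
    ]


# Hypothetical response payload, for illustration only.
sample_response = {
    "results": [{"generated_text": "..."}],
    "system": {
        "warnings": [
            {
                "id": "deprecation_warning",
                "message": "Model 'ibm/granite-13b-chat-v2' is deprecated "
                           "and will be withdrawn. Consider using "
                           "'ibm/granite-3-8b-instruct' instead.",
            }
        ]
    },
}

for message in find_deprecation_warnings(sample_response):
    print(message)
```

A check like this can be added to automation that inferences saved assets, so that deprecations surface in logs rather than being noticed only in the product user interface.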
Foundation model withdrawal
When you install or upgrade to a software version in which a foundation model is deprecated or withdrawn, you cannot install that deprecated or withdrawn foundation model.
If a currently deprecated or withdrawn model was previously installed in your cluster at an earlier software version, you can continue to inference the model. However, no software support is provided by IBM for foundation models that are withdrawn from watsonx.ai.
After a model is removed from the software and deprovisioned from the cluster, you must re-provision the model as a custom foundation model to keep using it for inferencing and tuning. For details, see Deploying custom foundation models.
Withdrawn models are highlighted with a warning icon. A tooltip indicates that the model is withdrawn.
Deprecated and withdrawn foundation models

| Foundation model name, model ID, and API model ID | Deprecated in release | Withdrawn in release | Alternative foundation model |
|---|---|---|---|
| Name: granite-8b-japanese Model ID: ibm-granite-8b-japanese API ID: ibm/granite-8b-japanese | 2.1.2 | To be determined | granite-3-8b-instruct |
| Name: llama-3-1-8b-instruct Model ID: llama-3-1-8b-instruct API ID: meta-llama/llama-3-1-8b-instruct | 2.1.1 | To be determined | llama-3-2-11b-vision-instruct |
| Name: llama-3-1-70b-instruct Model ID: llama-3-1-70b-instruct API ID: meta-llama/llama-3-1-70b-instruct | 2.1.1 | To be determined | • llama-3-3-70b-instruct • llama-3-2-90b-vision-instruct |
| Name: granite-7b-lab Model ID: ibm-granite-7b-lab API ID: ibm/granite-7b-lab | 2.1.0 | To be determined | granite-3-8b-instruct |
| Name: llama2-13b-dpo-v7 Model ID: mncai-llama-2-13b-dpo-v7 API ID: mncai/llama2-13b-dpo-v7 | 2.1.0 | To be determined | llama-3-1-8b-instruct |
| Name: llama-3-8b-instruct Model ID: meta-llama-llama-3-8b-instruct API ID: meta-llama/llama-3-8b-instruct | 2.1.0 | To be determined | • llama-3-1-8b-instruct • llama-3-2-11b-vision-instruct |
| Name: llama-3-70b-instruct Model ID: meta-llama-llama-3-70b-instruct API ID: meta-llama/llama-3-70b-instruct | 2.1.0 | To be determined | • llama-3-1-70b-instruct • llama-3-2-90b-vision-instruct |
| Name: mt0-xxl-13b Model ID: bigscience-mt0-xxl API ID: bigscience/mt0-xxl | 2.1.0 | To be determined | • llama-3-1-8b-instruct • llama-3-2-11b-vision-instruct |
| Name: codellama-34b-instruct-hf Model ID: codellama-codellama-34b-instruct-hf API ID: codellama/codellama-34b-instruct-hf | 2.1.1 | 2.1.2 | llama-3-3-70b-instruct |
| Name: granite-13b-chat-v2 Model ID: ibm-granite-13b-chat-v2 API ID: ibm/granite-13b-chat-v2 | 2.1.1 | 2.1.2 | granite-3-8b-instruct |
| Name: granite-20b-multilingual Model ID: ibm-granite-20b-multilingual API ID: ibm/granite-20b-multilingual | 2.1.1 | 2.1.2 | granite-3-8b-instruct |
| Name: merlinite-7b Model ID: ibm-mistralai-merlinite-7b API ID: ibm-mistralai/merlinite-7b | 2.0.3 | 2.1.0 | mixtral-8x7b-instruct-v01 |
| Name: llama-2-70b-chat Model ID: meta-llama-llama-2-70b-chat API ID: meta-llama/llama-2-70b-chat | 2.0.3 | 2.1.0 | llama-3-1-70b-instruct |
| Name: mixtral-8x7b-instruct-v01-q Model ID: mixtral-8x7b-instruct-v01-q API ID: ibm-mistralai/mixtral-8x7b-instruct-v01-q | 2.0.0 | 2.0.3 | mixtral-8x7b-instruct-v01 |
| Name: granite-13b-chat-v1 Model ID: ibm-granite-13b-chat-v1 API ID: ibm/granite-13b-chat-v1 | 1.1.4 | 2.0.1 | granite-13b-chat-v2 |
| Name: granite-13b-instruct-v1 Model ID: ibm-granite-13b-instruct-v1 API ID: ibm/granite-13b-instruct-v1 | 1.1.4 | 2.0.1 | granite-13b-instruct-v2 |
| Name: gpt-neox-20b Model ID: eleutherai-gpt-neox-20b API ID: eleutherai/gpt-neox-20b | 1.1.4 | 2.0.1 | mixtral-8x7b-instruct-v01 |
| Name: mpt-7b-instruct2 Model ID: ibm-mpt-7b-instruct2 API ID: ibm/mpt-7b-instruct2 | 1.1.4 | 2.0.1 | mixtral-8x7b-instruct-v01 |
| Name: starcoder-15.5b Model ID: bigcode-starcoder API ID: bigcode/starcoder | 1.1.4 | 2.0.1 | mixtral-8x7b-instruct-v01 |
The releases that are listed in the preceding table map to the following software versions:

| IBM watsonx experience version | IBM Software Hub cluster version | watsonx.ai service operand version |
|---|---|---|
| 2.1.2 | 5.1.2 | 10.2.0 |
| 2.1.1 | 5.1.1 | 10.1.0 |
| 2.1.0 | 5.1.0 | 10.0.0 |
| 2.0.3 | 5.0.3 | 9.3.0 |
| 2.0.1 | 5.0.1 | 9.1.0 |
| 2.0.0 | 5.0.0 | 9.0.0 |
| 1.1.4 | 4.8.4 | 8.4.0 |
What to do next
If any of the following saved assets submit input to a withdrawn foundation model, you must choose an alternative supported foundation model for the asset to use:
- Prompt template asset
- Prompt session asset
- Notebook asset
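When you update such assets in bulk, the suggested replacement can be looked up from the alternatives that are documented in the Deprecated and withdrawn foundation models table. The following sketch is illustrative only; the mapping copies a subset of alternatives from that table and keys them by API model ID, which is an assumed convention for how your assets store the model reference.

```python
# Alternatives copied from the deprecated and withdrawn foundation
# models table (illustrative subset, keyed by API model ID).
ALTERNATIVES = {
    "ibm/granite-13b-chat-v2": "granite-3-8b-instruct",
    "ibm/granite-20b-multilingual": "granite-3-8b-instruct",
    "codellama/codellama-34b-instruct-hf": "llama-3-3-70b-instruct",
    "meta-llama/llama-2-70b-chat": "llama-3-1-70b-instruct",
}


def replacement_model(model_id: str) -> str:
    """Return the documented alternative for a withdrawn model,
    or raise if no alternative is listed."""
    try:
        return ALTERNATIVES[model_id]
    except KeyError:
        raise ValueError(
            f"No documented alternative for '{model_id}'; "
            "choose a supported model manually."
        )


print(replacement_model("ibm/granite-13b-chat-v2"))  # granite-3-8b-instruct
```

Where a table row lists more than one alternative, pick the model that best matches your workload (for example, a vision-capable variant only if the asset processes images) rather than encoding a single default.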
For more information about working with saved prompt assets, see Saving your work.
For more information about how to change the foundation model that is inferenced from a notebook asset, see Inferencing a foundation model with a notebook.
Parent topic: Supported foundation models