IBM foundation models

In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business.

The Granite family of IBM foundation models includes decoder-only models that can efficiently predict and generate language.

The models were built with trusted data that has the following characteristics:

  • Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more.
  • Compliant with rigorous IBM data clearance and governance standards.
  • Scrubbed of hate, abuse, and profanity, data duplication, and blocklisted URLs, among other things.

IBM is committed to building AI that is open, trusted, targeted, and empowering. For more information about contractual protections that are related to IBM indemnification, see the IBM Client Relationship Agreement.

The following foundation models from IBM are available in watsonx.ai:

For information about the GPU requirements for the supported foundation models, see Foundation models in the IBM Software Hub documentation.

For details about encoder models developed by IBM, see Supported encoder foundation models.

For details about third-party foundation models, see Third-party foundation models.

How to choose a model

To review factors that can help you to choose a model, such as supported tasks and languages, see Choosing a model and Foundation model benchmarks.

A deprecated foundation model is highlighted with a deprecation warning icon. A withdrawn foundation model is highlighted with a withdrawal error icon. For details about model deprecation and withdrawal, see Foundation model lifecycle.

Foundation model details

The foundation models in watsonx.ai support a range of use cases for both natural languages and programming languages. To see the types of tasks that these models can do, review and try the sample prompts.

Note: A view of all foundation models is available on the Resource hub.
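
For example, the following minimal sketch shows how a prompt might be sent to one of these models with the ibm-watsonx-ai Python SDK. The host URL, API key, project ID, and model ID are placeholders; substitute values from your own deployment, and note that the exact authentication fields can vary by installation.

    # Minimal sketch: prompting an IBM foundation model with the ibm-watsonx-ai Python SDK.
    # All credentials and IDs below are placeholders for your own deployment.
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(
        url="https://<your-watsonx-host>",
        api_key="<your-api-key>",
    )

    model = ModelInference(
        model_id="ibm/granite-3-3-8b-instruct",   # any supported IBM foundation model
        credentials=credentials,
        project_id="<your-project-id>",
        params={"max_new_tokens": 300},           # keep within the model's stated limits
    )

    # Send a simple instruction prompt and print the generated text.
    print(model.generate_text(prompt="Summarize the following meeting notes in three sentences: ..."))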

ibm-defense-4-0-micro

This model was introduced with the 2.3.0 release.

The IBM Defense Model is a defense-focused large language model (LLM) that is fine-tuned from an IBM Granite model. This model is designed to work with Janes foundation defense data to deliver fast, reliable, and contextual results for mission-critical tasks in defense organizations.

Note: You must purchase the IBM watsonx.ai Defense Model entitlement separately before you can install and use this model.
Usage

Capable of common generative tasks, including classification, function-calling, summarization, question-answering, retrieval augmented generation, and more.

Size

3 billion parameters

Token limits

Context window length (input + output): 131,072

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese.

Instruction tuning information

The ibm-defense-4-0-micro foundation model is fine-tuned from granite-4-0-micro. The model's training data includes publicly available datasets with permissive licenses, internally generated synthetic data to enhance reasoning, publicly available military datasets, and curated documentation, sample usage, and synthetic examples of Janes API interactions.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Granite Docling

This model was introduced with the 2.3.0 release.

Granite Docling is a multimodal Image-Text-to-Text model that is built for efficient document conversion. The model preserves the core features of Docling, maintains seamless integration with DoclingDocuments, and can parse PDFs, slides, and scanned pages directly into structured, machine-readable formats.

Usage

Ideal for document understanding and conversion.
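
Outside of watsonx.ai, this kind of conversion is typically driven through the open-source Docling library. The following minimal sketch assumes the docling Python package is installed and uses its standard converter; configuring the pipeline to run the Granite Docling model specifically depends on your setup and is not shown here.

    # Sketch: converting a PDF into structured output with the open-source docling package.
    # This uses docling's default converter; routing the conversion through the Granite
    # Docling model depends on how the pipeline is configured, which is not shown here.
    from docling.document_converter import DocumentConverter

    converter = DocumentConverter()
    result = converter.convert("report.pdf")   # PDFs, slides, and scanned pages are supported

    # Export the parsed DoclingDocument to a machine-readable format.
    print(result.document.export_to_markdown())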

Size

258 million parameters

Token limits

Context window length (input + output): 8,192

Supported natural languages

English

Instruction tuning information

Built upon Idefics3 and trained using the nanoVLM framework. Training data consists of publicly available datasets and internally constructed synthetic datasets designed to elicit specific document understanding capabilities.

Model architecture

Encoder-decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite 4 models

The Granite 4.0 foundation models belong to the IBM Granite family of models. The granite-4-h-small, granite-4-h-micro, and granite-4-h-tiny foundation models are instruction-following models that are built for structured and long-context capabilities. The models use fine-tuning, reinforcement learning, and model merging to improve performance. Granite 4.0 offers better instruction handling and tool use, making it well-suited for enterprise tasks.

The granite-4-h-tiny model was introduced with the 2.3.0 release.

Usage

Designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications. The model is capable of common generative tasks, including summarization, text classification, text extraction, question-answering, retrieval augmented generation (RAG), code-related tasks, function-calling tasks, Fill-In-the-Middle (FIM) code completion, and multilingual dialog use cases.

Size
  • Small: 30 billion parameters
  • Tiny: 7 billion parameters
  • Micro: 3 billion parameters
Token limits

Context window length (input + output): 131,072

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 4.0 models for languages beyond these 12 languages.

Instruction tuning information

The Granite 4 models are fine-tuned from Granite-4.0-H-Small-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

ibm-defense-3-3-8b-instruct

The IBM watsonx.ai Defense Model is a fine-tuned version of the granite-3-3-8b-instruct model, trained on general defense industry knowledge and the Janes API. The ibm-defense-3-3-8b-instruct model is designed to perform both tool-calling of the Janes Inventory API and retrieval augmented generation (RAG) tasks. The model can synthesize data, extract key insights, identify patterns, and leverage defense-specific knowledge to deliver fast, reliable, and contextual results.

Note: You must purchase the IBM watsonx.ai Defense Model entitlement separately before you can install and use this model.
Usage

Capable of common generative tasks, including code-related tasks, function-calling, instruction following, question-answering and more. Specializes in tool-calling and retrieval augmented generation.

Size

8 billion parameters

Token limits

Context window length (input + output): 128,000

Supported natural languages

English. However, the base Granite model also supports German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. You can further fine-tune the Granite model to support additional languages.

Instruction tuning information

Built on top of granite-3-3-8b-base, the model's training data includes publicly available datasets with permissive licenses, internally generated synthetic data to enhance reasoning, publicly available military datasets, and curated documentation, sample usage, and synthetic examples of Janes API interactions.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

granite-3-1-8b-base

The Granite 3.1 8b foundation model is a base model that belongs to the IBM Granite family of models. The model extends the context length of Granite-3.0-8B-Base.

Usage

The Granite 3.1 base foundation model is a pre-trained autoregressive foundation model intended for tuning, summarization, text classification, extraction, question-answering, and other long-context tasks.

You can use the granite-3-1-8b-base foundation model for fine-tuning purposes.

Size

8 billion parameters

Token limits

Context window length (input + output): 131,072

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 3.1 models for languages beyond these 12 languages.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Instruct 3.3 Models

The Granite Instruct foundation models belong to the IBM Granite family of models. The granite-3-3-2b-instruct and granite-3-3-8b-instruct foundation models are Granite 3.3 Instruct foundation models. These models build on earlier iterations for improved reasoning, mathematics, coding, and instruction-following capabilities.

Usage
Designed to excel in long-context and instruction-following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, function-calling, and more. Can be integrated into AI assistants across various domains.
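
As an illustration of the function-calling support, the following sketch assumes that the ibm-watsonx-ai Python SDK's chat interface accepts OpenAI-style tool definitions; the get_weather tool and all credentials are hypothetical placeholders for this example.

    # Sketch: function-calling with a Granite 3.3 Instruct model through the chat interface.
    # The get_weather tool is hypothetical; define tools that match your own application.
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(url="https://<your-watsonx-host>", api_key="<your-api-key>")

    model = ModelInference(
        model_id="ibm/granite-3-3-8b-instruct",
        credentials=credentials,
        project_id="<your-project-id>",
    )

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = model.chat(
        messages=[{"role": "user", "content": "What is the weather in Rome today?"}],
        tools=tools,
    )
    # The assistant message may contain tool_calls that name the function to run and its arguments.
    print(response["choices"][0]["message"])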

Sizes
  • 2 billion parameters
  • 8 billion parameters

Token limits

Context window length (input + output)

  • 8b: 131,072

Note: The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 16,384.

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune these Granite models for languages beyond these 12 languages.

Supported programming languages

The Granite Instruct models are trained with code written in 116 programming languages.

Instruction tuning information

The Granite Instruct models are fine-tuned from Granite base models that are trained on over 12 trillion tokens, using a combination of permissively licensed open-source and proprietary instruction data.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

granite-3-2-8b-instruct

Granite 3.2 Instruct is a long-context foundation model that is fine-tuned for enhanced reasoning capabilities. The thinking capability is configurable, which means you can control when reasoning is applied.

Usage

Capable of common generative tasks, including code-related tasks, function-calling, and multilingual dialogs. Specializes in reasoning and long-context tasks such as summarizing long documents or meeting transcripts and responding to questions with answers that are grounded in context provided from long documents.

Size

8 billion parameters

Token limits

Context window length (input + output): 131,072

Note: The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 16,384.

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese

Instruction tuning information

Built on top of Granite-3.1-8B-Instruct, the model was trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Instruct 3.1 models

The Granite Instruct foundation models belong to the IBM Granite family of models. The granite-3-2b-instruct and granite-3-8b-instruct foundation models are generation 3.0 instruct-tuned language models for tasks like summarization, generation, coding, and more. The foundation models employ a GPT-style decoder-only architecture, with additional innovations from IBM Research and the open community.

A foundation model modification was introduced to update these models to IBM Granite 3.1. The 1.1.0 version of the models builds on earlier iterations to provide better support for coding tasks and intrinsic functions for agents.

Usage

Granite Instruct foundation models are designed to excel in instruction-following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, function-calling, and more.

You can use the granite-3-1-8b-base foundation model from the Granite 3.1 model family for fine-tuning purposes only. You cannot inference this model directly.

Sizes
  • 2 billion parameters
  • 8 billion parameters
Try it out

Experiment with samples:

Token limits

Context window length (input + output) for Granite 3.0 version of the models

  • 2b: 4,096
  • 8b: 4,096

Context window length (input + output) for Granite 3.1 version of the models

  • 2b: 131,072
  • 8b: 131,072
Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified).

Supported programming languages

The Granite Instruct models are trained with code written in 116 programming languages.

Instruction tuning information

The Granite Instruct models are fine-tuned from Granite base models that are trained on over 12 trillion tokens, using a combination of permissively licensed open-source and proprietary instruction data.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

granite-13b-instruct-v2

This foundation model is withdrawn in the 2.3.0 release. See Foundation model lifecycle.

The granite-13b-instruct-v2 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.

Usage

Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output.

Size

13 billion parameters

Try it out

Experiment with samples:

Token limits

Context window length (input + output): 8,192

Note: The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 4,096.

Supported natural languages

English

Instruction tuning information

The Granite family of models is trained on enterprise-relevant datasets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and datasets used.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Code models

The Granite Code foundation models belong to the IBM Granite family of models. They are instruction-following models that are fine-tuned using a combination of Git commits paired with human instructions and open-source, synthetically generated code instruction datasets.

The granite-8b-code-instruct v2.0.0 foundation model can process larger prompts with an increased context window length.

Usage

The following Granite Code foundation models are designed to respond to coding-related instructions and can be used to build coding assistants:

  • granite-3b-code-instruct
  • granite-8b-code-instruct
  • granite-20b-code-instruct
  • granite-34b-code-instruct

The following Granite Code foundation models are instruction-tuned versions of the granite-20b-code-base foundation model that are designed for text-to-SQL generation tasks. A minimal prompting sketch follows the list.

  • granite-20b-code-base-schema-linking
  • granite-20b-code-base-sql-gen
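
As a sketch of the text-to-SQL flow, the example below sends a schema snippet and a natural-language question to granite-20b-code-base-sql-gen with a plain text-generation call. The prompt layout and credentials are illustrative placeholders only; these models expect their own documented prompt formats, so check the model documentation before relying on this shape.

    # Sketch: text-to-SQL generation. The prompt layout below is illustrative only; the
    # schema-linking and sql-gen models expect their own documented prompt formats.
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(url="https://<your-watsonx-host>", api_key="<your-api-key>")

    sql_model = ModelInference(
        model_id="ibm/granite-20b-code-base-sql-gen",
        credentials=credentials,
        project_id="<your-project-id>",
        params={"max_new_tokens": 200},
    )

    prompt = (
        "Database schema:\n"
        "CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, order_date DATE);\n\n"
        "Question: What was the total value of orders placed in March 2024?\n"
        "SQL:"
    )

    print(sql_model.generate_text(prompt=prompt))
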
Sizes
  • 3 billion parameters
  • 8 billion parameters
  • 20 billion parameters
  • 34 billion parameters
Try it out

Experiment with samples:

Token limits

Context window length (input + output)

  • granite-3b-code-instruct: 128,000

    The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 8,192.

  • granite-8b-code-instruct: 128,000

    The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 4,096.

  • granite-20b-code-instruct: 8,192

    The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 4,096.

  • granite-20b-code-base-schema-linking: 8,192

  • granite-20b-code-base-sql-gen: 8,192

  • granite-34b-code-instruct: 8,192

Supported natural languages

English

Supported programming languages

The Granite Code foundation models support 116 programming languages, including Python, JavaScript, Java, C++, Go, and Rust. For the full list, see IBM foundation models.

Instruction tuning information

These models were fine-tuned from Granite Code base models on a combination of permissively licensed instruction data to enhance instruction-following capabilities including logical reasoning and problem-solving skills.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Guardian 3.2 5b

The Granite Guardian foundation models belong to the IBM Granite family of models. The granite-guardian-3-2-5b foundation model is a streamlined version of Granite Guardian 3.1 8B that is designed to detect risks in prompts and responses. The foundation model helps with risk detection along many key dimensions in the AI Risk Atlas.

Usage

Granite Guardian foundation models are designed to detect harm-related risks within prompt text or model response (as guardrails) and can be used in retrieval-augmented generation use cases to assess context relevance (whether the retrieved context is relevant to the query), groundedness (whether the response is accurate and faithful to the provided context), and answer relevance (whether the response directly addresses the user's query).
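
As a sketch of a guardrail check, the example below sends a user prompt to the granite-guardian-3-2-5b model through the chat interface and reads back a Yes/No risk judgment. The system instruction and credentials are illustrative placeholders; the Guardian models define their own prompt template and risk configuration, which you should consult for production use.

    # Sketch: screening a user prompt with a Granite Guardian model before it reaches
    # the main assistant. The system instruction below is illustrative only; the Guardian
    # models document their own prompt template and risk definitions.
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(url="https://<your-watsonx-host>", api_key="<your-api-key>")

    guardian = ModelInference(
        model_id="ibm/granite-guardian-3-2-5b",
        credentials=credentials,
        project_id="<your-project-id>",
    )

    user_prompt = "Explain how to pick the lock on my neighbour's front door."

    response = guardian.chat(messages=[
        {"role": "system", "content": "Is the following user message harmful? Answer Yes or No."},
        {"role": "user", "content": user_prompt},
    ])

    verdict = response["choices"][0]["message"]["content"].strip()
    if verdict.lower().startswith("yes"):
        print("Blocked: the prompt was flagged as harmful.")
    else:
        print("The prompt passed the guardrail check.")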

Sizes
  • 5 billion parameters
Try it out

Experiment with samples:

Token limits

Context window length (input + output): 128,000

Supported natural languages

English

Instruction tuning information

The Granite Guardian models are fine-tuned Granite Instruct models that are trained on a combination of human-annotated and synthetic data.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Guardian 3.1 models

The Granite Guardian foundation models belong to the IBM Granite family of models. The Granite Guardian foundation models are fine-tuned Granite Instruct models that are designed to detect risks in prompts and responses. The foundation models help with risk detection along many key dimensions in the AI Risk Atlas.

A foundation model modification was introduced to update these models to IBM Granite 3.1. The 1.1.0 version of the models builds on earlier iterations and is trained on a combination of human-annotated and additional synthetic data to improve performance for risks related to hallucination and jailbreak.

Usage

Granite Guardian foundation models are designed to detect harm-related risks within prompt text or model response (as guardrails) and can be used in retrieval-augmented generation use cases to assess context relevance (whether the retrieved context is relevant to the query), groundedness (whether the response is accurate and faithful to the provided context), and answer relevance (whether the response directly addresses the user's query).

Sizes
  • 8 billion parameters
Try it out

Experiment with samples:

Token limits

Context window length (input + output)

  • 2b: 8,192
  • 8b: 8,192
Supported natural languages

English

Instruction tuning information

The Granite Guardian models are fine-tuned Granite Instruct models that are trained on a combination of human-annotated and synthetic data.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite time series models

Granite time series foundation models belong to the IBM Granite family of models. These models are compact, pretrained models for multivariate time series forecasting from IBM Research. The following versions are available to use for data forecasting in watsonx.ai:

  • granite-ttm-512-96-r2
  • granite-ttm-1024-96-r2
  • granite-ttm-1536-96-r2
Usage

You can apply one of these pretrained models on your target data to get an initial forecast without having to train the model on your data. When given a set of historic, timed data observations, the Granite time series foundation models can apply their understanding of dynamic systems to forecast future data values. These models work best with data points in minute or hour intervals and generate a forecast dataset with up to 96 data points per target channel.
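
For illustration, the sketch below prepares an hourly series with the minimum number of observations that granite-ttm-512-96-r2 requires and assembles the kind of request payload a forecast call needs: the historical observations, a schema that names the timestamp and target columns, and the model ID. The field names are assumptions for this example; how the payload is submitted (SDK or REST) depends on your deployment and is not shown.

    # Sketch: preparing input data for a Granite time series model.
    # granite-ttm-512-96-r2 requires at least 512 observations per channel and
    # forecasts up to 96 future points per target channel.
    import pandas as pd

    history = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01", periods=512, freq="h"),
        "energy_kwh": [100.0 + i * 0.1 for i in range(512)],   # illustrative values
    })

    # Illustrative payload shape; the field names here are assumptions for the example.
    payload = {
        "model_id": "ibm/granite-ttm-512-96-r2",
        "data": {
            "timestamp": history["timestamp"].astype(str).tolist(),
            "energy_kwh": history["energy_kwh"].tolist(),
        },
        "schema": {
            "timestamp_column": "timestamp",
            "target_columns": ["energy_kwh"],
        },
    }
    # Submit 'payload' with the time series forecast API of your watsonx.ai deployment;
    # the response contains up to 96 forecast values per target column.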

Size

1 million parameters

Try it out

See Forecast future values

Context length

Required minimum data points per channel in the API request:

  • granite-ttm-512-96-r2: 512
  • granite-ttm-1024-96-r2: 1,024
  • granite-ttm-1536-96-r2: 1,536
Supported natural languages

English

Instruction tuning information

The Granite time series models were trained on almost a billion samples of time series data from various domains, including electricity, traffic, manufacturing, and more.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Vision 3.3 2b

Granite Vision 3.3 2b is a compact and efficient vision-language foundation model that is built for enterprise use cases. The granite-vision-3-3-2b model introduces novel experimental features such as image segmentation, doctag generation, and multi-page support. The model also offers enhanced safety compared to earlier Granite Vision models.

Usage

The granite-vision-3-3-2b foundation model is designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
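
The sketch below shows one way an image might be passed to the model through the chat interface: the file is base64-encoded and sent as an image_url content part alongside a text question. It assumes that the deployment's chat API accepts OpenAI-style image content and uses placeholder credentials.

    # Sketch: asking granite-vision-3-3-2b a question about a chart image.
    # Assumes the chat API accepts base64-encoded images as image_url content parts.
    import base64
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    credentials = Credentials(url="https://<your-watsonx-host>", api_key="<your-api-key>")

    model = ModelInference(
        model_id="ibm/granite-vision-3-3-2b",
        credentials=credentials,
        project_id="<your-project-id>",
    )

    with open("quarterly_revenue_chart.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = model.chat(messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Which quarter shows the highest revenue in this chart?"},
            {"type": "image_url", "image_url": {"url": "data:image/png;base64," + image_b64}},
        ],
    }])

    print(response["choices"][0]["message"]["content"])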

Size

2 billion parameters

Token limits

Context window length (input + output): 131,072

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese.

Instruction tuning information

The granite-vision-3-3-2b foundation model was trained on a curated instruction-following dataset, comprising diverse public datasets and synthetic datasets tailored to support a wide range of document understanding and general image tasks. The model was trained by fine-tuning the granite-3-2b-instruct foundation model with both image and text modalities.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources:

Granite Vision 3.2 2b

This model is deprecated. See Foundation model lifecycle.

Granite Vision 3.2 2b is an image-to-text foundation model that is built for enterprise use cases. This multimodal Granite model is capable of ingesting images and text for tasks like understanding charts, diagrams, graphs, and more.

Usage

The granite-vision-3-2-2b foundation model is designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.

Note: It is recommended to use the granite-vision-3-2-2b model only with image files for visual processing and understanding use cases.
Size

2 billion parameters

Token limits

Context window length (input + output): 131,072

Note: The maximum new tokens, which means the tokens generated by the foundation model per request, is limited to 16,384.

Supported natural languages

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese

Instruction tuning information

The granite-vision-3-2-2b foundation model was trained on a curated instruction-following dataset, comprising diverse public datasets and synthetic datasets tailored to support a wide range of document understanding and general image tasks. It was trained by fine-tuning the granite-3-2b-instruct foundation model with both image and text modalities.

Model architecture

Decoder

License

IBM-developed foundation models are considered part of the IBM watsonx.ai service. For more information about contractual protections related to IBM indemnification, see License information.

Learn more
Read the following resources: