Model Collection
View the full Granite Guardian collection on Hugging Face
Run locally with Ollama
Download and run Granite Guardian with Ollama
Demo
Check out Granite Guardian in action
Quick start guide
Try out the quick start guide to identify potential risks
All Granite Guardian resources
Visit the GitHub repository for all Granite Guardian resources
What’s New
Granite Guardian 4.1 8B introduces improved Bring Your Own Criteria (BYOC) support, enabling users to define arbitrary judging criteria beyond the pre-baked safety and hallucination detectors. The model can now faithfully evaluate complex, multi-part requirements such as formatting rules, length constraints, and domain-specific instructions. Key improvements over Granite Guardian 3.3:
- BYOC capability: Significant gains on instruction-following and requirement-checking benchmarks, enabling the model to faithfully judge complex, multi-part user-defined criteria.
- Best-of-N reward model: Can serve as a reward model for best-of-N selection on verifiable tasks, outperforming dedicated reward models up to 70B parameters.
- Hybrid thinking: Supports both thinking mode (with detailed reasoning traces) and non-thinking mode (low-latency yes/no judgements).
- Function calling: Stronger hallucination detection in agentic workflows.
- Maintained safety and groundedness: Retains strong performance on OOD safety and RAG groundedness benchmarks.
Overview
The Granite Guardian models are a family of models and LoRA adapters designed to judge whether model inputs and outputs meet specified criteria. The models come pre-baked with certain criteria, including but not limited to: jailbreak attempts, profanity, and hallucinations related to tool calls and RAG (retrieval-augmented generation) in agent-based systems. Additionally, the models enable users to bring their own criteria (BYOC) and tailor the judging behavior to their specific use case(s). The Granite Guardian LoRA adapters can be layered atop the Granite Guardian models to tackle more specific, downstream tasks. This version of Granite Guardian is a hybrid thinking model that can operate in thinking or non-thinking mode. In thinking mode, the model produces detailed reasoning traces inside <think> tags followed by the score in <score> tags. In non-thinking mode, the model only produces the judgement score in <score> tags. Since its inception, Granite Guardian has remained in the top 3 on the LLM-AggreFact Leaderboard (as of 10/2/2025).
The Granite Guardian models are enterprise-grade risk detection models that are applicable across a wide range of enterprise applications:
- Detecting harm-related risks within prompt text, model responses, or conversations (as guardrails). These are fundamentally different use cases: the first assesses user-supplied text, the second evaluates model-generated text, and the third evaluates the last turn of a conversation.
- RAG (retrieval-augmented generation) use cases, where the guardian model assesses three key issues: context relevance (whether the retrieved context is relevant to the query), groundedness (whether the response is accurate and faithful to the provided context), and answer relevance (whether the response directly addresses the user’s query).
- Function calling risk detection within agentic workflows, where Granite Guardian evaluates intermediate steps for syntactic and semantic hallucinations. This includes assessing the validity of function calls and detecting fabricated information, particularly during query translation.
- Bring Your Own Criteria (BYOC): Users can define arbitrary evaluation rules to judge whether LLM outputs satisfy diverse requirements such as formatting rules, length constraints, or domain-specific instructions.
Prompting Guide
Granite Guardian 4.1 uses a structured prompt format to instruct the model to judge text against a given criterion.
1. Think / No-Think Instruction
Every guardian block begins with a mode selector that controls whether the model produces reasoning traces before scoring:
- Think mode (<think>): The model reasons step-by-step inside <think>...</think> tags before emitting a score. Use this when you want interpretable justifications.
- No-think mode (<no-think>): The model skips reasoning and outputs the score directly. Use this for lower-latency evaluation where only the yes/no judgement is needed.
2. Criteria
After the mode instruction, the block includes a ### Criteria: section that tells the model what to judge. This can be:
- A pre-baked criterion, one of the built-in definitions (e.g., function calling hallucination, groundedness, jailbreaking). See the full list below.
- A custom (BYOC) criterion, any user-defined rule expressed in natural language (e.g., “The response contains no more than 3 sentences.”).
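For illustration, a pre-baked criterion is referenced by name, while a BYOC criterion is written out in full; the exact identifiers below are illustrative (see the pre-baked list later in this page):

```text
### Criteria: groundedness
```

```text
### Criteria: The response contains no more than 3 sentences.
```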
3. Scoring Schema
The block always ends with the scoring schema, which instructs the model to return its yes/no judgement inside <score></score> tags.
Putting It Together
The complete guardian block is appended as the final user message, and the assistant turn is primed according to the selected mode:
- Think mode: the generation is primed with <think>\n, so the model writes its reasoning trace before the score.
- No-think mode: the generation is primed with <think>\n</think>\n (an empty reasoning block), so the model emits the score directly.
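For illustration, a complete guardian block in no-think mode might look like the following; the scoring-schema sentence here is paraphrased, so consult the cookbooks for the exact template text:

```text
<no-think>
### Criteria: The response contains no more than 3 sentences.
Respond "yes" or "no" inside <score></score> tags.
```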
Definitions for Pre-Baked Criteria
The model is specifically trained to judge if text meets any of the following criteria:
- Harm: content considered generally harmful.
- Social Bias: prejudice based on identity or characteristics.
- Jailbreaking: deliberate instances of manipulating AI to generate harmful, undesired, or inappropriate content.
- Violence: content promoting physical, mental, or sexual harm.
- Profanity: use of offensive language or insults.
- Sexual Content: explicit or suggestive material of a sexual nature.
- Unethical Behavior: actions that violate moral or legal standards.
- Context Relevance: retrieved context is not pertinent to answering the user’s question or addressing their needs.
- Groundedness: assistant’s response includes claims or facts not supported by or contradicted by the provided context.
- Answer Relevance: assistant’s response fails to address or properly respond to the user’s input.
- Function Calling Hallucination: assistant’s response contains function calls that have syntax or semantic errors based on the user query and the available tools.
Dataset
Granite Guardian is trained on a combination of human-annotated and synthetic data. Samples from the hh-rlhf dataset were used to obtain responses from Granite and Mixtral models. These prompt-response pairs were annotated for different risk dimensions by a socioeconomically diverse group of people at DataForce. DataForce prioritizes the well-being of its data contributors by ensuring they are paid fairly and receive livable wages for all projects. Additional synthetic data was used to supplement the training set to improve performance for RAG, jailbreak, conversational, and function calling hallucination related risks.
Use Granite Guardian
Cookbooks offer an excellent starting point for working with the models, providing a variety of examples that demonstrate how they can be configured for different scenarios.
- The Quick Start Guide provides steps to start using Granite Guardian for judging prompts (user messages), responses (assistant messages), RAG use cases, or agentic workflows.
- The Detailed Guide explores different pre-baked criteria in depth and shows how to assess custom criteria with Granite Guardian.
Quick start example
The model is pre-baked with certain judging criteria (see the section titled Definitions for Pre-Baked Criteria for the complete list). We will now see a few examples of how to use the pre-baked criteria, how to specify your own criteria, and how to activate thinking mode. Let us set up the imports, load the model, and define a utility function to parse the model outputs, including reasoning traces or chain-of-thought.
Example 1: Detect function calling hallucination
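The cookbooks contain the full runnable code; below is a minimal sketch. The checkpoint id ibm-granite/granite-guardian-4.1-8b and the criterion identifier function_calling_hallucination are assumptions for illustration, and the guardian block is assembled by hand from the prompt format described above (the official chat template may handle this for you).

```python
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id -- substitute the actual Granite Guardian 4.1 8B model id.
MODEL_ID = "ibm-granite/granite-guardian-4.1-8b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def parse_output(text: str):
    """Split a completion into an optional reasoning trace and the yes/no score."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    score = re.search(r"<score>(.*?)</score>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        score.group(1).strip() if score else None,
    )

def judge(messages, criteria: str, think: bool = False):
    """Append the guardian block as the final user message and generate a judgement."""
    mode = "<think>" if think else "<no-think>"
    guardian_block = f"{mode}\n### Criteria: {criteria}"
    chat = messages + [{"role": "user", "content": guardian_block}]
    input_ids = tokenizer.apply_chat_template(
        chat, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=2048)
    # Keep special tokens so the <think>/<score> tags survive decoding.
    completion = tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=False)
    return parse_output(completion)
```

With the helper in place, we judge an assistant turn whose tool call does not follow from the user’s request:

```python
# The user asks for a stock price, but the assistant calls a weather tool.
messages = [
    {"role": "user", "content": "What is the latest closing price of IBM stock?"},
    {
        "role": "assistant",
        "content": '{"name": "get_weather", "arguments": {"city": "Armonk"}}',
    },
]
_, score = judge(messages, "function_calling_hallucination", think=False)
print(score)  # expected: "yes" -- the function call is hallucinated
```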
Example 2: Detect lack of groundedness in RAG settings (think mode)
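Reusing the judge helper from Example 1, and passing the retrieved context inside the user turn (this context-passing convention is illustrative; the cookbooks show the exact format):

```python
# The assistant asserts a height that the retrieved context does not support.
context = (
    "The Eiffel Tower was completed in 1889 and stands 330 metres tall, "
    "including its antennas."
)
messages = [
    {"role": "user", "content": f"Context: {context}\n\nHow tall is the Eiffel Tower?"},
    {"role": "assistant", "content": "The Eiffel Tower is 443 metres tall."},
]
reasoning, score = judge(messages, "groundedness", think=True)
print(reasoning)  # detailed trace from inside <think>...</think>
print(score)      # expected: "yes" -- the response is not grounded in the context
```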
Example 3: Using custom judging criteria
You can define your own criteria by modifying the criteria string:
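For example, with the judge helper from Example 1:

```python
# A custom (BYOC) criterion expressed in natural language.
criteria = "The response contains no more than 3 sentences."
messages = [
    {"role": "user", "content": "Briefly describe photosynthesis."},
    {
        "role": "assistant",
        "content": (
            "Photosynthesis converts light energy into chemical energy. "
            "Plants absorb carbon dioxide and water. Chlorophyll captures sunlight. "
            "The products are glucose and oxygen."
        ),
    },
]
_, score = judge(messages, criteria, think=False)
print(score)  # yes/no judgement against the criterion (see the model card for polarity)
```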
Example 4: Requirement checking (judging instruction following)
Beyond safety and hallucination, Granite Guardian can judge whether a response satisfies specific user-defined requirements, such as formatting rules, length constraints, or multi-part instructions, as sketched below.
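A sketch, again reusing the judge helper from Example 1:

```python
# A multi-part, user-defined requirement covering format, length, and content.
criteria = (
    "The response must (1) be formatted as a bulleted list, "
    "(2) contain exactly three bullets, and (3) mention no brand names."
)
messages = [
    {"role": "user", "content": "Give me three tips for better sleep."},
    {
        "role": "assistant",
        "content": (
            "- Keep a consistent bedtime.\n"
            "- Avoid caffeine after noon.\n"
            "- Dim the lights an hour before bed."
        ),
    },
]
reasoning, score = judge(messages, criteria, think=True)
print(score)  # yes/no judgement of whether all three requirements are met
```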
Scope of use
- Granite Guardian models must be used strictly for the prescribed scoring mode, which generates yes/no outputs based on the specified template. Any deviation from this intended use may lead to unexpected, potentially unsafe, or harmful outputs. The model may also be prone to such behavior via adversarial attacks.
- The reasoning traces or chain of thoughts may contain unsafe content and may not be faithful.
- The model is targeted for risk definitions of general harm, social bias, profanity, violence, sexual content, unethical behavior, jailbreaking, or groundedness/relevance for retrieval-augmented generation, and function calling hallucinations for agentic workflows. It is also applicable for use with custom risk definitions, but these require testing.
- The model is only trained and tested on English data.
- Given their parameter size, the main Granite Guardian models are intended for use cases that require moderate cost, latency, and throughput, such as model risk assessment, model observability and monitoring, and spot-checking inputs and outputs. Smaller models, like Granite-Guardian-HAP-38M for recognizing hate, abuse, and profanity, can be used for guardrailing with stricter cost, latency, or throughput requirements.