Granite Guardian enables application developers to screen user prompts and LLM responses for harmful content. These models are built on top of the latest Granite family of models and are available on various platforms under the Apache 2.0 license. This recipe gets you quickly up and running with AI risk detection. You will need the following credentials to run this recipe in Colab:
  • Hugging Face token
  • watsonx API Key
  • watsonx Project ID
  • watsonx URL
Instructions for obtaining these credentials can be found here.
Note: Granite Guardian examples may contain offensive language, stereotypes, or discriminatory content.
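As a preview of what the recipe does: Guardian-style safety models typically answer a screening request with a short "Yes"/"No" label indicating whether the screened text carries the named risk. The helper below is a hypothetical sketch (not part of the recipe itself) of how such a label might be turned into a boolean verdict; the exact output format can vary by model version.

```python
# Hypothetical helper: parse a Guardian-style "Yes"/"No" risk label.
# This is a sketch under the assumption that the model's first output
# line is the verdict; check the recipe for the authoritative flow.

def parse_guardian_verdict(raw_output: str) -> bool:
    """Return True if the guardian flagged the screened text as risky."""
    label = raw_output.strip().splitlines()[0].strip().lower()
    if label.startswith("yes"):
        return True
    if label.startswith("no"):
        return False
    raise ValueError(f"Unrecognized guardian label: {raw_output!r}")

if __name__ == "__main__":
    print(parse_guardian_verdict("Yes"))   # risky -> True
    print(parse_guardian_verdict("No\n"))  # safe  -> False
```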

Get started

Explore sample code in a GitHub repo

Try it out

Execute sample code in Colab