Model card
View the full model card on Hugging Face
Run locally with Ollama
Download and run Granite Code with Ollama
GitHub
All Granite Code resources
Run Granite Code locally on Mac
Guide for local setup
Coding assistant
watsonx Code Assistant
Use Granite
Recipes and examples
Overview
We introduce the Granite series of decoder-only code models for code-generative tasks (e.g., fixing bugs, explaining code, documenting code), trained on code written in 116 programming languages. A comprehensive evaluation of the Granite Code model family on diverse tasks demonstrates that our models consistently reach state-of-the-art performance among available open-source code LLMs. The key advantages of Granite Code models include:
- All-rounder Code LLM: Granite Code models achieve competitive performance on many kinds of code-related tasks, including code generation, explanation, fixing, editing, translation, and more, demonstrating their ability to solve diverse coding tasks.
- Trustworthy Enterprise-Grade LLM: All our models are trained in a transparent manner, in accordance with IBM’s AI Ethics principles. We release all Granite Code models under an Apache 2.0 license for both research and commercial use.
Granite Code models come in two main variants:
- Granite Code base models: foundation models designed for code-related tasks (e.g., code repair, code explanation, code synthesis).
- Granite Code instruct models: instruction-following models fine-tuned on a combination of Git commits paired with human instructions and open-source, synthetically generated code-instruction datasets.
Data for Granite Code
In the spirit of open innovation, we are open-sourcing data-prep-kit, the framework and pipelines used to prepare the training data for Granite Code models, under the Apache 2.0 license. The framework offers data-transformation pipelines that scale readily from a laptop to a data center, supporting both iterative experimentation and large-scale production.
Model cards
Granite-3b-code-instruct
View model on Hugging Face
Granite-8b-code-instruct
View model on Hugging Face
Granite-20b-code-instruct
View model on Hugging Face
Granite-34b-code-instruct
View model on Hugging Face
Prompts for code
Here, we share sample prompts to get you started. Expect these templates and best practices to be updated as needed. We recommend using Granite Code models with the Question:/Answer: prompt template, without a system prompt.
Basic example
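As an illustrative sketch (the helper name and example question are ours, and the exact whitespace is an assumption that may need adjustment to match the official template), the basic format can be assembled in code like this:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the Question:/Answer: template used by
    Granite Code models (no system prompt). The exact whitespace is an
    assumption; adjust it if the official template differs."""
    # Exactly one line break at the very end of the prompt.
    return f"Question:\n{question}\n\nAnswer:\n"

prompt = build_prompt("Write a Python function that checks whether a string is a palindrome.")
print(prompt)
```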
Code generation
As Granite Code models are useful in a variety of software development scenarios, this section shows some of the most typical ones, starting with a prompt for generating a Python function.
Code explanation
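A prompt that asks the model to explain a given function might look like the following sketch (the function and wording are illustrative, not an official template):

```python
# Illustrative Python function for the model to explain.
code_snippet = '''\
def count_vowels(s):
    return sum(1 for ch in s.lower() if ch in "aeiou")
'''

prompt = (
    "Question:\n"
    "Explain the following Python function.\n\n"
    f"{code_snippet}\n"
    "Answer:\n"
)
print(prompt)
```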
Example that requests an explanation of a Python function.
Code fixing
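A code-fixing prompt might be sketched like this (the buggy line and wording are illustrative):

```python
buggy_line = "if x = 0:"  # bug: assignment used where a comparison is intended

prompt = (
    "Question:\n"
    "The following line of Python code contains a bug. Identify the bug\n"
    "and return a fixed function that uses this line correctly.\n\n"
    f"{buggy_line}\n\n"
    "Answer:\n"
)
print(prompt)
```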
Example that asks the model to identify a bug in a line of Python code; the prompt also asks the model to return a function that fixes the bug.
Code translation
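A Python-to-Java translation prompt might be sketched as follows (the function is illustrative):

```python
# Illustrative Python function to translate.
python_func = '''\
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
'''

prompt = (
    "Question:\n"
    "Translate the following Python function to Java.\n\n"
    f"{python_func}\n"
    "Answer:\n"
)
print(prompt)
```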
Example that asks the model to translate a function from Python to Java.
Math reasoning
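A math-reasoning prompt might look like this sketch (the word problem is illustrative; the final line checks the expected arithmetic):

```python
prompt = (
    "Question:\n"
    "A train travels 120 km in 2 hours and then 60 km in 1 hour. "
    "What is its average speed for the whole trip?\n\n"
    "Answer:\n"
)
print(prompt)

# Sanity check of the expected answer: total distance / total time.
expected = (120 + 60) / (2 + 1)  # 60.0 km/h
```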
Example that solves a math reasoning problem.
Function calling
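A function-calling prompt with a custom system prompt might be sketched like this (the tool schema and wording are illustrative assumptions; the exact schema format the model expects may differ):

```python
import json

# Illustrative tool schema -- not an official format.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {"city": {"type": "string"}},
    }
]

prompt = (
    "System:\n"
    "You are a helpful assistant with access to the following functions. "
    "Use them if required:\n"
    f"{json.dumps(functions, indent=2)}\n\n"
    "Question:\n"
    "What is the weather like in Paris today?\n\n"
    "Answer:\n"
)
print(prompt)
```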
Example that asks the model to choose a function that solves a task. Note that the first part of the prompt is a custom system prompt.
Code completion
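A code-completion prompt that asks the model to finish an assert statement might be sketched as follows (the function and assert are illustrative):

```python
# Illustrative function referenced by the assert statement.
function_def = '''\
def add(a, b):
    return a + b
'''

prompt = (
    "Question:\n"
    "Complete the assert statement below for the given function.\n\n"
    f"{function_def}\n"
    "assert add(2, 3) ==\n\n"
    "Answer:\n"
)
print(prompt)
```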
Example that asks the model to complete an assert statement; a function for that assert statement is also provided as part of the prompt.
Avoiding common issues
- Granite Code models were not designed for natural-language tasks and are not appropriate for such use. Any such use is at your own risk, and you should not rely on the resulting output. Please validate all output independently and consider deploying a hate, abuse, and profanity (HAP) filter. See this notebook for reference.
- Increase max output tokens for longer code responses: Increase the Max output tokens setting so the model does not cut off the code response and return incomplete code.
- Greedy mode for precise output: Use Greedy decoding mode when you need precise, deterministic results.
- Be careful with whitespace and line breaks: Make sure the prompt templates are implemented correctly, with no unintended whitespace and exactly one line break at the end of the prompt.
- Use the existing tags: The model is trained explicitly to handle the special tags “System:”, “Question:”, and “Answer:”.
- A system prompt isn’t required for code models: Try the Question: {PROMPT} Answer: format without a system prompt first. If you don’t get the desired results, try the system prompt as described in the code examples section. You can also experiment with other instructions, but start with the basic Question:/Answer: format.
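Putting this guidance together, here is a minimal sketch of calling a locally running Ollama server with the bare Question:/Answer: format. The endpoint and request fields follow Ollama's documented REST API; the model tag, helper names, and token limit are our assumptions, and temperature 0 is used to approximate greedy decoding:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(question: str, model: str = "granite-code:8b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Question:\n{question}\n\nAnswer:\n",
        "stream": False,
        # Temperature 0 approximates greedy decoding for precise output;
        # num_predict raises the output-token cap so code isn't cut off.
        "options": {"temperature": 0, "num_predict": 512},
    }

def ask_granite(question: str, model: str = "granite-code:8b") -> str:
    """Send the prompt to a locally running Ollama server. Assumes the
    model has already been pulled (e.g. `ollama pull granite-code:8b`)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(question, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A call such as `ask_granite("Write a Python function to reverse a string.")` then returns the model's completion as a string.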