IBM Granite
Achieve over 90% cost savings with Granite's smaller and open models, designed for developer efficiency*
Try Granite Read Granite documentation

Meet Granite

Our third generation of AI language models is here. Fit for purpose and open sourced, these enterprise-ready models deliver strong performance on safety benchmarks and across a wide range of enterprise tasks, from cybersecurity to retrieval-augmented generation (RAG).

IBM Granite 3.0: new open, enterprise-ready models
Granite 3.0 technical paper

NEW Granite 3.2 preview with reasoning capabilities

Read the blog
Models
Granite 3.1 language models

Base and instruction-tuned language models designed for agentic workflows, RAG, text summarization, text analytics and extraction, classification, and content generation.

Read Granite 3.1 documentation Get language models on Hugging Face
Granite for code

Decoder-only models designed for generative code tasks, including code generation, code explanation, and code editing, trained on code written in 116 programming languages.

Read Granite for code documentation Get code models on Hugging Face
Granite for time series

Lightweight models, pre-trained for time-series forecasting and optimized to run efficiently across a range of hardware configurations.

Read Granite time series documentation Get time series models on Hugging Face
Granite Guardian

Safeguard AI with Granite Guardian, which helps ensure enterprise data security and mitigate risks across a variety of user prompts and LLM responses, delivering top performance on 15+ safety benchmarks.

Read Granite Guardian documentation Get Granite Guardian on Hugging Face
Granite for geospatial data

NASA and IBM teamed up to create an AI foundation model for Earth observation using large-scale satellite and remote sensing data.

Get the geospatial model on Hugging Face
Granite embedding models

Embedding models designed to significantly enhance understanding of user intent and increase the relevance of information and sources in response to a query.

Get the embedding models on Hugging Face
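A retrieval step built on embedding models typically ranks candidate passages by cosine similarity between the query vector and each passage vector. A minimal sketch in pure Python; the tiny 3-dimensional vectors here are toy stand-ins for the high-dimensional vectors a Granite embedding model would actually produce:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_passages(query_vec, passage_vecs):
    """Return passage indices sorted by similarity to the query, best first."""
    scored = [(cosine_similarity(query_vec, v), i) for i, v in enumerate(passage_vecs)]
    return [i for _, i in sorted(scored, reverse=True)]

# Toy vectors; real embedding models emit hundreds of dimensions.
query = [1.0, 0.0, 1.0]
passages = [[0.0, 1.0, 0.0], [1.0, 0.1, 0.9], [0.5, 0.5, 0.0]]
order = rank_passages(query, passages)  # most relevant passage first
```

In a RAG pipeline, the top-ranked passages are then inserted into the prompt as grounding context for the language model.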
Benchmarks
Previous generations of Granite models prioritized specialized use cases. In addition to offering even greater efficacy in those arenas, IBM Granite 3.0 models match, and in some cases exceed, the general performance of leading open-weight LLMs across both academic and enterprise benchmarks.
Explore more benchmarks
Why Granite?
Open

Choose the right model, from sub-billion to 34B parameters, open-sourced under Apache 2.0.

Performant

Don’t sacrifice performance for cost. Granite outperforms comparable models1 across a variety of enterprise tasks.

Trusted

Build responsible AI with a comprehensive set of risk and harm detection capabilities, transparency, and IP protection.

Build with Granite

Deploy open-source Granite models in production with Red Hat Enterprise Linux AI and watsonx, which provide the support and tooling needed to confidently deploy AI at scale. Build faster with capabilities such as tool calling, support for 12 languages, multimodal adapters (coming soon), and more.

Build a document-based question answering system by using Docling with Granite 3.1

Use IBM Docling and the open-source Granite 3.1 to perform document visual question answering across various file types

Build a LangChain agentic RAG system using Granite-3.0-8B-Instruct in watsonx.ai

Discover how to build an AI agent that can answer questions

Function calling with IBM Granite 3.0 8B Instruct

In this tutorial, you will use the IBM® Granite-3.0-8B-Instruct model, now available on watsonx.ai™, to perform custom function calling.
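At a high level, function calling works by having the model emit a structured call, typically JSON, which the application parses and dispatches to real code, feeding the result back to the model. A minimal sketch of that dispatch loop, where `get_weather` is a hypothetical example tool and the hard-coded JSON stands in for an actual Granite completion:

```python
import json

# Registry of tools the model is allowed to call (hypothetical example tool).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"Unknown tool: {call['name']}")
    return fn(**call["arguments"])

# In a real application this JSON would come from the model's response.
model_output = '{"name": "get_weather", "arguments": {"city": "Armonk"}}'
result = dispatch(model_output)  # the result is returned to the model as context
```

The tool registry doubles as an allowlist: the model can only trigger functions the application has explicitly exposed.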

Post training quantization of Granite-3.0-8B-Instruct in Python with watsonx

Quantize a pre-trained model in a few different ways to compare the resulting model sizes and how they perform on a task
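Post-training quantization maps full-precision weights to low-bit integers after training, shrinking the model at a small cost in accuracy. A toy illustration of the idea in pure Python using symmetric per-tensor int8 quantization; this is a sketch of the underlying arithmetic, not the watsonx tooling:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: scale so the largest |w| maps to 127."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.04, 0.89]
q, scale = quantize_int8(weights)   # integers in [-127, 127], plus one float scale
restored = dequantize(q, scale)     # close to, but not exactly, the originals
error = max(abs(a - b) for a, b in zip(weights, restored))
```

Each weight now costs 1 byte instead of 4, and the rounding error is bounded by half the scale, which is why accuracy degrades only slightly for well-behaved weight distributions.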

Using foundation models for time series forecasting

Forecast the future based on learning with the TinyTimeMixer (TTM) Granite Model
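Forecasting models such as TTM consume a fixed-length context window and emit a fixed-length forecast horizon, so preparing training data amounts to slicing the series into (context, target) pairs. A minimal windowing sketch in pure Python; the window sizes here are illustrative, and TTM's actual context and horizon lengths differ:

```python
def make_windows(series, context_len, horizon_len):
    """Slice a time series into (context, target) training pairs."""
    pairs = []
    for start in range(len(series) - context_len - horizon_len + 1):
        context = series[start : start + context_len]
        target = series[start + context_len : start + context_len + horizon_len]
        pairs.append((context, target))
    return pairs

series = [1, 2, 3, 4, 5, 6, 7]
pairs = make_windows(series, context_len=4, horizon_len=2)
# First pair: context [1, 2, 3, 4] predicts target [5, 6]
```

At inference time, only the most recent `context_len` observations are passed in, and the model returns the next `horizon_len` values.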

Generating SQL from text with LLMs

Convert text into a structured representation and generate a semantically correct SQL query
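A common text-to-SQL pattern is to include the table schema alongside the question in the prompt, then validate the completion before executing it. A minimal prompt-construction sketch; the `orders` schema and question are hypothetical, and the candidate SQL is a hand-written stand-in for a real model call:

```python
SCHEMA = "CREATE TABLE orders (id INT, customer TEXT, total REAL, placed_at DATE);"

def build_prompt(schema: str, question: str) -> str:
    """Compose a schema-grounded prompt for an LLM text-to-SQL request."""
    return (
        "Given the following SQLite schema:\n"
        f"{schema}\n"
        f"Write a single SQL SELECT statement answering: {question}\n"
        "Return only the SQL."
    )

def looks_safe(sql: str) -> bool:
    """Cheap guard: accept only a single SELECT statement."""
    s = sql.strip().rstrip(";")
    return s.upper().startswith("SELECT") and ";" not in s

prompt = build_prompt(SCHEMA, "What is the total revenue per customer?")
candidate = "SELECT customer, SUM(total) FROM orders GROUP BY customer;"  # stand-in for model output
```

Grounding the prompt in the actual schema keeps the model from inventing column names, and the guard keeps non-SELECT statements from ever reaching the database.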

Build a local AI co-pilot using IBM Granite Code, Ollama, and Continue

Prompt tune a Granite model in Python using a synthetic dataset containing positive and negative customer reviews

View the full Granite cookbook

Granite news

Granite 3.2 preview with reasoning capabilities

This preview release provides a sneak peek at the new reasoning capabilities that will be included in our next official release, Granite 3.2.

Granite 3.1 now available

Discover powerful performance, longer context, new embedding models and more.

Granite 3.0 technical paper

This report presents Granite 3.0 and discloses technical details of pre- and post-training to accelerate the development of open foundation models.

IBM Granite 3.0: new open, enterprise-ready models

Trained on 12 languages + 116 programming languages, the new Granite 3.0 8B and 2B models are here. Explore new benchmarks on performance, safety and security + the latest tutorials.

Stay on top of AI news

Podcast | DeepSeek facts vs hype, model distillation, and open source competition

In episode 40 of Mixture of Experts, the panel tackles DeepSeek-R1 misconceptions, explains model distillation, and dissects the open-source competition landscape.

AI Think Newsletter | Get AI insights delivered

Get a curated selection of AI topics, trends and research sent directly to your inbox.

Podcast | DeepSeek-R1, Mistral IPO, FrontierMath controversy and IDC code assistant report

What does the future hold for DeepSeek? In episode 39 of Mixture of Experts, our panel debriefs DeepSeek-R1, Mistral's IPO indication, the FrontierMath controversy and the IDC gen AI code assistants report.

Article | DeepSeek's AI shows power of small models

DeepSeek-R1 is a digital assistant that performs as well as OpenAI’s o1 on certain AI benchmarks for math and coding tasks, was trained with far fewer chips and is approximately 96% cheaper to use, according to the company.

Next steps
Try Granite Read Granite documentation
Read the IBM statement on IP protection

IBM believes in the creation, deployment and utilization of AI models that advance innovation across the enterprise responsibly. IBM watsonx AI and data platform have an end-to-end process for building and testing foundation models and generative AI. For IBM-developed models, we search for and remove duplication, and we employ URL blocklists, filters for objectionable content and document quality, sentence splitting and tokenization techniques, all before model training.

During the data training process, we work to prevent misalignments in the model outputs and use supervised fine-tuning to enable better instruction following so that the model can be used to complete enterprise tasks via prompt engineering. We are continuing to develop the Granite models in several directions, including other modalities, industry-specific content and more data annotations for training, while also deploying regular, ongoing data protection safeguards for IBM-developed models.

Given the rapidly changing generative AI technology landscape, our end-to-end processes are expected to continuously evolve and improve. As a testament to the rigor IBM puts into the development and testing of its foundation models, the company provides its standard contractual intellectual property indemnification for IBM-developed models, similar to those it provides for IBM hardware and software products.

Moreover, contrary to some other providers of large language models and consistent with the IBM standard approach on indemnification, IBM does not require its customers to indemnify IBM for a customer's use of IBM-developed models. Also, consistent with the IBM approach to its indemnification obligation, IBM does not cap its indemnification liability for the IBM-developed models.

The current watsonx models now under these protections include:

(1) Slate family of encoder-only models.

(2) Granite family of decoder-only models.

Learn more about licensing for Granite models

Footnotes

* How smaller, industry-tailored AI models can offer greater benefits 
https://www.ft.com/partnercontent/ibm/how-smaller-industry-tailored-ai-models-can-offer-greater-benefits.html

1 Performance testing of Granite models conducted by IBM Research against leading open models across both academic and enterprise benchmarks - https://ibm.com/new/ibm-granite-3-0-open-state-of-the-art-enterprise-models