Building AI for business: IBM’s Granite foundation models
7 September 2023
5 min read

It’s an exciting time in AI for business. As we apply the technology more widely across areas ranging from customer service to HR to code modernization, artificial intelligence (AI) is helping increasing numbers of us work smarter, not harder. And as we are just at the start of the AI for business revolution, the potential for improving productivity and creativity is vast.

But AI today is an incredibly dynamic field, and AI platforms must reflect that dynamism, incorporating the latest advances to meet the demands of today and tomorrow. This is why we at IBM continue to add powerful new capabilities to IBM watsonx, our data and AI platform for business.

Today we are announcing our latest addition: a new family of IBM-built foundation models which will be available in watsonx.ai, our studio for generative AI, foundation models and machine learning. Collectively named “Granite,” these multi-size foundation models apply generative AI to both language and code. And just as granite is a strong, multipurpose material with many uses in construction and manufacturing, so too, we believe, will these Granite models deliver enduring value to your business.

Now let’s take a look under the hood at how we built these models and how they can help you take AI to the next level in your business.

IBM’s Granite foundation models are targeted for business

Developed by IBM Research, the Granite models — Granite.13b.instruct and Granite.13b.chat — use a decoder-only architecture, the design that underpins the ability of today’s large language models to predict the next word in a sequence.
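
To make “predicting the next word” concrete, here is a toy sketch of autoregressive, decoder-style generation. The scoring function is a hard-coded stand-in for a real language model, which would instead compute a probability distribution over its entire vocabulary at each step.

```python
# Toy sketch of autoregressive, decoder-style generation: at each step the
# model scores candidate next tokens given everything generated so far and
# appends the best one. The scorer below is a hard-coded stand-in; a real
# LLM computes a distribution over its whole vocabulary.

def toy_next_token_scores(context: list[str]) -> dict[str, float]:
    """Stand-in for a language model's next-token scores."""
    if context[-1] == "Granite":
        return {"models": 0.7, "rocks": 0.2, "counters": 0.1}
    if context[-1] == "models":
        return {"serve": 0.6, "sing": 0.3, "sleep": 0.1}
    return {"business.": 0.8, "pizza.": 0.2}


def generate(prompt: list[str], steps: int = 3) -> list[str]:
    sequence = list(prompt)
    for _ in range(steps):
        scores = toy_next_token_scores(sequence)
        next_token = max(scores, key=scores.get)  # greedy decoding
        sequence.append(next_token)
    return sequence


print(" ".join(generate(["Granite"])))  # -> "Granite models serve business."
```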

At 13 billion parameters, the Granite models are more efficient than larger models and fit on a single V100-32GB GPU. This smaller footprint also reduces their environmental impact, while they still perform well on specialized business-domain tasks such as summarization, question answering and classification. They are widely applicable across industries and support other NLP tasks such as content generation, insight extraction, retrieval-augmented generation (a framework for improving the quality of responses by linking the model to external sources of knowledge) and named entity recognition (identifying and extracting key information in a text).
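
As a rough illustration of why a 13-billion-parameter model fits on a 32 GB card, here is a back-of-the-envelope estimate. It assumes half-precision (16-bit) weights, which the post does not specify; real deployments also need memory for activations and other overhead.

```python
# Rough memory estimate for why a 13B-parameter model can fit on a single
# 32 GB GPU when weights are stored in 16-bit precision. Illustrative
# back-of-the-envelope math only; real serving also needs memory for
# activations, the KV cache and framework overhead.

params = 13e9          # 13 billion parameters
bytes_per_param = 2    # assumed fp16 / bfloat16 weights

weights_gb = params * bytes_per_param / 1024**3
print(f"Approx. weight memory: {weights_gb:.1f} GiB of a 32 GB V100")
# -> roughly 24 GiB, leaving some headroom for inference overhead
```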

At IBM we are laser-focused on building models that are targeted for business. The Granite family is no different, so we trained the models on a variety of datasets — totaling 7 TB before pre-processing and 2.4 TB after pre-processing — to produce 1 trillion tokens, the collections of characters that carry semantic meaning for a model (a quick sanity check on these figures follows the list below). Our selection of datasets was targeted at the needs of business users and includes data from the following domains:

  • Internet: generic unstructured language data taken from the public internet
  • Academic: technical unstructured language data, focused on science and technology
  • Code: unstructured code data sets covering a variety of coding languages
  • Legal: enterprise-relevant unstructured language data taken from legal opinions and other public filings
  • Finance: enterprise-relevant unstructured data taken from publicly posted financial documents and reports
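
To get a feel for these numbers, here is a quick back-of-the-envelope check. It assumes the figures above use decimal terabytes; the exact tokenizer and measurement conventions are not given in this post.

```python
# Rough consistency check on the stated training-data figures. Assumes
# decimal units (1 TB = 1e12 bytes); tokenizer details are not specified.

raw_tb = 7.0        # data gathered before pre-processing
cleaned_tb = 2.4    # data remaining after pre-processing
tokens = 1e12       # 1 trillion tokens produced

kept_fraction = cleaned_tb / raw_tb
bytes_per_token = cleaned_tb * 1e12 / tokens

print(f"Pre-processing keeps about {kept_fraction:.0%} of the raw data")
print(f"Roughly {bytes_per_token:.1f} bytes of cleaned text per token")
# -> about 34% of the raw data retained, ~2.4 bytes of text per token
```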

By training the models on enterprise-specialized datasets, we help ensure they are familiar with the specialized language and jargon of these industries and can make decisions grounded in relevant industry knowledge.

IBM’s Granite foundation models are built for trust

In business, trust is your license to operate. “Trust us” isn’t an argument, especially when it comes to AI. As one of the first companies to develop enterprise AI, IBM grounds its approach to AI development in core principles of trust and transparency. The watsonx AI and data platform lets you go beyond being an AI user and become an AI value creator, with an end-to-end process for building and testing foundation models and generative AI — starting with data collection and ending in control points for tracking the responsible deployment of models and applications — focused on governance, risk assessment, bias mitigation and compliance.

Since the Granite models will be available for clients to adapt to their own applications, every dataset used in training undergoes a defined governance, risk and compliance (GRC) review. We have developed governance procedures for incorporating data into the IBM Data Pile that are consistent with IBM’s AI Ethics principles, and these GRC criteria are addressed across the entire lifecycle of the training data. Our goal is to establish an auditable link from a trained foundation model all the way back to the specific dataset versions on which it was trained.
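
To make the idea of an auditable link concrete, here is a minimal sketch of the kind of provenance record such a process might maintain. The field names, identifiers and structure are hypothetical illustrations, not IBM’s actual tooling.

```python
# Hypothetical sketch of a provenance record linking a trained model back to
# the exact dataset versions it was trained on. Field names and structure are
# illustrative only, not IBM's internal tooling.
from dataclasses import dataclass, field


@dataclass
class DatasetVersion:
    name: str            # e.g. a finance-filings corpus
    version: str         # immutable, versioned snapshot identifier
    grc_review_id: str   # reference to the completed GRC review


@dataclass
class ModelLineage:
    model_name: str
    model_checkpoint: str
    datasets: list[DatasetVersion] = field(default_factory=list)

    def audit_trail(self) -> list[str]:
        """Return one auditable line per dataset version used in training."""
        return [
            f"{self.model_name}@{self.model_checkpoint} <- "
            f"{d.name}@{d.version} (GRC review {d.grc_review_id})"
            for d in self.datasets
        ]


lineage = ModelLineage(
    model_name="granite.13b.instruct",
    model_checkpoint="example-checkpoint",
    datasets=[DatasetVersion("finance-filings", "v3", "GRC-0421")],
)
print("\n".join(lineage.audit_trail()))
```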

Much media attention has (rightly) been focused on the risk of generative AI producing hateful or defamatory output. At IBM we know that businesses can’t afford to take such risks, so our Granite models are trained on data scrutinized by our own “HAP detector,” a language model trained by IBM to detect and root out hateful and profane content (hence “HAP”) and benchmarked against internal as well as public models. The detector assigns a score to each sentence in a document; analytics are then run over those sentences and scores to examine the distribution, which determines what percentage of sentences to filter out.
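
To show the shape of this kind of sentence-level filtering, here is a minimal sketch. The scorer and the fixed threshold are stand-ins: the real HAP detector is an IBM-trained language model, and the actual cut-off is derived from analytics over the score distribution.

```python
# Minimal sketch of sentence-level content filtering of the kind described
# above. The scorer below is a placeholder; the real HAP detector is an
# IBM-trained language model, and the threshold comes from distribution
# analytics rather than a fixed constant.
from typing import Callable


def filter_document(
    sentences: list[str],
    hap_score: Callable[[str], float],
    threshold: float = 0.5,
) -> list[str]:
    """Keep only sentences whose HAP score falls below the threshold."""
    scored = [(s, hap_score(s)) for s in sentences]
    kept = [s for s, score in scored if score < threshold]
    removed_pct = 100 * (len(scored) - len(kept)) / max(len(scored), 1)
    print(f"Filtered out {removed_pct:.0f}% of sentences")
    return kept


def toy_scorer(sentence: str) -> float:
    """Toy stand-in scorer for demonstration only."""
    return 0.9 if "offensive" in sentence.lower() else 0.1


doc = ["This quarter's revenue grew.", "An offensive remark.", "Costs fell."]
print(filter_document(doc, toy_scorer))
```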

Beyond this, we apply a wide range of other quality measures. We search for and remove duplication, which improves the quality of output, and use document-quality filters to remove low-quality documents not suitable for training. We also deploy regular, ongoing data-protection safeguards, including monitoring for websites known for pirating materials or posting other offensive material, and avoiding those websites.
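
As a generic illustration of document-level deduplication, here is an exact-match hashing sketch. The post does not say which technique IBM uses, so this approach is only an assumption for demonstration; production pipelines typically add near-duplicate detection as well.

```python
# Generic illustration of exact-match deduplication of training documents via
# content hashing. Not IBM's stated technique; near-duplicate methods such as
# MinHash are also commonly used in practice.
import hashlib


def deduplicate(documents: list[str]) -> list[str]:
    """Drop documents whose normalized text has already been seen."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique


corpus = ["Quarterly report.", "quarterly report.", "Annual filing."]
print(deduplicate(corpus))  # -> ['Quarterly report.', 'Annual filing.']
```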

And because the generative AI technology landscape is constantly changing, our end-to-end process will continuously evolve and improve, giving businesses results they can trust.

IBM’s Granite foundation models are designed to empower you

Key to IBM’s vision of AI for business is the notion of empowerment. Every organization will be deploying the Granite models to meet its own goals, and every enterprise has its own regulations to conform to, whether they come from laws, social norms, industry standards, market demands or architectural requirements. We believe that enterprises should be empowered to personalize their models according to their own values (within limits), wherever their workloads reside, using the tools in the watsonx platform.

But that’s not all. Whatever you do in watsonx, you retain ownership of your data. We don’t use your data to train our models; you retain control of the models you build and you can take them anywhere.

Granite foundation models: Just the beginning

The initial Granite models are just the beginning: more are planned in other languages, and further IBM-trained models are in preparation. Meanwhile, we continue to add open-source models to watsonx. We recently announced that IBM is offering Meta’s Llama 2-chat 70-billion-parameter model to select clients for early access and plans to make it widely available later in September. In addition, IBM will host StarCoder, a large language model for code trained on more than 80 programming languages, Git commits, GitHub issues and Jupyter notebooks.

In addition to the new models, IBM is also launching new complementary capabilities in the watsonx.ai studio. Coming later this month is the first iteration of our Tuning Studio, which will include prompt tuning, an efficient, low-cost way for clients to adapt foundation models to their unique downstream tasks by training them on their own trusted data. We will also launch our Synthetic Data Generator, which will help users create artificial tabular datasets from custom data schemas or internal datasets. This will allow users to extract insights for AI model training, fine-tuning or scenario simulations with reduced risk, augmenting decision-making and accelerating time to market.
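
As a generic illustration of what generating synthetic tabular data from a schema can look like, here is a short sketch. This is not the watsonx.ai Synthetic Data Generator itself; the schema, column names and sampling rules are invented for the demonstration.

```python
# Generic illustration of generating synthetic tabular rows from a custom
# data schema. Not the watsonx.ai Synthetic Data Generator; the schema and
# sampling rules are invented for this demo.
import random

random.seed(0)  # reproducible demo output

# A hypothetical schema: column name -> how to sample a value.
schema = {
    "customer_id": lambda: random.randint(10_000, 99_999),
    "segment": lambda: random.choice(["retail", "enterprise", "SMB"]),
    "monthly_spend": lambda: round(random.uniform(20.0, 500.0), 2),
    "churned": lambda: random.random() < 0.15,
}


def generate_rows(n: int) -> list[dict]:
    """Sample n synthetic rows that follow the schema above."""
    return [{col: sample() for col, sample in schema.items()} for _ in range(n)]


for row in generate_rows(3):
    print(row)
```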

The addition of the Granite foundation models and other capabilities into watsonx opens up exciting new possibilities in AI for business. With new models and new tools come new ideas and new solutions. And the best part of it all? We’re only getting started.

 
Author
Dinesh Nirmal, SVP, IBM Software
Footnotes

Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.