As enterprises move from generative artificial intelligence (gen AI) experimentation to production, they are looking for foundation models with the right mix of attributes to deliver trusted, performant and cost-effective gen AI. Businesses recognize that they cannot scale gen AI with foundation models they cannot trust.

We are pleased to announce that IBM, with its flagship Granite family of models, has been named a Strong Performer in The Forrester Wave™: AI Foundation Models for Language. Download a copy of the report at no cost.

IBM® Granite™ is IBM’s flagship series of foundation models based on a decoder-only transformer architecture. These models are trained on trusted enterprise data spanning internet, academic, code, legal and finance domains. They are available on IBM watsonx™, Red Hat®, other model marketplaces, and open-source platforms such as Hugging Face and GitHub.
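For teams that want to try Granite programmatically, the following is a minimal sketch that assumes the ibm-watsonx-ai Python SDK; the model ID, endpoint URL and project ID are illustrative placeholders, so consult the watsonx.ai documentation for the identifiers and parameters that apply to your account.

```python
# Minimal sketch: prompting a Granite model hosted on watsonx.ai.
# Assumes the ibm-watsonx-ai Python SDK (pip install ibm-watsonx-ai).
# The endpoint URL, model ID and project ID below are illustrative placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # your watsonx.ai region endpoint
    api_key="YOUR_IBM_CLOUD_API_KEY",
)

model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",   # illustrative Granite model ID
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",
)

# Generation parameter names follow the watsonx.ai text generation API.
response = model.generate_text(
    prompt="Summarize the key obligations in the following contract clause: ...",
    params={"max_new_tokens": 200},
)
print(response)
```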

IBM’s commitment to trust and reliability in AI: A Forrester-endorsed approach

While most model providers focus solely on improving price-performance, IBM prioritizes trust and clarity, while still achieving suitable price-performance, when designing and delivering models for various use cases and applications. We obsess over our clients’ success in their gen AI missions, and hence our modus operandi is: build AI right, deliver the right AI.

According to Forrester, the Granite family of models provides enterprise users with some of the most robust and clear insights into the underlying training data. This is important for efficiently refining model behavior for specific use cases and domains, and for protecting enterprises from risk due to any unlicensed content in the training data.

This recognition validates IBM’s differentiated approach to delivering enterprise-grade foundation models, helping clients accelerate the adoption of gen AI into their business workflows while mitigating foundation model-related risks.

The constructive feedback we received from Forrester on potential areas of improvement is also timely, as IBM reinvigorates its model strategy with open-source innovation. Building on the strong foundation of our current offerings, we have been rapidly expanding our market presence to serve more enterprise customers, partners and developers.

We actively and open-mindedly seek input from industry experts such as Forrester, from our clients and from the broader AI ecosystem as we embark on this growth and transformation journey in foundation models in 2024.

IBM ranked second in the current offering category thanks to its differentiated approach to models:

  • Open: Bringing best-in-class IBM and high-performing open models to our watsonx foundation models library.
  • Trusted: Training models on trusted and governed data to bring gen AI to business applications with confidence.
  • Targeted: Designing models for the enterprise and optimizing them for specific business domains and use cases.
  • Empowering: Providing clients with competitively priced model choices to build AI that best suits their unique business needs and risk profiles.

IBM Granite models received perfect scores in this Forrester Wave for content corpus filtering, IP, model transparency and alignment. Granite is trained and tuned in line with IBM’s AI ethics principles, under the governance of the Office of Privacy and Responsible Technology, and with AI alignment techniques developed by IBM Research®.

Thanks to these inherent characteristics, along with the support of watsonx, a robust and reliable AI and data platform, and hybrid deployment options through Red Hat® OpenShift®, IBM models received top scores for enterprise readiness, governance and security, application development and model management. These validations mirror the results of the recent Stanford Foundation Model Transparency Index, which recognizes Granite models for attributes that make them open and trusted.

Insights from Forrester on IBM Granite

  • The Granite family of models provides enterprise users with robust and clear insights into the underlying training data. This is important for efficiently refining model behavior for specific use cases and domains, and for protecting enterprises from risks associated with unlicensed content in the training data.
  • IBM is a good fit for customers seeking 100% vendor indemnification for model training data, along with AI platform capabilities that empower AI teams to build solutions.
  • Enterprise customers not only receive a model that includes indemnification clauses but also some of the best enterprise tools designed specifically for developing highly governable AI solutions.

IBM continues to evolve its foundation model library with third-party model offerings that extend multimodal and multilingual capabilities and provide bring-your-own-model optionality. This approach complements its research and development and open innovation with the Granite model family.

  • Meta: In April 2024, IBM announced the availability of Llama 3, the next generation of Meta’s open large language model (LLM), on watsonx to help enterprises innovate their AI journeys. The addition of Llama 3 builds on IBM’s collaboration with Meta to advance open innovation for AI. The two companies also created the AI Alliance, which consists of leading organizations across industry, startups, academia, research and government. Since its inception in late 2023, it has grown to include more than 100 members and collaborators.
  • Mistral: At THINK 2024, IBM also announced a new strategic partnership with Mistral AI. IBM will soon offer its customers the Mistral family of commercial models on IBM® watsonx.ai™, both on-premises and in IBM Cloud®. This includes the latest version of Mistral Large, one of the leading AI models in the market today. IBM looks forward to continuing open-source collaboration with Mistral AI, including its popular Mixtral series of open-source mixture-of-experts models and the IBM InstructLab-tuned Mistral 7B variant, Merlinite.
  • Saudi Data and AI Authority (SDAIA): At THINK, IBM also premiered ALLaM, an open Arabic LLM from SDAIA, on the watsonx platform. This move highlights how the two organizations are collaborating to transform the Middle East region with new technologies such as gen AI.

Read more about the recent third-party model partnerships that we announced at the THINK conference. The business development and strategic partnership teams at IBM continue to sign special-purpose commercial and independent software vendor agreements to scale model choices on the platform.

As Forrester states, choice and flexibility will be key factors in enterprise decision-making when it comes to foundation models.

IBM relaunches a bolder model strategy, doubling down on open innovation

Although IBM entered the foundation model market several months after ChatGPT inaugurated the era of gen AI, its strategy, capabilities and innovations are rooted in decades of pioneering work by IBM Research and by IBM’s product and client engineering teams. Collaborations with innovation partners such as Meta further highlight its responsible approach to innovation.

Forrester’s recognition of Granite models just eight months after their release exemplifies IBM’s commitment to delivering trusted, high-performing LLM options to its customers.

At the Red Hat Summit and THINK conference in May, IBM announced a significant milestone in its model strategy by open sourcing Granite code models (ranging from 3B to 34B parameters). These models were trained on 116 programming languages, democratizing code generation and translation with industry-leading performance results. However, for enterprises to benefit from gen AI systems at scale, developers need tools and capabilities that enable them to build trusted and reliable solutions. To meet that need, IBM and Red Hat recently launched InstructLab, a toolkit for aligning models with new skills and knowledge, empowering AI developers to scale enterprise-grade gen AI across business domains and use cases.
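As an illustration of what the open-sourced code models enable, here is a minimal, hypothetical code-generation sketch using the Hugging Face Transformers library; the checkpoint name is an assumption based on the ibm-granite organization on Hugging Face, so check that page for the models actually published and their exact identifiers.

```python
# Minimal sketch: code generation with an open-sourced Granite code model.
# Assumes the transformers, torch and accelerate packages; the checkpoint
# name is illustrative -- see the ibm-granite organization on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-8b-code-instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the instruct variants, the tokenizer’s chat template (tokenizer.apply_chat_template) is usually the better entry point; the plain prompt above simply keeps the sketch short.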

IBM is scaling its market presence through recent strategic partnerships with platform leaders

IBM empowers clients to work with their infrastructure of choice with hybrid multicloud, on-premises and client-managed infrastructure options on watsonx. This enables clients to avoid vendor lock-in and reduce their total cost of ownership. IBM Granite models are also now being merchandised by industry leaders such as NVIDIA, Salesforce and SAP to benefit their enterprise customers with best-in-class language and code models.

  • IBM plans to offer Granite code models on the NVIDIA API catalog as inference microservices, designed to simplify and accelerate the deployment of AI models across GPU-accelerated workstations, data centers and cloud platforms.
  • Salesforce recently announced an expanded strategic partnership with IBM to bring together IBM watsonx AI and data platform capabilities with the Salesforce Einstein 1 Platform for greater customer choice and flexibility in AI and data deployment. This empowers teams to make data-driven decisions and take actions directly in their flow of work.
  • SAP recently announced that the IBM Granite model series is now accessible for use across SAP’s portfolio of cloud solutions and applications, underpinned by the gen AI hub in SAP AI Core.
  • Red Hat Enterprise Linux® AI (RHEL AI) features highly performant, open source-licensed, collaboratively developed Granite language and code models from the InstructLab community, fully supported and indemnified by Red Hat.

IBM Granite family of models

Explore the IBM foundation models library on watsonx and open source models on Hugging Face. View our website to learn more.
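To see what is currently published in the open, the short sketch below enumerates model repositories on Hugging Face, assuming the huggingface_hub client library; the ibm-granite organization name is an assumption, and the output simply reflects whatever is live when you run it.

```python
# Minimal sketch: listing Granite model repositories on Hugging Face.
# Assumes the huggingface_hub package (pip install huggingface_hub);
# the "ibm-granite" organization name is an assumption.
from huggingface_hub import list_models

for model in list_models(author="ibm-granite"):
    print(model.id)  # repository identifier, e.g. organization/model-name
```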

Try IBM watsonx.ai for free
