In 2023, organizational departments such as human resources, IT and customer care focused on generative artificial intelligence (AI) use cases such as summarization, code generation and question-answering to reduce costs and boost productivity. A Gartner executive poll indicates that 55% of organizations are already piloting or implementing generative AI.

The major challenge facing enterprise decision-makers is achieving the right balance between operationalizing generative AI faster and mitigating foundational model-related risks, while staying on top of a rapidly evolving technology landscape. 

The promise of a new technology 

Generative AI is proving its value as a driver of growth and innovation. According to a recent IBM Institute for Business Value (IBV) study, 75% of CEOs believe that gaining a competitive advantage hinges on possessing the most advanced generative AI, with 50% currently integrating it into their products and services. These findings echo the top generative AI use cases for enterprises that IBM has highlighted. 

This is just the beginning. Multimodal, multilingual foundation models and automation agents are expanding the range of generative AI applications and driving adoption across business workflows. Vertical and domain-specific foundation models with fewer parameters, which match the performance of larger models at a lower cost of inference, are also gaining market traction. Furthermore, the techniques for training foundation models continue to evolve, unlocking advanced capabilities and efficiencies and making generative AI even more appealing. 

Challenges for enterprises 

In 2024, enterprise business and technology executives will receive a clear mandate from their leadership and board to transform their business models, offerings and operations with generative AI. An IBM study on responsible AI and ethics reveals that CEOs feel over six times more pressure from their boards and investors to accelerate the adoption of generative AI than to slow it down.

According to “The CEO’s guide to generative AI” report from IBM Institute for Business Value (IBV), the experimentation phase for generative AI leaders is short and intense, with 74% of executives reporting that generative AI will be ready for general rollout in the next three years. As enterprise clients transition from exploration to investigation and production with generative AI, they require the right model choices for the right use cases, a robust platform to customize models and infuse AI into their applications, a hybrid cloud to deploy AI in the infrastructure of choice, and a reliable partner who can help scale and operationalize AI with minimal risks.  

61% of CEOs identify concerns about data lineage and provenance as barriers to adopting generative AI, and 85% of executives anticipate direct interactions between generative AI and customers within the next two years. IBM prioritizes AI ethics, transparent data practices, governance capabilities, and indemnification to instill client trust in AI models for the coming era of enterprise-generative AI adoption. 

While the technology offers ample promise, regulators are waking up to the potential risks and social harms and are drafting policies and laws to help ensure sustainable innovation and diffusion. The EU AI Act, the first comprehensive legal framework for AI worldwide, passed by the European Parliament on March 13, 2024, and the US White House Executive Order on AI, announced in October 2023, both signal government commitment to the oversight and scrutiny of AI. 

Enterprise decision-makers leading generative AI initiatives within their organizations have much to consider. No other technology in the history of humankind has grown so large so quickly, catching most business and technology executives by surprise. 

According to another IBV study on responsible AI and ethics, 58% of business executives see major ethical risks with generative AI adoption, while 79% prioritize AI ethics in their enterprise-wide approach.

What does it take to succeed? 

To succeed in their missions with generative AI, clients need enterprise-grade model options with an easy-to-use toolkit for customization, robust AI or model governance, and flexible deployment from a reliable technology partner. 

Decision-makers evaluating foundation models for scaling generative AI and achieving a steady return on investment should take a strategic view that incorporates both enterprise-grade foundation models and a robust platform to operationalize them. In this context, understanding the nuances of AI model selection becomes paramount, as decision-makers strive to align technological capabilities with business objectives. 

  1. Gauging trust in AI models through metrics such as transparency indexes and hallucination scores relies on maintaining transparency in data management, training and evaluation processes.  
  2. Performance measures help determine critical attributes, such as versatility, accuracy and latency, for the enterprise use case.  
  3. Cost-effectiveness measures can then narrow down model choices that deliver the necessary performance at lower inferencing costs and with fewer computing resources, as sketched below.
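
To make these heuristics concrete, here is a minimal sketch of a weighted scoring approach for shortlisting candidate models. The model names, metric values and weights are hypothetical placeholders rather than benchmarks of any specific models; substitute results from your own evaluations.

```python
# A minimal sketch of a weighted scoring heuristic for comparing candidate
# foundation models. All model names, metric values and weights below are
# hypothetical placeholders; substitute your own benchmark results.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    transparency: float        # 0-1, e.g., derived from a transparency index
    accuracy: float            # 0-1, task accuracy on your evaluation set
    latency_ms: float          # median inference latency
    cost_per_1k_tokens: float  # inference cost in USD

def score(c: Candidate, max_latency_ms: float, max_cost: float) -> float:
    """Combine trust, performance and cost criteria into one score; higher is better."""
    # Normalize latency and cost so that lower values score closer to 1.
    latency_score = max(0.0, 1.0 - c.latency_ms / max_latency_ms)
    cost_score = max(0.0, 1.0 - c.cost_per_1k_tokens / max_cost)
    # Illustrative weights: trust and accuracy dominate, then cost, then latency.
    return 0.35 * c.transparency + 0.35 * c.accuracy + 0.20 * cost_score + 0.10 * latency_score

candidates = [
    Candidate("model-a-13b", transparency=0.9, accuracy=0.82, latency_ms=450, cost_per_1k_tokens=0.0020),
    Candidate("model-b-70b", transparency=0.6, accuracy=0.86, latency_ms=1200, cost_per_1k_tokens=0.0090),
]

best = max(candidates, key=lambda c: score(c, max_latency_ms=2000, max_cost=0.01))
print(f"Shortlisted model: {best.name}")
```

The weights simply encode which criteria matter most for a given use case; a latency-sensitive customer care assistant would weight latency more heavily than a batch summarization workload.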

Client users should be able to customize models for their use cases, company and industry domains through fine-tuning and prompt tuning with an easy-to-use toolkit. They also need specialized database capabilities to store, manage and retrieve the high-dimensional vectors that power generative AI applications.  
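
The paragraph above refers to vector retrieval in general terms; the following sketch shows the basic store-and-retrieve pattern with an in-memory index and a placeholder embedding function. A real application would use an actual embedding model and a dedicated vector database, so treat this purely as an illustration of the data flow.

```python
# A minimal, self-contained sketch of the vector storage and retrieval pattern
# behind many generative AI applications. The embed() function is a stand-in;
# in practice you would use a real embedding model and a vector database
# rather than an in-memory list.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a pseudo-random unit vector derived from the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# "Index" documents by storing their vectors alongside the original text.
documents = ["expense report policy", "vacation carryover rules", "VPN setup guide"]
index = [(doc, embed(doc)) for doc in documents]

def search(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose vectors are most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: float(np.dot(q, item[1])), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(search("how do I connect to the corporate VPN?"))
```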
 
Depending on the use case, the content used at inference time and operational considerations, clients should have the flexibility to deploy models in the infrastructure of their choice. AI guardrails and continuous monitoring help ensure secure and reliable model deployments as organizations scale up generative AI applications. 
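
As a simple illustration of the guardrail and monitoring idea, the sketch below wraps a stubbed model call with a pattern-based output check and basic logging. The patterns, messages and generate() stub are assumptions for illustration only; production deployments would rely on dedicated guardrail and observability tooling.

```python
# A minimal sketch of an output guardrail with basic monitoring. The patterns,
# thresholds and the generate() stub are illustrative only; production systems
# would use dedicated guardrail and observability tooling.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrails")

# Simple deny-patterns standing in for PII or policy checks.
DENY_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like strings
    re.compile(r"\b\d{16}\b"),             # bare 16-digit card-like numbers
]

def generate(prompt: str) -> str:
    """Stand-in for a call to a deployed foundation model."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    """Call the model, screen the output, and log the outcome for monitoring."""
    output = generate(prompt)
    for pattern in DENY_PATTERNS:
        if pattern.search(output):
            log.warning("Guardrail triggered; response withheld.")
            return "The response was withheld by a content guardrail."
    log.info("Response passed guardrails (%d chars).", len(output))
    return output

print(guarded_generate("Summarize this quarter's support tickets."))
```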

Enterprise decision-makers seek a reliable technology partner that understands opportunities and risks in enterprise AI adoption, comprehends key model dimensions, and integrates AI ethics and regulatory preparedness across the generative AI lifecycle, starting with foundation model development. 

Refer to this model evaluation guide for simple decision heuristics that you can apply to refine your model choices. 

IBM's differentiated approach to delivering enterprise-grade foundation models 

As a leader in enterprise AI and hybrid cloud, IBM consistently provides trusted, performant and cost-effective generative AI products and solutions for our clients. Our approach to models includes: 

  • Opening access to best-in-class IBM and proven open-source or third-party models through our IBM watsonx™ foundation models library.  
  • Ensuring models are trained on trusted and governed data for applications that require enterprise-level transparency, governance and performance. For example, IBM publishes the data used to train its Granite models, with three-quarters of pre-training data removed in accordance with IBM’s governance process. 
  • Designing for the enterprise and optimizing for targeted business domains and use cases. For instance, IBM Granite models trained on domain-specific, enterprise-relevant data perform on par with 3-5x larger models on accuracy measures at lower latencies, according to an IBM internal performance evaluation.
  • Empowering clients with competitively priced model choices that best suit their unique business needs and risk profiles. 

Put AI to work with watsonx foundation models 

IBM adopts an open ecosystem approach for its model strategy, integrating proprietary, third-party and open-source models into the watsonx platform, a unified data, AI and governance platform. IBM watsonx foundation models constitute a library of trusted, high-performing and cost-effective models accessible to clients directly from IBM® watsonx.ai™ or through IBM watsonx™ AI Assistants in digital labor, customer experience, application modernization and IT operations.  

Employing a hybrid, multi-cloud approach, IBM offers clients the flexibility to deploy models on their preferred infrastructure, be it software as a service or on-premises. Clients can achieve superior price-to-performance ratios with industry or domain-specific and quantized (infrastructure-optimized) models, alongside an easy-to-use toolkit for customization (fine-tuning and prompt tuning), specialized databases, and flexible hybrid cloud deployment options on the watsonx platform.  

With its 2024 roadmap, IBM aims to empower clients with enterprise-grade multimodal (code, text, audio, image, geospatial) and multilingual model options tailored to their specific business needs, regional interests and risk profiles. 

Granite models developed by IBM Research®  

The development of IBM® Granite™ models adheres to IBM's AI ethics code, prioritizing trust and transparency in both the training data and the model training process. These models use data sets that meet strict criteria for governance, risk and compliance. We designed chat fine-tuning techniques to mitigate hallucinations and misalignments in model outputs.  

Granite models target specific business domains, such as finance, and use cases such as retrieval-augmented generation. They can match or exceed the performance of larger general-purpose models while delivering lower latencies and requiring fewer infrastructure resources.  

Also, Granite models consume only a fraction of graphics processing unit capacity and compute power, resulting in a reduced carbon footprint and total cost of ownership. IBM supports its models by providing clients with strong intellectual property indemnity protection, enabling them to focus on the business impact of AI rather than costly courtroom litigation. 

Learn more about IBM Granite models, including data sources, training steps and performance evaluations, by reading the latest research paper.

Ready to get started? Take the interactive demo and sign up for a trial of watsonx.ai.

Learn more about IBM's model point of view (PoV) and offerings.

This is the first post in our series “Enterprise generative AI made simple.” Stay tuned for future posts.
