Artificial intelligence (AI) adoption is still in its early stages. As more businesses use AI systems and the technology continues to mature and change, improper use could expose a company to significant financial, operational, regulatory and reputational risks. Using AI for certain business tasks or without guardrails in place may also not align with an organization’s core values.
This is where AI governance comes into play: addressing these potential, and often inevitable, problems of adoption. AI governance refers to the practice of directing, managing and monitoring an organization’s AI activities. It includes processes that trace and document the origin of data, models, associated metadata and pipelines for audits.
An AI governance framework ensures the ethical, responsible and transparent use of AI and machine learning (ML). It encompasses risk management and regulatory compliance and guides how AI is managed within an organization.
Foundation models, typically built on the transformer architecture, are modern, large-scale AI models trained on vast amounts of raw, unlabeled data. The rise of the foundation model ecosystem, the result of decades of research in machine learning, natural language processing (NLP) and other fields, has generated a great deal of interest in computer science and AI circles. Open-source projects, academic institutions, startups and legacy tech companies all contributed to the development of foundation models.
Foundation models can use language, vision and more to affect the real world. They are used in everything from robotics to tools that reason and interact with humans. GPT-3, OpenAI’s language prediction model that can process and generate human-like text, is an example of a foundation model.
Foundation models can apply what they learn from one situation to another through self-supervised and transfer learning. In other words, instead of training numerous models on labeled, task-specific data, it’s now possible to pre-train one big model built on a transformer and then, with additional fine-tuning, reuse it as needed.
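As a rough illustration of that pre-train-then-fine-tune pattern, the sketch below fine-tunes a small pre-trained transformer on a labeled classification dataset using the Hugging Face libraries. The model and dataset names are illustrative choices, not a recommendation, and a real project would add evaluation and hyperparameter tuning.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Reuse a pre-trained transformer instead of training a task-specific model
# from scratch; "distilbert-base-uncased" and "imdb" are placeholder choices.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small labeled slice stands in for the "fraction of the data" needed for fine-tuning.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # adapts the pre-trained weights to the new task
```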
Curated foundation models, such as those created by IBM or Microsoft, help enterprises scale and accelerate the use and impact of the most advanced AI capabilities using trusted data. In addition to natural language, models are trained on various modalities, such as code, time-series, tabular, geospatial and IT events data. Domain-specific foundation models can then be applied to new use cases, whether they are related to climate change, healthcare, HR, customer care, IT app modernization or other subjects.
Foundation models are widely used for ML tasks like classification and entity extraction, as well as generative AI tasks such as translation, summarization and creating realistic content. The development and use of these models account for much of the recent wave of AI breakthroughs.
“With the development of foundation models, AI for business is more powerful than ever,” said Arvind Krishna, IBM Chairman and CEO. “Foundation models make deploying AI significantly more scalable, affordable and efficient.”
It’s essential for an enterprise to work with responsible, transparent and explainable AI, which can be challenging to come by in these early days of the technology.
Most of today’s largest foundation models, including the large language model (LLM) powering ChatGPT, have been trained on information culled from the internet. But how trustworthy is that training data? Generative AI chatbots have been known to insult customers and make up facts. Trustworthiness is critical. Businesses must feel confident in the predictions and content that large foundation model providers generate.
The Stanford Institute for Human-Centered Artificial Intelligence’s Center for Research on Foundation Models (CRFM) recently outlined the many risks, as well as opportunities, of foundation models. They pointed out that the topic of training data, including its source and composition, is often overlooked. That’s where the need for a curated foundation model—and trusted governance—becomes essential.
An AI development studio can train, validate, tune and deploy foundation models and build AI applications quickly, requiring only a fraction of the data previously needed. The datasets behind these models are measured by how many “tokens” (words or word parts) they include, and a curated studio offers enterprise-ready datasets of trusted data that have undergone both negative and positive curation.
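For a sense of what “measured in tokens” means in practice, here is a minimal sketch that counts tokens with an off-the-shelf tokenizer (GPT-2’s byte-pair encoding, chosen only as an example):

```python
from transformers import AutoTokenizer

# Datasets are often sized in tokens rather than documents or words.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Foundation models are trained on trillions of tokens of text."
token_ids = tokenizer.encode(text)
print(len(token_ids), "tokens")  # whole words and word pieces both count
```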
Negative curation removes problematic datasets and hate speech and applies profanity filters to screen out objectionable content. Positive curation means adding content from domains, such as finance, legal and regulatory, cybersecurity, and sustainability, that is important for enterprise users.
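A simplified sketch of the two ideas follows, with a placeholder blocklist and placeholder domain sources standing in for real curation pipelines:

```python
# Negative curation: drop documents that contain objectionable terms.
# The blocklist below is a stand-in for a real profanity/hate-speech lexicon.
BLOCKLIST = {"<slur>", "<profanity>"}

def passes_negative_curation(doc: str) -> bool:
    """Return True if the document contains no blocklisted terms."""
    tokens = {t.lower().strip(".,!?") for t in doc.split()}
    return BLOCKLIST.isdisjoint(tokens)

raw_corpus = [
    "Quarterly filings show revenue grew 4 percent.",
    "Some objectionable text ...",
]
curated = [doc for doc in raw_corpus if passes_negative_curation(doc)]

# Positive curation: add vetted, domain-specific sources that matter to enterprise users.
domain_sources = {
    "finance": ["Liquidity coverage ratio guidance summary ..."],
    "cybersecurity": ["Incident response checklist ..."],
}
for docs in domain_sources.values():
    curated.extend(docs)
```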
A fit-for-purpose data store built on an open lakehouse architecture allows you to scale AI and ML while providing built-in governance tools. It can be used with both on-premises and multi-cloud environments. This type of next-generation data store combines a data lake’s flexibility with a data warehouse’s performance and lets you scale AI workloads no matter where they reside.
It allows for automation and integrations with existing databases and provides tools that permit a simplified setup and user experience. It also lets you choose the right engine for the right workload at the right cost, potentially reducing your data warehouse costs by optimizing workloads. A data store lets a business connect existing data with new data and discover new insights with real-time analytics and business intelligence. It helps you streamline data engineering with reduced data pipelines, simplified data transformation and enriched data.
Another benefit is responsible data sharing: the data store supports more users with self-service access to more data while maintaining security and compliance with governance policies and local regulations.
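As a rough sketch of what querying such a store can look like, the example below reads an Apache Iceberg table through Spark. The catalog, bucket and table names are hypothetical, and the Iceberg Spark runtime is assumed to be on the classpath; a lighter SQL engine could read the same governed table for interactive queries.

```python
from pyspark.sql import SparkSession

# Configure a Spark session against a hypothetical Iceberg catalog backed by
# object storage; names and paths are illustrative assumptions.
spark = (
    SparkSession.builder
    .appName("lakehouse-analytics")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3a://analytics-bucket/warehouse")
    .getOrCreate()
)

# The same open-format table can serve heavy transformations in Spark or
# interactive BI queries in another engine, matching the engine to the workload.
orders = spark.table("lakehouse.sales.orders")
daily_revenue = orders.groupBy("order_date").sum("amount")
daily_revenue.show()
```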
As AI becomes more embedded in enterprises’ daily workflows, proactive governance throughout the creation, deployment and management of AI services becomes even more critical to helping ensure responsible and ethical decisions.
Organizations incorporating governance into their AI program minimize risk and strengthen their ability to meet ethical principles and government regulations: 50% of business leaders surveyed said the most important aspect of explainable AI is meeting external regulatory and compliance obligations; yet most leaders haven’t taken critical steps toward establishing an AI governance framework, and 74% are not reducing unintended biases.
An AI governance toolkit lets you direct, manage and monitor AI activities without the expense of switching your data science platform, even for models developed using third-party tools. Software automation helps mitigate risk, manage the requirements of regulatory frameworks and address ethical concerns. The toolkit includes AI lifecycle governance, which monitors, catalogs and governs AI models at scale wherever they reside. It automates the capture of model metadata and improves predictive accuracy in identifying how AI tools are used and where models need to be retrained.
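To make the idea of capturing model metadata concrete, here is a minimal, library-free sketch of recording “model facts” to a simple catalog file. The field names and values are illustrative assumptions, not any product’s schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelFactsheet:
    """Illustrative set of model facts a governance catalog might track."""
    model_name: str
    version: str
    training_data_source: str
    metrics: dict
    owner: str
    approved_for_production: bool
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_factsheet(sheet: ModelFactsheet, catalog_path: str) -> None:
    """Append the model's metadata to a simple JSON-lines catalog."""
    with open(catalog_path, "a", encoding="utf-8") as catalog:
        catalog.write(json.dumps(asdict(sheet)) + "\n")

record_factsheet(
    ModelFactsheet(
        model_name="churn-classifier",
        version="1.3.0",
        training_data_source="warehouse.customer_events_2024",
        metrics={"auc": 0.91, "disparate_impact": 0.88},
        owner="data-science-team",
        approved_for_production=False,
    ),
    catalog_path="model_catalog.jsonl",
)
```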
An AI governance toolkit also lets you design your AI programs based on principles of responsibility and transparency. It helps build trust and document datasets, models and pipelines, because you can consistently understand and explain your AI’s decisions. It also automates the capture of model facts and workflows to comply with business standards; identifies, manages, monitors and reports on risk and compliance at scale; and translates external regulations into policies for automated adherence, audit support and compliance, with dynamic, customizable dashboards and reporting.
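As a small sketch of turning an external requirement into an automated policy check, the example below compares reported model metrics against thresholds; the metric names and thresholds are illustrative assumptions.

```python
# Hypothetical policy derived from external regulations and internal standards.
POLICY = {
    "disparate_impact": {"min": 0.80},   # e.g., a four-fifths-style fairness threshold
    "auc": {"min": 0.85},                # minimum acceptable model quality
}

def evaluate_policy(metrics: dict) -> list[str]:
    """Return a list of policy violations for dashboards or audit reports."""
    violations = []
    for metric, rule in POLICY.items():
        value = metrics.get(metric)
        if value is None:
            violations.append(f"{metric}: not reported")
        elif value < rule["min"]:
            violations.append(f"{metric}: {value} below minimum {rule['min']}")
    return violations

print(evaluate_policy({"disparate_impact": 0.72, "auc": 0.91}))
# ['disparate_impact: 0.72 below minimum 0.8']
```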
Using proper AI governance means your business can make the best use of foundation models while ensuring you are accountable and ethical as you move forward with AI technology.
Proper AI governance is key to harnessing the power of AI while safeguarding against its myriad pitfalls. AI governance involves responsible and transparent management, covering risk management and regulatory compliance, to guide AI’s use within an organization. Foundation models offer a breakthrough in AI capabilities, enabling scalable and efficient deployment across various domains.
Watsonx is a next-generation data and AI platform built to help organizations fully leverage foundation models while adhering to responsible AI governance principles. The watsonx.governance toolkit enables your organization to build AI workflows with responsibility, transparency and explainability.
With watsonx, organizations can: