Enterprise adoption of AI has doubled over the past five years, with CEOs today stating that they face significant pressure from investors, creditors and lenders to accelerate adoption of generative AI. This is largely driven by the realization that we’ve crossed a new threshold of AI maturity, one that introduces a wider spectrum of possibilities, outcomes and cost benefits to society as a whole.
Many enterprises have been hesitant to go “all in” on AI because certain unknowns within the technology erode trust, and security is typically viewed as one of those unknowns. How do you secure AI models? How can you ensure this transformative technology is protected from cyberattacks, whether they take the form of data theft, manipulation and leakage, or evasion, poisoning, extraction and inference attacks?
The global sprint to establish an AI lead—whether amongst governments, markets or business sectors—has spurred pressure and urgency to answer this question. The challenge with securing AI models stems not only from the underlying data’s dynamic nature and volume, but also from the extended “attack surface” that AI models introduce: an attack surface that is new to all. Simply put, to manipulate an AI model or its outcomes for malicious objectives, there are many potential entry points that adversaries can attempt to compromise, many of which we’re still discovering.
But this challenge is not without a solution. In fact, we’re experiencing the largest crowdsourced movement to secure AI that any technology has ever instigated. Initiatives from the Biden-Harris Administration and DHS CISA, along with the European Union’s AI Act, have mobilized the research, developer and security communities to collectively drive security, privacy and compliance for AI.
It is important to understand that security for AI is broader than securing the AI itself. In other words, securing AI is not confined solely to the models and data. We must also treat the enterprise application stack into which AI is embedded as a defensive mechanism, extending protections to the AI within it. By the same token, because an organization’s infrastructure can act as a threat vector that gives adversaries access to its AI models, we must ensure the broader environment is protected.
To appreciate the different means by which we must secure AI—the data, the models, the applications and the full process—we must be clear not only about how AI functions, but also about exactly how it is deployed across various environments.
An organization’s infrastructure is the first layer of defense against threats to AI models. Ensuring that proper security and privacy controls are embedded into the broader IT infrastructure surrounding AI is key. This is an area in which the industry already has a significant advantage: we have the know-how and expertise required to establish optimal security, privacy and compliance standards across today’s complex and distributed environments. It’s important that we also recognize this daily mission as an enabler for secure AI.
For example, enabling secure access for users, models and data is paramount. We should use existing access controls and extend them to secure the pathways to AI models. In a similar vein, AI introduces a new visibility dimension across enterprise applications, which means threat detection and response capabilities must be extended to AI applications.
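As a concrete illustration of extending existing controls to those pathways, here is a minimal sketch that routes every call to a model endpoint through a single authenticated, logged chokepoint. The endpoint URL, token variable and log fields are hypothetical placeholders, assuming a generic REST-style model-serving API sitting behind the organization’s identity and access management system.

```python
# Minimal sketch: route every call to an internal model endpoint through an
# authenticated, logged pathway. The endpoint URL, token variable and log
# format are hypothetical placeholders, not a specific vendor API.
import json
import logging
import os
import time

import requests

MODEL_ENDPOINT = "https://models.internal.example.com/v1/generate"  # hypothetical
logger = logging.getLogger("ai_access")
logging.basicConfig(level=logging.INFO)

def call_model(user_id: str, prompt: str) -> dict:
    token = os.environ["MODEL_API_TOKEN"]  # short-lived credential issued by your IAM system
    response = requests.post(
        MODEL_ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        json={"prompt": prompt},
        timeout=30,
        verify=True,  # keep TLS certificate validation on
    )
    # Emit a structured event so existing threat detection and response tooling
    # can watch AI traffic the same way it watches other application traffic.
    logger.info(json.dumps({
        "event": "model_call",
        "user": user_id,
        "endpoint": MODEL_ENDPOINT,
        "status": response.status_code,
        "ts": time.time(),
    }))
    response.raise_for_status()
    return response.json()
```

Because the events are structured, they can be forwarded to the same detection and response pipeline that already monitors the rest of the enterprise application stack.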
Table-stakes security standards—such as employing secure transmission methods across the supply chain, establishing stringent access controls and infrastructure protections, and strengthening the hygiene and controls of virtual machines and containers—are key to preventing exploitation. As we look at our overall enterprise security strategy, we should apply those same protocols, policies, hygiene and standards to the organization’s AI profile.
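To make the supply chain point concrete, the sketch below fetches a model artifact over TLS and rejects it if its SHA-256 digest does not match a value published out-of-band by the model producer. The registry URL is a hypothetical placeholder, and real pipelines would typically also rely on signed artifacts and existing software supply chain tooling.

```python
# Minimal sketch: verify the integrity of a model artifact pulled from the
# supply chain by checking its SHA-256 digest against a published value.
import hashlib

import requests

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    response = requests.get(url, timeout=60, verify=True)  # TLS certificate validation stays on
    response.raise_for_status()
    digest = hashlib.sha256(response.content).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Artifact digest mismatch for {url}: got {digest}")
    return response.content

# Example with placeholder values:
# model_bytes = fetch_and_verify(
#     "https://registry.internal.example.com/models/summarizer-1.2.bin",
#     expected_sha256="<digest published by the model producer>",
# )
```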
Even though requirements for AI lifecycle management are still taking shape, organizations can leverage existing guardrails to help secure the AI journey. For example, transparency and explainability are essential to preventing bias, hallucination and poisoning, which is why AI adopters must establish protocols to audit the workflows, training data and outputs for the models’ accuracy and performance. In addition, the data’s origin and preparation process should be documented for trust and transparency. This context and clarity can help detect anomalies in the data at an early stage.
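One way to operationalize that documentation is to capture a simple provenance record for each training batch and flag batches that drift from a documented baseline. The field names, drift check and tolerance below are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: record where a training batch came from and run a basic
# sanity check against a documented baseline so anomalies surface early.
import datetime
import statistics

def provenance_record(source: str, preparation_steps: list[str], row_count: int) -> dict:
    """Document data origin and preparation for later audit and transparency."""
    return {
        "source": source,
        "preparation_steps": preparation_steps,
        "row_count": row_count,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def flag_drift(values: list[float], baseline_mean: float, max_deviation: float) -> bool:
    """Return True if the batch mean drifts beyond the documented tolerance."""
    return abs(statistics.mean(values) - baseline_mean) > max_deviation

# Example with placeholder values:
record = provenance_record("crm_export_batch", ["deduplicate", "strip_html"], row_count=10_000)
suspicious = flag_drift([0.42, 0.47, 0.44], baseline_mean=0.45, max_deviation=0.10)
```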
Security must be present across the AI development and deployment stages, including the enforcement of privacy protections and security measures in the training and testing data phases. Because AI models continually learn from their underlying data, it’s important to account for that dynamism, acknowledge potential risks to data accuracy, and incorporate test and validation steps throughout the data lifecycle. Data loss prevention techniques are also essential here to detect and prevent leakage of sensitive personal information (SPI), personally identifiable information (PII) and regulated data through prompts and APIs.
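Here is a minimal sketch of what a prompt-level data loss prevention check can look like, assuming a simple regex screen placed in front of the model. Production DLP relies on far richer classifiers; the two patterns below (email addresses and US SSN-like numbers) are purely illustrative.

```python
# Minimal sketch: screen prompts for obvious PII patterns before they leave
# the enterprise boundary. Real DLP tooling goes far beyond regexes.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report which types were found."""
    findings = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

# Example: block, log or escalate the request if anything was redacted.
clean, found = redact_prompt("Contact jane.doe@example.com about case 123-45-6789.")
```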
Securing AI requires an integrated approach to building, deploying and governing AI projects. This means building AI with governance, transparency and ethics that support regulatory demands. As organizations explore AI adoption, they must evaluate open-source vendors’ policies and practices regarding their AI models and training datasets, as well as the state of maturity of AI platforms. This evaluation should also account for data usage and retention—knowing exactly how, where and when the data will be used, and limiting data storage lifespans to reduce privacy concerns and security risks. Add to that, procurement teams should be engaged to ensure alignment with the enterprise’s current privacy, security and compliance policies and guidelines, which should serve as the base of any AI policies that are formulated.
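On the retention point specifically, the sketch below shows the shape such a control can take: a scheduled job that drops stored prompt and response records once they age past a policy-defined window. The record shape and the 30-day window are hypothetical policy choices, not a recommendation.

```python
# Minimal sketch: enforce a retention window on stored prompt-and-response
# records so data is not kept longer than policy allows.
import datetime

RETENTION_DAYS = 30  # hypothetical policy value

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records stored within the retention window.

    Assumes each record carries a timezone-aware datetime under "stored_at".
    """
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["stored_at"] >= cutoff]
```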
Securing the AI lifecycle also means enhancing current DevSecOps processes to include ML: adopting those processes while building integrations and deploying AI models and applications. Particular attention should be paid to the handling of AI models and their training data: training the AI before deployment and managing versions on an ongoing basis are key to maintaining the system’s integrity, as is continuous training. It is also important to monitor prompts and the people accessing the AI models.
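To illustrate that last point, the sketch below records every model access with the caller’s identity and the exact model version served, hashing the prompt so the trail can be reviewed without duplicating potentially sensitive content. The field names and logging destination are assumptions; in practice these events would feed whatever audit store the DevSecOps pipeline already uses.

```python
# Minimal sketch: record who called which model version with which prompt so
# prompts and access can be reviewed after the fact.
import hashlib
import json
import logging
import time

audit_logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_model_access(user_id: str, model_name: str, model_version: str, prompt: str) -> None:
    audit_logger.info(json.dumps({
        "event": "model_access",
        "user": user_id,
        "model": model_name,
        "version": model_version,  # pin and record the exact version served
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "ts": time.time(),
    }))
```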
By no means is this a comprehensive guide to securing AI, but it is intended to correct common misconceptions about the task. The reality is that we already have substantial tools, protocols and strategies available for the secure deployment of AI.
As AI adoption scales and innovation evolves, security guidance will mature along with it, as has been the case with every technology that has been embedded into the fabric of the enterprise over the years. In the meantime, the practices above can help organizations prepare for the secure deployment of AI across their environments.