In what could be described as a banner year for technology advancements, 2025 showed how powerful—and dangerous—AI can be in the wrong hands. With bad actors automating complex attacks, using AI tools to run social engineering campaigns and manipulating AI agents into exposing sensitive information, it’s no surprise that the year became a game of cat and mouse between defenders and human- and AI-powered adversaries. And while the global average cost of a data breach fell 9% to USD 4.44 million, the average cost in the US hit a record high of USD 10.22 million.
The cybersecurity threats didn’t end with automated chatbots spamming inboxes and tricking AI agents. This year, we saw what could happen when organizations are caught unprepared to deal with the consequences of integrating new tools like AI agents into their workflows: 13% of companies reported an AI-related security incident, and 97% of those affected acknowledged that they lacked proper AI access controls.
Last year’s cybersecurity predictions touched on AI’s increasingly important presence in the cybersecurity preparedness plan. This year, IBM’s predictions for 2026 center on how the integration of autonomous AI into enterprise environments can be both a boon and a burden, depending on whether the proper security measures are implemented.
The agentic shift is no longer theoretical; it’s underway. Autonomous AI agents are reshaping enterprise risk, and legacy security models will crack under the pressure. To stay resilient, organizations must drive a new era of integrated governance and security, built to monitor, validate and control AI behavior at machine speed. This transformation requires embedding security into the very fabric of AI development and governance—ensuring agents operate within ethical and operational boundaries from day one. Anything less risks fragmentation, blind spots and enterprise-wide exposure.
AI is accelerating innovation—but also exposing enterprises to unprecedented risks of intellectual property (IP) loss. In 2026, we’ll see major security incidents where sensitive IP is compromised through shadow AI systems: unapproved tools deployed by employees without oversight. These systems often operate across multiple environments, making it easy for one unmonitored model to trigger widespread exposure. This mirrors the rise of shadow IT a decade ago, but with far higher stakes—AI tools now handle proprietary algorithms, confidential data and strategic decision-making. Closing the gap will require security teams to move at the speed of innovation, delivering approved AI tools and governance frameworks that meet employee needs without sacrificing control.
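To make the governance gap concrete, here is a minimal sketch, in Python, of one way a security team might surface shadow AI: comparing egress logs against a list of sanctioned AI endpoints. The host names, log fields and matching heuristic are assumptions for illustration only, not a description of any particular product or proxy.

```python
# Minimal sketch: flag outbound requests to AI services that are not on an
# approved list. Domain names and log format are illustrative assumptions.
APPROVED_AI_ENDPOINTS = {"ai.internal.example.com"}  # sanctioned tools only

def flag_shadow_ai(egress_log: list[dict]) -> list[dict]:
    """Return egress log entries that point at unapproved AI endpoints."""
    findings = []
    for entry in egress_log:
        host = entry.get("destination_host", "")
        # Treat anything that looks like an AI/LLM API and is not approved
        # as a shadow-AI candidate for security review.
        looks_like_ai = any(tag in host for tag in ("llm", "ai", "inference"))
        if looks_like_ai and host not in APPROVED_AI_ENDPOINTS:
            findings.append(entry)
    return findings

# Example usage with a fabricated log entry:
sample = [{"user": "jdoe", "destination_host": "public-llm-api.example.net",
           "bytes_out": 48_212}]
print(flag_shadow_ai(sample))
```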
With the explosion of AI and the rise of autonomous agents, identity is becoming the easiest—and highest-risk—entry point for attackers. Next year, expect a surge in identity-focused attacks as adversaries exploit gaps in how organizations manage and secure these systems. New attack surfaces are emerging through deepfakes, biometric voice spoofing and model manipulation—threats that existing security frameworks were never designed to address.
Given the sensitivity of AI-driven data and agentic workflows, identity will need to be treated as critical national infrastructure. This shift will require specialized threat-hunting capabilities, AI-specific protections and infrastructure-level security controls to defend against increasingly sophisticated external attacks. Identity will no longer be just an access layer—it will be a strategic security priority on par with networks and cloud.
As autonomous AI agents begin to operate independently across enterprise environments, often outside sanctioned workflows, they access sensitive data with minimal human oversight. These agents replicate and evolve without leaving clear audit trails or conforming to legacy security frameworks. They move faster than conventional monitoring can follow. This creates a new exposure problem: businesses will know data was exposed but won’t know which agents moved it, where it went or why. Systems that can trace agent data access across machine-to-machine interactions will become essential.
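As a rough illustration of what such tracing could look like, the sketch below records agent data access as structured events that capture which agent touched a resource and on whose behalf, so machine-to-machine activity leaves a trail investigators can query later. The field names and in-memory store are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of an agent data-access audit record. The field names and
# storage (an in-memory list) are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAccessEvent:
    agent_id: str            # which agent touched the data
    acting_for: str          # user or parent agent it acts on behalf of
    resource: str            # the data set or system accessed
    action: str              # e.g. "read", "export"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

AUDIT_LOG: list[AgentAccessEvent] = []

def record_access(agent_id: str, acting_for: str, resource: str, action: str):
    """Append a machine-to-machine access event so later investigations can
    answer which agent moved data, where it went and on whose behalf."""
    event = AgentAccessEvent(agent_id, acting_for, resource, action)
    AUDIT_LOG.append(event)
    return event

# Example: a sub-agent reading a customer data set on behalf of another agent.
record_access("report-agent-7", "planner-agent-2", "crm.customers", "read")
print(AUDIT_LOG[-1])
```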
As autonomous agents begin to initiate tasks, delegate authority and interact across enterprise systems, accountability itself will fundamentally change. Traditional security models were built for predictable actors accessing known resources. Agentic systems operate differently: they work independently, spawn sub-agents and cross organizational boundaries without much human oversight. When an agent acts on behalf of a user and delegates to another agent, the audit trail becomes unclear. Logs show that authentication succeeded, but they don’t show who authorized the delegation or under what constraints.
Organizations need new accountability systems built for this reality: systems that trace authorization chains in real time as agents delegate and act across enterprise systems. Without the ability to answer why an action happened, who authorized it and whether it was within scope, security teams will know an action occurred but not whether it was permitted, and breach investigations and compliance responses will be delayed.
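A minimal sketch of what recording such an authorization chain might look like follows: each delegation is captured with its grantor, grantee, scope and expiry, so a later check can answer not just whether authentication succeeded but whether a given action fell within a still-valid, explicitly granted scope. All names and the in-memory store are assumptions for illustration.

```python
# Minimal sketch of delegation records that preserve the authorization chain:
# who delegated to whom, under which constraints, and until when.
from dataclasses import dataclass

@dataclass
class Delegation:
    grantor: str          # user or agent granting authority
    grantee: str          # agent receiving authority
    scope: tuple          # actions the grantee may perform
    expires_at: str       # constraint: delegation is time-bound (ISO 8601)

CHAIN: list[Delegation] = []

def delegate(grantor: str, grantee: str, scope: tuple, expires_at: str) -> None:
    CHAIN.append(Delegation(grantor, grantee, scope, expires_at))

def is_authorized(actor: str, action: str, now: str) -> bool:
    """Check whether the action falls within a still-valid delegation
    granted to this actor, so logs can answer 'was this permitted?'."""
    return any(d.grantee == actor and action in d.scope and now < d.expires_at
               for d in CHAIN)

def chain_for(actor: str) -> list[Delegation]:
    """Reconstruct the delegation path that gave an actor its authority."""
    path, current = [], actor
    for d in reversed(CHAIN):
        if d.grantee == current:
            path.append(d)
            current = d.grantor
    return list(reversed(path))

delegate("alice", "agent-a", ("read_invoices",), "2026-01-01T00:00:00Z")
delegate("agent-a", "agent-b", ("read_invoices",), "2026-01-01T00:00:00Z")
print(is_authorized("agent-b", "read_invoices", "2025-06-01T00:00:00Z"))    # True: in scope
print(is_authorized("agent-b", "export_invoices", "2025-06-01T00:00:00Z"))  # False: out of scope
print(chain_for("agent-b"))  # alice -> agent-a -> agent-b
```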
Crypto-agility is emerging as a cornerstone of enterprise resilience. The rapid evolution of cryptographic standards, the explosion of machine identities and the shrinking lifespan of certificates are pushing legacy encryption infrastructure to its breaking point.
But the threat isn’t just theoretical or future-facing—it’s already embedded in how secrets are managed, identities are scaled and trust is maintained across distributed systems. And with quantum computing on the horizon, the urgency to adopt quantum-safe algorithms adds a new layer of complexity. Organizations that lack agility will find themselves exposed—unable to evolve fast enough to meet emerging threats. Crypto-agility will separate those who can respond in real time from those forced to retrofit security into systems that have already moved on.
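In practice, crypto-agility often comes down to indirection: applications ask policy for "the current algorithm" rather than hard-coding one, so rotating algorithms becomes a configuration change instead of a code rewrite. The sketch below illustrates the pattern with Python's standard hashlib purely as an example; a quantum-safe primitive would slot into the same registry once suitable libraries are adopted.

```python
# Minimal sketch of crypto-agility as an indirection layer: calling code asks
# for the digest algorithm that current policy selects, so swapping algorithms
# is a one-line change. Algorithm choices here are illustrative only.
import hashlib

DIGESTS = {
    "sha256": hashlib.sha256,
    "sha3_512": hashlib.sha3_512,   # a later policy could point here instead
}
CURRENT_DIGEST = "sha256"           # single point of change for rotation

def digest(data: bytes) -> str:
    """Hash with whatever algorithm current policy selects."""
    return DIGESTS[CURRENT_DIGEST](data).hexdigest()

print(digest(b"certificate payload"))
```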
As organizations continue to reinforce their perimeter defenses, attackers are turning their attention to an often-overlooked target: help desk workflows. These systems were designed with convenience in mind rather than resilience, making them a prime entry point for manipulation. Impersonating employees to request password resets will remain a preferred tactic, thriving on urgency and human trust. Logs may show a legitimate reset, yet the real story lies in the subtle social engineering that triggered it. The Scattered Spider incidents of the past couple of years showcased how convincingly phone-based impersonation can defeat identity checks, and that success will only inspire further innovation in this attack method.