IBM AI Roadmap

Large-scale self-supervised neural networks, known as foundation models, multiply the productivity and the multi-modal capabilities of AI. More general forms of AI will emerge to support reasoning and common-sense knowledge.

Strategic milestones
All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.

2027

Foundation models in production become scalable

By 2027, we will routinely double the number of foundation model parameters in production within the same energy envelope every 18 months. Training and inference will be 4x more energy efficient than in 2025.
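The compounding implied by this doubling cadence can be sketched in a few lines (a minimal illustration; the function name and defaults are ours, not IBM's):

```python
def param_growth_factor(months: float, doubling_period: float = 18.0) -> float:
    """Multiplicative growth in parameter count within a fixed energy
    envelope after `months`, assuming one doubling every `doubling_period`
    months (the 18-month cadence stated in the milestone)."""
    return 2.0 ** (months / doubling_period)

# At this cadence, a 3-year span (36 months) yields a 4x increase in
# parameters served from the same energy envelope.
print(param_growth_factor(36))  # → 4.0
```

The same exponential form covers any horizon: 18 months gives 2x, 54 months gives 8x, and so on.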

Why this matters for our clients and the world

Foundation models will scale with growing data volumes and model complexity. watsonx's full-stack infrastructure will be able to support enterprise data warehouses built on foundation model representations.

The technologies and innovations that will make this possible

New foundation models and hybrid architectures will facilitate multi-modal representations and dual-type processing. Algorithmic innovations will make training, adaptation, and compression energy efficient. Advances in network design and performance, system topology, and protocols, together with hardware-software codesign, will optimize the watsonx full stack.

The platform or infrastructure for delivering these innovations

A more powerful watsonx.ai will be available on more advanced infrastructure, incorporating new neural architectures codesigned with AI accelerators and the software stack to support more capable models within the same energy envelope.