IBM AI Roadmap
Large-scale, self-supervised neural networks, known as foundation models, multiply the productivity and multimodal capabilities of AI. More general forms of AI are emerging to support reasoning and commonsense knowledge.
Strategic milestones
All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.
2025
Alter the scaling of generative AI with neural architectures beyond transformers.
We will use a diverse selection of neural architectures, including and beyond transformers, co-optimized with purpose-built AI accelerators to fundamentally alter the scaling of generative AI.
Why this matters for our clients and the world
Use-case-driven, end-to-end optimizations, from transistors to neurons, will expose a vast range of trade-offs among energy consumption, cost, and deployment form factors for AI, unlocking its potential at an unprecedented scale.
The technologies and innovations that will make this possible
Novel neural building blocks will transcend traditional attention mechanisms in transformers. Our open foundation model software stack will be capable of exploiting accelerator-specific innovations for more efficient and capable AI. We will automate the composition and optimization of LLM applications based on user-specified criteria and constraints.
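The roadmap does not specify which attention-free building blocks are meant. As an illustration only, one well-known family replaces quadratic-cost attention with a linear recurrence over tokens, so mixing cost grows as O(T·d) instead of O(T²·d). The function name, decay parameter, and toy data below are hypothetical, not IBM's design; this is a minimal sketch of the general idea, assuming an exponentially decayed recurrence as the mixing rule.

```python
import numpy as np

def linear_recurrence_mix(x, decay=0.9):
    """Hypothetical attention-free token mixer (illustration only).

    Mixes information across a sequence with the recurrence
        h_t = decay * h_{t-1} + x_t,
    giving each output token an exponentially weighted view of its
    past. Cost is O(T*d), vs O(T^2*d) for full self-attention.
    """
    T, d = x.shape
    h = np.zeros(d)
    out = np.empty_like(x, dtype=float)
    for t in range(T):
        h = decay * h + x[t]   # fold the current token into the running state
        out[t] = h             # emit the mixed representation for position t
    return out

# Toy sequence: 4 tokens, 2 channels, all ones.
x = np.ones((4, 2))
y = linear_recurrence_mix(x, decay=0.5)
# First channel accumulates as 1, 1.5, 1.75, 1.875.
```

Blocks like this are one of many candidates; the point of the roadmap item is that the building block and the accelerator are designed together, so the recurrence (or whatever replaces attention) maps efficiently onto the hardware.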
How these advancements will be delivered to IBM clients and partners
Watsonx assistants will incorporate multiple AI agents targeted at different data modalities and tasks. Watsonx will support a variety of cost-effective devices in its deployments.