IBM AI Roadmap
Large-scale self-supervised neural networks, known as foundation models, multiply the productivity and multimodal capabilities of AI. More general forms of AI are emerging to support reasoning and commonsense knowledge.
Strategic milestones
All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.
2025
Alter the scaling of generative AI with neural architectures beyond transformers.
We will use a diverse selection of neural architectures, including and beyond transformers, co-optimized with purpose-built AI accelerators to fundamentally alter the scaling of generative AI.
Why this matters for our clients and the world
Use-case-driven, end-to-end optimizations, from transistors to neurons, will make a vast range of trade-offs available for the energy consumption, cost, and deployment form factors of AI, unlocking its potential at an unprecedented scale.
The technologies and innovations that will make this possible
We will develop novel neural building blocks, along with novel reinforcement learning mechanisms, to create fundamentally more capable and efficient AI models. We will also build a generative computing runtime that manages lower-level interactions with the model, unleashing new capabilities in AI applications without the friction of prompt engineering.
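To make the idea of a generative computing runtime more concrete, here is a minimal hypothetical sketch in Python: the application states a task and its requirements as a programmatic call, and the runtime assembles and issues the underlying prompt. The class names, backend interface, and `instruct` operation are illustrative assumptions for this sketch, not IBM's actual runtime or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    """Wraps a raw text-generation call (any hosted or local model)."""
    generate: Callable[[str], str]

class GenerativeRuntime:
    """Hypothetical runtime that manages lower-level model interactions,
    so applications invoke operations instead of hand-crafting prompts."""

    def __init__(self, backend: ModelBackend):
        self.backend = backend

    def instruct(self, task: str, *, context: str = "",
                 requirements: list[str] | None = None) -> str:
        # The runtime, not the application, assembles the prompt text.
        parts = [f"Task: {task}"]
        if context:
            parts.append(f"Context: {context}")
        for req in requirements or []:
            parts.append(f"Requirement: {req}")
        return self.backend.generate("\n".join(parts))

# Usage: the application expresses intent; prompt assembly stays inside the runtime.
if __name__ == "__main__":
    echo_backend = ModelBackend(generate=lambda prompt: f"[model output for]\n{prompt}")
    runtime = GenerativeRuntime(echo_backend)
    print(runtime.instruct("Summarize the quarterly report",
                           requirements=["at most three sentences"]))
```

The design point this illustrates is the shift from prompt engineering to a programmatic contract: applications depend on stable operations, while the runtime is free to change how it talks to the model underneath.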
How these advancements will be delivered to IBM clients and partners
watsonx assistants will incorporate multiple AI agents targeted at different tasks and their corresponding data modalities. watsonx will support a variety of cost-effective accelerators in its deployments.
