
IBM AI Roadmap

Large-scale, self-supervised neural networks known as foundation models multiply the productivity and multimodal capabilities of AI. More general forms of AI are emerging to support reasoning and commonsense knowledge.


Strategic milestones

All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.


2024

Build multimodal, modular transformers for new enterprise applications.

We will deploy enterprise AI assistants and applications built on advanced transformers and developer-friendly frameworks, enabling the processing of richer contextual information and providing enhanced control and monitoring of generative AI.

Why this matters for our clients and the world

LLM applications will broaden their scope by integrating more easily with core enterprise systems. AI agents and applications will tremendously boost enterprise productivity.

The technologies and innovations that will make this possible

Transformer architectures will become modular and multimodal. We will develop efficient inference techniques for cost-effective processing of large context windows, as well as LLM-oriented orchestration and composition frameworks with modules for AI alignment, trust guardrails, and generative-AI–specific monitoring and risk assessment.
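As a rough illustration of how such an orchestration framework might compose a model call with guardrail and monitoring modules, the Python sketch below chains a stubbed generation step with a hypothetical regex-based guardrail and a simple metrics monitor. All class and function names (Orchestrator, RegexGuardrail, Monitor, fake_generate) are illustrative assumptions, not IBM or watsonx APIs.

```python
# Illustrative sketch only: names are hypothetical, not IBM or watsonx APIs.
# It shows the general shape of an LLM orchestration pipeline that composes
# a model call with guardrail checks and generative-AI-specific monitoring.
import re
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class GuardrailResult:
    passed: bool
    reason: str = ""


class RegexGuardrail:
    """Blocks outputs that match a deny-list pattern (e.g. leaked secrets)."""

    def __init__(self, pattern: str, name: str):
        self.pattern = re.compile(pattern)
        self.name = name

    def check(self, text: str) -> GuardrailResult:
        if self.pattern.search(text):
            return GuardrailResult(False, f"{self.name}: pattern matched")
        return GuardrailResult(True)


@dataclass
class Monitor:
    """Collects per-request metrics for monitoring and risk assessment."""
    records: List[Dict] = field(default_factory=list)

    def log(self, prompt: str, output: str, latency_s: float, blocked: bool):
        self.records.append({
            "prompt_chars": len(prompt),
            "output_chars": len(output),
            "latency_s": round(latency_s, 3),
            "blocked": blocked,
        })


class Orchestrator:
    """Composes a model call with guardrails and monitoring."""

    def __init__(self, generate: Callable[[str], str],
                 guardrails: List[RegexGuardrail], monitor: Monitor):
        self.generate = generate
        self.guardrails = guardrails
        self.monitor = monitor

    def run(self, prompt: str) -> str:
        start = time.monotonic()
        output = self.generate(prompt)
        for guardrail in self.guardrails:
            result = guardrail.check(output)
            if not result.passed:
                self.monitor.log(prompt, output, time.monotonic() - start, True)
                return f"[blocked: {result.reason}]"
        self.monitor.log(prompt, output, time.monotonic() - start, False)
        return output


if __name__ == "__main__":
    # Stand-in for a real model call (e.g. a hosted LLM endpoint).
    def fake_generate(prompt: str) -> str:
        return f"Draft answer to: {prompt}"

    pipeline = Orchestrator(
        generate=fake_generate,
        guardrails=[RegexGuardrail(r"(?i)api[_-]?key", "secret-leak check")],
        monitor=Monitor(),
    )
    print(pipeline.run("Summarize the quarterly sales report."))
    print(pipeline.monitor.records)
```

In a production setting, the stubbed generate callable would be replaced by a hosted model endpoint, and guardrails would typically include policy classifiers and alignment checks rather than simple pattern matching; the sketch only conveys how such modules compose.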

How these advancements will be delivered to IBM clients and partners

watsonx will introduce advanced modular LLMs along with developer-friendly application builders and governance features to accelerate development and deployment of AI applications. watsonx assistants will seamlessly integrate code and language to provide out-of-the-box productivity tools.