IBM AI Roadmap

Large-scale, self-supervised neural networks, known as foundation models, multiply the productivity and multimodal capabilities of AI. More general forms of AI are emerging to support reasoning and commonsense knowledge.

Strategic milestones

All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.

You can learn more about the progress of individual items by downloading the accompanying PDF.

2024

Build multimodal, modular transformers for new enterprise applications.

We will deploy assistants and enterprise applications built on transformers that process richer context, together with large language model (LLM)-oriented frameworks that provide better control and monitoring of generative AI.
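
As a rough sketch of what such control and monitoring can look like in practice, the Python below wraps a stubbed model call with a simple output guardrail and per-request logging. The function names, the blocklist policy, and the stub model are hypothetical illustrations, not IBM's actual framework.

```python
import logging
import time
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-monitor")

# Toy policy for this sketch; a real guardrail would use trained
# classifiers and enterprise-specific rules.
BLOCKLIST = ("password", "secret")

@dataclass
class GenerationRecord:
    prompt: str
    response: str
    latency_s: float
    flagged: bool

def model_generate(prompt: str) -> str:
    """Stand-in for a call to a hosted foundation model."""
    return f"Echoing request: {prompt}"

def guarded_generate(prompt: str) -> GenerationRecord:
    """Wrap generation with a trust guardrail and basic monitoring."""
    start = time.perf_counter()
    response = model_generate(prompt)
    latency = time.perf_counter() - start

    # Guardrail: withhold output that trips the policy check.
    flagged = any(term in response.lower() for term in BLOCKLIST)
    if flagged:
        response = "[response withheld by guardrail]"

    # Monitoring: emit a per-request record for risk assessment.
    log.info("prompt_len=%d latency=%.4fs flagged=%s",
             len(prompt), latency, flagged)
    return GenerationRecord(prompt, response, latency, flagged)

if __name__ == "__main__":
    print(guarded_generate("Summarize this quarter's sales figures.").response)
```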

Why this matters for our clients and the world

LLM applications will broaden their applicability to mission-critical use cases and integrate more easily with core enterprise systems. Generative AI will tremendously boost enterprise productivity.

The technologies and innovations that will make this possible

Transformer architectures will be improved to be multimodal and modular, with decoupled memory. Larger (200B+) models will be trained on larger, higher-quality datasets. We will develop LLM-oriented orchestration and composition frameworks with modules for AI alignment, trust guardrails, and LLM-specific monitoring and risk assessment.
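
One way to read "modular with decoupled memory" is an external key-value store that a transformer layer attends over, so stored knowledge can be swapped or updated without retraining the model. Below is a minimal NumPy sketch of such a memory read; the shapes, names, and random memory bank are assumptions made for illustration, not a description of IBM's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, memory_keys, memory_values):
    """Cross-attention read over a decoupled memory bank.

    query:         (d,)   hidden state from a transformer layer
    memory_keys:   (n, d) keys for n stored memory slots
    memory_values: (n, d) values retrieved when a key matches
    """
    d = query.shape[-1]
    scores = memory_keys @ query / np.sqrt(d)  # (n,) similarity per slot
    weights = softmax(scores)                  # attention over the bank
    return weights @ memory_values             # (d,) blended retrieval

# Hypothetical sizes and a random memory bank, for demonstration only.
rng = np.random.default_rng(0)
d, n = 64, 128
memory_keys = rng.normal(size=(n, d))
memory_values = rng.normal(size=(n, d))
hidden = rng.normal(size=(d,))

retrieved = memory_read(hidden, memory_keys, memory_values)
print(retrieved.shape)  # (64,)
```

Because the memory bank lives outside the model's weights, it can be grown, pruned, or replaced independently of training, which is the practical appeal of decoupling memory from the transformer itself.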

How these advancements will be delivered to IBM clients and partners

Watsonx will introduce more advanced models along with new application enablement and governance features to accelerate the development and deployment of AI applications. Watsonx assistants will seamlessly integrate code and language to provide out-of-the-box productivity tools.