IBM AI Roadmap
Large-scale self-supervised neural networks, i.e., foundation models, multiply the productivity and multi-modal capabilities of AI. More general forms of AI are emerging to support reasoning and common-sense knowledge.
All information being released represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives.
Foundation models extend beyond natural language processing
In 2023, enterprise foundation model use cases will expand beyond natural language processing (NLP). Models with 100B+ parameters will be operationalized for bespoke, targeted use cases, opening the door to broader enterprise adoption.
Governance and trust permeate AI
In 2024, we will integrate trust guardrails throughout the foundation model lifecycle and embed AI governance at the organizational level. Data representations will be optimized across privacy, fairness, explainability, and robustness.
AI becomes more energy and cost efficient
In 2025, we will improve the energy and cost efficiency of foundation model training and inference by 5x and bring 200B+ parameter foundation models to enterprises. It’s all about making them more powerful, useful, and practical.
Foundation models in production become scalable
By 2027, we will be routinely doubling the number of foundation model parameters in production for the same energy envelope every 18 months. Training and inference will be 4x more energy efficient vs. 2025.
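The doubling claim above is simple exponential arithmetic; a minimal sketch follows, taking the roadmap's 18-month doubling period at face value. The starting size (the 200B+ figure from the 2025 milestone) and the time horizon are illustrative assumptions, not IBM projections.

```python
def params_in_envelope(start_params: float, months: int,
                       doubling_months: int = 18) -> float:
    """Parameters deployable in a fixed energy envelope after `months`,
    assuming the count doubles every `doubling_months` months."""
    return start_params * 2 ** (months / doubling_months)

# Illustrative: starting from a 200B-parameter model, three years
# (two 18-month doublings) yields an 800B-parameter model for the
# same energy envelope.
print(params_in_envelope(200e9, 36))  # -> 800000000000.0
```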
Trustworthy and explainable AI starts to reason
2029 will be an inflection point. AI will support diverse forms of reasoning with explainability and trust. Energy efficiency will improve by a further 4x, and scalable, operationalized AI models will be routine in enterprises.
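Taken together, the roadmap's efficiency milestones compound: 5x by 2025, a further 4x by 2027, and 4x more by 2029. The sketch below multiplies these stated factors against a 2023 baseline; that the gains compound multiplicatively, and the choice of baseline year, are assumptions of this illustration.

```python
# Stated efficiency gains per milestone year, relative to the prior milestone.
milestones = {2025: 5, 2027: 4, 2029: 4}

cumulative = 1
for year, factor in sorted(milestones.items()):
    cumulative *= factor
    print(f"{year}: {cumulative}x more efficient than the 2023 baseline")
# 2025: 5x, 2027: 20x, 2029: 80x
```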
Fully multi-modal AI gives enterprises unprecedented scale
By 2030 and beyond, fully multi-modal architectures will learn diverse data representations, and developers will be able to manipulate them at multiple levels of abstraction to give enterprises competitive advantage.