5–11 May 2026
Georgia World Congress Center
Booths #130 and #160
Atlanta, GA
Enterprises are moving from AI readiness to AI execution—where trust, scale and operational efficiency define success.
IBM helps organizations activate AI-ready data, operationalize trusted AI, automate complex IT environments and modernize applications across hybrid cloud.
Built on open technologies and enterprise platforms, IBM enables measurable business outcomes from AI.
Wednesday, 13 May, 2:15 PM–2:55 PM | Sponsored BO session
When virtualization economics shift, the real risk isn’t migration—it’s the operating model you’re left with. In this session, we’ll explore how Cleveland Clinic used virtualization pressure to catalyze a broader data center modernization strategy: building a cloud operating model inside its own facilities to deliver cloud-like agility without compromising control, compliance or mission-critical reliability.
IBM Consulting® led the transformation, aligning core infrastructure decisions—compute, storage, and backup and recovery—with a standardized operating approach, while using AI-assisted analysis to accelerate discovery, reduce uncertainty and shorten planning cycles. The outcome was a net-new operational paradigm focused on consistency and controlled change, modernizing day-two practices through repeatable automation and declarative management.
At the core was OpenShift® Virtualization as the primary modernization path, complemented by Ansible® Automation Platform and Advanced Cluster Management where they delivered added leverage. Attendees will leave with a practical blueprint for converting platform pressure into operational simplification—structuring the right partnership model, making sound infrastructure decisions, industrializing day-two operations and establishing a hybrid foundation ready for secure scale and next-wave AI initiatives.
Tuesday, 12 May, 10:30 AM–11:10 AM EDT | B316—Level 3
As organizations move AI workloads from pilot to production, the real challenge becomes architectural: integrating AI into existing environments, scaling performance predictably and enforcing security and governance across hybrid and sovereign landscapes. Openness is essential: enterprises need a unified platform that brings together models, data, runtimes and infrastructure with full interoperability and control, without adding complexity.
In this session, IBM experts will show how to build a true AI-ready foundation by using the open, trusted Red Hat® platform running on IBM infrastructure. Purpose-built for enterprise scale, this architecture supports dynamic AI and data workloads, reduces operational complexity and embeds governance, security and sovereignty by design. The validated architecture enables seamless interoperability across environments—from public cloud to on-premises. Learn how to:
Build trusted, production-grade AI with IBM infrastructure and Red Hat.
Select the right models, tools and architecture for a production-ready AI stack across hybrid environments.
Integrate and scale AI securely and efficiently across hybrid and sovereign environments while preserving performance, resilience and governance.
This session is designed for architects, developers and technical leaders building AI platforms that are open by design, secure by architecture and enterprise-ready from day one.
Tuesday, 12 May, 3:45 PM–4:05 PM | Sponsored LT session
AI has made writing code dramatically faster. But in enterprise systems, speed alone does not create value. In fact, ungoverned AI usage often increases risk, rework and long-term cost. This talk explores what it takes to move from individual AI experiments to platform-native, governed AI-assisted development.
Using Java™, Quarkus and Bob as concrete examples, we’ll show how AI must become part of the platform layer: versioned, constrained, auditable and aligned with architecture to turn acceleration into durable business impact.
Tuesday, 12 May, 1:25 PM–1:45 PM EDT | Discovery Theater 1
As generative AI adoption surges, organizations face a critical challenge: how to serve large language models (LLMs) efficiently without overprovisioning costly graphics processing unit (GPU) resources.
llm-d is a Kubernetes-native distributed inference platform for scaling LLM serving by intelligently routing requests and disaggregating the inference process into prefill and decode stages across multiple vLLM instances.
The real challenge is scaling LLM inference without wasting GPU resources. It’s not just about NVIDIA Multi-Instance GPU (MIG) slices or time-slicing anymore—we need smarter, fine-grained sharing techniques that minimize idle capacity and maximize efficiency.
This means dynamically allocating GPU memory and compute across multiple models, while ensuring cooperation with the Kubernetes scheduler so resource optimization never violates Pod requirements. The goal is smaller, smarter scaling strategies that go beyond static partitioning for cost-effective, high-performance inference.
In this lightning talk, we explore the Kubernetes community’s dynamic resource allocation (DRA) feature. We'll focus on why it’s a promising foundation for fine-grained GPU resource management and what extensions are still needed to achieve efficient LLM inference at scale.
We’ll highlight capabilities such as PrioritizedList, PartitionableDevices and the recently introduced ConsumableCapacity, and explain how they enable dynamic resource requests and adjustments. Through real-world examples, we’ll demonstrate how DRA can help serve multiple models of varying sizes efficiently and cost-effectively on Red Hat OpenShift.
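As a rough illustration of the DRA request model this talk covers, a workload can ask for a GPU through a ResourceClaim rather than a static device plugin resource. The sketch below is an assumption for illustration only—the `resource.k8s.io/v1beta1` API version, the `gpu.nvidia.com` device class name (as exposed by NVIDIA’s DRA driver) and all object names are not taken from the session content:

```yaml
# Minimal DRA sketch (assumes Kubernetes 1.32+ with the DRA feature enabled
# and a DRA driver publishing a "gpu.nvidia.com" DeviceClass; illustrative only).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: decode-gpu          # hypothetical claim for a decode-stage vLLM instance
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
---
apiVersion: v1
kind: Pod
metadata:
  name: vllm-decode
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: decode-gpu   # bind the pod to the claim above
  containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    resources:
      claims:
      - name: gpu                   # container consumes the claimed device
```

Because the claim is a first-class API object, the scheduler can place the pod only where the requested device is actually allocatable—the cooperation between resource optimization and pod requirements that the abstract describes.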
Tuesday, 12 May, 3:30 PM–5:00 PM EDT | A314—Level 3
In this advanced, hands-on lab, participants will learn how to operationalize AI workloads by using Red Hat AI with IBM Fusion’s intelligent data platform. As AI adoption accelerates, scalable, security-focused and performant infrastructure is critical.
This session guides attendees through deploying and managing intelligent applications that use GPU-accelerated compute, container-native storage and MLOps workflows—all within Red Hat OpenShift.
This lab is ideal for data scientists, platform engineers and IT architects seeking to modernize AI workloads while reducing complexity. Participants will leave with hands-on experience combining Red Hat and IBM technologies to deliver resilient, scalable AI solutions in production. Attendees will gain practical experience in:
Session type: Lab
Wednesday, 13 May, 12:15 PM–12:35 PM EDT | Discovery Theater 4
Many enterprises want to embrace Red Hat OpenShift Virtualization, but can’t justify a full rip-and-replace of their existing hardware stack. In this lightning talk, we’ll explore how Fusion Access helps customers migrate from legacy hypervisors to OpenShift Virtualization while maintaining their current infrastructure investments. You’ll learn how this approach accelerates modernization, simplifies operations and sets the stage for a cloud-native future—all without forklift upgrades.
We’ll share key design considerations, early performance lessons and a quick look at how organizations are making the transition today. If you’re looking to bridge legacy virtualization with modern platforms, this session is for you. Attendees will gain valuable insights on:
Session type: Lightning talk
Thursday, 14 May, 11:00 AM–11:40 AM EDT | B404—Level 4
Discover how IBM Fusion for Red Hat AI bridges the gap between data science innovation and IT operations by transforming GPU-accelerated components into a production-ready, on-premises AI service that delivers speed to insights in days rather than months. Join us as we explore how this turnkey, intelligent AI data platform enables organizations to build, train and deploy models quickly across hybrid cloud environments.
See how teams consolidate virtual machines and AI workloads on a unified platform, providing data sovereignty while accelerating time-to-value in the hybrid cloud. Scale the development and deployment of AI solutions with managed services, across any hardware accelerator, for efficient inference.
Offload the burden of operationalizing AI with Red Hat AI Inference on IBM Cloud® for performance without the management overhead. Run AI-ready infrastructure from a highly available and flexible, open enterprise platform that’s focused on security—even for the most regulated industries.
In this session, you will learn how to:
Session type: Breakout session
Accelerate playbook creation in Ansible Automation Platform with generative AI-powered content recommendations.
Leverage a fully integrated, turnkey platform for running and maintaining all on-premises Red Hat OpenShift applications.
Experience a one-stop, integrated, end-to-end AI deployment studio.
Use a scalable, cyberresilient and sustainable system optimized for Linux®.
Empower organizations to implement cloud right with a unified platform for infrastructure and security lifecycle management.
Use an open source container application platform based on Kubernetes.
Transform your business with consistency, security and scale across any cloud, on premises or at the edge with Red Hat technologies.
Use a scalable, AI-ready, open and flexible-by-design platform powered by IBM Cloud infrastructure. Simplify complexity, reduce costs and unleash innovation.
Collaborate with an AI software development partner that understands your intent, repo and security standards.
The Belgian Ministry of Finance boosts the consistency, functionality and efficiency of its online tax submission system with help from IBM.
IBM Consulting helps Airbus achieve full autonomy while sustaining the A220 Program.
Unipol partners with IBM to reshape IT operations for enterprise-wide AI adoption.