IBM at Red Hat Summit 2026

May 12-14, 2026

Georgia World Congress Center

Booth #130 & #160

Atlanta, GA


Harness the power of open. Turn AI into enterprise advantage.


Enterprises are moving from AI readiness to AI execution—where trust, scale, and operational efficiency define success.

IBM helps organizations activate AI-ready data, operationalize trusted AI, automate complex IT environments, and modernize applications across hybrid cloud.

Built on open technologies and enterprise platforms, IBM enables measurable business outcomes from AI.

Sessions

Wednesday, May 13 | 2:15 PM - 2:55 PM EDT | Sponsored breakout session

When virtualization economics shift, the real risk isn’t migration — it’s the operating model you’re left with. In this session, we’ll explore how Cleveland Clinic used virtualization pressure to catalyze a broader data center modernization strategy: building a cloud operating model inside its own facilities to deliver cloud-like agility without compromising control, compliance, or mission-critical reliability. 

IBM Consulting led the transformation, aligning core infrastructure decisions — compute, storage, and backup/recovery — with a standardized operating approach, while using AI-assisted analysis to accelerate discovery, reduce uncertainty, and shorten planning cycles. The outcome was a net-new operational paradigm focused on consistency and controlled change, modernizing day-two practices through repeatable automation and declarative management.

At the core was OpenShift Virtualization as the primary modernization path, complemented by Ansible Automation Platform and Advanced Cluster Management where they delivered added leverage. Attendees will leave with a practical blueprint for converting platform pressure into operational simplification — structuring the right partnership model, making sound infrastructure decisions, industrializing day-two operations, and establishing a hybrid foundation ready for secure scale and next-wave AI initiatives.

Tuesday, May 12 | 10:30 AM - 11:10 AM EDT | B316 - Level 3

As organizations move AI workloads from pilot to production, the real challenge becomes architectural: integrating AI into existing environments, scaling performance predictably, and enforcing security and governance across hybrid and sovereign landscapes. Openness is essential: a unified platform that brings together models, data, runtimes, and infrastructure with full interoperability and control, without adding complexity.

In this session, IBM experts will show how to build a true AI-ready foundation using Red Hat’s open, trusted platform running on IBM Infrastructure. Purpose-built for enterprise scale, this architecture supports dynamic AI and data workloads, reduces operational complexity, and embeds governance, security, and sovereignty by design. The validated architecture enables seamless interoperability across environments, from public cloud to on-premises. Learn how to:

  • Build trusted, production-grade AI with IBM infrastructure and Red Hat.

  • Select the right models, tools, and architecture for a production-ready AI stack across hybrid environments.

  • Integrate and scale AI securely and efficiently across hybrid and sovereign environments while preserving performance, resilience, and governance.

This session is designed for architects, developers, and technical leaders building AI platforms that are open by design, secure by architecture, and enterprise-ready from day one.

Tuesday, May 12 | 3:45 PM - 4:05 PM EDT | Sponsored lightning talk session

AI has made writing code dramatically faster. But in enterprise systems, speed alone does not create value. In fact, ungoverned AI usage often increases risk, rework, and long-term cost. This talk explores what it takes to move from individual AI experiments to platform-native, governed AI-assisted development.

Using Java, Quarkus, and Bob as concrete examples, we’ll show how AI must become part of the platform layer: versioned, constrained, auditable, and aligned with architecture, so that acceleration becomes durable business impact.

Tuesday, May 12 | 1:25 PM - 1:45 PM EDT | Discovery Theater 1

As generative AI adoption surges, organizations face a critical challenge: how to serve large language models (LLMs) efficiently without overprovisioning costly graphics processing unit (GPU) resources.

llm-d is a Kubernetes-native distributed inference platform for scaling LLM serving by intelligently routing requests and disaggregating the inference process into prefill and decode stages across multiple vLLM instances.

The real challenge is scaling LLM inference without wasting GPU resources. It’s not just about NVIDIA Multi-Instance GPU (MIG) slices or time-slicing anymore—we need smarter, fine-grained sharing techniques that minimize idle capacity and maximize efficiency. This means dynamically allocating GPU memory and compute across multiple models, while ensuring cooperation with the Kubernetes scheduler so resource optimization never violates Pod requirements. The goal is smaller, smarter scaling strategies that go beyond static partitioning for cost-effective, high-performance inference.

In this lightning talk, we explore the Kubernetes community’s dynamic resource allocation (DRA) feature—why it’s a promising foundation for fine-grained GPU resource management and what extensions are still needed to achieve efficient LLM inference at scale. We’ll highlight capabilities such as PrioritizedList, PartitionableDevices, and the recently introduced ConsumableCapacity, and explain how they enable dynamic resource requests and adjustments. Through real-world examples, we’ll demonstrate how DRA can help serve multiple models of varying sizes efficiently and cost-effectively on Red Hat OpenShift.
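The prefill/decode split described above can be made concrete with a small toy scheduler. This is an illustrative sketch only, not llm-d's actual routing logic; the worker names and the queued-token load metric are invented for the example.

```python
# Toy sketch of disaggregated LLM serving: each request is split into a
# prefill stage (compute-heavy prompt processing) and a decode stage
# (memory-bound token generation), and each stage is routed to the
# least-loaded worker in the pool dedicated to that stage.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    role: str                  # "prefill" or "decode"
    queued_tokens: int = 0     # crude load metric for this sketch

@dataclass
class Request:
    prompt_tokens: int
    max_new_tokens: int

def least_loaded(workers, role):
    """Pick the least-loaded worker for a stage (stand-in for smarter routing)."""
    pool = [w for w in workers if w.role == role]
    return min(pool, key=lambda w: w.queued_tokens)

def schedule(request, workers):
    """Assign the prefill and decode stages of one request to separate workers."""
    p = least_loaded(workers, "prefill")
    p.queued_tokens += request.prompt_tokens
    d = least_loaded(workers, "decode")
    d.queued_tokens += request.max_new_tokens
    return p.name, d.name

workers = [
    Worker("prefill-0", "prefill"),
    Worker("prefill-1", "prefill"),
    Worker("decode-0", "decode"),
]
assignments = [schedule(Request(1000, 100), workers) for _ in range(4)]
```

Because prompt processing and token generation stress GPUs differently, splitting the stages lets each pool be sized and scaled independently, which is the efficiency argument the talk makes.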

Tuesday, May 12 | 3:30 PM - 5:00 PM EDT | A314 - Level 3

In this advanced, hands-on lab, participants will learn how to operationalize AI workloads using Red Hat AI with IBM Fusion’s intelligent data platform. As AI adoption accelerates, scalable, security-focused, and performant infrastructure is critical. This session guides attendees through deploying and managing intelligent applications that use GPU-accelerated compute, container-native storage, and MLOps workflows—all within Red Hat OpenShift. This lab is ideal for data scientists, platform engineers, and IT architects seeking to modernize AI workloads while reducing complexity. Participants will leave with hands-on experience combining Red Hat and IBM technologies to deliver resilient, scalable AI solutions in production. Attendees will gain practical experience in:

  • Building and training machine learning (ML) models.
  • Serving models with Red Hat AI and vLLM for high-throughput, low-latency inference with multi- or single-model setups.
  • Integrating IBM Fusion to simplify data access, caching, and protection across hybrid environments.
  • Deploying AI-enabled apps with watsonx.data and Fusion’s content-aware storage for gen AI use cases. 
  • Managing stateful Kubernetes apps with automated orchestration and backup.
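The model-serving step above typically happens over vLLM's OpenAI-compatible HTTP API. The sketch below uses a placeholder endpoint and model name, and separates payload construction from the network call so the request shape can be inspected without a live server.

```python
# Minimal sketch of calling a vLLM server through its OpenAI-compatible API.
# The base URL and model name are placeholders for a real deployment
# (e.g. an OpenShift route in front of a vLLM service).
import json
import urllib.request

def build_completion_payload(model, prompt, max_tokens=64, temperature=0.0):
    """Build the JSON body for a /v1/completions request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(base_url, payload):
    """POST the payload to the server's completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_completion_payload(
    "granite-3-8b-instruct", "Summarize OpenShift in one line."
)
# complete("http://vllm.example.svc:8000", payload)  # requires a running server
```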

Session type: Lab

Wednesday, May 13 | 12:15 PM - 12:35 PM EDT | Discovery Theater 4

Many enterprises want to embrace Red Hat OpenShift Virtualization, but can’t justify a full rip-and-replace of their existing hardware stack. In this lightning talk, we’ll explore how Fusion Access helps customers migrate from legacy hypervisors to OpenShift Virtualization while maintaining their current infrastructure investments. You’ll learn how this approach accelerates modernization, simplifies operations, and sets the stage for a cloud-native future—all without forklift upgrades.

We’ll share key design considerations, early performance lessons, and a quick look at how organizations are making the transition today. If you’re looking to bridge legacy virtualization with modern platforms, this session is for you. In this session, you will:

  • Understand how Fusion Access supports virtualization modernization.
  • Learn best practices for transitioning workloads from legacy hypervisors.
  • Discover how to preserve hardware investments while adopting OpenShift.

Session type: Lightning talk

Thursday, May 14 | 11:00 AM - 11:40 AM EDT | B404 - Level 4

Discover how IBM Fusion for Red Hat AI bridges the gap between data science innovation and IT operations by transforming GPU-accelerated components into a production-ready, on-premises AI service that delivers speed to insight in days rather than months. Join us as we explore how this turnkey, intelligent AI data platform enables organizations to build, train, and deploy models quickly across hybrid cloud environments.

See how teams consolidate virtual machines and AI workloads on a unified platform, providing data sovereignty while accelerating time to value in the hybrid cloud. Scale the development and deployment of AI solutions with managed services, across any hardware accelerator, for efficient inference. Offload the burden of operationalizing AI with Red Hat AI Inference on IBM Cloud for performance without the management overhead. Run AI-ready infrastructure on a highly available, flexible, open enterprise platform that’s focused on security, even for the most regulated industries.

In this session, you will learn how to:

  • Establish a private AI cloud with a pre-configured appliance for Day One workloads.
  • Scan and index unstructured data with strict controls in context-aware storage.
  • Use a pre-integrated platform that supports full-stack MLOps for RAG and fine-tuning.
  • Manage the lifecycle of AI models of choice with optimized runtimes powered by vLLM.
  • Consolidate workloads and optimize GPU usage with llm-d for efficient tokenization.
  • Support faster, consistent AI workloads on a security-focused, scalable hybrid cloud infrastructure.

Session type: Breakout session

Our portfolio

IBM watsonx Code Assistant for Red Hat Ansible Lightspeed

Accelerate playbook creation in the Ansible Automation Platform with generative AI-powered content recommendations.

Learn more | Try for free
IBM Fusion on Red Hat® OpenShift®

Leverage a fully integrated, turnkey platform for running and maintaining all on-premises Red Hat OpenShift applications.

Learn more | Explore interactive demo
IBM watsonx.ai®

Experience a one-stop, integrated, end-to-end AI deployment studio.

Learn more | Try for free
IBM LinuxONE

Use a scalable, cyber-resilient, and sustainable system optimized for Linux®.

Learn more | Download white paper
HashiCorp, an IBM Company

Empowering organizations to implement cloud right with a unified platform for infrastructure and security lifecycle management.

Learn more | Try for free
IBM Turbonomic® for Red Hat OpenShift

Continuously optimize application resource allocation on Red Hat OpenShift to assure performance while reducing cost.

Learn more | Explore interactive demo
IBM Consulting and Red Hat

Transform your business with consistency, security and scale across any cloud, on premises or at the edge with Red Hat technologies.

Learn more
Red Hat on IBM Cloud

A scalable, AI-ready, open, and flexible-by-design platform powered by IBM Cloud infrastructure. Simplify complexity, reduce costs, and unleash innovation.

Learn more
IBM Bob

AI software development partner that understands your intent, repo, and security standards.

Learn more
Client stories
Ministry of Finance, Belgium

Belgian Ministry of Finance boosts the consistency, functionality and efficiency of its online tax submission system with help from IBM.

Read more
Airbus

IBM Consulting helps Airbus achieve full autonomy while sustaining the A220 Programme.

Read more
Unipol

Unipol partners with IBM to reshape IT operations for enterprise-wide AI adoption.

Read more
IBM Consulting

Transform your business with consistency, security and scale across any cloud, on premises or at the edge with Red Hat technologies and IBM Consulting.

Explore IBM Consulting for Red Hat clients