AI on IBM LinuxONE

Built-in AI that is scalable, power efficient, and secured
Telum II processor chip

AI-powered performance and innovation

Artificial intelligence (AI) is transforming industries, and businesses require infrastructure that can handle AI workloads both efficiently and securely.

IBM LinuxONE, powered by the IBM Telum® processor, integrates AI acceleration directly into the chip, enabling real-time inferencing of multiple AI models with minimal latency. This advanced capability—combined with predictive AI and large language models—allows businesses to analyze data where it resides, delivering faster and deeper insights for mission-critical applications such as advanced fraud detection, risk analysis, and medical imaging.

Real-time AI insights

The on-chip AI accelerator enables low-latency inference, analyzing data as transactions occur. Memory coherence and direct fabric access eliminate bottlenecks for seamless AI execution.

Higher AI throughput

Running an OLTP workload with a single Integrated Accelerator for AI on IBM LinuxONE Emperor 5 matches the throughput of running inferencing on a compared remote x86 server with 13 cores.1

Accelerated AI performance

With IBM LinuxONE Emperor 5, process up to 450 billion inference operations per day with 1 ms response time using a Credit Card Fraud Detection Deep Learning model.2

Scalability without compromise

With IBM LinuxONE Emperor 5, process up to 5 million inference operations per second with less than 1 ms response time using a Credit Card Fraud Detection Deep Learning model.3

Scalable AI for complex workloads

IBM Spyre Accelerator PCIe card

The IBM Spyre™ Accelerator card is a 75 W PCIe Gen 5 AI accelerator with 128 GB of LPDDR5 memory, optimized for generative AI and multimodal LLMs. Featuring 32 (+2) cores with a 2 MB scratchpad per core and >55% core utilization, Spyre scales by card and drawer, enabling businesses to handle complex AI inferencing efficiently across enterprise applications.

Adding IBM Spyre Accelerator cards to IBM LinuxONE 5 enables additional use cases, including generative AI.

Read the blog about the Spyre Accelerator
Demo: Real-time insurance fraud detection with high performance on IBM LinuxONE.

AI software and solutions for IBM LinuxONE

AI Toolkit for IBM LinuxONE
A curated set of AI frameworks optimized for IBM LinuxONE Integrated Accelerator for AI, delivering enhanced performance with IBM Elite Support.
IBM Synthetic Data Sets
A family of artificially generated datasets that enhance AI model training and LLMs, helping IBM LinuxONE clients in finance quickly access rich, relevant data for AI initiatives.
Red Hat OpenShift AI
An open platform for managing the lifecycle of predictive and generative AI models, at scale, across hybrid cloud environments.
ONNX
A portable model format that enables cross-framework compatibility, allowing AI developers to build models once and deploy them across various runtimes, tools and compilers (see the export sketch after this list).
TensorFlow
A powerful open-source framework for model development, training and inference, providing a rich ecosystem optimized for LinuxONE.
IBM SnapML
Designed for high-speed machine learning training and inference, SnapML leverages the IBM Integrated Accelerator for AI to boost performance for Random Forest, Extra Trees and Gradient Boosting models (see the training sketch after this list).
Triton Inference Server
An open-source model server optimized for Linux on Z, supporting both CPU and GPU inference while utilizing SIMD and the IBM Integrated Accelerator for AI (see the client sketch after this list).
IBM Z® Deep Learning Compiler
A tool that streamlines deep learning model deployment on IBM Z, allowing data scientists to optimize AI models for mission-critical environments.
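
To make the ONNX entry above concrete, here is a minimal sketch of the build-once, deploy-anywhere workflow: a small PyTorch model is exported to the portable ONNX format and sanity-checked locally with ONNX Runtime. The model, file name, tensor names, and shapes are illustrative assumptions, not an IBM reference implementation; on LinuxONE the exported file would typically be compiled with the IBM Z Deep Learning Compiler or served through Triton Inference Server.

```python
# Minimal sketch: export a small PyTorch model to ONNX and check it locally.
# Model, file name, and tensor names are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Export with a dynamic batch dimension so any batch size can be scored.
dummy = torch.randn(1, 32)
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["features"], output_names=["scores"],
                  dynamic_axes={"features": {0: "batch"}})

# Sanity-check the exported model with a local ONNX Runtime session.
session = ort.InferenceSession("classifier.onnx")
scores = session.run(["scores"],
                     {"features": np.random.rand(4, 32).astype(np.float32)})[0]
print(scores.shape)  # (4, 2)
```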
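The SnapML entry can be sketched the same way. The snippet below, assuming the snapml package is installed, trains a scikit-learn-style RandomForestClassifier on synthetic transaction data; the data, labels, and constructor arguments are illustrative, and the available options may vary by SnapML version.

```python
# Minimal sketch: train and score a fraud-style classifier with SnapML's
# scikit-learn-compatible RandomForestClassifier. The synthetic data and the
# constructor arguments below are illustrative assumptions.
import numpy as np
from snapml import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((10_000, 20), dtype=np.float32)    # 10k transactions, 20 features
y = (X[:, 0] + X[:, 1] > 1.2).astype(np.float32)  # synthetic "fraud" label

clf = RandomForestClassifier(n_estimators=100, max_depth=8)
clf.fit(X, y)

print(clf.predict(X[:5]))  # scores for the first five transactions
```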
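Finally, a hedged sketch of calling a model hosted by Triton Inference Server from Python using the tritonclient HTTP API. The server address, model name ("ccfd_model"), and tensor names are assumptions for illustration; they depend on how your model repository is configured.

```python
# Minimal sketch: query a model served by Triton Inference Server over HTTP.
# The endpoint, model name, and tensor names/shapes are illustrative
# assumptions; substitute the values defined in your model repository.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one request with a single [1, 32] float32 input tensor.
features = httpclient.InferInput("features", [1, 32], "FP32")
features.set_data_from_numpy(np.random.rand(1, 32).astype(np.float32))

response = client.infer(model_name="ccfd_model", inputs=[features])
print(response.as_numpy("scores"))  # model output tensor named "scores"
```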

ISV applications

IBM is working with the IBM LinuxONE Ecosystem to help ISVs provide solutions for today’s AI, sustainability and cybersecurity challenges.

Explore two innovative solutions that are tailored for financial and healthcare institutions: Clari5 Enterprise Fraud Management on IBM LinuxONE 4 Express for real-time fraud prevention and Exponential AI’s Enso Decision Intelligence Platform on LinuxONE for advanced AI solutions at scale.

Explore Clari5
Explore Exponential AI
Take the next step

Learn more about AI on IBM LinuxONE by scheduling a no-cost 30-minute meeting with an IBM representative.

Explore IBM LinuxONE 5
AI on IBM LinuxONE blog

Read an overview of how AI on IBM LinuxONE boosts business growth and efficiency through real-time insights and enterprise-grade performance.

Read the blog
IBM LinuxONE 5 gets a huge AI boost

Read the Cambrian-AI research paper to explore the technology in LinuxONE 5, and the AI use cases expected to be a good fit for this enterprise-class server.

Read the Cambrian-AI paper
Start your journey to AI on LinuxONE

Explore major considerations for planning an AI use case, learn what is possible with the Telum chips, and understand the next steps to get started.

Get started with AI
Footnotes

1 DISCLAIMER: Performance results are based on IBM® internal tests running on IBM Systems Hardware of machine type 9175. The OLTP application and PostgreSQL were deployed on the IBM Systems Hardware. The Credit Card Fraud Detection (CCFD) ensemble AI setup consists of two models (LSTM, TabFormer). On IBM Systems Hardware, running the OLTP application with IBM Z Deep Learning Compiler (zDLC) compiled jar and IBM Z Accelerated for NVIDIA® Triton™ Inference Server locally and processing the AI inference operations on cores and the Integrated Accelerator for AI versus running the OLTP application locally and processing remote AI inference operations on an x86 server running NVIDIA Triton Inference Server with OpenVINO™ runtime backend on CPU (with AMX). Each scenario was driven from Apache JMeter™ 5.6.3 with 64 parallel users. IBM Systems Hardware configuration: 1 LPAR running Ubuntu 24.04 with 7 dedicated cores (SMT), 256 GB memory, and IBM FlashSystem® 9500 storage. The network adapters were dedicated for NETH on Linux. x86 server configuration: 1 x86 server running Ubuntu 24.04 with 28 Emerald Rapids Intel® Xeon® Gold CPUs @ 2.20 GHz with Hyper-Threading turned on, 1 TB memory, local SSDs, UEFI with maximum performance profile enabled, CPU P-State Control and C-States disabled. Results may vary.

2, 3 DISCLAIMER: Performance result is extrapolated from IBM® internal tests running on IBM Systems Hardware of machine type 9175. The benchmark was executed with 1 thread performing local inference operations using an LSTM-based synthetic Credit Card Fraud Detection model to exploit the Integrated Accelerator for AI. A batch size of 160 was used. IBM Systems Hardware configuration: 1 LPAR running Red Hat® Enterprise Linux® 9.4 with 6 cores (SMT), 128 GB memory. Results may vary.