AI on IBM Z

Use machine learning to convert data from every transaction into real-time insights

Put AI to work with IBM Z


Uncover insights and gain trusted, actionable results quickly without requiring data movement. Apply AI and machine learning to your most valuable enterprise data on IBM Z by using open source frameworks and tools.

 

Get started on your journey to AI on IBM Z and LinuxONE
Benefits

IBM z16™ with the ground-breaking IBM Telum™ processor features on-chip AI acceleration for inferencing at dramatically improved speed and scale.

Speed to scale with transaction volume

Achieve up to 19x higher throughput and 20x lower response time by colocating applications and inferencing.

Get real-time insights when needed

Infuse AI into every transaction while still meeting the most stringent SLAs.

Meet green AI sustainability goals

Reduce energy consumption for inference processing by 41x by using the Integrated Accelerator for AI.

Featured products

Generative AI

IBM watsonx Code Assistant™ for Z is a generative AI-powered tool that covers the end-to-end application developer lifecycle, including application discovery and analysis, automated code refactoring, and COBOL-to-Java conversion.

Explore watsonx Code Assistant for Z
AI Toolkit

AI Toolkit for IBM Z and LinuxONE combines IBM® Elite Support with IBM Secure Engineering, which vet and scan open source AI serving frameworks and IBM-certified containers for security vulnerabilities and validate their compliance with industry regulations.

Explore AI Toolkit for Z and LinuxONE
AI embedded into real-world apps

Machine Learning for z/OS® allows you to build machine learning models by using your platform of choice and quickly deploy those models within transactional applications while maintaining SLAs. 

Explore Machine Learning for z/OS
Related products

AI-infused transactional data

Agile, efficient, and secure enterprise data serving for the most demanding hybrid cloud, transactional, and analytics applications.

IBM Db2 for z/OS

Python AI Toolkit

Access a library of relevant open source software to support today's AI and machine learning workloads.

Python AI Toolkit for IBM z/OS®

Accelerate TensorFlow Inference

Bring TensorFlow models trained anywhere and deploy them close to your business-critical applications on IBM Z, seamlessly using the IBM Integrated Accelerator for AI.

IBM ZDNN Plugin for TensorFlow

In-memory computing performance

Move forward with an in-memory compute engine and analytics runtime that supports popular big data languages such as Java™, Scala, Python, and R.

IBM Z Platform for Apache Spark

Compile .onnx deep learning AI models into shared libraries

Compile compatible AI models in ONNX format into shared libraries and run them on IBM Z with minimal dependencies, while seamlessly using the IBM Integrated Accelerator for AI.

IBM Z Deep Learning Compiler

Popular open source tools

Use Anaconda on IBM Z and LinuxONE, and use industry-standard packages such as Scikit-learn, NumPy, and PyTorch with cost-effective zCX containers.

Anaconda on IBM Z and LinuxONE
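To illustrate, here is a minimal sketch of the kind of workload these packages support. The data and model below are synthetic and purely illustrative; nothing in the snippet is IBM Z-specific, which is the point of using industry-standard packages.

```python
# A toy classification workload with the packages named above
# (NumPy and scikit-learn). The "transactions" are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # 200 synthetic transactions, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy label derived from the features

model = LogisticRegression().fit(X, y)
print(model.score(X, y))                    # training accuracy on the toy data
```

The same script runs unchanged on a laptop or in a zCX container, which is what makes the standard-package approach attractive.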
Demo video

AI on Linux with the IBM Integrated Accelerator for AI

Discover how you can run AI analysis with Linux on IBM Z systems by using processor chips designed for AI, making your analysis simpler, more secure, and capable of real-time processing at scale.
Resources

Optimized Inferencing and Integration with AI

Learn how to enable AI solutions in business-critical use cases, such as fraud detection and credit risk scoring, on the platform.

What If IBM Z could help stop fraud?

Discover low-latency AI on a highly trustworthy and secure enterprise system: the modernized IBM Mainframe.
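As a purely conceptual sketch of the in-transaction scoring pattern these use cases describe (the feature names, weights, and normalization caps below are hypothetical stand-ins for a trained model):

```python
# Purely illustrative: a toy, rule-based stand-in for the kind of model
# that real fraud-detection deployments train on historical data.
# Feature names, weights, and the 10,000 / 10 caps are hypothetical.
def fraud_score(amount: float, velocity: int, foreign: bool) -> float:
    """Return a risk score in [0, 1] for a single transaction."""
    score = 0.5 * min(amount / 10_000.0, 1.0)   # large amounts raise risk
    score += 0.3 * min(velocity / 10.0, 1.0)    # many recent transactions
    score += 0.2 if foreign else 0.0            # cross-border indicator
    return score

print(fraud_score(12_000.0, 8, True))   # high-risk transaction
print(fraud_score(25.0, 1, False))      # routine transaction
```

In a real deployment the function body would be a call to a trained model served next to the transaction path, which is what keeps scoring latency within the SLAs discussed above.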

Take the next step

Discover how to use AI and machine learning to convert data from every transaction into real-time insights. Schedule a no-cost 30-minute meeting with an IBM Z and LinuxONE representative.

Get started
Footnotes

¹ With IBM LinuxONE Emperor 4, process up to 300 billion inference requests per day with 1 ms response time using a Credit Card Fraud Detection model

DISCLAIMER: Performance result is extrapolated from IBM internal tests running local inference operations in an IBM LinuxONE Emperor 4 LPAR with 48 cores and 128 GB memory on Ubuntu 20.04 (SMT mode) using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) using the Integrated Accelerator for AI. The benchmark was running with 8 parallel threads each pinned to the first core of a different chip. The lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.