Use machine learning to convert data from every transaction into real-time insights

What if IBM Z could help stop fraud?

AI on IBM Z® uses machine learning to convert data from every transaction into real-time insights.  

Uncover insights and gain trusted, actionable results quickly without requiring data movement. Apply AI and machine learning to your most valuable enterprise data on IBM Z—all while using open-source frameworks and tools.


Featured products

watsonx Code Assistant for IBM Z: Accelerate mainframe application modernization with generative AI

Db2 for z/OS: The enterprise computing foundation for hybrid cloud, data fabric, data lakehouse and AI
Benefits

Speed to scale with transaction volume

Achieve up to 19x higher throughput and 20x lower response time by co-locating applications and inferencing.

Get real-time insights when needed

Infuse AI into every transaction while still meeting the most stringent SLAs.

Meet green AI and AI-powered sustainability goals

Reduce the energy consumption for inference operation processing by 41x by using the Integrated Accelerator for AI, versus running inference operations remotely on a comparable x86 server with an NVIDIA GPU.

Software for enterprise AI

AI Toolkit

Discover the latest AI open-source software, with a delivery experience that's as consistent and trusted as other IBM Z software.

AI Toolkit for IBM Z and LinuxONE

Python AI Toolkit

Access a library of relevant open-source software to support today's AI and machine learning workloads.

Python AI Toolkit for IBM z/OS®

AI embedded into real-world apps

Build machine learning models using your platform of choice and quickly deploy those models within transactional applications, while maintaining SLAs.

IBM Machine Learning for z/OS
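The co-location idea above can be illustrated with a minimal, self-contained sketch. This is not the IBM Machine Learning for z/OS API; the model, weights, feature names and threshold are all invented for illustration. The point is simply that when the model runs in-process with the transaction, each request is scored inline with no network hop.

```python
import math

# Hypothetical pre-trained logistic-regression fraud model deployed
# in-process with the transaction application. All weights and feature
# names below are invented for this sketch.
WEIGHTS = {"amount": 0.004, "foreign": 1.8, "night": 0.9}
BIAS = -6.0

def fraud_score(txn: dict) -> float:
    """Return a fraud probability in [0, 1] for one transaction."""
    z = BIAS + sum(WEIGHTS[k] * txn.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def handle_transaction(txn: dict, threshold: float = 0.5) -> str:
    # Inference happens inline on the transaction path, so the scoring
    # latency is bounded by local compute, not a remote round trip.
    return "review" if fraud_score(txn) >= threshold else "approve"

print(handle_transaction({"amount": 50.0, "foreign": 0, "night": 0}))    # prints "approve"
print(handle_transaction({"amount": 2500.0, "foreign": 1, "night": 1}))  # prints "review"
```

In a real deployment the hand-rolled scoring function would be replaced by a model served by the platform's scoring engine; the in-process call pattern is what keeps transactional SLAs intact.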

Accelerate TensorFlow Inference

Bring TensorFlow models trained anywhere, deploy them close to your business-critical applications on IBM Z, and seamlessly leverage the IBM Integrated Accelerator for AI.

IBM ZDNN Plugin for TensorFlow

In-memory computing performance

Move forward with an in-memory compute engine and analytics runtime that supports popular big-data languages such as Java™, Scala, Python and R.

IBM Z Platform for Apache Spark

Compile .onnx deep learning AI models into shared libraries

Compile AI models in the popular ONNX format into shared libraries and run them on IBM Z with minimal dependencies, while seamlessly leveraging the IBM Integrated Accelerator for AI.

IBM Z Deep Learning Compiler

Popular open-source tools

Use Anaconda on IBM Z and LinuxONE, and leverage industry-standard packages such as Scikit-learn, NumPy and PyTorch, with cost-effective zCX containers.
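As a small taste of the kind of open-source workload these packages support, here is a minimal NumPy sketch that batch-scores transactions with a vectorized linear model. The weights, bias and feature layout are invented for the demo; nothing here is IBM Z-specific, which is the point: standard NumPy code runs unchanged.

```python
import numpy as np

# Illustrative only: batch-score 1,000 synthetic transactions.
# Weights and bias are invented for this sketch.
rng = np.random.default_rng(seed=0)
features = rng.random((1000, 4))           # 1,000 transactions x 4 features
weights = np.array([0.5, -1.2, 2.0, 0.3])
bias = -0.4

logits = features @ weights + bias         # vectorized linear model
probs = 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> fraud probabilities
flagged = int((probs > 0.5).sum())
print(f"{flagged} of {len(probs)} transactions flagged for review")
```

The same vectorized style carries over to Scikit-learn estimators or PyTorch tensors, which is why a curated distribution of these packages for the platform matters.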

Anaconda on IBM Z and LinuxONE

Frequently asked questions
Resources

Optimized Inferencing and Integration with AI on IBM Z systems

Learn how to enable AI solutions in business-critical use cases, such as fraud detection and credit risk scoring, on the platform.

Leveraging AI for fraud

Operationalize anti-fraud on the IBM z16™ to reduce fraud losses in banking, credit cards and payments.

NIST: Computer Security Resource Center

Take control of your data encryption keys and hardware security modules in the cloud with the Cryptographic Module Validation Program.

Learn how to get started on your journey to AI on the IBM zSystems platform.

Discover how you can use Linux to make your AI analysis simpler and more secure, with real-time processing at scale.

AI operations analytics

Improve systems management, IT operations, application performance and operational resiliency with AI on the mainframe.

Take the next step

Discover how to use AI and machine learning to convert data from every transaction into real-time insights. Schedule a no-cost 30-minute meeting with an IBM Z and LinuxONE representative.

Get started

¹ With IBM LinuxONE Emperor 4, process up to 300 billion inference requests per day with 1ms response time using a Credit Card Fraud Detection model

DISCLAIMER: Performance result is extrapolated from IBM internal tests running local inference operations in an IBM LinuxONE Emperor 4 LPAR with 48 cores and 128 GB memory on Ubuntu 20.04 (SMT mode) using a synthetic credit card fraud detection model exploiting the Integrated Accelerator for AI. The benchmark was running with 8 parallel threads each pinned to the first core of a different chip. The lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.