IBM Machine Learning for z/OS
Deploy your AI models on z/OS for real-time business insights at scale
The flagship AI platform for IBM z/OS

IBM Machine Learning for z/OS® brings AI to your mission-critical applications running on IBM zSystems™. Infuse machine learning and deep learning models into your z/OS applications and deliver real-time business insights at scale. Easily import, deploy, and monitor models to derive value from every transaction and drive new outcomes for your enterprise while maintaining operational SLAs.

Benefits

AI at Speed*

Leverage the unprecedented power of IBM z16™ and the Telum™ AIU. Process up to 228K z/OS CICS® credit card transactions per second with 6 ms response time, each with an in-transaction fraud detection inference operation using a deep learning model.

AI at Scale*

Co-locate applications with inferencing requests to help minimize delays caused by network latency. This delivers up to 20x lower response time and up to 19x higher throughput versus sending the same inferencing requests to an x86 cloud server used for comparison, with 60 ms average network latency.

Trustworthy AI

Leverage trustworthy AI capabilities such as explainability, and monitor your models in real time for drift, fairness and bias, and robustness, so you can develop and deploy AI models on z/OS for mission-critical workloads with confidence.
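Drift monitoring of the kind described above can be illustrated with a generic statistic such as the population stability index (PSI), which compares a feature's training-time distribution against live scoring traffic. The sketch below is a minimal, self-contained illustration of that idea, not WMLz's actual drift metric; the threshold of 0.2 is a common rule of thumb, not a product default.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time)
    sample and a live sample. Larger values indicate more drift.
    Generic illustration only, not WMLz's internal drift metric."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(xs)
        # Small floor avoids log(0) for empty buckets
        return [max(c / n, 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training distribution
live = [0.1 * i + 2.0 for i in range(100)]      # shifted live data
print(round(psi(baseline, live), 2))            # PSI > 0.2 commonly flags drift
```

A monitoring job would compute this per feature on a schedule and alert when the statistic crosses a chosen threshold.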

Compare editions

With the version 3.1 update, WMLz offers clients and solution providers more flexibility through two new offerings: Enterprise Edition and Core Edition.

Editions

Enterprise Edition

Unleash the full power of our end-to-end, full-featured ML platform on z/OS. Train models anywhere, including on IBM Z, and readily deploy them in z/OS applications, co-located with enterprise transaction data and business logic, for in-transaction scoring in near real time without impacting SLAs.

Core Edition

Take advantage of the lightweight edition, which offers a core set of AI capabilities, including online scoring, to get started quickly with machine learning on z/OS.

Feature comparison

GUI configuration

UI for model management and deployment, with admin dashboard

Repository database (built-in or Db2 for z/OS)

AI model training tool (integrated Jupyter Notebook)

Spark ML runtime

Python ML runtime

Spark ML and PMML scoring runtime

Python and ONNX scoring runtime

Inference services – RESTful interface

Inference services – native interface

Integrated in-transaction scoring (CICS and IMS applications)
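The RESTful inference interface listed above can be pictured as an HTTP scoring call carrying a JSON feature vector. The sketch below is illustrative only: the endpoint path, host, field names, and authorization scheme are assumptions for this example, not the documented WMLz API.

```python
import json

# Hypothetical online-scoring endpoint; path and host are assumptions,
# not the documented WMLz REST API.
SCORING_URL = "https://mlz.example.com/iml/scoring/online/<deployment-id>"

def build_scoring_request(fields, values):
    """Package one transaction's feature vector as a JSON scoring request.
    The {"fields": ..., "values": ...} shape is an illustrative convention."""
    return json.dumps({"fields": fields, "values": [values]})

payload = build_scoring_request(
    ["amount", "merchant_id", "hour_of_day"],
    [125.40, 30072, 23],
)
print(payload)

# To actually send the request (network call omitted here):
# req = urllib.request.Request(SCORING_URL, payload.encode(),
#     {"Content-Type": "application/json", "Authorization": "Bearer <token>"})
# response = urllib.request.urlopen(req)
```

In an in-transaction setup, the same scoring step would instead use the native interface from CICS or IMS, avoiding the HTTP round trip entirely.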

Related products

IBM Z and Cloud Modernization Stack

Leverage the best of the mainframe and the innovation of the cloud.

IBM Db2® 13 for z/OS

Enhance availability, security and resiliency while improving performance and business results.

IBM Z Anomaly Analytics

Proactively identify operational issues and avoid costly incidents by detecting anomalies in both log and metric data.

IBM Db2 Analytics Accelerator for z/OS

Get high-speed data analysis for real-time insight under the control and security of IBM zSystems.

IBM Db2 AI for z/OS

Learn how AI enhances usability, improves operational performance and maintains the health of IBM® Db2® systems.

IBM z/OS

Leverage a secure and scalable operating system for running mission-critical applications.

Python AI Toolkit for IBM z/OS

Access a library of relevant open source software to support today's artificial intelligence (AI) and machine learning (ML) workloads.

Have a question?

Get answers to all your AI on IBM Z questions from our team of AI on Z experts.

More ways to explore

WML for z/OS documentation | Overview of WML documentation | Support | AI on IBM Z & LinuxONE community | Redbooks (3.2 MB) | Global financing | Technology lifecycle services
Footnotes

AI at Speed* - DISCLAIMER: Performance result is extrapolated from IBM internal tests running a CICS credit card transaction workload with inference operations on an IBM z16. A z/OS V2R4 LPAR configured with 6 CPs and 256 GB of memory was used. Inferencing was done with Machine Learning for z/OS 2.4 running on WebSphere Application Server Liberty 21.0.0.12, using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) and the Integrated Accelerator for AI. Server-side batching was enabled on Machine Learning for z/OS with a size of 8 inference operations. The benchmark was executed with 48 threads performing inference operations. Results represent a fully configured IBM z16 with 200 CPs and 40 TB storage. Results may vary.

AI at Scale* - DISCLAIMER: Performance results are based on an IBM internal CICS OLTP credit card workload with in-transaction fraud detection running on IBM z16. Measurements were done with and without the Integrated Accelerator for AI. A z/OS V2R4 LPAR configured with 12 CPs, 24 zIIPs, and 256 GB of memory was used. Inferencing was done with Machine Learning for z/OS 2.4 running on WebSphere Application Server Liberty 21.0.0.12, using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection). Server-side batching was enabled on Machine Learning for z/OS with a size of 8 inference operations. Results may vary.