Machine learning for IBM z/OS

Accelerate your business insights at scale with transactional AI on IBM z/OS


Transactional AI platform

Machine learning for IBM z/OS® (MLz) is a transactional AI platform that runs natively on IBM z/OS. It provides a web user interface (UI), various application programming interfaces (APIs) and a web administration dashboard. The dashboard comes with a powerful suite of easy-to-use tools for model development and deployment, user management and system administration.

Leverage machine learning for IBM z/OS for enterprise AI
AI at speed

Use MLz with IBM z17™ and the IBM Telum® II processor to deliver transactional AI capability. Process up to 282,000 z/OS CICS credit card transactions per second with a 4 ms response time, each with an in-transaction fraud detection inference operation that uses a deep learning model.1

AI at scale

Colocate inferencing with your applications to help minimize delays caused by network latency. This option cuts response time by up to 20x and boosts throughput by up to 19x compared to sending requests to an x86 cloud server with an average network latency of 60 ms.2

Trustworthy AI

Use trustworthy AI capabilities such as explainability while monitoring your models in real time for drift. Develop and deploy your transactional AI models on z/OS for mission-critical transactions and workloads with confidence.

Transactional AI

Easily import, deploy and monitor models to achieve value from every transaction and drive new outcomes for your enterprise while maintaining operational service level agreements (SLAs).

Features

The enhanced edition of Machine Learning for IBM z/OS delivers improved scoring performance, offers new versions of the Spark and Python machine learning runtimes, and includes a GUI-guided configuration tool, among other improvements.


  • Real-time inference: In-transaction scoring through a native CICS interface, a WOLA interface for CICS, IMS and batch COBOL applications, and a RESTful interface
  • Support for various engines: SparkML, Python, PMML, IBM SnapML, Watson Core Time Series
  • Model lifecycle management: Guided UI, RESTful services
  • Telum II acceleration: ONNX and IBM SnapML models
  • Trustworthy AI: Explainability and drift monitoring
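For distributed callers, the RESTful interface above is the typical entry point. The sketch below shows what such a call might look like; the endpoint URL, deployment name, authorization scheme, and payload shape are illustrative assumptions, not the documented MLz API contract, so consult the MLz documentation for the actual request format.

```python
import json
import urllib.request

# Hypothetical scoring endpoint for a deployed fraud-detection model.
# Host, port, path, and deployment name are placeholders for illustration.
MLZ_SCORING_URL = "https://mlz.example.com:443/scoring/online/creditcard-fraud"


def build_scoring_payload(transaction):
    """Wrap one transaction's feature values as a JSON scoring request body.

    The list-of-records shape is an assumption; MLz may expect a different
    structure depending on the model and engine.
    """
    return json.dumps([transaction]).encode("utf-8")


def score(transaction, token):
    """Send one online scoring request and return the parsed JSON response."""
    request = urllib.request.Request(
        MLZ_SCORING_URL,
        data=build_scoring_payload(transaction),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.loads(response.read())
```

A CICS, IMS, or batch COBOL application would instead call the native CICS or WOLA interface in-transaction, avoiding the network hop entirely; that colocation is what drives the latency and throughput gains cited above.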
Explore the enterprise edition
Collaborative model building in JupyterHub
A shared JupyterHub environment allows multiple data scientists to build and train models together on the z/OS platform, improving collaboration and productivity.
Improved AI monitoring and explainability tools
Enhanced monitoring and clearer visualizations for explainability results help ensure models remain open, reliable and easy to interpret during production use.
Faster multiclass scoring with AI accelerator
MLz supports high-performance multiclass classification scoring by using the on-chip AI accelerator in IBM Z systems through Snap ML, improving model inference speed and efficiency.
Comprehensive ML lifecycle on IBM z/OS
MLz provides a secure, enterprise-grade platform for model development, deployment and management with web UI, APIs and integration with Spark and Python toolkits.

Technical details

Machine learning for IBM z/OS uses both IBM proprietary and open source technologies and requires prerequisite hardware and software.

  • z17™, z16® or z15®
  • z/OS 3.2, 3.1 or 2.5
  • IBM 64-bit SDK for z/OS Java™ Technology Edition version 8, 11 or 17
  • IBM WebSphere Application Server for z/OS Liberty version 22.0.0.9 or later
  • Db2® 13 for z/OS or later, only if you choose Db2 for z/OS as the repository metadata database

Related products

IBM Z Anomaly Analytics

Identify operational issues and avoid costly incidents by detecting anomalies in both log and metric data.

Python AI Toolkit for IBM z/OS

Access a library of relevant open source software to support today's AI and ML workloads.

IBM Db2 Analytics Accelerator for z/OS

Get high-speed data analysis for real-time insight under the control and security of IBM Z.

IBM Db2 AI for z/OS

Learn how AI helps enhance usability, improve operational performance and maintain the health of IBM Db2 systems.

Take the next step

Discover how machine learning for IBM z/OS accelerates your business insights at scale with transactional AI on IBM z/OS.

Try it at no cost
More ways to explore: Documentation | Support | Lifecycle services and support | Community
Footnotes

1. DISCLAIMER: Performance result is extrapolated from IBM internal tests conducted on an IBM z17 LPAR configured with 6 CPs and 256 GB memory, running z/OS 3.1. The tests used a CICS OLTP credit card transaction workload with a low Relative Nest Intensity combined with inference operations based on a synthetic credit card fraud detection model (available at https://github.com/IBM/ai-on-z-fraud-detection) that leverages the Integrated Accelerator for AI. The benchmark was performed using 32 threads executing inference operations concurrently. Inference was carried out using Machine Learning for IBM z/OS (v3.2.0) hosted on a Liberty server (v22.0.0.3). Additionally, server-side batching was enabled on Machine Learning for z/OS with a batch size of 8 inference operations. Results may vary.

2. DISCLAIMER: Performance results are based on an IBM internal CICS OLTP credit card workload with in-transaction fraud detection running on IBM z16. Measurements were done with and without the Integrated Accelerator for AI. A z/OS V2R4 LPAR configured with 12 CPs, 24 zIIPs and 256 GB of memory was used. Inferencing was done with Machine Learning for z/OS 2.4 running on WebSphere Application Server Liberty 21.0.0.12, using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection). Server-side batching was enabled on Machine Learning for z/OS with a batch size of 8 inference operations. Results may vary.