Accelerate your business insights at scale with transactional AI on IBM z/OS
Machine Learning for IBM z/OS® (MLz) is a transactional AI platform that runs natively on IBM z/OS. It provides a web user interface (UI), a set of application programming interfaces (APIs) and a web administration dashboard. The dashboard includes a suite of easy-to-use tools for model development and deployment, user management and system administration.
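For illustration only, the sketch below shows how a client application might call an online scoring service over REST. The host name, endpoint path, authentication header and payload layout are assumptions made for this example, not the documented MLz API; refer to the MLz REST API documentation for the actual endpoint and schema.

```python
import requests

# Hypothetical online-scoring call. The URL, auth token and payload layout
# below are placeholders for this sketch.
SCORING_URL = "https://mlz.example.com/iml/v2/scoring/online/fraud-model"  # assumed path
AUTH_TOKEN = "<token obtained from the scoring service's authentication flow>"

def score_transaction(features):
    """Send one transaction's feature vector for scoring and return the response."""
    payload = {"fields": list(features.keys()), "values": [list(features.values())]}
    response = requests.post(
        SCORING_URL,
        json=payload,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        timeout=1.0,  # keep the call bounded so it stays within the transaction's budget
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(score_transaction({"amount": 182.40, "merchant_id": 1021, "hour": 23}))
```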
Use MLz with IBM z17™ and the IBM Telum® II processor to deliver transactional AI capability. Process up to 282,000 z/OS CICS credit card transactions per second with a 4 ms response time, each with an in-transaction fraud detection inference operation that uses a deep learning model.1
Colocate inferencing requests with your applications to help minimize delays caused by network latency. This approach reduces response time by up to 20x and increases throughput by up to 19x compared with sending inferencing requests to an x86 cloud server with an average network latency of 60 ms.2
Use trustworthy AI capabilities such as explainability while monitoring your models in real time for drift. Develop and deploy your transactional AI models on z/OS for mission-critical transactions and workloads with confidence.
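MLz provides its own monitoring capabilities; the short sketch below only illustrates the general idea behind drift detection using the population stability index (PSI), a common drift metric, and is not the MLz implementation.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI: a common score for detecting drift between a training-time
    baseline distribution and recent production data."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, guarding against empty bins.
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Example: compare recent model scores against the training-time baseline.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 8, size=10_000)  # stand-in for training-time scores
live_scores = rng.beta(3, 7, size=5_000)       # stand-in for recent production scores
print(f"PSI = {population_stability_index(baseline_scores, live_scores):.3f}")
```

A PSI near zero indicates that recent scores follow the baseline distribution; values above roughly 0.2 are commonly treated as a sign of significant drift worth investigating.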
Easily import, deploy and monitor models to achieve value from every transaction and drive new outcomes for your enterprise while maintaining operational service level agreements (SLAs).
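As an illustration of the import step, the sketch below exports a toy PyTorch model to ONNX, an interchange format widely used for deep learning inference on IBM Z. The model, file name and tensor names are placeholders, and the real fraud detection model in the linked repository is a recurrent network trained on transaction sequences; check the MLz documentation for the model formats supported by your release.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained fraud-detection network.
class FraudNet(nn.Module):
    def __init__(self, n_features=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
        )

    def forward(self, x):
        return self.net(x)

model = FraudNet()
model.eval()

# Export to ONNX so the model can be imported into a serving environment.
example_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    example_input,
    "fraud_model.onnx",
    input_names=["transaction_features"],
    output_names=["fraud_score"],
    dynamic_axes={"transaction_features": {0: "batch"}},  # allow server-side batching
)
print("Wrote fraud_model.onnx")
```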
Machine Learning for IBM z/OS combines IBM proprietary and open source technologies, and it has specific hardware and software prerequisites.
Identify operational issues and avoid costly incidents by detecting anomalies in both log and metric data.
Access a library of relevant open source software to support today's AI and ML workloads.
Get high-speed data analysis for real-time insight under the control and security of IBM Z.
Learn how AI helps enhance usability, improve operational performance and maintain the health of IBM Db2 systems.
1 DISCLAIMER: The performance result is extrapolated from IBM internal tests conducted on an IBM z17 LPAR configured with 6 CPs and 256 GB of memory, running z/OS 3.1. The tests used a CICS OLTP credit card transaction workload with a low Relative Nest Intensity, combined with inference operations based on a synthetic credit card fraud detection model (available at https://github.com/IBM/ai-on-z-fraud-detection) that leverages the Integrated Accelerator for AI. The benchmark was performed using 32 threads executing inference operations concurrently. Inference was carried out using Machine Learning for IBM z/OS (v3.2.0) hosted on a Liberty server (v22.0.0.3). Additionally, server-side batching was enabled on Machine Learning for z/OS with a batch size of 8 inference operations. Results may vary.
2 DISCLAIMER: Performance results are based on an IBM internal CICS OLTP credit card workload with in-transaction fraud detection running on IBM z16. Measurements were done with and without the Integrated Accelerator for AI. A z/OS V2R4 LPAR configured with 12 CPs, 24 zIIPs and 256 GB of memory was used. Inferencing was done with Machine Learning for z/OS 2.4 running on WebSphere Application Server Liberty 21.0.0.12, using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection). Server-side batching was enabled on Machine Learning for z/OS with a batch size of 8 inference operations. Results may vary.