AI on IBM Z® uses machine learning to convert data from every transaction into real-time insights.
Uncover insights and gain trusted, actionable results quickly without requiring data movement. Apply AI and machine learning to your most valuable enterprise data on IBM Z by using open source frameworks and tools.
watsonx Code Assistant™ for Z is a generative AI-powered tool that supports the end-to-end application developer lifecycle, including application discovery and analysis, automated code refactoring, and COBOL-to-Java conversion.
AI Toolkit for Z and LinuxONE pairs open source AI serving frameworks and IBM-certified containers with IBM® Elite Support and IBM Secure Engineering, which vet and scan them for security vulnerabilities and validate compliance with industry regulations.
Machine Learning for z/OS® allows you to build machine learning models by using your platform of choice and quickly deploy those models within transactional applications while maintaining SLAs.
AI-infused transactional data
Agile, efficient, secure enterprise data serving for the most demanding hybrid cloud, transactional, and analytics applications.
Python AI Toolkit
Access a library of relevant open source software to support today's AI and machine learning workloads.
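Once the toolkit's package channel is configured, you can verify from Python which packages are importable in a given environment. A minimal stdlib-only sketch (the package names below are illustrative, not the toolkit's actual catalog):

```python
import importlib.util

# Illustrative package names; the real list depends on the toolkit release.
PACKAGES = ["numpy", "scipy", "pandas", "sklearn"]

def available(name: str) -> bool:
    """Return True if the package can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

report = {name: available(name) for name in PACKAGES}
print(report)
```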
Accelerate TensorFlow Inference
Deploy TensorFlow models trained anywhere close to your business-critical applications on IBM Z, seamlessly using the IBM Integrated Accelerator for AI.
In-memory computing performance
Move forward with an in-memory compute engine and analytics runtime that supports popular big-data languages such as Java™, Scala, Python, and R.
Compile .onnx deep learning AI models into shared libraries
Compile AI models in ONNX format into shared libraries and run them on IBM Z with minimal dependencies, while seamlessly using the IBM Integrated Accelerator for AI.
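Because the compiled artifact is an ordinary shared library, any language with a C foreign-function interface can call it. The sketch below shows that calling pattern from Python with ctypes, using libm as a stand-in library that is always present; the function names here are libm's, not the actual entry points generated for a compiled ONNX model:

```python
import ctypes
import ctypes.util

# libm stands in for a compiled model .so; a real model library would be
# loaded and invoked through its own generated entry points the same way.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
result = libm.cos(0.0)
print(result)
```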
Popular open source tools
Use Anaconda on IBM Z and LinuxONE, and use industry-standard packages such as Scikit-learn, NumPy, and PyTorch with cost-effective zCX containers.
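As a flavor of what runs in such containers, here is a small NumPy-only sketch of logistic-regression scoring. The weights and bias are made up for illustration; a real model would come from training with a package such as Scikit-learn or PyTorch:

```python
import numpy as np

# Toy logistic-regression parameters; illustrative only, not a trained model.
weights = np.array([0.8, -1.2, 0.5])
bias = -0.1

def score(features: np.ndarray) -> np.ndarray:
    """Probability of the positive class for each row of `features`."""
    z = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

batch = np.array([[1.0, 0.2, 3.0],
                  [0.0, 1.5, 0.1]])
probs = score(batch)
print(probs)
```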
Learn how to enable AI solutions in business-critical use cases, such as fraud detection and credit risk scoring, on the platform.
Discover low-latency AI on a highly trustworthy and secure enterprise system: the modernized IBM mainframe.
¹ With IBM LinuxONE Emperor 4, process up to 300 billion inference requests per day with 1 ms response time using a credit card fraud detection model.
DISCLAIMER: Performance result is extrapolated from IBM internal tests running local inference operations in an IBM LinuxONE Emperor 4 LPAR with 48 cores and 128 GB memory on Ubuntu 20.04 (SMT mode), using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) with the Integrated Accelerator for AI. The benchmark was run with 8 parallel threads, each pinned to the first core of a different chip; the lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.
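The measurement setup above (parallel threads, each driving fixed-size batches) can be sketched in plain Python. The inference call here is a stub; an actual run would invoke the real model and pin each thread to a core (for example with os.sched_setaffinity or taskset), which this sketch does not do:

```python
import threading
import time

THREADS = 8           # one thread per chip in the cited test
BATCH_SIZE = 128      # inference operations per batch, as in the disclaimer
BATCHES_PER_THREAD = 10

def infer_batch(batch_size: int) -> int:
    """Stub standing in for a real model call; returns inferences completed."""
    return batch_size

results = [0] * THREADS

def worker(idx: int) -> None:
    done = 0
    for _ in range(BATCHES_PER_THREAD):
        done += infer_batch(BATCH_SIZE)
    results[idx] = done

start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

total = sum(results)
print(f"{total} inferences in {elapsed:.4f}s")
```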