AI on IBM Z® uses machine learning to convert data from every transaction into real-time insights.
Uncover insights and gain trusted, actionable results quickly without requiring data movement. Apply AI and machine learning to your most valuable enterprise data on IBM Z—all while using open-source frameworks and tools.
watsonx Code Assistant for IBM Z: Accelerate mainframe application modernization with generative AI
Achieve up to 19x higher throughput and 20x lower response time by co-locating applications and inferencing.
Infuse AI into every transaction while still meeting the most stringent SLAs.
Reduce the energy consumption of inference processing by 41x using the Integrated Accelerator for AI, versus running inference operations remotely on a comparable x86 server with an NVIDIA GPU.
AI Toolkit
Discover the latest AI open-source software, with a delivery experience that's as consistent and trusted as other IBM Z software.
Python AI Toolkit
Access a library of relevant open-source software to support today's AI and machine learning workloads.
AI embedded into real-world apps
Build machine learning models using your platform of choice and quickly deploy those models within transactional applications, while maintaining SLAs.
Accelerate TensorFlow Inference
Bring TensorFlow models trained anywhere and deploy them close to your business-critical applications on IBM Z, seamlessly leveraging the IBM Integrated Accelerator for AI.
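As an illustrative sketch of this deployment pattern, the snippet below scores a batch of 128 inputs with a TensorFlow model, the same application code that would run on IBM Z, where IBM's TensorFlow builds can route eligible operations to the Integrated Accelerator for AI transparently. The model here is a hypothetical stand-in for one trained on another platform; feature count and layer sizes are assumptions.

```python
# Sketch: batch inference with a TensorFlow model. On IBM Z, acceleration
# is applied transparently; the application code does not change.
import numpy as np
import tensorflow as tf

# Stand-in for a model trained elsewhere and shipped as a SavedModel.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),                     # 8 hypothetical features
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

batch = np.random.rand(128, 8).astype("float32")    # batch of 128 inferences
scores = model.predict(batch, verbose=0)
print(scores.shape)
```

In practice the model would be loaded with `tf.saved_model.load` rather than built in place; the inference call itself is identical.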
In-memory computing performance
Move forward with an in-memory compute engine and analytics runtime that supports popular big-data languages such as Java™, Scala, Python and R.
Compile .onnx deep learning AI models into shared libraries
Export popular compatible AI models to the ONNX format, compile them, and run them on IBM Z with minimal dependencies, while also seamlessly leveraging the IBM Integrated Accelerator for AI.
Popular open-source tools
Use Anaconda on IBM Z and LinuxONE, and leverage industry-standard packages such as Scikit-learn, NumPy and PyTorch, with cost-effective zCX containers.
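The point of these packages is that standard data-science code runs unchanged on the platform. The sketch below trains a small scikit-learn classifier on synthetic data; the features and "fraud" label are invented for illustration only.

```python
# Sketch: ordinary scikit-learn/NumPy code, as it would run unchanged
# from an Anaconda environment on IBM Z or LinuxONE.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))             # 8 synthetic transaction features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic "fraud" label

clf = LogisticRegression().fit(X, y)
print(round(clf.score(X, y), 2))           # training accuracy
```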
Learn how to enable AI solutions in business-critical use cases, such as fraud detection and credit risk scoring, on the platform.
Operationalize anti-fraud on the IBM z16™ to reduce fraud losses in banking, credit cards and payments.
Take control of your data encryption keys and hardware security modules in the cloud with the Cryptographic Module Validation Program.
Learn how to get started on your journey to AI on the IBM zSystems platform.
Discover how you can use Linux to make your AI analysis simpler, more secure, and have real-time processing at scale.
Improve systems management, IT operations, application performance and operational resiliency with AI on the mainframe.
¹ With IBM LinuxONE Emperor 4, process up to 300 billion inference requests per day with 1ms response time using a Credit Card Fraud Detection model.
DISCLAIMER: Performance result is extrapolated from IBM internal tests running local inference operations in an IBM LinuxONE Emperor 4 LPAR with 48 cores and 128 GB memory on Ubuntu 20.04 (SMT mode) using a synthetic credit card fraud detection model (https://github.com/IBM/ai-on-z-fraud-detection) exploiting the Integrated Accelerator for AI. The benchmark was running with 8 parallel threads each pinned to the first core of a different chip. The lscpu command was used to identify the core-chip topology. A batch size of 128 inference operations was used. Results may vary.