Machine Learning for IBM z/OS
Everything you need to get started quickly.

Welcome to the Machine Learning for IBM z/OS content solution, your homepage for technical resources.

Machine Learning for IBM z/OS (MLz) is an enterprise machine learning solution for IBM Z. Combining an intuitive user interface and APIs for various services, enterprise teams can import and create models, as well as deploy and monitor them in applications running in z/OS and beyond. When coupled with the power of the Telum II integrated accelerator for AI on the IBM z17, MLz users can achieve real-time insights against every transaction, driving new value across business applications and IT operations.

Use this page to help your enterprise team plan to get started with MLz.

See what Machine Learning for IBM z/OS can do for your business.
Announcements

Leverage Machine Learning for IBM z/OS for Enterprise AI: Telum II on-chip AI Accelerator on z17 with enhanced security and simplification

Read more
Big picture

1. Plan for and install MLz via SMP/E and PTFs.
2. Set up MLz using an intuitive configuration tool.
3. Import models from file, or develop new ones with an embedded Jupyter server.
4. Deploy your models to applications running on IBM Z and beyond.
5. Continually monitor models for performance and impact.
6. Generate LIME and SHAP explainers of model predictions.

How to get started
Key capabilities

Machine Learning for IBM z/OS (MLz) provides a suite of tools for the development, deployment, and management of machine learning models across your enterprise. Through both an intuitive interface and a suite of APIs, you can deploy models as services to applications running on IBM Z, allowing for real-time insights against your mission-critical workloads.

MLz 3.2 provides a variety of new features and enhancements, including:

  • Dual Control
  • Scoring timeout for online scoring
  • Serving ID
  • Telum II on-chip AI accelerator support
  • Drift support
  • Spark 3.5 support
  • Java v17 support
  • Java v11 support
  • JupyterHub support
Use cases for MLz

By using MLz to deploy machine learning models to your core business applications, a number of use cases become available. The following examples are just a selection of the possibilities. To read more about each use case, and the other relevant technologies used to implement it, visit the Journey to AI on IBM Z and LinuxONE content solution.

Achieve real-time fraud scoring by co-locating an AI model with the IBM Z applications managing your transactions. This enables you to analyze 100% of the transactions running through your on-premises systems, significantly reducing risk to both your business and your customers.

Eliminate manually intensive auditing by using a machine learning model trained to detect fraud in your claims processing systems. This enables you to rapidly analyze claims and significantly reduce overall human intervention.

Train a model on risk exposure and co-locate it with transactional workloads to unlock insights about risk on a transaction-by-transaction basis.

Co-locate an inferencing model for analyzing loan applications, alongside the automation of rules-based business decisions, to achieve rapid insights, low latency, and minimal exposure of customer and lender data.

Co-locate an inferencing model for analyzing correlations between weather patterns and insurance risk alongside transactional systems, allowing you to achieve rapid insights.

Enable accurate analysis of satellite imagery with a deep learning model deployed to the IBM Z platform, ensuring that sensitive aerial images are handled securely while driving rapid insights.

Achieve effective and energy-efficient computer vision for medical imaging through the training and inferencing of AI models on the IBM Z platform.

Prerequisites

In the context of MLz, the z/OS system programmer is primarily focused on installing and configuring the solution for use by their organization.

The installation and configuration roadmap, including the priority and skills required at each step, is thoroughly detailed in Machine Learning for z/OS documentation.

Before beginning installation and configuration, take the following preparatory steps:

In order to successfully install and configure MLz, some additional IBM and open source technologies should be in place.

At a minimum, the following technologies are required:

  • IBM z17, z16, z15™, z14, z13®, or zEnterprise® EC12 system
  • z/OS 2.5 or 2.4
  • z/OS UNIX System Services configured
  • z/OS Integrated Cryptographic Service Facility (ICSF)
  • z/OS OpenSSH
  • IBM 64-bit SDK for z/OS Java Technology Edition Version 8 SR7, or Version 11.0.17 or later
  • IBM WebSphere Application Server for z/OS Liberty version 22.0.0.9 or later
  • Python AI Toolkit for IBM z/OS (for MLz Enterprise Edition)

Note: Additional prerequisites will be required if you plan for specific use cases, for example, to use the IBM z16 on-chip AI acceleration for scoring ONNX models or to run a scoring service in a CICS region. For a full listing of prerequisites broken down by use case, view Planning system capacity for ML for IBM z/OS: Basic system capacity.

MLz performs best when adequate system capacity in terms of server, processor, memory, and disk space is available. Allocate sufficient capacity for MLz on your z/OS system to meet the demands of your enterprise machine learning workload.

CPU per LPAR/server    Memory per LPAR/server    DASD/disk space per LPAR/server
1 GCP / 4 zIIPs        100 GB                    100 GB

Note that additional capacity may be needed depending on the training workload you plan to run on IBM Z. This may be influenced by the number of concurrent training jobs, the type of models being trained, the size of the training data set, the number of data features, and the specification of model parameters. For guidance on planning capacity for your training workloads, view Planning system capacity for ML for IBM z/OS: Capacity consideration for training workload on Z.

Installing MLz

MLz is installable via SMP/E. When your purchase order for MLz is fulfilled, you should receive the following materials:

  • SMP/E images
  • Program directories
  • License information
  • Available maintenance packages

Along with these materials, ensure that you’ve obtained the PTFs containing the latest technical and service updates.

Once all necessary materials are retrieved, complete the installation instructions provided in Installing and configuring ML for IBM z/OS.

Planning for configuration

MLz can be set up with the MLz configuration tool, a graphical interface that walks you through each key step. Before using the MLz configuration tool, complete the following steps:

USS skills are necessary throughout the configuration process, including for configuring essential user IDs, port definitions, and authentication for the product.

For a full list of which configuration steps require USS skills, view Roadmap for installing and configuring ML for z/OS Enterprise.

There are a number of terms you will encounter when configuring MLz that are essential for understanding how the product is set up.

For a list of essential terms to know, view Commonly used terms and definitions for installation and configuration.

Your security administrator can help you configure an MLz setup user ID, which is necessary for using the MLz configuration tool. For ease of access control, it is recommended that you also plan to use this same user ID for MLz services.

For further information about configuring user IDs, view Configuring user ID for setting up Machine Learning for IBM z/OS Enterprise.

Your security administrator will also be responsible for ensuring that a RACF key ring is created for the user IDs required by MLz, and will help to establish the requisite IDs and authorizations. They will also be involved in securing network communications for MLz with AT-TLS.

For further information about securing network connections and configuring the keystore, view Configuring secure network communications for MLz.

Your network administrator can help you reserve the various ports which need to be dedicated to the purpose of MLz communicating across systems and services. A number of required and optional ports should be accounted for prior to using the MLz configuration tool.

For further information about configuring ports for MLz, view Configuring ports for ML for IBM z/OS.

Your z/OS database administrator can help you plan for the storage of metadata objects in the repository of your choice: an embedded database with MLz or a Db2 for z/OS database.

For further information about creating and storing metadata objects, view Creating metadata objects for the repository service.

Configuring MLz

Once you have completed the previously mentioned steps, you can start the MLz configuration tool using a shell script located in the $IML_INSTALL_DIR/iml-utilities/configtool directory. Once opened, the MLz configuration tool will guide you through the process of setting up the various components and services of the product.

As you near completion, you will have an opportunity to review inputs prior to configuration, including the various file paths where configurations will occur. It is recommended that you save these directory locations so that you may directly adjust configurations later as needed.

For further information about starting and using the MLz configuration tool, view Configuring MLz.

Configuration of MLz involves setting up the following major components:

You must select one of two options for storing important metadata about models. Selecting the embedded MLz database doesn’t require further configuration, but limits the availability of MLz services. Selecting Db2 for z/OS allows for high availability, but requires that you have a ready Db2 system, and also requires that you provide network, authentication, and schema details for MLz metadata objects.

You must specify host system details for both the MLz UI and core services, as well as credentials for the default admin user and the MLz application developer. You can also enable audit trace for governance of model data, model versions, and model deployments. Lastly, if you selected Db2 for z/OS as your metadata repository, you can add MLz core services to a cluster, which routes requests to an alternate MLz instance if one becomes unavailable.

You must create a name for the MLz runtime environment, as well as provide details for both the Spark runtime engine and the Python runtime engine. The Spark runtime engine is a high-performance analytics engine for large-scale data processing that is capable of in-memory computing; you will need to allocate various ports for its configuration and determine the maximum number of retries for port binding. You will also have the option of enabling client-side Spark authentication. The Python runtime environment allows for the use of a Python-based scoring service, as well as the enablement of a Jupyter Notebook server for model development in the MLz UI. For Python, you will need to specify the Python virtual environment name and the Python package installation source.

You must provide a name for the MLz scoring service. A scoring service is where your AI models are deployed for real-time inferencing. You can choose to create a standalone scoring instance, or set it up in a cluster for high availability. Creating a clustered scoring service requires that you selected Db2 for z/OS as your metadata repository, and requires you to either join an available cluster or create a new one by providing network details for the Sysplex Distributor.

You must choose whether to enable Trustworthy AI features for your MLz instance. If enabled, you will need to provide network and authentication details for an IBM z/OS Container Extensions (zCX) instance to support the features.

Supported model types

The role of the data scientist largely centers on developing models, importing them to MLz, and monitoring them. To that end, it is recommended that you review the algorithms, data sources, data types, and model types supported by MLz. For a thorough list of what is supported, view Supported algorithms, data sources, data types, and model types.

The following model types can be imported to MLz, or trained, saved, and deployed in MLz:

  • SparkML
  • Scikit-learn
  • XGBoost
  • ARIMA or Seasonal ARIMA
  • Encoder large language models

The following model types can be developed on another platform and imported to MLz to be saved and deployed (see the conversion sketch after this list):

  • PMML
  • ONNX
  • SnapML (when serialized to PMML before import)
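
As a concrete illustration of this import path, the following is a minimal sketch that trains a scikit-learn model and converts it to ONNX with the open source skl2onnx package; the resulting .onnx file could then be imported through the MLz UI. The skl2onnx package and the file name are assumptions for illustration, not part of MLz itself.

```python
# Minimal sketch: convert a scikit-learn model to ONNX for import into MLz.
# Assumes scikit-learn and skl2onnx are installed; names are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Declare the input signature: batches of rows with 4 float features each.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)

# Write the serialized model; this file would be imported via the MLz UI.
with open("iris_rf.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```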
Common data scientist tasks on MLz

The following tasks represent the most common responsibilities of data scientists working in MLz:

You can import models to MLz directly through the UI. In the Models tab, click “Import model”, and then select the relevant tab for whether you want to import from file or from IBM Cloud Pak for Data.

If importing from file, you will need to provide a name for the file to be displayed in the MLz UI, and you will need to select the model type.

If importing from IBM Cloud Pak for Data, you will need to provide connection and authentication details, as well as provide a name for the file and select the model type.

You can create models in MLz with a Jupyter Notebook server which can be accessed directly from the UI. In the Models tab, click “Create model” to open a view of the Jupyter Notebook interface which you can use to create and save new models for MLz. If you’re unfamiliar with the interface or basic workflows of notebooks, view the open source documentation published by Project Jupyter.
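
For orientation, the following is a minimal sketch of the kind of notebook cell a data scientist might run there: training a scikit-learn model and serializing it. The dataset, model, and file name are placeholders; the exact mechanism for saving a model into the MLz repository is described in the MLz documentation.

```python
# Minimal notebook sketch: train a scikit-learn model and serialize it.
# Assumes scikit-learn and joblib are available in the Python environment.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import joblib

# Synthetic stand-in for your training data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.3f}")

# Serialize the trained model for saving as a scikit-learn model.
joblib.dump(model, "lr_model.joblib")
```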

When using machine learning and deep learning with your transactional applications, it’s important to continually ensure that deployed models are working as intended. MLz offers a method for evaluating the performance of SparkML and PMML models to ensure accuracy.

To do so, select a model from the Models tab, open the overflow menu, and click “Create evaluation”. Then add a data source and choose whether to run the evaluation over a time range, and click “Create”. The evaluation for deployments of that model can then be triggered using the overflow menu in the Deployments tab. The evaluation can also be scheduled to run repeatedly.

For full detail about this procedure, view Evaluating and reevaluating a model.
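
Conceptually, an evaluation compares a deployment’s predictions against labeled outcomes over the chosen data source or time range. The following sketch shows the kind of metrics such an evaluation produces, computed here with open source scikit-learn on hypothetical labeled results; the column names and values are illustrative only.

```python
# Conceptual sketch: the metrics a model evaluation computes by comparing
# predictions with actual outcomes. Data and column names are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

results = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 0, 1, 0],  # what the deployed model said
    "actual":     [1, 0, 0, 1, 0, 1, 1, 0],  # what actually happened
})

print("Accuracy: ", accuracy_score(results["actual"], results["prediction"]))
print("Precision:", precision_score(results["actual"], results["prediction"]))
print("Recall:   ", recall_score(results["actual"], results["prediction"]))
```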

A new dashboard for generating and managing SHAP and LIME explainers is available in MLz Enterprise Edition v3.1. You can generate visualized local explanations of specific AI predictions derived from a given model deployment.

To do so, click “Explanations” in the global navigation, or click “Explain” from the action menu of a given model deployment in the Deployments table. Then you will create a subscription, which associates the model deployment with available services such as LIME and SHAP explanations. Upon filling in the necessary details and uploading training data, you can select the explainer type you wish to use in the subscription. Then you can filter the transactions available within the deployment to target a specific subset you want to make available for explanation.

Once the subscription is ready, you can search and/or filter for the transactions for which you want to generate local explanations. You can then create the explanations. Once loaded, they provide visualized results of feature importance using LIME and/or SHAP (depending on your selections during subscription creation).

For full detail about the available trustworthy AI features, view Trustworthy AI.
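
To make the idea of a local explanation concrete, the following sketch generates per-feature contributions for a single prediction using the open source shap package directly. MLz produces comparable visualizations through its dashboard rather than through this API; the dataset and model here are stand-ins.

```python
# Conceptual sketch: a SHAP local explanation for one prediction,
# using the open source shap package. Dataset and model are stand-ins.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain a single row

# Each value is one feature's contribution to this prediction.
print(shap_values)
```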

Deploying models in MLz

As an application developer, you can use MLz to deploy a model, allowing machine learning predictions to be made against the data relevant to your application. In the Models tab, navigate to a row displaying a ready status (a green check mark will be shown), open the overflow menu, and click “Create deployment”. Then input the relevant details for the deployment, including a display name, a deployment type (online or batch), and a scoring service type (standalone or cluster).

Once completed, your deployment will appear in the Deployments tab of the UI.

Infusing AI with IBM Z applications

By deploying machine learning models with your business applications running on IBM Z, a host of new insights and capabilities becomes possible. This topic is thoroughly covered in the publication Planning AI infusion into applications on IBM Z, as well as in the article Deploy AI models for real-time inferencing in your z/OS IMS transactions. The following are brief excerpts and related details to provide an overview of the available infusion methods:

For CICS, the MLz scoring engine can be hosted in a WebSphere Liberty server within the CICS runtime and provides a program ALNSCORE that can be invoked with EXEC CICS LINK, passing the data using CICS channels and containers.

For further detail, view Planning AI infusion into applications on IBM Z.

For IMS, the MLz WOLA (WebSphere Optimized Local Adapter) scoring interface can be invoked from IMS COBOL applications for high performance real-time inferencing in IMS online transactions.

Refer to the Deploy AI models for real-time inferencing article for further detail.

For WebSphere, a Java API can be used to call the MLz scoring feature configured in a WebSphere server.

For further detail, view Planning AI infusion into applications on IBM Z.

An ODM rule driven by the runtime application can be enhanced to reference a model deployed to MLz, and then use the prediction from the model in the rule. ODM communicates with MLz through a highly efficient interface.

Many CICS and IMS applications already use ODM rules to inform their decisions, in which case a rule called by the application can be enhanced with machine learning to drive an AI model deployed to MLz. If the application does not currently use ODM rules, it can be updated to use an ODM rule that drives the AI model via MLz, and hence include additional insight in the result from the rule.

For further detail, view Planning AI infusion into applications on IBM Z.

Available APIs

A number of APIs are available to connect MLz services with applications running both on and off the z/OS platform. Follow the links provided to view relevant information for each API.
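
As one hedged example of how such an API is typically consumed, the sketch below posts a record to an online scoring deployment over REST. The host, URL path, authentication scheme, and payload shape are assumptions for illustration only; consult the linked API documentation for the actual endpoint contract.

```python
# Hedged sketch: calling an online scoring deployment over REST.
# Host, path, token handling, and payload shape are assumptions; see the
# MLz API documentation for the real contract.
import requests

MLZ_HOST = "https://mlz.example.com:12345"   # hypothetical host and port
DEPLOYMENT_ID = "fraud-scoring-1"            # hypothetical deployment ID
TOKEN = "..."                                # obtained via the MLz auth API

resp = requests.post(
    f"{MLZ_HOST}/scoring/online/{DEPLOYMENT_ID}",  # hypothetical path
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"fields": ["amount", "merchant"], "values": [[250.0, "A123"]]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # prediction(s) for the submitted record
```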

Learn more

General questions

MLz can be configured for high availability (HA) within a sysplex when it is set up with Db2 for z/OS as the metadata repository. An existing MLz instance can be quickly cloned onto a new port within the same LPAR, or in a separate LPAR using the Sysplex Distributor. This allows MLz core services—including model training, deployment, batch scoring, ingestion, repository, and data connection management—to fail over to another active system if necessary.

View IBM Documentation for further detail on this topic.

USS skills are recommended throughout the configuration process. To see which actions require USS skills, view IBM Documentation.

The Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models, enabling interchange between different frameworks. You can import, deploy, and manage ONNX models on MLz by using the required ONNX compiler service.

View IBM Documentation for further detail on this topic.

MLz’s user interface allows for models to be created or imported without extensive knowledge of z/OS. That being said, it’s critical to understand the algorithms, data sources, data types, and model types supported by MLz, and how that relates to available options for creating your machine learning or deep learning models. For example, a data scientist requiring the use of Db2 for z/OS data for a Python or Scala model will need to understand how to connect to their data source through the use of MDS.

View IBM Documentation for further detail on this topic.

Documentation

Explore the documentation
Technical resources

IBM Documentation

View technical content for the end-to-end use of MLz.

Read the documentation
IBM Products page

View high-level information about the MLz product, including pricing and related offerings.

View the webpage
Planning AI infusion into applications on IBM zSystems

View guidance for infusing AI models with applications running in CICS TS, IMS TS, WebSphere, and z/TPF.

Read the document
Redbook: Optimized Inferencing and Integration with AI on IBM zSystems

View detailed guidance for configuring and optimizing AI integration with IBM zSystems.

Read the Redbook
Redbook: Solving challenges of instant payments by using AI on IBM zSystems

Read a business-oriented review for solving challenges around instant payments utilizing MLz.

Read the Redbook
AI on IBM Z & LinuxONE community

Join a community of practitioners and experts discussing AI for the IBM Z and LinuxONE platforms.

Join the community
Related solutions

Journey to AI on IBM Z and LinuxONE

Build and train models anywhere, and deploy them on IBM Z and LinuxONE infrastructure.

What's new

Last updated: July 18, 2025

Updated content to reflect Machine Learning for z/OS v3.2 on IBM Z systems for IBM z17.