Welcome to the Machine Learning for IBM z/OS content solution, your homepage for technical resources.
Machine Learning for IBM z/OS (MLz) is an enterprise machine learning solution for IBM Z. Combining an intuitive user interface and APIs for various services, enterprise teams can import and create models, as well as deploy and monitor them in applications running in z/OS and beyond. When coupled with the power of the Telum II integrated accelerator for AI on the IBM z17, MLz users can achieve real-time insights against every transaction, driving new value across business applications and IT operations.
Use this page to help your enterprise team plan to get started with MLz.
Leverage Machine Learning for IBM z/OS for Enterprise AI: Telum II on-chip AI Accelerator on z17 with enhanced security and simplification
Machine Learning for IBM z/OS (MLz) provides a suite of tools for developing, deploying, and managing machine learning models across your enterprise. Model operations can be performed using APIs or a tailored, intuitive UI. With MLz, enterprises can get real-time insights on their mission-critical, transactional workloads.
MLz 3.2 provides the following new features and enhancements:
- Dual Control
- Scoring timeout for online scoring
- Serving ID
- Telum II
- Drift support
- Spark 3.5 support
- Java v17 support
- Java v11 support
- JupyterHub support
The following use cases outline some common applications of MLz. MLz’s flexibility enables a wide range of possibilities across diverse industries. To read more about each use case, and the other relevant technologies used to implement it, visit the Journey to AI on IBM Z and LinuxONE content solution.
Achieve real-time fraud scoring by co-locating an AI model with the IBM Z applications managing your transactions. This enables you to analyze 100% of the transactions running through your on-premises systems, significantly reducing risk to both your business and your customers.
Eliminate the manually intensive auditing process with a machine learning model trained to detect fraud in your claims processing systems. This enables you to rapidly analyze claims and significantly reduce overall human intervention.
By training a model on risk exposure and co-locating it with transactional workloads, unlock insights about risk on a transaction-to-transaction basis.
Co-locate an inferencing model for analyzing loan applications, alongside the automation of rules-based business decisions, to achieve rapid insights, low latency, and minimal exposure of customer and lender data.
Co-locate an inferencing model for analyzing correlations between weather patterns and insurance risk alongside transactional systems, allowing you to achieve rapid insights.
Enable accurate analysis of satellite imagery with a deep learning model deployed to the IBM Z platform, ensuring sensitive handling of aerial images while driving rapid insights.
Achieve effective and energy-efficient computer vision for medical imaging through the training and inferencing of AI models on the IBM Z platform.
z/OS system programmers primarily focus on installing and configuring MLz instances.
Check out the Roadmap for installing and configuring ML for z/OS Enterprise for a comprehensive overview of the installation and configuration process.
Before beginning installation and configuration, take the following preparatory steps:
MLz requires other IBM and open source technologies. At minimum, the following technologies are required:
- IBM z17, z16, z15™, z14, z13®, or zEnterprise® EC12 system
- z/OS 2.5 or 2.4
- z/OS UNIX System Services configured
- z/OS Integrated Cryptographic Service Facility (ICSF)
- z/OS OpenSSH
- IBM 64-bit SDK for z/OS Java Technology Edition Version 8 SR7, or Version 11.0.17 or later
- IBM WebSphere Application Server for z/OS Liberty version 22.0.0.9 or later
- Python AI Toolkit for IBM z/OS (for MLz Enterprise Edition)
Note: For more details, see Installing prerequisite hardware and software for ML for IBM z/OS.
MLz performs best when adequate system capacity in terms of server, processor, memory, and disk space is available. Allocate sufficient capacity for MLz on your z/OS system to meet the demands of your enterprise machine learning workload.
Note: Additional capacity may be needed depending on your AI workloads. This may be influenced by the number of concurrent training jobs, the type of models being trained, the size of the training data set, the number of data features, and the specification of model parameters. For guidance on planning capacity for your training workloads, view Planning system capacity for ML for IBM z/OS: Capacity consideration for training workload on Z.
You can install MLz using System Modification Program/Extended (SMP/E). Each purchased order for MLz comes with the following materials:
- SMP/E images
- Program directories
- License information
- Available maintenance packages
MLz also offers various PTFs, which help keep instances current with the latest technical and service updates. See Prerequisites and Maintenance for Machine Learning for IBM z/OS.
After obtaining and completing all prerequisites, follow Installing and configuring ML for IBM z/OS to complete installation.
You can configure MLz with the Machine Learning for IBM z/OS Configuration tool. This user-friendly interface offers a streamlined, step-by-step setup experience.
You will encounter a number of terms when configuring MLz; understanding them is essential to a smooth configuration process. View Commonly used terms and definitions for installation and configuration for definitions.
A z/OS security administrator must configure a RACF keyring and a corresponding MLz setup user ID. For convenient access control, it is recommended that you use this same user ID for MLz services.
For details about configuring user IDs, see Configuring user ID for setting up Machine Learning for IBM z/OS Enterprise and Configuring additional user IDs.
For details about configuring RACF, see Configuring a keyring-based keystore (JCERACFKS or JCECCARACFKS) for MLz.
A z/OS network administrator must set up a number of required and optional ports. These ports enable communication across systems and services.
For more details, view Configuring ports for ML for IBM z/OS.
A z/OS database administrator must plan for the storage of metadata objects in the repository of your choice. This can be an embedded database with MLz or a
For more details, view Creating metadata objects for the repository service.
To launch the MLz Configuration tool using a shell script, locate $IML_INSTALL_DIR/iml-utilities/configtool in your directory. The Configuration tool provides a step-by-step process for both new and experienced z/OS system administrators to complete instance and service configuration.
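As a small sketch, the launch path can be resolved programmatically before invoking the script from a z/OS UNIX shell. Note that the fallback directory below is purely illustrative, not a documented default; only `$IML_INSTALL_DIR/iml-utilities/configtool` comes from the documentation.

```python
import os

# $IML_INSTALL_DIR is set during MLz installation; the fallback path here
# is an assumption for illustration only, not a documented default.
install_dir = os.environ.get("IML_INSTALL_DIR", "/usr/lpp/IBM/mlz")

# Build the documented relative path to the Configuration tool script.
configtool = os.path.join(install_dir, "iml-utilities", "configtool")
print(configtool)  # run this script from a z/OS UNIX shell to start the tool
```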
As you near completion, you can review inputs, including the various file paths where configurations will occur. It is recommended to save these directory locations so that you may directly adjust configurations later as needed.
Follow Configuring MLz for detailed instructions.
You must select one of two options for storing important metadata about models. Selecting the embedded MLz database doesn’t require further configuration, but limits the availability of MLz services. Selecting
You must specify host system details for both the MLz UI and core services, as well as credentials for the default admin user and the MLz application developer. You can also enable audit trace for governance of model data, model versions, and model deployments. Lastly, if you had selected
You must create a name for the MLz runtime environment, as well as provide details for both the Spark runtime engine and the Python runtime engine. The Spark runtime engine is a high-performance analytics engine for large-scale data processing able to perform in-memory computing, and you will need to allocate various ports for its configuration, as well as determine the maximum number of retries for port binding. You will also have the option of enabling client-side Spark authentication. The Python runtime environment allows for the use of a Python-based scoring service, as well as the enablement of a Jupyter Notebook server for model development in the MLz UI. For Python, you will need to specify the Python virtual environment name and the Python packages installation source.
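The port-binding retry behavior described above can be pictured with a short sketch. This is illustrative standard-library code, not the MLz or Spark implementation: it simply tries successive ports until one binds, up to a bounded number of retries.

```python
import socket

def bind_with_retries(start_port: int, max_retries: int) -> int:
    """Try successive ports until one binds, mimicking a bounded retry policy."""
    for attempt in range(max_retries + 1):
        port = start_port + attempt
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))
            sock.close()
            return port          # first port that binds successfully
        except OSError:
            sock.close()         # port in use; retry with the next one
    raise RuntimeError(f"no free port after {max_retries} retries")

print(bind_with_retries(38000, max_retries=5))
```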
You must provide a name for the MLz scoring service. A scoring service is where your AI models are deployed for real-time inferencing. You can choose to create a standalone scoring instance or set it up as a cluster for high availability. Creating a clustered scoring service requires that you had selected
You must choose whether to enable Trustworthy AI features for your MLz instance. If enabled, you will need to provide network and authentication details for an IBM z/OS Container Extensions (zCX) instance to support the features.
Data scientists are responsible for the development, import, and monitoring of models within an MLz instance. To understand the supported and unsupported data artifacts, review the Supported algorithms, data sources, data types, and model types.
Certain model types can be developed and trained within MLz, while others must be trained on another platform and imported into MLz.
The following model types can be trained, saved, and deployed in MLz:
- SparkML
- Scikit-learn
- XGBoost
- ARIMA or Seasonal ARIMA
The following model types can be developed on another platform and imported to MLz to be saved and deployed:
- PMML
- ONNX
- SnapML (when serialized to PMML before import)
The following tasks represent the most common responsibilities of data scientists working in MLz:
You can import models to MLz directly through the UI from your local system or from IBM watsonx.ai:
- If importing from file, you will need to provide a name for the file to be displayed in the MLz UI, and you will need to select the model type.
- If importing from watsonx.ai, you will need to provide connection and authentication details.
You can create models in MLz with a Jupyter Notebook server, which can be accessed directly in the UI.
To create a model in the Jupyter Notebook editor, see Developing a model in the integrated Notebook Editor.
As data changes over time, model quality and prediction accuracy can degrade. As a result, it is important to continually monitor model accuracy and performance, particularly when leveraging machine learning and deep learning algorithms.
In MLz, you can create model evaluations that assess a model’s predictive accuracy. These evaluations can run ad hoc or be scheduled to run repeatedly. For more details, see Evaluating and reevaluating a model.
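At its core, an accuracy evaluation compares model predictions against labeled outcomes. The sketch below illustrates the idea in plain Python; it is not the MLz evaluation engine.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predictions) == len(labels), "inputs must be the same length"
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Example: 3 of 4 fraud predictions match the audited outcomes.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

A scheduled evaluation would simply rerun a check like this against fresh labeled data at a regular interval.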
Trustworthy AI offers explanations for individual AI predictions, providing transparency about the decision making process.
To enable Trustworthy AI in MLz, you can create a monitor. Each monitor can be tailored for local explainability or drift detection. Follow the instructions for Creating a monitor for more details.
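Drift detection, in essence, flags when the statistical profile of incoming data diverges from the training baseline. The following mean-shift check is a simplified illustration using only the standard library; MLz's drift monitors use their own methodology.

```python
import statistics

def mean_drift(baseline, current, threshold=2.0):
    """Flag drift when the current mean moves more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - mu)
    return shift > threshold * sigma

# A feature whose distribution has shifted well outside the baseline.
baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
drifted  = [15.0, 16.2, 15.5, 14.8, 15.9, 16.1]
print(mean_drift(baseline, drifted))  # True: the feature has drifted
```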
Beginning in MLz Enterprise Edition v3.1, you can generate visualizations for explainability and drift results. These visualizations show change over time and enhance understanding of the model and its performance. For more details, see Viewing explanation results for the monitor and Viewing drift results for the monitor.
Check out more Trustworthy AI features in Trustworthy AI.
As an application developer, you can use MLz to deploy a model, allowing machine learning predictions to be made against the data relevant to your application. In the Models tab, navigate to a model row with a ready status (a green check mark is shown), open the overflow menu, and click “Create deployment”. Then input the relevant details for the deployment, including a display name, a deployment type (online or batch), and a scoring service type (standalone or cluster).
Once completed, your deployment will appear in the Deployments tab of the UI.
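Once a model is deployed for online scoring, applications typically drive inferencing over REST. The sketch below builds a hypothetical scoring request body; the field names and shape are assumptions for illustration, so consult the MLz REST API documentation for the actual endpoint and schema.

```python
import json

# Hypothetical online-scoring request body; the field names here are
# illustrative assumptions, not the documented MLz schema.
scoring_request = {
    "fields": ["amount", "merchant_category", "hour_of_day"],
    "values": [[250.00, "5411", 23]],  # one transaction to score
}
body = json.dumps(scoring_request)
print(body)  # this JSON string would be POSTed to the scoring endpoint
```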
By deploying machine learning models with your business applications running on IBM Z, a host of new insights and capabilities can become possible. This topic is thoroughly covered in the publication, Planning AI infusion into applications on IBM Z, as well as the article Deploy AI models for real-time inferencing in your z/OS
For
For further detail, view Planning AI infusion into applications on IBM Z.
For
Refer to the Deploy AI models for real-time inferencing article for further detail.
For WebSphere, a Java API can be used to call the MLz scoring feature configured in a WebSphere server.
For further detail, view Planning AI infusion into applications on IBM Z.
An ODM rule driven by the runtime application can be enhanced to reference a model deployed to MLz, and then use the model’s prediction in the rule. ODM communicates with MLz through a highly efficient interface.
Many
For further detail, view Planning AI infusion into applications on IBM Z.
A number of APIs are available to connect MLz services with applications running both on and off the z/OS platform. Follow the links provided to view relevant information for each API.
MLz can be configured for high availability (HA) within a sysplex when it is set up with
View IBM Documentation for further detail on this topic.
USS skills are recommended throughout the configuration process. To view what actions will be supported with USS skills, view IBM Documentation.
The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models, enabling interoperability between different frameworks. You can import, deploy, and manage ONNX models on MLz by using the required ONNX compiler service.
View IBM Documentation for further detail on this topic.
MLz’s user interface allows for models to be created or imported without extensive knowledge of z/OS. That being said, it’s critical to understand the algorithms, data sources, data types, and model types supported by MLz, and how that relates to available options for creating your machine learning or deep learning models. For example, a data scientist requiring the use of
View IBM Documentation for further detail on this topic.
View technical content for the end-to-end use of MLz.
View high-level information about the MLz product, including pricing and related offerings.
View guidance for infusing AI models with applications running in CICS TS, IMS TS, WebSphere, and z/TPF.
View detailed guidance for configuring and optimizing AI integration with IBM zSystems.
Read a business-oriented review for solving challenges around instant payments utilizing MLz.
Join a community of practitioners and experts discussing AI for the IBM Z and LinuxONE platforms.
Build and train models anywhere, and deploy them on IBM Z and LinuxONE infrastructure.
Updated content to reflect Machine Learning for z/OS v3.2 on IBM Z systems for IBM z17.