IBM Watson Machine Learning for z/OS
Everything you need to get started quickly.

Welcome to the IBM Watson Machine Learning for z/OS content solution, your homepage for technical resources.

IBM Watson Machine Learning for z/OS (WMLz) is an enterprise machine learning solution for IBM Z. Combining an intuitive user interface and APIs for various services, enterprise teams can import and create models, as well as deploy and monitor them in applications running in z/OS and beyond. When coupled with the power of the integrated accelerator for AI on the IBM z16, WMLz users can achieve real-time insights against every transaction, driving new value across business applications and IT operations.

Use this page to help your enterprise team plan to get started with WMLz.

See what IBM Watson Machine Learning for z/OS can do for your business.
Announcements

Try the free WMLz Online Scoring Community Edition

Big picture

1. Plan for and install WMLz via SMP/E and PTFs
2. Set up WMLz using an intuitive configuration tool
3. Import models from file, or develop new ones with an embedded Jupyter server
4. Deploy your models to applications running on IBM Z and beyond
5. Continually monitor models for performance and impact

How to get started
Key capabilities

Watson Machine Learning for z/OS (WMLz) provides a suite of tools for developing, deploying, and managing machine learning models across your enterprise. Through both an intuitive interface and a suite of APIs, you can deploy models as services to applications running on IBM Z, allowing for real-time insights against your mission-critical workloads.

WMLz 3.1 introduces a variety of new features and enhancements, including:

  • WMLz configuration tool: an intuitive browser-based set-up tool for WMLz Core and WMLz Enterprise (including the ability to configure a high availability scoring cluster)
  • Spark 3.2 support
  • Python AI Toolkit for IBM z/OS support
  • An embedded database providing a repository for WMLz metadata
  • Java v11 support
  • Enhanced Jupyter Notebook server access from the WMLz UI dashboard
  • Watson Core time series inferencing support
  • User certificate support for JWT token generation
Use cases for WMLz

By using WMLz to deploy machine learning models to your core business applications, a number of use cases become available. The following examples are just a selection of the possibilities. To read more about each use case, and the other technologies used to implement it, visit the Journey to AI on IBM Z and LinuxONE content solution.

Achieve real-time fraud scoring by co-locating an AI model with the IBM Z applications that manage your transactions. This enables you to analyze 100% of the transactions running through your on-premises systems, significantly reducing risk to both your business and your customers.

Eliminate manually intensive auditing by using a machine learning model trained to detect fraud in your claims processing systems. This enables you to analyze claims rapidly and significantly reduce human intervention.

Train a model on risk exposure and co-locate it with transactional workloads to unlock insights about risk on a transaction-by-transaction basis.

Co-locate an inferencing model for analyzing loan applications, alongside the automation of rules-based business decisions, to achieve rapid insights, low latency, and minimal exposure of customer and lender data.

Co-locate an inferencing model for analyzing correlations between weather patterns and insurance risk alongside transactional systems, allowing you to achieve rapid insights.

Enable accurate analysis of satellite imagery with a deep learning model deployed to the IBM Z platform, keeping sensitive aerial images secure while driving rapid insights.

Achieve effective and energy-efficient computer vision for medical imaging through the training and inferencing of AI models on the IBM Z platform.

Prerequisites

In the context of WMLz, the z/OS system programmer is primarily focused on installing and configuring the solution for use by their organization.

The installation and configuration roadmap, including the priority and skills required at each step, is thoroughly detailed on IBM Documentation.

Before beginning installation and configuration, take the following preparatory steps:

To successfully install and configure WMLz, some additional IBM and open source technologies must be in place.

The minimum required technologies include:

  • IBM z16, z15™, z14, z13®, or zEnterprise® EC12 system
  • z/OS 2.5 or 2.4
  • z/OS UNIX System Services configured
  • z/OS Integrated Cryptographic Service Facility (ICSF)
  • z/OS OpenSSH
  • IBM 64-bit SDK for z/OS Java Technology Edition Version 8 SR7, or Version 11.0.17 or later
  • IBM WebSphere Application Server for z/OS Liberty version 22.0.0.9 or later
  • Python AI Toolkit for IBM z/OS (for WMLz Enterprise Edition)

Note: additional prerequisites are required for specific use cases, for example, using the IBM z16 on-chip AI acceleration to score ONNX models or running a scoring service in a CICS region. For a full listing of prerequisites broken down by use case, view IBM Documentation.

WMLz performs best when adequate system capacity is available in terms of server, processor, memory, and disk space. Allocate sufficient capacity for WMLz on your z/OS system to meet the demands of your enterprise machine learning workload.

CPU per LPAR/server: 1 GCP / 4 zIIPs
Memory per LPAR/server: 100 GB
DASD/disk space per LPAR/server: 100 GB

Note that additional capacity may be needed depending on the training workload you plan to run on IBM Z. This may be influenced by the number of concurrent training jobs, the type of models being trained, the size of the training data set, the number of data features, and the specification of model parameters. For guidance on planning capacity for your training workloads, view IBM Documentation.

Installing WMLz

WMLz is installed via SMP/E. With your order for WMLz, you should receive the following materials:

  • SMP/E images
  • Program directories
  • License information
  • Available maintenance packages

Along with receiving these materials, ensure that you’ve obtained the PTFs containing the latest technical and service updates.

Once all necessary materials are retrieved, complete the installation instructions as defined in IBM Documentation.

Planning for configuration

WMLz can be set up with the WMLz configuration tool, a graphical interface that walks you through the key steps. Before using the WMLz configuration tool, complete the following steps:

z/OS UNIX System Services (USS) skills are necessary throughout the configuration process, including for configuring essential user IDs, port definitions, and authentication for the product.

For a full view of which configuration steps require USS skills, view IBM Documentation.

There are a number of terms you will encounter when configuring WMLz that are essential for understanding how the product is set up.

For a listing of essential terms to know, view IBM Documentation.

Your security administrator can help you configure a WMLz setup user ID, which is necessary for using the WMLz configuration tool. For ease of access control, it is recommended that you plan to also use this same user ID for WMLz services.

For further information about configuring the WMLz setup user ID, view IBM Documentation.

Your security administrator is also responsible for ensuring that a RACF key ring is created for the user IDs that must be created for WMLz, and will help establish the requisite IDs and authorizations. They will also be involved in securing network communications for WMLz with AT-TLS.

For further information about securing network connections and configuring the keystore, view IBM Documentation.

Your network administrator can help you reserve the ports that WMLz needs for communicating across systems and services. A number of required and optional ports should be accounted for before you use the WMLz configuration tool.

For further information about configuring ports for WMLz, view IBM Documentation.

Your z/OS database administrator can help you plan for the storage of metadata objects in the repository of your choice: an embedded database with WMLz or a Db2 for z/OS database.

For further information about creating and storing metadata objects, view IBM Documentation.

Configuring WMLz

Once you have completed the previously mentioned steps, you can start the WMLz configuration tool using a shell script located in the $IML_INSTALL_DIR/iml-utilities/configtool directory. Once opened, the WMLz configuration tool will guide you through the process of setting up the various components and services of the product.

As you near completion, you will have an opportunity to review inputs prior to configuration, including the various file paths where configurations will occur. It is recommended that you save these directory locations so that you may directly adjust configurations later as needed.

For further information about starting and using the WMLz configuration tool, view IBM Documentation.

Configuration of WMLz involves the set-up of the following four major components:

You must select one of two options for storing important metadata about models. Selecting the embedded WMLz database doesn’t require further configuration, but limits the availability of WMLz services. Selecting Db2 for z/OS allows for high availability, but requires that you have a ready Db2 system, and also requires that you provide network, authentication, and schema details for WMLz metadata objects.

You must specify host system details for both the WMLz UI and core services, as well as credentials for the default admin user and the WMLz application developer. You can also enable audit trace for governance of model data, model versions, and model deployments. Lastly, if you selected Db2 for z/OS as your metadata repository, you can add WMLz core services to a cluster, which routes requests to an alternate WMLz instance if one instance isn’t available.

You must create a name for the WMLz runtime environment, as well as provide details for both the Spark runtime engine and the Python runtime engine. The Spark runtime engine is a high-performance analytics engine for large-scale, in-memory data processing; you will need to allocate various ports for its configuration and determine the maximum number of retries for port binding. You will also have the option of enabling client-side Spark authentication. The Python runtime environment allows for the use of a Python-based scoring service, as well as the enablement of a Jupyter Notebook server for model development in the WMLz UI. For Python, you will need to specify the Python virtual environment name and the Python packages installation source.

You must provide a name for the WMLz scoring service. A scoring service is where your AI models are deployed for real-time inferencing. You can choose to create a standalone scoring instance, or to add it to a cluster for high availability. Creating a clustered scoring service requires that you selected Db2 for z/OS as your metadata repository, and requires you to either join an available cluster or create a new one by providing network details for the sysplex distributor.

Supported model types

The role of the data scientist largely centers on developing models for WMLz and importing them. To that end, it is recommended that you review the algorithms, data sources, data types, and model types supported by WMLz. For a thorough listing of what’s supported, view IBM Documentation.

The following model types can be imported to WMLz, or trained, saved, and deployed in WMLz:

  • SparkML
  • Scikit-learn
  • XGBoost
  • ARIMA or Seasonal ARIMA
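
As a minimal illustration of this first group, the sketch below trains and serializes a small scikit-learn pipeline of the kind that could be imported through the WMLz UI or developed in its notebook environment. The synthetic data, feature layout, file name, and pickle-based save step are illustrative assumptions; the exact packaging that WMLz expects for each model type is described in IBM Documentation.

```python
# Minimal sketch: train a scikit-learn classifier and serialize it to a file
# of the kind that can be imported through the WMLz UI ("Import model" > from file).
# The synthetic data, feature layout, file name, and pickle format used here are
# illustrative assumptions; check IBM Documentation for the exact packaging that
# WMLz expects for scikit-learn models.
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Small synthetic training set standing in for transaction features.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)

# Standardize features, then fit a simple binary classifier.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=200)),
])
model.fit(X, y)

# Serialize the fitted pipeline; this file is what you would point the
# WMLz "Import model" dialog at.
with open("fraud_classifier.pkl", "wb") as f:
    pickle.dump(model, f)
```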

The following model types can be developed on another platform and imported to WMLz to be saved and deployed:

  • PMML
  • ONNX
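
For models built on another platform, a conversion step typically produces the ONNX file that you then import. The following hedged sketch converts a fitted scikit-learn model to ONNX using the skl2onnx package; the package choice, input signature, and file name are assumptions for illustration, and the ONNX operators and opset levels supported by WMLz scoring are listed in IBM Documentation.

```python
# Minimal sketch: convert a fitted scikit-learn model to ONNX on any development
# platform, producing a .onnx file that can then be imported to WMLz. Assumes the
# skl2onnx package is installed; the ONNX operators and opset levels supported by
# WMLz scoring are described in IBM Documentation.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Fit a small placeholder model to convert.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

# Declare the input signature (a batch of 8 float features) and convert.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 8]))]
)

with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```
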
Importing models to WMLz

You can import models to WMLz directly through the UI. In the Models tab, click “Import model”, and then select the relevant tab for whether you want to import from file or from IBM Cloud Pak for Data.

If importing from file, you will need to provide a name for the file to be displayed in the WMLz UI, and you will need to select the model type.

If importing from IBM Cloud Pak for Data, you will need to provide connection and authentication details, as well as provide a name for the file and select the model type.

Creating models in WMLz

You can create models in WMLz with a Jupyter Notebook server which can be accessed directly from the UI. In the Models tab, click “Create model” to open a view of the Jupyter Notebook interface which you can use to create and save new models for WMLz. If you’re unfamiliar with the interface or basic workflows of notebooks, view the open source documentation published by Project Jupyter.
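
As a hedged example of the kind of notebook cell you might run there, the sketch below builds and fits a small SparkML pipeline. It assumes the pyspark package is available in the WMLz notebook runtime and uses an in-memory toy DataFrame with made-up column names; in practice you would load data through a configured data connection.

```python
# Minimal notebook sketch: build and fit a small SparkML pipeline of the kind you
# might create in the WMLz Jupyter environment. Assumes pyspark is available in the
# notebook runtime; the toy data and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("wmlz-notebook-sketch").getOrCreate()

# Toy transaction-style data: amount, velocity, and a fraud label.
df = spark.createDataFrame(
    [(1200.0, 3.0, 0), (25.0, 1.0, 0), (9800.0, 12.0, 1), (40.0, 2.0, 0)],
    ["amount", "velocity", "label"],
)

# Assemble feature columns into a vector, then fit a logistic regression model.
assembler = VectorAssembler(inputCols=["amount", "velocity"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline_model = Pipeline(stages=[assembler, lr]).fit(df)

# Score the training data to confirm the pipeline runs end to end.
pipeline_model.transform(df).select("label", "prediction").show()
```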

Deploying models in WMLz

As an application developer, you can use WMLz to deploy a model, allowing machine learning predictions to be made against the data relevant to your application. In the Models tab, navigate to a row showing a ready status (a green check mark is displayed), open the overflow menu, and click “Create deployment”. Then enter the relevant details for the deployment, including a display name, a deployment type (online or batch), and a scoring service type (standalone or cluster).

Once completed, your deployment will appear in the Deployments tab of the UI.
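
Applications typically call an online deployment over REST. The snippet below is a generic, hedged illustration using Python’s requests library; the host, port, URL path, token, and payload field names are placeholders rather than the actual WMLz endpoint format, which you should copy from the deployment details in the WMLz UI and from IBM Documentation.

```python
# Illustrative sketch of calling an online deployment over REST.
# The URL, authentication token, and payload layout below are hypothetical
# placeholders; copy the real scoring endpoint and request format from the
# deployment details in the WMLz UI and from IBM Documentation.
import requests

SCORING_URL = "https://wmlz.example.com:12345/scoring/online/my-deployment"  # placeholder
TOKEN = "..."  # obtain a JWT token from WMLz authentication first

payload = {"input": [[1200.0, 3.0]]}  # placeholder feature vector

resp = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()  # raise if the scoring request failed
print(resp.json())       # prediction returned by the deployed model
```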

Infusing AI with IBM Z applications

By deploying machine learning models with your business applications running on IBM Z, a host of new insights and capabilities become possible. This topic is thoroughly covered in the publication “Planning AI infusion into applications on IBM zSystems”, as well as the article “Deploy AI models for real-time inferencing in your z/OS IMS transactions”. The following are brief excerpts and related details to provide an overview of the available infusion methods:

'For CICS, the WMLz scoring engine can be hosted in a WebSphere Liberty server within the CICS runtime and provides a program ALNSCORE that can be invoked with EXEC CICS LINK, passing the data using CICS channels and containers.'

Refer to the “Planning AI infusion” publication for further detail.

For IMS, the WMLz WOLA (WebSphere Optimized Local Adapter) scoring interface can be invoked from IMS COBOL applications for high performance real-time inferencing in IMS online transactions.

Refer to the “Deploy AI models for real-time inferencing” article for further detail.

'For WebSphere, a Java API can be used to call the WMLz scoring feature configured in a WebSphere server.'

Refer to the “Planning AI infusion” publication for further detail.

'An ODM rule driven by the runtime application can be enhanced to reference a model deployed to WMLz, and then use the prediction from the model in the rule. ODM uses a highly efficient interface between ODM and WMLz.

Many CICS and IMS applications already use ODM rules to inform their decisions, in which case a rule called by the application can be enhanced with machine learning to drive an AI model deployed to WMLz. If the application does not currently use ODM rules, it can be updated to use an ODM rule that drives the AI model via WMLz, and hence include additional insight in the result from the rule.'

Refer to the “Planning AI infusion” publication for further detail.

Monitoring models for impact

When utilizing machine learning and deep learning with your transactional applications, it’s important to continually ensure that deployed models are working as intended. WMLz offers a method for evaluating the performance of SparkML and PMML models to ensure accuracy.

To do so, select a model from the Models tab, open the overflow menu, and click “Create evaluation”. Then add a data source and choose whether to run the evaluation over a time range, and click “Create”. The evaluation for deployments of that model can then be triggered using the overflow menu in the Deployments tab. The evaluation can also be scheduled to run repeatedly.

For full detail about this procedure, view IBM Documentation.

You can also validate and explain your models using IBM Watson OpenScale on Cloud Pak for Data. This can be performed by selecting an online deployment in WMLz in the Deployments tab, clicking “View details”, and updating the endpoint address of the deployment to that of your WMLz UI. This allows you to configure Watson OpenScale to access the model deployment from your WMLz UI.

For full detail about this procedure, view IBM Documentation.

Available APIs

A number of APIs are available to connect WMLz services with applications running both on and off the z/OS platform. Follow the links provided to view relevant information for each API.
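
As a hedged illustration of the general pattern these APIs follow, the sketch below authenticates to obtain a JWT token and then calls a service with it as a Bearer token. The base URL, paths, and request/response field names are hypothetical placeholders; the real authentication and service endpoints are documented for each API in IBM Documentation.

```python
# Hedged illustration of the general calling pattern for the WMLz REST APIs:
# authenticate once to obtain a JWT token, then pass it as a Bearer token on
# subsequent requests. The base URL, paths, and field names are hypothetical
# placeholders; see IBM Documentation for the actual endpoints of each API.
import requests

BASE_URL = "https://wmlz.example.com:11000"  # placeholder host and port

# 1. Authenticate with a WMLz user ID to obtain a token (placeholder path/fields).
auth = requests.post(
    f"{BASE_URL}/auth/generateToken",
    json={"username": "mlzuser", "password": "********"},
    timeout=10,
)
auth.raise_for_status()
token = auth.json()["token"]  # placeholder response field

# 2. Call a WMLz service with the token (placeholder path shown).
models = requests.get(
    f"{BASE_URL}/api/v1/models",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
models.raise_for_status()
print(models.json())
```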

Learn more
General questions

WMLz can be configured for high availability (HA) within a sysplex when it is set up with Db2 for z/OS as the metadata repository. An existing WMLz instance can be quickly cloned onto a new port within the same LPAR, or in a separate LPAR using the Sysplex Distributor. This allows WMLz core services—including model training, deployment, batch scoring, ingestion, repository, and data connection management—to fail over to another active system if necessary.

View IBM Documentation for further detail on this topic.

USS skills are recommended throughout the configuration process. To see which actions require USS skills, view IBM Documentation.

The Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models, enabling interchange between different frameworks. You can import, deploy, and manage ONNX models on WMLz by using the required ONNX compiler service.

View IBM Documentation for further detail on this topic.

WMLz’s user interface allows models to be created or imported without extensive knowledge of z/OS. That said, it’s critical to understand the algorithms, data sources, data types, and model types supported by WMLz, and how they relate to the available options for creating your machine learning or deep learning models. For example, a data scientist who needs Db2 for z/OS data for a Python or Scala model will need to understand how to connect to that data source through MDS (Mainframe Data Service).
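
A notebook cell that pulls Db2 for z/OS data through MDS into a Spark DataFrame might look broadly like the hedged sketch below, which uses Spark’s generic JDBC reader. The JDBC URL, driver class, table name, and credentials are placeholder assumptions; take the real MDS connection properties from your configuration and from IBM Documentation.

```python
# Hedged sketch: read Db2 for z/OS data surfaced through MDS into a Spark DataFrame
# using Spark's generic JDBC reader. The JDBC URL, driver class, table name, and
# credentials are placeholder assumptions; take the real MDS connection properties
# from your configuration and from IBM Documentation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mds-read-sketch").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mds://mds-host:1200/...")  # placeholder MDS JDBC URL
    .option("driver", "com.example.mds.Driver")     # placeholder: use the MDS JDBC driver class
    .option("dbtable", "CLAIMS.TRANSACTIONS")       # placeholder virtual table
    .option("user", "mlzuser")
    .option("password", "********")
    .load()
)

df.printSchema()
```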

View IBM Documentation for further detail on this topic.

Documentation
IBM Watson Machine Learning for z/OS Documentation

Explore the documentation
Technical resources
IBM Documentation

View technical content for the end-to-end use of WMLz.

Read the documentation
IBM Products page

View high-level information about the WMLz product, including pricing and related offerings.

View the webpage
Planning AI infusion into applications on IBM zSystems

View guidance for infusing AI models with applications running in CICS TS, IMS TS, WebSphere, and z/TPF.

Read the document
Redbook: Optimized Inferencing and Integration with AI on IBM zSystems

View detailed guidance for configuring and optimizing AI integration with IBM zSystems.

Read the Redbook
Redbook: Solving challenges of instant payments by using AI on IBM zSystems

Read a business-oriented review for solving challenges around instant payments utilizing WMLz.

Read the Redbook
AI on IBM Z & LinuxONE community

Join a community of practitioners and experts discussing AI for the IBM Z and LinuxONE platforms.

Join the community
Related solutions
Journey to AI on IBM Z and LinuxONE

Build and train models anywhere, and deploy them on IBM Z and LinuxONE infrastructure.

Journey to open data analytics

Derive new insights and advantages from each transaction by charting your journey to open data analytics.
