IBM Watson Machine Learning

You can use IBM Watson Machine Learning with IBM Watson OpenScale to perform payload logging and feedback logging, and to use performance accuracy measurement, runtime bias detection, drift detection, explainability, and the auto-debias function.

IBM Watson OpenScale fully supports the following IBM Watson Machine Learning frameworks:

Table 1. Framework support details

| Framework | Problem type | Data type |
|---|---|---|
| AutoAI¹ | Classification (binary and multiclass) | Structured (data, text) |
| AutoAI | Regression | Structured or unstructured² (text only) |
| Apache Spark MLlib | Classification | Structured or unstructured² (text only) |
| Apache Spark MLlib | Regression | Structured or unstructured² (text only) |
| Keras with TensorFlow³ ⁴ | Classification | Unstructured² (image, text) |
| Keras with TensorFlow³ ⁴ | Regression | Unstructured² (image, text) |
| Python function | Classification | Structured (data, text) |
| Python function | Regression | Structured (data, text) |
| scikit-learn⁵ | Classification | Structured (data, text) |
| scikit-learn | Regression | Structured (data, text) |
| XGBoost⁶ | Classification | Structured (data, text) |
| XGBoost | Regression | Structured (data, text) |

¹To learn more about AutoAI, see AutoAI implementation details. For models whose training data is in Cloud Object Storage, fairness attributes of type Boolean are not supported. However, if the training data is in Db2, Watson OpenScale supports fairness attributes of Boolean type. When you use the AutoAI option, Watson OpenScale does not support models whose prediction is a binary data type. You must change such models so that the data type of the prediction is a string data type.
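One way to satisfy the string-prediction requirement is to post-process the scoring output before it is logged. The wrapper below is a minimal sketch: the `fields`/`values` layout and the field name `prediction` are assumptions about the scoring response shape, not a documented AutoAI contract, so adapt them to your model's output schema.

```python
# Sketch: convert binary-typed predictions to strings before logging.
# The "fields"/"values" layout and the "prediction" field name are
# assumptions -- adjust them to match your model's actual output schema.

def cast_predictions_to_string(scoring_response):
    fields = scoring_response["fields"]
    pred_idx = fields.index("prediction")
    for row in scoring_response["values"]:
        row[pred_idx] = str(row[pred_idx])  # e.g. True -> "True"
    return scoring_response

response = {"fields": ["prediction", "probability"],
            "values": [[True, [0.2, 0.8]], [False, [0.9, 0.1]]]}
fixed = cast_predictions_to_string(response)
```

The probability column is left untouched; only the prediction column changes type.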

²Fairness and drift metrics are not supported for unstructured (image or text) data types.

³Keras support does not include fairness.

⁴Explainability is supported if your model or framework outputs prediction probabilities.

⁵To generate the drift detection model, you must use scikit-learn version 0.24.1 in notebooks.

⁶For XGBoost binary and multiclass models, you must update the model to return prediction probabilities: a numerical value for binary models and a list of per-class probabilities for multiclass models. Support for the XGBoost framework has the following limitations for classification problems: for binary classification, Watson OpenScale supports the binary:logistic objective with an output that is the probability of True. For multiclass classification, Watson OpenScale supports the multi:softprob objective, where the result contains the predicted probability of each data point belonging to each class.
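To make the expected probability shapes concrete, the helper below is a pure-Python sketch (the function name is hypothetical, not part of any Watson OpenScale or XGBoost API): a binary:logistic model emits one score per row, the probability of True, while a multi:softprob model emits one probability per class for each row.

```python
# Sketch (hypothetical helper): shape raw XGBoost scores into the forms
# that Watson OpenScale accepts for classification models.

def shape_probabilities(raw_scores, problem_type):
    if problem_type == "binary":
        # binary:logistic returns one score per row: the probability of True.
        return [float(p) for p in raw_scores]
    if problem_type == "multiclass":
        # multi:softprob returns one probability per class for each row.
        return [[float(p) for p in row] for row in raw_scores]
    raise ValueError(f"unsupported problem type: {problem_type}")

binary_out = shape_probabilities([0.82, 0.11], "binary")
multi_out = shape_probabilities([[0.7, 0.2, 0.1]], "multiclass")
```

For the multiclass case, each inner list sums to 1.0 because the scores are per-class probabilities for a single data point.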

Subscription limitation for native XGBoost models

With the latest version of Watson Machine Learning, the xgboost_0.82 framework is deprecated. The supported framework is xgboost_0.90, which must be used with Python 3.7. To enable and persist XGBoost version 0.90 and Python 3.7, you must patch the subscription by running the following command:

PATCH /v2/subscriptions/<subscription_id>

Replace <subscription_id> in the path with the ID of the subscription that you want to patch.


The patch request uses the following payload:

    [{
        "op": "replace",
        "path": "/asset/runtime_environment",
        "value": "xgboost_0.90"
    }]
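The patch can be sent with any HTTP client. The sketch below uses Python; the host placeholder and the bearer-token header are assumptions to substitute with values for your deployment, and the actual request line is commented out so that running the snippet sends nothing.

```python
import json

# Hypothetical placeholders -- replace with your Watson OpenScale host,
# subscription ID, and IAM bearer token.
base_url = "https://<openscale-host>"
subscription_id = "<subscription_id>"
url = f"{base_url}/v2/subscriptions/{subscription_id}"

# JSON Patch body that pins the subscription runtime to xgboost_0.90.
patch_body = [{
    "op": "replace",
    "path": "/asset/runtime_environment",
    "value": "xgboost_0.90",
}]

# To send the request (requires the requests package and a valid token):
# import requests
# resp = requests.patch(url, json=patch_body,
#                       headers={"Authorization": "Bearer <token>"})
print(json.dumps(patch_body))
```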

After the subscription is successfully patched and scoring is done, the output data schema has the correct modeling roles set:

    "metadata": {
        "modeling_role": "class_probability"
    "name": "prediction",
    "nullable": true,
    "type": "double"
}, {
    "metadata": {},
    "name": "probability",
    "nullable": true,
    "type": "double"
}, {
    "metadata": {
        "modeling_role": "prediction-probability"
    "name": "prediction_probability",
    "nullable": true,
    "type": "double"
}, {
    "metadata": {
        "modeling_role": "prediction"
    "name": "scoring_prediction",
    "nullable": true,
    "type": "integer"

AutoAI models and training data

AutoAI automatically prepares data, applies algorithms (also called estimators), and builds model pipelines that are best suited to your data and use case. Watson OpenScale requires access to the training data to analyze the model.

Because Watson OpenScale cannot detect the training data location for an AutoAI model as it can for a regular model, you must explicitly provide the details that are needed to access the training data location.

For more information, see Provide model details and Numeric/categorical data.
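As an illustration only, a training data reference for Cloud Object Storage often takes a shape like the following. Every key shown here is an assumption to adapt to your environment, not a guaranteed schema; check the Watson OpenScale documentation for the exact structure that your release expects.

```python
# Hypothetical sketch of a Cloud Object Storage training data reference.
# Field names and nesting are assumptions, not a documented schema.
training_data_reference = {
    "type": "cos",
    "location": {
        "bucket": "my-training-data-bucket",   # assumed bucket name
        "file_name": "training_data.csv",      # assumed file name
    },
    "connection": {
        "url": "https://s3.us.cloud-object-storage.appdomain.cloud",
        "api_key": "<api_key>",                # placeholder credential
        "resource_instance_id": "<resource_instance_id>",
    },
}
```

The placeholders in angle brackets are credentials that you supply from your own Cloud Object Storage service instance.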

Specifying an IBM Watson Machine Learning service instance

Your first step in the Watson OpenScale tool is to specify an IBM Watson Machine Learning instance. Your Machine Learning instance is where you store your AI models and deployments.


You must provision an IBM Watson Machine Learning instance in the same account or cluster where the Watson OpenScale service instance is present. If you provision an IBM Watson Machine Learning instance in a different account or cluster, you cannot configure that instance for automatic payload logging with Watson OpenScale.

Connect your Machine Learning service instance

Watson OpenScale connects to AI models and deployments in an IBM Watson Machine Learning instance. To connect your service to Watson OpenScale, go to the Configure tab, add a machine learning provider, and click the Edit icon. In addition to a name, a description, and whether the environment type is Pre-production or Production, you must provide the following information that is specific to this type of service instance:

IBM Cloud Pak for Data allows only one API key. When you generate a new API key, the previous API key is automatically revoked. If you update the API key in IBM Cloud Pak for Data, you must also update the API key in Watson OpenScale.


The following limitations apply when you connect to IBM Watson Machine Learning on a separate IBM Cloud Pak for Data instance:

The following limitations apply when you connect to IBM Watson Machine Learning on IBM Cloud:

Next steps

You are now ready to select deployed models and configure your monitors. Watson OpenScale lists your deployed models on the Insights dashboard where you can click the Add to dashboard button. Select the deployments that you want to monitor and click Configure.

For more information, see Configure monitors.

Parent topic: Supported machine learning engines, frameworks, and models