Explaining transactions
For each deployment, you can see explainability data for specific transactions. Depending on the type of model, this data can include different types of analysis, such as LIME, contrastive explanations, or the ability to test what-if scenarios.
Explanations
Watson OpenScale can generate the following types of explanations to help you understand the behavior of your model:
LIME
Local Interpretable Model-Agnostic Explanations (LIME) is a Python library that Watson OpenScale uses to analyze the input and output values of a model and create human-understandable interpretations of its behavior. LIME reveals which features are most important for a specific data point. The analysis typically generates 5000 perturbations that are close to the data point. Ideally, the features with high importance in LIME are the features that are most important for that specific data point. For proper processing of LIME explanations, Watson OpenScale does not support column names that contain an equals sign (=) in the data set.
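Because LIME is an open-source library, the local explanation technique can be reproduced outside Watson OpenScale. The following sketch uses the `lime` package with a placeholder scikit-learn model on a sample data set to show how perturbations around one data point yield per-feature weights; it illustrates the technique only, not Watson OpenScale's internal implementation.

```python
# Illustrative LIME tabular explanation with the open-source `lime` library.
# The model and data set are placeholders, not Watson OpenScale internals.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one transaction: perturb ~5000 samples near this data point and fit
# a local, interpretable model whose weights approximate feature importance.
explanation = explainer.explain_instance(
    X[0],
    model.predict_proba,
    num_features=5,
    num_samples=5000,
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```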
Contrastive explanations
Contrastive explanations reveal how much feature values need to change to either change the prediction or keep the same prediction. The factors that require the maximum change are considered more important, so the features with the highest importance in contrastive explanations are the features to which the model is least sensitive. For contrastive explanations, Watson OpenScale displays the maximum changes for the same outcome and the minimum changes for a changed outcome. These categories are also known as pertinent positive and pertinent negative values. These values help explain the behavior of the model in the vicinity of the data point for which an explanation is generated.
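For illustration only, the following naive sketch searches for a pertinent negative: the smallest change to a single feature that flips a binary prediction. The placeholder model and the brute-force search are assumptions made for demonstration; they are not Watson OpenScale's contrastive-explanation algorithm.

```python
# Illustrative only: a naive search for a pertinent negative, i.e. the smallest
# single-feature change that flips a binary prediction. The model is a placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder model: approve (1) when the single feature is large enough.
X_train = np.array([[20.0], [30.0], [40.0], [60.0], [70.0], [80.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

def pertinent_negative(model, x, feature_index, step=1.0, max_steps=100):
    """Increase one feature until the predicted class flips; return the minimum change found."""
    original_class = model.predict(x.reshape(1, -1))[0]
    candidate = x.astype(float).copy()
    for i in range(1, max_steps + 1):
        candidate[feature_index] = x[feature_index] + i * step
        if model.predict(candidate.reshape(1, -1))[0] != original_class:
            return candidate[feature_index] - x[feature_index]
    return None  # prediction is stable for this feature within the search range

x = np.array([35.0])
print(pertinent_negative(model, x, feature_index=0))  # smallest increase that changes the outcome
```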
Viewing explanations by transaction ID
- Click the Explain a transaction tab in the navigator.
- Select a model in the Deployed model list. The Recent transactions list displays all of the transactions that are processed by your model. The list contains columns that provide details about the outcome of each transaction.
- Enter a transaction ID. If you use the internal lite version of PostgreSQL, you might not be able to retrieve your database credentials, which might prevent you from seeing transactions.
- Click the Explain button in the Actions column. The Transaction details page provides an analysis of the factors that influenced the outcome of the transaction.
- Optional: For further analysis, click the Inspect tab. You can set new values to determine a different predicted outcome for the transaction.
- Optional: After you set new values, click Run analysis to show how different values can change the outcome of the transaction.

Whenever data is sent to the model for scoring, IBM Watson Machine Learning sets a transaction ID in the HTTP header by setting the X-Global-Transaction-Id field. This transaction ID is stored in the payload table. To find an explanation of the model behavior for a particular scoring, specify the transaction ID associated with that scoring request. This behavior applies only to IBM Watson Machine Learning transactions and does not apply to non-WML transactions.
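The following minimal sketch shows one way a client might pass its own X-Global-Transaction-Id when scoring a Watson Machine Learning online deployment, so that the same ID can later be used to look up the explanation. The endpoint URL, token, fields, and values are placeholders.

```python
# Minimal sketch, assuming a standard Watson Machine Learning online scoring
# endpoint. The URL, token, and payload below are placeholders for your own
# deployment details.
import uuid
import requests

scoring_url = "https://<wml-host>/ml/v4/deployments/<deployment-id>/predictions?version=2021-05-01"
transaction_id = str(uuid.uuid4())

headers = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
    "X-Global-Transaction-Id": transaction_id,  # use this ID later to find the explanation
}
payload = {"input_data": [{"fields": ["age", "income"], "values": [[42, 50000]]}]}

response = requests.post(scoring_url, json=payload, headers=headers)
print(transaction_id, response.json())
```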
Finding a transaction ID in Watson OpenScale
- Slide the marker across the chart and click the View details link to visualize data for a specific hour.
- Click View transactions to view the list of transaction IDs.
- Click the Explain link in the Action column for any transaction ID, which opens that transaction in the Explain tab.
Finding explanations through chart details
Because explanations exist for model risk, fairness, drift, and performance, you can click one of the following links to view detailed transactions:
- From the Evaluations page, in the Number of explanations section, click the number link. In the Select an explanation window, click a transaction, and then, click View.
- For one of the fairness attributes, from the Evaluations page, click the fairness percentage link. Click the attribute, such as sex or age, click the chart, and then click View transactions.
- For the drift monitor, from the Evaluations page, click the drift percentage link. Click the chart, click the drift type, then click a tile to see the transactions associated with that particular drift group.
- For a performance chart, from the Evaluations page, click any of the percentage links. In the Performance section, click Throughput, click the chart, and then click the Explain link that follows the transaction you want to view.
Explaining a categorical model
A categorical model, such as a binary classification model, categorizes data into distinct groups. Unlike for regression, image, and unstructured text models, Watson OpenScale generates advanced explanations for binary classification models. You can use the Inspect tab to experiment with features by changing their values to see whether the outcome changes.
While the charts are useful in showing the most significant factors in determining the outcome of a transaction, classification models can also include advanced explanations on the Explain and Inspect tabs.
- The Explain tab, in addition to basic information about the transaction and model, displays the following information:
  - Predicted outcome: The outcomes are set in the model.
  - How this prediction was determined: Displays the LIME explanation.
  - Confidence level: How confident, as a percentage, the Watson OpenScale service is about the analysis.
  - Features influencing this prediction: For this transaction, each feature in the model is assigned a percentage of relative weight that indicates how strongly the feature influences the model's predicted outcome. A negative relative weight percentage indicates that the feature influenced the model towards a different predicted outcome.
- The Inspect tab displays the following information as part of the contrastive explanation:
  - Feature: The feature from the model. If the model was created with meta fields that were not used in training, you can view only those features by selecting the Analyze controllable features only option.
  - Original value: The original value that was used in training the model.
  - New value: You can enter a new value for one or more features to see how it might change the outcome (a minimal what-if sketch follows this list).
  - Value for a different outcome: After you run an analysis, you can see the most likely settings to change the outcome.
  - Importance: After you run an analysis, you can see the relative importance of each changed feature value.
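The what-if analysis on the Inspect tab amounts to rescoring the transaction with edited feature values. The following minimal sketch reproduces that idea outside the UI; the model, features, and values are placeholders for your own deployment.

```python
# Minimal what-if sketch: rescore a single transaction with one feature changed.
# The training data, model, and feature names are placeholders.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame(
    {"age": [25, 40, 55, 35], "income": [30000, 60000, 90000, 45000], "approved": [0, 1, 1, 0]}
)
model = DecisionTreeClassifier(random_state=0).fit(train[["age", "income"]], train["approved"])

transaction = pd.DataFrame([{"age": 42, "income": 50000}])
original_prediction = model.predict(transaction)[0]

what_if = transaction.copy()
what_if["income"] = 65000  # new value to test

new_prediction = model.predict(what_if)[0]
print(original_prediction, new_prediction)  # compare outcomes before and after the change
```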
Explaining image models
Watson OpenScale supports explainability for image data. See the image zones that contributed to the model output and the zones that did not contribute. Click an image for a larger view.
Explaining image model transactions
For an image classification model, explainability shows which parts of an image contributed positively to the predicted outcome and which contributed negatively. In the following example, the image in the positive pane shows the parts that contributed positively to the prediction. The image in the negative pane shows the parts of the image that had a negative impact on the outcome.

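The positive and negative zones can also be reproduced with the open-source `lime` image explainer. In the following sketch the image and classifier are synthetic placeholders; it demonstrates the general technique rather than Watson OpenScale's exact implementation.

```python
# Illustrative sketch with the open-source `lime` image explainer.
# The image and classifier below are synthetic placeholders.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Placeholder "model": scores class 1 higher when the mean red channel is high.
# In practice, classifier_fn wraps your deployed image model.
def classifier_fn(images):
    scores = np.array([img[..., 0].mean() / 255.0 for img in images])
    return np.column_stack([1 - scores, scores])

# Placeholder image: a 64x64 RGB array with a bright red patch.
image = np.zeros((64, 64, 3), dtype=np.uint8)
image[16:48, 16:48, 0] = 255

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=200
)

# Zones that contributed positively to the top predicted class...
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
positive_overlay = mark_boundaries(temp / 255.0, mask)

# ...and zones that pushed against it (negative contributions).
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=False, negative_only=True, num_features=5
)
negative_overlay = mark_boundaries(temp / 255.0, mask)
```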
Image model examples
Use the following two Notebooks to see detailed code samples and develop your own Watson OpenScale deployments:
- Tutorial on generating an explanation for an image-based multiclass classification model
- Tutorial on generating an explanation for an image-based binary classification model
Explaining unstructured text models
Watson OpenScale supports explainability for unstructured text data.
If you use a Keras model that takes the input as a byte array, you must create a deployable function in IBM Watson Machine Learning. The function must accept the entire text as a single input feature, rather than as text that is vectorized and represented as a tensor or split across multiple features. IBM Watson Machine Learning supports the creation of deployable functions. For more information, see Passing payload data to model deployments.
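A minimal sketch of such a deployable function follows, assuming the standard Watson Machine Learning scoring payload format ("input_data" with "fields" and "values"); the model and tokenizer loading are placeholders that you replace with your own assets.

```python
# Minimal sketch of a deployable function that accepts the whole text as one
# input feature and performs vectorization inside the function. The model and
# tokenizer references are placeholders.
def deployable_text_function(params=None):

    # Load or reference your Keras model and tokenizer here (placeholders).
    # model = ...
    # tokenizer = ...

    def score(payload):
        # Scoring payload format: {"input_data": [{"fields": [...], "values": [[...]]}]}
        values = payload["input_data"][0]["values"]
        texts = [row[0] for row in values]          # one raw text string per row

        # Vectorize inside the function, not in the caller, so the raw text
        # remains a single feature from Watson OpenScale's point of view.
        # encoded = tokenizer.texts_to_sequences(texts)
        # predictions = model.predict(encoded).tolist()
        predictions = [[0.5, 0.5] for _ in texts]   # placeholder output

        return {"predictions": [{"fields": ["prediction"], "values": predictions}]}

    return score

# Local smoke test with a sample payload.
sample_payload = {"input_data": [{"fields": ["text"], "values": [["This product works great"]]}]}
print(deployable_text_function()(sample_payload))
```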
For more information, see Working with unstructured text models and Enabling non-space-delimited language support.
Explaining unstructured text transactions
The following example of explainability shows a classification model that evaluates unstructured text. The explanation shows the keywords that had either a positive or a negative impact on the model prediction. The explanation also shows the position of the identified keywords in the original text that was fed as input to the model.

Unstructured text models present the importance of words or tokens. To change the language, select a different language from the list. The explanation is generated again by using a tokenizer for the selected language.
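The word-level importance can be illustrated with the open-source `lime` text explainer; the small pipeline below is a placeholder classifier and sample data, not part of Watson OpenScale.

```python
# Illustrative sketch with the open-source `lime` text explainer, which reports
# the importance of individual words/tokens. The pipeline is a placeholder.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great service and friendly staff", "slow response and rude agent"]
train_labels = [1, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the staff was friendly but the response was slow",
    pipeline.predict_proba,
    num_features=6,
)
print(explanation.as_list())  # [(token, weight), ...] positive or negative impact
```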
Unstructured text model example
Use the following Notebook to see detailed code samples and develop your own Watson OpenScale deployments:
Explaining tabular transactions
The following example of explainability shows a classification model that evaluates tabular data.

Parent topic: Get model insights