For each deployment, you can view explainability data for specific transactions. Depending on the model type, this data can include different types of analysis, such as LIME, contrastive explanations, or the ability to test what-if scenarios.
Viewing explanations by transaction ID
- Click the Explain a transaction tab in the navigator.
- Type a transaction ID.
To analyze results further, click the Inspect tab, choose whether to analyze controllable features only, and click Run analysis.
The results of this analysis show how different values can change the outcome of this specific transaction. You must designate which features are controllable. For more information, see Configuring the explainability monitor.
Whenever data is sent to the model for scoring, IBM Watson Machine Learning sets a transaction ID in the HTTP header by setting the
X-Global-Transaction-Id field. This transaction ID is stored in the payload table. To find an explanation of the model behavior for a particular scoring, specify the transaction ID that is associated with that scoring request. This behavior applies only to IBM Watson Machine Learning transactions; it does not apply to non-WML transactions.
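The flow above can be sketched in Python. This is a minimal illustration of attaching a transaction ID to a scoring request so that the same ID can later be used to look up the explanation; the `build_scoring_request` helper and the payload shape are illustrative assumptions, not the exact Watson Machine Learning API.

```python
import uuid

# Illustrative sketch (not the exact WML API): attach a transaction ID to a
# scoring request so the same ID can later be used to find the explanation.
def build_scoring_request(fields, values, transaction_id=None):
    transaction_id = transaction_id or str(uuid.uuid4())
    headers = {
        "Content-Type": "application/json",
        # This header value is what gets stored in the payload table
        "X-Global-Transaction-Id": transaction_id,
    }
    payload = {"input_data": [{"fields": fields, "values": values}]}
    return headers, payload

# The request would then be sent with an HTTP client, for example:
# headers, payload = build_scoring_request(["salary"], [[150000]])
# requests.post(scoring_url, headers=headers, json=payload)  # illustrative
```

The transaction ID returned here is the value you would later type into the Explain a transaction tab.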
Finding a transaction ID in Watson OpenScale
- Slide the marker across the chart and click the View details link to visualize data for a specific hour.
- Click View transactions to view the list of transaction IDs.
- Click the Explain link in the Action column for any transaction ID, which opens that transaction in the Explain tab.
Finding explanations through chart details
Because explanations exist for model risk, fairness, drift, and performance, you can click one of the following links to view detailed transactions:
- From the Evaluations page, in the Number of explanations section, click the number link. In the Select an explanation window, click a transaction, and then, click View.
- For one of the fairness attributes, from the Evaluations page, click the fairness percentage link. Click the attribute, such as sex or age, click the chart, and then click View transactions.
- For the drift monitor, from the Evaluations page, click the drift percentage link. Click the chart, click the drift type, then click a tile to see the transactions associated with that particular drift group.
- For a performance chart, from the Evaluations page, click any of the percentage links. In the Performance section, click Throughput, click the chart, and then click the Explain link that follows the transaction you want to view.
Understanding the difference between contrastive explanations and LIME
Local Interpretable Model-Agnostic Explanations (LIME) is a Python library that Watson OpenScale uses to analyze the input and output values of a model and create human-understandable interpretations of the model. Although both LIME and contrastive explanations are valuable tools for making sense of a model, they offer different perspectives. Contrastive explanations reveal how much the values need to change to either change the prediction or still have the same prediction. The factors that need the maximum change are considered more important in this type of explanation; in other words, the features with the highest importance in contrastive explanations are the features to which the model is least sensitive. LIME, by contrast, reveals which features are most important for a specific data point. The approximately 5,000 perturbations that are typically done for the analysis are close to the data point. In an ideal setting, the features with high importance in LIME are the features that are most important for that specific data point. For these reasons, the features with high importance for LIME can differ from the features with high importance for contrastive explanations.
For proper processing of LIME explanations, Watson OpenScale does not support column names that contain an equals sign (=) in the data set.
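The core LIME idea described above can be sketched as follows. This is a simplified, self-contained illustration of the technique (perturb near the data point, weight samples by proximity, and fit a local linear surrogate whose coefficients approximate feature importance), not the actual LIME library or the Watson OpenScale implementation; all function and parameter names are illustrative.

```python
import numpy as np

def lime_importances(predict_proba, x, num_samples=5000, kernel_width=0.75, seed=0):
    """Simplified LIME-style local importances for a single data point x."""
    rng = np.random.default_rng(seed)
    # Draw perturbations close to the data point (LIME typically uses ~5,000)
    samples = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    # Weight each perturbation by its proximity to the original point
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    y = predict_proba(samples)
    # Fit a weighted linear surrogate: intercept column + features
    X = np.hstack([np.ones((num_samples, 1)), samples])
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

# Toy model: the prediction depends mostly on feature 0
def toy_model(S):
    return 1.0 / (1.0 + np.exp(-(3.0 * S[:, 0] + 0.1 * S[:, 1])))

importances = lime_importances(toy_model, np.array([0.0, 0.0]))
```

With this toy model, feature 0 receives a much larger local importance than feature 1, which is the kind of ranking the "Features influencing this prediction" view presents.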
For contrastive explanations, Watson OpenScale displays the maximum changes for the same outcome and the minimum changes for a changed outcome. These categories are also known as pertinent positive and pertinent negative values. These values help explain the behavior of the model in the vicinity of the data point for which an explanation is generated.
Consider an example of a model used for loan processing. It can have the following predictions: Loan Approved, Loan Partially Approved, and Loan Denied. For the sake of simplicity, assume that the model takes only one feature in input: salary. Consider a data point where the salary=150000 and the model predicts Loan Partially Approved. Assume that the median value of salary is 90000. A pertinent positive might be: Even if the salary of the person was 100000, the model still predicts Loan Partially Approved. Alternatively, the pertinent negative is: If the salary of the person was 200000, the model prediction would change to Loan Approved. Thus pertinent positive and pertinent negative together explain the behavior of the model in the vicinity of the data point for which the explanation is generated.
Watson OpenScale always displays a pertinent positive, even when no pertinent negative is displayed. When Watson OpenScale calculates the pertinent negative, it changes the values of all the features away from their median values. If the prediction does not change as the values move away from the median, there is no pertinent negative to display. For the pertinent positive, Watson OpenScale finds the maximum change in the feature values towards the median such that the prediction does not change. Practically, there is almost always a pertinent positive to explain a transaction (and it might be the feature values of the input data point itself).
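The loan example above can be sketched as a toy search. The thresholds and step size here are invented for illustration and the search is far simpler than the actual contrastive-explanation algorithm; the sketch only shows how "maximum change toward the median with the same outcome" (pertinent positive) and "minimum change away from the median with a changed outcome" (pertinent negative) relate to the model.

```python
# Toy one-feature loan model, with hypothetical thresholds chosen to match
# the salary example in the text (median salary assumed to be 90000).
def predict(salary):
    if salary >= 180000:
        return "Loan Approved"
    if salary >= 100000:
        return "Loan Partially Approved"
    return "Loan Denied"

MEDIAN = 90000

def pertinent_positive(value, step=1000):
    # Largest move toward the median that keeps the prediction unchanged
    original, best, v = predict(value), value, value
    while v > MEDIAN:
        v -= step
        if predict(v) != original:
            break
        best = v
    return best

def pertinent_negative(value, step=1000, limit=1_000_000):
    # Smallest move away from the median that changes the prediction
    original, v = predict(value), value
    while v < limit:
        v += step
        if predict(v) != original:
            return v
    return None  # no pertinent negative found within the search limit
```

For a salary of 150000, this sketch reports 100000 as the pertinent positive (the prediction is still Loan Partially Approved) and 180000 as the pertinent negative (the prediction changes to Loan Approved), mirroring the example in the text.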
Explaining a categorical model
A categorical model, such as a binary classification model, categorizes data into distinct groups. Unlike for regression, image, and unstructured text models, Watson OpenScale generates advanced explanations for binary classification models. You can use the Inspect tab to experiment with features by changing their values to see whether the outcome changes.
While the charts are useful in showing the most significant factors in determining the outcome of a transaction, classification models can also include advanced explanations. The Explain tab, in addition to basic information about the transaction and model, displays the following information:
- Predicted outcome: The outcomes are set in the model.
- How this prediction was determined: Displays the LIME explanation.
- Confidence level: How confident, as a percentage, the Watson OpenScale service is about the analysis.
- Features influencing this prediction: For this transaction, each feature in the model is assigned a percentage of relative weight that indicates how strongly the feature influences the model’s predicted outcome. A negative relative weight percentage indicates that the feature influenced the model towards a different predicted outcome.
The Inspect tab displays the following information as part of the contrastive explanation:
- Feature: The feature from the model. If the model was created with meta fields that were not used in training, you have the option of viewing only those features by selecting the Analyze controllable features only option.
- Original value: The original value that is used in training the model.
- New value: You can enter a new value for one or more features to see how it might change the outcome.
- Value for a different outcome: After you run an analysis, you can see the most likely settings to change the outcome.
- Importance: After you run an analysis, you can see what the relative importance is for each changed feature value.
Explaining image models
Watson OpenScale supports explainability for image data. See the image zones that contributed to the model output and the zones that did not contribute. Click an image for a larger view.
Explaining image model transactions
For an image classification model, explainability shows which parts of an image contributed positively to the predicted outcome and which contributed negatively. In the following example, the image in the positive pane shows the parts that contributed positively to the prediction. The image in the negative pane shows the parts of the image that had a negative impact on the outcome.
Image model examples
Use the following two Notebooks to see detailed code samples and develop your own Watson OpenScale deployments:
- Tutorial on generating an explanation for an image-based multiclass classification model
- Tutorial on generating an explanation for an image-based binary classification model
Explaining unstructured text models
Watson OpenScale supports explainability for unstructured text data.
If you use a Keras model that takes its input as a byte array, you must create a deployable function in IBM Watson Machine Learning. The function must accept the entire text as a single input feature, not as text that is vectorized and represented as a tensor or split across multiple features. IBM Watson Machine Learning supports the creation of deployable functions. For more information, see Passing payload data to model deployments.
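A deployable function of the kind described above might be sketched as follows. The closure-returning shape follows the general Watson Machine Learning Python-function pattern, but the payload fields, the `vectorize` helper, and the stand-in prediction are all illustrative placeholders; a real function would load and call the actual Keras model.

```python
# Illustrative sketch of a deployable function that accepts raw text as a
# single feature and vectorizes it inside the function, so the entire text
# (not a tensor) is what the monitoring service sees as input.
def deployable_function():
    def vectorize(text):
        # Placeholder: stands in for the real Keras tokenizer / byte encoding
        return [ord(c) for c in text][:100]

    def score(payload):
        # Each row carries the whole text as its single feature
        texts = [row[0] for row in payload["input_data"][0]["values"]]
        encoded = [vectorize(t) for t in texts]
        # Placeholder prediction: a real function would call model.predict(encoded)
        predictions = [[1.0 if sum(e) % 2 else 0.0] for e in encoded]
        return {"predictions": [{"fields": ["probability"], "values": predictions}]}

    return score
```

The key point is that vectorization happens inside `score`, so the deployment's input schema remains a single text feature.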
Explaining unstructured text transactions
The following example of explainability shows a classification model that evaluates unstructured text. The explanation shows the keywords that had either a positive or a negative impact on the model prediction. The explanation also shows the position of the identified keywords in the original text that was fed as input to the model.
Unstructured text models present the importance of words or tokens. To change the language, select a different language from the list. The explanation runs again by using a different tokenizer.
Unstructured text model example
Use the following Notebook to see detailed code samples and develop your own Watson OpenScale deployments:
Explaining tabular transactions
The following example of explainability shows a classification model that evaluates tabular data.
Questions and answers about explainability
What are the types of explanations shown in Watson OpenScale?
Watson OpenScale provides two types of explanations: local explanations that are based on LIME, and contrastive explanations. For more information, see Understanding the difference between contrastive explanations and LIME.
How do I interpret a local (LIME) explanation from Watson OpenScale?
In Watson OpenScale, LIME reveals which features played the most important role in the model prediction for a specific data point. The relative importance of each feature is also shown.
How do I interpret a contrastive explanation from Watson OpenScale?
A contrastive explanation in Watson OpenScale shows the minimum change to the input data point that would result in a different model prediction.
What is what-if analysis in Watson OpenScale?
The explanations UI also provides the ability to test what-if scenarios, in which you can change the feature values of the input data point and check the impact on the model prediction and probability.
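Conceptually, a what-if check re-scores the data point with a changed feature value and compares the predictions. The following sketch shows the idea against a hypothetical one-feature loan model; it is not the Watson OpenScale implementation.

```python
# Hypothetical toy model for illustration only
def predict(salary):
    return "Loan Approved" if salary >= 180000 else "Loan Partially Approved"

def what_if(original_value, new_value):
    # Re-score with the changed value and report whether the outcome changed
    before = predict(original_value)
    after = predict(new_value)
    return {
        "original_prediction": before,
        "new_prediction": after,
        "outcome_changed": before != after,
    }
```

For example, changing the salary from 150000 to 200000 flips the toy model's prediction, while a change to 160000 does not.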
In Watson OpenScale, for which models is Local/LIME explanation supported?
Local explanations are supported for regression and classification models that use structured data, and for classification models that use unstructured text or unstructured image data.
In Watson OpenScale, for which models are contrastive explanations and what-if analysis supported?
Contrastive explanations and what-if analyses are supported for models that use structured data and are of problem type classification only.
What are controllable features in Watson OpenScale explainability configuration?
By using controllable features, you can lock some features of the input data point so that they do not change when the contrastive explanation is generated and cannot be changed in the what-if analysis. Set the features that should not change to non-controllable (No) in the explainability configuration.
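The effect of locking features can be sketched as follows. This is a conceptual illustration only, not the Watson OpenScale configuration API: during a what-if or contrastive search, proposed changes to non-controllable features are simply discarded.

```python
def apply_changes(data_point, proposed_changes, controllable):
    """Apply only the changes that touch controllable features.

    data_point:       dict of feature name -> original value
    proposed_changes: dict of feature name -> candidate new value
    controllable:     set of feature names that are allowed to change
    """
    result = dict(data_point)
    for name, value in proposed_changes.items():
        if name in controllable:   # locked (non-controllable) features are skipped
            result[name] = value
    return result

point = {"salary": 150000, "age": 40}
# "age" is set to non-controllable, so the proposed change to it is ignored
adjusted = apply_changes(point, {"salary": 200000, "age": 30}, {"salary"})
```

Here `adjusted` keeps the original age of 40 while the controllable salary feature takes the new value.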