Explaining transactions

For each deployment, you can see explainability data for specific transactions. Depending on the type of model, this data can include different types of analysis, such as LIME, contrastive explanations, or the ability to test what-if scenarios.

 

Explanations

Watson OpenScale can generate the following types of explanations to help you understand the behavior of your model:

LIME

Local Interpretable Model-Agnostic Explanations (LIME) is a Python library that Watson OpenScale uses to analyze the input and output values of a model and create human-understandable interpretations of its behavior. LIME reveals which features are most important for a specific data point by scoring a set of perturbed samples, typically 5,000, that lie close to that data point. Ideally, the features that LIME ranks as most important are the features that most influence the prediction for that specific data point. For proper processing of LIME explanations, Watson OpenScale does not support column names that contain an equals sign (=) in the data set.
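
The following sketch shows the kind of local explanation that LIME produces by using the open source lime package directly with a toy scikit-learn model. The data set, feature names, and class labels are illustrative stand-ins; Watson OpenScale runs an equivalent analysis for you against your own deployment.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic data standing in for your own training set (hypothetical feature names).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["age", "income", "loan_amount"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["No Risk", "Risk"],   # example labels
    mode="classification",
)

# Perturb samples around one data point (Watson OpenScale typically uses about
# 5,000 perturbations) and fit a simple local surrogate model to rank features.
explanation = explainer.explain_instance(
    data_row=X[0],
    predict_fn=model.predict_proba,
    num_features=3,
    num_samples=5000,
)
print(explanation.as_list())   # feature -> local importance weight
```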

Contrastive explanations

Contrastive explanations reveal how much the feature values must change to either change the prediction or still produce the same prediction. The factors that need the maximum change are considered more important, so the features with the highest importance in contrastive explanations are the features where the model is least sensitive. For contrastive explanations, Watson OpenScale displays the maximum changes for the same outcome and the minimum changes for a changed outcome. These categories are also known as pertinent positive and pertinent negative values. These values help explain the behavior of the model in the vicinity of the data point for which an explanation is generated.
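
To illustrate the idea, the following sketch searches for the smallest change to a single numeric feature that flips a toy model's prediction, which is the intuition behind a pertinent negative. It is a simplified illustration only, not the algorithm that Watson OpenScale uses, and the model and data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data and a toy binary classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def minimum_change_to_flip(model, row, feature_index, step=0.01, max_steps=1000):
    """Smallest change to one feature that changes the predicted class."""
    original = model.predict([row])[0]
    for direction in (+1, -1):
        candidate = np.array(row, dtype=float)
        for _ in range(max_steps):
            candidate[feature_index] += direction * step
            if model.predict([candidate])[0] != original:
                return candidate[feature_index] - row[feature_index]
    return None  # no flip found within the search range

row = X[0]
for i in range(X.shape[1]):
    print(f"feature {i}: minimum change to flip = {minimum_change_to_flip(model, row, i)}")

# Features that need a large change to flip the outcome are the ones where the
# model is least sensitive, which is how contrastive explanations rank importance.
```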

Viewing explanations by transaction ID

  1. Click the Explain a transaction tab in the navigator.

  2. Select a model in the Deployed model list.
    The Recent transactions list displays all of the transactions that are processed by your model. The list contains columns that provide details about the outcome of each transaction.

    Transaction list

  3. Enter a transaction ID.
    If you use the internal lite version of PostgreSQL, you might not be able to retrieve your database credentials. This might prevent you from seeing transactions.

  4. Click the Explain button in the Actions column.
    The Transaction details page provides an analysis of the factors that influenced the outcome of the transaction.

    Transaction details

  5. Optional: For further analysis, click the Inspect tab.
    You can set new values to determine a different predicted outcome for the transaction.

  6. Optional: After you set new values, click Run analysis to show how different values can change the outcome of the transaction.

    Transaction details on the Inspect tab show values that might produce a different outcome

Whenever data is sent to the model for scoring, IBM Watson Machine Learning sets a transaction ID in the HTTP header through the X-Global-Transaction-Id field. This transaction ID is stored in the payload table. To find an explanation of the model behavior for a particular scoring request, specify the transaction ID that is associated with that request. This behavior applies only to IBM Watson Machine Learning transactions and is not available for non-WML transactions.
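
The following sketch sends a scoring request to a deployment with the requests library. The scoring URL, IAM token, and feature values are placeholders for your own deployment, and reading the transaction ID back from the response headers is an assumption; if the header is not returned, retrieve the ID from the payload logging table instead.

```python
import requests

# Placeholders: substitute your own deployment endpoint and IAM token.
scoring_url = "https://<wml-host>/ml/v4/deployments/<deployment_id>/predictions?version=2021-05-01"
headers = {
    "Authorization": "Bearer <iam-token>",
    "Content-Type": "application/json",
}

# Example payload; the field names and values are illustrative only.
payload = {
    "input_data": [{
        "fields": ["age", "income", "loan_amount"],
        "values": [[35, 52000, 12000]],
    }]
}

response = requests.post(scoring_url, json=payload, headers=headers)
response.raise_for_status()

# Assumption: the service echoes the transaction ID back in the response headers.
# That ID is what you enter on the Explain a transaction tab.
transaction_id = response.headers.get("X-Global-Transaction-Id")
print("Scoring output:", response.json())
print("Transaction ID:", transaction_id)
```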

 

Finding a transaction ID in Watson OpenScale

  1. From the chart, slide the marker across and click the View details link to visualize data for a specific hour.
  2. Click View transactions to view the list of transaction IDs.
  3. Click the Explain link in the Action column for any transaction ID, which opens that transaction in the Explain tab.

 

Finding explanations through chart details

Because explanations exist for model risk, fairness, drift, and performance, you can click one of the following links to view detailed transactions:

 

 

Explaining a categorical model

A categorical model, such as a binary classification model, categorizes data into distinct groups. Unlike for regression, image, and unstructured text models, Watson OpenScale generates advanced explanations for binary classification models. You can use the Inspect tab to experiment with features by changing their values to see whether the outcome changes.

While the charts are useful in showing the most significant factors in determining the outcome of a transaction, classification models can also include advanced explanations on the Explain and Inspect tabs.

 

Explaining image models

Watson OpenScale supports explainability for image data. You can see the image zones that contributed to the model output and the zones that did not contribute. Click an image for a larger view.

Explaining image model transactions

For an image classification model example of explainability, you can see which parts of an image contributed positively to the predicted outcome and which contributed negatively. In the following example, the image in the positive pane shows the parts that had a positive impact on the prediction. The image in the negative pane shows the parts of the image that had a negative impact on the outcome.

Explainability image classification confidence detail displays with an image of a tree frog. Different parts of the picture are highlighted in separate frames. Each part shows the extent to which it did or did not help to determine that the image is a frog.

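
For a sense of how such zone-based explanations can be produced, the following sketch uses the open source lime package's image explainer with a toy classifier and a synthetic image; both are stand-ins for your own model and input. Watson OpenScale generates the positive and negative panes for you.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Toy stand-in for your model: scores an image as class 1 when its upper half is
# brighter than its lower half. Replace it with your own function that maps a
# batch of images to class probabilities.
def classifier_fn(images):
    images = np.asarray(images, dtype=float)
    upper = images[:, : images.shape[1] // 2].mean(axis=(1, 2, 3))
    lower = images[:, images.shape[1] // 2 :].mean(axis=(1, 2, 3))
    p1 = 1.0 / (1.0 + np.exp(-10.0 * (upper - lower)))
    return np.stack([1.0 - p1, p1], axis=1)

# A synthetic RGB image in the [0, 1] range; use your own image array here.
image = np.random.default_rng(0).uniform(0.0, 1.0, size=(64, 64, 3))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=1,
    hide_color=0,
    num_samples=200,
)

# Regions that contributed positively to the top label (the "positive pane" idea).
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
highlighted = mark_boundaries(temp, mask)   # image with contributing zones outlined
```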

Image model examples

Use the following two Notebooks to see detailed code samples and develop your own Watson OpenScale deployments:

 

Explaining unstructured text models

Watson OpenScale supports explainability for unstructured text data.

If you use a Keras model that takes its input as a byte array, you must create a deployable function in IBM Watson Machine Learning. The function must accept the entire text as a single input feature, not as text that is vectorized and represented as a tensor or split across multiple features. IBM Watson Machine Learning supports the creation of deployable functions. For more information, see Passing payload data to model deployments.
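
The following sketch outlines the closure pattern that IBM Watson Machine Learning deployable functions follow, assuming the v4 payload format with fields and values. The inner scoring logic is a toy stand-in; in practice you would load your Keras model and tokenizer inside the function and convert the raw text to tensors there.

```python
# A minimal sketch of a deployable function, assuming the WML v4 payload format
# ({"input_data": [{"fields": [...], "values": [...]}]}).
def deployable_text_function():

    def score(payload):
        # Each row carries the entire text as a single feature, not a tensor.
        texts = [row[0] for row in payload["input_data"][0]["values"]]
        # Toy stand-in for model.predict: "positive" when the text is long.
        probabilities = [[0.2, 0.8] if len(t) > 40 else [0.7, 0.3] for t in texts]
        predictions = [int(p[1] > p[0]) for p in probabilities]
        return {
            "predictions": [{
                "fields": ["prediction", "probability"],
                "values": [[pred, prob] for pred, prob in zip(predictions, probabilities)],
            }]
        }

    return score

# Local check of the function before you store and deploy it with
# IBM Watson Machine Learning (the payload values are illustrative).
score = deployable_text_function()
print(score({"input_data": [{"fields": ["text"],
                             "values": [["The service was quick and the staff was friendly."]]}]}))
```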

For more information, see Working with unstructured text models and Enabling non-space-delimited language support.

Explaining unstructured text transactions

The following example of explainability shows a classification model that evaluates unstructured text. The explanation shows the keywords that had either a positive or a negative impact on the model prediction. The explanation also shows the position of the identified keywords in the original text that was fed as input to the model.

Explainability chart is displayed. It shows confidence levels for the unstructured text model

Unstructured text models present the importance of words or tokens. To change the language, select a different language from the list. The explanation runs again by using a different tokenizer.
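
The following sketch shows how word-level importance of this kind can be produced with the open source lime package's text explainer. The classifier_fn here is a toy stand-in that keys on a single word; replace it with your own function that maps a list of texts to class probabilities.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

# Toy stand-in for your model: scores a text as "positive" when it contains
# the word "friendly".
def classifier_fn(texts):
    p_positive = np.array([0.9 if "friendly" in t.lower() else 0.1 for t in texts])
    return np.stack([1.0 - p_positive, p_positive], axis=1)

explainer = LimeTextExplainer(class_names=["negative", "positive"])  # example labels
explanation = explainer.explain_instance(
    text_instance="The service was quick and the staff was friendly.",
    classifier_fn=classifier_fn,
    num_features=10,
)
print(explanation.as_list())   # word -> signed contribution to the prediction
```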

Unstructured text model example

Use the following Notebook to see detailed code samples and develop your own Watson OpenScale deployments:

 

Explaining tabular transactions

The following example of explainability shows a classification model that evaluates tabular data.

Explainability chart is displayed. It shows confidence levels for the tabular data model

Next steps

Parent topic: Get model insights