Explaining transactions

For each deployment, you can see explainability data for specific transactions. Depending on the model type, the data can include different types of analysis, such as LIME, contrastive explanations, or the ability to test what-if scenarios.


Viewing explanations by transaction ID

  1. Click the Explain a transaction tab in the navigator.
  2. Type a transaction ID.
  3. To analyze results further, click the Inspect tab, choose whether to analyze controllable features only, and click Run analysis.

    The results of this analysis show how different values can change the outcome of this specific transaction. You must designate which features are controllable. For more information, see Configuring the explainability monitor.

    Transaction details on the Inspect tab show values that might produce a different outcome

Whenever data is sent to the model for scoring, IBM Watson Machine Learning sets a transaction ID in the HTTP header through the X-Global-Transaction-Id field. This transaction ID is stored in the payload table. To find an explanation of the model behavior for a particular scoring request, specify the transaction ID that is associated with that request. This behavior applies only to IBM Watson Machine Learning transactions; it does not apply to non-WML transactions.
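
For example, the following is a minimal sketch of sending a scoring request with the Python requests library and picking up the transaction ID. The endpoint URL, token, and feature names are placeholders, and reading the header from the scoring response is an assumption for illustration rather than a documented contract:

```python
import requests

# Placeholder values: substitute your own WML deployment endpoint and IAM token.
SCORING_URL = "https://<wml-host>/ml/v4/deployments/<deployment_id>/predictions?version=2020-09-01"
TOKEN = "<iam-access-token>"

payload = {"input_data": [{"fields": ["salary"], "values": [[150000]]}]}

response = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# The transaction ID that is stored in the payload table travels in the
# X-Global-Transaction-Id header (assumed here to be returned on the
# scoring response); keep it to look up the explanation later.
transaction_id = response.headers.get("X-Global-Transaction-Id")
print("Transaction ID:", transaction_id)
```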


Finding a transaction ID in Watson OpenScale

  1. Slide the marker across the chart and click the View details link to visualize data for a specific hour.
  2. Click View transactions to view the list of transaction IDs.
  3. Click the Explain link in the Action column for any transaction ID to open that transaction in the Explain tab.


Finding explanations through chart details

Because explanations exist for model risk, fairness, drift, and performance, you can click the links on those charts to view detailed transactions.


Understanding the difference between contrastive explanations and LIME

Local Interpretable Model-Agnostic Explanations (LIME) is a Python library that Watson OpenScale uses to analyze the input and output values of a model and create human-understandable interpretations of the model. Although both LIME and contrastive explanations are valuable tools for making sense of a model, they offer different perspectives. Contrastive explanations reveal how much the feature values need to change to either change the prediction or still produce the same prediction. The features that need the greatest change are considered more important in this type of explanation; in other words, the features with the highest importance in contrastive explanations are the features where the model is least sensitive. In contrast, LIME reveals which features are most important for a specific data point. The roughly 5000 perturbations that are typically done for the analysis are close to the data point, so in an ideal setting the features with high importance in LIME are the features that are most important for that specific data point. For these reasons, the features with high importance in LIME can differ from the features with high importance in contrastive explanations.
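
To make the LIME side of this comparison concrete, the following sketch runs the open-source lime package directly against a hypothetical scikit-learn classifier. Watson OpenScale performs an equivalent analysis for you, so the model, feature names, and data here are invented for illustration only:

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data for a loan-style model: salary and years employed.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[90000, 5], scale=[25000, 3], size=(1000, 2))
y_train = (X_train[:, 0] > 100000).astype(int)   # toy label: approved when salary is high

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["salary", "years_employed"],
    class_names=["denied", "approved"],
    mode="classification",
)

# LIME perturbs roughly 5000 samples close to this data point and fits a
# simple local model to rank which features drove this specific prediction.
explanation = explainer.explain_instance(
    np.array([150000.0, 4.0]),
    model.predict_proba,
    num_features=2,
    num_samples=5000,
)
print(explanation.as_list())
```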

For proper processing of LIME explanations, Watson OpenScale does not support column names that contain an equals sign (=) in the data set.

For contrastive explanations, Watson OpenScale displays the maximum changes for the same outcome and the minimum changes for a changed outcome. These categories are also known as pertinent positive and pertinent negative values. These values help explain the behavior of the model in the vicinity of the data point for which an explanation is generated.

Consider an example of a model that is used for loan processing. It can have the following predictions: Loan Approved, Loan Partially Approved, and Loan Denied. For the sake of simplicity, assume that the model takes only one input feature: salary. Consider a data point where salary=150000 and the model predicts Loan Partially Approved. Assume that the median value of salary is 90000. A pertinent positive might be: even if the salary of the person were 100000, the model would still predict Loan Partially Approved. The pertinent negative might be: if the salary of the person were 200000, the model prediction would change to Loan Approved. Together, the pertinent positive and pertinent negative explain the behavior of the model in the vicinity of the data point for which the explanation is generated.

Watson OpenScale always displays a pertinent positive, even when no pertinent negative is displayed. When Watson OpenScale calculates the pertinent negative, it changes the values of all the features away from their median values. If the prediction does not change even when the values move away from the median, there is no pertinent negative to display. For the pertinent positive, Watson OpenScale finds the maximum change in the feature values toward the median such that the prediction does not change. Practically, there is almost always a pertinent positive to explain a transaction (it might be the feature values of the input data point itself).
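
The following sketch illustrates those two searches on the single-feature loan example above. It is not the algorithm that Watson OpenScale implements, and the toy model thresholds are invented for the illustration:

```python
import numpy as np

# Illustrative search only; not the algorithm that Watson OpenScale uses.
def predict(salary):
    """Toy loan model from the example: one input feature, three outcomes."""
    if salary < 60000:
        return "Loan Denied"
    if salary < 180000:
        return "Loan Partially Approved"
    return "Loan Approved"

salary = 150000          # the data point being explained
median_salary = 90000    # median of the training data
original = predict(salary)

# Pertinent positive: the largest move of the feature toward the median
# that still keeps the original prediction.
pertinent_positive = salary
for candidate in np.linspace(salary, median_salary, 200):
    if predict(candidate) != original:
        break
    pertinent_positive = candidate

# Pertinent negative: the smallest move of the feature away from the median
# that changes the prediction; there might be no such value.
pertinent_negative = None
for candidate in np.linspace(salary, salary + 200000, 200):
    if predict(candidate) != original:
        pertinent_negative = candidate
        break

print("Original prediction:", original)
print("Pertinent positive (same outcome down to):", round(pertinent_positive))
print("Pertinent negative (outcome changes at):",
      "none" if pertinent_negative is None else round(pertinent_negative))
```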


Explaining a categorical model

A categorical model, such as a binary classification model, categorizes data into distinct groups. Unlike for regression, image, and unstructured text models, Watson OpenScale generates advanced explanations for binary classification models. You can use the Inspect tab to experiment with features by changing the values to see whether the outcome changes.

While the charts are useful in showing the most significant factors in determining the outcome of a transaction, classification models can also include advanced explanations on the Explain and Inspect tabs.


Explaining image models

Watson OpenScale supports explainability for image data. See the image zones that contributed to the model output and the zones that did not contribute. Click an image for a larger view.

Explaining image model transactions

For an image classification model, explainability shows which parts of an image contributed positively to the predicted outcome and which parts contributed negatively. In the following example, the image in the positive pane shows the parts that had a positive impact on the prediction. The image in the negative pane shows the parts that had a negative impact on the outcome.

Explainability image classification confidence detail displays with an image of a tree frog. Different parts of the picture are highlighted in separate frames. Each part shows the extent to which it did or did not help to determine that the image is a frog.

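The following is a rough sketch of the same idea with the open-source lime package and a stand-in classifier. Watson OpenScale produces the positive and negative zones for you, so the classifier, image, and parameters here are assumptions for illustration only:

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """Stand-in for your deployed image model: returns class probabilities
    for a batch of images (two classes, based on a toy green-channel signal)."""
    images = np.asarray(images)
    green_score = images[:, :, :, 1].mean(axis=(1, 2))
    return np.column_stack([1 - green_score, green_score])

image = np.random.rand(64, 64, 3)   # placeholder; use a real image in practice

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=1000
)

# Superpixels that contributed positively to the predicted class...
pos_img, pos_mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
# ...and superpixels that pushed against it.
neg_img, neg_mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=False, negative_only=True,
    num_features=5, hide_rest=False
)
highlighted = mark_boundaries(pos_img, pos_mask)   # image with positive zones outlined
```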

Image model examples

Use the following two Notebooks to see detailed code samples and develop your own Watson OpenScale deployments:


Explaining unstructured text models

Watson OpenScale supports explainability for unstructured text data.

If you use a Keras model that takes a byte array as input, you must create a deployable function in IBM Watson Machine Learning. The function must accept the entire text as a single input feature, not as text that is vectorized and represented as a tensor or split across multiple features. IBM Watson Machine Learning supports the creation of deployable functions. For more information, see Passing payload data to model deployments.
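
A minimal sketch of such a deployable function follows. The closure pattern and the input_data payload format are assumptions based on common Watson Machine Learning function examples, and the vectorizer and Keras model are toy placeholders that you would replace with your own trained assets:

```python
def deployable_text_scoring():
    """Returns a score() closure that accepts raw text as a single feature and
    vectorizes it inside the function, so the logged payload contains the
    original text rather than a tensor."""
    import numpy as np
    import tensorflow as tf

    # Placeholder assets: in a real function, load your trained vectorizer
    # and Keras model from storage instead of building toy ones here.
    vectorizer = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=50)
    vectorizer.adapt(["great product", "terrible broken refund"])
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(1000, 8),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

    def score(payload):
        # Assumed payload shape for this sketch:
        # {"input_data": [{"fields": ["text"], "values": [["some text"], ...]}]}
        texts = [row[0] for row in payload["input_data"][0]["values"]]
        sequences = vectorizer(tf.constant(texts))   # vectorization happens inside the function
        probabilities = model.predict(sequences)
        return {
            "predictions": [{
                "fields": ["prediction", "probability"],
                "values": [[int(np.argmax(p)), p.tolist()] for p in probabilities],
            }]
        }

    return score

# Local test of the closure before storing and deploying it:
score = deployable_text_scoring()
print(score({"input_data": [{"fields": ["text"], "values": [["great product"]]}]}))
```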

For more information, see Working with unstructured text models and Enabling non-space-delimited language support.

Explaining unstructured text transactions

The following example of explainability shows a classification model that evaluates unstructured text. The explanation shows the keywords that had either a positive or a negative impact on the model prediction. The explanation also shows the position of the identified keywords in the original text that was fed as input to the model.

An explainability chart is displayed. It shows confidence levels for the unstructured text model

Unstructured text models present the importance of words or tokens. To change the language, select a different language from the list. The explanation runs again by using a different tokenizer.
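
For reference, the following sketch shows roughly what the keyword analysis looks like with the open-source lime package. The classifier is a stand-in, and the split_expression parameter marks where a different tokenization rule could be supplied:

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

def classifier_fn(texts):
    """Stand-in for a deployed text model: returns two-class probabilities
    based on a toy keyword signal."""
    hits = np.array([("refund" in t.lower()) or ("broken" in t.lower()) for t in texts], dtype=float)
    return np.column_stack([1 - hits, hits])

explainer = LimeTextExplainer(
    class_names=["no_complaint", "complaint"],
    split_expression=r"\W+",   # swap in a different rule for other tokenization needs
)

explanation = explainer.explain_instance(
    "The product arrived broken and I want a refund",
    classifier_fn,
    num_features=5,
)
# Positive weights pushed toward the predicted class; negative weights pushed away.
print(explanation.as_list())
```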

Unstructured text model example

Use the following Notebook to see detailed code samples and develop your own Watson OpenScale deployments:


Explaining tabular transactions

The following example of explainability shows a classification model that evaluates tabular data.

An explainability chart is displayed. It shows confidence levels for the tabular data model


Questions and answers about explainability

What are the types of explanations shown in Watson OpenScale?

Watson OpenScale provides two types of explanations: local explanations, which are based on LIME, and contrastive explanations. For more information, see Understanding the difference between contrastive explanations and LIME.

How do I interpret a local (LIME) explanation in Watson OpenScale?

In Watson OpenScale, a LIME explanation reveals which features played the most important role in the model prediction for a specific data point. The relative importance of those features is also shown.

How do I interpret a contrastive explanation in Watson OpenScale?

A contrastive explanation in Watson OpenScale shows the minimum change that can be made to the input data point to produce a different model prediction.

What is what-if analysis in Watson OpenScale?

The explanations UI also provides the ability to test what-if scenarios, in which you can change the feature values of the input data point and check the impact on the model prediction and probability.
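
Outside of the UI, a what-if check amounts to re-scoring a copy of the data point with edited feature values, as in the following sketch. The model and feature values are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a deployed classification model with two features
# (salary in thousands, years employed).
rng = np.random.default_rng(1)
X = rng.normal(loc=[90, 5], scale=[25, 3], size=(500, 2))
y = (X[:, 0] + 5 * X[:, 1] > 110).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

original = np.array([[150.0, 4.0]])   # the transaction being explained
what_if = original.copy()
what_if[0, 0] = 80.0                  # what if the salary were lower?

for name, point in [("original", original), ("what-if", what_if)]:
    print(name, model.predict(point), model.predict_proba(point).round(3))
```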

In Watson OpenScale, for which models is Local/LIME explanation supported?

Local explanations are supported for models that use structured data with regression or classification problem types, and for models that use unstructured text or unstructured image data with a classification problem type.

In Watson OpenScale, for which models are contrastive explanations and what-if analysis supported?

Contrastive explanations and what-if analysis are supported only for models that use structured data and have a classification problem type.

What are controllable features in Watson OpenScale explainability configuration?

With controllable features, some features of the input data point can be locked so that they do not change when the contrastive explanation is generated, and they cannot be changed in what-if analysis. Set the features that must not change to non-controllable (No) in the explainability configuration.

Next steps