Watson OpenScale uses two types of debiasing: passive and active. Passive debiasing reveals bias, while active debiasing prevents you from carrying that bias forward by changing the model in real time for the current application. In addition to direct bias, Watson OpenScale can determine indirect bias.
Passive debiasing is the work that Watson OpenScale does by itself, automatically, every hour. It is considered passive because it happens without user intervention. When Watson OpenScale checks for bias, it also debiases the data: it analyzes the behavior of the model and identifies the data on which the model acts in a biased manner.
Watson OpenScale then builds a machine learning model to predict whether the deployed model is likely to act in a biased manner on a given new data point. On an hourly basis, Watson OpenScale analyzes the data that the model receives and finds the data points that cause bias. For such data points, the fairness attribute is perturbed from the minority value to the majority value, and the perturbed data is sent to the original model for prediction. The original model's prediction on the perturbed data is used as the debiased output.
Watson OpenScale debiases all the data that the model received in the past hour. It also computes the fairness of the debiased output and displays it on the Debiased model tab.
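The hourly flow described above can be sketched in Python. This is a minimal illustration, not the actual Watson OpenScale implementation: the fairness attribute, its values, and the toy model and surrogate bias detector below are all hypothetical stand-ins.

```python
def perturb(record, fairness_attribute="sex", minority="female", majority="male"):
    """Flip the fairness attribute from the minority value to the majority value."""
    flipped = dict(record)
    if flipped[fairness_attribute] == minority:
        flipped[fairness_attribute] = majority
    return flipped

def toy_model(record):
    """Stand-in for the deployed model; deliberately biased for illustration."""
    return "denied" if record["sex"] == "female" else "approved"

def bias_detector(record):
    """Stand-in for the surrogate model that Watson OpenScale trains to flag
    data points on which the deployed model is likely to act in a biased way."""
    return record["sex"] == "female"

def debiased_output(record):
    # For flagged data points, score the perturbed record and use that
    # prediction as the debiased output; otherwise keep the original prediction.
    if bias_detector(record):
        return toy_model(perturb(record))
    return toy_model(record)

print(debiased_output({"sex": "female", "age": 30}))  # scored via the perturbed record
```

The same decision rule (return the original prediction unless the record is flagged, in which case score the perturbed record) is what active debiasing applies per request.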
Active debiasing is a way for you to request debiased results and bring them into your application through a REST API endpoint. You actively direct Watson OpenScale to run debiasing so that your application behaves in an unbiased way. In active debiasing, you call a debiasing REST API endpoint from your application. This REST API endpoint internally calls your model and checks its behavior.
If Watson OpenScale detects that the model is acting in a biased manner, it perturbs the data, and sends it back to the original model. The output of the original model on the perturbed data is returned as the debiased prediction. If Watson OpenScale determines that the original model is not acting in a biased manner, then Watson OpenScale returns the original model’s prediction as the debiased prediction. Thus, by using this REST API endpoint, you can ensure that your application does not base decisions on biased output.
Selecting the debiased scoring endpoint
- On the Evaluations window, click Configure monitors.
- In the navigation pane, click Endpoints.
- In the Information pane, click the Endpoints tab.
- From the Endpoint list, click Debiased transactions.
- From the Code language list, choose the type of code: cURL, Java, or Python.
- To copy the code snippet, click the Copy to clipboard icon.
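As an illustration of what the copied snippet drives, the request to the debiased transactions endpoint could be assembled as follows in Python. The host, token, and `{fields, values}` payload shape are placeholder assumptions; substitute the actual values from the snippet you copied.

```python
import json
import urllib.request

# Placeholders -- replace with the endpoint URL and IAM token from your
# copied snippet.
DEBIASED_ENDPOINT = "https://<service-host>/v1/debiased_predictions"
IAM_TOKEN = "<iam-access-token>"

def build_debiased_request(fields, values):
    """Assemble a POST request for debiased scoring. The {fields, values}
    payload shape is an assumption modeled on common scoring payloads."""
    payload = json.dumps({"fields": fields, "values": values}).encode("utf-8")
    return urllib.request.Request(
        url=DEBIASED_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {IAM_TOKEN}",
        },
        method="POST",
    )

req = build_debiased_request(["age", "sex"], [[30, "female"]])
print(req.get_method(), req.get_header("Content-type"))
# A real call would then send it with: urllib.request.urlopen(req)
```

The response carries the debiased prediction, so your application can consume it in place of the original model's output.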
Viewing fairness results for indirect bias
After you ensure that your model is set up for indirect bias analysis, you can view results of the analysis:
The correlated features are initially collapsed, with the correlation strength shown after each feature. A tooltip describes the proxy features, and the three most relevant features are displayed. Expand each feature to see the values for the three lowest monitored groups and the three highest reference groups. For each group, the three most frequent values and the number of favorable outcomes for that class are displayed.
- To mitigate bias, you must build a new version of the model that fixes the problem. Watson OpenScale stores biased records in the manual labeling table. These records must be manually labeled, and the model must then be retrained with this additional data to build a new, unbiased version of the model.
- You can also extract a list of the individual biased records through the manual labeling table. Connect to the manual labeling table and read the records by using standard SQL queries.
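A standard SQL read of the manual labeling table might look like the following sketch. The table name, column names, and sample rows are hypothetical, and an in-memory SQLite database stands in for your actual data mart connection.

```python
import sqlite3

# In-memory stand-in for the data mart holding the manual labeling table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE manual_labeling (scoring_id TEXT, age INTEGER, sex TEXT, prediction TEXT)"
)
conn.executemany(
    "INSERT INTO manual_labeling VALUES (?, ?, ?, ?)",
    [("t1", 30, "female", "denied"), ("t2", 45, "male", "approved")],
)

# Read the biased records with a standard SQL query.
rows = conn.execute(
    "SELECT scoring_id, age, sex, prediction FROM manual_labeling"
).fetchall()
for row in rows:
    print(row)
```

With your real connection, the same SELECT pattern extracts the biased records for manual labeling and retraining.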