Validating and monitoring AI models with Watson OpenScale
IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production.
Enterprises use model evaluation to automate and operationalize the AI lifecycle in business applications. This approach ensures that AI models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions. Model evaluation supports AI models that are built and run with the tools and model-serving frameworks of your choice.
Components of Watson OpenScale
Watson OpenScale has four main areas:
- Insights: The Insights dashboard displays the models that you are monitoring and provides status on the results of model evaluations.
- Explain a transaction: Explanations describe how the model determined a prediction. They list some of the most important factors that led to the prediction so that you can be confident in the process. (For requesting an explanation programmatically, see the sketch after this list.)
- Configuration: Use the Configuration tab to select a database, set up a machine learning provider, and optionally add integrated services.
- Support: The Support tab provides you with resources to get the help you need with Watson OpenScale. Access product documentation or connect with IBM Community on Stack Overflow. To create a service ticket with the IBM Support team, click Manage tickets.
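You can also work with these areas programmatically. The following is a minimal sketch, assuming the ibm-watson-openscale Python SDK and a Cloud Pak for Data cluster; the host, credentials, and scoring ID are placeholders, and the explanation request uses the SDK's monitor_instances interface.

```python
# A minimal sketch, assuming the ibm-watson-openscale Python SDK is installed
# (pip install ibm-watson-openscale). Host, user name, password, and scoring ID
# are placeholders for your own environment.
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient

CPD_URL = "https://<cluster-host>"  # your Cloud Pak for Data URL (placeholder)

authenticator = CloudPakForDataAuthenticator(
    url=CPD_URL,
    username="<username>",
    password="<password>",
    disable_ssl_verification=True,
)
client = APIClient(service_url=CPD_URL, authenticator=authenticator)
print(client.version)  # confirm the connection by printing the service version

# Request an explanation for a scored transaction. Assumption: you already know
# the transaction's scoring ID, for example from the payload logging table.
explanation = client.monitor_instances.explanation_tasks(
    scoring_ids=["<scoring-id>"]
).result
print(explanation)
```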
Monitors
Monitors evaluate your deployments against specified metrics. Configure alerts that indicate when a threshold is crossed for a given metric. Watson OpenScale evaluates your deployments based on three default monitors:
- Quality describes the model’s ability to provide correct outcomes based on labeled test data called Feedback data.
- Fairness describes how evenly the model delivers favorable outcomes between groups. The Fairness monitor looks for biased outcomes in your model.
- Drift warns you of a drop in accuracy or data consistency.
Note: You can also create Custom monitors for your deployment.
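For example, the Fairness monitor can be enabled for a monitored deployment through the Python SDK. The following is a minimal sketch that reuses the client from the connection example above; the monitored feature, group values, favorable and unfavorable classes, and threshold values are illustrative placeholders, and the exact parameter structure can vary by release.

```python
# A minimal sketch of enabling the Fairness monitor for an existing subscription,
# assuming the ibm-watson-openscale Python SDK and the `client` created earlier.
# The feature, group values, classes, and thresholds below are placeholders.
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target
from ibm_watson_openscale.supporting_classes.enums import TargetTypes

target = Target(
    target_type=TargetTypes.SUBSCRIPTION,
    target_id="<subscription-id>",          # the monitored deployment's subscription
)

parameters = {
    "features": [
        {
            "feature": "Sex",                # illustrative protected attribute
            "majority": ["male"],
            "minority": ["female"],
            "threshold": 0.95,               # alert when the fairness score drops below 95%
        }
    ],
    "favourable_class": ["No Risk"],         # illustrative favorable outcome
    "unfavourable_class": ["Risk"],          # illustrative unfavorable outcome
    "min_records": 100,                      # evaluate only after 100 scored records
}

fairness_monitor = client.monitor_instances.create(
    data_mart_id="<data-mart-id>",           # placeholder
    background_mode=False,
    monitor_definition_id=client.monitor_definitions.MONITORS.FAIRNESS.ID,
    target=target,
    parameters=parameters,
).result
```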
Automated setup
To quickly see how Watson OpenScale monitors a model, run the demo scenario option that is provided when you first launch Watson OpenScale.
- Sign in to your Watson OpenScale instance.
- Click the Add-ons icon.
- Click the Watson OpenScale tile.
- Click the Open button.
- To work with the auto setup, click Next.
- You must use the locally installed instance of Watson Machine Learning. There is no option for a remote instance. If prompted, select the local option and click Next.
- Provide the Host name or IP address (without the leading https:// or a trailing forward slash), Port, Database name, Username, and Password for your Db2 database. For Db2 options that are part of your cluster, see Services, Data sources, where you find options such as Db2 Warehouse and Db2 Advanced Enterprise Server Edition. For an external database, you can use IBM Db2 Database. Click Prepare. The connection details that you gather might look like the placeholder values in the sketch that follows this step.
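The Db2 connection details for this step might look like the following sketch; every value shown is a placeholder for your own environment.

```python
# Placeholder Db2 connection details gathered before running the automated setup.
# Note that the host is given without the leading https:// and without a trailing slash.
db2_connection = {
    "hostname": "db2.example.com",   # host name or IP address (placeholder)
    "port": 50000,                   # the port that your Db2 instance listens on
    "database": "OPENSCALE",         # database name (placeholder)
    "username": "db2inst1",          # placeholder
    "password": "<password>",        # placeholder
}
```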
As the model evaluation services are being configured, you can review the demo scenario that displays. When configuration is complete, choose whether to take a tour or exit to the dashboard.
- To take the tour, click Start tour.
- To exit the auto setup and go to the dashboard, click Explore on my own.
Viewing results
After the setup finishes, you are ready to start using Watson OpenScale. Open the Insights dashboard to view the status and results of your model evaluations.
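If you prefer to check the results programmatically, the following minimal sketch (again assuming the ibm-watson-openscale Python SDK and the client created earlier) lists the deployments and monitors that also appear on the Insights dashboard.

```python
# A minimal sketch, assuming the ibm-watson-openscale Python SDK and the
# `client` created in the earlier connection example.

# List the deployments (subscriptions) that Watson OpenScale is monitoring;
# these are the entries that appear on the Insights dashboard.
client.subscriptions.show()

# List the monitor instances (Quality, Fairness, Drift, and any custom monitors)
# that are attached to those subscriptions.
client.monitor_instances.show()
```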
Next steps
- Learn more about viewing and interpreting the data and monitoring explainability.
- To learn more about model evaluation in action, see How AI picks the highlights from Wimbledon fairly and fast.
Parent topic: Deploying assets