Validating and monitoring AI models with Watson OpenScale

IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct drift in accuracy when an AI model is in production.
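The idea behind accuracy-drift detection can be illustrated with a minimal sketch: compare a model's recent accuracy on labeled production data against its training-time baseline and flag drift when the drop exceeds a tolerance. The function names and tolerance value here are illustrative assumptions, not part of the Watson OpenScale API.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def detect_accuracy_drift(baseline_accuracy, predictions, labels, tolerance=0.05):
    """Return (drifted, current_accuracy).

    drifted is True when accuracy has fallen more than `tolerance`
    below the training-time baseline.
    """
    current = accuracy(predictions, labels)
    return baseline_accuracy - current > tolerance, current

drifted, current = detect_accuracy_drift(
    baseline_accuracy=0.90,
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    labels=[1, 0, 0, 1, 1, 1, 0, 1],
)
# current accuracy is 5/8 = 0.625, a drop of 0.275, so drift is flagged
```

In production, the "current" window would be recomputed on a schedule over recent scored transactions rather than a fixed list.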

Service: This service is not available by default. An administrator must install this service on the IBM Cloud Pak for Data platform, and you must be given access to the service. To determine whether the service is installed, open the Services catalog and check whether the service is enabled.

Enterprises use model evaluation to automate and operationalize the AI lifecycle in business applications. This approach helps ensure that AI models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions. Model evaluation supports AI models built and run with the tools and model-serving frameworks of your choice.

Watch this short video to learn more about Watson OpenScale:

Trustworthy AI in action

To learn more about model evaluation in action, see How AI picks the highlights from Wimbledon fairly and fast.

Components of Watson OpenScale

Watson OpenScale has four main areas:

Monitors

Monitors evaluate your deployments against specified metrics. You can configure alerts that indicate when a threshold is crossed for a metric. Watson OpenScale evaluates your deployments based on three default monitors:

- Quality: measures how well the model predicts outcomes that match the labeled test data.
- Fairness: detects whether the model produces biased outcomes that favor one group over another.
- Drift: detects drops in accuracy and data consistency as production data changes over time.

Note: You can also create Custom monitors for your deployment.
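Conceptually, a monitor pairs a metric with a threshold and raises an alert when that threshold is crossed. The sketch below is a hypothetical illustration of that pattern, not the Watson OpenScale API; the class and field names are assumptions made for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Monitor:
    """A metric watched against a threshold, in the spirit of a deployment monitor."""
    metric: str
    threshold: float
    lower_is_violation: bool = True  # e.g. quality: alert when accuracy falls below threshold

    def evaluate(self, value: float) -> Optional[str]:
        """Return an alert message if the threshold is crossed, else None."""
        if self.lower_is_violation:
            violated = value < self.threshold
        else:
            violated = value > self.threshold
        if violated:
            return f"ALERT: {self.metric} = {value:.2f} crossed threshold {self.threshold:.2f}"
        return None

quality = Monitor(metric="accuracy", threshold=0.80)
print(quality.evaluate(0.72))  # accuracy below threshold, so an alert is returned
print(quality.evaluate(0.91))  # within threshold, so None is returned
```

A custom monitor follows the same shape: you define the metric, how it is computed, and the threshold at which an alert should fire.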

Get started with Watson OpenScale

Choose a method for setting up Watson OpenScale.

Parent topic: Deploying assets