Validating and monitoring AI models with Watson OpenScale

IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct drift in accuracy when an AI model is in production.

Enterprises use IBM Watson OpenScale to automate and operationalize the AI lifecycle in business applications. This approach helps ensure that AI models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions. Watson OpenScale supports AI models built and run in the tools and model-serving frameworks of your choice.
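
If you prefer to work with Watson OpenScale programmatically, a Python SDK (ibm-watson-openscale) is available. The following is a minimal sketch of authenticating against a Cloud Pak for Data cluster, assuming that the ibm-watson-openscale and ibm-cloud-sdk-core packages are installed; the host URL, username, and password are placeholders.

```python
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient

# Placeholder Cloud Pak for Data details -- replace with your own values.
CPD_URL = "https://cpd-host.example.com"

authenticator = CloudPakForDataAuthenticator(
    url=CPD_URL,
    username="admin",
    password="password",
    disable_ssl_verification=True,
)

# Connect to the Watson OpenScale service that runs on the same cluster.
wos_client = APIClient(service_url=CPD_URL, authenticator=authenticator)
print(wos_client.version)
```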


Automated setup

To quickly see how Watson OpenScale monitors a model, run the demo scenario that is provided when you first log in to the Watson OpenScale UI.

  1. Sign in to your Watson OpenScale instance.
  2. Click the Add-ons icon.
  3. Click the Watson OpenScale tile.
  4. Click the Open button.
  5. To work with the auto setup, click Next.
  6. You must use the locally installed instance of Watson Machine Learning. There is no option for a remote instance. If prompted, select the local option and click Next.

  7. Provide the connection details for your Db2 database: the host name or IP address (without the preceding https:// or a trailing forward slash (/)), the port, the database name, and a username and password. For Db2 options that are part of your cluster, see Services > Data Sources, where you can find options such as Db2 Warehouse and Db2 Advanced Enterprise Server Edition. For an external database, you can use IBM Db2 Database. Click Prepare.
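
Before you click Prepare, you may want to confirm that the Db2 connection details are valid and reachable from your environment. The following is a minimal sketch that uses the ibm_db Python driver; the database name, host, port, and credentials shown are placeholders for the values that you enter in the form.

```python
import ibm_db  # IBM Db2 driver: pip install ibm_db

# Placeholder values -- use the same details that you enter in the setup form.
dsn = (
    "DATABASE=OPENSCALE;"
    "HOSTNAME=db2.example.com;"   # host name or IP address, no https:// prefix
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2user;"
    "PWD=passw0rd;"
)

try:
    conn = ibm_db.connect(dsn, "", "")
    print("Connection succeeded.")
    ibm_db.close(conn)
except Exception as err:
    print("Connection failed:", err)
```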

As the Watson OpenScale services are being configured, you can review the demo scenario that displays. When configuration is complete, choose whether to take a tour or exit to the dashboard.

Viewing results

After you finish, you are ready to start using Watson OpenScale.
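
If you completed the automated setup, you can also inspect the resulting configuration programmatically. The sketch below assumes the authenticated wos_client from the earlier example and uses the SDK's listing helpers; the exact helper names and output can vary by SDK version.

```python
# Assumes the authenticated `wos_client` from the earlier sketch.
# Each .show() call prints a summary table of the corresponding resources.
wos_client.data_marts.show()          # the database configured during setup
wos_client.subscriptions.show()       # deployments that Watson OpenScale monitors
wos_client.monitor_instances.show()   # fairness, quality, drift, and explainability monitors
```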

Next steps