Supported machine learning engines, frameworks, and models
The Watson OpenScale service supports the following machine learning engines:
- IBM Watson Machine Learning
- Microsoft Azure ML Studio
- Microsoft Azure ML Service
- Amazon SageMaker
- Custom machine learning frameworks (the framework must provide equivalency to IBM Watson Machine Learning)
- IBM SPSS C&DS (available only in IBM Watson OpenScale for IBM Cloud Pak for Data)

For each of these engines, you can use IBM Watson OpenScale to perform payload logging and feedback logging, and to measure performance accuracy, runtime bias detection, explainability, and the auto-debias function.
Support for multiple machine learning engines
Watson OpenScale supports multiple machine learning engines within a single instance. You can provision them through the Watson OpenScale dashboard configuration or the Python SDK.
When you first set up Watson OpenScale, you might have used the user interface or the automated setup option to provision your first machine learning engine. To add more machine learning engines, use either the Configure tab on the Watson OpenScale dashboard or the Python SDK.
Using the dashboard to add providers
- After you open Watson OpenScale, from the Configure tab, click the Add machine learning provider button.
- Select the provider you want to add.
- Enter the required information, such as credentials, and then click Save.
After you save your configuration, you are ready to go to the dashboard to choose deployments and configure monitors.
Editing machine learning providers
To edit a machine learning provider, click the tile menu icon and then click View & edit details.
Adding machine learning providers by using the Python SDK
You can add more than one machine learning engine to Watson OpenScale by using the wos_client.service_providers.add method of the Python API.
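The snippets that follow assume that a Watson OpenScale Python client named wos_client has already been created. A minimal sketch of that setup, assuming the ibm-watson-openscale package is installed and that you authenticate with an IBM Cloud API key (the key value shown is a placeholder):

```python
# Sketch only: assumes the ibm-watson-openscale package is installed and that
# "<your IBM Cloud API key>" is replaced with a valid IBM Cloud API key.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

authenticator = IAMAuthenticator(apikey="<your IBM Cloud API key>")
wos_client = APIClient(authenticator=authenticator)
```

On IBM Cloud Pak for Data, the client is typically constructed with a different authenticator and a service URL that points to your cluster; see the Watson OpenScale sample notebooks for complete setup examples.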
IBM Watson Machine Learning
To add the IBM Watson Machine Learning engine, run the following command:
WML_CREDENTIALS = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": CLOUD_API_KEY
}

wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
    deployment_space_id=WML_SPACE_ID,
    operational_space_id="production",
    credentials=WMLCredentialsCloud(
        apikey=CLOUD_API_KEY,  # use apikey=IAM_TOKEN if you initiated the client with an IAM token
        url=WML_CREDENTIALS["url"],
        instance_id=None
    ),
    background_mode=False
).result
Microsoft Azure ML Studio
To add the Azure ML Studio machine learning engine, run the following command:
AZURE_ENGINE_CREDENTIALS = {
    "client_id": "",
    "client_secret": "",
    "subscription_id": "",
    "tenant": ""
}

wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=ServiceTypes.AZURE_MACHINE_LEARNING,
    credentials=AzureCredentials(
        subscription_id=AZURE_ENGINE_CREDENTIALS["subscription_id"],
        client_id=AZURE_ENGINE_CREDENTIALS["client_id"],
        client_secret=AZURE_ENGINE_CREDENTIALS["client_secret"],
        tenant=AZURE_ENGINE_CREDENTIALS["tenant"]
    ),
    background_mode=False
).result
Amazon SageMaker
To add the Amazon SageMaker machine learning engine, run the following command:
SAGEMAKER_ENGINE_CREDENTIALS = {
    "access_key_id": "",
    "secret_access_key": "",
    "region": ""
}

wos_client.service_providers.add(
    name="AWS",
    description="AWS Service Provider",
    service_type=ServiceTypes.AMAZON_SAGEMAKER,
    credentials=SageMakerCredentials(
        access_key_id=SAGEMAKER_ENGINE_CREDENTIALS["access_key_id"],
        secret_access_key=SAGEMAKER_ENGINE_CREDENTIALS["secret_access_key"],
        region=SAGEMAKER_ENGINE_CREDENTIALS["region"]
    ),
    background_mode=False
).result
Microsoft Azure ML Service
To add the Azure ML Service machine learning engine, run the following command:
service_type = "azure_machine_learning_service"

added_service_provider_result = wos_client.service_providers.add(
    name=SERVICE_PROVIDER_NAME,
    description=SERVICE_PROVIDER_DESCRIPTION,
    service_type=service_type,
    credentials=AzureCredentials(
        subscription_id=AZURE_ENGINE_CREDENTIALS["subscription_id"],
        client_id=AZURE_ENGINE_CREDENTIALS["client_id"],
        client_secret=AZURE_ENGINE_CREDENTIALS["client_secret"],
        tenant=AZURE_ENGINE_CREDENTIALS["tenant"]
    ),
    background_mode=False
).result
Producing a list of machine learning providers
To view a list of all the bindings, run the list method:
wos_client.service_providers.list()
| uid | name | service_type | created |
|---|---|---|---|
| e88ms###-####-####-############ | My Azure ML Service engine | azure_machine_learning | 2019-04-04T09:50:33.189Z |
| e88sl###-####-####-############ | My Azure ML Studio engine | azure_machine_learning | 2019-04-04T09:50:33.186Z |
| e00sjl###-####-####-############ | WML instance | watson_machine_learning | 2019-03-04T09:50:33.338Z |
| e43kl###-####-####-############ | My AWS SageMaker engine | sagemaker_machine_learning | 2019-04-04T09:50:33.186Z |
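The list output can also be consumed programmatically. As a purely local illustration (the dicts below only mimic the rows in the table above; they are not the SDK's response objects), you might filter the bindings by service_type like this:

```python
# Purely local illustration: these dicts mimic the rows shown in the table
# above; they are not the actual SDK response objects.
providers = [
    {"uid": "e88ms###", "name": "My Azure ML Service engine", "service_type": "azure_machine_learning"},
    {"uid": "e88sl###", "name": "My Azure ML Studio engine", "service_type": "azure_machine_learning"},
    {"uid": "e00sjl###", "name": "WML instance", "service_type": "watson_machine_learning"},
    {"uid": "e43kl###", "name": "My AWS SageMaker engine", "service_type": "sagemaker_machine_learning"},
]

def ids_for(service_type, providers):
    """Return the uid of every binding with the given service_type."""
    return [p["uid"] for p in providers if p["service_type"] == service_type]

print(ids_for("azure_machine_learning", providers))  # ['e88ms###', 'e88sl###']
```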
For information about specific machine learning engines, see the following topics:
- Add your custom machine learning engine
- Add your Microsoft Azure machine learning studio engine
- Add your Microsoft Azure machine learning service engine
- Add your Amazon SageMaker machine learning engine
For a working example, see the Watson OpenScale sample notebooks.
When configuring machine learning providers in Watson OpenScale, what is the difference between pre-production and production subscriptions?
Before a model is put into production, a model validator typically configures and validates it against a pre-production service provider. Watson OpenScale supports exactly this workflow: you can configure a machine learning provider as pre-production, run all the risk evaluations, and then, when the model evaluation meets your quality standards, promote the model to production usage.
In a pre-production environment that uses Watson OpenScale, after a model is evaluated for risk and approved for usage, do I need to reconfigure all the monitors again in the production environment?
No. Watson OpenScale provides a way to copy the configuration of a pre-production subscription to a production subscription.
In Watson OpenScale, can I compare my model deployments in pre-production with a benchmark model to see how well they perform?
Yes. Watson OpenScale provides the option to compare two model deployments or subscriptions, showing a side-by-side comparison of how the two models behave on each of the configured monitors. To compare, go to the model summary page on the Watson OpenScale dashboard and select Actions -> Compare.
Next steps
- Watson OpenScale is now ready for you to add deployments to your dashboard and configure monitors.
- View the API Reference material.