Compare model versions in a model group
A model group is a collection of up to 10 machine learning model versions that can be deployed to production at the same time using the same scoring endpoint. Over time, you can evaluate the model versions against each other to determine which ones to keep in your production environment.
- Create a model group
- Compare models in a model group
- Deploy a model group as a web service group
- Example curl requests and responses
Create a model group
To create a group of model versions, complete the following steps:
- On your project assets page, click Model groups, and then click Add Model Group.
- Name and describe your model group. Click Next.
- Select the leader. The leader represents the default model version for any REST API requests of the model group.
- Add up to nine model versions to the model group. Note that they must match the leader's algorithm and label columns.
- Review the model group and click Create. The new model group is created.
From the Overview tab of a model group, you can add or remove models. You can also click Set as leader next to a model to make it the new leader of the group.
In the model group overview, click Generate New Evaluation Scripts to create one new evaluation script for every model version in the group at once. Note that the new evaluation scripts dissociate the currently associated evaluation scripts from every model version in the group. Select an input data set for the model group, then click Generate.
Compare models in a model group
In the model group, click the Evaluations tab to compare the evaluation history of all of the model versions in the group, either by metric or by date. To remove a model version from the visualization, clear its check box.
Deploy a model group as a web service group
In the Watson Machine Learning client, you can deploy a model group as a web service group. One or more model versions within the group can then be scored using the same endpoint.
In the Edit settings page of the web service group deployment, you can select which model versions are enabled and which one acts as the leader. You can also enable the following routing configurations:
- Leader
- Routes API requests to the leader of the group. The Routing-Option header in the API request must be set to leader. This is the default.
- Specific model
- Routes API requests to a specific model version in the group. The Routing-Option header in the API request must be set to specific.
- Random model
- Routes API requests to a randomly selected model version in the group. The Routing-Option header in the API request must be set to random.
- All models
- Routes API requests to every enabled model version in the group. The Routing-Option header in the API request must be set to all.
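The routing options above can be sketched as a small helper that builds the request headers and payload for each mode. This is a minimal Python sketch, not part of the product API: the header names and payload shape are taken from the curl examples later in this topic, while the helper function itself and its argument names are illustrative.

```python
import json

def build_scoring_request(routing, model_name=None, model_version=None):
    """Build headers and payload for a model group scoring request.

    routing must be one of: "leader", "specific", "random", "all".
    Specific routing also requires the Model-Name and Model-Version headers.
    """
    if routing not in ("leader", "specific", "random", "all"):
        raise ValueError(f"unknown routing option: {routing}")

    headers = {
        "Routing-Option": routing,
        "Authorization": "<bearer_token>",  # placeholder token, as in the curl examples
        "Content-Type": "application/json",
    }
    if routing == "specific":
        if model_name is None or model_version is None:
            raise ValueError("specific routing requires a model name and version")
        headers["Model-Name"] = model_name
        headers["Model-Version"] = model_version

    # Payload shape taken from the curl examples in this topic.
    payload = json.dumps({"args": {"input_json": [{"income": 2000, "sex": "M"}]}})
    return headers, payload

# Example: headers for scoring version 1 of c5model directly.
headers, payload = build_scoring_request("specific", "c5model", "1")
```

The same headers and payload can then be sent to the group's scoring endpoint with any HTTP client, exactly as the curl examples below do.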
You can also create evaluation jobs for any or all of the model versions within the group.
Example curl requests and responses
Example curl command to score the specific model c5model, version 1:
curl -k -X POST \
https://9.87.654.321/dmodelgroup/v1/prodrelease/testdeployment/score \
-H 'Routing-Option: specific' \
-H 'Model-Name: c5model' \
-H 'Model-Version: 1' \
-H 'Authorization: <bearer_token>' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{"args":{"input_json":[{"income":2000,"sex":"M"}]}}'
Example curl command to score all model versions within the group:
curl -k -X POST \
https://9.87.654.321/dmodelgroup/v1/prodrelease/testdeployment/score \
-H 'Routing-Option: all' \
-H 'Authorization: <bearer_token>' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{"args":{"input_json":[{"income":2000,"sex":"M"}]}}'
Example output, which lists objects that correspond to scoring output from all model versions in the group. Each model can be identified by the assetMetadata field in the output:
[
{
"assetMetadata": {
"type": "model",
"name": "c5model",
"version": "1",
"isLeader": true
},
"result": {
"additional_fields": {
"fields": [
"$RI-beer_beans_pizza",
"$RC-beer_beans_pizza",
"$RP-beer_beans_pizza"
],
"values": [
[
"3",
0.9828571428571429,
0.9884393063583815
]
]
},
"classes": [
"F",
"T"
],
"predictions": [
"F"
],
"probabilities": [
[
0.9884393063583815,
0.011560693641618497
]
]
},
"stderr": [],
"stdout": []
},
{
"assetMetadata": {
"type": "model",
"name": "c5modelimport",
"version": "1",
"isLeader": false
},
"result": {
"additional_fields": {
"fields": [
"$RI-beer_beans_pizza",
"$RC-beer_beans_pizza",
"$RP-beer_beans_pizza"
],
"values": [
[
"3",
0.9828571428571429,
0.9884393063583815
]
]
},
"classes": [
"F",
"T"
],
"predictions": [
"F"
],
"probabilities": [
[
0.9884393063583815,
0.011560693641618497
]
]
},
"stderr": [],
"stdout": []
},
{
"assetMetadata": {
"type": "model",
"name": "chaidmodel",
"version": "1",
"isLeader": false
},
"result": {
"additional_fields": {
"fields": [
"$RI-beer_beans_pizza",
"$RC-beer_beans_pizza",
"$RP-beer_beans_pizza"
],
"values": [
[
"4",
0.987341772151899,
0.987341772151899
]
]
},
"classes": [
"F",
"T"
],
"predictions": [
"F"
],
"probabilities": [
[
0.987341772151899,
0.0126582278481013
]
]
},
"stderr": [],
"stdout": []
}
]
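When scoring with all routing, the caller typically needs to pick out one result, for example the leader's. The following is a minimal Python sketch of parsing a response like the one above; the response text here is an abbreviated copy of the example, and the isLeader flag comes from the assetMetadata field:

```python
import json

# Abbreviated version of the "all models" response shown above.
response_text = """
[
  {"assetMetadata": {"type": "model", "name": "c5model", "version": "1", "isLeader": true},
   "result": {"predictions": ["F"], "probabilities": [[0.9884393063583815, 0.011560693641618497]]}},
  {"assetMetadata": {"type": "model", "name": "chaidmodel", "version": "1", "isLeader": false},
   "result": {"predictions": ["F"], "probabilities": [[0.987341772151899, 0.0126582278481013]]}}
]
"""

results = json.loads(response_text)

# Index the per-model results by model name, and pull out the leader's entry.
by_name = {r["assetMetadata"]["name"]: r["result"] for r in results}
leader = next(r for r in results if r["assetMetadata"]["isLeader"])

print(leader["assetMetadata"]["name"], leader["result"]["predictions"])
# prints: c5model ['F']
```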
Example output which lists a single object containing the scoring output from the leader model version:
[
{
"assetMetadata": {
"type": "model",
"name": "c5model",
"version": "1",
"isLeader": true
},
"result": {
"additional_fields": {
"fields": [
"$RI-beer_beans_pizza",
"$RC-beer_beans_pizza",
"$RP-beer_beans_pizza"
],
"values": [
[
"3",
0.9828571428571429,
0.9884393063583815
]
]
},
"classes": [
"F",
"T"
],
"predictions": [
"F"
],
"probabilities": [
[
0.9884393063583815,
0.011560693641618497
]
]
},
"stderr": [],
"stdout": []
}
]