Deploying machine learning models in notebooks (Watson Machine Learning)

This topic describes some techniques for deploying trained machine learning models from notebooks using the IBM Watson Machine Learning Python client library as an alternative to using one of the Watson Studio tools.

 

Training and deploying machine learning models notebook

If you choose to build a machine learning model in a notebook, you should be comfortable with coding in a Jupyter notebook. A Jupyter notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data and immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the model to the Watson Machine Learning repository, and deploy it.

For details on using a notebook editor, see Notebooks.

One of the most popular ways to build a machine learning model in a notebook is with the Python client. The Watson Machine Learning Python client is a library for working with the Watson Machine Learning service. Use it in a notebook to train, test, and deploy your models as APIs for application development, and then share them with colleagues.

Watson Machine Learning Python client library reference

You can access a reference to all of the Python commands for Watson Machine Learning here: Watson Machine Learning Python client library.

Watson Machine Learning REST API reference

For complete documentation and examples for the REST API methods, see IBM Watson Machine Learning API documentation.

Learn from sample notebooks

Because there are many ways to build, train, and deploy models, the best way to learn is to follow annotated samples that step you through the process with different frameworks. For details, see the sample notebooks.

Updating the Python client library

The Watson Machine Learning Python client library is installed by default when you create a notebook.

Optionally, to make sure you have the latest supported features of the Python client, uninstall the default version and install the latest one. Conversely, your notebook might depend on features in a particular version of the Python client; in that case, uninstall the default client library and reinstall that specific version. Follow these steps to install either the latest version or a particular version of the Python client:

  1. First, uninstall the existing Python client:

    !pip uninstall --yes ibm-watson-machine-learning
    
  2. Then, install the latest client:

    !pip install ibm-watson-machine-learning
    

    Or, install a specific client version:

    !pip install ibm-watson-machine-learning==<version number>
    
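To confirm which version of the client is active in your notebook, check the installed package with pip (you might need to restart the notebook kernel after reinstalling so that the new version is picked up):

    !pip show ibm-watson-machine-learning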

Using variables in a notebook

There are several pre-defined environment variables in Watson Studio that make it easier to call the Watson Machine Learning Python client APIs.

  • USER_ACCESS_TOKEN: The access token for authenticating the current user in Watson Machine Learning API calls.
  • PROJECT_ID: The GUID of the Watson Studio project where your environment is running.
  • SPACE_ID: The GUID of the deployment space that is associated with the current Watson Studio project.

Note: SPACE_ID can be undefined in an environment that was started before the deployment space was associated with the current project. If the value is missing, restart the environment.
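For example, a minimal sketch of reading these variables in a notebook cell with the standard os module; SPACE_ID is read with a fallback because it can be undefined, as noted above:

    import os

    # Values are injected into the notebook environment by Watson Studio
    token = os.environ['USER_ACCESS_TOKEN']   # can be used to authenticate WML API calls
    project_id = os.environ['PROJECT_ID']
    space_id = os.environ.get('SPACE_ID')     # can be None; restart the environment if so

    print(project_id, space_id)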

Saving a model to the repository

  1. Add a notebook to your project by clicking Add to project and selecting Notebook.

  2. Authenticate with the Python client, following the instructions in Authentication.

  3. Initialize the client with your credentials (an example credentials dictionary is sketched after these steps):

     from ibm_watson_machine_learning import APIClient
     client = APIClient(wml_credentials)
    
  4. (Optional) Create a new deployment space. To use an existing deployment space, skip this step and enter the name of that space in step 6.

     metadata = {
         client.spaces.ConfigurationMetaNames.NAME: 'YOUR DEPLOYMENT SPACE NAME',
         client.spaces.ConfigurationMetaNames.DESCRIPTION: 'YOUR DESCRIPTION',
         client.spaces.ConfigurationMetaNames.STORAGE: {
             "type": "bmcos_object_storage",
             "resource_crn": 'PROVIDE COS RESOURCE CRN'
         },
         client.spaces.ConfigurationMetaNames.COMPUTE: {
             "name": 'INSTANCE NAME',
             "crn": 'PROVIDE THE INSTANCE CRN'
         }
     }

     space_details = client.spaces.store(meta_props=metadata)
    
  5. Get the ID for the deployment space:

     def guid_from_space_name(client, space_name):
         space = client.spaces.get_details()
         return next(
             item for item in space['resources']
             if item['entity']['name'] == space_name
         )['metadata']['guid']
    
  6. Call the function with the name of your deployment space in place of 'YOUR DEPLOYMENT SPACE':
     space_uid = guid_from_space_name(client, 'YOUR DEPLOYMENT SPACE')
     print("Space UID = " + space_uid)
    

    Out: Space UID = b8eb6ec0-dcc7-425c-8280-30a1d7a9c58a

  7. Set the default deployment space to work in:

     client.set.default_space(space_uid)
    
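The wml_credentials dictionary that is passed to APIClient in step 3 is not shown above. Its contents depend on how you authenticate (see Authentication); as a rough sketch, assuming IBM Cloud IAM API key authentication with a placeholder endpoint URL, it could look like this:

    # Sketch only: the field values are placeholders; see Authentication for the
    # options that apply to your deployment.
    wml_credentials = {
        "apikey": "YOUR_IBM_CLOUD_API_KEY",
        "url": "https://us-south.ml.cloud.ibm.com"   # endpoint for your region
    }

    from ibm_watson_machine_learning import APIClient
    client = APIClient(wml_credentials)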

Get the software specification

Your model requires a software specification to run.

  1. To view the list of predefined specifications:

     client.software_specifications.list()
    
  2. Find the ID of the software specification environment that the model will use:

     software_spec_id =  client.software_specifications.get_id_by_name('spss-modeler_18.1')
     print(software_spec_id)
    

Store the model

  1. Store the trained model in the repository and get the model ID. To do so, enter the absolute path of the trained model file, along with the model name, model type, and software specification ID. Note that the model name cannot contain characters such as [ ] { } | \ " % ~ # < > that conflict with forming a valid HTTP request.

     model_details = client.repository.store_model(model="<Trained Model file>",meta_props={
     client.repository.ModelMetaNames.NAME:"<Model Name>",
     client.repository.ModelMetaNames.TYPE:"<model type>",
     client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:software_spec_id }
                                              )
     model_id = client.repository.get_model_uid(model_details)
    

    For example, a trained SPSS model might have metadata like this:

     model_details = client.repository.store_model(model="example.com/my_spss_model",meta_props={
     client.repository.ModelMetaNames.NAME:"my_spss_model",
     client.repository.ModelMetaNames.TYPE:"spss-modeler_18.1",
     client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:software_spec_id }
                                              )
     model_id = client.repository.get_model_id(model_details)
    
  2. Print the model ID:

     print(model_id)
    

    Out: 8a8e68a6-038c-4e13-90d6-729bee9a99cd
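To confirm that the model was saved, you can list the models in the repository:

    # Lists the models stored in the current default space
    client.repository.list_models()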

After you store a model in the Watson Machine Learning repository, you can view the model asset from the project, promote it to a deployment space, and create deployments for the model from the space. For details, see Deploying assets.
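If you prefer to stay in the notebook, you can also create an online deployment for the stored model directly with the Python client. A minimal sketch, assuming the default deployment space set earlier and a placeholder deployment name:

    # Sketch only: the deployment name is a placeholder, and the default space
    # must already be set with client.set.default_space(space_uid)
    deployment_props = {
        client.deployments.ConfigurationMetaNames.NAME: "my model deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
    deployment_details = client.deployments.create(
        artifact_uid=model_id,
        meta_props=deployment_props
    )
    print(client.deployments.get_uid(deployment_details))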

Next steps

  • Learn how to deploy a trained model from the deployment space.
  • View details on managing data in a notebook.