Importing models to a deployment space

Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks supported for importing models.

In this context, importing a trained model means the following:

  1. Store the trained model in your Watson Machine Learning repository
  2. [Optional] Deploy the stored model in your Watson Machine Learning service

Supported import formats

You can import models of the following types: PMML, Spark MLlib, scikit-learn, XGBoost, TensorFlow, and PyTorch. Depending on the model type, the following import options are available:

  • Importing a model using UI
  • Importing a model object
  • Importing a model using path to file
  • Importing a model using path to directory

For an example of how to add a model programmatically using the Python client, refer to this notebook:

For an example of how to add a model programmatically using the REST API, refer to this notebook:

Refer to these sections for extra information regarding importing models:

Importing a PMML model

Importing a Spark MLlib model

Importing a scikit-learn model

Importing an XGBoost model

Importing a TensorFlow model

Importing a PyTorch model

Importing a model using UI

Follow these steps to import a model using UI:

Step 1: Store the model

  1. From the Assets tab of your project in Watson Studio, in the Models section, click New model.
  2. In the page that opens, fill in the basic fields:
    • Specify a name for your new model.
    • Confirm that the Watson Machine Learning service instance that you associated with your project is selected in the Machine Learning Service section.
  3. Click the radio button labeled From file, and then upload your PMML file.
  4. Click Create to store the model in your Watson Machine Learning repository.

Step 2: Deploy the model

After the model is stored from the model builder interface, the model details page for your stored model opens automatically.

Deploy your stored model from the model details page by performing the following steps:

  1. Click the Deployments tab.
  2. Click Add deployment.
  3. Give the deployment a name, and then click Save.

Importing a model object

Follow these instructions to import a model object:

  1. If your model is located in a remote location, download it by following Downloading a model stored in a remote location, and then de-serialize it by following De-serializing models.

  2. Store the model object in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.

Importing a model using path to file

Follow these steps to import a model using a path to a file:

  1. If your model is located in a remote location, follow Downloading a model stored in a remote location to download it.

  2. If your model is located locally, place it in a specific directory:

    !cp <saved model> <target directory>
    %cd <target directory>
    
  3. For scikit-learn, XGBoost, TensorFlow, and PyTorch models, if the downloaded file is not a .tar.gz archive, create one:

    !tar -zcvf <saved model>.tar.gz <saved model>
    

    The model file must be at the top level of the directory, for example:

    assets/
    <saved model>
    variables/
    variables/variables.data-00000-of-00001
    variables/variables.index
    
  4. Use the path to the saved file to store the model file in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.
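The copy-and-archive steps above can also be scripted with the Python standard library instead of shell commands. The following is a minimal sketch; the helper name and paths are placeholders, not part of the Watson Machine Learning client:

```python
import os
import tarfile

def archive_model(model_path: str) -> str:
    """Package a saved model file into a .tar.gz archive, keeping the
    model file at the top level of the archive, as Watson Machine
    Learning expects. `model_path` is whatever your framework produced."""
    archive_path = model_path + ".tar.gz"
    with tarfile.open(archive_path, "w:gz") as tar:
        # arcname strips any leading directories so the file sits at the top level
        tar.add(model_path, arcname=os.path.basename(model_path))
    return archive_path
```

Using arcname avoids the common mistake of archiving the file together with its parent directories, which would nest the model below the top level of the archive.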

Importing a model using path to directory

Follow these steps to import a model using a path to a directory:

  1. If your model is located in a remote location, refer to Downloading a model stored in a remote location.
  2. If your model is located locally, place it in a specific directory:

    !cp <saved model> <target directory>
    %cd <target directory>
    

    For scikit-learn, XGBoost, TensorFlow, and PyTorch models, the model file must be at the top level of the directory, for example:

    assets/
    <saved model>
    variables/
    variables/variables.data-00000-of-00001
    variables/variables.index
    
  3. Use the directory path to store the model file in your Watson Machine Learning repository. For details, refer to Storing a model in your Watson Machine Learning repository.
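Because the import fails if the model file is nested below the top level of the directory, a quick sanity check before storing can save a round trip. The following is a small sketch; the helper name is ours, not part of any library:

```python
import os

def model_at_top_level(directory: str, model_filename: str) -> bool:
    """Return True if model_filename sits directly inside `directory`,
    rather than nested in a subdirectory such as variables/."""
    return model_filename in os.listdir(directory) and \
        os.path.isfile(os.path.join(directory, model_filename))
```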

Downloading a model stored in a remote location

Use this sample code to download your model from a remote location:

import os
from wget import download

# Create the target directory if it does not already exist
target_dir = '<target directory name>'
if not os.path.isdir(target_dir):
    os.mkdir(target_dir)

# Download the model only if it is not already present locally
filename = os.path.join(target_dir, '<model name>')
if not os.path.isfile(filename):
    filename = download('<url to model>', out=target_dir)
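If you prefer not to install the wget package, the same download can be done with only the standard library. The following is a sketch under that assumption; the function name is ours, and it assumes the URL ends with the model's file name:

```python
import os
from urllib.request import urlretrieve

def download_model(url: str, target_dir: str) -> str:
    """Download the model at `url` into `target_dir`, skipping the
    download if the file is already present. Returns the local path."""
    os.makedirs(target_dir, exist_ok=True)
    filename = os.path.join(target_dir, os.path.basename(url))
    if not os.path.isfile(filename):
        urlretrieve(url, filename)
    return filename
```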

Storing a model in your Watson Machine Learning repository

Use this code to store your model in your Watson Machine Learning repository:

from ibm_watson_machine_learning import APIClient

client = APIClient(<your credentials>)

# Look up the UID of the software specification that matches your model's framework
sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>")

meta_props = {
    client.repository.ModelMetaNames.NAME: "<your model name>",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "<model type>"}

client.repository.store_model(model=<your model>, meta_props=meta_props)
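The storage call can be combined with a programmatic equivalent of the UI deployment step. The following is a sketch, not a definitive recipe: it assumes an authenticated APIClient, and the `get_model_id` and `ConfigurationMetaNames.ONLINE` usage reflects the ibm_watson_machine_learning client, so verify the names against your installed client version. The wrapper function name is ours:

```python
def store_and_deploy(client, model, model_name, sw_spec_name, model_type):
    """Store a model in the Watson Machine Learning repository, then
    create an online deployment for it. `client` is an authenticated
    ibm_watson_machine_learning APIClient."""
    sw_spec_uid = client.software_specifications.get_uid_by_name(sw_spec_name)
    meta_props = {
        client.repository.ModelMetaNames.NAME: model_name,
        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
        client.repository.ModelMetaNames.TYPE: model_type,
    }
    # Store the model and extract its ID from the returned details
    model_details = client.repository.store_model(model=model, meta_props=meta_props)
    model_uid = client.repository.get_model_id(model_details)

    # Create an online (web service) deployment for the stored model
    deploy_props = {
        client.deployments.ConfigurationMetaNames.NAME: model_name + " deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    }
    return client.deployments.create(artifact_uid=model_uid, meta_props=deploy_props)
```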


De-serializing models

To de-serialize models, follow the instructions in the section for your framework:

De-serializing scikit-learn and XGBoost models

Use this code to de-serialize your scikit-learn or XGBoost model:

  import joblib

  <your_model> = joblib.load("<saved model>")
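As a concrete illustration, joblib round-trips any picklable Python object, so the same dump/load pattern that applies to a fitted estimator can be sketched with a stand-in object (the dict below is only a placeholder for a real model):

```python
import os
import tempfile

import joblib

# Stand-in for a fitted scikit-learn or XGBoost model; joblib handles
# any picklable object the same way
model = {"coef": [0.5, -1.2], "intercept": 0.1}

path = os.path.join(tempfile.mkdtemp(), "saved_model.pkl")
joblib.dump(model, path)        # serialize to disk
restored = joblib.load(path)    # de-serialize
```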

De-serializing Spark MLlib models

Use this code to de-serialize your Spark MLlib model:

  from pyspark.ml import PipelineModel

  <your model> = PipelineModel.load("<saved model>")

Learn more

Parent topic: Deployment spaces