Get started with Watson Machine Learning Accelerator

As a data scientist, get started with IBM Watson® Machine Learning Accelerator.

Accessing data to be used by Watson Machine Learning Accelerator

Start training with Watson Machine Learning Accelerator

Train your data using Watson Machine Learning Accelerator methods:
  • Use the WML Accelerator REST APIs to train your data.
  • Use the WML Accelerator command line interface (CLI). To download the WML Accelerator CLI, from the WML Accelerator console, navigate to Help > Command Line Tools.
    • Log in to the tool, for example:
      python3 dlicmd.py --logon --rest-host wmla-console-wmla.host.ibm.com --username admin --password password
    • Start using the dlicmd tool. For usage information, run:
      python3 dlicmd.py --help
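The logon step above can also be performed programmatically against the WML Accelerator REST API, which is what dlicmd does under the covers. The sketch below only composes the request; the base path and the /execs route are assumptions, so check Help > API in the WML Accelerator console for the exact routes exposed by your deployment.

```python
# Sketch: build an authenticated request to the WML Accelerator REST API.
# The URL path components are assumptions -- confirm them under
# Help > API in the WML Accelerator console.
import base64
import urllib.request


def build_base_url(rest_host, port=443):
    """Compose the REST base URL (the path component is hypothetical)."""
    return f"https://{rest_host}:{port}/platform/rest/deeplearning/v1"


def build_auth_header(username, password):
    """HTTP basic-auth header, matching the credentials passed to dlicmd --logon."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


def build_list_execs_request(rest_host, username, password):
    """Request object for listing training executions (route is an assumption)."""
    url = f"{build_base_url(rest_host)}/execs"
    return urllib.request.Request(url, headers=build_auth_header(username, password))


if __name__ == "__main__":
    req = build_list_execs_request("wmla-console-wmla.host.ibm.com",
                                   "admin", "password")
    # urllib.request.urlopen(req)  # would perform the call on a live cluster
    print(req.full_url)
```

The request is only constructed, not sent, so the sketch can be adapted to your own host and credentials before opening a connection.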

Start training with WML

If you have the WML service installed and connected to the WML Accelerator service, you can run training from WML and monitor it from WML Accelerator. To connect the WML service, see: Connecting Watson Machine Learning Accelerator to Watson Machine Learning.

Train your data by using WML.

Start using notebooks

Get started using Watson Machine Learning Accelerator notebooks in Cloud Pak for Data. See: Working with Watson Machine Learning Accelerator notebooks in IBM Cloud Pak for Data

Monitor training

After submitting training, log in to the WML Accelerator console to view your training progress.

Deploy models

Deploy trained models as an inference service.
  • Use the elastic distributed inference command line interface (CLI) to publish and run the trained model as a service. To download the elastic distributed inference CLI, from the WML Accelerator console, navigate to Help > Command Line Tools.
    • To get started, add dlim to the PATH environment variable on your system.
    • Configure dlim:
      dlim config -c https://wmla-console-WML-Accelerator_namespace.router_canonical_hostname/dlim/v1/
    • Save your login token by entering your username and password:
      dlim config -t -u <username> -x <password>
    • Run the dlim help command:
      dlim --help
  • Use the WML Accelerator REST APIs for inference. To view the REST APIs for inference, from the WML Accelerator console, navigate to Help > API for Inference. To access the console, see Access the console.
  • View all models running as a service from the Workload > Deployed Models page.
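The URL passed to dlim config above is assembled from your WML Accelerator namespace and the router's canonical hostname. The sketch below rebuilds that URL and composes a scoring request against a deployed model; the /dlim/v1/ base path comes from the configuration step, but the model-scoring route and the JSON payload shape are assumptions, so consult Help > API for Inference in the console for the real routes.

```python
# Sketch: rebuild the elastic distributed inference endpoint used by
# `dlim config -c`, and compose a scoring request for a deployed model.
# The inference/<model> route and payload shape are hypothetical.
import json
import urllib.request


def build_dlim_url(namespace, canonical_hostname):
    """Recreate the URL given to `dlim config -c` from its two parts."""
    return f"https://wmla-console-{namespace}.{canonical_hostname}/dlim/v1/"


def build_inference_request(base_url, model_name, payload):
    """POST request scoring `payload` against a deployed model
    (route and JSON body shape are assumptions)."""
    url = f"{base_url}inference/{model_name}"
    data = json.dumps(payload).encode()
    return urllib.request.Request(url, data=data,
                                  headers={"Content-Type": "application/json"})
```

For example, build_dlim_url("wmla", "apps.example.com") yields the same URL shape shown in the dlim config step, and the returned request can be opened with urllib.request.urlopen once pointed at a live deployment.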

Troubleshooting

To troubleshoot problems in your jobs, see the application logs.