Get started with Watson Machine Learning Accelerator
As a data scientist, get started with IBM Watson® Machine Learning Accelerator.
Start using Watson Machine Learning Accelerator in the following ways:
Accessing data to be used by Watson Machine Learning Accelerator
To access data to be used by Watson Machine Learning Accelerator, see: .
Start training with Watson Machine Learning Accelerator
Train your data by using the following Watson Machine Learning Accelerator methods:
- Use the WML Accelerator REST APIs to train your data.
- To access the WML Accelerator console, see Access the console.
- Use the WML Accelerator command line interface (CLI). To download the WML Accelerator CLI, from the WML Accelerator console, navigate to
- Log in to the tool, for example:
python3 dlicmd.py --logon --rest-host wmla-console-wmla.host.ibm.com --username admin --password password
- Start using the dlicmd tool. For usage information, run:
python3 dlicmd.py --help
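The REST API option above can be sketched in Python. This is a minimal illustration only: the endpoint path and payload fields below are assumptions, so check the REST API reference from your cluster's console for the exact contract.

```python
# Hypothetical sketch: submitting a training job to the WML Accelerator
# REST API. The endpoint path and payload fields are illustrative
# assumptions -- consult your cluster's REST API reference.
import json
import urllib.request

def build_training_request(host, token, payload):
    """Assemble an HTTPS POST request for an assumed training endpoint."""
    url = f"https://{host}/platform/rest/deeplearning/v1/execs"  # assumed path
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",  # token from your logon step
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_training_request(
    "wmla-console-wmla.host.ibm.com",
    "example-token",
    {"framework": "PyTorch", "numWorker": 1},  # example fields, not a schema
)
# urllib.request.urlopen(req)  # uncomment only against a live cluster
```

Building the request separately from sending it keeps the example runnable without cluster access.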
Start training with WML
If you have the WML service installed and connected to the WML Accelerator service, you can run training from WML and monitor it from WML Accelerator. To connect the WML service, see: Connecting Watson Machine Learning Accelerator to Watson Machine Learning.
Train your data with WML by using the following:
- Watson Studio Experiment Builder
- WML API, see https://cloud.ibm.com/apidocs/machine-learning.
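As a rough sketch of the WML API route, the snippet below assembles a request for starting a training run. The payload fields shown are a minimal illustration and the version date is an assumption; consult the API documentation linked above for the full training-definition schema and authentication flow.

```python
# Sketch of starting a training run through the Watson Machine Learning
# REST API. Payload fields and version parameter are illustrative
# assumptions -- see https://cloud.ibm.com/apidocs/machine-learning.
import json
import urllib.request

WML_URL = "https://us-south.ml.cloud.ibm.com"  # example service endpoint

def training_payload(name, experiment_id, space_id):
    """Minimal example body for a trainings request (fields assumed)."""
    return {
        "name": name,
        "experiment": {"id": experiment_id},
        "space_id": space_id,
    }

body = training_payload("mnist-run", "experiment-123", "space-456")
req = urllib.request.Request(
    f"{WML_URL}/ml/v4/trainings?version=2020-09-01",  # assumed version date
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": "Bearer <token>",  # requires a valid IAM token
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # sends the request; needs real credentials
```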
Start using notebooks
Get started using Watson Machine Learning Accelerator notebooks in Cloud Pak for Data. See: Working with Watson Machine Learning Accelerator notebooks in IBM Cloud Pak for Data
Monitor training
After you submit training, log in to the WML Accelerator console to view your training progress.
- To view the progress of your application, see: View application details
- To monitor the resources that you have used, see: Monitor resource usage
Deploy models
Deploy trained models as an inference service.
- Use the elastic distributed inference command line interface (CLI) to publish and start running the trained model as a service. To download the elastic distributed inference CLI, from the WML Accelerator console, navigate to
- To get started, add dlim to the PATH environment variable on your system.
- Configure dlim:
dlim config -c https://wmla-console-WML-Accelerator_namespace.router_canonical_hostname/dlim/v1/
- Save your login token by entering your username and password:
dlim config -t -u <username> -x <password>
- Run the dlim help command:
dlim --help
- Use the WML Accelerator REST APIs for inference. The REST APIs for inference are available from the WML Accelerator console. To access the console, see Access the console.
- View all models running as a service from the page.
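After a model is running as a service, clients typically score it over REST. The service URL and payload shape below are assumptions for illustration; use the endpoint that your deployment reports through the console or the dlim CLI.

```python
# Hypothetical sketch: sending a scoring request to a running inference
# service. URL and body shape are assumptions, not the documented API.
import json
import urllib.request

def build_score_request(service_url, instances):
    """Build a JSON scoring request for a hypothetical inference service."""
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        service_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_score_request(
    "https://wmla-inference.example.com/dlim/v1/inference/my-model",  # example URL
    [[0.1, 0.2, 0.3]],  # example input batch
)
# response = urllib.request.urlopen(req)  # live deployment only
```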
Troubleshooting
To troubleshoot problems in your jobs, see application logs.