Download and configure the elastic distributed inference (dlim) command line tool
Use elastic distributed inference from the command line interface with the dlim command.
The dlim command lists the inference services and shows their details, creates and removes inference services, starts and stops inference services, and lists the available runtime environments.
To use the dlim command, you must do the following:
- Log in to the WML Accelerator console.
- Download the elastic distributed inference CLI tool. Navigate to the dlim tool in the console to download it.
- Add dlim to PATH, for example:

    PATH=$PATH:/usr/local/bin/dlim

  where /usr/local/bin/dlim is the location of the dlim tool.
- Configure dlim:

    dlim config -c https://wmla-console/dlim/v1/

  where wmla-console is the location of your WML Accelerator console. For example:

    dlim config -c https://wmla-console-wmlauser.apps.ibm.com/dlim/v1/
- Save your token for inference:

    dlim config -t -u username -x password

  where username and password are your login credentials.
- To see which subcommands are available, run the dlim --help command:

    dlim --help
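The steps above can be collected into a small helper script. This is a sketch, not part of the product: the CONSOLE host and DLIM_USER value are placeholder assumptions to replace with your own, and DRY_RUN defaults to printing each dlim command so the sequence can be reviewed before running it for real (set DRY_RUN=0 to execute).

```shell
#!/bin/sh
# Hypothetical setup helper (not shipped with WML Accelerator): runs the
# documented dlim setup steps in order. CONSOLE and DLIM_USER are
# placeholders -- substitute your console host and credentials.
set -eu
CONSOLE="${CONSOLE:-wmla-console-wmlauser.apps.ibm.com}"
DLIM_USER="${DLIM_USER:-admin}"
DRY_RUN="${DRY_RUN:-1}"   # 1 = print commands only; 0 = execute them

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"             # show the command without executing it
  else
    "$@"
  fi
}

PATH="$PATH:/usr/local/bin/dlim"   # location of the dlim tool
run dlim config -c "https://${CONSOLE}/dlim/v1/"
run dlim config -t -u "$DLIM_USER" -x "${DLIM_PASSWORD:-}"
run dlim model list
```

In dry-run mode the script only echoes the commands, so it can be inspected safely on a machine that does not yet have dlim installed.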
Examples
Log in to see the available deployed models:

    PATH=$PATH:/usr/local/bin/dlim
    dlim config -c https://wmla-console-wmlauser.apps.ibm.com/dlim/v1/
    dlim config -s -u admin -x password
    dlim model list
Return the inference service token:

    dlim config -s [-u username -x password]
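The token returned above can also be used programmatically against the same /dlim/v1/ base URL that the CLI was configured with. The sketch below is an assumption-laden illustration: capturing the token via `dlim config -s` matches the example above, but the Bearer authorization scheme (and any endpoint paths beyond the base URL) are assumptions, not documented behavior.

```python
# Sketch: reuse the dlim inference token in direct REST calls.
# The Bearer scheme below is an assumption -- verify against the
# WML Accelerator REST documentation for your release.
import subprocess

DLIM_BASE = "https://wmla-console-wmlauser.apps.ibm.com/dlim/v1/"  # placeholder host

def get_token() -> str:
    """Run `dlim config -s` and return the token it prints."""
    result = subprocess.run(
        ["dlim", "config", "-s"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

def auth_header(token: str) -> dict:
    """Build an Authorization header from the saved token (Bearer assumed)."""
    return {"Authorization": f"Bearer {token}"}
```

A client would then pass `auth_header(get_token())` as the request headers when calling the service.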