Elastic distributed inference

Inference is available from IBM Spectrum Conductor Deep Learning Impact either as a service, using elastic distributed inference, or as a one-time test.

Elastic distributed inference is a secure, robust, and scalable inference service that exposes a REST API for IBM Spectrum Conductor Deep Learning Impact users to publish and manage inference services, for REST clients to consume those services, and for administrators to manage the service. An inference service can be used for inference by any authorized client.
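Because the service is consumed over a REST API, any HTTP-capable client can score against a published model. The sketch below only illustrates the general pattern of such a client; the base URL, endpoint path, and payload shape shown here are hypothetical placeholders, not the product's actual API, which is defined in the product's REST API reference.

```python
import json
from urllib import request

# Hypothetical base URL; the real host, port, and path come from your
# IBM Spectrum Conductor Deep Learning Impact deployment and its
# documented REST API -- these values are illustrative only.
BASE_URL = "https://edi-host:9000/dlim/v1"

def build_inference_request(model_name, sample):
    """Construct (but do not send) an HTTPS scoring request for a
    published inference service. The JSON payload shape is assumed."""
    body = json.dumps({"data": sample}).encode("utf-8")
    return request.Request(
        url=f"{BASE_URL}/inference/{model_name}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A client would pass this object to urllib.request.urlopen()
# (with appropriate authentication) to obtain a prediction.
req = build_inference_request("resnet18", [[0.1, 0.2, 0.3]])
print(req.full_url)
```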

Figure 1. Elastic distributed inference

The elastic distributed inference feature can host models for all consumers (or for each line of business). Models are developed and published by developers for a specific consumer. Published models can run as an inference service. Inference services can be started and stopped.

If elastic distributed inference is not enabled during installation, only the test functionality is available. To create an inference model for testing purposes, see Create an inference model. If elastic distributed inference is enabled, you can publish existing test inference models as inference services; see Publish an inference model as a service.