Provisioning Analytics Engine powered by Apache Spark instances

A project administrator can provision one or more instances of Analytics Engine powered by Apache Spark after the service is installed.

You must provision a service instance if you want to submit Spark jobs through the Spark jobs REST API. If you use only Spark environments in Watson Studio, you do not need to provision a service instance.
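As a rough illustration of why an instance is needed, a job submission to the Spark jobs REST API is an authenticated POST with a JSON body that targets a specific instance. The sketch below only prepares such a request; the endpoint path, payload field names, and token handling are assumptions for illustration, not the exact API contract, so check the details page of your provisioned instance for the real jobs endpoint.

```python
# Sketch: preparing a Spark job submission for the Spark jobs REST API.
# The URL layout, payload fields, and credentials below are illustrative
# assumptions -- the actual endpoint is shown on the service instance page.
import json
import urllib.request

CPD_HOST = "https://cpd.example.com"   # assumed cluster URL
INSTANCE_ID = "my-spark-instance"      # assumed instance identifier
ACCESS_TOKEN = "..."                   # obtained from the platform auth API

def build_job_payload(app_path, args):
    """Build a minimal job submission body (field names are assumptions)."""
    return {
        "application_details": {
            "application": app_path,  # e.g. a .py or .jar in the instance volume
            "arguments": args,
        }
    }

def prepare_submit_request(payload):
    """Prepare the authenticated POST request (path is hypothetical)."""
    url = f"{CPD_HOST}/v2/jobs/{INSTANCE_ID}"  # hypothetical path
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

payload = build_job_payload("/myapp/wordcount.py", ["input.txt"])
request = prepare_submit_request(payload)
```

The request is only constructed here, not sent; in practice you would pass it to `urllib.request.urlopen` (or use an HTTP client of your choice) once the instance exists and you hold a valid token.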

To provision an Analytics Engine powered by Apache Spark service instance:

  1. From the Navigation menu on the IBM Cloud Pak for Data web user interface, click Services > Services catalog. Select the Analytics category and locate the Analytics Engine powered by Apache Spark tile. The service must be enabled; if it is not, install it first. See Installing the Analytics Engine powered by Apache Spark service.
  2. Click the tile, and then from the options menu in the upper right of the window, select New instance.
  3. Enter the service instance name. The instance name can contain any alphanumeric character and the special character -.
  4. Optionally add a description and click Next.
  5. Select the instance storage volume type. You can:
    1. Create a new storage volume. Enter the volume name and specify the size in GB. The volume name can contain any alphanumeric character and the special character -. You can select storage volume classes for the storage types that are supported by the version of IBM Cloud Pak for Data that you are running. Ensure that dynamic volume provisioning is enabled. If it is not, the volume is not created dynamically when the claim request is made, and your service instance is not created.
    2. Use an existing storage volume. Select this option if you want to reuse an existing volume or use the volume that was created for existing instances.
  6. Ensure that the summary is correct and click Create.
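Steps 3 and 5 above both restrict names to alphanumeric characters and the hyphen (-). A minimal client-side check of that rule might look like the following; the regex is an illustration of the stated rule, not the exact validation the service performs.

```python
# Sketch: checking the naming rule from the provisioning steps --
# instance and volume names may contain alphanumeric characters
# and the hyphen (-). Illustrative only.
import re

NAME_PATTERN = re.compile(r"[A-Za-z0-9-]+")

def is_valid_name(name):
    """Return True if the name uses only letters, digits, and hyphens."""
    return bool(NAME_PATTERN.fullmatch(name))

print(is_valid_name("spark-instance-1"))  # True
print(is_valid_name("spark_instance"))    # False: underscore not allowed
```

Validating names up front avoids a failed creation attempt after you have already filled in the rest of the form.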

What to do next