Configure basic OpenShift settings
In most cases, the default configuration settings are sufficient for your OpenShift cluster. All default configuration settings can be found in kubernetes/config/defaults.env. If the default configuration settings are not sufficient, you can override any setting in a config/paw.env file.
Other advanced configuration options are also available. For more information, see More basic configuration settings.
Configure the OpenShift project
By default, the start script configures an OpenShift project named paw. If you wish to use a different name, add export PA_KUBE_NAMESPACE=<project> to paw.env.
For example: export PA_KUBE_NAMESPACE=myns
Configure deployment of images
You can configure Planning Analytics Workspace Distributed to copy the docker image files to all worker nodes or employ a private docker registry if one is configured for the OpenShift cluster.
The start script asks whether you want to use a private registry. If you don't want to use a private registry, the start script uses ssh and scp to copy the image archive to all designated worker and storage nodes.
By default, the scripts assume that the same ssh user is used on all nodes. If your cluster uses different ssh users, set SSH_SAME_USER to false in paw.env.
If you want to use a private registry, the docker command is used to push the images to the configured private registry. The start script prompts for the host name and port of the private registry. You are prompted for the registry user name and password when the script runs.
You can use more configuration settings to change the image copy path, tag prefix, and to support unattended installs. For more information, see Advanced image deployment settings.
Configure an ingress controller
By default, an ingress controller is assumed to be configured for the OpenShift deployment. A TLS certificate is generated for the ingress controller to use.
If you have your own key and certificate that you want to use instead, see Advanced ingress controller configuration settings.
Configure storage
The storage services in Planning Analytics Workspace Distributed employ OpenShift persistent volume claims to persist data. Three storage types are supported: local, shared, and dedicated.
The desired storage type is configured by specifying local, shared, or dedicated as the value for PA_KUBE_STORAGE_TYPE in paw.env. If PA_KUBE_STORAGE_TYPE is not specified, the start script prompts you for the desired storage type.
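For example, to select the storage type non-interactively, you can add the following to paw.env (the lowercase spelling matches the values named above):

```shell
# In paw.env: select the storage type so the start script does not prompt
export PA_KUBE_STORAGE_TYPE=local
```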
Configure local storage
By default, Planning Analytics Workspace Distributed uses the local storage type. The local storage type uses OpenShift local persistent storage that is pinned to three worker nodes, as specified by the PA_KUBE_STORAGE_NODES environment variable in paw.env.
The storage containers are also pinned to these same nodes, allowing the application to continue to function if one of the nodes becomes inactive. If PA_KUBE_STORAGE_NODES is not specified, then the start script selects three worker nodes to act as the storage nodes.
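A sketch of pinning storage to specific nodes in paw.env follows. The node names and the space-separated list format are assumptions; substitute the actual worker node names from your cluster:

```shell
# Hypothetical paw.env entry pinning storage to three named worker nodes
# (node names and the space-separated format are assumptions)
export PA_KUBE_STORAGE_NODES="worker1 worker2 worker3"
```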
Local storage employs three persistent volume claims, one for each of the three storage nodes. Storage containers are grouped into three sets: storage-node1, storage-node2, and storage-node3.
For local persistent storage to work properly, the root location on each storage node must exist before the node can be used. The start script asks you whether you would like the script to configure the storage location on each storage node automatically. Note that this requires ssh access to the three storage nodes. If you do not have ssh access to the storage nodes, or wish to configure the storage nodes manually, specify n when prompted.
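If you choose to configure the storage nodes manually, the preparation amounts to creating the root storage location on each node. A minimal sketch, assuming a hypothetical root location of /data/paw, placeholder node names, and passwordless ssh with sudo access:

```shell
# Hypothetical: /data/paw and the node names are placeholders for the
# actual root location and storage nodes in your cluster.
for node in worker1 worker2 worker3; do
  ssh "$node" "sudo mkdir -p /data/paw"
done
```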
Configure shared storage
Shared storage employs a single persistent volume claim that is shared by all storage containers. If you are planning to use NFS or another shared storage provider such as Portworx, select shared when prompted. PA_KUBE_STORAGE_NODES is not used with shared storage.
Shared storage supports two types of volumes: NFS and other. The volume type is configured with the PA_KUBE_VOLUME_TYPE environment variable in paw.env. If PA_KUBE_VOLUME_TYPE is not specified, you are prompted to select the desired type. All storage services reference the single persistent volume claim.
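For example, to select shared storage backed by NFS without being prompted, you might add the following to paw.env (the lowercase value "nfs" is an assumption about the spelling the prompt expects):

```shell
# In paw.env: shared storage backed by an NFS volume
# (the lowercase spelling of "nfs" is an assumption)
export PA_KUBE_STORAGE_TYPE=shared
export PA_KUBE_VOLUME_TYPE=nfs
```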
Configure dedicated storage
Dedicated storage employs a separate persistent volume claim for each storage container. This finer granularity of persistence allows storage providers to make optimal placement decisions.
Dedicated storage supports three types of volumes: local, NFS, and other. The volume type is configured with the PA_KUBE_VOLUME_TYPE environment variable in paw.env.
If PA_KUBE_VOLUME_TYPE is not specified, you are prompted to select the desired type. If local is specified for the volume type, then storage exhibits the same semantics as if PA_KUBE_STORAGE_TYPE were set to local. Storage is pinned to three worker nodes, as specified by the PA_KUBE_STORAGE_NODES environment variable in paw.env. The storage containers are also pinned to these same nodes, allowing the application to continue to function if one of the nodes becomes inactive.
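Putting these settings together, a paw.env sketch for dedicated storage on local volumes might look like this (the node names are placeholders for your actual worker nodes):

```shell
# In paw.env: dedicated storage with local volumes, pinned to three
# worker nodes (node names are placeholders)
export PA_KUBE_STORAGE_TYPE=dedicated
export PA_KUBE_VOLUME_TYPE=local
export PA_KUBE_STORAGE_NODES="worker1 worker2 worker3"
```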
Set the desired storage class
Some storage providers expose storage classes that must be specified in any persistent volume claims.
If the storage provider configured for your cluster employs such storage classes, set PA_KUBE_STORAGE_CLASS to the desired storage class. If PA_KUBE_STORAGE_CLASS is not specified in paw.env, you are prompted to specify the desired storage class.
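You can list the storage classes available in your cluster with the standard oc client by running `oc get storageclass`. A sketch of setting the class in paw.env follows; the class name shown is hypothetical:

```shell
# In paw.env: request a specific storage class in all persistent
# volume claims ("fast-ssd" is a hypothetical class name)
export PA_KUBE_STORAGE_CLASS=fast-ssd
```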
Storage providers and init container execution
Some storage providers may require that the ownership of the mounted volumes be changed within the pods. If the storage class that you are using requires this action, set PA_KUBE_INIT_CONTAINERS to true within paw.env.
Note that this also requires that your cluster be configured to allow the init containers to run as root. To minimize the security exposure, the installation configures a service account called pa-allow-rootuid that is used for all storage pods. The pa-allow-rootuid service account must be added to the appropriate security context object in your cluster that allows containers to run as root.
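On OpenShift, the security context object in question is a security context constraint (SCC). A sketch of granting the pa-allow-rootuid service account access to an SCC that permits root follows; the use of the built-in anyuid SCC and the default paw project name are assumptions, so substitute the SCC and project your cluster actually uses:

```shell
# Allow the init containers in storage pods to run as root by adding
# the pa-allow-rootuid service account to an SCC that permits root
# ("anyuid" and the project name "paw" are assumptions)
oc adm policy add-scc-to-user anyuid -z pa-allow-rootuid -n paw
```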
Set OpenShift resource limits
By default, all Planning Analytics Workspace Distributed containers execute without any limits on the amount of CPU and memory that they consume. However, some environments may require that explicit limits be specified for all containers running in the cluster.
If your deployment has such a requirement, set PA_KUBE_EXPLICIT_LIMITS to true in paw.env. The default configuration values can be found in kubernetes/config/defaults.env. If you need to increase the values, override the appropriate environment variable in paw.env.
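A sketch of enabling explicit limits in paw.env follows. The override variable shown is hypothetical; use the actual variable names listed in kubernetes/config/defaults.env:

```shell
# In paw.env: enforce explicit CPU/memory limits on all containers
export PA_KUBE_EXPLICIT_LIMITS=true

# Hypothetical override: the real variable names and defaults are
# listed in kubernetes/config/defaults.env
export GATEWAY_MEMORY_LIMIT=2Gi
```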