Prerequisites

Before deploying AEW, confirm that the following infrastructure and tools are available and configured correctly.

Kubernetes cluster requirements

A Kubernetes cluster is required to deploy IBM Aspera Enterprise Webapps. The following are the specifications for a production-grade setup.

Cluster type
Component Specification
Cluster type Linux-based Kubernetes cluster
Kubernetes version (server version) v1.33 and later
Tip: Red Hat OpenShift is an enterprise Kubernetes container platform that you can alternatively use to deploy AEW. When using OpenShift, make sure that the underlying Kubernetes version matches the one specified in the Kubernetes cluster requirements table.
Sizing guidelines by workload profile

When scaling AEW based on your expected package and file volume, use the Sizing guidelines by workload profile as a reference.

Note: Primary nodes are required for a high availability (HA) control plane, and worker nodes are required to run application and service pods. A load balancer node is also required on Kubernetes clusters running on physical servers (bare-metal Kubernetes) for ingress routing.
Sample node configuration
Component Specifications
Primary node 16 GB RAM, 8 vCPUs, 250 GB disk
Worker node 32 GB RAM, 16 vCPUs, 250 GB disk
Load balancer node 4 GB RAM, 2 vCPUs
Workload scaling reference
Refer to the Workload scaling reference table for estimates based on typical AEW workloads. Actual sizing may vary based on deployment specifics and traffic load.
Important: If you are using a single-node cluster, remove the control-plane taint so that workloads can be scheduled on the node. Run kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule- (the trailing hyphen removes the taint).
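As a sketch, assuming a single node named node1 (substitute your own node name from kubectl get nodes):

```shell
# Inspect the taints currently set on the node
kubectl describe node node1 | grep Taints

# Remove the control-plane NoSchedule taint (the trailing "-" removes it)
kubectl taint nodes node1 node-role.kubernetes.io/control-plane:NoSchedule-

# Confirm the taint is gone before scheduling AEW workloads
kubectl describe node node1 | grep Taints
```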

Kubernetes CLI (kubectl) and cluster access

To interact with your Kubernetes cluster, you'll use the kubectl command line tool to deploy workloads, inspect resources, and verify cluster health.

Supported platforms
  • macOS
  • Linux — Red Hat Enterprise Linux, CentOS
  • Linux — Ubuntu
Install the Kubernetes client

Download and install the Kubernetes client (kubectl). The required client version is 1.33 or later. For installation instructions, refer to the official Kubernetes documentation.
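For example, on Linux x86_64 you can install a pinned kubectl release with the download pattern from the official Kubernetes documentation (the exact patch version and architecture below are assumptions; adjust them for your environment):

```shell
# Download a specific kubectl release for Linux x86_64
curl -LO "https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl"

# Install it to a directory on the PATH
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Confirm the client version is 1.33 or later
kubectl version --client
```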

Kubernetes commands
  • Verify Kubernetes cluster access
    kubectl get nodes
  • Verify cluster permissions
    kubectl auth can-i '*' '*' --all-namespaces
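As an optional pre-flight sketch, you can also fail fast if any node is not in the Ready state (this assumes your current kubectl context points at the target cluster):

```shell
# Print and flag any node whose STATUS column is not "Ready"
kubectl get nodes --no-headers | awk '$2 != "Ready" {print "Node " $1 " is " $2; found=1} END {exit found}'
```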
Load balancer

AEW requires an external load balancer to route incoming HTTPS traffic to the Emissary ingress service inside your Kubernetes cluster. Choose one of the following depending on your environment:

  • Self-managed load balancer: For example, HAProxy or MetalLB running on a virtual machine or physical server.
    • Suitable for bare-metal or virtual machine deployments.
    • Forwards external HTTPS traffic to each worker node’s Emissary ingress NodePort.
  • Cloud-managed load balancer: For example, AWS Elastic Load Balancer, Azure Load Balancer, Google Cloud Platform Load Balancer.
    • Automatically integrates with Kubernetes Service type LoadBalancer.
    • No manual configuration is required for AEW beyond creating the service.
    Note: For HAProxy, MetalLB, or cloud load balancer configuration instructions, refer to each platform's official documentation.
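As a minimal self-managed sketch, an HAProxy configuration in TCP passthrough mode might forward TLS traffic to the Emissary ingress NodePort on each worker. The node IP addresses and the NodePort 30443 below are hypothetical; check the actual NodePort with kubectl get svc in the Emissary namespace:

```
# haproxy.cfg (sketch) — pass TLS through to the Emissary ingress NodePort
frontend https_in
    bind *:443
    mode tcp
    default_backend aew_workers

backend aew_workers
    mode tcp
    balance roundrobin
    server worker1 10.0.0.11:30443 check
    server worker2 10.0.0.12:30443 check
```

TCP passthrough keeps TLS termination inside the cluster, so the certificate configured for AEW is the one presented to browsers.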

OpenShift AEW deployment

If you are deploying AEW on Red Hat OpenShift, you must authenticate your terminal session to the target OpenShift cluster so that the asctl command can deploy resources correctly.
  1. Obtain and copy the login command from the OpenShift web console.
  2. Run the login command in your terminal to authenticate to the cluster.
  3. Verify cluster connectivity and permissions using the kubectl get nodes command.
  4. After you log in and verify cluster connectivity, proceed to install AEW by following the instructions in the installation section.
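The steps above might look like the following in a terminal session (the server URL and token are placeholders copied from your own OpenShift web console):

```shell
# 1–2. Authenticate to the target OpenShift cluster
oc login --token=sha256~<your-token> --server=https://api.cluster.example.com:6443

# 3. Verify cluster connectivity and permissions
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces
```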

Storage requirements

Your Kubernetes cluster requires persistent storage that provides a shared filesystem accessible over the network, which is ideal for data shared across multiple pods. All data (database and uploads) is stored on a single, shared file storage volume, such as NFS.

Storage for On-Prem Clusters: Installing CSI (Container Storage Interface) drivers

For any non-cloud storage (NFS, vSphere, Ceph, etc.), the corresponding CSI (Container Storage Interface) driver must be installed in your cluster before you can create a StorageClass. The driver is the software that allows Kubernetes to communicate with your storage hardware.

How to Install a CSI Driver and create a simple Shared Storage (NFS)

  1. Identify your storage type. Determine what storage infrastructure you have (either an NFS server, a vSphere environment, or a Ceph cluster).
  2. Find the correct CSI driver. Locate the official CSI driver for that system. Common examples include NFS, vSphere, CephFS (file), and Ceph RBD (block).
  3. Follow the driver's installation guide. Each driver has its own installation steps. This task should be performed by your cluster administrator.
    1. Install the NFS CSI Driver (for File Storage)
      # 1. Clone the driver repository 
      
      git clone https://github.com/kubernetes-csi/csi-driver-nfs.git 
      
      cd csi-driver-nfs 
      
      # 2. Deploy the driver to your cluster 
      
      ./deploy/install-driver.sh v4.8.0 local 
      
      # 3. Verify the pods are running 
      
      kubectl get pods -n kube-system | grep nfs 
    2. Install the vSphere CSI Driver (for Block Storage)
      # Example steps for vSphere (simplified). The exact process is more complex.
      
      # 1. Add the VMware Helm chart repository
      helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
      
      # 2. Install the vSphere CSI Driver using Helm
      helm install vsphere-csi vmware-tanzu/vsphere-csi --namespace kube-system \
        --set config.vCenterHost="vcenter.example.com"
      
    3. Verify that your driver is installed
      # List all CSI Drivers registered in the cluster
      kubectl get csidrivers
      
      # Look for pods related to your driver (e.g., nfs, vsphere)
      kubectl get pods -n kube-system | grep -E '(nfs|vsphere)'
      
  4. Verify that the CSI driver is running successfully, then create a StorageClass that references the driver in its provisioner field.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: my-nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true" # Optional: set as default
    provisioner: nfs.csi.k8s.io # Container Storage Interface identifier
    parameters:
      server: my-nfs-server.local # IP or hostname of your NFS server
      share: /path/to/share       # Base path on the server to use
    
  5. When prompted for rwxStorageClass during the installation, enter the name of your Storage Class (for example, my-nfs-storage). To find the available Storage Class in your cluster, run:
    kubectl get storageclass
    Note: If you are using Amazon EFS (Elastic File System), update the storage class as follows:

    Remove the following entries:
    • gidRangeStart
    • gidRangeEnd

    Add the following entries:
    • gid: "1000"
    • uid: "1000"
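Before running the installer, you can confirm that the storage class actually provisions shared (RWX) volumes by creating a throwaway PersistentVolumeClaim. The claim name below is an example; the storage class name matches the earlier my-nfs-storage sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aew-storage-test        # example name; delete the PVC after the check
spec:
  accessModes:
    - ReadWriteMany             # AEW's shared volume must support RWX
  storageClassName: my-nfs-storage
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f, wait for kubectl get pvc to report a Bound status, then delete the claim.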

DNS configuration

To ensure application access through the browser and enable service discovery, you must configure external DNS records for your environment. These DNS entries must resolve to the external IP address or hostname of the load balancer.

Note: For OCP clusters, you must add the following DNS entries: kibana.<subdomain>.<domain> and monitoring.<subdomain>.<domain>.

Parameters to provide

  • org_name: A short, lowercase identifier for your organization (for example, entertainment). This value is used as a prefix in subdomains.
  • subdomain: The DNS subdomain under which the app will be hosted (for example, it.company.com).
  • domain_name: The root domain name (for example, company.com). Typically used for certificate generation.
Note: org_name is used to construct service-specific URLs, such as <org_name>.<subdomain>. For example, entertainment.it.company.com points to the Ingress and is used for the user portal, while api.it.company.com also points to the Ingress but is used for APIs and internal services.
Validation
After the DNS is configured, verify that each hostname resolves:
nslookup entertainment.it.company.com
Alternatively, you can run:
dig entertainment.it.company.com
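To check all AEW hostnames in one pass, a small loop over the example names from this section can be used (substitute your own hostnames):

```shell
# Report any AEW hostname that does not resolve
for host in entertainment.it.company.com api.it.company.com; do
  nslookup "$host" > /dev/null 2>&1 || echo "MISSING DNS record: $host"
done
```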

TLS certificate

A TLS (Transport Layer Security) certificate is required to secure access to IBM Aspera Enterprise Webapps over HTTPS. This ensures encrypted communication between users’ browsers and your application.

Note: For OCP clusters, you must add the following DNS entries to the certificate's SAN list: kibana.<subdomain>.<domain> and monitoring.<subdomain>.<domain>.
Certificate requirements
  • Common name (commonName): Matches the fully qualified domain name (FQDN). For example: entertainment.it.company.com
  • DNS names (dnsNames): This field lists all the domain names that the certificate is valid for. It can include specific domains, such as entertainment.it.company.com and api.it.company.com, or a wildcard that covers multiple subdomains under the same domain, such as *.it.company.com.
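If you need to request a certificate from your CA, you can generate a private key and a certificate signing request (CSR) that already carries the required SAN entries. This sketch assumes OpenSSL 1.1.1 or later (for the -addext option) and reuses the example hostnames from this section:

```shell
# Generate a private key and a CSR whose SAN list covers all AEW hostnames
openssl req -new -newkey rsa:2048 -nodes \
  -keyout tls.key -out aew.csr \
  -subj "/CN=entertainment.it.company.com" \
  -addext "subjectAltName=DNS:entertainment.it.company.com,DNS:api.it.company.com,DNS:it.company.com"

# Confirm the SAN entries made it into the CSR before submitting it to the CA
openssl req -in aew.csr -noout -text | grep DNS
```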

Using any CA-issued certificate

While Let’s Encrypt is the documented example, AEW supports any certificate issued by a trusted CA (public or internal). The requirements are:
  • The certificate covers all AEW hostnames in its Subject Alternative Name (SAN) list.
  • The certificate and key are provided in the correct file format:
    • Public certificate file (the “proof” of identity), for example, tls.crt
    • Private key file (keep strictly confidential), for example, tls.key
    • (Optional) Certificate Authority chain file, needed if using an internal CA, for example, ca.crt
  • The private key must be unencrypted (no passphrase)
Verify your certificate

Before deployment, verify that your certificate’s SAN list includes every domain name AEW will use. If a hostname is missing, users will see a browser warning. An example SAN block might include it.company.com, entertainment.it.company.com, api.it.company.com.

To inspect your .crt file and check the SAN list, run:
openssl x509 -in tls.crt -text -noout | grep DNS 
Example output:
DNS:*.it.company.com, DNS:entertainment.it.company.com, DNS:api.it.company.com, DNS:it.company.com
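Beyond the SAN list, you can confirm that tls.crt and tls.key actually belong together before creating the Kubernetes TLS secret. This sketch assumes an RSA key (for an EC key, use openssl ec instead of openssl rsa); the ca.crt step applies only if you use an internal CA:

```shell
# The two digests must be identical if the certificate matches the key
openssl x509 -noout -modulus -in tls.crt | openssl md5
openssl rsa  -noout -modulus -in tls.key | openssl md5

# If you have an internal CA chain file, verify the certificate against it
openssl verify -CAfile ca.crt tls.crt
```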