Prerequisites
Before deploying AEW, confirm that the following infrastructure and tools are available and configured correctly.
Kubernetes cluster requirements
A Kubernetes cluster is required to deploy IBM Aspera Enterprise Webapps. The following are the specifications for a production-grade setup.
| Component | Specification |
|---|---|
| Cluster type | Linux-based Kubernetes cluster |
| Kubernetes version (server version) | v1.33 or later |
When scaling AEW based on your expected package and file volume, refer to Sizing and guidelines by workload profile.
| Component | Specifications |
|---|---|
| Primary node | 16 GB RAM, 8 vCPUs, 250 GB disk |
| Worker node | 32 GB RAM, 16 vCPUs, 250 GB disk |
| Load balancer node | 4 GB RAM, 2 vCPUs |
To allow workloads to be scheduled on a control-plane node, remove the control-plane taint:

```
kubectl taint nodes node1 node-role.kubernetes.io/control-plane:NoSchedule-
```

Kubernetes CLI (kubectl) and cluster access
To interact with your Kubernetes cluster, you'll use the kubectl command line
tool to deploy workloads, inspect resources, and verify cluster health.
- macOS
- Linux — Red Hat, CentOS
- Linux — Ubuntu
Download and install the Kubernetes client (kubectl) version 1.33 or later. For installation instructions, refer to the official Kubernetes documentation.
- Verify Kubernetes cluster access:

  ```
  kubectl get nodes
  ```

- Verify cluster permissions:

  ```
  kubectl auth can-i '*' '*' --all-namespaces
  ```
AEW requires an external load balancer to route incoming HTTPS traffic to the Emissary ingress service inside your Kubernetes cluster. You can choose one of the following depending on your environment:
- Self-managed load balancer: For example, HAProxy or MetalLB running on a virtual machine or physical server.
  - Suitable for bare-metal or virtual machine deployments.
  - Forwards external HTTPS traffic to each worker node's Emissary ingress NodePort.
- Cloud-managed load balancer: For example, AWS Elastic Load Balancer, Azure Load Balancer, or Google Cloud Platform Load Balancer.
  - Automatically integrates with a Kubernetes Service of type `LoadBalancer`.
  - No manual configuration is required for AEW beyond creating the service.

Note: For HAProxy, MetalLB, or cloud load balancer configuration instructions, refer to each platform's official documentation.
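As an illustration of the cloud-managed option, a Service of type `LoadBalancer` in front of the Emissary ingress might look like the following sketch. The namespace, service name, selector labels, and ports here are assumptions, not AEW defaults; adjust them to match your Emissary installation.

```yaml
# Hypothetical LoadBalancer Service for the Emissary ingress.
# Names, labels, and ports are illustrative; match them to your deployment.
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress
  namespace: emissary
spec:
  type: LoadBalancer          # Cloud provider provisions an external load balancer
  selector:
    app.kubernetes.io/name: emissary-ingress
  ports:
    - name: https
      port: 443               # External HTTPS port
      targetPort: 8443        # Emissary listener port (assumed)
```

On a cloud provider, creating this service triggers provisioning of the external load balancer; its address appears in the `EXTERNAL-IP` column of `kubectl get service`.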
OpenShift AEW deployment
To deploy AEW on OpenShift, first authenticate to the cluster so that the `asctl` command can deploy resources correctly:
- Obtain and copy the login command from the OpenShift web UI.
- Run the login command in your terminal to authenticate to the cluster.
- Verify cluster connectivity and permissions using the `kubectl get nodes` command.
- Once you're logged in and have verified cluster connectivity, proceed to install AEW by following the instructions in the installation section.
Storage requirements
Your Kubernetes cluster requires persistent storage that provides a shared filesystem accessible over the network, which is ideal for data that must be shared across multiple pods. All data (database and uploads) is stored on a single, shared file storage volume (such as NFS).
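In Kubernetes terms, a shared volume of this kind is typically claimed with the ReadWriteMany access mode so that multiple pods can mount it simultaneously. The claim name, size, and storage class in this sketch are assumptions for illustration, not AEW defaults:

```yaml
# Hypothetical PVC for the shared AEW data volume.
# Name, size, and storageClassName are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aew-shared-data
spec:
  accessModes:
    - ReadWriteMany        # Shared filesystem mountable by multiple pods
  storageClassName: my-nfs-storage
  resources:
    requests:
      storage: 100Gi
```

Block-storage classes usually support only ReadWriteOnce, which is why a network filesystem such as NFS is recommended here.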
Storage for On-Prem Clusters: Installing CSI (Container Storage Interface) drivers
For any non-cloud storage (NFS, vSphere, Ceph, etc.), the corresponding CSI (Container Storage Interface) driver must be installed in your cluster before you can create a StorageClass. The driver is the software that allows Kubernetes to communicate with your storage hardware.
How to install a CSI driver and create simple shared storage (NFS)
- Identify your storage type. Determine what storage infrastructure you have (for example, an NFS server, a vSphere environment, or a Ceph cluster).
- Find the correct CSI driver. Locate the official CSI driver for that system. Common examples include: NFS, vSphere, CephFS (File), Ceph RBD (Block).
- Follow the driver's installation guide. Each driver has its own installation steps. This task
should be performed by your cluster administrator.
- Install the NFS CSI Driver (for File Storage):

  ```
  # 1. Clone the driver repository
  git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
  cd csi-driver-nfs

  # 2. Deploy the driver to your cluster
  ./deploy/install-driver.sh v4.8.0 local

  # 3. Verify the pods are running
  kubectl get pods -n kube-system | grep nfs
  ```

- Install the vSphere CSI Driver (for Block Storage):

  ```
  # Example steps for vSphere (simplified). The exact process is more complex.
  # 1. Add the VMware Helm chart repository
  helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts

  # 2. Install the vSphere CSI Driver using Helm
  helm install vsphere-csi vmware-tanzu/vsphere-csi --namespace kube-system \
    --set config.vCenterHost="vcenter.example.com"
  ```

- Verify that your driver is installed:

  ```
  # List all CSI Drivers registered in the cluster
  kubectl get csidrivers

  # Look for pods related to your driver (e.g., nfs, vsphere)
  kubectl get pods -n kube-system | grep -E '(nfs|vsphere)'
  ```
- Once you have verified that the CSI driver is running successfully, create the `StorageClass` that references the driver in its `provisioner` field:

  ```
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: my-nfs-storage
    annotations:
      storageclass.kubernetes.io/is-default-class: "true" # Optional: set as default
  provisioner: nfs.csi.k8s.io # Container Storage Interface identifier
  parameters:
    server: my-nfs-server.local # IP or hostname of your NFS server
    share: /path/to/share # Base path on the server to use
  ```

- When prompted for `rwxStorageClass` during the installation, enter the name of your StorageClass (for example, `my-nfs-storage`). To find the available StorageClasses in your cluster, run:

  ```
  kubectl get storageclass
  ```

  Note: If you are using EFS (Amazon Elastic File System), update the storage class as follows.

  Remove the following entries:
  - `gidRangeStart`
  - `gidRangeEnd`

  Add the following entries:
  - `gid: "1000"`
  - `uid: "1000"`
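Applied to a typical EFS CSI storage class, the modification described above might look like the following sketch. The storage class name, file system ID, and other parameter values are placeholders, not AEW defaults:

```yaml
# Hypothetical EFS StorageClass after the modification:
# gidRangeStart/gidRangeEnd removed, fixed gid/uid added.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-storage
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # Access-point based provisioning
  fileSystemId: fs-0123456789abcdef0    # Placeholder EFS file system ID
  directoryPerms: "700"
  gid: "1000"   # Fixed POSIX group ID (replaces gidRangeStart/gidRangeEnd)
  uid: "1000"   # Fixed POSIX user ID
```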
DNS configuration
To ensure application access through the browser and enable service discovery, you must configure external DNS records for your environment. These DNS entries must resolve to the external IP address or hostname of your load balancer.
DNS records are also required for `kibana.<subdomain>.<domain>` and `monitoring.<subdomain>.<domain>`.

Parameters to provide
- `org_name`: A short, lowercase identifier for your organization (for example, `entertainment`). This value is used as a prefix in subdomains.
- `subdomain`: The DNS subdomain under which the app will be hosted (for example, `it.company.com`).
- `domain_name`: The root domain name (for example, `company.com`). Typically used for certificate generation.
The `orgName` value is used to construct service-specific URLs, such as `<orgName>.<subDomain>`. For example, `entertainment.it.company.com` points to the Ingress and is used for the user portal, while `api.it.company.com` also points to the Ingress but is used for APIs and internal services.

To verify that a DNS record resolves, run:

```
nslookup entertainment.it.company.com
```

Alternatively, you can run:

```
dig entertainment.it.company.com
```
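To check several records at once, a small shell loop can report which hostnames resolve. This is a sketch; the hostnames below are examples, and `getent` is used because it is available on most Linux systems:

```shell
# Report whether each hostname resolves; hostnames below are examples.
check_dns() {
  for host in "$@"; do
    if getent hosts "$host" > /dev/null 2>&1; then
      echo "OK: $host"
    else
      echo "MISSING: $host"
    fi
  done
}

check_dns entertainment.it.company.com api.it.company.com
```

Any hostname reported as MISSING needs a DNS record before you proceed with the installation.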
TLS certificate
A TLS (Transport Layer Security) certificate is required to secure access to IBM Aspera Enterprise Webapps over HTTPS. This ensures encrypted communication between users’ browsers and your application.
The certificate must also cover `kibana.<subdomain>.<domain>` and `monitoring.<subdomain>.<domain>`.

- Common name (`commonName`): Matches the fully qualified domain name (FQDN). For example: `entertainment.it.company.com`
- DNS names (`dnsNames`): This field lists all the domain names that the certificate should be valid for. It can include specific domains, such as `entertainment.it.company.com` and `api.it.company.com`, or it can use a wildcard to cover multiple subdomains under the same domain, like `*.it.company.com`.
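If you issue the certificate with cert-manager, the `commonName` and `dnsNames` fields map directly onto a Certificate resource. The following is a sketch under the assumption that cert-manager and a ClusterIssuer named `letsencrypt-prod` are already installed in your cluster; the resource and secret names are illustrative:

```yaml
# Hypothetical cert-manager Certificate; issuer and names are illustrative.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: aew-tls
spec:
  secretName: aew-tls          # Secret where the issued key pair is stored
  commonName: entertainment.it.company.com
  dnsNames:
    - entertainment.it.company.com
    - api.it.company.com
    - "*.it.company.com"       # Wildcard covering other subdomains
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```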
Using any CA-issued certificate
When using a certificate issued by any Certificate Authority (CA), ensure the following:
- The certificate covers all AEW hostnames in its Subject Alternative Name (SAN) list.
- The certificate and key are provided in the correct file format:
  - Public certificate file (the “proof” of identity), for example, `tls.crt`
  - Private key file (keep strictly confidential), for example, `tls.key`
  - (Optional) Certificate Authority chain file, needed if using an internal CA, for example, `ca.crt`
- The private key must be unencrypted (no passphrase).
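Once you have these files, the key pair is typically loaded into the cluster as a TLS secret. A minimal sketch, assuming the file names above; the secret name is illustrative:

```yaml
# Hypothetical TLS Secret; replace the data values with base64-encoded
# file contents (e.g., base64 -w0 tls.crt). The name is illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: aew-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded tls.crt>
  tls.key: <base64-encoded tls.key>
```

Equivalently, `kubectl create secret tls aew-tls --cert=tls.crt --key=tls.key` generates the same secret from the files directly.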
Before deployment, verify that your certificate’s SAN
list includes every domain name AEW will use. If a hostname is missing, users will see a browser
warning. An example SAN block might include it.company.com,
entertainment.it.company.com, api.it.company.com.
To inspect your `.crt` file and check the SAN list, run:

```
openssl x509 -in tls.crt -text -noout | grep DNS
```

Example output:

```
DNS:*.it.company.com, DNS:entertainment.it.company.com, DNS:api.it.company.com, DNS:it.company.com
```
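If you want to rehearse this check before your CA-issued certificate arrives, you can generate a throwaway self-signed certificate with a SAN list and inspect it the same way. The hostnames are examples, and the `-addext` flag requires OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway self-signed certificate with example SAN entries
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt \
  -subj "/CN=entertainment.it.company.com" \
  -addext "subjectAltName=DNS:entertainment.it.company.com,DNS:api.it.company.com"

# Inspect its SAN list; prints the DNS entries embedded above
openssl x509 -in demo.crt -text -noout | grep DNS
```

Note that a self-signed certificate like this is only for practicing the inspection workflow; browsers will not trust it for production AEW access.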