Red Hat OpenShift Service on Amazon Web Services (ROSA)
You can install Guardium® Insights version 3.2.13 and later with Red Hat OpenShift Service on Amazon Web Services (ROSA).
Before you begin
Before you proceed with the installation, make sure that your environment meets the System requirements and prerequisites and verify that you are prepared for installation.
About this task
Complete all tasks in the following order.
Creating a Red Hat account
Procedure
- From a web browser, go to https://console.redhat.com/.
- Follow the instructions to create an account.
Creating an AWS account
Procedure
- From a web browser, go to https://portal.aws.amazon.com/billing/signup.
- Follow the instructions to create an account.
Installing CLI tools
Install the following tools if you are using a Linux-based system. A combined installation sketch follows the list.
Procedure
- Install the AWS CLI. The AWS CLI is an open source tool for interacting with AWS services directly from your OS command line or a remote terminal program. This tool requires some post-installation configuration.
- Install the ROSA CLI. This is a Red Hat® tool to create, update, manage, and remove your ROSA cluster and resources.
- Install the OpenShift CLI. This is a Red Hat tool to create and manage Red Hat OpenShift® Container Platform projects.
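The following is a minimal installation sketch for an x86_64 Linux host. The download locations are the publicly documented AWS and Red Hat mirrors; verify them against the product documentation and adjust the file names for your architecture and operating system.
# AWS CLI v2
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscliv2.zip && sudo ./aws/install
aws --version
# ROSA CLI
curl -o rosa-linux.tar.gz "https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/rosa-linux.tar.gz"
tar -xzf rosa-linux.tar.gz && sudo mv rosa /usr/local/bin/
rosa version
# OpenShift CLI (oc)
curl -o openshift-client-linux.tar.gz "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz"
tar -xzf openshift-client-linux.tar.gz oc kubectl && sudo mv oc kubectl /usr/local/bin/
oc version --client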
Configuring ROSA CLI with AWS prerequisites
Verify that you meet AWS access, support, and security prerequisites before you create your ROSA cluster on AWS.
Procedure
- Using the AWS CLI, run the following command.
aws configure
The output is similar to:
❯ aws configure
AWS Access Key ID [********************]:
AWS Secret Access Key [********************]:
Default region name [us-east-2]:
Default output format [None]:
Important: If the AWS account is already configured, you are asked to confirm your credentials. If not, you are asked to enter your account details, keys, and region.
Use your IAM user account and access key. ROSA does not support configuration by using the root user account.
- Verify the AWS configuration.
aws sts get-caller-identity
Important: The output shows the UserID, Account, and ARN details. Confirm that these details are correct.
- Confirm that the Elastic Load Balancing role exists. If the role doesn't exist, it is created when you run the following command.
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
- Log in using the ROSA CLI.
rosa login --token=<token>
A Red Hat account is needed for this step. If you do not have a Red Hat account, create one at the Red Hat Console. The token can be retrieved from the OpenShift Cluster Manager console, as sketched below.
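The following is a minimal login sketch. It assumes that the offline token is copied from the OpenShift Cluster Manager token page (https://console.redhat.com/openshift/token/rosa); the token value shown is a placeholder.
# Paste the offline token that you copied from the console.
rosa login --token="<your-offline-token>"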
- Verify that the AWS and Red Hat configurations are correct.
rosa whoami
- Confirm that you have enough quota on AWS for the region to which ROSA is configured.
rosa verify quota
❯ rosa verify quota
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.ht
- Verify that your Red Hat OpenShift client is available for use with ROSA.
rosa verify openshift-client
❯ rosa verify openshift-client
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.12.0-202208031327
- From the AWS console at https://aws.amazon.com/iam/ or the command line (a sketch follows the next note), attach the ServiceQuotasFullAccess policy to the IAM user.
The Amazon Security Token Service (STS) provides enhanced security. It is the recommended access management method for installing and interacting with clusters on ROSA. For more information, see AWS prerequisites for ROSA with STS.
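The following is a sketch of attaching the policy from the command line. It assumes that the AWS managed policy is named ServiceQuotasFullAccess and uses <iam-user> as a placeholder for your IAM user name; confirm the policy name in your AWS account before you run it.
# Attach the service-quotas policy to the IAM user (policy name assumed to be ServiceQuotasFullAccess).
aws iam attach-user-policy \
  --user-name <iam-user> \
  --policy-arn arn:aws:iam::aws:policy/ServiceQuotasFullAccess
# Confirm that the policy is attached.
aws iam list-attached-user-policies --user-name <iam-user>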
Creating a ROSA cluster
Procedure
- If you are deploying ROSA in your account for the first time, create the account-wide roles.
rosa create account-roles --mode auto --yes
- Run the following commands to begin the ROSA cluster creation in interactive mode.
export CLUSTER_NAME=<name the cluster>
Create the ROSA cluster by using the assigned name.
rosa create cluster --cluster-name ${CLUSTER_NAME} --sts --mode auto --yes --interactive
Select the following options:
- The compute nodes instance type must be m6i.4xlarge.
- The installer role ARN must be the same as the one that is created in step 1.
- All other settings are based on your organization's needs.
❯ rosa create cluster --cluster-name ${CLUSTER_NAME} --sts --mode auto --yes --interactive
I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected.
? Cluster name: rosatest
? Deploy cluster with Hosted Control Plane: No
? Create cluster admin user: Yes
? Username: kubeadmin
? Password: [? for help] *****************************
W: In a future release STS will be the default mode.
W: --sts flag won't be necessary if you wish to use STS.
W: --non-sts/--mint-mode flag will be necessary if you do not wish to use STS.
? OpenShift version: 4.12.45
? Configure the use of IMDSv2 for ec2 instances optional/required: optional
W: More than one Installer role found
? Installer role ARN: arn:aws:iam::****************:role/ManagedOpenShift-Installer-Role
I: Using arn:aws:iam::****************:role/ManagedOpenShift-ControlPlane-Role for the ControlPlane role
I: Using arn:aws:iam::****************:role/ManagedOpenShift-Worker-Role for the Worker role
I: Using arn:aws:iam::****************:role/ManagedOpenShift-Support-Role for the Support role
? External ID (optional):
? Operator roles prefix: rosatest-w8o2
? Deploy cluster using pre registered OIDC Configuration ID: No
? Tags (optional): owner *****, use dev
? Multiple availability zones: No
? AWS region: us-east-2
? PrivateLink cluster: No
? Machine CIDR: 10.0.0.0/16
? Service CIDR: 172.30.0.0/16
? Pod CIDR: 10.128.0.0/14
? Install into an existing VPC: No
? Select availability zones: No
? Enable Customer Managed key: No
? Compute nodes instance type: m6i.4xlarge
? Enable autoscaling: No
? Compute nodes: 3
? Default machine pool labels (optional):
? Host prefix: 23
? Machine pool root disk size (GiB or TiB): 300 GiB
? Enable FIPS support: No
? Encrypt etcd data: Yes
? Disable Workload monitoring: Yes
I: Creating cluster 'rosatest'
I: To create this cluster again in the future, you can run:
rosa create cluster --cluster-name rosatest --sts --mode auto --cluster-admin-user kubeadmin --cluster-admin-password ***************************** --role-arn arn:aws:iam::****************:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::****************:role/ManagedOpenShift-Support-Role --controlplane-iam-role arn:aws:iam::****************:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::****************:role/ManagedOpenShift-Worker-Role --operator-roles-prefix rosatest-w8o2 --tags "use:dev,owner:*****" --region us-east-2 --version 4.12.45 --replicas 3 --compute-machine-type m6i.4xlarge --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 --etcd-encryption --disable-workload-monitoring
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'rosatest' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
Name: rosatest
ID: **********
External ID:
Control Plane: Customer Hosted
OpenShift Version:
Channel Group: stable
DNS: Not ready
AWS Account: ****************
API URL:
Console URL:
Region: us-east-2
Multi-AZ: false
Nodes:
 - Control plane: 3
 - Infra: 2
 - Compute: 3
Network:
 - Type: OVNKubernetes
 - Service CIDR: 172.30.0.0/16
 - Machine CIDR: 10.0.0.0/16
 - Pod CIDR: 10.128.0.0/14
 - Host Prefix: /23
Workload Monitoring: Disabled
Ec2 Metadata Http Tokens: optional
STS Role ARN: arn:aws:iam::****************:role/ManagedOpenShift-Installer-Role
Support Role ARN: arn:aws:iam::****************:role/ManagedOpenShift-Support-Role
Instance IAM Roles:
 - Control plane: arn:aws:iam::****************:role/ManagedOpenShift-ControlPlane-Role
 - Worker: arn:aws:iam::****************:role/ManagedOpenShift-Worker-Role
Operator IAM Roles:
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-machine-api-aws-cloud-credentials
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-cloud-credential-operator-cloud-credenti
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-image-registry-installer-cloud-credentia
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-ingress-operator-cloud-credentials
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-cluster-csi-drivers-ebs-cloud-credential
 - arn:aws:iam::****************:role/rosatest-w8o2-openshift-cloud-network-config-controller-cloud-cr
Managed Policies: No
State: waiting (Waiting for OIDC configuration)
Private: No
Created: Dec 18 2023 15:21:21 UTC
User Workload Monitoring: disabled
Details Page: https://console.redhat.com/openshift/details/s/2ZilfAAMb2KkEErAa4100zDfHx9
OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/********** (Classic)
I: Preparing to create operator roles.
I: Creating roles using 'arn:aws:iam::****************:user/*****'
I: Created role 'rosatest-w8o2-openshift-machine-api-aws-cloud-credentials' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-machine-api-aws-cloud-credentials'
I: Created role 'rosatest-w8o2-openshift-cloud-credential-operator-cloud-credenti' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-cloud-credential-operator-cloud-credenti'
I: Created role 'rosatest-w8o2-openshift-image-registry-installer-cloud-credentia' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-image-registry-installer-cloud-credentia'
I: Created role 'rosatest-w8o2-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-ingress-operator-cloud-credentials'
I: Created role 'rosatest-w8o2-openshift-cluster-csi-drivers-ebs-cloud-credential' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-cluster-csi-drivers-ebs-cloud-credential'
I: Created role 'rosatest-w8o2-openshift-cloud-network-config-controller-cloud-cr' with ARN 'arn:aws:iam::****************:role/rosatest-w8o2-openshift-cloud-network-config-controller-cloud-cr'
I: Preparing to create OIDC Provider.
I: Creating OIDC provider using 'arn:aws:iam::****************:user/*****'
I: Created OIDC provider with ARN 'arn:aws:iam::****************:oidc-provider/oidc.op1.openshiftapps.com/***********'
I: To determine when your cluster is Ready, run 'rosa describe cluster -c rosatest'.
I: To watch your cluster installation logs, run 'rosa logs install -c rosatest --watch'
- Check the status of your cluster.
It can take around 30 to 40 minutes to complete this setup. The cluster is ready when it is listed as Ready in the description. Errors, if any, appear in the description of the cluster.
rosa describe cluster --cluster ${CLUSTER_NAME}
- Optional: If the cluster was not created by using interactive mode, create an admin user for your cluster.
rosa create admin --cluster=${CLUSTER_NAME}
- Obtain the console URL.
rosa describe cluster --cluster=${CLUSTER_NAME} | grep -i Console
Configuring storage on your ROSA cluster
Configure both RWO (block) and RWX (file) storage for your cluster by using native AWS services.
Procedure
- Install and set up AWS EFS CSI Driver Operator.
- Log in to the OpenShift Container Platform web console.
- Click Operators > OperatorHub.
- Search for AWS EFS CSI Driver Operator in the filter box.
- Click Install on the AWS EFS CSI Driver Operator page. Verify that the following requirements are met:
- The AWS EFS CSI Driver Operator Version is 4.10 or up.
- All namespaces on the cluster (default) are selected.
- The installed namespace is set to openshift-cluster-csi-drivers.
- Click Administration > CustomResourceDefinitions.
- Search for ClusterCSIDriver in the filter box.
- On the Instances tab, click Create ClusterCSIDriver and enter the following YAML.
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed
- Click Create.
- Wait for the EFS pods in the openshift-cluster-csi-drivers namespace to appear.
oc get pods -n openshift-cluster-csi-drivers | grep -i efs
For example,
aws-efs-csi-driver-controller-84bf85d9df-86ms5   4/4   Running   0   3d14h
aws-efs-csi-driver-controller-84bf85d9df-c7k89   4/4   Running   0   3d14h
aws-efs-csi-driver-node-cc4gc                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-hnh97                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-sc8ln                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-sq5qg                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-w6j5c                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-wz5kp                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-xcqxx                    3/3   Running   0   3d14h
aws-efs-csi-driver-node-zb28n                    3/3   Running   0   3d14h
aws-efs-csi-driver-operator-76db8f8d97-jh795     1/1   Running   0   3d18h
- Create a secret by using the following YAML.
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-cluster-csi-drivers
  name: aws-efs-cloud-credentials
stringData:
  aws_access_key_id: <access key id>
  aws_secret_access_key: <secret access key>
Replace <access key id> and <secret access key> with the access key ID and secret key of the IAM user who created the ROSA cluster. To apply the secret from the command line, see the sketch that follows.
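The following sketch applies the secret from the command line; the file name efs-cloud-credentials.yaml is an arbitrary choice for the saved manifest.
# Save the YAML above to a file, apply it, and confirm that the secret exists.
oc apply -f efs-cloud-credentials.yaml
oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers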
- Create an EFS volume and configure access for the IAM user that created the ROSA cluster.
- On the AWS console, open https://console.aws.amazon.com/efs.
- Go to File systems > Create file system. Verify that you are in the correct AWS region, and enter a name for the file system. For Virtual Private Cloud (VPC), select the VPC that was used for the OpenShift cluster that is built by using ROSA CLI. Accept default settings for all other selections.
- Wait for the volume and mount targets to be created fully.
- Select your new EFS volume and go to the Network tab.
- Copy the Security Group ID to your clipboard. For example, sg-012115e5809162746.
- Go to Security Groups and find the security group whose ID you copied in the previous step (the one that is used by the EFS volume).
- On the Inbound rules tab, click Edit inbound rules, and then add a rule with the following settings:
Type: NFS
Protocol: TCP
Port range: 2049
Source: Custom/IP address range of your nodes (for example: "10.0.0.0/16")
- Save the rule. A command-line sketch of the same rule follows.
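As an alternative to the console, the following sketch adds the same inbound rule with the AWS CLI. The security group ID and CIDR are the example values from this procedure; substitute your own.
# Allow NFS (TCP 2049) from the node CIDR to the EFS security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-012115e5809162746 \
  --protocol tcp \
  --port 2049 \
  --cidr 10.0.0.0/16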
- Create a storage class for your EFS volume.
Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, you can obtain dynamically provisioned persistent volumes. Installing the AWS EFS CSI Driver Operator does not create a storage class by default. Use the following steps to manually create the EFS storage class validated for Guardium Insights.
Verify that you have the following information before you run the command to create your EFS storage class.
- provisioningMode: The EFS access point mode. Use efs-ap.
- fileSystemId: The EFS file system ID. For example, fs-08a5b4467d198bf3e.
- uid: The access point user ID. Use zero (0).
- gid: The access point group ID. Use zero (0).
- directoryPerms: The directory permissions for the access point root directory. Use 777.
- In the OpenShift Container Platform console, click Storage > Storage Classes.
- On the StorageClasses overview page, click Create Storage Class.
- On the StorageClasses create page, switch to YAML and paste the following code.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
parameters:
  directoryPerms: "777"
  fileSystemId: <efs file system ID> #Replace with the Elastic File System (EFS) ID created in step 2.
  uid: "0"
  gid: "0"
  provisioningMode: efs-ap
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
- Create an EBS storage class. The EBS CSI Driver Operator is installed by default on the ROSA cluster. A verification sketch follows the YAML examples.
- In the OpenShift Container Platform console, click Storage > Storage Classes.
- On the StorageClasses overview page, click Create Storage Class.
- Switch to YAML and paste the following code for your scenario.
Use the following code for a high performance Guardium Insights deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
parameters:
  type: io2
  iopsPerGB: "500" #required for io1 and io2 type storage classes. This can be calculated based on your volume size and maximum allowable iops. See Amazon EBS volume types.
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
For a minimal EBS deployment, use the following YAML code:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi-fast
parameters:
  encrypted: "true"
  iops: "16000"
  throughput: "1000"
  type: gp3
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
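The following is a quick verification sketch; the storage class names efs-sc, ebs-sc, and gp3-csi-fast come from the YAML examples in this section, so check only the ones that you created.
# Confirm that the storage classes exist and use the expected provisioners.
oc get storageclass | grep -E 'efs-sc|ebs-sc|gp3-csi-fast'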
Expanding the ROSA cluster to add nodes for Guardium Insights warehouse workloads
Procedure
- Determine the AWS machine type and number of nodes that are needed by using the Guardium Insights sizing calculator.
- Create a machine pool in the cluster dedicated for Guardium Insights warehouse workloads.
export CLUSTER_NAME=<name the cluster>
export DB_NODE_REPLICAS=<the number of nodes needed for warehouse workloads>
export DB_NODE_MACHINE_TYPE=<AWS machine type for the warehouse workloads>
- For the CLUSTER_NAME, use the name that was created by using the Creating a ROSA cluster procedure.
- Select DB_NODE_REPLICAS based on the sizing from step 1.
- For DB_NODE_MACHINE_TYPE, select from https://aws.amazon.com/ec2/instance-types/ based on the sizing in step 1.
rosa create machinepool --cluster=${CLUSTER_NAME} --name=db-nodes --replicas=${DB_NODE_REPLICAS} --instance-type=${DB_NODE_MACHINE_TYPE} --labels='icp4data=database-db2wh'
- View the list of machine pools.
It can take about 30 to 45 minutes for the nodes to become active and usable.
rosa list machinepools --cluster=${CLUSTER_NAME}
- Wait until all the nodes are added to the OCP cluster and are in the Ready state. A sketch for checking the warehouse node labels follows the example output.
oc get nodes | grep worker
ip-10-0-136-0.us-east-2.compute.internal     Ready   infra,worker   22h     v1.25.14+31e0558
ip-10-0-152-199.us-east-2.compute.internal   Ready   worker         23h     v1.25.14+31e0558
ip-10-0-169-41.us-east-2.compute.internal    Ready   infra,worker   22h     v1.25.14+31e0558
ip-10-0-186-255.us-east-2.compute.internal   Ready   worker         4h52m   v1.25.14+31e0558
ip-10-0-191-113.us-east-2.compute.internal   Ready   worker         23h     v1.25.14+31e0558
ip-10-0-222-137.us-east-2.compute.internal   Ready   worker         4h52m   v1.25.14+31e0558
ip-10-0-226-30.us-east-2.compute.internal    Ready   worker         4h52m   v1.25.14+31e0558
ip-10-0-250-40.us-east-2.compute.internal    Ready   worker         23h     v1.25.14+31e0558
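The following sketch confirms that the new warehouse nodes carry the label that was set on the machine pool (icp4data=database-db2wh), which is how the warehouse workloads are scheduled onto them.
# List only the nodes from the db-nodes machine pool.
oc get nodes -l icp4data=database-db2wh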
Installing Guardium Insights
Procedure
- Log in to your Red Hat OpenShift cluster instance.
oc login -u <KUBE_USER> -p <KUBE_PASS> [--insecure-skip-tls-verify=true]
For example,
oc login api.example.ibm.com:6443 -u kubeadmin -p xxxxx-xxxxx-xxxxx-xxxxx
- Set these environment variables:
export CP_REPO_USER=<entitlement_user>
export CP_REPO_PASS=<entitlement_key>
export NAMESPACE=<guardium_insights_namespace>
export CASE_NAME=ibm-guardium-insights
export CASE_VERSION=2.2.10 #<YOUR_CASE_VERSION>
export LOCAL_CASE_DIR=$HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION
- <entitlement_user> and <entitlement_key> are the entitlement user and key, as described in Obtain your entitlement key.
- <guardium_insights_namespace> is the namespace that you create in step 4 (a sketch follows this list). This namespace must be 10 or fewer characters in length.
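The following is a sketch of the preparation steps that the later commands assume: creating the target namespace and downloading the CASE bundle. It assumes that the oc ibm-pak plug-in is already installed; the downloaded bundle lands under $HOME/.ibm-pak, which matches LOCAL_CASE_DIR above.
# Create the Guardium Insights namespace if it does not exist yet.
oc create namespace ${NAMESPACE}
# Download the CASE bundle into $HOME/.ibm-pak.
oc ibm-pak get ${CASE_NAME} --version ${CASE_VERSION}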
- Install the Guardium Insights operator and related components.
- Run the pre-install script. This script sets up secrets and parameters for the Guardium Insights instance.
export GI_INVENTORY_SETUP=install
oc ibm-pak launch $CASE_NAME \
  --version $CASE_VERSION \
  --namespace ${NAMESPACE} \
  --inventory install \
  --action preInstall \
  --tolerance 1 \
  --args "-n ${NAMESPACE} -h <DB_worker_host> -l <true/false>"
Note: For Red Hat OpenShift Service on Amazon Web Services (ROSA), avoid labeling by setting -l to false.
The pre-install script supports these parameters:
Table 1. Parameters for preInstall.sh
- -n or --i-namespace (Mandatory): Guardium Insights OpenShift namespace (this value must be 10 or fewer characters).
- -h or --host-datanodes (Mandatory): Specify the comma-delimited host names of the data nodes that you designate for data computation (you can determine the host names by running oc get nodes). Important: When you manage Hardware cluster requirements, use the larger set of Guardium Insights nodes as your data nodes. To determine which node has the most free requests, issue the oc describe nodes command and then look in the Allocation section.
- -l or --label-datanodes (Mandatory): If you specify true, the data nodes are labeled as dedicated for data service usage. If you specify false, labeling is skipped. The default value is true.
- -t or --taint-datanodes (Optional): If you specify true, the data nodes are tainted and dedicated for data service usage. If you specify false, tainting is skipped. Do not use false to skip tainting for production deployments.
- -k or --ingress-keystore (Optional): The path of the TLS key that is associated with the Guardium Insights application domain. If you supply a custom ingress, provide the path to its key file. This file can contain only newline (\n) delimiters. If you do not supply a custom ingress, a default of none is assumed. For more information, see Domain name and TLS certificates.
- -f or --ingress-cert (Optional): The path of the TLS certificate that is associated with the Guardium Insights application domain. If you supply a custom ingress, provide the path to its cert file. This file can contain only newline (\n) delimiters. If you do not include this, a default of none is assumed. For more information, see Domain name and TLS certificates.
- -c or --ingress-ca (Optional): The path of the custom TLS certificate authority that is associated with the Guardium Insights application domain. If you supply a custom ingress, provide the path to its certificate authority (CA) file. This file can contain only newline (\n) delimiters. If you do not include this, a default of none is assumed. For more information, see Domain name and TLS certificates.
- -q or --custom-scc (Optional, Version 3.4): If you specify true, Guardium Insights pods use a custom scc with a default name of gi-odf-scc. If you pass in another value, it applies that value as the scc name. For a list of available SCCs, run oc get scc. Guardium Insights normally runs in the restricted-v2 SCC. Defaults to false with no custom scc applied. Important: This parameter is only required for Guardium Insights installations that use the storage classes that are provided by OpenShift Data Foundation (ODF) Version 4.14 on non-ROSA and non-ARO deployments.
- -help or --help (Optional): Displays the preInstall.sh parameters.
- Install the catalogs.
oc ibm-pak launch $CASE_NAME \
  --version $CASE_VERSION \
  --inventory install \
  --action install-catalog \
  --namespace openshift-marketplace \
  --args "--inputDir ${LOCAL_CASE_DIR}"
To verify that the catalogs are installed, run the following command.
oc get pod -n openshift-marketplace
The output is similar to:
ibm-cloud-databases-redis-operator-catalog-ms97x   1/1   Running   0   12m
ibm-db2uoperator-catalog-k8pwc                     1/1   Running   0   13m
- Install the operators.
oc ibm-pak launch $CASE_NAME \
  --version $CASE_VERSION \
  --inventory install \
  --action install-operator \
  --namespace ${NAMESPACE} \
  --args "--registry cp.icr.io --user ${CP_REPO_USER} --pass ${CP_REPO_PASS} --secret ibm-entitlement-key --inputDir ${LOCAL_CASE_DIR}"
- Verify that the operators are installed by running the following command.
oc get pods -n $NAMESPACE
The output is similar to:
NAME                                                  READY   STATUS    RESTARTS   AGE
db2u-day2-ops-controller-manager-5488d5c844-vvhgt     1/1     Running   0          24h
db2u-operator-manager-5fc886d4bc-wwcrv                1/1     Running   0          24h
ibm-cloud-databases-redis-operator-6d668d7b88-z7fzh   1/1     Running   0          24h
ibm-guardium-insights-operator-75d6c489fd-qfkss       1/1     Running   0          24h
mongodb-kubernetes-operator-856bc86746-lfk69          1/1     Running   0          24h