Installing IBM Storage Scale in a network restricted (air gap) setup with the cloudkit
Learn how to install IBM Storage Scale in an air gap setup.
Planning
To run the cloudkit with the air gap installation option, you must first meet several prerequisites and complete the required configurations.
Two virtual machines are required to set up an air gap environment. These machines serve distinct roles: the cloudkit installer node and the staging node.
cloudkit installer node prerequisites
- The cloudkit installer node is the node responsible for executing the cloudkit binary. This node must be located in AWS and cannot be an on-premises node.
- The cloudkit installer node must be a virtual machine (VM) launched in a private subnet (with no NAT attachment) in an existing AWS VPC.
- The VM must have Instance Metadata Service Version 1 (IMDSv1) enabled.
- The security group associated with the VM must have egress (outbound) rules open to allow traffic to all destinations.
- Ensure this VPC has enough private IP addresses, as the same VPC is used for the IBM Storage Scale deployment.
- Ensure the IBM Storage Scale AMI is already created.
Staging node prerequisites
- The staging node is the node that has public network access, either through an internet gateway or a NAT gateway.
- This node is used to download the SE package and prerequisite packages.
- This node must be able to reach the cloudkit installer node (through an S3 bucket, VPC peering, or a public subnet within the same VPC as the cloudkit installer node) to copy the packages.
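Before copying anything, it can help to confirm that the staging node can actually reach the cloudkit installer node over the network. The helper below is a sketch and is not part of the cloudkit tooling; the host address and port are placeholders for your environment.

```shell
# Hypothetical reachability check (not part of cloudkit): returns 0 if a
# TCP connection to the given host/port can be opened, nonzero otherwise.
check_reachable() {
    local host="$1" port="${2:-22}"
    # /dev/tcp is a bash pseudo-device; the redirection succeeds only if
    # the TCP connection is established. timeout guards against hangs.
    if timeout 5 bash -c ": < /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
    else
        echo "cannot reach ${host}:${port}" >&2
        return 1
    fi
}

# Example (placeholder address): check_reachable 10.0.1.25 22
```

If the check fails for the installer node's private IP, verify the VPC peering, S3, or subnet routing arrangement described above before proceeding.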
Configure the VPC endpoints
Virtual Private Cloud (VPC) endpoints offer a private connection between a VPC and other AWS services (such as EC2), allowing traffic to be routed over the AWS network instead of over the internet.
In air gap mode, the cloudkit requires EC2, STS, KMS, and SNS endpoints to be configured in the VPC. Follow the steps below to configure them:
- Log in to the AWS Management Console and navigate to the VPC dashboard.
- In the navigation pane, select Endpoints and click the Create Endpoint button.
- Choose the EC2, STS, KMS, and SNS services for creating endpoints.
- Select the VPC and subnet that are used to spin up the cloudkit installer VM to create the endpoint.
- Select the cloudkit installer VM security group to use the endpoints.
- Ensure that the DNS name and IPv4 settings are enabled.
- Ensure that the VPC endpoint policy is set to allow full access.
- Review the endpoint settings and click Create Endpoint.
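Where the AWS CLI is available (for example, on the staging node), the console steps above can be sketched as a loop over the four services. The block below is a dry run that only prints the commands it would issue; the region, VPC, subnet, and security group IDs are placeholders you must replace before removing the echo.

```shell
# Dry-run sketch (not from the cloudkit tooling): build the AWS CLI commands
# that would create the four interface endpoints. All IDs are placeholders.
REGION="us-east-1"                      # placeholder: your AWS region
VPC_ID="vpc-0123456789abcdef0"          # placeholder: VPC of the installer VM
SUBNET_ID="subnet-0123456789abcdef0"    # placeholder: installer VM subnet
SG_ID="sg-0123456789abcdef0"            # placeholder: installer VM security group

endpoint_cmds=""
for svc in ec2 sts kms sns; do
    cmd="aws ec2 create-vpc-endpoint --vpc-id $VPC_ID \
--vpc-endpoint-type Interface --service-name com.amazonaws.${REGION}.${svc} \
--subnet-ids $SUBNET_ID --security-group-ids $SG_ID --private-dns-enabled"
    echo "$cmd"                         # dry run: print instead of executing
    endpoint_cmds="${endpoint_cmds}${cmd}
"
done
```

Running the printed commands requires AWS credentials with VPC endpoint permissions; the `--private-dns-enabled` flag corresponds to the DNS name setting mentioned in the console steps.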
Configure the terraform provider(s) to work in isolated or air gap mode
Perform the following steps:
- On the air gap cloudkit installer node, create the directory layout below. Note: The versions included in the directory paths shown below should be treated as the minimum versions required.
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/4.61.0/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/null/3.2.1/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/template/2.2.0/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/time/0.9.1/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/local/2.4.0/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/hashicorp/tls/4.0.4/linux_amd64/
mkdir -p $HOME/.terraform.d/plugins/registry.terraform.io/integrations/github/5.20.0/linux_amd64/
- Copy the terraform providers to their corresponding directories. As an example, you can use the staging VM to download the providers as shown below:
wget https://releases.hashicorp.com/terraform-provider-aws/4.61.0/terraform-provider-aws_4.61.0_linux_amd64.zip
unzip terraform-provider-aws_4.61.0_linux_amd64.zip
rm -rf terraform-provider-aws_4.61.0_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-null/3.2.1/terraform-provider-null_3.2.1_linux_amd64.zip
unzip terraform-provider-null_3.2.1_linux_amd64.zip
rm -rf terraform-provider-null_3.2.1_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-template/2.2.0/terraform-provider-template_2.2.0_linux_amd64.zip
unzip terraform-provider-template_2.2.0_linux_amd64.zip
rm -rf terraform-provider-template_2.2.0_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-time/0.9.1/terraform-provider-time_0.9.1_linux_amd64.zip
unzip terraform-provider-time_0.9.1_linux_amd64.zip
rm -rf terraform-provider-time_0.9.1_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-local/2.4.0/terraform-provider-local_2.4.0_linux_amd64.zip
unzip terraform-provider-local_2.4.0_linux_amd64.zip
rm -rf terraform-provider-local_2.4.0_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-tls/4.0.4/terraform-provider-tls_4.0.4_linux_amd64.zip
unzip terraform-provider-tls_4.0.4_linux_amd64.zip
rm -rf terraform-provider-tls_4.0.4_linux_amd64.zip
wget https://releases.hashicorp.com/terraform-provider-github/5.20.0/terraform-provider-github_5.20.0_linux_amd64.zip
unzip terraform-provider-github_5.20.0_linux_amd64.zip
rm -rf terraform-provider-github_5.20.0_linux_amd64.zip
Copy these providers from the staging node to the air gapped cloudkit installer node.
cd providers
scp terraform-provider-aws_v4.61.0_x5 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/4.61.0/linux_amd64/
scp terraform-provider-null_v3.2.1_x5 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/null/3.2.1/linux_amd64/
scp terraform-provider-template_v2.2.0_x4 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/template/2.2.0/linux_amd64/
scp terraform-provider-time_v0.9.1_x5 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/time/0.9.1/linux_amd64/
scp terraform-provider-local_v2.4.0_x5 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/local/2.4.0/linux_amd64/
scp terraform-provider-tls_v4.0.4_x5 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/hashicorp/tls/4.0.4/linux_amd64/
scp terraform-provider-github_v5.20.0 <cloudkit-vm>:/root/.terraform.d/plugins/registry.terraform.io/integrations/github/5.20.0/linux_amd64/
- Create a CLI configuration file (.terraformrc) in the user's home directory.
# cat .terraformrc
disable_checkpoint = true
disable_checkpoint_signature = false
provider_installation {
  filesystem_mirror {
    path = "/root/.terraform.d/plugins/"
    include = ["hashicorp/aws", "hashicorp/null", "hashicorp/template", "hashicorp/time", "hashicorp/tls", "hashicorp/local", "integrations/github"]
  }
}
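The per-provider directory creation and download steps above are repetitive; under the same provider and version assumptions, they can be driven from a single list. The block below is a sketch: it creates the directory layout and only prints the download commands (dry run), so it is safe to run anywhere. On the staging node, drop the echo to actually download.

```shell
# Sketch: derive the mkdir and wget steps from one provider/version list.
# Versions mirror the minimums listed in the steps above.
PLUGIN_ROOT="$HOME/.terraform.d/plugins/registry.terraform.io"
providers="hashicorp/aws/4.61.0 hashicorp/null/3.2.1 hashicorp/template/2.2.0 \
hashicorp/time/0.9.1 hashicorp/local/2.4.0 hashicorp/tls/4.0.4 \
integrations/github/5.20.0"

for p in $providers; do
    ns="${p%%/*}"          # namespace, e.g. hashicorp
    rest="${p#*/}"
    name="${rest%%/*}"     # provider name, e.g. aws
    ver="${rest#*/}"       # version, e.g. 4.61.0
    mkdir -p "${PLUGIN_ROOT}/${ns}/${name}/${ver}/linux_amd64/"
    # Dry run: print the download command; remove the echo on the staging node.
    echo wget "https://releases.hashicorp.com/terraform-provider-${name}/${ver}/terraform-provider-${name}_${ver}_linux_amd64.zip"
done
```

Note that the release URLs follow the releases.hashicorp.com naming convention used in the download commands above; the GitHub provider binary is hosted there as well.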
Extract the SE package on the air gapped cloudkit installer node
Copy the IBM Storage Scale SE package to the air gapped
cloudkit installer node (you can copy it from the staging node and then further copy it to the
cloudkit installer VM). Extract the SE package on the installer node to obtain the cloudkit binary
and the associated scale components.
- Install the rpm dependencies (createrepo_c, python3-pip, sqlite-3, and pip3 dependencies, such as jmespath and ansible).
- Disable the strict host key checking.
echo "Host *" > ~/.ssh/config
echo " StrictHostKeyChecking no" >> ~/.ssh/config
chmod 400 ~/.ssh/config
Verify that all the dependencies are met using:
./cloudkit init --airgap
Note: To install the ansible or toolkit dependencies, you can go to /usr/lpp/mmfs/5.1.8.0/ansible-toolkit/ansible/ansible_library/rhel9 and execute pip3 install ansible-5.9.0.tar.gz -f ./ --no-index --no-compile --disable-pip-version-check --no-cache-dir.
Executing the cloudkit using the air gap option
Perform the following steps to execute the cloudkit using the air gap option:
- To create a cluster in air gap mode, use the following command:
./cloudkit create cluster --airgap
- To grant filesystem access to a compute cluster in air gap mode, use the following command:
./cloudkit grant filesystem --airgap
- To revoke filesystem access from a compute cluster in air gap mode, use the following command:
./cloudkit revoke filesystem --airgap
Limitations
- The create repository and create image functionalities are not supported in air gap mode.
- The grant and revoke filesystem functionalities are supported for compute clusters only.