How To
Summary
How do you install Red Hat OpenShift for use with Cloud Pak for Security (CP4S)?
Environment
Steps
Distributed Installation
WARNING: These instructions are for test environments only, not for production environments. This example uses bare-minimum specifications for test use.
NOTE: These instructions are provided as a courtesy. If you have issues, do not open a case; instead, use our forum to ask questions. These instructions demonstrate one way to set up a test environment; there are many other ways, which are documented by Red Hat.
- Gateway Host
Server that contains the firewall, router, and DHCP/DNS.
- Download PFSense and upload it to VSphere
Note: If on a shared system, it might already be uploaded
- In VSphere, expand the CD/DVD menu for the gateway host
- Select Datastore ISO File and check the Connected box
- Click Browse and navigate to the PFSense ISO
- Select Connect At Power On
- Select OK
- Power on the Virtual Machine and make it boot from the CD/DVD
- Configure IP addresses for WAN and LAN
- Open a browser and open the PFSense WebUI at https://{PFSense IP}
- Log in with the default user admin and password pfsense
- Click Firewall, then NAT
- Select Outbound
- Select Manual Outbound NAT rule generation
- Click Services, then DHCP Server
- Select Enable DHCP server
- Configure your LAN Range
- Configure the DNS servers
- Type in the Domain name
WARNING: Assign a cluster name and a domain that are consistent. These settings cannot be changed later or the cluster might stop working.
- In the TFTP Server field, enter the Services host IP
- Select Enable network booting
- In Next Server, enter the Services host IP
- In Default BIOS file name, type pxelinux.0
- In Root path, type /var/lib/tftpboot
- Configure static IP addresses for the MAC addresses of the Bootstrap, Controller, and Worker VMs
- Select Save
- Services Host
Used for BIND, the load balancer, the network repository (HTTP), the TFTP server, and the NFS server.
- Expand the CD/DVD menu
- Select Datastore ISO File and check the Connected box
- Click Browse and navigate to the CentOS or RHEL ISO
- Select Connect At Power On
- Select OK
- Power on the virtual machine and make it boot from the CD/DVD
- Follow installation prompts
- Once installation is complete, log in and create the working directories:
mkdir -p /root/openshift-upi
mkdir -p /root/openshift-upi/openshift-images/
mkdir -p /root/openshift-upi/files-openshift-install/
mkdir -p /root/openshift-upi/setupdir-openshift-install/
mkdir -p /root/openshift-upi/services-backup/
mkdir -p /root/cp4s-1.7.2/
mkdir -p /root/cp4s-1.7.2/cli-tools/
mkdir -p /root/cp4s-1.7.2/tls-certs/
mkdir -p /root/cp4s-1.7.2/nfs-dynamic-provisioner/
- Download the RHCOS images
Note: Version 4.7.13 is used in this example.
cd /root/openshift-upi/openshift-images/
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-initramfs.x86_64.img
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-kernel-x86_64
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-rootfs.x86_64.img
- Install BIND and create the zones directory:
yum install -y bind bind-utils setroubleshoot-server jq
mkdir -p /etc/named/zones/
vi /etc/named/named.conf
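The next three edits produce an options section like this minimal sketch; the LAN subnet placeholder and the include file path are illustrative assumptions, not values taken from this article:
options {
    ...
    # listen-on-v6 port 53 { ::1; };
    allow-query { localhost; {LAN_IPs}/24; };
    ...
};
include "/etc/named/zones.conf";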
Change allow-query to look like { localhost; {LAN_IPs}/24; };
Comment out the listen-on-v6 port
Add an include for the file that holds the zone definitions created in the next step, for example include "/etc/named/zones.conf"; (BIND includes files, not directories)
- Create the zone configuration,
BIND9-zones-config:
zone "oc.intranet" {
type master;
file "/etc/named/zones/db.oc.intranet";
};
zone "ops.intranet" {
type master;
file "/etc/named/zones/db.ops.intranet";
};
zone "{redacted}.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.{redacted}.12";
};
zone "{redacted}.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.{redacted}.14";
};
- Create the BIND9 zone db file,
BIND9-zones-db.oc.intranet.txt:
$TTL 604800
@ IN SOA ns1.oc.intranet. admin.oc.intranet. (
2021120801 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ; Negative Cache TTL
)
; name servers - NS records
IN NS ns1.oc.intranet.
IN MX 10 smtp.oc.intranet.
; name servers - A records
ns1 IN A {REDACTED}.100
smtp IN A {REDACTED}.100
;
cp4s-services IN A {REDACTED}.100
cp4s-auth IN A {REDACTED}.101
; OpenShift Container Platform Cluster - A records
cp4s-bootstrap IN A {REDACTED}.20
cp4s-ocp-master01 IN A {REDACTED}.1
cp4s-ocp-master02 IN A {REDACTED}.2
cp4s-ocp-master03 IN A {REDACTED}.3
cp4s-ocp-worker01 IN A {REDACTED}.11
cp4s-ocp-worker02 IN A {REDACTED}.12
cp4s-ocp-worker03 IN A {REDACTED}.13
cp4s-ocp-worker04 IN A {REDACTED}.14
cp4s-ocp-worker05 IN A {REDACTED}.15
cp4s-ocp-worker06 IN A {REDACTED}.16
; OpenShift internal cluster IPs - A records
api.cp4s IN A {REDACTED}.100
api-int.cp4s IN A {REDACTED}.100
helper.cp4s IN A {REDACTED}.100
*.apps.cp4s IN A {REDACTED}.100
etcd-0.cp4s IN A {REDACTED}.1
etcd-1.cp4s IN A {REDACTED}.2
etcd-2.cp4s IN A {REDACTED}.3
console-openshift-console.apps.cp4s IN A {REDACTED}.100
oauth-openshift.apps.cp4s IN A {REDACTED}.100
; OpenShift internal cluster IPs - SRV records
_etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-0.cp4s.oc.intranet.
_etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-1.cp4s.oc.intranet.
_etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-2.cp4s.oc.intranet.
; CP4S
cp4s01 IN A {REDACTED}.100
_7898374FD37B7F1AFDE6EA7404A9E1E8.cp4s01.oc.intranet 3600 IN CNAME D52734166E4C217889EC6FEB795E7704.678511DC4A1F293CB88FAFD1CC24EC72.5589d348952283d.comodoca.com
- Create the BIND9 zone db file,
BIND9-reversed-zones-db.{REDACTED}.12.txt:
$TTL 604800
@ IN SOA ns1.oc.intranet. admin.oc.intranet. (
2021112001 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ; Negative Cache TTL
)
; name servers - NS records
IN NS ns1.oc.intranet.
; name servers - PTR records
100 IN PTR ns1.oc.intranet.
; API (Load Balancer host)
100 IN PTR api.cp4s.oc.intranet.
100 IN PTR api-int.cp4s.oc.intranet.
100 IN PTR helper.cp4s.oc.intranet.
; OpenShift Container Platform Cluster - PTR records
20 IN PTR cp4s-bootstrap.oc.intranet.
1 IN PTR cp4s-ocp-master01.oc.intranet.
2 IN PTR cp4s-ocp-master02.oc.intranet.
3 IN PTR cp4s-ocp-master03.oc.intranet.
11 IN PTR cp4s-ocp-worker01.oc.intranet.
12 IN PTR cp4s-ocp-worker02.oc.intranet.
13 IN PTR cp4s-ocp-worker03.oc.intranet.
14 IN PTR cp4s-ocp-worker04.oc.intranet.
15 IN PTR cp4s-ocp-worker05.oc.intranet.
16 IN PTR cp4s-ocp-worker06.oc.intranet.
- Create the BIND9 zone db file,
BIND9-reversed-zones-db.{REDACTED}.14.txt:
$TTL 604800
@ IN SOA ns1.ops.intranet. admin.ops.intranet. (
2021102301 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ; Negative Cache TTL
)
; name servers - NS records
IN NS ns1.ops.intranet.
; name servers - PTR records
100 IN PTR ns1.ops.intranet.
; OpenShift Container Platform Cluster - PTR records
100 IN PTR cp4s-services.ops.intranet.
101 IN PTR cp4s-auth.ops.intranet.
102 IN PTR cp4s-bastion.ops.intranet.
254 IN PTR cp4s-gateway.ops.intranet.
- Create the BIND9 zone db file,
BIND9-zones-db.ops.intranet.txt:
$TTL 604800
@ IN SOA ns1.ops.intranet. admin.ops.intranet. (
2021102304 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ; Negative Cache TTL
)
; name servers - NS records
IN NS ns1
; name servers - A records
ns1.ops.intranet. IN A {REDACTED}.100
cp4s-services IN A {REDACTED}.100
cp4s-auth IN A {REDACTED}.101
cp4s-bastion IN A {REDACTED}.102
cp4s-gateway IN A {REDACTED}.254
firewall-cmd --permanent --add-port={53/udp,53/tcp}
firewall-cmd --permanent --add-service={dns,rpc-bind}
firewall-cmd --reload
systemctl --now enable named
systemctl start named
systemctl status named
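Optionally, validate the configuration and zone files with BIND's own check tools before testing name resolution; the paths match the zone files created above, and the dig query checks one of the etcd SRV records:
named-checkconf /etc/named/named.conf
named-checkzone oc.intranet /etc/named/zones/db.oc.intranet
named-checkzone ops.intranet /etc/named/zones/db.ops.intranet
dig +short srv _etcd-server-ssl._tcp.cp4s.oc.intranet @localhost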
- Test with:
nslookup {FQDN_CP4S_Node}
- Install HAProxy:
yum install -y haproxy jq
- Modify /etc/haproxy/haproxy.cfg with the FQDN host entries from the DNS records configured above (a sketch follows)
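A minimal haproxy.cfg sketch, assuming the host names from the zone files above; the 9001 front end that forwards to the local httpd on port 8081 is an assumption based on the ports opened below and the curl test later in this document:
frontend ocp-api
    bind *:6443
    mode tcp
    default_backend ocp-api-be
backend ocp-api-be
    mode tcp
    balance roundrobin
    # remove the bootstrap entry after the bootstrap completes
    server bootstrap cp4s-bootstrap.oc.intranet:6443 check
    server master01 cp4s-ocp-master01.oc.intranet:6443 check
    server master02 cp4s-ocp-master02.oc.intranet:6443 check
    server master03 cp4s-ocp-master03.oc.intranet:6443 check
frontend ocp-machine-config
    bind *:22623
    mode tcp
    default_backend ocp-machine-config-be
backend ocp-machine-config-be
    mode tcp
    balance roundrobin
    server bootstrap cp4s-bootstrap.oc.intranet:22623 check
    server master01 cp4s-ocp-master01.oc.intranet:22623 check
    server master02 cp4s-ocp-master02.oc.intranet:22623 check
    server master03 cp4s-ocp-master03.oc.intranet:22623 check
frontend ocp-ingress-https
    bind *:443
    mode tcp
    default_backend ocp-ingress-https-be
backend ocp-ingress-https-be
    mode tcp
    balance roundrobin
    # add the remaining worker nodes the same way
    server worker01 cp4s-ocp-worker01.oc.intranet:443 check
    server worker02 cp4s-ocp-worker02.oc.intranet:443 check
    server worker03 cp4s-ocp-worker03.oc.intranet:443 check
frontend ignition-files
    bind *:9001
    mode tcp
    default_backend ignition-files-be
backend ignition-files-be
    mode tcp
    server services localhost:8081 check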
firewall-cmd --permanent --add-port={6443,8081,9001,22623}/tcp
firewall-cmd --reload
setsebool -P haproxy_connect_any 1
systemctl --now enable haproxy
systemctl start haproxy
- Install httpd:
yum install -y httpd
NOTE: This step is optional if the HTTPd service runs on another machine.
sed -i 's/Listen 80/Listen 8081/' /etc/httpd/conf/httpd.conf
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --permanent --add-port={8081,80,443}/tcp
firewall-cmd --reload
semanage port -m -t http_port_t -p tcp 8081
systemctl --now enable httpd
systemctl start httpd
cp -fv /root/openshift-upi/openshift-images/rhcos-live-{rootfs,kernel}* /var/www/html/
chmod a+rX -R /var/www/html/
chown apache:apache -R /var/www/html/
restorecon -R -v '/var/www/html/'
- Install and configure the NFS server:
yum install -y nfs-utils
mkdir -p /media/nfs/cp4s-storage
mkdir -p /media/nfs/projects/cp4s01/
echo '/media/nfs/projects/cp4s01/ 10.11.12.0/24(rw,sync,no_root_squash,no_wdelay,nohide,insecure)' >> /etc/exports
firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
firewall-cmd --reload
systemctl --now enable nfs-server
systemctl start nfs-server
showmount -e
- Install and configure the TFTP server:
yum install -y tftp-server syslinux
cp -fv /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp -v /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket
vi /etc/systemd/system/tftp-server.service
[Unit]
Description=Tftp Server
Requires=tftp-server.socket
Documentation=man:in.tftpd
[Service]
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
StandardInput=socket
[Install]
WantedBy=multi-user.target
Also=tftp-server.socket
vi /etc/systemd/system/tftp-server.socket
[Unit]
Description=Tftp Server Activation Socket
[Socket]
ListenDatagram=69
[Install]
WantedBy=sockets.target
systemctl daemon-reload
mkdir -p /var/lib/tftpboot/pxelinux.cfg/
cd /var/lib/tftpboot/
wget https://raw.githubusercontent.com/leoaaraujo/openshift_pxe_boot_menu/main/files/bg-ocp.png -O /var/lib/tftpboot/bg-ocp.png
cp /usr/share/syslinux/{chain.c32,ldlinux.c32,libcom32.c32,libutil.c32,mboot.c32,memdisk,menu.c32,pxelinux.0,vesamenu.c32} /var/lib/tftpboot/
cp -fv /root/openshift-upi/openshift-images/rhcos-live-{initramfs,kernel}* /var/lib/tftpboot/
vi /var/lib/tftpboot/pxelinux.cfg/default
- Paste in the PXE configuration; a minimal sketch follows.
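A minimal pxelinux.cfg/default sketch (the menu from the repository linked above is more elaborate). The {Services_host_IP} placeholder, the 8081 port from the httpd configuration, and /dev/sda as the install disk are assumptions:
DEFAULT menu.c32
PROMPT 0
TIMEOUT 120
MENU TITLE OpenShift 4.7 PXE Menu
LABEL bootstrap
  MENU LABEL Bootstrap
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{Services_host_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{Services_host_IP}:8081/bootstrap.ign
LABEL master
  MENU LABEL master
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{Services_host_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{Services_host_IP}:8081/master.ign
LABEL worker
  MENU LABEL worker
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{Services_host_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{Services_host_IP}:8081/worker.ign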
If SELinux blocks the TFTP daemon, build and load a local policy module from the audit log, then restore the file contexts and enable the service:
ausearch -c 'in.tftpd' --raw | audit2allow -M my-intftpd
semodule -X 300 -i my-intftpd.pp
restorecon -R -v '/var/lib/tftpboot/'
systemctl --now enable tftp-server
- Install CLI Tools
cd /root/openshift-upi/files-openshift-install/
ssh-keygen -t ed25519
NOTE: To avoid issues with other key types, ed25519 is the recommended type.
- Copy the private key to the jump server (one way is shown below)
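A minimal example, assuming the key was saved to the default path; the user and host names are placeholders:
scp /root/.ssh/id_ed25519 {user}@{jump_server}:.ssh/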
- WARNING: The remaining steps are for one-time use only and become invalid after 24 hours. If an error occurs, the installation folder must be deleted and re-created from scratch.
Download the install-config.yaml template
vi /root/openshift-upi/files-openshift-install/install-config.yaml
Change baseDomain from example.com to the domain used
Under metadata, change name from test to the cluster name to be used
In the pullSecret section, paste the content of the pull secret file inside single quotation marks
In the sshKey section, paste the content of the public SSH key file (created previously) inside single quotation marks
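A minimal install-config.yaml sketch for this layout, assuming the cluster name cp4s and base domain oc.intranet used in the DNS records above; the network values are the common documented defaults, not values from this article (for UPI, compute replicas is 0 because workers are PXE-booted manually):
apiVersion: v1
baseDomain: oc.intranet
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: cp4s
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'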
NOTE: The SSH key configured in install-config.yaml is injected into that user's home directory.
cd /root/openshift-upi/setupdir-openshift-install/
cp -fv /root/openshift-upi/files-openshift-install/install-config.yaml .
openshift-install create manifests
openshift-install create ignition-configs
chmod o+r *.ign
cp -fv *.ign /var/www/html/
chmod a+rX -R /var/www/
chown apache:apache -R /var/www/html/
restorecon -R -v '/var/www/html/'
- Test that the files are accessible:
curl http://$(hostname):9001/bootstrap.ign | jq -C . | head
- RHOCP Hosts
Distributed hosts that run CP4S. For now, we are installing RHOCP on them.
- Bootstrap Node
- In vSphere Client, under VMs and Templates, select your Bootstrap server
- Select Actions > Edit Settings
- Go to the tab VM Options
- Expand Boot Options
- Select Force BIOS setup
- Click OK
- Start the VM with the green arrow or Actions > Power > Power On
- In the Boot tab, move Network boot to the first position
- Select Save and Exit
- After PXE boot loads the menu, select Bootstrap
Note: The bootstrap updates itself to the latest release; regardless of the version provided, RHCOS updates itself by default.
Some errors are expected because components are not yet loaded.
The VM reboots two or more times. Allow at least 20 minutes for this process.
- After the restart is complete, the bootstrap shows the login prompt.
- From the services host:
ssh core@{bootstrap_FQDN} "journalctl -b -f -u release-image.service -u bootkube.service"
NOTE: core is the default user for RHCOS.
- Wait for the logs to display four or more created YAML files:
99_openshift-cluster-api_master-user-data-secret.yaml
99_openshift-cluster-api_worker-user-data-secret.yaml
99_openshift-machineconfig_99-master-ssh.yaml
99_openshift-machineconfig_99-worker-ssh.yaml
- From the services host, in the /root/openshift-upi/setupdir-openshift-install/ directory, run:
openshift-install wait-for bootstrap-complete --log-level=debug
- Wait for "INFO API v{version} up" to display
- Control Node
- In vSphere Client, under VMs and Templates, select your Control server
- Select Actions > Edit Settings
- Go to the tab VM Options
- Expand Boot Options
- Select Force BIOS setup
- Click OK
- Start the VM with the green arrow or Actions > Power > Power On
- In the Boot tab, move Network boot to the first position
- Select Save and Exit
- After PXE boot loads the menu, select master
Note: The bootstrap updates itself to the latest release; regardless of the version provided, RHCOS updates itself by default.
Some errors are expected because components are not yet loaded.
The VM reboots two or more times. Allow at least 20 minutes.
- Wait for the login prompt to display
- From the services host, in the /root/openshift-upi/setupdir-openshift-install/ directory, run:
openshift-install wait-for bootstrap-complete --log-level=debug
- Wait for "Bootstrap status: Complete"
- Repeat steps 1 - 13 for each Control Node in your environment
- Verify that all master nodes are in Ready state:
oc login -u kubeadmin -p $(cat /root/openshift-upi/setupdir-openshift-install/auth/kubeadmin-password) https://api.cp4s.oc.intranet:6443 --insecure-skip-tls-verify=true
oc get nodes
NOTE: The credentials for the default user, kubeadmin, are stored in /root/openshift-upi/setupdir-openshift-install/auth/
- Verify all Cluster Operators:
oc get clusteroperators
AVAILABLE = True
PROGRESSING = False
DEGRADED = False
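For reference, a healthy operator line from oc get clusteroperators looks like this (version and age are illustrative):
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.7.13    True        False         False      45m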
NOTE: The pods for each Cluster Operator (co) can take a few minutes to transition to the right state.
oc whoami --show-console
- Save the console URL in your notes
- Compute Node
WARNING: DO NOT INSTALL WORKER NODES IN PARALLEL!
- In vSphere Client, under VMs and Templates, select your Compute server
- Select Actions > Edit Settings
- Go to the tab VM Options
- Expand Boot Options
- Select Force BIOS setup
- Click OK
- Start the VM with the green arrow or Actions > Power > Power On
- In the Boot tab, move Network boot to the first position
- Select Save and Exit
- After PXE boot loads the menu, select worker
Note: The bootstrap updates itself to the latest release; regardless of the version provided, RHCOS updates itself by default.
Some errors are expected because components are not yet loaded.
The VM reboots two or more times. Allow at least 20 minutes.
- Once you see the login prompt, repeat steps 1 - 10 for each Compute Node
- Wait for all Compute Nodes to be in "Ready" state:
oc get nodes
If you don't see them, or they are in "NotReady" state, wait 2 minutes and poll again. New nodes cannot join until their pending certificate signing requests (CSRs) are approved:
oc get csr
The go-template prints only CSRs that have no status yet (that is, pending requests) and approves them:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
oc get nodes
Additional Information
Related Information
Document Location
Worldwide
Document Information
Modified date:
21 July 2023
UID
ibm16560922