IBM Support

Cloud Pak for Security: Creating a VMware Test Environment: Part 2 - Install OpenShift

How To


Summary

How do you install Red Hat OpenShift for use with Cloud Pak for Security (CP4S)?

Environment

Red Hat Enterprise Linux 7.9

Steps

Distributed Installation

WARNING: These instructions are for test environments only, not for production environments. This example uses bare-minimum specifications for basic test use.

NOTE: These instructions are provided as a courtesy. If you have issues, do not open a case; instead, ask questions in our forum. These instructions demonstrate one way to set up a test environment; Red Hat documents many other approaches.

  • Gateway Host

Server that hosts the firewall, router, and DHCP server (also used for DNS).

    1. Download pfSense and upload it to vSphere
      Note: On a shared system, it might already be uploaded
    2. In vSphere, expand the CD/DVD menu for the gateway host
    3. Select Datastore ISO File and check the Connected box
    4. Click Browse and navigate to the pfSense ISO
    5. Select Connect At Power On
    6. Select OK
    7. Power on the virtual machine and make it boot from the CD/DVD
    8. Configure IP addresses for WAN and LAN
    9. Open a browser and go to the pfSense WebUI at https://{pfSense IP}
    10. Log in with the default credentials: user admin, password pfsense
    11. Click Firewall, then NAT
    12. Select Outbound
    13. Select Manual Outbound NAT rule generation
    14. Click Services, then DHCP Server
    15. Select Enable DHCP server
    16. Configure your LAN Range
    17. Configure the DNS servers
    18. Type in the Domain name
      WARNING: Assign a cluster name and a domain that is consistent. These settings cannot be changed later or the cluster might stop working.
    19. In TFTP Server, enter the Services host IP
    20. Select Enable network booting
    21. In Next Server, enter the Services host IP
    22. In Default BIOS file name, type pxelinux.0
    23. In Root path, type /var/lib/tftpboot
    24. Configure static IP addresses for the MAC addresses of the Bootstrap, Controller, and Worker VMs
    25. Select Save
  • Services Host

    Used for BIND, the load balancer, the network repository, the TFTP server, and the NFS server.

    1. Expand the CD/DVD menu
    2. Select Datastore ISO File and check the Connected box
    3. Click Browse and navigate to the CentOS or RHEL ISO
    4. Select Connect At Power On
    5. Select OK
    6. Power on the virtual machine and make it boot from the CD/DVD
    7. Follow installation prompts
    8. Once installation is complete, log in
    9. mkdir -p /root/openshift-upi
    10. mkdir -p /root/openshift-upi/openshift-images/
    11. mkdir -p /root/openshift-upi/files-openshift-install/
    12. mkdir -p /root/openshift-upi/setupdir-openshift-install/
    13. mkdir -p /root/openshift-upi/services-backup/
    14. mkdir -p /root/cp4s-1.7.2/
    15. mkdir -p /root/cp4s-1.7.2/cli-tools/
    16. mkdir -p /root/cp4s-1.7.2/tls-certs/
    17. mkdir -p /root/cp4s-1.7.2/nfs-dynamic-provisioner/
    18. Download the container image
      Note: Version 4.7.13 is used in this example
    19. cd /root/openshift-upi/openshift-images/
    20. wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-initramfs.x86_64.img
    21. wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-kernel-x86_64
    22. wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/4.7.13/rhcos-live-rootfs.x86_64.img
    23. yum install -y bind bind-utils setroubleshoot-server jq
    24. mkdir -p /etc/named/zones/
    25. vi /etc/named/named.conf
      Change allow-query to look like { localhost; {LAN_IPs}/24; };
      Comment out the listen-on-v6 port line
      Add an include line that points to the zone configuration file in /etc/named/zones/
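As a reference, the relevant parts of /etc/named/named.conf after these edits can be sketched as follows. The 10.11.12.0/24 network and the named.conf.zones file name are assumptions for illustration; substitute your LAN range and the file that holds the zone definitions created in the next step:

```
options {
        listen-on port 53 { any; };
//      listen-on-v6 port 53 { ::1; };         <- commented out
        directory       "/var/named";
        allow-query     { localhost; 10.11.12.0/24; };   <- assumed LAN range
        recursion yes;
};

include "/etc/named/zones/named.conf.zones";   <- assumed file name for the zone definitions
```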
    26. Create the zone configuration, BIND9-zones-config:
      zone "oc.intranet" {
      type master;
      file "/etc/named/zones/db.oc.intranet";
      };
      zone "ops.intranet" {
      type master;
      file "/etc/named/zones/db.ops.intranet";
      };
      zone "{redacted}.10.in-addr.arpa" {
      type master;
      file "/etc/named/zones/db.{redacted}.12";
      };
      zone "{redacted}.10.in-addr.arpa" {
      type master;
      file "/etc/named/zones/db.{redacted}.14";
      };
    27. Create the BIND9 zone db file, BIND9-zones-db.oc.intranet.txt:
      $TTL 604800
      @ IN SOA ns1.oc.intranet. admin.oc.intranet. (
      2021120801 ; Serial
      604800 ; Refresh
      86400 ; Retry
      2419200 ; Expire
      604800 ; Negative Cache TTL
      )
      ; name servers - NS records
      IN NS ns1.oc.intranet.
      IN MX 10 smtp.oc.intranet.
      ; name servers - A records
      ns1 IN A {REDACTED}.100
      smtp IN A {REDACTED}.100
      ;
      cp4s-services IN A {REDACTED}.100
      cp4s-auth IN A {REDACTED}.101
      ; OpenShift Container Platform Cluster - A records
      cp4s-bootstrap IN A {REDACTED}.20
      cp4s-ocp-master01 IN A {REDACTED}.1
      cp4s-ocp-master02 IN A {REDACTED}.2
      cp4s-ocp-master03 IN A {REDACTED}.3
      cp4s-ocp-worker01 IN A {REDACTED}.11
      cp4s-ocp-worker02 IN A {REDACTED}.12
      cp4s-ocp-worker03 IN A {REDACTED}.13
      cp4s-ocp-worker04 IN A {REDACTED}.14
      cp4s-ocp-worker05 IN A {REDACTED}.15
      cp4s-ocp-worker06 IN A {REDACTED}.16
      ; OpenShift internal cluster IPs - A records
      api.cp4s IN A {REDACTED}.100
      api-int.cp4s IN A {REDACTED}.100
      helper.cp4s IN A {REDACTED}.100
      *.apps.cp4s IN A {REDACTED}.100
      etcd-0.cp4s IN A {REDACTED}.1
      etcd-1.cp4s IN A {REDACTED}.2
      etcd-2.cp4s IN A {REDACTED}.3
      console-openshift-console.apps.cp4s IN A {REDACTED}.100
      oauth-openshift.apps.cp4s IN A {REDACTED}.100
      ; OpenShift internal cluster IPs - SRV records
      _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-0.cp4s.oc.intranet.
      _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-1.cp4s.oc.intranet.
      _etcd-server-ssl._tcp 86400 IN SRV 0 10 2380 etcd-2.cp4s.oc.intranet.
      ; CP4S
      cp4s01 IN A {REDACTED}.100
      _7898374FD37B7F1AFDE6EA7404A9E1E8.cp4s01.oc.intranet 3600 IN CNAME D52734166E4C217889EC6FEB795E7704.678511DC4A1F293CB88FAFD1CC24EC72.5589d348952283d.comodoca.com
    28. Create the BIND9 zone db file, BIND9-reversed-zones-db.{REDACTED}.12.txt:
      $TTL 604800
      @ IN SOA ns1.oc.intranet. admin.oc.intranet. (
      2021112001 ; Serial
      604800 ; Refresh
      86400 ; Retry
      2419200 ; Expire
      604800 ; Negative Cache TTL
      )
      ; name servers - NS records
      IN NS ns1.oc.intranet.
      ; name servers - PTR records
      100 IN PTR ns1.oc.intranet.
      ; API (Load Balancer host)
      100 IN PTR api.cp4s.oc.intranet.
      100 IN PTR api-int.cp4s.oc.intranet.
      100 IN PTR helper.cp4s.oc.intranet.
      ; OpenShift Container Platform Cluster - PTR records
      20 IN PTR cp4s-bootstrap.oc.intranet.
      1 IN PTR cp4s-ocp-master01.oc.intranet.
      2 IN PTR cp4s-ocp-master02.oc.intranet.
      3 IN PTR cp4s-ocp-master03.oc.intranet.
      11 IN PTR cp4s-ocp-worker01.oc.intranet.
      12 IN PTR cp4s-ocp-worker02.oc.intranet.
      13 IN PTR cp4s-ocp-worker03.oc.intranet.
      14 IN PTR cp4s-ocp-worker04.oc.intranet.
      15 IN PTR cp4s-ocp-worker05.oc.intranet.
      16 IN PTR cp4s-ocp-worker06.oc.intranet.
    29. Create the BIND9 zone db file, BIND9-reversed-zones-db.{REDACTED}.14.txt:
      $TTL 604800
      @ IN SOA ns1.ops.intranet. admin.ops.intranet. (
      2021102301 ; Serial
      604800 ; Refresh
      86400 ; Retry
      2419200 ; Expire
      604800 ; Negative Cache TTL
      )
      ; name servers - NS records
      IN NS ns1.ops.intranet.
      ; name servers - PTR records
      100 IN PTR ns1.ops.intranet.
      ; OpenShift Container Platform Cluster - PTR records
      100 IN PTR cp4s-services.ops.intranet.
      101 IN PTR cp4s-auth.ops.intranet.
      102 IN PTR cp4s-bastion.ops.intranet.
      254 IN PTR cp4s-gateway.ops.intranet.
    30. Create the BIND9 zone db file, BIND9-zones-db.ops.intranet.txt:
      $TTL 604800
      @ IN SOA ns1.ops.intranet. admin.ops.intranet. (
      2021102304 ; Serial
      604800 ; Refresh
      86400 ; Retry
      2419200 ; Expire
      604800 ; Negative Cache TTL
      )
      ; name servers - NS records
      IN NS ns1
      ; name servers - A records
      ns1.ops.intranet. IN A {REDACTED}.100
      cp4s-services IN A {REDACTED}.100
      cp4s-auth IN A {REDACTED}.101
      cp4s-bastion IN A {REDACTED}.102
      cp4s-gateway IN A {REDACTED}.254
    31. firewall-cmd --permanent --add-port={53/udp,53/tcp}
    32. firewall-cmd --permanent --add-service={dns,rpc-bind}
    33. firewall-cmd --reload
    34. systemctl --now enable named
    35. systemctl start named
    36. systemctl status named
    37. Test with:
      nslookup {FQDN_CP4S_Node}
    38. yum install -y haproxy jq
    39. Modify /etc/haproxy/haproxy.cfg with the FQDN host entries configured in the DNS records
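A minimal /etc/haproxy/haproxy.cfg sketch under these assumptions: the API (6443) and machine config (22623) frontends balance across the bootstrap and master nodes, and the ingress frontends (80 and 443) balance across the workers. The host names follow the A records created earlier; trim the server lists to match your environment, and remove the bootstrap entries after the installation completes:

```
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend
backend api-backend
    mode tcp
    balance roundrobin
    server bootstrap cp4s-bootstrap.oc.intranet:6443 check
    server master01  cp4s-ocp-master01.oc.intranet:6443 check
    server master02  cp4s-ocp-master02.oc.intranet:6443 check
    server master03  cp4s-ocp-master03.oc.intranet:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config-backend
backend machine-config-backend
    mode tcp
    server bootstrap cp4s-bootstrap.oc.intranet:22623 check
    server master01  cp4s-ocp-master01.oc.intranet:22623 check
    server master02  cp4s-ocp-master02.oc.intranet:22623 check
    server master03  cp4s-ocp-master03.oc.intranet:22623 check

frontend ingress-http
    bind *:80
    mode tcp
    default_backend ingress-http-backend
backend ingress-http-backend
    mode tcp
    server worker01 cp4s-ocp-worker01.oc.intranet:80 check
    server worker02 cp4s-ocp-worker02.oc.intranet:80 check
    server worker03 cp4s-ocp-worker03.oc.intranet:80 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend
backend ingress-https-backend
    mode tcp
    server worker01 cp4s-ocp-worker01.oc.intranet:443 check
    server worker02 cp4s-ocp-worker02.oc.intranet:443 check
    server worker03 cp4s-ocp-worker03.oc.intranet:443 check
```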
    40. firewall-cmd --permanent --add-port={6443,8081,9001,22623}/tcp
    41. firewall-cmd --reload
    42. setsebool -P haproxy_connect_any 1
    43. systemctl --now enable haproxy
    44. systemctl start haproxy
    45. yum install -y httpd
    46. Change the HTTPd listen port to 8081:
      sed -i 's/Listen 80/Listen 8081/' /etc/httpd/conf/httpd.conf
      NOTE: This step is optional if the HTTPd service runs on another machine.
    47. firewall-cmd --permanent --add-service=http
    48. firewall-cmd --permanent --add-service=https
    49. firewall-cmd --permanent --add-port={8081,80,443}/tcp
    50. firewall-cmd --reload
    51. semanage port -m -t http_port_t -p tcp 8081
    52. systemctl --now enable httpd
    53. systemctl start httpd
    54. cp -fv /root/openshift-upi/openshift-images/rhcos-live-{rootfs,kernel}* /var/www/html/
    55. chmod a+rX -R /var/www/html/
    56. chown apache:apache -R /var/www/html/
    57. restorecon -R -v '/var/www/html/'
    58. yum install -y nfs-utils
    59. mkdir -p /media/nfs/cp4s-storage
    60. echo '/media/nfs/cp4s-storage 10.11.12.0/24(rw,sync,no_root_squash,no_wdelay,nohide,insecure)' >> /etc/exports
      NOTE: The exported path must match the directory created in the previous step, and there must be no space between the client specification and the options; with a space, the options apply to all hosts.
    61. firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
    62. firewall-cmd --reload
    63. systemctl --now enable nfs-server
    64. systemctl start nfs-server
    65. showmount -e
    66. yum install -y tftp-server
    67. cp -fv /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
    68. cp -v /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket
    69. vi /etc/systemd/system/tftp-server.service
      [Unit]
      Description=Tftp Server
      Requires=tftp-server.socket
      Documentation=man:in.tftpd

      [Service]
      ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
      StandardInput=socket

      [Install]
      WantedBy=multi-user.target
      Also=tftp-server.socket
    70. vi /etc/systemd/system/tftp-server.socket
      [Unit]
      Description=Tftp Server Activation Socket

      [Socket]
      ListenDatagram=69

      [Install]
      WantedBy=sockets.target
    71. mkdir -p /var/lib/tftpboot/pxelinux.cfg/
    72. cd /var/lib/tftpboot/
    73. wget https://raw.githubusercontent.com/leoaaraujo/openshift_pxe_boot_menu/main/files/bg-ocp.png -O /var/lib/tftpboot/bg-ocp.png
    74. cp /usr/share/syslinux/{chain.c32,ldlinux.c32,libcom32.c32,libutil.c32,mboot.c32,memdisk,menu.c32,pxelinux.0,vesamenu.c32} /var/lib/tftpboot/
    75. cp -fv /root/openshift-upi/openshift-images/rhcos-live-{initramfs,kernel}* /var/lib/tftpboot/
    76. vi /var/lib/tftpboot/pxelinux.cfg/default
      Paste in PXE-config
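As an illustration, /var/lib/tftpboot/pxelinux.cfg/default can be sketched like this, assuming the services host serves the images and ignition files over HTTP on port 8081 and the nodes install to /dev/sda; {SERVICES_IP} is a placeholder for the services host IP, and the file names match the 4.7.13 images downloaded earlier:

```
DEFAULT vesamenu.c32
TIMEOUT 120
MENU TITLE OpenShift 4.7 PXE Boot
MENU BACKGROUND bg-ocp.png

LABEL bootstrap
  MENU LABEL Bootstrap
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{SERVICES_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{SERVICES_IP}:8081/bootstrap.ign

LABEL master
  MENU LABEL master
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{SERVICES_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{SERVICES_IP}:8081/master.ign

LABEL worker
  MENU LABEL worker
  KERNEL rhcos-live-kernel-x86_64
  APPEND initrd=rhcos-live-initramfs.x86_64.img coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://{SERVICES_IP}:8081/rhcos-live-rootfs.x86_64.img coreos.inst.ignition_url=http://{SERVICES_IP}:8081/worker.ign
```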
    77. ausearch -c 'in.tftpd' --raw | audit2allow -M my-intftpd
    78. semodule -X 300 -i my-intftpd.pp
    79. restorecon -R -v '/var/lib/tftpboot/'
    80. Install CLI Tools
    81. cd /root/openshift-upi/files-openshift-install/
    82. ssh-keygen -t ed25519
      NOTE: To avoid issues with other types of keys, ed25519 is the recommended type.
    83. Copy the private key to the jump server
    84. WARNING: The artifacts created in the remaining steps are for one-time use only and become invalid after 24 hours. If an error occurs, the installation folder must be deleted and re-created from scratch.
      Download the install-config.yaml template
    85. vi /root/openshift-upi/files-openshift-install/install-config.yaml
      Change baseDomain from example.com to the domain you are using
      Under metadata, change name from test to the cluster name to be used
      In the pullSecret section, paste the content of the pull secret file inside single quotation marks
      In the sshKey section, paste the content of the public SSH key file (created earlier) inside single quotation marks
      The SSH key configured in install-config.yaml is injected into the core user's home directory on each node.
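After the edits, install-config.yaml can be sketched as below. The cluster name cp4s and base domain oc.intranet match the DNS records created earlier; the network CIDRs are the common defaults, and the pull secret and SSH key values are placeholders:

```yaml
apiVersion: v1
baseDomain: oc.intranet
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0          # UPI installs set 0; workers join when they PXE boot
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: cp4s           # cluster name; must stay consistent with the DNS zone
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '{PULL_SECRET}'
sshKey: '{SSH_PUBLIC_KEY}'
```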
    86. cd /root/openshift-upi/setupdir-openshift-install/
    87. cp -fv /root/openshift-upi/files-openshift-install/install-config.yaml .
    88. openshift-install create manifests
    89. openshift-install create ignition-configs
    90. chmod o+r *.ign
    91. cp -fv *.ign /var/www/html/
    92. chmod a+rX -R /var/www/
    93. chown apache:apache -R /var/www/html/
    94. restorecon -R -v '/var/www/html/'
    95. Test that the files are accessible:
      curl http://$(hostname):9001/bootstrap.ign | jq -C . | head
  • RHOCP Hosts

    Distributed hosts that set up and run CP4S. For now, we are installing RHOCP on them.

    • Boot Node
      1. In vSphere Client, under VMs and Templates, select your Bootstrap server
      2. Select Actions > Edit Settings
      3. Go to the tab VM Options
      4. Expand Boot Options
      5. Select Force BIOS setup
      6. Click OK
      7. Start the VM with the green arrow or Actions > Power > Power On
      8. In the boot tab, move Network boot to first instance
      9. Select Save and Exit
      10. After PXE boot loads the menu, select Bootstrap
        Note: Regardless of the version provided, RHCOS updates itself to the latest release by default.
        Some errors are expected while components are still loading.
        The VM reboots two or more times. Allow at least 20 minutes for this process.
      11. After the restart is complete, the bootstrap shows the login prompt.
      12. From the services host:
        ssh core@{bootstrap_FQDN} "journalctl -b -f -u release-image.service -u bootkube.service"
        NOTE: core is the default user for RHCOS.
      13. Wait for logs to display four or more created yaml files:
        99_openshift-cluster-api_master-user-data-secret.yaml
        99_openshift-cluster-api_worker-user-data-secret.yaml
        99_openshift-machineconfig_99-master-ssh.yaml
        99_openshift-machineconfig_99-worker-ssh.yaml
      14. From the services host, in /root/openshift-upi/setupdir-openshift-install/, run:
        openshift-install wait-for bootstrap-complete --log-level=debug
      15. Wait for "INFO API v{version} up" to display
    • Control Node
      1. In vSphere Client, under VMs and Templates, select your Control server
      2. Select Actions > Edit Settings
      3. Go to the tab VM Options
      4. Expand Boot Options
      5. Select Force BIOS setup
      6. Click OK
      7. Start the VM with the green arrow or Actions > Power > Power On
      8. In the boot tab, move Network boot to first instance
      9. Select Save and Exit
      10. After PXE boot loads the menu, select master
        Note: Regardless of the version provided, RHCOS updates itself to the latest release by default.
        Some errors are expected while components are still loading.
        The VM reboots two or more times. Allow at least 20 minutes.
      11. Wait for login prompt to display
      12. From the services host, in /root/openshift-upi/setupdir-openshift-install/, run:
        openshift-install wait-for bootstrap-complete --log-level=debug
      13. Wait for "Bootstrap status: Complete"
      14. Repeat steps 1 - 13 for each Control Node in your environment
      15. Verify that all master nodes are in Ready state:
        oc login -u kubeadmin -p $(cat /root/openshift-upi/setupdir-openshift-install/auth/kubeadmin-password) https://api.cp4s.oc.intranet:6443 --insecure-skip-tls-verify=true
        oc get nodes
        NOTE: The password for the default user, kubeadmin, is stored in /root/openshift-upi/setupdir-openshift-install/auth/
      16. Verify all Cluster Operators:
        oc get clusteroperators
        AVAILABLE = True
        PROGRESSING = False
        DEGRADED = False

        NOTE: The pods for each Cluster Operator (co) can take a few minutes to transition to the right state.
      17. oc whoami --show-console
      18. Save URL in notes
    • Compute Node

      WARNING: DO NOT INSTALL WORKER NODES IN PARALLEL!

      1. In vSphere Client, under VMs and Templates, select your Compute server
      2. Select Actions > Edit Settings
      3. Go to the tab VM Options
      4. Expand Boot Options
      5. Select Force BIOS setup
      6. Click OK
      7. Start the VM with the green arrow or Actions > Power > Power On
      8. In the boot tab, move Network boot to first instance
      9. Select Save and Exit
      10. After PXE boot loads the menu, select worker
        Note: Regardless of the version provided, RHCOS updates itself to the latest release by default.
        Some errors are expected while components are still loading.
        The VM reboots two or more times. Allow at least 20 minutes.
      11. Once you see the login prompt, repeat steps 1 - 10 for each Compute Node
      12. Wait for all Compute Nodes to be in "Ready" state:
        oc get nodes
        If you don't see them or they are in "NotReady" state, wait 2 minutes, then check for pending CSRs and approve them:
        oc get csr
        oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
        oc get nodes

Additional Information

Document Location

Worldwide



Document Information

More support for:
IBM Cloud Pak for Security

Component:
Documentation, Install, Openshift

Software version:
1.9.0

Document number:
6560922

Modified date:
21 July 2023

UID

ibm16560922
