Node pre-deployment steps

Before installing the IBM Storage Ceph cluster, fulfill all of the following requirements.

Procedure

Perform the following steps to fulfill all requirements.

  1. Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool:
    subscription-manager register 
    subscription-manager subscribe --pool=8a8XXXXXX9e0
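    To confirm that the node is registered and attached to a valid subscription, you can optionally check the subscription status:
    subscription-manager status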
  2. Enable access to the following repositories for all the nodes in the Ceph cluster:
    • rhel-9-for-x86_64-baseos-rpms
    • rhel-9-for-x86_64-appstream-rpms
      subscription-manager repos --disable="*" --enable="rhel-9-for-x86_64-baseos-rpms" --enable="rhel-9-for-x86_64-appstream-rpms"
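      To confirm that only the intended repositories remain enabled, you can optionally list them:
      subscription-manager repos --list-enabled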
  3. Update the operating system RPMs to the latest version and reboot, if needed.
    dnf update -y
    reboot
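    To confirm that the system is fully updated after the reboot, you can optionally run dnf check-update; on an up-to-date system, the command lists no pending packages:
    dnf check-update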
  4. Select a node from the cluster to be your bootstrap node.
    ceph1 is the bootstrap node in this example. On the bootstrap node ceph1 only, enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-5-tools-for-rhel-9-x86_64-rpms repositories:
    subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-5-tools-for-rhel-9-x86_64-rpms"
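    To confirm that the additional repositories are active on the bootstrap node, you can optionally list the enabled repositories:
    dnf repolist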
  5. Configure the hostname using the bare/short hostname on all the hosts.
    hostnamectl set-hostname <short_name>
  6. Verify the hostname configuration for deploying IBM Storage Ceph with cephadm.
    hostname
    Example output: ceph1
  7. Modify the /etc/hosts file and add the FQDN entry for the 127.0.0.1 IP by setting the DOMAIN variable to your DNS domain name.
    DOMAIN="example.domain.com"
    
    cat <<EOF >/etc/hosts
    127.0.0.1 $(hostname).${DOMAIN} $(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1       $(hostname).${DOMAIN} $(hostname) localhost6 localhost6.localdomain6
    EOF
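    To confirm that the new entry resolves locally, you can optionally query the hosts database:
    getent hosts $(hostname)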
  8. Check the long hostname with the FQDN by using the hostname -f option.
    hostname -f
    Example output:
    ceph1.example.domain.com
    Note: To understand more about these required changes, see Fully qualified domain names vs bare host names within the Ceph product documentation.
  9. Run the following steps on the bootstrap node.
    ceph1 is the bootstrap node in this example.
    1. Install the cephadm-ansible RPM package:
      sudo dnf install -y cephadm-ansible
      Important: To run the Ansible playbooks, you must have passwordless SSH access to all the nodes that are configured in the IBM Storage Ceph cluster. Ensure that the configured user (for example, deployment-user) has root privileges to invoke the sudo command without needing a password.
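      If passwordless access is not yet in place, the following sketch shows one way to set it up. The key path matches the IdentityFile used in the next step, and the deployment-user account and host names are taken from the examples in this procedure; adjust them for your environment:
      # Generate a dedicated key pair (the path matches the IdentityFile in the next step)
      ssh-keygen -t ed25519 -f ~/.ssh/ceph.pem -N ""
      # Copy the public key to every node in the example inventory
      for host in ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7; do
          ssh-copy-id -i ~/.ssh/ceph.pem.pub deployment-user@${host}
      done
      # Run on each node: grant deployment-user passwordless sudo
      echo "deployment-user ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deployment-user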
    2. Use a custom SSH key.
      Configure the SSH config file of the selected user (for example, deployment-user) to specify the identity file that is used for connecting to the nodes over SSH, as shown in the following example:
      cat <<EOF > ~/.ssh/config
      Host ceph*
         User deployment-user
         IdentityFile ~/.ssh/ceph.pem
      EOF
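      Because SSH refuses a config file that is writable by other users, you can optionally tighten the permissions and test the connection; ceph2 is used here only as an example target:
      chmod 600 ~/.ssh/config
      ssh ceph2 hostname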
    3. Build the Ansible inventory.
      cat <<EOF > /usr/share/cephadm-ansible/inventory
      ceph1
      ceph2
      ceph3
      ceph4
      ceph5
      ceph6
      ceph7
      [admin]
      ceph1
      ceph4
      EOF
      Note: Here, the hosts ceph1 and ceph4, which belong to two different data centers, are configured as part of the [admin] group in the inventory file and are tagged as _admin by cephadm. Each of these admin nodes receives the admin Ceph keyring during the bootstrap process, so that when one data center is down, the cluster can still be managed by using the other available admin node.
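      To confirm that the inventory parses as intended, you can optionally render it with ansible-inventory; ceph1 and ceph4 should appear under the admin group:
      ansible-inventory -i /usr/share/cephadm-ansible/inventory --list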
    4. Verify that Ansible can access all nodes by using the ping module before running the preflight playbook.
      ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b
      Example output:
      ceph6 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph4 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph3 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph2 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph5 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph1 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
      ceph7 | SUCCESS => {
          "ansible_facts": {
              "discovered_interpreter_python": "/usr/libexec/platform-python"
          },
          "changed": false,
          "ping": "pong"
      }
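      If any host reports UNREACHABLE instead of SUCCESS, you can rerun the ping against that host alone with increased verbosity to inspect the underlying SSH error; ceph3 is used here only as an illustration:
      ansible -i /usr/share/cephadm-ansible/inventory -m ping ceph3 -b -vvv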
    5. Navigate to the /usr/share/cephadm-ansible directory.
    6. Run the cephadm-preflight.yml playbook with ceph_origin set to rhcs:
      ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
      The cephadm-preflight.yml Ansible playbook configures the IBM Storage Ceph dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible.
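      If you need to prepare only a subset of hosts, for example when adding nodes to an existing inventory later, the standard Ansible --limit option restricts the playbook run; this is an optional variation, with ceph2 used only as an illustration:
      ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit ceph2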

      For more information, see Installing > IBM Storage Ceph installation > Running the preflight playbook within the IBM Storage Ceph documentation.
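      After the playbook completes, you can optionally confirm that the packages installed by the preflight playbook are available on the nodes:
      cephadm version
      podman --version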