Node pre-deployment steps
Before installing the IBM Storage Ceph cluster, fulfill all of the following requirements.
Procedure
Perform the following steps to fulfill all requirements.
- Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool:
subscription-manager register
subscription-manager subscribe --pool=8a8XXXXXX9e0
- On all the nodes in the Ceph cluster, enable access to the following repositories:
rhel-9-for-x86_64-baseos-rpms
rhel-9-for-x86_64-appstream-rpms
subscription-manager repos --disable="*" --enable="rhel-9-for-x86_64-baseos-rpms" --enable="rhel-9-for-x86_64-appstream-rpms"
- Update the operating system RPMs to the latest version and reboot, if needed.
dnf update -y
reboot
- Select a node from the cluster to be your bootstrap node. ceph1 is the bootstrap node in this example. Only on the bootstrap node ceph1, enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-5-tools-for-rhel-9-x86_64-rpms repositories:
subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-5-tools-for-rhel-9-x86_64-rpms"
- Configure the hostname using the bare/short hostname on all the hosts.
hostnamectl set-hostname <short_name>
- Verify the hostname configuration for deploying IBM Storage Ceph with cephadm.
hostname
Example output:
ceph1
- Modify the /etc/hosts file and add the FQDN entry for the 127.0.0.1 IP by setting the DOMAIN variable with your DNS domain name.
DOMAIN="example.domain.com"
cat <<EOF >/etc/hosts
127.0.0.1 $(hostname).${DOMAIN} $(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 $(hostname).${DOMAIN} $(hostname) localhost6 localhost6.localdomain6
EOF
- Check the long hostname with the FQDN using the hostname -f option.
hostname -f
Example output:
ceph1.example.domain.com
Note: To understand more about these required changes, see Fully qualified domain names vs bare host names within the Ceph product documentation.
- Run the following steps on the bootstrap node. ceph1 is the bootstrap node in this example.
- Install the cephadm-ansible RPM package:
sudo dnf install -y cephadm-ansible
Important: To run the Ansible playbooks, you must have ssh passwordless access to all the nodes that are configured in the IBM Storage Ceph cluster. Ensure that the configured user (for example, deployment-user) has root privileges to invoke the sudo command without needing a password.
- Use a custom key. Configure the selected user's (for example, deployment-user) ssh config file to specify the id or key that will be used for connecting to the nodes via ssh.
cat <<EOF > ~/.ssh/config
Host ceph*
   User deployment-user
   IdentityFile ~/.ssh/ceph.pem
EOF
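The IdentityFile named in the ssh config must exist before the nodes can be reached. A minimal sketch for creating it, assuming the key path ~/.ssh/ceph.pem and the user deployment-user carried over from this example:

```shell
# Sketch: generate the dedicated key pair referenced by the ssh config
# in this example (~/.ssh/ceph.pem); skip generation if it already exists.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/ceph.pem ] || ssh-keygen -t ed25519 -N '' -f ~/.ssh/ceph.pem
chmod 600 ~/.ssh/ceph.pem
```

The public key then has to be copied to each node once, for example with ssh-copy-id -i ~/.ssh/ceph.pem.pub deployment-user@ceph1 (repeated per host), after which the Ansible ping verification should succeed without password prompts.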
- Build the Ansible inventory.
cat <<EOF > /usr/share/cephadm-ansible/inventory
ceph1
ceph2
ceph3
ceph4
ceph5
ceph6
ceph7
[admin]
ceph1
ceph4
EOF
Note: Here, the hosts (ceph1 and ceph4) belonging to two different data centers are configured as part of the [admin] group in the inventory file and are tagged as _admin by cephadm. Each of these admin nodes receives the admin Ceph keyring during the bootstrap process so that when one data center is down, you can check the cluster by using the other available admin node.
- Verify that ansible can access all nodes using the ping module before running the pre-flight playbook.
ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b
Example output:
ceph6 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph4 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph5 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ceph7 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
- Navigate to the /usr/share/cephadm-ansible directory.
- Run ansible-playbook with relative file paths.
ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
The preflight playbook configures the IBM Storage Ceph dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible.
For more information, see Installing > IBM Storage Ceph installation > Running the preflight playbook within the IBM Storage Ceph documentation.