Adding hosts
Bootstrapping the IBM Storage Ceph installation creates a working storage cluster, consisting of one Monitor daemon and one Manager daemon within the same container. As a storage administrator, you can add additional hosts to the storage cluster and configure them.
NOTE: For Red Hat Enterprise Linux 8, running the preflight playbook installs podman, lvm2, chrony, and cephadm on all hosts listed in the Ansible inventory file.
NOTE: For Red Hat Enterprise Linux 9, you must manually install podman, lvm2, chrony, and cephadm on all hosts and skip the steps for running Ansible playbooks, because the preflight playbook is not supported.
NOTE: When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons.
Syntax
# ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD
Example
# ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1
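If you prefer not to put credentials on the command line, cephadm can also read the login details from a JSON file. The following is a minimal sketch; the file name mylogin.json and its contents are assumptions reusing the values from the example above:
Example
# cat mylogin.json
{"url": "myregistry", "username": "myregistryusername", "password": "myregistrypassword1"}
# ceph cephadm registry-login -i mylogin.json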
Prerequisites
A running IBM Storage Ceph cluster.
Root-level access, or a user with sudo access, to all nodes in the storage cluster.
Nodes registered to the IBM subscription.
An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster (see the sketch after this list).
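How you create this Ansible user is environment-specific; the following is a minimal sketch in which the user name ceph-ansible and the host name host02 are illustrative assumptions, not requirements:
Example
[root@host02 ~]# useradd ceph-ansible
[root@host02 ~]# echo "ceph-ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ceph-ansible
[ansible@admin ~]$ ssh-copy-id ceph-ansible@host02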
Procedure
From the node that contains the admin keyring, install the storage cluster’s public SSH key in the root user’s authorized_keys file on the new host.
NOTE: In the following procedure, use either root, as indicated, or the user name with which the cluster was bootstrapped.
Syntax
ssh-copy-id -f -i /etc/ceph/ceph.pub USER@NEWHOST
Example
[root@host01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02
[root@host01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03
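Before continuing, you can optionally confirm that key-based login to the new hosts works; this check simply runs hostname over SSH and assumes the root user from the example above:
Example
[root@host01 ~]# ssh root@host02 hostname
host02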
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node.
Example
[ansible@admin ~]$ cd /usr/share/cephadm-ansible
From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:
Example
[ansible@admin ~]$ cat hosts
host02
host03
host04

[admin]
host01
NOTE: If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 4.
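Before running the playbook, you can optionally confirm that Ansible can reach every host in the inventory; this check uses Ansible's built-in ping module:
Example
[ansible@admin cephadm-ansible]$ ansible -i hosts all -m ping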
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit NEWHOST
Example
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit host02
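Because --limit accepts a comma-separated list of hosts, you can also run the preflight playbook against several new hosts in a single pass; this sketch assumes that host02 and host03 are both new:
Example
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit host02,host03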
The preflight playbook installs podman, lvm2, chrony, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
For Red Hat Enterprise Linux 9, install podman, lvm2, chrony, and cephadm manually:
Example
[root@host01 ~]# dnf install podman lvm2 chrony cephadm
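After the manual installation, you may want to confirm that the time service is active and that the cephadm binary is usable on the new host; a short sketch of both checks, assuming chronyd is the time daemon in use:
Example
[root@host01 ~]# systemctl enable --now chronyd
[root@host01 ~]# cephadm version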
From the bootstrap node, use the cephadm orchestrator to add the new host to the storage cluster:
Syntax
ceph orch host add NEWHOST
Example
[ceph: root@host01 /]# ceph orch host add host02
Added host 'host02' with addr '10.10.128.69'
[ceph: root@host01 /]# ceph orch host add host03
Added host 'host03' with addr '10.10.128.70'
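If you plan to target the new host for specific services later, you can attach a label to it once it is added; the label name mon here is purely illustrative:
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon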
Optional: You can also add nodes by IP address, either before or after you run the preflight playbook. If you do not have DNS configured in your storage cluster environment, you can add the hosts by IP address, along with the host names.
Syntax
ceph orch host add HOSTNAME IP_ADDRESS
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.69
Added host 'host02' with addr '10.10.128.69'
View the status of the storage cluster and verify that the new host has been added. In the output of the ceph orch host ls command, the STATUS of the hosts is blank.
Example
[ceph: root@host01 /]# ceph orch host ls
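The exact columns depend on the release, but the output resembles the following sketch; the address shown for host01 and its _admin label are assumptions based on a typical bootstrap:
HOST    ADDR          LABELS  STATUS
host01  10.10.128.68  _admin
host02  10.10.128.69
host03  10.10.128.70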