Adding or removing hosts

Add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook.

Prerequisites

  • A running IBM Storage Ceph cluster.

  • Register the nodes to the CDN and attach subscriptions.

  • Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.

  • Installation of the cephadm-ansible package on the Ansible administration node.

  • New hosts have the storage cluster’s public SSH key.

    For more information about copying the storage cluster's public SSH keys to new hosts, see Adding hosts.
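
    For example, assuming the cluster's public SSH key is in the default /etc/ceph/ceph.pub location on the bootstrap node (host01 in the examples below), you can copy it to a new host as the root user:

    [root@host01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02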

Procedure

  1. Use the following procedure to add new hosts to the cluster:

    1. Log in to the Ansible administration node.

    2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

      Example

      [ansible@admin ~]$ cd /usr/share/cephadm-ansible
    3. Add the new hosts and labels to the Ansible inventory file.

      Syntax

      sudo vi INVENTORY_FILE
      
      NEW_HOST1 labels="['LABEL1', 'LABEL2']"
      NEW_HOST2 labels="['LABEL1', 'LABEL2']"
      NEW_HOST3 labels="['LABEL1']"
      
      [admin]
      ADMIN_HOST monitor_address=MONITOR_IP_ADDRESS labels="['ADMIN_LABEL', 'LABEL1', 'LABEL2']"

      Example

      [ansible@admin cephadm-ansible]$ sudo vi hosts
      
      host02 labels="['mon', 'mgr']"
      host03 labels="['mon', 'mgr']"
      host04 labels="['osd']"
      host05 labels="['osd']"
      host06 labels="['osd']"
      
      [admin]
      host01 monitor_address=10.10.128.68 labels="['_admin', 'mon', 'mgr']"
      Note: If you have previously added the new hosts to the Ansible inventory file and run the preflight playbook on the hosts, skip to step 5.
    4. Run the preflight playbook with the --limit option:

      Syntax

      ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit NEWHOST

      Example

      [ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit host02

      The preflight playbook installs podman, lvm2, chrony, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
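
      If you are adding more than one host, the --limit option also accepts a comma-separated list of hosts, so the preflight playbook runs only on the new hosts. For example:

      [ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit host02,host03,host04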

    5. Create a playbook to add the new hosts to the cluster:

      Syntax

      sudo vi PLAYBOOK_FILENAME.yml
      
      ---
      - name: PLAY_NAME
        hosts: HOSTS_OR_HOST_GROUPS
        become: USE_ELEVATED_PRIVILEGES
        gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
        tasks:
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: "{{ ansible_facts[hostname] }}"
              address: "{{ ansible_facts[default_ipv4][address] }}"
              labels: "{{ labels }}"
            delegate_to: HOST_TO_DELEGATE_TASK_TO
      
          - name: NAME_OF_TASK
            when: inventory_hostname in groups['admin']
            ansible.builtin.shell:
              cmd: CEPH_COMMAND_TO_RUN
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            when: inventory_hostname in groups['admin']
            debug:
              msg: "{{ REGISTER_NAME.stdout }}"
      Note: By default, Ansible runs all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster.

      Example

      [ansible@admin cephadm-ansible]$ sudo vi add-hosts.yml
      
      ---
      - name: add additional hosts to the cluster
        hosts: all
        become: true
        gather_facts: true
        tasks:
          - name: add hosts to the cluster
            ceph_orch_host:
              name: "{{ ansible_facts['hostname'] }}"
              address: "{{ ansible_facts['default_ipv4']['address'] }}"
              labels: "{{ labels }}"
            delegate_to: host01
      
          - name: list hosts in the cluster
            when: inventory_hostname in groups['admin']
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: host_list
      
          - name: print current list of hosts
            when: inventory_hostname in groups['admin']
            debug:
              msg: "{{ host_list.stdout }}"

      In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts.
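
      Note: The labels lookup in this playbook assumes that every host in your inventory defines a labels variable. If some hosts do not, the task fails with an undefined variable error. One way to guard against this, assuming an empty label list is acceptable for those hosts, is to supply a default value with the Jinja2 default filter:

              labels: "{{ labels | default([]) }}"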

    6. Run the playbook to add additional hosts to the cluster:

      Syntax

      ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

      Example

      [ansible@admin cephadm-ansible]$ ansible-playbook -i hosts add-hosts.yml
  2. Use the following procedure to remove hosts from the cluster:

    1. Log in to the Ansible administration node.

    2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

      Example

      [ansible@admin ~]$ cd /usr/share/cephadm-ansible
    3. Create a playbook to remove a host or hosts from the cluster:

      Syntax

      sudo vi PLAYBOOK_FILENAME.yml
      
      ---
      - name: NAME_OF_PLAY
        hosts: ADMIN_HOST
        become: USE_ELEVATED_PRIVILEGES
        gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
        tasks:
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: HOST_TO_REMOVE
              state: STATE
      
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: HOST_TO_REMOVE
              state: STATE
            retries: NUMBER_OF_RETRIES
            delay: DELAY
            until: CONTINUE_UNTIL
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            debug:
              msg: "{{ REGISTER_NAME.stdout }}"

      Example

      [ansible@admin cephadm-ansible]$ sudo vi remove-hosts.yml
      
      ---
      - name: remove host
        hosts: host01
        become: true
        gather_facts: true
        tasks:
          - name: drain host07
            ceph_orch_host:
              name: host07
              state: drain
      
          - name: remove host from the cluster
            ceph_orch_host:
              name: host07
              state: absent
            retries: 20
            delay: 1
            until: result is succeeded
            register: result
      
          - name: list hosts in the cluster
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: host_list

          - name: print current list of hosts
            debug:
              msg: "{{ host_list.stdout }}"

      In this example, the playbook drains all daemons on host07, removes the host from the cluster, and displays a current list of hosts.
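
      Draining can take some time on hosts that run OSDs, which is why the removal task is retried until it succeeds. To follow the progress of the drain, you can check the OSD removal queue from the admin node, for example:

      [root@host01 ~]# ceph orch osd rm status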

    4. Run the playbook to remove the host from the cluster:

      Syntax

      ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

      Example

      [ansible@admin cephadm-ansible]$ ansible-playbook -i hosts remove-hosts.yml

Verification

  • Review the Ansible task output displaying the current list of hosts in the cluster:

    Example

    TASK [print current hosts] ******************************************************************************************************
    Friday 24 June 2022  14:52:40 -0400 (0:00:03.365)       0:02:31.702 ***********
    ok: [host01] =>
      msg: |-
        HOST    ADDR           LABELS          STATUS
        host01  10.10.128.68   _admin mon mgr
        host02  10.10.128.69   mon mgr
        host03  10.10.128.70   mon mgr
        host04  10.10.128.71   osd
        host05  10.10.128.72   osd
        host06  10.10.128.73   osd
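
    After the remove-hosts.yml playbook runs, the removed host (host07 in the example above) no longer appears in this list. You can also confirm the removal directly on the admin node, for example:

    [root@host01 ~]# ceph orch host ls | grep host07

    No output means the host is no longer part of the cluster.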