Problems due to missing prerequisites
Use this information to ensure that prerequisites are met before using the installation toolkit for installation, deployment, and upgrade.
- Passwordless SSH setup
- Repository setup
- Firewall configuration
- CES IP address allocation
- Addition of CES IPs to /etc/hosts
Passwordless SSH setup
The installation toolkit performs verification during the precheck phase to ensure that passwordless SSH is set up correctly. You can manually verify and set up passwordless SSH as follows.
- Verify that passwordless SSH is set up by using the following commands.
- Verify that the user can log into the node successfully by using the host name of the node, without being prompted for any input, and that there are no warnings.
ssh HostNameofFirstNode
ssh HostNameofSecondNode
Repeat this on all nodes.
- Verify that the user can log into the node successfully by using the FQDN of the node, without being prompted for any input, and that there are no warnings.
ssh FQDNofFirstNode
ssh FQDNofSecondNode
Repeat this on all nodes.
- Verify that the user can log into the node successfully by using the IP address of the node, without being prompted for any input, and that there are no warnings.
ssh IPAddressofFirstNode
ssh IPAddressofSecondNode
Repeat this on all nodes.
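You can also script these checks. The following is a minimal sketch, assuming a hypothetical file named nodes.txt that lists the host name, FQDN, and IP address of every node, one entry per line. The BatchMode=yes option makes ssh fail instead of prompting, so any node that still requires input is reported as a failure.
# nodes.txt is a hypothetical file: one host name, FQDN, or IP address per line.
for node in $(cat nodes.txt); do
  if ssh -n -o BatchMode=yes -o ConnectTimeout=5 "$node" true; then
    echo "OK: $node"
  else
    echo "FAILED: $node"
  fi
done
Run the loop from every node against all entries to confirm that passwordless SSH works in every direction.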
- If needed, set up passwordless SSH by using the following commands.
Note: This is one of several possible ways of setting up passwordless SSH.
- Generate the SSH key.
ssh-keygen
Repeat this command on all nodes.
- Run the following commands.
ssh-copy-id FQDNofFirstNode
ssh-copy-id FQDNofSecondNode
Repeat this step on all nodes.
- Run the following commands.
ssh-copy-id HostNameofFirstNode
ssh-copy-id HostNameofSecondNode
Repeat this step on all nodes.
- Run the following commands.
ssh-copy-id IPAddressofFirstNode
ssh-copy-id IPAddressofSecondNode
Repeat this step on all nodes.
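The setup steps can be scripted in a similar way. The following is a minimal sketch, again assuming the hypothetical nodes.txt file; ssh-copy-id prompts for the remote password once for each node.
# Generate a key only if this node does not already have one.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to every entry in the hypothetical nodes.txt file.
for node in $(cat nodes.txt); do
  ssh-copy-id "$node"
done
Repeat the script on all nodes so that every node can reach every other node.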
Repository setup
- Verify that the repository is set up depending on your operating system. For example, verify that the yum repository is set up by using the following command on all cluster nodes.
yum repolist
This command should run cleanly with no errors if the yum repository is set up correctly.
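To run this check on every node in one pass, you can use a sketch like the following, again assuming the hypothetical nodes.txt file with one node per line. Scan the output for repository errors on each node.
for node in $(cat nodes.txt); do
  echo "=== $node ==="
  ssh -n "$node" yum repolist
done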
Firewall configuration
It is recommended that firewalls are in place to secure all nodes. For more information, see Securing the IBM Storage Scale system using firewall.
- If you need to open specific ports, use the following steps on Red Hat Enterprise Linux nodes.
- Check the firewall status.
systemctl status firewalld
- Open ports required by the installation toolkit.
firewall-cmd --permanent --add-port 8889/tcp
firewall-cmd --add-port 8889/tcp
firewall-cmd --permanent --add-port 10080/tcp
firewall-cmd --add-port 10080/tcp
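If you prefer to script these steps, the following minimal sketch, assuming firewalld is running on Red Hat Enterprise Linux nodes, opens both ports in the runtime and permanent configurations and then lists the open ports for verification.
# Open the ports required by the installation toolkit.
for port in 8889/tcp 10080/tcp; do
  firewall-cmd --add-port "$port"
  firewall-cmd --permanent --add-port "$port"
done
# Verify that the ports are now open.
firewall-cmd --list-ports
Run the script on each node where the installation toolkit requires these ports to be open.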
CES IP address allocation
As part of the deployment process, IBM Storage Scale checks routing on the cluster and applies CES IPs as aliases on each protocol node.
Example - Before deployment
The only address here is 192.168.251.161, which is the SSH address for the node. It is held by the eth0 adapter.
# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.251.161 netmask 255.255.254.0 broadcast 192.168.251.255
inet6 2002:90b:e006:84:250:56ff:fea5:1d86 prefixlen 64 scopeid 0x0<global>
inet6 fe80::250:56ff:fea5:1d86 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a5:1d:86 txqueuelen 1000 (Ethernet)
RX packets 1978638 bytes 157199595 (149.9 MiB)
RX errors 0 dropped 2291 overruns 0 frame 0
TX packets 30884 bytes 3918216 (3.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:a5:1d:86 brd ff:ff:ff:ff:ff:ff
inet 192.168.251.161/23 brd 192.168.251.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2002:90b:e006:84:250:56ff:fea5:1d86/64 scope global dynamic
valid_lft 2591875sec preferred_lft 604675sec
inet6 fe80::250:56ff:fea5:1d86/64 scope link
valid_lft forever preferred_lft forever
Example - After deployment
Now that the CES IP addresses exist, you can see that aliases called eth0:0 and eth0:1 have been created and that the CES IP addresses specific to this node have been tagged to them. This allows the SSH IP of the node to exist at the same time as the CES IP addresses on the same adapter, if necessary. In this example, 192.168.251.161 is the initial SSH IP. The CES IP 192.168.251.165 is aliased onto eth0:0 and the CES IP 192.168.251.166 is aliased onto eth0:1. This occurs on all protocol nodes that are assigned a CES IP address. NSD server nodes and any client nodes that do not have protocols installed on them do not get a CES IP.
Furthermore, during service actions or failovers, nodes that go down dynamically lose their alias IPs, and other nodes gain additional aliases such as eth0:1 and eth0:2 to hold all of the IPs passed to them from the down nodes.
# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.251.161 netmask 255.255.254.0 broadcast 192.168.251.255
inet6 2002:90b:e006:84:250:56ff:fea5:1d86 prefixlen 64 scopeid 0x0<global>
inet6 fe80::250:56ff:fea5:1d86 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a5:1d:86 txqueuelen 1000 (Ethernet)
RX packets 2909840 bytes 1022774886 (975.3 MiB)
RX errors 0 dropped 2349 overruns 0 frame 0
TX packets 712595 bytes 12619844288 (11.7 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.251.165 netmask 255.255.254.0 broadcast 192.168.251.255
ether 00:50:56:a5:1d:86 txqueuelen 1000 (Ethernet)
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.251.166 netmask 255.255.254.0 broadcast 192.168.251.255
ether 00:50:56:a5:1d:86 txqueuelen 1000 (Ethernet)
# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:a5:1d:86 brd ff:ff:ff:ff:ff:ff
inet 192.168.251.161/23 brd 192.168.251.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.251.165/23 brd 192.168.251.255 scope global secondary eth0:0
valid_lft forever preferred_lft forever
inet 192.168.251.166/23 brd 192.168.251.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
inet6 2002:90b:e006:84:250:56ff:fea5:1d86/64 scope global dynamic
valid_lft 2591838sec preferred_lft 604638sec
inet6 fe80::250:56ff:fea5:1d86/64 scope link
valid_lft forever preferred_lft forever
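To confirm the assignments after deployment, you can list the CES addresses and inspect the adapter, as in the following sketch. The eth0 adapter name is taken from this example; substitute your own.
# List the CES address assignments across the cluster.
mmces address list
# Show the aliased CES IPs on the local adapter; they appear as secondary addresses.
ip -4 addr show eth0 | grep secondary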
Addition of CES IPs to /etc/hosts
Although it is highly recommended that all CES IPs are maintained in a central DNS and that they are accessible by using both forward and reverse DNS lookup, there are times when this might not be possible. IBM Storage Scale always verifies that forward and reverse DNS lookup is possible. To satisfy this check without a central DNS server containing the CES IPs, you must add the CES IPs to /etc/hosts and create a host name for them within /etc/hosts. The following example shows how a cluster might have multiple networks, nodes, and IPs defined.
For example:
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# These are external addresses for GPFS
# Use these for ssh in. You can also use these to form your GPFS cluster if you choose
198.51.100.2 ss-deploy-cluster3-1.example.com ss-deploy-cluster3-1
198.51.100.4 ss-deploy-cluster3-2.example.com ss-deploy-cluster3-2
198.51.100.6 ss-deploy-cluster3-3.example.com ss-deploy-cluster3-3
198.51.100.9 ss-deploy-cluster3-4.example.com ss-deploy-cluster3-4
# These are addresses for the base adapter used to alias CES-IPs to.
# Do not use these as CES-IPs.
# You could use these for a gpfs cluster if you choose
# Or you could leave these unused as placeholders
203.0.113.7 ss-deploy-cluster3-1_ces.example.com ss-deploy-cluster3-1_ces
203.0.113.10 ss-deploy-cluster3-2_ces.example.com ss-deploy-cluster3-2_ces
203.0.113.12 ss-deploy-cluster3-3_ces.example.com ss-deploy-cluster3-3_ces
203.0.113.14 ss-deploy-cluster3-4_ces.example.com ss-deploy-cluster3-4_ces
# These are addresses to use for CES-IPs
203.0.113.17 ss-deploy-cluster3-ces.example.com ss-deploy-cluster3-ces
203.0.113.20 ss-deploy-cluster3-ces.example.com ss-deploy-cluster3-ces
203.0.113.21 ss-deploy-cluster3-ces.example.com ss-deploy-cluster3-ces
203.0.113.23 ss-deploy-cluster3-ces.example.com ss-deploy-cluster3-ces
In this example, the first two sets of addresses have unique host names, whereas the third set of addresses, which is associated with the CES IPs, does not. Alternatively, you could give each CES IP a unique host name, but this is an arbitrary decision because only the node itself can see its own /etc/hosts file. Therefore, these host names are not visible to external clients or nodes unless they also contain a mirror copy of the /etc/hosts file. The reason for including the CES IPs in the /etc/hosts file is solely to satisfy the IBM Storage Scale CES network verification checks. Without this, in cases where no DNS server is available, CES IPs cannot be added to a cluster.
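You can verify that the /etc/hosts entries satisfy the lookup checks with getent, which consults /etc/hosts as well as DNS. The following sketch uses the CES IPs and host name from the example above.
# Forward lookup: host name to IP address (returns the first matching entry).
getent hosts ss-deploy-cluster3-ces.example.com
# Reverse-style lookup: IP address to host name.
for ip in 203.0.113.17 203.0.113.20 203.0.113.21 203.0.113.23; do
  getent hosts "$ip"
done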