Configuring the cluster resources
You can configure the password authentication for the cluster nodes, and define the resource specifications for each node on the Secure Service Container partitions. The resource specification includes the number of CPUs, memory size, port range, and network settings.
This procedure is intended for users with the cloud administrator role.
Before you begin
- Check that you have completed the steps in Installing the Secure Service Container for IBM Cloud Private CLI tool.
- Check that you have created at least 200 GB for the data pool on each Secure Service Container partition by following the steps in Configuring Secure Service Container storage.
- Check that you have the following configuration data for each Secure Service Container partition (also known as an LPAR), which you received from the IBM Z or LinuxONE system administrator:
  - IP address
  - Master user ID and password
On the x86 server, complete the following steps as a root user.
Go to the `config` directory where you extracted the Secure Service Container for IBM Cloud Private archive file, and update the `hosts` file to configure the password authentication for cluster nodes. In the following `hosts` example file, the Master user ID in the login setting of the Secure Service Container partition with IP address `10.152.151.105` is set to `ssc_master_user` and the Master password to `ssc_master_password`.

```
[master]
10.152.151.100 ansible_user="root" ansible_ssh_pass="root_user_password" ansible_ssh_common_args="-oPubkeyAuthentication=no" ansible_become="true"

[worker]
10.152.151.105 rest_user="ssc_master_user" rest_pass="ssc_master_password"
```
The `[master]` section contains the x86 server details: the IP address, username, and password for the IBM Cloud Private master installation.
The `[worker]` section contains the Secure Service Container partition IP address, username, and password for the zAppliance REST API. If you use different Secure Service Container partitions to host the worker or proxy nodes, you must list each of the partitions under the `[worker]` section. The zAppliance REST API `rest_user` and `rest_pass` values are the user ID and password used to log in to the Secure Service Container partition and its UI, and are initially set in the login setting of the Secure Service Container partition as:
- Master user ID
- Master password
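The `hosts` file uses Ansible INI inventory syntax. As a rough sanity check before running the CLI, a short Python sketch can confirm that each `[worker]` entry carries the REST API credentials. The parser below is an illustration written for this example file, not part of the CLI tool:

```python
import re
import shlex

def parse_inventory(text):
    """Parse a minimal Ansible-style INI inventory: [section] headers
    followed by lines of the form '<ip> key="value" key="value" ...'."""
    sections = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r"\[(\w+)\]$", line)
        if m:
            current = m.group(1)
            sections[current] = []
            continue
        parts = shlex.split(line)  # shlex strips the quotes around values
        host = {"ip": parts[0]}
        for kv in parts[1:]:
            key, _, value = kv.partition("=")
            host[key] = value
        sections[current].append(host)
    return sections

example = '''
[master]
10.152.151.100 ansible_user="root" ansible_ssh_pass="root_user_password" ansible_ssh_common_args="-oPubkeyAuthentication=no" ansible_become="true"

[worker]
10.152.151.105 rest_user="ssc_master_user" rest_pass="ssc_master_password"
'''

inv = parse_inventory(example)
# Every worker entry must carry the zAppliance REST API credentials.
assert all("rest_user" in h and "rest_pass" in h for h in inv["worker"])
print(inv["worker"][0]["ip"])  # -> 10.152.151.105
```

A check like this catches a missing `rest_user` or `rest_pass` before the CLI attempts to contact the partition.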
Update the `ssc4icp-config.yaml` file to configure the nodes on the Secure Service Container partition with the required CPU, memory, port range, and network specifications by using a specified template.
The following `ssc4icp-config.yaml` example file shows that one cluster `DemoCluster` contains 2 worker nodes with resources specified in `template1` on the Secure Service Container partition with IP address `10.152.151.105`, and 1 proxy node with resources specified in `template2` on the same Secure Service Container partition. See ssc4icp-config.yaml examples for more examples of the cluster configuration.
```
cluster:
  name: "DemoCluster"
  datapool: "exists"
  masterconfig:
    internal_ips: ['192.168.0.251']
    subnet: "192.168.0.0/24"
  LPARS:
  - ipaddress: '10.152.151.105'
    containers:
    - template: "template2"
      count: 1
      internal_ips: ['192.168.0.254']
      proxy_external_ips: ['172.16.0.4']
  - ipaddress: '10.152.151.105'
    containers:
    - template: "template1"
      count: 2
      internal_ips: ['192.168.0.252','192.168.0.253']
template1:
  name: "worker"
  type: "WORKER"
  cpu: "4"
  memory: "4098"
  port_range: '15000'
  root_storage: "60G"
  icp_storage: "140G"
  internal_network:
    subnet: "192.168.0.0/24"
    gateway: "192.168.0.1"
    parent: "encf700"
template2:
  name: "proxy"
  type: "PROXY"
  cpu: "3"
  memory: "1024"
  port_range: '16000'
  root_storage: "60G"
  icp_storage: "140G"
  internal_network:
    subnet: "192.168.0.0/24"
    gateway: "192.168.0.1"
    parent: "encf700"
  proxy_external_network:
    subnet: "172.16.0.0/24"
    gateway: "172.16.0.1"
    parent: "encf900"
```
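Assuming the file has been parsed into a plain dictionary (for example with PyYAML's `yaml.safe_load`), several of the constraints on this file can be checked mechanically. The following sketch is illustrative only, not part of the CLI tool, and `check_config` is a hypothetical helper:

```python
# The dict below mirrors the example ssc4icp-config.yaml after parsing
# (assumption: loaded via yaml.safe_load or equivalent).
config = {
    "cluster": {
        "name": "DemoCluster",
        "datapool": "exists",
        "masterconfig": {"internal_ips": ["192.168.0.251"],
                         "subnet": "192.168.0.0/24"},
        "LPARS": [
            {"ipaddress": "10.152.151.105",
             "containers": [{"template": "template2", "count": 1,
                             "internal_ips": ["192.168.0.254"],
                             "proxy_external_ips": ["172.16.0.4"]}]},
            {"ipaddress": "10.152.151.105",
             "containers": [{"template": "template1", "count": 2,
                             "internal_ips": ["192.168.0.252",
                                              "192.168.0.253"]}]},
        ],
    },
    "template1": {"name": "worker", "type": "WORKER", "cpu": "4",
                  "memory": "4098", "port_range": "15000",
                  "root_storage": "60G", "icp_storage": "140G"},
    "template2": {"name": "proxy", "type": "PROXY", "cpu": "3",
                  "memory": "1024", "port_range": "16000",
                  "root_storage": "60G", "icp_storage": "140G"},
}

def check_config(cfg):
    """Return a list of problems found in a parsed cluster config."""
    problems = []
    for lpar in cfg["cluster"]["LPARS"]:
        for c in lpar["containers"]:
            # count must be a bare integer in the YAML, not a quoted string
            if not isinstance(c["count"], int):
                problems.append("count must be an unquoted integer")
            # one internal IP is needed per node created from the template
            if len(c["internal_ips"]) != c["count"]:
                problems.append("internal_ips must list one IP per node")
    for key, tpl in cfg.items():
        if key == "cluster":
            continue
        if tpl["type"] not in ("WORKER", "PROXY"):
            problems.append(f"{key}: type must be WORKER or PROXY")
        if len(tpl["name"]) > 20:
            problems.append(f"{key}: node name longer than 20 characters")
    return problems

print(check_config(config))  # -> []
```

Running such a check before invoking the CLI surfaces common mistakes, such as quoting the `count` value or listing fewer internal IP addresses than nodes.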
`datapool` set to `"exists"` defines that the quotagroup still exists after the containers are deleted. If no value is set for the `datapool` parameter, the CLI deletes the quotagroup after the uninstallation.
`masterconfig` defines the network configurations for the master node.
`internal_ips` defines an array of IP addresses, one for each worker and proxy node. See Configuring the network for proxy and worker nodes for information on how to plan the IP addresses for your worker and proxy nodes.
`proxy_external_ips` defines the external IP addresses for the proxy node. External workloads can use the value of `proxy_external_ips` to access the proxy node.
`count` defines the number of nodes that will be created on the partition. Note that the value of `count` is an integer and must not be enclosed in quotation marks.
`cpu` defines the number of CPU cores to be assigned to the node.
`memory` defines the memory size (in MB) to be assigned to the node.
`type` defines the type of node to be created by the CLI tool. The value must be either `WORKER` or `PROXY`.
`name` defines the name of the proxy or worker node. The maximum length of a node name is 20 characters.
`internal_network` defines the subnet, gateway, and parent network interface settings of the worker or proxy node.
`proxy_external_network` defines the external subnet, gateway, and parent network interface settings of the proxy node.
`parent` defines the parent network device name that data traffic physically goes through on the node. See Configuring Secure Service Container network devices for instructions on how to get those values.
`port_range` defines the starting port number on each Secure Service Container partition to be assigned to each node.
`root_storage` defines the size of storage (G for GB or M for MB) allocated to the root file system. It must be set to at least what IBM Cloud Private requires: IBM Cloud Private stores temporary files under the root file system during the installation. For example, IBM Cloud Private 3.1.2 requires 50 GB under the root file system, so the `root_storage` parameter must be set to 50 GB plus an adequate buffer for the operating system on the node itself.
`icp_storage` defines the size of the storage (G for GB or M for MB) allocated to the IBM Cloud Private node runtime. It must be set to at least the sum of the directory sizes used by a node at runtime, as specified by the IBM Cloud Private system requirements. For example, based on the IBM Cloud Private 3.1.2 system requirements, a node requires at runtime:
- at least 100 GB under /var/lib/docker
- at least 10 GB under /var/lib/kubelet
Therefore, the `icp_storage` parameter must be set to 110 GB plus an adequate buffer to run custom Kubernetes applications. The default value is 140 GB.
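The two storage minimums can be derived from the IBM Cloud Private 3.1.2 figures quoted above. In this sketch the buffer sizes are illustrative assumptions chosen so the totals match the values in the example file:

```python
# Minimum sizing figures for IBM Cloud Private 3.1.2, per the text above.
ICP_INSTALL_TEMP_GB = 50   # temporary files under / during installation
DOCKER_GB = 100            # /var/lib/docker at runtime
KUBELET_GB = 10            # /var/lib/kubelet at runtime

# Assumed buffers (illustrative, not prescribed by the documentation).
OS_BUFFER_GB = 10          # headroom for the node operating system
APP_BUFFER_GB = 30         # headroom for custom Kubernetes applications

root_storage_gb = ICP_INSTALL_TEMP_GB + OS_BUFFER_GB     # matches "60G" in the example
icp_storage_gb = DOCKER_GB + KUBELET_GB + APP_BUFFER_GB  # matches the 140G default
print(f"root_storage: {root_storage_gb}G, icp_storage: {icp_storage_gb}G")
```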
You can now create worker and proxy nodes on the Secure Service Container partitions by following the steps in the Creating the cluster nodes topic.