Configuring the cluster resources

You can configure password authentication for the cluster nodes, and define the resource specifications for each node on the Secure Service Container partitions. The resource specification includes the number of CPUs, the memory size, the port range, and the network settings.

This procedure is intended for users with the cloud administrator role.

Procedure

On the x86 server, complete the following steps as the root user.

  1. Go to the config directory where you extracted the Secure Service Container for IBM Cloud Private archive file, and update the hosts file to configure password authentication for the cluster nodes. In the following example hosts file, the Master user ID in the login settings of the Secure Service Container partition with IP address 10.152.151.105 is ssc_master_user, and the Master password is ssc_master_password.

    [master]
    10.152.151.100 ansible_user="root" ansible_ssh_pass="root_user_password" ansible_ssh_common_args="-oPubkeyAuthentication=no" ansible_become="true"
    [worker]
    10.152.151.105 rest_user="ssc_master_user" rest_pass="ssc_master_password"
    

    Where

    • The [master] section contains the x86 server details: the IP address, username, and password that are used for the IBM Cloud Private master installation.
    • The [worker] section contains the Secure Service Container partition IP address, and the username and password for the zAppliance REST API. If you use different Secure Service Container partitions to host the worker or proxy nodes, you must list each of the partitions under the [worker] section. The zAppliance REST API rest_user and rest_pass values are the user ID and password that are used to log in to the Secure Service Container partition UI, and are initially set in the login settings of the Secure Service Container partition as:
      • Master user ID
      • Master password
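
    Optionally, before you continue, confirm that each host listed in the hosts file is reachable from the x86 server. The following minimal Python sketch checks TCP connectivity only; it assumes that the master node accepts SSH on port 22 and that the Secure Service Container partition serves the zAppliance REST API over HTTPS on port 443, either of which might differ in your environment.

    import socket

    # Hosts and ports are taken from the example hosts file above;
    # adjust them to match your own inventory. Port 22 (SSH) for the
    # master and port 443 (HTTPS) for the REST API are assumptions.
    checks = [
        ("10.152.151.100", 22),   # [master] x86 server, SSH
        ("10.152.151.105", 443),  # [worker] partition, zAppliance REST API
    ]

    for host, port in checks:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} is reachable")
        except OSError as err:
            print(f"{host}:{port} is NOT reachable: {err}")
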
  2. Update the ssc4icp-config.yaml file to configure the nodes on the Secure Service Container partition with the required CPU, memory, port range, and network specifications by using the specified templates.

The following ssc4icp-config.yaml example file shows one cluster named DemoCluster that contains 2 worker nodes with the resources specified in template1 on the Secure Service Container partition with IP address 10.152.151.105, and 1 proxy node with the resources specified in template2 on the same Secure Service Container partition. See ssc4icp-config.yaml examples for more examples of the cluster configuration.

cluster:
    name: "DemoCluster"
    datapool: "exists"
    masterconfig:
      internal_ips: ['192.168.0.251']
      subnet: "192.168.0.0/24"
LPARS:
  - ipaddress: '10.152.151.105'
    containers:
      - template: "template2"
        count: 1
        internal_ips: ['192.168.0.254']
        proxy_external_ips: ['172.16.0.4']
  - ipaddress: '10.152.151.105'
    containers:
      - template: "template1"
        count: 2
        internal_ips: ['192.168.0.252','192.168.0.253']
template1:
     name: "worker"
     type: "WORKER"
     cpu: "4"
     memory: "4098"
     port_range: '15000'
     root_storage: "60G"
     icp_storage: "140G"
     internal_network:
       subnet: "192.168.0.0/24"
       gateway: "192.168.0.1"
       parent: "encf700"
template2:
     name: "proxy"
     type: "PROXY"
     cpu: "3"
     memory: "1024"
     port_range: '16000'
     root_storage: "60G"
     icp_storage: "140G"
     internal_network:
       subnet: "192.168.0.0/24"
       gateway: "192.168.0.1"
       parent: "encf700"
     proxy_external_network:
       subnet: "172.16.0.0/24"
       gateway: "172.16.0.1"
       parent: "encf900"

Next

You can now create the worker and proxy nodes on the Secure Service Container partitions by following the steps in the Creating the cluster nodes topic.