Creating initial resource groups
Setting up useful resource groups is essential to making full and efficient use of cluster capabilities. Follow these steps to create resource groups that set aside specific hosts for management duties and divide the remainder of your hosts based on maximum memory.
About this task
You have created your consumer tree based on your business needs and you have added most of your hosts to your cluster, but have not yet set up an extensive resource plan, created new resource groups, or modified default resource groups. You are preparing to customize the plan for your applications and want to divide your hosts by memory because you expect to run varied workload, with some requiring at least 1000 MB of maximum memory and others requiring very little. Dividing your hosts this way ensures the following:
- Access to hosts with the necessary amount of maximum memory.
- No need to wait for appropriate hosts to become available.
- Workload that requires very little memory does not get hosts with a large maximum memory.
At a glance:
- Plan your groups.
- Check the ManagementHosts resource group.
- Review and modify the primary host candidate list.
- Create new dynamic resource groups.
- Create new resource groups by host name.
- Assign the new resource groups to a consumer.
- Modify your resource plan for new resource groups.
- How to grow: Advanced resource group.
Procedure
- Plan your groups.
- Understand resource groups.
Resource groups are logical groups of hosts and provide a simple way of organizing and grouping resources (hosts) for convenience. Instead of creating policies for individual resources, you can create and apply them to an entire group.
The cluster administrator can define multiple resource groups, assign them to consumers, and configure a distinct resource plan for each group.
Resource groups are either specified by host name or by resource requirement using the select string.
By default, EGO comes configured with three resource groups: InternalResourceGroup, ManagementHosts, and ComputeHosts. InternalResourceGroup and ManagementHosts should be untouched, but ComputeHosts can be kept, modified, or deleted as required.
For more information, see Understanding resource groups.
- Gather the facts.
You need to know which hosts you have reserved as management hosts. You identified these hosts as part of the installation and configuration process. If you want to select different management hosts than the ones you originally chose:
- Uninstall and reinstall IBM® Spectrum Symphony on the compute hosts that you now want to designate as management hosts.
- Run egoconfig mghost.
The tag mg is assigned to the new management host to differentiate it from a compute host. The hosts you identify as management hosts are subsequently added to the ManagementHosts resource group.
Management hosts run the essential services that control and maintain your cluster. You therefore need powerful, stable computers that you can dedicate to management duties. Note that management hosts are expected to only run services, not to execute workload.
Ensure that you designate one of your management hosts as the primary host, and another one or two hosts as failover candidates to the primary (the number of failover candidates is up to you, and may depend on the size of your production cluster).
- Make a list of hosts that have been installed with the full package, and that have the tag mg assigned to them (from having run egoconfig mghost).
You should be able to get a list from the person who installed your cluster.
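If you have command-line access, you can also spot-check the list yourself. The following is a minimal sketch, assuming the egosh CLI is available on a host in the cluster and that you are logged on as the cluster administrator; the exact flags and output columns vary by version, so verify against your command reference:
egosh resource list -l    # long format; management hosts carry the mg tag among their attributes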
- Review the list of management hosts.
Ask yourself whether these are your most trusted hosts, with the reliability required to be responsible for the entire cluster.
- (Optional) Remove any listed management hosts you do not trust:
- If you have configured automatic startup during your cluster setup, run egoremoverc.sh.
Running this command prevents automatic startup when the host reboots, which keeps the host from being re-added dynamically to the cluster.
- Run egoconfig unsetmghost to remove the host from the management host group.
Running this command removes the host entry from ego.cluster.cluster_name.
- If the host is a primary candidate, run egoconfig masterlist to remove the host from the failover order.
- Restart the primary host to change the local host from a management host to a compute host, and for the cluster file to get read again.
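Taken together, the removal sequence from the steps above looks like the following sketch, run on the management host that you are demoting. The comma-separated argument to egoconfig masterlist is an assumption about the format; check your command reference:
egoremoverc.sh                      # prevent automatic startup when the host reboots
egoconfig unsetmghost               # remove the host entry from ego.cluster.cluster_name
egoconfig masterlist hostA,hostB    # primary candidates only: reset the failover order (assumed format)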
- (Optional) Designate different management hosts.
- For each Linux® host you wish to designate as a management host, including primary candidates, do the following:
- Run the egoconfig mghost command:
egoconfig mghost EGOshare
where EGOshare is the shared directory that contains important files such as configuration files to support primary host failover (once the egoconfig mghost command is run and the files are copied over).
For example, egoconfig mghost /share/ego
Note that the shared directory is the same for all management hosts.
- Set the environment on the local host so that $EGO_CONFDIR is set properly and the changes take effect.
Setting this environment variable changes $EGO_CONFDIR from a local to shared directory.
- Restart the primary host so that the cluster file gets read again.
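As a sketch, the full Linux sequence on each new management host looks like the following, assuming a default installation in which sourcing profile.platform from the top of the installation directory resets $EGO_CONFDIR (use cshrc.platform for csh shells); verify the path against your environment:
egoconfig mghost /share/ego     # copy configuration files to the shared directory
. $EGO_TOP/profile.platform     # reset the environment so $EGO_CONFDIR points to the shared directory
This is followed by the restart of the primary host described above.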
- For each Windows host you wish to designate as a management host, including the primary candidates, do the following:
- Run the egoconfig mghost command:
egoconfig mghost EGOshare domain_name\user_name password
where EGOshare is the shared directory that contains important files such as configuration files to support primary host failover (once the egoconfig mghost command is run and the files are copied over), user_name is the egoadmin account, and password is the egoadmin password.
For example, egoconfig mghost \\Hostx.mycompany.com\EGO\share mycompany.com\egoadmin mypasswd
Note: The shared directory is the same for all management hosts. Also, be sure to use a fully qualified domain name.
- Restart the primary host so that the cluster file gets read again.
- Recognize the default configurations.
To help orient you, here is a list of the default resource groups and resource plan components you see and work with in the cluster management console:
- Resource groups:
  - ComputeHosts (executes workload)
  - InternalResourceGroup (runs important EGO components and services)
  - ManagementHosts (runs important EGO components and services)
  In this tutorial, we work with the ComputeHosts resource group and create new resource groups.
- Resource plan (the resource group that displays first on the cluster management console is ComputeHosts): Only consumers registered to a selected resource group show. Select different resource groups to modify corresponding resource plans. In this tutorial, we update the resource plan to include the new resource group you create.
- Check the ManagementHosts resource group.
The ManagementHosts resource group is created during the installation and configuration process. Each time you install and configure the full package on a host, that host is statically added to the ManagementHosts resource group.
Ensure that the trusted hosts you identified in step 1b are the same as the hosts that were configured to be management hosts.
- Log in to the cluster management console as a cluster administrator.
- Click Resources > Resource Planning (Slot) > Resource Groups.
A list of all resource groups displays.
By default, your resource groups are ComputeHosts, InternalResourceGroup, and ManagementHosts.
- Click ManagementHosts.
The properties for ManagementHosts display.
CAUTION: Do not, under any circumstances, modify any of the ManagementHosts properties (except for the description). You could seriously damage your cluster.
- Note and compare the hosts listed in the Member hosts section.
The hosts that are members of the ManagementHosts resource group are listed here.
Do these hosts match the list of hosts you made in step 1b: Gather the facts? If not, contact the person in charge of installation and make sure each management host is configured properly.
You need the exact host names for the next step.
- Review and modify the primary host candidate list.
- Select System & Services > Cluster > Primary and Failovers.
A summary displays.
The primary host is the first host in the Primary Candidates list. Other host names may be listed as candidates (in the Primary Candidates list) or as available hosts (in the Available Hosts list).
- Review primary and candidates.
The primary host is the host listed first in the candidates column. All others under the candidate list should be eligible hosts that are also part of the ManagementHosts resource group.
- Check the host names against the list you made when you checked the ManagementHosts resource group.
- Use the controls to move hosts around. Add any hosts that you want as primary candidates into the candidates column in the order you want them to fail over.
You cannot remove the primary host.
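If you prefer the command line, the failover order can also be set with the egoconfig masterlist command mentioned earlier. A minimal sketch, run on a management host; the comma-separated format is an assumption to verify against your command reference:
egoconfig masterlist hostM,hostF1,hostF2    # the first name is the primary; the rest fail over in listed order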
- Create new dynamic resource groups.
Note: Ensure that workload is not running while you perform this task because it involves removing an existing resource group.
When you delete a resource group, those hosts are no longer assigned to a consumer. Therefore, complete this task before changing your resource plan for the first time. If you have modified the resource plan and want to save those changes, export the resource plan before starting this task.
You can create resource groups that automatically place all your compute hosts in two (or more) different resource groups. You can split your hosts up this way if some of the applications or workload you plan to run on the cluster have distinct or important memory requirements.
You can logically group hosts into resource groups based on any criteria that you find important to the applications and workload you intend to run. For example, you may wish to distinguish hosts based on OS type or CPU factor.
- Select Resources > Resource Planning (Slot) > Resource Groups.
A list of your existing resource groups displays.
By default, your resource groups are ComputeHosts, InternalResourceGroup, and ManagementHosts.
CAUTION: The InternalResourceGroup and ManagementHosts groups should never be deleted. They are special resource groups that contain hosts used for EGO services. The ComputeHosts resource group should not be deleted unless the hosts used by the out-of-box applications have been moved to another resource group.
- Click Global Actions > Create a Resource Group.
The resource group properties display.
- Fill in the resource group properties.
- Enter a name that describes the hosts that you are going to select for this group. In this example, we use maxmem_high.
- Do one of the following to define the number of slots per host: if the parameter EGO_ENABLE_SLOTS_EXPR=N is set in the ego.conf file, select 1 slot per CPU; otherwise, define the calculation for the number of slots based on maximum memory in the host (for example, a host with 2000 MB of maximum memory is counted as 2000 / 500 = 4 slots):
- Choose Number of slots per host is equal to.
- Select Maximum Memory from the resource list.
- Select / from the list of operators.
- Enter 500 in the text box.
- Make sure the resource selection method is Dynamic (Requirements).
- Under Hosts to Show in List, select Hosts filtered by resource requirement.
- In the text box for the resource requirement string, enter select(!mg && maxmem > 1000).
The select statement excludes any hosts belonging to the ManagementHosts resource group (!mg) and adds any non-management host with a maximum memory of 1001 MB or more (maxmem > 1000).
- Click Refresh Host List.
In the Member hosts section, a list of any hosts (as found in the current cluster) that meet the requirements you specified with the select string is generated.
- Review the hosts in the member section and make any modifications you need to the select string until the member list is correct.
Only hosts that currently match the requirements are displayed here. However, the list is dynamic. As you add hosts to the cluster that meet these requirements, they are automatically added to this resource group.
- Click Check for overlaps to make sure the member hosts do not belong to any other resource groups.
If you have overlaps, modify your selection string until overlaps no longer exist. Hosts must never overlap between resource groups. Having overlaps causes the hosts to be double-counted (or more) in the resource plan, resulting in recurring under-allocation of some consumers. The exception is with hosts listed in InternalResourceGroup: although all hosts in the cluster are listed here, they are not double-counted in the resource plan.
- Once you have no overlaps, click Create.
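For reference, the selection strings used throughout this tutorial all follow the same pattern, combining host attributes with logical operators:
select(!mg && maxmem > 1000)                 # non-management hosts with more than 1000 MB of maximum memory (maxmem_high)
select(!mg && !(maxmem > 1000))              # the complementary non-management hosts with 1000 MB or less (maxmem_low)
select(!mg && maxmem > 1000 && ncpus>=2)     # a further refinement by CPU count (see How to grow, below)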
- Click Resource Groups again.
A list of resource groups displays, including the maxmem_high group you just created.
- Create a second resource group.
Note: You can skip this step and go to step 5.
Follow the preceding steps with the following differences:
- Name the second resource group maxmem_low.
- Add the selection string select(!mg && !(maxmem > 1000)).
This resource group is now made up of any compute host not belonging to the ManagementHosts resource group, excluding hosts you specified for the maxmem_high resource group.
Specify one resource group that excludes all other resource groups or selection string requirements (specify using "not" (!)). That way, all your hosts fall into one resource group or another.
You have now deleted the ComputeHosts resource group and split all your hosts, except those belonging to the ManagementHosts resource group, into two new groups: one made up of hosts with memory over 1000 MB (maxmem_high) and one made up of all other hosts with memory of 1000 MB or less (maxmem_low).
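To illustrate with hypothetical values: a host reporting a maximum memory of 2048 MB matches only maxmem_high, a host reporting 512 MB matches only maxmem_low, and a host carrying the mg tag matches neither string, so every non-management host lands in exactly one of the two groups.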
- Create a new resource group by host name.
If you did not create two resource groups in the preceding step or did not include all hosts in one of the two resource groups, you can now create a resource group by listing host names. Complete this step to include any hosts that may not already be included in a dynamic resource group. Any new compute hosts that are later added to the cluster, and that you want to add to this resource group, must be manually added.
You must have already added most of your hosts to the cluster.
- Click Resources > Resource Planning (Slot) > Resource Groups.
- From the Global Actions drop-down list, select Create a Resource Group.
- Identify the new resource group on the Properties page:
- Specify a resource group name.
In this example, we use my_static.
Resource group names must adhere to the following naming rules:
- The resource group name can be a maximum of 64 characters.
- The resource group name must begin with a letter.
- The resource group name must contain only the following characters: 0-9, a-z, A-Z, -, or _.
- Include a description (maximum 200 characters) of the resource group.
- Leave the default setting of 1 slot per CPU for Workload Slots (this defines how many slots per host you would like to have the system count; unless you are an advanced user, do not change this setting).
- For Resource Selection Method, select Static (List of Names).
Static resource selection means that you are manually selecting specific hosts to belong to this resource group.
- Under Hosts to Show in List, select All hosts.
A list of all hosts that belong to your cluster displays.
- Review the hosts found in your cluster:
- Click Member hosts to expand the section and review the hosts found in your cluster.
- Review your member hosts and select the hosts you want using the check boxes.
If you select no member hosts, all hosts in your cluster are added to this resource group when you create it.
- Click Check for overlaps.
If any hosts overlap, remove them from this resource group or remove them from the overlapping resource group.
- Click Create.
- Assign the new resource groups to a consumer.
You must have already created the consumers that you want.
- Select Resources > Consumers.
- Select a consumer to assign the new resource group to.
- If you have already created your consumers by modifying the out-of-box structure, using the tree, locate and click the consumer to which you want to assign the new resource group.
- If you have not modified the consumer tree, click SampleApplications from the consumer tree pane to assign the new resource group to this consumer.
- Click Consumer Properties.
- Specify one or more resource groups to which this consumer should have access.
- Click Apply.
The Consumer Properties page updates and your changes are saved.
- Modify your resource plan for new resource groups.
If you know you intend to create more resource groups, do that first even if you do not know all the details of the resource groups.
Any time you add, modify, or delete a resource group, you must manage resource distribution for these resource groups using the resource plan.
- Click Resources > Resource Planning (Slot) > Resource Plan.
- Use the Resource Group drop-down menu to switch between resource groups and modify your resource plan details for each resource group.
Note: Resource groups that do not yet have consumers assigned to them do not appear in the menu. Consumers must first be assigned from the Resources > Consumers page.
Never make any changes to the ManagementHosts resource group in the resource plan.
- How to grow: Advanced resource group.
Now that you have basic resource groups (one for your management hosts and two or more for your compute hosts), you can begin to specialize by further splitting up one of the memory-based resource groups.
For example, if you know that an application you run requires not only machines with 1001 MB of available memory or more, but also two or more CPUs, you can create a new resource group (and then modify the existing maxmem_high resource group) to make these specific resources available to any consumer. The new resource group maxmemhighmultiCPU would have the selection string:
select(!mg && maxmem > 1000 && ncpus>=2)
You would then modify the existing resource group maxmem_high to read:
select(!mg && !(ncpus>=2) && maxmem > 1000)
As a result, the maxmem_high group uses only single-CPU hosts.
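To illustrate with hypothetical values: a host with 2048 MB of maximum memory and four CPUs now matches only maxmemhighmultiCPU, while a host with 2048 MB and a single CPU remains in maxmem_high, so the two groups still do not overlap.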