Setting up useful resource groups is essential to making full and efficient use of cluster capabilities. Following the steps below, you create resource groups that set aside specific hosts for management duties and divvy up the remainder of your hosts based on maximum memory.
Resource groups are logical groups of hosts. Resource groups provide a simple way of organizing and grouping resources (hosts) for convenience; instead of creating policies for individual resources, you can create and apply them to an entire group. Groups can be made of resources that satisfy a specific static requirement in terms of OS, memory, swap space, CPU factor, and so on, or that are explicitly listed by name.
The cluster administrator can define multiple resource groups, assign them to consumers, and configure a distinct resource plan for each group. For example:
Define multiple resource groups: A major benefit of defining resource groups is the flexibility to group your resources based on attributes that you specify. For example, if you run workload or use applications that need a Linux OS with at least 1000 MB of maximum memory, you can create a resource group that includes only resources meeting those requirements (see the example select string after this list).
Configure a resource plan based on individual resource groups: Tailoring the resource plan for each resource group requires you to complete several steps. These include adding the resource group to each desired top-level consumer (thereby making the resource group available for other sub-consumers within the branch), along with configuring ownership, enabling lending/borrowing, specifying share limits and share ratio, and assigning a consumer rank within the resource plan.
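To illustrate the first example above, a requirement such as a Linux OS with at least 1000 MB of maximum memory could be expressed as a select string. The maxmem attribute is the same one used later in this tutorial; the OS-type attribute and value shown here (type and LINUX86) are placeholders, since the exact attribute names and host-type values depend on how your cluster is configured:
select(type==LINUX86 && maxmem >= 1000)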
Resource groups generally fall into one of three categories:
Resource groups that include compute hosts with certain identifiable attributes a consumer may require in a requested resource (for example, resources with large amounts of memory; considered “dynamic”—new hosts added to the cluster that meet the requirements are automatically added to the resource group)
Resource groups that only include certain compute hosts (for example, so that specified resources are accessed by approved consumers; considered “static”—any new hosts added to the cluster have to be manually added to the resource group)
Resource groups that encompass management hosts only (reserved for running services, not a distributed workload; for example, the out-of-the-box “ManagementHosts” group)
Resource groups are either specified by host name or by resource requirement using the select string.
By default, EGO comes configured with three resource groups: InternalResourceGroup, ManagementHosts, and ComputeHosts. InternalResourceGroup and ManagementHosts should be left untouched, but ComputeHosts can be kept, modified, or deleted as required.
You need to know which hosts you have reserved as management hosts. You identified these hosts as part of the installation and configuration process. If you want to select different management hosts than the ones you originally chose, you must uninstall and then reinstall EGO on the compute hosts that you now want to designate as management hosts (a master host requires installing the full package), and then run egoconfig mghost. The tag mg is assigned to the new management host, in order to differentiate it from a compute host. The hosts you identify as management hosts are subsequently added to the ManagementHosts resource group.
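For reference, the command is run on the newly designated management host, as shown below. Depending on your version and failover configuration, egoconfig mghost may also require arguments (for example, a shared configuration directory), so check the egoconfig command reference before running it:
egoconfig mghost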
Management hosts run the essential services that control and maintain your cluster and you therefore need powerful, stable computers that you can dedicate to management duties. Note that management hosts are expected to run only services, not to execute workload.
Ensure that you designate one of your management hosts as the master host, and another one or two hosts as failover candidates for the master (the number of failover candidates is up to you, and may depend on the size of your production cluster).
To help orient you, here is a list of the default resource groups and resource plan components you see and work with in the Platform Management Console:
Resource groups: In this tutorial, we work with the ComputeHosts resource group and create new resource groups.
Resource plan (the default resource group shown when you open the page is ComputeHosts): Only consumers registered to the selected resource group are shown; select a different resource group to modify its resource plan. In this tutorial, we update the resource plan to include the new resource groups you create.
The ManagementHosts resource group is created during the installation and configuration process. Each time you install and configure the full package on a host, that host is statically added to the ManagementHosts resource group.
You need to ensure that the trusted hosts you identified in the section Gather the facts (above) are the same as the hosts that were configured to be management hosts.
You must be logged on to the Platform Management Console as a cluster administrator. You should not be running any workload while you perform this task because it involves removing an existing resource group.
When you delete a resource group, the hosts in that group are no longer assigned to a consumer. Therefore, complete this task before changing your resource plan for the first time. If you have already modified the resource plan and want to keep those changes, export the resource plan before starting this task.
You can create dynamic resource groups that automatically place all of your compute hosts into two (or more) different resource groups. Splitting your hosts this way is useful if some of the applications or workload you plan to run on the Symphony cluster have distinct or demanding memory requirements.
You can logically group hosts into resource groups based on any criteria that you find important to the applications and workload you intend to run. For example, you may wish to distinguish hosts based on OS type or CPU factor.
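For example, to split your compute hosts by maximum memory (as this tutorial does), you could define two dynamic resource groups with select strings along the following lines. The maxmem_high name and the 1000 MB threshold match the example used later in this tutorial; the maxmem_low name and the exact threshold are placeholders you can adjust to suit your hosts:
maxmem_high: select(!mg && maxmem > 1000)
maxmem_low: select(!mg && maxmem <= 1000)
The !mg clause excludes hosts tagged as management hosts, so that only compute hosts fall into these groups.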
If you did not create two resource groups in the previous task, or did not include all of your hosts in one of those two resource groups, you can now create a resource group by listing host names.
You must be logged on to the Platform Management Console.
You should have already added most of your hosts to the cluster.
Create a new resource group by host name to include any hosts that are not already included in a dynamic resource group.
Any new compute hosts that are later added to the cluster, and that you want to add to this resource group, must be manually added.
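For example, a resource group defined by host name is simply a static list of member hosts. The host names below are placeholders for hosts in your own cluster:
Member hosts: hostA, hostB, hostC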
Now that you have basic resource groups (one for your management hosts and two or more for your compute hosts), you can begin to specialize further and split up one of the resource groups based on maximum memory.
For example, if you know that an application you run requires not only machines with more than 1000 MB of maximum memory, but also two or more CPUs, you can create a new resource group (and then modify the existing “maxmem_high” resource group) to make these specific resources available to any consumer. The new resource group “maxmemhighmultiCPU” would have the selection string:
select(!mg && maxmem > 1000 && ncpus>=2)
You would then modify the existing resource group “maxmem_high” to read:
select(!mg && !(ncpus>=2) && maxmem > 1000)
As a result, the maxmem_high group contains only single-CPU hosts with more than 1000 MB of maximum memory, while the new maxmemhighmultiCPU group contains the multiprocessor hosts.