Implement multisystem management and deployment with IBM PureApplication System
IBM PureApplication System V2.0 introduced support for multisystem management and deployment. By adding multiple PureApplication Systems to a management domain, you can perform catalog and user management across the systems in the domain. Within a management domain, you can create one or more deployment subdomains. A deployment subdomain enables patterns and shared services to be deployed across up to four systems. Multisystem management and deployment provide extra flexibility that simplifies the implementation of high availability for products such as IBM WebSphere® Application Server, IBM DB2®, and IBM Business Process Manager.
This tutorial helps you to set up a management domain and deployment subdomain by using IBM PureApplication System. You learn the requirements for setting up a management domain and a deployment subdomain. You also learn the restrictions that apply when you deploy patterns by using a deployment subdomain. This information gives you an informed start in planning your multisystem IBM PureApplication System implementation.
Be sure that you have PureApplication System V2.0 interim fix 1 or higher installed on all IBM PureApplication Systems before you set up your domain. This tutorial also notes some additional capabilities that are available in newer versions.
Rules for defining management domain and deployment subdomains
Multisystem deployments can provide clear benefits when you deploy certain IBM products through patterns. Of course, you need at least two PureApplication Systems to start. Before you can perform your first multisystem deployment, you must set up the following:
- Management domain
- Deployment subdomain
- Externally managed cloud groups and environment profile
When the externally managed environment profile is ready, you can perform multisystem deployments by using that environment profile.
Figure 1 shows an example of a management domain with two deployment subdomains.
Figure 1. Management domain and deployment subdomains
The following rules apply when you set up a management domain and deployment subdomains:
- Your system (PureApplication System) or instance (PureApplication Software) can belong to at most one domain. PureApplication Service on SoftLayer cannot participate in a management domain or deployment subdomain.
- PureApplication Software instances can belong to a domain but not to a subdomain. (Multisystem deployment is not supported for PureApplication Software.)
- You can create one or more subdomains, which are completely contained within the domain. A system can belong to at most one subdomain within the domain.
- With PureApplication V2.2, your subdomain can contain up to four systems, and there is no limit to the number of subdomains that you can create in your domain. If your subdomain contains any systems at a PureApplication version level older than V2.2, the subdomain is limited to two systems rather than four.
- Because of latency limitations for the subdomain communications, do not locate the systems in your subdomain more than 300 km apart.
- You must have the same platform type within each management domain. IBM PureApplication System models W1500 and W2500 can coexist in a domain, together with PureApplication Software. Likewise, IBM PureApplication System models W1700, W2700, and W3700 can coexist in a domain. However, you cannot have an Intel® based system (W1500 or W2500) and a POWER® based system (W1700, W2700, or W3700) in the same management domain.
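The platform rule above can be expressed as a small check. This is only an illustrative sketch, not a PureApplication API; the model lists come directly from the rules in this section.

```python
# Illustrative check of the platform rule: Intel-based and POWER-based
# systems cannot share a management domain. Not a PureApplication API.
INTEL_MODELS = {"W1500", "W2500"}           # Intel-based systems
POWER_MODELS = {"W1700", "W2700", "W3700"}  # POWER-based systems

def can_share_domain(models):
    """Return True if the given system models may coexist in one management domain."""
    has_intel = any(m in INTEL_MODELS for m in models)
    has_power = any(m in POWER_MODELS for m in models)
    return not (has_intel and has_power)
```

For example, `can_share_domain(["W1500", "W2500"])` is true, while mixing `"W1500"` with `"W1700"` is not.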
Before you implement your system, be sure to think carefully about what you want for your setup. After a management domain exists, you cannot remove a system from there if that system is also part of a deployment subdomain. Furthermore, you cannot remove a system from a deployment subdomain without deleting the subdomain. To delete a subdomain, you must first delete all pattern instances and shared services that are deployed within that subdomain to externally managed environment profiles. This requirement is true even for externally managed deployments whose virtual machines are on only one of the systems in the subdomain.
Management domain network and security requirements
Before you create a PureApplication management domain, let’s review the network and security requirements.
Additional IP addresses
You must configure additional system management IP addresses on each PureApplication System before you can add it to the domain. These addresses are used for workload management of deployed virtual machines; their use is explained in more detail in Externally managed cloud groups and environment profiles as part of the external cloud group configuration. These additional IP addresses are configured on the System > Network Configuration page, within the section “Additional IPs for Cloud Management by way of External Networks.”
On PureApplication System models W1500 and W2500, you must configure one additional IP address, as illustrated in Figure 2; on models W1700, W2700, and W3700, you must configure four additional IP addresses. In both cases, the new addresses must be IPv4 addresses and must be defined in the same subnet as the existing system management IP addresses.
Figure 2. Configuring additional workload console access management IP address on PureApplication System models W1500/W2500
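The same-subnet requirement can be verified before you type the addresses into the console. This is a minimal sketch using Python's standard ipaddress module; the addresses and prefix length are placeholders, not values from a real system.

```python
import ipaddress

def valid_additional_ip(existing_mgmt_ip, new_ip, prefix_len):
    """Check that an additional management address is IPv4 and falls in the
    same subnet as an existing system management IP address.
    All values here are illustrative placeholders."""
    subnet = ipaddress.ip_network(f"{existing_mgmt_ip}/{prefix_len}", strict=False)
    addr = ipaddress.ip_address(new_ip)
    return addr.version == 4 and addr in subnet
```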
You must ensure that the PureApplication Systems (or PureApplication Software installations) in your management domain have IP connectivity between all of their system management IP addresses. If a firewall is in place between the system management VLANs, ensure that ICMP traffic and TCP traffic on ports 22, 443, 1191, and 49300–49320 are permitted.
Figure 3 shows what you need when you are planning for a domain that contains two PureApplication Systems.
Figure 3. Networking for two PureApplication Systems within a management domain
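Before you create the domain, it can be useful to confirm from one system's network that the required TCP ports on a peer are reachable. The following sketch uses only the Python standard library; the port list comes from the firewall requirements above, and the host name would be a peer's system management address (a placeholder here).

```python
import socket

# Ports that must be open between system management IP addresses,
# per the firewall requirements in this section.
REQUIRED_TCP_PORTS = [22, 443, 1191] + list(range(49300, 49321))

def reachable_tcp_ports(host, ports, timeout=3.0):
    """Return the ports on host that accept a TCP connection from this machine."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            # Connection refused or timed out; the port is not reachable.
            pass
    return open_ports
```

Note that this checks TCP only; ICMP reachability still needs a separate ping test.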
Establishing trust between systems within the management domain
By default, PureApplication Systems are configured to trust the IBM self-signed console SSL certificate preinstalled by IBM. If you configured your own SSL certificate for the console, then you must import this certificate into the truststore for all other systems in the domain. Figure 3 shows what this configuration would look like in a domain that contains two PureApplication Systems.
Use the following procedure to import a console certificate into the truststore of all other systems in the domain:
1. Save the certificate for the system that is being added to the domain or whose console certificate is being updated. If you have a copy of the certificate, use that .crt file. Alternatively, you can obtain the current certificate from the system by navigating to the system with your browser, clicking the browser’s security icon, and exporting the certificate as a .crt file.
2. Use the command-line interface to find the location name for the system that is being added to the domain or whose console certificate is being replaced:
$ bin/pure -h hostnameA -u user -p password -c "print admin.racks.location_name"
3. Using the location name that you obtained from step 2, use the command-line interface to import this system’s certificate to all other systems in the domain. The following command requires that you have either the hardware administration role with permission to "Manage hardware resources," or the security administration role with permission to "Manage security."
$ bin/pure -a -h hostnameB -u user -p password -c
$ bin/pure -a -h hostnameC -u user -p password -c
4. Repeat this process as needed to import the remaining systems’ certificates to other systems in the domain.
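The browser-based certificate export can also be scripted. This sketch uses Python's standard ssl module to fetch the certificate that the console presents over HTTPS; the host name is a placeholder for one of your systems, and the helper is only a basic sanity check before you save the text as a .crt file.

```python
import ssl

def fetch_console_cert(host, port=443):
    """Fetch the PEM-encoded certificate that the console presents on HTTPS.
    The host name is a placeholder for one of your systems."""
    return ssl.get_server_certificate((host, port))

def looks_like_pem(text):
    """Basic sanity check before saving the exported text as a .crt file."""
    body = text.strip()
    return (body.startswith("-----BEGIN CERTIFICATE-----")
            and body.endswith("-----END CERTIFICATE-----"))
```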
All systems in your management domain must be configured to use LDAP and to use the same LDAP configuration. You configure security under System > System Security, as shown in Figure 4.
Figure 4. System security in the PureApplication web console
Effectively, this configuration enables the systems to have a shared trust authority for user authentication. Most subdomain activities, such as copying artifacts or deploying patterns, require an LDAP user as shown in Figure 5.
Figure 5. Example of an LDAP user in PureApplication web console
Although user authentication is shared among the systems in the domain, role-based user permissions (authorization) are managed locally for each system; user roles must be assigned manually on each system within the domain.
Domain management operations can be performed by local users, assuming that the users have sufficient permissions on that system.
To add a system to the domain, you must have both full security and full hardware administration permissions on the system from which you perform the addition. You must also provide user credentials that have full security and full hardware administration permissions on the system that is being added to the domain. Adding the system to the domain creates a relationship of trust between the system and all other existing systems in the domain.
All other domain and subdomain operations (create subdomain, add system to subdomain, delete subdomain, remove system from domain) require only hardware administration authority, and only on the system where the operation is performed.
Figure 6. Creating management domain in PureApplication web console
Catalog content can be transferred and synchronized between systems within the same management domain. These operations can be driven from the web console or through the command-line interface. However, this functionality is limited to virtual images, script packages, and add-ons. So, for example, a virtual system pattern would need to be manually exported and imported.
Upgrade and mixed-release considerations
As you incrementally upgrade systems, your domain and subdomain might have systems at different version levels. PureApplication System supports this configuration with some limitations.
Additional considerations specific to particular versions of PureApplication System are documented as release notes. Refer to the coexistence requirements for PureApplication V2.1 and the coexistence requirements for PureApplication V2.2.
Deployment subdomain requirements
A high degree of coordination and communication takes place between the systems and the deployed instances in your deployment subdomain. Therefore, a number of prerequisites, configuration requirements, and limitations apply to your deployment subdomain.
Setup of external iSCSI target
The PureApplication Systems in your subdomain exist in a mirroring relationship: they share information about deployed instances in real time. Two systems in the subdomain serve as mirrors and store independent copies of the data; any additional systems in the subdomain serve only as clients for the shared data. This data is stored in a new LUN (logical unit number), named SubdomainData, that is created on the master storage node of each mirror system when the subdomain creation completes. At least 512 GB of storage must be available on each system before it can be added to the subdomain. You can verify the amount of free storage capacity by selecting Hardware > Storage Devices. An example is shown in Figure 7.
Figure 7. Confirming free storage capacity on master storage node in PureApplication System
Because of latency limitations for the subdomain communications, do not locate the systems in your subdomain more than 300 km apart. The mirroring relationship also requires you to provide an external iSCSI target to serve as a tiebreaker. This tiebreaker enables the mirror systems to establish quorum in case they lose communication with each other. The iSCSI target must have a minimum size of 1 GB, and is subject to the same distance and latency recommendation as the systems.
The tiebreaker is used to establish quorum. Therefore, if the systems in your subdomain have separate power sources and subnets, we recommend that you locate the tiebreaker on a third power source and network. Your firewall must also permit iSCSI traffic (TCP ports 860 and 3260) between the primary, secondary, and floating management IP addresses on your PureApplication Systems and at least one of the IP addresses for the iSCSI target. Note that PureApplication pings the iSCSI target before it configures it, so you must also ensure that ICMP traffic can reach the iSCSI target host. Figure 8 illustrates the connectivity between the systems and the tiebreaker.
Figure 8. System communications with iSCSI tiebreaker
If both mirror systems lose access to the tiebreaker but can still communicate with each other, the subdomain availability is not affected. If the systems lose communication with each other but can communicate with the tiebreaker, the winning system is still able to maintain the subdomain on behalf of itself and any client systems that can still connect to it. On systems where the subdomain is unavailable, deployments to external environment profiles are not allowed: existing deployments cannot scale, new deployments cannot be issued, and deployments cannot be deleted.
The deployment data that is stored and shared on this internal mirror is not encrypted, either at rest on the system storage controllers, or in transit between the system management IP addresses as it is mirrored between the systems. This data includes secure keys that are used by the system to manage the deployed instances. Be sure that you plan for adequate physical security for the systems, and adequate network security for the data in transit.
Create the deployment subdomain
The management domain and its deployment subdomains are configured through the PureApplication web console. Select System > Management Domain Configuration and click Create Deployment Subdomain as shown in Figure 9.
Figure 9. Deployment subdomain configuration in the PureApplication web console
After you create the subdomain, add your PureApplication Systems to it by either dragging and dropping them into the subdomain, or by clicking Add a Location in the subdomain. When you add the second system to the subdomain, you are prompted to configure the iSCSI tiebreaker, as shown in Figure 10. (Note that the user and password here are optional.)
Figure 10. Window for the iSCSI tiebreaker configuration
After you configure the iSCSI tiebreaker, the systems complete the creation of the subdomain, which takes several minutes. When completed, the deployment subdomain looks like Figure 11.
Figure 11. Configured deployment subdomain in PureApplication web console
Externally managed cloud groups and environment profiles
Before PureApplication System V2.0, an internal IPv6 management VLAN was automatically configured for each cloud group. This VLAN is used for communication between the PureSystems Managers (PSMs) and the deployed virtual machines, and for communication between shared services and virtual machines.
To support multisystem deployment, externally managed cloud groups were introduced in V2.0. Instead of using an internal IPv6 VLAN for management communication, these cloud groups use an external management VLAN. This external VLAN is required for management communication between the systems within a deployment subdomain.
Internally managed cloud groups continue to be fully supported on PureApplication System. Although externally managed cloud groups are required for multi-cloud and multisystem deployment, they have some limitations compared to internally managed cloud groups, as described in this section. Because of these limitations, evaluate your deployment needs and carefully plan the allocation of your compute nodes to internally managed cloud groups, externally managed cloud groups, or both. You cannot convert an existing cloud group from one type of management to the other.
Management VLAN for externally managed cloud group
Before you can create an externally managed cloud group, you need to set up an external VLAN within your network and configure this VLAN to be connected to PureApplication System. You also need to create an IP group for that external management VLAN, and a special environment profile.
Additional management IP addresses
When you use externally managed cloud groups, the PureSystems Managers must be able to access the deployed virtual machines over the VLAN for external cloud group management. To set up this access, you must expose additional management services to the PureApplication management network, and those services must be able to reach the external cloud group management network. Typically, you must set up routing between those VLANs within the data center network.
In addition to the (floating) system console IP address, the workload console must be configured with an IPv4 address within the PureApplication management network. When you work with PureApplication System model W1700, W2700, or W3700, you must assign three more IPv4 addresses to DLPAR Management. The new IPv4 addresses must be defined within the existing PureApplication management network; that is, by using the same network and subnet as the original (floating) system console IPv4 address.
You defined these additional management addresses in Additional IP addresses before you added your systems to a management domain.
The diagram in Figure 12 shows how the management IP addresses on the system management VLAN interact with a deployed virtual machine. Be aware that routing within the Data Center Network is required between the system management VLAN and the cloud group management VLAN.
Figure 12. Configuring additional management IP addresses on PureApplication System models W1500 or W2500
IP group for management VLAN
Before you can create a new externally managed cloud group, you must have at least one IP group defined that is used for Cloud Management. Defining such an IP group does not differ from defining an IP group that is used for Data; it is a normal IP group. Figure 13 shows how to define this IP group. Remember to assign it to the cloud group management VLAN that you defined and configured earlier.
Figure 13. Configuring an IP group used for Cloud Management
In the web console, you can see that this new IP group is indeed used for Cloud Management. You can see an example in Figure 14.
Figure 14. Configuring a Cloud Management IP group
The default route for your virtual machines is normally assigned to the first data interface (eth1 or en1). As a result, you must now configure additional routes for the management interface eth0 or en0 to ensure that management traffic is not routed over eth1 or en1. You can define additional routes as part of the IP group definition.
You must configure the following routes for each Cloud Management IP group:
- A subnet route via the IP group’s gateway address to the network of the local system management VLAN.
- A subnet route via the IP group’s gateway address to each network that is used by any other Cloud Management IP group with which this IP group shares an environment profile, or that belongs to another environment profile linked to this one through shared service references. (This route is required only when the different Cloud Management IP groups do not share VLANs.)
You don't need to configure a route to the System Management network of the remote system.
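The two routing rules above can be sketched as a small function. All CIDR blocks and gateway addresses below are illustrative placeholders, not values from a real system; the point is only that every route for one Cloud Management IP group goes via that group's own gateway.

```python
import ipaddress

def required_routes(gateway, local_system_mgmt_subnet, peer_cloud_mgmt_subnets):
    """Sketch of the routing rules for one Cloud Management IP group:
    every route goes via that IP group's own gateway, to (1) the local
    system management subnet and (2) each subnet of any other Cloud
    Management IP group that shares an environment profile (or a linked
    profile) and does not share a VLAN. All inputs are placeholders."""
    destinations = [local_system_mgmt_subnet] + list(peer_cloud_mgmt_subnets)
    return [(ipaddress.ip_network(d), gateway) for d in destinations]
```

For example, an IP group with gateway 10.20.0.1 whose system management VLAN is 10.10.0.0/24, paired with one remote Cloud Management subnet 10.21.0.0/24, needs exactly those two routes.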
As mentioned earlier, layer-3 network routing must be in place within the data center network between the various VLANs. Figure 15 shows what this routing would look like for two PureApplication Systems; routing must be in place for the following VLANs:
- Cloud group management VLAN "C1" and System Management VLAN "S1"
- Cloud group management VLAN "C2" and System Management VLAN "S2"
- Cloud group management VLAN "C1" and cloud group management VLAN "C2"
- System Management VLAN "S1" and System Management VLAN "S2"
- Data VLAN "D1" and Data VLAN "D2".
Figure 15. Management communications for externally managed deployments
As you can see from Figure 15, you must also ensure that certain network communication is allowed between these VLANs. If you have firewalls in place between the VLANs, this requirement implies that you open ports in them. Setting up this communication can complicate the implementation, which is why the specific protocols, ports, and directions are called out (see Related topics). We do not discuss the communication between the two Data VLANs ("D1" and "D2"), because it depends greatly on the actual pattern that you deploy across the deployment subdomain.
With this requirement in mind, we strongly recommend using "stretched" VLANs across PureApplication Systems where possible; networking teams often offer stretched VLANs across data centers. Using stretched cloud group management and data VLANs greatly simplifies the multisystem implementation, and using a stretched system management VLAN is encouraged for consistency.
You can use the included sample Python script, validateRouting.py, to validate the routing configuration of all of the management IP groups referenced by your externally managed environment profiles. This script can be run from any system, provided that the script can access the floating address of the PureSystems Manager. Because this script uses environment profiles to infer which IP groups need to have routes to each other, you need to create and configure your environment profiles before you run it. Run this script by using an LDAP user ID that has a minimum of both "view all workload resources" and "view all hardware resources" permissions. You can use this script to test the external management configuration on a single system, and on a multisystem subdomain; if you are testing a multisystem subdomain, validate all systems separately.
For simplicity, use the PureApplication command-line interface. Listing 1 shows how to run the Python script. The script reports warnings if expected routes are not found, and success if all expected routes are configured correctly.
Listing 1. Running the sample Python script
C:\Tools\pure.cli\bin>pure -h intel-system-2 -u admin -p ******** 15 -f C:\Temp\validateRouting.py
Checking profile ExtMgdTest
Checking profile RedBook HADR External
Checking profile RedBook-External-R1R4
Checking profile Redbook-External-R1R4 w10.x.x.x
SUCCESS
Also, keep in mind that there are specific requirements for the ports and protocols that need to be met. See Related topics for details.
Creating externally managed cloud groups
Figure 16. Creating an externally managed cloud group
When you create an externally managed cloud group, you do not select a VLAN ID. Instead, you create one or more IP groups that are designated to be used for Cloud Management instead of Data. These IP groups are used to assign IPv4 addresses to the management interfaces of virtual machines deployed in these cloud groups. As with IP groups used for data traffic, the addresses in your cloud management IP groups must have both forward and reverse lookups defined in your DNS.
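A quick way to verify the forward and reverse DNS requirement is sketched below. The resolver functions are injectable so that the logic can be exercised without a live DNS server; with the defaults, Python's standard socket module queries your real DNS. Host names here are placeholders.

```python
import socket

def dns_consistent(hostname, ip,
                   forward=socket.gethostbyname,
                   reverse=lambda addr: socket.gethostbyaddr(addr)[0]):
    """Check that the forward (name -> address) and reverse (address -> name)
    DNS lookups agree for one cloud management address."""
    try:
        return forward(hostname) == ip and reverse(ip) == hostname
    except OSError:
        # Lookup failed entirely; treat as inconsistent.
        return False
```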
The system does not enforce any requirements for the VLANs that are used for management IP groups. You can create new VLANs that are used only for VM management, or you can share VLANs with your data IP groups. In either case, your externally managed cloud groups are not marked available until you define IP groups for them, both for management and for data.
Creating externally managed environment profiles
To deploy to these externally managed cloud groups, you must also create externally managed environment profiles that reference the cloud groups. You cannot mix internally managed cloud groups with externally managed environment profiles, or the reverse.
The externally managed environment profile is the integration point between the systems for multisystem deployments. You can add externally managed cloud groups from any of the systems in the subdomain to your externally managed environment profile.
Working with multisystem deployments
Whether they are deployed to a single system or across multiple systems, deployments to externally managed environment profiles depend on some activation and interface changes. These changes establish the externally managed network interface and reference multiple systems and cloud groups in a single deployed instance. Therefore, only newer pattern content supports deployment to an externally managed environment profile. The following restrictions apply:
- Patterns that use one or more Hypervisor Edition virtual images cannot be deployed to externally managed cloud groups. As a result, classic (and promoted classic) virtual system patterns cannot be used with multisystem deployment. Attempting to deploy yields this error:
CWZKS0417E: Failed to deploy because one or more virtual images do not support multi-target deployment.
- Virtual appliances cannot be deployed to externally managed environment profiles.
- Patterns that rely on plug-ins and pattern types that depend (directly or indirectly) on a Foundation pattern type older than V2.1 cannot be deployed to externally managed environment profiles. All plug-ins must require Foundation 2.1 or greater as a signal of their compatibility with externally managed networks and multisystem references.
Deploying to an externally managed environment profile
When you deploy a pattern across multiple systems in a subdomain, the pattern itself needs to exist only on the deploying system. However, for each virtual machine that is placed on a system, that system must already contain exact copies of all pattern artifacts required for that virtual machine: virtual image, script packages, pattern types, plug-ins, software components, installation manager repository, and so on. The system verifies this content as part of its placement recommendation and validation, except for the installation manager repository content. You can enable the system to choose the placement of all virtual machines, or you can choose the placement yourself.
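The content check described above amounts to a set comparison per system. The following is only an illustration of the rule, with made-up artifact and system names, not the system's actual placement API:

```python
def placement_gaps(required_artifacts, artifacts_by_system):
    """For each candidate system, report the required pattern artifacts
    (virtual images, script packages, plug-ins, and so on) that it is
    missing. Systems absent from the result have all required content
    and could host the virtual machine. Names are illustrative."""
    required = set(required_artifacts)
    return {system: sorted(required - set(available))
            for system, available in artifacts_by_system.items()
            if not required <= set(available)}
```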
Because externally managed deployments can span multiple cloud groups, it is not meaningful in this case to scope shared service usage to a single cloud group. Instead, when you deploy to an externally managed environment profile, the shared service association is established at the scope of the environment profile, rather than the cloud group. So, when you use externally managed environment profiles, you need a shared services instance for each of them. Shared service instances can help isolate applications from one another, or even production from non-production environments. Figure 17 illustrates the difference in scope for internally and externally managed environment profiles.
This consideration applies to all externally managed environment profiles, regardless of whether your system is part of a deployment subdomain, and regardless of whether your environment profile currently contains cloud groups from multiple systems. Carefully plan your externally managed cloud group and environment profile configuration to avoid a multiplication of the number of shared service deployments that are needed.
Figure 17. Difference in scope of shared services for internally and externally managed environment profiles
To corroborate user identity between the systems, all workload-related operations that have the potential to span multiple systems require an LDAP user. Therefore, in addition to needing the appropriate access to be granted on each system, you must also be logged in as an LDAP user to create, view, or manage externally managed profiles – and to create, view, or manage deployments to these profiles. All of these resources are hidden from non-LDAP users, regardless of those users’ access levels and roles. Access controls for multisystem environment profiles and deployments are automatically synchronized between the systems in the subdomain.
Availability, scaling, and recovery considerations
Multisystem deployment provides a foundation for high availability of pattern instances across systems or data centers. However, this availability depends on the design of your application and any services that it uses: both must be designed and configured for high availability so that they can tolerate the loss of access to the virtual machines on one system.
When a nonpersistent virtual machine fails, the system in many cases attempts to recover it by cloning a new virtual machine. For multisystem deployments, this recovery is attempted only on the same system as the original failing virtual machine. Background information on persistent and nonpersistent virtual machines is available in the PureApplication IBM Knowledge Center.
Horizontal scaling for a multisystem deployment attempts to roughly balance the virtual machines across the cloud groups and systems. Beginning with PureApplication V2.2, horizontal scaling can scale new virtual machines to cloud groups or systems on which an instance of the virtual machine was not originally deployed, if the location has all necessary artifacts and capacity. Before PureApplication V2.2 horizontal scaling could scale only within cloud groups or systems on which an instance of the virtual machine was already deployed.
An entity that is called the shared service registry provider exists for each shared service only on the system from which the shared service deployment was initiated. Therefore, if this deploying system is temporarily unavailable to the subdomain, new shared service clients cannot connect to the shared service. New clients cannot connect to the shared service even if the shared service VMs on the remaining systems are still able to service clients, and even if the client VMs are all on the surviving systems.
Conclusion
This tutorial explained how to configure a multisystem domain and subdomain, and how to configure all of the other resources that are needed for multisystem deployment, including IP groups, cloud groups, and environment profiles. You learned about the requirements and restrictions for multisystem deployment to better prepare you for planning your multisystem implementation.
Related topics
- developerWorks series on Network design for IBM PureApplication System
- IBM PureApplication System Release Note: PureApplication System and Software interdependencies for multisystem environments
- IBM Knowledge Center topics:
  - Upgrade and mixed-release considerations in the multisystem environment
  - Firewall requirements for multisystem domains and subdomains
  - Firewall requirements for PureApplication System patterns