Location of hub monitoring server

On System z®, the hub can be located on either z/OS® or Linux on System z. The hub can also be installed on a distributed operating system such as Windows™, UNIX™, or Linux™. A number of factors must be taken into consideration when deciding where to locate the hub monitoring server.

Many organizations prefer the reliability and availability characteristics of the z/OS platform for the hub monitoring server. If most of your monitoring agents are on a z/OS system, placing the hub monitoring server on a z/OS system can shorten the communications path.

Alternatively, if most of your monitoring agents are on distributed systems, you might prefer a distributed platform for your hub monitoring server. If you install the hub monitoring server on a distributed system, such as a Windows, Linux, or AIX® system, you have the option of deploying the portal server on the same system to shorten that communications path. If your hub is not running on a z/OS system, it may make sense to have a remote monitoring server on your z/OS LPARs.

Take the following additional factors into account when deciding where to locate the hub:

  • Security requirements
  • Available or required CPU cycles
  • Data conversion considerations
  • Network topology
  • Failover capabilities and requirements

The hub monitoring server is the focal point for the entire monitoring environment and is therefore under significant load.

Place the hub monitoring server inside the data center on a high-performance network. Connectivity between the hub monitoring server and other directly connected components such as the remote monitoring servers must be fast and reliable.

Security requirements

When you select the platform where the hub monitoring server will be deployed, consider your security requirements.

Access to the Tivoli® Enterprise Portal is controlled by user accounts (IDs) defined to the portal server. Authentication of those IDs can be enabled through either the hub Tivoli Enterprise Monitoring Server or the Tivoli Enterprise Portal Server. The hub Tivoli Enterprise Monitoring Server can be configured to authenticate user IDs by using either the local system registry or an external LDAP-enabled registry. The Tivoli Enterprise Portal Server can be configured to authenticate through an external LDAP registry. You can provide security for the enhanced 3270 user interface, the 3270 OMEGAMON® (menu system) and OMEGAMON II® (CUA) interfaces by using a combination of security types and implementations.

For more information on security options, see Which security options to enable. Do not implement user authentication before you complete a basic installation of Tivoli Management Services components and monitoring agents and test their operation.

Available or required CPU cycles

Place the hub monitoring server on a supported platform that has the required resources available (memory, processor, storage) and where it will not be constrained. In some cases, local organizational standards or requirements might dictate where processing resources for the hub monitoring server must be allocated.

Data conversion considerations

When data is passed between mainframe and distributed platforms, the data must be converted from ASCII (distributed) to EBCDIC (mainframe), or vice versa, for each request. Data conversion is always completed by the receiving component in a Tivoli Monitoring environment. Because the Tivoli Enterprise Portal Server can be located only on ASCII-based systems, for environments that contain z/OS monitored systems, data translation cannot be completely avoided. However, minimizing the amount of data conversion can significantly improve the performance of the environment.

If only z/OS OMEGAMON agents are reporting to the infrastructure, placing the hub monitoring server on a z/OS system minimizes data conversion: conversion takes place only once, when the portal server receives its data from the hub monitoring server. If both distributed and z/OS-based agents are reporting to the infrastructure, the data conversion time can be reduced by hosting the hub monitoring server on a distributed system. Distributed agents connecting to a hub monitoring server on a distributed system require no data conversion. The z/OS agents connect to a remote monitoring server on a z/OS system, and data conversion occurs only once, when the hub monitoring server receives data from that z/OS-based remote monitoring server. After the converted z/OS data arrives at the hub, no additional conversion is required when the portal server receives it.
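The cost being described is a byte-level code page translation. The following Python sketch illustrates why the receiving component must convert data that crosses the mainframe/distributed boundary. It uses cp037, a common US/Canada EBCDIC code page; the code page actually in use is site-specific, and the sample attribute data is hypothetical:

```python
# Sample monitoring data as it might be exchanged between components.
# The row content and the cp037 code page are illustrative assumptions.
row = "SYSA CPU busy: 85%"

ebcdic_bytes = row.encode("cp037")   # form the data takes on z/OS (EBCDIC)
ascii_bytes = row.encode("ascii")    # form it takes on distributed systems

# The two byte streams differ, so the receiver must translate on arrival:
print(ebcdic_bytes != ascii_bytes)           # True
print(ebcdic_bytes.decode("cp037") == row)   # True: conversion recovers the data
```

Each conversion is cheap for a single row, but it is performed for every request, which is why minimizing the number of boundary crossings matters at scale.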

Historical data conversion is another consideration. Storing history at the agent is recommended; in that case, no monitoring server data conversion occurs. However, if you collect history at the monitoring server and your agents use a different encoding than the monitoring server, the monitoring server must perform the conversion. If a data warehouse is used, this data goes through another conversion when it is offloaded to the warehouse, and yet another when it is retrieved.

Network topology

Connectivity between the hub monitoring server and directly connected components, such as the remote monitoring servers, must be fast and reliable. A successful deployment requires an understanding of your network infrastructure. When planning your deployment, take the following factors into account:

  • Location of firewalls

    Tivoli Management Services supports most common firewall configurations, including those that use address translation (an application proxy firewall is a notable exception). To enable this support, use the IP.PIPE protocols (IP.PIPE or IP.SPIPE), which open a single port on the firewall for communication by IBM® products. If your environment includes a firewall between any components that must communicate with each other, you must specify at least one of these protocols during configuration.

  • Whether you have NAT (network address translation)

    Network address translation limits the visibility of resources between different network segments. If it is not taken into account when you configure how agents and the Tivoli Enterprise Portal connect to the hub monitoring server, it can disrupt the monitoring infrastructure.

  • Network bandwidth between WAN (wide area network) links

    The amount of bandwidth between the hub monitoring server and the connected infrastructure components can influence where the hub monitoring server is placed. In general, place the hub monitoring server where the most connected components have the fastest bandwidth access to it. For example, if two data center locations are serviced by a single hub, place the hub, where possible, in the data center that sends the most traffic to it.

  • Number of agents that are located across WAN links

    The amount of bandwidth between the hub monitoring server and the connected agents can influence where the hub monitoring server is placed. In general, place the hub monitoring server where most of the connected agents have the fastest bandwidth access to it. Where this placement is not possible, using remote monitoring servers as agent concentrators might be a feasible option.

For more information on setting up communications across firewalls or between components that use NAT, see How to set up communications between components. Also see the section on "Firewalls" in the IBM Tivoli Monitoring: Installation and Setup Guide.
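As an illustration of the single-port behavior described for the firewall factor above: in Tivoli Management Services, the protocols and ports a component uses are typically controlled through environment settings such as KDC_FAMILIES. The fragment below is a hedged sketch of restricting a component to IP.PIPE on the well-known default port 1918; the exact variable syntax, configuration file, and port are installation- and version-specific, so verify against your product documentation before use:

```
KDC_FAMILIES=IP.PIPE PORT:1918 IP.SPIPE USE:N IP USE:N SNA USE:N
```

With a single fixed port like this, the firewall rule set only needs to permit one well-known port between the components on either side.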

Failover capabilities and requirements

The requirement for high availability should dictate the degree of failover required for the hub monitoring server. In some environments, a manual restart of the hub on another system is sufficient; in others, you might need a high-availability hub. The important point is to understand the level of failover required, and then implement the hub monitoring server configuration with that level in mind.