Installing Connect:Direct for UNIX
Before you install Connect:Direct® for UNIX, complete the worksheets to identify all information required to perform the installation.
Connect:Direct for UNIX may be installed on a local disk or a shared disk file system, also known as a clustered file system. Examples of clustered file systems are IBM’s GPFS, Veritas Cluster File System, and Red Hat Global File System.
Connect:Direct for UNIX requires that you install a server and at least one client. You can install Connect:Direct for UNIX in different configurations:
- Install the server on a local system and the clients on remote systems
- Install the server and at least one client on a local system and the remaining clients on remote systems
- Install using a silent installation. See Connect:Direct for UNIX silent installation in the Mass Deployment documentation library.
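For orientation, a minimal sketch of a silent installation follows. The installer name cdinstall_a, the options-file path, and the cdai_* option convention are assumptions based on typical Connect:Direct for UNIX deployments; confirm them in the Mass Deployment documentation library.

```sh
# Hypothetical sketch: run the automated (silent) installer with an
# options file that answers the interactive prompts as cdai_* name=value
# pairs (for example, cdai_installFA=y for Integrated File Agent).
./cdinstall_a -f /tmp/cd_silent.options
```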
Preparing to Install Connect:Direct for UNIX in a Cluster Environment
Connect:Direct for UNIX supports clustering software to allow two or more computers to appear to other systems as a single system. All computers in the cluster are accessible through a single IP address. Connect:Direct for UNIX can be installed in two types of clustering environments: high-availability and load-balancing.
High-Availability Cluster Environments
Consider the following information when planning to use Connect:Direct for UNIX in a high-availability cluster environment.
Supported High-Availability Cluster Environments
Connect:Direct for UNIX is certified to operate in the following high-availability cluster environments:
- IBM high-availability cluster multiprocessing (HACMP) environment
- Hewlett-Packard MC/ServiceGuard
- SunCluster versions 2.2, 3.0, and 3.2
- Veritas Infoscale Availability (formerly Veritas Cluster Server)
- Red Hat High Availability Add-On
If you plan to install Connect:Direct for UNIX in a high-availability cluster environment, complete the following tasks:
- Install the clustering software on each computer in the cluster, including setting up a logical host or application package.
- Create a user with the same name and user ID on each cluster node. (A command sketch of this and the following directory tasks appears after this list.)
- Create a Connect:Direct subdirectory on a shared file system on a shared volume group.
- Ensure that the shared file system is owned by the IBM® Connect:Direct user.
- Install IBM Connect:Direct on the shared file system.
- Perform the procedures necessary to define the high-availability settings and configure the cluster environment.
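As a sketch of the user and shared-directory tasks above, the commands below show one way to prepare a node. The user name cduser, group cdadmin, the numeric IDs, and the mount point /shared are placeholders, not required values.

```sh
# On every cluster node: create the Connect:Direct administrator with
# the same name, UID, and GID (placeholders shown).
groupadd -g 900 cdadmin
useradd -u 900 -g cdadmin -m cduser

# On the node that currently owns the shared volume group: create the
# Connect:Direct subdirectory on the shared file system and make the
# Connect:Direct user its owner.
mkdir -p /shared/cdunix
chown -R cduser:cdadmin /shared/cdunix
```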
Limitations of High-Availability Clusters
When running Connect:Direct for UNIX in a high-availability cluster environment, be aware of the following limitations:
- If a failure occurs, all held Processes are restarted when IBM Connect:Direct is restarted, including Processes held by the operator and Processes held in error. This can pose a security risk.
- When an IBM Connect:Direct ndmsmgr process associated with an IBM Connect:Direct Process is killed, the Process is not automatically restarted and is placed in the Held in Error state. It must be restarted manually; otherwise, the Process is restarted when the cluster restarts.
Load-Balancing Cluster Environments
In a load-balancing cluster environment, an incoming session is distributed to one of the Connect:Direct for UNIX instances based on criteria defined in the load balancer. Generally, from the point of view of the nodes behind the load balancer, only incoming (SNODE) sessions are affected by the load balancer. PNODE (outgoing) sessions operate the same way as non-clustered Connect:Direct for UNIX PNODE sessions.
SNODE Server Considerations for Load-Balancing Clusters
Consider the following when planning and setting up the Connect:Direct for UNIX SNODE servers in a load-balancing cluster:
- The servers used for the Connect:Direct for UNIX instances behind the load balancer must all have access to common shared disk storage, for the following reasons:
- Any copy statement source and destination files for SNODE processes must reside in directories accessible to all servers.
- All nodes must have access to a common SNODE work area, and that area must reside on a cluster file system or on a Network File System version 4 (NFSv4) or later resource. This includes Amazon Elastic File System (EFS), which is mounted through the NFSv4 protocol. NFSv3 is not supported. (A mount sketch follows this list.)
- All servers must be of the same platform type (for example, all Solaris SPARC or all Linux Intel) and the same Connect:Direct for UNIX version and maintenance level.
- The system clocks on all servers must be synchronized in order for copy checkpoint/restart and run task synchronization to work.
- The administrator user ID used to install Connect:Direct for UNIX must be defined on each server and must be the same user and group number on each server.
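The shared work area requirement above can be sketched as follows. The NFS server name, export path, and mount point are placeholders, and the nfsstat verification assumes a Linux host.

```sh
# On each server behind the load balancer: mount the common SNODE work
# area over NFSv4 (NFSv3 is not supported for the shared work area).
mkdir -p /cd_snode_work
mount -t nfs4 nfsserver:/export/cd_snode_work /cd_snode_work

# Confirm that the mount negotiated NFSv4 (Linux):
nfsstat -m | grep -A2 /cd_snode_work
```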
SNODE Setup for Load-Balancing Clusters
Consider the following when planning and setting up the Connect:Direct for UNIX SNODEs in a load-balancing cluster:
- One Connect:Direct for UNIX node should be installed on each server behind the load balancer.
- Each node should be installed by the same user ID.
- Each node should have the same Connect:Direct for UNIX node name.
- Each node should have the same node-to-node connection listening port.
- A directory should be established for the shared SNODE work area used by the Connect:Direct for UNIX nodes behind the load balancer. This directory should be owned by the Connect:Direct for UNIX administrator ID and must be accessible to all of the servers behind the load balancer.
- Each node should specify the same path to the directory used for the shared SNODE work area. Specify this path in the snode.work.path parameter of the ndm.path record in the initialization parameter file.
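For illustration, an ndm.path record with the shared work path set might look like the following excerpt. The installation path and shared directory are placeholders; check the exact record layout against the initialization parameter file shipped with your release.

```
ndm.path:\
  :path=/opt/cdunix/ndm/:\
  :snode.work.path=/cd_snode_work:
```

Because every node behind the load balancer must be able to resolve the same Processes, the snode.work.path value must be identical on each of them.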
Limitations of Load-Balancing Clusters
When running Connect:Direct for UNIX in a load-balancing cluster environment, be aware of the following limitations:
- If an incoming session fails and is restarted by the PNODE, then the restarted session may be assigned to any of the instances behind the load balancer and will not necessarily be established with the original SNODE instance.
- When shared SNODE work areas are configured and the run task is on the SNODE, Connect:Direct for UNIX cannot determine at restart time whether the original task is still active, because the restart session may be with a different server. If you set the global run task restart parameter to yes in the initialization parameters file, a task could be restarted even though it is still active on another machine. Therefore, exercise caution when specifying restart=y.
- Each SNODE instance that receives a session for a given Process creates a TCQ entry for the Process. Each SNODE instance has its own TCQ file, and these files are not shared among SNODE instances. Only the work files created in the shared work area are shared among SNODE instances.
- When a Process is interrupted and restarted on a different SNODE instance, the statistics records for that Process are distributed between the two SNODE instances involved. As a result, you cannot select all the statistics records for a Process from a single instance.
Conventions to Observe When Installing Connect:Direct for UNIX
Observe the following conventions when you install Connect:Direct for UNIX:
- Characters used in Netmap Node Names (or Secure+ Node Names or Secure+ Alias Names) should be restricted to A-Z, a-z, 0-9, and @ # $ . _ - to ensure that the entries can be properly managed by Control Center or IBM Sterling Connect:Direct Application Interface for Java™ (AIJ) programs.
- Although Connect:Direct for UNIX Process names can be up to 255 characters long, some IBM Connect:Direct platforms, such as Connect:Direct for z/OS®, limit Process names to eight characters. Processes running between UNIX and platforms that limit Process names to eight characters can have unpredictable results if longer names are specified.
- For Integrated File Agent installation, Secure+ installation is mandatory. If Secure+ is not already installed, the Connect:Direct for UNIX installer installs Secure+ before installing Integrated File Agent.
- Acceptable responses during an install or upgrade are listed in brackets, where y specifies yes, n specifies no, and a specifies all.
- The default response is capitalized. Press Enter to accept the default value.
- Do not use colons (:) for values in the installation and customization scripts.
- Do not use keywords for values.
- Press Enter after each entry to continue.
- Terminate any procedure by pressing Ctrl-C.
Special Considerations
This section contains considerations in addition to the procedures defined. Refer to the following notes before installing the product.
- Customers may want to replace configuration files or the directories created by configuration, namely ndm/cfg/{CDU node name} and work/{CDU node name}, with symbolic links. As long as the redirection points to a supported file system as documented in the Supported File Systems section, we provide best-effort support for these symbolic links. That is, if an issue is reported, support is not denied solely because the symbolic links exist. However, if Support determines that a symbolic link is a factor in the issue, you may be asked to modify or delete the link to resolve it.
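For example, relocating a node's work directory to a supported file system and leaving a symbolic link in its place might look like the following sketch; the installation path /opt/cdunix and node name cdnode are placeholders.

```sh
# Stop Connect:Direct before relocating directories, then move the
# node's work directory to a supported file system and point a
# symbolic link at the new location.
mv /opt/cdunix/work/cdnode /bigfs/cdnode_work
ln -s /bigfs/cdnode_work /opt/cdunix/work/cdnode
```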
Installing IBM Connect:Direct
To install Connect:Direct for UNIX:
Procedure
Customizing Connect:Direct for UNIX
The customization script starts automatically after the installation is complete to set up the Connect:Direct for UNIX operating environment. It is located in d_dir/etc, where d_dir is the IBM Connect:Direct installation directory, and may be run by itself if needed for future configuration changes. The option you select determines what Connect:Direct for UNIX operating environment is configured: the Connect:Direct for UNIX Server only, the Connect:Direct for UNIX Client only, or the Connect:Direct for UNIX Server and Client.
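If you need to rerun the customization later, you can invoke the script directly, as in the following sketch. It assumes the script is named cdcust and that /opt/cdunix stands in for d_dir, your installation directory.

```sh
# Rerun the customization script from the installation's etc directory.
cd /opt/cdunix/etc
./cdcust
```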
About this task
After you customize the environment, you must configure Connect:Direct for UNIX using root privilege to create a Strong Access Control List (SACL) file and to set the owner and permissions of the IBM Connect:Direct executables. You must create the SACL file and set the owner and permissions before you can run Connect:Direct for UNIX. See Configuring Connect:Direct for UNIX Using Root Privilege for more information about this process.
The customization script prompts you to begin the customization procedure:
Procedure
Upgrading to Connect:Direct Integrated File Agent
You can upgrade a Connect:Direct for UNIX node from an earlier release with Standalone File Agent installed inside the Connect:Direct installation directory to Connect:Direct Integrated File Agent, using either the interactive or the silent installer.
Interactive Upgrade
When the installer detects a Standalone File Agent, it displays a prompt similar to the following:

Standalone File Agent detected in the Connect:Direct for UNIX installation directory.
Do you want to upgrade to the Integrated File Agent? If so, then please be aware
Secure+ and Connect:Direct Web services are required for Integrated File Agent.
Secure+ will be installed on your system if not already installed. Press Enter to
continue with the File Agent upgrade procedure:[Y/n]
Press Enter to upgrade from Standalone File Agent to Integrated File Agent.

Silent Upgrade
- If File Agent is not installed inside the Connect:Direct installation directory, specifying the option cdai_installFA="y" in the options file or as a parameter to the silent installer installs Integrated File Agent as part of the upgrade process.
- If Standalone File Agent is installed inside the Connect:Direct installation directory, specifying the option cdai_installFA="y" in the options file or as a parameter to the silent installer converts Standalone File Agent to Integrated File Agent as part of the upgrade process.
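As an illustration of the options-file approach, the following sketch appends the File Agent option and reruns the silent installer. The installer name cdinstall_a and the options-file path are assumptions; only the cdai_installFA option itself comes from the documentation above.

```sh
# Enable Integrated File Agent in the silent-install options file,
# then rerun the silent installer (cdinstall_a is assumed).
echo 'cdai_installFA=y' >> /tmp/cd_upgrade.options
./cdinstall_a -f /tmp/cd_upgrade.options
```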
Setting Up Additional Configuration Requirements for IBM HACMP
In addition to modifying the configuration files, perform the following steps to complete the IBM high-availability cluster multiprocessing (HACMP) setup:
Procedure
Setting Up Additional Configuration Requirements for Hewlett-Packard MC/ServiceGuard
The HP Solutions Competency Center (SCC) has successfully integrated IBM Connect:Direct with MC/ServiceGuard. The implementation and certification of IBM Connect:Direct followed the SCC's high-availability Implementation and Certification Process. Refer to the Implementation and Certification With Hewlett-Packard's MC/ServiceGuard High Availability Software document on the Support on Demand Web site.