IBM Support

IBM WebSphere Load Balancer Linux configuration with IBM Z

Troubleshooting


Problem

This technote describes limitations and restrictions for IBM WebSphere Load Balancer on IBM Z with Open Systems Adapter (OSA) cards.

Cause

The MAC forwarding method requires all servers in the Load Balancer configuration to be on the same network segment, regardless of platform. Active network devices such as routers, bridges, and firewalls interfere with Load Balancer because Load Balancer functions as a specialized router, modifying only the link-layer headers to reach its next and final hop. Any network topology in which the next hop is not the final hop is not valid for Load Balancer.

Note: Tunnels, such as channel-to-channel (CTC) or inter-user communication vehicle (IUCV), are often supported. However, Load Balancer must forward across the tunnel directly to the final destination; it cannot be a network-to-network tunnel.

There is a limitation for Load Balancer servers with Linux on IBM Z that share an OSA card, because this adapter operates differently from most network cards. The OSA card has its own virtual link-layer implementation, unrelated to ethernet, which is presented to the Linux™ and z/OS™ hosts behind it. Effectively, each OSA card appears as ethernet to hosts on the ethernet segment (though not to the hosts behind the OSA), and hosts that use it respond to it as if it were ethernet.
The OSA card also performs some functions that relate directly to the IP layer. Responding to ARP (Address Resolution Protocol) requests is one example. Another is that a shared OSA routes IP packets based on the destination IP address, instead of on the ethernet address as a layer 2 switch would. Effectively, the OSA card is a bridged network segment unto itself.

The Load Balancer for Linux on IBM Z can forward to hosts on the same OSA or to hosts on the ethernet. All the hosts on the same shared OSA are effectively on the same segment.

Load Balancer can forward out of a shared OSA because of the nature of the OSA bridge. The bridge knows the OSA port that owns the cluster address. The bridge knows the MAC address of hosts directly connected to the ethernet segment. Therefore, Load Balancer can MAC-forward across one OSA bridge.

However, Load Balancer cannot forward into a shared OSA. The OSA for the backend server advertises the OSA MAC address for the server IP. When a packet arrives with the ethernet destination address of the server's OSA and the IP address of the cluster, the server's OSA card cannot match the packet to a local host. The same principles that permit OSA-to-ethernet MAC forwarding out of one shared OSA do not apply when forwarding into a shared OSA.

Resolving The Problem

Load Balancer configurations with Linux on IBM Z that have OSA cards must use special configuration for MAC forwarding.
  1. Using platform features
    Note: Load Balancer for IPv4 and IPv6 is not tested with CTC, IUCV, or HiperSockets. Verify that the function works as expected in a test environment.

    If the servers in the Load Balancer configuration are on the same platform type, define point-to-point (CTC or IUCV) connections between Load Balancer and each server. Set up the endpoints with private IP addresses. The point-to-point connection is used for Load Balancer-to-server traffic only.

    Add the servers to the Load Balancer by using the IP address of the server endpoint of the tunnel. With this configuration, cluster traffic arrives through the Load Balancer OSA card and is forwarded across the point-to-point connection, where the server responds through its own default route. The response leaves through the server's OSA card, which might or might not be the same card.
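    The point-to-point setup above can be sketched as follows. This is a hedged illustration only: the device name iucv0 and the 10.0.1.x endpoint addresses are assumptions, not values from this technote.

```shell
# On the Load Balancer host: bring up one end of the point-to-point link.
# (iucv0 and the 10.0.1.x private addresses are assumed examples.)
ip addr add 10.0.1.1 peer 10.0.1.2 dev iucv0
ip link set iucv0 up

# On the backend server: the other end of the same link.
ip addr add 10.0.1.2 peer 10.0.1.1 dev iucv0
ip link set iucv0 up

# On the Load Balancer host: add the server by the IP address of the
# server endpoint of the tunnel.
dscontrol server add cluster_addr@port@10.0.1.2
```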
  2. Using Load Balancer's GRE feature
    If point-to-point connections cannot be defined, use the Load Balancer's encapsulation forwarding (GRE or IPIP). The encapsulation allows the Load Balancer to forward packets across routers.

    With the GRE protocol, the client-to-cluster packet is received by Load Balancer, encapsulated, and sent to the server. At the server, the original client-to-cluster packet is decapsulated, and the server responds directly to the client. The advantage of using GRE is that Load Balancer sees only the client-to-server traffic, not the server-to-client traffic. The disadvantage is that the encapsulation overhead reduces the maximum segment size (MSS) of the TCP connection.
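    As a rough illustration of the MSS reduction, assume a 1500-byte ethernet MTU and a GRE header that carries a key; the byte counts below are standard GRE-over-IPv4 sizes, not values from this technote:

```shell
# outer IPv4 header : 20 bytes
# GRE header + key  :  8 bytes
# usable tunnel MTU : 1500 - 28 = 1472
# resulting TCP MSS : 1472 - 40 = 1432   (vs. 1460 without encapsulation)
# Optionally set the tunnel MTU on the backend server to match:
ip link set grelb0 mtu 1472
```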

    To configure Load Balancer to forward with GRE encapsulation, use the following command to add the servers:


    dscontrol server add cluster_addr@port@backend_server encapforward yes encaptype gre encapcond always
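    For example, with illustrative placeholder values (cluster address 192.0.2.10, port 80, and backend server 192.0.2.21 are assumptions, not values from this technote), the command looks like:

```shell
dscontrol server add 192.0.2.10@80@192.0.2.21 encapforward yes encaptype gre encapcond always
```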

    To configure Linux™ systems to perform native GRE encapsulation, for each backend server, issue the following commands:

    modprobe ip_gre                                   # load the GRE kernel module
    ip tunnel add grelb0 mode gre ikey 3735928559     # create the tunnel with the key that Load Balancer sends
    ip link set grelb0 up                             # activate the tunnel device
    ip addr add cluster_addr dev grelb0               # define the cluster address on the tunnel device


    Note: Do not define the cluster address on the loopback of the backend servers.
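    To confirm the tunnel configuration on a backend server (the device name grelb0 and the key come from the commands above), the following checks can help:

```shell
ip -d link show grelb0    # detailed output includes the tunnel mode (gre) and the ikey value
ip addr show dev grelb0   # the cluster address should appear on the tunnel device
```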

    With backend servers that run the z/OS™ operating system, you must use z/OS-specific commands to configure the servers to perform GRE encapsulation.

Product: WebSphere Application Server (IBM Edge Load Balancer - Configuration)
Platform: Linux
Version: 8.5.5; 9.0.0; 9.0.5

Document Information

Modified date:
27 December 2023

UID

swg21210768