HiperSockets Network Concentrator
You can configure a HiperSockets Network Concentrator on a qeth device in layer 3 mode.
The HiperSockets Network Concentrator connects systems to an external LAN within one IP subnet using HiperSockets. HiperSockets Network Concentrator connected systems look as if they were directly connected to the LAN. This simplification helps to reduce the complexity of network topologies that result from server consolidation. In particular, it simplifies moving systems:
- From the LAN into an IBM Z® Server environment
- From systems that are connected by a different HiperSockets Network Concentrator into an IBM Z Server environment
Design
A connector Linux® system forwards traffic between the external OSA interface and one or more internal HiperSockets interfaces. The forwarding is done via IPv4 forwarding for unicast traffic and via a particular bridging code (xcec-bridge) for multicast traffic.
A script named ip_watcher.pl observes all IP addresses registered in the HiperSockets network and configures them as proxy ARP entries on the OSA interfaces. The script also establishes routes for all internal systems to enable IP forwarding between the interfaces.
All unicast packets that cannot be delivered in the HiperSockets network are handed over to the connector by HiperSockets. The connector also receives all multicast packets to bridge them.
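The per-address state that ip_watcher.pl maintains can be pictured with equivalent ip commands. The following is a minimal sketch; the interface names eth0 (OSA) and hsi0 (HiperSockets) and the leaf-node addresses are illustrative assumptions, and the commands are printed rather than executed because the real ones require root privileges:

```shell
#!/bin/sh
# Print the proxy ARP entry and host route the watcher would set up for
# one registered HiperSockets address (eth0/hsi0 are assumed names).
leaf_entry() {
    addr=$1
    echo "ip neigh add proxy $addr dev eth0"   # answer ARP for addr on the LAN side
    echo "ip route replace $addr/32 dev hsi0"  # host route into the HiperSockets net
}

leaf_entry 192.0.2.10
leaf_entry 192.0.2.11
```

With one proxy ARP entry per address, the OSA interface answers ARP requests from the LAN on behalf of the internal systems, and the host routes let IPv4 forwarding deliver the replies inward.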
Setup
The setup principles for configuring the HiperSockets Network Concentrator are as follows:
- leaf nodes
- The leaf nodes do not require a special setup. To attach them to the HiperSockets network, their setup should be as if they were directly attached to the LAN. They do not have to be Linux systems.
- connector systems
- In the following, HiperSockets Network Concentrator IP refers to the subnet of the LAN that is extended into the HiperSockets net.
- If you want to support forwarding of all packet types, define the OSA interface for traffic into the LAN as a multicast router. Alternatively, you can add the OSA interface name to the start script as a parameter. This option results in HiperSockets Network Concentrator ignoring multicast packets, which are then not forwarded to the HiperSockets interfaces.
- All HiperSockets interfaces that are involved must be set up as connectors: set the route4 attribute of the corresponding devices to primary_connector or to secondary_connector.
- IP forwarding must be enabled for the connector partition. Enable forwarding either manually with the command sysctl -w net.ipv4.ip_forward=1, or enable IP forwarding in the /etc/sysctl.conf configuration file to activate it automatically after booting.
- The network routes for the HiperSockets interface must be removed. A network route for the HiperSockets Network Concentrator IP subnet must be established through the OSA interface. To establish a route, assign the IP address 0.0.0.0 to the HiperSockets interface. At the same time, assign an address used in the HiperSockets Network Concentrator IP subnet to the OSA interface. These assignments set up the network routes correctly for HiperSockets Network Concentrator.
- To start HiperSockets Network Concentrator, run the script start_hsnc.sh. You can specify an interface name as an optional parameter; HiperSockets Network Concentrator then uses the specified interface to access the LAN, and there is no multicast forwarding in that case.
- To stop HiperSockets Network Concentrator, use the command killall ip_watcher.pl to remove changes that are caused by running HiperSockets Network Concentrator.
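Put together, the connector setup above can be sketched as follows. All device and interface names are assumptions for illustration (0.0.b100/eth0 for the OSA device, 0.0.f200/hsi0 for the HiperSockets device, 192.0.2.0/24 for the HiperSockets Network Concentrator IP subnet); the script prints the commands instead of executing them, since they must be run as root and reviewed for your configuration:

```shell
#!/bin/sh
# Connector setup sketch; prints the commands to run as root.
run() { echo "$*"; }

# Forward all packet types: define the OSA interface as a multicast router.
run "echo multicast_router > /sys/bus/ccwgroup/drivers/qeth/0.0.b100/route4"
# Set up the HiperSockets interface as a connector.
run "echo primary_connector > /sys/bus/ccwgroup/drivers/qeth/0.0.f200/route4"
# Enable IPv4 forwarding in the connector partition.
run "sysctl -w net.ipv4.ip_forward=1"
# The HiperSockets interface gets 0.0.0.0; the OSA interface gets an
# address in the HiperSockets Network Concentrator IP subnet.
run "ifconfig hsi0 0.0.0.0 up"
run "ifconfig eth0 192.0.2.5 netmask 255.255.255.0 up"
# Start the concentrator.
run "start_hsnc.sh"
```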
Availability setups
If a connector system fails during operation, it can simply be restarted. If all the startup commands are run automatically, it is operational again immediately after booting. Two common availability setups are described here:
- One connector partition and one monitoring system
- As soon as the monitoring system cannot reach the connector for a specific timeout (for example, 5 seconds), it restarts the connector. The connector itself monitors the monitoring system. If it detects (with a longer timeout than the monitoring system, for example, 15 seconds) a monitor system failure, it restarts the monitoring system.
- Two connector systems monitoring each other
- In this setup, there is an active and a passive system. As soon as the passive system detects a failure of the active connector, it takes over operation. To take over operation, it must reset the other system to release all OSA resources for the multicast_router operation. The failed system can then be restarted manually or automatically, depending on the configuration. The passive backup HiperSockets interface can either switch into primary_connector mode during the failover, or it can be set up as secondary_connector. A secondary_connector takes over the connecting function, as soon as there is no active primary_connector. This setup has a faster failover time than the first one.
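For the first setup, the monitoring side reduces to a probe-and-restart cycle. The following is a minimal sketch; check_connector, the probe command, and the restart action are hypothetical placeholders (in practice the probe might be ping -c 1 -W 5 against the connector's IP address, and the restart a site-specific action such as an HMC operation):

```shell
#!/bin/sh
# One probe cycle: restart the connector if the probe fails.
# A loop or cron job with a 5-second period would call this repeatedly;
# the connector would run the same logic against the monitor with a
# longer timeout (for example, 15 seconds).
check_connector() {
    probe=$1    # e.g. "ping -c 1 -W 5 <connector IP>"
    restart=$2  # site-specific restart action
    if $probe >/dev/null 2>&1; then
        echo "connector ok"
    else
        echo "connector unreachable"
        $restart
    fi
}

check_connector "true" "echo would restart connector"
```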
Hints
- The MTU of the OSA and HiperSockets link should be of the same size. Otherwise, multicast packets that do not fit in the link's MTU are discarded as there is no IP fragmentation for multicast bridging. Warnings are printed to a corresponding syslog destination.
- The script ip_watcher.pl prints error messages to the standard error descriptor of the process.
- xcec-bridge logs messages and errors to syslog. On Red Hat® Enterprise Linux 8.6, issue journalctl to find these messages.
- Registering all internal addresses with the OSA adapter can take several seconds for each address.
- To shut down the HiperSockets Network Concentrator function, issue killall ip_watcher.pl. This script removes all routing table and Proxy ARP entries added during the use of HiperSockets Network Concentrator.
- Broadcast bridging is active only on OSA or HiperSockets hardware that can handle broadcast traffic without causing a bridge loop. If you see the message "Setting up broadcast echo filtering for ... failed" in the message log when you set the qeth device online, broadcast bridging is not available.
- Unicast packets are routed by the common Linux IPv4 forwarding mechanisms. Because bridging and forwarding are done at the IP level, IEEE 802.1q VLAN and the IPv6 protocol are not supported.
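The MTU hint above can be checked quickly from sysfs. A minimal sketch, assuming eth0 and hsi0 as the OSA and HiperSockets interface names (substitute your own interfaces; the OSA and HSI variables are illustrative):

```shell
#!/bin/sh
# Compare the MTUs of the OSA and HiperSockets links; a mismatch means
# oversized multicast packets are discarded by the bridge, since there
# is no IP fragmentation for multicast bridging.
mtu_of() { cat "/sys/class/net/$1/mtu" 2>/dev/null; }

OSA=${OSA:-eth0}
HSI=${HSI:-hsi0}
if [ "$(mtu_of "$OSA")" = "$(mtu_of "$HSI")" ]; then
    echo "MTUs match"
else
    echo "warning: MTU mismatch between $OSA and $HSI"
fi
```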