SevOne NMS Advanced Network Configuration Guide
ABOUT
This document describes the installation of a SevOne virtual appliance. A virtual appliance can be a SevOne Performance Appliance Solution (vPAS) or a SevOne Dedicated NetFlow Collector (vDNC), each of which runs the SevOne Network Management Solution (NMS) software.
In this guide, any reference to master, whether in the text, in a CLI command, or in command output, means leader. Likewise, any reference to slave means follower.
CONFIGURE NETWORK BONDING
Since bonding is a network-level configuration that also depends on the network infrastructure, it has limited implications for NMS operation regardless of which bonding mode is used, as long as the IP address remains reachable over the network.
SevOne has only tested the active-backup mode, and the steps below are based on this configuration. If you prefer an alternate bonding mode that is supported by the operating system and your infrastructure, you may configure it by referring to the operating system documentation.
The bonding mode can be one of the following. Adjust as necessary if you require a mode other than active-backup, and ensure that your infrastructure supports the selected bonding mode. Please refer to the following links for details.
- https://www.kernel.org/doc/Documentation/networking/bonding.txt
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_bonding
Mode | Description |
---|---|
0 (balance-rr) | Round-robin policy: Transmit packets in sequential order from the first available follower through the last. This mode provides load balancing and fault tolerance. |
1 (active-backup) | Active-backup policy: Only one follower in the bond is active. A different follower becomes active if, and only if, the active follower fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode. |
2 (balance-xor) | XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo follower count]. This selects the same follower for each destination MAC address. This mode provides load balancing and fault tolerance. |
3 (broadcast) | Broadcast policy: Transmits everything on all follower interfaces. This mode provides fault tolerance. |
4 (802.3ad) | IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings. Uses all followers in the active aggregator according to the 802.3ad specification. Prerequisites: ethtool support in the base drivers for retrieving the speed and duplex of each follower, and a switch that supports IEEE 802.3ad Dynamic link aggregation. |
5 (balance-tlb) | Adaptive transmit load-balancing: Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each follower. Incoming traffic is received by the current follower. If the receiving follower fails, another follower takes over the MAC address of the failed receiving follower. Prerequisite: ethtool support in the base drivers for retrieving the speed of each follower. |
6 (balance-alb) | Adaptive load-balancing: Includes balance-tlb plus receive load-balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load-balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the followers in the bond such that different peers use different hardware addresses for the server. |
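Once a bond is up, the active mode can be confirmed from /proc/net/bonding/<bond>. The snippet below is a minimal sketch that parses the "Bonding Mode:" line; the sample text mirrors the verification output shown later in this guide, so the parsing runs without a live bond.

```shell
# Sketch: extract the bonding mode from /proc/net/bonding/<bond> style output.
# The sample below mirrors this guide's verification output; on a live system,
# replace the here-string with:  cat /proc/net/bonding/bond0
sample='Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: ens160'

mode=$(printf '%s\n' "$sample" | sed -n 's/^Bonding Mode: //p')
echo "$mode"   # fault-tolerance (active-backup)
```

If the reported mode is not the one you configured, re-check the bond.options string on the bond connection before proceeding.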
➤ Create Bonded Interface
using NMCLI
Execute the following steps.
- Using ssh, log into SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Create bonded interface. For example, bond0.
Example
nmcli connection add type bond con-name bond0 ifname bond0 \
ip4 10.168.116.5/22 gw4 10.168.116.1 ipv4.method manual \
ipv6.method ignore \
bond.options "mode=active-backup,miimon=100,downdelay=400,updelay=400"
Output:
Connection 'bond0' (e8048f88-80e5-43c1-a981-3bcb9b8ccc69) successfully added.
- Add the first interface to bond0 created above.
Example
nmcli connection add type bond-slave ifname ens160 master bond0
Output:
Connection 'bond-slave-ens160' (fe926d34-0e12-46e3-a07c-fa78e5e0c8a3) successfully added.
- Bring the connection up for the first follower interface.
Example
nmcli conn up bond-slave-ens160
Output:
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51).
- Set the first follower interface as the primary interface for bond0.
nmcli device modify bond0 +bond.options "primary=ens160"
Output:
Connection successfully reapplied to device 'bond0'.
- For each additional interface, add it as a follower and then bring it up.
nmcli connection add type bond-slave ifname ens33 master bond0
Output:
Connection 'bond-slave-ens33' (073517f9-723a-4abe-a97e-fc10077f0ef3) successfully added.
nmcli connection up bond-slave-ens33
Output:
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51).
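The add/up pair above repeats for every additional follower. A minimal sketch in bash that only prints the nmcli command pair for each interface so it can be reviewed before running (the interface names are examples; nothing is executed):

```shell
# Sketch: print (not run) the nmcli commands for each extra follower interface.
# Interface names are examples; review the output before running any of it.
build_follower_cmds() {
  local ifname
  for ifname in "$@"; do
    printf 'nmcli connection add type bond-slave ifname %s master bond0\n' "$ifname"
    printf 'nmcli connection up bond-slave-%s\n' "$ifname"
  done
}

build_follower_cmds ens33 ens224
```

Printing first keeps a typo in an interface name from creating a stray connection profile that would then need `nmcli connection delete` to clean up.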
using NMTUI
- Using ssh, log into SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Enter NMTUI.
nmtui
- Select Edit a connection and click <RETURN>.
- Navigate to <Add> and click <RETURN>.
- Choose Bond from New Connection list.
- Navigate to <Create> and click <RETURN>.
- You are in Edit Connection to create your new bonded interface. Note: Add a follower for every interface in the setup, change the mode of the interface, and adjust other settings.
- When adding followers, choose Ethernet from the New Connection list.
- Add a proper Device name. The device name must be the interface name. For example, ens160, eth1, eth2.
- Navigate to <OK> to save the configuration.
- Select Activate a connection to confirm that all connections are active.
- All active follower/bond interfaces show an asterisk ( * ) to the left of their names. If you do not see an asterisk, go to the connection and activate it.
- Execute the following command to view the list of active network connections. All follower interfaces along with the bond0 interface must appear in the list.
Example
nmcli connection
Output:
NAME                   UUID                                  TYPE      DEVICE
bond0                  9aa751f6-828d-42dc-90ad-764ee6eb1b8f  bond      bond0
ens160                 e2f7df86-55c8-4227-a00d-0f048e030b1a  ethernet  ens160
Wired connection 1     69c7ab6d-d760-3ba3-acfc-9ff7f05e6453  ethernet  ens192
Wired connection 2     3a9ca5e0-9c9e-3bee-9329-d3482093de2c  ethernet  ens224
Ethernet connection 1  8331fa43-b8fb-4afe-95e8-a801508d3d8b  ethernet  --
- Verify the configurations. Warning: The output in the examples below is for active-backup mode. The bonding options/configuration may vary for different deployments. It is recommended that the configuration be verified according to the selected mode.
Examples
cat /etc/sysconfig/network-scripts/ifcfg-bond0
Output:
BONDING_OPTS="downdelay=400 miimon=100 mode=active-backup updelay=400"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.168.116.5
PREFIX=22
GATEWAY=10.168.116.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0
UUID=f3fa06e0-66b7-47d6-aff3-3a3db27ba596
DEVICE=bond0
ONBOOT=yes
cat /proc/net/bonding/bond0
Output:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: ens160 (primary_reselect always)
Currently Active Slave: ens160
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 400
Down Delay (ms): 400

Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:11:13
Slave queue ID: 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:2c:d0
Slave queue ID: 0
- From another appliance, verify that you are able to ping the IP address of bond0 created in the example above.
ping 10.168.116.5
Output:
PING 10.168.116.5 (10.168.116.5): 56 data bytes
64 bytes from 10.168.116.5: icmp_seq=0 ttl=63 time=0.293 ms
64 bytes from 10.168.116.5: icmp_seq=1 ttl=63 time=0.331 ms
64 bytes from 10.168.116.5: icmp_seq=2 ttl=63 time=0.249 ms
64 bytes from 10.168.116.5: icmp_seq=3 ttl=63 time=0.285 ms
Important: If the ping does not respond, restart the network service on the appliance you are bonding.
systemctl restart network
CONFIGURE VIRTUAL IP (VIP)
➤ Introduction
As of the SevOne NMS 6.1 release, new and existing/upgraded VIP configurations do not work. Please contact SevOne Support to obtain a temporary fix. This issue does not apply to SevOne NMS versions <= v5.7.2.32.
In SevOne NMS, the role of polling a device or collecting flows is assigned to a peer. A peer may be:
- a standalone appliance with no resiliency or
- a pair of two appliances for the purpose of resilience and availability of the peer
To achieve the resiliency of a SevOne NMS peer, a standby secondary appliance, generally known as a Hot Standby Appliance (HSA) for a polling peer, or a Hot Standby for a Dedicated Netflow Collector (HDNC), is added to make a pair with the primary appliance.
Generally, the primary appliance (PAS or DNC) assumes the active role and is responsible for polling and/or flow collection from the devices designated on that peer. Initially, the secondary appliance assumes the passive role and is on standby, while it is constantly updated by replicating from the primary, and at the same time, consistently ensuring that the primary appliance is available and is able to communicate with the pair.
A secondary appliance may assume the active role if the primary appliance becomes unavailable due to any reason (after the specified duration of the failover time setting), thereby providing the capability for that peer to continue polling and/or receive flows and continue to perform the NMS functions.
For all internal NMS operations and inter-peer communication within the cluster, SevOne NMS uses the appliance's Base IP address, which is configured on the server's interface. A user can access the SevOne NMS graphical user interface portal by reaching the Base IP (or a resolved hostname) of the primary or secondary appliance, irrespective of whether the appliance is in the active or passive role.
For the interaction of a SevOne NMS peer with polled devices, or for devices sending flows to SevOne NMS, the Base IP address of the active appliance in the pair is used for the communication. If a failover happens, or if there is a network disconnect between the primary and secondary appliances, the passive appliance is promoted to the active role for that peer. In such transitions of the active role from the primary to the secondary appliance or vice versa, the transition of NMS services is transparent to the user, as long as the devices are able to communicate over the required network ports with the primary/secondary appliance's Base IP address.
In SevOne NMS, the Base IP address is a requirement and is configured and bound to the primary interface. There are different operational or business use-case scenarios where the user may require access to a SevOne NMS peer in a PAS/HSA pair using a single floating IP address, called a Virtual IP (VIP) address, that is not bound to any single server's physical interface.
➤ Use-Cases
Below are a few common use-case scenarios where a user may prefer to have access to a NMS peer pair using a single Virtual IP address.
Single point of access to NMS Cluster Graphical User Interface portal
A SevOne NMS cluster can be accessed using the IP address or FQDN of any of the member appliances in the cluster for the graphical user interface portal access. SevOne NMS administrators may want to provide access to the cluster using a single IP address or FQDN which points to a single peer - it may be the Cluster Leader peer or any other peer in the cluster. A VIP configuration on that specific pair ensures that access to SevOne NMS will always be maintained via the active polling peer.
Poll devices via a single IP address
Generally, polling a device on a peer pair requires that the device is able to communicate with SevOne NMS via both the primary and the secondary appliances of the pair. In some configurations, for example where the user has strict access control for the devices, the device must have Access Control Lists (ACL) configured for both the primary and secondary NMS appliances. This increases the administrative overhead, especially if there are thousands of devices across the environment.
The user may prefer the ability to poll devices on a SevOne NMS peer using a single IP address, irrespective of whether the primary or the secondary appliance is the active poller at any given time. In such scenarios, a Virtual IP can be configured for the peer in SevOne NMS, and as long as the Virtual IP has network connectivity with the device on the required ports, polling can happen via the VIP.
Send Network Flows to SevOne NMS on a single IP address configured on Flow devices
Flow devices which send network flows to SevOne NMS require configuration of the device(s) to send flows to both, SevOne NMS primary and secondary appliances to ensure that flows are available when a failover happens.
To avoid the extra configuration and the extra bandwidth requirement to send the same flows to two different servers, the user may prefer to configure the flow device to send the flows to a single IP address. In such cases, the user may configure a VIP on SevOne NMS and configure the network flow device on their end to send the flows to the VIP address. No additional static route configuration is required on the NMS peer/DNC when devices are sending flows to the VIP as long as the user network allows the NetFlow traffic from the device to SevOne NMS peer appliances on the specific port configured for receiving flows on SevOne NMS.
Third-party Application Integration with SevOne NMS
The user may have third-party applications that integrate with SevOne NMS using the SevOne API. It may be a challenge for these applications to track the failure of a SevOne NMS appliance in a pair and to continue sending requests to the available appliance if the active appliance fails or becomes unavailable.
The user may configure a VIP on the target NMS Peer pair and then the third-party application can be configured to send the API requests to the required VIP address of the peer.
➤ Prerequisites
- SevOne NMS 6.x
- SevOne NMS Primary (PAS) IP address
- SevOne NMS Secondary (HSA) IP address
- Network address of the PAS/HSA
- Network IP Prefix of the PAS/HSA IP address
- IP address of the gateway of PAS/HSA network
- SevOne NMS Peer Id for the pair where the Virtual IP will be configured
- Virtual IP Address to be configured on the NMS Pair
- Virtual IP Address Network IP Prefix to be configured on the SevOne NMS Pair
- Network address of the Virtual IP
- Cluster Leader Active Appliance's IP Address
- Static Route Information for Virtual IP (optional, only if custom static routes are required)
Important: Configuration of Virtual IP is not supported with only a standalone peer. A secondary appliance must be added to the primary appliance before configuring Virtual IP on that peer pair.
If SevOne NMS peer is configured with a Virtual IP, you are required to use Static IP configuration for both, the Base IP and the Virtual IP in the NMS network configuration. Using DHCP may automatically overwrite the network configuration settings and result in an unexpected behavior such as, removal of the Virtual IP-specific configuration.
The Base IP address is configured as the default index (IPADDR) on the network interface and the Virtual IP is then set as the next index at IPADDR1 and PREFIX1.
SevOne has only tested Virtual IP configuration in SevOne NMS where the Virtual IP address belongs to the same network subnet as the Base IP address. There may be various different customer network scenarios specific to different environments, and it may work if the Virtual IP is on a different subnet, provided the customer network supports such configuration at the network/firewall level. This may even require additional custom static routes configured on SevOne NMS for meeting this requirement. However, this is not in scope of SevOne tested configurations.
The default route in SevOne NMS is always expected to be via the Base IP address. The default route cannot be changed to use the Virtual IP on any SevOne NMS peer.
➤ Configuration Steps
To configure a Virtual IP on SevOne NMS, the easiest way is to open a Command Line Interface (CLI) session as support user for the primary and secondary pair where the Virtual IP is to be configured. Execute the following steps.
Create Variables
Create variables by populating them in single-quotes, with the specific values that apply to your setup. These variables must be created and set on both the PAS and HSA.
pas_ip='<IP address of the PAS Appliance>'
hsa_ip='<IP address of HSA Appliance>'
base_network='<Base IP Network address>'
base_prefix='<Base Network IP address Network Prefix>'
base_gateway='<Base Network Default Gateway IP address>'
peer_id=$(mysqlconfig -Ne "select server_id from net.peers where \
primary_ip = HEX(INET6_ATON(\"$pas_ip\" )) ")
vip='<Virtual IP address>'
vip_prefix='<Virtual IP address Network Prefix>'
vip_network='<Virtual IP Network Address>'
# Optional (only required for Static Routes)
ext_network='<IP address for External Device/Application Network requiring static route>'
ext_prefix='<IP address Network Prefix for External Network>'
vip_gateway='<IP address of the Gateway for the External Network>'
for PAS
Using ssh, log into SevOne NMS appliance (PAS) as support.
ssh support@<PAS appliance>
Example
PAS$ pas_ip='10.168.117.48'
PAS$ hsa_ip='10.168.117.58'
PAS$ base_network='10.168.116.0'
PAS$ base_prefix='22'
PAS$ base_gateway='10.168.116.1'
PAS$ peer_id=$(mysqlconfig -Ne "select server_id from net.peers where \
primary_ip = HEX(INET6_ATON(\"$pas_ip\" )) ")
PAS$ vip='10.168.118.64'
PAS$ vip_prefix='22'
PAS$ vip_network='10.168.116.0'
# Optional (only required for Static Routes)
PAS$ ext_network='10.128.24.0'
PAS$ ext_prefix='22'
PAS$ vip_gateway='10.168.116.3'
for HSA
Using ssh, log into SevOne NMS appliance (HSA) as support.
ssh support@<HSA appliance>
Example
HSA$ pas_ip='10.168.117.48'
HSA$ hsa_ip='10.168.117.58'
HSA$ base_network='10.168.116.0'
HSA$ base_prefix='22'
HSA$ base_gateway='10.168.116.1'
HSA$ peer_id=$(mysqlconfig -Ne "select server_id from net.peers where \
secondary_ip = HEX(INET6_ATON(\"$hsa_ip\" )) ")
HSA$ vip='10.168.118.64'
HSA$ vip_prefix='22'
HSA$ vip_network='10.168.116.0'
# Optional (only required for Static Routes)
HSA$ ext_network='10.128.24.0'
HSA$ ext_prefix='22'
HSA$ vip_gateway='10.168.116.3'
Use Network Prefix for Virtual IP
To add the Virtual IP to the connection, you need your Virtual IP (VIP) address and the Network Prefix. To configure a Virtual IP, you must use the Network Prefix; the Netmask cannot be used.
- PREFIXn - The Network prefix used for all configurations except aliases and ippp devices. It takes precedence over NETMASK when both PREFIX and NETMASK are set.
- NETMASKn - The Subnet mask useful for aliases and ippp devices. For all other configurations, use PREFIX instead.
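The prefix and netmask carry the same information; a /22 prefix corresponds to 255.255.252.0. As an illustration (not part of the configuration steps), the dotted netmask can be derived from a prefix with plain bash arithmetic; the appliance's ipcalc utility does the same job and is used later for static routes.

```shell
# Sketch: convert a network prefix (e.g. 22) to its dotted netmask in bash.
prefix_to_netmask() {
  local prefix=$1
  # Shift a full 32-bit mask left by the host-bit count, then re-mask to 32 bits.
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8)  & 255 )) $((  mask        & 255 ))
}

prefix_to_netmask 22   # 255.255.252.0
prefix_to_netmask 24   # 255.255.255.0
```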
Configure Virtual IP on Primary (PAS) and Secondary (HSA)
Perform the following steps on both, PAS and HSA appliances, to configure the Virtual IP on the NMS pair.
Check Existing Network Connections
for PAS
PAS$ nmcli connection show
Example: Network Connections
PAS$ nmcli connection show
NAME UUID TYPE DEVICE
ens160 afe20483-ba17-4955-aed6-1706093e8b88 ethernet ens160
for HSA
HSA$ nmcli connection show
Example: Network Connections
HSA$ nmcli connection show
NAME UUID TYPE DEVICE
ens160 d6ad7e7f-87bb-42f4-9481-bff76859081d ethernet ens160
Set appropriate Network Connection Name as a Variable
In the following example, connection name is ens160. You may change the network connection name to the name specific to your environment based on the output from the previous command.
The connection names may differ on a physical PAS or vPAS. In the example above, it is ens160 but it could be any other valid network connection name such as, en0, as an example.
Example: for PAS
PAS$ connection_name='ens160'
Example: for HSA
HSA$ connection_name='ens160'
Copy Current Connection's Configuration File for Passive Appliance VIP Configuration
Copy the current configuration file, which will be used when the appliance is the passive appliance in slave (i.e., follower) replication mode.
Example: for PAS
PAS$ cp -a /etc/sysconfig/network-scripts/ifcfg-${connection_name} \
/etc/sysconfig/network-scripts/vip-disabled-ifcfg-${connection_name}
Example: for HSA
HSA$ cp -a /etc/sysconfig/network-scripts/ifcfg-${connection_name} \
/etc/sysconfig/network-scripts/vip-disabled-ifcfg-${connection_name}
Update Network Connection to add Virtual IP Address and Network Prefix for Active Appliance VIP Configuration
You must update the active connection identified in Check Existing Network Connections with the Virtual IP address and its Network Prefix. The following command updates the network configuration files but does not automatically calculate the IP subnet to identify the Network Prefix.
Set Virtual IP Address and Network Prefix
for IPv4
Example: for PAS
PAS$ nmcli connection modify ${connection_name} +ipv4.addresses "$vip/$vip_prefix"
Example: for HSA
HSA$ nmcli connection modify ${connection_name} +ipv4.addresses "$vip/$vip_prefix"
for IPv6
Example: for PAS
PAS$ nmcli connection modify ${connection_name} +ipv6.addresses "$vip/$vip_prefix"
Example: for HSA
HSA$ nmcli connection modify ${connection_name} +ipv6.addresses "$vip/$vip_prefix"
Configure Network Configuration Scripts for Network Connection to use Virtual IP
SevOne NMS requires that the default route is set via the Base IP address. It is not supported to have the default route to be configured over the Virtual IP address since all inter-peer communication in SevOne NMS happens via the Base IP address. Static routes may be configured for custom requirements - please refer to section Custom Static Route for Virtual IP (optional).
Verify Network Configuration Files (no Static Routes)
The network configuration files must be verified to ensure that they are updated correctly.
for PAS
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
If using DHCP, it may automatically override the settings of this configuration and have an unexpected behavior such as, removal of the Virtual IP as it uses the same configuration file. The Static IP configuration must be used for both, the Base IP and the Virtual IP.
Example: Verify updated network configuration files
PAS$ cd /etc/sysconfig/network-scripts
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=afe20483-ba17-4955-aed6-1706093e8b88
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.48
PREFIX=22
GATEWAY=10.168.116.1
IPADDR1=10.168.118.64
PREFIX1=22
for HSA
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
If using DHCP, it may automatically override the settings of this configuration and have an unexpected behavior such as, removal of the Virtual IP as it uses the same configuration file. The Static IP configuration must be used for both, the Base IP and the Virtual IP.
Example: Verify updated network configuration files
HSA$ cd /etc/sysconfig/network-scripts/
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=d6ad7e7f-87bb-42f4-9481-bff76859081d
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.58
PREFIX=22
GATEWAY=10.168.116.1
IPADDR1=10.168.118.64
PREFIX1=22
Custom Static Route for Virtual IP (optional)
The default route is always required to be configured via the Base IP address on SevOne NMS as all Inter-Peer Communication (IPC) happens via the Base IP address.
Other non-IPC network traffic with SevOne NMS peers such as, polling of devices, third-party applications integration, etc. may be restricted on the Base IP network in the customer environment based on the network policies. In such cases, customer may have to set up custom static routes based on their own network environment for all such non-IPC traffic to happen via the Virtual IP network. By default, it always uses the Base IP as the default route and may fail to communicate.
Static Routes (if required)
- Configure the static routes for the network connection. The static routes are added to the route configuration file for the network connection. Once configured as shown in the steps below, the settings are persistent and remain even after a system restart. The existing network routes configured on the server are maintained. SevOne recommends backing up the existing route configurations and ensuring that the new static routes do not conflict with the existing routes.
for PAS
PAS$ ls -l /etc/sysconfig/network-scripts/route-${connection_name}
PAS$ cat /etc/sysconfig/network-scripts/route-${connection_name}
PAS$ cp -ap /etc/sysconfig/network-scripts/route-${connection_name} \
/etc/sysconfig/network-scripts/backup-route-${connection_name}.$(date +%Y%m%d-%H%M%S)
for HSA
HSA$ ls -l /etc/sysconfig/network-scripts/route-${connection_name}
HSA$ cat /etc/sysconfig/network-scripts/route-${connection_name}
HSA$ cp -ap /etc/sysconfig/network-scripts/route-${connection_name} \
/etc/sysconfig/network-scripts/backup-route-${connection_name}.$(date +%Y%m%d-%H%M%S)
- Static routes can be set using the same format as the Linux Command Line Interface (CLI) ip route command. There are various configuration options for static routes; however, the following shows how to configure one persistent static route to be managed by SevOne NMS. The static route configuration must be handled manually if additional persistent static routes are needed or if the static routes require specific options configured. For more details, please refer to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-static-routes_configuring-and-managing-networking.
Configure Static Routes for network connection
Important: SevOne NMS only supports the scripted method of configuration for static routes in the format shown below. The command format using predefined variables below cannot be used for additional static route options or if configuring more than one static route. For additional options and/or static routes, configure and maintain the static routes manually by following the operating system vendor link above. It is a good practice to always maintain a backup of the manually configured static route configuration files.
for PAS
PAS$ cd /etc/sysconfig/network-scripts
PAS$ echo "ADDRESS0=${ext_network}" > \
/etc/sysconfig/network-scripts/route-${connection_name}
PAS$ echo "NETMASK0=$(ipcalc ${ext_network}/${ext_prefix} \
--netmask | sed -n -e 's/^.*NETMASK=//p')" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
PAS$ echo "GATEWAY0=${vip_gateway}" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
PAS$ echo "OPTIONS0=\"src ${vip}\"" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
for HSA
HSA$ cd /etc/sysconfig/network-scripts
HSA$ echo "ADDRESS0=${ext_network}" > \
/etc/sysconfig/network-scripts/route-${connection_name}
HSA$ echo "NETMASK0=$(ipcalc ${ext_network}/${ext_prefix} \
--netmask | sed -n -e 's/^.*NETMASK=//p')" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
HSA$ echo "GATEWAY0=${vip_gateway}" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
HSA$ echo "OPTIONS0=\"src ${vip}\"" \
>> /etc/sysconfig/network-scripts/route-${connection_name}
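The four echo commands append one key per line to the route file. As a sanity check, the same content can be assembled in a variable and reviewed before writing anything to /etc/sysconfig. The sketch below uses this guide's example values, with NETMASK0 pre-computed for /22 (on the appliance it is derived with ipcalc as shown in the commands above).

```shell
# Sketch: assemble the route-<connection> file content for review before writing.
# Values are this guide's example values; NETMASK0 is the /22 netmask that
# ipcalc would produce on the appliance.
ext_network='10.128.24.0'
vip_gateway='10.168.116.3'
vip='10.168.118.64'
netmask='255.255.252.0'

route_file=$(printf 'ADDRESS0=%s\nNETMASK0=%s\nGATEWAY0=%s\nOPTIONS0="src %s"' \
  "$ext_network" "$netmask" "$vip_gateway" "$vip")
printf '%s\n' "$route_file"
```

Once the content looks right, redirecting it into route-${connection_name} is equivalent to running the four echo commands.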
Verify Network Configuration Files (optional, only if including Static Routes)
The network configuration files must be verified to ensure that they are updated correctly.
for PAS
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
# Optional (this file exists only if Static Routes are configured)
PAS$ cat /etc/sysconfig/network-scripts/route-${connection_name}
Example: Verify updated network configuration files
PAS$ cd /etc/sysconfig/network-scripts
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=afe20483-ba17-4955-aed6-1706093e8b88
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.48
PREFIX=22
GATEWAY=10.168.116.1
IPADDR1=10.168.118.64
PREFIX1=22
# Optional (this file exists only if Static Routes are configured)
PAS$ cat /etc/sysconfig/network-scripts/route-${connection_name}
ADDRESS0=10.128.24.0
NETMASK0=255.255.252.0
GATEWAY0=10.168.116.3
OPTIONS0="src 10.168.118.64"
for HSA
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
# Optional (this file exists only if Static Routes are configured)
HSA$ cat /etc/sysconfig/network-scripts/route-${connection_name}
Example: Verify updated network configuration files
HSA$ cd /etc/sysconfig/network-scripts
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=d6ad7e7f-87bb-42f4-9481-bff76859081d
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.58
PREFIX=22
GATEWAY=10.168.116.1
IPADDR1=10.168.118.64
PREFIX1=22
# Optional (this file exists only if Static Routes are configured)
HSA$ cat /etc/sysconfig/network-scripts/route-${connection_name}
ADDRESS0=10.128.24.0
NETMASK0=255.255.252.0
GATEWAY0=10.168.116.3
OPTIONS0="src 10.168.118.64"
Move and Link Updated Network Connection's Configuration Files
Move Appropriate Files
Move the updated configuration file /etc/sysconfig/network-scripts/ifcfg-<connection_name> to /etc/sysconfig/network-scripts/vip-enabled-ifcfg-<connection_name> file which will be used by SevOne NMS when the appliance is the Active appliance in master (i.e., leader) replication mode.
for PAS
PAS$ mv /etc/sysconfig/network-scripts/ifcfg-${connection_name} \
/etc/sysconfig/network-scripts/vip-enabled-ifcfg-${connection_name}
for HSA
HSA$ mv /etc/sysconfig/network-scripts/ifcfg-${connection_name} \
/etc/sysconfig/network-scripts/vip-enabled-ifcfg-${connection_name}
Link Appropriate Files
Create the soft links to the correct Virtual IP configuration files based on the appliance's current role in SevOne NMS to the network connection configuration file. The network connection configuration file ifcfg-${connection_name} on the Active appliance must point to the vip-enabled-ifcfg-${connection_name} file and the Passive appliance must point to the vip-disabled-ifcfg-${connection_name} file.
for PAS - add Link and Verify
PAS$ cd /etc/sysconfig/network-scripts/
PAS$ ln -sf vip-enabled-ifcfg-${connection_name} ifcfg-${connection_name}
PAS$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
for HSA - add Link and Verify
HSA$ cd /etc/sysconfig/network-scripts/
HSA$ ln -sf vip-disabled-ifcfg-${connection_name} ifcfg-${connection_name}
HSA$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
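The two link commands encode one rule: the active (leader) side of the pair points ifcfg at the vip-enabled file, and the passive (follower) side points at the vip-disabled file. That rule can be sketched as a small helper; the "leader"/"follower" role string here is a hypothetical input for illustration, not a value SevOne NMS exposes under that name.

```shell
# Sketch: choose the VIP config file for a given replication role.
# The "leader"/"follower" role string is a hypothetical input for illustration.
vip_config_for_role() {
  local role=$1 connection_name=$2
  if [ "$role" = leader ]; then
    echo "vip-enabled-ifcfg-${connection_name}"
  else
    echo "vip-disabled-ifcfg-${connection_name}"
  fi
}

vip_config_for_role leader ens160     # vip-enabled-ifcfg-ens160
vip_config_for_role follower ens160   # vip-disabled-ifcfg-ens160
```

If the pair later fails over, the links must be swapped accordingly, since the ifcfg symlink is what the network scripts read on activation.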
Update 'virtual_ip' for Current Peer (in the Config Database instance of the Cluster Leader)
To enable SevOne NMS to use the Virtual IP Address, update the virtual_ip column in the peers table for the pair. The update to the database must be performed from the Cluster Leader appliance by updating the config database instance.
Identify IP Address of Cluster Leader
PAS$ cluster_master_ip=$(mysqlconfig -Ne "select ip_normalize(ip) \
from net.peers where master = 1")
Set a variable with the command to be executed
PAS$ cmd=$(echo "update net.peers set virtual_ip = HEX(INET6_ATON( '${vip}' )) \
where server_id = $peer_id")
SSH into the Cluster Leader to update the net.peers table and the virtual_ip column
PAS$ ssh $cluster_master_ip "mysqlconfig -e \"$cmd\" "
on PAS: Verify the update has completed successfully
PAS$ mysqlconfig -e "select * from net.peers where server_id = $peer_id \G"
on HSA: Verify the update has completed successfully
HSA$ mysqlconfig -e "select * from net.peers where server_id = $peer_id \G"
Example: Update virtual_ip in the peers table from the Cluster Leader config database
on PAS
PAS$ cluster_master_ip=$(mysqlconfig -Ne "select ip_normalize(ip) \
from net.peers where master = 1")
PAS$ cmd=$(echo "update net.peers set virtual_ip = HEX(INET6_ATON('${vip}')) \
where server_id = ${peer_id}")
PAS$ ssh $cluster_master_ip "mysqlconfig -e \"$cmd\" "
PAS$ mysqlconfig -e "select * from net.peers where server_id = $peer_id \G"
************************* 1. row *************************
server_id: 1
name: jb-vip-01
ip: 0AA87530
primary_ip: 0AA87530
secondary_ip: 0AA8753A
active_appliance: PRIMARY
disabled: 0
virtual_ip: 0AA87640
master: 1
user: support
pass:
capacity: 10000
interface_limit: 33
flow_limit: 10000
netflow_interface_count: 0
server_load: 614
flow_load: 0
model: PAS
proxy_port: 8123
proxy_user: 99bnqiHZEpVSRH/61I/xuQ==
proxy_pass: 99bnqiHZEpVSRH/61I/xuQ==
group_poller_device_count: 0
group_poller_object_count: 0
selfmon_device_count: 1
selfmon_object_count: 69
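In the output above, the ip, primary_ip, secondary_ip, and virtual_ip columns hold the HEX(INET6_ATON(...)) encoding of the address; for an IPv4 address this is simply the four octets rendered as hex digits. A quick shell sanity check, using the VIP 10.168.118.64 from the example:

```shell
# Encode an IPv4 address the way net.peers stores it: each octet as two hex digits.
vip="10.168.118.64"
hex=$(printf '%02X' $(echo "$vip" | tr '.' ' '))
echo "$hex"    # 0AA87640 - matches virtual_ip in the example output above
```

This makes it easy to confirm by eye that a value in the peers table corresponds to the address you intended to configure.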
Update MySQL Permissions (Optional)
This applies only if specific static routes are enabled.
If specific static routes are enabled for the pair during SevOne NMS Virtual IP configuration, MySQL permissions must be granted for the IP address $vip across all peers in the cluster. Execute the steps below.
- Identify the IP Address of the Cluster Leader.
ssh support@<PAS appliance>
PAS$ cluster_master_ip=$(mysqlconfig -Ne "select ip_normalize(ip) \
from net.peers where master = 1")
- SSH into the Cluster Leader and execute SevOne-fix-mysql-permissions.
PAS$ podman exec -it nms-nms-nms /bin/bash
PAS$ ssh $cluster_master_ip "/usr/local/scripts/SevOne-fix-mysql-permissions"
- Verify the pair (PAS/HSA) can connect to the Cluster Leader without being rejected.
on PAS
ssh support@<PAS appliance>
PAS$ mysql -h $cluster_master_ip -u support -p
on HSA
ssh support@<HSA appliance>
HSA$ mysql -h $cluster_master_ip -u support -p
➤ Reboot Appliances
Reboot the appliances to ensure that the new network configurations have been applied.
on PAS
PAS$ podman exec -it nms-nms-nms /bin/bash
PAS$ SevOne-shutdown reboot
on HSA
HSA$ podman exec -it nms-nms-nms /bin/bash
HSA$ SevOne-shutdown reboot
➤ Verify Virtual IP
Verify Virtual IP (no Static Routes)
The Virtual IP must be up only on the active appliance. Ensure that the Virtual IP on the passive appliance is not up. Execute the following commands.
on PAS
# <connection_name> must be your connection
PAS$ ip addr show <connection_name>
PAS$ ip route show
PAS$ route -n
Example
PAS$ ip addr show ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:be:3b:9f brd ff:ff:ff:ff:ff:ff
inet 10.168.117.48/22 brd 10.168.119.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 10.168.118.64/22 brd 10.168.119.255 scope global secondary noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::dfe8:13cd:2fde:454d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
PAS$ ip route show
default via 10.168.116.1 dev ens160 proto static metric 100
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.117.48 metric 100
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.118.64 metric 100
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1
PAS$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.168.116.1 0.0.0.0 UG 100 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
on HSA
# <connection_name> must be your connection
HSA$ ip addr show <connection_name>
HSA$ ip route show
HSA$ route -n
Example
HSA$ ip addr show ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:be:81:da brd ff:ff:ff:ff:ff:ff
inet 10.168.117.58/22 brd 10.168.119.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::9696:45e5:b048:1b93/64 scope link noprefixroute
valid_lft forever preferred_lft forever
HSA$ ip route show
default via 10.168.116.1 dev ens160 proto static metric 100
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.117.58 metric 100
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1
HSA$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.168.116.1 0.0.0.0 UG 100 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
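The manual check above can also be scripted. The sketch below greps sample `ip addr` output (embedded here so it runs anywhere; on a live appliance you would pipe the output of `ip addr show <connection_name>` instead) for the VIP 10.168.118.64 used in the examples:

```shell
# Report whether a VIP appears as an address in `ip addr` output.
# Sample output is embedded; on an appliance, pipe `ip addr show <connection_name>`.
vip="10.168.118.64"
sample='inet 10.168.117.48/22 brd 10.168.119.255 scope global noprefixroute ens160
inet 10.168.118.64/22 brd 10.168.119.255 scope global secondary noprefixroute ens160'
if echo "$sample" | grep -q "inet ${vip}/"; then
  echo "VIP ${vip} is UP (expected only on the active appliance)"
else
  echo "VIP ${vip} is not present (expected on the passive appliance)"
fi
```

Running the equivalent check on both appliances should report the VIP as UP on exactly one of them.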
Verify Virtual IP (Static Routes configured)
The Virtual IP must be up only on the active appliance. Ensure that the Virtual IP on the passive appliance is not up. Execute the following commands.
on PAS
# <connection_name> must be your connection
PAS$ ip addr show <connection_name>
PAS$ ip route show
PAS$ route -n
Example
PAS$ ip addr show ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:be:3b:9f brd ff:ff:ff:ff:ff:ff
inet 10.168.117.48/22 brd 10.168.119.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet 10.168.118.64/22 brd 10.168.119.255 scope global secondary noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::dfe8:13cd:2fde:454d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
PAS$ ip route show
default via 10.168.116.1 dev ens160 proto static metric 100
10.128.24.0/22 via 10.168.116.3 dev ens160 proto static src 10.168.118.64
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.117.48 metric 100
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.118.64 metric 100
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1
PAS$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.168.116.1 0.0.0.0 UG 100 0 0 ens160
10.128.24.0 10.168.116.3 255.255.252.0 UG 0 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
on HSA
# <connection_name> must be your connection
HSA$ ip addr show <connection_name>
HSA$ ip route show
HSA$ route -n
Example
HSA$ ip addr show ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:be:81:da brd ff:ff:ff:ff:ff:ff
inet 10.168.117.58/22 brd 10.168.119.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::9696:45e5:b048:1b93/64 scope link noprefixroute
valid_lft forever preferred_lft forever
HSA$ ip route show
default via 10.168.116.1 dev ens160 proto static metric 100
10.168.116.0/22 dev ens160 proto kernel scope link src 10.168.117.58 metric 100
172.17.0.0/24 dev docker0 proto kernel scope link src 172.17.0.1
HSA$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.168.116.1 0.0.0.0 UG 100 0 0 ens160
10.168.116.0 0.0.0.0 255.255.252.0 U 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
REMOVAL VIRTUAL IP (VIP) CONFIGURATION
This section provides the details on how to remove the VIP configuration from SevOne NMS. If the VIP configuration is no longer required, the following steps enable you to remove it from any NMS primary or secondary appliance.
- It is assumed that steps in section CONFIGURE VIRTUAL IP (VIP) were used to configure Virtual IP. If any other process was followed, the steps to remove Virtual IP may not work as expected.
- If any custom/static routes are configured and still required, you must retain those routing configurations. The steps below remove the interface rules file. Please reconfigure the rules file as per the new requirements without the Virtual IP.
- Ensure that all devices and NMS configurations currently using the Virtual IP for polling or receiving flows are able to communicate over the Base IP (if not already configured) to minimize data loss when removing the Virtual IP configuration. Ensure that the devices are configured to communicate via the Base IP of both the Primary and Secondary appliances of the pair.
Perform the following steps on both the PAS (primary) and HSA (secondary) appliances of the NMS peer for which the Virtual IP configuration needs to be removed.
- Check existing network connections.
for PAS
PAS$ nmcli connection show
Example: Network Connections
PAS$ nmcli connection show
Output:
NAME    UUID                                  TYPE      DEVICE
ens160  afe20483-ba17-4955-aed6-1706093e8b88  ethernet  ens160
for HSA
HSA$ nmcli connection show
Example: Network Connections
HSA$ nmcli connection show
Output:
NAME    UUID                                  TYPE      DEVICE
ens160  d6ad7e7f-87bb-42f4-9481-bff76859081d  ethernet  ens160
- Set the appropriate network connection name as a variable. In the following example, the connection name is ens160. Change the network connection name to the name specific to your environment based on the output from the previous command.
Note: You must select the correct connection by setting the variable with the correct connection name, as there may be other active connections in your environment; for example, a docker setup on your SevOne NMS. The connection names may differ on a physical PAS or vPAS. In the example above it is ens160, but it could be any other valid network connection name, such as en0.
Example: for PAS
PAS$ connection_name='ens160'
Example: for HSA
HSA$ connection_name='ens160'
- Backup the existing configuration files.
for PAS
PAS$ cd /etc/sysconfig/network-scripts
PAS$ mkdir vip-bkup-$(date +%d%b%y)
PAS$ cp -ap -L *${connection_name}* vip-bkup-$(date +%d%b%y)
for HSA
HSA$ cd /etc/sysconfig/network-scripts
HSA$ mkdir vip-bkup-$(date +%d%b%y)
HSA$ cp -ap -L *${connection_name}* vip-bkup-$(date +%d%b%y)
- Update the cluster's peers table. Perform the commands on the PAS.
for PAS
PAS$ peer_id=$(mysqldata -BNe "select value from local.settings \
where setting='server_id';" )
PAS$ cluster_master_ip=$(mysqlconfig net -BNe "select ip_normalize(ip) \
from peers where master = 1")
PAS$ cmd=$(echo "update net.peers set virtual_ip = NULL \
where server_id = ${peer_id}")
PAS$ ssh $cluster_master_ip "mysqlconfig -e \"$cmd\" "
PAS$ mysqlconfig -e "select * from net.peers where server_id = ${peer_id} \G"
- Update the operating system network configuration.
for PAS
PAS$ cd /etc/sysconfig/network-scripts
PAS$ unlink ifcfg-ens160
PAS$ mv vip-disabled-ifcfg-ens160 ifcfg-ens160
for HSA
HSA$ cd /etc/sysconfig/network-scripts
HSA$ unlink ifcfg-ens160
HSA$ mv vip-disabled-ifcfg-ens160 ifcfg-ens160
- Remove the network configuration files for the Virtual IP and static routes.
Important: This step removes the interface-specific routes. If static routes are configured on this interface for any purpose other than the Virtual IP, please perform the necessary steps manually.
for PAS
PAS$ rm vip-enabled-ifcfg-ens160 route-ens160
for HSA
HSA$ rm vip-enabled-ifcfg-ens160 route-ens160
- Verify the network configuration file has no Virtual IPs configured. Also, confirm that the Base IP is correctly configured in the network configuration file.
for PAS
PAS$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
Example
PAS$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
-rw-r--r--. 1 root root 334 Feb 11 09:26 /etc/sysconfig/network-scripts/ifcfg-ens160
PAS$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=afe20483-ba17-4955-aed6-1706093e8b88
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.48
PREFIX=22
GATEWAY=10.168.116.1
for HSA
HSA$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
Example
HSA$ ls -l /etc/sysconfig/network-scripts/ifcfg-${connection_name}
-rw-r--r--. 1 root root 334 Feb 11 09:28 /etc/sysconfig/network-scripts/ifcfg-ens160
HSA$ cat /etc/sysconfig/network-scripts/ifcfg-${connection_name}
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens160
UUID=d6ad7e7f-87bb-42f4-9481-bff76859081d
DEVICE=ens160
ONBOOT=yes
IPADDR=10.168.117.58
PREFIX=22
GATEWAY=10.168.116.1
Important: SevOne strongly recommends that both the primary and secondary appliances are restarted to ensure that the network configuration has been applied correctly after removal of the Virtual IP. If a data collection outage is to be avoided, please restart the secondary appliance first.
for HSA
HSA$ podman exec -it nms-nms-nms /bin/bash
HSA$ SevOne-shutdown reboot
for PAS
PAS$ podman exec -it nms-nms-nms /bin/bash
PAS$ SevOne-shutdown reboot
Important: If the removal of the Virtual IP configuration is performed as part of the steps to CHANGE VIRTUAL IP (VIP) CONFIGURATION, a restart of the NMS appliances is not required at this stage. You may complete the removal and reconfiguration of the Virtual IP first and then reboot the appliances, to avoid multiple reboots.
CHANGE VIRTUAL IP (VIP) CONFIGURATION
Before changing the Virtual IP configuration, ensure that all devices and NMS configurations currently using the VIP for polling or receiving flows will be able to communicate over the new Virtual IP (if not already configured) to minimize data loss after the change. Ensure that the devices are configured to communicate via the new Virtual IP of both the Primary and Secondary appliances of the pair.
To change the Virtual IP configuration, execute the steps in REMOVAL VIRTUAL IP (VIP) CONFIGURATION followed by the steps in CONFIGURE VIRTUAL IP (VIP).
CHANGE IP ADDRESS
To change the IP address on a SevOne appliance, please contact SevOne Support.
CHANGE IP ADDRESS USING 'SevOne-change-ip' COMMAND
SevOne-change-ip is an interactive command that guides you through changing the IP address of a SevOne appliance. It makes all the necessary updates to the peers table and also runs the necessary fix commands.
- All appliances in the cluster must be reachable at the time of running the SevOne-change-ip command. If any appliance in the cluster is currently unreachable, then the change of IP address must not be performed as the changes may fail to propagate to the unreachable peers.
- Upon completion of the change IP address command, you will be prompted to reboot your system.
Warning: The SevOne-change-ip command does not support Virtual IP, bonded interfaces, or multiple interfaces present on the device. For such configurations, use the manual procedure documented below.
Run the interactive command to change the IP address of a SevOne appliance.
Example
podman exec -it nms-nms-nms /bin/bash
SevOne-change-ip -i
Output:
=== Interface name ens160
=== Current Peer Info:
--- Bootproto: /etc/sysconfig/network-scripts/ifcfg-ens160:none
--- Hostname: sevone
--- IP: /etc/sysconfig/network-scripts/ifcfg-ens160:10.129.25.96
--- Netmask:
--- Broadcast:
--- Gateway:
Enter the new hostname (default: sevone): nw-master
Enter the new IP address (default: /etc/sysconfig/network-scripts/ifcfg-ens160:10.129.25.96):
10.129.27.166
Enter the new netmask address (default: ): 255.255.252.0
Enter the new brd address (default: ): 10.129.27.255
Enter the new gateway address (default: ): 10.129.24.1
=== Backing up configuration files
=== Updating IP Address
=== Writing Host File Header
=== Changing the hostname
=== Setting for ens160
=== Adding Hosts file settings
=== Updating kafka-server.properties
kafka: stopped
kafka: started
=== Updating server2.cnf
=== Updating api IP address
<<< Reading API directory from '/config/appliance/settings/api/directory'.
>>> Writing '10.129.27.166' to '/config/appliance/settings/api/ip'.
--- Reading api.wsdl...
--- Replacing "www.sevone.com\/soap3" with "10.129.27.166\/soap3".
<<< Clearing WSDL cache.
--- All done.
=== Updating peers table for Others
=== Updating peers table for IP = 10.129.27.166
--- Updating primary IP
=== Checking that we updated the peers table correctly
--- Successfully updated the peers table
=== Preventing wrongful failover
--- Successfully updated the peers table
=== Updating peer replication
Setting replication master for 10.129.25.47
Setting replication master for 10.129.26.192 config
Setting replication master for 10.129.26.192 data
=== Printing updated peers table
--- Peer 2:
--- Hostname: PEER
--- IP: 10.129.25.47
--- Primary IP: 10.129.25.47
--- Peer 1:
--- Hostname: sevone
--- IP: 10.129.27.166
--- Primary IP: 10.129.27.166
--- Secondary IP: 10.129.26.192
=== Updating web proxy configuration
--- Updating web proxy config on 10.129.25.96
--- Web proxy will restart on reboot
--- Successfully updated web proxy config on 10.129.25.96
--- Updating web proxy config on 10.129.26.192
--- Successfully updated web proxy config on 10.129.26.192
--- Updating web proxy config on 10.129.25.47
--- Successfully updated web proxy config on 10.129.25.47
=== Updating peer MySQL permissions
--- Updating MySQL permission settings for peer(10.129.25.96)
--- Updating MySQL permission settings for peer(10.129.26.192)
--- Updating MySQL permission settings for peer(10.129.25.47)
=== Removing autogenerated MySQL UUID's
=== Done Updating IP Address
=== Removing backups
You must restart the appliance for these changes to take effect
Restart now? [yes/no] yes
Rebooting now. Run 'SevOne-fix-ssh-keys' after reboot.
➤ Change IP Address Manually
The IP address of a SevOne appliance can be changed manually by using NMCLI or NMTUI.
Where applicable, it is recommended that the IP address be changed using the SevOne-change-ip command rather than manually. However, in situations where the command cannot be used, the changes must be made manually to the network configuration and the NMS database tables.
using NMCLI
Execute the following steps to change the IP address manually using NMCLI.
- Using ssh, log into the SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Execute the following command to view all the interfaces.
nmcli
Output:
ens160: connected to ens160
        "VMware VMXNET3"
        ethernet (vmxnet3), 00:50:56:BE:C7:12, hw, mtu 1500
        ip4 default
        inet4 10.129.27.33/22
        route4 0.0.0.0/0
        route4 10.129.24.0/22
        inet6 fe80::250:56ff:febe:c712/64
        route6 fe80::/64
        route6 ff00::/8

docker0: unmanaged
        "docker0"
        bridge, 02:42:A9:B2:4F:F5, sw, mtu 1500

lo: unmanaged
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
        servers: 10.168.0.50 10.168.16.50 10.205.8.50
        domains: wifi.sevone.com wilm.sevone.com sevone.com network.qa
        interface: ens160
- To modify an interface, execute the following command.
nmcli connection modify <interface name> ipv4.address <ip_address/prefix>
Example
nmcli connection modify ens160 ipv4.address 10.129.27.55/22
Note: The default gateway (default via) must be in the subnet of the new IP address. However, if the default gateway also needs to be modified, execute the following command.
nmcli connection modify ens160 ipv4.gateway <Gateway_IP>
Example
nmcli connection modify ens160 ipv4.gateway 10.129.24.0
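Whether the gateway lies in the subnet of the new address can be verified with a little shell arithmetic before committing the change. A minimal sketch, using the example values 10.129.27.55/22 and gateway 10.129.24.1 (the helper `ip_to_int` is illustrative, not a SevOne tool):

```shell
# Verify that the gateway is inside the subnet defined by ip/prefix (IPv4 only).
ip_to_int() {
  # Convert a dotted-quad address to a 32-bit integer.
  echo "$1" | { IFS=. read -r a b c d; echo $(( (a<<24) | (b<<16) | (c<<8) | d )); }
}
ip="10.129.27.55"; prefix=22; gw="10.129.24.1"
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
if [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$gw") & mask )) ]; then
  echo "gateway ${gw} is in the subnet of ${ip}/${prefix}"
else
  echo "gateway ${gw} is NOT in the subnet of ${ip}/${prefix}"
fi
```

Both addresses are masked down to their network address (here 10.129.24.0); if the results match, the gateway is reachable on-link.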
using NMTUI
Execute the following steps to change the IP address manually using NMTUI.
- Using ssh, log into the SevOne NMS appliance as support.
ssh support@<NMS appliance>
- To modify an interface, execute the following command.
nmtui edit <interface name>
Example
nmtui edit ens160
Navigate to the IP address to modify
Verify IP Address Change at Network-Level
For both NMCLI and NMTUI
Example: View /etc/sysconfig/network-scripts/ifcfg-ens160 configuration file
cat /etc/sysconfig/network-scripts/ifcfg-ens160
Output:
# Generated by dracut initrd
NAME=ens160
DEVICE=ens160
ONBOOT=yes
NETBOOT=yes
UUID=e2f7df86-55c8-4227-a00d-0f048e030b1a
IPV6INIT=yes
BOOTPROTO=none
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPADDR=10.129.27.33
PREFIX=22
GATEWAY=10.129.24.0
Change IP Address in NMS Configuration
Depending on the NMS appliance on which the network IP address change has been performed, the NMS configuration for that NMS appliance must be updated to reflect the change. Based on the appliance type (primary / secondary) and current role (active / passive), use one of the applicable options below.
IP Address Changed for 'active' Cluster Leader Appliance
Execute the following step to change the replication IP when the IP address of the 'active' Cluster Leader appliance has been changed. The replication host must be updated on all other active peers. The script below must be executed on the Cluster Leader.
NEWMASTERIP="<new_master_or_leader_ip>";
PEERIPS=$(ssh ${NEWMASTERIP} "/usr/local/scripts/mysqlconfig net -e \"SELECT ip_normalize(ip) \
FROM peers WHERE master != 1 \" --skip-column-names");
# The following loop updates every other peer in the cluster
for IP in $PEERIPS; do
echo "--- updating replication source on $IP"
ssh ${IP} "/usr/local/scripts/mysqlconfig net -e \"STOP SLAVE; \
CHANGE MASTER TO master_host='${NEWMASTERIP}', \
master_port=3307; START SLAVE \" ";
done;
IP Address Changed for 'active' Peer
Execute the following step to change the replication IP when the IP address has been changed for an 'active' peer (i.e., other than the Cluster Leader appliance). If the IP address has been changed for the active appliance in a pair, you must execute the following commands on its passive (secondary) appliance.
mysqldata -e "STOP SLAVE; CHANGE MASTER TO master_host='<MASTER_IP>', \
master_port=3306; START SLAVE"
mysqlconfig -e "STOP SLAVE; CHANGE MASTER TO master_host='<MASTER_IP>', \
master_port=3307; START SLAVE"
Update NMS 'net.peers' Table
Scenario# 1: The appliance with the changed IP address is currently 'active' (or is not in an HSA pair). Update both ip and the applicable primary/secondary IP column.
mysqlconfig -e "update peers set \
ip=HEX(INET6_ATON('<IP_ADDRESS>')), \
[primary|secondary]_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
Example: If primary appliance and 'active' of the pair
mysqlconfig -e "update peers set \
ip=HEX(INET6_ATON('<IP_ADDRESS>')),\
primary_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
OR
Example: If secondary appliance and 'active' of the pair
mysqlconfig -e "update peers set \
ip=HEX(INET6_ATON('<IP_ADDRESS>')),\
secondary_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
OR
Example: If not in a HSA pair
mysqlconfig -e "update peers set \
ip=HEX(INET6_ATON('<IP_ADDRESS>')),\
primary_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
Scenario# 2: The appliance with the changed IP address is currently 'passive'. Update only the applicable primary/secondary IP column.
mysqlconfig -e "update peers set \
[primary|secondary]_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
Example: If primary appliance and passive of the pair
mysqlconfig -e "update peers set \
primary_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
OR
Example: If secondary appliance and passive of the pair
mysqlconfig -e "update peers set \
secondary_ip=HEX(INET6_ATON('<IP_ADDRESS>')) \
where server_id='<SERVER_ID>' "
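When verifying the update, a hex value read back from net.peers can be decoded to dotted-quad by hand. A shell sketch, assuming an IPv4 value of eight hex digits such as 0AA87530:

```shell
# Decode a net.peers IPv4 hex value (8 hex digits) back to dotted-quad notation.
hex="0AA87530"
printf '%d.%d.%d.%d\n' "0x${hex:0:2}" "0x${hex:2:2}" "0x${hex:4:2}" "0x${hex:6:2}"
# prints 10.168.117.48
```

IPv6 addresses are stored as the full 16-byte (32 hex digit) form and would need to be decoded two digits per byte in the same way.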
Fix MySQL Permissions
Execute the following command from the Cluster Leader active appliance.
from Cluster Leader
podman exec -it nms-nms-nms /bin/bash
SevOne-fix-mysql-permissions
Fix Hosts on All Peers
The following command must be executed from the Cluster Leader active appliance.
podman exec -it nms-nms-nms /bin/bash
SevOne-peer-do "SevOne-fix-hosts-file -y"
Update API IP Address
Execute the following command on the NMS appliance where the IP address has been changed.
podman exec -it nms-nms-nms /bin/bash
SevOne-api-change-ip <IP_ADDRESS>
Restart Daemons
Restart the daemons cluster-wide by executing the command below from Cluster Leader.
podman exec -it nms-nms-nms /bin/bash
SevOne-peer-do "supervisorctl restart SevOne-masterslaved SevOne-requestd"
Reboot Appliance
If the IP address on the appliance has changed, please reboot the appliance for the new IP address to take effect.
podman exec -it nms-nms-nms /bin/bash
SevOne-shutdown reboot
CHANGE FROM IPv4 TO IPv6
To change your SevOne NMS from IPv4 to IPv6, execute the steps below.
- Using ssh, log into the SevOne NMS Cluster Leader as support.
ssh support@<SevOne NMS Cluster Leader IP address or hostname>
- Disable SevOne-masterslaved cluster-wide.
podman exec -it nms-nms-nms /bin/bash
SevOne-peer-do "supervisorctl stop SevOne-masterslaved"
- This step must be performed on each peer.
Important: Please do not change the IP address of the Cluster Leader until the IP addresses of all the peers in the cluster have been changed first.
- Using ssh, log into the SevOne NMS peer as support.
ssh support@<SevOne NMS peer IP address or hostname>
- Run the steps in section CHANGE IP ADDRESS USING 'SevOne-change-ip' COMMAND to change the IP address of the peer you are logged into.
- Repeat steps a. and b. until you have changed the IP address of each peer in the cluster.
- Once the IP addresses of all the peers have been changed, change the IP address of the Cluster Leader. The IP address of the Cluster Leader must always be changed last.
- Using ssh, log into the SevOne NMS peer as support.
- Ensure that each peer can talk to every other applicable peer in the cluster. Execute the following command.
podman exec -it nms-nms-nms /bin/bash
SevOne-act check peers
- Re-enable SevOne-masterslaved cluster-wide. You must be on Cluster Leader.
ssh support@<SevOne NMS Cluster Leader IP address or hostname>
podman exec -it nms-nms-nms /bin/bash
SevOne-peer-do "supervisorctl start SevOne-masterslaved"
CHANGE HOSTNAME
Changing the hostname of an NMS appliance that is already in its final state of cluster configuration requires some key NMS configurations to be updated and services to be restarted. SevOne recommends performing the hostname change for an NMS appliance only after it is configured in the NMS cluster.
Execute the following steps to change the hostname of the NMS appliance.
- Using ssh, log into the SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Execute the following command to check the hostname. In the example below, you will see that Static hostname contains the current hostname, queen-01.
Example
hostnamectl
Output:
   Static hostname: queen-01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: eb9b779ed6804087be6db92938c01905
           Boot ID: a902fadfd7de494c811e79ebc702736f
    Virtualization: vmware
  Operating System: Red Hat Enterprise Linux 8.10 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8::baseos
            Kernel: Linux 4.18.0-553.el8_10.x86_64
      Architecture: x86-64
- Run the following command to change the hostname. Let's assume you are changing the static hostname from queen-01 to regulus-01.
hostnamectl set-hostname <enter new hostname>
- Using ssh, log back into the SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Execute the following command to confirm the hostname change. In the example below, you will see that Static hostname contains the new hostname, regulus-01.
Example
hostnamectl
Output:
   Static hostname: regulus-01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: eb9b779ed6804087be6db92938c01905
           Boot ID: a902fadfd7de494c811e79ebc702736f
    Virtualization: vmware
  Operating System: Red Hat Enterprise Linux 8.10 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8::baseos
            Kernel: Linux 4.18.0-553.el8_10.x86_64
      Architecture: x86-64
PEER COMMUNICATION OVER NAT
➤ Overview
This topic describes the firewalld NAT configuration to support SevOne NMS clusters over Network Address Translation (NAT). It provides an iptables-based NAT configuration that allows peers to communicate with one another across the NAT boundary.
- If peers in a cluster are on different networks, they should be able to communicate with each other using the static NAT IP address. The primary and its secondary appliance cannot be split between two different networks.
- After enabling firewalld in the NAT configuration, please check the firewalld services and ports configuration.
- Only static NAT is supported. Dynamic NAT and PAT (Port Address Translation) are not supported.
- NAT configuration over Hub-and-Spoke is not supported.
- If the NMS cluster is configured with Virtual IP addresses, firewalld NAT configuration is not supported.
- A peer may contain either the NAT IP or the physical IP in the peers table; the NAT rules must be applied so that both cases work.
- The configuration must work in a hybrid deployment model.
➤ Network Architecture
Network Address
When a customer does not want to expose internal IP addresses to the external network, NAT is applied to the internal IP addresses when routing to the external network. There is no need to use a NAT'd IP for Internal Network communications.
NAT IP addresses are routable between External Network and Internal Network only. The hosts in the internal network must use the physical internal IP address to route to other hosts in the internal network.
Only static NAT (one-to-one NAT) is supported. Dynamic NAT/PAT are not supported.
As an example, the basic network addressing scheme is summarized as shown in the table below.
Route From \ Route To | Internal Network | External Network |
---|---|---|
Internal Network | Physical IP Address | Physical IP Address |
External Network | NAT IP Address | Physical IP Address |
As an extension to this, a single NMS cluster might have peers in multiple External Networks (for example, an MSP shared cluster supporting multiple customers) and the Internal peers may appear at different NAT addresses to each External Network (a Hub-and-spoke configuration which is currently not supported).
SevOne NMS Peers
A SevOne NMS peer functions as a Cluster Leader, a polling PAS, or a DNC. A peer typically comprises two appliances - a primary and a secondary appliance. Standalone peers have only the primary appliance. The primary and secondary appliances may be located in separate subnets or separate physical Data Centers. However, both appliances in a peer reside either in the internal network or the external network; a peer is never split between the two.
Each appliance in the network must have its own unique physical IP address.
SevOne NMS Peers Table
In SevOne NMS, the peers table maintains one record for each peer. Each peer can have only one IP address assigned to its primary appliance and one IP address to its secondary appliance (if a secondary exists). The peer is always referred to by the IP address of its current active appliance, and there can optionally be one Virtual IP assigned to each peer.
The peers table is replicated to all peers and is identical on all peers in the cluster.
For SevOne NMS to function properly, apart from hub-and-spoke clusters, all peers must be reachable in a fully meshed configuration using the peering addresses contained in the peers table.
In standard configuration, SevOne NMS is unable to support the network architecture in which peers in the internal network are addressed using their physical addresses from other peers within the internal network and NAT addresses from peers in the external network.
firewalld
firewalld is a firewall management tool for Linux operating systems which supports the processing of IP packet filtering rules, including NAT, on the Linux host.
By applying firewalld NAT rules, except for hub-and-spoke clusters, it is possible to ensure a full-mesh connectivity between SevOne NMS peers while supporting SevOne NMS clustering and the common peers table addressing in the normal way without any changes to the SevOne NMS application.
➤ Deployment Scenarios
firewalld NAT rules can be applied either on the internal network appliances or on the external network appliances within a given SevOne NMS cluster. Both schemes achieve the same effect.
The choice of where to deploy the firewalld NAT rules depends on the deployment model for the cluster and the history of whether the cluster has already been built in one or other data center.
Typically, it is preferable to deploy firewalld NAT rules to the minimal number of appliances, and to avoid applying rules to appliances which have already been added to an NMS cluster, since doing so may necessitate changes to the NMS peers' IP addressing:
- In a typical scenario, where the majority of the NMS cluster is in the internal network and only the DNC peers are located in the external network, it is usually preferable to first build the cluster in the internal network and then apply firewalld NAT rules to the external network peers when the DNCs are added.
- If there are no appliances in the external network then there is no need to apply any firewalld NAT rules and the cluster can be built in a normal way. If peers are added to the external network at a later stage then firewalld NAT rules must be applied on the external peers to avoid changes to the internal peers.
SevOne recommends that in all cases firewalld NAT rules be applied to a peer appliance before it is added to the NMS cluster. If firewalld NAT rules need to be introduced into an existing NMS cluster, apply the NAT rules to the non-NAT'd appliances only, so that the existing peers table entries do not need to change.
Apply firewalld NAT to Internal Network Peers
In this model, the NMS peering address maintained in the peers table is the NAT IP address of the internal peers and the physical IP address of the external peers.
firewalld NAT rules are applied to the internal peers to enable them to communicate with each other using the NAT IP addresses within the internal network.
External peers do not require any firewalld NAT configuration as they can route to the internal peers using the NAT IP addresses stored in the NMS peers table.
Example
firewalld NAT Rules
The firewalld NAT rules are applied to each internal host.
Table | Chain | Rule | Description |
---|---|---|---|
NAT | OUTPUT | DNAT | Destination address translation from NAT IP address to physical IP address for each internal network host. |
NAT | INPUT | SNAT | Source address translation from physical IP address to NAT IP address for each internal network host. |
NAT | POSTROUTING | SNAT | Source address translation from physical IP address to NAT IP address for the local host. |
Example
The NMS cluster peers table has the following peers.
Peer | Peering Address Type | Primary IP Address | Secondary IP Address | Network |
---|---|---|---|---|
Peer 1 | NAT IP Address | 10.133.72.26 | 10.133.72.27 | Internal Network |
Peer 2 | Physical IP Address | 10.133.72.21 | 10.133.72.22 | External Network |
The host 10.133.72.26 has the following firewalld NAT rules.
ipv4 nat OUTPUT 0 -d 10.133.72.26 -j DNAT --to-destination 10.168.180.21
ipv4 nat OUTPUT 0 -d 10.133.72.27 -j DNAT --to-destination 10.168.180.22
ipv4 nat POSTROUTING 0 -d 10.168.180.21 -j SNAT --to-source 10.133.72.26
ipv4 nat INPUT 0 -s 10.168.180.22 -j SNAT --to-source 10.133.72.27
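The internal-host rules above follow a mechanical per-peer pattern, so the DNAT portion can be sketched as a small shell helper. This is an illustrative sketch only - the gen_dnat_rules name is hypothetical, and in practice the SevOne-act firewall apply-nat utility described under ➤ Implementation generates and installs the rules for you.

```shell
#!/usr/bin/env bash
# Illustrative sketch (not a SevOne utility): emit the OUTPUT-chain DNAT
# rule arguments for an internal host from a nat.csv of
# "<physical IP>,<NAT IP>" pairs, one pair per line.
gen_dnat_rules() {
  local csv="$1" phys nat
  while IFS=, read -r phys nat; do
    # Traffic addressed to a peer's NAT IP is rewritten to its physical IP.
    echo "ipv4 nat OUTPUT 0 -d ${nat} -j DNAT --to-destination ${phys}"
  done < "$csv"
}
```

Each emitted line is the argument string that would be installed with firewall-cmd --permanent --direct --add-rule; feeding the example pairs 10.168.180.21,10.133.72.26 and 10.168.180.22,10.133.72.27 through the helper reproduces the two OUTPUT rules shown above.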
Add Peer to Internal Network
firewalld NAT configuration must be applied to an internal peer before adding it to the NMS cluster.
- Create or update the nat.csv file containing the physical and NAT IP addresses of each internal peer including the new internal peer which is being added.
- Update hosts.ini file containing all hosts where NAT rules are applied along with the new internal peer which is being added.
- Ensure that the new peer has ssh connectivity prior to adding it.
- Update the firewalld NAT rules on all other existing internal peers and apply them on the new peer. Please refer to section ➤ Implementation for details.
- Check that the new internal peer has connectivity to/from all other peers in the cluster using the NAT IP address of the new internal peer.
- Add the new internal peer to the NMS cluster using its NAT IP address.
Add Peer to External Network
- No firewalld NAT configuration changes are required.
- Add the external network peer to the NMS cluster using the physical IP address.
Apply firewalld NAT to External Network
In this model, the NMS peering address is the physical IP address of all peers in both internal and external networks.
firewalld NAT rules are applied to each of the external peers to enable them to communicate with the internal peers using the physical IP addresses of internal peers.
Internal peers do not require any firewalld NAT rules configuration since they can route to the external network and to the other internal peers using their physical addresses.
firewalld NAT Rules
The firewalld NAT rules are applied to each external host.
Table | Chain | Rule | Description |
---|---|---|---|
NAT | OUTPUT | DNAT | Destination address translation from physical IP address to NAT IP address for each internal network host. |
NAT | INPUT | SNAT | Source address translation from NAT IP address to physical IP address for each internal network host. |
Example
The NMS cluster peers table has the following peers.
Peer | Peering Address Type | Primary IP Address | Secondary IP Address | Network |
---|---|---|---|---|
Peer 1 | Physical IP Address | 10.168.180.21 | 10.168.180.22 | Internal Network |
Peer 2 | Physical IP Address | 10.133.72.21 | 10.133.72.22 | External Network |
The external network peer appliances contain the following firewalld NAT rules.
ipv4 nat OUTPUT 0 -d 10.168.180.21 -j DNAT --to-destination 10.133.72.26
ipv4 nat OUTPUT 0 -d 10.168.180.22 -j DNAT --to-destination 10.133.72.27
ipv4 nat INPUT 0 -s 10.133.72.26 -j SNAT --to-source 10.168.180.21
ipv4 nat INPUT 0 -s 10.133.72.27 -j SNAT --to-source 10.168.180.22
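On the external side, the pattern is the mirror image: DNAT rewrites an internal peer's physical address to its NAT address on the way out, and SNAT maps the NAT address back to the physical address on the way in. A hypothetical shell sketch of that pairing follows (gen_external_rules is an illustrative name; SevOne-act firewall apply-nat performs this in practice):

```shell
#!/usr/bin/env bash
# Illustrative sketch (not a SevOne utility): emit the external-host rule
# arguments from a nat.csv of "<physical IP>,<NAT IP>" pairs.
gen_external_rules() {
  local csv="$1" phys nat
  while IFS=, read -r phys nat; do
    # Outbound: rewrite the internal peer's physical IP to its NAT IP.
    echo "ipv4 nat OUTPUT 0 -d ${phys} -j DNAT --to-destination ${nat}"
    # Inbound: traffic from the NAT IP appears to come from the physical IP.
    echo "ipv4 nat INPUT 0 -s ${nat} -j SNAT --to-source ${phys}"
  done < "$csv"
}
```

Running the helper over the example pairs 10.168.180.21,10.133.72.26 and 10.168.180.22,10.133.72.27 reproduces the four rules listed above.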
Add Peer to Internal Network
firewalld NAT configuration must be applied to, or updated on, any existing external peers before an internal peer is added to the cluster.
- Create or update the nat.csv file containing the physical and NAT IP addresses of each internal peer including the new internal peer which is being added.
- Create or update hosts.ini file containing external hosts.
- Apply or update firewalld NAT rules to all existing external peers. Please refer to section ➤ Implementation for details.
- Check that the new internal peer has connectivity to/from all other peers in the cluster using the physical IP address of the new internal peer.
- Add the new internal peer to the NMS cluster using its physical IP address.
Add Peer to External Network
firewalld NAT configuration must be applied to the new external peer before it is added to the NMS cluster.
- Confirm that the existing nat.csv file includes all internal peers.
- Check for ssh connection with the new external peer.
- Update/create hosts.ini file with the new external peer.
- Apply the firewalld NAT rules using command in section ➤ Implementation.
- Check that the new external peer has connectivity to/from all other peers in the cluster using their physical IP addresses.
- Add the new external peer to the NMS cluster using its physical IP address.
➤ Implementation
Create nat.csv & hosts.ini files
- Using a text editor of your choice, create a nat.csv text file in any directory. For example, /etc/SevOne/nat. The nat.csv file must contain the physical and NAT IP addresses for all internal appliances (where NAT is applied). The format of the .csv file is as follows, one appliance per line.
<physical IP 1>,<NAT IP 1>
<physical IP 2>,<NAT IP 2>
...
Important: The nat.csv file does not contain a header row.
Example
10.168.180.21,10.133.72.26
10.168.180.22,10.133.72.27
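Malformed entries in nat.csv produce wrong NAT rules, so it can be worth validating the file before applying it. A minimal illustrative sketch follows (check_nat_csv is a hypothetical helper, not part of SevOne NMS):

```shell
#!/usr/bin/env bash
# Illustrative sketch: verify that every line of a nat.csv is exactly two
# IPv4 addresses separated by a comma. check_nat_csv is a hypothetical
# helper, not a SevOne utility.
check_nat_csv() {
  local ip='([0-9]{1,3}\.){3}[0-9]{1,3}'
  # grep -v prints any line that does NOT match the pattern;
  # inverting its exit status means success only when no bad lines exist.
  ! grep -vEq "^${ip},${ip}\$" "$1"
}
```

For example, check_nat_csv /etc/SevOne/nat/nat.csv && echo "format OK".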
- Using a text editor of your choice, create a hosts.ini text file in any directory. For example, /etc/SevOne/nat. The hosts.ini file must contain the host IP addresses where NAT rules are applied. The format of the file is as follows, one host per line.
<host IP 1>
<host IP 2>
...
Important: The hosts.ini file does not contain a header row.
Example
10.133.72.21
10.133.72.22
Generate firewalld NAT Rules
- Execute the following command to generate NAT rules on all peers mentioned in the hosts.ini file.
podman exec -it nms-nms-nms /bin/bash
SevOne-act firewall apply-nat --natfile /etc/SevOne/nat/nat.csv \
  --ipfile /etc/SevOne/nat/hosts.ini
The script detects the IP address of the local host and determines the appropriate firewalld NAT rules depending on whether the rules are being applied to a NAT’d host (Internal Network) or a non-NAT’d host (External Network).
- If the rules are not as expected, correct the nat.csv file. To verify that the NAT rules are correct, please execute the command in Verify Rules.
- Execute the following command to flush the rules.
podman exec -it nms-nms-nms /bin/bash
SevOne-act firewall flush-nat --ipfile /etc/SevOne/nat/hosts.ini
- Apply the NAT rules again.
podman exec -it nms-nms-nms /bin/bash
SevOne-act firewall apply-nat --natfile /etc/SevOne/nat/nat.csv \
  --ipfile /etc/SevOne/nat/hosts.ini
- To verify that the rules were successfully added, execute the following command on each host to check the firewalld NAT rules.
firewall-cmd --direct --get-all-rules
Check if the rules are permanently set.
firewall-cmd --permanent --direct --get-all-rules
Note: You may check /etc/firewalld/direct.xml to confirm whether the rules have been added permanently.
➤ Verification
firewall-cmd Commands
The following are useful firewall-cmd commands.
Check if rules are loaded in firewall
firewall-cmd --direct --get-all-rules
Check if rules are loaded permanently
firewall-cmd --permanent --direct --get-all-rules
Check if rules are present in a file
cat /etc/firewalld/direct.xml
Check if NAT entries have been added successfully
mysqldata -e "select * from local.nat_config_info"
Output:
+----+---------------+---------------+---------------------+
| id | physical_ip | nat_ip | created_at |
+----+---------------+---------------+---------------------+
| 1 | 10.168.180.21 | 10.133.72.26 | 2020-06-25 05:05:49 |
| 2 | 10.168.180.22 | 10.133.72.27 | 2020-06-25 05:05:50 |
+----+---------------+---------------+---------------------+
Manual Peer Connectivity Checks
Prior to adding a new peer to the NMS cluster, manual connectivity checks must be performed to ensure that the new peer has connectivity to/from all other peers using the IP address which will be used to add the peer to the NMS cluster.
- Confirm that each existing peer in the cluster is reachable using ssh and mysql from the new peer, using the peer IP address contained in the NMS peers table.
ssh <existing peering address>
mysql -h <existing peering address>
Important: MySQL connections will be rejected until the new peer has been added to the NMS cluster. Confirm that the IP address in the connection message is the peering address of the peer attempting to make the connection - it can be either the NAT IP address or the physical IP address which is used for adding the new peer.
- Confirm that the new peer is reachable using ssh from each existing peer, using the peering address of the new peer.
ssh <new peering address>
mysql -h <new peering address>
Important: MySQL connections will be rejected until the new peer has been added to the NMS cluster. Confirm that the IP address in the connection message is the peering address of the peer attempting the connection.
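The checks above can be scripted as a simple generator that prints the commands to run from each side. This is a hypothetical sketch - print_peer_checks and the one-address-per-line peer file are assumptions, not SevOne tooling; run the printed commands manually and inspect each response.

```shell
#!/usr/bin/env bash
# Illustrative sketch: print the manual connectivity checks to run against
# each peering address listed in a file (one address per line).
print_peer_checks() {
  local addr
  while read -r addr; do
    [ -n "$addr" ] || continue
    echo "ssh ${addr}"
    echo "mysql -h ${addr}"
  done < "$1"
}
```

For example, print_peer_checks peers.txt lists the ssh and mysql checks for every peering address in peers.txt.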
NMS Cluster Connectivity Checks
After the NMS cluster has been built, the standard NMS cluster health check scripts can be used to verify that the required connectivity is working between all peers and to troubleshoot any issues observed during the clustering activity.
- Execute the following command to perform a full checkout of the cluster.
podman exec -it nms-nms-nms /bin/bash
SevOne-act check checkout --full-cluster
Important: Individual check scripts can be executed if any errors are detected.
- To detect issues with connectivity and port access, execute the following commands.
podman exec -it nms-nms-nms /bin/bash
SevOne-act check peers --full-cluster -v
SevOne-act check mysql-full-mesh-connectivity --full-cluster -v
firewalld Service Checks
- Execute the following command to check if the firewalld service is enabled and active.
systemctl status firewalld
Output:
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-06-23 10:00:54 UTC; 1 day 20h ago
     Docs: man:firewalld(1)
 Main PID: 22923 (firewalld)
    Tasks: 2
   Memory: 29.0M
   CGroup: /system.slice/firewalld.service
           └─22923 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
➤ Firewall Enabled?
Since the NAT rules are added permanently, no additional steps are required during an NMS upgrade. For NAT to work:
- Firewall must be enabled on each peer in the cluster.
- Ensure that firewalld is running after the upgrade. If it is not, execute the following command.
systemctl restart firewalld
➤ FAQs
What happens if the IP address of NAT'd server is changed?
If the IP address of a NAT'd appliance is changed (for example, using SevOne-change-ip), the firewalld rules must be regenerated.
What is the procedure to apply the NAT rules if an appliance has to be rebuilt that already has a NAT configuration?
firewalld rules must be regenerated.
What if I add a new peer to the cluster?
If you add a new peer to the cluster, firewalld NAT configuration rules must be applied before the peer is added to the NMS cluster.
What if I need to remove a peer using NAT?
For a peer that is removed from the cluster, the NAT rules must be flushed manually: once the peer is removed, execute the flush command described in Generate firewalld NAT Rules on it. On the remaining peers, the NAT rules must be applied again with the updated configuration.
What if I want to change the current Cluster Leader peer where NAT may be configured?
If there is no change in the IP address of the NAT'd servers (i.e., the current Cluster Leader peer), the NAT rules do not need to be reapplied. However, if the IP address of any of the peers changes, the configuration must be flushed and reapplied using the NAT utility commands SevOne-act firewall flush-nat and SevOne-act firewall apply-nat, as described in section Generate firewalld NAT Rules.
What happens if the hostname is changed?
No changes are required.
If NAT is configured, does it require firewalld service to be running?
Yes, firewalld service must always be up and running if NAT is configured.
What about bonding in an active-backup setup?
NAT configuration is not supported over a Virtual IP. If bonding is configured to use a Virtual IP, it is not supported.
What steps do I need to follow if I am using a custom port and do not have firewalld enabled?
Please refer to the following sections in SevOne NMS System Administration Guide to add the port rule as required.
- Administration > Cluster Manager > Cluster Settings tab > Firewall subtab.
- Administration > Cluster Manager > [specific appliance] > Peer Settings tab > Firewall subtab.
INTERFACE BONDING FOR ACTIVE-BACKUP FAILURE
➤ Overview
Bonding joins multiple network interfaces together. There are various modes of bonding supported by the Linux Ethernet Bonding Driver (https://www.kernel.org/doc/Documentation/networking/bonding.txt). However, this document only discusses the most common and applicable mode of bonding, active-backup. This mode provides network fault tolerance within a single appliance. For example, if an appliance has two interfaces which are bonded in active-backup and connected to two different switches, and the switch of the primary interface fails, the secondary interface takes over. For in-depth details, please refer to the Linux Ethernet Bonding Driver documentation (https://www.kernel.org/doc/Documentation/networking/bonding.txt) and the Networking Guide for Red Hat Enterprise Linux (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/).
➤ Configure Bonding
Use the NetworkManager command-line tool, nmcli, to configure bonding. This tool provides a convenient way of making changes to network configurations without directly editing text files. You will need the relevant networking details to configure the bonded interface. In the example below, IPv4 networking details are used.
- IP Address - 10.168.116.5
- Subnet Mask - 255.255.252.0 (CIDR of /22)
- Gateway - 10.168.116.1
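The subnet mask above maps to the /22 prefix that nmcli expects: the prefix length is the number of set bits in the dotted-quad mask. A minimal illustrative helper follows (mask_to_prefix is a hypothetical name, not a system utility):

```shell
#!/usr/bin/env bash
# Illustrative helper: convert a dotted-quad netmask to its CIDR prefix
# length by counting set bits. mask_to_prefix is a hypothetical name.
mask_to_prefix() {
  local IFS=. octet bits=0
  # Splitting on "." yields the four octets of the mask.
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + (octet & 1)))
      octet=$((octet >> 1))
    done
  done
  echo "$bits"
}
```

For example, mask_to_prefix 255.255.252.0 prints 22, matching the /22 used in the bonding example below.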
- Using ssh, log into the SevOne NMS appliance as support.
ssh support@<NMS appliance>
- Determine the names of the available network connections that you would like to bond. In the example below, bonding between ens160 and ens33 is performed.
nmcli conn show
Output:
NAME    UUID                                  TYPE      DEVICE
ens160  e2f7df86-55c8-4227-a00d-0f048e030b1a  ethernet  ens160
ens33   f2b05eaa-e9fa-3f03-a503-623de3bae7c5  ethernet  ens33
- Create the bonded interface using the networking details provided by the user. For example,
- Connection Name - bond0
- IPv4 - 10.168.116.5 with a subnet mask in CIDR notation of /22 (255.255.252.0)
- Mode - Active Backup
- Link Monitoring - MII
- Monitoring Frequency - 100ms
- Link Up Delay - 400ms (suggested 4x the monitoring frequency)
- Link Down Delay - 400ms (suggested 4x the monitoring frequency)
Example
nmcli con add type bond con-name bond0 ifname bond0 \
  ip4 10.168.116.5/22 gw4 10.168.116.1 ipv4.method manual \
  ipv6.method ignore \
  bond.options "mode=active-backup,miimon=100,downdelay=400,updelay=400"
Output:
Connection 'bond0' (e8048f88-80e5-43c1-a981-3bcb9b8ccc69) successfully added.
- Add the first interface to the bond0 you just created.
nmcli con add type bond-slave ifname ens160 master bond0
Output:
Connection 'bond-slave-ens160' (fe926d34-0e12-46e3-a07c-fa78e5e0c8a3) successfully added.
- Bring up the first follower interface.
nmcli conn up bond-slave-ens160
Output:
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51).
- Set the first follower interface as the primary interface for bond0.
nmcli dev mod bond0 +bond.options "primary=ens160"
Output:
Connection successfully reapplied to device 'bond0'.
- For each additional interface, add it as a follower and then bring it up.
nmcli con add type bond-slave ifname ens33 master bond0
Output:
Connection 'bond-slave-ens33' (073517f9-723a-4abe-a97e-fc10077f0ef3) successfully added.
nmcli conn up bond-slave-ens33
Output:
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51).
- Verify the configurations.
cat /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="downdelay=400 miimon=100 mode=active-backup updelay=400"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.168.116.5
PREFIX=22
GATEWAY=10.168.116.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=bond0
UUID=f3fa06e0-66b7-47d6-aff3-3a3db27ba596
DEVICE=bond0
ONBOOT=yes
cat /proc/net/bonding/bond0
Output:
Ethernet Channel Bonding Driver: v3.7.1 (January 27, 2020)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: ens160 (primary_reselect always)
Currently Active Slave: ens160
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 400
Down Delay (ms): 400

Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:11:13
Slave queue ID: 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:2c:d0
Slave queue ID: 0
- From another appliance, verify that you can ping the bond0 IP address and receive a response.
SevOne-test$ ping 10.168.116.5
PING 10.168.116.5 (10.168.116.5): 56 data bytes
64 bytes from 10.168.116.5: icmp_seq=0 ttl=63 time=0.293 ms
64 bytes from 10.168.116.5: icmp_seq=1 ttl=63 time=0.331 ms
64 bytes from 10.168.116.5: icmp_seq=2 ttl=63 time=0.249 ms
64 bytes from 10.168.116.5: icmp_seq=3 ttl=63 time=0.285 ms
- If you do not receive a response, reload all connection files from disk and restart the network service on the appliance on which you are configuring bonding.
Important: Please execute the entire command below as-is to avoid any disconnect or box-unreachable issue.
/usr/bin/nmcli c reload; /usr/bin/nmcli networking off; /usr/bin/nmcli networking on
- You now have bonding set up and working. You may bring down the primary interface to ensure that the follower takes over.
nmcli conn down bond-slave-ens160
Output:
Connection 'bond-slave-ens160' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/55)
cat /proc/net/bonding/bond0
Output:
Ethernet Channel Bonding Driver: v3.7.1 (January 27, 2020)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens33
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 400
Down Delay (ms): 400

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:2c:d0
Slave queue ID: 0
nmcli conn up bond-slave-ens160
cat /proc/net/bonding/bond0
Output:
Ethernet Channel Bonding Driver: v3.7.1 (January 27, 2020)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: ens160 (primary_reselect always)
Currently Active Slave: ens160
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 400
Down Delay (ms): 400

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:2c:d0
Slave queue ID: 0

Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:8c:11:13
Slave queue ID: 0