Preparing for a multi-node deployment
Prepare your environment before you install Standard Edition in your multi-node cluster.
The preparation commands in the following sections are for a root user. If you are a non-root user, you can run these commands by using sudo.
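For example, a non-root user with sudo privileges runs a root-only command from this page as follows:
# Run a preparation command with sudo instead of as the root user.
sudo mount -a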
Prepare the additional disks
You must prepare the disks that you added for each data directory in the storage requirements section.
The commands to complete these tasks depend on your environment, the file system you choose, and the type of disk that you add.
The commands in the following sections are only examples and are intended to show the requirements for preparing the disks for use. You must use the commands that work for your environment.
Identify the disks on your host
To see the available devices on your system, run the following command on node0 and node1:
lsblk
Here is a sample output of node0.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 63.5M 1 loop /snap/core20/2015
loop1 7:1 0 87M 1 loop /snap/lxd/27037
loop3 7:3 0 111.9M 1 loop /snap/lxd/24322
loop4 7:4 0 63.9M 1 loop /snap/core20/2182
loop5 7:5 0 40.9M 1 loop /snap/snapd/20290
loop6 7:6 0 40.4M 1 loop /snap/snapd/20671
vda 252:0 0 250G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 248G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 248G 0 lvm /
vdb 252:16 0 1000G 0 disk
Here is a sample output of node1.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 63.5M 1 loop /snap/core20/2015
loop1 7:1 0 87M 1 loop /snap/lxd/27037
loop3 7:3 0 111.9M 1 loop /snap/lxd/24322
loop4 7:4 0 63.9M 1 loop /snap/core20/2182
loop5 7:5 0 40.9M 1 loop /snap/snapd/20290
loop6 7:6 0 40.4M 1 loop /snap/snapd/20671
vda 252:0 0 250G 0 disk
├─vda1 252:1 0 1M 0 part
└─vda2 252:2 0 248G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 248G 0 lvm /
vdb 252:16 0 1000G 0 disk
vdc 252:32 0 500G 0 disk
vdd 252:48 0 500G 0 disk
Make a file system
For each disk that you added on node0 and node1, you must make a file system.
You can use any suitable file system for your disks. Ext4 and XFS are two popular Linux file systems, and the choice between them depends on the specific needs of your system. Ext4 is a good choice for most systems, while XFS might be a better option for workloads with very large files or highly parallel I/O.
- Here are examples of using the ext4 file system.
  - On node0 (instana-0), you can use the following command to make a single file system:
for disk in vdb; do
  echo "make filesystem for $disk"
  mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/$disk
done
  - On node1 (instana-1), you can use the following command to make three file systems:
for disk in vdb vdc vdd; do
  echo "make filesystem for $disk"
  mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/$disk
done
- Here are examples of using the xfs file system.
  - On node0 (instana-0), you can use the following command to make a single file system:
for disk in vdb; do
  echo "make filesystem for $disk"
  mkfs.xfs -f -i size=1024 -L $disk /dev/$disk
done
  - On node1 (instana-1), you can use the following command to make three file systems:
for disk in vdb vdc vdd; do
  echo "make filesystem for $disk"
  mkfs.xfs -f -i size=1024 -L $disk /dev/$disk
done
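After the file systems are created, you can confirm the file system type on each added disk, for example with lsblk. The device names in this sketch are taken from the earlier sample output and might differ in your environment.
# On node1 (instana-1): confirm that each added disk now has a file system (example device names).
lsblk -f /dev/vdb /dev/vdc /dev/vdd
# On node0 (instana-0): check the single added disk.
lsblk -f /dev/vdb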
Create the directories
On your hosts, make sure that you create the four directories. See Hardware requirements.
The following example commands use the default directory paths. You can create the directories in any location of your choice. However, make sure that you use the correct path when you mount the directories.
- On node0 (instana-0), create the objects directory:
mkdir -p /mnt/instana/stanctl/objects
- On node1 (instana-1), create the following three directories:
mkdir -p /mnt/instana/stanctl/data
mkdir -p /mnt/instana/stanctl/metrics
mkdir -p /mnt/instana/stanctl/analytics
Add mount paths
You must add mount paths to the directories that you created. You can view the list of the devices in Identify the disks on your host.
Device names can change after a reboot, especially when you attach multiple disks, which can cause a mismatch between the disks and the original mount paths. To avoid this issue, first replace the device names with their UUIDs (Universally Unique Identifiers) in the file system table (fstab) files on node0 (instana-0) and node1 (instana-1).
Update fstab file with UUIDs
To update the fstab file, complete the following steps:
- Based on the storage capacity, identify the devices that you want to use for each directory. Note the device names.
- Get the UUIDs of all devices with the blkid command. See the following example command and output:
$ blkid /dev/sdb
/dev/sdb: UUID="86ceb289-ba28-448d-b41f-71e647fc4536" BLOCK_SIZE="4096" TYPE="ext4"
- Update the file system table (fstab) file with the device UUIDs:
  - Open the /etc/fstab file in your preferred text editor.
  - For each device, add the UUID as shown in the following example:
UUID=<device_uuid> /mnt/instana/stanctl/data ext4 discard,defaults,nofail 0 0
UUID=<device_uuid> /mnt/instana/stanctl/metrics ext4 discard,defaults,nofail 0 0
UUID=<device_uuid> /mnt/instana/stanctl/analytics ext4 discard,defaults,nofail 0 0
UUID=<device_uuid> /mnt/instana/stanctl/objects ext4 discard,defaults,nofail 0 0
  - Save the file.
Then, as a precaution, take a backup of your fstab files before you create the mount paths. Run the following command on node0 (instana-0) and node1 (instana-1).
cp /etc/fstab /etc/fstab.backup
The following example commands use the default mount paths and the disks that we used as examples in the previous sections. The commands might vary based on your environment. Also, if you created the directories in custom paths, make sure to use those paths.
If you created custom mount paths, make sure to add the --volume-<directory-name> flag to the stanctl up --multi-node-enable command when you install your Self-Hosted Standard Edition.
For example, if you added /data/analytics as the mount path, then use stanctl up --multi-node-enable --volume-analytics /data/analytics.
- Ext4 example commands.
  - Commands for node0 (instana-0):
echo "UUID=<device_vdb_uuid> /mnt/instana/stanctl/objects ext4 discard,defaults,nofail 0 0" >> /etc/fstab
  - Commands for node1 (instana-1):
echo "UUID=<device_vdb_uuid> /mnt/instana/stanctl/analytics ext4 discard,defaults,nofail 0 0" >> /etc/fstab
echo "UUID=<device_vdc_uuid> /mnt/instana/stanctl/metrics ext4 discard,defaults,nofail 0 0" >> /etc/fstab
echo "UUID=<device_vdd_uuid> /mnt/instana/stanctl/data ext4 discard,defaults,nofail 0 0" >> /etc/fstab
- XFS example commands.
  - Commands for node0 (instana-0):
echo "UUID=<device_vdb_uuid> /mnt/instana/stanctl/objects xfs discard,defaults,nofail 0 0" >> /etc/fstab
  - Commands for node1 (instana-1):
echo "UUID=<device_vdb_uuid> /mnt/instana/stanctl/analytics xfs discard,defaults,nofail 0 0" >> /etc/fstab
echo "UUID=<device_vdc_uuid> /mnt/instana/stanctl/metrics xfs discard,defaults,nofail 0 0" >> /etc/fstab
echo "UUID=<device_vdd_uuid> /mnt/instana/stanctl/data xfs discard,defaults,nofail 0 0" >> /etc/fstab
Verify the mount paths
Verify that the directory is mounted on the correct disk. See Prepare the additional disks to get the device name.
lsblk <device name>
Mount the file systems
Mount all the file systems. Run the following command on node0 (instana-0) and node1 (instana-1).
mount -a
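To confirm that each directory is now mounted on the intended disk, you can check the mounts, for example with df. The paths in this sketch are the default paths that are used on this page.
# On node0 (instana-0): confirm the objects mount.
df -h /mnt/instana/stanctl/objects
# On node1 (instana-1): confirm the data, metrics, and analytics mounts.
df -h /mnt/instana/stanctl/data /mnt/instana/stanctl/metrics /mnt/instana/stanctl/analytics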
Kernel parameters
To install the Instana backend successfully, you must set the following kernel parameters correctly on all the nodes.
vm.swappiness
Set vm.swappiness to 0 to make sure that application pages are not moved to swap space.
sh -c 'echo vm.swappiness=0 >> /etc/sysctl.d/99-stanctl.conf' && sysctl -p /etc/sysctl.d/99-stanctl.conf
fs.inotify.max_user_instances
Set fs.inotify.max_user_instances to 8192 so that each user can create up to 8192 inotify instances.
sh -c 'echo fs.inotify.max_user_instances=8192 >> /etc/sysctl.d/99-stanctl.conf' && sysctl -p /etc/sysctl.d/99-stanctl.conf
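To confirm that both kernel parameters are active, you can read them back with sysctl on each node:
# Verify the current values of the required kernel parameters.
sysctl vm.swappiness fs.inotify.max_user_instances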
Transparent Huge Pages
Disable Transparent Huge Pages (THP) permanently for memory management.
- If you have an Ubuntu or Debian host, run these commands:
sed -i "s/\(GRUB_CMDLINE_LINUX=\".*\)\"/\1 transparent_hugepage=never\"/" "/etc/default/grub"
update-grub
- If you have a Red Hat® Enterprise Linux®, CentOS Stream, Amazon Linux, or Oracle Linux host, run this command:
grubby --args="transparent_hugepage=never" --update-kernel ALL
- If you have a SUSE Linux Enterprise Server (SLES) host, run these commands:
sed -i "s/\(GRUB_CMDLINE_LINUX=\".*\)\"/\1 transparent_hugepage=never\"/" "/etc/default/grub"
grub2-mkconfig -o /boot/grub2/grub.cfg
A system reboot is required for these changes to take effect.
After the reboot, verify that THP is disabled.
cat /sys/kernel/mm/transparent_hugepage/enabled
The following output indicates that THP is disabled.
always madvise [never]
Packages and environment variables
On some hosts, you need to install missing packages or set the required environment variables and paths.
Make sure that you complete the tasks on all the nodes in your cluster.
Amazon Linux 2023 hosts
In Amazon Linux 2023, the container-selinux and k3s-selinux packages are missing. These packages are necessary for the Standard Edition installer to install Instana without any interruption.
- Install the container-selinux package:
dnf install -y container-selinux
- Install the k3s-selinux package:
dnf install -y https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/k3s-selinux-1.6-1.el8.noarch.rpm
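To confirm that both packages are installed, you can query the RPM database:
# Verify that the SELinux packages are present.
rpm -q container-selinux k3s-selinux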
Red Hat Enterprise Linux and CentOS Stream hosts
On Red Hat Enterprise Linux and CentOS Stream hosts, the /usr/local/bin directory is not included in the PATH environment variable by default. The Standard Edition installer needs this directory in the PATH to run some commands during installation.
To add the /usr/local/bin directory to the PATH environment variable, complete the following steps:
- Add the export PATH command to your .bashrc or .bash_profile file:
echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc
OR
echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bash_profile
- Apply the change. Use the name of the file that you updated in the previous step:
source ~/.bashrc
OR
source ~/.bash_profile
- Verify that the /usr/local/bin directory is included in the PATH environment variable:
echo $PATH
If the path update is successful, you see /usr/local/bin in the output.
Tenant and unit names
During Instana installation, you must provide the tenant and unit names.
After you install Instana, you cannot change the tenant or unit name.
The following restrictions apply to tenant and unit names:
- The name must match the following regular expression pattern: ^[a-z][a-z0-9]*$
- The name must not exceed 15 characters.
- The name must begin with an alphabetical character.
- The name must consist of only alphanumeric characters.
- All characters must be lowercase.
For example, test is a unit name and marketing is a tenant name.
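To check a candidate name against these restrictions before you start the installation, you can use a short shell test such as the following sketch. The name in the example is only a placeholder.
# Check a proposed tenant or unit name against the naming restrictions.
name="marketing"
if [[ "$name" =~ ^[a-z][a-z0-9]*$ ]] && [ "${#name}" -le 15 ]; then
  echo "$name is a valid name"
else
  echo "$name is not a valid name"
fi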
For more information about tenants and units, see units.instana.io.
Networking requirements
Your Instana domain and the hosts in your cluster must be reachable from outside your on-premises environment. Make sure that you update the Domain Name System (DNS) settings and set firewall rules on your hosts.
DNS settings
Make sure you have a domain name and a DNS zone for your Instana environment. Then, add DNS A records in the zone for the following domains:
| Domain | Description | Example name |
|---|---|---|
| Base domain `<base_domain>` | The fully qualified domain name (FQDN) that you can use to reach Instana. Points to the IP address of your host. | instana.example.com |
| Agent acceptor subdomain `agent-acceptor.<base_domain>` | Domain name for Instana agent traffic. Points to the IP address of your host. | agent-acceptor.instana.example.com |
| OTLP HTTP acceptor subdomain `otlp-http.<base_domain>` | Domain name for OpenTelemetry collector OTLP/HTTP traffic. Points to the IP address of your host. | otlp-http.instana.example.com |
| OTLP gRPC acceptor subdomain `otlp-grpc.<base_domain>` | Domain name for OpenTelemetry collector OTLP/gRPC traffic. Points to the IP address of your host. | otlp-grpc.instana.example.com |
| Tenant and unit subdomain `<unit-name>-<tenant-name>.<base_domain>` | Domain name for a unit and its tenant. Points to the IP address of your host. | test-marketing.instana.example.com |
For detailed steps about adding DNS A records, refer to the documentation of your domain registrar.
If you want to use only <base_domain> for all your Ingress traffic, see Configuring with a single domain.
In multi-node clusters, the base domain points to the IP address of node0 (instana-0). For more information about the IP address requirements, see IP addresses.
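After you add the A records, you can confirm that each name resolves to the expected IP address. The following sketch uses dig and the example domain names from the table; replace them with your own names.
# Resolve each required DNS record (example names from the table).
for host in instana.example.com agent-acceptor.instana.example.com \
  otlp-http.instana.example.com otlp-grpc.instana.example.com \
  test-marketing.instana.example.com; do
  echo "$host -> $(dig +short "$host")"
done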
Firewall rules
On certain Linux distributions, the firewall might impose restrictions that conflict with the installation. These conflicts can impact the network communication and service discovery that are necessary for the installation. Therefore, if your firewall is enabled, add the required ports and rules to it.
If you have an external firewall, see the firewall documentation for information on how to open ports.
For Standard Edition deployment on Amazon Web Services (AWS), you must open all the ports in the security group even if the firewall is disabled.
Complete the following steps to open the required ports:
- On Ubuntu hosts, run the following commands on all nodes. Use the IP addresses of the nodes in the commands.
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 8443/tcp
ufw allow from <node0 (instana-0) IP> to any port 22 proto tcp
ufw allow from <node0 (instana-0) IP> to any port 6443,10250,2379,2380,5001,9443,53 proto tcp
ufw allow from <node0 (instana-0) IP> to any port 8472,53 proto udp
ufw allow from <node1 (instana-1) IP> to any port 6443,10250,2379,2380,5001,9443,53 proto tcp
ufw allow from <node1 (instana-1) IP> to any port 8472,53 proto udp
ufw allow from <node2 (instana-2) IP> to any port 6443,10250,2379,2380,5001,9443,53 proto tcp
ufw allow from <node2 (instana-2) IP> to any port 8472,53 proto udp
ufw allow from 10.42.0.0/16 to any
ufw allow from 10.43.0.0/16 to any
ufw allow in on lo
ufw allow out on lo
ufw reload
- On Debian, Red Hat Enterprise Linux, CentOS Stream, Amazon Linux, Oracle Linux, and SUSE Linux Enterprise Server (SLES) hosts, run the following commands on all nodes. Use the IP addresses of the nodes in the commands.
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --new-zone=internal-access --permanent
firewall-cmd --permanent --zone=internal-access --add-source=<node0 (instana-0) IP>
firewall-cmd --permanent --zone=internal-access --add-source=<node1 (instana-1) IP>
firewall-cmd --permanent --zone=internal-access --add-source=<node2 (instana-2) IP>
firewall-cmd --permanent --zone=internal-access --add-port=22/tcp
firewall-cmd --permanent --zone=internal-access --add-port=6443/tcp
firewall-cmd --permanent --zone=internal-access --add-port=10250/tcp
firewall-cmd --permanent --zone=internal-access --add-port=2379/tcp
firewall-cmd --permanent --zone=internal-access --add-port=2380/tcp
firewall-cmd --permanent --zone=internal-access --add-port=5001/tcp
firewall-cmd --permanent --zone=internal-access --add-port=8472/udp
firewall-cmd --permanent --zone=internal-access --add-port=9443/tcp
firewall-cmd --permanent --zone=internal-access --add-port=53/udp
firewall-cmd --permanent --zone=internal-access --add-port=53/tcp
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
firewall-cmd --permanent --zone=trusted --add-interface=lo
firewall-cmd --reload
Verify ports
Verify whether the ports in Required ports are open.
Ubuntu host
Use the following command to check the firewall status. If a firewall is enabled, the command lists the current rules and default policies, so you can see which ports are allowed.
ufw status verbose
To open a port, run the following command:
ufw allow <port_number>
Debian, Red Hat Enterprise Linux, CentOS Stream, Amazon Linux, Oracle Linux, and SLES
To verify whether a port is blocked by a firewall, use the following command:
firewall-cmd --query-port=<port_number>/tcp
If the command output is no, then the specified port is closed.
To open the port, use the following commands:
firewall-cmd --zone=public --add-port=<port_number>/tcp --permanent
firewall-cmd --reload
Configuring an HTTP proxy
Define the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.
When you run the stanctl up --multi-node-enable command to install the Standard Edition, the installation automatically uses the environment variable values from the current shell.
Export the environment variables by using the following commands:
export HTTP_PROXY=http://your-proxy.example.com:<port_number>
export HTTPS_PROXY=http://your-proxy.example.com:<port_number>
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
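To confirm that the variables are set in the shell from which you plan to run stanctl, you can list them:
# Show the proxy-related environment variables in the current shell.
env | grep -i proxy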
nm-cloud-setup utility on RHEL and CentOS Stream hosts
On Red Hat Enterprise Linux and CentOS Stream hosts, if nm-cloud-setup is enabled, you must disable it on each node and reboot the nodes.
Use the following commands:
- Check whether nm-cloud-setup is enabled:
systemctl is-enabled nm-cloud-setup.service
If the service is disabled, the command returns disabled. Otherwise, it returns enabled.
- If nm-cloud-setup is disabled, proceed to the next section. If nm-cloud-setup is enabled, disable it:
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
- Reboot the node:
systemctl reboot
TLS certificate and key
The Instana Standard Edition needs a transport layer security (TLS) certificate and key.
The certificate must be issued for the domains that are specified in the DNS settings section.
If you do not want to specify any certificate, you can use a self-signed certificate that is generated during installation.
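If you provide your own certificate, you can inspect it to confirm that it covers the domains from the DNS settings section. The certificate file name in this sketch is a placeholder.
# List the domains that the certificate covers (example file name).
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"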
SSH configuration
Secure Shell (SSH) configuration is required only in a multi-node cluster.
To install Instana, users must connect from instana-0 to instana-1 and instana-2 by using SSH without entering a password.
- The root user must have passwordless SSH access to all three nodes.
- A non-root user must have passwordless SSH access to all three nodes and sudo privileges on each node.
Users can run the stanctl command as root or as a user with sudo access.
Complete the following steps to generate SSH keys and share them between the three nodes.
- If no SSH keys exist in your cluster, generate an SSH key pair on node0 (instana-0):
ssh-keygen -t rsa
- Copy the public key content to the $HOME/.ssh/authorized_keys file of node1 (instana-1) and node2 (instana-2), for example by using ssh-copy-id as shown after these steps.
- If the sshd services on node1 (instana-1) and node2 (instana-2) listen on an alternative port instead of the default SSH port 22, update the $HOME/.ssh/config file on node0 (instana-0) to use that port:
cat <<EOF | sudo tee -a ~/.ssh/config
Host *
  port <alternate_port_number>
EOF
- Test the SSH connection between node0 (instana-0) and the other two nodes:
ssh <username>@<node_ip>
If the SSH connection is successful, you see a prompt for the next command. If you get a permission denied error, make sure that you copied the public key to the correct user account on the node.
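For the key-copy step, one common approach is ssh-copy-id, as in the following sketch. The user name and node addresses are placeholders.
# Copy the node0 public key to the other nodes (placeholder user and addresses).
ssh-copy-id <username>@<node1_ip>
ssh-copy-id <username>@<node2_ip>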
What's next
Proceed with installing Instana. For more information, see Installing Instana backend and data stores.