A primary goal of the sample analytics pipeline is
to make adoption and setup easy. Therefore, Podman is used extensively to install and configure the
various components. You might need to make modifications to the installation to suit your
environment.
The sample analytics pipeline can be installed on Linux® on an x86 system or Linux on IBM Z® environments.
The installation process uses Podman to download, install, and configure the various open-source software components, including Grafana, MariaDB or MySQL, Apache Kafka, and various Python packages.
Before you begin
- Ensure that you have the following prerequisites installed:
  - Red Hat® Enterprise Linux (RHEL) 9.6 or later.
  - Podman 5.4.0 or later. Use the yum package manager to install Podman.
  If you are using Docker or RHEL 8 (or both), see Migrating from Docker to Podman for information about how to migrate to Podman and RHEL 9.
Note: These instructions and scripts were implemented and tested on a virtual machine that has no other Podman images or containers. If you use these instructions and scripts on a machine that has other Podman images and containers, carefully review each step, script, and other information to ensure that no undesirable behaviors result.

Podman is configured to run rootless. Complete the following steps to enable your pods to continue running after your user session exits:
- Enter the following command:
loginctl enable-linger $USER
- Open the default.target file. Enter the following command:
sudo vi /etc/systemd/system/default.target
- Add the following statements at the end of the default.target file:
[Install]
WantedBy=default.target
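For example, to confirm that lingering is enabled for your user, enter the following command. The expected output is Linger=yes.
loginctl show-user "$USER" --property=Linger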

If you are running on RHEL 8, you must complete the following steps to ensure that podman stats commands work:
- Enter the following commands:
- sudo mkdir -p /etc/containers
- echo -e "[engine]\ncgroup_manager = \"cgroupfs\"" | sudo tee /etc/containers/containers.conf
- systemctl --user restart podman
- Take one of the following actions:
- If you have grub, complete the following steps:
- Open the grub file. Enter the following command:
sudo vi /etc/default/grub
- In the grub file, add the following value to the GRUB_CMDLINE_LINUX key:
systemd.unified_cgroup_hierarchy=1
- Enter the following command:
sudo grub2-mkconfig -o "$(readlink -e /etc/grub2.conf)"
- Enter the following command:
sudo reboot
- If you have grubby, enter the following commands:
- sudo grubby --update-kernel=ALL --args='systemd.unified_cgroup_hierarchy'
- sudo reboot
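After the system restarts, you can confirm that the unified cgroup hierarchy (cgroup v2) is active and that Podman picked up the cgroupfs manager. Both commands are standard; the expected output is cgroup2fs for the first command and cgroupfs for the second.
stat -fc %T /sys/fs/cgroup/
podman info --format '{{.Host.CgroupManager}}'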

- Ensure that you understand the sample analytics pipeline
server configurations and components.
- Determine your preferred server configuration (single or dual) and which servers will be
Linux on an x86 system or Linux on IBM Z.
- To use real-time runtime metrics collection, ensure that you have
adequate disk space on the collection server or single-server configuration for the real-time runtime metrics collection logging. For more information, see Runtime metrics collection log files.
Procedure
If you want to set up a dual-server configuration, complete the following steps on both the collection server and the analytics server, unless the step explicitly indicates that it applies to only one of the servers. If you want to set up a single-server configuration, complete the following steps on a single server.
Notes:
- All modifications to configuration files must be done in the tpf_sap/user_files directory. After you make the modifications, you must rebuild the container image by using the tpf_prepare_configurations.sh script, stop the container if it is currently running, and then start the container again. For example, see step 15.
- If you receive errors from any of the following steps, see Sample analytics pipeline installation errors.

- Copy the base/tpfrtmc/bin/tpf_sample_analytics_pipeline.tar.gz file in binary format from your z/TPF source repository to the home directory on your Linux machine. Enter the following command to extract the content from the tar file:
tar -xf tpf_sample_analytics_pipeline.tar.gz
- Change the default credentials to credentials that are appropriate for your environment. Default credentials are specified in the tpf_sap/tpf_default_credentials.text file. These credentials are used in various scripts that are provided with the sample analytics pipeline. Change the username and password to values that are more secure for your environment. When you change passwords, you must update various files in the tpf_sap/tpf_files directory.
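To locate every file that still references a default credential, you can search the directory for the value that you are replacing. The value tpfuser in this example is a placeholder; substitute the actual default username or password from the tpf_sap/tpf_default_credentials.text file.
# Replace tpfuser with the default username or password that you are changing.
grep -rl "tpfuser" ~/tpf_sap/tpf_files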

- Decide which version of each component to use. Many components are used in the sample analytics pipeline. The component versions that are indicated were stable at the time of release. To use the latest versions of these components, update the version numbers that are specified in the tpf_sap/user_files/tpf_prepare_configurations.yml and tpf_sap/user_files/tpf_zrtmc_analyzer_files/requirements.txt files.
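To see which Python package versions are currently pinned, and whether newer releases exist, you can use commands like the following. The pip index subcommand is available in pip 21.2 and later and is marked experimental; pyyaml is shown only as an example package.
# List the pinned package versions (assumes the usual name==version format).
grep '==' ~/tpf_sap/user_files/tpf_zrtmc_analyzer_files/requirements.txt
# Check the releases that are available for one package (pyyaml is an example).
python3 -m pip index versions pyyaml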

- Copy the base/tpfrtmc/bin/tpfrtmc.tar.gz file in binary format from your z/TPF source repository to the tpf_sap/tpf_files/tpf_rtmc/ directory. For the collection server or a single-server configuration, enter the following commands to extract the content from the tar file:
- cp tpfrtmc.tar.gz tpf_sap/tpf_files/tpf_rtmc/
- cd ~/tpf_sap/tpf_files/tpf_rtmc
- tar -xf tpfrtmc.tar.gz

- Copy the base/tpfrtmc/bin/tpf_zmatc_analyzer.tar.gz file in binary format from your z/TPF source repository to the tpf_sap/tpf_files/tpf_zmatc_analyzer/ directory. For the analytics server or a single-server configuration, enter the following commands to extract the content from the tar file:
- cp tpf_zmatc_analyzer.tar.gz tpf_sap/tpf_files/tpf_zmatc_analyzer/
- cd ~/tpf_sap/tpf_files/tpf_zmatc_analyzer/
- tar -xf tpf_zmatc_analyzer.tar.gz

- Define your Apache Kafka hosts, encryption settings, topic settings, and programmatic variables in the tpf_sap/user_files/kafka_hosts.yml file. For more information about how to configure this file, see the comments in the file.
- If Python 3.8 and the pyyaml library are not installed on your system, enter the following commands to install them. Both are required by the tpf_prepare_configurations.sh script.
- sudo yum install python38
- sudo python3 -m pip install --upgrade pyyaml
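To confirm that both installations succeeded, you can check the versions directly:
python3 --version
python3 -c "import yaml; print(yaml.__version__)"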

- Change your directory to the tpf_files/tpf_utility_scripts directory. Enter the following command:
cd ~/tpf_sap/tpf_files/tpf_utility_scripts

- Prepare your configuration. This configuration determines whether the server uses MariaDB or MySQL, whether it runs Linux on an x86 system or Linux on IBM Z, whether it uses trusted dependency repositories, and more. This step also builds the images that are required for the server.
- Define your settings in the tpf_sap/user_files/tpf_prepare_configurations.yml file.
- Enter the following command to configure your server:
./tpf_prepare_configurations.sh
Each image is built by the tpf_prepare_configurations.sh script. If you are prompted with an option to choose the repository, choose the docker repository option by pressing the Down Arrow key twice and then pressing Enter.
To view which files are edited and what changes are made to achieve your desired settings, see the tpf_prepare_configurations.sh script.
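To confirm that the images were built, you can list the local images; the exact image names depend on your configuration:
podman images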

- Start the database container. Complete the following steps:
- Optional: If you make any changes to the SQL code or content in the user_files directory that affects the database, you must rebuild the image of the database container. Enter the following command:
./tpf_prepare_configurations.sh db
- Start the database container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-db-kube.yaml
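To verify that the database pod started, you can list the running pods and containers; the pod name comes from the tpf-db-kube.yaml file:
podman pod ps
podman ps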

- Start the Kafka container. For the collection server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the Kafka configuration or content in the user_files directory that affects Kafka, you must rebuild the image of the Kafka container. Enter the following command:
./tpf_prepare_configurations.sh kafka
- Optional: If you need to rebuild the Kafka container, first remove all files and folders in the tpf_sap/tpf_files/tpf_kafka/volumes/kafka-logs/ directory. Enter the following command:
rm -rf tpf_sap/tpf_files/tpf_kafka/volumes/kafka-logs/*
If you do not remove the existing files and folders before you rebuild the container, you might receive the following error from the Kafka broker when the tpf-kafka-broker container starts:
The Cluster ID jw3FiOddStufuL211VzUjQ doesn't match stored clusterId.
- Start the Kafka container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-kafka-kube.yaml
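To check that the broker came up cleanly, you can inspect its log output. The name tpf-kafka-broker is taken from the error description in this step; confirm the actual container name with podman ps if it differs on your system.
# Confirm the container name with podman ps before you run this command.
podman logs tpf-kafka-broker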

- Set up the database tables and stored procedures by running the SQL script. Enter the following command:
./tpf_setup_db.sh

- Create the Apache Kafka topics. Run the following script for the collection server or single-server configurations:
./tpf_create_kafka_topics.sh
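To confirm that the topics were created, you can list them from inside the broker container. The container name and the installation path of the Kafka command-line tools are assumptions; adjust them to match your image. The port 9092 is the default Kafka port that is opened later in these instructions.
# Container name and tool path are assumptions; verify them with podman ps and podman exec.
podman exec tpf-kafka-broker /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092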

- Run the following script for the collection server or single-server configurations:
./tpf_modify_kafka_topics.sh hostname:port
- Start the tpfrtmc offline utility container. For the collection server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the RTMC configuration or any file in the tpfrtmc directory, you must rebuild the image of the tpfrtmc offline utility container. Enter the following command:
./tpf_prepare_configurations.sh rtmc
- Start the tpfrtmc offline utility container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-rtmc-realtime-kube.yaml
For example, if you need to modify the RTMC properties file, complete the following steps:
- Modify the tpf_sap/user_files/tpf_rtmc/RTMCProperties.realtime.yaml file.
- Rebuild the image. Enter the following command:
./tpf_prepare_configurations.sh rtmc
- Stop the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-rtmc-realtime-kube.yaml --down
- Restart the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-rtmc-realtime-kube.yaml

- Optional: Configure the ZRTMC analyzer instances to support multiple z/TPF systems.
- Start the ZRTMC analyzer container. For the analytics server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the ZRTMC analyzer configuration or any file in the zrtmc directory, you must rebuild the image of the tpf_zrtmc_analyzer container. Enter the following command:
./tpf_prepare_configurations.sh zrtmc-analyzer
- Start the ZRTMC analyzer container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zrtmc-analyzer-kube.yaml
For example, if you need to modify the ZRTMC analyzer properties file, complete the following steps:
- Modify the tpf_sap/user_files/tpf_zrtmc_analyzer_files/profile/tpf_zrtmc_analyzer_profile.yml file.
- Rebuild the image. Enter the following command:
./tpf_prepare_configurations.sh zrtmc-analyzer
- Stop the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zrtmc-analyzer-kube.yaml --down
- Restart the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zrtmc-analyzer-kube.yaml
Note: The ZRTMC analyzer connects to both Apache Kafka and the database upon startup. Any data that is available on the configured Apache Kafka topics starts being processed immediately.
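To watch the analyzer consume data as it arrives, you can follow the container log. The name tpf_zrtmc_analyzer is taken from the rebuild step above; confirm the actual container name with podman ps.
# Confirm the container name with podman ps before you run this command.
podman logs -f tpf_zrtmc_analyzer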

- Start the ZMATC analyzer container. For the analytics server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the ZMATC analyzer configuration or any file in the zmatc directory, you must rebuild the image of the tpf_zmatc_analyzer container. Enter the following command:
./tpf_prepare_configurations.sh zmatc-analyzer
- Start the ZMATC analyzer container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zmatc-analyzer-kube.yaml
For example, if you need to modify the ZMATC analyzer properties file, complete the following steps:
- Modify the tpf_sap/user_files/tpf_zmatc_analyzer/tpf_zmatc_properties.yaml file.
- Rebuild the image. Enter the following command:
./tpf_prepare_configurations.sh zmatc-analyzer
- Stop the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zmatc-analyzer-kube.yaml --down
- Restart the container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-zmatc-analyzer-kube.yaml
Note: The ZMATC analyzer performs analysis on all available message analysis tool results in the database on the analytics server.

- Start the Grafana container. For the analytics server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the Grafana configuration or content in the user_files directory that affects Grafana, you must rebuild the image of the Grafana container. Enter the following command:
./tpf_prepare_configurations.sh grafana
- Start the Grafana container. Enter the following command:
podman kube play --userns=keep-id --net slirp4netns:port_handler=slirp4netns ../../user_files/tpf_kube_files/tpf-grafana-kube.yaml
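To confirm that Grafana is up, you can query its health endpoint on the default port 3000; a healthy instance returns a small JSON status:
curl -s http://localhost:3000/api/health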

- Optional: If you have an active firewall, ensure that the ports specified in the YAML (.yml) files are open. Take one of the following actions:
- To open all of the default ports, run the tpf_sap/tpf_files/tpf_utility_scripts/tpf_open_firewall_ports.sh script.
- To modify the ports, complete the following steps:
- Enter the following command for each port that is exposed by the YAML files:
sudo firewall-cmd --zone=public --add-port=portID/tcp --permanent
where portID represents the following ports:
- For MariaDB or MySQL: 3306
- For Grafana: 3000
- For Apache Kafka: 2181, 9092, 9093, 8082, 8000
- For tpfrtmc: 9090, 9095
- Reload the firewall. Enter the following command:
sudo firewall-cmd --reload
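To confirm which ports are now open in the public zone, you can list them:
sudo firewall-cmd --zone=public --list-ports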

- Optional: If you plan to process tapes created by the name-value pair collection process with the ZCNVP command, start the tpf-rtmc-tape container. For the collection server or single-server configurations, complete the following steps:
- Optional: If you make any changes to the tpfrtmc configuration or content in the user_files directory that affects tpfrtmc, you must rebuild the image of the tpf-rtmc-tape container. Enter the following command:
./tpf_prepare_configurations.sh tape
- Start the tpf-rtmc-tape container. Enter the following command:
podman kube play --userns=keep-id ../../user_files/tpf_kube_files/tpf-rtmc-tape-kube.yaml

Results
The sample analytics pipeline is now fully functional.