You can build an analytics pipeline to process runtime metrics collection
data in a way that fits your enterprise environment. For example, you can use
the sample analytics pipeline container definitions, install and configure the
components natively, reuse components that already exist in your enterprise, or
combine these approaches to suit your needs.
The Apache Kafka data formats and the message analysis tool database
layout serve as the IBM-supported interface for implementing your own analytics pipeline.
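Because the Kafka data formats are the supported interface, one way to explore them is to read a few messages directly from a topic. The following is a minimal sketch that assumes the standard Kafka CLI tools are on your PATH, a broker is listening at localhost:9092, and a hypothetical topic name; substitute the topic names that the sample pipeline scripts actually create.

```shell
# Inspect a sample of the runtime metrics messages that are published to Kafka.
# Assumptions: Kafka CLI tools on PATH, broker at localhost:9092, and a
# hypothetical topic name "tpf.rtmc.metrics" -- replace it with a topic
# name that the sample pipeline scripts create in your environment.
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic tpf.rtmc.metrics \
  --from-beginning \
  --max-messages 10
```

Reading a small sample this way lets you confirm the message format before you write your own consumer.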
Procedure
- Install, configure, and start a MariaDB or MySQL database.
- Follow the installation instructions in the runtime metrics collection entry in Optional z/TPF and z/TPFDF product software.
- Configure your database for
performance.
- Follow the instructions in the
tpf_data_sci/Docker/tpf_db_docker_files/tpf_setup_db.sh script that is provided
with the sample analytics pipeline to set up the database tables
and stored procedures.
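The database steps above can be sketched as follows. This is an illustrative sequence only: it assumes Docker is available, the root password is a placeholder you must change, and the tpf_data_sci tree is in the current directory; review the provided setup script and adjust its connection details before you run it.

```shell
# Start a MariaDB instance in a container (illustrative; the password is a
# placeholder and the official "mariadb" image is assumed).
docker run -d --name tpf-rtmc-db \
  -e MARIADB_ROOT_PASSWORD=change-me \
  -p 3306:3306 mariadb:latest

# Run the provided sample script, which sets up the database tables and
# stored procedures. Review it first and adjust connection details to
# match your environment.
./tpf_data_sci/Docker/tpf_db_docker_files/tpf_setup_db.sh
```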
- Install, configure, and start Apache Kafka.
- Follow the installation instructions in the runtime metrics collection entry in Optional z/TPF and z/TPFDF product software.
- Consider using the tpf_data_sci/Docker/tpf_create_kafka_topics.sh
and tpf_data_sci/Docker/tpf_modify_kafka_topics.sh scripts that are provided
with the sample analytics pipeline to create and configure the
required topics.
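The topic setup can be sketched as follows, either with the provided sample script or manually with the standard Kafka CLI. The topic name, partition count, and replication factor shown here are illustrative placeholders, not values defined by the product.

```shell
# Option 1: create the required topics with the provided sample script.
./tpf_data_sci/Docker/tpf_create_kafka_topics.sh

# Option 2: create a topic manually with the standard Kafka CLI.
# The topic name, partition count, and replication factor are placeholders.
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic tpf.rtmc.metrics \
  --partitions 3 \
  --replication-factor 1
```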
- Install, configure, and start the tpfrtmc offline utility.
- Create a properties
file to configure runtime metrics collection.
- Start the tpfrtmc offline utility.
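A skeleton properties file might be generated as shown below. Every key and value in this sketch is a hypothetical placeholder: consult the runtime metrics collection documentation for the real property names that the tpfrtmc offline utility accepts.

```shell
# Generate a skeleton properties file for the tpfrtmc offline utility.
# NOTE: every key and value below is a hypothetical placeholder -- see the
# runtime metrics collection documentation for the real property names.
cat > tpfrtmc.properties <<'EOF'
# Kafka broker that metrics are published to (placeholder key and value)
kafka.bootstrap.servers=localhost:9092
# z/TPF system to collect metrics from (placeholder keys and values)
tpf.host=mytpfsystem.example.com
tpf.port=1234
EOF
```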
- Optional: Install, configure, and start the ZRTMC analyzer.
- For code and installation requirements, see the
tpf_data_sci/Docker/tpf_zrtmc_analyzer_docker_files/Dockerfile file that is
provided with the sample analytics pipeline.
- Configure the ZRTMC analyzer profile
file.
- Configure multiple tpf_zrtmc_analyzer
containers.
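Running multiple analyzer containers can be sketched as follows. The image tag and the instance count are illustrative choices, and the build context is assumed to be the directory that contains the provided Dockerfile; configure each container's profile file as described above.

```shell
# Build the ZRTMC analyzer image from the provided Dockerfile.
# The image tag "tpf-zrtmc-analyzer" is an illustrative placeholder.
docker build \
  -t tpf-zrtmc-analyzer \
  -f tpf_data_sci/Docker/tpf_zrtmc_analyzer_docker_files/Dockerfile \
  tpf_data_sci/Docker/tpf_zrtmc_analyzer_docker_files

# Start more than one analyzer container (instance count is illustrative).
for i in 1 2; do
  docker run -d --name tpf-zrtmc-analyzer-$i tpf-zrtmc-analyzer
done
```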
- Optional: Install, configure, and start Grafana.
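If you choose to run Grafana in a container, a minimal sketch is shown below; it assumes Docker is available and uses the official grafana/grafana image, which listens on port 3000 by default.

```shell
# Run Grafana in a container; the official image serves its web UI on
# port 3000. The container name is an illustrative placeholder.
docker run -d --name grafana -p 3000:3000 grafana/grafana
```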
- Optional: Set up runtime metrics collection for the message analysis tool.