Architecture
OMEGAMON® AI Insights is deployed as a set of Docker containers. The entire application, along with its dependencies and configuration, is packaged into portable, self-sufficient containers. Communication between OMEGAMON AI Insights and the ELK stack (Elasticsearch, Logstash, and Kibana) uses HiperSockets, a VPN, or HTTPS.
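As a minimal sketch of the HTTPS path, the snippet below builds (but does not send) an HTTPS request that indexes one JSON document into Elasticsearch. The host name, index name, and field names are illustrative assumptions, not values from the product.

```python
import json
import urllib.request

# Illustrative metric document; field names are assumptions, not the product's schema.
doc = {"metric": "cpu_busy_pct", "value": 42.0}

# Hypothetical Elasticsearch endpoint reached over HTTPS (host and index are made up).
req = urllib.request.Request(
    url="https://elastic.example.com:9200/omegamon-metrics/_doc",
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would transmit the document over TLS;
# it is omitted here so the sketch runs without network access.
```

The same request could equally travel over HiperSockets or a VPN; only the transport between the container and the ELK stack changes, not the JSON payload.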
The services provided by OMEGAMON AI Insights arise from the interaction of four main components: an OMEGAMON monitoring agent, OMEGAMON AI Insights itself, OMEGAMON Data Provider, and the ELK stack (Elasticsearch, Logstash, and Kibana). Understanding how these products and components interface with each other can help you use OMEGAMON AI Insights effectively.
Together, these containers provide multiple services:
- First, an OMEGAMON monitoring agent collects the necessary data and sends it, in JSON format, via OMEGAMON Data Provider to a Logstash listener.
- Logstash then stores the data in an Elasticsearch repository.
- Next, OMEGAMON AI Insights reads the data from the Elasticsearch repository and feeds it to its machine learning algorithms.
- After the algorithms have run, their results are written back to Elasticsearch.
- Kibana retrieves these computed results and displays them alongside the live data that has meanwhile been accumulating in Elasticsearch. By superimposing the model predictions on the live data, you get a clear view of any anomalous behavior.
- Finally, when critical anomalies are detected, alerts are triggered and email notifications are sent to stakeholders.
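The data flow above can be sketched in miniature. In the example below, an in-memory dictionary stands in for the Elasticsearch repository, and a simple z-score check stands in for the product's machine learning algorithms; the field names, index names, and threshold are all illustrative assumptions.

```python
import json
import statistics

# Stand-in for the Elasticsearch repository: index name -> list of documents.
repository = {"omegamon-metrics": [], "omegamon-anomalies": []}

def ingest(record_json):
    """Stand-in for the agent -> Data Provider -> Logstash path:
    parse a JSON record and store it in the metrics index."""
    repository["omegamon-metrics"].append(json.loads(record_json))

def score_anomalies(threshold=1.5):
    """Stand-in for the ML step: flag values more than `threshold`
    population standard deviations from the mean, and write the
    results back to the repository (here, the anomalies index)."""
    values = [doc["cpu_busy_pct"] for doc in repository["omegamon-metrics"]]
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    for doc in repository["omegamon-metrics"]:
        if stdev and abs(doc["cpu_busy_pct"] - mean) > threshold * stdev:
            repository["omegamon-anomalies"].append(
                {"timestamp": doc["timestamp"], "value": doc["cpu_busy_pct"]}
            )

# An agent-style producer sends JSON records (field names are made up).
for i, cpu in enumerate([42.0, 41.5, 43.1, 42.7, 99.9]):
    ingest(json.dumps({"timestamp": f"2024-01-01T00:0{i}:00Z", "cpu_busy_pct": cpu}))

score_anomalies()  # the 99.9 sample is flagged as anomalous
```

In the real product, Kibana would read both indexes and overlay the flagged results on the live metric stream; the alerting step would then email stakeholders about documents landing in the anomalies index.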
At its core, OMEGAMON AI Insights is a Python application that runs on Linux on Z and accepts data in JSON format. At run time, the application code executes on a Linux on Z instance. Configuration files on the Linux server control the execution of the OMEGAMON AI Insights code.
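To make the last point concrete, here is a minimal sketch of a JSON configuration file driving the application's execution. The key names and values are hypothetical examples, not the product's actual configuration schema.

```python
import json

# Hypothetical configuration file contents (key names are illustrative only).
config_text = """
{
  "elasticsearch_url": "https://elastic.example.com:9200",
  "train_window_days": 14,
  "alert_email": "ops@example.com"
}
"""

# At start-up, the application would parse such a file and use its values
# to decide where to read data from and how to run its algorithms.
config = json.loads(config_text)
es_url = config["elasticsearch_url"]
```

Because the input is plain JSON, the same settings can be edited on the Linux server without rebuilding the containers.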