Monitoring applications
Instana introduces the next generation of APM with its application hierarchy of services, endpoints, and application perspectives across them. The main goal is to simplify the monitoring of your business' service quality. Based on the data that is collected from traces and component sensors, Instana discovers your application landscape directly from the services that are being implemented.
Traditional Application Performance Management (APM) solutions are about managing the performance and availability of applications.
For APM tools, an application is a static set of code runtimes (for example, JVM or CLR) that are monitored by using an agent. Normally, the application is defined as a configuration parameter on each agent.
This concept, which was a good model for classical 3-tier applications, does not work anymore in modern (micro)service applications. A service does not always belong to exactly one application. Think of a credit card payment service that is used in a company's online store and also at its point of sale. A solution to this problem might be to define every service as an application, but that would introduce new issues:
- Too many applications to monitor. Treating every service as an application would result in hundreds or thousands of applications. Monitoring them with dashboards does not work - too much data for humans.
- Loss of context. Because every service is treated separately, it would not be possible to understand dependencies or the role of a service in the context of a problem.
Summary
Latency Distribution
The Latency Distribution chart is well suited for investigating latency-related issues in your applications, services, or endpoints. You can select a latency range on the chart and, by using the "View in Analytics" menu item, further explore the specific calls in Unbounded Analytics.
Infrastructure Issues & Changes
Infrastructure issues and changes that are related to your applications, services, or endpoints are shown on the respective dashboard's "Summary" tab to help you find correlations with notable application metric changes, such as an increase in "Erroneous Call Rate" or "Latency".
To learn more about specific issues or changes, select the desired time range on the chart and click the View Events menu item, which brings you to the Events view.
Processing Time
The Processing Time chart helps you understand how much time is spent on processing in an application, a service, or an endpoint itself (Self), and how much time is spent calling the downstream dependencies, broken down by the call type, such as Http, Database, Messaging, Rpc, SDK, and so on.
For example, if the latency of a call to the Shop service is 1000ms, and the Shop service makes an HTTP call to the Payment service that takes 300ms and then a database call to the Catalog service that takes 200ms, the self processing time of the Shop service is 1000-300-200=500ms.
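As a minimal sketch of this arithmetic, with the service names and durations taken from the example above (this is an illustration only, not an Instana API):

```java
// Illustrative only: self processing time is the total latency of a call
// minus the time spent in its downstream calls, mirroring the Shop example.
import java.util.List;

public class SelfTimeExample {
    public static void main(String[] args) {
        long totalLatencyMs = 1000;                     // call to the Shop service
        List<Long> downstreamMs = List.of(300L, 200L);  // Payment (HTTP) + Catalog (database)

        long downstreamTotal = downstreamMs.stream().mapToLong(Long::longValue).sum();
        long selfTimeMs = totalLatencyMs - downstreamTotal;

        System.out.println("Self processing time: " + selfTimeMs + "ms"); // 500ms
    }
}
```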
Time shift
To compare metrics with past time frames, you can use the Time Shift functionality. Be aware of decreased precision when you compare metrics against historical data.
Application Dependency Map
The dependency map is available for each application and provides the following:
- an overview of the service dependencies within your application.
- a visual representation of calls between services to understand communication paths and throughput.
- different layouts to quickly gain an understanding of the application's architecture.
- comfortable access to service views (dashboards, flows, calls, and issues).
Error messages
Error messages are all messages that are collected from errors that happened during code execution of a service. For example, if an exception is thrown during processing and is not caught and handled by the application code, it is listed on the Error Messages tab. An example is an unhandled exception in a servlet's doGet method that causes the request to be answered with HTTP 500.
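A minimal sketch of such an unhandled exception, assuming the Jakarta Servlet API; the servlet class and exception message are purely illustrative:

```java
// Illustrative only: an exception thrown in doGet() and not caught by the
// application propagates to the servlet container, which answers the request
// with HTTP 500; the exception would then appear on the Error Messages tab.
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

public class CatalogServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // No try/catch here: the runtime exception is not handled by application code.
        throw new IllegalStateException("catalog backend unavailable");
    }
}
```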
Log messages
Log messages are collected from instrumented logging libraries or frameworks (see, for example, the section "Logging" in the list of supported libraries). When a service writes a log message with severity WARN or higher through a logging library, the message is displayed on the "Log Messages" tab. Additionally, captured log messages are shown in the trace details in the context of their trace. If a log message was written with severity ERROR or higher, it is marked as an error. Note that log messages with a severity lower than WARN are not tracked.
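A minimal sketch of which severities would be captured, using SLF4J here as an example of a logging facade (the class and messages are illustrative; check the list of supported libraries for what is actually instrumented in your setup):

```java
// Illustrative only: with an instrumented logging framework, WARN and ERROR
// messages are captured; messages below WARN are not tracked.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    void charge(String orderId) {
        log.info("charging order {}", orderId);             // below WARN: not tracked
        log.warn("retrying charge for order {}", orderId);  // shown on the Log Messages tab
        log.error("charge failed for order {}", orderId);   // shown and marked as an error
    }
}
```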
Infrastructure
From the Application Perspective view or Services dashboard, it is possible to navigate to the corresponding infrastructure component shown on the Infrastructure Monitoring view.
The "Unmonitored" infrastructure component
The list of infrastructure components for an application or service might sometimes include an "Unmonitored" host, container, or process.
The "Unmonitored" component indicates that, for some or all calls to this service, we were unable to link the calls to a specific infrastructure component. Because services are "logical" entities, we can often link them to infrastructure components through the monitored process. This does not hold, for example, for third-party web services, which we don't monitor but for which we still create services and endpoints based on hostname and path. Since no host or process is known, these services result in the "Unmonitored" infrastructure component being shown.
Smart Alerts
View a list of all your configured Smart Alerts. Click an alert to view its configuration, modify it, or view its revision history. If required, you can also disable or remove the alert.
For information on how to add an alert, see Smart Alerts docs.
Time range adjustment
The time range that is used in Instana dashboards or analytics might slightly differ from the selected time range in the time picker. The dashboard or analytics time range excludes the first and the last partial bucket. For example, when you select the Last 24 hours preset in the time picker at 3:15 PM on 20 January, the time range is adjusted to 3:30 PM 19 January–3:00 PM 20 January. This adjustment is done because the respective chart granularity is 30 minutes. The time range adjustment ensures consistency among different widgets on the same page and avoids misinterpretation of partial buckets as an unexpected metric trend, for example, a drop in number of calls.
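A sketch of this bucket alignment, assuming the 30-minute granularity from the example above; it illustrates the rounding of partial buckets, not Instana's actual implementation:

```java
// Illustrative only: trims a selected time range to whole buckets by rounding
// the start up and the end down to the chart granularity (here 30 minutes),
// which excludes the first and the last partial bucket.
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimeRangeAdjustment {
    public static void main(String[] args) {
        Duration bucket = Duration.ofMinutes(30);
        LocalDateTime selectedEnd = LocalDateTime.of(2024, 1, 20, 15, 15); // 3:15 PM, 20 January
        LocalDateTime selectedStart = selectedEnd.minusHours(24);          // "Last 24 hours"

        LocalDateTime adjustedStart = ceil(selectedStart, bucket);         // 3:30 PM, 19 January
        LocalDateTime adjustedEnd = floor(selectedEnd, bucket);            // 3:00 PM, 20 January

        System.out.println(adjustedStart + " - " + adjustedEnd);
    }

    static LocalDateTime floor(LocalDateTime t, Duration bucket) {
        long epochMin = t.toEpochSecond(ZoneOffset.UTC) / 60;
        long bucketMin = bucket.toMinutes();
        long floored = (epochMin / bucketMin) * bucketMin;
        return LocalDateTime.ofEpochSecond(floored * 60, 0, ZoneOffset.UTC);
    }

    static LocalDateTime ceil(LocalDateTime t, Duration bucket) {
        LocalDateTime f = floor(t, bucket);
        return f.equals(t) ? f : f.plus(bucket);
    }
}
```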
Approximate Data
When you view a dashboard or run a query in Analytics over a time range beyond the past seven days, you might see the Approximate Data indicator on various widgets. It indicates that Instana is accessing a reduced, statistically significant sample of traces and calls to serve the queries. Traces and calls that occur rarely might not be represented in such scenarios.