Monitoring Java virtual machine (JVM)
You can comprehensively monitor your Java virtual machine (JVM) with Instana to identify bottlenecks and optimize performance. To monitor JVM, install the Instana host agent. After you install the agent, the Instana Java sensor automatically activates, collecting real-time metrics and tracing data that you can view in the Instana UI.
The Java sensor provides automated code instrumentation for supported technologies, zero-configuration health monitoring of JVM instances, and end-to-end traces of requests across all systems.
Supported information
The Java sensor supports the following languages, operating systems, and runtimes:
Supported languages
The sensor supports the following languages:
- Clojure
- Java
- Kotlin
- Scala
Supported operating systems
The Java sensor supports the same operating systems as the host agent. You can check these in the Supported operating systems section for each host agent, such as Supported operating systems for Unix.
Supported Java distributions and runtimes
For information about supported Java distributions and runtimes, see Supported JVM distributions.
Supported frameworks and libraries for tracing
The Java sensor instruments several frameworks and libraries for tracing. For more information, see Instrumented frameworks and libraries. For the deprecated Java 6 runtime, see Instrumented frameworks and libraries for deprecated runtime Java 6.
System requirements
Before you install the Instana agent, ensure the necessary system requirements are met. For more information, see System requirements.
Installing the Instana agent
To monitor JVM, you must install the Instana host agent. For more information, see Installing host agents. The agent automatically deploys, configures, and installs the Java sensor. To ensure that your Java applications are instrumented, make sure that your JVM distribution is supported.
Excluding JVMs from monitoring
To prevent the Instana agent from attaching to a JVM, set the INSTANA_IGNORE environment variable to true in the JVM's environment.
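A minimal shell sketch (the JAR name is a placeholder) shows how the variable might be set before the process starts:

```shell
# Prevent the Instana agent from attaching to this JVM
export INSTANA_IGNORE=true

# Start the application in the same environment; the JAR name below is illustrative
# java -jar my-app.jar
echo "INSTANA_IGNORE=$INSTANA_IGNORE"
```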
Optional: Configuring the sensor
After you install the Instana agent, the Java sensor is automatically installed and configured. Although you do not need any configuration for out-of-the-box metrics and distributed tracing, you can configure individual components of the sensor.
For more information about configuring the sensor, see Configuring Java sensor.
After the Java sensor is configured, it automatically starts collecting metrics from the JVM. You can view these metrics in the Instana UI. The Java sensor also supports other features of Instana like automatic tracing, custom tracing, and automatic profiling.
Metrics collection
The Java sensor monitors the JVM instance and collects configuration data, performance metrics, and derived metrics from it.
To view these metrics, complete the following steps:
- In the sidebar of the Instana UI, select Infrastructure.
- Click a specific monitored host.
The JVM dashboard displays all the collected metrics for the JVM instance.
Configuration data
The following table lists the configuration data that is collected from the JVM instance:
| Configuration | Description |
|---|---|
| Java version | The Java version used by the JVM |
| Java runtime | The Java Runtime Environment (JRE) implementation |
| Maximum heap | Maximum heap size available for the JVM |
| Class path | The class path parameter set in the JVM |
| JVM arguments | The startup options and configuration parameters passed to the JVM |
| Services | Logical service names identified and monitored by Instana |
Performance metrics
The following performance metrics are collected from the JVM instance:
Memory metrics
The following table summarizes the memory metrics used to measure memory usage in the JVM:
| Performance metric | Description | Data source | Units |
|---|---|---|---|
| Memory used | Total memory currently used by the JVM | `java.lang.Runtime#totalMemory` | Bytes |
| Heap memory | Used heap memory: difference between `java.lang.Runtime#totalMemory` and `java.lang.Runtime#freeMemory`. Maximum heap size: determined by parsing the `-Xmx` command-line parameter or collected from `java.lang.Runtime#maxMemory`. Percentage of heap memory used: (used heap memory / maximum heap size) * 100 | `java.lang.Runtime` methods | Bytes or percentage (%) |
| Memory pool | Memory pool usage of heap and non-heap pools, displayed as a graph over a selected time period | `ManagementFactory#getMemoryPoolMXBeans` | Bytes |
| In use | Size of the heap memory currently used by the JVM (usage and utilization) | `java.lang.management.MemoryUsage` | Usage: MiB; utilization: percentage (%) |
| Pool | Name of the memory pool managed by the JVM | `ManagementFactory#getMemoryPoolMXBeans` | — |
| Init | Initial memory size allocated at JVM startup | `getInit` | Bytes |
| Max | Maximum memory size that the JVM can allocate to that pool | `getMax` | Bytes |
| Used | Memory size currently in use | `getUsage` | Bytes |
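The data sources in the table map onto standard JDK APIs. As an illustrative sketch (the class name is hypothetical), the same values can be read in plain Java:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class JvmMemoryProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // "Memory used"
        long max = rt.maxMemory();                      // maximum heap size
        double pct = 100.0 * used / max;                // percentage of heap used
        System.out.printf("heap used: %d bytes (%.1f%% of max)%n", used, pct);

        // Per-pool values; getInit and getMax may return -1 if undefined
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("pool %-30s init=%d max=%d used=%d%n",
                    pool.getName(), u.getInit(), u.getMax(), u.getUsed());
        }
    }
}
```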
Threads metrics
The following table summarizes the information related to threads metrics:
| Performance metric | Description | Data source | Units |
|---|---|---|---|
| Threads | Number of threads in different states, displayed on a graph over a selected time period: new, runnable, timed-waiting, waiting, or blocked | `java.lang.management.ThreadMXBean#getAllThreadIds` | Count |
| New | Number of threads created but not yet started | `ThreadMXBean#getThreadInfo` | Count |
| Runnable | Number of threads that are runnable and eligible for CPU execution | `ThreadMXBean#getThreadInfo` | Count |
| Timed-waiting | Number of threads waiting for a specified period of time | `ThreadMXBean#getThreadInfo` | Count |
| Waiting | Number of threads waiting indefinitely for another thread to perform an action | `ThreadMXBean#getThreadInfo` | Count |
| Blocked | Number of threads blocked while waiting to acquire a lock | `ThreadMXBean#getThreadInfo` | Count |
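These counts come from the standard `ThreadMXBean` API. A short sketch (class name is hypothetical) that tallies live threads by state:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.EnumMap;
import java.util.Map;

public class ThreadStateCounter {
    public static void main(String[] args) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);
        // getThreadInfo may return null entries for threads that died in between
        for (ThreadInfo info : tmx.getThreadInfo(tmx.getAllThreadIds())) {
            if (info != null) {
                counts.merge(info.getThreadState(), 1, Integer::sum);
            }
        }
        counts.forEach((state, n) -> System.out.println(state + ": " + n));
    }
}
```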
Garbage collection (GC) metrics
The following table summarizes the information related to garbage collection metrics:
| Performance Metric | Description | Data source | Unit |
|---|---|---|---|
| Garbage Collection | Garbage collection activation and runtime values, displayed on a graph over a selected time period | Garbage collection information: `ManagementFactory#getGarbageCollectorMXBeans`; graph values: `java.lang.management.GarbageCollectorMXBean` | — |
| PS Scavenge Time | Total time spent on GC in the Young region (Eden + Survivor) (minor GC) | `getCollectionTime` | Milliseconds |
| PS MarkSweep Time | Total time spent on GC in the Old region (major GC) | `getCollectionTime` | Milliseconds |
| PS Scavenge Calls | Number of minor GC runs | `getCollectionCount` | Count |
| PS MarkSweep Calls | Number of major GC runs | `getCollectionCount` | Count |
- `getCollectionTime` and `getCollectionCount` values are the calculated differential over a 1‑second interval.
- `getCollectionTime` is the approximate elapsed time in milliseconds for accumulated garbage collection.
- `getCollectionCount` is the invocation count.
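The 1-second differential described above can be reproduced with the same JDK API. A sketch (class name is hypothetical) that snapshots the cumulative counters, waits one second, and reports the deltas:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcDeltaSampler {
    public static void main(String[] args) throws InterruptedException {
        List<GarbageCollectorMXBean> gcs = ManagementFactory.getGarbageCollectorMXBeans();
        long[] counts = new long[gcs.size()];
        long[] times = new long[gcs.size()];
        for (int i = 0; i < gcs.size(); i++) {
            counts[i] = gcs.get(i).getCollectionCount(); // cumulative invocation count
            times[i] = gcs.get(i).getCollectionTime();   // cumulative elapsed time, ms
        }
        Thread.sleep(1000); // differential interval of one second
        for (int i = 0; i < gcs.size(); i++) {
            System.out.printf("%s: %d runs, %d ms in the last second%n",
                    gcs.get(i).getName(),
                    gcs.get(i).getCollectionCount() - counts[i],
                    gcs.get(i).getCollectionTime() - times[i]);
        }
    }
}
```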
Application metrics
The following table summarizes information related to the application performance metrics:
| Performance metric | Description | Data source | Unit |
|---|---|---|---|
| Suspension | Delay in application execution time due to the JVM, operating system, or CPU scheduling in the last second | Calculated by the Instana in-app measurement thread | Milliseconds |
Derived metrics
The following table lists the available metrics that are derived based on the performance metrics, which are calculated from a JVM instance:
| Performance metrics | Description | Data source | Metric name |
|---|---|---|---|
| Memory After GC | The amount of memory that an application uses after a Garbage Collection (GC) event occurs. When the JVM sensor reports a global invocation of garbage collection, the memory value at that time is used to report the Memory After GC value. | Memory usage statistics and garbage collection events | memory.gc.after |
| Memory Before GC | The amount of memory that an application uses before a Garbage Collection (GC) event occurs. When the JVM sensor reports a global invocation of garbage collection, the memory value previous to this invocation is used to report the Memory Before GC value. | Memory usage statistics and garbage collection events | memory.gc.before |
| Memory After GC Percentage | The proportion of total available memory that an application uses after a Garbage Collection (GC) event. It is the percentage of memory in use after a global garbage collection, relative to the maximum memory that the JVM used. | Memory used performance statistic, maximum memory used, and garbage collection statistics | memory.gc.afterPercentage |
| Memory Before GC Percentage | The proportion of total available memory that an application uses before a Garbage Collection (GC) event. It is the percentage of memory in use before a global garbage collection, relative to the maximum memory that the JVM used. | Memory used performance statistic, maximum memory used, and garbage collection statistics | memory.gc.beforePercentage |
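Instana derives these values internally, but a comparable before/after pair can be observed with the HotSpot-specific `com.sun.management` notification API. This sketch (the class name is hypothetical, and this is not necessarily how the sensor computes `memory.gc.after`) prints the memory in use before and after each collection:

```java
import com.sun.management.GarbageCollectionNotificationInfo;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.Map;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;

public class GcMemoryListener {
    public static void main(String[] args) throws Exception {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // On HotSpot, GarbageCollectorMXBean also implements NotificationEmitter
            ((NotificationEmitter) gc).addNotificationListener((notification, handback) -> {
                if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    return;
                }
                GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                        .from((CompositeData) notification.getUserData());
                long before = total(info.getGcInfo().getMemoryUsageBeforeGc());
                long after = total(info.getGcInfo().getMemoryUsageAfterGc());
                System.out.printf("%s: before=%d after=%d bytes%n",
                        info.getGcName(), before, after);
            }, null, null);
        }
        System.gc(); // request a collection so the listener fires
        Thread.sleep(500);
    }

    private static long total(Map<String, MemoryUsage> usage) {
        return usage.values().stream().mapToLong(MemoryUsage::getUsed).sum();
    }
}
```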
Health signatures
Each sensor has a curated knowledge base of health signatures that are evaluated continuously against the incoming metrics. These health signatures are used to raise issues or incidents that depend on user impact.
Built-in events trigger issues or incidents based on failing health signatures on entities, and custom events trigger issues or incidents based on the thresholds of an individual metric of any entity.
For more information about the built-in events for the Java sensor, see the Built-in events reference.
Custom metrics
Instana supports some common Java metrics libraries, such as Dropwizard metrics. If you use a supported library, you can manually instrument your application code to collect custom metrics.
For more information, see Custom tracing.
Using Dropwizard metrics for custom JVM monitoring
If the JVM loads the Dropwizard metrics library, Instana collects custom metrics and displays them on the JVM dashboard. To prevent overloading the backend, a default limit of 200 metrics applies.
To disable or change the limit of collected metrics, use the following configuration:
```yaml
com.instana.plugin.java:
  dropwizardMetricCollection:
    enabled: false
    limit: 200
```
If you are using Dropwizard metrics as part of the Dropwizard framework, see Monitoring Dropwizard.
Other metrics
Apart from configuration, performance, and custom metrics, the Java sensor also collects other metrics, such as live thread dump and heap dump.
Live Thread Dump
To view a live thread dump for the JVM, click Get Thread Dump.
Heap Dump
To create a heap dump for the JVM, click Get Heap Dump. To store the heap dump, indicate a location local to the JVM.
Tracing Java applications
The Java sensor in Instana uses the following methods to trace Java applications:
- Instana AutoTrace: Automatic tracing of Java applications without requiring manual configuration or code changes.
- Custom tracing: Manual instrumentation of specific parts of the Java application code to capture custom metrics and gain deeper insights.
- Instana AutoProfile: Automatic profiling of Java applications provides detailed information about performance, CPU usage, memory allocation, and other system resources.
Instana AutoTrace
By default, the Java sensor monitors all requests and automatically creates a distributed trace for each of them. This distributed trace includes cross-host and cross-language tracing. For more information, see Instana AutoTrace™.
You can view these traces in the Instana UI. For more information, see Analyzing traces and calls.
Logging
You can view only logs that are at WARN level or higher.
When Log4j, Log4j2, or Logback is used, Instana automatically populates the Mapped Diagnostic Context (MDC) with the trace ID to enable more precise correlation of logging and tracing. The MDC variable name is `instana.trace.id`. For more information about using this variable in format strings, see the documentation for your logging framework.
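For example, a Logback pattern can reference the MDC key directly (a sketch; your appender configuration will differ):

```xml
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <!-- %X{instana.trace.id} expands to the trace ID that Instana puts into the MDC -->
    <pattern>%d{HH:mm:ss.SSS} %-5level [trace=%X{instana.trace.id}] %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```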
Custom tracing
The Java sensor provides a fully automated out-of-the-box tracing instrumentation. But in some cases, you might prefer to send custom traces to your Instana dashboard. You can use the following methods to implement custom tracing:
Java Trace SDK
If you want to instrument a framework that is not yet supported by Instana, or monitor the requests of a custom application, use the Java Trace SDK. For more information, see the GitHub repository.
Before you implement custom tracing by using the SDK, see the tracing best practices.
Configuration-based Java Trace SDK
In some situations, using the Java Trace SDK is not feasible or desirable because it requires modifying the source code or involving someone who can. In these cases, use the configuration-based Java Trace SDK. Although less feature-rich than the programmatic Java Trace SDK, it allows a declarative configuration of spans and tags that covers many common use cases.
Before you implement custom tracing by using the configuration-based Java Trace SDK, see the Tracing best practices.
Java OpenTracing API
To collect traces that are described through the OpenTracing API, you must use Java OpenTracing. For more information, see OpenTracing.
OpenCensus Instana Trace Exporter
Instana provides an OpenCensus Trace Exporter for applications that are written in Java. By using the Instana agent processes as a proxy, Instana forwards traces that are exported by applications that are instrumented with OpenCensus to its backend.
For more information, see the OpenCensus Exporters.
Instana AutoProfile
Profiles are essential for locating performance hot spots and bottlenecks at the code level. They are instrumental in reducing resource consumption and improving performance.
Instana AutoProfile™ generates and reports process profiles to Instana. Unlike development-time and on-demand profilers, where you must manually initiate profiling, AutoProfile™ automatically schedules and continuously performs profiling appropriate for critical production environments.
For more information, see Instana AutoProfile™.
Troubleshooting
You might encounter some monitoring issues with Instana. For more information, see Troubleshooting.