Memory leaks are common in almost any language, including garbage-collected ones, and Go is no exception. If a reference to an object is not managed properly, the object can remain reachable, and therefore never collected, long after it is last used. This usually happens at the application-logic level but can also occur inside an imported package.

Unfortunately, memory leaks are hard to detect and fix in development or staging environments, both because the production environment exhibits different and more complex behavior and because many leaks take hours or even days to manifest themselves.

How to find memory leaks in production

Go has a powerful profiling toolset called pprof that includes a heap allocation profiler. The heap profiler reports the size of the allocated heap and the number of objects per stack trace (i.e., the source code location where the memory was allocated). This information is critical, but a single profile is not sufficient: to detect an actual leak over a period of time, you need to record allocation profiles at regular intervals and compare them.

Some issues when using pprof against production environments include the following:

  • The profiler’s HTTP handler, which accepts profiling requests, must be attached to the application’s HTTP server (or a dedicated server must be started). This means you should take extra security measures to protect the listening port.
  • Locating and accessing the application node’s host in order to run go tool pprof can be tricky in container environments like Kubernetes.
  • If the application has crashed or can’t respond to the pprof request, no profiling is possible.
  • Obtaining a historical view of heap allocations per stack trace requires running pprof manually at regular intervals, then interactively analyzing and comparing the results.

Using IBM Instana for automatic memory leak detection and profiling

The IBM Instana platform automates the collection of heap allocation profiles, solving the above-mentioned issues. The IBM Instana Go Profiler, initialized in the application, continuously records and reports allocation profiles to the dashboard.

See the IBM Instana Profiling documentation for detailed setup instructions. After the application is restarted or deployed, the profiles will be available in the dashboard in a historically comparable form.

Similar profile history is automatically available for CPU usage, blocking calls and HTTP handlers. CPU, memory and GC metrics from the Go runtime are also automatically available in the dashboard.

If you aren’t already an IBM Instana user, you can get started with a free two-week trial.

