Identifying CPU hotspots with line-of-code precision is critical for troubleshooting performance issues, locating bottlenecks and improving response times for a better customer experience.

Go’s pprof toolset provides powerful tools for CPU profiling and for visualizing different aspects of the resulting profiles, and it is very useful during development (a minimal setup is sketched after the list below). However, profiling CPU usage in production environments comes with different requirements:

  • We need continuous CPU profiling data so that a historical baseline is available for optimization and troubleshooting.
  • Production environments are increasingly container-based and automatically orchestrated (for example, with Kubernetes), so it is often impractical to locate and connect to a specific production machine to start the profiler remotely.
  • If the application crashes, we need profiling data that was collected before the fact.
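
For comparison, here is a minimal development-time sketch using the standard library’s net/http/pprof package. The port (6060) and the 30-second sampling window are arbitrary choices for illustration:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the pprof endpoints on a local port during development.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... application work happens here ...
	select {}
}
```

A CPU profile can then be collected on demand with `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30`. This works well on a developer workstation, but it is exactly the kind of ad hoc, point-in-time workflow that the production requirements above make difficult.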

Using IBM Instana AutoProfile for automatic CPU profiling

The IBM Instana always-on AutoProfile feature is designed for profiling and monitoring production environments. It automates the collection of CPU profiles.
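
As a rough sketch of what enabling AutoProfile from a Go application can look like: the option name EnableAutoProfile, the INSTANA_AUTO_PROFILE environment variable and the service name below are assumptions to verify against the current Instana Go sensor documentation.

```go
package main

import (
	instana "github.com/instana/go-sensor"
)

func main() {
	// Assumption: EnableAutoProfile switches on continuous CPU profiling;
	// the same effect can typically be achieved by setting INSTANA_AUTO_PROFILE=true.
	instana.InitSensor(&instana.Options{
		Service:           "my-go-service", // hypothetical service name
		EnableAutoProfile: true,
	})

	// ... application work happens here ...
}
```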

After you enable AutoProfile and restart or redeploy the application, the profiles are available in the dashboard in a form that can be compared historically.

If you’re not already an IBM Instana user, sign up for a free two-week trial.
