Today, we are excited to announce that Prometheus remote write integration is now a key part of IBM Cloud Monitoring. 

IBM Cloud Monitoring is a cloud-native, container-intelligence management system that you can include as part of your IBM Cloud architecture to gain operational visibility into the performance and health of your applications, services and platforms. With this feature, Prometheus' built-in remote write capability forwards metrics from your existing Prometheus servers to the IBM Cloud Monitoring instance, which expands coverage to new use cases and environments where you can't install an agent to obtain metric data.

For those of you who want to continue running your own Prometheus environments but send data to the IBM Cloud Monitoring backend, or for environments where agents co-exist with Prometheus servers, you can offload scaling and long-term retention storage to IBM Cloud Monitoring while maintaining your existing setup and reducing operational overhead. With all of your telemetry data in one place, you can use existing dashboards or build new ones that combine and group data from various environments and across your entire software stack.

Additionally, by leveraging the remote write capability, you can also obtain metrics from environments where the Sysdig agent cannot be installed, such as Windows, z/OS, Power or other non-x86-based architectures typically seen in IoT or edge computing environments. After you configure remote write in your Prometheus YAML file, Prometheus data will begin flowing into IBM Cloud Monitoring almost instantly.

How do I start using Prometheus remote write?

All IBM Cloud Monitoring instances currently have Prometheus remote write functionality enabled. To configure the Prometheus servers in your environment for remote write, add a remote_write block to your prometheus.yml configuration file. To authenticate against the Prometheus remote write endpoint, use an Authorization header with your API token as the bearer token (not to be confused with your monitoring instance's Sysdig agent access key). For instance, configure your remote write section like this:

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  bearer_token: "<your API Token>"

You can also use the bearer_token_file entry to refer to a file instead of directly including the API token, which is most often used if you store this in a Kubernetes secret. 
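As a sketch of that file-based approach, a remote_write block reading the token from a file mounted from a Kubernetes secret might look like this (the mount path /etc/secrets/sysdig-api-token is illustrative and depends on your volume configuration):

remote_write:
- url: "https://&lt;region-url&gt;/prometheus/remote/write"
  # Illustrative path where a Kubernetes secret containing the API token is mounted
  bearer_token_file: /etc/secrets/sysdig-api-token

This keeps the token out of prometheus.yml itself, so the configuration file can be versioned without exposing credentials.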

Starting with version 2.26, Prometheus supports a new way to configure authorization by including a section called authorization within your remote_write block:

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  authorization:
    credentials: "<your API Token>"

Here, you can also use the credentials_file option, like above.
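With the newer syntax, the file-based variant (again assuming an illustrative secret mount path) would be:

remote_write:
- url: "https://&lt;region-url&gt;/prometheus/remote/write"
  authorization:
    # Reads the API token from a file, e.g. one mounted from a Kubernetes secret
    credentials_file: /etc/secrets/sysdig-api-token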

Note: Prometheus does not reveal the bearer_token value in the UI.

How do I control metrics sent via Prometheus remote write?

By default, all metrics scraped by your Prometheus servers are written to the Prometheus remote write endpoint when you configure remote write. These metrics will include a remote_write: true label when stored in IBM Cloud Monitoring, for easy identification.

You can specify additional custom label/value pairs to be sent along with each time series using the external_labels block within the global section. This allows you to filter or scope metrics when using them, similar to what you would do when setting up an agent tag.

For instance, if you have two different Prometheus servers in your environment configured to remote write, you could easily include an external label to differentiate them. 

Prometheus Server 1 configuration: 

global:
  external_labels:
    provider: prometheus1
remote_write:
- url: ...

Prometheus Server 2 configuration: 

global:
  external_labels:
    provider: prometheus2
remote_write:
- url: ...

To control which metrics you want to keep, drop or replace, you can include write_relabel_configs entries, as shown in the following example where only metrics from one specific namespace, myapps-ns, are sent:

remote_write:
- url: https://&lt;region-url&gt;/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  - source_labels: [namespace]
    regex: 'myapps-ns'
    action: keep
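Relabeling can also drop series you don't need before they are sent, which reduces ingestion volume. As a sketch, the following configuration (using the standard Go runtime metric name prefix as an example) drops all go_* metrics:

remote_write:
- url: https://&lt;region-url&gt;/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  # Drop any series whose metric name starts with go_ (e.g. go_goroutines)
  - source_labels: [__name__]
    regex: 'go_.*'
    action: drop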

IBM Cloud Monitoring regional endpoints

The following list contains the public endpoints for Prometheus remote write available per region:

Pricing

Prometheus remote write cost is based on metric ingestion, so the price is calculated the same way as for metrics collected using the Sysdig agent with IBM Cloud Monitoring. For more information on IBM Cloud Monitoring pricing, refer to our docs page.
