May 21, 2021 By Carlos Tolon 3 min read

Today, we are excited to announce that Prometheus remote write integration is now a key part of IBM Cloud Monitoring. 

IBM Cloud Monitoring is a cloud-native, container-intelligence management system that you can include as part of your IBM Cloud architecture to gain operational visibility into the performance and health of your applications, services and platforms. With this feature, Prometheus’ built-in remote write capability forwards metrics from your existing Prometheus servers to your IBM Cloud Monitoring instance, expanding coverage to use cases and environments where you can’t install an agent to obtain metric data. 

If you want to continue running your own Prometheus environments while sending data to the IBM Cloud Monitoring backend, or if you run environments where agents co-exist with Prometheus servers, you can offload scaling and long-term retention storage to IBM Cloud Monitoring, maintaining your existing setup while reducing operational overhead. With all of your telemetry data in one place, you can use existing dashboards or build new ones that combine and group data from various environments and across your entire software stack.

Additionally, by leveraging the remote write capability, you can also obtain metrics from environments where the Sysdig agent cannot be installed, such as Windows, z/OS, Power or other non-x86-based architectures typically seen in IoT or edge computing environments. After you configure remote write in your Prometheus YAML file, Prometheus data begins flowing into IBM Cloud Monitoring almost instantly. 

How do I start using Prometheus remote write?

All IBM Cloud Monitoring instances currently have Prometheus remote write functionality enabled. To configure Prometheus servers in your environment to remote write, add the remote_write block to your prometheus.yml configuration file. To authenticate against the Prometheus remote write endpoint, you will need to use an Authorization header with your API token as a bearer token (not to be confused with your monitoring instance’s Sysdig agent access key). For instance, configure your remote write section like this: 

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  bearer_token: "<your API Token>"

You can also use the bearer_token_file entry to refer to a file instead of directly including the API token, which is most often used if you store the token in a Kubernetes secret. 
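As an illustration, assuming Prometheus runs on Kubernetes, you could store the token in a secret and point bearer_token_file at the mounted path. The secret name, namespace and mount path below are hypothetical, and the secret must be mounted as a volume into the Prometheus pod:

kubectl create secret generic sysdig-api-token \
  --from-literal=token="<your API Token>" -n monitoring

remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  bearer_token_file: /etc/secrets/token

This keeps the API token out of prometheus.yml and out of version control.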

From version v2.26, Prometheus allows a new way to configure the authorization by including a section within your remote_write block called authorization:

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  authorization:
    credentials: "<your API Token>"

Here, you can also use the credentials_file option, like above.
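For example, with the v2.26+ authorization syntax, referencing a token file instead of an inline credential would look like the following (the file path is illustrative):

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  authorization:
    credentials_file: /etc/secrets/sysdig-api-token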

Note: Prometheus does not reveal the bearer_token value in the UI.

How do I control metrics sent via Prometheus remote write?

By default, all metrics scraped by your Prometheus servers are written to the Prometheus remote write endpoint when you configure remote write. These metrics will include a remote_write: true label when stored in IBM Cloud Monitoring, for easy identification.
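As a sketch, you could use this label in a PromQL query to scope results to remotely written series only (the metric name http_requests_total is illustrative):

sum by (instance) (rate(http_requests_total{remote_write="true"}[5m]))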

You can specify additional custom label/value pairs to be sent along with each time series using the external_labels block within the global section. This allows you to filter or scope metrics when using them, similar to what you would do when setting up an agent tag.

For instance, if you have two different Prometheus servers in your environment configured to remote write, you could easily include an external label to differentiate them. 

Prometheus Server 1 configuration: 

global:
  external_labels:
    provider: prometheus1
remote_write:
- url: ...

Prometheus Server 2 configuration: 

global:
  external_labels:
    provider: prometheus2
remote_write:
- url: ...

To control which metrics you want to keep, drop or replace, you can include write_relabel_configs entries, as shown in the following example, where metrics are only sent from one specific namespace called myapp-ns. Note that __meta_* service discovery labels are no longer present at remote write time, so this example assumes your scrape configuration has already mapped the namespace into a namespace label:

remote_write:
- url: https://<region-url>/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  - source_labels: [namespace]
    regex: 'myapp-ns'
    action: keep
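Similarly, a drop action can filter out series you don’t need, which helps control ingestion volume. This sketch drops all Go runtime metrics by matching on the metric name:

remote_write:
- url: https://<region-url>/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  - source_labels: [__name__]
    regex: 'go_.*'
    action: drop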

IBM Cloud Monitoring regional endpoints

The following list contains the public endpoints for Prometheus remote write available per region:

Pricing

Prometheus remote write cost is based on metric ingestion, so it is priced the same as metrics collected using the Sysdig agent with IBM Cloud Monitoring. For more information on IBM Cloud Monitoring pricing, refer to our docs page.
