May 21, 2021 By Carlos Tolon
Shadi Albouyeh
3 min read

Today, we are excited to announce that Prometheus remote write integration is now a key part of IBM Cloud Monitoring. 

IBM Cloud Monitoring is a cloud-native, container-intelligence management system that you can include as part of your IBM Cloud architecture to gain operational visibility into the performance and health of your applications, services and platforms. With this feature, Prometheus' built-in remote write capability forwards metrics from your existing Prometheus servers to your IBM Cloud Monitoring instance, expanding coverage to new use cases and environments where you can't install an agent to obtain metric data.

If you want to continue running your own Prometheus environments while sending data to the IBM Cloud Monitoring backend, or if you run environments where agents co-exist with Prometheus servers, you can offload scaling and long-term retention storage to IBM Cloud Monitoring and keep your existing setup while reducing operational overhead. With all of your telemetry data in one place, you can use existing dashboards or build new ones that combine and group data from various environments and across your entire software stack.

Additionally, by leveraging the remote write capability, you can obtain metrics from environments where the Sysdig agent cannot be installed, such as Windows, z/OS, Power or other non-x86-based architectures typically seen in IoT or edge computing environments. After you configure remote write in your Prometheus YAML file, Prometheus data begins flowing into IBM Cloud Monitoring almost instantly.

How do I start using Prometheus remote write?

All IBM Cloud Monitoring instances currently have Prometheus remote write functionality enabled. To configure the Prometheus servers in your environment for remote write, add a remote_write block to your prometheus.yml configuration file. To authenticate against the Prometheus remote write endpoint, use an Authorization header with your API token as the bearer token (not to be confused with your monitoring instance's Sysdig agent access key). For example, configure your remote write section like this:

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  bearer_token: "<your API Token>"

You can also use the bearer_token_file entry to refer to a file instead of including the API token directly, which is most often used when you store the token in a Kubernetes secret.
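For example, on Kubernetes you might store the token in a Secret and mount it into the Prometheus pod, then point bearer_token_file at the mounted path. A minimal sketch, where the secret name, key and mount path are illustrative assumptions:

```yaml
# Kubernetes Secret holding the API token (name and key are illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: sysdig-api-token
type: Opaque
stringData:
  token: <your API Token>
---
# In prometheus.yml, reference the mounted secret file instead of the raw token
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  bearer_token_file: /etc/secrets/sysdig-api-token/token
```

Mounting the secret as a file keeps the token out of the Prometheus configuration itself and out of version control.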

Starting with version 2.26, Prometheus supports a new way to configure authorization by including a section called authorization within your remote_write block:

global:
  external_labels:
    [ <labelname>: <labelvalue> ... ]
remote_write:
- url: "https://<region-url>/prometheus/remote/write"
  authorization:
    credentials: "<your API Token>"

As with bearer_token_file above, you can use the credentials_file option here to reference a file instead of embedding the token.

Note: Prometheus does not reveal the bearer_token value in the UI.

How do I control metrics sent via Prometheus remote write?

By default, all metrics scraped by your Prometheus servers are written to the Prometheus remote write endpoint when you configure remote write. These metrics will include a remote_write: true label when stored in IBM Cloud Monitoring, for easy identification.
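For example, assuming the remote_write: true label described above, a PromQL query in IBM Cloud Monitoring could scope a dashboard panel to remotely written series only (node_cpu_seconds_total is just an illustrative metric name):

```promql
# Average CPU usage rate per instance, restricted to series ingested via remote write
avg by (instance) (rate(node_cpu_seconds_total{remote_write="true"}[5m]))
```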

You can specify additional custom label/value pairs to be sent along with each time series using the external_labels block within the global section. This allows you to filter or scope metrics when using them, similar to what you would do when setting up an agent tag.

For instance, if you have two different Prometheus servers in your environment configured to remote write, you could easily include an external label to differentiate them. 

Prometheus Server 1 configuration: 

global:
  external_labels:
    provider: prometheus1
remote_write:
- url: ...

Prometheus Server 2 configuration: 

global:
  external_labels:
    provider: prometheus2
remote_write:
- url: ...
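With the provider external label in place, you can compare the two servers side by side in queries and dashboards. A sketch using the standard up metric:

```promql
# Number of healthy scrape targets reported by each Prometheus server
sum by (provider) (up)
```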

To control which metrics you want to keep, drop or replace, you can include write_relabel_configs entries, as shown in the following example where metrics are only sent from one specific Kubernetes namespace called myapp-ns:

remote_write:
- url: https://<region-url>/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  # Meta labels (e.g. __meta_kubernetes_namespace) are dropped after scrape-time
  # relabeling, so match on the namespace label attached to the stored series
  - source_labels: [namespace]
    regex: 'myapp-ns'
    action: keep
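Conversely, an action: drop rule can exclude metrics you do not want to ingest at all, such as high-cardinality series. A sketch where the metric-name pattern is an illustrative assumption:

```yaml
remote_write:
- url: https://<region-url>/prometheus/remote/write
  bearer_token_file: /etc/secrets/sysdig-api-token
  write_relabel_configs:
  # Drop all histogram bucket series by matching on the metric name
  - source_labels: [__name__]
    regex: '.*_bucket'
    action: drop
```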

IBM Cloud Monitoring regional endpoints

The following list contains the public endpoints for Prometheus remote write available per region:

Pricing

Prometheus remote write cost is based on metric ingestion, so pricing is calculated the same way as for metrics collected by the Sysdig agent with IBM Cloud Monitoring. For more information on IBM Cloud Monitoring pricing, refer to our docs page.
