December 5, 2019 By Ying Liu 5 min read

How to bring your own custom metrics when scaling your CF applications.

Previously, we announced new capabilities for IBM Cloud Foundry Public that let you customize autoscaling. Now, we are excited to introduce a new feature that lets you scale your Cloud Foundry applications with your own custom metrics. This blog post will guide you through this new feature.

Cloud Foundry Autoscaling helps you scale your application horizontally by adding or removing application instances. This ensures that an application can run across multiple instances for high availability.

Overview

The standard metrics supported by Cloud Foundry Autoscaling are “Memory,” “Memory Utilization,” “CPU,” “Response Time,” and “Throughput.” In some cases, however, a developer may want additional metrics to satisfy their scaling requirements. One typical scenario is “queue depth.” For an application built with a Producer-Consumer pattern, a message queue is used to save the messages sent by the Producer. If the Producer generates more messages than can be processed by the Consumer(s), the message queue depth will increase over time. If we use “queue depth” as the scaling metric, we can choose to scale out the Consumer instances dynamically when the queue depth becomes too high, and scale in when the backlog of pending messages has been reduced.

Now, let’s take a closer look to see how to configure autoscaling to handle custom metrics with a sample application designed in a Producer-Consumer pattern. 

Configuration

Let’s start with a web application implemented in Golang. To simulate the Producer-Consumer pattern with minimal code, I use a channel as the task queue and use http requests to add tasks to the queue. As more http requests come in, the consumer in a single application instance can’t keep up with the incoming tasks, and the length of the channel keeps increasing. In this situation, I would like to scale out the number of application instances to add more consumers working in parallel.
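A minimal sketch of this setup might look like the following (the endpoint name, buffer size, and processing delay are illustrative, not the exact sample code):

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// taskQueue is the in-memory "message queue"; its current length is the queue depth.
var taskQueue = make(chan string, 1000)

func consumer() {
	for task := range taskQueue {
		time.Sleep(200 * time.Millisecond) // simulate slow processing so the queue can build up
		_ = task
	}
}

func main() {
	go consumer() // one consumer per application instance

	// Each incoming request acts as a producer and adds one task to the queue.
	http.HandleFunc("/task", func(w http.ResponseWriter, r *http.Request) {
		taskQueue <- r.URL.RawQuery
		fmt.Fprintf(w, "queued, current depth: %d\n", len(taskQueue))
	})

	port := os.Getenv("PORT") // Cloud Foundry injects the port the application must listen on
	if port == "" {
		port = "8080"
	}
	http.ListenAndServe(":"+port, nil)
}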

The first step is to configure the autoscaling policy for the application. Once you push your code as a Cloud Foundry application on IBM Cloud, you can find an Autoscaling tab when checking the application details. 

When creating a policy on the Autoscaling tab, you will see that the previous Metric drop-down list has been replaced by an input box where you can enter the desired metric name and save the policy configuration.

Besides naming your metric in the policy configuration, a credential is also required for authorization when emitting your metrics to the Autoscaling server. In the Credential tab, you can either generate a random credential or provide your own as the authorization token. After clicking the Create button, a JSON credential will be displayed on the same page.

Emit application metrics

At this point, we have a running application on Cloud Foundry, a scaling policy for our custom metric, and the required credentials. Next, we need to add some code to the application so that it can emit the required metric to the Autoscaling server. This can be done very easily with a few simple steps.

According to the metric emit RESTful API definition, the following tasks are required: 

Step One: Build the request authorization with the Autoscaling credential

The content of the credential JSON file needs to be made available to the application running in Cloud Foundry.

The most flexible way is to use “cf set-env” to set the credential entries as environment variables; then you can read the credential inside the container with Golang snippets like the one shown below:

username := os.Getenv("username")
password := os.Getenv("password")
...
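The corresponding cf commands to set these variables might look like the following (the variable names match the snippet above; the values come from your generated credential JSON, so check it for the exact keys):

cf set-env <your application name> username <username from the credential JSON>
cf set-env <your application name> password <password from the credential JSON>
cf set-env <your application name> url <url from the credential JSON>
cf restage <your application name>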

Alternatively, if your application already has multiple service bindings and fetches credentials from VCAP_SERVICES in its implementation, you can deliver the credential as a user-provided service; a sketch of reading it from VCAP_SERVICES follows the commands below.

The related Cloud Foundry commands are as follows: 

cf create-user-provided-service <your service instance name> -p credential.json
cf bind-service <your application name> <your service instance name>
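A hedged sketch of reading the credential from VCAP_SERVICES in Golang, assuming the user-provided service is named autoscaler-credential and the credential JSON uses username, password, url, and app_id keys (adjust to your actual service name and the exact keys in your credential):

import (
	"encoding/json"
	"fmt"
	"os"
)

// Credentials mirrors the entries we need from the credential JSON.
type Credentials struct {
	AppID    string `json:"app_id"`
	Username string `json:"username"`
	Password string `json:"password"`
	URL      string `json:"url"`
}

func loadCredentials(serviceName string) (Credentials, error) {
	// User-provided services appear under the "user-provided" key in VCAP_SERVICES.
	var vcap map[string][]struct {
		Name        string      `json:"name"`
		Credentials Credentials `json:"credentials"`
	}
	if err := json.Unmarshal([]byte(os.Getenv("VCAP_SERVICES")), &vcap); err != nil {
		return Credentials{}, err
	}
	for _, svc := range vcap["user-provided"] {
		if svc.Name == serviceName {
			return svc.Credentials, nil
		}
	}
	return Credentials{}, fmt.Errorf("service %q is not bound to this application", serviceName)
}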

Once the credential variable is ready, you can build your http request header similar to the following Golang snippet:

emitURL := fmt.Sprintf("%s/v1/apps/%s/metrics", credentials.URL, credentials.AppID)
req, err := http.NewRequest("POST", emitURL, body)
req.SetBasicAuth(credentials.Username, credentials.Password)
...

Step Two: Build a request body with the metric details

As shown in the above sample, you need to emit your custom metric to the “url” specified in the credential JSON file using the API POST /v1/apps/:guid/metrics.

A JSON payload is required with the above API to submit the metric name, value, unit (optional), and the corresponding instance index:

{
    "instance_index": <INSTANCE INDEX>,
    "metrics": [
      {
        "name": "<CUSTOM METRIC NAME>",
        "value": <CUSTOM METRIC VALUE>,
        "unit": "<CUSTOM METRIC UNIT>"
      }
    ]
}
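In Golang, this payload can be modeled with two small structs whose JSON tags match the fields above (a sketch; the type names mirror the snippet further down):

// Metrics describes one custom metric entry in the payload.
type Metrics struct {
	Name  string `json:"name"`
	Value int64  `json:"value"`          // numeric metric value; adjust the type to your metric
	Unit  string `json:"unit,omitempty"` // optional
}

// CustomMetrics is the full request body for POST /v1/apps/:guid/metrics.
type CustomMetrics struct {
	InstanceIndex int       `json:"instance_index"`
	Metrics       []Metrics `json:"metrics"`
}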

In the above, <INSTANCE INDEX> is the index of the current application instance. In Golang, you can fetch the index from the environment variable CF_INSTANCE_INDEX with the following:

instanceIndex, err := strconv.Atoi(os.Getenv("CF_INSTANCE_INDEX"))

Then, you can build your metric payload:

metric := CustomMetrics{
    InstanceIndex: instanceIndex,
    Metrics: []Metrics{
      {
        Name:"queuelength", 
        Value: <metricValue>,
      },
    },
  }
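Putting the two steps together, a complete emit call could look like the following sketch (error handling trimmed to the essentials; the Credentials, CustomMetrics, and Metrics types follow the earlier snippets, and queueLen would be the current channel length, e.g., len(taskQueue)):

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func emitQueueLength(credentials Credentials, instanceIndex int, queueLen int64) error {
	metric := CustomMetrics{
		InstanceIndex: instanceIndex,
		Metrics: []Metrics{
			{Name: "queuelength", Value: queueLen},
		},
	}

	payload, err := json.Marshal(metric)
	if err != nil {
		return err
	}

	// Build the authorized POST request as in Step One.
	emitURL := fmt.Sprintf("%s/v1/apps/%s/metrics", credentials.URL, credentials.AppID)
	req, err := http.NewRequest("POST", emitURL, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	req.SetBasicAuth(credentials.Username, credentials.Password)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode > 299 {
		return fmt.Errorf("metric emit failed with status %d", resp.StatusCode)
	}
	return nil
}

You would typically call a function like this on a timer (for example, with a time.Ticker) so that the Autoscaling server keeps receiving a fresh queuelength reading from each instance.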

Testing

Once the coding is done, re-push your application to IBM Cloud Foundry and check the application logs to ensure there are no errors when emitting the metric.
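For example, assuming the application is named consumer-app (substitute your own application name):

cf push consumer-app
cf logs consumer-app --recent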

You can also double-check the metric emission through the Autoscaling UI dashboard by switching to the Metric tab. Here is the metric diagram for the above sample application after adding the workload:

In addition, the Autoscaling history tab helps you understand which scaling actions were triggered in the past and why, for audit purposes:

This is just a sample to demonstrate how the custom metrics feature works in IBM Cloud Foundry. You can customize the metric names as you wish by using letters, numbers, and underscores, and you can emit the application metrics with your favorite programming language as long as your code conforms to the metric emit RESTful API definition.

See the blog post “Autoscale Your Cloud Foundry Applications on IBM Cloud” to learn more about best practices for IBM Cloud Foundry Autoscaling.

About IBM Cloud Foundry

IBM Cloud Foundry is a Cloud Foundry-certified development platform and the fastest and cheapest way to build and host a cloud native application on IBM Cloud. Developers use Cloud Foundry because it is faster and easier to build, test, deploy, and scale applications. Furthermore, it provides a choice of multicloud environments, developer frameworks, and application services.

IBM Cloud is a Cloud Foundry Certified Provider and offers three deployment options: Cloud Foundry Public, Cloud Foundry Enterprise, and Cloud Foundry Private.

Get started and deploy your first application today.
