Managing resource allocation for logging services

Stable performance of your logging services depends on proper resource allocation. Plan capacity carefully before deployment, and periodically review and adjust resource allocations.

You can adjust the resources that are allocated to logging services, such as replica counts, JVM heap sizes, and container memory limits.

Before you begin, review the tuning points in the following tables.

Tuning Points

You can adjust the resource allocation of each logging service component, such as the Elasticsearch data, client, and master pods, Logstash, and Kibana, through chart parameters. The following tables list the resource-related chart parameters for each component.

Table 1. Elasticsearch data node
Parameter | Description | Default | Notes
elasticsearch.data.replicas | The number of initial pods in the data cluster | 2 |
elasticsearch.data.heapSize | The JVM heap size to allocate to each Elasticsearch data pod | 1024m | Production values range from 4g to 12g
elasticsearch.data.memoryLimit | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch data pod | 2048Mi | Production values range from 8Gi to 24Gi, roughly twice the heap size
elasticsearch.data.storage.size | The minimum size of the persistent volume | 10Gi |

For resiliency and load balancing, only one Elasticsearch data pod can run on each host machine. As a result, the value of elasticsearch.data.replicas cannot exceed the number of host machines that are eligible to run data pods.
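
This one-pod-per-host behavior is the effect of a Kubernetes pod anti-affinity rule on the hostname topology key. The following fragment is a minimal sketch of the general form of such a rule; it is not the chart's actual template, and the role: data label selector is an assumed placeholder.

    # Illustrative only: a required pod anti-affinity rule of this general form
    # keeps pods that share the given label on separate hosts.
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              role: data        # assumed label; the chart's actual labels may differ
          topologyKey: kubernetes.io/hostname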

Table 2. Elasticsearch client node
Parameter | Description | Default | Notes
elasticsearch.client.replicas | The number of initial pods in the client cluster | 1 |
elasticsearch.client.heapSize | The JVM heap size to allocate to each Elasticsearch client pod | 512m | Production values range from 2g to 8g
elasticsearch.client.memoryLimit | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch client pod | 1536Mi | Add at least 512Mi to the heap size

Table 3. Logstash
Parameter | Description | Default | Notes
logstash.replicas | The initial pod cluster size | 1 |
logstash.heapSize | The JVM heap size to allocate to Logstash | 512m | Production values range from 2g to 8g
logstash.memoryLimit | The maximum allowable memory for Logstash, including both JVM heap and file system cache | 1024Mi | Add at least 512Mi to the heap size

Table 4. Elasticsearch master node
Parameter | Description | Default | Notes
elasticsearch.master.replicas | The number of initial pods in the cluster | 1 |
elasticsearch.master.heapSize | The JVM heap size to allocate to each Elasticsearch master pod | 1024m | Production values range from 1g to 4g
elasticsearch.master.memoryLimit | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch master pod | 1536Mi | Add at least 512Mi to the heap size

Table 5. Kibana
Parameter | Description | Default
kibana.replicas | The initial pod cluster size | 1
kibana.maxOldSpaceSize | Maximum old space size (in MB) of the V8 JavaScript engine | 1024
kibana.memoryLimit | The maximum allowable memory for Kibana | 1280Mi
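
For example, the following values-override.yaml fragment (a sketch, not a recommendation for your workload) sizes the Elasticsearch data nodes at the low end of the production ranges in Table 1: a 4g heap with a memory limit of roughly twice the heap.

    # Example production sizing for the Elasticsearch data nodes, based on Table 1:
    # heap in the 4g - 12g range, memory limit roughly twice the heap size.
    elasticsearch:
      data:
        replicas: 2
        heapSize: "4g"
        memoryLimit: "8Gi"
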
  1. Extract the existing logging chart parameters.

    1. Extract the Helm parameters by running the following command:

      helm get values logging --tls > values-old.yaml

    2. Optionally, apply prior adjustments. Any Kubernetes resource manifest adjustments that were made by using the kubectl command, such as replica counts, JVM heap sizes, or container memory limits, are overridden by the values that are defined in the chart parameters. If you adjusted Kubernetes resource manifests in this way, apply the same adjustments to values-old.yaml, as shown in the example that follows.
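
      For example, if you previously used kubectl to scale the Elasticsearch data pods to three replicas (a hypothetical adjustment), values-old.yaml must carry the same value, or the upgrade reverts it:

        # Hypothetical prior adjustment: data pods scaled to 3 by using kubectl.
        # Reflect the same value here so that the Helm upgrade does not revert it.
        elasticsearch:
          data:
            replicas: 3
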
  2. Prepare chart parameters.

    1. Create a values-override.yaml file to include the following parameters:

         # This example contains all parameters that are related to resource allocation.
         # Include only the parameters that you need to adjust.
         logstash:
           replicas: 1
           heapSize: "512m"
           # at least 0.5g more than the heap size
           memoryLimit: "1024Mi"

         kibana:
           replicas: 1
           # maximum old space size (in MB) of the V8 JavaScript engine
           maxOldSpaceSize: "1024"
           # at least 0.25g more than the maximum old space size
           memoryLimit: "1280Mi"

         elasticsearch:
           client:
             replicas: 1
             heapSize: "1024m"
             # at least 0.5g more than the heap size
             memoryLimit: "1536Mi"

           master:
             replicas: 1
             heapSize: "1024m"
             # at least 0.5g more than the heap size
             memoryLimit: "1536Mi"

           data:
             replicas: 2
             heapSize: "1024m"
             # about 2 times the heap size
             memoryLimit: "2048Mi"

  3. Download the chart.

    1. Identify the chart version.

      Logging chart versions vary based on the installed IBM Cloud Private version. You can use the IBM Cloud Private management console to find chart versions in the service catalog. The logging chart is identified by the name ibm-icplogging under the mgmt-repo repository. You can also select SOURCE & TAR FILES in the IBM Cloud Private management console to find a local link to the chart.

    2. Download the chart archive (.tgz file).

      Run the following command, using the local link that you found in the previous step:

      curl -k https://<master ip>:8443/mgmt-repo/requiredAssets/ibm-icplogging-x.y.z.tgz > ibm-icplogging-x.y.z.tgz
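
      Optionally, verify that the downloaded file is a valid chart archive, for example by listing its contents:

      # Optional check: a valid chart archive lists chart templates and values files.
      tar -tzf ibm-icplogging-x.y.z.tgz | head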
      
  4. Upgrade the Helm chart.

    • For the managed mode logging service, run the following command. Replace x.y.z with the chart version that you found in Step 3. Remove the --recreate-pods option if you are not adjusting the Elasticsearch master pod replica count.

      helm upgrade logging ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace kube-system --recreate-pods --force --timeout 600 --tls
      
    • For the standard mode logging service, run the following command. Replace x.y.z with the chart version that you found in Step 3, and replace your_release_name and your_namespace with the release name and namespace of your logging instance. Remove the --recreate-pods option if you are not adjusting the Elasticsearch master pod replica count.

      helm upgrade your_release_name ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace your_namespace --recreate-pods --force --timeout 600 --tls
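
    In either mode, you can optionally preview the merged values and rendered manifests before you apply them by adding Helm's --dry-run and --debug options to the upgrade command. For example, for the managed mode release:

      # Optional preview: renders the chart with the merged values without applying changes.
      helm upgrade logging ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace kube-system --dry-run --debug --tls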
      
  5. The logging service becomes available again in approximately 5 to 10 minutes. You can check the Helm upgrade status by running the following command:

      helm history --tls logging
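
     You can also watch the logging pods return to the Running state. Assuming that the pod names include the release name (logging for the managed mode service), a simple filter such as the following is enough:

       # Lists the logging pods in the kube-system namespace; repeat until all pods are Running.
       kubectl -n kube-system get pods | grep logging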