Managing resource allocation for logging services
Stable performance of your logging services depends on proper resource allocation. Plan capacity carefully before deployment, and periodically review and adjust resource allocations.
You can adjust resources that are allocated for logging services. A few examples are:
- Elasticsearch data pod heap and container memory size
- Elasticsearch data pod replica count
- Logstash pod replica count
Before you begin, consider the following tips:
- These instructions apply to logging services in both standard and managed modes.
- Elastic monitoring is a free feature that you can enable to view cluster health.
- A brief interruption in Elasticsearch availability occurs while the updated resource settings are applied.
- To reduce the footprint of a minimal deployment, the default values in the logging service chart allocate a minimal amount of resources. Adjust the allocation for production usage, or when other dependent services, such as Vulnerability Advisor or audit logging, are enabled.
- Performance tuning is as much a craft as a science. The default values and the suggested production ranges are intended as starting points.
- To avoid data loss or decreased availability, careful planning is needed when you reduce resources.
Tuning Points
You can adjust the resource allocation of logging service components, such as the Elasticsearch data pods, Elasticsearch client pods, and Logstash, by using chart parameters. The following tables list the resource-related chart parameters for each component.
| Parameter | Description | Default | Notes |
|---|---|---|---|
| `elasticsearch.data.replicas` | The number of initial pods in the data cluster | 2 | |
| `elasticsearch.data.heapSize` | The JVM heap size to allocate to each Elasticsearch data pod | 1024m | Production values range from 4g to 12g |
| `elasticsearch.data.memoryLimit` | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch data pod | 2048Mi | Production values range from 8Gi to 24Gi, roughly twice the heap size |
| `elasticsearch.data.storage.size` | The minimum size of the persistent volume | 10Gi | |
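For example, a production-oriented sizing for the data pods that stays within these ranges and keeps the memory limit at roughly twice the heap might look like the following `values-override.yaml` fragment. The specific numbers are illustrative assumptions, not recommendations:

```yaml
# Illustrative production sizing for the Elasticsearch data pods only.
# The values are assumptions chosen from the ranges in the table; tune them for your workload.
elasticsearch:
  data:
    replicas: 3           # requires at least 3 eligible management or worker nodes
    heapSize: "4g"        # within the suggested 4g to 12g production range
    memoryLimit: "8Gi"    # roughly twice the heap, within the 8Gi to 24Gi range
    storage:
      size: "50Gi"        # illustrative; size it for your retention requirements
```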
For resiliency and load balancing, only one Elasticsearch data pod can run on each host machine. As a result:
- If the logging service is installed in managed mode, the number of data pods cannot exceed the number of available ICP management nodes in the cluster.
- If the logging service is installed in standard mode, the number of data pods cannot exceed the number of available ICP worker nodes in the cluster.
- Make sure that you have enough management or worker nodes in the cluster before you increase the data pod count.
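To check how many eligible nodes are available before you increase `elasticsearch.data.replicas`, list the cluster nodes. This is a minimal sketch; the `grep` filters assume that your node roles appear in the node labels, so confirm the actual labels in your cluster first:

```bash
# Show all nodes with their labels, then count the nodes that can host a data pod.
# The "management" and "worker" filters are assumptions about your cluster's node labels.
kubectl get nodes --show-labels
kubectl get nodes --show-labels | grep -c management   # managed mode: count management nodes
kubectl get nodes --show-labels | grep -c worker       # standard mode: count worker nodes
```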
| Parameter | Description | Default | Notes |
|---|---|---|---|
| `elasticsearch.client.replicas` | The number of initial pods in the client cluster | 1 | |
| `elasticsearch.client.heapSize` | The JVM heap size to allocate to each Elasticsearch client pod | 512m | Production values range from 2g to 8g |
| `elasticsearch.client.memoryLimit` | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch client pod | 1536Mi | Add at least 512Mi to the heap size |
| Parameter | Description | Default | Notes |
|---|---|---|---|
| `logstash.replicas` | The initial pod cluster size | 1 | |
| `logstash.heapSize` | The JVM heap size to allocate to Logstash | 512m | Production values range from 2g to 8g |
| `logstash.memoryLimit` | The maximum allowable memory for Logstash, including both JVM heap and file system cache | 1024Mi | Add at least 512Mi to the heap size |
| Parameter | Description | Default | Notes |
|---|---|---|---|
| `elasticsearch.master.replicas` | The number of initial pods in the cluster | 1 | |
| `elasticsearch.master.heapSize` | The JVM heap size to allocate to each Elasticsearch master pod | 1024m | Production values range from 1g to 4g |
| `elasticsearch.master.memoryLimit` | The maximum memory (including JVM heap and file system cache) to allocate to each Elasticsearch master pod | 1536Mi | Add at least 512Mi to the heap size |
| Parameter | Description | Default |
|---|---|---|
| `kibana.replicas` | The initial pod cluster size | 1 |
| `kibana.maxOldSpaceSize` | Maximum old space size (in MB) of the V8 JavaScript engine | 1024 |
| `kibana.memoryLimit` | The maximum allowable memory for Kibana | 1280Mi |
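The same headroom rule applies across the client, master, Logstash, and Kibana tables: keep the container memory limit at least 512Mi above the JVM heap, or at least 0.25g above Kibana's old space size. The following fragment is an illustrative sketch of production values that follow that rule; the specific sizes are assumptions, not tested recommendations:

```yaml
# Illustrative production values for the remaining components.
# Each memoryLimit is the heap (or V8 old space) plus the minimum suggested headroom.
elasticsearch:
  client:
    heapSize: "2g"          # within the 2g to 8g production range
    memoryLimit: "2560Mi"   # 2048Mi heap + 512Mi headroom
  master:
    heapSize: "1g"          # within the 1g to 4g production range
    memoryLimit: "1536Mi"   # 1024Mi heap + 512Mi headroom
logstash:
  heapSize: "2g"            # within the 2g to 8g production range
  memoryLimit: "2560Mi"     # 2048Mi heap + 512Mi headroom
kibana:
  maxOldSpaceSize: "1024"   # in MB
  memoryLimit: "1280Mi"     # 1024 MB old space + 256Mi headroom
```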
To adjust the resource allocation, complete the following steps.

1. Extract the existing logging chart parameters.
   - Extract the Helm parameters by running the following command:

     ```bash
     helm get values logging --tls > values-old.yaml
     ```

   - Optionally, apply prior adjustments. Any Kubernetes resource manifest adjustments that were made by using the `kubectl` command, such as replica counts, JVM heap sizes, or container memory limits, are overridden by the values that are defined in the chart parameters. If prior Kubernetes resource manifests were adjusted, make sure that you apply the same adjustments to `values-old.yaml`.
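   For example, before you edit `values-old.yaml`, you can review the resource settings that are currently applied to the logging workloads. The namespace, the name filter, and the statefulset name below are assumptions for a managed-mode installation; adjust them for your release:

   ```bash
   # List the logging workloads and inspect the current resource limits of one of them.
   # The namespace, name filter, and statefulset placeholder are assumptions; adjust for your release.
   kubectl get statefulsets,deployments -n kube-system | grep -i logging
   kubectl describe statefulset <your-elasticsearch-data-statefulset> -n kube-system | grep -E -A 3 "Limits|Requests"
   ```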
2. Prepare chart parameters. Create a `values-override.yaml` file that includes the following parameters:

   ```yaml
   # This example contains all parameters related to resource allocation
   # Only include the parameters that you need to adjust
   logstash:
     replicas: 1
     heapSize: "512m"
     # at least 0.5g more than heap size
     memoryLimit: "1024Mi"
   kibana:
     replicas: 1
     # maximum old space size (in MB) of the V8 JavaScript engine
     maxOldSpaceSize: "1024"
     # at least 0.25g more than maximum old space size
     memoryLimit: "1280Mi"
   elasticsearch:
     client:
       replicas: 1
       heapSize: "1024m"
       # at least 0.5g more than heap size
       memoryLimit: "1536Mi"
     master:
       replicas: 1
       heapSize: "1024m"
       # at least 0.5g more than heap size
       memoryLimit: "1536Mi"
     data:
       replicas: 2
       heapSize: "1024m"
       # about 2 times the heap size
       memoryLimit: "2048Mi"
   ```
3. Download the chart.
   - Identify the chart version. Logging chart versions vary based on the installed IBM Cloud Private version. You can use the IBM Cloud Private management console to find chart versions in the service catalog. The logging chart is identified by the name `ibm-icplogging` under the `mgmt-repo` repository. You can also select SOURCE & TAR FILES from the IBM Cloud Private management console to find a local link to a chart.
   - Download the chart `.tar` file. Run the following command by using the local link found in Step 3:

     ```bash
     curl -k https://<master ip>:8443/mgmt-repo/requiredAssets/ibm-icplogging-x.y.z.tgz > ibm-icplogging-x.y.z.tgz
     ```
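   Optionally, you can preview the merged values before changing anything. The following dry run is a sketch that assumes a managed-mode release named `logging` in the `kube-system` namespace; for standard mode, substitute your release name and namespace:

   ```bash
   # Render the upgrade without applying it, to review the values that would be used.
   helm upgrade logging ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace kube-system --dry-run --debug --tls
   ```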
4. Upgrade the Helm chart.
   - For a managed mode logging service, run the following command. Replace `x.y.z` with the version that you found in Step 3. Remove the `--recreate-pods` option if you are not adjusting the Elasticsearch master pod replica count.

     ```bash
     helm upgrade logging ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace kube-system --recreate-pods --force --timeout 600 --tls
     ```

   - For a standard mode logging service, run the following command. Replace `x.y.z` with the version that you found in Step 3. Remove the `--recreate-pods` option if you are not adjusting the Elasticsearch master pod replica count.

     ```bash
     helm upgrade your_release_name ibm-icplogging-x.y.z.tgz -f values-old.yaml -f values-override.yaml --namespace your_namespace --recreate-pods --force --timeout 600 --tls
     ```
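   While the upgrade runs, you can watch the logging pods restart. The namespace and name filter are assumptions for a managed-mode installation; adjust them for your release:

   ```bash
   # One-time snapshot of the logging pods, then a watch that follows their restarts (Ctrl+C to stop).
   kubectl get pods -n kube-system | grep -i logging
   kubectl get pods -n kube-system -w | grep -i logging
   ```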
5. The logging service becomes available in approximately 5 - 10 minutes. You can also check the Helm upgrade status by using the following command:

   ```bash
   helm history --tls logging
   ```
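After the upgrade completes, you can confirm that the new values are in effect. These commands reuse the managed-mode release name (`logging`); substitute your own release name for a standard mode installation:

```bash
# Confirm that the release deployed successfully and that your overrides are part of the release values.
helm status logging --tls
helm get values logging --tls
```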