Configuring for an IBM Power environment
The following configuration settings are recommended when installing IBM® Cloud Private in an IBM Power environment.
Operating system swap partition settings
Add a small swap partition and disable swap accounting to control short spikes in pod memory usage that are beyond the set limit and to avoid out-of-memory errors.
Note: The swap space should not be larger than 4 GB, and it should not be shared with drives that manage large amounts of I/O activity, such as /var/lib/docker and /var/log.
- Enable a small swap partition (2-4 GB) on each node in the cluster.
- For Red Hat Enterprise Linux, add swapaccount=0 to the kernel command line by completing the following steps:
  1. Open the /etc/default/grub file.
  2. Edit the GRUB_CMDLINE_LINUX option to add swapaccount=0 to the existing options.
  3. Run the following command:
     grub2-mkconfig -o /boot/grub2/grub.cfg
- Reboot the nodes before installing IBM Cloud Private.
Note: Swap accounting is disabled by default on Ubuntu and SUSE Linux Enterprise Server, so no kernel command line changes are required for those operating systems.
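The swap setup above can be sketched as shell commands. This is a minimal illustration that uses a swap file at /swapfile for simplicity; a dedicated partition works the same way once it is formatted with mkswap, and on a real node the swap should live on a drive separate from /var/lib/docker and /var/log:

```shell
# Provision 4 GB of swap (illustrative; run as root).
fallocate -l 4G /swapfile
chmod 600 /swapfile        # mkswap refuses world-readable swap files
mkswap /swapfile           # format the file as swap space
swapon /swapfile           # enable it for the running system
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
swapon --show              # verify the active swap devices
```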
IBM Cloud Private settings
Some of your IBM Cloud Private environment settings are determined by how you are configuring your IBM Power environment. The following sections describe a sample environment and its recommended configuration settings.
Important: The recommendations in this section are for clusters that have both a Power master and Power management nodes. For a mixed cluster in which Power nodes are only workers, only apply the operating system changes in the Operating system section.
Characteristics of the configuration:
- The number of CPUs is 32 or greater, as identified by the operating system.
  The CPU count that the operating system reports can differ from the CPU count that is configured in the LPAR profile for PowerVM hypervisors. For example, the operating system of a PowerVM logical partition (LPAR) that is configured with 4 vCPUs reports 32 CPUs because the default setting is SMT=8. The operating system determines the number of CPUs by using the following formula:
  (vCPUs * SMT value)
  This example resolves as the following calculation:
  (4 * 8 = 32)
  To determine this value, run the lscpu command or view the /proc/cpuinfo file.
- RAM is 64 GB or greater.
- For the PowerVM hypervisor, the processor entitlement must be greater than or equal to 2 for the master and management nodes.
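As a quick illustration of the CPU formula above, the reported CPU count is the product of the configured vCPUs and the SMT mode (the values below are the example from this page):

```shell
# Compute the OS-visible CPU count for a PowerVM LPAR.
vcpus=4   # vCPUs configured in the LPAR profile
smt=8     # default SMT mode on Power
echo $((vcpus * smt))   # prints 32
```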
Preinstallation settings
If you have 32 or more CPUs, add the following settings to the IBM Cloud Private config.yaml file:
auth-idp:
  platform_auth:
    resources:
      limits:
        memory: 2500Mi
  identity_manager:
    resources:
      limits:
        memory: 2500Mi
  identity_provider:
    resources:
      limits:
        memory: 2500Mi
  icp_audit:
    resources:
      limits:
        memory: 1000Mi
auth-pap:
  auth_pap:
    resources:
      limits:
        memory: 2500Mi
  icp_audit:
    resources:
      limits:
        memory: 1000Mi
mariadb:
  mariadb:
    resources:
      limits:
        memory: 1000Mi
  mariadb_monitor:
    resources:
      limits:
        memory: 1000Mi
logging:
  logstash:
    memoryLimit: 3000Mi
  elasticsearch:
    client:
      memoryLimit: 3000Mi
    master:
      memoryLimit: 3000Mi
    data:
      memoryLimit: 4500Mi
  kibana:
    memoryLimit: 3000Mi
monitoring:
  prometheus:
    resources:
      limits:
        memory: 3000Mi
  alertmanager:
    resources:
      limits:
        memory: 1000Mi
  grafana:
    resources:
      limits:
        memory: 2000Mi
helm-api:
  helmapi:
    resources:
      limits:
        memory: 1000Mi
  rudder:
    resources:
      limits:
        memory: 1000Mi
  auditService:
    resources:
      limits:
        memory: 1000Mi
helm-repo:
  helmrepo:
    resources:
      limits:
        memory: 1500Mi
  auditService:
    resources:
      limits:
        memory: 1000Mi
mgmt-repo:
  mgmtrepo:
    resources:
      limits:
        memory: 1000Mi
  auditService:
    resources:
      limits:
        memory: 1000Mi
platform-api:
  platformApi:
    resources:
      limits:
        memory: 1000Mi
  platformDeploy:
    resources:
      limits:
        memory: 1000Mi
platform-ui:
  resources:
    limits:
      memory: 1000Mi
image-security-enforcement:
  resources:
    limits:
      memory: 1000Mi
catalog-ui:
  catalogui:
    resources:
      limits:
        memory: 1000Mi
service-catalog:
  service_catalog:
    apiserver:
      resources:
        limits:
          memory: 1000Mi
    controllerManager:
      resources:
        limits:
          memory: 1000Mi
nginx-ingress:
  ingress:
    config:
      disable-access-log: 'true'
      keep-alive-requests: '10000'
      upstream-keepalive-connections: '64'
      worker-processes: "5"
    extraArgs:
      publish-status-address: "{{ proxy_external_address }}"
      enable-ssl-passthrough: true
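To sanity-check these overrides, you can total the memory limits that the fragment above reserves. This is a hypothetical helper, assuming the limits are written as memory: or memoryLimit: values in Mi within config.yaml:

```shell
# Sum every memory/memoryLimit value (in Mi) found in config.yaml to
# estimate the combined headroom these components can consume.
grep -oE '(memory|memoryLimit): [0-9]+Mi' config.yaml \
  | awk '{ sub(/Mi$/, "", $2); total += $2 } END { print total " Mi" }'
```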
Post-installation settings
You can optionally add an alert that notifies you if the memory usage of a container reaches 90% of its available memory. This alert indicates the need to revisit the size of the pod. Complete the following steps to create the alert:
- Create a file named pod-mem-usage-alert.yaml with the following contents:

  apiVersion: monitoringcontroller.cloud.ibm.com/v1
  kind: AlertRule
  metadata:
    name: pod-mem-usage
  spec:
    enabled: true
    data: |-
      groups:
        - name: podMemUsage
          rules:
            - alert: podMemUsage
              expr: (sum(container_memory_working_set_bytes) by (name, pod_name, namespace)/sum(container_spec_memory_limit_bytes) by (name, pod_name, namespace)) > 0.90 and (sum(container_memory_working_set_bytes) by (name, pod_name, namespace)/sum(container_spec_memory_limit_bytes) by (name, pod_name, namespace)) != Inf
              for: 30m
              annotations:
                description: 'Pod {{ $labels.pod_name }} in namespace {{ $labels.namespace }} is reaching memory limit threshold'
                summary: Memory Utilization of Pod is reaching limit
- Implement the new alert by entering the following command:
  kubectl apply -f pod-mem-usage-alert.yaml
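The alert expression is a ratio check: working-set bytes divided by the limit, firing above 0.90 and ignoring pods without a limit, where the ratio is infinite. A worked example with made-up numbers, for a pod using 2300Mi of a 2500Mi limit:

```shell
# Evaluate the same 90% threshold that the PromQL expression uses.
awk -v used=2300 -v limit=2500 \
  'BEGIN { r = used / limit; printf "%.2f %s\n", r, (r > 0.90 ? "ALERT" : "ok") }'
# prints: 0.92 ALERT
```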
For more information about alerts, see the Alerts section of IBM Cloud Private Cluster Monitoring. Tip: To view your active alerts, open https://icp-master-ip:8443/alertmanager.