After installing your Cloud App Management server, you can configure the Kubernetes data collector to monitor the applications in
your Kubernetes environment.
Use this procedure if you do not have an Ansible server.
The Kubernetes data collector manages the collection, enrichment,
and dispatch of Kubernetes topology, event, and performance data.
Before you begin
Prerequisites:
- Helm client and server (Tiller) version 2.11.0 or higher. If Helm is not available, follow the instructions in Configuring Kubernetes monitoring without Helm.
- kubectl client on the system from which you are installing
- Kubernetes version 1.7 or higher (available with IBM® Cloud
Private version 3.2.0 or higher and OpenShift
version 3.9 or higher)
Considerations:
- If you want to deploy a Kubernetes data collector
that is configured to point to another Cloud App Management server on the same cluster, you must deploy it in a
different namespace so as not to disrupt the configuration secret (or secrets) in use by active
releases.
- If you are installing the Kubernetes data collector into another namespace, you must also assign
docker_group a value that matches the namespace. Example (Ansible-based installation): ansible-playbook helm-main.yaml
--extra-vars="cluster_name=myCluster50 release_name=camserver namespace=sample docker_group=sample
tls_enabled=true"
- If
you are installing on an OpenShift environment, you might first need to grant the Helm Tiller
service edit access to the project or namespace where you want to install the Kubernetes data collector. For more information, see https://blog.openshift.com/getting-started-helm-openshift/.
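As a sketch, granting that access typically looks like the following oc commands; the kube-system location of Tiller and the project name myproject are assumptions for your environment:

```
# Assumed: Tiller runs in kube-system; "myproject" is the project or namespace
# where you plan to install the Kubernetes data collector.
oc project myproject
oc policy add-role-to-user edit "system:serviceaccount:kube-system:tiller"
```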
- When
you install on an OpenShift environment, you might need to override the default security
configuration. Otherwise, the user ID can differ from what the application image expects,
resulting in exceptions such as permission errors. For more information, see
the OpenShift Cookbook topic, How can I enable an image to run as a
set user ID?.
- If your environment configuration requires adding entries to a Pod's
/etc/hosts file (to provide Pod-level overrides of hostname resolution when DNS
and other options are not applicable), specify the appropriate hostAliases in this helm chart's
values.yaml file. For more information about hostAliases, see the Kubernetes documentation.
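As a sketch, a hostAliases entry in the chart's values.yaml might look like the following; the IP address and hostnames are placeholders, and the exact location of the key depends on the chart's schema:

```yaml
# Placeholder values; adjust to your environment and the chart's structure.
hostAliases:
  - ip: "10.0.0.5"
    hostnames:
      - "icam-server.example.com"
```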
About this task
Deploying the Kubernetes data collector involves
downloading the data collector installation eImage, logging in to the Cloud App Management
console to download the data collector
configuration package, installing the data collector, and validating the installation.
The eImage is the data collectors package and contains all the installable data collectors. The
configuration package (ConfigPack) contains the ingress URLs and authentication information required
to configure the data collector package to communicate with the Cloud App Management server.
Procedure
Download the eImage data collectors installation tar file and the
data collector configuration package:
-
If you haven't already, download the data collectors installation eImage (part number CC5H0EN) from IBM
Passport Advantage®.
-
Download the data collector configuration package:
-
Log in to the Cloud App Management
console, click the
Get Started link on the Welcome page, then select .
-
Click the New integration button.
-
In the Standard monitoring agents section, click the Configure button for the ICAM Data Collectors.
-
Select Download file and specify the directory where you want to save
the compressed data collector configuration package, ibm-cloud-apm-dc-configpack.tar.
-
Move the downloaded installation package and the configuration package to a node in the cluster
that you want to monitor:
Examples using secure copy:
scp my_path_to_download/app_mgmt_k8sdc.tar.gz root@my.env.com:/my_path_to_destination
scp my_path_to_download/ibm-cloud-apm-dc-configpack.tar root@my.env.com:/my_path_to_destination
where
- my_path_to_download is the path to where the installation tar file or
configuration package file was downloaded
- root@my.env.com is your user ID on the system where the kubectl client is
configured to point to the environment to be monitored
- my_path_to_destination is the path to the environment that you want to
monitor
Install the Kubernetes data collector in the Kubernetes cluster that you
want to monitor:
-
If you are not installing from your master node, configure the
kubectl client to point to the master node of the cluster that you want to monitor.
In the IBM Cloud
Private management console, you can click and follow the instructions to run the kubectl config
commands.
-
Initialize Helm:
-
Log in to your Docker registry.
docker login -u my_username -p my_password my_clustername:my_clusterport
where
- my_username and my_password are the user name and
password for the Docker registry
- my_clustername is the name of the cluster that you're monitoring
- my_clusterport is the port number for the Docker registry
-
Extract the Kubernetes data collector installation
package from the installation tar file that you downloaded in step 3:
tar -xvf appMgtDataCollectors_2019.4.0.2.tar.gz
cd appMgtDataCollectors_2019.4.0.2
tar -xvf app_mgmt_k8sdc.tar.gz
cd app_mgmt_k8sdc
-
Extract the data collector configuration package file that you secure copied in step 3:
tar -xvf my_path_to/ibm-cloud-apm-dc-configpack.tar
The data collector ConfigPack is extracted to the
ibm-cloud-apm-dc-configpack directory.
-
Load the Docker images:
docker load -i app_mgmt_k8sdc_docker.tar.gz
-
Discover the k8-monitor Docker image repository and tag:
K8_MONITOR_IMAGE_REPO=`docker images | grep icam-k8-monitor | head -1 | awk '{print $1}'`
K8_MONITOR_IMAGE_TAG=`docker images | grep icam-k8-monitor | grep APM | head -1 | awk '{print $2}'`
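To illustrate what these two pipelines extract, the following sketch runs the same grep and awk commands against hypothetical `docker images` output; the repository path, tag, and image IDs are made-up placeholders:

```shell
# Hypothetical `docker images` output (REPOSITORY, TAG, IMAGE ID, CREATED, SIZE columns).
sample_output='mycluster.icp:8500/default/icam-k8-monitor   APM_2019.4.0.2   1a2b3c4d5e6f   2 weeks ago   350MB
docker.io/library/busybox   latest   0123456789ab   3 weeks ago   1.2MB'

# Same pipeline as the procedure, applied to the sample instead of `docker images`:
# grep selects the k8-monitor line, head -1 keeps the first match, awk picks the column.
K8_MONITOR_IMAGE_REPO=$(printf '%s\n' "$sample_output" | grep icam-k8-monitor | head -1 | awk '{print $1}')
K8_MONITOR_IMAGE_TAG=$(printf '%s\n' "$sample_output" | grep icam-k8-monitor | grep APM | head -1 | awk '{print $2}')

echo "$K8_MONITOR_IMAGE_REPO"   # mycluster.icp:8500/default/icam-k8-monitor
echo "$K8_MONITOR_IMAGE_TAG"    # APM_2019.4.0.2
```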
-
Create the required configuration and security secrets:
kubectl -n my_namespace create -f ibm-cloud-apm-dc-configpack/dc-secret.yaml
kubectl -n my_namespace create secret generic ibm-agent-https-secret \
--from-file=ibm-cloud-apm-dc-configpack/keyfiles/cert.pem \
--from-file=ibm-cloud-apm-dc-configpack/keyfiles/ca.pem \
--from-file=ibm-cloud-apm-dc-configpack/keyfiles/key.pem
where
- my_namespace is the namespace where you want your configuration secrets to
be created.
-
Tag and push the Docker images to the Docker registry:
docker tag $K8_MONITOR_IMAGE_REPO:$K8_MONITOR_IMAGE_TAG my_docker_registry:my_docker_registry_port/my_docker_group/k8-monitor:$K8_MONITOR_IMAGE_TAG
docker push my_docker_registry:my_docker_registry_port/my_docker_group/k8-monitor:$K8_MONITOR_IMAGE_TAG
where
- my_docker_registry is the host of the Docker registry where you want to
store the image. Example: mycluster.icp
- my_docker_registry_port is the Docker registry service port. Example: 8500
- my_docker_group is the Docker group in the registry where you want to store
your images. Example: mydockergroup, as in myRegistry:8500/mydockergroup
If you are installing the data collector in a different namespace from the default, you must also assign
docker_group the same name as the namespace.
-
Install the Helm Chart:
Install the Helm chart with HTTPS enabled. If TLS is not enabled, do not include
the --tls option in the command:
helm install app_mgmt_k8sdc_helm.tar.gz --name my_release_name --namespace my_namespace \
--set k8monitor.image.repository=my_docker_registry:my_docker_registry_port \
--set k8monitor.clusterName=my_cluster_name \
--set k8monitor.imageNamePrefix=my_docker_group/ \
--set k8monitor.imageTag=$K8_MONITOR_IMAGE_TAG \
--set k8monitor.ibmAgentConfigSecret=dc-secret \
--set k8monitor.ibmAgentHTTPSSecret=ibm-agent-https-secret
where
- my_release_name is the Helm release name for data collector. Choose a
release name that does not yet exist in your environment.
- my_namespace is the namespace where you want your data collector to be
installed.
- my_docker_registry is the host of the Docker registry where the image is
stored.
- my_docker_registry_port is the Docker registry service port.
- my_cluster_name is a unique name to distinguish your cluster from the other
clusters being monitored. Only alphanumeric characters and "-" are supported, with no spaces;
if you enter invalid characters, they are removed from the name. Changing the cluster name
after deployment is not recommended.
- my_docker_group is the Docker group in the registry where you want to store
your images. Example: mydockergroup, as in myRegistry:8500/mydockergroup
If you are installing the data collector in a different namespace from the default, you must also assign
docker_group the same name as the namespace.
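To make the cluster-name rule concrete, the following hedged sketch keeps only alphanumeric characters and "-", as the documentation describes; this tr-based sanitizer is an illustration, not the product's actual implementation:

```shell
# Keep only the characters the cluster name supports (alphanumerics and "-").
# This emulates the documented behavior for illustration purposes only.
sanitize_cluster_name() {
  printf '%s' "$1" | tr -cd 'A-Za-z0-9-'
}

sanitize_cluster_name 'my Cluster_50!'   # -> myCluster50
```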
Validate the deployment:
-
After the installation script has completed, wait for the deployment to become ready. You can
check the deployment status with this command:
kubectl get deployment my_release_name-k8monitor --namespace=my_namespace
Depending on the size and health of your environment, it can take up to 10 minutes for the
Kubernetes data collector to start up and output logs
that you can review (see
Checking the Kubernetes installation logs). The
data collector startup creates a Kubernetes event, which generates an informational incident.
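As an alternative way to wait, kubectl rollout status blocks until the deployment reports ready; the release name, namespace, and timeout below are placeholders for your environment:

```
# Placeholders: my_release_name, my_namespace. Requires kubectl access to the monitored cluster.
kubectl rollout status deployment/my_release_name-k8monitor \
  --namespace my_namespace --timeout=10m
```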
-
View the data collector metrics and incidents in the Cloud App Management
console to confirm that the data collector is
successfully
monitoring:
- Select the Resources tab. Find your Kubernetes resource types. For
instructions, see Viewing your managed resources. If this is your first installation,
you'll see 1 cluster.
- Select the Incidents tab and click All
incidents, then click and filter by Priority 4 incidents. You should see incidents about
Kubernetes monitoring availability. For more information, see Managing incidents.
Results
- The Kubernetes data collector is installed and
begins sending metrics to the Cloud App Management server for
display in the Resource dashboard pages. Incidents are generated for any
native Kubernetes events.
- The
ibm-k8monitor-config
ConfigMap is created in your default namespace as part
of Kubernetes data collector deployment. The ConfigMap
contains the ProviderId that is used to distinguish this cluster's resources from the others in your
tenant namespace and is crucial for multi-cluster support. Do not delete, move, or rename this
resource. If this ConfigMap is deleted and the Kubernetes data collector is restarted or
redeployed, your data is duplicated within the tenant because the monitor treats it as a new
cluster.
What to do next
- For each Kubernetes cluster that you want to monitor, repeat the steps starting with step 3 to
install the Kubernetes data collector and validate the
deployment.
- If you reconfigure or provide your own ingress certificates
post-deployment, you must restart the agent bootstrap service, download the updated ConfigPack, and
reconfigure your deployed data collectors to use the updated configurations. For more information,
see Configuring a custom server certificate.
- If you want to change your cluster_name post-deployment, edit the CLUSTER_NAME
environment variable with the following command: kubectl edit deployment my_release_name-k8monitor
- For troubleshooting the deployment, see Kubernetes data collector issues.
- Use the resource dashboards and create thresholds to monitor your
Kubernetes environment. For more information, see Viewing your managed resources and Kubernetes metrics for thresholds.
- To learn more about
Kubernetes and best practices in managing your environment, see the PDF on IBM Cloud App
Management Developer Center: Kubernetes, CPU, and memory management.