Renewing Kubernetes 1.10.x cluster certificates

This topic applies only when you are running Kubernetes 1.10.x. To find the Kubernetes version, enter the following command:

kubectl version --short

The Kubernetes cluster certificates have a lifespan of one year. It is important to know when your certificates expire. To determine the expiry dates, run the following command as the root user on the Kubernetes master:

find /etc/kubernetes/pki/ -type f -name "*.crt" -print|egrep -v 'ca.crt$'|xargs -L 1 -t  -i bash -c 'openssl x509  -noout -text -in {}|grep After'

For example, if your certificates expire on May 1, 2021, your output will resemble the following:

bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/apiserver.crt|grep After
            Not After : May  1 00:25:47 2021 GMT
bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/apiserver-kubelet-client.crt|grep After
            Not After : May  1 00:30:35 2021 GMT
bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/front-proxy-client.crt|grep After
            Not After : May  1 00:31:02 2021 GMT

You should renew the certificates before the expiry date. If the Kubernetes cluster certificate expires on the Kubernetes master, the kubelet service fails. Issuing a kubectl command, such as kubectl get pods or kubectl exec -it container_name bash, results in a message similar to: Unable to connect to the server: x509: certificate has expired or is not yet valid.
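The expiry check above can be wrapped in a small script that flags any certificate nearing expiry, using openssl's -checkend test. This is a minimal sketch, assuming the standard /etc/kubernetes/pki layout; check_cert_expiry and the 30-day threshold are illustrative choices, not part of Kubernetes.

```shell
#!/bin/sh
# Report any certificate in a directory that expires within a given
# number of days. openssl x509 -checkend exits nonzero when the
# certificate will expire within the specified number of seconds.
# Usage: check_cert_expiry <cert_dir> <days>
check_cert_expiry() {
  cert_dir="$1"
  days="$2"
  seconds=$(( days * 86400 ))
  for crt in "$cert_dir"/*.crt; do
    [ -f "$crt" ] || continue
    if openssl x509 -checkend "$seconds" -noout -in "$crt" >/dev/null; then
      echo "OK: $crt"
    else
      echo "EXPIRES within ${days} days: $crt"
    fi
  done
}

# Example (on the Kubernetes master, as root):
# check_cert_expiry /etc/kubernetes/pki 30
```

Running this periodically (for example, from cron) gives you warning before the one-year lifespan runs out.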

If your Kubernetes cluster certificate has not expired and your system is still operational, you do not need to plan for a system outage because IBM Financial Crimes Insight will remain operational during the following procedure.

Procedure

To regenerate the certificates and update the worker nodes:

  1. If you have renewed a Kubernetes certificate before, validate that the kubernetesVersion in the kubeadm.yaml file matches the Kubernetes version that you are using. Otherwise, create a configuration file named kubeadm.yaml in /root, with advertiseAddress set to the IP address of your Kubernetes master node. For example:
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: 10.165.80.110
    kubernetesVersion: v1.10.1
    apiServerCertSANs:
    - 172.30.32.1
    Note: The kubernetesVersion must match the Kubernetes version that you are using. To find the Kubernetes version, enter the following command:
    kubectl version --short
    To determine the apiServerCertSANs, use the CLUSTER-IP value from this command:
    kubectl get svc -l'component=apiserver'
    
    If the CLUSTER-IP matches the advertiseAddress, the last two lines of the configuration file are not required. For example, you can omit the following lines:
    apiServerCertSANs: 
      - 172.30.32.1
    If you have more than one IP address, advertiseAddress: is the internal IP address that the worker nodes use to communicate with the Kubernetes master, and apiServerCertSANs: is the external IP address that end users use to connect to the Kubernetes master.
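Because a stale kubernetesVersion in kubeadm.yaml is an easy mistake to make, you can script the comparison against the server version. This is a sketch only; check_kubeadm_version is an illustrative helper name, and the function takes the version string as an argument so the comparison logic works offline.

```shell
#!/bin/sh
# Compare the kubernetesVersion in a kubeadm configuration file with the
# server version reported by kubectl. check_kubeadm_version is an
# illustrative name, not a kubeadm command.
check_kubeadm_version() {
  yaml_file="$1"
  server_version="$2"   # e.g. the Server Version from: kubectl version --short
  yaml_version=$(awk '/^kubernetesVersion:/ {print $2}' "$yaml_file")
  if [ "$yaml_version" = "$server_version" ]; then
    echo "MATCH: $yaml_version"
  else
    echo "MISMATCH: kubeadm.yaml has $yaml_version but the server is $server_version"
  fi
}

# Example (on the Kubernetes master):
# check_kubeadm_version /root/kubeadm.yaml \
#   "$(kubectl version --short | awk '/Server Version/ {print $3}')"
```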
  2. Back up the existing Kubernetes certificates by running the following commands:
    mkdir -p $HOME/fci103-k8s-old-certs/pki
    /bin/cp -p /etc/kubernetes/pki/*.* $HOME/fci103-k8s-old-certs/pki
    ls -l $HOME/fci103-k8s-old-certs/pki/
    The output should resemble the following:
    [root@fcitest231 ~]# mkdir -p $HOME/fci103-k8s-old-certs/pki
    [root@fcitest231 ~]# /bin/cp -p /etc/kubernetes/pki/*.* $HOME/fci103-k8s-old-certs/pki
    [root@fcitest231 ~]# ls -l $HOME/fci103-k8s-old-certs/pki/
    total 56
    -rw-r--r-- 1 root root 1261 Mar  2  2020 apiserver.crt
    -rw-r--r-- 1 root root 1094 Mar  2  2020 apiserver-etcd-client.crt
    -rw------- 1 root root 1679 Mar  2  2020 apiserver-etcd-client.key
    -rw------- 1 root root 1675 Mar  2  2020 apiserver.key
    -rw-r--r-- 1 root root 1099 Mar  2  2020 apiserver-kubelet-client.crt
    -rw------- 1 root root 1679 Mar  2  2020 apiserver-kubelet-client.key
    -rw-r--r-- 1 root root 1025 Mar  2  2020 ca.crt
    -rw------- 1 root root 1679 Mar  2  2020 ca.key
    -rw-r--r-- 1 root root 1025 Mar  2  2020 front-proxy-ca.crt
    -rw------- 1 root root 1675 Mar  2  2020 front-proxy-ca.key
    -rw-r--r-- 1 root root 1050 Mar  2  2020 front-proxy-client.crt
    -rw------- 1 root root 1675 Mar  2  2020 front-proxy-client.key
    -rw------- 1 root root 1675 Mar  2  2020 sa.key
    -rw------- 1 root root  451 Mar  2  2020 sa.pub
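Because the next step deletes the live certificate files, it is worth confirming that the backup copies are intact first. A minimal sketch follows; verify_backup is an illustrative helper, not part of kubeadm.

```shell
#!/bin/sh
# Verify that every file in a source directory was copied intact into a
# backup directory by comparing file contents with cmp.
verify_backup() {
  src="$1"
  dst="$2"
  for f in "$src"/*.*; do
    [ -f "$f" ] || continue
    name=$(basename "$f")
    if [ ! -f "$dst/$name" ]; then
      echo "MISSING: $name"
    elif ! cmp -s "$f" "$dst/$name"; then
      echo "DIFFERS: $name"
    fi
  done
  echo "backup check complete"
}

# Example:
# verify_backup /etc/kubernetes/pki $HOME/fci103-k8s-old-certs/pki
```

Any MISSING or DIFFERS line means the backup should be redone before proceeding.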
  3. Remove the existing certificate and key files:
    /bin/rm /etc/kubernetes/pki/apiserver.key
    /bin/rm /etc/kubernetes/pki/apiserver.crt
    /bin/rm /etc/kubernetes/pki/apiserver-kubelet-client.crt
    /bin/rm /etc/kubernetes/pki/apiserver-kubelet-client.key
    /bin/rm /etc/kubernetes/pki/front-proxy-client.crt
    /bin/rm /etc/kubernetes/pki/front-proxy-client.key
  4. Create new certificates:
    kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver
    kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver-kubelet-client
    kubeadm --config /root/kubeadm.yaml alpha phase certs front-proxy-client
    The output should resemble the following:
    [root@fcitest231 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [fcitest231.rtp.raleigh.ibm.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.42.138.79]
    [root@fcitest231 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver-kubelet-client
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [root@fcitest231 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs front-proxy-client
    [certificates] Generated front-proxy-client certificate and key.
  5. Back up the existing configuration files by running the following commands:
    /bin/cp -p /etc/kubernetes/*.conf $HOME/fci103-k8s-old-certs
    ls -ltr $HOME/fci103-k8s-old-certs
    The output should resemble the following:
    [root@fcitest231 ~]# ls -ltr $HOME/fci103-k8s-old-certs
    total 36
    -rw------- 1 root root 5451 Mar  2  2020 admin.conf
    -rw------- 1 root root 5599 Mar  2  2020 kubelet.conf
    -rw------- 1 root root 5487 Mar  2  2020 controller-manager.conf
    -rw------- 1 root root 5431 Mar  2  2020 scheduler.conf
    drwxr-xr-x 2 root root 4096 Sep  2 20:40 pki
  6. Remove the old configuration files:
    /bin/rm /etc/kubernetes/admin.conf
    /bin/rm /etc/kubernetes/kubelet.conf
    /bin/rm /etc/kubernetes/controller-manager.conf
    /bin/rm /etc/kubernetes/scheduler.conf 
  7. Generate new configuration files:
    kubeadm --config /root/kubeadm.yaml alpha phase kubeconfig all
    The output should resemble the following:
    [root@fcitest231 ~]# kubeadm --config /root/kubeadm.yaml alpha phase kubeconfig all
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
  8. Ensure that kubectl uses the new configuration files:
    /bin/cp /etc/kubernetes/admin.conf $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    Warning: If your certificates have not yet expired and your system is operational, you are finished. Do not continue to the next step.
  9. Reboot the Kubernetes master node.
  10. After the server restarts, verify that the kubelet service is running:
    systemctl status kubelet
    Important: If you are on a single server configuration, stop here and do not proceed to the next step to rejoin the worker nodes.
  11. To rejoin the worker nodes, you need the cluster token.
    To retrieve it, run the following command:
    kubeadm token list
    Then copy the cluster token to your clipboard. The token appears similar to:
    6dihyb.d09sbgae8ph2atjv
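kubeadm bootstrap tokens have a fixed format: six lowercase alphanumeric characters, a period, then sixteen more. A quick format check catches copy-and-paste mistakes before the token is used in kubeadm join. This is a sketch; is_valid_token is an illustrative name. If kubeadm token list shows no unexpired tokens, you can generate a new one with kubeadm token create.

```shell
#!/bin/sh
# Validate the kubeadm bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}.
# is_valid_token is an illustrative helper, not a kubeadm command.
is_valid_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

# Example:
# if is_valid_token "6dihyb.d09sbgae8ph2atjv"; then echo "token format OK"; fi
```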
  12. Use SSH to log in to each of the worker nodes so that you can reconnect them to the Kubernetes master node.
  13. Join the worker nodes back into the Kubernetes cluster:
    kubeadm join --token=cluster_token master_ip:6443
    Where cluster_token is the token created in Step 11 and master_ip is the IP address of the Kubernetes master node.
    Note: Some versions of kubeadm support a --print-join-command parameter that outputs the full kubeadm join command required to reconnect to the Kubernetes master. If yours does, copy and paste that command on each worker node instead.
  14. Confirm that kubelet services are running and communication between the worker nodes and Kubernetes master is working.
  15. Wait a few minutes. Then from the Kubernetes master node, run the following command to confirm that the worker nodes are available:
    kubectl get nodes
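The final confirmation can also be scripted. The sketch below reads the output of kubectl get nodes --no-headers on standard input so the parsing logic can run without a cluster; all_nodes_ready is an illustrative helper name, and it does not account for compound statuses such as Ready,SchedulingDisabled.

```shell
#!/bin/sh
# Read "kubectl get nodes --no-headers" output on stdin and report
# whether every node reports the Ready status (column 2).
all_nodes_ready() {
  awk '$2 != "Ready" { not_ready++; print "NOT READY: " $1 }
       END { if (not_ready) exit 1; print "all nodes Ready" }'
}

# Example (on the Kubernetes master):
# kubectl get nodes --no-headers | all_nodes_ready
```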