Renewing Kubernetes cluster certificates

The Kubernetes cluster certificates have a lifespan of one year, so it is important to know when your certificates expire. To determine the expiry date, run the following command as the root user on the Kubernetes master:

find /etc/kubernetes/pki/ -type f -name "*.crt" -print | egrep -v 'ca.crt$' | xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {} | grep After'

For example, if your certificate expires on May 1, 2021, your output will resemble the following:

bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/apiserver.crt|grep After
            Not After : May  1 00:25:47 2021 GMT
bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/apiserver-kubelet-client.crt|grep After
            Not After : May  1 00:30:35 2021 GMT
bash -c openssl x509  -noout -text -in /etc/kubernetes/pki/front-proxy-client.crt|grep After
            Not After : May  1 00:31:02 2021 GMT
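To monitor expiry without reading the full text output, you can compute the number of days remaining with a short helper. This is an illustrative sketch, not part of the product: it assumes openssl and GNU date are available on the master, the cert_days_left name is invented here, and the certificate path in the usage example is taken from the output above.

```shell
# Illustrative helper: print the number of days until the certificate
# in $1 expires. Assumes openssl and GNU date are available.
cert_days_left() {
  local end
  # "notAfter=May  1 00:25:47 2021 GMT" -> keep only the date string
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  # Convert both timestamps to epoch seconds and divide by one day
  echo $(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
}

# Example usage:
# cert_days_left /etc/kubernetes/pki/apiserver.crt
```

A result of 30 or less is a reasonable trigger to schedule the renewal procedure below.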

You should renew the certificates before the expiry date. If the Kubernetes cluster certificate expires on the Kubernetes master, the kubelet service fails. Issuing a kubectl command, such as kubectl get pods or kubectl exec -it container_name bash, results in a message similar to Unable to connect to the server: x509: certificate has expired or is not yet valid.

If your Kubernetes cluster certificate has not expired and your system is still operational, you do not need to plan for a system outage because IBM Financial Crimes Insight will remain operational during the following procedure.

Procedure

If you have renewed a Kubernetes certificate before, skip steps 1 to 4 and start the procedure at step 5. Otherwise, start the procedure at step 1.

To generate new certificates and update the worker nodes:

  1. Create a configuration file in /root named kubeadm.yaml with advertiseAddress set to the IP address of your Kubernetes master node. For example:
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: 10.165.80.110
    kubernetesVersion: v1.7.5
    apiServerCertSANs:
    - 172.30.32.1
    Note: The kubernetesVersion must match the Kubernetes version that you are using. To find the Kubernetes version, enter the following command:
    kubectl version --short
    To determine the apiServerCertSANs, use the CLUSTER-IP value from this command:
    kubectl get svc -l'component=apiserver'
    
    If the CLUSTER-IP matches the advertiseAddress, the last two lines of the configuration file are not required. For example, you can omit the following lines:
    apiServerCertSANs: 
      - 172.30.32.1
    If the host has more than one IP address, advertiseAddress is the internal IP address that the worker nodes use to communicate with the Kubernetes master, and apiServerCertSANs is the external IP address that end users use to connect to the Kubernetes master.
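Before running kubeadm against the file, it can be worth confirming that the configuration actually contains an advertiseAddress line, since a missing or mistyped address produces certificates that the workers cannot use. This is a hypothetical convenience check, not part of the procedure; the check_kubeadm_cfg name is invented here for illustration.

```shell
# Hypothetical sanity check: verify that the given kubeadm configuration
# file contains an advertiseAddress with a dotted-quad IP value.
check_kubeadm_cfg() {
  grep -Eq 'advertiseAddress: *([0-9]{1,3}\.){3}[0-9]{1,3}' "$1"
}

# Example usage:
# check_kubeadm_cfg /root/kubeadm.yaml && echo "advertiseAddress present"
```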
  2. Run the following command to determine the kubeadm tool version:
    kubeadm version -o short
  3. If the kubeadm version is not 1.8.15, update kubeadm from version 1.7 to 1.8.15 by downloading the new executable:
    curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
    If the server does not have internet access, download the kubeadm executable on another system and upload it to the Kubernetes master.
  4. Replace kubeadm in the /usr/bin directory:
    chmod a+rx kubeadm.1.8.15
    sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
    sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
    kubeadm version -o short
  5. Back up the existing Kubernetes certificates by running the following commands:
    mkdir -p $HOME/fci102-k8s-old-certs/pki
    /bin/cp -p /etc/kubernetes/pki/*.* $HOME/fci102-k8s-old-certs/pki
    ls -l $HOME/fci102-k8s-old-certs/pki/
    The output should resemble the following:
    [root@fcidiw79 ~]# mkdir -p $HOME/fci102-k8s-old-certs/pki
    [root@fcidiw79 ~]# /bin/cp -p /etc/kubernetes/pki/*.* $HOME/fci102-k8s-old-certs/pki
    [root@fcidiw79 ~]# ls -l $HOME/fci102-k8s-old-certs/pki/
    total 48
    -rw-r--r-- 1 root root 1253 Apr 30 20:25 apiserver.crt
    -rw------- 1 root root 1679 Apr 30 20:25 apiserver.key
    -rw-r--r-- 1 root root 1099 Apr 30 20:30 apiserver-kubelet-client.crt
    -rw------- 1 root root 1679 Apr 30 20:30 apiserver-kubelet-client.key
    -rw-r--r-- 1 root root 1025 May  4  2018 ca.crt
    -rw------- 1 root root 1675 May  4  2018 ca.key
    -rw-r--r-- 1 root root 1025 May  4  2018 front-proxy-ca.crt
    -rw------- 1 root root 1679 May  4  2018 front-proxy-ca.key
    -rw-r--r-- 1 root root 1050 Apr 30 20:31 front-proxy-client.crt
    -rw------- 1 root root 1675 Apr 30 20:31 front-proxy-client.key
    -rw------- 1 root root 1679 May  4  2018 sa.key
    -rw------- 1 root root  451 May  4  2018 sa.pub
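Because the next step deletes the original certificate and key files, it can be worth confirming first that the copies in the backup directory are byte-identical to the originals. A minimal sketch, assuming the paths used in the backup step above; the check_backup function name is invented here for illustration.

```shell
# Illustrative check: report any file in SRC whose copy in DST differs
# or is missing. Returns nonzero if any mismatch is found.
check_backup() {
  local src=$1 dst=$2 rc=0 f
  for f in "$src"/*; do
    cmp -s "$f" "$dst/$(basename "$f")" || { echo "MISMATCH: $f"; rc=1; }
  done
  return $rc
}

# Example usage:
# check_backup /etc/kubernetes/pki "$HOME/fci102-k8s-old-certs/pki"
```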
  6. Remove the existing certificate and key files:
    /bin/rm /etc/kubernetes/pki/apiserver.key
    /bin/rm /etc/kubernetes/pki/apiserver.crt
    /bin/rm /etc/kubernetes/pki/apiserver-kubelet-client.crt
    /bin/rm /etc/kubernetes/pki/apiserver-kubelet-client.key
    /bin/rm /etc/kubernetes/pki/front-proxy-client.crt
    /bin/rm /etc/kubernetes/pki/front-proxy-client.key
  7. Create new certificates:
    kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver
    kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver-kubelet-client
    kubeadm --config /root/kubeadm.yaml alpha phase certs front-proxy-client
    The output should resemble the following:
    [root@fcidiw79 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [fcidiw79.rtp.raleigh.ibm.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.42.138.79]
    [root@fcidiw79 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs apiserver-kubelet-client
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [root@fcidiw79 ~]# kubeadm --config /root/kubeadm.yaml alpha phase certs front-proxy-client
    [certificates] Generated front-proxy-client certificate and key.
  8. Back up the existing configuration files by running the following commands:
    /bin/cp -p /etc/kubernetes/*.conf $HOME/fci102-k8s-old-certs
    ls -ltr $HOME/fci102-k8s-old-certs
    The output should resemble the following:
    [root@fcidiw79 ~]# ls -ltr $HOME/fci102-k8s-old-certs
    total 32
    -rw------- 1 root root 5447 Apr 30 20:33 admin.conf
    -rw------- 1 root root 5587 Apr 30 20:33 kubelet.conf
    -rw------- 1 root root 5487 Apr 30 20:33 controller-manager.conf
    -rw------- 1 root root 5431 Apr 30 20:33 scheduler.conf
    drwxr-xr-x 2 root root  288 Sep  2 13:35 pki
  9. Remove the old configuration files:
    /bin/rm /etc/kubernetes/admin.conf
    /bin/rm /etc/kubernetes/kubelet.conf
    /bin/rm /etc/kubernetes/controller-manager.conf
    /bin/rm /etc/kubernetes/scheduler.conf 
  10. Generate new configuration files:
    kubeadm --config /root/kubeadm.yaml alpha phase kubeconfig all
    The output should resemble the following:
    [root@fcidiw79 ~]# kubeadm --config /root/kubeadm.yaml alpha phase kubeconfig all
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
  11. Ensure that your kubectl service is using the correct configuration files:
    /bin/cp /etc/kubernetes/admin.conf $HOME/.kube/config
    export KUBECONFIG=$HOME/.kube/config
    Warning: If your certificates have not yet expired and your system is operational, you are finished. Do not continue to the next step.
  12. Reboot the Kubernetes master node.
  13. After the server restarts, check to ensure that the kubelet service is running:
    systemctl status kubelet
    Important: If you are on a single server configuration, stop here and do not proceed to the next step to rejoin the worker nodes.
  14. To rejoin the worker nodes, you require a cluster token.
    On Kubernetes 1.7.x, the cluster token is preset. Run the following command and copy the token to your clipboard:
    kubeadm token list
    The token appears similar to the following:
    6dihyb.d09sbgae8ph2atjv
    On Kubernetes 1.8 or later, the token that was generated at installation has a limited lifetime, so you must create a new token:
    kubeadm token create
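Tokens have a fixed format, so a quick pattern check can catch copy-and-paste errors before you run kubeadm join on the workers. This is an illustrative check only; the token value is the example shown above.

```shell
# Illustrative check: a kubeadm bootstrap token has the form
# <6 chars>.<16 chars>, both parts lowercase alphanumeric.
TOKEN=6dihyb.d09sbgae8ph2atjv   # example value from the step above
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"
```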
  15. SSH into each of the worker nodes and reconnect them to the Kubernetes master node.
    Note: It is recommended that you install kubeadm 1.8.15 on all worker nodes to be consistent with the version on the Kubernetes master. Follow the same procedure as on the Kubernetes master to upgrade the worker nodes.
  16. Join the worker nodes back into the Kubernetes cluster:
    kubeadm join --token=cluster_token master_ip:6443
    Where cluster_token is the token that you obtained in step 14 and master_ip is the IP address of the Kubernetes master node.
    Note: Some versions of kubeadm use a --print-join-command command line parameter. In these cases, kubeadm outputs the kubeadm join command required to reconnect with the Kubernetes master. If this occurs, enter this command (copy and paste) on each worker node.
  17. Confirm that the kubelet service is running on each worker node and that communication between the worker nodes and the Kubernetes master is working.
  18. Wait a few minutes. Then from the Kubernetes master node, run the following command to confirm that the worker nodes are available:
    kubectl get nodes