Enabling FIPS

To run foundational services on a Federal Information Processing Standards (FIPS) compliant system, use the planning information and steps in the following sections.

Enabling FIPS on your Red Hat OpenShift cluster

Procedure

  1. Enable FIPS on your Red Hat OpenShift cluster.

    1. Set fips: true in the install-config.yaml file.
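
      For example, a minimal sketch of the relevant install-config.yaml entry; fips is a top-level field, and all other fields stay as they are in your own configuration:

      apiVersion: v1
      fips: true
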
  2. Enable IPsec.

    1. Set networkType: OVNKubernetes in the install-config.yaml file.
    2. Create the manifests and deploy the cluster. For more information, see Configuring IPsec tunnels.
  3. Enable etcd encryption after deploying your OpenShift Container Platform cluster.

    1. Follow the steps in Encrypting etcd data.

    Note: An AWS access key ID and secret access key are required to deploy clusters on AWS, in both FIPS and non-FIPS mode.

  4. Enable FIPS mode on all nodes.

    For more information, see Support for FIPS cryptography in the Red Hat OpenShift Container Platform documentation.

    Restriction: FIPS is supported only on x86_64 hardware.
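
    As a quick check (a generic verification command, not part of the official procedure), you can confirm that FIPS mode is active on a node by reading the kernel flag. Replace <node_name> with the name of one of your nodes:

    oc debug node/<node_name> -- chroot /host cat /proc/sys/crypto/fips_enabled

    A value of 1 indicates that FIPS mode is enabled on that node.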

  5. Install Red Hat OpenShift Container Platform in FIPS mode.

    For more information, see the Red Hat OpenShift Container Platform installation documentation.

Configuring storage

For information on storage, see the Red Hat OpenShift documentation on Storage.

If you use an external storage provider and your deployment's storage must be FIPS compliant, see your storage provider's documentation to confirm that your storage meets this requirement.
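
To see which storage classes and provisioners are available in your cluster, so that you can check the corresponding provider documentation for FIPS support, you can run the following standard command:

oc get storageclass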

Configuring etcd encryption

To enable etcd encryption with AES-CBC, complete the following steps:

  1. Modify the APIServer object:

    oc edit apiserver
    
  2. Set the encryption field type to aescbc:

    spec:
      encryption:
        type: aescbc
    
  3. Save the file to apply the changes. Depending on the size of your cluster, it can take 20 minutes or longer for the encryption process to complete.

  4. Verify that the etcd encryption is successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted by running the following command:

      oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
      

    The output shows EncryptionCompleted upon successful encryption:

       EncryptionCompleted
       All resources encrypted: routes.route.openshift.io
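
      You can run a similar check against the Kubernetes API server; this follows the same verification pattern as the command above and also reports EncryptionCompleted when encryption finishes:

       oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'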
    

For more information, see Encrypting etcd data.

Configuring IPsec tunnels

You can provide additional FIPS protection for traffic between nodes in the cluster by configuring IPsec tunnels. With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel.

IPsec is disabled by default when you install OpenShift 4.x clusters. IPsec encryption can be enabled only during cluster installation and cannot be disabled after it is enabled.

Complete the steps in this section to install an OpenShift cluster with IPsec enabled.

  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      ./openshift-install create install-config --dir <installation_directory>
      

      For <installation_directory>, specify the directory name to store the files that the installation program creates.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

      2. Select AWS as the platform to target.

      3. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

        Note: Your AWS access key ID and secret access key are required when you run the following command to create the install-config.yaml file:

           ./openshift-install create install-config --dir <installation_directory>
        

        The AWS access key ID and secret access key are required whenever you create a cluster on AWS, because AWS needs to identify the owner of the cluster that you create.
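
        If you prefer to store a profile instead of entering the keys at the prompt, you can use the standard AWS shared credentials file. This is a generic example of that file's format; the values are placeholders:

           # ~/.aws/credentials
           [default]
           aws_access_key_id = <your_access_key_id>
           aws_secret_access_key = <your_secret_access_key>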

      4. Select the AWS region to deploy the cluster to.

      5. Select the base domain for the Route 53 service that you configured for your cluster.

      6. Enter a descriptive name for your cluster.

      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.

  2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section of the Red Hat documentation.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    For more information on the installation configuration file, see Creating the installation configuration file.
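
    For example, a simple copy is enough; the backup file name here is illustrative. The installation program consumes the install-config.yaml file when it generates manifests, so keep a copy if you plan to reuse the configuration:

    cp <installation_directory>/install-config.yaml install-config.yaml.bak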

  4. After creating your install-config.yaml file, switch the default CNI (OpenShift SDN) to OVN-Kubernetes CNI by updating the install-config.yaml file:

    apiVersion: v1
    baseDomain: rober.lab
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 0
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: ocp4
    networking:
      clusterNetwork:
      - cidr: 10.254.0.0/16
        hostPrefix: 24
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '$(< ~/.openshift/pull-secret)'
    sshKey: '$(< ~/.ssh/id_rsa.pub)'
    
  5. Generate the manifests from install-config.yaml by running the following command:

    ./openshift-install create manifests --dir <installation_directory>
    
  6. Create a cluster-network-03-config.yaml file and copy it to the manifests directory:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        type: OVNKubernetes
        ovnKubernetesConfig:
          ipsecConfig: {}
          mtu: 1400
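
    Assuming that you saved cluster-network-03-config.yaml in the current directory, copy it into the manifests directory that the previous step generated:

    cp cluster-network-03-config.yaml <installation_directory>/manifests/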
    
  7. Deploy your cluster:

    openshift-install create cluster --dir <installation_directory>
    
  8. After your OpenShift installation finishes, verify that IPsec is successfully enabled.

    1. Check the ovn-ipsec daemon set, which manages the daemons that configure IPsec, by running oc get ds -n openshift-ovn-kubernetes ovn-ipsec:

       oc get ds -n openshift-ovn-kubernetes ovn-ipsec
       NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
       ovn-ipsec   6         6         6       6            6           beta.kubernetes.io/os=linux   3d3h
      
    2. Verify that IPsec pods are running on all the nodes of your OpenShift cluster by running oc get pod -n openshift-ovn-kubernetes -o wide | grep ipsec:

      oc get pod -n openshift-ovn-kubernetes -o wide | grep ipsec
       ovn-ipsec-4qp86        1/1     Running   0          38m   192.168.7.23   master2.ocp4.rober.lab   <none>           <none>
       ovn-ipsec-pk7vh        1/1     Running   0          38m   192.168.7.21   master0.ocp4.rober.lab   <none>           <none>
       ovn-ipsec-q4mwj        1/1     Running   0          22m   192.168.7.11   worker0.ocp4.rober.lab   <none>           <none>
       ovn-ipsec-trz5m        1/1     Running   0          22m   192.168.7.12   worker1.ocp4.rober.lab   <none>           <none>
       ovn-ipsec-vjmw8        1/1     Running   0          38m   192.168.7.22   master1.ocp4.rober.lab   <none>           <none>
      

For more information, see Specifying advanced network configuration in the Red Hat documentation.

Configuring services to enable FIPS

Some services and components in foundational services are not FIPS compliant by default and require manual configuration. Refer to the following sections to configure these services and components to enable FIPS.

Configuring Zen route

To ensure that your instance of Zen is FIPS compliant, configure the Zen route to use reencrypt termination instead of passthrough. The Zen route is created as a passthrough route by default.

Case 1: Configuring Zen route with encryption at the time of creation

Complete the steps in this section if you are creating your ZenService CR for the first time and want to set the Zen route as reencrypt.

  1. Set zenDefaultIngressReencrypt: true in the ZenService custom resource (CR). For example:

    spec:
      csNamespace: ibm-common-services   # where Bedrock is deployed
      iamIntegration: true
      storageClass: rook-cephfs          # file type storage class
      zenCoreMetaDbStorageClass: rook-ceph-block   # block type storage class
      zenDefaultIngressReencrypt: true
    
  2. After the ZenService CR is created, the Zen route is set to reencrypt:

    ibm-common-services        cpd                              cpd-ibm-common-services.apps.ocp411-fips.cp.fyre.ibm.com                                          ibm-nginx-svc                    ibm-nginx-https-port   reencrypt/Redirect     None
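
    To check the route's TLS termination directly, you can query the route object. The route name cpd and the ibm-common-services namespace match the example output above; adjust them to your deployment:

    oc get route cpd -n ibm-common-services -o jsonpath='{.spec.tls.termination}{"\n"}'

    The command prints reencrypt when the route is configured correctly.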
    

Case 2: Configuring an existing Zen route (that is passthrough) to be reencrypt

Complete the steps in this section if your ZenService CR already exists with a passthrough route and you want to update it to reencrypt.

  1. Create the route secret, my-tls-secret.

    1. Follow the instructions in Replacing the foundational services endpoint certificate to create ca.crt, tls.key, and tls.crt.

    2. Create the route secret in the ZenService namespace:

      Note: $zen_namespace refers to the namespace where the existing Zen instance is deployed.

      oc create secret generic my-tls-secret -n $zen_namespace --from-file=ca.crt=./ca.crt --from-file=tls.crt=./tls.crt --from-file=tls.key=./tls.key
      
  2. Update the ZenService CR by adding the following in the spec section:

    zenCustomRoute:
       route_reencrypt: true  # default false
       route_secret: my-tls-secret  # must be set; the secret created in the preceding step
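
    One way to apply this change is to edit the CR in place and add the zenCustomRoute section under spec; replace <zenservice_name> with the name of your ZenService instance:

    oc edit zenservice <zenservice_name> -n $zen_namespace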
    
  3. After the ZenService finishes updating, the Zen route is set to reencrypt:

    zen-tenant                 cpd                              cpd-zen-tenant.apps.ocp411-fips.cp.fyre.ibm.com                                                   ibm-nginx-svc                    ibm-nginx-https-port   reencrypt/Redirect     None
    

Configuring Events operator

The Events operator is a dependent product. The product that uses the Events operator decides, in its own implementation, whether to allow an external user to control the Kafka custom resource.

The Kafka custom resource defines Kafka listeners that Kafka client applications can connect to. There are two types of listeners: internal and route. These listeners are marked by the keyword type: internal or type: route in the listeners section of the Kafka custom resource. To avoid exposing an external route, make sure that you use type: internal; the Events operator then does not create an external route.

For example:

spec:
  kafka:
    listeners:
      - name: listener1
        port: 9094
        type: internal
        tls: true
        authentication:
          type: tls
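
By contrast, a listener of type route, sketched here with the same listener schema as the example above, causes the Events operator to create an external route. Avoid this type if the listener must not be exposed outside the cluster:

spec:
  kafka:
    listeners:
      - name: external1
        port: 9094
        type: route
        tls: true
        authentication:
          type: tls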

Optional: Configuring IBM Cloud Pak foundational services by using the CommonService custom resource

In foundational services version 3.22, FIPS is not enabled by default. However, many common services components are automatically FIPS compliant when running on a FIPS-enabled cluster. The following additional settings and configuration options fine-tune the FIPS compliance of particular components.

Note: The steps in this section are required only if you use the cp-proxy endpoint.

  1. Run the following command:

    oc edit commonservice -n ibm-common-services common-service
    
  2. Enable FIPS by setting fipsEnabled: true in the CommonService CR and in the initial template of the OperandConfig:

    spec:
      fipsEnabled: true
    

    Notes:

    • You must set fipsEnabled: true if you are using the cp-proxy endpoint. This setting enables FIPS compliant "strict" mode for the NGINX Ingress service.
    • For information on FIPS compliant "strict" mode, see Services that support FIPS.

    The updated CommonService CR looks like the following example:

    apiVersion: operator.ibm.com/v3
    kind: CommonService
    metadata:
      name: common-service
      namespace: ibm-common-services
    spec:
      fipsEnabled: true
    

After FIPS is enabled in the CommonService CR, the fipsEnabled: true flag is applied to ibm-iam-operator, ibm-management-ingress-operator, and ibm-ingress-nginx-operator.
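
To confirm that the flag is set, you can query the CommonService CR directly; this is a generic check rather than a documented verification step:

oc get commonservice common-service -n ibm-common-services -o jsonpath='{.spec.fipsEnabled}{"\n"}'

The command prints true when FIPS is enabled in the CR.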

More information on FIPS