Preparing storage on IBM Cloud Public (ROKS)

All instances of an operator on IBM Cloud® need a place to store their log files. If you plan to run the scripts to generate a custom resource (CR), the cluster setup script creates a persistent volume claim (PVC) and copies the JDBC drivers for you.

Before you begin

Before you deploy an automation container on IBM Cloud (your target cluster platform is ROKS), you must configure your client environment and create an OpenShift cluster. For more information, see Preparing your cluster and Preparing a client to connect to the cluster.

About this task

You must prepare the operator's storage before you create an instance of the operator.

You can attach Endurance storage with gid storage classes. The deployment script creates the following classes, which set the reclaimPolicy to Retain for production environments.
  • cp4a-file-retain-bronze-gid
  • cp4a-file-retain-silver-gid
  • cp4a-file-retain-gold-gid

From 21.0.3-IF008, the script no longer creates the three cp4a storage classes. Use the gid storage classes: ibmc-file-bronze-gid, ibmc-file-silver-gid, and ibmc-file-gold-gid instead.
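
To check which of these gid storage classes are available on your cluster, you can list them. The grep filter in this example is optional and only narrows the output:

oc get storageclass | grep gid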

If you plan to use Portworx storage in a multi-zone region (MZR), use the portworx-shared-sc storage class.

Restriction: Task Manager and Aspera integration with Business Automation Navigator does not work with Portworx storage in an MZR.

The YAML files to create these storage classes are provided in the cert-kubernetes/descriptors folder.

Important: If you plan to run the installation scripts and want to use the default storage, decide whether to create a new namespace before you run the scripts. You can create a namespace beforehand or when you run the cluster setup script. If you do not want to use the IBM Entitled Registry to pull the container images, then you need a namespace to load the images to a target registry.

Procedure

  1. Log in to your ROKS cluster.
    oc login --token=<token> --server=https://<cluster-ip>:<port>

    Where <token> is the API token for your user on the cluster, and <cluster-ip>:<port> is the IP address and port number of the cluster. You can get these values by clicking Copy Login Command in the OCP console.

    The following example shows a login command where the token is an almost-unique, fixed-size 256-bit (32-byte) hash.

    oc login --token=sha256~5a0GogeS4oEUfG5yFCcPE2Qf-rz5exEUiFaZ4V0Iy1Y --server=https://api.ocp4616-cp4ba.cp.example.com:6443
    
  2. Create a namespace for the operator and CP4BA deployment.

    You can use an existing project in the cluster or create a new namespace. If you are planning an all namespaces installation, the openshift-operators project is used for the operator, but you must have a different project for your CP4BA deployment. You can create a project in the OpenShift console or on the OCP CLI by running the following command.

    oc new-project <project_name> --description="<description>" --display-name="<display_name>"

    Change the scope in the OpenShift cluster to the new project (for example, cp4ba-project), or to openshift-operators for an all namespaces installation.

    oc project <project_name>
  3. Optional: If you want to create a storage class manually, then create a YAML file and name it operator-sc.yaml.

    Refer to the YAML files provided in the descriptors folder to find examples. The following example shows a bronze storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cp4a-file-retain-bronze-gid
      labels:
        kubernetes.io/cluster-service: "true"
    provisioner: ibm.io/ibmc-file
    parameters:
      type: "Endurance"
      iopsPerGB: "2"
      sizeRange: "[20-12000]Gi"
      billingType: "hourly"
      classVersion: "2"
      gidAllocate: "true"
    reclaimPolicy: Retain
    volumeBindingMode: Immediate

    For more information about downloading cert-kubernetes, see Preparing a client to connect to the cluster.

    ROKS multi-zone region (MZR) classic on a bare metal server with a Portworx storage class is supported, as is ROKS VPC MZR using an OpenShift Data Foundation (ODF) storage class. For Portworx, you must use "portworx-shared-sc" as the storage class name. The following YAML shows a sample storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-shared-sc
    parameters:
      repl: "3"
      shared: "true"
    provisioner: kubernetes.io/portworx-volume
    reclaimPolicy: Delete
    volumeBindingMode: Immediate

    Restriction: If you use Portworx, Task Manager and Aspera integration with Business Automation Navigator is not supported.
  4. Optional: If you did the previous step, apply the new storage class.
    oc apply -f operator-sc.yaml
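
    To confirm that the storage class was created with the Retain reclaim policy, you can list it by name. This example uses the bronze class from step 3:

    oc get storageclass cp4a-file-retain-bronze-gid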
  5. Optional: If you want to create a fast storage class manually, then create a YAML file for the shared log volume, and name it operator-fast-sc.yaml.
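
    A minimal sketch of such a fast storage class follows, modeled on the gold tier of the bronze example in step 3. The class name comes from the list in this topic, but the iopsPerGB value is an assumption; refer to the gold YAML file in the descriptors folder for the exact definition.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cp4a-file-retain-gold-gid
      labels:
        kubernetes.io/cluster-service: "true"
    provisioner: ibm.io/ibmc-file
    parameters:
      type: "Endurance"
      iopsPerGB: "10"           # assumed gold-tier IOPS per GB; check the descriptor file
      sizeRange: "[20-12000]Gi"
      billingType: "hourly"
      classVersion: "2"
      gidAllocate: "true"
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
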
  6. Optional: If you did the previous step, apply the new fast storage class.
    oc apply -f operator-fast-sc.yaml
  7. Optional: Create a persistent volume claim (PVC) that dynamically provisions a PV by using the descriptors/operator-shared-pvc.yaml file.

    In the file, replace the storage class values with the names of the storage classes that you created.
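
    As a reference, a PVC that requests storage from one of these classes looks similar to the following sketch. The name, access mode, and size are illustrative assumptions; use the values that are already defined in descriptors/operator-shared-pvc.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: operator-shared-pvc            # illustrative; keep the name from the descriptor file
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: cp4a-file-retain-bronze-gid   # replace with the storage class that you created
      resources:
        requests:
          storage: 1Gi                     # illustrative size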

  8. Optional: If you did the previous step, deploy the PVC.
    oc create -f descriptors/operator-shared-pvc.yaml

What to do next

Confirm that the STATUS of the PVCs is Bound before you move to the next step by running the following command in the <project_name> namespace.

oc get pvc
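
Output similar to the following illustrative example indicates that the claims are bound. The PVC names, volume IDs, and sizes depend on the descriptor file and the storage classes that you used:

NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
operator-shared-pvc   Bound    pvc-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d   1Gi        RWX            cp4a-file-retain-bronze-gid   2m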

You can now check that you have access to the container images. For more information, see Getting access to images from the public IBM Entitled Registry.