Configuring backup after installing Guardium Insights

You can add backup values after upgrading Guardium® Insights from 3.1.x to 3.2.x.

Before you begin

When following these instructions, you will create a PVC for the installation. When you apply the patch, the "claimName": value in the oc patch command must match the name of the PVC that you create.

Procedure

  1. Deploy Network File System (NFS) to your Guardium Insights cluster. There are multiple ways to do this. For example, you can clone the repository in your terminal by using the command git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator. This example uses the kubernetes-incubator-staging folder, which contains rbac.yaml and deployment.yaml with the staging namespace already configured.
    1. In deployment.yaml, change the PROVISIONER_NAME value from value:fuseim.pri.ifs to value:storage.io/nfs.
    2. Update class.yaml to match the PROVISIONER_NAME value from step a.
    3. Deploy the modifications:
      oc create -f deploy/class.yaml
      oc create -f deploy/deployment.yaml
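    After steps a and b, the edited fragments look roughly like the following sketch (field names follow the kubernetes-incubator external-storage repo layout; your copies may differ):

```yaml
# deployment.yaml (fragment): provisioner name handed to the NFS client provisioner
env:
  - name: PROVISIONER_NAME
    value: storage.io/nfs
---
# class.yaml (fragment): the StorageClass provisioner must match the name above
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: storage.io/nfs
```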
  2. Create a persistent volume (PV) and persistent volume claim (PVC) in accordance with the NFS from step 1. These examples show you how to create the PV and PVC, but you might need to adjust them for your environment:
    1. Use the yaml file backuppv.yaml:
      # This yaml file is to be used to create a PV based on the existing NFS:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: storage.io/nfs
        name: i-am-nfs-v320-backup
      spec:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 500Gi
        nfs:
          path: /data/insights
          server: 10.21.42.111
        persistentVolumeReclaimPolicy: Retain
        storageClassName: managed-nfs-storage
        volumeMode: Filesystem
      To create and apply the PV, use the following commands:
      oc project staging
      oc apply -f backuppv.yaml
      where staging is the namespace where Guardium Insights is installed.
    2. Create a PVC yaml file and apply it in the same manner as the PV. For example:
      # This yaml file is to be used to create a PVC based on the existing PV:
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: <GI_Backup_PVC> # This name is defined by the customer and passed into the oc patch commands as the claimName value.
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 500Gi # Size of the storage that the PVC will obtain from the PV
        claimRef:
          namespace: staging
          name: i-am-nfs-v320-backup # Name of the PV previously configured with the StorageClassName
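    Once both objects exist, you can confirm that the claim binds to the PV before continuing (a sketch; backuppvc.yaml is an assumed file name for the PVC yaml above, and cluster access is required):

```shell
# Apply the PVC in the Guardium Insights namespace (file name is illustrative)
oc project staging
oc apply -f backuppvc.yaml

# The PV and PVC should both report STATUS "Bound"
oc get pv i-am-nfs-v320-backup
oc get pvc -n staging
```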
  3. Edit the Guardium Insights custom resource (CR) with backup values by using the code from the following example. The name values for each database are as follows:
    • Postgres
      • name:gi-postgres-backup
    • MongoDb
      • name:gi-backup-support-mount
    • DB2
      • name: gi-backup-support-mount
    oc patch guardiuminsights $(oc get guardiuminsights -o jsonpath='{range.items[*]}{.metadata.name}') --type merge -p '{"spec":{"guardiumInsightsGlobal":{"backupsupport":{"enabled":"true","name":"backup-support-pvc"}}}}'
    Note:
    • If the PVC is automatically mounted, its "storageClassName": value is "rook-cephfs". If the value is "managed-nfs-storage", run the patch command in step 4.
    • The PVC must be specified in the Guardium Insights CR under the guardiumInsightsGlobal.backupsupport.name section when guardiumInsightsGlobal.backupsupport.enabled is set to true.
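    Before running the patch, you can sanity-check the merge-patch payload locally (a sketch; python3 is assumed to be available, and backup-support-pvc is the example PVC name from the command above):

```shell
# Parse the merge-patch JSON locally and print the PVC name it sets,
# to catch quoting mistakes before handing the payload to oc patch.
PATCH='{"spec":{"guardiumInsightsGlobal":{"backupsupport":{"enabled":"true","name":"backup-support-pvc"}}}}'
echo "$PATCH" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["spec"]["guardiumInsightsGlobal"]["backupsupport"]["name"])'
# prints: backup-support-pvc
```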
  4. Mount Postgres to the NFS PV from step 2 using the code from the following example, which patches the postgres-keeper StatefulSet:
    oc patch sts $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-postgres-keeper --type='json' -p '[{"op":"add","path":"/spec/template/spec/volumes/2","value":{"name":"gi-postgres-backup","persistentVolumeClaim":{"claimName":"<GI_Backup_PVC>"}}},{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/3","value":{"mountPath":"/opt/data/backup","name":"gi-postgres-backup"}}]'
  5. Mount MongoDBCommunity to the NFS PV from step 2 using the following steps. In these commands, BACKUP_MONGO_CLAIM_NAME is the volume name (gi-backup-support-mount) and BACKUP_PVC_NAME is the name of your PVC:
    1. Set claimName to the name of your PVC by using the following code (the payload is double-quoted so that the environment variables expand):
      oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"add\",\"path\":\"/spec/statefulSet/spec/template/spec/volumes\",\"value\":[{\"name\":\"$BACKUP_MONGO_CLAIM_NAME\",\"persistentVolumeClaim\":{\"claimName\":\"$BACKUP_PVC_NAME\"}}]}]"
    2. Update the MongoDBCommunity container with the volumeMounts section:
      MONGOD_CONTAINER_JSON=$(oc get $(oc get mongodbcommunity -oname) -ojson | jq '.spec.statefulSet.spec.template.spec.containers[0]' | jq --arg backup_claim "$BACKUP_MONGO_CLAIM_NAME" '. + { "volumeMounts": [ { "name": $backup_claim, "mountPath": "/opt/data/backup" } ] }' -c)

      echo $MONGOD_CONTAINER_JSON
    3. Apply the patch for MongoDB:
      oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"replace\",\"path\":\"/spec/statefulSet/spec/template/spec/containers/0\",\"value\":$MONGOD_CONTAINER_JSON}]"
    4. Verify completion by running the following command:
      oc get mongodbcommunity -oyaml
  6. Mount DB2ucluster to the NFS PV from step 2 using the code from the following example:
    oc patch db2ucluster $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-db2 --type='json' -p
    '[{"op":"add","path":"/spec/storage/3","value":{"name":"backup","claimName":"<GI_Backup_PVC>",
    "spec":{"resources":{}},"type":"existing"}}]' 
  7. Verify the mounting of Postgres, MongoDBCommunity, and Db2:
    oc describe pvc <pvc_name>
    Note: The claimName for all three databases is the same: <GI_Backup_PVC>.