Configuring backup after installing Guardium Insights
You can add backup values after upgrading Guardium® Insights from 3.1.x to 3.2.x.
Before you begin
"claimName":
in the oc patch
command must
match the name of the PVC that you create.Procedure
- Deploy Network File System (NFS) to your Guardium Insights cluster. There are multiple ways of doing this. For example, you can clone the repository in your terminal by using the command:

    git clone https://github.com/kubernetes-incubator/external-storage.git kubernetes-incubator

  For this example, use the kubernetes-incubator-staging folder. This folder contains rbac.yaml and deployment.yaml with the staging namespace already configured.
  - Change the PROVISIONER_NAME value from value: fuseim.pri.ifs to value: storage.io/nfs.
  - Update class.yaml to match the PROVISIONER_NAME from step a.
  - Deploy the modifications:

      oc create -f deploy/class.yaml
      oc create -f deploy/deployment.yaml
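The PROVISIONER_NAME edit in step a lands in the provisioner container's env section of deployment.yaml. As an abbreviated sketch (the container name and the NFS_SERVER/NFS_PATH entries follow the external-storage nfs-client example, and the server address and path shown are only illustrative):

```yaml
# Excerpt (abbreviated) of deployment.yaml after the edit in step a:
spec:
  template:
    spec:
      containers:
        - name: nfs-client-provisioner
          env:
            - name: PROVISIONER_NAME
              value: storage.io/nfs   # was: fuseim.pri.ifs
            - name: NFS_SERVER
              value: 10.21.42.111     # your NFS server address
            - name: NFS_PATH
              value: /data/insights   # your exported NFS path
```

The class.yaml provisioner field must carry the same storage.io/nfs value, or the storage class will never bind to this provisioner.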
- Create a persistent volume (PV) and persistent volume claim (PVC) in accordance with the NFS from step 1. These examples show you how to create the PV and PVC - but you may need to adjust them according to your needs:
  - Use the yaml file backuppv.yaml to create the PV, then apply it with the following commands.

      # This yaml file is to be used to create a PV based on the existing NFS:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: storage.io/nfs
        name: i-am-nfs-v320-backup
      spec:
        accessModes:
          - ReadWriteMany
        capacity:
          storage: 500Gi
        nfs:
          path: /data/insights
          server: 10.21.42.111
        persistentVolumeReclaimPolicy: Retain
        storageClassName: managed-nfs-storage
        volumeMode: Filesystem

      oc project staging
      oc apply -f backuppv.yaml

    where staging is the namespace that Guardium Insights is installed in.
  - Create a PVC yaml file and apply it in the same manner as the PV. For example:

      # This yaml file is to be used to create a PVC based on the existing PV:
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: <GI_Backup_PVC> # This name is defined by the customer and passed into the oc patch commands under the claimName property.
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 500Gi # Size of the storage that the PVC will obtain from the PV
        claimRef:
          namespace: staging
          name: i-am-nfs-v320-backup # Name of the PV previously configured with the StorageClassName
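A silent name mismatch between the PVC's claimRef and the PV is a common reason the claim never binds. As a minimal sketch, you can cross-check the two names before applying; the inline yaml below mirrors the examples above (the /tmp file names are only for this demonstration - point the awk commands at your real backuppv.yaml and PVC file):

```shell
# Sketch: confirm the PVC's claimRef name matches the PV name before applying.
cat > /tmp/backuppv-check.yaml <<'EOF'
metadata:
  name: i-am-nfs-v320-backup
EOF
cat > /tmp/backuppvc-check.yaml <<'EOF'
spec:
  claimRef:
    name: i-am-nfs-v320-backup
EOF

# First "name:" in the PV metadata.
pv_name=$(awk '$1=="name:"{print $2; exit}' /tmp/backuppv-check.yaml)
# First "name:" after "claimRef:" in the PVC.
ref_name=$(awk 'f && $1=="name:"{print $2; exit} $1=="claimRef:"{f=1}' /tmp/backuppvc-check.yaml)

if [ "$pv_name" = "$ref_name" ]; then
  echo "OK: PVC claimRef matches PV name ($pv_name)"
else
  echo "Mismatch: PV=$pv_name, claimRef=$ref_name" >&2
fi
```

On the live cluster, a bound claim shows phase Bound in oc get pvc output.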
- Edit the Guardium Insights custom resource (CR) with backup values by using the code from the following example:

    oc patch guardiuminsights $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}') --type merge -p '{"spec":{"guardiumInsightsGlobal":{"backupsupport":{"enabled":"true","name":"backup-support-pvc"}}}}'

  where the name values are as follows:
  - Postgres name: gi-postgres-backup
  - MongoDB name: gi-backup-support-mount
  - Db2 name: gi-backup-support-mount

  Note:
  - If the PVC is mounted automatically, its "storageClassName": value is "rook-cephfs". If the value is "managed-nfs-storage" instead, run the patch command in step 4.
  - The PVC must be specified in the Guardium Insights CR under the guardiumInsightsGlobal.backupsupport.name section when guardiumInsightsGlobal.backupsupport.enabled is set to true.
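The note above keys the next steps off the PVC's storageClassName. As a sketch, the decision can be scripted; here it runs against a sample JSON payload (on a live cluster you would feed it from oc get pvc backup-support-pvc -o json instead):

```shell
# Sketch: decide whether the manual mount patches are needed, based on the
# PVC's storageClassName. Sample payload stands in for live cluster output.
pvc_json='{"spec":{"storageClassName":"managed-nfs-storage"}}'

sc=$(printf '%s' "$pvc_json" | jq -r '.spec.storageClassName')
if [ "$sc" = "rook-cephfs" ]; then
  echo "PVC was mounted automatically; no manual patch needed"
else
  echo "storageClassName is $sc; run the patch command in step 4"
fi
```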
- Mount Postgres to the NFS PV from step 2 by using the code from the following example:

    oc patch sts $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-postgres-keeper --type='json' -p '[{"op":"add","path":"/spec/template/spec/volumes/2","value":{"name":"gi-postgres-backup", "persistentVolumeClaim":{"claimName":"<GI_Backup_PVC>"}}},{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/3", "value":{"mountPath":"/opt/data/backup","name":"gi-postgres-backup"}}]'
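A malformed -p payload makes oc patch fail with an unhelpful error, so it can pay to parse the JSON locally first. A minimal sketch (the <GI_Backup_PVC> placeholder is kept literal here; substitute your real PVC name before running the patch against the cluster):

```shell
# Sketch: validate the JSON-patch body from the Postgres command above.
# jq exits non-zero on malformed JSON, so this catches quoting mistakes early.
patch='[{"op":"add","path":"/spec/template/spec/volumes/2","value":{"name":"gi-postgres-backup","persistentVolumeClaim":{"claimName":"<GI_Backup_PVC>"}}},{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/3","value":{"mountPath":"/opt/data/backup","name":"gi-postgres-backup"}}]'

ops=$(printf '%s' "$patch" | jq 'length')
echo "patch body parses as JSON with $ops operations"
```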
- Mount MongoDBCommunity to the NFS PV from step 2 by using the following steps:
  - Update your claimName to the name of your PVC volume with the following code:

      oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"add\",\"path\":\"/spec/statefulSet/spec/template/spec/volumes\",\"value\":[{\"name\":\"$BACKUP_MONGO_CLAIM_NAME\",\"persistentVolumeClaim\":{\"claimName\":\"$BACKUP_PVC_NAME\"}}]}]"

  - Update the MongoDBCommunity container with the volumeMounts section:

      MONGOD_CONTAINER_JSON=$(oc get $(oc get mongodbcommunity -oname) -ojson | jq '.spec.statefulSet.spec.template.spec.containers[0]' | jq --arg backup_claim "$BACKUP_MONGO_CLAIM_NAME" '. + { "volumeMounts": [ { "name": $backup_claim, "mountPath": "/opt/data/backup" } ] }' -c)
      echo $MONGOD_CONTAINER_JSON

  - Install the patch for MongoDB:

      oc patch $(oc get mongodbcommunity -oname) --type='json' -p "[{\"op\":\"replace\",\"path\":\"/spec/statefulSet/spec/template/spec/containers/0\",\"value\":$MONGOD_CONTAINER_JSON}]"

  - Verify completion by running the following code:

      oc get mongodbcommunity -oyaml
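The jq merge in step b can be tried offline against a minimal sample container, so you can see the JSON that ends up in MONGOD_CONTAINER_JSON before touching the cluster. A sketch (the sample container is invented for illustration; on the cluster the input comes from oc get ... -ojson as shown above):

```shell
# Sketch: the jq merge from step b, applied to a minimal sample container.
# --arg binds the shell variable to $backup_claim inside the jq program.
BACKUP_MONGO_CLAIM_NAME=gi-backup-support-mount
sample='{"name":"mongod","image":"mongo"}'

merged=$(printf '%s' "$sample" | jq --arg backup_claim "$BACKUP_MONGO_CLAIM_NAME" \
  '. + { "volumeMounts": [ { "name": $backup_claim, "mountPath": "/opt/data/backup" } ] }' -c)
echo "$merged"
```

The merge is additive: existing container fields are kept and only volumeMounts is appended, which is why step c can safely replace the whole container object.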
- Mount the Db2uCluster to the NFS PV from step 2 by using the code from the following example:

    oc patch db2ucluster $(oc get guardiuminsights -o jsonpath='{range .items[*]}{.metadata.name}')-db2 --type='json' -p '[{"op":"add","path":"/spec/storage/3","value":{"name":"backup","claimName":"<GI_Backup_PVC>", "spec":{"resources":{}},"type":"existing"}}]'
- Verify the mounting of Postgres, MongoDBCommunity, and Db2:

    oc describe pvc <pvc_name>

  Note: The claimName for all three databases is the same: <GI_Backup_PVC>.
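In the oc describe pvc output, the pods that mounted the claim are listed under the Used By field; all three databases should appear there. As a sketch, a check against captured output (the pod names below are invented examples - your actual pod names will differ):

```shell
# Sketch: confirm all three databases appear in captured "oc describe pvc"
# output. The sample text stands in for real cluster output.
describe_out='Used By:  gi-postgres-keeper-0
          mongodb-replica-0
          gi-db2u-0'

found=0
for db in postgres mongo db2; do
  if printf '%s\n' "$describe_out" | grep -qi "$db"; then
    echo "$db pod is attached to the PVC"
    found=$((found + 1))
  fi
done
```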