Backing up EDB Postgres

EDB Postgres supports backing up the database with no downtime. You can recover to any point in time from the first available base backup in your system. The EDB Postgres operator can be configured to orchestrate a continuous backup infrastructure that is based on the Backup and Restore Manager (Barman) tool.

Before you begin

For more information, see Backup and Recovery of EDB Postgres and the use of Barman. You can schedule your backups by creating a ScheduledBackup resource, which includes a schedule parameter. The value of the schedule parameter is a CRON schedule specification. For more information about the schedule format, see CRON schedule.
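
For example, the schedule in the ScheduledBackup resource that the script generates (shown in step 7 of the procedure) is a six-field CRON specification, where the first field is seconds:

    # seconds minutes hours day-of-month month day-of-week
    schedule: "0 0 0 * * *"    # runs at 00:00:00 every day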

Your business-critical environments can run on different cloud-based storage providers, so refer to the documentation for the provider that you use:

Depending on your object storage provider, collect the credentials that are needed to access your storage and the destination path for your backups.

AWS S3
Use the AWS documentation to create the access key and access secret. For new users, create a policy that grants access to your S3 bucket (a sample policy follows this list). An example destination path is s3://mys3bucket/. Collect the following information about your environment:
  • ACCESS_KEY_ID: The ID of the access key that is used to upload files into S3.
  • ACCESS_SECRET_KEY: The secret for the access key.
  • ACCESS_SESSION_TOKEN: Optional session token, when it is required.
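
If you create a dedicated policy, the following JSON is a minimal sketch that grants access to the example bucket. The bucket name mys3bucket matches the example destination path; adjust the actions and resources to your security requirements.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          "Resource": ["arn:aws:s3:::mys3bucket", "arn:aws:s3:::mys3bucket/*"]
        }
      ]
    }
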
Microsoft Azure Blob Storage
To access your storage account for Microsoft Azure Blob Storage, collect the following information about the account credentials:
  • The name of the storage account.
  • A credential that can authenticate to the account: the storage account access key, a shared access signature (SAS) token, or a connection string.

For more information, see Azure Blob Storage. An example destination path is https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/.
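
For reference, the backup section for Azure follows the same pattern as the AWS example in step 6 of the procedure. The following YAML is a minimal sketch, assuming the script stores your account credentials in a secret; the secret name azure-creds and its keys are illustrative.

    backup:
      barmanObjectStore:
        azureCredentials:
          storageAccount:
            name: azure-creds    # illustrative secret name
            key: AZURE_STORAGE_ACCOUNT
          storageKey:
            name: azure-creds
            key: AZURE_STORAGE_KEY
        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/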

Google Cloud Storage

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account key file. An example destination path is gs://mygsbucket. For more information, see Set up Application Default Credentials.
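
For reference, the backup section for Google Cloud references the service account key file through a secret. The following YAML is a minimal sketch; the secret name google-creds and the key gcsCredentials are illustrative.

    backup:
      barmanObjectStore:
        googleCredentials:
          applicationCredentials:
            name: google-creds    # illustrative secret name
            key: gcsCredentials
        destinationPath: gs://mygsbucket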

Procedure

  1. If you did not already download the cert-kubernetes repository, download it to your client machine.

    For more information about downloading cert-kubernetes, see Preparing your cluster for an online deployment.

  2. Log in to the target cluster as the <cluster-admin> user.

    Using the OpenShift CLI:

    oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>

    On ROKS, if you are not already logged in:

    oc login --token=<token> --server=https://<cluster-ip>:<port>
  3. Change the directory to the extracted cert-kubernetes/scripts folder.
    cd ${PATH_TO_EXTRACTED_FILES}/cert-kubernetes/scripts
  4. Run the backup script and follow the prompts in the command window.
    ./cp4a-edb-production-backup-restore.sh
    1. Select "1" to generate the necessary backup files.
    2. Continue to answer the questions with the information that you prepared in the Before you begin section.

    Verify that the script generated the secret and YAML files in /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB.
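
    For example, listing the directory shows files similar to the following. The exact set depends on the storage provider that you selected.

    ls /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB
    aws-creds.yaml  postgres-cp4ba.yaml  postgres-cp4ba-schedule-backup.yaml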

  5. Create the secret with the credentials that you stored for your chosen storage provider.

    For AWS, apply the following YAML file.

    oc apply -f /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB/aws-creds.yaml

    For Microsoft Azure, apply the following YAML file.

    oc apply -f /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB/azure-creds.yaml

    For Google Cloud, run the following script to create the secret.

    sh /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB/create-google-secret.sh
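
    Optionally, confirm that the secret exists before you continue. For example, for AWS:

    oc get secret aws-creds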
  6. Apply the EDB Postgres custom resource update by running the following command.
    oc apply -f /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB/postgres-cp4ba.yaml

    The custom resource now includes a new backup section. The following YAML shows an example backup section for AWS S3.

    backup:
      barmanObjectStore:
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: ACCESS_SECRET_KEY
        destinationPath: s3://mys3bucket/edb/
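
    Optionally, before you apply the schedule in the next step, you can test the object store configuration with a one-off backup. The following YAML is a minimal sketch that uses the same API group as the ScheduledBackup resource; the resource name postgres-cp4ba-test-backup is illustrative.

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: Backup
    metadata:
      name: postgres-cp4ba-test-backup    # illustrative name
    spec:
      cluster:
        name: postgres-cp4ba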
  7. Apply the scheduled backup to start backing up your database.
    oc apply -f /{WORKING_DIRECTORY}/cert-kubernetes/scripts/EDB/postgres-cp4ba-schedule-backup.yaml

    The ScheduledBackup resource is now created on the cluster, as shown in the following example.

    apiVersion: postgresql.k8s.enterprisedb.io/v1
    kind: ScheduledBackup
    metadata:
      name: postgres-cp4ba-schedule-backup
    spec:
      schedule: 0 0 0 * * *
      immediate: false
      cluster:
        name: postgres-cp4ba
    Warning: The script does not validate the CRON schedule format. If you apply the ScheduledBackup with a malformed schedule, you might see an error similar to the following message.
    The Backup "postgres-cp4ba-schedule-backup-1" is invalid: spec.schedule: Invalid value: 

    Before you run the backup script with AWS as your storage provider, create the destination folder or bucket on AWS, and check the schedule format: the operator expects a six-field CRON specification in which the first field is seconds, as in the previous example. A valid schedule prevents errors when the backup jobs are created in Kubernetes.
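
    After the schedule fires, you can confirm that backups complete by listing the Backup resources that the operator creates for each run. A successful backup reports a completed phase in its status.

    oc get backups
    oc describe backup <backup-name>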