Installing a CP4BA Workflow Process Service deployment

Workflow Process Service is a small-footprint business automation environment for testing and running workflow processes that coordinate manual tasks and services. You can install Workflow Process Service Runtime or Workflow Process Service Authoring on Red Hat OpenShift Container Platform (OCP). The steps include the process to prepare, deploy, and configure Workflow Process Service.

You have two options to install Workflow Process Service Authoring.

If you run into issues while installing Workflow Process Service Runtime, see Troubleshooting Workflow Process Service.

If you are upgrading, see Upgrading Workflow Process Service Runtime deployments from 22.0.1.

Preparing for a Workflow Process Service Runtime deployment

Workflow Process Service Runtime requires an IBM Cloud Pak for Business Automation installation, and integrates with components in Cloud Pak for Business Automation.

  1. Make sure that you have the resources you need for your deployment. See Planning for Workflow Process Service.
  2. Plan and prepare your deployment on your cluster by completing the steps in Preparing for a production deployment.

Deploying required Workflow Process Service Runtime components

To install Workflow Process Service Runtime, you must use the Cloud Pak for Business Automation operator to configure Resource Registry, root Certificate Authority (CA), and IBM Automation foundation.

If you already installed one of the Cloud Pak for Business Automation deployment patterns, you can proceed directly to step 2. For instructions to install a deployment pattern, see Creating a production deployment.

  1. If you didn't install a deployment pattern, you must customize the Cloud Pak for Business Automation custom resource (CR) to configure the required components.
    1. Create the following .yaml file, and replace the value of sc_slow_file_storage_classname, sc_medium_file_storage_classname, and sc_fast_file_storage_classname.
      apiVersion: icp4a.ibm.com/v1
      kind: ICP4ACluster
      metadata:
         name: icp4adeploy
         labels:
           app.kubernetes.io/instance: ibm-dba
           app.kubernetes.io/managed-by: ibm-dba
           app.kubernetes.io/name: ibm-dba
           release: 22.0.2
      spec:
         appVersion: 22.0.2
         ibm_license: "accept"
         ## shared configuration among all tribes
         shared_configuration:
           show_sensitive_log: true
           ## Use this parameter to specify the license for the IBM Cloud Pak for Business Automation deployment for the rest of the Cloud Pak for Business Automation components.
           ## This value could differ from the rest of the licenses.
           sc_deployment_license: production
           sc_deployment_type: custom
           ## On OCP 3.x and 4.x, the user script will populate these three (3) parameters based on your input for "enterprise" deployment.
           ## If you are manually deploying without using the user script, then you would provide the different storage classes for the slow, medium
           ## and fast storage parameters below. If you only have 1 storage class defined, then you can use that 1 storage class for all 3 parameters.
           storage_configuration:
             sc_slow_file_storage_classname: "<required>"
             sc_medium_file_storage_classname: "<required>"
             sc_fast_file_storage_classname: "<required>"
           sc_deployment_platform: OCP
         ## this field is required to deploy Resource Registry (RR)
         resource_registry_configuration:
           replica_size: 1
    2. If you want to configure one or more LDAP configurations, use the ldap_configuration parameter in the icp4acluster CR. For more information, see LDAP configuration. You can also configure LDAP after deployment by using the Common UI console.
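If you do add an LDAP stanza to the CR file, it might look like the following sketch. Every parameter name and value here is an illustrative assumption; confirm them against the LDAP configuration parameter reference for your release before applying.

```shell
# Append a hypothetical ldap_configuration stanza to the CR file.
# All parameter names and values below are examples; verify them against
# the LDAP configuration documentation before you apply the CR.
cat >> icp4acluster-cr.yaml <<'EOF'
  ldap_configuration:
    lc_selected_ldap_type: "IBM Security Directory Server"
    lc_ldap_server: "ldap.example.com"
    lc_ldap_port: "389"
    lc_bind_secret: ldap-bind-secret
    lc_ldap_base_dn: "dc=example,dc=com"
    lc_ldap_ssl_enabled: false
EOF
```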
  2. Wait a few minutes, then run the command oc get icp4acluster -o yaml to make sure that IBM Automation foundation, root Certificate Authority, and Resource Registry are ready. Make sure that .status.components.prereq.rootCAStatus is Ready and .status.components.prereq.rootCASecretName is filled with the correct secret name. Make sure that .status.endpoints["Resource Registry"] appears in the endpoints list. For example:
    status:
        components:
          ...
          prereq:
            conditions: []
            iafStatus: Ready
            rootCASecretName: icp4adeploy-root-ca
            rootCAStatus: Ready
          resource-registry:
            rrAdminSecret: icp4adeploy-rr-admin-secret
            rrCluster: Ready
            rrService: Ready
          ...
        endpoints:
        - name: Resource Registry
          scope: Internal
          type: gRPC
          uri: icp4adeploy-dba-rr-client:2379
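As a quick scripted check of the same status fields, you could extract them with jsonpath. In this sketch, the hard-coded value stands in for the live command so it runs without a cluster; the CR name icp4adeploy comes from the example above.

```shell
# On a live cluster, capture both readiness fields in one call:
#   status=$(oc get icp4acluster icp4adeploy \
#     -o jsonpath='{.status.components.prereq.rootCAStatus} {.status.components.prereq.iafStatus}')
status="Ready Ready"   # stand-in value for this offline sketch
if [ "$status" = "Ready Ready" ]; then
  echo "root CA and IAF are ready"
else
  echo "still waiting: $status"
fi
```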
  3. Make sure that Zen and Resource Registry pods are listed in the oc get pod command result. For example:
    [root@xxxxxx]# oc get pod
    NAME                                                              READY   STATUS      RESTARTS   AGE
    create-secrets-job-x4wh9                                          0/1     Completed   0          2d20h
    iaf-ai-operator-controller-manager-6b56dd5457-gnr52               1/1     Running     0          6d20h
    iaf-core-operator-controller-manager-67fc9bb46c-k2c4q             1/1     Running     1          6d20h
    iaf-eventprocessing-operator-controller-manager-6d84cc6b9bzn8cl   1/1     Running     5          6d20h
    iaf-flink-operator-controller-manager-86c5fbd469-4xtp8            1/1     Running     0          6d20h
    iaf-operator-controller-manager-565b46cc7d-zcgrg                  1/1     Running     1          6d20h
    iaf-zen-tour-job-kqx8q                                            0/1     Completed   0          2d20h
    iam-config-job-hhc4p                                              0/1     Completed   0          2d20h
    ibm-common-service-operator-7f68dc5bb8-p84vp                      1/1     Running     0          6d20h
    ibm-cp4a-operator-9fcdbf54b-mjt2w                                 1/1     Running     0          2d1h
    ibm-elastic-operator-controller-manager-9c644c68c-48vrn           1/1     Running     0          6d20h
    ibm-nginx-d4b995cc9-b6bml                                         1/1     Running     0          2d20h
    ibm-nginx-d4b995cc9-v9p6q                                         1/1     Running     0          2d20h
    icp4adeploy-dba-rr-b65c004a68                                     1/1     Running     0          2d20h
    icp4adeploy-rr-backup-1631167500-p4zmn                            0/1     Completed   0          2m22s
    icp4adeploy-rr-setup-pod                                          0/1     Completed   0          2d20h
    setup-nginx-job-4jpw7                                             0/1     Completed   0          2d20h
    usermgmt-86cf946f6c-7lm48                                         1/1     Running     0          2d20h
    usermgmt-86cf946f6c-tcbl5                                         1/1     Running     0          2d20h
    zen-audit-75c79f5f6c-97s2x                                        1/1     Running     0          2d20h
    zen-core-57cbcb46b-7bvzh                                          1/1     Running     0          2d20h
    zen-core-57cbcb46b-h8fxv                                          1/1     Running     0          2d20h
    zen-core-api-6997d5d6bb-hpjvd                                     1/1     Running     0          2d20h
    zen-core-api-6997d5d6bb-swm9h                                     1/1     Running     0          2d20h
    zen-metastoredb-0                                                 1/1     Running     0          2d20h
    zen-metastoredb-1                                                 1/1     Running     1          2d20h
    zen-metastoredb-2                                                 1/1     Running     0          2d20h
    zen-metastoredb-certs-zzsjw                                       0/1     Completed   0          2d20h
    zen-metastoredb-init-n8k6w                                        0/1     Completed   0          2d20h
    zen-post-requisite-job-rqk89                                      0/1     Completed   0          2d20h
    zen-pre-requisite-job-wpdpv                                       0/1     Completed   0          2d20h
    zen-watcher-6948b74d68-gc9d5                                      1/1     Running     0          2d20h
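A small filter over the oc get pod output can flag any pod that is neither Running nor Completed. The sample lines below are taken from the listing above, so the sketch runs without a cluster.

```shell
# On a cluster, replace the printf with: oc get pod --no-headers
printf '%s\n' \
  'icp4adeploy-dba-rr-b65c004a68   1/1   Running     0   2d20h' \
  'icp4adeploy-rr-setup-pod        0/1   Completed   0   2d20h' |
awk '$3 != "Running" && $3 != "Completed" { bad++ }
     END { print (bad ? "some pods need attention" : "all pods healthy") }'
# → all pods healthy
```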

Deploying Workflow Process Service Runtime

After configuring IBM Cloud Pak for Business Automation components, you can deploy Workflow Process Service Runtime.
  1. If you use an embedded PostgreSQL server, you can proceed directly to step 2. If you have an external PostgreSQL server, complete the following steps:
    1. Update the values for database.external.databaseName, database.external.dbCredentialSecret, and database.external.dbServerCertSecret. For information about other database parameters, see Workflow Process Service parameters.
    2. Create a database in your external PostgreSQL server and create a user secret, where username corresponds to the database username, and password corresponds to the database password. If you want to enable certificate-based authentication, you do not need a password for wfps-db-secret. For example, your file might look similar to:
      apiVersion: v1
      kind: Secret
      metadata:
        name: wfps-db-secret
      type: Opaque
      stringData:
        username: "wfpsadmin"
        password: "password"
    3. By default, SSL communication is enabled. If you want to disable SSL, change the value of database.external.enableSSL to false.
      If you want to enable SSL, create a CA certificate secret with the ca.crt key, by using the ca.crt file that is exported from your PostgreSQL server. For the secret name, enter the value of database.external.dbServerCertSecret. For example, if you are enabling SSL by itself, the command might look similar to:
      kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem
      Your database configuration might look similar to:
      spec:
        database:
          external:
            type: postgresql
            enableSSL: true
            dbServerCertSecret: wfps-db-cacert-secret
      Optionally, if you want to enable both SSL and database certificate-based authentication, create the secret with the client certificate and private key stored under the tls.crt and tls.key keys. Set the value of spec.database.external.sslMode to verify-ca or verify-full. To create your secret, run a command similar to:
      kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem --from-file=tls.crt=./client.crt --from-file=tls.key=./client.key
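For a quick test setup, you can generate a throwaway CA with openssl before packaging it as the secret. The subject name below is an assumption; in production, always use the ca.crt exported from your PostgreSQL server.

```shell
# Generate a self-signed test CA (for experimentation only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=test-postgres-ca" \
  -keyout ca_key.pem -out ca_crt.pem
# Confirm the subject before creating the secret:
openssl x509 -in ca_crt.pem -noout -subject
# Then package it as the dbServerCertSecret:
# kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem
```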
    4. Optional: If you want to use custom Java™ Database Connectivity (JDBC) files inside the Workflow Process Service Runtime server, set the database.customJDBCPVC parameter. The persistent volume claim (PVC) must use the ROX (ReadOnlyMany) or RWX (ReadWriteMany) access mode; otherwise, all pods must be scheduled on the same node, which impacts high availability disaster recovery (HADR). The PVC is mounted at the /shared/resources/jdbc/postgresql directory inside the container, and you must create the jdbc/postgresql directory structure inside the volume. For example, the structure of the remote file system might look like:
      jdbc
      └── postgresql
          └── postgresql-42.2.15.jar
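The layout above can be prepared on the file system that backs customJDBCPVC. In this sketch, a temporary directory stands in for the volume mount, and the driver file name is only an example.

```shell
PV_MOUNT=$(mktemp -d)   # stand-in for the file system backing customJDBCPVC
mkdir -p "$PV_MOUNT/jdbc/postgresql"
# Copy your JDBC driver into place (file name is an example):
# cp postgresql-42.2.15.jar "$PV_MOUNT/jdbc/postgresql/"
ls "$PV_MOUNT/jdbc"
# → postgresql
```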
  2. Create a custom resource YAML file for your Workflow Process Service Runtime configuration. For more information about parameters, see Workflow Process Service parameters. After you complete the following steps, your custom resource might look similar to the following:
    apiVersion: icp4a.ibm.com/v1
    kind: WfPSRuntime
    metadata:
      name: wfps-instance1
    spec:
      appVersion: "22.0.2"
      deploymentLicense: production    
      admin:
        username: "<required>"
      license:
        accept: true
    1. Optional but recommended: Update the value of admin.username with an LDAP user.
      By default, the operator sets admin.username to be the Common Services admin user from the platform-auth-idp-credential secret in the Common Services namespace.
      • If you are using the shared Common Services, the namespace is ibm-common-services.
      • If you are using dedicated Common Services, you can find the namespace in the common-service-maps ConfigMap in the kube-public namespace. For more information about the common-service-maps ConfigMap, see step 2 in Setting up the cluster in the OpenShift console.

      You can configure LDAP in Identity Access Management and then set the LDAP user to be admin.username. The Workflow Process Service Runtime operator will automatically configure the LDAP user as a Zen user. To configure LDAP, see step 1 of Completing post-deployment tasks for Workflow Process Service Runtime.

    2. If you want to let the Workflow Process Service Runtime operator provision an embedded PostgreSQL instance, you must make sure that your OCP cluster has a default storage class defined. If there is no default storage class defined, set the storage class name by using the spec.persistent.storageClassName parameter. For example:
      spec:
         persistent:
           storageClassName: <storage_class_name>
    3. Optional: If you want to add custom files inside the Workflow Process Service Runtime server, update the node.customFilePVC parameter. The persistent volume claim (PVC) must use the ROX (ReadOnlyMany) or RWX (ReadWriteMany) access mode; otherwise, all pods must be scheduled on the same node, which affects HADR. The PVC is mounted at the /opt/ibm/bawfile directory inside the container. For example, the customFilePVC might look similar to:
      spec:
        node:
           customFilePVC: my-custom-wfps-pvc
    4. Optional: If you want to enable the full text search feature, include the following lines:
      spec: 
        capabilities: 
          fullTextSearch: 
            enable: true 
            adminGroups: 
            - example_group 
            esStorage:
              storageClassName: BlockStorageClassName
              size: 50Gi
            esSnapshotStorage:
              storageClassName: BlockStorageClassName
              size: 10Gi
      If you installed IBM Automation foundation Elasticsearch, you don't need to add capabilities.fullTextSearch.esStorage and capabilities.fullTextSearch.esSnapshotStorage. If you didn't install IBM Automation foundation Elasticsearch and capabilities.fullTextSearch.enable is set to true, you must add both parameters in the custom resource YAML file. The storage classes for Elasticsearch and for the Elasticsearch snapshot must provision persistent volumes (PVs) in block mode rather than file system mode.
    5. Apply the custom resource by running the following command:
      oc apply -f <custom_resource_name>.yaml
    6. After a few minutes, verify that you see your pods, services, and route. If you chose embedded PostgreSQL, the PostgreSQL server pod and service are also listed. For example:
      [root@xxxxxxx]# oc get pod
      NAME                                   READY   STATUS    RESTARTS   AGE
      wfps-instance1-postgre-0               1/1     Running   0          21h
      wfps-instance1-wfps-runtime-server-0   1/1     Running   0          21h
      [root@xxxxxx]# oc get service
      NAME                                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
      wfps-instance1-postgre-any                              ClusterIP   172.30.60.216    <none>        5432/TCP                     6d
      wfps-instance1-postgre-r                                ClusterIP   172.30.43.94     <none>        5432/TCP                     6d
      wfps-instance1-postgre-ro                               ClusterIP   172.30.234.237   <none>        5432/TCP                     6d
      wfps-instance1-postgre-rw                               ClusterIP   172.30.105.46    <none>        5432/TCP                     6d
      wfps-instance1-wfps-headless-service                      ClusterIP   None             <none>        9443/TCP                   6d
      wfps-instance1-wfps-service  
      [root@xxxxxx]# oc get route
      NAME   HOST/PORT                                          PATH   SERVICES        PORT                   TERMINATION            WILDCARD
      cpd    cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com          ibm-nginx-svc   ibm-nginx-https-port   passthrough/Redirect   None

Completing post-deployment tasks for Workflow Process Service Runtime

  1. Configure your LDAP connection.
    1. To access your cluster's Common UI console, see Accessing your cluster by using the console.
    2. To configure your LDAP connection, see Configuring LDAP connection.
  2. Add LDAP users in Cloud Pak Platform UI.
    1. Connect to the URL: https://cluster_address, where cluster_address is the IBM Cloud Pak console route. You can get the IBM Cloud Pak console route by running the command:
      oc get route cpd -o jsonpath='{.spec.host}' && echo
      The output might look similar to:
      cpd-namespace_name.apps.mycluster.mydomain
      Using the example output, the console URL would look similar to:
      https://cpd-namespace_name.apps.mycluster.mydomain/zen
    2. Log in to the IBM Cloud Pak dashboard and select OpenShift authentication for kubeadmin, or log in with the IBM-provided credentials from step 1a if you are an admin.
    3. Go to Manage users > Add users.
    4. Type the names of users that you want to add, and click Next.
    5. Assign the users to roles, or add them to a group. You can add your LDAP user under Users or you can add your LDAP user group under User groups. For both users and user groups, make sure that at least one role is selected. For example, roles include administrator, automation administrator, automation analyst, automation developer, automation operator, and user.
    6. Click Add to register the users.
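The console URL assembly in step 2a can be scripted. The host below is the example value from this page; on a cluster, you would substitute the live route lookup shown in the comment.

```shell
# On a cluster: HOST=$(oc get route cpd -o jsonpath='{.spec.host}')
HOST="cpd-namespace_name.apps.mycluster.mydomain"   # example value
echo "https://${HOST}/zen"
# → https://cpd-namespace_name.apps.mycluster.mydomain/zen
```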

Verifying your Workflow Process Service Runtime deployment

To access services provided by Workflow Process Service Runtime, you might need to log in with your LDAP user and password.
  1. Make sure your Workflow Process Service Runtime deployment is ready by running the command:
    oc get wfps <cr-name> -o=jsonpath='{.status.components.wfps.configurations[*].value}'
    The output might look similar to:
    <cr-name>-admin-client-secret Ready Ready Ready Ready
  2. To access the Workplace console, you have two options. You can run the command:
    oc get wfps <cr-name> -o=jsonpath='{.status.endpoints[2].uri}'
    Alternatively, you can get the Workplace console URL by manually joining the different sections of the address. For example:
    https://$(oc get route cpd -o jsonpath="{.spec.host}")/<cr-name>-wfps/Workplace
    For example, the resulting Workplace console URL might look like:
    https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<cr-name>-wfps/Workplace
  3. To access the Operations REST APIs Swagger UI, you have two options. You can run the command:
    oc get wfps <cr-name> -o=jsonpath='{.status.endpoints[3].uri}'
    Alternatively, you can manually construct the Operations REST APIs Swagger UI URL:
    https://$(oc get route cpd -o jsonpath="{.spec.host}")/<cr-name>-wfps/ops/explorer
    For example, the resulting Operations REST APIs Swagger UI URL might look like:
    https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<cr-name>-wfps/ops/explorer
  4. To construct the URLs of exposed REST services and exposed web services, you must locate the endpoint of Workflow Process Service Runtime in the custom resource file's status field. To determine the URL of your REST services and web services, complete the following steps:
    1. Run the command:
      oc get wfps wfps-instance1 -o yaml
    2. In the endpoints section, locate the URI of the external Workflow Process Service Runtime instance. For example:
          - name: External Base URL
            scope: External
            type: https
            uri: https://cpd-wfps3.apps.fjk-ocp474.cp.example.com/wfps-instance1-wfps
    3. The URLs of your REST services have the following structure:
      https://host_name:port/[custom_prefix/]automationservices/rest/process_app_acronym/[snapshot_acronym/]rest_service_name/docs
      Where https://host_name:port/[custom_prefix/] is your URI value from the previous step, process_app_acronym is the acronym of the process application, snapshot_acronym is the optional acronym of the snapshot, and rest_service_name is the name of the REST service.
    4. The URLs of your web services have the following structure:
      https://host_name:port/[custom_prefix/]teamworks/webservices/process_app_name/[snapshot_name/]web_service_name.tws
      Where https://host_name:port/[custom_prefix/] is your URI value from step 4b, process_app_name is the name of the process application, snapshot_name is the optional name of the snapshot, and web_service_name is the name of the web service.
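Putting the URL patterns together, a sketch like the following assembles example REST and web service URLs from the External Base URL. The acronyms, names, and base URL are all illustrative values.

```shell
# Base URL taken from the status example above; substitute your own.
BASE_URL="https://cpd-wfps3.apps.fjk-ocp474.cp.example.com/wfps-instance1-wfps"
APP_ACRONYM="HRAPP"; SNAPSHOT_ACRONYM="V1"; REST_SERVICE="EmployeeService"
APP_NAME="HRApp"; SNAPSHOT_NAME="V1"; WEB_SERVICE="EmployeeWS"
echo "${BASE_URL}/automationservices/rest/${APP_ACRONYM}/${SNAPSHOT_ACRONYM}/${REST_SERVICE}/docs"
echo "${BASE_URL}/teamworks/webservices/${APP_NAME}/${SNAPSHOT_NAME}/${WEB_SERVICE}.tws"
```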

Managing your embedded PostgreSQL server

  1. To access data in your PostgreSQL server:
    1. Run the command oc get cluster to get the PostgreSQL cluster name. For example, the cluster name might be similar to: wfps-instance1-postgre.
    2. Run the command kubectl port-forward --address 0.0.0.0 svc/wfps-instance1-postgre-rw 5432:5432 on the OCP infrastructure node. The infrastructure node IP and the forwarded port (5432) become the externally accessible database server address and port.
    3. Get the username and password from the wfps-instance1-postgre-app secret to access the default database wfpsdb. To expose more PostgreSQL services, see Exposing Postgres Services.
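Secret values come back base64-encoded, so the read step looks like the following sketch. The encoded string is a sample value (it decodes to wfpsadmin), and the live lookup is shown in the comment.

```shell
# On a cluster, read the encoded username with:
#   oc get secret wfps-instance1-postgre-app -o jsonpath='{.data.username}'
echo 'd2Zwc2FkbWlu' | base64 -d; echo
# → wfpsadmin
```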
  2. To check your license, run the command oc get cluster to get the PostgreSQL cluster name. For example, the cluster name might be similar to wfps-instance1-postgre.
  3. Run the command oc get cluster wfps-instance1-postgre -o yaml to check the license status. The output might look like:
     licenseStatus:
          isTrial: true
          licenseExpiration: "2024-10-01T00:00:00Z"
          licenseStatus: Valid license (IBM - Data & Analytics (Cloud))
          repositoryAccess: false
          valid: true
  4. To configure backup and recovery for PostgreSQL, see Backup and Recovery.
  5. You can configure the operator's management of the EDB PostgreSQL cluster.
    1. If you want to manage the embedded PostgreSQL cluster yourself, update the value of spec.database.managed.managementState to Unmanaged in the Workflow Process Service Runtime custom resource YAML file. After you update the value, the Workflow Process Service Runtime operator no longer manages the embedded PostgreSQL cluster. To change the parameters and resources of the PostgreSQL cluster, see PostgreSQL Configuration and Resource management. To add nodeSelector and select the nodes that a pod can run on, see Node selection through nodeSelector.
      When you are in the Unmanaged state, you need to manually delete the PostgreSQL cluster after deleting the Workflow Process Service Runtime instance. For example, to delete your cluster, your command might look similar to:
      oc delete cluster wfps-instance1-postgre
      where wfps-instance1-postgre is the name of your PostgreSQL cluster.
    2. If you want the Workflow Process Service Runtime operator to manage the EDB PostgreSQL cluster, set spec.database.managed.managementState to Managed. The PostgreSQL cluster will have the default configuration and the PostgreSQL cluster will be deleted automatically after the Workflow Process Service Runtime instance is deleted.