Installing a CP4BA Workflow Process Service deployment
Workflow Process Service is a small-footprint business automation environment for testing and running workflow processes that coordinate manual tasks and services. You can install Workflow Process Service Runtime or Workflow Process Service Authoring on Red Hat OpenShift Container Platform (OCP). The steps include the process to prepare, deploy, and configure Workflow Process Service.
- Option 1: To install Workflow Process Service Authoring by itself or together with Workflow Process Service Runtime by using a PDF, see Installing a pattern by following the instructions in PDF files. This option is recommended.
- Option 2: To install Workflow Process Service Authoring by itself or with other IBM Cloud Pak® for Business Automation capabilities by using the online documentation, follow the steps in one of the options:
- Preparing for a Workflow Process Service Runtime deployment
- Deploying required Workflow Process Service Runtime components
- Deploying Workflow Process Service Runtime
- Completing post-deployment tasks for Workflow Process Service Runtime
- Verifying your Workflow Process Service Runtime deployment
- Managing your embedded PostgreSQL server
If you run into issues while installing Workflow Process Service Runtime, see Troubleshooting Workflow Process Service.
If you are upgrading, see Upgrading Workflow Process Service Runtime deployments from 22.0.1.
Preparing for a Workflow Process Service Runtime deployment
Workflow Process Service Runtime requires an IBM Cloud Pak for Business Automation installation, and integrates with components in Cloud Pak for Business Automation.
- Make sure that you have the resources you need for your deployment. See Planning for Workflow Process Service.
- Plan and prepare your deployment on your cluster by completing the steps in Preparing for a production deployment.
Deploying required Workflow Process Service Runtime components
To install Workflow Process Service Runtime, you must use the Cloud Pak for Business Automation operator to configure Resource Registry, root Certificate Authority (CA), and IBM Automation foundation.
If you already installed one of the Cloud Pak for Business Automation deployment patterns, you can proceed directly to step 2. For instructions to install a deployment pattern, see Creating a production deployment.
- If you didn't install a deployment pattern, you must customize the Cloud Pak for Business Automation custom resource
(CR) to configure the required components.
- Create the following .yaml file, and replace the values of `sc_slow_file_storage_classname`, `sc_medium_file_storage_classname`, and `sc_fast_file_storage_classname`:

  ```yaml
  apiVersion: icp4a.ibm.com/v1
  kind: ICP4ACluster
  metadata:
    name: icp4adeploy
    labels:
      app.kubernetes.io/instance: ibm-dba
      app.kubernetes.io/managed-by: ibm-dba
      app.kubernetes.io/name: ibm-dba
      release: 22.0.2
  spec:
    appVersion: 22.0.2
    ibm_license: "accept"
    ## Shared configuration among all components
    shared_configuration:
      show_sensitive_log: true
      ## Use this parameter to specify the license for the IBM Cloud Pak for Business Automation deployment
      ## for the rest of the Cloud Pak for Business Automation components.
      ## This value could differ from the rest of the licenses.
      sc_deployment_license: production
      sc_deployment_type: custom
      ## On OCP 3.x and 4.x, the user script populates these three (3) parameters based on your input for an "enterprise" deployment.
      ## If you are manually deploying without using the user script, provide the different storage classes for the slow, medium,
      ## and fast storage parameters below. If you only have 1 storage class defined, you can use that 1 storage class for all 3 parameters.
      storage_configuration:
        sc_slow_file_storage_classname: "<required>"
        sc_medium_file_storage_classname: "<required>"
        sc_fast_file_storage_classname: "<required>"
      sc_deployment_platform: OCP
    ## This field is required to deploy Resource Registry (RR)
    resource_registry_configuration:
      replica_size: 1
  ```

- If you want to configure one or more LDAP configurations, use the `ldap_configuration` parameter in the `icp4acluster` CR. For more information about LDAP configuration, see LDAP configuration. You can also configure LDAP after deployment by using the Common UI console.
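As an illustrative sketch only, an `ldap_configuration` section in the `icp4acluster` CR might look similar to the following. The server address, base DNs, and secret name are placeholders, and the exact parameter set depends on your LDAP type; verify every field against the LDAP configuration topic before use:

```yaml
ldap_configuration:
  lc_selected_ldap_type: "IBM Security Directory Server"   # or another supported LDAP type
  lc_ldap_server: "ldap.example.com"                       # placeholder host
  lc_ldap_port: "389"
  lc_bind_secret: ldap-bind-secret                         # secret with ldapUsername/ldapPassword
  lc_ldap_base_dn: "dc=example,dc=com"                     # placeholder base DN
  lc_ldap_ssl_enabled: false
  lc_ldap_user_name_attribute: "*:uid"
  lc_ldap_user_display_name_attr: "cn"
  lc_ldap_group_base_dn: "ou=groups,dc=example,dc=com"     # placeholder group base DN
  lc_ldap_group_name_attribute: "*:cn"
```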
- Wait a few minutes, then run the command `oc get icp4acluster -o yaml` to make sure that IBM Automation foundation, root Certificate Authority, and Resource Registry are ready. Make sure that `.status.components.prereq.rootCAStatus` is `Ready` and that `.status.components.prereq.rootCASecretName` is filled with the correct secret name. Make sure that `.status.endpoints["Resource Registry"]` appears in the endpoints list. For example:

  ```yaml
  status:
    components:
      ...
      prereq:
        conditions: []
        iafStatus: Ready
        rootCASecretName: icp4adeploy-root-ca
        rootCAStatus: Ready
      resource-registry:
        rrAdminSecret: icp4adeploy-rr-admin-secret
        rrCluster: Ready
        rrService: Ready
      ...
    endpoints:
      - name: Resource Registry
        scope: Internal
        type: gRPC
        uri: icp4adeploy-dba-rr-client:2379
  ```

- Make sure that the Zen and Resource Registry pods are listed in the `oc get pod` command result. For example:

  ```console
  [root@xxxxxx]# oc get pod
  NAME                                                              READY   STATUS      RESTARTS   AGE
  create-secrets-job-x4wh9                                          0/1     Completed   0          2d20h
  iaf-ai-operator-controller-manager-6b56dd5457-gnr52               1/1     Running     0          6d20h
  iaf-core-operator-controller-manager-67fc9bb46c-k2c4q             1/1     Running     1          6d20h
  iaf-eventprocessing-operator-controller-manager-6d84cc6b9bzn8cl   1/1     Running     5          6d20h
  iaf-flink-operator-controller-manager-86c5fbd469-4xtp8            1/1     Running     0          6d20h
  iaf-operator-controller-manager-565b46cc7d-zcgrg                  1/1     Running     1          6d20h
  iaf-zen-tour-job-kqx8q                                            0/1     Completed   0          2d20h
  iam-config-job-hhc4p                                              0/1     Completed   0          2d20h
  ibm-common-service-operator-7f68dc5bb8-p84vp                      1/1     Running     0          6d20h
  ibm-cp4a-operator-9fcdbf54b-mjt2w                                 1/1     Running     0          2d1h
  ibm-elastic-operator-controller-manager-9c644c68c-48vrn           1/1     Running     0          6d20h
  ibm-nginx-d4b995cc9-b6bml                                         1/1     Running     0          2d20h
  ibm-nginx-d4b995cc9-v9p6q                                         1/1     Running     0          2d20h
  icp4adeploy-dba-rr-b65c004a68                                     1/1     Running     0          2d20h
  icp4adeploy-rr-backup-1631167500-p4zmn                            0/1     Completed   0          2m22s
  icp4adeploy-rr-setup-pod                                          0/1     Completed   0          2d20h
  setup-nginx-job-4jpw7                                             0/1     Completed   0          2d20h
  usermgmt-86cf946f6c-7lm48                                         1/1     Running     0          2d20h
  usermgmt-86cf946f6c-tcbl5                                         1/1     Running     0          2d20h
  zen-audit-75c79f5f6c-97s2x                                        1/1     Running     0          2d20h
  zen-core-57cbcb46b-7bvzh                                          1/1     Running     0          2d20h
  zen-core-57cbcb46b-h8fxv                                          1/1     Running     0          2d20h
  zen-core-api-6997d5d6bb-hpjvd                                     1/1     Running     0          2d20h
  zen-core-api-6997d5d6bb-swm9h                                     1/1     Running     0          2d20h
  zen-metastoredb-0                                                 1/1     Running     0          2d20h
  zen-metastoredb-1                                                 1/1     Running     1          2d20h
  zen-metastoredb-2                                                 1/1     Running     0          2d20h
  zen-metastoredb-certs-zzsjw                                       0/1     Completed   0          2d20h
  zen-metastoredb-init-n8k6w                                        0/1     Completed   0          2d20h
  zen-post-requisite-job-rqk89                                      0/1     Completed   0          2d20h
  zen-pre-requisite-job-wpdpv                                       0/1     Completed   0          2d20h
  zen-watcher-6948b74d68-gc9d5                                      1/1     Running     0          2d20h
  ```
Deploying Workflow Process Service Runtime
- If you use an embedded PostgreSQL server, you can proceed directly to step 2. If you have an external PostgreSQL server, complete the following steps:
  - Update the values for `database.external.databaseName`, `database.external.dbCredentialSecret`, and `database.external.dbServerCertSecret`. For information about other database parameters, see Workflow Process Service parameters.
  - Create a database in your external PostgreSQL server and create a user secret, where `username` corresponds to the database username and `password` corresponds to the database password. If you want to enable certificate-based authentication, you do not need a password for `wfps-db-secret`. For example, your file might look similar to:

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: wfps-db-secret
    type: Opaque
    stringData:
      username: "wfpsadmin"
      password: "password"
    ```

  - By default, SSL communication is enabled. If you want to disable SSL, change the value of `database.external.enableSSL` to `false`. If you want to enable SSL, create a CA certificate secret with the `ca.crt` key, by using the `ca.crt` file that is exported from your PostgreSQL server. For the secret name, enter the value of `database.external.dbServerCertSecret`. For example, if you are enabling SSL by itself, the command might look similar to:

    ```shell
    kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem
    ```

    Your database configuration might look similar to:

    ```yaml
    spec:
      database:
        external:
          type: postgresql
          enableSSL: true
          dbServerCertSecret: wfps-db-cacert-secret
    ```

    Optionally, if you want to enable both SSL and database certificate-based authentication, create the secret with `client.crt` and `client.key`, and set the value of `spec.database.external.sslMode` to `verify-ca` or `verify-full`. To create your secret, run a command similar to:

    ```shell
    kubectl create secret generic wfps-db-cacert-secret --from-file=ca.crt=./ca_crt.pem --from-file=tls.crt=./client.crt --from-file=tls.key=./client.key
    ```

  - Optional: If you want to use custom Java™ Database Connectivity (JDBC) files inside the Workflow Process Service Runtime server, set the `database.customJDBCPVC` parameter. The persistent volume claim (PVC) should be in `ROX` (ReadOnlyMany) or `RWX` (ReadWriteMany) access mode; otherwise, high availability disaster recovery (HADR) is impacted, because all pods must be allocated to the same node. The PVC is mounted at the `/shared/resources/jdbc/postgresql` directory inside the container, and the `jdbc/postgresql` directory must be created inside `customJDBCPVC`. For example, the structure of the remote file system might look like:

    ```
    jdbc
    └── postgresql
        └── postgresql-42.2.15.jar
    ```
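For reference, a minimal PVC for the custom JDBC drivers might look like the following sketch. The PVC name and storage class are illustrative, not prescribed by the product; the important detail is the `ReadOnlyMany` or `ReadWriteMany` access mode:

```yaml
# Hypothetical PVC for custom JDBC driver files; set database.customJDBCPVC
# to this PVC name (my-custom-jdbc-pvc) in the Workflow Process Service CR.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-custom-jdbc-pvc
spec:
  accessModes:
    - ReadWriteMany          # ROX also works; single-node modes impact HADR
  resources:
    requests:
      storage: 1Gi
  storageClassName: <file_storage_class_name>   # placeholder
```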
- Create a custom resource YAML file for your Workflow Process Service Runtime
configuration. For more information about parameters, see Workflow Process Service parameters. After you complete the following steps, your custom
resource might look similar to the following:
  ```yaml
  apiVersion: icp4a.ibm.com/v1
  kind: WfPSRuntime
  metadata:
    name: wfps-instance1
  spec:
    appVersion: "22.0.2"
    deploymentLicense: production
    admin:
      username: "<required>"
    license:
      accept: true
  ```

  - Optional: It is recommended to update the value of `admin.username` with an LDAP user. By default, the operator sets `admin.username` to the Common Services admin user from the `platform-auth-idp-credential` secret in the Common Services namespace.
    - If you are using the shared Common Services, the namespace is `ibm-common-services`.
    - If you are using a dedicated Common Services, you can find the namespace in the `common-service-maps` ConfigMap in the `kube-public` namespace. For more information about the `common-service-maps` ConfigMap, see step 2 in Setting up the cluster in the OpenShift console.

    You can configure LDAP in Identity Access Management and then set the LDAP user as `admin.username`. The Workflow Process Service Runtime operator automatically configures the LDAP user as a Zen user. To configure LDAP, see step 1 of Completing post-deployment tasks for Workflow Process Service Runtime.
  - If you want to let the Workflow Process Service Runtime operator provision an embedded PostgreSQL instance, you must make sure that your OCP cluster has a default storage class defined. If there is no default storage class defined, set the storage class name by using the `spec.persistent.storageClassName` parameter. For example:

    ```yaml
    spec:
      persistent:
        storageClassName: <storage_class_name>
    ```

  - Optional: If you want to add custom files inside the Workflow Process Service Runtime server, you can update the `node.customFilePVC` parameter. The persistent volume claim (PVC) must be in `ROX` (ReadOnlyMany) or `RWX` (ReadWriteMany) access mode; otherwise, HADR is affected, because all pods must be allocated to the same node. The PVC is mounted at the `/opt/ibm/bawfile` directory inside the container. For example, the `customFilePVC` might look similar to:

    ```yaml
    spec:
      node:
        customFilePVC: my-custom-wfps-pvc
    ```

  - Optional: If you want to enable the full text search feature, include the following lines:

    ```yaml
    spec:
      capabilities:
        fullTextSearch:
          enable: true
          adminGroups:
            - example_group
          esStorage:
            storageClassName: BlockStorageClassName
            size: 50Gi
          esSnapshotStorage:
            storageClassName: BlockStorageClassName
            size: 10Gi
    ```

    If you installed IBM Automation foundation Elasticsearch, you don't need to add `capabilities.fullTextSearch.esStorage` and `capabilities.fullTextSearch.esSnapshotStorage`. If you didn't install IBM Automation foundation Elasticsearch and `capabilities.fullTextSearch.enable` is set to `true`, you must add `capabilities.fullTextSearch.esStorage` and `capabilities.fullTextSearch.esSnapshotStorage` in the custom resource .yaml file. The StorageClass for Elasticsearch and the Elasticsearch snapshot should create the PVs in block mode rather than file system mode.

- Apply the custom resource by running the following command:

  ```shell
  oc apply -f <custom_resource_name>.yaml
  ```

- After a few minutes, verify that you see your pods, services, and route. If you chose embedded PostgreSQL, the PostgreSQL server pod and service are also listed. For example:

  ```console
  [root@xxxxxxx]# oc get pod
  NAME                                   READY   STATUS    RESTARTS   AGE
  wfps-instance1-postgre-0               1/1     Running   0          21h
  wfps-instance1-wfps-runtime-server-0   1/1     Running   0          21h

  [root@xxxxxx]# oc get service
  NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  wfps-instance1-postgre-any             ClusterIP   172.30.60.216    <none>        5432/TCP   6d
  wfps-instance1-postgre-r               ClusterIP   172.30.43.94     <none>        5432/TCP   6d
  wfps-instance1-postgre-ro              ClusterIP   172.30.234.237   <none>        5432/TCP   6d
  wfps-instance1-postgre-rw              ClusterIP   172.30.105.46    <none>        5432/TCP   6d
  wfps-instance1-wfps-headless-service   ClusterIP   None             <none>        9443/TCP   6d
  wfps-instance1-wfps-service            ...

  [root@xxxxxx]# oc get route
  NAME   HOST/PORT                                      PATH   SERVICES        PORT                   TERMINATION            WILDCARD
  cpd    cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com          ibm-nginx-svc   ibm-nginx-https-port   passthrough/Redirect   None
  ```
Completing post-deployment tasks for Workflow Process Service Runtime
- Configure your LDAP connection.
- To access your cluster's Common UI console, see Accessing your cluster by using the console.
- To configure your LDAP connection, see Configuring LDAP connection.
- Add LDAP users in Cloud Pak Platform UI.
  - Connect to the URL https://cluster_address, where cluster_address is the IBM Cloud Pak console route. You can get the IBM Cloud Pak console route by running the command:

    ```shell
    oc get route cpd -o jsonpath='{.spec.host}' && echo
    ```

    The output might look similar to:

    ```
    cpd-namespace_name.apps.mycluster.mydomain
    ```

    Using the example output, the console URL would look similar to https://cpd-namespace_name.apps.mycluster.mydomain/zen.

  - Log in to the IBM Cloud Pak dashboard and select OpenShift authentication for kubeadmin, or log in with the IBM-provided credentials from step 1a if you are an admin.
  - Go to .
- Type the names of users that you want to add, and click Next.
- Assign the users to roles, or add them to a group. You can add your LDAP user under Users or you can add your LDAP user group under User groups. For both users and user groups, make sure that at least one role is selected. For example, roles include administrator, automation administrator, automation analyst, automation developer, automation operator, and user.
- Click Add to register the users.
Verifying your Workflow Process Service Runtime deployment
- Make sure your Workflow Process Service Runtime deployment is ready by running the command:

  ```shell
  oc get wfps <cr-name> -o=jsonpath='{.status.components.wfps.configurations[*].value}'
  ```

  The output might look similar to:

  ```
  <cr-name>-admin-client-secret Ready Ready Ready Ready
  ```

- To access the Workplace console, you have two options. You can run the command:

  ```shell
  oc get wfps <cr-name> -o=jsonpath='{.status.endpoints[2].uri}'
  ```

  Alternatively, you can get the Workplace console URL by manually joining the different sections of the address. For example:

  ```shell
  https://$(oc get route cpd -o jsonpath="{.spec.host}")/<cr-name>-wfps/Workplace
  ```

  For example, the resulting Workplace console URL might look like:

  ```
  https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<cr-name>-wfps/Workplace
  ```

- To access the Operations REST APIs Swagger UI, you have two options. You can run the command:

  ```shell
  oc get wfps <cr-name> -o=jsonpath='{.status.endpoints[3].uri}'
  ```

  Alternatively, you can manually join the different sections of the Operations REST APIs Swagger UI URL:

  ```shell
  https://$(oc get route cpd -o jsonpath="{.spec.host}")/<cr-name>-wfps/ops/explorer
  ```

  For example, the resulting Operations REST APIs Swagger UI URL might look like:

  ```
  https://cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com/<cr-name>-wfps/ops/explorer
  ```

- To construct the URLs of exposed REST services and exposed web services, you must locate the endpoint of Workflow Process Service Runtime in the custom resource file's status field. To determine the URL of your REST services and web services, complete the following steps:
  - Run the command:

    ```shell
    oc get wfps wfps-instance1 -o yaml
    ```

  - In the `endpoints` section, locate the URI of the external Workflow Process Service Runtime instance. For example:

    ```yaml
    - name: External Base URL
      scope: External
      type: https
      uri: https://cpd-wfps3.apps.fjk-ocp474.cp.example.com/wfps-instance1-wfps
    ```

  - The URLs of your REST services have the following structure:

    ```
    https://host_name:port/[custom_prefix/]automationservices/rest/process_app_acronym/[snapshot_acronym/]rest_service_name/docs
    ```

    Where `https://host_name:port/[custom_prefix/]` is your URI value from the previous step, `process_app_acronym` is the acronym of the process application, `snapshot_acronym` is the optional acronym of the snapshot, and `rest_service_name` is the name of the REST service.

  - The URLs of your web services have the following structure:

    ```
    https://host_name:port/[custom_prefix/]teamworks/webservices/process_app_name/[snapshot_name/]web_service_name.tws
    ```

    Where `https://host_name:port/[custom_prefix/]` is your URI value from the previous step, `process_app_name` is the name of the process application, `snapshot_name` is the optional name of the snapshot, and `web_service_name` is the name of the web service.
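The URL-joining steps above can be sketched in shell. The host and CR name below are example values taken from this document; on a live cluster you would set `HOST` from the `cpd` route instead of hardcoding it:

```shell
# On a real cluster: HOST=$(oc get route cpd -o jsonpath='{.spec.host}')
HOST="cpd-cp4a-project.apps.xxxxxx.cp.fyre.ibm.com"   # example value
CR_NAME="wfps-instance1"                              # example CR name

# Compose the Workplace console and Operations REST APIs Swagger UI URLs
WORKPLACE_URL="https://${HOST}/${CR_NAME}-wfps/Workplace"
SWAGGER_URL="https://${HOST}/${CR_NAME}-wfps/ops/explorer"

echo "$WORKPLACE_URL"
echo "$SWAGGER_URL"
```

This only concatenates strings; it does not verify that the endpoints respond.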
Managing your embedded PostgreSQL server
- To access data in your PostgreSQL server:
  - Run the command `oc get cluster` to get the PostgreSQL cluster name. For example, the cluster name might be similar to `wfps-instance1-postgre`.
  - Run the command `kubectl port-forward --address 0.0.0.0 wfps-instance1-postgre-rw 5432:5432` on the OCP infrastructure node. The infrastructure node IP and port (5432) are the externally accessible database server and database port.
  - Get the username and password from the `wfps-instance1-postgre-app` secret to access the default database `wfpsdb`. To expose more PostgreSQL services, see Exposing Postgres Services.
- To check your license:
  - Run the command `oc get cluster` to get the PostgreSQL cluster name. For example, the cluster name might be similar to `wfps-instance1-postgre`.
  - Run the command `oc get cluster wfps-instance1-postgre -o yaml` to check the license status. The output might look like:

    ```yaml
    licenseStatus:
      isTrial: true
      licenseExpiration: "2024-10-01T00:00:00Z"
      licenseStatus: Valid license (IBM - Data & Analytics (Cloud))
      repositoryAccess: false
      valid: true
    ```

- To configure backup and recovery for PostgreSQL, see Backup and Recovery.
- You can configure the operator's management of the EDB PostgreSQL cluster.
  - If you want to manage the embedded PostgreSQL cluster yourself, update the value of `spec.database.managed.managementState` to `Unmanaged` in the Workflow Process Service Runtime custom resource .yaml file. After you update the value of `spec.database.managed.managementState`, the Workflow Process Service Runtime operator no longer manages the embedded PostgreSQL cluster. To change the parameters and resources of the PostgreSQL cluster, see PostgreSQL Configuration and Resource management. To add `nodeSelector` and select the nodes that a pod can run on, see Node selection through nodeSelector. When you are in the `Unmanaged` state, you must manually delete the PostgreSQL cluster after deleting the Workflow Process Service Runtime instance. For example, to delete your cluster, your command might look similar to:

    ```shell
    oc delete cluster wfps-instance1-postgre
    ```

    where `wfps-instance1-postgre` is the name of your PostgreSQL cluster.
  - If you want the Workflow Process Service Runtime operator to manage the EDB PostgreSQL cluster, set `spec.database.managed.managementState` to `Managed`. The PostgreSQL cluster then has the default configuration, and it is deleted automatically after the Workflow Process Service Runtime instance is deleted.
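As a minimal sketch, switching the embedded cluster out of operator management is a one-field change in the Workflow Process Service Runtime custom resource (the surrounding fields are omitted here for brevity):

```yaml
spec:
  database:
    managed:
      managementState: Unmanaged   # set back to Managed to restore operator control
```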