IBM Support

IBM Sterling Configure Price Quote on Red Hat OpenShift Container Platform - Readme document

Fix Readme


Abstract

This document describes how to deploy the IBM Sterling Configure Price Quote (CPQ) v10 software on Red Hat OpenShift container platform.

Content

Note: This content is for the latest version. For version-specific documentation, see the Readme file that is shipped with the charts.
What's New in this release - v10.0.0.22
  • Upgraded Order Management to FP27, FieldSales to FP16, and VisualModeler and Configurator to FP22
  • Defect Fixes
  • Security Fixes
  • Qualified on Red Hat OpenShift Container Platform versions 4.4, 4.5, and 4.6
Introduction
IBM Sterling Configure Price Quote v10.0.0.22 is built and deployed on Red Hat OpenShift Container Platform 4.6. The Red Hat OpenShift 4.6 platform features a catalog experience for the installation of Helm charts.
The Helm charts that are onboarded to the Red Hat Helm Repository appear in the Red Hat OpenShift Developer Catalog out of the box. This feature enables users to deploy Helm charts from the web user interface rather than the command-line interface. The feature is available under Developer Catalog -> Add -> From Catalog -> check the Helm Chart filter.

This document describes how to deploy the CPQ Software v10.0.0.22 on Red Hat OpenShift Container Platform. The Helm chart does not install a database server. The database must be set up and configured separately for the CPQ Software.

Note: The Helm chart supports the deployment of CPQ software with both Db2 and Oracle databases.

Highlights of Previous Releases
  • Featured a Helm form in the Developer Catalog of the OCP (4.6) web console that makes it easy to populate YAML values.
  • Introduced timeZone property in CPQ applications to set the time zone of the environment in which the applications are running.
  • Added support to execute custom script after the IBM Sterling Omni Configurator (OC) Server startup.
  • Added support to disable the deployment of Configurator UI.
  • Implemented EhCache, an in-memory cache, in OmniConfigurator for improved performance. The EhCache works in a distributed mode and makes use of RMI replication to support multiple Pods. For more information, see topic Ehcache enablement in OC.
  • Enhanced Visual Modeler Customization. For more information, see topic Install repository. 
  • Added support to populate default value for Function Handler and Custom Control Properties.
    For more information, see topic Out of Box value of Function Handler and Custom Control Properties.
  • Added mechanism to apply Fix Pack. For more information, see topic Fixpack Installation for IFS and Fixpack Installation for VM and OC.
  • Upgraded Liberty Truststore to use PKCS12 standard.
  • Upgraded Liberty image to 19.0.0.6.
  • Added support to turn on/off the Swagger UI and API docs through the IBM Sterling Omni Configurator deployment chart.
  • Added support to execute the VisualModeler Database SQL migration script that uses existing data setup job.
  • The Applications Pods run with an arbitrary user ID, inside the containers.
  • The Pod logs are externalized in the repository.
  • Implemented enhanced security for cleartext sensitive fields
  • Upgraded to Red Hat Universal Base Image v8 for the base containers
  • Upgraded Liberty Server to v20 for the application containers
  • Upgraded Helm client to 3.6.x
  • Container images are now signed and certified for enhanced security
Checklist
* Use the checklist to launch the CPQ applications
* Helper scripts to create the prerequisite cluster objects can be found in prereqs.zip. This .zip file is packaged along with these charts.
1. Helm is installed
2. NFS is installed, if the NFS option is chosen for the configurator repository
3. Configurator Repository is installed
4. PV and PVC are created
5. Hostnames are identified for VisualModeler, Configurator, and FieldSales
6. Certificates for application hostnames are available
7. Secrets are created for these certificates
8. User is created in Database
9. Secrets are created with Database access information and key store
10. Charts are downloaded and the fields in values.yaml are populated
11. Charts are installed by running the helm install command
12. Factory Data is loaded into Database by running Data jobs from these charts
Details
The workload consists of application pods (along with associated objects such as services and deployments), one for each application, that is, VisualModeler, Configurator, and FieldSales. The image names for these applications are:
  • VisualModeler(VM) App - cpq-vm-app
  • OmniConfigurator(OC) App - cpq-oc-app
  • Base image for VMOC - cpq-vmoc-base
  • Field Sales App - cpq-ifs-app
  • Field Sales Agent - cpq-ifs-agent
  • Field Sales Base - cpq-ifs-base

Chart Details

This chart performs the following actions:

  • -ibm-cpq-prod-vmappserver: Creates a deployment for the IBM Sterling Visual Modeler (VM) application server with 1 replica by default.
  • -ibm-cpq-prod-ocappserver: Creates a deployment for the IBM Sterling Omni Configurator (OC) application server with 1 replica by default.
  • -ibm-cpq-prod-vmappserver: Creates a service. This service is used to access the IBM Sterling Visual Modeler application server through a consistent IP address.
  • -ibm-cpq-prod-ocappserver: Creates a service. This service is used to access the IBM Sterling Omni Configurator application server through a consistent IP address.
  • -ibm-cpq-prod-vmdatasetup: Creates a job. This job performs the data setup for IBM Sterling Configure Price Quote that is required to deploy and run the IBM Sterling Configure Price Quote application. This job is not created if the data setup is disabled at the time of installation or upgrade.
  • -ibm-cpq-prod-vm-config: Creates a configmap. This configmap is used to provide the IBM Sterling Visual Modeler and Liberty configuration.
  • -ibm-cpq-prod-oc-config: Creates a configmap. This configmap is used to provide the IBM Sterling Omni Configurator and Liberty configuration.

Prerequisites for CPQ

  1. Kubernetes version >= 1.17.0

  2. Ensure that the Db2 or Oracle database server is installed and the database is accessible from inside the cluster. For database time zone considerations, see section Time zone considerations.

  3. Ensure that the docker images for CPQ software are loaded to an appropriate docker registry. The default images for the CPQ software can be downloaded from IBM Entitled Registry after you purchase the software from the IBM Marketplace. Alternatively, the images can also be downloaded from IBM Passport Advantage. The customized images for CPQ Software can also be used.

  4. Ensure that the docker registry is configured in Manage -> Resource Security -> Image Policies and also ensure that docker images can be pulled on all Kubernetes worker nodes.

  5. When you are using Podman for image operations, such as launching the base container, make sure that the logged-in user has root privileges. Alternatively, you can use sudo.

Dependency of CPQ Applications

The VM, OC, and IFS applications work together, and hence their configurations depend on each other. When the applications are deployed over the HTTPS protocol, they need to be aware of each other. This awareness is achieved by exchanging certificates, and hence it is important to understand the dependencies because they can impact the installation sequence of these applications.
  1. OC requires certificate of IFS. Hence, when you install OC, make sure a certificate is created for IFS and populate the ingress fields: Values.ifsappserver.ingress.host and Values.ifsappserver.ingress.ssl.secretname.
  2. VM requires certificate of OC. Hence, when you install VM, make sure a certificate is created for OC and populate the ingress fields: Values.ocappserver.ingress.host and Values.ocappserver.ingress.ssl.secretname.
  3. IFS requires certificate of OC. Hence, when you install IFS, make sure a certificate is created for OC and populate the ingress fields: Values.ocappserver.ingress.host and Values.ocappserver.ingress.ssl.secretname.
Note: If you make a configuration change after the installation of an application, you need to reinstall the application for the change to take effect.
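For example, before you install VM, the OC ingress fields must already be populated. The following is a minimal values.yaml sketch of these dependency fields; the host names and secret names are illustrative placeholders, not shipped defaults:

ocappserver:
  ingress:
    host: oc.example.com
    ssl:
      enabled: true
      secretname: oc-ingress-secret
ifsappserver:
  ingress:
    host: ifs.example.com
    ssl:
      enabled: true
      secretname: ifs-ingress-secret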

Installing the Chart: Installing the IBM Sterling Visual Modeler and IBM Sterling Omni Configurator

Prepare a custom values.yaml file based on the configuration section. Ensure that the application license is accepted by setting the global.license value to true.
Note:
  1. There is an integration between VM and OC, VM and SBC, and IFS and OC. All the sections in values.yaml, such as global, vmappserver, ocappserver, ifs, ifsappserver, ifsagentserver, ifsdatasetup, vmdatasetup, runtime, and metering, need to be populated before installing any of the applications (VM, OC, or IFS).
  2. During the installation of VM, the application requires the ingress host of OC and the SSL certificate of OC. The same applies to OC and IFS as well.
  3. The integration does not work if you populate only one section at a time and install that application. For example, do not populate only vmappserver and then install VM.
Air gap Installation
You can install the certified containers in an air gap environment, where your Kubernetes cluster does not have access to the internet. It is therefore important to properly configure and install the certified containers in such an environment.
Downloading the CPQ Software CASE bundle
You can download the CPQ software CASE bundle and the Helm chart from the remote repositories to your local machine for offline installation by running the following command:
cloudctl case save \
--case [case-path] \
--outputdir [output-dir]
For more help on cloudctl case save, run cloudctl case save -h.
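For example, assuming the CASE archive was downloaded as ibm-cpq-prod.tgz (an illustrative file name) into the current directory:

cloudctl case save \
--case ./ibm-cpq-prod.tgz \
--outputdir ./cpq-offline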
Setting credentials to pull or push certified container images
To set up the credentials for downloading the certified container images from IBM Cloud Registry to your local registry, run the appropriate command.
- For the source registry (IBM Cloud Registry):
# Set the credentials to use for the source registry
cloudctl case launch \
--case [case-path] \
--inventory ibmCpqProd \
--action configure-creds-airgap \
--args "--registry $SOURCE_REGISTRY --user $SOURCE_REGISTRY_USER --pass $SOURCE_REGISTRY_PASS"
- For the target registry (your local registry):
# Set the credentials for the target registry (your local registry)
cloudctl case launch \
--case [case-path] \
--inventory ibmCpqProd \
--action configure-creds-airgap \
--args "--registry $TARGET_REGISTRY --user $TARGET_REGISTRY_USER --pass $TARGET_REGISTRY_PASS"
Mirroring the certified container images
To mirror the certified container images and configure your cluster by using the provided credentials, run the following command:
cloudctl case launch \
--case [case-path] \
--inventory ibmCpqProd \
--action mirror-images \
--args "--registry [target-registry] --inputDir [input-dir]"
The certified container images are pulled from the source registry to your local registry, which you can use for offline installation.
Installing the Helm chart in an air gap environment
Before you begin, ensure that you review and complete the deployment prerequisites.
To install the Helm chart, run the following command:
cloudctl case launch \
--case [case-path] \
--namespace [namespace] \
--inventory ibmCpqProd \
--action install \
--args "--releaseName [release-name] --chart [chart-path]"

# --releaseName: refers to the name of the release.
# --chart: refers to the path of the downloaded chart.
Uninstalling the Helm chart in an air gap environment
To uninstall the Helm chart, run the following command:
cloudctl case launch \
--case [case-path] \
--namespace [namespace] \
--inventory ibmCpqProd \
--action uninstall \
--args "--releaseName [release-name]"

# --releaseName: refers to the name of the release.

Installing the CASE

The following steps demonstrate an example of installing the Helm chart into the default namespace of Red Hat OpenShift.
Note: The file permissions that are configured in the IBM Sterling Visual Modeler and IBM Sterling Omni Configurator pods use the value '1001' for the owner and 'root' for the group for the application-related folders.

Prerequisites for installing IBM Sterling Visual Modeler and IBM Sterling Omni Configurator

Option 1 to obtain the images: Pulling images directly from the Entitled Registry

By default, the charts are programmed to pull the images directly from the Entitled Registry - cp.icr.io/ibm-cpq. (Alternatively, you can change this behavior and obtain the images by following Option 2.)

To pull images from Entitled Registry, follow these steps:

1. Create a secret: oc create secret docker-registry er-secret --docker-username=iamapikey --docker-password=[ER-Prod API Key] --docker-server=cp.icr.io

2. Populate the global.image.pullsecret field in the values.yaml file of the charts with the secret name, that is, "er-secret".

These steps enable the charts to pull the images from the Entitled Registry when they are installed by using Helm.
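For example, a values.yaml fragment for this option might look as follows; the repository is the Entitled Registry default mentioned above and the secret name matches step 1:

global:
  image:
    repository: "cp.icr.io/ibm-cpq"
    pullsecret: "er-secret"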

Option 2 to obtain the images: Pushing images to Red Hat OpenShift Image Registry

A downloaded image can be uploaded to your Red Hat OpenShift Image Registry by completing the following steps:

  1. On a node where you have access to the Red Hat OpenShift cluster, ensure OpenShift Command Line Interface (CLI) is installed. Also, install Podman to interact with Red Hat OpenShift Image Registry.

  2. The Red Hat OpenShift image registry is secured by default. You need to expose the image registry service to create a route. The route URL facilitates pushing images to the registry so that the application can pull the images during deployment; this URL is used later in the global.image.repository field of the Helm chart. Go to https://docs.openshift.com/container-platform/4.6/registry/securing-exposing-registry.html and expose the service as a route for the image registry. The link is for version 4.6; select the documentation version that matches your OpenShift installation.

  3. A route URL is created.
    Your route URL depends on your Red Hat OpenShift installation. This URL is henceforth referred to as [image-registry-routeURL].

  4. Open your route URL, https://[image-registry-routeURL], in a browser.

    Copy the certificate. For example, if you are using the Firefox browser, follow the steps to copy the certificate:

    • Click the HTTPS icon in the location bar. In the pop-up screen, you see Connection Secure or Connection Not Secure. Click the right arrow to see Connection details.
    • Click More Information and then click the button View Certificate.
    • The browser opens a new page. Locate the link Download PEM (cert) to download the certificate (.crt) and save it.
    • Save the certificate on the node in /etc/docker/certs.d/[image-registry-routeURL]/.
  5. Log in to the Red Hat OpenShift Cluster by running the command:

    oc login [adminuser]
    Use the same admin user that was created when OpenShift was installed.
  6. To push the image to the Red Hat OpenShift registry, follow these steps:

  • Log in to podman by running the command:

    podman login [image-registry-routeURL]

    You need admin user credentials.

  • Tag the image by running the command:
podman tag [ImageId] [image-registry-routeURL]/default/cpq-vm-app:10.0.0.22-amd64
Here, the IBM Sterling Visual Modeler image is tagged for pushing to the default namespace.
  • Push the image to this registry by running the command:
podman push [image-registry-routeURL]/default/cpq-vm-app:10.0.0.22-amd64
Because you are now pointing to the Red Hat OpenShift repository, you do not need to set the pullsecret field. You can set the field as global.image.pullsecret="". This setting skips the imagePullSecrets in the pods.
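For example, a values.yaml fragment for this option might look as follows, where [image-registry-routeURL] is the route URL from the earlier steps:

global:
  image:
    repository: "[image-registry-routeURL]/default"
    pullsecret: ""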

Install the helm client

Install Helm 3.4.1 for production usage by completing the following steps:
1. Install Helm version 3.4.1 from https://helm.sh/ on a Linux client by running the following command:
curl -s https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz | tar xz
2. Locate the Helm executable on your machine and add its path to the PATH environment variable.

Also, update the PATH variable in your Linux login profile script.

3. Run the oc login command to log in to the Red Hat OpenShift cluster.
4. Run the helm version command to verify the version of the installed Helm client and to view more information about the Helm client.
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}
Install the repository on an NFS server

The IBM Sterling Omni-Configurator file structure where the models are stored as XML files is called a repository. The repository is packaged as repo.tar with the IBM Sterling Configure Price Quote base image, inside the "/opt/VMSDK" folder.

(Instructions to set up an NFS server are out of the scope of this document.)

After the NFS server is set up, extract the repo.tar file and create a shared directory structure similar to the following to store the Omni-Configurator repository.

For Visual Modeler customization, a new folder structure is added where the customized JAR, properties, XML, DTD, log4j.properties, and JSP files can be placed. These files are deployed inside VisualModeler.war during the installation of VM.

[mounted dir]/configurator_logs
[mounted dir]/omniconfigurator
[mounted dir]/omniconfigurator/extensions
[mounted dir]/omniconfigurator/models
[mounted dir]/omniconfigurator/properties
[mounted dir]/omniconfigurator/rules
[mounted dir]/omniconfigurator/tenants
[mounted dir]/VM/extensions
[mounted dir]/VM/properties
[mounted dir]/VM/web
[mounted dir]/VM/schema
[mounted dir]/VM/messages
[mounted dir]/VM/classes

Ensure that the permissions on the repository are set correctly. You can verify them by mounting the folder into a node and running the following commands:

  • sudo chown -R 1001 /[mounted dir]
  • sudo chgrp -R 0 /[mounted dir]
  • sudo chmod -R 770 /[mounted dir]

As seen in the folder structure, the owner of the files in the folders is user 1001 and the group is root (0), and the rwx permissions are 770.

Note: The repository is shared by the IBM Sterling Visual Modeler, IBM Sterling Omni Configurator, and IBM Field Sales applications. When the repository is copied to an NFS shared folder, a Persistent Volume Claim from the application points to the repository. The intended folder is then mounted in the pod through the config map. The pod mounts only unique folders. For example, if two folders point to the same NFS location, only one of them gets mounted in the pod. You can verify the mounts by running the following command inside the pod:
df -h
Make sure that the mounted paths are in sync with the path of the repository that you provide in the SMCFS Application Platform console.
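As a quick spot check (an illustrative command, not part of the shipped scripts), you can list the numeric owner and group of the repository files; the listing should report uid 1001 and gid 0:

ls -lnR /[mounted dir]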

Install persistence-related objects in Red Hat OpenShift with an NFS server

Note: The charts are bundled with sample files as templates. You can use these files to plug in your configuration and create the prerequisite objects. They are packaged in the prereqs.zip file, along with the chart.
  1. Download the charts from the IBM site. Add the dependencies as a part of the Passport Advantage archives.
  2. Create a persistent volume, persistent volume claim, and storage class with the access mode 'ReadWriteMany' and a minimum of 12 GB of space.
Creating a pv.yaml file
See the following pv.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cpq-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 12Gi
  nfs:
    path: [nfs-shared-path]
    server: [nfs-server]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: cpq-sc
Run the following command to create the persistent volume:
oc create -f pv.yaml
Creating the persistent volume claim file
See the following pvc.yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-cpq-vmoc-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: cpq-sc
  volumeName: cpq-pv
Run the following command to create the persistent volume claim:
oc create -f pvc.yaml

Creating a storage class

See the following storage class sc.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cpq-sc
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
volumeBindingMode: Immediate

Run the following command to create the storage class:

oc create -f sc.yaml

Ensure that the names used in the pv.yaml, pvc.yaml, and sc.yaml files are referenced consistently across the files.

Creating a database secret

See the following cpq-secrets.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: cpq-secrets
type: Opaque
stringData:
  dbUser: xxxx
  dbPassword: xxxx
  dbHostIp: "1.2.3.4"
  dbPort: "50000"
Run the following command to create the secret:
oc create -f cpq-secrets.yaml
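The secret name is later wired into the chart through the global.appSecret field in values.yaml. A minimal sketch, assuming the cpq-secrets secret created above:

global:
  appSecret: "cpq-secrets"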
If you want to create a self-signed certificate for ingress, follow these steps:
  1. Select a host name for launching IBM Sterling Visual Modeler.
  2. Create a certificate ingress.crt and a key ingress.key. The certificate enables the HTTPS protocol for IBM Sterling Visual Modeler. The host name must end with the subdomain name of your machine.

Run the following command:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ingress.key -out ./ingress.crt -subj "/CN=[hostname]/O=[hostname]"

In this example, the command is:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ingress.key -out ./ingress.crt -subj "/CN=[ingress_host]/O=[ingress_host]"

Run the following command:

oc create secret tls vm-ingress-secret --key ingress.key --cert ingress.crt

Specify vm-ingress-secret in the vmappserver.ingress.ssl.secretname field in the values.yaml file. Also, enable SSL by setting the field vmappserver.ingress.ssl.enabled to true.

For production environments, it is recommended to obtain a TLS certificate certified by a CA and create a secret manually by completing the following steps:

  • Obtain a TLS certificate certified by a CA for the given vmappserver.ingress.host in the form of key and certificate files.
  • Create a secret from the key and certificate files by running the command:
   oc create secret tls [release-name]-ingress-secret --key [key-file] --cert [cert-file] -n [namespace]
  • Use the [release-name]-ingress-secret secret as the value of the parameter vmappserver.ingress.ssl.secretname. Set the following variables in the values.yaml file.
  • Set the registry from where you can pull the images. For example, the parameter and the value can be global.image.repository: "cp.icr.io/ibm-cpq".
  • Set the image names. For example,
    • vmappserver.image.name: cpq-vm-app
    • vmappserver.image.tag: 10.0.0.22-amd64
    • ocappserver.image.name: cpq-oc-app
    • ocappserver.image.tag: 10.0.0.22-amd64
Set the ingress host
Note: The host name must end with the same subdomain name as that of the cluster node.
  1. Follow the same steps as for ocappserver.
  2. Check that global.persistence.claims.name "nfs-cpq-vmoc-claim" matches the name given in the pvc.yaml file.
  3. Check that the ingress TLS secret name is set correctly according to the certificate created, in place of vmappserver.ingress.ssl.secretname.
Installing a chart by using Helm Form on the UI in Web-Console (OCP4.6)
With the release of OCP 4.6, a new feature called Helm Form is available in the Developer Catalog in the Web Console. This feature facilitates the deployment of applications by filling in a UI form. To do so, complete the following steps:
  1. Log in to the web-console.
  2. Select 'Developer' context in the menu.
  3. Click 'Add' and select a project to deploy CPQ.
  4. Select 'Helm Chart' tile and then select CPQ tile (Chart v4.0.1).
  5. Click 'Install Helm Chart' button.
  6. You can select 'Form View' to enter the values.

Install the chart with the release name my-release through the command line:

  • Ensure that the chart is downloaded locally.
  • Set the enabled flag to true in the values.yaml file for the applications that you want to install. Applications must be installed one at a time. Disable the flag for an application after you install it.
  install:
    configurator:
      enabled: false
    visualmodeler:
      enabled: false
    ifs:
      enabled: false
    runtime:
      enabled: false
  • Ensure that the settings in the values.yaml file are correct by simulating the chart installation with the following command:
   helm template my-release [chartpath]
This command displays all the Kubernetes objects that would be deployed on Red Hat OpenShift. The objects are not installed.
  • To install the application in Red Hat OpenShift, run the following command:
   helm install my-release [chartpath] --timeout 3600s
  • Similarly, install the IBM Sterling Omni Configurator application.
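After the install command returns, you can verify the release and watch the pods start by using standard Helm and oc commands, for example:

 helm status my-release
 oc get pods -w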
Testing the installation
You can test the installation by checking the following links:
  • IBM Sterling Visual Modeler administrator: https://[hostname]/VisualModeler/en/US/enterpriseMgr/admin?
  • IBM Sterling Visual Modeler matrix: https://[hostname]/VisualModeler/en/US/enterpriseMgr/matrix?
  • IBM Sterling Omni Configurator application: https://[hostname]/ConfiguratorUI/UI/index.html#
  • IBM Sterling Omni Configurator backend: https://[hostname]/configurator/

Depending on the capacity of the Kubernetes worker node and the database connectivity, the deployment process can take an average time as follows:

  • 5 - 6 minutes for installation against a pre-loaded database.
  • 50 - 60 minutes for installation against a fresh new database.

When you check the deployment status, the following values can be seen in the status column:

  • Running: The container deployment is started.
  • Init: 0/1: The container deployment is pending on another container to start.

You can see the following values in the Ready column:

  • 0/1: The container deployment is started but the application is not ready yet.
  • 1/1: The application is ready to use.

Run the following command to make sure there are no errors in the log file:

oc logs [pod-name] -n [namespace] -f
If you are deploying CPQ Software in a namespace other than default, create Role Based Access Control (RBAC), if it is not created already, with a cluster admin role.
The following is an example of the RBAC for the default service account on the target namespace, referred to as [namespace].
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cpq-role-[namespace]
  namespace: [namespace]
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list", "create", "delete", "patch", "update"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cpq-rolebinding-[namespace]
  namespace: [namespace]
subjects:
- kind: ServiceAccount
  name: default
  namespace: [namespace]
roleRef:
  kind: Role
  name: cpq-role-[namespace]
  apiGroup: rbac.authorization.k8s.io

For a non-default namespace, you must add the anyuid SCC to the default service account in that namespace by running the command:

oc adm policy add-scc-to-user anyuid -z default -n [namespace]

Pod Security Policy Requirements

If you need a PodSecurityPolicy to be bound to the target namespace, choose either a predefined PodSecurityPolicy or have your cluster administrator create a custom PodSecurityPolicy for you:

  • ICPv3.1 - Predefined PodSecurityPolicy name: default

  • ICPv3.1.1 - Predefined PodSecurityPolicy name: ibm-anyuid-psp

Custom PodSecurityPolicy definition:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubernetes.io/description: "This policy allows pods to run with any UID and GID, but prevents access to the host."
  name: ibm-cpq-anyuid-psp
spec:
  allowPrivilegeEscalation: true
  fsGroup:
    rule: RunAsAny
  requiredDropCapabilities:
  - MKNOD
  allowedCapabilities:
  - SETPCAP
  - AUDIT_WRITE
  - CHOWN
  - NET_RAW
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - SETUID
  - SETGID
  - NET_BIND_SERVICE
  - SYS_CHROOT
  - SETFCAP
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
  forbiddenSysctls:
  - '*'

To create a custom PodSecurityPolicy, create a file cpq_psp.yaml with the definition and run the command:

oc create -f cpq_psp.yaml
Custom ClusterRole and RoleBinding definitions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ibm-cpq-anyuid-clusterrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - ibm-cpq-anyuid-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ibm-cpq-anyuid-clusterrole-rolebinding
  namespace: [namespace]
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-cpq-anyuid-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:[namespace]

Replace [namespace] in the definition with the namespace of the target environment. To create a custom ClusterRole and RoleBinding, create a file cpq_psp_role_and_binding.yaml with the definition and run the following command:

oc create -f cpq_psp_role_and_binding.yaml

Red Hat OpenShift SecurityContextConstraints Requirements

ibm-restricted-scc: Use the predefined SecurityContextConstraints resource named restricted.

To enable pods to run as any UID, add the default service account to the anyuid SCC by running the following command:
oc adm policy add-scc-to-user anyuid -z default

Custom SecurityContextConstraints definition

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubernetes.io/description: "This policy is the most restrictive, requiring pods to run with a non-root UID, and preventing pods from accessing the host."
    #apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    #apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  name: ibm-restricted-psp
spec:
  allowPrivilegeEscalation: false
  forbiddenSysctls:
  - '*'
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

Time zone considerations

To deploy the IBM Sterling Configure Price Quote software, the database, application servers, and agents must have the same time zone. Additionally, the time zone must be compatible with the locale code specified in the Configure Price Quote software. By default, the containers are deployed in UTC time zone. The locale code in Configure Price Quote software is also set to en_US_UTC time zone, which ensures that the database is also deployed in UTC.

The time zone of the applications is configurable through Helm chart.
Example: global.timeZone="America/Chicago"
For supported time zone IDs, see
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.iseries.doc/ae/rrun_svr_timezones.html

Configuration

Ingress

  • For IBM Sterling Visual Modeler, ingress can be enabled by setting the parameter vmappserver.ingress.enabled to true. If ingress is enabled, the application is exposed as a ClusterIP service; otherwise, the application is exposed as a NodePort service. It is recommended to enable and use ingress for accessing the application from outside the cluster. For production workloads, the only recommended approach is ingress with cluster IP. Do not use NodePort.

  • vmappserver.ingress.host: It is a fully qualified domain name that resolves to the IP address of your cluster’s proxy node. Based on your network settings it is possible that multiple virtual domain names resolve to the same IP address of the proxy node. You can use any one of those domain names. For example, "example.com" or "test.example.com".

  • vmappserver.ingress.ssl.enabled: It is recommended to enable SSL. If SSL is enabled by setting this parameter to true, a secret is needed to hold the TLS certificate.

  • For the IBM Sterling Omni Configurator application, ingress can be enabled by setting the parameter ocappserver.ingress.enabled to true. If ingress is enabled, the application is exposed as a ClusterIP service; otherwise, the application is exposed as a NodePort service. It is recommended to enable and use ingress for accessing the application from outside the cluster. For production workloads, the only recommended approach is ingress with cluster IP. Do not use NodePort.

  • ocappserver.ingress.host: It is a fully qualified domain name that resolves to the IP address of your cluster’s proxy node. Based on your network settings it is possible that multiple virtual domain names resolve to the same IP address of the proxy node. You can use any one of the domain names. For example, "example.com" or "test.example.com".

  • ocappserver.ingress.ssl.enabled: It is recommended to enable SSL. If SSL is enabled by setting this parameter to true, a secret is needed to hold the TLS certificate.
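Putting these parameters together, a values.yaml fragment that enables ingress with SSL for both applications might look like the following sketch; the host names and secret names are illustrative placeholders:

vmappserver:
  ingress:
    enabled: true
    host: vm.example.com
    ssl:
      enabled: true
      secretname: vm-ingress-secret
ocappserver:
  ingress:
    enabled: true
    host: oc.example.com
    ssl:
      enabled: true
      secretname: oc-ingress-secret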

Installation of new database

The following process creates the required database tables and factory data in the database.

Assumption: A new user is created in the database.
When installing the chart on an empty database that does not have the IBM Visual Modeler tables and data, do the following:
• Ensure that install.runtime.enabled=true and that runtime.image has the details of the correct cpq-vmoc-base image and tag.
• Ensure that the following flags are set in values.yaml to install the database, migrate the database, load the database, and load the Matrix database.
vmdatasetup.dbType = Db2 (choose between Oracle or Db2)
vmdatasetup.createDB = true
vmdatasetup.loadDB = true (load minimal DB)
vmdatasetup.loadMatrixDB = true (load Matrix reference models)
vmdatasetup.skipCreateWAR = true
vmdatasetup.generateImage = false
vmdatasetup.loadFactoryData = install
vmdatasetup.migrateDB.enabled = true
vmdatasetup.migrateDB.fromFPVersion = 10.0.0.10
vmdatasetup.migrateDB.toFPVersion = 10.0.0.12
Note:
The vmdatasetup.createDB flag must be set to true for fresh data creation.
The vmdatasetup.migrateDB.enabled flag must be set to true to migrate the DB from one fix pack version to another.
Only one of the flags vmdatasetup.createDB and vmdatasetup.migrateDB.enabled can be set to true at a time.
The fromFPVersion and toFPVersion flags must match with the folder in the system.

The vmdatasetup job takes approximately 4 - 5 minutes to create the database, load the DB, and load the Matrix DB.
To confirm that the data setup job is complete, check the logs under /logs/DBUtility and the /logs/VisualModeler/[podname]_runtime.log file.
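For reference, the same fresh-database flags expressed as a values.yaml fragment might look like the following sketch; adjust dbType and the migration settings to your environment:

vmdatasetup:
  dbType: "Db2"
  createDB: true
  loadDB: true
  loadMatrixDB: true
  skipCreateWAR: true
  generateImage: false
  loadFactoryData: "install"
  migrateDB:
    enabled: false # set to true (and createDB to false) only when migrating between fix pack versions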

Installation against a pre-loaded database
When you install the chart against a database that already has the IBM Visual Modeler tables and factory data, check the following points:
• Ensure that install.runtime.enabled is set to false and that the following flags are set. Doing so avoids re-creating tables and overwriting factory data.
vmdatasetup.dbType = Db2 (choose between Oracle or Db2)
vmdatasetup.createDB = false
vmdatasetup.loadDB = false
vmdatasetup.loadMatrixDB = false
vmdatasetup.skipCreateWAR = true
vmdatasetup.generateImage = false
vmdatasetup.loadFactoryData =

Db2 database secrets:

  • Create a Db2 user.
  • Add the following properties in the cpq-secrets.yaml file.
 apiVersion: v1
 kind: Secret
 metadata:
   name:
 type: Opaque
 stringData:
   dbUser:
   dbPassword:
   dbHostIp: ""
   databaseName:
   dbPort: ""
  • Create a secret by running the following command:
   oc create -f cpq-secrets.yaml

Oracle database secret:

  • In the Oracle database, create a user.
  • Create tablespace DATA on database server by using the following script:
 CREATE TABLESPACE DATA
 DATAFILE 'data01.DBF'
 SIZE 1000M;

The tablespace is required to import data into the Oracle user database.

  • Add the following properties in the cpq-secrets.yaml file:
 apiVersion: v1
 kind: Secret
 metadata:
   name:
 type: Opaque
 stringData:
   dbUser:
   dbPassword:
   dbHostIp: ""
   databaseName:
   dbPort: ""
   tableSpaceName: "DATA"
  • Create a secret by running the following command:
    oc create -f cpq-secrets.yaml
  • Make the following changes in the values.yaml file to install a new database:

 global:
   appSecret: ""
   database:
     dbvendor:
   install:
     configurator:
       enabled: false
     visualmodeler:
       enabled: false
     ifs:
       enabled: false
     runtime:
       enabled: true
 vmdatasetup:
   dbType: ""
   createDB: true
   loadDB: true
   skipCreateWAR: true
   generateImage: false
   loadFactoryData: "install"
 runtime:
   image:
     name:
     tag:
     pullPolicy: IfNotPresent
  • Run the following command to install the database:
 helm install my-release [chartpath] --timeout 3600s

Configurable parameters for IBM Sterling Visual Modeler and IBM Sterling Omni Configurator charts

Parameter Description Default
vmappserver.replicaCount Number of vmappserver instances 1
vmappserver.image Docker image details of vmappserver cpq-vm-app
vmappserver.runAsUser Needed for non OCP cluster 1001
vmappserver.config.vendor OMS Vendor websphere
vmappserver.config.vendorFile OMS Vendor file servers.properties
vmappserver.config.serverName App server name DefaultAppServer
vmappserver.config.jvm Server min/max heap size and jvm parameters 1024m min, 2048m max, no parameters
vmappserver.livenessCheckBeginAfterSeconds Approximate wait time (secs) to begin the liveness check 600
vmappserver.livenessFailRestartAfterMinutes If the liveness check keeps failing for the specified period, this value is the approximate time period (minutes) after which the server is restarted. 10
vmappserver.service.type Service type NodePort
vmappserver.service.http.port HTTP container port 9080
vmappserver.service.http.nodePort HTTP external port 30083
vmappserver.service.https.port HTTPS container port 9446
vmappserver.service.https.nodePort HTTPS external port 30443
vmappserver.resources CPU/Memory resource requests/limits Memory: 2560Mi, CPU: 1
vmappserver.ingress.enabled Whether Ingress settings enabled true
vmappserver.ingress.host Ingress host
vmappserver.ingress.controller Controller class for ingress controller nginx
vmappserver.ingress.contextRoots Context roots that can be accessed through ingress ["VisualModeler","adminCenter","/"]
vmappserver.ingress.annotations Annotations for the ingress resource
vmappserver.ingress.ssl.enabled Whether SSL enabled for ingress true
vmappserver.podLabels Custom labels for the vmappserver pod
vmappserver.tolerations Toleration for vmappserver pod. Specify in accordance with k8s PodSpec.tolerations.
vmappserver.custom.functionHandler Specify the path of the FunctionHandlers property file WEB-INF/properties/custom_functionHandlers.properties
vmappserver.custom.uiControl Specify the path of the Custom Control property file WEB-INF/properties/custom_controls_v2.properties
importcert.secretname Secret name consisting of the certificate to be imported into VM.
ocappserver.deployConfiguratorUI If Configurator UI is to be deployed, set this value. true
ocappserver.replicaCount Number of ocappserver instances 1
ocappserver.image Docker image details of ocappserver
ocappserver.runAsUser Needed for non OCP cluster 1001
ocappserver.config.vendor OMS Vendor websphere
ocappserver.config.vendorFile OMS Vendor file servers.properties
ocappserver.config.serverName App server name DefaultAppServer
ocappserver.config.jvm Server min/max heap size and jvm parameters 1024m min, 2048m max, no parameters
ocappserver.livenessCheckBeginAfterSeconds Approximate wait time (secs) to begin the liveness check 900
ocappserver.livenessFailRestartAfterMinutes If the liveness check keeps failing for the specified period, this value is the approximate time period (minutes) after which the server is restarted. 10
ocappserver.service.type Service type NodePort
ocappserver.service.http.port HTTP container port 9080
ocappserver.service.http.nodePort HTTP external port 30080
ocappserver.service.https.port HTTPS container port 9443
ocappserver.service.https.nodePort HTTPS external port 30443
ocappserver.resources CPU/Memory resource requests/limits Memory: 2560Mi, CPU: 1
ocappserver.ingress.enabled Whether Ingress settings enabled true
ocappserver.ingress.host Ingress host
ocappserver.ingress.controller Controller class for ingress controller nginx
ocappserver.ingress.contextRoots Context roots that can be accessed through ingress ["ConfiguratorUI","configurator"]
ocappserver.ingress.annotations Annotations for the ingress resource
ocappserver.ingress.ssl.enabled Whether SSL enabled for ingress true
ocappserver.podLabels Custom labels for the ocappserver pod
ocappserver.tolerations Toleration for ocappserver pod. Specify in accordance with k8s PodSpec.tolerations. For more information, see section Affinity and Toleration.
importcert.secretname Secret name consisting of the certificate to be imported into OC.
vmappserver.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
vmappserver.podAntiAffinity.replicaNotOnSameNode Directive to prevent scheduling of replica pod on the same node. Valid values: prefer, require, blank. For more information, see section Affinity and Toleration. prefer
vmappserver.podAntiAffinity.weightForPreference Preference weighting 1-100. It is used when value 'prefer' is specified for parameter vmappserver.podAntiAffinity.replicaNotOnSameNode. For more information, see section Affinity and Toleration. 100
ocappserver.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ocappserver.podAntiAffinity.replicaNotOnSameNode Directive to prevent scheduling of replica pod on the same node. Valid values: prefer, require, blank. For more information, see section Affinity and Toleration. prefer
ocappserver.podAntiAffinity.weightForPreference Preference weighting 1-100. It is used when value 'prefer' is specified for parameter ocappserver.podAntiAffinity.replicaNotOnSameNode. For more information, see section Affinity and Toleration. 100
global.license Set the value to true in order to accept the application license false
global.image.repository Registry for CPQ images
global.image.pullsecret Used in imagePullSecrets of Pods. See prerequisite steps, option 1 and 2.
global.appSecret CPQ secret name
global.tlskeystoresecret CPQ/IFS TLS Keystore Secret for Liberty keystore password pkcs12
global.persistence.claims.name Persistent volume name cpq-vmoc-claim
global.persistence.securityContext.fsGroup File system group ID to access the persistent volume 0
global.persistence.securityContext.supplementalGroup Supplemental group ID to access the persistent volume 0
global.database.dbvendor DB Vendor Db2/Oracle Db2
global.database.schema Database schema name. The default schema name for Db2 is "global.database.dbname". The default schema name for Oracle is the database user.
global.serviceAccountName Service account name
global.arch Architecture affinity while scheduling pods amd64: 2 - No preference, ppc64le: 2 - No preference
global.install.configurator.enabled Install Configurator
global.install.visualmodeler.enabled Install VisualModeler
global.install.ifs.enabled Install IFS
global.install.runtime.enabled Install Base Pod (Required for factory data loading)
global.timeZone Set the time zone in which the Application would be running
vmdatasetup.dbType Type of Database used by CPQ Application Db2
vmdatasetup.createDB Specifying this flag as true creates Database Schema true
vmdatasetup.loadDB Specifying this flag as true loads configuration data false
vmdatasetup.loadMatrixDB Specifying this flag as true loads reference configuration data true
vmdatasetup.skipCreateWAR Specifying this flag as true prevents the creation of the application WAR true
vmdatasetup.generateImage Specifying this flag as true prevents the creation of the CPQ image false
vmdatasetup.loadFactoryData Load factory data of VM Application
runtime.image.name The Base image used for generating customized images of VM and OC cpq-vmoc-base
runtime.runAsUser Needed for non OCP cluster 1001

Affinity and Toleration

The chart provides various ways to configure advanced pod scheduling in Kubernetes, in the form of node affinity, pod affinity, pod anti-affinity, and tolerations. For more information about the usage and specifications of the following features, see the Kubernetes documentation.

  • Toleration: It can be configured by using the parameter vmappserver.tolerations for the vmappserver. Similar parameters can be used for IBM Sterling Omni Configurator charts.

  • Node affinity: It can be configured by using the parameters: 

  •  vmappserver.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution, vmappserver.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution for the vmappserver

  • ocappserver.common.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution, ocappserver.common.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution for the ocappserver.

  • Depending on the architecture preference selected for the parameter global.arch, a suitable value for node affinity is automatically appended in addition to the user provided values.

  • Pod affinity: It can be configured by using the following parameters:

  •  vmappserver.podAffinity.requiredDuringSchedulingIgnoredDuringExecution, vmappserver.podAffinity.preferredDuringSchedulingIgnoredDuringExecution for the vmappserver

  • ocappserver.common.podAffinity.requiredDuringSchedulingIgnoredDuringExecution, ocappserver.common.podAffinity.preferredDuringSchedulingIgnoredDuringExecution for the ocappserver.

  • Pod anti-affinity: It can be configured by using the following parameters:

  •  vmappserver.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution, vmappserver.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution for the vmappserver.

  • ocappserver.common.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution, ocappserver.common.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution for the ocappserver.

  • Depending on the value of the parameter podAntiAffinity.replicaNotOnSameNode, a suitable value for pod anti-affinity is automatically appended in addition to the user-provided values. This parameter controls whether a replica of a pod can be scheduled on the same node.

  • If the value is prefer, then podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution is automatically appended.

  • If the value is require, then podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution is appended.

  • If the value is blank, then pod anti-affinity value is not automatically appended.

  • If the value is prefer, the weighting for the preference is set by using the parameter podAntiAffinity.weightForPreference that must be specified in the range 1 - 100.

Readiness and liveness

Readiness and liveness checks are provided for the agents and application server pods as applicable.

  1. Application Server pod: The following parameters can be used to tune the readiness and liveness checks for application server pods:
  • vmappserver.livenessCheckBeginAfterSeconds: It is used to specify the delay before the liveness check starts for the application server. The default value is 900 seconds (15 minutes).
  • vmappserver.livenessFailRestartAfterMinutes: It is used to specify the approximate time period after which the pod is restarted if the liveness check fails continuously for that period. The default value is 10 minutes.

For example, if vmappserver.livenessCheckBeginAfterSeconds is 900 seconds and vmappserver.livenessFailRestartAfterMinutes is 10 minutes, and the application server pod is not able to start even after 25 minutes, the pod is restarted. After the application server starts successfully, if the liveness check fails continuously for 10 minutes, the pod is restarted again.
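For example, to give a slow-starting application server more headroom before the first liveness probe, these parameters can be tuned in values.yaml; the numbers below are illustrative, not recommendations:

vmappserver:
  livenessCheckBeginAfterSeconds: 1200
  livenessFailRestartAfterMinutes: 15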

Upgrading the Chart

You can upgrade your deployment when you have a new docker image for the application or agent server, or when there is a change in configuration, for example, when a new agent or an integration server needs to be deployed or started:

  1. Ensure that the chart is downloaded locally by following the download instructions given earlier in this document.

  2. Ensure that the ifsdatasetup.loadFactoryData parameter is set to donotinstall or blank.

  3. Run the following command to upgrade your deployments:

 helm upgrade my-release -f values.yaml [chartpath] --timeout 3600s

Uninstalling the chart

To uninstall or delete the my-release deployment, run the following command:

 helm delete my-release

Note: If you need to clean the installation, you can also consider deleting the secrets and persistent volume that was created as part of the prerequisites for installation.

Dashboard Experience

Red Hat OpenShift comes with an out-of-the-box dashboard that displays the deployed objects. For the usage of the dashboard, see the Red Hat OpenShift documentation: https://docs.openshift.com/container-platform/4.6/welcome/index.html
Select the documentation version that matches your Red Hat OpenShift installation.

Limitations

  • The database must be installed in UTC time zone.

Backup/recovery process

A backup of persistent data for the IBM Sterling Configure Price Quote software can be a database backup or a repository backup. The database must be backed up regularly. Similarly, the repository folders must be backed up regularly to preserve the model XML files and other properties defined as a part of the repository. Since the application pods are stateless, no backup or recovery process is required for the pods.

The deployed application can be deleted by using the following command:

helm del [release-name]

The application can also be rolled back by using the following command:

helm rollback [release-name] 0
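To roll back to a specific revision rather than the previous one, you can first list the revision history that Helm records for the release, and then pass the revision number to the rollback command:

helm history [release-name]
helm rollback [release-name] [revision]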

Usage of the vmoc base image and its customization

  • Pull cpq-vmoc-base image into the local Docker repository.

  • Create a custom base container from the base image by running the following command:
    podman run -it --net=podman --privileged -e LICENSE=accept [cpq-vmoc-base image]
    This command takes you inside the base container, in the "/opt/VMSDK" folder.

  • The "/opt/VMSDK" folder contains all the artifacts.

  • Once you create the container, come out of the container (Ctrl+P, Ctrl+Q). Copy the IBM_VisualModeler.jar, configurator.war, and ConfiguratorUI.war of the latest fix pack into the container, in the /opt/VMSDK/newartifacts folder. Go to the folder where the three artifacts from the Jenkins build are downloaded and run the following commands:

podman cp IBM_VisualModeler.jar [container-id]:/opt/VMSDK/newartifacts
podman cp configurator.war [container-id]:/opt/VMSDK/newartifacts
podman cp ConfiguratorUI.war [container-id]:/opt/VMSDK/newartifacts
  • Once you are inside the container, run the following command to create the three database-independent images: Visual Modeler, Omni Configurator, and the base image.
    ./executeAll.sh --createDB=false --loadDB=false --MODE=all --generateImage=true --generateImageTar=true --pushImage=true --imageRegistryURL=[registry-URL] --imageRegistryPassword=****** --IMAGE_TAG_NAME=[tag]
Here,

createDB: If you are generating the images, set value to false. To create and load the database, set value to true.

loadDB: If you are generating the images, set value to false. To create and load the database, set value to true.

MODE: Set value to vm to generate only Visual Modeler appserver image. Set value to oc to generate only Omni Configurator appserver image. Set value to base to generate only vmoc base image. Set value to all to generate all the 3 images.

generateImage: To generate the image, set value to true. Otherwise, set to false.

generateImageTar: To save the image into .tar file, set value to true.

pushImage: To push the image into registry, set value to true.

imageRegistryURL: Specify the registry URL to push the image.

imageRegistryPassword: Password or API key required to log in to registry.

IMAGE_TAG_NAME: Provide tag name to the image. The generated image is pushed in registry with this tag name.

  • To generate a database-specific image, run the following command:
./executeAll.sh --DBTYPE=[dbtype] --createDB=false --loadDB=false --MODE=all --generateImage=true --generateImageTar=true --pushImage=true --imageRegistryURL=[registry-URL] --imageRegistryPassword=****** --IMAGE_TAG_NAME=[tag]

Here,
DBTYPE: Provide the value Db2 or oracle to generate a Db2 or Oracle image.

  • Run the following command inside the container to create the "projects/matrix" folder under the "/opt/VMSDK" folder, which is used to customize the Visual Modeler image:
./executeAll.sh --DBTYPE=dbtype --createDB=false --loadDB=false --MODE=vm --generateImage=false

Follow the IBM Documentation for copying the customization changes. After the customization changes are copied, run the following command from the "/opt/VMSDK" folder to build the Visual Modeler WAR file with the customized changes and to generate or push the customized images:

./generateImage.sh --DBTYPE="$DBTYPE" --MODE=vm --IMAGE_TAG_NAME="$IMAGE_TAG_NAME" --generateImageTarFlag=true --pushImageFlag=true --imageRegistryURL="$imageRegistryURL" --imageRegistryPassword="$imageRegistryPassword"
*Note: Logs are saved in /opt/VMSDK/rt.log.
  • For more information about customizing IBM Sterling Omni Configurator repository, see IBM Documentation.

Customizing server.xml for Liberty

You can customize the server.xml for Liberty by using the Helm charts.

There are two files, server_vm.xml and server_oc.xml, which are deployed as server.xml to the respective applications.

Warning: Do not change the out of the box settings provided in server.xml as it can impact the application.
Customizing properties in Comergent.xml

Out of the box implementation to populate the Configurator System URL and IBM Configurator UI URL
  1. The Helm chart populates the IBM Configurator System URL and IBM Configurator UI URL depending on the ingress host of the configurator defined in values.yaml under ocappserver.
  2. The logic to populate ingress host of configurator is defined in vm-configmap.yaml.
    Default Value
    configuratorServerURL: {{ printf "https:\\/\\/%s\\/configurator" .Values.ocappserver.ingress.host}}
    configuratorUIURL: {{ printf "https:\\/\\/%s\\/ConfiguratorUI\\/index.html\\#" .Values.ocappserver.ingress.host}}
How a customer can override the default values of the Configurator System URL and IBM Configurator UI URL:
  1. The customer must modify vm-configmap.yaml and can provide values such as: configuratorServerURL: {{ printf "https:\\/\\/[custom-host]\\/configurator" }} configuratorUIURL: {{ printf "https:\\/\\/[custom-host]\\/ConfiguratorUI\\/index.html\\#" }}
Out of Box value of Function Handler and Custom Control Properties
  1. The default value of the Function Handler properties can be found at vmappserver.custom.functionHandler.
    The default value is "WEB-INF\\/properties\\/custom_functionHandlers.properties" ("\\" is required to escape the path).
  2. The default value of the Custom Control properties can be found at vmappserver.custom.functionHandler.
    The default value is "WEB-INF\\/properties\\/custom_controls_v2.properties" ("\\" is required to escape the path).
Note:
The default values can be modified to provide environment-specific values.
When specifying the path to the function handler properties file or the custom control properties file, use "\\".
Deployment of Configurator UI
By default, the Configurator UI is deployed. If you do not want to deploy the Configurator UI, for example in a production environment, set deployConfiguratorUI to false.

Password and User name for Sterling Fulfillment system
The value for the Password field is populated from stringData.appPassword, and the value for the username field is populated from stringData.appUserName, both defined in ifs-secret.yaml.
Ehcache enablement in OC
1. To enable EhCache to work in distributed mode, you need to enable multicast in OCP:
     oc annotate netnamespace [namespace] netnamespace.network.openshift.io/multicast-enabled=true
     Where [namespace] is the project to which you are deploying the OC.
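     For example, assuming the OC is deployed to a hypothetical project named cpq, the command would be:
     oc annotate netnamespace cpq netnamespace.network.openshift.io/multicast-enabled=true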
2. Incorporate EhCache.xml in extensions.jar.
     For more information about customizing EhCache as part of extensions.jar, see IBM Documentation.
3. To enable EhCache logs for OC, update the log4j.properties in the repository with the following content:

log4j.logger.net.sf.ehcache=ALL, file
log4j.logger.net.sf.ehcache.Cache=ALL, file
log4j.logger.net.sf.ehcache.config=ALL, file
log4j.logger.net.sf.ehcache.distribution=ALL, file
log4j.logger.net.sf.ehcache.code=ALL, file
log4j.logger.net.sf.ehcache.event=ALL, file
log4j.logger.net.sf.ehcache.statistics=ALL, file


4. To bring the logging changes into effect, reinstall OC.
5. Once logging is enabled, you can see logs similar to the following. The exact content is environment-specific.
2020-07-17 12:16:44 DEBUG MulticastKeepaliveHeartbeatReceiver - rmiUrls received //10.254.16.241:37825/configPricingCache
2020-07-17 12:16:44 DEBUG RMICacheManagerPeerProvider - Lookup URL //10.254.16.241:37825/configPricingCache
|
|
2020-07-17 12:16:45 DEBUG PayloadUtil - Cache peers for this CacheManager to be advertised: //10.254.20.224:44175/configPricingCache
|
2020-07-17 12:16:45 DEBUG RMIBootstrapCacheLoader - cache peers: [RMICachePeer_Stub[UnicastRef2 [liveRef: [endpoint:[10.254.16.241:44029,net.sf.ehcache.distribution.ConfigurableRMIClientSocketFactory@1d4c0](remote),objID:[-aec6f0c:1735cad4714:-7ffb, 3567411233127048233]]]]]
|
|
2020-07-17 12:18:39 DEBUG MulticastKeepaliveHeartbeatReceiver - rmiUrls received //10.254.16.241:37825/configPricingCache

Note: These logs fill up rapidly. Once you get EhCache working, you can disable the EhCache logging by removing the logger content and reinstalling OC.
Invoking Custom code on IBM Sterling Omni Configurator startup
To invoke custom code on IBM Sterling Omni Configurator server startup, you can use the ocpostappready.sh script that is found in the /charts/ibm-cpq-prod/scripts folder.
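A minimal sketch of what such a script might contain is shown below. The wait loop, port, and log path are illustrative assumptions, not the contents of the shipped script:

#!/bin/bash
# Illustrative post-startup hook (assumed content, not the shipped script).
# Wait until the local Configurator endpoint responds, then run custom logic.
until curl -k -s -o /dev/null https://localhost:9543/configurator; do
  sleep 10   # poll until the OC server is ready; 9543 is an assumed port
done
echo "OC server is up; running custom post-startup steps" >> /tmp/ocpostappready.log
# Place custom logic here, for example cache warm-up calls.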

IBM Field Sales version 10.0.0.16

Introduction

This section describes how to deploy IBM Field Sales v10.0.0.16 on the Red Hat OpenShift platform. This helm chart does not install a database server or a messaging server. Both middleware components need to be set up and configured separately for IBM Field Sales.

Note: This helm chart supports deployment of IBM Field Sales with Db2 database and MQ messaging.

Chart Details

This chart does the following:

  • Create a deployment [release-name]-ibm-cpq-prod-ifsappserver for the IBM Field Sales application server, with 1 replica by default.
  • Create a deployment [release-name]-ibm-cpq-prod-ifshealthmonitor for the IBM Field Sales HealthMonitor, if the health monitor is enabled.
  • Create a deployment [release-name]-ibm-cpq-prod-[server-name] for each of the IBM Field Sales agent or integration servers configured.
  • Create a service [release-name]-ibm-cpq-prod-ifsappserver. This service is used to access the IBM Field Sales application server by using a consistent IP address.
  • Create a job [release-name]-ibm-cpq-prod-ifsdatasetup. It is used for performing the data setup for IBM Field Sales that is required to deploy and run the application. If the data setup is disabled at the time of installation or upgrade, the job is not created.
  • Create a job [release-name]-ibm-cpq-prod-preinstall. It is used to perform pre-installation activities such as generating the ingress TLS secret.
  • Create a ConfigMap [release-name]-ibm-cpq-prod-ifsconfig. It is used to provide the IBM Field Sales and Liberty configuration.
  • Create a ConfigMap [release-name]-ibm-cpq-prod-def-server-xml-conf. It is used to provide the default server.xml for Liberty. If a custom server.xml is used, the ConfigMap is not created.

Note: [release-name] refers to the name of the helm release and [server-name] refers to the agent or integration server name.

Prerequisites for IFS

  1. Kubernetes version >= 1.17.0

  2. Ensure that the Db2 database server is installed and the database is accessible from inside the cluster. For more information, see section Time Zone considerations.

  3. Ensure that the MQ server is installed and accessible from inside the cluster.

  4. Ensure that the docker images for IBM Field Sales are loaded to an appropriate docker registry. The default images for IBM Field Sales can be loaded from IBM Passport Advantage. Alternatively, customized images for IBM Field Sales can also be used.

  5. Ensure that the docker registry is configured in Manage -> Resource Security -> Image Policies, and also ensure that the docker images can be pulled on all the Kubernetes worker nodes.

  6. Create a persistent volume with the access mode ReadWriteMany and a minimum of 10 GB space.

  7. Create a secret with the data source connectivity details as given in the section Creating a secret ifs_secrets.yaml. The name of this secret needs to be supplied as the value of the parameter ifs.appSecret. It is recommended to prefix the release name to the secret name.

  8. When using Podman for image operations such as launching the base container, ensure that the logged-in user has root privileges. Alternatively, you can use sudo.

Creating a secret ifs_secrets.yaml

Create a secret with data source connectivity details as given here. The name of this secret needs to be supplied as the value of parameter ifs.appSecret. It is recommended to prefix the release name to the secret name.

See the following ifs_secrets.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: "[release-name]-ifs-secrets"
type: Opaque
stringData:
  consoleadminpassword: "[console admin password]"
  consolenonadminpassword: "[console non-admin password]"
  dbpassword: "[database password]"
Note: For the IBM Field Sales database connection details, see the ifs section of the values.yaml file, which looks similar to the following:
ifs:
  appSecret: "ifs-secret"
  database:
    serverName: "1.2.3.4"
    port: "[DB PORT]"
    dbname: CPQ
    user: "[DB USER]"
    dbvendor: Db2
    datasourceName: jdbc/OMDS
    systemPool: true
    schema: "[DB SCHEMA]"
Run the following command to create the secret:
oc create -f ifs_secrets.yaml -n [namespace]
Creating a persistent volume
See the following ifs_pv.yaml file to create a persistent volume with the access mode ReadWriteMany and a minimum of 10 GB space.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ifs-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 12Gi
  nfs:
    path: [nfs-shared-path]
    server: [nfs-server]
  persistentVolumeReclaimPolicy: Retain
Run the following command to create the persistent volume:
oc create -f ifs_pv.yaml -n [namespace]
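If you create the persistent volume claim manually, a minimal sketch of a matching claim is shown below; ifs_pvc.yaml is a hypothetical file name, and the claim name must match the value of ifs.persistence.claims.name, which defaults to nfs-cpq-ifs-claim in the configuration table:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-cpq-ifs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

oc create -f ifs_pvc.yaml -n [namespace]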
If you are deploying IBM Field Sales on a namespace other than default, create Role Based Access Control (RBAC), if it is not already created, with the cluster admin role.
The following is an example of the RBAC for the default service account on the target namespace [namespace]:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ifs-role-[namespace]
  namespace: [namespace]
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list", "create", "delete", "patch", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ifs-rolebinding-[namespace]
  namespace: [namespace]
subjects:
- kind: ServiceAccount
  name: default
  namespace: [namespace]
roleRef:
  kind: Role
  name: ifs-role-[namespace]
  apiGroup: rbac.authorization.k8s.io
Read the instructions provided in the Configuring Agent or Integration Servers section before configuring an agent or integration server in the chart.

PodSecurityPolicy Requirements

This chart requires a PodSecurityPolicy to be bound to the target namespace before installation. Choose either a predefined PodSecurityPolicy or have your cluster administrator create a custom PodSecurityPolicy for you:

  • ICPv3.1 - Predefined PodSecurityPolicy name: default
  • ICPv3.1.1 - Predefined PodSecurityPolicy name: ibm-anyuid-psp
  • Custom PodSecurityPolicy definition:
Definition of an ifs_psp.yaml file:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    kubernetes.io/description: "This policy allows pods to run with any UID and GID, but preventing access to the host."
  name: ibm-ifs-anyuid-psp
spec:
  allowPrivilegeEscalation: true
  fsGroup:
    rule: RunAsAny
  requiredDropCapabilities:
  - MKNOD
  allowedCapabilities:
  - SETPCAP
  - AUDIT_WRITE
  - CHOWN
  - NET_RAW
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - SETUID
  - SETGID
  - NET_BIND_SERVICE
  - SYS_CHROOT
  - SETFCAP
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
  forbiddenSysctls:
  - '*'
Run the following command to create the PodSecurityPolicy from the ifs_psp.yaml file:
oc create -f ifs_psp.yaml
Custom ClusterRole and RoleBinding definitions
To create a custom ClusterRole and RoleBinding, create a file ifs_psp_role_and_binding.yaml with the following definition:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
  name: ibm-ifs-anyuid-clusterrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - ibm-ifs-anyuid-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ibm-ifs-anyuid-clusterrole-rolebinding
  namespace: [namespace]
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ibm-ifs-anyuid-clusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:[namespace]

Replace [namespace] in the definition with the namespace of the target environment. Run the following command to create the resources:

oc create -f ifs_psp_role_and_binding.yaml

Red Hat OpenShift SecurityContextConstraints Requirements

This chart requires a SecurityContextConstraints to be bound to the target namespace before installation.

The predefined SecurityContextConstraints name ibm-anyuid-scc is verified for this chart. If your target namespace is bound to this SecurityContextConstraints resource, you can proceed to install the chart.

Alternatively, a custom SecurityContextConstraints can be created by using the following definition:

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
  name: ibm-ifs-scc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities: []
allowedFlexVolumes: []
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
  ranges:
  - max: 65535
    min: 1
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsNonRoot
seccompProfiles:
- docker/default
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: MustRunAs
  ranges:
  - max: 65535
    min: 1
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
priority: 0

Save the definition to a file ibm-ifs-scc.yaml and run the following command to create the SecurityContextConstraints:

oc create -f ibm-ifs-scc.yaml

Time Zone considerations

To deploy IBM Field Sales, the time zone of the database, application servers, and agents must be the same. Additionally, this time zone must be compatible with the locale code specified in IBM Field Sales. By default, the containers are deployed in UTC and the locale code is set as en_US_UTC; therefore, ensure that the database is also deployed in UTC.

Configuration

Installation on a new database

When installing the chart on a new database that does not have IBM Field Sales tables and factory data, complete the following steps:

  • Ensure that the ifsdatasetup.loadFactoryData parameter is set to install and the ifsdatasetup.mode parameter is set to create. This creates the required database tables and factory data in the database before installing the chart.
  • Ensure that you do not specify any agents or integration servers with the parameter ifsagentserver.servers.name. When installing against a fresh database, no agents or integration servers are configured yet, so they must not be configured in the chart. Once the application server is deployed, you can configure the agents or integration servers. For more information about agent deployment and integration servers, see section Configuring agent or integration servers.

Installation against a pre-loaded database

When installing the chart against a database that has the IBM Field Sales tables and factory data, ensure that the ifsdatasetup.loadFactoryData parameter is set to donotinstall or blank. This step avoids re-creating tables and overwriting factory data.
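As an illustration, the corresponding values.yaml fragments for the two scenarios might look like the following sketch; the key names are taken from the configuration table below:

# New database: create tables and load factory data
ifsdatasetup:
  loadFactoryData: install
  mode: create

# Pre-loaded database: skip table creation and factory data load
ifsdatasetup:
  loadFactoryData: donotinstall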

The following table lists the configurable parameters for the chart:

Parameter Description Default
ifsappserver.replicaCount Number of appserver instances 1
ifsappserver.image Docker image details of appserver cpq-ifs-app
ifsappserver.runAsUser Needed for non OCP cluster 1001
ifsappserver.config.vendor OMS Vendor websphere
ifsappserver.config.vendorFile OMS Vendor file servers.properties
ifsappserver.config.serverName App server name DefaultAppServer
ifsappserver.config.jvm Server min/max heap size and jvm parameters 1024m min, 2048m max, no parameters
ifsappserver.config.database.maxPoolSize DB max pool size 50
ifsappserver.config.database.minPoolSize DB min pool size 10
ifsappserver.config.corethreads Core threads for Liberty 20
ifsappserver.config.maxthreads Maximum threads for Liberty 100
ifsappserver.config.libertyServerXml Custom server.xml for Liberty. For more information, see section Customizing server.xml for Liberty.
ifsappserver.livenessCheckBeginAfterSeconds Approximate wait time (secs) to begin the liveness check 900
ifsappserver.livenessFailRestartAfterMinutes If liveness check keeps failing for the specified period, this value is the approximate time period (minutes) after which server is restarted. 10
ifsappserver.service.type Service type NodePort
ifsappserver.service.http.port HTTP container port 9082
ifsappserver.service.http.nodePort HTTP external port 30086
ifsappserver.service.https.port HTTPS container port 9445
ifsappserver.service.https.nodePort HTTPS external port 30449
ifsappserver.resources CPU/Memory resource requests/limits Memory: 2560Mi, CPU: 1
ifsappserver.ingress.enabled Whether Ingress settings enabled true
ifsappserver.ingress.host Ingress host
ifsappserver.ingress.controller Controller class for ingress controller nginx
ifsappserver.ingress.contextRoots Context roots that can be accessed through ingress ["smcfs", "sbc", "sma", "isccs", "wsc", "adminCenter"]
ifsappserver.ingress.annotations Annotations for the ingress resource
ifsappserver.ingress.ssl.enabled Whether SSL enabled for ingress true
ifsappserver.podLabels Custom labels for the appserver pod
ifsappserver.tolerations Toleration for appserver pod. Specify in accordance with k8s PodSpec.tolerations. For more information, see section Affinity and Toleration.
importcert.secretname Secret name consisting of certificate to be imported into IFS.
ifsappserver.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsappserver.podAntiAffinity.replicaNotOnSameNode Directive to prevent scheduling of replica pod on the same node. Valid values: prefer, require, blank. For more information, see section Affinity and Toleration. prefer
ifsappserver.podAntiAffinity.weightForPreference Preference weighting 1-100. It is used when value 'prefer' is specified for parameter ifsappserver.podAntiAffinity.replicaNotOnSameNode. For more information, see section Affinity and Toleration. 100
ifsagentserver.image Docker image details of agent server cpq-ifs-agent
ifsagentserver.runAsUser Needed for non OCP cluster 1000
ifsagentserver.deployHealthMonitor Deploy health monitor agent true
ifsagentserver.common.jvmArgs Default JVM args that are passed to the list of agent servers "-Xms512m\ -Xmx1024m"
ifsagentserver.common.replicaCount Default number of instances of agent servers that are deployed
ifsagentserver.common.resources Default CPU/Memory resource requests/limits Memory: 1024Mi, CPU: 0.5
ifsagentserver.common.readinessFailRestartAfterMinutes If the readiness check keeps failing for the specified period, this value is the approximate time period (minutes) after which the server is restarted. 10
ifsagentserver.common.podLabels Custom labels for the agent pod
ifsagentserver.common.tolerations Toleration for agent pod. Specify in accordance with k8s PodSpec.tolerations. For more information, see section Affinity and Toleration.
ifsagentserver.common.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. For more information, see section Affinity and Toleration.
ifsagentserver.common.podAntiAffinity.replicaNotOnSameNode Directive to prevent scheduling of replica pod on the same node. Valid values: prefer, require, blank. For more information, see section Affinity and Toleration. prefer
ifsagentserver.common.podAntiAffinity.weightForPreference Preference weighting 1-100. It is used when value 'prefer' is specified for parameter ifsagentserver.common.podAntiAffinity.replicaNotOnSameNode. For more information, see section Affinity and Toleration. 100
ifsagentserver.servers.group Agent server group name Default Servers
ifsagentserver.servers.name List of agent server names
ifsagentserver.servers.jvmArgs JVM args that are passed to the list of agent servers
ifsagentserver.servers.replicaCount Number of instances of agent servers that are deployed
ifsagentserver.servers.resources CPU/Memory resource requests/limits Memory: 1024Mi, CPU: 0.5
ifsdatasetup.loadFactoryData Load factory data
ifsdatasetup.mode Mode in which the factory data load is run create
ifs.mq.bindingConfigName Name of the mq binding file config map
ifs.mq.bindingMountPath Path where the binding file is mounted /opt/ssfs/.bindings
ifs.persistence.claims.name Persistent volume name nfs-cpq-ifs-claim
ifs.persistence.claims.accessMode Access Mode ReadWriteMany
ifs.persistence.claims.capacity Capacity 10
ifs.persistence.claims.capacityUnit CapacityUnit Gi
ifs.persistence.securityContext.fsGroup File system group ID to access the persistent volume 0
ifs.persistence.securityContext.supplementalGroup Supplemental group ID to access the persistent volume 0
ifs.image.repository Repository for Order Management images
ifs.appSecret Order Management secret name ifs-secret
ifs.database.dbvendor DB Vendor Db2/Oracle Db2
ifs.database.serverName DB server IP/host
ifs.database.port DB server port
ifs.database.dbname DB name or catalog name
ifs.database.user DB user
ifs.database.datasourceName External datasource name jdbc/OMDS
ifs.database.systemPool Is DB system pool? true
ifs.database.schema Database schema name. The default schema name for Db2 is "ifs.database.dbname". The default schema name for Oracle is "ifs.database.user".
ifs.serviceAccountName Service account name
ifs.customerOverrides Array of customer overrides properties as key=value
ifs.envs Environment variables as array of Kubernetes EnvVars objects
ifs.arch Architecture affinity while scheduling pods amd64: 2 - No preference, ppc64le: 2 - No preference

Ingress configuration

Ingress can be enabled by setting the parameter ifsappserver.ingress.enabled to true. If ingress is enabled, the application is exposed as a ClusterIP service; otherwise, the application is exposed as a NodePort service. It is recommended to enable and use ingress for accessing the application from outside the cluster. For production workloads, the only recommended approach is ingress with ClusterIP. Do not use NodePort.

  • ifsappserver.ingress.host: It is the fully qualified domain name that resolves to the IP address of your cluster’s proxy node. Based on your network settings it is possible that multiple virtual domain names resolve to the same IP address of the proxy node. Any of those domain names can be used. For example, "example.com" or "test.example.com".

  • ifsappserver.ingress.ssl.enabled: It is recommended to enable SSL. If SSL is enabled by setting this parameter to true, a secret is needed to hold the TLS certificate. If the optional parameter ifsappserver.ingress.ssl.secretname is left blank, a secret containing a self-signed certificate is automatically generated.

    However, for production environments it is recommended to obtain a CA-certified TLS certificate and create a secret manually.

  • Obtain a CA certified TLS certificate for the specified ifsappserver.ingress.host in the form of key and certificate files.
  • Create a secret from the key and certificate files by running the following command:
oc create secret tls [release-name]-ingress-secret --key [key-file] --cert [certificate-file] -n [namespace]
  • Use the created secret as the value of the parameter ifsappserver.ingress.ssl.secretname.
  • ifsappserver.ingress.contextRoots: The context roots that can be accessed through ingress. By default, the following context roots are allowed: SMCFS, SBC, IFS, WSC, adminCenter. If any more context roots are to be allowed through ingress, then they must be added to the list.
  • Set the following variables in the values.yaml file:
    1. Set the Registry to pull the images from. For example, global.image.repository: "cp.icr.io/ibm-cpq".
    2. Set the image names. For example, ifsappserver.image.name: cpq-ifs-app, ifsappserver.image.tag: 10.0.0.16-amd64, ifsagentserver.image.name: cpq-ifs-agent, ifsagentserver.image.tag: 10.0.0.16-amd64
    3. Set the ingress host.
    4. Check that the ifs.persistence.claims.name value (for example, “ifs-common”) matches the name given in pvc.yaml.
    5. Check that the ingress TLS secret name, set in ifsappserver.ingress.ssl.secretname, is correct according to the certificate created.
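Putting these settings together, a hedged values.yaml fragment might look like the following; the host and secret name are illustrative assumptions:

global:
  image:
    repository: "cp.icr.io/ibm-cpq"
ifsappserver:
  image:
    name: cpq-ifs-app
    tag: 10.0.0.16-amd64
  ingress:
    enabled: true
    host: "ifs.example.com"
    ssl:
      enabled: true
      secretname: "my-release-ingress-secret"
ifsagentserver:
  image:
    name: cpq-ifs-agent
    tag: 10.0.0.16-amd64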

Installing the Chart

Prepare a custom values.yaml file based on the configuration section. Ensure that application license is accepted by setting the value of ifs.license to accept.

To install the chart with the release name my-release, do the following:

  1. Ensure that the chart is downloaded locally by following the instructions given here.

  2. In order to set up the IBM Field Sales application, set the parameter global.install.ifs.enabled to true in the values.yaml file.

  3. Run the following command to check that the settings in the values.yaml file are correct by simulating the chart installation.

helm template my-release stable/ibm-cpq-prod

This command displays all the Kubernetes objects that would be deployed on Red Hat OpenShift. It does not install anything.

To install the application in Red Hat OpenShift, run the following command:

helm install my-release [chartpath] --timeout 3600 --tls --namespace [namespace]

To test the installation, go to the URL:

https://[hostname]/ifs/ifs/login.do?

Depending on the capacity of the Kubernetes worker node and database connectivity, the deployment process can take some time:

  • 2-3 minutes for installation against a pre-loaded database.
  • 20-30 minutes for installation against a fresh new database.
When you check the deployment status, the following values can be seen in the Status column:
  • Running: The deployment of the container has started.
  • Init: 0/1: The deployment of the container is pending on another container to start.

You can see the following values in the Ready column:

  • 0/1: The container deployment has started but the application is not yet ready.
  • 1/1: The application is ready to use.

Run the following command to make sure there are no errors in the log file:

oc logs [podname] -n [namespace] -f


Affinity and Toleration

The chart provides various ways, in the form of node affinity, pod affinity, pod anti-affinity, and tolerations, to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details on the usage and specifications of these features.

  • Toleration: This can be configured using the parameter ifsappserver.tolerations for the appserver, and the parameter ifsagentserver.common.tolerations for the agent servers.

  • Node affinity: This can be configured using the parameters ifsappserver.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsappserver.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution for the appserver, and the parameters ifsagentserver.common.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsagentserver.common.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution for the agent servers. Depending on the architecture preference selected for the parameter global.arch, a suitable value for node affinity is automatically appended in addition to the user provided values.

  • Pod affinity: This can be configured using the parameters ifsappserver.podAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsappserver.podAffinity.preferredDuringSchedulingIgnoredDuringExecution for the appserver, and the parameters ifsagentserver.common.podAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsagentserver.common.podAffinity.preferredDuringSchedulingIgnoredDuringExecution for the agent servers.

  • Pod anti-affinity: This can be configured using the parameters ifsappserver.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsappserver.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution for the appserver, and the parameters ifsagentserver.common.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution, ifsagentserver.common.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution for the agent servers. Depending on the value of the parameter podAntiAffinity.replicaNotOnSameNode, a suitable value for pod anti-affinity is automatically appended in addition to the user provided values. This is to configure whether replicas of a pod should be scheduled on the same node. If the value is prefer then podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution is automatically appended whereas if the value is require then podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution is appended. If the value is blank, then no pod anti-affinity value is automatically appended. If the value is prefer, the weighting for the preference is set by using the parameter podAntiAffinity.weightForPreference which should be specified in the range of 1-100.
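As an illustration, a hedged values.yaml fragment that prefers spreading appserver replicas across nodes and pins agent servers to amd64 nodes might look like the following; the label key and values are assumptions based on standard Kubernetes node labels:

ifsappserver:
  podAntiAffinity:
    replicaNotOnSameNode: prefer
    weightForPreference: 100
ifsagentserver:
  common:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64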

Configuring agent or integration servers

After the deployment is ready and the application server is running, you can configure the agents and integration servers by logging into the IBM Order Management Application Manager. After completing the changes as described here, the release needs to be upgraded. Refer here for more details.

IBM Field Sales related configuration

You must define the agent and integration servers in the Application Manager. After the agent or integration servers are defined, you can deploy them by providing their names as a list to the ifsagentserver.servers.name parameter in the chart values.yaml. For example:
servers:
  - group: "Logical Group 1"
    name:
      - scheduleOrder
      - releaseOrder
    jvmArgs: "-Xms512m\ -Xmx1024m"
    replicaCount: 1
    resources:
      requests:
        memory: 1024Mi
        cpu: 0.5
  - group: "Logical Group 2"
    name:
      - integrationServer1
      - orderPurge
    jvmArgs: "-Xms512m\ -Xmx1024m"
    replicaCount: 2
    resources:
      requests:
        memory: 1024Mi
        cpu: 0.5
...

Note: The underscore (_) character cannot be used in the agent or integration server name.

The parameters directly inside ifsagentserver.common, such as jvmArgs, resources, and tolerations, are applied to each of the ifsagentserver.servers. These parameters can also be overridden in each of the ifsagentserver.servers. All the agent servers defined under the same group share the same set of parameters, such as resources. You can define multiple groups in ifsagentserver.servers[] if there is a requirement for a different set of parameters. For example, if you have a requirement to run certain agents with higher CPU and memory requests or a higher replica count, you can define a new group and update its resources object accordingly, as shown in the sketch below.
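A hedged illustration of such an override follows; the group, agent name, and sizes are hypothetical:

ifsagentserver:
  common:
    jvmArgs: "-Xms512m\ -Xmx1024m"
    replicaCount: 1
  servers:
    - group: "Heavy Agents"
      name:
        - paymentCollection
      jvmArgs: "-Xms1024m\ -Xmx2048m"   # overrides the common jvmArgs
      replicaCount: 3                    # overrides the common replicaCount
      resources:
        requests:
          memory: 2048Mi
          cpu: 1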

MQ related configuration

  • Ensure that all the JMS resources configured in IBM Field Sales agents and integration servers are configured in MQ and the corresponding .bindings file is generated.
  • Create a ConfigMap for storing the MQ bindings. For example, you can use the following command to create the ConfigMap from a given .bindings file:
oc create configmap [configmap-name] --from-file=[.bindings file] -n [namespace]
Ensure that the ConfigMap name is specified in the parameter ifs.mq.bindingConfigName.
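For reference, a hedged values.yaml fragment that wires up the bindings might look like the following; the ConfigMap name is illustrative:

ifs:
  mq:
    bindingConfigName: "my-mq-bindings"     # ConfigMap created from the .bindings file
    bindingMountPath: /opt/ssfs/.bindings   # default mount path from the configuration table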

After the changes are made in the values.yaml file, you need to run the helm upgrade command. Refer to the section Upgrading the Chart for details.

Readiness and liveness

Readiness and liveness checks are provided for the agents and application server pods as applicable.

Application Server pod: The following parameters can be used to tune the readiness and liveness checks for application server pods.
  • ifsappserver.livenessCheckBeginAfterSeconds - This can be used to specify the delay in starting the liveness check for the application server. The default value is 900 seconds (15 minutes).
  • ifsappserver.livenessFailRestartAfterMinutes - This can be used to specify the approximate time period, after which the pod will get restarted if the liveness check keeps on failing continuously for this period of time. The default value is 10 minutes.

For example, if the values for ifsappserver.livenessCheckBeginAfterSeconds and ifsappserver.livenessFailRestartAfterMinutes are 900 seconds and 10 minutes respectively, and the application server pod is not able to start successfully after 25 minutes, then it is restarted. After the application server has started successfully, if the liveness check fails continuously for a period of 10 minutes, then it is restarted.

Agent server pod: The following parameter can be used to tune the readiness check for agent server pods.
  • ifsagentserver.common.readinessFailRestartAfterMinutes: This can be used to specify the approximate time period after which the pod will get restarted if the readiness check fails continuously for this period of time. The default value is 10 minutes. For example, if the value for ifsagentserver.common.readinessFailRestartAfterMinutes is 10 minutes and the agent server pod is not able to start up successfully after 10 minutes, then it will be restarted.
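As an illustration, a values.yaml fragment that states these defaults explicitly might look like the following sketch:

ifsappserver:
  livenessCheckBeginAfterSeconds: 900     # wait ~15 minutes before the first liveness check
  livenessFailRestartAfterMinutes: 10     # restart after ~10 minutes of continuous failures
ifsagentserver:
  common:
    readinessFailRestartAfterMinutes: 10  # restart agents after ~10 minutes of failed readiness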

Customizing server.xml for Liberty

A custom server.xml for the Liberty application server can be configured as described here. If a custom server.xml is not specified, a default server.xml is auto-generated.

  • Create the custom server.xml file with the name server.xml.

  • Create a ConfigMap containing the custom server.xml by running the following command:

oc create configmap [configmap-name] --from-file=[path to server.xml] -n [namespace]
Specify the created ConfigMap in the chart parameter ifsappserver.config.libertyServerXml.

Important Notes:

  1. Ensure that the database information specified in the datasource section of server.xml is the same as what is specified in the chart through the object ifs.database.
  2. Ensure that the HTTP and HTTPS ports in server.xml are the same as specified in the chart through the parameters ifsappserver.service.http.port and ifsappserver.service.https.port.

Upgrading the Chart

To upgrade your deployment when you have a new docker image for application or agent server or a change in configuration, do the following:

  1. Ensure that the chart is downloaded locally by following the instructions given here.

  2. Ensure that the ifsdatasetup.loadFactoryData parameter is set to donotinstall or blank. Run the following command to upgrade your deployment:

helm upgrade my-release -f values.yaml [chartpath] --timeout 3600 --tls

Uninstalling the Chart

To uninstall or delete the my-release deployment, run the following command:

helm delete my-release --tls

Since certain Kubernetes resources are created by using the pre-install hook, the helm delete command does not delete them. You need to manually delete the following resources created by the chart:

  • [release-name]-ibm-cpq-prod-config
  • [release-name]-ibm-cpq-prod-def-server-xml-conf
  • [release-name]-ibm-cpq-prod-datasetup
  • [release-name]-ibm-cpq-prod-auto-ingress-secret

Note: You may also consider deleting the secrets and persistent volume created as part of prerequisites.

Fixpack Installation for IFS
  1. Create a shared directory on the host file system. This directory is mounted into the base container in step 3.
    mkdir -p /opt/ssfs/shared
  2. Download the OMS fix pack jar (if required) and the IFS fix pack jar from Fix Central.
    Copy the OMS fix pack to the "/opt/ssfs/shared/OMS_FP" directory and the IFS fix pack to the "/opt/ssfs/shared/IFS_FP" directory.
    Make sure there is only one file each in OMS_FP and IFS_FP.
    Verify the jar file name patterns. The file name pattern for OMS is smcfs_10_FP*.jar and for IFS is sfs_10_FP*.jar.
  3. Create an IFS base container from the IFS base image by running the following Podman command:
    podman run -e LICENSE=accept --privileged -v /opt/ssfs/shared:/opt/ssfs/shared -it --name [container-name] cpq-ifs-base:[tag]
    This takes you inside the base container, in the /opt/ssfs/runtime folder.
    If the IFS base container already exists, use the following command to get into it:
    podman exec -it [container-name] /bin/bash
  4. Create the IFS fix pack images by running the following script:
    ./installFixpack.sh | tee ifs_fixPack.log
    This script takes about an hour to complete.
  5. Once the script completes, the IFS image tar files can be located in the /opt/ssfs/shared folder:
    cpq-ifs-app_[tag].tar, cpq-ifs-agent_[tag].tar, cpq-ifs-base_[tag].tar
  6. Exit from the IFS Base container.
  7. Load the app image by using the following command from the /opt/ssfs/shared folder:
    podman load -i cpq-ifs-app_[tag].tar
    A similar command can be used to load the agent image.
  8. Tag and push the image to the image registry (for example, the Red Hat OpenShift image registry) by using the following commands (see the example after these steps):
    podman tag [image-name]:[tag] [registry]/[namespace]/[image-name]:[tag]
    podman push [registry]/[namespace]/[image-name]:[tag]
  9. Update values.yaml with the required details.
  10. Install the IFS application:
    helm install my-release [chartpath] --timeout 3600 --set global.license=true,global.install.visualmodeler.enabled=false,global.install.configurator.enabled=false,global.install.ifs.enabled=true,global.install.runtime.enabled=false
  11. Log in to the IFS application and verify the IFS fix pack version from the About menu.
  12. Log in to the SBC application and verify the OMS fix pack version from the About menu.
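For example, assuming a hypothetical internal registry route default-route-openshift-image-registry.apps.example.com and project cpq, step 8 might look like:

podman tag cpq-ifs-app:10.0.0.16-amd64 default-route-openshift-image-registry.apps.example.com/cpq/cpq-ifs-app:10.0.0.16-amd64
podman push default-route-openshift-image-registry.apps.example.com/cpq/cpq-ifs-app:10.0.0.16-amd64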
Fixpack Installation for VM and OC
You can install fix packs for IBM Sterling Configure Price and Quote Software containers.
Procedure:
  1. Download the fix pack from IBM Fix Central and save the file to a temporary location on an installation node.
  2. Locate the fix pack compressed file.
    For example, the compressed file name for fix pack 9 is 10.0.0.0-Sterling-VM-All-fp0009.zip.
  3. Extract the contents of the fix pack compressed file to the temporary location on the installation node.
  4. Create the base container by using the base image cpq-vmoc-base.
  5. If the base container already exists, go to the base container shell.
    For more information, see https://www.ibm.com/support/knowledgecenter/SS4QMC_10.0.0/installation/c_cpqRHOC_customizing_runtime.html
  6. Copy the extracted content from the temporary location in step 3 (VM10-FP9.jar, ConfiguratorUI.zip, and configurator.war) to the fix pack location inside the base container.
    The corresponding container path is /opt/VMSDK/fixpack. Use podman cp to copy the files to the base container.
  7. In the container shell, go to the /opt/VMSDK directory.
  8. Generate the images by running the executeAll.sh script, which installs VM10-FP9.jar, ConfiguratorUI.zip, and configurator.war:
    ./executeAll.sh --DBTYPE="$DBTYPE" --createDB=false --loadDB=false --loadMatrixDB=false --MODE=vm --generateImage=true --pushImage=false --generateImageTar=true --IMAGE_TAG_NAME="$IMAGE_TAG_NAME"
    ./executeAll.sh --DBTYPE="$DBTYPE" --createDB=false --loadDB=false --loadMatrixDB=false --MODE=oc --generateImage=true --pushImage=false --generateImageTar=true --IMAGE_TAG_NAME="$IMAGE_TAG_NAME"
  9. The commands create two image files: cpq-oc-app_[tag].tar and cpq-vm-app_[tag].tar.
  10. Copy the tar files outside the container and create images from them by using the podman load command.
Note: These steps use fix pack 9 as an example for the installation.
The following content applies to CPQ irrespective of the application.

Reinstall the Chart

To re-install the chart, you need to first delete the deployment.

helm delete my-release

Make sure that all the objects related to your release are deleted.

You can check by running the command:

oc get all -l release=my-release

This command gives you a list of all the objects that are related to your release. If you see any objects related to your release remaining, delete them by using the command:

oc delete [object-type] [object-name]

After helm delete, it is mandatory to delete the persistent volume and re-create it. Run the following commands to do so:

oc delete pv ifs-pv
oc create -f ifs_pv.yaml -n [namespace]

To re-install, run the command:

helm install my-release [chartpath]

Import Certificates

If there is a requirement to import server certificates into IBM Sterling Configure Price Quote software, you can do that by using the import certificate feature of the chart. For example, integration with Salesforce (SFDC) requires importing certificates to Configure Price Quote.

To do that, follow these steps:
  1. Get the certificate that you need to add to the application. One way of getting a certificate is by exporting it through the browser (lock icon in the location bar). The browser shows the certificate, which you can save as either a .crt file or as a .pem file if you need the chain of certificates.
  2. Save this file into the node where you have installed the helm charts.
  3. You need to create a Red Hat OpenShift secret object by using this certificate file. To do that, run one of the following commands:
oc create secret generic vm-cert-secret --from-file=cert=vm.crt
oc create secret generic vm-cert-secret --from-file=cert=vm.pem
Here, vm.crt or vm.pem is the file that contains the certificate and vm-cert-secret is the name of the secret you need to give.
Note: To import multiple certificates you need to create a chain of certificates in a .pem file in the format as shown:
-----BEGIN CERTIFICATE-----
XXX
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXX
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
XXX
-----END CERTIFICATE-----
Note: The .pem file must end with an empty line; this is mandatory for the import to work correctly.
For example, if you need to import this certificate into Visual Modeler, populate Values.vmappserver.importcert.secretname with the secret name that you created in the previous step.
This certificate is imported into the truststore of the Visual Modeler application server after you install the Visual Modeler application. Similarly, you can import certificates into Omni Configurator and IBM Field Sales.
To confirm the import of certificate is successful you can run the following command:
oc exec [podname] -- keytool -v -list -keystore /home/default/trustStore.jks
Here, the password of the trustStore is password and podname is the name of the pod that is obtained by running the command:
oc get pods

Dashboard Experience

Red Hat OpenShift comes with an out-of-the-box Dashboard that displays the deployed objects.
Check the Red Hat OpenShift documentation for the usage of the Dashboard:
https://docs.openshift.com/container-platform/4.6/welcome/index.html
(Confirm the version once you visit the page.)

Troubleshooting

The health of the applications can be checked by running the command:

oc get pods

See the example for a pod named mypod:

NAME READY STATUS

mypod 1/1 Running

Here, NAME is the name of the running pod, and the second and third columns show the status of the pod. If the pod does not reach the READY state, you need to collect logs to identify the issue.

Note: The Visual Modeler application takes up to 5 minutes to be up and running.

To get information about all pods running along with what node they are running on, run the command:

oc get pods -o wide

To get the information about events fired in the namespace, run the command:

oc get events

To describe the pod in a human readable format, run the following command:

oc describe pod mypod

To get information about the pod in YAML format, run the following command:

oc get pod mypod -o yaml

To get the console logs of the pod, run the following command:

oc logs mypod

Run these commands to collect information about the pod that you are interested in. When you describe a pod, check the Events section to learn about any events triggered on the pod. For more information about the messages or errors found by running these commands, visit https://docs.openshift.com.

To know more about the application logs, you can open a shell to the pod by running the command:

oc rsh mypod

The command allows you to run Linux commands inside the pod. To locate logs, use the following command inside the pod:

cd /logs

More debug logs can be found in the /output directory inside the pod. For an easier view of the log files, you can copy logs out of the pod to your local system as shown:

oc cp mypod:/logs/debs.log ./debs.log

To check whether the NFS is mounted in pod you can run the command:

oc rsh mypod

Then run the following command inside the pod:

df -h

To restart a pod, delete it; the deployment automatically re-creates it:

oc delete pod mypod

The logs for Visual Modeler, Omni Configurator, and FieldSales are available on the shared NFS server where you store the repository. You can mount the repository on a system and check the logs in the path /omscommonfile/configrepo in the repository. See the Install repository section for repository details.

To fine-tune the trace level, you can customize the server.xml of the respective application in the 'logging' tag. For details, see the topic 'Customizing server.xml for Liberty' and the Liberty IBM Documentation online. The console logs appear in the messages log file.

Warning: Make sure your NFS storage has enough space for the logs. To use the space adequately, you can clean up the older logs.

Check the yfs_application_menu table to make sure the IBM Field Sales application is installed properly. This table should contain an Application_Menu_Key for Field Sales (Field_Sales_Menu).

Check the yfs_heartbeat table and make sure the HealthMonitor Service entry is present.

IBM Sterling Configurator has a feature for testing its REST API through Swagger.
Swagger URL: https://[oc-ingress-host]/configurator/swagger/index.html
Swagger UI and API docs can be turned on or off by using the flag ocappserver.swagger.enabled. The default value is true.

Some common errors

Some common errors that you might face while deploying the application are listed below:
  • ImagePullBackOff: This error can mean multiple things. To get the exact error you will need to describe the pod and look into the Events section.

  • CrashLoopBackOff: A CrashLoopBackOff means that you have a pod starting and crashing repeatedly. You need to describe the pod and look into the Events section. Also, you can look into the application logs by opening a session in the pod.

  • LivenessProbeFailure: If the liveness probe fails, Red Hat OpenShift kills the container and the container is subjected to its restart policy. You can describe the pod and also look into the application logs to identify any exception or error.

  • ReadinessProbeFailure: Indicates whether the container is ready to service requests. If the readiness probe fails, the endpoints controller removes the pod’s IP address from the endpoints of all services that match the pod. You can describe the pod and also look into application logs to identify any exception or error.

  • If podman push or pull causes errors such as "Error: Error copying image to the remote destination: Error trying to reuse blob" or "Error: error pulling image", make sure that you log in to the registry where you want to push the image. First log in to the cluster, and then log in to the registry, by running the commands:

oc login
podman login -u [username] -p $(oc whoami -t) [image-registry]

Resources required

Red Hat OpenShift cluster 4.6:
  • Minimum - 3 master and 3 worker nodes
  • Minimum - Each node should have 8 CPUs, 16 GB RAM, 250 GB disk
Visual Modeler and Omni Configurator:
  • 2560Mi memory (request) and 3840Mi memory (limit) for application servers
  • 1 CPU core (request) and 2 CPU cores (limit) for application servers
IBM Field Sales:
The chart uses the following resources by default:
  • 2560Mi memory for application server
  • 1024Mi memory for each agent/integration server and health monitor
  • 1 CPU core for application server
  • 0.5 CPU core for each agent/integration server and health monitor

Upgrade Path

To get to the v10 container version:

  • If you are on 9.x, upgrade to the v10 non-containerized version first.
  • Once you are on v10, upgrade to the v10 containerized version.
  • You need to download the v10 images, apply any customization by using the base image, and use the same database from the version that you are upgrading from. More documentation can be found in the IBM Documentation.

Limitations

The database must be installed in the UTC time zone.

Backup or recovery process

Backups of persistent data for IBM Field Sales, such as database backups, need to be taken on a regular basis. Since the application pods are stateless, no backup or recovery process is required for the pods.

You can delete the deployed application by running the command:

helm delete --purge [release-name]

The application can be rolled back using the command:

helm rollback [release-name] 0

To see the revision numbers and to roll back to a specific revision, run the commands:

  • helm history my-release
  • helm rollback my-release [revision-number]
docker.io
Due to the docker pull rate limits imposed by docker.io, you might face an error like the following during generateImage.sh:
"You have reached your pull rate limit.."
To overcome this, you need to use an authenticated user to log in to docker.io.
For this purpose, provide the environment variables DOCKER_HUB_USER and DOCKER_HUB_PASSWD with the credentials for docker.io. You need to do this when you execute generateImage.sh in the base container of VM and OC.
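For example, a hedged way of setting these variables inside the base container before running the script; the credentials are placeholders:

export DOCKER_HUB_USER=[docker-hub-username]
export DOCKER_HUB_PASSWD=[docker-hub-password]
./generateImage.sh --DBTYPE="$DBTYPE" --MODE=vm --IMAGE_TAG_NAME="$IMAGE_TAG_NAME"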

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SS4QMC","label":"Sterling Configure, Price, Quote"},"ARM Category":[{"code":"a8m50000000Cbu9AAC","label":"Sterling Configure Price Quote"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"10.0","Edition":"","Line of Business":{"code":"LOB59","label":"Sustainability Software"}}]

Document Information

Modified date:
28 February 2022

UID

ibm16187707