Read this information to install and configure the following components:
Two separate packages are provided:
For more information, also see Deploying with containers.
Ensure your environment meets the following prerequisites:
To perform a fresh install at the latest product version, download the installation images from IBM Fix Central.
Each of the following packages provides the related Operator to install the specified content:
wa-dyn-agent
workload-automation
Each operator package contains the following structure:
build
deploy
helm-charts
watches.yaml
To generate and publish the images using the Docker command line, run the following commands:
docker build -t <repository_url>/<wa-package>-operator:9.5.0.02 -f build/Dockerfile .
docker push <repository_url>/<wa-package>-operator:9.5.0.02
where <wa-package> is one of the following values: wa-dyn-agent, workload-automation.
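For example, to build and publish the workload-automation operator image (registry.example.com/wa is a hypothetical repository URL used for illustration):
```bash
# build the operator image from the package root, then push it to your registry
docker build -t registry.example.com/wa/workload-automation-operator:9.5.0.02 -f build/Dockerfile .
docker push registry.example.com/wa/workload-automation-operator:9.5.0.02
```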
To generate and publish the images using the Podman and Buildah command line, run the following commands:
buildah bud -t <repository_url>/<wa-package>-operator:9.5.0.02 -f build/Dockerfile .
podman push <repository_url>/<wa-package>-operator:9.5.0.02
Before deploying the IBM Workload Scheduler components, create a dedicated project using the oc command line, as follows:
oc new-project <wa-package>
To deploy the containers for the components belonging to the operator, type the following commands:
oc create -f deploy/WA_<wa-package>_service_account.yaml
oc create -f deploy/WA_<wa-package>_role.yaml
oc create -f deploy/WA_<wa-package>_role_binding.yaml
oc create -f deploy/crds/WA_<wa-package>_custom_resource_definition.yaml
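You can then verify that the resources were created; for example (a quick check against the project created above):
```bash
# list the service account, role, and role binding just created
oc get serviceaccount,role,rolebinding -n <wa-package>
```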
If you have Operator Lifecycle Manager (OLM) installed, perform the following steps to configure the operator:
1. Open the deploy/olm-catalog/9.5.0.02/<wa-package>.9.5.0.02.clusterserviceversion.yaml file in a flat text editor.
2. Replace the REPLACE_IMAGE string with the following string: <repository_url>/<wa-package>-operator:9.5.0.02
where:
<repository_url> is the URL of the OpenShift registry or external registry.
<wa-package> is one of the following values: wa-dyn-agent, workload-automation
3. Run the following commands:
oc create -f deploy/olm-catalog/9.5.0.02/<wa-package>-group.yaml
oc create -f deploy/olm-catalog/9.5.0.02/<wa-package>.9.5.0.02.clusterserviceversion.yaml
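You can then verify that the ClusterServiceVersion was registered; for example:
```bash
# with OLM installed, the CSV should reach the "Succeeded" phase
oc get csv -n <wa-package>
```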
If you DO NOT have OLM installed, perform the following steps to configure the operator:
1. Open the deploy/<wa-package>_operator.yaml file in a flat text editor.
2. Replace the REPLACE_IMAGE string with the following string: <repository_url>/<wa-package>-operator:9.5.0.02
3. Run the following command:
oc create -f deploy/<wa-package>_operator.yaml
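You can then check that the operator pod started; for example:
```bash
# the operator pod should appear in Running state
oc get pods -n <wa-package>
```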
Resource limits:
To deploy multiple instances of the server, enable new rules in the User management section of the console.
Open Role and then RoleBinding, and create the new rules by copying the following snippets for Role and RoleBinding:
Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: wa-pod-role
  namespace: workload-automation
rules:
- apiGroups:
  - '*'
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - patch
  - update
RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: wa-pod-role-binding
  namespace: workload-automation
subjects:
- kind: ServiceAccount
  name: default
  namespace: workload-automation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: wa-pod-role
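For example, assuming you save the snippets above as wa-pod-role.yaml and wa-pod-role-binding.yaml (hypothetical file names), you can create the rules with:
```bash
# create the Role first, then the RoleBinding that references it
oc apply -f wa-pod-role.yaml
oc apply -f wa-pod-role-binding.yaml
```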
The following table lists the configurable parameters of the chart and an example of their values:
Parameter | Description | Mandatory | Example | Default |
---|---|---|---|---|
global.license | Use ACCEPT to agree to the license agreement | yes | accept | not accepted |
global.serviceAccountName | The name of the serviceAccount to use | no | default | leave it empty |
global.language | The language of the container internal system. The supported languages are: en (English), de (German), es (Spanish), fr (French), it (Italian), ja (Japanese), ko (Korean), pt_BR (Portuguese (BR)), ru (Russian), zh_CN (Simplified Chinese) and zh_TW (Traditional Chinese) | yes | en | en |
replicaCount | Number of replicas to deploy | yes | 1 | 1 |
image.repository | IBM Server image repository | yes | @DOCKER.SERVER.IMAGE.NAME@ | @DOCKER.SERVER.IMAGE.NAME@ |
image.tag | IBM Server image tag | yes | @VERSION@ | @VERSION@ |
image.pullPolicy | Image pull policy | yes | Always | Always |
fsGroupId | The secondary group ID of the user | no | 999 | |
server.company | The name of your Company | no | my-company | my-company |
server.agentName | The name to be assigned to the dynamic agent of the Server | no | WA_SAGT | WA_AGT |
server.dateFormat | The date format defined in the plan | no | MM/DD/YYYY | MM/DD/YYYY |
server.timezone | The timezone used in the create plan command | no | America/Chicago | |
server.startOfDay | The start time of the plan processing day in 24 hour format: hhmm | no | 0000 | 0700 |
server.tz | If used, it sets the TZ operating system environment variable | no | America/Chicago | |
server.createPlan | If true, an automatic JnextPlan is executed at the time of container deployment | no | no | no |
server.containerDebug | The container is executed in debug mode | no | no | no |
server.db.type | The preferred remote database server type (e.g. DERBY, DB2, ORACLE, MSSQL, IDS) | yes | DB2 | DB2 |
server.db.hostname | The Hostname or the IP Address of the database server | yes | ||
server.db.port | The port of the database server | yes | 50000 | 50000 |
server.db.name | Depending on the database type, the name is different; enter the name of the Server’s database for DB2/Informix/MSSQL, enter the Oracle Service Name for Oracle | yes | TWS | TWS |
server.db.tsName | The name of the DATA table space | no | TWS_DATA | |
server.db.tsPath | The path of the DATA table space | no | TWS_DATA | |
server.db.tsLogName | The name of the LOG table space | no | TWS_LOG | |
server.db.tsLogPath | The path of the LOG table space | no | TWS_LOG | |
server.db.tsPlanName | The name of the PLAN table space | no | TWS_PLAN | |
server.db.tsPlanPath | The path of the PLAN table space | no | TWS_PLAN | |
server.db.tsTempName | The name of the TEMP table space (Valid only for Oracle) | no | TEMP | leave it empty |
server.db.tssbspace | The name of the SB table space (Valid only for IDS) | no | twssbspace | twssbspace |
server.db.usepartitioning | If true, the Oracle Partitioning feature is enabled. Valid only for Oracle; ignored by other databases | no | true | true |
server.db.user | The database user who accesses the Server tables on the database server. For Oracle, it also identifies the database. It can be specified in a secret too | yes | db2inst1 | |
server.db.adminUser | The database user administrator who accesses the Server tables on the database server. It can be specified in a secret too | yes | db2inst1 | |
server.db.sslConnection | If true, SSL is used to connect to the database (Valid only for DB2) | no | false | false |
server.pwdSecretName | The name of the secret to store all passwords | yes | wa-pwd-secret | wa-pwd-secret |
server.livenessProbe.initialDelaySeconds | The number of seconds after which the liveness probe starts checking if the server is running | yes | 600 | 600 |
server.useCustomizedCert | If true, customized SSL certificates are used to connect to the master domain manager | no | false | false |
server.certSecretName | The name of the secret to store customized SSL certificates | no | waserver-cert-secret | |
server.libConfigName | The name of the ConfigMap to store all custom liberty configuration | no | libertyConfigMap | |
server.routes.enabled | If true, the routes controller rules are enabled | no | true | true |
server.routes.hostname | The virtual hostname defined in the DNS used to reach the Server | no | server.mycluster.proxy | |
server.routes.secretName | The name of the secret to store certificates used by the routes. If not used, leave empty. | no | waserver-ingress-secret | |
resources.requests.cpu | The minimum CPU requested to run | yes | 1 | 1 |
resources.requests.memory | The minimum memory requested to run | yes | 4Gi | 4Gi |
resources.limits.cpu | The maximum CPU requested to run | yes | 4 | 4 |
resources.limits.memory | The maximum memory requested to run | yes | 16Gi | 16Gi |
persistence.enabled | If true, persistent volumes for the pods are used | no | true | true |
persistence.useDynamicProvisioning | If true, StorageClasses are used to dynamically create persistent volumes for the pods | no | true | true |
persistence.dataPVC.name | The prefix for the Persistent Volumes Claim name | no | data | data |
persistence.dataPVC.storageClassName | The name of the StorageClass to be used. Leave empty to not use a storage class | no | nfs-dynamic | |
persistence.dataPVC.selector.label | Volume label to bind (only limited to single label) | no | my-volume-label | |
persistence.dataPVC.selector.value | Volume label value to bind (only limited to single value) | no | my-volume-value | |
persistence.dataPVC.size | The minimum size of the Persistent Volume | no | 5Gi | 5Gi |
(*) Note: for details about static workstation pools, see Workstation.
(**) Note: if you set useCustomizedCert:true, you must create a secret containing the customized files that will replace the Server default ones. Customized files must have the same names as the ones listed above.
For detailed instructions, see the Secrets section.
(***) Note: if you set db.sslConnection:true, you must also set useCustomizedCert to true (on both server and console charts); in addition, you must add the following certificates to the customized SSL certificates secret on both server and console charts:
Customized files must have the same names as the ones listed above.
For detailed instructions, see the Secrets section.
Tip: You can use the default values.yaml file as a starting point for your configuration.
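For example, a minimal values excerpt overriding a few of the parameters listed in the table above (the specific values shown are illustrative assumptions, not recommendations):
```yaml
global:
  license: accept            # use ACCEPT/accept to agree to the license agreement
  language: en
replicaCount: 1
server:
  company: my-company
  db:
    type: DB2
    hostname: db.example.com # assumption: replace with your database host
    port: 50000
    name: TWS
    user: db2inst1
  pwdSecretName: wa-pwd-secret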
To store passwords in the Passwords Secret, read the procedure below:
This secret is valid for Console and Server only.
Manually create a mysecret.yaml file to store the passwords. Passwords in the mysecret.yaml file must be entered base64-encoded; to encode a password, run the following command:
echo -n 'mypassword' | base64
Note: The command must be launched three times, once for each password that must be entered in the mysecret.yaml file.
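For example, to encode each of the three passwords (the sample passwords are placeholders):
```bash
echo -n 'mywapassword' | base64       # value for WA_PASSWORD
echo -n 'mydbadminpassword' | base64  # value for DB_ADMIN_PASSWORD
echo -n 'mydbpassword' | base64       # value for DB_PASSWORD
```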
The mysecret.yaml file must contain the following parameters:
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: <your_namespace>
type: Opaque
data:
  WA_PASSWORD: <base64-encoded password>
  DB_ADMIN_PASSWORD: <base64-encoded password>
  DB_PASSWORD: <base64-encoded password>
where WA_PASSWORD is the password of the Workload Automation user, DB_ADMIN_PASSWORD is the password of the database administrator user (server.db.adminUser), and DB_PASSWORD is the password of the database user (server.db.user).
Once the file has been created and filled in, it must be imported; to import it, log in to your namespace and launch the following command:
oc create -f <my_path>/mysecret.yaml
where **<my_path>** is the location path of the mysecret.yaml file.
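You can then verify that the secret was created and contains the expected keys; for example:
```bash
# inspect the secret; the data section should list the three password keys
oc get secret <secret_name> -n <your_namespace> -o yaml
```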
To add custom certificates to the Certificates Secret, read the procedure below:
If you want to use custom certificates, set useCustomizedCert:true
and use oc to create the secret in the same namespace where you want to deploy the chart:
```bash
$ oc create secret generic release_name-secret --from-file=TWSClientKeyStoreJKS.sth --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSServerTrustFile.jks --from-file=TWSServerKeyFile.jks --namespace=chart_namespace
```
where TWSClientKeyStoreJKS.sth, TWSClientKeyStore.kdb, TWSClientKeyStore.sth, TWSClientKeyStoreJKS.jks, TWSServerTrustFile.jks, and TWSServerKeyFile.jks are the container keystore and stash files containing your customized certificates.
For details about custom certificates, see Connection security overview .
Note: Passwords for “TWSServerTrustFile.jks” and “TWSServerKeyFile.jks” files must be entered in the respective “TWSServerTrustFile.jks.pwd” and “TWSServerKeyFile.jks.pwd” files.
Note: Customized files must have the same name of the ones listed above.
See an example where release_name = myname and namespace = default:
```bash
$ oc create secret generic myname-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --namespace=default
```
If you want to use an SSL connection to the DB, set db.sslConnection:true and useCustomizedCert:true, then use oc to create the secret in the same namespace where you want to deploy the chart:
```bash
$ oc create secret generic release_name-secret --from-file=TWSServerTrustFile.jks --from-file=TWSServerKeyFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=TWSServerKeyFile.jks.pwd --namespace=chart_namespace
```
Note: Passwords for “TWSServerTrustFile.jks” and “TWSServerKeyFile.jks” files must be entered in the respective “TWSServerTrustFile.jks.pwd” and “TWSServerKeyFile.jks.pwd” files.
Note: Customized files must have the same name of the ones listed above.
To enable SSO between console and server, the LTPA tokens must be the same. The following procedure explains how to create LTPA tokens to be shared between server and console (run this procedure only once, not on both systems).
Access the container by launching the following command:
oc exec -it <server_pod_name> -- /bin/bash
Create a new LTPA token by launching the following command:
/opt/wautils/wa_create_ltpa_keys.sh -p <keys_password> -o /home/wauser
where <keys_password> is the password to protect the LTPA keys and /home/wauser is the output directory.
The “ltpa.keys” and “wa_ltpa.xml” files are created in /home/wauser.
Exit from the container by launching the “exit” command.
Copy the newly created files to the local machine by launching the following commands:
oc cp <server_pod_name>:/home/wauser/ltpa.keys <host_dir>
oc cp <server_pod_name>:/home/wauser/wa_ltpa.xml <host_dir>
where <server_pod_name> is the name of the server pod and <host_dir> is the local directory to which the files are copied.
The “ltpa.keys” file must be placed into the secret that stores customized SSL certificates (on both server and console charts); to place it into the secret, launch the following command:
oc create secret generic <secret_name> --from-file=<host_dir>/ltpa.keys --namespace=<your_namespace>
The “wa_ltpa.xml” file must be placed in the ConfigMap that stores all custom liberty configurations (on both server and console charts); to place it into the ConfigMap, launch the following command:
oc create configmap <configmap_name> --from-file=<host_dir>/wa_ltpa.xml --namespace=<your_namespace>
For further details about ConfigMap, see the “Creating ConfigMaps” chapter on the cloud platform documentation.
In both server and console charts, the useCustomizedCert property must be set to “true”, and the libConfigName and certSecretName properties must be configured with the names defined in the commands launched above.
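For example, a values excerpt for this configuration (a sketch; <secret_name> and <configmap_name> are the names created in the commands above):
```yaml
server:
  useCustomizedCert: true
  certSecretName: <secret_name>     # the secret containing ltpa.keys
  libConfigName: <configmap_name>   # the ConfigMap containing wa_ltpa.xml
```
The equivalent properties must be set in the console chart as well.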
To make persistent all configuration and runtime data, the Persistent Volume you specify is mounted in the following container folder:
/home/wauser
The Pod is based on a StatefulSet. This guarantees that each Persistent Volume is mounted in the same Pod when it is scaled up or down.
For test purposes only, you can configure the chart not to use persistence.
You can pre-create Persistent Volumes to be bound to the StatefulSet using a Label or StorageClass; however, it is highly recommended to use persistence with dynamic provisioning. In this case, you must have defined your own Dynamic Persistence Provider.
The Helm chart is written so that it can support several different storage use cases:
1. Persistent storage using kubernetes dynamic provisioning
It uses the default storageClass defined by the Kubernetes admin or by using a custom storageClass which overrides the default.
Set the values as follows:
persistence.enabled:true (default)
persistence.useDynamicProvisioning:true (default)
Specify a custom storageClassName per volume or leave the value empty to use the default storageClass.
2. Persistent storage using a predefined PersistentVolume setup prior to the deployment of this chart
Set global values to:
persistence.enabled:true
persistence.useDynamicProvisioning:false
Let the Kubernetes binding process select a pre-existing volume based on the accessMode and size. Use selector labels to refine the binding process.
3. No persistent storage
The entire storage is within the container and will be lost when the pod terminates.
Enable this mode by setting the global values to:
persistence.enabled:false
persistence.useDynamicProvisioning:false
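For example, a values excerpt for use case 2, binding pre-created volumes by label (a sketch using the selector parameters from the configuration table above):
```yaml
persistence:
  enabled: true
  useDynamicProvisioning: false
  dataPVC:
    selector:
      label: my-volume-label   # label key set on the pre-created Persistent Volume
      value: my-volume-value   # label value set on the pre-created Persistent Volume
    size: 5Gi
```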
For a description of IBM Server functionality, see the Knowledge Center.
In case of problems, see Troubleshooting.