Containers V20.x: All instances of an operator need a place to store their log files and find database drivers. If you plan to run the deployment script to generate a custom resource (CR), the script creates a persistent volume claim (PVC) and copies the JDBC drivers for you. However, if you manually compile the CR, then you must review all of the steps.
About this task
You must prepare the storage for the operator before you create an instance of it. You can use the deployment script to create the operator instance, or you can create it manually. If you choose to manually compile your CR file from a descriptor template, then you also need to install the operator and create the necessary storage for it.
Tip: The cluster setup script identifies the available storage classes on your cluster, but you can also create a new PV for the operator. The name of the PV must be set in the PVC (volumeName), so make sure that the storageClassName has the correct value. Make sure that you also grant group write permission to the hostPath.path on the host or to your shared volume on your NFS server.
Important: If you plan to run the installation scripts and want to use the default storage, all you need to do in this task is decide whether to create a new namespace here or let the cluster setup script do it. If you do not intend to run the scripts, then complete all of the steps that apply to your configuration.
Procedure
- Log in to your OpenShift Container Platform (OCP) cluster.
oc login https://<cluster-ip>:port -u cluster_admin -p password
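For example, with a hypothetical API server address (replace the URL, user, and password with your own values):
oc login https://api.mycluster.example.com:6443 -u cluster_admin -p mypassword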
- Create a project (for example, cp4a-project) for the operator by running the following command.
oc new-project project_name --description="<description>" --display-name="<display_name>"
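For example, to create the cp4a-project project mentioned above (the description and display name are illustrative values):
oc new-project cp4a-project --description="Operator project" --display-name="cp4a-project"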
- Optional: Create the YAML resources for the operator and component logs.
- If you want to use static storage instead of dynamic storage, create a PV YAML file, for example operator-shared-pv.yaml. The following example YAML defines two PVs: one for the operator and one shared volume for the component logs. PVs depend on your cluster configuration, so adapt the YAML to your configuration.
V20.0.0.2
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cp4a-shared-log-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/root/logs"
  persistentVolumeReclaimPolicy: Delete
V20.0.0.1
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: operator-shared-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/root/operator"
  persistentVolumeReclaimPolicy: Delete
- If you did the previous step, deploy the PVs.
oc create -f operator-shared-pv.yaml
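You can then confirm that the PVs exist and show a STATUS of Available (they change to Bound after a claim binds them):
oc get pv operator-shared-pv cp4a-shared-log-pv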
- If you did the previous steps, provide group write permission to the persistent volumes. According to the PV hostPath.path definitions, run the following commands:
chmod -R g=u hostPath
chmod g+rw hostPath
Where hostPath is the value in your PVs (/root/operator and /root/logs).
Remove the .OPERATOR_TYPE file in case it exists from a previous deployment.
rm -f hostPath/.OPERATOR_TYPE
Where hostPath is the value in your operator PV (/root/operator).
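For example, with the hostPath values from the sample PVs (/root/operator and /root/logs), the commands are:
chmod -R g=u /root/operator /root/logs
chmod g+rw /root/operator /root/logs
rm -f /root/operator/.OPERATOR_TYPE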
- Create a claim for the static PVs or your dynamic storage. To create a claim bound to the previously created PVs, create a file <path>/operator-shared-pvc.yaml anywhere on your disk, with the following content.
V20.0.0.2
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cp4a-shared-log-pvc
  namespace: project_name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: cp4a-shared-log-pv
V20.0.0.1
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
  namespace: project_name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: operator-shared-pv
Replace the project_name placeholders with the name of the OpenShift project to use for the operator in your OCP cluster.
If you prefer to use dynamic provisioning for this claim, edit the corresponding YAML file: replace the StorageClassName and Fast_StorageClassName placeholders with storage classes of your choice.
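As a minimal sketch of the dynamic alternative, assuming a hypothetical storage class named managed-nfs-storage (use a class that oc get storageclass lists on your cluster):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
  namespace: project_name
spec:
  accessModes:
    - ReadWriteMany
  # Hypothetical class name; replace with a storage class from your cluster.
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
Note that this claim omits volumeName, so the cluster provisions a volume instead of binding to a pre-created PV.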
- Deploy the PVCs. If you created your own operator-shared-pvc.yaml file, run the following command with your own path.
oc create -f <path>/operator-shared-pvc.yaml
Otherwise, if you edited descriptors/operator-shared-pvc.yaml, run the command with the file from the descriptors folder.
oc create -f descriptors/operator-shared-pvc.yaml
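To check that the claims are bound, list the PVCs in your project and wait for a STATUS of Bound:
oc get pvc -n project_name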
- Optional: Add the JDBC drivers to the operator PV hostPath. Copy all of the JDBC drivers that are needed by the components you intend to install to the persistent volume. Depending on your storage configuration, you might not need these drivers.
Note: File names for JDBC drivers cannot include version information.
- Db2
  - db2jcc4.jar
  - db2jcc_license_cu.jar
- Oracle
  - ojdbc8.jar
- V20.0.0.2 SQL Server
- V20.0.0.2 PostgreSQL
The following structure shows an example remote file system.
/root/operator
└── jdbc
    ├── db2
    │   ├── db2jcc4.jar
    │   └── db2jcc_license_cu.jar
    ├── oracle
    │   └── ojdbc8.jar
    ├── sqlserver
    │   └── mssql-jdbc-?.jar
    └── postgresql
        └── postgresql-?.jar
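For example, if you have direct access to the operator PV hostPath on the node or NFS server, a sketch for copying the Db2 drivers (assuming the jar files are in your current directory) is:
mkdir -p /root/operator/jdbc/db2
cp db2jcc4.jar db2jcc_license_cu.jar /root/operator/jdbc/db2/
Repeat for the oracle, sqlserver, and postgresql subdirectories as needed.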
- Optional: V20.0.0.2 If you intend to install Content Collector for SAP as an optional component of the Content Manager pattern, then you must download the necessary libraries and put them in a directory under cert-kubernetes/scripts.
- Make a saplibs directory in cert-kubernetes/scripts. Give read and write permissions to the directory by running the chmod command (see the example after this list).
- Download the SAP NetWeaver SDK 7.50 library from the SAP Service Marketplace.
- Download the SAP JCo Release 3.0.x from the SAP Service Marketplace.
- Extract all of the content of the packages to the saplibs directory.
- Check that you have all of the following libraries.
saplibs/
├── libicudata.so.50
├── libicudecnumber.so
├── libicui18n.so.50
├── libicuuc.so.50
├── libsapcrypto.so
├── libsapjco3.so
├── libsapnwrfc.so
└── sapjco3.jar
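For the first substep, a sketch of creating the saplibs directory and granting read and write permissions, assuming you run it from the parent directory of cert-kubernetes:
mkdir -p cert-kubernetes/scripts/saplibs
chmod -R g+rw cert-kubernetes/scripts/saplibs
The g+rw mode is one reasonable choice for group read and write access; adjust it to your security policy.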
Results
Wait for the confirmation message that the PVC is bound before you move to the next
step.
Note: For Oracle, if the ojdbc8.jar from 12.2 fails, apply the ojdbc8.jar from 19c.
What to do next
You can now set up your cluster manually or use the cluster setup script. For more information, see Setting up the cluster.