After you set up everything that you need on a bastion host, you can then install the IBM
operator catalog in your air gap environment.
Before you begin
Your deployment in an air gap environment also needs IBM Cloud® Platform Common Services, so make sure that your cluster has the capacity to install these services. For more information, see Hardware requirements and recommendations for foundational services.
Download the cert-kubernetes repository to a Linux-based machine (RHEL, CentOS, and so on) that can run Podman, or to a client of such a machine or virtual machine. For more information, see Setting up a mirror image registry.
Important: The Cloud Pak cannot be installed on a cluster with an existing installation of IBM Automation foundation that used the All namespaces on the cluster option. Check the openshift-operators namespace to find installed operators. The Cloud Pak supports installation in a single namespace, not in all namespaces. To install more than one deployment of the Cloud Pak, install each deployment in a different namespace, and install the operator in each of those namespaces.
You must have cluster administrator privileges to run the installation. During the installation of Common Services (with IAM), a non-administrator OCP user called "admin" is created, or is overwritten if it already exists. If a user with the name "admin" exists in your cluster, make sure that overwriting it does not cause a problem. You can check for an existing user as shown below.
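One way to check, assuming your identity provider creates OpenShift User objects at first login:
oc get user admin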
About this task
You must prepare the storage for the operator before you create an instance of the operator.
Tip: When you run the deployment script, storage classes must exist on your cluster; the command below lists the available classes. If you use static storage, make sure that you grant group write permission to the nfs.path on the host or to your shared volume on your NFS server.
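To see which storage classes your cluster offers:
oc get storageclass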
To install the Cloud Pak operator, complete the following steps.
Procedure
-
Log in to your OCP cluster.
oc login https://<cluster-ip>:<port> -u <cluster-admin> -p <password>
-
Go to the Kubernetes namespace for the Cloud Pak operator that you created previously.
oc project ${NAMESPACE}
Where ${NAMESPACE} is the namespace where you want to install the operator, as in the example that follows.
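For example, with a hypothetical namespace name:
export NAMESPACE=cp4a-project   # hypothetical name; use your own namespace
oc project ${NAMESPACE}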
-
Create the YAML resources for the operator and component logs. Dynamic storage (Choice
1) is recommended.
- Choice 1:
- Make sure that you are in the directory where you downloaded the CASE
archive. For more information, see Setting up a mirror image registry.
cd ${OFFLINEDIR}/ibm-cp-automation/inventory/cp4aOperatorSdk/files/deploy/crs
tar xvf cert-k8s-21.0.x.tar
cd cert-kubernetes
- Edit the cert-kubernetes/descriptors/operator-shared-pvc.yaml file and replace the <StorageClassName> and <Fast_StorageClassName> placeholders with storage classes of your choice, as in the sketch that follows.
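For illustration only, the edited operator claim might look like the following sketch; the class name managed-nfs-storage is an assumption, and the exact contents of the shipped descriptor can differ by release.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage   # replaces <StorageClassName>; assumed class name
  resources:
    requests:
      storage: 1Gi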
- Deploy the PVCs. If you created your own operator-shared-pvc.yaml file, run the following command with your own path.
oc create -f <path>/operator-shared-pvc.yaml
Otherwise, if you edited descriptors/operator-shared-pvc.yaml, run the command with the file from the descriptors folder.
oc create -f descriptors/operator-shared-pvc.yaml
Confirm that the STATUS of the PVCs (cp4a-shared-log-pvc and operator-shared-pvc) is Bound before you move to the next step by running the following command in the ${NAMESPACE}. A scripted check follows the sample output.
oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cp4a-shared-log-pvc Bound pvc-db2068e1-83d1-45e4-a4db-a33b93387561 100Gi RWX managed-nfs-storage 102m
operator-shared-pvc Bound pvc-74f0a26c-3632-4c93-a78c-6502cee5ab48 1Gi RWX managed-nfs-storage 102m
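If you want a scripted check of just these two claims, the following also works:
oc -n ${NAMESPACE} get pvc operator-shared-pvc cp4a-shared-log-pvc -o custom-columns=NAME:.metadata.name,STATUS:.status.phase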
- Choice 2:
- If you want to use static storage, create a PV YAML file, for example operator-shared-pv.yaml. The following example YAML defines two PVs: one for the operator and one shared volume for the component logs. PVs depend on your cluster configuration, so adapt the YAML to your configuration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-shared-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    path: /root/operator
    server: <NFS Server>
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cp4a-shared-log-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /root/logs
    server: <NFS Server>
  persistentVolumeReclaimPolicy: Delete
Replace <NFS Server> with the actual server name.
- If you did the previous step, deploy the PVs. You can verify them afterward as shown below.
oc create -f operator-shared-pv.yaml
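You can confirm that the PVs exist and show the Available status before you claim them:
oc get pv operator-shared-pv cp4a-shared-log-pv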
- If you did the previous steps, provide group write permission to the persistent volumes. According to the PV nfs.path definitions, run the following commands:
chown -R :65534 <path>
chmod -R g+rw <path>
Where <path> is the value in your PVs (/root/operator and /root/logs). Group ownership must be set to the anongid option that is given in the NFS export definition of the NFS server that is associated with the PV. The default anongid value is 65534; an example export entry follows.
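As an illustration only, a hypothetical /etc/exports entry that maps all client users to the default anonymous group might look like this:
/root/operator *(rw,sync,all_squash,anongid=65534)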
Remove the .OPERATOR_TYPE file in case it exists from a previous deployment.
rm -f <path>/.OPERATOR_TYPE
Where <path> is the value in your operator PV (/root/operator).
- Create a claim for the static PVs. To create a claim that is bound to the previously created PVs, create a file <path>/operator-shared-pvc.yaml anywhere on your disk, with the following content.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: operator-shared-pvc
  namespace: ${NAMESPACE}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: operator-shared-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cp4a-shared-log-pvc
  namespace: ${NAMESPACE}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: cp4a-shared-log-pv
- Deploy the PVCs. If you created your own operator-shared-pvc.yaml file, run the following command with your own path.
oc create -f <path>/operator-shared-pvc.yaml
- Confirm that the STATUS of the PVCs (cp4a-shared-log-pvc and operator-shared-pvc) is Bound before you move to the next step by running the following command in the ${NAMESPACE}.
oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cp4a-shared-log-pvc Bound pvc-db2068e1-83d1-45e4-a4db-a33b93387561 100Gi RWX managed-nfs-storage 102m
operator-shared-pvc Bound pvc-74f0a26c-3632-4c93-a78c-6502cee5ab48 1Gi RWX managed-nfs-storage 102m
-
Install the Cloud Pak operator.
-
Create a catalog source.
cloudctl case launch \
--case ${OFFLINEDIR}/${CASE_ARCHIVE} \
--inventory ${CASE_INVENTORY_SETUP} \
--action install-catalog \
--namespace ${NAMESPACE} \
--args "--registry ${LOCAL_REGISTRY} --inputDir ${OFFLINEDIR} --recursive"
-
Verify that the pods for the Cloud Pak operator catalogs are created.
Check that the following pods are recently created (oc get pods -n openshift-marketplace):
ibm-automation-foundation-core-catalog-<five characters>
ibm-cp-automation-foundation-catalog-<five characters>
ibm-cp4a-operator-catalog-<five characters>
opencloud-operators-<five characters>
- Check that the following catalog sources are recently created (oc get catalogsource -n openshift-marketplace):
NAME DISPLAY
ibm-automation-foundation-core-catalog IBM Automation Foundation Core Operators
ibm-cp-automation-foundation-catalog IBM Automation Foundation Operators
ibm-cp4a-operator-catalog ibm-cp4a-operator
opencloud-operators IBMCS Operators
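Optionally, confirm that a catalog source is serving its content by checking its connection state; the expected value is READY.
oc get catalogsource ibm-cp4a-operator-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}'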
-
Install the Cloud Pak operator in the specified namespace.
cloudctl case launch \
--case ${OFFLINEDIR}/${CASE_ARCHIVE} \
--inventory ${CASE_INVENTORY_SETUP} \
--action install-operator \
--namespace ${NAMESPACE} \
--args "--registry ${LOCAL_REGISTRY} --inputDir ${OFFLINEDIR}"
-
Verify that the operators are installed.
oc get pod | grep ibm-cp4a-operator
oc get pod -n ibm-common-services
It might take up to 10 minutes for all the pods to show the Running status.
Tip: If ibm-cp4a-operator is inactive for some time, you can delete the operator pod and let it reconcile. To confirm that the operator is stuck, check whether the log is still producing output.
oc logs <operator pod> -f
If you see the following issues when the image is pulled, verify the global pull secret and confirm that the Docker registry username and password are correct.
Warning Failed <invalid> (x2 over <invalid>) kubelet Error: ImagePullBackOff
Normal Pulling <invalid> (x2 over <invalid>) kubelet Pulling image
The following command extracts the global pull secret to a local dockerconfig.json file so that you can verify it.
oc -n openshift-config get secret/pull-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode | tr -d "\r|\n| " > dockerconfig.json
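If the jq utility is available, you can then list the registries for which the secret holds credentials:
jq '.auths | keys' dockerconfig.json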
To change the credentials, edit the dockerconfig.json file, correct the entries for your registry, and then apply the changes.
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=dockerconfig.json
-
Add JDBC drivers to the operator pod for Business Automation Navigator and all of the other
patterns in your deployment that need them.
Copy all of the JDBC drivers that are needed by the components to the operator pod. Depending on your database configuration, you might not need all of these drivers. For more information about compatible JDBC drivers, see Db2 JDBC information, Oracle JDBC information, SQL Server JDBC information, and PostgreSQL JDBC information. The following .jar files are examples.
- Db2®
  - db2jcc4.jar
  - db2jcc_license_cu.jar
- Oracle
  - ojdbc8.jar
  - orai18n.jar
- Microsoft SQL Server
  - mssql-jdbc-8.2.2.jre8.jar
- PostgreSQL
  - postgresql-42.2.9.jar
The following structure shows an example remote file system.
/root/operator
└── jdbc
    ├── db2
    │   ├── db2jcc4.jar
    │   └── db2jcc_license_cu.jar
    ├── oracle
    │   ├── ojdbc8.jar
    │   └── orai18n.jar
    ├── sqlserver
    │   └── mssql-jdbc-8.2.2.jre8.jar
    └── postgresql
        └── postgresql-42.2.9.jar
Copy the JDBC files to the operator pod.
podname=$(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
kubectl cp $PATH_TO_JDBC/jdbc ${NAMESPACE}/$podname:/opt/ansible/share
Note: The $PATH_TO_JDBC is the path to the driver files on your system. The
${NAMESPACE} must be set to the namespace of the installed operator.
To verify that the files are in the pod, run the following commands:
oc get pod | grep ibm-cp4a-operator | awk '{print $1}'
The output provides the name of the pod: ibm-cp4a-operator-<ten characters>-<five characters>
oc rsh ibm-cp4a-operator-<ten characters>-<five characters>
ls -lR /opt/ansible/share
/opt/ansible/share:
total 0
drwxrwxrwx. 3 1000600000 root 1 jdbc
/opt/ansible/share/jdbc:
total 0
drwxrwxrwx. 2 1000600000 root 2 db2
/opt/ansible/share/jdbc/db2:
total 6399
-rw-r--r--. 1 1000600000 root 6550443 db2jcc4.jar
-rw-r--r--. 1 1000600000 root 1529 db2jcc_license_cu.jar
exit
-
If you intend to install Content Collector for SAP as an optional component of the Content
Manager pattern, then you must download the necessary libraries, put them in a directory, and copy
the files to the operator pod.
- Make a saplibs directory and grant read and write permissions to the directory by running the chmod command.
- Download the SAP Netweaver SDK 7.50 library from the SAP Service Marketplace.
- Copy the SAP files to the operator pod.
podname=$(oc get pod | grep ibm-cp4a-operator | awk '{print $1}')
kubectl cp $PATH_TO_SAPLIBS/saplibs ${NAMESPACE}/$podname:/opt/ansible/share
Note: The $PATH_TO_SAPLIBS is the path to the SAP library files on your system. The ${NAMESPACE} must be set to the namespace of the installed operator. A verification step follows.
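To verify the copy, list the directory in the pod; this assumes that the files land in /opt/ansible/share/saplibs, mirroring the JDBC example.
oc rsh $podname ls -lR /opt/ansible/share/saplibs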
Results
After the operator starts, you can monitor the operator logs with the following command.
oc logs -f deployment/ibm-cp4a-operator -c operator
What to do next
Follow the instructions to prepare your cluster for the capabilities that you want to install. For more information, see Preparing capabilities.
When you are ready, choose the option to deploy the CR. For more information, see Installing an Enterprise deployment in Operator Hub.