Installing IBM Cloud Private with OpenShift
You can install IBM Cloud Private with OpenShift, or IBM Cloud Private with OpenShift on IBM Cloud, by using the IBM Cloud Private installer.
Before you begin, ensure that your cluster meets the installation requirements. For more information, see Preparing to install IBM Cloud Private with OpenShift.
Installation can be completed in four main steps:
- Configure the boot node
- Configure your cluster
- Run the IBM Cloud Private installer
- Post installation tasks
Configure the boot node
The IBM Cloud Private with OpenShift installer can run from a dedicated boot node or from any cluster node. The boot node is the node that is used for the installation of your cluster, and is usually your master node. For IBM Cloud Private with OpenShift, the boot node must be one of the OpenShift nodes. For more information about the boot node, see Boot node.
For an OpenShift on IBM Cloud cluster, the cluster nodes are not directly accessible, so you must use a separate boot node to install the IBM Cloud Private cluster on top. Because IBM Cloud Private with OpenShift reuses the OpenShift image registry, additional steps are needed to enable access to the registry. For more information, see Step 4 in Configure your cluster.
Your boot node requires a version of Docker that is supported by IBM Cloud Private with OpenShift. All versions of Docker that are supported by OpenShift are supported for the boot node. For more information about the supported Docker versions, see OpenShift Docker installation. If the boot node is not an OpenShift node, install Docker on the boot node only. For the procedure to install Docker, see Manually installing Docker.
Set up the installation environment
- Log in to the boot node as a user with root permissions or as a user with sudo privileges.
- Download the installation files for IBM Cloud Private 3.2.1. You must download the correct file or files for the type of nodes in your cluster.
  - If you are installing IBM Cloud Private version 3.2.1, download the installation files from the IBM Passport Advantage® website.
    - For a Linux® cluster, download the ibm-cloud-private-rhos-3.2.1.tar.gz file.
    - For a Linux on Power (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.tar.gz file.
  - If you are installing an IBM Cloud Private fix pack, the files are available for download from the IBM® Fix Central website.
    Currently, two fix pack versions are available: version 3.2.1.2203 and version 3.2.2.2203. Version 3.2.1.2203 is intended for environments that use Kubernetes version 1.13.12. Version 3.2.2.2203 applies the same fixes to an IBM Cloud Private cluster on an upgraded version of Kubernetes (1.16.7), and is also used to upgrade the supported version of Kubernetes from 1.13.12 to 1.16.7.
    - For the 3.2.2.2203 fix pack, download the ibm-cloud-private-rhos-3.2.2.2203.tar.gz file.
    - For the 3.2.1.2203 fix pack, download the ibm-cloud-private-rhos-3.2.1.2203.tar.gz file.
- Extract the images and load them into Docker. Extracting the images might take a few minutes.
  - If you are installing IBM Cloud Private version 3.2.1, run the following command:
    - For x86_64:
      tar xf ibm-cloud-private-rhos-3.2.1.tar.gz -O | sudo docker load
    - For Linux on Power (ppc64le):
      tar xf ibm-cloud-private-ppc64le-3.2.1.tar.gz -O | sudo docker load
  - If you are installing an IBM Cloud Private fix pack, run the following command:
    - For the 3.2.2.2203 fix pack:
      tar xf ibm-cloud-private-rhos-3.2.2.2203.tar.gz -O | sudo docker load
    - For the 3.2.1.2203 fix pack:
      tar xf ibm-cloud-private-rhos-3.2.1.2203.tar.gz -O | sudo docker load
- Create an installation directory to store the IBM Cloud Private configuration files in, and change to that directory.
  For example, to store the configuration files in /opt/ibm-cloud-private-rhos-3.2.1, run the following commands:
  mkdir /opt/ibm-cloud-private-rhos-3.2.1; \
  cd /opt/ibm-cloud-private-rhos-3.2.1
- Extract the cluster directory.
  - If you are installing IBM Cloud Private version 3.2.1, run the following command:
    - For x86_64:
      sudo docker run --rm -v $(pwd):/data:z -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-amd64:3.2.1-rhel-ee cp -r cluster /data
    - For Linux on Power (ppc64le):
      sudo docker run --rm -v $(pwd):/data:z -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-ppc64le:3.2.1-ee cp -r cluster /data
  - If you are installing IBM Cloud Private fix pack version 3.2.1.2203, run the following command:
    sudo docker run --rm -v $(pwd):/data:z -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-amd64:3.2.1.2203-rhel-ee cp -r cluster /data
- Create the cluster configuration files. The OpenShift configuration files can be found on the OpenShift master node.
  Copy the OpenShift admin.kubeconfig file to the cluster directory. The OpenShift admin.kubeconfig file can be found at /etc/origin/master/admin.kubeconfig:
  sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig
  If your boot node is different from the OpenShift master node, this file must be copied to the boot node.
  For an OpenShift on IBM Cloud cluster, you can obtain or generate its Kubernetes configuration by following the steps in Creating a cluster with the console. After you log in with oc, you can generate the configuration file by running the following command:
  oc config view > kubeconfig
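If your boot node is not the OpenShift master node, the copy might look like the following sketch (the master host name is a placeholder, not a value from this document):

```shell
# Run from the boot node; <openshift-master-node> is a placeholder host name.
scp root@<openshift-master-node>:/etc/origin/master/admin.kubeconfig cluster/kubeconfig
```

Any method of transferring the file works, provided it ends up at cluster/kubeconfig in your installation directory.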
Configure your cluster
- Update the config.yaml file for x86_64, or the power.openshift.config.yaml file for Linux on Power (ppc64le), that you extracted in Step 5, with the following configuration:
  cluster_nodes:
    master:
      - <your-openshift-dedicated-node-to-deploy-icp-master-components>
    proxy:
      - <your-openshift-dedicated-node-to-deploy-icp-proxy-components>
    management:
      - <your-openshift-dedicated-node-to-deploy-icp-management-components>
  storage_class: <storage class available in OpenShift>
  Note: The value of each of the master, proxy, and management parameters is an array and can contain multiple nodes; the same node can be used for the master, management, and proxy. Due to a limitation in OpenShift, if you want to deploy IBM Cloud Private on any OpenShift master or infrastructure node, you must label the node as an OpenShift compute node with the following command:
  oc label node <master node host name/infrastructure node host name> node-role.kubernetes.io/compute=true
Note: For high availability (HA) configuration, configure more than one master, management, and proxy node.
For an OpenShift on IBM Cloud cluster:
- For the storage_class parameter, you can use ibmc-file-gold for an x86_64 environment.
- For the storage_class parameter, you can use ibmc-powervc-k8s-volume-default for a Linux on Power (ppc64le) environment.
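For a high availability configuration, the cluster_nodes layout might look like the following sketch (the host names are placeholders, not values from this document; a single node may also serve multiple roles):

```yaml
# Hypothetical node names for an HA topology: three master nodes,
# plus dedicated management and proxy nodes.
cluster_nodes:
  master:
    - ocp-node-1.example.com
    - ocp-node-2.example.com
    - ocp-node-3.example.com
  management:
    - ocp-node-4.example.com
  proxy:
    - ocp-node-5.example.com
```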
- Set up a default password in the config.yaml file that meets the default password enforcement rule '^([a-zA-Z0-9\-]{32,})$'. You can also define a custom set of password rules.
  - Open the /<installation_directory>/cluster/config.yaml file, and set the default_admin_password. The password must satisfy all regular expressions that are specified in password_rules.
  - Optional: You can define one or more rules as regular expressions in an array list that the password must pass. For example, a rule can state that the password must be longer than a specified number of characters, or that it must contain at least one special character. The rules are written as regular expressions that are supported by the Go programming language. To define a set of password rules, add the following parameter and values to the config.yaml file:
    password_rules:
      - '^.{10,}'
      - '.*[!@#\$%\^&\*].*'
    To disable the password rules, add the rule '(.*)':
    password_rules:
      - '(.*)'
    Note: The default_admin_password must match all rules that are defined. If password_rules is not defined, the default_admin_password must meet the default password enforcement rule '^([a-zA-Z0-9\-]{32,})$'.
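As an illustration only (not part of the installer), you can check whether a candidate password satisfies the default enforcement rule by using grep's extended regular expressions; note that in a POSIX bracket expression the hyphen is written last rather than escaped:

```shell
# Candidate password: 32 alphanumeric characters, so it satisfies the
# default rule (32 or more letters, digits, or hyphens, and nothing else).
pw='abcdefghijklmnopqrstuvwxyz123456'
if printf '%s' "$pw" | grep -Eq '^([a-zA-Z0-9-]{32,})$'; then
  echo "password meets the default rule"
else
  echo "password rejected"
fi
```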
- Optional: Enable IBM Multicloud Manager from your config.yaml file. By default, the multicluster-hub option is enabled and the single_cluster_mode option is set to true, which means that IBM Multicloud Manager is not configured. You cannot use IBM Multicloud Manager with the default single_cluster_mode setting of true.
  For more information and other configuration scenarios for IBM Multicloud Manager, see Configuration options for IBM Multicloud Manager with IBM Cloud Private installation.
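Based on the option names above, enabling IBM Multicloud Manager in config.yaml might look like the following sketch. This is illustrative only; the exact keys and structure are described in the linked configuration topic:

```yaml
# Illustrative sketch: option names taken from the text above; verify the
# exact structure in the IBM Multicloud Manager configuration topic.
multicluster-hub: enabled
single_cluster_mode: false
```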
- Optional: Use a customized root CA certificate. For example:
  # convert the key to P1 format
  openssl rsa -in /path/to/your-root-ca-key.key -out /path/to/your-root-ca-key.key.p1
  kubectl -n kube-system create secret tls cluster-ca-cert --cert=/path/to/your-root-ca-cert.crt --key=/path/to/your-root-ca-key.key.p1
  Note: The root CA key must be in P1 format.
- For IBM PowerVC users only: See Creating a storage class for the IBM PowerVC FlexVolume Driver (IBM Power only) for the steps to configure your IBM PowerVC FlexVolume Driver storage class.
Run the IBM Cloud Private installer
Run the following command:
- For x86_64:
  sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.1-rhel-ee install-with-openshift
- For Linux on Power (ppc64le):
  sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable ibmcom/icp-inception-ppc64le:3.2.1-ee install-with-openshift
Note: If you encounter errors during installation, uninstall by entering the following command:
- For x86_64:
  sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.1-rhel-ee uninstall-with-openshift
- For Linux on Power (ppc64le):
  sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable ibmcom/icp-inception-ppc64le:3.2.1-ee uninstall-with-openshift
Then, retry the installation by rerunning the install command.
Access your cluster
Access your cluster by using a different port than the one that was used for standalone IBM Cloud Private. From a web browser, browse to the URL of your cluster. Your URL might resemble the following hyperlink: https://icp-console.<openshift-router>.
You can get the <openshift-router> value by running the following command:
kubectl -n openshift-console get route console -o jsonpath='{.spec.host}'
For a list of supported browsers, see Supported browsers.
Note: If you installed fix pack version 3.2.1.2203, add the root CA certificate to your trust store. With this fix pack, users on macOS 10.15 or newer cannot access the management console until the root CA certificate is added to the trust store. For more information, see:
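One possible way to obtain and trust the root CA certificate on macOS is sketched below. The secret name comes from the cluster-ca-cert secret referenced earlier in this procedure, but the output file name and the use of the system keychain are assumptions for illustration:

```shell
# Extract the root CA certificate from the cluster-ca-cert secret
# (the tls secret in kube-system referenced earlier in this procedure):
kubectl -n kube-system get secret cluster-ca-cert -o jsonpath='{.data.tls\.crt}' | base64 --decode > icp-root-ca.crt
# On the macOS workstation, add it to the system keychain as a trusted root
# (requires administrator rights):
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain icp-root-ca.crt
```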
Post installation tasks
- Check which node runs the IBM Cloud Private components by running the following command:
kubectl --kubeconfig /etc/origin/master/admin.kubeconfig get nodes
The output shows you the roles of your OpenShift node and which node runs the IBM Cloud Private components.
- Correct the security context constraints by running the following command:
  kubectl --kubeconfig /etc/origin/master/admin.kubeconfig patch scc icp-scc -p '{"allowPrivilegedContainer": true}' --type=merge
Example output:
securitycontextconstraints.security.openshift.io/icp-scc patched
- After you apply the new security context constraints, run the following command to get the updated output:
kubectl --kubeconfig /etc/origin/master/admin.kubeconfig get scc icp-scc
Example output:
NAME      PRIV   CAPS   SELINUX     RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
icp-scc   true   []     MustRunAs   RunAsAny    RunAsAny   RunAsAny   1          false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]