Installing IBM Cloud Private with OpenShift

You can install IBM Cloud Private with OpenShift by using the IBM Cloud Private installer.

Installation can be completed in five main steps:

  1. Configure the boot node
  2. Set up the installation environment
  3. Configure your cluster
  4. Run the IBM Cloud Private installer
  5. Post installation tasks

Configure the boot node

The IBM Cloud Private with OpenShift installer can run from either a dedicated boot node or an OpenShift master node. If the boot node is not an OpenShift node, install Docker on the boot node only.

The boot node is the node that is used for the installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.

Your boot node needs a version of Docker that is supported by IBM Cloud Private with OpenShift. All versions of Docker that are supported by OpenShift are supported for the boot node. For more information about the supported Docker versions, see OpenShift Docker installation.

For the procedure to install Docker, see Manually installing Docker.
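
If Docker is already installed on the boot node, a quick check (a minimal sanity check, not part of the official procedure) confirms that the daemon is running and shows the installed version:

  sudo docker version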

Set up the installation environment

  1. Log in to the boot node as a user with root permissions or as a user with sudo privileges.
  2. Download the installation files for IBM Cloud Private 3.1.2. You must download the correct file or files for the type of nodes in your cluster. You can obtain these files from the IBM Passport Advantage® website.

    • For a Red Hat Enterprise Linux OpenShift (64-bit) cluster, download the ibm-cloud-private-rhos-3.1.2.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes. You can verify that the load succeeded, as shown after this list.

     tar xf ibm-cloud-private-rhos-3.1.2.tar.gz -O | sudo docker load
    
  4. Create an installation directory to store the IBM Cloud Private configuration files, and change to that directory.

    For example, to store the configuration files in /opt/ibm-cloud-private-rhos-3.1.2, run the following commands:

     mkdir /opt/ibm-cloud-private-rhos-3.1.2; \
     cd /opt/ibm-cloud-private-rhos-3.1.2
    
  5. Extract the cluster directory:

     sudo docker run --rm -v $(pwd):/data:z -e LICENSE=accept ibmcom/icp-inception-amd64:3.1.2-rhel-ee cp -r cluster /data
    

    If SELinux is enabled on the boot node, run the following command instead:

     sudo docker run --rm -v $(pwd):/data -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-amd64:3.1.2-rhel-ee cp -r cluster /data
    
  6. Copy the ibm-cloud-private-rhos-3.1.2.tar.gz file to the cluster/images directory:

     sudo mkdir -p cluster/images; \
     sudo cp ibm-cloud-private-rhos-3.1.2.tar.gz cluster/images/
    
  7. Create the cluster configuration files. The OpenShift configuration files are found on the OpenShift master node.

    1. Copy the OpenShift admin.kubeconfig file to the cluster directory. The OpenShift admin.kubeconfig file is located at /etc/origin/master/admin.kubeconfig:

      sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig
      
    2. Copy the OpenShift SSH key to the cluster directory:

      sudo cp ~/.ssh/id_rsa cluster/ssh_key
      
    3. Copy the OpenShift inventory file to the cluster directory:

      sudo cp openshift-ansible/inventory/hosts cluster/
      

    If your boot node is not the OpenShift master node, copy these files from the master node to the boot node.
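
Before you continue, you can verify that the images loaded and that the cluster directory contains the files that you copied. This is a quick sanity check under the default file names from the steps above, not part of the official procedure:

  # Confirm that the inception image loaded into Docker
  sudo docker images | grep icp-inception

  # Confirm that the cluster directory holds the copied configuration files
  ls cluster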

Configure your cluster

  1. Update the config.yaml file in the cluster directory that you extracted in step 5 of the previous section with the following settings:

       cluster_CA_domain: <your-openshift-master-fqdn>
       tiller_service_ip: "None"
       mariadb_privileged: "false"
       install_on_openshift: true
       storage_class: <storage class available in OpenShift>
    
       ## Kubernetes apiserver port (OpenShift port)
       kube_apiserver_secure_port: 8443
    
       ## Cluster Router settings
       router_http_port: 5080
       router_https_port: 5443
    
       ## Nginx Ingress settings
       ingress_http_port: 3080
       ingress_https_port: 3443
    

    In the previous sample, you can set the router and ingress ports to any available free port numbers. Note: You must replace the port number 8443 for kube_apiserver_secure_port with the port number on which your OpenShift Kubernetes API server is listening.

  2. Label the master node with the compute role. You can verify the label with the check that follows this list:

     oc label node <master node host name> node-role.kubernetes.io/compute=true
    
  3. Set up a default password in the config.yaml file that meets the default password enforcement rule, and optionally define a custom set of password rules.

    1. In the /<installation_directory>/cluster/config.yaml file, set the default_admin_password. The password must satisfy all regular expressions that are specified in password_rules.

    2. Optional: Define a set of password rules. You can define one or more rules, as regular expressions in an array list, that the password must pass. For example, a rule can state that the password must be longer than a specified number of characters, or that it must contain at least one special character. The rules are written as regular expressions that are supported by the Go programming language. For example:

        password_rules:
        - '^.{10,}'
        - '.*[!@#\$%\^&\*].*'
      

      To disable password rules, use the single rule (.*):

        password_rules:
        - '(.*)'
      

      Note: The default_admin_password must match all rules that are defined. If password_rules is not defined, the default_admin_password must meet the default password enforcement rule.
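
Before you run the installer, you can sanity-check two of the settings from this section. The following commands are a sketch that assumes the sample values shown above; note that grep -E only approximates the Go regular expressions that config.yaml uses, although it handles these particular patterns:

  # Confirm that the master node now carries the compute role
  oc get nodes -l node-role.kubernetes.io/compute=true

  # Check a candidate password against the two sample rules
  echo 'MyPassw0rd!xy' | grep -E '^.{10,}' | grep -E '[!@#$%^&*]' \
    && echo 'password satisfies the sample rules'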

Run the IBM Cloud Private installer

  1. Run the install-on-openshift command:

     sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z ibmcom/icp-inception-amd64:3.1.2-rhel-ee install-on-openshift
    
  2. If SELinux is enabled on the boot node, run the following command instead:

     sudo docker run --rm -v $(pwd):/installer/cluster:z -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-amd64:3.1.2-rhel-ee install-on-openshift
    
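The installer prints a summary when it finishes. To confirm that the IBM Cloud Private components are running, you can list the pods through the OpenShift API server. This sketch assumes the kubeconfig file that you copied into the cluster directory, and that the components deploy into the kube-system namespace:

  kubectl --kubeconfig cluster/kubeconfig get pods -n kube-system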

Access your cluster

Access your cluster by using a different port than the one that was used for standalone IBM Cloud Private. From a web browser, browse to the URL of your cluster. For a list of supported browsers, see Supported browsers.
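
For example, if you kept the sample router settings from the configuration step, the console URL takes a form similar to the following; the port shown here is an assumption and depends on the router_https_port value that you set:

  https://<your-openshift-master-fqdn>:5443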

Post installation tasks

Correct the security context constraints

To correct the security context constraints, run the following command:

  kubectl --kubeconfig /etc/origin/master/admin.kubeconfig patch scc icp-scc -p '{"allowPrivilegedContainer": true}'

The output should resemble the following text:

  # kubectl --kubeconfig /etc/origin/master/admin.kubeconfig patch scc icp-scc -p '{"allowPrivilegedContainer": true}'
  securitycontextconstraints "icp-scc" patched

After you apply the new security context constraints, you see the following update:

  # kubectl --kubeconfig /etc/origin/master/admin.kubeconfig get scc icp-scc
  NAME      PRIV      CAPS      SELINUX     RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
  icp-scc   true      []        MustRunAs   RunAsAny    RunAsAny   RunAsAny   1          false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]

Fix file permissions

If SELinux is enabled on the master node and the helm-repo and mgmt-repo pods are in an error state, run the following commands on the master node to fix the SELinux contexts on the directories that those pods use.

  sudo mkdir -p /var/lib/icp/helmrepo
  sudo mkdir -p /var/lib/icp/mgmtrepo
  sudo chcon -Rt svirt_sandbox_file_t /var/lib/icp/helmrepo
  sudo chcon -Rt svirt_sandbox_file_t /var/lib/icp/mgmtrepo
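
After you relabel the directories, you can confirm that the helm-repo and mgmt-repo pods recover. This check is a sketch; the pods might take a few minutes to restart, or might need to be deleted so that they are rescheduled:

  kubectl --kubeconfig /etc/origin/master/admin.kubeconfig -n kube-system get pods | grep -E 'helm-repo|mgmt-repo'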