Installing IBM Cloud Pak foundational services in an air-gapped environment using a bastion host with the ibm-pak plug-in

If your cluster is not connected to the internet, you can install IBM Cloud Pak foundational services in your cluster by using a bastion host.

1. Set up your mirroring environment

Before you install any IBM Cloud Pak® in an air-gapped environment, you must set up a host that can be connected to the internet to complete the configuration of your mirroring environment. To set up your mirroring environment, complete the following steps:

1.1. Prerequisites

No matter what medium you choose for your air-gapped installation, you must satisfy the following prerequisites:

Prepare a host

Regardless of which type of host you use, it must be able to connect both to the internet and to the air-gapped network that has access to the Red Hat® OpenShift® Container Platform cluster and the local, intranet Docker registry. Your host must be a Linux® x86_64, Windows, or Mac machine that runs an operating system supported by the ibm-pak plug-in and the Red Hat® OpenShift® Container Platform CLI. If you are on a Windows platform, you must run the oc ibm-pak launch commands in a Linux® x86_64 VM or from a Windows Subsystem for Linux (WSL) terminal.

The following table explains the software requirements for installing IBM Cloud Pak foundational services in an air-gapped environment:

Table 1. Software requirements and purpose

Software                      Purpose
Docker                        Container management
Podman                        Container management
Red Hat OpenShift CLI (oc)    Red Hat OpenShift Container Platform administration

Complete the following steps on your host:

  1. Install Docker or Podman.

    Notes:

    • If you want to install a version of the foundational services earlier than 3.7.3 or ibm-cp-common-services-1.3.3.tgz (not included), install a Docker version between 18.x.x and 19.x.x, and run yum install <package name>-<version info>.
    • On Red Hat Enterprise Linux 8.x or later, use Podman to install foundational services; Docker is not supported.

    To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:

    Note: If you are installing as a non-root user, you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.

    yum check-update
    yum install docker
    

    To install Podman, see the Podman installation instructions.

  2. Install the oc Red Hat® OpenShift® Container Platform CLI tool. For more information, see Red Hat® OpenShift® Container Platform CLI tools.

  3. Download and install the most recent version of the IBM Catalog Management Plug-in for IBM Cloud Paks from the IBM/ibm-pak-plugin repository. Extract the binary file by entering the following command:

    tar -xf oc-ibm_pak-linux-amd64.tar.gz
    

    Run the following command to move the file to the /usr/local/bin directory:

    Note: If you are installing as a non-root user, you must use sudo.

    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    

    Note: Download the plug-in that matches the host operating system. You can confirm that the plug-in is installed by running the following command:

    oc ibm-pak --help
    

    The plug-in usage is displayed.

    The plug-in is also provided in a container image, cp.icr.io/cpopen/cpfs/ibm-pak:TAG, where TAG is replaced with the corresponding plug-in version. For example, cp.icr.io/cpopen/cpfs/ibm-pak:v1.2.0 contains version v1.2.0 of the plug-in.

    The following command creates a container, copies the plug-in binaries for all supported platforms into a directory named plugin-dir, and then deletes the temporary container. You can specify any directory name; it is created during the copy. The plugin-dir directory contains all the binaries and other artifacts that are available in a GitHub release of the IBM/ibm-pak-plugin repository.

    id=$(docker create cp.icr.io/cpopen/cpfs/ibm-pak:TAG - )
    docker cp $id:/ibm-pak-plugin plugin-dir
    docker rm -v $id
    cd plugin-dir
    

If you are installing foundational services by using a bastion host, the bastion server must be configured.

Creating registry namespaces

Top-level namespaces are the namespaces which appear at the root path of your private registry. For example, if your registry is hosted at myregistry.com:5000, then mynamespace in myregistry.com:5000/mynamespace is defined as a top-level namespace. There can be many top-level namespaces.

When the images are mirrored to your private registry, it is required that the top-level namespace where images are getting mirrored already exists or can be automatically created during the image push. If your registry does not allow automatic creation of top-level namespaces, you must create them manually.

In section 3.1, when you generate mirror manifests, you can specify the top-level namespace where you want to mirror the images by setting TARGET_REGISTRY to myregistry.com:5000/mynamespace. This approach has the benefit that you need to create only one namespace, mynamespace, in your registry if the registry does not allow automatic creation of namespaces.

If you do not specify your own top-level namespace, the mirroring process uses the namespaces that are specified by the CASEs. For example, it tries to mirror the images to myregistry.com:5000/cp, myregistry.com:5000/cpopen, and so on.

Therefore, if your registry does not allow automatic creation of top-level namespaces and you do not specify your own namespace when you generate the mirror manifests, you must create the default top-level namespaces (such as cp and cpopen) at the root of your registry.

There can be more top-level namespaces that you might need to create. See the Generate mirror manifests section for information about how to use the oc ibm-pak describe command to list all the top-level namespaces.
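
For example, after you download a CASE (see section 2), you can list the registries and top-level namespaces that it references. The following sketch assumes the --list-mirror-images option of the describe command, which might differ in your plug-in version; check the plug-in help (oc ibm-pak --help) for the exact options:

oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images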

2. Set environment variables and download CASE files

If your bastion host, portable compute device, or portable storage device must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server. For detailed information, see Setting up proxy environment variables.
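
For example, a minimal sketch (the proxy host, port, and no_proxy entries are placeholders for your own values; some tools also expect the uppercase variants HTTP_PROXY, HTTPS_PROXY, and NO_PROXY):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1,.mycluster.example.com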

Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:

Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when completing your air-gapped installation tasks.

  1. Create the following environment variables with the installer image name and the version.

    export CASE_NAME=ibm-cp-common-services
    export CASE_VERSION=1.9.0
    
  2. Connect your host to the internet and disconnect it from the local air-gapped network.

  3. The plug-in can detect the locale of your environment and provide help text and messages accordingly. You can optionally set the locale by running the following command:

    oc ibm-pak config locale -l LOCALE
    

    where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.

  4. Download the IBM Cloud Pak foundational services installer and image inventory to your host.

    Tip: If you do not specify the CASE version, it will download the latest CASE.

    oc ibm-pak get $CASE_NAME --version $CASE_VERSION
    

Notes:

By default, the root directory that is used by the plug-in is ~/.ibm-pak. This means that the preceding command downloads the CASE under ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root directory by setting the IBMPAK_HOME environment variable. When IBMPAK_HOME is set, the preceding command downloads the CASE under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.

The log files are available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
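
For example (an illustration of the IBMPAK_HOME note; the /opt/mirror directory is only a placeholder):

export IBMPAK_HOME=/opt/mirror
oc ibm-pak get $CASE_NAME --version $CASE_VERSION
# The CASE is saved under /opt/mirror/.ibm-pak/data/cases/ibm-cp-common-services/1.9.0
# and the log is written to /opt/mirror/.ibm-pak/logs/oc-ibm_pak.log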

Your host is now configured and you are ready to mirror your images.

3. Mirror images to your final location

The process of mirroring images copies the images from the internet to your host and then copies them to your air-gapped environment. After you mirror your images, you can configure your cluster and complete the air-gapped installation.


Complete the following steps to mirror your images from your host to your air-gapped environment:

3.1. Generate mirror manifests

  1. Define the environment variable $TARGET_REGISTRY by running the following command:

    export TARGET_REGISTRY=<target-registry>
    

    The <target-registry> refers to the registry (hostname and port) where your images are mirrored to and from which they are accessed by your cluster. For example: 172.16.0.10:5000

For example, setting TARGET_REGISTRY to myregistry.com:5000/mynamespace creates manifests such that the images are mirrored under the top-level namespace mynamespace.
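
For instance, to mirror all images under a single top-level namespace, you might set the variable as follows (the registry host and namespace are examples):

export TARGET_REGISTRY=myregistry.com:5000/mynamespace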

Run the following command to generate the mirror manifests that are used when mirroring the images to the target registry:

oc ibm-pak generate mirror-manifests \
   $CASE_NAME \
   $TARGET_REGISTRY \
   --version $CASE_VERSION

The preceding command will generate the files, images-mapping.txt and image-content-source-policy.yaml, at ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION.

The $TARGET_REGISTRY refers to the registry where your images will be mirrored to and accessed by the oc cluster.

Note: Consider the following command:

   oc ibm-pak generate mirror-manifests \
      ibm-cloud-native-postgresql \
      file://cpfs \
      --final-registry $TARGET_REGISTRY/cpfs \
      --filter $GROUPS

This command adds a --filter argument. For example, when $GROUPS is equal to ibmEdbStandard, the mirror manifests are generated only for the images that are associated with ibm-cloud-native-postgresql in its Standard variant. The resulting set consists of the images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any group. This allows products to include common images and gives you the ability to reduce the number of images that you need to mirror.

Example ~/.ibm-pak directory structure

The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure:

 tree ~/.ibm-pak
/root/.ibm-pak
├── config
│   └── config.yaml
├── data
│   ├── cases
│   │   └── ibm-cp-common-services
│   │       └── 1.9.0
│   │           ├── XXXXX
│   │           ├── XXXXX
│   └── mirror
│       └── ibm-cp-common-services
│           └── 1.9.0
│               ├── catalog-sources.yaml
│               ├── image-content-source-policy.yaml
│               └── images-mapping.txt
└── logs
    └── oc-ibm_pak.log

A new directory, ~/.ibm-pak/data/mirror, is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.

3.2. Authenticating the registry

Complete the following steps to authenticate your registries:

  1. Store authentication credentials for all source Docker registries.

    All IBM Cloud Pak foundational services images are stored in public registries that do not require authentication. However, other products and third-party components require one or more authenticated registries. The following registries require authentication:

    • cp.icr.io
    • registry.redhat.io
    • registry.access.redhat.com

    You must run the following commands to configure credentials for all registries that require authentication. Run the login command separately for each registry:

    export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
    podman login cp.icr.io
    podman login <TARGET_REGISTRY>
    

    Important: When you log in to cp.icr.io, you must specify the user as cp and the password, which is your entitlement key from the IBM Container registry. For example:

    podman login cp.icr.io
    Username: cp
    Password:
    Login Succeeded!
    

For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then after performing podman login, you can see that the file is populated with registry credentials.
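
The file typically uses the containers auth.json format, with entries similar to the following example (the registry names are illustrative and the auth values are base64-encoded credentials, shortened here):

{
  "auths": {
    "cp.icr.io": {
      "auth": "<base64-encoded cp:<entitlement key>>"
    },
    "myregistry.com:5000": {
      "auth": "<base64-encoded user:password>"
    }
  }
}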

If you use docker login, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After you run docker login, export REGISTRY_AUTH_FILE to point to that location. For example, on Linux you can issue the following command:

export REGISTRY_AUTH_FILE=$HOME/.docker/config.json

3.3. Mirror images to final location

Complete these steps on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster:

  1. Mirror images to the TARGET_REGISTRY.

    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*'  \
      -a $REGISTRY_AUTH_FILE \
      --insecure  \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    
    You can run the following command to see all the options that are available on the mirror command:

    oc image mirror --help

Note: The --continue-on-error option is used so that the command mirrors as many images as possible and continues when it encounters errors.

Note: Depending on the number and size of the images to be mirrored, the oc image mirror command might take a long time. If you issue the command on a remote machine, it is recommended that you run it in the background with nohup so that mirroring continues even if the network connection to the remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt.

   nohup oc image mirror \
   -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
   -a $REGISTRY_AUTH_FILE \
   --filter-by-os '.*' \
   --insecure \
   --skip-multiple-scopes \
   --max-per-registry=1 \
   --continue-on-error=true > my-mirror-progress.txt  2>&1 &

You can view the progress of the mirroring by issuing the following command on the remote machine:

   tail -f my-mirror-progress.txt

  2. Update the global image pull secret for your OpenShift cluster. Follow the steps in Updating the global cluster pull secret. The documented steps in the link enable your cluster to have the proper authentication credentials in place to pull images from your TARGET_REGISTRY, as specified in the image-content-source-policy.yaml that you will apply to your cluster in the next step.
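
    As a hedged sketch only (the linked documentation is authoritative), and assuming you have already merged the existing cluster pull secret with your TARGET_REGISTRY credentials into the file that $REGISTRY_AUTH_FILE points to, the update typically looks like the following command:

    oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=$REGISTRY_AUTH_FILE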

  3. Create ImageContentSourcePolicy

    Important: Before you run the command in this step, you must be logged into your OpenShift cluster.

    Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

    Run the following command to create ImageContentSourcePolicy.

    oc apply -f  ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
    

    If you are using Red Hat OpenShift Container Platform 4.7 or lower, this step might cause your cluster nodes to drain and reboot sequentially to apply the configuration changes.

  4. Verify that the ImageContentSourcePolicy resource is created.

    oc get imageContentSourcePolicy
    
  5. Verify your cluster node status.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated before you proceed to the next step.
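
    Optionally, instead of watching the output manually, you can wait for all pools to report the Updated condition (a convenience sketch, not part of the documented procedure):

    oc wait machineconfigpool --all --for=condition=Updated --timeout=30m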

  6. Create a new project for the CASE commands by running the following commands:

    Note: You must be logged into a cluster before performing the following steps.

    export NAMESPACE=ibm-common-services
    
    oc new-project $NAMESPACE
    
  7. Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
    
  8. Verify your cluster node status.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated.

4. Install IBM Cloud Pak foundational services by way of Red Hat OpenShift Container Platform

Now that your images are mirrored to your air-gapped environment, you can deploy your IBM Cloud® Paks to that environment. When you mirrored your environment, you created a parallel offline version of everything that you needed to install an operator into Red Hat® OpenShift® Container Platform. To install the IBM Cloud Pak foundational services, complete the following steps:

4.1. Create the catalog source and install the IBM Cloud Pak foundational services

Important: Before you run any oc ibm-pak launch command, you must be logged in to your cluster.

Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

  1. Set the namespace to install the foundational services catalog:

    export NAMESPACE=ibm-common-services
    
  2. Set the environment variable of the --inventory parameter:

    export CASE_INVENTORY_SETUP=ibmCommonServiceOperatorSetup
    
  3. Create and configure a catalog source.

    The recommended way to install the catalog is to run the following command:

    oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/catalog-sources.yaml
    

    The following command can also be used to install the catalog:

    oc ibm-pak launch \
    $CASE_NAME \
      --version $CASE_VERSION \
      --action install-catalog \
      --inventory $CASE_INVENTORY_SETUP \
      --namespace $NAMESPACE \
      --args "--user $LOCAL_DOCKER_USER --pass $LOCAL_DOCKER_PASSWORD --registry $TARGET_REGISTRY --recursive \
      --inputDir ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION"
    

    Note: In foundational services version 3.13 and earlier versions, the install-catalog command deploys the catalog source with the latest tag. Starting with foundational services version 3.14, install-catalog deploys the catalog source, opencloud-operators, with the catalog source image digest.
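
    To check whether the deployed catalog source references a tag or a digest, you can inspect its image field (a sketch that assumes the catalog source is named opencloud-operators and is created in the openshift-marketplace namespace):

    oc get catalogsource opencloud-operators -n openshift-marketplace -o jsonpath='{.spec.image}'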

  4. Verify that the CatalogSource for the foundational services installer operator is created.

    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace
    
  5. Optional: Create a configmap if you want to install foundational services in a custom namespace. By default, the foundational services are installed in the ibm-common-services namespace. For more information, see Installing IBM Cloud Pak foundational services in a custom namespace.

    Important: You can install foundational services in only one namespace in your cluster.

  6. Install the foundational services operators.

    Note: You must have cluster admin access to run the install-operator command. However, you do not need cluster admin access for mirroring.

    oc ibm-pak launch \
       $CASE_NAME \
       --version $CASE_VERSION \
       --inventory $CASE_INVENTORY_SETUP \
       --action install-operator \
       --namespace $NAMESPACE
    

    Note: The foundational services are by default installed with the starterset deployment profile. You can change the profile to small by adding --args "--size small", or to large by adding --args "--size large", to the command. See the following example:

    oc ibm-pak launch \
       $CASE_NAME \
       --version $CASE_VERSION \
       --inventory ibmCommonServiceOperatorSetup \
       --action install-operator \
       --namespace $NAMESPACE \
       --args "--size small"
    
  7. Using the oc login command, log in to the Red Hat® OpenShift® Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

  8. Verify that the foundational services are installed:

    oc get pod -n ibm-common-services
    

    It might take up to 15 minutes for all the pods to show the Running status.

    Note: If you want to deploy Db2, you must load the Db2 CASE bundle and update the OperandRegistry before you create OperandRequest.

    oc edit opreg common-service -n ibm-common-services
    

    Replace the catalog source value ibm-operator-catalog with ibm-db2uoperator-catalog so that the entry looks like the following example:

    - channel: v1.0
      name: ibm-db2u-operator
      namespace: placeholder
      packageName: db2u-operator
      scope: public
      sourceName: ibm-db2uoperator-catalog
      sourceNamespace: openshift-marketplace
    

4.2. Access the console

Use the following command to get the URL to access the console:

oc get route -n ibm-common-services cp-console -o jsonpath='{.spec.host}'

The preceding command returns the following output:

cp-console.apps.mycluster.mydomain.com

Based on the example output, your console URL would be https://cp-console.apps.mycluster.mydomain.com.
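
As a small convenience (not required by the procedure), you can build the full URL in one step:

echo "https://$(oc get route -n ibm-common-services cp-console -o jsonpath='{.spec.host}')"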

4.3. Retrieve your console username and password

The default username to access the console is admin.

You can get the password for the admin username by running the following command:

oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d

The following password is an example of the output of the preceding command:

EwK9dj9fwPZHyHTyu9TyIgh9klZSzVsA

Based on the example output, you would then use EwK9dj9fwPZHyHTyu9TyIgh9klZSzVsA as the password.

You can change the default password at any time. For more information, see Changing the cluster administrator password.

Note: Any user with access to the ibm-common-services namespace can retrieve this password because it is stored in a secret in that namespace. To minimize password exposure, limit the users who can access the ibm-common-services namespace.

You have now successfully created and deployed your air-gapped instance of IBM Cloud Pak foundational services.

For more information about troubleshooting your air-gapped installation, see Troubleshooting an air-gapped installation.

Setting up a repeatable air-gap process

After you complete a CASE save, you can mirror the CASE as many times as you want. This approach allows you to air-gap a specific version of the Cloud Pak into development, test, and production stages.

Follow the steps in this section if you want to save the CASE to multiple registries (per environment) once and be able to run the CASE in the future without repeating the CASE save process.

  1. Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION which can be used as an input during the mirror manifest generation:

    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    
  2. Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt file:

    oc ibm-pak generate mirror-manifests \
    $CASE_NAME \
    $TARGET_REGISTRY \
    --version $CASE_VERSION
    

    Then pass the images-mapping.txt file to the oc image mirror command:

    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*'  \
      -a $REGISTRY_AUTH_FILE \
      --insecure  \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    

If you want to make this repeatable across environments, you can reuse the same saved CASE cache (~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION) instead of executing a CASE save again in other environments. You do not have to worry about updated versions of dependencies being brought into the saved cache.
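
For example, you could archive the saved CASE cache on the connected host and extract it on another mirroring host before generating mirror manifests there (an illustrative sketch; the archive name is arbitrary):

tar -czf ibm-pak-case-cache.tar.gz -C ~/.ibm-pak data/cases/$CASE_NAME/$CASE_VERSION
# Copy the archive to the other host, then extract it under that host's ~/.ibm-pak directory:
tar -xzf ibm-pak-case-cache.tar.gz -C ~/.ibm-pak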

Tip: The process remains the same if you are installing IBM Cloud Pak foundational services in an air-gapped environment using a file system (ibm-pak plug-in). For more information, see Installing IBM Cloud Pak foundational services in an air-gapped environment using a portable compute or storage device (ibm-pak plug-in).