Installing IBM Cloud Pak foundational services in an air-gapped environment using a portable storage device

If your cluster is not connected to the internet, you can install IBM Cloud Pak foundational services in your cluster by using a bastion host, a portable compute device, or a portable storage device.

1. Set up your mirroring environment

Before you install any IBM Cloud Pak® in an air-gapped environment, you must set up a host that can connect to the internet and complete the configuration of your mirroring environment there. To set up your mirroring environment, complete the following steps:

1.1. Prerequisites

No matter what medium you choose for your air-gapped installation, you must satisfy the following prerequisites:

Prepare a host

Regardless of which type of host you're using, you must be able to connect it to the internet and to the air-gapped network with access to the Red Hat® OpenShift® Container Platform cluster and the local, intranet Docker registry. Your host must be on a Linux® x86_64 or Mac platform with any operating system that the IBM Cloud Pak® CLI and the Red Hat® OpenShift® Container Platform CLI support. If you are on a Windows platform, you must execute the actions in a Linux® x86_64 VM or from a Windows Subsystem for Linux (WSL) terminal.

Your host must have sufficient storage to hold all of the software that is to be transferred to the local, intranet Docker registry.
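For example, a quick pre-flight check of the host architecture and free disk space might look like the following sketch (the 50 GB threshold is an assumption; size the offline store for the CASE content that you actually mirror):

```shell
# Pre-flight check for the mirroring host (sketch; the 50 GB threshold is an assumption)
arch=$(uname -m)
[ "$arch" = "x86_64" ] || echo "Warning: unexpected architecture: $arch" >&2

# Free space in $HOME, where the offline store is created in a later step
free_kb=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')
free_gb=$((free_kb / 1024 / 1024))
echo "Free space in \$HOME: ${free_gb} GB"
[ "$free_gb" -ge 50 ] || echo "Warning: less than 50 GB free" >&2
```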

The following table explains the software requirements for installing IBM Cloud Pak foundational services in an air-gapped environment:

Table 2. Software requirements and purpose
Software Purpose
OpenSSL Validating certificates when you run the air-gapped install scripts
Docker Container management
Podman Container management
Apache httpd-tools Creating an account when you run the air-gapped install scripts
IBM Cloud Pak® CLI (cloudctl) Running CASE commands
Red Hat OpenShift CLI (oc) Red Hat OpenShift Container Platform administration
Skopeo Working with container images and registries in an air-gapped environment

Complete the following steps on your host:

  1. Install OpenSSL version 1.1.1 or higher.

  2. Install Docker or Podman.

    Notes:

    • If you want to install a foundational services version earlier than 3.7.3 (that is, a CASE archive earlier than ibm-cp-common-services-1.3.3.tgz), install a Docker version in the 18.x.x - 19.x.x range. Run yum install <package name>-<version info> with the specific version that you need.
    • If you want to install foundational services on Red Hat Enterprise Linux 8.x or later, Docker is not supported, and you can use Podman instead.

    To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:

       yum check-update
       yum install docker
    

    To install Podman, see Podman Installation Instructions Opens in a new tab.
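
    After installing either engine, you can confirm that one is available before continuing. The following detection sketch prefers Podman when both are present:

```shell
# Confirm that a supported container engine is on the PATH (detection sketch)
if command -v podman >/dev/null 2>&1; then
  echo "Container engine: $(podman --version)"
elif command -v docker >/dev/null 2>&1; then
  echo "Container engine: $(docker --version)"
else
  echo "No container engine found: install Docker or Podman first" >&2
fi
```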

  3. Install httpd-tools.

    yum install httpd-tools
    
  4. Download and install the most recent version of cloudctl-linux-amd64.tar.gz from the IBM/cloud-pak-cli repo. Extract the binary file by entering the following command:

    tar -xf cloudctl-linux-amd64.tar.gz
    

    Run the following command to make the file executable:

    chmod 755 cloudctl-linux-amd64
    

    Run the following command to move the file to the /usr/local/bin directory:

    mv cloudctl-linux-amd64 /usr/local/bin/cloudctl
    

    Note: You can confirm that cloudctl is installed by entering the following command:

    cloudctl --help
    

    The cloudctl usage is displayed.

  5. Install the oc Red Hat® OpenShift® Container Platform CLI tool. For more information, see Red Hat® OpenShift® Container Platform CLI tools.

  6. Install the skopeo CLI version 1.0.0 or higher. For more information, select install.md in the Skopeo repo Opens in a new tab.
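
    After installing skopeo, you can confirm that it meets the 1.0.0 minimum. The parsing below is a sketch that assumes output in the form skopeo version X.Y.Z:

```shell
# Check the skopeo version against the 1.0.0 minimum (parsing is a sketch)
ver=$(skopeo --version 2>/dev/null | awk '{print $3}')
major=${ver%%.*}
if [ "${major:-0}" -ge 1 ]; then
  echo "skopeo $ver meets the minimum version"
else
  echo "skopeo ${ver:-not found} is older than 1.0.0" >&2
fi
```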

  7. Create a directory that serves as the offline store.

    Following is an example directory, which is used in the subsequent steps.

    mkdir $HOME/offline
    

    Note: This offline store must be persistent to avoid transferring data more than once. The persistence also helps to run the mirroring process multiple times or on a schedule.

If you are installing foundational services by using a portable storage device, the device must have sufficient storage and must be attached to the external host.

1.2. Set up local image registry and access

You must use a local Docker registry to store all of your images in your intranet network. Many organizations have one or more centralized, corporate registry servers that store production container images. If a local registry is not available, you must install and configure a production-grade registry. To access your registries during an air-gapped installation, you must use an account that can write to the target local registry from the portable storage device and read from the target local registry from the OpenShift cluster nodes. Create such a registry before you continue.

After you create the internal Docker registry, you must configure the registry:

  1. Create registry namespaces. If you are using a Podman or Docker registry as your internal registry, you can skip this step; the registry creates these namespaces for you when you mirror the images.

    A registry namespace is the first component in the image name. For example: in the image name, icr.io/cpopen/myproduct, the namespace portion of the image name is cpopen.

    If you are using a different registry provider, you must create a separate registry namespace for each public registry source. These namespaces are the locations that the contents of the CASE file are pushed to when you run your CASE commands.

    The following registry namespaces might be used by the CASE command:

    • cp - Namespace to store the IBM images from the cp.icr.io/cp repository.

      The cp namespace is for the images in the IBM Entitled Registry that require a product entitlement key and credentials to pull.

    • cpopen - Namespace to store the operator related IBM images from the icr.io/cpopen repository.

      The cpopen namespace is for publicly available images hosted by IBM® that don't require credentials to pull. The images are hosted in cpopen for foundational services versions 3.18 and later.

    • opencloudio - Namespace to store the images from quay.io/opencloudio.

      The opencloudio namespace is for select IBM open source component images that are available on quay.io. The images are hosted in opencloudio for foundational services versions 3.17 and prior.

  2. Verify that each namespace meets the following requirements:

    • Supports auto-repository creation.
    • Has credentials of a user who can write and create repositories. The external host uses these credentials.
    • Has credentials of a user who can read all repositories. The Red Hat® OpenShift® Container Platform cluster uses these credentials.
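
The naming convention described above (registry host first, namespace second) can be illustrated by splitting an image reference with shell parameter expansion:

```shell
# Split an image reference into its registry host and namespace components
image=icr.io/cpopen/myproduct
registry_host=${image%%/*}   # first component: icr.io
remainder=${image#*/}
namespace=${remainder%%/*}   # second component: cpopen
echo "registry=$registry_host namespace=$namespace"
# → registry=icr.io namespace=cpopen
```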

2. Set environment variables and download CASE files

See the following notes:

If your bastion host, portable compute device, or portable storage device must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server. For detailed information, see Setting up proxy environment variables.

Before you mirror your images, set the environment variables on your mirroring device and connect to the internet so that you can download the corresponding CASE files.

Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when completing your air-gapped installation tasks.

To finish preparing your host, complete the following steps:

  1. Create the following environment variables with the installer image name and the image inventory on your host.

    Important: To install IBM Cloud Pak foundational services version 3.14.0, use the ibm-cp-common-services-1.9.0.tgz CASE archive. If you want to install an earlier version of IBM Cloud Pak foundational services, see Table 1. Image versions for offline installation for the CASE versions that you can use.

       export CASE_NAME=ibm-cp-common-services
       export CASE_VERSION=1.9.0
       export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
       export CASE_INVENTORY_SETUP=ibmCommonServiceOperatorSetup
       export OFFLINEDIR=$HOME/offline
       export OFFLINEDIR_ARCHIVE=offline.tgz
       export CASE_REPO_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case
       export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
    
       export PORTABLE_DOCKER_REGISTRY_HOST=localhost
       export PORTABLE_DOCKER_REGISTRY_PORT=443
       export PORTABLE_DOCKER_REGISTRY=$PORTABLE_DOCKER_REGISTRY_HOST:$PORTABLE_DOCKER_REGISTRY_PORT
       export PORTABLE_DOCKER_REGISTRY_USER=localuser
       export PORTABLE_DOCKER_REGISTRY_PASSWORD=l0calPassword!
       export PORTABLE_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
    
  2. Connect your host to the internet and disconnect it from the local air-gapped network.

  3. Download the IBM Cloud Pak foundational services installer and image inventory to your host.

       cloudctl case save \
       --repo $CASE_REPO_PATH \
       --case $CASE_NAME \
       --version $CASE_VERSION \
       --outputdir $OFFLINEDIR
    

Your host is now configured and you are ready to mirror your images.

Important: For portable storage devices, you must run a Docker registry service on your connected device (localhost). Complete the following steps:

Note: The default Docker registry server image is docker.io/library/registry:2.8.1. You can export REGISTRY_IMAGE to use a different version. For example, export REGISTRY_IMAGE=docker.io/library/registry:latest.

a. Initialize the Docker registry:

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action init-registry \
       --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"

b. Start the Docker registry:

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action start-registry \
       --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --port $PORTABLE_DOCKER_REGISTRY_PORT --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH"

3. Mirror images to portable registry

The process of mirroring images pulls each image from the internet to your host, and then copies that image onto your air-gapped environment. After you mirror your images, you can configure your cluster and complete the air-gapped installation.

Complete the following steps to mirror your images from your host to your air-gapped environment:

3.1. Mirror the images to the host

Complete these steps to mirror the images from the internet to your host:

Note: Don't use the tilde within double quotation marks in any command. For example, don't use args "--registry <registry> --user <registry userid> --pass {registry password} --inputDir ~/offline". The tilde does not expand and your commands might fail.

  1. Store authentication credentials for all source Docker registries.

    All IBM Cloud Pak foundational services images are stored in public registries that don't require authentication. However, other products and third-party components can require one or more authenticated registries. The following registries require authentication:

    • cp.icr.io
    • registry.redhat.io
    • registry.access.redhat.com

    For more information about these registries, see Create registry namespaces.

    You must run the following command to configure credentials for all registries that require authentication. Run the command separately for each such registry:

    cloudctl case launch \
      --case $CASE_LOCAL_PATH \
      --inventory $CASE_INVENTORY_SETUP \
      --action configure-creds-airgap \
      --args "--registry <registry> --user $REGISTRY_USER --pass $REGISTRY_PASSWORD"
    

    The command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location.

  2. Store authentication credentials of the host Docker registry:

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action configure-creds-airgap \
       --args "--registry $PORTABLE_DOCKER_REGISTRY --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD"
    

    The command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location.

  3. Mirror the images to the registry on the host.

    cloudctl case launch \
      --case $CASE_LOCAL_PATH \
      --inventory $CASE_INVENTORY_SETUP \
      --action mirror-images \
      --args "--registry $PORTABLE_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
    

    Notes:

    • If you want to mirror images to an alternative namespace, use the --nsPrefix argument. The argument copies images to a specific location in your mirror registry instead of the default path. For example, the following CASE command implements the --nsPrefix argument for locationX/:

       cloudctl case launch \
         --case $CASE_LOCAL_PATH \
         --inventory $CASE_INVENTORY_SETUP \
         --action mirror-images \
         --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR --nsPrefix locationX/"
      
    • Some products support the ability to mirror a subset of images using the --filter argument and image grouping. The --filter argument provides the ability to customize which images are mirrored during an air-gapped installation. For example, you have an Elasticsearch CASE that contains groups that allow you to mirror specific versions of Elasticsearch. Use the --filter argument to target one or more versions of Elasticsearch to mirror rather than the entire library.

      Consider the following command:

       cloudctl case launch \
         --case $CASE_LOCAL_PATH \
         --inventory $CASE_INVENTORY_SETUP \
         --action mirror-images \
         --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR"
      

      Update the --args argument to include a --filter argument. For example, with --args "--registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR --filter ibm_es_7", only the images that are associated with Elasticsearch version 7.0 are mirrored. The resulting set consists of the images in the ibm_es_7 image group plus any images that are not associated with any group. This grouping allows products to include common images while reducing the number of images that you need to mirror.

  4. Save the Docker registry image.

If your air-gapped network doesn’t have a Docker registry image, you can save the image and copy it later to the host in your air-gapped environment:

   docker save docker.io/library/registry:2.8.1 -o $PORTABLE_DOCKER_REGISTRY_PATH/registry-image.tar

3.2. Copy saved offline data (for a portable storage device only)

  1. Connect the portable storage device, such as a USB drive or external HDD, to this external host.

  2. Archive the offline data for transfer:

    tar -cvzf $OFFLINEDIR_ARCHIVE -C $OFFLINEDIR .
    
  3. Copy the preceding TAR file to the portable storage.

    Physically transfer your portable storage device from the machine that has a public internet connection to the machine that has no internet connectivity (your air-gapped environment).

  4. Proceed to the next section to set up your cluster.
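
To guard against corruption during the physical transfer, you can record a checksum of the archive before you copy it and verify the checksum on the air-gapped host. This sketch assumes that sha256sum is available on both machines:

```shell
# On the connected host, before copying to the portable storage device:
sha256sum "$OFFLINEDIR_ARCHIVE" > "$OFFLINEDIR_ARCHIVE.sha256"

# On the air-gapped host, after copying both files from the device:
sha256sum -c "$OFFLINEDIR_ARCHIVE.sha256"   # prints "offline.tgz: OK" on success
```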

3.3. Connect your host to your air-gapped environment and set up your container

  1. Connect your host device to the air-gapped network and disconnect it from the internet.
  2. Log in to the Red Hat® OpenShift® Container Platform cluster as a cluster administrator. The following sample command logs in to the Red Hat® OpenShift® Container Platform cluster:

    oc login <cluster host:port> --username=<cluster admin user> --password=<cluster admin password>
    
  3. Create an environment variable with the namespace in which to install foundational services:

    export NAMESPACE=ibm-common-services
    

    Note: If you want to install the IBM Common Service Operator in all namespaces, export openshift-operators as the NAMESPACE value.

  4. Create your Kubernetes namespace:

    oc create namespace $NAMESPACE
    

You have now created your Kubernetes namespace on your air-gapped environment. You can now configure your cluster.

3.4. Mirror images to final location and configure the cluster

Complete these steps on your host that is connected to both the local docker registry and the Red Hat® OpenShift® Container Platform cluster:

  1. Create environment variables with the local Docker registry connection information.

    export CASE_NAME=ibm-cp-common-services
    export CASE_VERSION=1.9.0
    export CASE_ARCHIVE=$CASE_NAME-$CASE_VERSION.tgz
    export CASE_INVENTORY_SETUP=ibmCommonServiceOperatorSetup
    export OFFLINEDIR=$HOME/offline
    export OFFLINEDIR_ARCHIVE=offline.tgz
    export CASE_REPO_PATH=https://github.com/IBM/cloud-pak/raw/master/repo/case
    export CASE_LOCAL_PATH=$OFFLINEDIR/$CASE_ARCHIVE
    
    export PORTABLE_DOCKER_REGISTRY_HOST=localhost
    export PORTABLE_DOCKER_REGISTRY_PORT=443
    export PORTABLE_DOCKER_REGISTRY=$PORTABLE_DOCKER_REGISTRY_HOST:$PORTABLE_DOCKER_REGISTRY_PORT
    export PORTABLE_DOCKER_REGISTRY_USER=localuser
    export PORTABLE_DOCKER_REGISTRY_PASSWORD=l0calPassword!
    export PORTABLE_DOCKER_REGISTRY_PATH=$OFFLINEDIR/imageregistry
    
    export LOCAL_DOCKER_REGISTRY_HOST=<IP_or_FQDN_of_local_docker_registry>
    export LOCAL_DOCKER_REGISTRY_PORT=443
    export LOCAL_DOCKER_REGISTRY=$LOCAL_DOCKER_REGISTRY_HOST:$LOCAL_DOCKER_REGISTRY_PORT
    export LOCAL_DOCKER_REGISTRY_USER=<username>
    export LOCAL_DOCKER_REGISTRY_PASSWORD=<password>
    

    Note: The preceding example assumes that the Docker registry uses a standard port such as 80 or 443. If your Docker registry uses a non-standard port, specify the port by using the syntax <host>:<port>. For example, export LOCAL_DOCKER_REGISTRY=myregistry.local:5000.

  2. Extract the transferred offline data:

    mkdir -p $OFFLINEDIR
    tar -xvf $OFFLINEDIR_ARCHIVE -C $OFFLINEDIR
    
  3. Set up the registry. Run the local Docker registry as a container. The registry then points to the Docker file system directory that is transferred from the external host:

    cloudctl case launch \
      --case $CASE_LOCAL_PATH \
      --inventory $CASE_INVENTORY_SETUP \
      --action start-registry \
      --args "--registry $PORTABLE_DOCKER_REGISTRY_HOST --port $PORTABLE_DOCKER_REGISTRY_PORT --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD --dir $PORTABLE_DOCKER_REGISTRY_PATH" \
      --tolerance 1
    
  4. Configure an authentication secret for the Docker registry.

    Note: This step needs to be done only one time.

    On a portable storage device, store credentials of the registry that is running on the internal host (created in the previous step):

       cloudctl case launch \
         --case $CASE_LOCAL_PATH \
         --inventory $CASE_INVENTORY_SETUP \
         --action configure-creds-airgap \
         --args "--registry $PORTABLE_DOCKER_REGISTRY --user $PORTABLE_DOCKER_REGISTRY_USER --pass $PORTABLE_DOCKER_REGISTRY_PASSWORD" \
         --tolerance 1
    

    Then, store credentials of the registry that is going to serve images to the cluster/workloads:

       cloudctl case launch \
         --case $CASE_LOCAL_PATH \
         --inventory $CASE_INVENTORY_SETUP \
         --action configure-creds-airgap \
         --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_REGISTRY_USER --pass $LOCAL_DOCKER_REGISTRY_PASSWORD" \
         --tolerance 1
    

    The command stores and caches the registry credentials in a file on your file system in the $HOME/.airgap/secrets location.

  5. Mirror images to the local image registry:

    On a portable storage device, mirror the images from the portable registry to the target registry on your cluster.

    cloudctl case launch \
      --case $CASE_LOCAL_PATH \
      --inventory $CASE_INVENTORY_SETUP \
      --action mirror-images \
      --args "--fromRegistry $PORTABLE_DOCKER_REGISTRY --registry $LOCAL_DOCKER_REGISTRY --inputDir $OFFLINEDIR" \
      --tolerance 1
    
  6. Using the oc login command, log in to the Red Hat® OpenShift® Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

  7. Create a new project for the CASE commands by running the following commands:

    export NAMESPACE=ibm-common-services
    
    oc new-project ibm-common-services
    
  8. Configure a global image pull secret and ImageContentSourcePolicy.

    Note: You must have cluster admin access to run the configure-cluster-airgap command. However, you do not need cluster admin access for mirroring.

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action configure-cluster-airgap \
       --namespace $NAMESPACE \
       --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_REGISTRY_USER --pass $LOCAL_DOCKER_REGISTRY_PASSWORD --inputDir $OFFLINEDIR" \
       --tolerance 1
    

    Note: If you include the --nsPrefix argument in your mirror-image command, you must also include it in your configure-cluster-airgap command to set up the image redirection. For example, the following command adds the --nsPrefix argument to the preceding command:

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action configure-cluster-airgap \
       --namespace $NAMESPACE \
       --nsPrefix $NSPREFIX \
       --args "--registry $LOCAL_DOCKER_REGISTRY --user $LOCAL_DOCKER_REGISTRY_USER --pass $LOCAL_DOCKER_REGISTRY_PASSWORD --inputDir $OFFLINEDIR"
    

    If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and reboot sequentially to apply the configuration changes.

  9. Verify that the ImageContentSourcePolicy resource is created.

    oc get imageContentSourcePolicy
    
  10. Optional: If you use an insecure registry, you must add the local registry to the cluster insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${LOCAL_DOCKER_REGISTRY}'"]}}}'
    
  11. Verify your cluster node status.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated before you continue.

4. Install IBM Cloud Pak foundational services by way of Red Hat OpenShift Container Platform

Now that your images are mirrored to your air-gapped environment, you can deploy your IBM Cloud® Paks to that environment. When you mirrored your environment, you created a parallel offline version of everything that you needed to install an operator into Red Hat® OpenShift® Container Platform. To install the IBM Cloud Pak foundational services, complete the following steps:

4.1. Create the catalog source and install the IBM Cloud Pak foundational services

  1. Set the namespace to install the foundational services catalog:

    export NAMESPACE=ibm-common-services
    
  2. Create and configure a catalog source.

    cloudctl case launch \
      --case $CASE_LOCAL_PATH \
      --inventory $CASE_INVENTORY_SETUP \
      --action install-catalog \
      --namespace $NAMESPACE \
      --args "--registry $LOCAL_DOCKER_REGISTRY" \
      --tolerance 1
    

    Note: In foundational services versions 3.13 and earlier, the install-catalog command deploys the catalog source with the latest tag. Starting with foundational services version 3.14, install-catalog deploys the catalog source, opencloud-operators, with a catalog source image digest.

  3. Verify that the CatalogSource for the foundational services installer operator is created.

    oc get pods -n openshift-marketplace
    oc get catalogsource -n openshift-marketplace
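
    In addition, you can check that the catalog is actually serving its index by inspecting the connection state of the opencloud-operators catalog source (the name comes from the install-catalog step; READY is the standard Operator Lifecycle Manager state for a healthy catalog):

```shell
# Expect "READY" once the catalog pod is serving its index
oc get catalogsource opencloud-operators -n openshift-marketplace \
  -o jsonpath='{.status.connectionState.lastObservedState}'
```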
    
  4. (Optional) Create a configmap if you want to install foundational services in a custom namespace. By default, the foundational services are installed in the ibm-common-services namespace. For more information, see Installing IBM Cloud Pak foundational services in a custom namespace.
    Important: You can install foundational services in only one namespace in your cluster.

  5. Install the foundational services operators.

    Note: You must have cluster admin access to run the install-operator command. However, you do not need cluster admin access for mirroring.

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action install-operator \
       --namespace $NAMESPACE \
       --tolerance 1
    

    Note: The foundational services are by default installed with the starterset deployment profile. You can change the profile to small by adding --args "--size small", or to large by adding --args "--size large", to the command. See the following example:

       cloudctl case launch \
           --case $CASE_LOCAL_PATH \
           --inventory $CASE_INVENTORY_SETUP \
           --action install-operator \
           --namespace $NAMESPACE \
           --args "--size small" \
           --tolerance 1
    
  6. Verify that the foundational services are installed:

    oc get pod -n ibm-common-services
    

It might take up to 15 minutes for all the pods to show the Running status.
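
As an alternative to polling oc get pod, you can block until the pods report Ready (a sketch; the timeout value is an assumption, and completed job pods that never become Ready can cause this command to time out even on a healthy cluster):

```shell
# Wait for all pods in the namespace to become Ready, up to 15 minutes
oc wait pod --all --for=condition=Ready -n ibm-common-services --timeout=15m
```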

Note: If you want to deploy Db2, you must load the Db2 CASE bundle and update the OperandRegistry before you create OperandRequest.

   oc edit opreg common-service -n ibm-common-services

Replace the catalog source value ibm-operator-catalog with ibm-db2uoperator-catalog so that the catalog source looks like the following example:

   - channel: v1.0
     name: ibm-db2u-operator
     namespace: placeholder
     packageName: db2u-operator
     scope: public
     sourceName: ibm-db2uoperator-catalog
     sourceNamespace: openshift-marketplace

4.2. Access the console

Use the following command to get the URL to access the console:

   oc get route -n ibm-common-services cp-console -o jsonpath='{.spec.host}'

This command results in the following output:

   cp-console.apps.mycluster.mydomain.com

Based on the example output, your console URL would be https://cp-console.apps.mycluster.mydomain.com.

4.3. Retrieve your console username and password

The default username to access the console is admin.

You can get the password for the admin username by running the following command:

   oc -n ibm-common-services get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d

The following password is an example of the output of the preceding command:

   EwK9dj9fwPZHyHTyu9TyIgh9klZSzVsA

Based on the example output, you would then use EwK9dj9fwPZHyHTyu9TyIgh9klZSzVsA as the password.

You can change the default password at any time. For more information, see Changing the cluster administrator password.

Note: Any user with access to the ibm-common-services namespace can retrieve this password because it is stored in a secret in that namespace. To minimize password exposure, limit the users who can access the ibm-common-services namespace.

You have now successfully created and deployed your air-gapped instance of IBM Cloud Pak foundational services.

For more information about troubleshooting your air-gapped installation, see Troubleshooting an air-gapped installation.

Setting up a repeatable air-gap process

Once you have completed a CASE save, you can mirror the CASE as many times as you want. This approach allows you to air-gap a specific version of the Cloud Pak into development, test, and production stages.

Follow the steps in this section if you want to save the CASE once, mirror it to multiple registries (one per environment), and run the CASE in the future without repeating the CASE save process.

  1. Run the following command to save the CASE to $OFFLINEDIR, which can be used as an input during the CASE launch:

     cloudctl case save \
       --repo $CASE_REPO_PATH \
       --case $CASE_NAME \
       --version $CASE_VERSION \
       --outputdir $OFFLINEDIR

  2. Execute the CASE launch by running the following command:

     cloudctl case launch \
       --case $CASE_LOCAL_PATH \
       --inventory $CASE_INVENTORY_SETUP \
       --action mirror-images \
       --args "--registry $PORTABLE_DOCKER_REGISTRY --inputDir $OFFLINEDIR" \
       --tolerance 1

If you want to make this repeatable across environments, reuse the same saved CASE cache ($OFFLINEDIR) instead of executing a CASE save again in each environment. Reusing the cache also ensures that updated versions of dependencies are not unexpectedly pulled into the saved cache.
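
For example, the following sketch mirrors one saved CASE cache to several per-environment registries in a single pass (the registry host names are placeholders; credentials for each registry must already be configured with configure-creds-airgap):

```shell
# Mirror the same saved CASE cache to each environment's registry (hosts are placeholders)
for registry in registry.dev.example.com:443 registry.test.example.com:443; do
  cloudctl case launch \
    --case "$CASE_LOCAL_PATH" \
    --inventory "$CASE_INVENTORY_SETUP" \
    --action mirror-images \
    --args "--registry $registry --inputDir $OFFLINEDIR" \
    --tolerance 1
done
```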