Structuring your deployment

Get guidance and best practices for planning your installation and use of IBM Cloud Pak® for Integration.

Considerations

Here are some of the questions that you might need to consider when you plan your deployment:

  • Do you want to install the Long Term Support or Continuous Delivery version of Cloud Pak for Integration?

  • How many Red Hat OpenShift clusters do you need?

  • How should you deploy the operators and instances within those clusters?

  • Is it possible (or desirable) to share clusters between different user namespaces, environments, or lifecycle phases?

  • What are the implications of sharing a cluster between two or more logically separate activities?

  • How can you make the best use of namespaces (projects)?

Terminology

namespace

A Kubernetes object that is used for isolating groups of resources within a single cluster. This Kubernetes term is equivalent to the project term in Red Hat OpenShift. The phrases "in a namespace" and "in all namespaces" in this topic refer to the operator installation modes shown as A specific namespace on the cluster and All namespaces on the cluster in the Red Hat OpenShift interface. For more information, see Namespaces in the Kubernetes documentation and Adding Operators to a cluster in the OpenShift documentation.
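
As a minimal illustration, a namespace is an ordinary Kubernetes object; in OpenShift you typically create one as a project with the oc new-project command. The name and label in this sketch are illustrative assumptions, not values that Cloud Pak for Integration requires.

  apiVersion: v1
  kind: Namespace
  metadata:
    name: cp4i-dev              # hypothetical namespace for development instances
    labels:
      environment: development  # labels help with selection and policy targeting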

operator

An operator provides custom resources in a Kubernetes cluster. Operators reconcile custom resources (CRs) to create instances. For more information, see Operators on Red Hat OpenShift in the OpenShift documentation.

pod

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. For more information, see Pods in the Kubernetes documentation.

Sharing infrastructure across workloads in Red Hat OpenShift

Although many organizations successfully use namespaces to provide a degree of separation between teams, workloads, and environments, Red Hat OpenShift itself does not natively support full isolation of different workloads that run in a single cluster. This challenge applies equally to workloads with any of the following aspects:

  • Different environments in the development lifecycle for an application (such as development, test, and production environments that share the same cluster).

  • Different user applications in the same environment (for example, development for Application A, Application B, and Application C).

  • Workloads for different customer organizations (in the traditional "Software as a Service" (SaaS) or hosted-service sense of multi-tenancy).

Comparing Red Hat OpenShift with a traditional deployment (on virtual machines) helps clarify the challenge of providing workload isolation:

  • A Red Hat OpenShift cluster does not offer the same isolation as a configuration where different VMs are deployed for each workload, even if those VMs run on the same hypervisor. For example, pods from different namespaces can share a worker node and compete for resources, and cluster-level configuration elements such as CustomResourceDefinitions, the global pull secret, and ImageContentSourcePolicy objects affect all namespaces.

  • Because Kubernetes worker nodes are often VMs themselves, the equivalent scenario is running multiple workloads inside the same VM.

Considerations before deployment

The techniques within Red Hat OpenShift for sharing a cluster across workloads have a range of benefits and drawbacks that you should take into account when planning a deployment architecture. This section describes some potential benefits and trade-offs at the various layers of a deployment.

Long Term Support or Continuous Delivery releases

Long Term Support (LTS) releases prioritize stability and are supported for a longer period, whereas Continuous Delivery (CD) releases offer more frequent access to new features.

For more information about which releases of Cloud Pak for Integration are LTS or CD, see IBM Cloud Pak for Integration Software Support Lifecycle Addendum on the IBM Support website.

User roles and organizational structure

To ensure your deployment approach addresses the needs of various user roles, you might consider the organizational structure of your company:

  • Does a single administration team manage both the OpenShift cluster and the product-level administration of instances (such as integration runtimes or queue managers), or are these responsibilities handled by different teams?

  • Is there a single team for all instances, or does each instance type have its own specialized administration team?

  • How much access is required to the OpenShift layer by the application developers, who are the ultimate consumers of the instances that are deployed?

  • Some instance types, such as integration runtimes, might require frequent Red Hat OpenShift access by developers to deploy updated integration flows for testing.

  • For other instance types, such as Queue manager, API management, and Enterprise gateway, the instance is more commonly configured up-front. In this case, developers can be granted instance-level access in the Platform UI (rather than at the OpenShift level) to complete self-service tasks.

OpenShift Container Platform (Kubernetes-based)

Several technical characteristics of OpenShift that are inherited from upstream Kubernetes affect the ability to provide full isolation between tenants.

You should consider these constraints:

  • Network bandwidth cannot be easily isolated to a specific workload.

  • Instances that are deployed in a namespace might consume CPU and memory, which impacts pods in other namespaces that are running on the same node. See Node placement for guidance on placing instances on specific nodes.

  • Instances can also contend for ephemeral storage on a worker node. Running out of ephemeral storage can cause pods to be evicted, which impacts end users. Setting explicit resource requests and limits can mitigate this contention, as shown in the sketch after this list.

  • A cluster runs a single version of Red Hat OpenShift that applies to all namespaces. For this reason, all workloads must be ready to support new versions of OpenShift at the same time, and any issues introduced by upgrading to a new version of OpenShift apply immediately to everything that is deployed there.
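
As a mitigation for CPU, memory, and ephemeral storage contention, you can set explicit requests and limits on workload containers, and use node labels to steer workloads onto dedicated nodes. The following fragment is a minimal, hypothetical sketch for a generic Deployment; Cloud Pak for Integration instances normally expose equivalent settings through fields in their custom resources, and all names and values here are illustrative assumptions.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: example-workload        # hypothetical workload name
    namespace: cp4i-dev           # hypothetical namespace
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: example-workload
    template:
      metadata:
        labels:
          app: example-workload
      spec:
        nodeSelector:
          workload-tier: integration   # assumes nodes are labeled for this tier
        containers:
          - name: main
            image: registry.example.com/example:1.0   # placeholder image
            resources:
              requests:
                cpu: 500m
                memory: 512Mi
                ephemeral-storage: 1Gi   # reserve node-local scratch space
              limits:
                cpu: "1"
                memory: 1Gi
                ephemeral-storage: 2Gi   # cap usage so neighboring pods are not evicted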

Operators for Red Hat OpenShift and Cloud Pak for Integration

An operator consists of the following components:

  • Custom resource definitions (CRDs) that define Kubernetes resources. CRDs are cluster-scoped objects and cannot be constrained to individual namespaces.

  • Controllers that reconcile custom resources to install instances.

In Red Hat OpenShift, CRDs are objects that are shared across all namespaces. Therefore, if you have two controllers that manage the same kind of object in different namespaces, the CRD must be compatible with both controllers. A scenario in which two controllers use the same CRD increases the complexity of the system and the possibility of encountering unexpected errors.

Within Red Hat OpenShift, operators and their dependencies are installed and managed by Operator Lifecycle Manager (OLM). OLM enforces the following rules for installation:

  • Because each custom resource can be managed by only one controller, if an operator is installed in A specific namespace on the cluster mode, it cannot also be installed in All namespaces on the cluster mode.

  • OLM installs an operator only if its custom resources are each managed by only one controller.
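
In practice, the installation mode is determined by the OperatorGroup in the namespace where you create the operator's Subscription. The following sketch shows an OperatorGroup that scopes operators to a single namespace (A specific namespace on the cluster mode); the names are illustrative assumptions.

  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: cp4i-operator-group   # hypothetical name
    namespace: cp4i             # hypothetical namespace
  spec:
    targetNamespaces:
      - cp4i                    # operators installed here watch only this namespace

An OperatorGroup without targetNamespaces, such as the default global OperatorGroup in the openshift-operators namespace, corresponds to All namespaces on the cluster mode.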

The implications of installing a Cloud Pak for Integration operator in All namespaces on the cluster mode versus A specific namespace on the cluster mode are as follows:

  • If an operator is installed in all namespaces, its controller manages all of its owned custom resources within all namespaces, and there can only be one installation of that operator.

  • The operators can manage multiple custom resources and multiple versions of the instances. Operators are tested for compatibility with previous instance versions, so when you upgrade the operators, the following benefits apply:

    • The operators can manage all custom resources in the cluster.

    • The operators can manage all versions of the custom resources in the cluster. (This behavior does not apply to the IBM API Connect operator, which manages only the latest version and upgrades instances from previous versions.)

    • The operators do not upgrade an instance automatically; instead, you edit the custom resource to request the upgrade, as shown in the sketch after this list. (This behavior does not apply to the IBM Event Streams operator.)

  • If an operator is installed in a specific namespace, its controller manages only the custom resources that the operator owns in that namespace. Therefore, you can install copies of the operator in more than one namespace.
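
For example, upgrading an instance typically means editing the version field of its custom resource, after which the operator reconciles the change. The following fragment is a hypothetical sketch for an IBM MQ queue manager; the exact fields, version strings, and license identifiers depend on your operator release, and the values shown are placeholders.

  apiVersion: mq.ibm.com/v1beta1
  kind: QueueManager
  metadata:
    name: example-qm            # hypothetical instance name
    namespace: cp4i
  spec:
    version: 9.3.5.0-r2         # illustrative version; edit this field to request an upgrade
    license:
      accept: true
      license: L-XXXX-XXXXXX    # placeholder license identifier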

IBM Cloud Pak Platform UI

After you install and deploy the Platform UI, you can use it to easily deploy all available instance types.

You can install the Cloud Pak for Integration operator either in all namespaces or in a specific namespace on the cluster. For each Cloud Pak for Integration operator that you install, you can create only one Platform UI instance, as follows:

  • If you install the operator in All namespaces on the cluster mode, the Platform UI displays instances from the whole cluster.

  • If you install the operator in A specific namespace on the cluster mode, you must deploy a separate Platform UI in each namespace where it is required. Each Platform UI instance displays instances from its own namespace only.

IBM Cloud Pak foundational services

The IBM Cloud Pak foundational services in Cloud Pak for Integration enable supporting functions such as identity and access management (Keycloak) and databases (EDB). The Keycloak instance is typically configured to integrate with the corporate identity management system, such as LDAP. When so configured, the instance can adopt the same user registry and group structure that is used elsewhere in the installation environment.

Optimal deployment approach

This approach for structuring your Cloud Pak for Integration installation takes into account the considerations that are mentioned in the previous section. This information contains guidelines, not requirements; you might need to follow an alternative installation approach (such as the one described in the next section) to satisfy organizational goals and constraints.

Deploy a separate cluster for each stage of the development lifecycle.

To restrict access to customer data and live systems, and to prevent any non-production activities (development or test, for example) from affecting customer-facing endpoints, industry best practice is to deploy a production workload on a different infrastructure from the one you use for non-production deployments. Even within non-production environments, the safest option is to use a separate cluster for each environment.

This practice offers the best separation and flexibility for implementing new functions without the risk of affecting established environments. For example, to upgrade to a new version of OpenShift Container Platform without the risk of changing your test environment, use a separate cluster for development.

Important: Using separate clusters for each environment requires deploying and managing extra resources, so users commonly share a cluster for certain environments. While this practice reduces infrastructure costs, the tradeoff is reduced isolation. Make sure that you clearly understand the risks involved before you share a cluster between environments.

When you deploy a separate cluster in each environment, install operators across all namespaces.

When a cluster has a single purpose, there is little benefit to deploying operators in a specific namespace on the cluster. Installing operators across all namespaces makes the deployment simpler and more manageable.
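
For example, creating the Subscription in the openshift-operators namespace, which contains the default cluster-wide OperatorGroup, installs an operator in All namespaces on the cluster mode. This is a minimal sketch; the channel value is an assumption, so verify it against the channels available in your catalog.

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ibm-integration-platform-navigator   # Platform UI operator package
    namespace: openshift-operators             # global OperatorGroup: all namespaces
  spec:
    channel: v7.3                              # assumed channel; check your catalog
    name: ibm-integration-platform-navigator
    source: ibm-operator-catalog               # assumes the IBM operator catalog source is installed
    sourceNamespace: openshift-marketplace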

If the operators in Cloud Pak for Integration are installed in all namespaces, the following points apply:

  • There can only be one instance of the Platform UI for that cluster.

  • The Platform UI instance provides access-controlled management to instances that are deployed across all namespaces in the cluster.

Use a namespace for each logical grouping of users who manage deployments in the cluster.

If a single team is managing all instances, start with a single namespace; you can subdivide into more than one namespace later, as needed. If multiple, independent teams exist, you can use different namespaces to hold each team's deployments.

Depending on your organizational structure (for example, whether a single team covers all instances or whether each team has its own specialized administration team), you might need specific namespaces. For example, you might need a specific namespace for the following teams:

  • A cross-functional team that manages instances of several different instance types.

  • A domain-specific team that manages a single instance type on behalf of multiple applications.

Grouping a set of instances into a namespace enables the administrator to efficiently apply controls for things like resource quotas (for resources such as CPU and memory) and network policies, and to more easily filter log output.
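
For example, a ResourceQuota can cap the total CPU and memory that a team's namespace can request. This is a minimal sketch; the namespace name and the values are illustrative assumptions.

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: team-a-quota          # hypothetical name
    namespace: team-a           # hypothetical team namespace
  spec:
    hard:
      requests.cpu: "16"        # total CPU the namespace can request
      requests.memory: 64Gi
      limits.cpu: "32"
      limits.memory: 128Gi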

Create supporting configuration items, such as Secrets and Config Maps, only in the namespace in which they are required by the instances. This action restricts access to the minimum necessary set of users and workloads.
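
The following is a minimal sketch of a Secret that is created only in the namespace whose instances need it; the names and values are placeholders.

  apiVersion: v1
  kind: Secret
  metadata:
    name: backend-credentials   # hypothetical name
    namespace: team-a           # create only where the instances require it
  type: Opaque
  stringData:
    username: placeholder-user  # placeholder values; supply real credentials securely
    password: placeholder-pass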

Subdivide your deployment into multiple namespaces, even if the same team of people is managing instances in all those namespaces.

Using a namespace for each application domain or business domain can make it easier to group smaller numbers of instances together or to help match the scope of other consumers (such as application developers).

Using multiple namespaces can be particularly beneficial if there are large numbers (for example, hundreds) of instances.

Alternative installation approach (common for non-production)

This section describes the characteristics of an alternative approach, in which a cluster is shared across different non-production environments.

A cluster can be shared between different environments in the development lifecycle, and between different teams.

Typically, you share a cluster between different workloads by using a different namespace for each environment. This arrangement maintains logical separation between the different deployments.

Because resources such as network bandwidth cannot be easily isolated to a specific workload, Red Hat OpenShift does not provide perfect multi-tenancy within a cluster. This limitation increases the risk that activity in one namespace (such as performance testing or testing of an upgrade process) affects workloads that are running in other namespaces. Another potential drawback of sharing a cluster is that when you upgrade to a new version of OpenShift Container Platform, the Kubernetes changes that it introduces immediately affect all the environments (such as development and test environments) that are running in that cluster.

Using separate clusters (instead of a shared cluster) can cause an inefficient use of resources in these situations:

  • In smaller organizations, where each environment contains a limited number of instances. In this case, the overhead of multiple clusters can be significant even when small worker nodes are used.

  • In non-production environments, where any problems that are caused by interference between environments should have minimal impact (for example, where a performance run in a test environment occurs outside normal working hours for developers).

In a shared cluster approach, you can benefit from installing operators in a specific namespace instead of in all namespaces.

You can have different versions of instances in different namespaces. This independence means that you can maintain separate upgrade cycles in different namespaces. For example, you can try out a new version of the Queue manager instance type alongside an existing deployment.

Important: Because operator CRDs are cluster-scoped objects that are shared by all namespaces, upgrade behavior is not completely isolated between namespaces. A change to a CRD applies to the operators in all namespaces, affecting all relevant deployments. Therefore, if you choose to deploy in a specific namespace, do not install an operator version that is lower than the latest version that is already present anywhere (in any namespace) in the cluster.

The following additional constraints apply:

  • In this setup, you must create one Platform UI instance in each namespace where it is required.

  • If you install an operator in a specific namespace, you cannot also install it in all namespaces.

A single instance of a resource-intensive instance type can be shared across multiple user domains to optimize resource requirements.

In high-availability configurations, or in any situation where multiple user namespaces need resource-intensive instances such as API management, consider deploying only one instance of each type and sharing it among the user workloads that need it.

You can also use features of the instances themselves (similar to standard multi-tenancy) that enable separation, such as a different provider organization per namespace in API management. However, there are limits to the tenant isolation that this approach provides.

Important: Sharing an instance might add complexity to the implementation of the applications that use that instance. For example, applications might need to dynamically configure the API endpoint that they use, rather than depending on a single, fixed name across environments.
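
One way to configure the endpoint dynamically is to externalize it per environment, for example in a ConfigMap that the application reads at startup. This is a minimal sketch; the names and URL are illustrative assumptions.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: api-endpoints         # hypothetical name
    namespace: team-a           # one ConfigMap per environment or namespace
  data:
    API_GATEWAY_URL: https://gateway.dev.example.com   # per-environment endpoint value

An application container can then consume this value as an environment variable (for example, with envFrom and a configMapRef) instead of hard-coding the gateway host name.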