App Connect Dashboard reference

Use this reference to create and delete App Connect Dashboard instances by using the IBM® Cloud Pak for Integration Platform Navigator, or the Red Hat® OpenShift® web console or CLI.

Introduction

The App Connect Dashboard API enables you to create an App Connect Dashboard instance for administering integration servers, which are deployed from BAR files that users upload into the dashboard instance. An App Connect Dashboard instance provides a runtime environment for hosting production workloads.


Usage guidelines:

Only one App Connect Dashboard instance is recommended per namespace (project). If you require more than one Dashboard instance (for example, to set up staging and production instances, or to group your integration servers), create each instance in a separate namespace in which the IBM App Connect Operator is running.

Prerequisites

  • Red Hat OpenShift Container Platform 4.6 is required for EUS compliance. Red Hat OpenShift Container Platform 4.10 is also supported for migration from EUS to a Continuous Delivery (CD) or Long Term Support (LTS) release.
  • For any production licensed instances, you must create a secret called ibm-entitlement-key in the namespace where you want to create IBM App Connect resources. For more information, see IBM Entitled Registry entitlement keys.
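The entitlement secret is typically created with a command along the following lines (the registry server cp.icr.io and username cp follow IBM's published conventions; substitute your own entitlement key and target namespace):

```shell
# Create the pull secret that is used to pull licensed App Connect images.
# Replace <entitlement-key> with the key from your IBM entitlement account,
# and <namespace> with the project where you will create App Connect resources.
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<entitlement-key> \
  --namespace=<namespace>
```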

Red Hat OpenShift SecurityContextConstraints requirements

IBM App Connect runs under the default restricted SecurityContextConstraints.

Resources required

Minimum recommended requirements:

  • CPU: 0.5 Cores
  • Memory: 0.75 GB

For information about how to configure these values, see Custom resource values.

Creating an instance

You can create an App Connect Dashboard instance from the IBM Cloud Pak for Integration Platform Navigator, or by using the Red Hat OpenShift web console or CLI.

Before you begin

  • The IBM App Connect Operator must be installed in your cluster, either through a standalone deployment or as part of an installation of IBM Cloud Pak for Integration.
  • Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
  • Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify when you create the instance determines how that instance is upgraded after installation, and whether you need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that subscribes to a channel for updates, or one that pins the instance to a specific version, review the upgrade considerations for channels, versions, and licenses before you start this task.
    Namespace restriction for an instance, server, or configuration:

    The namespace in which you install must be no more than 40 characters in length.
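For example, the difference between subscribing to a channel and pinning a fully qualified version shows up in the CR as follows (the values here are illustrative; see the spec.version values documentation for what is actually available):

```yaml
# Channel: the instance is upgraded automatically as new versions
# become available on the subscribed channel.
spec:
  version: 11.0.0-eus
---
# Fully qualified version: the instance stays on this exact release
# until you explicitly change spec.version.
spec:
  version: 11.0.0.18-r1-eus
```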

Creating an instance from the IBM Cloud Pak for Integration Platform Navigator

To create a Dashboard instance from the IBM Cloud Pak for Integration Platform Navigator, complete the following steps:

  1. From a browser window, log in to the IBM Cloud Pak for Integration Platform Navigator.
  2. From the Capabilities tab, click Create capability.
  3. Click the App Connect Dashboard tile and click Next.
  4. Click a tile to select which type of instance you want to create:
    • Quick start: Deploy a development dashboard with one replica pod.
    • Production: Deploy a production dashboard with multiple replica pods for high availability.
  5. Click Next. A UI form view opens with the minimum configuration required to create the instance.
  6. Complete either of the following steps:
    • To quickly get going, complete the configuration fields. You can display advanced settings in the UI form view by setting Advanced settings to On. Note that some fields might not be represented in the form.
    • For a more advanced configuration, click YAML to switch to the YAML view and then update the editor with your required parameters.
  7. Click Create. You are redirected to the Platform Navigator. An entry for the instance is shown in the capabilities table with an initial status of Pending, which you can click to check the progress of the deployment. When the deployment completes, the status changes to Ready.

Users with the required permission can access this Dashboard instance and use it to deploy Designer and Toolkit integrations to integration servers.

Creating an instance from the Red Hat OpenShift web console

To create a Dashboard instance by using the Red Hat OpenShift web console, complete the following steps:

  1. Applicable to IBM Cloud Pak for Integration only:
    1. If not already logged in, log in to the Platform Navigator for your cluster.
    2. From the IBM Cloud Pak menu, click OpenShift Console and log in if prompted.
  2. Applicable to an IBM App Connect Operator deployment only: From a browser window, log in to the Red Hat OpenShift Container Platform web console.
  3. From the navigation, click Operators > Installed Operators.
  4. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  5. From the Installed Operators page, click IBM App Connect.
  6. From the Operator Details page for the App Connect Operator, click the App Connect Dashboard tab.
  7. Click Create Dashboard. Switch to the YAML view if necessary for a finer level of control over your installation settings. The minimum custom resource (CR) definition that is required to create an App Connect Dashboard instance is displayed.

    From the Details tab on the Operator Details page, you can also locate the App Connect Dashboard tile and click Create Instance to specify installation settings for the instance.

  8. Update the content of the YAML editor with the parameters and values that you require for this Dashboard instance.
  9. Optional: If you prefer to use the Form view, click Form View and then complete the fields. Note that some fields might not be represented in the form.
  10. Click Create to start the deployment. An entry for the Dashboard instance is shown in the Dashboards table, initially with a Pending status.
  11. Click the Dashboard name to view information about its definition and current status.

    On the Details tab of the page, the Conditions section reveals the progress of the deployment.

    Note: The Admin UI field provides the URL for accessing the Dashboard instance. You can also locate this URL under Networking > Routes in the console navigation.

    Share this URL with users who have access to this namespace, and who will need to use the Dashboard instance to deploy integration servers.

    You can use the breadcrumb trail to return to the (previous) Operator Details page for the App Connect Operator. When the deployment is complete, the status is shown as Ready in the Dashboards table.

Creating an instance from the Red Hat OpenShift CLI

To create a Dashboard instance from the Red Hat OpenShift CLI, complete the following steps:

  1. From your local computer, create a YAML file that contains the configuration for the App Connect Dashboard instance that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the instance; this should be the same namespace where the other App Connect instances or resources are created.

    Example:

    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: db-prod
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-APEH-CEKET7
        use: AppConnectEnterpriseProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                cpu: 250m
                memory: 250Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                cpu: 250m
                memory: 250Mi
              requests:
                cpu: 50m
                memory: 125Mi
      replicas: 3
      storage:
        class: ibmc-file-gold-gid
        size: 5Gi
        type: persistent-claim
      useCommonServices: true
      version: 11.0.0-eus
  2. Save this file with a .yaml extension; for example, dashboard_cr.yaml.
  3. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  4. Run the following command to create the App Connect Dashboard instance. (Use the name of the .yaml file that you created.)
    oc apply -f dashboard_cr.yaml
  5. Run the following command to check the status of the App Connect Dashboard instance and verify that it is ready:
    oc get dashboards -n namespace

    The output will also provide the URL of the Dashboard instance; for example:

    NAME       RESOLVEDVERSION     REPLICAS   CUSTOMIMAGES   STATUS    URL                                                          AGE
    db-prod    11.0.0.18-r1-eus    3          false          Ready     https://db-prod-ui-mynamespace.apps.acecc-abcde.icp4i.com    9m12s

    Share the URL value in the output with users who have access to this namespace, and who will need to use the Dashboard instance to deploy integration servers.
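If you need to retrieve the URL again later, the Dashboard UI is also exposed as an OpenShift route in the instance's namespace; for example (using the namespace from the sample CR):

```shell
# List the routes in the namespace; the Dashboard UI route appears
# alongside any other App Connect routes.
oc get routes -n mynamespace
```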

Deleting an instance

If no longer required, you can uninstall an App Connect Dashboard instance by deleting it. You can delete an instance from the IBM Cloud Pak for Integration Platform Navigator, or by using the Red Hat OpenShift web console or CLI.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Deleting an instance from the IBM Cloud Pak for Integration Platform Navigator

To delete a Dashboard instance from the IBM Cloud Pak for Integration Platform Navigator, complete the following steps:

  1. From a browser window, log in to the IBM Cloud Pak for Integration Platform Navigator.
  2. On the Capabilities tab, search in the table to locate the App Connect Dashboard instance that you want to delete.
  3. Click the options icon for that row to open the options menu, and then click Delete.
  4. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift web console

To delete a Dashboard instance by using the Red Hat OpenShift web console, complete the following steps:

  1. Applicable to IBM Cloud Pak for Integration only:
    1. If not already logged in, log in to the Platform Navigator for your cluster.
    2. From the IBM Cloud Pak menu, click OpenShift Console and log in if prompted.
  2. Applicable to an IBM App Connect Operator deployment only: From a browser window, log in to the Red Hat OpenShift Container Platform web console.
  3. From the navigation, click Operators > Installed Operators.
  4. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  5. From the Installed Operators page, click IBM App Connect.
  6. From the Operator Details page for the App Connect Operator, click the App Connect Dashboard tab.
  7. Locate the Dashboard instance that you want to delete.
  8. Click the options icon for that row to open the options menu, and then click Delete.
  9. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift CLI

To delete a Dashboard instance from the Red Hat OpenShift CLI, complete the following steps:

  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. Run the following command to delete the App Connect Dashboard instance, where instanceName is the value of the metadata.name parameter.
    oc delete dashboard instanceName
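For example, to delete the instance from the sample CR shown earlier (add the -n flag if your session is not already switched to that project):

```shell
# Delete the Dashboard custom resource. Deleting the CR also removes
# the deployments, services, and routes that the Operator created for it.
oc delete dashboard db-prod -n mynamespace
```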

Custom resource values

The following table lists the configurable parameters and default values for the custom resource. Each entry shows the parameter name, followed by its description and, where applicable, its default value.

apiVersion

The API version that identifies which schema is used for this instance.

appconnect.ibm.com/v1beta1

kind

The resource type.

Dashboard

metadata.name

A unique short name by which the Dashboard instance can be identified.

metadata.namespace

The namespace (project) in which the Dashboard instance is installed.

The namespace in which you install must be no more than 40 characters in length.

spec.affinity

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.)

Custom settings are supported only for nodeAffinity. If you provide custom settings for nodeAntiAffinity, podAffinity, or podAntiAffinity, they will be ignored.

For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation.

spec.annotations

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. For example:

spec:
  annotations:
    key1: value1
    key2: value2

The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value.

spec.labels

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format labelKey: labelValue. For example:

spec:
  labels:
    key1: value1
    key2: value2

The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value.

spec.license.accept

An indication of whether the license should be accepted.

Valid values are true and false. To install, this value must be set to true.

false

spec.license.license

See Licensing reference for IBM App Connect Operator for the valid values.

spec.license.use

See Licensing reference for IBM App Connect Operator for the valid values.

If using an IBM Cloud Pak for Integration license, spec.useCommonServices must be set to true.

spec.logFormat

The format used for the container logs that are output to the container's console.

Valid values are basic and json.

basic

spec.pod.containers.content-server.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the content server that stores BAR files is still running) can fail before taking action.

1

spec.pod.containers.content-server.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the content server that stores BAR files is still running.

360

spec.pod.containers.content-server.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe, which checks whether the content server that stores BAR files is still running.

10

spec.pod.containers.content-server.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the content server that stores BAR files is still running) times out.

5

spec.pod.containers.content-server.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the content server that stores BAR files is ready) can fail before taking action.

1

spec.pod.containers.content-server.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the content server that stores BAR files is ready.

10

spec.pod.containers.content-server.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe, which checks whether the content server that stores BAR files is ready.

5

spec.pod.containers.content-server.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the content server that stores BAR files is ready) times out.

3

spec.pod.containers.content-server.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the content server container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.content-server.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the content server container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.content-server.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the content server container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.content-server.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the content server container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.containers.control-ui.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the Dashboard UI container is still running) can fail before taking action.

1

spec.pod.containers.control-ui.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the Dashboard UI container is still running. Increase this value if your system cannot start the Dashboard UI container in the default time period.

360

spec.pod.containers.control-ui.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe that checks whether the Dashboard UI container is still running.

10

spec.pod.containers.control-ui.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the Dashboard UI container is still running) times out.

5

spec.pod.containers.control-ui.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the Dashboard UI container is ready) can fail before taking action.

1

spec.pod.containers.control-ui.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the Dashboard UI container is ready.

10

spec.pod.containers.control-ui.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe that checks whether the Dashboard UI container is ready.

5

spec.pod.containers.control-ui.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the Dashboard UI container is ready) times out.

3

spec.pod.containers.control-ui.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the Dashboard UI container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.control-ui.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the Dashboard UI container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.control-ui.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the Dashboard UI container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.control-ui.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the Dashboard UI container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.imagePullSecrets.name

The secret used for pulling images.

spec.replicas

The number of replica pods to run for each deployment. A number from 1 to 10.

spec.storage.claimName

The name of an existing claim.

spec.storage.class

The storage class to use. If using IBM Cloud, set the storage class to ibmc-file-gold-gid.

spec.storage.selector

A label query over volumes to consider for binding. Optional when spec.storage.type is set to persistent-claim.

spec.storage.size

The maximum amount of storage to request when spec.storage.type is set to persistent-claim.

spec.storage.sizeLimit

The storage size limit when spec.storage.type is set to ephemeral.

spec.storage.type

Valid values are:
  • persistent-claim: Persistent claim storage is recommended for extra resilience.
  • ephemeral: When using ephemeral storage, there is a risk that data might be lost when pods restart.
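As an illustration, the two storage types differ in which of the spec.storage fields apply (class and size with persistent-claim; sizeLimit with ephemeral). The class name below assumes IBM Cloud:

```yaml
# Persistent storage (recommended): data survives pod restarts.
spec:
  storage:
    type: persistent-claim
    class: ibmc-file-gold-gid
    size: 5Gi
---
# Ephemeral storage: data might be lost when pods restart.
spec:
  storage:
    type: ephemeral
    sizeLimit: 5Gi
```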

spec.useCommonServices

An indication of whether to enable use of IBM Cloud Pak foundational services (previously IBM Cloud Platform Common Services).

Valid values are true and false.

Must be set to true if using an IBM Cloud Pak for Integration license (specified via spec.license.use).

true

spec.version

The product version that the Dashboard instance is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest available version of the IBM App Connect Operator.

To view the available values that you can choose from, see spec.version values.

11.0.0

Default affinity settings

The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated.

You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity will be ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              <copy of the pod labels>
          topologyKey: kubernetes.io/hostname
        weight: 100

Storage

The App Connect Dashboard requires a file-based StorageClass with ReadWriteMany (RWX) capability. If you are using IBM Cloud, use the ibmc-file-gold-gid StorageClass.

The file system must not be root-owned and must allow read/write access for the user that the Dashboard runs as. This user is a random unique identifier (UID) that is chosen by the Red Hat OpenShift cluster for all restricted pods in a given namespace.

Limitations

The App Connect Dashboard supports only the amd64 and s390x CPU architectures. For more information, see Supported platforms.