App Connect Dashboard reference

Use this reference to create, update, or delete App Connect Dashboard instances by using the IBM® Cloud Pak Platform UI, or the Red Hat® OpenShift® web console or CLI.

Tip: A deployed IBM Cloud Pak for Integration Platform UI instance gives you access to the IBM Cloud Pak Platform UI, where you can create and manage instances of capabilities from a central location.

Introduction

The App Connect Dashboard API enables you to create an App Connect Dashboard instance for administering integration servers, which are deployed from BAR files that users upload into the Dashboard instance. An App Connect Dashboard instance provides a runtime environment for hosting production workloads.

Usage guidelines:

One App Connect Dashboard instance is recommended per namespace (or project) to host your integration servers. However, if you require more than one Dashboard instance (for example, to set up staging and production instances, or to group your integration servers), you can create the instances in the same or in separate namespaces in which the IBM App Connect Operator is running.

Prerequisites

Red Hat OpenShift SecurityContextConstraints requirements

IBM App Connect runs under the default restricted SecurityContextConstraints.

Resources required

Minimum recommended requirements:

  • CPU: 0.5 Cores
  • Memory: 0.75 GB

For information about how to configure these values, see Custom resource values.
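
As a minimal sketch, CPU and memory are allocated per container through the spec.pod.containers.* resource settings in the custom resource. The values shown here are the defaults that are listed in Custom resource values; adjust them to suit your workload.

spec:
  pod:
    containers:
      content-server:
        resources:
          requests:
            cpu: 250m      # minimum CPU reserved for the content server container
            memory: 256Mi
          limits:
            cpu: 1         # upper CPU limit for the content server container
            memory: 512Mi
      control-ui:
        resources:
          requests:
            cpu: 250m      # minimum CPU reserved for the Dashboard UI container
            memory: 256Mi
          limits:
            cpu: 1
            memory: 512Mi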

Storage

When you upload or import BAR files to the App Connect Dashboard for deployment to integration servers, the BAR files are stored in a content server that is associated with the App Connect Dashboard instance. The content server is created as a container in the App Connect Dashboard deployment and can either store uploaded (or imported) BAR files in a volume in the container’s file system, or store them within a bucket in a simple storage service that provides object storage via a web interface.

Before you create the App Connect Dashboard instance, you must decide what type of storage to use for uploaded or imported BAR files because you will need to specify this storage type while creating the Dashboard and will not be able to change this setting after the Dashboard is created.

Supported storage types

The following storage types can be used to allocate storage for the content server:

Persistent storage

With this storage type, any BAR files that you upload to the App Connect Dashboard (while creating an integration server) or that you import (by using the "BAR files" page in the Dashboard) are stored in a persistent volume in the container’s file system. The persistent volume can be dynamically provisioned through a storage class that is available on the cluster, or can be requested through a claim name. Persistent storage is recommended for extra resilience because the BAR files are retained in the content server when pods restart and are deleted only when you delete the Dashboard.

The App Connect Dashboard requires a file-based storage class with ReadWriteMany (RWX) capability. If using IBM Cloud, use the ibmc-file-gold-gid storage class.

The file system must not be root-owned and must allow read/write access for the user that the Dashboard runs as. This user is assigned a random unique identifier (UID) that is chosen by the Red Hat OpenShift cluster for all restricted pods in a given namespace.

If using Azure File storage, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. From the Dashboard's pod YAML resource, get the uid value from the runAsUser field. For more information, see Considerations when using Azure File.

If you choose persistent storage, you will need to specify a storage class or the name of an existing claim, the storage size, and optionally, a label selector.
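
The following fragment is a minimal sketch of these settings in the custom resource, assuming dynamic provisioning through a storage class (the class and size values are taken from the CLI examples later in this topic and are illustrative):

spec:
  storage:
    type: persistent-claim
    # Either class (dynamic provisioning) or claimName (an existing claim) is required
    class: ibmc-file-gold-gid
    size: 5Gi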

Ephemeral storage

With this storage type, an ephemeral volume is created when a Dashboard pod is started, and uploaded or imported BAR files are stored in this volume in the container’s file system. The ephemeral (emptyDir) volume exists only for the lifetime of the pod, so the BAR files will be lost when the pod restarts. You might typically choose this storage type if creating an environment for demonstration or testing.

If you choose ephemeral storage, you can specify a storage size limit for the volume.
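
A minimal sketch of the corresponding settings in the custom resource (the size limit value is illustrative):

spec:
  storage:
    type: ephemeral
    sizeLimit: 5Gi    # optional size limit for the emptyDir volume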

Simple Storage Service (S3) storage

S3 storage offers an alternative option to persistent storage, and enables the content server to support the use of an object storage service for BAR file storage by using the S3 REST API.

Note: S3 storage is applicable only if the spec.version value in the App Connect Dashboard custom resource resolves to 12.0.1.0-r1 or later.
Restriction:
  • S3 storage is not supported when the IBM App Connect Operator is installed in a restricted network because a cluster requires internet access to read and write BAR files from or to an S3 object store.
  • Only Amazon S3 and IBM Cloud Object Storage S3 are supported as S3 providers.

Any BAR files that you upload or import to the Dashboard will be stored with read/write access in a specified bucket in your S3 instance. These BAR files will all be visible on the "BAR files" page (which presents a view of the content server) and can also be viewed in the S3 bucket. An uploaded or imported BAR file is stored with a generated static token, which is hidden in the Dashboard UI, but visible as an object within the S3 bucket.

BAR files and associated tokens that are stored as objects in a bucket follow a naming convention, as shown in the following example for a BAR file named CustomerDatabaseV1.bar:

/mnt/data/content/CustomerDatabaseV1/bars/CustomerDatabaseV1.bar

/mnt/data/content/CustomerDatabaseV1/.token

(Figure: S3 bucket with BAR files and tokens)

If you choose S3 storage, you will need to specify the following storage settings for your provisioned S3 instance:

  • The name of an existing bucket for storing uploaded or imported BAR files

    Although you are not restricted from using an S3 bucket that already contains other objects, a dedicated bucket for BAR file storage is recommended. If you decide to use a bucket that contains other objects, the App Connect Dashboard will ignore those objects because they have no association with the content server.

    Note: If different instances of the App Connect Dashboard are configured to use the same S3 bucket with the same credentials and endpoint, BAR files that are uploaded or imported to the content server from either Dashboard will be visible on the "BAR files" page in both Dashboards and can be deployed from these Dashboards.
  • An S3 endpoint to which the S3 REST API sends requests for reading and writing objects

    You should be able to locate the available endpoints from your S3 instance. For example, on IBM Cloud Object Storage S3, you can locate the endpoints from the Endpoints page.

    To minimize latency, it is recommended that you use an S3 bucket that is in the same geographic area as your App Connect Dashboard instance. Also use an endpoint whose location or region is close to where the Dashboard is deployed. For more information, see Endpoints and storage locations in the IBM Cloud Object Storage S3 documentation and Amazon Simple Storage Service endpoints and quotas in the Amazon Web Services documentation.

  • S3 credentials for connecting to the bucket

    You will need to supply these credentials by creating a configuration of type S3Credentials. For more information, see Creating a configuration of type S3Credentials for use with the App Connect Dashboard.

To define the storage type when creating an App Connect Dashboard instance, specify your preferred type in the custom resource settings by setting the spec.storage.type parameter, and then complete the other spec.storage.* parameters as appropriate for the selected storage type. If you want to create an S3-compatible Dashboard, you must first create a configuration object of type S3Credentials to store the credentials for accessing the bucket.
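
For example, the spec.storage settings for an S3-compatible Dashboard might look like the following sketch, which reuses the bucket, host, and configuration names from the CLI examples later in this topic; substitute the values for your own S3 instance:

spec:
  storage:
    type: s3
    bucket: appc-operator-e2e                              # existing bucket with read/write access
    host: s3.eu-gb.cloud-object-storage.appdomain.cloud    # S3 endpoint for the bucket
    s3Configuration: s3credentials-ibmcosiam               # configuration object of type S3Credentials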

Creating an instance

You can create an App Connect Dashboard instance from the IBM Cloud Pak Platform UI, or the Red Hat OpenShift web console or CLI.

Before you begin

  • Ensure that the Prerequisites are met.
  • If you want to create an S3-compatible App Connect Dashboard instance, ensure that you have created a configuration object of type S3Credentials as described in Creating a configuration of type S3Credentials for use with the App Connect Dashboard.
  • Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.
    Namespace restriction for an instance, server, configuration, or trace:

    The namespace in which you create an instance or object must be no more than 40 characters in length.

Creating an instance from the IBM Cloud Pak Platform UI

To create an App Connect Dashboard instance from the IBM Cloud Pak Platform UI, complete the following steps:

  1. From a browser window, log in to the IBM Cloud Pak Platform UI.
    Tip: You can use the generated URL for a deployed IBM Cloud Pak for Integration Platform UI instance to access the IBM Cloud Pak Platform UI.

    The Platform UI home page opens with cards and navigation menu options that provide access to the instances and other resources that you are authorized to create, manage, or use. For information about completing administration tasks (such as user management or platform customization) from this page, see Platform UI in the IBM Cloud Pak foundational services documentation.

  2. From the navigation menu, expand Administration and click Integration instances.
  3. From the "Integration instances" page, click Create an instance.
  4. To create an App Connect Dashboard instance from the Create an instance page, click the Integration dashboard tile and click Next.
  5. From the "Create an integration dashboard" page, click a tile to select which type of instance you want to create:
    • Quick start: Deploy a development dashboard with one replica pod.
    • Production: Deploy a production dashboard with multiple replica pods for resilience and high availability.
  6. Click Next. A UI form view opens with the minimum configuration required to create the instance.
  7. Complete either of the following steps:
    • To quickly get going, complete the standard set of configuration fields. You can display advanced settings in the UI form view by setting Advanced settings to On. Note that some fields might not be represented in the form.
      • Name: Enter a short distinctive name that uniquely identifies this Dashboard.
      • Namespace: Enter the name of the namespace (project) where you want to create the Dashboard instance.
      • Channel or version: Select an App Connect product (fix pack) version that the Dashboard is based on. You can select a channel that will resolve to the latest fully qualified version on that channel, or select a specific fully qualified version. If you are using IBM App Connect Operator 5.0.4 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster. For more information about these values, see spec.version values.
      • Accept: Review the license in the supplied link and then click this check box to accept the terms and conditions.
      • License LI: Select a license identifier that aligns with the channel or the fully qualified version that you selected. For more information, see Licensing reference for IBM App Connect Operator.
      • License use: Select an appropriate CloudPakForIntegration or AppConnectEnterprise license type that you are entitled to use.
      • Replicas: Specify the number of replica pods to run for this deployment.
      • Storage type: Select the type of storage to use for storing BAR files that are uploaded or imported to the Dashboard.
        • persistent-claim: Choose this option for storage in a persistent volume in the container’s file system.
        • ephemeral: Choose this option for storage in an ephemeral volume that exists only for the lifetime of the pod.
        • s3: Choose this option for storage in a bucket in a Simple Storage Service (S3) instance.

        For more information, see Supported storage types.

      • Storage class: If preferred for a persistent volume, select a supported storage class for your cluster, which should be used to dynamically provision a persistent volume that belongs to that class.

        When Storage type is set to persistent-claim, either Storage class or Claim Name is required.

      • Claim Name: If preferred for a persistent volume, specify the name of an existing claim that should be used to request a persistent volume for BAR file storage. This claim must exist in the same namespace as the Dashboard.

        When Storage type is set to persistent-claim, either Storage class or Claim Name is required.

      • Size: Specify the maximum amount of storage required for a persistent volume, in decimal (G) or binary (Gi) format. This value is required if Storage type is set to persistent-claim.
      • s3 bucket: Specify the name of an existing bucket that is used for object storage in a Simple Storage Service (S3) instance. You must have read/write access to this bucket, which will be used to store BAR files that are uploaded or imported to the Dashboard. This value is required if Storage type is set to s3.

        For a list of supported S3 providers and considerations for choosing a bucket, see Supported storage types.

      • s3 host: Specify an endpoint that is associated with your Simple Storage Service (S3) system, to which the S3 REST API sends requests for reading and writing objects to the bucket specified in s3 bucket. This value is required if Storage type is set to s3.
      • s3 configuration: Specify the name of an existing configuration object of type S3Credentials, which stores credentials for accessing the bucket specified in s3 bucket. Set this parameter to the Name (or metadata.name) value that was specified while creating the configuration. This value is required if Storage type is set to s3.
    • For a more advanced configuration, click YAML to switch to the YAML view and then update the editor with your required parameters.
      • For information about the available parameters and their values, see Custom resource values.
      • For information about the supported storage types for storing BAR files that are uploaded to the App Connect Dashboard, see Storage. You can use the spec.storage.* parameters to allocate this storage. Note that you will not be able to change the storage type of your App Connect Dashboard instance after it's created.
      • For licensing information, see Licensing reference for IBM App Connect Operator.
  8. Click Create. You are redirected to the Integration instances page. An entry for the instance is shown in the table with an initial status of Pending, which you can click to check the progress of the deployment. When the deployment completes, the status changes to Ready.
    "Integration instances" page with Dashboard and Designer instances

Users with the required permission can access this Dashboard (Integration dashboard) instance by clicking the name, and then use the instance to deploy Designer and Toolkit integrations to integration servers.

Creating an instance from the Red Hat OpenShift web console

To create an App Connect Dashboard instance by using the Red Hat OpenShift web console, complete the following steps:

  1. Applicable to IBM Cloud Pak for Integration only:
    1. If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
    2. From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
  2. Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  3. From the navigation, click Operators > Installed Operators.
  4. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  5. From the Installed Operators page, click IBM App Connect.
  6. From the Operator details page for the App Connect Operator, click the Dashboard tab.
  7. Click Create Dashboard.

    From the Details tab on the Operator details page, you can also locate the Dashboard tile and click Create instance to specify installation settings for the instance.

  8. To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form. For information about completing the standard set of configuration fields, refer to the field descriptions in Creating an instance from the IBM Cloud Pak Platform UI. (In the web console, the namespace or project is not included in the form and should already be selected from an earlier step.)
  9. Optional: For a finer level of control over your installation settings, click YAML view to switch to the YAML view. Update the content of the YAML editor with the parameters and values that you require for this Dashboard instance.
    • To view the full set of parameters and values available, see Custom resource values.
    • For information about the supported storage types for storing BAR files that are uploaded to the App Connect Dashboard, see Storage. You can use the spec.storage.* parameters to allocate this storage. Note that you will not be able to change the storage type of your App Connect Dashboard instance after it's created.
    • For licensing information, see Licensing reference for IBM App Connect Operator.
  10. Click Create to start the deployment. An entry for the Dashboard instance is shown in the Dashboards table, initially with a Pending status.
  11. Click the Dashboard name to view information about its definition and current status.

    On the Details tab of the page, the Conditions section reveals the progress of the deployment.

    Note: The Admin UI field provides the URL for accessing the Dashboard instance. You can also locate this URL under Networking > Routes in the console navigation.
    (Figure: URL of the Dashboard instance displayed in the Admin UI field)

    Share this URL with users who have access to this namespace, and who will need to use the Dashboard instance to deploy integration servers.

    You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator. When the deployment is complete, the status is shown as Ready in the Dashboards table.

Creating an instance from the Red Hat OpenShift CLI

To create an App Connect Dashboard instance from the Red Hat OpenShift CLI, complete the following steps.

  1. From your local computer, create a YAML file that contains the configuration for the App Connect Dashboard instance that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the instance; this should be the same namespace where the other App Connect instances or resources are created.
    • To view the full set of parameters and values that you can specify, see Custom resource values.
    • For information about the supported storage types for storing BAR files that are uploaded to the App Connect Dashboard, see Storage. You can use the spec.storage.* parameters to allocate this storage. Note that you will not be able to change the storage type of your App Connect Dashboard instance after it's created.
    • For licensing information, see Licensing reference for IBM App Connect Operator.

    The following examples (Example 1 and Example 2) show a Dashboard CR with settings for a persistent-claim storage type:

    Example 1:
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: db-prod
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: CloudPakForIntegrationNonProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                cpu: 250m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 125Mi
      replicas: 3
      storage:
        size: 5Gi
        type: persistent-claim
        class: ibmc-file-gold-gid
      useCommonServices: true
      version: 12.0-lts
    Example 2:
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: db-prod
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: AppConnectEnterpriseProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                cpu: 250m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 125Mi
      replicas: 3
      storage:
        size: 5Gi
        type: persistent-claim
        class: valid-storage-class
      useCommonServices: false
      version: 12.0-lts

    The following examples (Example 3 and Example 4) show a Dashboard CR with settings for an s3 storage type:

    Example 3:
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: db-prod
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: CloudPakForIntegrationNonProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                cpu: 250m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 125Mi
      replicas: 3
      storage:
        size: 5Gi
        type: s3
        bucket: appc-operator-e2e
        host: s3.eu-gb.cloud-object-storage.appdomain.cloud
        s3Configuration: s3credentials-ibmcosiam
      useCommonServices: true
      version: 12.0-lts
    Example 4:
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: db-prod
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: AppConnectEnterpriseProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                cpu: 250m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 125Mi
      replicas: 3
      storage:
        size: 5Gi
        type: s3
        bucket: appc-operator-e2e
        host: s3.eu-gb.cloud-object-storage.appdomain.cloud
        s3Configuration: s3credentials-ibmcosiam
      useCommonServices: false
      version: 12.0-lts
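
    The preceding examples cover the persistent-claim and s3 storage types. If you want ephemeral storage, only the storage section differs; the following fragment is a sketch with an illustrative size limit, and the rest of the CR is unchanged from the earlier examples:

      storage:
        sizeLimit: 5Gi
        type: ephemeral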

  2. Save this file with a .yaml extension; for example, dashboard_cr.yaml.
  3. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  4. Run the following command to create the App Connect Dashboard instance. (Use the name of the .yaml file that you created.)
    oc apply -f dashboard_cr.yaml
  5. Run the following command to check the status of the App Connect Dashboard instance and verify that it is ready:
    oc get dashboards -n namespace

    The output will also provide the URL of the Dashboard instance; for example:

    NAME       RESOLVEDVERSION   REPLICAS   CUSTOMIMAGES   STATUS    URL                                                          AGE
    db-prod    12.0.12.2-r1-lts  3          false          Ready     https://db-prod-ui-mynamespace.apps.acecc-abcde.icp4i.com    9m12s


    Share the URL value in the output with users who have access to this namespace, and who will need to use the Dashboard instance to deploy integration servers.

Updating the custom resource settings for an instance

If you want to change the settings of an existing App Connect Dashboard instance, you can edit its custom resource settings from the IBM Cloud Pak Platform UI, or the Red Hat OpenShift web console or CLI. For example, you might want to change the log level for the container logs or apply custom annotations to the pods.

Restriction: You cannot update standard settings such as the resource type (kind), the name and namespace (metadata.name and metadata.namespace), some system-generated settings, or settings such as the storage type of certain components. An error message is displayed when you try to save such changes.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).


Updating an instance from the IBM Cloud Pak Platform UI

To update an App Connect Dashboard instance from the IBM Cloud Pak Platform UI, complete the following steps:

  1. From a browser window, log in to the IBM Cloud Pak Platform UI.
    Tip: You can use the generated URL for a deployed IBM Cloud Pak for Integration Platform UI instance to access the IBM Cloud Pak Platform UI.

    The Platform UI home page opens with cards and navigation menu options that provide access to the instances and other resources that you are authorized to create, manage, or use. For information about completing administration tasks (such as user management or platform customization) from this page, see Platform UI in the IBM Cloud Pak foundational services documentation.

  2. From the navigation menu, expand Administration and click Integration instances.
  3. From the "Integration instances" page, locate the App Connect Dashboard (Integration dashboard) instance that you want to update.
  4. Click the options icon to open the options menu, and then click Edit. The "Edit" page opens for that instance.
  5. Either use the fields in the "UI form" view or switch to the YAML view to update the required settings. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  6. Click Update to save your changes.

Updating an instance from the Red Hat OpenShift web console

To update an App Connect Dashboard instance by using the Red Hat OpenShift web console, complete the following steps:

  1. Applicable to IBM Cloud Pak for Integration only:
    1. If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
    2. From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
  2. Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  3. From the navigation, click Operators > Installed Operators.
  4. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  5. From the Installed Operators page, click IBM App Connect.
  6. From the Operator details page for the App Connect Operator, click the Dashboard tab.
  7. Locate and click the name of the instance that you want to update.
  8. Click the YAML tab.
  9. Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  10. Click Save to save your changes.

Updating an instance from the Red Hat OpenShift CLI

To update an App Connect Dashboard instance from the Red Hat OpenShift CLI, complete the following steps.

  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the Dashboard instance is deployed, run the oc edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
    oc edit dashboard instanceName

    The Dashboard CR automatically opens in the default text editor for your operating system.

  3. Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  4. Save the YAML definition and close the text editor to apply the changes.
Tip:

If preferred, you can also apply a patch by using the oc patch command (together with some bash shell features), or use oc apply with the appropriate YAML settings.

For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch as follows to update the settings for an instance:

oc patch dashboard instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
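
For example, a settings file (updatesettings.yaml in the preceding command) that raises the log level of the container logs might contain nothing more than the following sketch; spec.logLevel is described in Custom resource values:

spec:
  logLevel: debug    # switch the container logs from info to debug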

Deleting an instance

If no longer required, you can delete an App Connect Dashboard instance. You can do so from the IBM Cloud Pak Platform UI, or the Red Hat OpenShift web console or CLI.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Note:

If you delete an S3-compatible App Connect Dashboard instance, the BAR files and corresponding tokens will still be preserved in the S3 bucket. You can delete these objects from the bucket if no longer required. If retained in the bucket, you can reuse these objects later (if required) by creating another S3-compatible App Connect Dashboard instance that connects to the same bucket.


Deleting an instance from the IBM Cloud Pak Platform UI

To delete an App Connect Dashboard instance from the IBM Cloud Pak Platform UI, complete the following steps:

  1. From a browser window, log in to the IBM Cloud Pak Platform UI.
    Tip: You can use the generated URL for a deployed IBM Cloud Pak for Integration Platform UI instance to access the IBM Cloud Pak Platform UI.

    The Platform UI home page opens with cards and navigation menu options that provide access to the instances and other resources that you are authorized to create, manage, or use. For information about completing administration tasks (such as user management or platform customization) from this page, see Platform UI in the IBM Cloud Pak foundational services documentation.

  2. From the navigation menu, expand Administration and click Integration instances.
  3. From the "Integration instances" page, locate the App Connect Dashboard (Integration dashboard) instance that you want to delete.
  4. Click the options icon to open the options menu, and then click Delete.
  5. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift web console

To delete an App Connect Dashboard instance by using the Red Hat OpenShift web console, complete the following steps:

  1. Applicable to IBM Cloud Pak for Integration only:
    1. If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
    2. From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
  2. Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  3. From the navigation, click Operators > Installed Operators.
  4. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  5. From the Installed Operators page, click IBM App Connect.
  6. From the Operator details page for the App Connect Operator, click the Dashboard tab.
  7. Locate the instance that you want to delete.
  8. Click the options icon to open the options menu, and then click the Delete option.
  9. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift CLI

To delete an App Connect Dashboard instance from the Red Hat OpenShift CLI, complete the following steps.

  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the Dashboard instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
    oc delete dashboard instanceName

Custom resource values

The following list describes the configurable parameters and default values for the custom resource. Each entry shows the parameter, its description, and (where applicable) its default value.

apiVersion

The API version that identifies which schema is used for this instance.

appconnect.ibm.com/v1beta1

kind

The resource type.

Dashboard

metadata.name

A unique short name by which the Dashboard instance can be identified.

metadata.namespace

The namespace (project) in which the Dashboard instance is installed.

The namespace in which you create an instance or object must be no more than 40 characters in length.

spec.affinity

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.)

Custom settings are supported only for nodeAffinity. If you provide custom settings for nodeAntiAffinity, podAffinity, or podAntiAffinity, they will be ignored.

For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation.
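
For example, a custom nodeAffinity setting that schedules Dashboard pods only on amd64 nodes might look like the following sketch; it mirrors the structure of the default settings that are shown at the end of this topic, and it completely replaces the default nodeAffinity settings:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64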

spec.annotations

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. For example:

spec:
  annotations:
    key1: value1
    key2: value2

The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value.

spec.disableRoutes

(Only applicable if spec.version resolves to 12.0.5.0-r1-lts or later)

Indicate whether to disable the automatic creation of routes, which externally expose a service that identifies the set of Dashboard pods.

Valid values are true and false. Set this value to true to disable the automatic creation of external HTTP and HTTPS routes.

false

spec.labels

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format labelKey: labelValue. For example:

spec:
  labels:
    key1: value1
    key2: value2

The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value.

spec.license.accept

An indication of whether the license should be accepted.

Valid values are true and false. To install, this value must be set to true.

false

spec.license.license

See Licensing reference for IBM App Connect Operator for the valid values.

spec.license.use

See Licensing reference for IBM App Connect Operator for the valid values.

If using an IBM Cloud Pak for Integration license, spec.useCommonServices must be set to true.

spec.logFormat

The format used for the container logs that are output to the container's console.

Valid values are basic and json.

basic

spec.logLevel

(Only applicable if spec.version resolves to 11.0.0.12-r1 or later)

The level of information that is displayed in the container logs.

Valid values are info and debug.

info

spec.pod.containers.content-server.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the content server that stores BAR files is still running) can fail before taking action.

1

spec.pod.containers.content-server.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the content server that stores BAR files is still running.

360

spec.pod.containers.content-server.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe, which checks whether the content server that stores BAR files is still running.

10

spec.pod.containers.content-server.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the content server that stores BAR files is still running) times out.

5

spec.pod.containers.content-server.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the content server that stores BAR files is ready) can fail before taking action.

1

spec.pod.containers.content-server.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the content server that stores BAR files is ready.

10

spec.pod.containers.content-server.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe, which checks whether the content server that stores BAR files is ready.

5

spec.pod.containers.content-server.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the content server that stores BAR files is ready) times out.

3

spec.pod.containers.content-server.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the content server container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.content-server.resources.limits.ephemeral-storage

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

The upper limit of disk space (in bytes) when using ephemeral storage. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Setting requests and limits for local ephemeral storage.

20Gi

spec.pod.containers.content-server.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the content server container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.content-server.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the content server container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.content-server.resources.requests.ephemeral-storage

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

The minimum disk space (in bytes) that is requested when using ephemeral storage. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Setting requests and limits for local ephemeral storage.

5Gi

spec.pod.containers.content-server.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the content server container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.containers.control-ui.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the Dashboard UI container is still running) can fail before taking action.

1

spec.pod.containers.control-ui.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the Dashboard UI container is still running. Increase this value if your system cannot start the Dashboard UI container in the default time period.

360

spec.pod.containers.control-ui.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe that checks whether the Dashboard UI container is still running.

10

spec.pod.containers.control-ui.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the Dashboard UI container is still running) times out.

5

spec.pod.containers.control-ui.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the Dashboard UI container is ready) can fail before taking action.

1

spec.pod.containers.control-ui.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the Dashboard UI container is ready.

10

spec.pod.containers.control-ui.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe that checks whether the Dashboard UI container is ready.

5

spec.pod.containers.control-ui.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the Dashboard UI container is ready) times out.

3

spec.pod.containers.control-ui.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the Dashboard UI container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.control-ui.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the Dashboard UI container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.control-ui.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the Dashboard UI container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.control-ui.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the Dashboard UI container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.imagePullSecrets.name

The secret used for pulling images.

spec.replicas

The number of replica pods to run for each deployment. Specify a number from 1 to 10.

spec.storage.bucket

(Only applicable if spec.version resolves to 12.0.1.0-r1 or later)

The name of an existing bucket that is used for object storage in a Simple Storage Service (S3) instance. You must have read/write access to this bucket, which will be used to store BAR files that are uploaded or imported to the App Connect Dashboard. For a list of supported S3 providers and considerations for choosing a bucket, see Supported storage types.

Required if spec.storage.type is set to s3.

spec.storage.claimName

The name of an existing claim that should be used to request a persistent volume for BAR file storage. This claim must exist in the same namespace as the App Connect Dashboard.

When spec.storage.type is set to persistent-claim, either spec.storage.claimName or spec.storage.class is required.

spec.storage.class

A supported storage class for your cluster, which should be used to dynamically provision a persistent volume that belongs to that class. If using IBM Cloud, set the storage class to ibmc-file-gold-gid. For more information, see Supported storage types.

When spec.storage.type is set to persistent-claim, either spec.storage.claimName or spec.storage.class is required.

spec.storage.host

(Only applicable if spec.version resolves to 12.0.1.0-r1 or later)

An endpoint associated with your Simple Storage Service (S3) system, to which the S3 REST API sends requests for reading and writing objects to the bucket specified in spec.storage.bucket. For more information, see Supported storage types.

Required if spec.storage.type is set to s3.

spec.storage.s3Configuration

(Only applicable if spec.version resolves to 12.0.1.0-r1 or later)

The name of an existing configuration object of type S3Credentials, which stores credentials for accessing the bucket specified in spec.storage.bucket.

Set this parameter to the metadata.name value that was specified while creating the configuration. For information about creating this configuration, see Creating a configuration of type S3Credentials for use with the App Connect Dashboard.

Required if spec.storage.type is set to s3.

spec.storage.selector

A label query that a specified claim can use to filter for volumes that can be bound to the claim. Optional and applicable only when spec.storage.type is set to persistent-claim.

For more information, see Selector and Label selectors in the Kubernetes documentation.
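
A sketch of a selector, assuming that spec.storage.selector follows the standard Kubernetes label selector schema; the label key and value are hypothetical:

spec:
  storage:
    type: persistent-claim
    class: ibmc-file-gold-gid
    size: 5Gi
    selector:
      matchLabels:
        volume-tier: dashboard-bars    # hypothetical label on the persistent volumes to bind to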

spec.storage.size

The maximum amount of storage required for a persistent volume, in decimal (G) or binary (Gi) format.

Required if spec.storage.type is set to persistent-claim.

5Gi

spec.storage.sizeLimit

The storage size limit when spec.storage.type is set to ephemeral.

spec.storage.type

The type of storage to use for storing BAR files that are uploaded or imported to the App Connect Dashboard.

Valid values are:
  • persistent-claim: Choose this option for storage in a persistent volume in the container’s file system.
  • ephemeral: Choose this option for storage in an ephemeral volume that exists only for the lifetime of the pod.
  • s3: Choose this option for storage in a bucket in a Simple Storage Service (S3) instance.

For more information, see Supported storage types.

spec.useCommonServices

An indication of whether to enable use of IBM Cloud Pak foundational services (formerly IBM Cloud Platform Common Services).

Valid values are true and false.

  • Must be set to true if using an IBM Cloud Pak for Integration license (specified via spec.license.use). Can be set to true or false if using an App Connect Enterprise license.

Applicable only if spec.version resolves to 11.0.0.11-r2 or later: When set to true, only essential IBM Cloud Pak foundational services are requested, which means that only the Identity and Access Management (IAM) service is installed by default, together with a Common UI component that provides a login page for secure access to the instance.

true

spec.version

The product version that the Dashboard instance is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel.

If you are using IBM App Connect Operator 5.0.4 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster.

To view the available values that you can choose from, see spec.version values.

12.0-lts
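
For example, either of the following settings is valid; the fully qualified value shown here is illustrative, so check spec.version values for the versions that are available to you:

spec:
  version: 12.0-lts        # channel: resolves to the latest fully qualified version on that channel

spec:
  version: 12.0.12.2-r1    # a specific fully qualified version (illustrative)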

Default affinity settings

The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated.

You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity will be ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              <copy of the pod labels>
          topologyKey: kubernetes.io/hostname
        weight: 100

Supported platforms

Red Hat OpenShift: Supports the amd64, s390x, and ppc64le CPU architectures. For more information, see Supported platforms.