Configuring the cluster

You can customize sandbox instance provisioning to override the default settings. These configurations are recommended if external access to the sandbox instance is required or if changes to the sandbox instance must persist.

Audience: Cluster administrators

The following configurations are recommended if the described use cases apply to you and the developers on your team. Configuration changes take effect when developers create sandbox instances with the Sandbox Operator.

Configuring ingress cluster traffic

A sandbox instance is a virtualized z/OS® environment. To use it, developers need to access it as they would a physical system or a virtual machine. If external access to the sandbox instance is required, you must set up a mechanism that routes external traffic into the cluster.

Note: External access might not be required, for example, if developers are using IBM® Wazi Developer for CodeReady Workspaces in the same cluster.

A typical use case for external access is when developers want to access the z/OS instance that is running in the sandbox from a client outside of the cluster. The following are examples of external clients:

  • External 3270 terminal emulator that is running on developers' workstations
  • IBM® Wazi Developer for Eclipse that is running on developers' workstations
  • IBM Wazi Developer for VS Code that is running on developers' workstations
  • IBM Wazi Developer for CodeReady Workspaces that is running in a different cluster

The OpenShift® platform provides methods for communicating from outside the cluster with the services that run in the cluster. When the Sandbox Operator creates an instance, by default it creates an internal ClusterIP Service that exposes the secure ports in the instance to other namespaces in the cluster, and creates a Route for the Wazi host components that are HTTPS services, including RSE API, the Debug Profile Service, and z/OSMF.

If this default is not appropriate, for example, if you do not have all Wazi host components installed in the instance, if you need access to unsecured ports, or if you do not work with Routes, you can change the way that Sandbox creates Services by using the portProfile and zPorts configuration elements.

portProfile

Use portProfile to configure which ports to expose and how to expose them. The portProfile element has two sub-elements, profile and scope, which are described as follows:

Use profile to control what ports to expose.

Table 1. profile options
Profile Description
wazi Default. Includes encrypted ports of Wazi Developer's host components.
wazi-all Includes encrypted and unencrypted ports of Wazi Developer's host components.
custom Empty profile. No ports are exposed unless they are specified in zPorts.

Use scope to control how to expose the ports specified in the profile. The scope options are as follows:

Table 2. scope options
Scope Description
route Default. Exposes ports inside the OpenShift cluster. It also exposes the ports that use modern, web-based protocols outside the cluster through OpenShift Routes.
nodeport Exposes everything that route does, and exposes all remaining ports through a NodePort Service.
cluster Exposes ports only inside the OpenShift cluster through a ClusterIP Service.
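
For example, the following sketch shows portProfile in a WaziSandboxSystem custom resource. The apiVersion and metadata values are illustrative assumptions; see the Configuration reference for the exact schema of your release:

    apiVersion: wazi.ibm.com/v1      # assumption; check the Configuration reference for your release
    kind: WaziSandboxSystem
    metadata:
      name: my-sandbox               # hypothetical instance name
    spec:
      portProfile:
        profile: wazi-all            # expose encrypted and unencrypted Wazi ports
        scope: nodeport              # Routes for web protocols, NodePort Service for everything else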

zPorts

You can use zPorts to expose other ports that portProfile does not cover, such as custom applications on the instance, or to provide alternate behavior for the Wazi ports.

zPorts has three sub-elements, cluster, route, and nodeport, which work similarly to the scope options.

Table 3. Sub-elements of zPorts
Element Description
cluster Adds ports to a ClusterIP Service
route Adds ports to a ClusterIP Service and also adds a Route
nodeport Adds ports to a NodePort Service

All three sub-elements have name and port to identify the port. The route sub-element also has a tls sub-element to control how TLS is handled.
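
As a sketch, the following fragment adds hypothetical ports with each of the three sub-elements, assuming that each sub-element takes a list of entries. The names, port numbers, and the empty tls placeholder are illustrative assumptions; see the Configuration reference for the exact fields:

    spec:
      zPorts:
        cluster:
          - name: my-app             # hypothetical application port, visible only inside the cluster
            port: 8000
        nodeport:
          - name: my-tcp-app         # hypothetical non-HTTP port, exposed through a NodePort Service
            port: 9000
        route:
          - name: my-web-app         # hypothetical HTTPS port, exposed through a Route
            port: 8443
            tls: {}                  # placeholder; see the Configuration reference for tls settings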

For more details, see Configuration reference.

If you do not want to use the custom resource to create services and routes for ingress, set portProfile.profile: custom, leave out zPorts, and manually create any Service, Route, or Ingress objects that you need.
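
For example, with portProfile.profile: custom, you might create a Service yourself with a standard Kubernetes definition like the following sketch. The Service name and selector labels are assumptions; the selector must match the labels on your sandbox instance pod:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-sandbox-tn3270        # hypothetical Service name
    spec:
      type: ClusterIP
      selector:
        app: my-sandbox              # assumption: must match the labels on the instance pod
      ports:
        - name: tn3270
          port: 992                  # hypothetical secure TN3270 port on the instance
          targetPort: 992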

Note: The service configuration element in Sandbox 1.3 and earlier releases has been replaced with portProfile and zPorts. If you have a service section in your custom resource, it creates the same NodePort service as it did in earlier releases; if you do not, you will notice that the services exposed by default have changed slightly. If you require the old default behavior, add a service section with the ports that you require. However, service will be removed in a future release, so it is recommended that you move to portProfile and zPorts.

Configuring network policies

OpenShift uses Kubernetes NetworkPolicy resources to control network traffic to pods and namespaces, in much the same way that a firewall on a computer can block or allow traffic.

By default, when a sandbox instance is created, a NetworkPolicy is created that allows all traffic to and from the sandbox instance. This allows external and internal users to connect to z/OS without additional configuration.

If you want to change the default NetworkPolicy, specify one of the other NetworkPolicy values that are provided in the WaziSandboxSystem custom resource:

Table 4. NetworkPolicy for sandbox instances
Value Description
allow-all Creates the default NetworkPolicy.
namespace-only Allows traffic to the instance only from other Pods in the same namespace.
deny-all Blocks all traffic to the instance.
none Creates no NetworkPolicy.

Access to the instance is the union of all network policies that select the instance pod. You can adjust access by adding more NetworkPolicy objects manually. For example, you can start with the deny-all policy, and then manually add policies that allow traffic from specific addresses, namespaces, or pods.
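
For example, after starting with the deny-all policy, you might add a standard Kubernetes NetworkPolicy like the following sketch to allow traffic from one namespace. The policy name, pod labels, and namespace name are assumptions; match them to your environment:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-dev-tools     # hypothetical policy name
    spec:
      podSelector:
        matchLabels:
          app: my-sandbox            # assumption: must match the labels on the instance pod
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: dev-tools   # hypothetical namespace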

For more information, see Network Policies in the Kubernetes documentation.

Configuring cluster persistent storage

Sandbox supports OpenShift Container Storage and has been tested with the Rook-Ceph® operator and with NFS. On IBM Cloud®, Sandbox has been tested with ibmc-block-gold and ibmc-block-custom storage.

If you use NFS, the group ID of the storage must match the group ID for Sandbox, which you can set by using the fsGroup option.

Sandbox stores the z/OS volumes for each instance on a PersistentVolumeClaim. Sandbox does not encrypt the z/OS volumes on the PersistentVolumeClaim. If sensitive data is to be stored in z/OS storage, you can protect it in either of the following ways:

  • Use encryption on z/OS to encrypt the data on z/OS.
  • Configure passive encryption on the cluster for the storage used by Sandbox, so the PersistentVolumeClaim used by Sandbox is encrypted at rest. The details of setting up encrypted storage are specific to the cluster and underlying storage, and are beyond the scope of this document. Consult your cluster administrator.


Use cases

You can manage the persistent storage for sandbox instances in either of the following ways:
  • Let the Operator manage the storage automatically.

    Each sandbox instance has a copy of the Extended ADCD volumes, so each sandbox instance might require 300 GB of storage, which is the default size of the claims that are automatically created.

    Use case: A clean environment is needed every time the instance starts, and changes to the z/OS system or data on it do not need to be kept. For example, you can use this approach to set up a build or test pipeline that always wants a clean environment and copies out any build or test artifacts before deleting the instance.
  • You provision the storage for sandbox instances manually. For more information, see How to provision the storage for sandbox instances manually.

    Use case: If changes to the z/OS system or data stored there are important, manual provisioning is recommended.

Both options might require some preparation of the cluster: choosing appropriate storage drivers, creating or choosing storage classes, and setting the default storage class. For more information, see the OpenShift documentation.

Ensure that sufficient storage is available, because many sandbox instances might be created at one time.

Storage must be writable by the pod that runs the sandbox instance. By default, the Sandbox Operator mounts the storage with a default fsGroup value in the securityContext of the pod. If the storage driver that is being used does not support changing the group ID, you might need to set fsGroup in the custom resource to match the group ID of the storage, as shown in the sketch that follows. See the OpenShift documentation for information about storage group IDs, and the Configuration reference for using fsGroup in the custom resources for Sandbox.
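
For example, the following minimal sketch sets the group ID in the WaziSandboxSystem custom resource with the spec.fsGroup parameter; the value shown is hypothetical and must match your storage:

    spec:
      fsGroup: 5555    # hypothetical group ID; must match the group ID of the backing storage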

Provision the storage for sandbox instances manually

  • The storage class must support ReadWriteOnce access mode; no other modes are supported.
  • The default size of the claims that are automatically created is 300 GB. For larger or smaller images, or if you plan to add files to the image, set the size when you create the claim (see the sketch after this list).
  • Because the volumes will hold z/OS instance data, you must set up appropriate access controls and encryption for the storage.
  • For ease of use, use a driver with dynamic provisioning capability. If dynamic provisioning is not available, you need to set spec.persistence.useDynamicProvisioning: false in the Custom Resource to prevent the Operator from using dynamic provisioning.
  • For better performance, use fast storage that is close to the nodes where the sandbox instances will run.
  • To allow fast and easy copying of PersistentVolumeClaim that has cloud-ready z/OS volume images, use a storage class and driver that supports CSI volume cloning.
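
The following standard Kubernetes sketch shows a claim that meets these requirements. The claim name is hypothetical, and ibmc-block-gold is one of the IBM Cloud storage types mentioned in this section; substitute the storage class for your cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-sandbox-volumes       # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce              # the only access mode that is supported
      resources:
        requests:
          storage: 300Gi             # default claim size is 300 GB; adjust for larger or smaller images
      storageClassName: ibmc-block-gold

If dynamic provisioning is not available, remember to also set spec.persistence.useDynamicProvisioning: false in the WaziSandboxSystem custom resource, as noted in the list above.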

Optionally, you can create a custom storage class for sandbox storage, and set up a default storage class for the cluster. For example, on IBM Cloud, you can use the ibmc-block-gold and ibmc-block-custom storage types.

What's next
  • At a minimum, you need to tell developers the size to request for sandbox storage, and what storageClassName to use.

  • If developers do not have authority to create a PersistentVolumeClaim as described in Prerequisites of creating a sandbox instance, you need to create a PersistentVolumeClaim for each sandbox instance.

  • Starting from version 1.2, if the backing storage requires a specific group ID, or if the default group ID 2105 conflicts with another ID that is already in use, you can set the group ID in the Custom Resource with the spec.fsGroup parameter. In versions earlier than 1.2, when a sandbox instance is started, an init container initializes the storage with the correct user and group file permissions, so no extra configuration of file system permissions is required.