Networking
In an RHOCP environment, careful network planning is important. With IBM Z and IBM® LinuxONE, you have different choices and combinations for the RHOCP network. Each RHOCP node can have one or more network interfaces. The network design depends on the isolation requirements and on the integration architecture with other workloads.
Depending on the implementation and hypervisor, you can use the following network options:
- HiperSockets networks, which are internal IBM Z and IBM® LinuxONE networks. They do not require any network cards. You can define multiple instances in one machine.
- Open System Adapter (OSA) cards, which can be shared and, at the same time, bonded to enhance network bandwidth and build a highly available (HA) network.
- RoCE cards, which, like OSA cards, can also be shared and can run different protocols.
- Depending on the hypervisor, you can use virtual switches with the network capabilities of IBM Z and IBM® LinuxONE mentioned earlier.
A good reference for network configuration options can be found here.
RHOCP internal networking
RHOCP uses an internal software-defined network (SDN) to communicate between the nodes, manage the cluster, and distribute workloads to the pods in the nodes.
Beginning with Red Hat OpenShift 4.12, new clusters are installed with the OVN-Kubernetes network plug-in as the default networking plug-in across all supported platforms and topologies. All prior Red Hat OpenShift releases continue to use Red Hat OpenShift SDN as the default networking plug-in.
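For illustration, the following is a minimal sketch of the networking stanza of an install-config.yaml that selects OVN-Kubernetes; the CIDR ranges shown are common example values, not requirements.

```yaml
networking:
  networkType: OVNKubernetes      # default network plug-in since RHOCP 4.12
  clusterNetwork:                 # pod address range (example value)
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:                 # service address range (example value)
  - 172.30.0.0/16
```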
The OVN-Kubernetes network plug-in includes a rich set of capabilities that include support for:
- All existing Red Hat OpenShift SDN features
- IPv6 networks
- Configuring IPsec encryption
- NetworkPolicy API
- Audit logging of network policy events
- Network flow tracking in NetFlow, sFlow, and IPFIX formats
- Hybrid networks for Windows containers
- Hardware offloading to compatible NICs
For more information regarding OVN-Kubernetes, see About the OVN-Kubernetes network plugin.
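As an illustration of the NetworkPolicy API listed above, the following minimal sketch allows ingress to pods in a namespace only from other pods in the same namespace; the policy and namespace names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace     # hypothetical policy name
  namespace: my-app              # hypothetical namespace
spec:
  podSelector: {}                # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}            # allow traffic only from pods in the same namespace
```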
With OVN-Kubernetes as the default network plug-in, an IPv4 single-stack cluster can be converted to a dual-stack cluster network that supports the IPv4 and IPv6 address families. After converting to dual-stack, all newly created pods are dual-stack enabled.
For more information regarding IPv6 and dual-stack, see Converting to a dual-stack cluster network.
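The conversion described in that procedure is driven by patching the cluster Network configuration. The following is a sketch of such a patch; the IPv6 ranges are example values and must be adapted to your environment.

```yaml
# Sketch of a JSON patch (applied, for example, with
# "oc patch network.config.openshift.io cluster --type='json' --patch-file <file>")
# that adds IPv6 cluster and service networks alongside the existing IPv4 ones.
- op: add
  path: /spec/clusterNetwork/-
  value:
    cidr: fd01::/48              # example IPv6 pod network
    hostPrefix: 64
- op: add
  path: /spec/serviceNetwork/-
  value: fd02::/112              # example IPv6 service network
```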
OVN-Kubernetes can take advantage of the network capabilities of IBM Z and IBM® LinuxONE and includes an ingress load balancer.
The RHOCP internal network capabilities also include IPsec encryption support, which encrypts cross-node pod-to-pod traffic on the cluster network. For secured communications, OpenShift Service Mesh enables encryption transparently for microservices and application interconnections. The following paragraphs describe the structure of these internal network capabilities.
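IPsec encryption is enabled through the cluster Network operator configuration. The following is a minimal sketch of the relevant part of that configuration; note that the exact layout of the ipsecConfig field varies between RHOCP releases.

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {}            # enables IPsec encryption of cross-node pod-to-pod traffic
                                 # (newer releases use a mode field, for example "mode: Full")
```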
The default Container Network Interface (CNI) network provider for the internal network in an RHOCP cluster is OVN-Kubernetes, an Open vSwitch-based software-defined networking solution. With it, you can programmatically connect groups of guest instances into private L2 and L3 networks.
The Cluster Network Operator deploys the Red Hat OpenShift CNI network provider plug-in that you selected during cluster installation by using a daemon set. The next level of the internal network topology in an RHOCP cluster is pod interconnectivity within a node and across nodes. Each pod has one network interface, but can have multiple, which are connected to one or more network interfaces (NICs) of the node. To enable and manage multiple network interfaces in a pod, the Multus CNI plug-in is available. Multus is the open source component that enables Kubernetes pods to attach to multiple networks.
For more information about Multus, see Understanding multiple networks.
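As a sketch of how Multus is used, a secondary network is described by a NetworkAttachmentDefinition such as the one below; the attachment name, namespace, master device (enc2), and the choice of the macvlan CNI plug-in are assumptions for illustration.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: secondary-net            # hypothetical attachment name
  namespace: my-app              # hypothetical namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "secondary-net",
      "type": "macvlan",
      "master": "enc2",
      "ipam": { "type": "dhcp" }
    }
```

A pod requests the additional interface by referencing the attachment name in its k8s.v1.cni.cncf.io/networks annotation.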
The RHOCP internal ingress load balancer is controlled by the control plane nodes and is responsible for distributing the workload to the different compute nodes and routes, which represent the entry points to the various services that run in different nodes and pods.
RHOCP load balancer
By default, RHOCP establishes an internal load balancer in each cluster. An external load balancer is mandatory, especially during setup and installation of the RHOCP cluster. In general, it is highly recommended that each RHOCP cluster has at least one external load balancer that distributes the workload among the API servers on the control plane nodes and the application requests across the compute nodes.
External load balancers can be implemented in software or specialized hardware such as:
- Software load balancers
  - HAProxy
  - NGINX
- Hardware load balancers
  - F5
  - IBM DataPower
- Another option, in the absence of a load balancer, is to use DNS round-robin with an authoritative name server.
RHOCP, with its built-in DNS, resolves the names of services and their associated routes so that they can be reached by service DNS name as well as by service IP address and port. For communication from outside the cluster, RHOCP provides methods that use an Ingress Controller with services and routes for applications running in different pods in the cluster.
For details of getting traffic into the cluster, see Configuring ingress cluster traffic using an Ingress Controller.
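For example, an application can be exposed to external clients with a route like the following sketch; the names, namespace, host name, and target port are illustrative assumptions.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                   # hypothetical route name
  namespace: my-app              # hypothetical namespace
spec:
  host: my-app.apps.example.com  # assumed application domain of the cluster
  to:
    kind: Service
    name: my-app                 # hypothetical service backing the application
  port:
    targetPort: 8080             # assumed container port
  tls:
    termination: edge            # TLS is terminated at the Ingress Controller
```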
The Ingress Operator implements the Ingress Controller API and is the component responsible for enabling external access to Red Hat OpenShift Container Platform cluster services. The Ingress Operator deploys and manages one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying Red Hat OpenShift Container Platform Routes and Kubernetes Ingress resources. You can scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput.
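Scaling is done by adjusting the replica count of the IngressController resource, as in the following sketch; the replica count shown is an example value.

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3                    # example value; choose based on throughput and availability needs
```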
Request flow in RHOCP through DNS and load balancers
Essential paths of the information flow include the following:
- An incoming request arrives through an external load balancer, which passes the request:
  - To a control plane node, if it is a request for an API server port
  - To a compute node with a router pod, if it is a request for an application port
- The router pod uses the HAProxy-based Ingress Controller to route traffic between pods
- The built-in internal DNS resolves the services by name
- Requests to internal services on the compute nodes are forwarded via Ingress Controllers and routes
- The software-defined networking (SDN) in an RHOCP cluster network enables pod-to-pod communication
- RHOCP follows the Kubernetes Container Network Interface (CNI) plug-in model

Networking options with Red Hat OpenShift Container Platform
For attaching RHOCP nodes to network interfaces, you have different options. Depending on the use case, each option has different advantages. It is important to classify the workload and estimate whether RHOCP will receive many incoming requests from the network, or rather has high communication demand with an external database. Based on the workload, you can choose or combine different network topologies for the best performance.
- Connect control plane and compute nodes by using one or multiple virtual switches. Connect to
the external load balancer for outside communication by using OSA cards. The benefit is that virtual
switches allow for fast changes of the RHOCP node configuration and the capability to extend a
cluster with extra nodes.
Figure 2. Network Topology Options with a Virtual Switch 
- Connect control plane and compute nodes as well as load balancer by using OSA or RoCE cards. The
benefit is that direct-attached OSA or RoCE cards provide fast communication and ensure high
availability and enhanced bandwidth.
Figure 3. Network Topology Options With OSA 
- Use multi-NIC support (https://docs.openshift.com/container-platform/latest/networking/multiple_networks/attaching-pod.html) to connect the RHOCP nodes to multiple, different networks. You can plan and define multiple network interfaces for the nodes during the installation or after the installation. For the installation process, the number of network interfaces per node is limited by the length of the parmline (896 bytes). The benefit is network traffic separation and isolation for different types of workloads, as well as for balancing purposes.
- If you want to add an additional network interface to a node in an RHOCP cluster after the
installation, you can use the day-two operation capability with the Kubernetes NMState Operator.
Using a single .yaml config file, you can define additional network interfaces for several control plane or
compute nodes. The resulting configuration persists after node reboot, and these interfaces are
integrated in the communication with the API server in the control plane nodes for monitoring their
health. A sketch of such a configuration is shown after Figure 4.
Figure 4. Network Topology Options with multi-NIC 
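The day-two configuration mentioned above is expressed as a NodeNetworkConfigurationPolicy. The following is a minimal sketch; the policy name, node selector, interface name enc2, and the use of DHCP are assumptions that must be adapted to the environment.

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-secondary-nic     # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # apply to all compute nodes
  desiredState:
    interfaces:
    - name: enc2                 # assumed device name of the additional interface
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: true               # example: obtain the address from DHCP
```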
Networking in a z/VM hypervisor based implementation
In an environment that uses the z/VM hypervisor, the z/VM VSWITCH can be used to build virtual networks in the hypervisor, which is connected to an OSA card for external communication or even connected to HiperSockets by using the HiperSockets Bridge feature.
- Connect control plane and compute nodes by using the z/VM VSWITCH and then connect them with the
load balancer via an OSA card. The benefit is increased flexibility when adding and extending RHOCP
nodes along with internal network isolation by using multiple VSWITCHES.
Figure 5. Network Topology Options with z/VM VSWITCH 
- Connect control plane and compute nodes in a HiperSockets network and the HiperSockets Bridge to
connect with the z/VM VSWITCH that provides a communication path to the external network via an OSA
card. The benefit is added flexibility in a failover and relocation scenario when some nodes are
relocated to another physical machine.
Figure 6. z/VM VSWITCH Network Topology Option with HiperSockets 
Networking in a KVM hypervisor-based implementation
In an environment that uses the KVM hypervisor, MacVTap enables building virtual bridged networks in the hypervisor.
You have different options to build the network topology with RHOCP:
- Connect to an OSA or RoCE card for external communication as shown in Figure 3
- Use network channel bonding support to connect the nodes to multiple network cards. The benefit
is enhanced network bandwidth and load balancing.
Figure 7. Network Topology Options with bonding 
- Use the Linux bridging network capabilities to connect the nodes, for example, to a
HiperSockets network. The benefit is having an internal network without external routers and
switches.
Figure 8. Network Topology Options with bridging 
Server Time Protocol
Server Time Protocol (STP) is a server-wide facility that is implemented in the Licensed Internal Code (LIC) of the IBM® Z. The STP function was introduced in a previous generation of IBM Z and it provides improved time synchronization in a sysplex or nonsysplex configuration.
In the IBM Z architecture, the Store Clock (STCK) and the Store Clock Extended (STCKE) instructions provide a means by which programs can both establish time-of-day and unambiguously determine the ordering of serialized events, such as updates to a database, a log file, or another data structure. The architecture requires that the TOD clock resolution is sufficient to ensure that every value stored by an STCK or STCKE instruction is unique. Consecutive STCK or STCKE instructions that are run, possibly on different CPUs in the same server, must always produce increasing values. Thus, the timestamps can be used to reconstruct, recover, or in many different ways can ensure the ordering of these serialized updates to shared data.
STP is a message-based protocol, like the industry standard Network Time Protocol (NTP). STP allows a collection of IBM Z servers to maintain time synchronization with each other using a time value known as Coordinated Server Time (CST). The network of servers is known as a Coordinated Timing Network (CTN). The mainframe's Hardware Management Console (HMC) plays a critical role with STP CTNs: The HMC can initialize Coordinated Server Time (CST) manually or initialize CST to an external time source. The HMC also sets the time zone, Daylight Saving Time, and leap seconds offsets. It also performs the time adjustments when needed.
For an RHOCP cluster running on IBM Z or IBM® LinuxONE, the time synchronization configuration can be skipped because all the VMs run on the same system and, as mentioned previously, are managed by STP. For details, refer to the IBM Redbooks publication IBM Z Server Time Protocol.
Summary of the Options
- Options for load balancer
  - As an external load balancer, you can use specialized hardware such as F5 or IBM DataPower, or software-based load balancers such as NGINX or HAProxy.
- Options for Networking
Table 1. Networking Options

Options | Usage scenario
HiperSockets | Network in the box, internal communication
z/VM VSWITCH | High flexibility and SDN
OSA | Fast communication inside-out, VSWITCH compatible
RoCE | Fast communication inside-out
MacVTap | Network flexibility in a KVM environment
Networking in Red Hat OpenShift Container Platform (RHOCP) is a topic that requires good planning. For additional information, refer to: Red Hat OpenShift Container Platform - Networking.