F5 BIG-IP
The F5 BIG-IP Controller provides a platform-native integration of BIG-IP devices with Kubernetes. The BIG-IP Controller for Kubernetes (k8s-bigip-ctlr) configures BIG-IP objects for applications in the IBM® Cloud Private cluster, serving north-south traffic.
The BIG-IP Controller supports the following features:
- Dynamically create, manage, and destroy BIG-IP objects.
- Forward traffic from the BIG-IP device to Kubernetes clusters through PodIP, NodePort, or ClusterIP.
- Support F5 iApps.
- Manage F5-specific virtual server objects that are created in Kubernetes.
- Manage standard Kubernetes ingress objects by using F5-specific extensions.
For more information about the F5 BIG-IP Controller for Kubernetes, see F5 BIG-IP Controller for Kubernetes.
Topology
Consider the following topology for integration:
- An IBM Cloud Private cluster with one master and two worker nodes. The cluster name is `cluster1`.
- Each node has two network interface cards (NICs): one on the management network and the other on the internal network.
The F5 BigIP is typically installed on four different networks:
- HA Network - BigIP uses the HA network to synchronize state among all members of the cluster to maintain high availability.
- Management Network - the control plane; BigIP accepts management traffic on this network; the console and REST API listen on this network.
- Internal Network - the data plane; BigIP forwards traffic to workload backends that are running on this network, for example, containers that are running in the IBM Cloud Private platform.
- External Network - BigIP external IP addresses accept client connections on this network.
The network topology resembles the following diagram:
Cluster networks:

| Type | CIDR |
|---|---|
| Internal Network | 192.168.70.0/24 |
| External Network | 192.168.60.0/24 |
| HA Network | 192.168.50.0/24 |
| Management Network | 192.168.80.0/24 |
Cluster nodes:

| Node | IP address |
|---|---|
| Master | Internal network IP address is 192.168.70.225; management network IP address is 192.168.80.225 |
| Worker node 1 | 192.168.70.226 |
| Worker node 2 | 192.168.70.227 |
F5 BigIP self IP addresses:

| Type | IP address |
|---|---|
| Management | 192.168.80.254 |
| HA | 192.168.50.254 |
| External network | 192.168.60.254 |
| Internal network | 192.168.70.254 |
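As a sketch, the BigIP self IP addresses in the preceding table could be created with `tmsh` on the appliance. The VLAN names and interface numbers here are assumptions for illustration; only the IP addresses come from the topology above:

```shell
# Create the VLANs (interface assignments are assumptions)
tmsh create net vlan internal interfaces add { 1.1 }
tmsh create net vlan external interfaces add { 1.2 }
tmsh create net vlan ha interfaces add { 1.3 }

# Create the self IPs on each network from the topology tables
tmsh create net self internal-self address 192.168.70.254/24 vlan internal allow-service default
tmsh create net self external-self address 192.168.60.254/24 vlan external
tmsh create net self ha-self address 192.168.50.254/24 vlan ha allow-service default
```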
F5 BIG-IP Local Traffic Manager
F5 BIG-IP Local Traffic Manager is a traffic management platform that can serve as an external load balancer for applications that are running in IBM® Cloud Private.
It can forward Layer 4 traffic to a service that is running in IBM Cloud Private, or be used as a Layer 7 ingress controller for Ingress resources instead of the proxy nodes. The BigIP is available as a hardware appliance or as a virtual machine, and cloud images are also available for use on public clouds.
F5 BIG-IP LTM integration
The F5 Container Connector exports the pod IPs that are associated with a Kubernetes service to the F5 LTM appliance. It can be deployed as a Helm chart on IBM Cloud Private. The F5 Container Connector watches Kubernetes resources by using the API from inside the cluster, and then calls the iControl REST API on the management network to create virtual servers on the F5 LTM appliance.
Note: The default `Common` partition in the F5 BigIP appliance cannot be managed by the integration; you must create a separate partition.
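A minimal controller deployment can be sketched as follows. The `--bigip-url`, `--bigip-partition`, and `--pool-member-type` flags are documented k8s-bigip-ctlr options; the secret name, namespace, and the use of the management IP from the topology above are assumptions for this sketch:

```yaml
# Excerpt of a k8s-bigip-ctlr Deployment; credentials come from a
# pre-created Kubernetes secret (secret name "bigip-login" is an assumption).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr
    spec:
      containers:
      - name: k8s-bigip-ctlr
        image: f5networks/k8s-bigip-ctlr
        command: ["/app/bin/k8s-bigip-ctlr"]
        args:
        - --bigip-url=192.168.80.254    # BigIP management IP from the topology above
        - --bigip-username=$(BIGIP_USERNAME)
        - --bigip-password=$(BIGIP_PASSWORD)
        - --bigip-partition=ICP         # the separate, non-Common partition
        - --pool-member-type=cluster    # or "nodeport"
        env:
        - name: BIGIP_USERNAME
          valueFrom:
            secretKeyRef: {name: bigip-login, key: username}
        - name: BIGIP_PASSWORD
          valueFrom:
            secretKeyRef: {name: bigip-login, key: password}
```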
To integrate the F5 BIG-IP device with your IBM Cloud Private cluster, see Integrating IBM Cloud Private with F5 BIG-IP Controller for Kubernetes.
When you use an F5 BigIP appliance, proxy nodes do not need to be deployed. The default ingress controller that is deployed with IBM Cloud Private can be ignored because TLS can be terminated at the BigIP appliance. For more information about this deployment, see the Dedicated proxy nodes and shared ingress controller section.
F5 pool network type
When the integration creates backends for the virtual server, it can forward the traffic directly to the pods (the `cluster` pool network type), or use kube-proxy and forward traffic to the NodePorts on the worker nodes on the internal network (the `nodeport` pool network type). For NodePort, because internal source NAT (SNAT) might occur on worker nodes that are not running the pod, `externalTrafficPolicy` can be set to `Local` so that the client IP address is preserved.
For more information, see the NodePort service type.
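For the NodePort pool network type, the backing service can be sketched as follows. The service name and port match the Node.js sample that is used in the later examples; the pod selector is an assumption for this sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-test-ibm-nodejs-s
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Local   # send traffic only to nodes that run a pod, preserving the client IP
  selector:
    app: nodejs-test             # selector is an assumption for this sketch
  ports:
  - port: 3000
    targetPort: 3000
```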
To forward traffic directly to the pods on the internal network, the cluster must be BGP-peered with the router or with the F5 BigIP appliance directly so that routes from the appliance to the pods exist. If Calico is used, the `ibm-calico-bgp-peer` chart can be used to add the F5 appliance to the BGP mesh so that the routes to pods are populated in the F5 appliance. For more information about how to configure it, see Integrating F5 BIG-IP device with IBM Cloud Private.
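If the chart is not used, the equivalent Calico configuration can be sketched directly as a global `BGPPeer` resource that is applied with `calicoctl`. The AS number is an assumption and must match the BGP configuration on the BigIP; the peer IP is the F5 internal-network self IP from the topology above:

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: f5-bigip
spec:
  peerIP: 192.168.70.254   # F5 internal-network self IP from the topology above
  asNumber: 64512          # must match the AS configured on the BigIP (assumption)
```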
Exposing Kubernetes services
To forward Layer 4 traffic from the appliance to the pods, create a ConfigMap resource that represents an F5 Resource, which the BigIP controller monitors. The F5 BigIP controller watches ConfigMaps with labels that match `f5type=virtual-server` in all namespaces that it is configured for, and creates virtual servers on the appliance based on their content.
For example, the following ConfigMap exposes the Node.js sample app, which listens on port 3000, on port 80:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nodejs-vs
  namespace: default
  labels:
    f5type: virtual-server
data:
  schema: "f5schemadb://bigip-virtual-server_v0.1.1.json"
  data: |
    {
      "virtualServer": {
        "frontend": {
          "balance": "round-robin",
          "mode": "http",
          "partition": "ICP",
          "virtualAddress": {
            "bindAddr": "172.16.252.180",
            "port": 80
          }
        },
        "backend": {
          "serviceName": "nodejs-test-ibm-nodejs-s",
          "servicePort": 3000
        }
      }
    }
```
This resource type is ideal when the backend is not an HTTP backend, or to support TLS pass-through to another proxy such as the nginx-ingress controller chart.
Exposing ingress resources
To forward Layer 7 traffic from the appliance to the pods, an Ingress resource can be used to add several backends to the same virtual server instance in the appliance. These resources can be at different host names, or at different paths, depending on the rules that are defined in the ingress resource.
To expose the ingress resource on the F5 appliance, F5-specific annotations indicate to the controller how to program the F5. For a full list of supported annotations, see Attach a virtual server to a Kubernetes Ingress.
For example, the following ingress resource exposes the Node.js sample app on the resource path `/` on the default virtual server in the F5 BIG-IP `ICP` partition. A specific IP address can also be set; in this case, `controller-default` was specified as `172.16.252.180` when the F5 controller chart was installed.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs
  namespace: default
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/partition: "ICP"
spec:
  rules:
  - host: nodejs.csplab.cloudns.cx
    http:
      paths:
      - backend:
          serviceName: nodejs-test-ibm-nodejs-s
          servicePort: 3000
        path: /
```
Using ingress resources keeps the same ingress YAML portable across platforms. For example, the same ingress resource YAML can be used on premises with an F5 appliance, or on a public cloud with the default nginx-based ingress controller.
Additionally, using an ingress resource allows TLS termination at the F5 appliance. The same `tls` spec that is defined in the standard Kubernetes ingress resource is used. The TLS certificate and key are added as a secret to Kubernetes and the secret name is specified in the ingress resource; alternatively, an SSL profile that is stored on the BigIP appliance can be used by specifying it as `/<partition>/<profileName>`.
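A hedged sketch of the TLS variant of the earlier ingress follows. The secret name and certificate file names are assumptions; the `tls` section uses the standard Kubernetes ingress schema, and the secret would be created beforehand with `kubectl create secret tls nodejs-tls --cert=tls.crt --key=tls.key -n default`:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-tls
  namespace: default
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/partition: "ICP"
spec:
  tls:
  - hosts:
    - nodejs.csplab.cloudns.cx
    secretName: nodejs-tls       # or "/<partition>/<profileName>" for a BigIP SSL profile
  rules:
  - host: nodejs.csplab.cloudns.cx
    http:
      paths:
      - backend:
          serviceName: nodejs-test-ibm-nodejs-s
          servicePort: 3000
        path: /
```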
To prevent the default ingress controller that is included with IBM Cloud Private from also exposing the ingress resources that are meant to be exposed on the F5, add the following annotation, which causes the default ingress controller to ignore the resource:
```yaml
...
metadata:
  annotations:
    kubernetes.io/ingress.class: "f5"
...
```