Known issues and limitations for IBM Cloud Private with OpenShift
Review the known issues for version 3.2.0.
- Incorrect logging port
- Node exporter in error state
- Incorrect cURL command
- Certificate error after running the cloudctl login command
- icp-scc SecurityContextConstraints is erroneously assigned to all pods in all namespaces
- Calling router proxy_host domain on HTTPS port 443 returns a TLS handshake error
- IBM Cloud Private console on OpenShift is inaccessible post installation
Incorrect logging port
If you click Logging from the IBM Cloud Private navigation, you reach https://<Cluster Master Host>:8443/kibana/, where 8443 is an incorrect port. Change the port to the IBM Cloud Private port number that is included in the config.yaml installation file (https://<Cluster Master Host>:<port>/kibana/) to display the Kibana dashboard correctly.
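As a quick check, you can confirm that the Kibana endpoint responds on the corrected port. The value 5443 is shown only as a hypothetical example; substitute the port from your config.yaml file:
curl -k https://<Cluster Master Host>:5443/kibana/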
Node exporter in error state
Node exporter might be in an error state due to an unsuccessful image pull in an OpenShift environment.
To work around this issue before installation, add the following content to the config.yaml file:
monitoring:
  nodeExporter:
    serviceAccount:
      name: "default"
To work around this issue after installation, use the following command:
kubectl patch ds/monitoring-prometheus-nodeexporter -n kube-system -p '{"spec":{"template":{"spec":{"serviceAccount":"default","serviceAccountName":"default"}}}}'
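After you apply the patch, you can verify that the node exporter DaemonSet rolls out successfully:
kubectl -n kube-system rollout status ds/monitoring-prometheus-nodeexporter
kubectl -n kube-system get ds monitoring-prometheus-nodeexporter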
Incorrect cURL command
You might encounter a Connection Refused error due to an incorrect cURL command in the IBM Cloud Private CLI.
To correct the error, replace port 8443 with port 5443 in the command.
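For example, a CLI download command of the following form fails, and then succeeds after the port change. The endpoint path is shown as a typical example and might differ in your environment:
curl -kLo cloudctl-linux-amd64 https://<Cluster Master Host>:8443/api/cli/cloudctl-linux-amd64   # Connection Refused
curl -kLo cloudctl-linux-amd64 https://<Cluster Master Host>:5443/api/cli/cloudctl-linux-amd64   # corrected port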
Certificate error after running the cloudctl login command
If you run the IBM Cloud Private general CLI cloudctl login command with OpenShift installed, subsequent kubectl calls might receive a certificate error.
To resolve this issue, follow the appropriate steps for either a Linux® master or a macOS master:
On a Linux master:
- Replace <Cluster Master Host>:<Cluster Master API Port> with the master endpoint that is defined in Master endpoints in the following commands:
export OS_CA_CERT=$(openssl s_client -showcerts -connect <Cluster Master Host>:<Cluster Master API Port> </dev/null 2>/dev/null | openssl x509 -outform PEM)
export ICP_CA_CERT=$(kubectl -n kube-system get secret cluster-ca-cert -o yaml | grep ' tls.crt' | cut -d ":" -f 2 | xargs | base64 -d)
echo -e "$ICP_CA_CERT\n$OS_CA_CERT" | base64 | tr -d '\n'
- Copy the output and replace the tls.crt value in the cluster-ca-cert secret:
kubectl -n kube-system edit secret cluster-ca-cert
On a macOS master:
- Replace <Cluster Master Host>:<Cluster Master API Port> with the master endpoint that is defined in Master endpoints in the following commands:
export OS_CA_CERT=$(openssl s_client -showcerts -connect <Cluster Master Host>:<Cluster Master API Port> </dev/null 2>/dev/null | openssl x509 -outform PEM)
export ICP_CA_CERT=$(kubectl -n kube-system get secret cluster-ca-cert -o yaml | grep ' tls.crt' | cut -d ":" -f 2 | base64 -D)
echo -e "$ICP_CA_CERT\n$OS_CA_CERT" | base64 | tr -d '\n'
- Copy the output and replace the tls.crt value in the cluster-ca-cert secret:
kubectl -n kube-system edit secret cluster-ca-cert
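After you update the secret, you can verify that tls.crt now contains both CA certificates. The following check (use base64 -D on macOS) counts the certificates in the secret and should return 2:
kubectl -n kube-system get secret cluster-ca-cert -o jsonpath='{.data.tls\.crt}' | base64 -d | grep -c 'BEGIN CERTIFICATE'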
icp-scc SecurityContextConstraints is erroneously assigned to all pods in all namespaces
In OpenShift clusters, the icp-scc SecurityContextConstraints resource is erroneously assigned to all pods in all namespaces that use Deployments, StatefulSets, DaemonSets, Jobs, and other controllers that manage pods. The icp-scc SecurityContextConstraints resource is assigned regardless of the user ID that was used to create the resource or the service account that is assigned to the pod.
To resolve the issue, run the following kubectl commands on the master node:
kubectl patch scc icp-scc --type='json' -p='[{"op": "remove", "path": "/groups"}]'
kubectl patch scc icp-scc --type='json' -p='[{"op": "add", "path": "/users", "value": ["system:serviceaccount:kube-system:default","system:serviceaccount:istio-system:default", "system:serviceaccount:icp-system:default","system:serviceaccount:cert-manager:default"] }]'
Calling router proxy_host domain on HTTPS port 443 returns a TLS handshake error
When you install IBM Cloud Private on OpenShift, a route resource named icp-proxy is set up to forward traffic to the NGINX ingress controller. When you create an ingress resource and try to connect to it by using the configured router proxy domain name on the HTTPS port, the Transport Layer Security (TLS) handshake fails. The default HTTPS port is 443.
Consider the following example.
An IBM Cloud Private cluster with the following parameters is installed on OpenShift:
openshift:
  console:
    host: console.ibm.com
    port: 8443
  router:
    cluster_host: cluster_host.ibm.com
    proxy_host: proxy_host.ibm.com
Following is the ingress resource that is used:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
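This example assumes that a Service named nginx that listens on port 80 already exists in the same namespace. If you save the resource to a file (nginx-ingress.yaml is a hypothetical file name), you can create it with:
kubectl apply -f nginx-ingress.yaml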
When you try to access the resource by using the proxy domain name, the TLS handshake fails.
curl -k https://<proxy_host>/nginx
The command returns the following error:
curl: (35) SSL received a record that exceeded the maximum permissible length.
To work around this issue, edit the nginx-ingress service that is in the kube-system namespace:
kubectl edit svc -n kube-system nginx-ingress
Remove the following section:
- name: http
  port: 80
  protocol: TCP
  targetPort: 80
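With the HTTP port removed, the OpenShift router forwards connections on the route to the remaining HTTPS port of the ingress controller, so the TLS handshake can complete. The remaining ports section of the service looks similar to the following sketch; the exact port name and numbers depend on your installation:
- name: https
  port: 443
  protocol: TCP
  targetPort: 443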
You can run the following commands to verify that the TLS handshake is successful:
openssl s_client -debug -connect <proxy_host>:443 -servername <proxy_host>
curl -k https://<proxy_host>/nginx
IBM Cloud Private console on OpenShift is inaccessible post installation
You cannot view the management console on OpenShift after installation. This issue occurs when the OpenShift DNS wildcard is configured incorrectly.
To resolve this issue, create a wildcard DNS entry for your application that points to the public IP address of the host where the router is deployed.
You can obtain the wildcard DNS domain by running the following command:
kubectl -n openshift-console get route console -o jsonpath='{.spec.host}' | cut -f 2- -d "."
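For example, if the command returns apps.example.com (a hypothetical domain), the wildcard record in your DNS zone would look similar to the following BIND-style entry, where 192.0.2.10 is a placeholder for the public IP address of the router host:
*.apps.example.com.   IN   A   192.0.2.10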
For more information, see the Configuring a DNS wildcard section in the OpenShift documentation.