Troubleshooting Elasticsearch

If you encounter a security policy error or a failing data pod with Elasticsearch, the following sections describe the diagnosis and solution.

Security policies

Problem
After the Helm chart is deployed, none of the pods are in the ready state. When you run the following describe command, the Events section contains text such as Unable to validate against any pod security policy. Privileged containers are not allowed.
kubectl describe pod <pod_name>
Diagnosis
This error indicates that the Kubernetes service account does not have the permissions to deploy to the target namespace any pods that require privileged containers.
Cause
Some deployment types in Kubernetes are queued and executed asynchronously. When Kubernetes executes the queued deployment, however, it does so in the context of its internal service account instead of using the security context of the user that originally invoked the deployment. For the public discussion, see Kubernetes issue 55973.
Solution
You can grant a Kubernetes service account permissions to deploy to the target namespace any pods that require privileged containers. For more information, see Modifying IBM Cloud™ Private security policy.
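The exact policy and role names depend on your cluster, so the following is only a sketch. It assumes a privileged pod security policy named ibm-privileged-psp and a target namespace named bai; a ClusterRole and RoleBinding such as these grant all service accounts of that namespace permission to use the policy (all names here are placeholders to adapt):

```yaml
# Hypothetical example: adapt the policy, role, and namespace names to your cluster.
# ClusterRole that allows use of the privileged pod security policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp-user            # placeholder role name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["ibm-privileged-psp"]  # assumed policy name
  verbs: ["use"]
---
# Bind the role to every service account in the target namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: privileged-psp-binding         # placeholder binding name
  namespace: bai                       # placeholder target namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:bai     # all service accounts in the namespace
```

Apply the manifest with kubectl apply -f, then redeploy the Helm chart.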

New in 18.0.2: Elasticsearch data pod not starting

Problem
After you upgrade IBM® Business Automation Insights to a version that includes a version upgrade of the embedded Kibana, one of the Elasticsearch data pods remains in the Error state and its logs show an error message such as the following one.
java.lang.IllegalStateException: index and alias names need to be unique, but the following duplicates were found [.kibana (alias of [.kibana_1/r1bLSwwsSS-FQp3WHQJa4Q])]
Diagnosis
The starting Elasticsearch data pod holds data for an index named .kibana, while the already-running Elasticsearch cluster contains an alias that is also named .kibana. Because indexes and aliases cannot share a name, the data pod's attempt to join the cluster is rejected.
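You can confirm the name collision by querying the cluster. This sketch assumes you have port-forwarded access to Elasticsearch on localhost:9200 and that security is disabled or credentials are added to the requests:

```shell
# Hypothetical check: adjust the URL and add credentials as needed.
# Lists any alias named .kibana and the index it points to.
curl -s "http://localhost:9200/_cat/aliases/.kibana?v"

# Shows the backing indexes (such as .kibana_1) behind the alias.
curl -s "http://localhost:9200/_alias/.kibana"
```

If the first command returns an alias entry, the cluster side of the collision is confirmed.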
Cause
A Kibana upgrade is likely to trigger the following sequence of events.
  1. Create new .kibana_N indexes with up-to-date index mappings.
  2. Reindex the data from the original .kibana index to these new .kibana_N indexes.
  3. Delete the original .kibana index and replace it with an alias that uses the same name.
For more information, see the Saved object migrations page of the Kibana documentation.
Solution
  1. Manually delete all the files that are stored on the persistent volume that holds the data for the failing Elasticsearch data pod.
  2. Delete the Elasticsearch data pod so that Kubernetes re-creates and restarts it.
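The two steps above can be sketched with kubectl as follows; the pod and namespace names are placeholders, and the persistent volume path depends on your storage provisioner:

```shell
# Hypothetical sketch: replace the names with those of your deployment.
# 1. Identify the persistent volume claim that the failing data pod uses.
kubectl get pod <failing-data-pod> -n <namespace> \
  -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'

#    Delete the files on the matching persistent volume, for example
#    from a host that mounts it (the path is an assumption):
#    rm -rf /path/on/persistent/volume/*

# 2. Delete the pod; its controller re-creates and restarts it.
kubectl delete pod <failing-data-pod> -n <namespace>
```

After the pod restarts with an empty data directory, it can join the cluster and receive replica data from the other data pods.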