Operating Elasticsearch
IBM® Business Automation Insights never deletes your data. You decide which data to delete and how to select it, depending on your business requirements.
Before you begin
About this task
As part of the installation process, configure the default Elasticsearch environment (or your external Elasticsearch installation) so that your business data can be written and read. Data processing jobs and Kibana dashboards rely on Elasticsearch aliases to write data to Elasticsearch and to query data from it. Indices are always accessed through aliases. Defining indices and aliases allows you to use Elasticsearch capabilities to roll over and purge data efficiently.
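As a minimal sketch of this pattern, you can bootstrap an index behind a write alias so that jobs and dashboards never address the index directly. The index and alias names below are hypothetical examples, not the names that Business Automation Insights creates:

```json
PUT /bai-process-summaries-000001
{
  "aliases": {
    "bai-process-summaries-write": { "is_write_index": true }
  }
}
```

Because writers target only the alias, a later rollover can repoint `bai-process-summaries-write` to a new index without any change to the data processing jobs.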
- For 18.0.0 and 18.0.1: You cannot change the ibm_dba_ek.elasticsearch.client.antiAffinity, ibm_dba_ek.elasticsearch.data.antiAffinity, and ibm_dba_ek.elasticsearch.master.antiAffinity parameters from the Helm command line.
- New in 18.0.2: You can update the antiAffinity parameters.
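On 18.0.2 or later, such a parameter update could look like the following Helm sketch. The release name, chart reference, and the `hard` value are assumptions for illustration; check the chart's values documentation for the values your version accepts:

```shell
# Hypothetical example: tighten pod anti-affinity for the data nodes
# (release name "bai-release" and chart "ibm-charts/ibm-dba-ek" are placeholders)
helm upgrade bai-release ibm-charts/ibm-dba-ek \
  --reuse-values \
  --set ibm_dba_ek.elasticsearch.data.antiAffinity=hard
```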
- About Elasticsearch aliases and indices: see Elasticsearch aliases and indexes.
- About the rollover pattern: see Elasticsearch Rollover pattern and Elasticsearch API.
- About index management: see Elasticsearch Curator.
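The rollover pattern referenced above can be sketched with the Elasticsearch rollover API: when any condition is met, a new backing index is created and the write alias moves to it. The alias name and thresholds here are hypothetical:

```json
POST /bai-process-summaries-write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 50000000
  }
}
```

Older indices that the alias no longer points to can then be purged, for example on a schedule managed by Elasticsearch Curator.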
Procedure
Typically, take a snapshot of the Elasticsearch data at least once a day, and more frequently than the retention duration of your Apache Kafka queue, so that any data lost since the last snapshot can still be reconstructed from the Kafka cluster.
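A snapshot schedule of this kind relies on the Elasticsearch snapshot API. As a sketch, you first register a repository once, then create snapshots against it; the repository name, filesystem location, and snapshot name below are assumptions for illustration:

```json
PUT /_snapshot/bai_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/bai"
  }
}

PUT /_snapshot/bai_backup/snapshot-2019-01-01?wait_for_completion=true
```

For a filesystem (`fs`) repository, the `location` path must be registered in the cluster's `path.repo` setting and be accessible from every Elasticsearch node.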
What to do next
If you use the embedded Elasticsearch and Kibana rather than an external Elasticsearch installation, and only in this case, you might need to take the following actions.