April 18, 2019 By Phil Alger 5 min read

IBM Cloud Databases for Elasticsearch coupled with Kibana

You might already have set up an IBM Cloud Databases for Elasticsearch deployment, and you now want to use Kibana to visualize your data or run queries using the interactive UI. We’ve got you covered. All you need are your database credentials and Docker to get set up and start searching.

If you use Elasticsearch, you’re more than likely aware of how powerful the database is when coupled with Kibana, the open source tool that adds visualization capabilities to your Elasticsearch database. So, if you’re already running IBM Cloud Databases for Elasticsearch, you might be considering Kibana or wondering how to get started.

We’ve got you covered. In this article, we’ll show you how to connect your Databases for Elasticsearch deployment to Kibana using Docker, which takes just a couple of minutes to get set up.

Let’s get started.

Setting things up

First, make sure you have Docker installed. You’ll need it to pull the Kibana container image that will connect to Databases for Elasticsearch.
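You can confirm Docker is available from your terminal:

docker --version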

Next, grab the credentials for your Databases for Elasticsearch deployment. You can either do this from the IBM Cloud console or using the IBM Cloud CLI. In this example, we’ll get the credentials using the IBM Cloud CLI. With the CLI installed, run the following in your terminal if you’re using a federated login account. Otherwise, omit the --sso flag.

ibmcloud login --sso

The next step is to get your Databases for Elasticsearch credentials. This is done using the IBM Cloud CLI’s cloud-databases plugin. If you don’t have this plugin installed, we’ll show you how to add it; otherwise, you can skip this step.
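If you’re not sure whether the plugin is already there, you can list your installed plugins first:

ibmcloud plugin list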

To install the cloud-databases plugin from the IBM Cloud CLI, run the following in your terminal:

ibmcloud plugin install cloud-databases

Once that’s finished installing, you can access any of the cloud-databases commands using ibmcloud cdb. In order to connect Kibana to your database, you’ll need the connection string, username and password, and CA certificate of your Databases for Elasticsearch deployment.

To get the connection string of your deployment, use the CLI and the cloud-databases plugin and run:

ibmcloud cdb cxn <elasticsearch deployment name>

So, if our database is called “Databases for Elasticsearch”, we would run:

ibmcloud cdb cxn "Databases for Elasticsearch"

Make sure to use quotes around the deployment name if it contains spaces, as in the example above. If the name doesn’t contain spaces, you don’t need the quotes.

After running that command, you’ll get the connection string for the deployment. In it, you’ll find the username admin, a redacted password, and the host and port of the Elasticsearch deployment.

When you set up your Databases for Elasticsearch deployment, you should have changed the admin password. If you haven’t, do that now so you can connect as that user:

ibmcloud cdb user-password "Databases for Elasticsearch" admin <new password>

You’ll also need the CA certificate to access the database. You can get that using the following command:

ibmcloud cdb cacert <elasticsearch deployment name>

For example, using the same deployment name as above:

ibmcloud cdb cacert "Databases for Elasticsearch"

Running this command would give you something like this:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

The certificate has been redacted here, but you need to copy everything from -----BEGIN CERTIFICATE----- through -----END CERTIFICATE----- into a file and save it to a directory of your choice. You’ll reference this file’s location when running Kibana from Docker.
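Alternatively, you can write the certificate straight to a file from the same command, assuming your CLI version prints only the PEM block to standard output:

ibmcloud cdb cacert "Databases for Elasticsearch" > cacert

Either way, note the directory where the certificate file ends up.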

Setting up Kibana

Before running a Docker container that includes Kibana, you’ll need to create a configuration file that contains some basic Kibana settings. There are numerous settings for Kibana that you can peruse and add to your configuration file if you need them.

To set up the configuration file, create a YAML file called kibana.yml. Inside this file, you’ll need the following Kibana configuration settings:

elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/cacert"
elasticsearch.username: "admin"
elasticsearch.password: "mypassword"
elasticsearch.url: "https://xxxx.databases.appdomain.cloud:30694"
server.name: "kibana"
server.host: "0.0.0.0"

The first setting, elasticsearch.ssl.certificateAuthorities, is the location of your deployment’s CA certificate inside the container. You can change this to a location of your choice, but we’ve kept it in Kibana’s config directory. Remember, this is a path inside the Docker container, not on your local system.

The next three settings are your Databases for Elasticsearch username (elasticsearch.username), password (elasticsearch.password), and deployment URL with the hostname and port (elasticsearch.url). Finally, we have server.name, which is a machine-readable name for the Kibana instance, and server.host, which is the host the Kibana server binds to and where you’ll connect to it in your web browser.

Again, the settings above are just an example to get started. See the Kibana documentation for more configuration settings you can set for your use case.
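Before starting Kibana, you can optionally sanity-check these values with curl, using the CA certificate file you saved earlier (the path, host, port, and password below are placeholders):

curl --cacert /path/to/cacert -u admin:mypassword "https://xxxx.databases.appdomain.cloud:30694"

A JSON response containing the cluster name and Elasticsearch version number means the credentials and certificate are good.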

Running the Kibana Container

Now that the kibana.yml file is set up, we’ll show you how to use Docker to attach that file and your CA certificate to the Docker container while pulling the Kibana image from the Docker image repository. The Docker image for the Kibana version we’re using is kibana-oss version 6.5.4, which is the open source version of Kibana without X-Pack.

Below is the Docker command that you’ll run in your terminal to start up the Kibana container:

docker container run -it --name kibana \
-v /path/to/kibana.yml:/usr/share/kibana/config/kibana.yml \
-v /path/to/<ca certificate file name>:/usr/share/kibana/config/<ca certificate file name> \
-p 5601:5601 docker.elastic.co/kibana/kibana-oss:6.5.4

We haven’t detached the container because we want to illustrate what it looks like while running, but you can add the -d flag if you don’t want to see Kibana’s output in your terminal.
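If you do start the container with -d, you can still follow the same output and stop Kibana later using standard Docker commands:

docker logs -f kibana
docker stop kibana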

You’ll notice in the Docker command above that we attach two volumes with the -v flag. Both are mounted into the Kibana container at the path /usr/share/kibana/config/, the directory Kibana reads its configuration files from. The first volume points to your kibana.yml file; inside the container it must be named kibana.yml because that’s the file the Kibana server reads its settings from. The second volume takes the CA certificate you saved earlier and places a copy of it in the container’s /usr/share/kibana/config/ directory as well. This path is also specified in kibana.yml as elasticsearch.ssl.certificateAuthorities, and the container path in the Docker command must match the path in kibana.yml so that Kibana knows where your CA certificate is located. The port exposed from the container is 5601, and we’ll access Kibana on that port. Finally, the Kibana image we’ll pull is the kibana-oss version without X-Pack: docker.elastic.co/kibana/kibana-oss:6.5.4.
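As a concrete illustration (the local path here is hypothetical), if you saved the certificate as a file named cacert in /home/me, the volume mount and the kibana.yml setting must point at the same container path:

-v /home/me/cacert:/usr/share/kibana/config/cacert

elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/cacert"

Only the part of the -v flag before the colon changes, to match wherever the file lives on your machine.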

After filling in the paths, run the Docker command from your terminal; it will download the Kibana Docker image and start Kibana. Once Kibana has connected to your Databases for Elasticsearch deployment and is running successfully, you’ll see output like the following in your terminal:

log   [01:19:31.839] [info][status][plugin:kibana@6.5.4] Status changed from uninitialized to green - Ready
log   [01:19:31.925] [info][status][plugin:elasticsearch@6.5.4] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log   [01:19:32.120] [info][status][plugin:timelion@6.5.4] Status changed from uninitialized to green - Ready
log   [01:19:32.134] [info][status][plugin:console@6.5.4] Status changed from uninitialized to green - Ready
log   [01:19:32.147] [info][status][plugin:metrics@6.5.4] Status changed from uninitialized to green - Ready
log   [01:19:33.132] [info][status][plugin:elasticsearch@6.5.4] Status changed from yellow to green - Ready
log   [01:19:33.378] [info][listening] Server running at http://0.0.0.0:5601

At this point, open the URL http://0.0.0.0:5601 in your browser. Remember, we set server.host to 0.0.0.0 in the kibana.yml file and exposed port 5601 from the container. Once you go to the URL, a pop-up window will prompt you for a username and password. These are credentials that have access to your Databases for Elasticsearch deployment; they don’t have to be the same username and password you provided in the kibana.yml file.

From here, you can start using Kibana with Databases for Elasticsearch.

Summary

We started out by getting the credentials for your IBM Cloud Databases for Elasticsearch deployment, then set up a Kibana configuration file with the basic options you need to connect Kibana to your Elasticsearch database. From there, we ran Docker and watched Kibana go live in your browser. In the next article, I’ll take you through the steps to get Kibana set up on IBM Cloud Kubernetes Service.
