Verifying Kafka
To ensure that your environment is ready to install IBM® Surveillance Insight® for Financial Services, you can verify your Kafka
settings.
Procedure
- Create a file that is named client.ssl.properties in the /usr/hdp/2.6.4.0-91/kafka/conf directory.
- Add the following contents to client.ssl.properties:
security.protocol=SASL_SSL
ssl.keystore.location=<ssl.keystore.location>
ssl.keystore.password=<ssl.keystore.password>
ssl.truststore.location=<ssl.truststore.location>
ssl.truststore.password=<ssl.truststore.password>
ssl.key.password=<ssl.key.password>
For the keystore and truststore values, see the Kafka > Configs > Custom kafka-broker settings in the Ambari console.
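For reference, a completed client.ssl.properties file might look like the following example. The keystore and truststore paths and the passwords are illustrative placeholders, not values from your cluster:
# Example values only; substitute your cluster's keystore and truststore details
security.protocol=SASL_SSL
ssl.keystore.location=/etc/security/ssl/kafka.client.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/security/ssl/kafka.client.truststore.jks
ssl.truststore.password=changeit
ssl.key.password=changeit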
- Create a file that is named kafka_client_jaas_sifs.conf in the /usr/hdp/2.6.4.0-91/kafka/conf directory.
- Add the following contents to kafka_client_jaas_sifs.conf:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/<Kafka_Broker_Host_name>"
    useTicketCache=true
    renewTicket=true
    serviceName="kafka";
};
Change the <Kafka_Broker_Host_name> value to the name of the Kafka broker computer.
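Optionally, you can confirm that the keytab contains the expected principal before you use it. The klist tool is part of the standard Kerberos client packages:
# List the principals that are stored in the Kafka service keytab
klist -kt /etc/security/keytabs/kafka.service.keytab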
- In a terminal window, enter the following command to set the JVM parameters:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/2.6.4.0-91/kafka/conf/kafka_client_jaas_sifs.conf"
- Enter the following command to create a Kafka topic:
/usr/hdp/2.6.4.0-91/kafka/bin/kafka-topics.sh --create --zookeeper <ZooKeeper_Host>:2181 --replication-factor 1 --partitions 1 --topic sifs.ecomm.in
Replace <ZooKeeper_Host> with the appropriate ZooKeeper host name. The host name is shown in the Ambari console: click Kafka > Configs > Kafka Broker > zookeeper.connect.
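To confirm that the topic was created, you can describe it with the same tool. This sketch assumes the same <ZooKeeper_Host> placeholder:
/usr/hdp/2.6.4.0-91/kafka/bin/kafka-topics.sh --describe --zookeeper <ZooKeeper_Host>:2181 --topic sifs.ecomm.in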
- In the current terminal, start the producer by running the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/2.6.4.0-91/kafka/conf/kafka_client_jaas_sifs.conf"
cd /usr/hdp/2.6.4.0-91/kafka
bin/kafka-console-producer.sh --broker-list <Kafka_Broker_Host>:6667 --topic sifs.ecomm.in --producer.config conf/client.ssl.properties --security-protocol SASL_SSL
- In another terminal, start the consumer by running the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/2.6.4.0-91/kafka/conf/kafka_client_jaas_sifs.conf"
cd /usr/hdp/2.6.4.0-91/kafka
bin/kafka-console-consumer.sh --bootstrap-server <Kafka_Broker_Host>:6667 --topic sifs.ecomm.in --new-consumer --consumer.config conf/client.ssl.properties --security-protocol SASL_SSL
- Go to the /usr/hdp/2.6.4.0-91/kafka/bin directory, and run the following commands to run the producer and consumer and validate the message exchange:
./kafka-console-producer.sh --broker-list <Kafka_Broker_Host>:6667 --topic sifs.ecomm.in --producer.config ../conf/client.ssl.properties --security-protocol SASL_SSL
./kafka-console-consumer.sh --bootstrap-server <Kafka_Broker_Host>:6667 --topic sifs.ecomm.in --new-consumer --consumer.config ../conf/client.ssl.properties --security-protocol SASL_SSL
Messages that are placed on the producer console should be visible in the consumer session. For example, enter Hello World! in the producer terminal. This message should appear in the consumer terminal.
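For a non-interactive check, you can pipe a single message through the producer instead of typing it. This sketch assumes that KAFKA_OPTS is still exported in the current shell and that the consumer is running in another terminal:
# Send one test message and exit; the message should appear in the running consumer
echo "Hello World!" | ./kafka-console-producer.sh --broker-list <Kafka_Broker_Host>:6667 --topic sifs.ecomm.in --producer.config ../conf/client.ssl.properties --security-protocol SASL_SSL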
- On each of the HDFS slave nodes, create a directory that is named kafka/conf in /usr/hdp/2.6.4.0-91, if it does not exist. For example:
mkdir -p /usr/hdp/2.6.4.0-91/kafka/conf
- Copy kafka_client_jaas_sifs.conf to /usr/hdp/2.6.4.0-91/kafka/conf on all of the slave nodes:
scp /usr/hdp/2.6.4.0-91/kafka/conf/kafka_client_jaas_sifs.conf root@<hdp_node>:/usr/hdp/2.6.4.0-91/kafka/conf
- Copy kafka.service.keytab from the Kafka node to the /etc/security/keytabs directory on all of the HDFS nodes:
scp /etc/security/keytabs/kafka.service.keytab root@<hdp_node>:/etc/security/keytabs
- Run the following command to grant read permissions for all users:
chmod a+r /etc/security/keytabs/kafka.service.keytab
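If you have several slave nodes, a short shell loop can repeat the copy steps. The host names in this sketch are hypothetical; replace them with your own node names:
# Hypothetical host names; substitute your HDFS slave nodes
for node in hdp-node1 hdp-node2 hdp-node3; do
    scp /usr/hdp/2.6.4.0-91/kafka/conf/kafka_client_jaas_sifs.conf root@$node:/usr/hdp/2.6.4.0-91/kafka/conf
    scp /etc/security/keytabs/kafka.service.keytab root@$node:/etc/security/keytabs
done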