Question & Answer
Answer
To configure federation to access Spark through the JDBC wrapper, you provide information about the data source and the objects that you want to access, and then create a server definition, a user mapping, and nicknames for the tables in the remote Spark data source.
Before you begin
Verify that the driver FOsparksql.jar exists in $INSTANCE_HOME/sqllib/federation/jdbc/lib. Starting with Db2 V11.5.6, Spark JDBC support is optimized, including function mapping, data type mapping, and server attribute optimization.
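As a quick sanity check, a short shell snippet can confirm that the driver jar is in place. This is a sketch: the INSTANCE_HOME default below is an assumption and should be set to your Db2 instance owner's home directory.

```shell
# Assumption: INSTANCE_HOME falls back to the current user's home;
# point it at the Db2 instance owner's home directory instead.
INSTANCE_HOME="${INSTANCE_HOME:-$HOME}"
DRIVER="$INSTANCE_HOME/sqllib/federation/jdbc/lib/FOsparksql.jar"
if [ -f "$DRIVER" ]; then
    echo "driver found: $DRIVER"
else
    echo "driver missing: $DRIVER"
fi
```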
1. Enable Federation server and restart Db2.
# db2 update dbm cfg using federated YES
# db2stop force
# db2start
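To confirm that the change took effect after the restart, the FEDERATED parameter can be checked in the database manager configuration output. A small sketch (the helper name check_federated is ours, not a Db2 command):

```shell
# Helper (our naming): reads "db2 get dbm cfg" output on stdin and
# succeeds only if federated database support is set to YES.
check_federated() {
    grep -qi 'FEDERATED.*= *YES'
}

# Usage on a live instance:
#   db2 get dbm cfg | check_federated && echo "federation enabled"
```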
2. Test the connection to the Spark server and verify that the service has started correctly.
telnet <spark_ip> <port>
If the connection is successful, you receive output similar to the following:
$ telnet pastimes1.fyre.ibm.com 10016
Trying 9.30.215.134...
Connected to pastimes1.fyre.ibm.com.
If the connection fails, you receive an error. Check the status of the Spark server before you continue.
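If telnet is not installed, a rough equivalent can be sketched with bash's /dev/tcp pseudo-device. This is bash-specific, and check_port is our helper name, not a standard tool:

```shell
# Bash-only sketch: attempts a TCP connection to host:port and
# reports "open" or "closed" instead of telnet's banner output.
check_port() {
    local host=$1 port=$2
    # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only if the
    # connection is accepted; the subshell closes the fd on exit.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Example: check_port pastimes1.fyre.ibm.com 10016
```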
3. Create the server, user mapping, and nickname, and then query the nickname.
# connect to testdb
# CREATE SERVER server1 TYPE spark VERSION 6.0 OPTIONS(DRIVER_CLASS 'com.ibm.fluidquery.jdbc.sparksql.SparkSQLDriver', URL 'jdbc:ibm:sparksql://pastimes1.fyre.ibm.com:10016;DatabaseName=default', DRIVER_PACKAGE '/home/haijs/sqllib/federation/jdbc/lib/FOsparksql.jar');
# CREATE USER MAPPING FOR PUBLIC SERVER server1 OPTIONS (REMOTE_AUTHID 'spark', REMOTE_PASSWORD 'hadoop');
# CREATE NICKNAME nk1 FOR server1."test_spark";
# SELECT * FROM nk1;
col_1
--------------------------
1970-11-23-12.12.12.000000
1999-12-31-12.12.12.000000
2 record(s) selected.
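The statements in step 3 can also be collected into one CLP script and run in a single invocation with db2 -tvf (each statement terminated with a semicolon). A sketch, reusing the example values from the steps above (the file name federate_spark.sql is our choice):

```shell
# Write the DDL from step 3 into a CLP script file.
cat > federate_spark.sql <<'EOF'
CONNECT TO testdb;
CREATE SERVER server1 TYPE spark VERSION 6.0
  OPTIONS(DRIVER_CLASS 'com.ibm.fluidquery.jdbc.sparksql.SparkSQLDriver',
          URL 'jdbc:ibm:sparksql://pastimes1.fyre.ibm.com:10016;DatabaseName=default',
          DRIVER_PACKAGE '/home/haijs/sqllib/federation/jdbc/lib/FOsparksql.jar');
CREATE USER MAPPING FOR PUBLIC SERVER server1
  OPTIONS (REMOTE_AUTHID 'spark', REMOTE_PASSWORD 'hadoop');
CREATE NICKNAME nk1 FOR server1."test_spark";
SELECT * FROM nk1;
EOF

# On a live instance, run:
#   db2 -tvf federate_spark.sql
```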
Document Information
Modified date:
06 April 2021
UID
ibm16429467