IBM Support

How to connect to Netezza Using Spark

Question & Answer


Question

Can we use the JDBC connector in Spark 1.3 and 1.4 to connect from Spark to Netezza?

Answer

Yes, here are the steps:

1) Download and install Spark:

[cfkoh@oc3263864625 spark]$ ls -l
total 230940
drwxr-xr-x. 11 cfkoh cfkoh 4096 Sep 1 09:21 spark-1.5.0-bin-hadoop1
-rw-r-----. 1 cfkoh cfkoh 236473795 Sep 9 16:00 spark-1.5.0-bin-hadoop1.tgz

2) Download JDBC driver into local machine:
[cfkoh@oc3263864625 spark]$ scp nz@9.167.40.76:/nz/kit/sbin/nzjdbc*.jar .
nz@9.167.40.76's password:
nzjdbc3.jar

(The above is a shortcut; you can also download the client software from Fix Central.)

3) Set the CLASSPATH to point to the JDBC jar file:
export CLASSPATH=$PWD/nzjdbc3.jar
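As an alternative to setting CLASSPATH, the driver jar can be passed to spark-shell directly via its standard `--driver-class-path` and `--jars` options. A minimal sketch, assuming nzjdbc3.jar sits in the current working directory:

```shell
# Pass the Netezza JDBC driver to spark-shell explicitly
# (assumes nzjdbc3.jar is in the current working directory)
spark-1.5.0-bin-hadoop1/bin/spark-shell \
  --driver-class-path "$PWD/nzjdbc3.jar" \
  --jars "$PWD/nzjdbc3.jar"
```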

4) Launch the Spark shell:

[cfkoh@oc3263864625 spark]$ spark-1.5.0-bin-hadoop1/bin/spark-shell

5) Establish the JDBC connection, read the data, and store it in the variable df:

scala> val df = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:netezza://9.167.40.76:5480/system", "user" -> "admin", "password" -> "password", "dbtable" -> "_V_TABLE", "driver" -> "org.netezza.Driver")).load()
df: org.apache.spark.sql.DataFrame = [OBJID: int, TABLENAME: string, OWNER: string, CREATEDATE: timestamp, OBJTYPE: string, OBJCLASS: int, DESCRIPTION: string, RELHASINDEX: boolean, RELKIND: string, RELCHECKS: int, RELTRIGGERS: int, RELHASRULES: boolean, RELUKEYS: int, RELFKEYS: int, RELREFS: int, RELHASPKEY: boolean, RELNATTS: int, RELDISTMETHOD: int, OBJDELIM: boolean, RELTUPLES: bigint, BASEOBJID: int, VISIBLEID: int, RELVERSION: int, RELORIGOID: int]
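For readability, the one-line call above can be broken up by assembling the option map first and then passing it to `load()`. A minimal sketch, using the same example host, port, database, and credentials shown above (replace them with your own):

```scala
// Assemble the JDBC options used by sqlContext.read.format("jdbc").
// Host, port, database, and credentials are the example values from above.
val host = "9.167.40.76"
val port = 5480
val database = "system"

val url = s"jdbc:netezza://$host:$port/$database"

val options = Map(
  "url"      -> url,
  "user"     -> "admin",
  "password" -> "password",
  "dbtable"  -> "_V_TABLE",            // table or view to read
  "driver"   -> "org.netezza.Driver"   // Netezza JDBC driver class
)

// In spark-shell this map is then passed as:
//   val df = sqlContext.read.format("jdbc").options(options).load()
```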

6) Check the data:

scala> df.show
+-----+--------------------+-----+--------------------+------------+--------+-----------+-----------+-------+---------+-----------+-----------+--------+--------+-------+----------+--------+-------------+--------+---------+---------+---------+----------+----------+
|OBJID| TABLENAME|OWNER| CREATEDATE| OBJTYPE|OBJCLASS|DESCRIPTION|RELHASINDEX|RELKIND|RELCHECKS|RELTRIGGERS|RELHASRULES|RELUKEYS|RELFKEYS|RELREFS|RELHASPKEY|RELNATTS|RELDISTMETHOD|OBJDELIM|RELTUPLES|BASEOBJID|VISIBLEID|RELVERSION|RELORIGOID|
+-----+--------------------+-----+--------------------+------------+--------+-----------+-----------+-------+---------+-----------+-----------+--------+--------+-------+----------+--------+-------------+--------+---------+---------+---------+----------+----------+
| 5006| _T_OBJECT|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 12| 6003| false| 6096| 5006| 5006| 0| 5006|
| 5634| _T_VT_HOSTTX|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 22| 6003| false| 0| 5634| 5634| 0| 5634|
| 5619| _T_HIST_CONFIG|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 25| 6003| false| 0| 5619| 5619| 0| 5619|
| 5134|_T_SHOW_COHORT_HI...|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 3| 6003| false| 1000| 5134| 5134| 0| 5134|
| 5135| _T_USER_SECURITY|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 3| 6003| false| 1000| 5135| 5135| 0| 5135|
| 5624| _T_CRYPTO_KEYSTORE|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 5| 6003| false| 1| 5624| 5624| 0| 5624|
| 5625| _T_CRYPTO_KEY|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 5| 6003| false| 0| 5625| 5625| 0| 5625|
| 5144| _T_VECTOR_IDX|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 1| 6003| false| 1000| 5144| 5144| 0| 5144|
| 5632|_T_CLIENT_COMPATI...|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 2| 6003| false| 1000| 5632| 5632| 0| 5632|
| 5035| _T_SYS_DATATYPE|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 21| 6003| false| 34| 5035| 5035| 0| 5035|
| 5034| _T_VIRTUAL|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 8| 6003| false| 118| 5034| 5034| 0| 5034|
| 5037| _T_SYSTEMDEF|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 4| 6003| false| 16| 5037| 5037| 0| 5037|
| 5615| _T_CONNECTION|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 6| 6003| false| 3| 5615| 5615| 0| 5615|
| 5039| _T_PRIORITY|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 4| 6003| false| 7| 5039| 5039| 0| 5039|
| 5041| _T_CHARSET|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 2| 6003| false| 3| 5041| 5041| 0| 5041|
| 5043| _T_COLLATION|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 2| 6003| false| 2| 5043| 5043| 0| 5043|
| 5023| _T_TEMP_TABLE_INFO|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 4| 6003| false| 1000| 5023| 5023| 0| 5023|
| 5028| _T_VREL|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| false| r| 0| 0| false| 0| 0| 0| false| 3| 6003| false| 66| 5028| 5028| 0| 5028|
| 5409| _T_BACKUP_HISTORY|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 18| 6003| false| 6| 5409| 5409| 0| 5409|
| 5411| _T_RESTORE_HISTORY|ADMIN|2015-07-16 04:33:...|SYSTEM TABLE| 4920| null| true| r| 0| 0| false| 0| 0| 0| false| 19| 6003| false| 1| 5411| 5411| 0| 5411|
+-----+--------------------+-----+--------------------+------------+--------+-----------+-----------+-------+---------+-----------+-----------+--------+--------+-------+----------+--------+-------------+--------+---------+---------+---------+----------+----------+
only showing top 20 rows

There are many resources available that explain how to do something similar in a Spark program, Python script, and so on. The above is just one way to make a JDBC connection from spark-shell.

Take note that:
* Make sure the JDBC driver is the correct version.
* Configure the CLASSPATH environment variable.
* Make sure the JDBC connection string is correct (points to the right server name/IP, etc.).
* You must use Spark 1.2 or later, which includes the Data Sources API.
* Some class files inside nzjdbc3.jar were compiled with an earlier release of Java and might not run under the later Java version used by spark-shell. When that happens, try a later version of the JDBC driver.


Document Information

More support for:
IBM PureData System

Software version:
1.0.0

Document number:
267565

Modified date:
17 October 2019

UID

swg21967820