In PySpark:
# Read the Netezza table over JDBC (sqlContext is already defined in the pyspark shell)
df = sqlContext.read.format('jdbc').options(
    url='jdbc:netezza://server1:5480/DATABASE',
    user='KIRK', password='****', dbtable='SCHEMA.MYTABLE',
    driver='org.netezza.Driver').load()
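A quick sanity check after the load, using nothing Netezza-specific, just standard DataFrame calls:
df.printSchema()   # column names and types Spark derived from the Netezza table
df.show(5)         # pull the first five rows over JDBC and display them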
And in spark-shell:
// The same read from spark-shell (sqlContext is pre-defined there as well)
val df = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:netezza://server1:5480/DATABASE",
  "user" -> "KIRK",
  "password" -> "****",
  "dbtable" -> "SCHEMA.MYTABLE",
  "driver" -> "org.netezza.Driver")).load()
Please note that Netezza likes identifiers (schema, table, and user names) in ALL CAPS. I do not know whether that is strictly necessary here, but it does not hurt.
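Both snippets also assume the Netezza JDBC driver jar is on Spark's classpath; if it is not, the load will typically fail with a ClassNotFoundException for org.netezza.Driver. One way to supply it is at launch time (the jar path below is just a placeholder for wherever your driver lives):
pyspark --driver-class-path /path/to/nzjdbc.jar --jars /path/to/nzjdbc.jar
spark-shell --driver-class-path /path/to/nzjdbc.jar --jars /path/to/nzjdbc.jar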