I am trying to use Structured Streaming (the DataFrame / Dataset based streaming API) to load a data stream from Kafka.
I use:
- Spark 2.1.0
- Kafka 0.10
- spark-sql-kafka-0-10
The Spark Kafka data source defines a fixed schema:
|key|value|topic|partition|offset|timestamp|timestampType|
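For reference, a quick printSchema() on such a stream (a sketch, using the same rawKafkaDF that I build below) shows that key and value arrive as binary, so the JSON inside value is not visible to Spark as columns:
rawKafkaDF.printSchema()
// root
//  |-- key: binary (nullable = true)
//  |-- value: binary (nullable = true)
//  |-- topic: string (nullable = true)
//  |-- partition: integer (nullable = true)
//  |-- offset: long (nullable = true)
//  |-- timestamp: timestamp (nullable = true)
//  |-- timestampType: integer (nullable = true)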
My data comes in JSON format and is stored in the value column. I am looking for a way to extract the schema of this JSON from the value column and expand the resulting DataFrame into the columns stored inside value. I tried the approach below, but it does not work:
import org.apache.spark.sql.Column

val columns = Array("column1", "column2")
val rawKafkaDF = sparkSession.sqlContext.readStream
.format("kafka")
.option("kafka.bootstrap.servers","localhost:9092")
.option("subscribe",topic)
.load()
val columnsToSelect = columns.map( x => new Column("value." + x))
val kafkaDF = rawKafkaDF.select(columnsToSelect:_*)
val query = kafkaDF.writeStream.format("console").start()
query.awaitTermination()
Here I get the exception org.apache.spark.sql.AnalysisException: Can't extract value from value#337;
because when the stream is defined, the structure of what is inside value is unknown ...
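The only workaround I can think of would be to cast value to a string and parse it with from_json against a schema that I write by hand; a minimal sketch of that idea (the StructType and the field names column1 / column2 are placeholders, and I would rather not hard-code the schema like this):
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hand-written schema for the JSON payload (placeholder fields)
val valueSchema = StructType(Seq(
  StructField("column1", StringType),
  StructField("column2", StringType)
))

val jsonDF = rawKafkaDF
  .selectExpr("CAST(value AS STRING) AS json")             // value arrives as binary
  .select(from_json(col("json"), valueSchema).as("data"))  // parse with the explicit schema
  .select("data.*")                                        // flatten the struct into columns

val query = jsonDF.writeStream.format("console").start()
query.awaitTermination()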
Do you have any suggestions?