Apache Spark: How to convert a Spark DataFrame to an RDD of type RDD[(Type1, Type2, ...)]?

For example, suppose I have a DataFrame:

var myDF = sc.parallelize(Seq(("one",1),("two",2),("three",3))).toDF("a", "b") 

I can convert it to RDD[(String, Int)] with a map:

 var myRDD = myDF.map(r => (r(0).asInstanceOf[String], r(1).asInstanceOf[Int])) 
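
The same conversion can also be written against the column names with Row.getAs, which avoids the positional indexing and the explicit asInstanceOf casts (a small variant of the code above, using the "a" and "b" columns from the toDF call):

 // look up each column by name instead of position
 val myRDD2 = myDF.map(r => (r.getAs[String]("a"), r.getAs[Int]("b")))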

Is there a better way to do this, possibly using a DF schema?

1 answer

Use pattern matching over Row:

 import org.apache.spark.sql.Row

 myDF.map { case Row(a: String, b: Int) => (a, b) }
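
Note that this answer assumes Spark 1.x, where DataFrame.map returns an RDD directly. On Spark 2.x, DataFrame.map expects an encoder and returns a Dataset, so drop to the underlying RDD first if you specifically need an RDD (a minimal sketch, assuming a Spark 2.x session):

 import org.apache.spark.sql.Row
 // .rdd gives RDD[Row]; the pattern match then yields RDD[(String, Int)]
 val myRDD = myDF.rdd.map { case Row(a: String, b: Int) => (a, b) }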

In Spark 1.6+, you can use Dataset as follows:

 myDF.as[(String, Int)].rdd 
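
For completeness, the Dataset route needs the tuple encoder that comes with the SQL implicits (import sqlContext.implicits._ on 1.6, import spark.implicits._ on 2.x). A minimal sketch using the DataFrame defined above:

 // convert the untyped DataFrame to Dataset[(String, Int)], then expose its RDD
 val myRDD = myDF.as[(String, Int)].rdd
 myRDD.collect() // Array((one,1), (two,2), (three,3))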

Source: https://habr.com/ru/post/1241273/

