Why not use the built-in toDF?
scala> val myRDD = sc.parallelize(Seq(("1", "roleA"), ("2", "roleB"), ("3", "roleC")))
myRDD: org.apache.spark.rdd.RDD[(String, String)] = ParallelCollectionRDD[60] at parallelize at <console>:27

scala> val colNames = List("id", "role")
colNames: List[String] = List(id, role)

scala> val myDF = myRDD.toDF(colNames: _*)
myDF: org.apache.spark.sql.DataFrame = [id: string, role: string]

scala> myDF.show
+---+-----+
| id| role|
+---+-----+
|  1|roleA|
|  2|roleB|
|  3|roleC|
+---+-----+

scala> myDF.printSchema
root
 |-- id: string (nullable = true)
 |-- role: string (nullable = true)

scala> myDF.write.save("myDF.parquet")
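Note that spark-shell pre-imports the implicits that make toDF available on a tuple RDD. In a compiled application you have to set that up yourself; here is a minimal standalone sketch, assuming Spark 2.x (the object name, app name, and master setting are illustrative, not from the answer):

import org.apache.spark.sql.SparkSession

object ToDFExample {
  def main(args: Array[String]): Unit = {
    // Build the session that spark-shell would otherwise create for you.
    val spark = SparkSession.builder()
      .appName("toDF-example") // illustrative name
      .master("local[*]")      // illustrative; set as appropriate
      .getOrCreate()
    import spark.implicits._   // brings toDF into scope for tuple RDDs

    val myRDD = spark.sparkContext.parallelize(
      Seq(("1", "roleA"), ("2", "roleB"), ("3", "roleC")))
    val myDF = myRDD.toDF("id", "role")

    myDF.show()
    myDF.printSchema()
    myDF.write.save("myDF.parquet")
    spark.stop()
  }
}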
nullable = true simply means that the specified column can contain null values (this is useful to know for int columns, which normally cannot hold null values, since Scala's Int has no NA or null).
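The schema above is inferred from the Scala types (String fields come out as nullable = true). If you want to declare nullability yourself, one option is to build the DataFrame from an explicit StructType instead of toDF. A minimal sketch, assuming Spark 2.x with the spark session and the myRDD from above in scope:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Declare each field's type and nullability explicitly.
val schema = StructType(Seq(
  StructField("id", StringType, nullable = false), // id must never be null
  StructField("role", StringType, nullable = true) // role may be null
))
val rowRDD = myRDD.map { case (id, role) => Row(id, role) }
val typedDF = spark.createDataFrame(rowRDD, schema)
typedDF.printSchema()
// root
//  |-- id: string (nullable = false)
//  |-- role: string (nullable = true)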