In simple cases, you can provide the original schema, which is a superset of the schema actually present on disk; columns missing from the files come back as null.
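For example, assuming MyType is roughly the case class from your question (the exact fields here are an assumption for illustration):

case class MyType(column1: Option[String], column2: Option[Seq[String]])

you can write a file containing only column1 and read it back with the full schema: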
import spark.implicits._

// Derive the full schema from an empty Dataset of the target type
val schema = Seq[MyType]().toDF.schema

Seq("a", "b", "c").map(Option(_))
  .toDF("column1")
  .write.parquet("/tmp/column1only")

// Read with the superset schema; column2 is absent on disk, so it is null
val df = spark.read.schema(schema).parquet("/tmp/column1only").as[MyType]
df.show
+-------+-------+
|column1|column2|
+-------+-------+
| a| null|
| b| null|
| c| null|
+-------+-------+
df.first
MyType = MyType(Some(a),None)
This approach can be a little fragile, since the types in the provided schema still have to line up with whatever is actually stored in the files, so in general it is better to use SQL literals to fill in the blanks:
import org.apache.spark.sql.functions.lit

spark.read.parquet("/tmp/column1only")
  // "array<string>" can equivalently be written as ArrayType(StringType)
  .withColumn("column2", lit(null).cast("array<string>"))
  .as[MyType]
  .first
MyType = MyType(Some(a),None)
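If the real schema has more than one optional column, the same idea generalizes to a small helper that nulls in every missing field. This is only a sketch, not a built-in Spark API; withMissingColumns is a hypothetical name:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StructType

// Add a null column, cast to the expected type, for each field of
// `schema` that is not already present in `df`
def withMissingColumns(df: DataFrame, schema: StructType): DataFrame =
  schema.fields.foldLeft(df) { (acc, field) =>
    if (acc.columns.contains(field.name)) acc
    else acc.withColumn(field.name, lit(null).cast(field.dataType))
  }

// Usage with the schema derived above
val complete = withMissingColumns(spark.read.parquet("/tmp/column1only"), schema).as[MyType]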