How to split a column with multiple values into separate rows using a Dataset?

I ran into the problem of splitting a column holding multiple values, i.e. a List[String], into separate rows.

The initial dataset has the following type: Dataset[(Integer, String, Double, scala.List[String])]

+---+--------------------+-------+--------------------+
| id|       text         | value |    properties      |
+---+--------------------+-------+--------------------+
|  0|Lorem ipsum dolor...|    1.0|[prp1, prp2, prp3..]|
|  1|Lorem ipsum dolor...|    2.0|[prp4, prp5, prp6..]|
|  2|Lorem ipsum dolor...|    3.0|[prp7, prp8, prp9..]|
+---+--------------------+-------+--------------------+

The resulting dataset must have the following type:

Dataset[(Integer, String, Double, String)]

and properties should be broken down like this:

+---+--------------------+-------+--------------------+
| id|       text         | value |    property        |
+---+--------------------+-------+--------------------+
|  0|Lorem ipsum dolor...|    1.0|        prp1        |
|  0|Lorem ipsum dolor...|    1.0|        prp2        |
|  0|Lorem ipsum dolor...|    1.0|        prp3        |
|  1|Lorem ipsum dolor...|    2.0|        prp4        |
|  1|Lorem ipsum dolor...|    2.0|        prp5        |
|  1|Lorem ipsum dolor...|    2.0|        prp6        |
+---+--------------------+-------+--------------------+
3 answers

explode is often suggested, but it comes from the untyped DataFrame API, and given that you are using a Dataset, I think flatMap might be better suited (see org.apache.spark.sql.Dataset):

flatMap[U](func: (T) ⇒ TraversableOnce[U])(implicit arg0: Encoder[U]): Dataset[U]

It works just like flatMap on Scala collections: the function is applied to each row, and the resulting collections are flattened into a single Dataset.

For example:

val ds = Seq(
  (0, "Lorem ipsum dolor", 1.0, Array("prp1", "prp2", "prp3")))
  .toDF("id", "text", "value", "properties")
  .as[(Integer, String, Double, scala.List[String])]

ds.flatMap { t =>
  t._4.map { prp =>
    (t._1, t._2, t._3, prp) }}.show
+---+-----------------+---+----+
| _1|               _2| _3|  _4|
+---+-----------------+---+----+
|  0|Lorem ipsum dolor|1.0|prp1|
|  0|Lorem ipsum dolor|1.0|prp2|
|  0|Lorem ipsum dolor|1.0|prp3|
+---+-----------------+---+----+

// or just using for-comprehension
for {
  t <- ds
  prp <- t._4
} yield (t._1, t._2, t._3, prp)
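
The flattening itself has nothing Spark-specific about it; the same flatMap pattern works on plain Scala collections. A minimal sketch (no SparkSession needed), using a tuple that mirrors the question's row type:

```scala
// Plain-Scala sketch of the same flatMap pattern, no Spark required.
// The tuple mirrors the question's row type: (id, text, value, properties).
object FlatMapSketch extends App {
  val rows = List(
    (0, "Lorem ipsum dolor", 1.0, List("prp1", "prp2", "prp3"))
  )

  // For each row, emit one tuple per property, then flatten the results.
  val exploded = rows.flatMap { case (id, text, value, props) =>
    props.map(p => (id, text, value, p))
  }

  exploded.foreach(println)
}
```

Each input row with n properties becomes n output rows, exactly as in the desired table above.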

Alternatively, you can use explode:

df.withColumn("property", explode($"properties"))

For example:

val df = Seq((1, List("a", "b"))).toDF("A", "B")   
// df: org.apache.spark.sql.DataFrame = [A: int, B: array<string>]

df.withColumn("B", explode($"B")).show
+---+---+
|  A|  B|
+---+---+
|  1|  a|
|  1|  b|
+---+---+

Here is one way to do this:

val myRDD = sc.parallelize(Array(
  (0, "text0", 1.0, List("prp1", "prp2", "prp3")),
  (1, "text1", 2.0, List("prp4", "prp5", "prp6")),
  (2, "text2", 3.0, List("prp7", "prp8", "prp9"))
)).map {
  // pair the scalar columns (as the key) with the list of properties
  case (i, t, v, ps) => ((i, t, v), ps)
}.flatMapValues(x => x).map {
  // flatMapValues emitted one ((i, t, v), p) pair per property;
  // flatten the key back into a single tuple
  case ((i, t, v), p) => (i, t, v, p)
}
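
The flatMapValues step is what performs the split: for a key paired with a list, it emits one (key, element) pair per element. flatMapValues itself lives in Spark's PairRDDFunctions, but its semantics can be sketched with plain Scala collections:

```scala
// Plain-Scala illustration of what flatMapValues does in the RDD above:
// each (key, list) pair becomes one (key, element) pair per list element.
object FlatMapValuesSketch extends App {
  val keyed = List(
    ((0, "text0", 1.0), List("prp1", "prp2", "prp3"))
  )

  // Equivalent of keyed.flatMapValues(identity) on a pair RDD.
  val flattened = keyed.flatMap { case (key, values) =>
    values.map(v => (key, v))
  }

  flattened.foreach(println)
}
```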

Source: https://habr.com/ru/post/1016719/

