Scala - Spark (version 1.5.2) DataFrames data splitting error

I have an input file foo.txt with the following contents:

c1|c2|c3|c4|c5|c6|c7|c8|
00| |1.0|1.0|9|27.0|0||
01|2|3.0|4.0|1|10.0|1|1|

I want to convert it to a DataFrame to execute some SQL queries:

var text = sc.textFile("foo.txt")
var header = text.first()
var rdd = text.filter(row => row != header)
case class Data(c1: String, c2: String, c3: String, c4: String, c5: String, c6: String, c7: String, c8: String)

Up to this point everything is in order; the problem arises in the following statement:

var df = rdd.map(_.split("\\|")).map(p => Data(p(0), p(1), p(2), p(3), p(4), p(5), p(6), p(7))).toDF()

If I try to print df with df.show, I get an error:

scala> df.show()
java.lang.ArrayIndexOutOfBoundsException: 7

I know that the error may be caused by the split expression. I also tried to split foo.txt using the following syntax:

var df = rdd.map(_.split("""|""")).map(p => Data(p(0), p(1), p(2), p(3), p(4), p(5), p(6), p(7))).toDF()

And then I get something like this:

scala> df.show()
+------+---------+----------+-----------+-----+-----------+----------------+----------------+
|  c1  |     c2  |    c3    |     c4    |  c5 |     c6    |        c7      |       c8       |
+------+---------+----------+-----------+-----+-----------+----------------+----------------+
|     0|        0|         ||           |    ||          1|               .|               0|
|     0|        1|         ||          2|    ||          3|               .|               0|
+------+---------+----------+-----------+-----+-----------+----------------+----------------+

Therefore, my question is how to properly load this file into a DataFrame.

EDIT: empty fields are marked with ||; I want them kept as empty values rather than dropped.


The error occurs because one of your lines is shorter than the others, so splitting it yields only seven fields. (Your second attempt misbehaves because the regex | on its own matches the empty string, which splits the line between every single character.) You can check the field counts:

scala> var df = rdd.map(_.split("\\|")).map(_.length).collect()
df: Array[Int] = Array(7, 8)
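
This happens because Java's String.split (which Scala strings use) drops trailing empty strings by default, so the trailing || of the first data row disappears. A minimal sketch, runnable without Spark:

// the trailing "||" of the first data row is silently discarded
val short = "00| |1.0|1.0|9|27.0|0||".split("\\|")
println(short.length) // 7 -- the empty c8 field is gone

val full = "01|2|3.0|4.0|1|10.0|1|1|".split("\\|")
println(full.length) // 8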

You can pad the short rows manually (but you have to handle each case explicitly):

val df = rdd.map(_.split("\\|")).map {
  case Array(a, b, c, d, e, f, g, h) => Data(a, b, c, d, e, f, g, h)
  case Array(a, b, c, d, e, f, g)    => Data(a, b, c, d, e, f, g, " ")
}.toDF()

scala> df.show()
+---+---+---+---+---+----+---+---+
| c1| c2| c3| c4| c5|  c6| c7| c8|
+---+---+---+---+---+----+---+---+
| 00|   |1.0|1.0|  9|27.0|  0|   |
| 01|  2|3.0|4.0|  1|10.0|  1|  1|
+---+---+---+---+---+----+---+---+
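
Since the original goal was to run SQL queries, here is a short usage sketch on top of this df (Spark 1.5 API; the table name and the query are only illustrations):

// register the DataFrame as a temporary table so it can be queried with SQL
df.registerTempTable("foo")
sqlContext.sql("SELECT c1, c6 FROM foo WHERE c7 = '1'").show()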

EDIT:

A more robust solution is to split with a negative limit, which keeps trailing empty strings, and then take exactly eight fields:

val df = rdd.map(_.split("\\|", -1)).map(_.slice(0,8)).map(p => Data(p(0), p(1), p(2), p(3), p(4), p(5), p(6), p(7))).toDF()

Every line of the file ends with a delimiter, so split with limit -1 always yields at least eight fields, and slice(0, 8) normalizes each row to exactly eight; missing values come through as empty strings.
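
A quick check of what the two pieces do, again runnable without Spark:

val parts = "00| |1.0|1.0|9|27.0|0||".split("\\|", -1)
println(parts.length) // 9 -- the trailing "|" yields one extra, empty field
println(parts.slice(0, 8).length) // 8 -- exactly the eight columns Data expects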


You can use databricks' spark-csv library (the package has to be on the classpath, e.g. start the shell with spark-shell --packages com.databricks:spark-csv_2.10:1.5.0).

See: https://github.com/databricks/spark-csv

For example, given your input file foo.txt:

c1|c2|c3|c4|c5|c6|c7|c8|
00| |1.0|1.0|9|27.0|0||
01|2|3.0|4.0|1|10.0|1|1|

you can read it like this:

  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true") // use the first line of all files as the header
    .option("inferSchema", "true") // automatically infer data types
    .option("delimiter", "|") // the default is ","
    .load("foo.txt")

  df.show()

+---+---+---+---+---+----+---+----+---+
| c1| c2| c3| c4| c5|  c6| c7|  c8|   |
+---+---+---+---+---+----+---+----+---+
|  0|   |1.0|1.0|  9|27.0|  0|null|   |
|  1|  2|3.0|4.0|  1|10.0|  1|   1|   |
+---+---+---+---+---+----+---+----+---+

Note the extra, empty column at the end: it comes from the trailing delimiter on every line, and you will probably want to drop it from the dataframe; one way is sketched below.
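
A minimal sketch of dropping it, assuming df is kept as a value as in the snippet above (DataFrame.drop(colName) is available since Spark 1.4):

// the header's trailing "|" produces a ninth column with an empty name;
// dropping the last column in the schema removes it
val cleaned = df.drop(df.columns.last)
cleaned.show()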


Source: https://habr.com/ru/post/1674197/

