How do I create a Dataset from a custom class Person?

I am trying to create a Dataset in Java, so I wrote the following code:

public Dataset<Person> createDataset() {
  List<Person> list = new ArrayList<>();
  list.add(new Person("name", 10, 10.0));
  Dataset<Person> dataset = sqlContext.createDataset(list, Encoders.bean(Person.class));
  return dataset;
}

The Person class is an inner class.

However, Spark throws the following exception:

org.apache.spark.sql.AnalysisException: Unable to generate an encoder for inner class `....` without access to the scope that this class was defined in. Try moving this class out of its parent class.;

at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$2.applyOrElse(ExpressionEncoder.scala:264)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$$anonfun$2.applyOrElse(ExpressionEncoder.scala:260)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:243)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:243)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:242)

How do I do this correctly?

3 answers

tl;dr (Spark shell only): define the case classes first, and only use them after they are defined. Using case classes in compiled Spark/Scala applications works as expected.

In Spark 2.0.1, in the Spark shell, you must first define the case classes and only afterwards access them to create a Dataset.

$ ./bin/spark-shell --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0-SNAPSHOT
      /_/

Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_102
Branch master
Compiled by user jacek on 2016-10-25T04:20:04Z
Revision 483c37c581fedc64b218e294ecde1a7bb4b2af9c
Url https://github.com/apache/spark.git
Type --help for more information.

$ ./bin/spark-shell
scala> :pa
// Entering paste mode (ctrl-D to finish)

case class Person(id: Long)

Seq(Person(0)).toDS // <-- this won't work

// Exiting paste mode, now interpreting.

<console>:15: error: value toDS is not a member of Seq[Person]
       Seq(Person(0)).toDS // <-- it won't work
                      ^
scala> case class Person(id: Long)
defined class Person

scala> // the following implicit conversion *will* work

scala> Seq(Person(0)).toDS
res1: org.apache.spark.sql.Dataset[Person] = [id: bigint]

This behavior was introduced in commit 43ebf7a9cbd70d6af75e140a6fc91bf0ffc2b877 (Spark 2.0.0-SNAPSHOT).

In the Scala REPL you can work around it by calling OuterScopes.addOuterScope(this) inside a :paste block:

scala> :pa
// Entering paste mode (ctrl-D to finish)

import sqlContext.implicits._
case class Token(name: String, productId: Int, score: Double)
val data = Token("aaa", 100, 0.12) ::
  Token("aaa", 200, 0.29) ::
  Token("bbb", 200, 0.53) ::
  Token("bbb", 300, 0.42) :: Nil
org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this)
val ds = data.toDS

As in the answer above, add the following line before creating the Dataset:

org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this);

For a similar problem in Scala, my solution was to do exactly what the AnalysisException suggests: move the case class out of its parent class. For example, I had something like this in Streaming_Base.scala:

abstract class Streaming_Base {
    case class EventBean(id:String, command:String, recordType:String)
    ...
}

I changed it to this:

case class EventBean(id:String, command:String, recordType:String)
abstract class Streaming_Base {        
    ...
}
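The same fix applies to the Java code in the question: move Person to the top level (or make it a public static nested class) so that Encoders.bean can instantiate it without a reference to an enclosing instance. Below is a minimal sketch; the field names (name, age, score) and their types are my guess from the constructor call new Person("name", 10, 10.0), and the Spark usage is shown only as a comment since it needs the asker's sqlContext:

```java
import java.io.Serializable;

// In real code this would live in its own Person.java and be declared public.
// Encoders.bean requires a public no-arg constructor and getter/setter pairs.
class Person implements Serializable {
    private String name;
    private int age;
    private double score;

    public Person() {}  // no-arg constructor required by Encoders.bean

    public Person(String name, int age, double score) {
        this.name = name;
        this.age = age;
        this.score = score;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public double getScore() { return score; }
    public void setScore(double score) { this.score = score; }

    // Sketch of the original method once Person is no longer an inner class
    // (assumes the asker's sqlContext; not runnable without a Spark dependency):
    //
    //   Dataset<Person> createDataset() {
    //       List<Person> list = new ArrayList<>();
    //       list.add(new Person("name", 10, 10.0));
    //       return sqlContext.createDataset(list, Encoders.bean(Person.class));
    //   }
}
```

With the class at the top level, the bean encoder can resolve it by reflection alone, so the "without access to the scope that this class was defined in" error disappears.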

Source: https://habr.com/ru/post/1625641/

