tl;dr: (Spark shell only) Define the case classes first and only afterwards use them. Case classes in compiled Spark/Scala applications should just work.
In Spark 2.0.1's spark-shell you must first define a case class and only then access it to create a Dataset.
$ ./bin/spark-shell --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0-SNAPSHOT
      /_/
Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_102
Branch master
Compiled by user jacek on 2016-10-25T04:20:04Z
Revision 483c37c581fedc64b218e294ecde1a7bb4b2af9c
Url https://github.com/apache/spark.git
Type --help for more information.
$ ./bin/spark-shell
scala> :pa
// Entering paste mode (ctrl-D to finish)
case class Person(id: Long)
Seq(Person(0)).toDS // <-- this won't work
// Exiting paste mode, now interpreting.
<console>:15: error: value toDS is not a member of Seq[Person]
       Seq(Person(0)).toDS // <-- this won't work
                      ^
scala> case class Person(id: Long)
defined class Person
scala> // the following implicit conversion *will* work
scala> Seq(Person(0)).toDS
res1: org.apache.spark.sql.Dataset[Person] = [id: bigint]
This was fixed in commit 43ebf7a9cbd70d6af75e140a6fc91bf0ffc2b877 (Spark 2.0.0-SNAPSHOT at the time).
Before that fix, the workaround in the Scala REPL was to call OuterScopes.addOuterScope(this) inside the :paste block:
scala> :pa
import sqlContext.implicits._
case class Token(name: String, productId: Int, score: Double)
val data = Token("aaa", 100, 0.12) ::
  Token("aaa", 200, 0.29) ::
  Token("bbb", 200, 0.53) ::
  Token("bbb", 300, 0.42) :: Nil
org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this)
val ds = data.toDS
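For context on why the workaround helps: the REPL compiles every pasted block inside a synthetic wrapper object, so a case class defined there becomes an inner class, and instantiating an inner class (as Dataset encoders do via reflection) requires a reference to the enclosing instance; that is what OuterScopes.addOuterScope(this) registers. A minimal, Spark-free sketch of the underlying Scala rule (the Wrapper and Demo names are invented for illustration):

```scala
// A case class declared inside another class is an inner class:
// its instances carry a reference to the enclosing instance.
class Wrapper {
  case class Person(id: Long)
}

object Demo {
  def main(args: Array[String]): Unit = {
    val outer = new Wrapper
    // Constructing the inner class requires the outer instance's path.
    // Reflection-based code (like Spark's encoders) has no such path
    // unless the outer instance is registered explicitly.
    val p = new outer.Person(0L)
    println(p.id) // prints 0
  }
}
```

Defining the case class at the top level (or in its own :paste block, evaluated before use) avoids the inner-class situation entirely, which is why the "define first, use later" pattern works without any workaround.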