You can use the randomSplit method on DataFrames.

import scala.util.Random
import spark.implicits._  // for toDF; assumes a SparkSession named spark

val df = List(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).toDF
// randomSplit takes Array[Double] weights; equal weights give roughly equal parts
val splits = df.randomSplit(Array(1.0, 1.0, 1.0, 1.0, 1.0))
splits.foreach { part => part.write.format("csv").save("path" + Random.nextInt) }

I used Random.nextInt to get a unique directory name; if necessary, you can add some other logic there. Note that randomSplit assigns rows randomly, so the parts are only approximately equal in size.
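Random.nextInt can collide (and can be negative), so one alternative is a deterministic, index-based path per split. A minimal plain-Scala sketch of that naming scheme, where the "path" prefix is just a placeholder like in the snippet above:

```scala
// Sketch: one deterministic output path per split index,
// instead of a random suffix. "path" is a placeholder prefix.
val splitCount = 5
val outputPaths = (0 until splitCount).map(i => s"path/split-$i")
// outputPaths: Vector("path/split-0", ..., "path/split-4")
```

You could then pair each part with its directory, e.g. via splits.zip(outputPaths), so reruns write to the same locations.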
Source:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset
How to save a DataFrame as csv on disk?
https://forums.databricks.com/questions/8723/how-can-i-split-a-spark-dataframe-into-n-equal-dat.html
Another option is to peel off fixed-size chunks with limit and except:
import spark.implicits._  // for toDF; assumes a SparkSession named spark
import org.apache.spark.sql.DataFrame

var input: DataFrame = List(1, 2, 3, 4, 5, 6, 7, 8, 9).toDF
val limit = 2
var newFrames = List.empty[DataFrame]
var size = input.count  // a Long

while (size > 0) {
  // take the next chunk, then remove those rows from the remainder
  newFrames = input.limit(limit) :: newFrames
  input = input.except(newFrames.head)
  size -= limit
}

newFrames.foreach(_.show)

Note that except is a set difference and returns distinct rows, so this approach assumes the rows of the input are distinct.
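For intuition, the loop does on a DataFrame roughly what grouped does on an ordinary Scala collection: peel off fixed-size chunks. A plain-Scala sketch (not Spark code, and unlike limit it preserves the collection's order):

```scala
// Plain-Scala analogue of the limit/except loop:
// split the sequence into chunks of `limit` elements.
val limit = 2
val chunks = List(1, 2, 3, 4, 5, 6, 7, 8, 9).grouped(limit).toList
// five chunks: four of size 2 and a final chunk of size 1
```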