SparkR 2.0 read.df throws "Path does not exist" error

My SparkR 1.6 code does not work in Spark 2.0. I made the necessary changes, for example calling sparkR.session() instead of sparkR.init(), not passing the sqlContext parameter, etc.
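
For reference, a minimal sketch of the session-setup change between the two versions (the 1.6 lines assume the standard sparkR.init()/sparkRSQL.init() pair):

library(SparkR)

# Spark 1.6 initialization (two separate objects):
# sc <- sparkR.init()
# sqlContext <- sparkRSQL.init(sc)

# Spark 2.0: a single session replaces both, and read.df no longer
# takes sqlContext as its first argument.
sparkR.session()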

In the code below, I am loading data from multiple folders into a DataFrame.

read.df in Spark 1.6, which works:

sales <- read.df(sqlContext,
                 path = "gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*",
                 source = "com.databricks.spark.csv",
                 delimiter = "\t")

read.df in Spark 2.0, which doesn't work:

sales <- read.df("gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.c
om/myData/2015/20*", source = "com.databricks.spark.csv", delimiter="\t")

The above line throws the following error:

16/09/25 19:28:52 ERROR org.apache.spark.api.r.RBackendHandler: loadDF on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  org.apache.spark.sql.AnalysisException: Path does not exist: gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*;
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:350)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:344)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
Calls: read.df -> dispatchFunc -> f -> callJStatic -> invokeJava
Execution halted
16/09/25 19:28:53 INFO org.spark_project.jetty.server.ServerConnector: Stopped ServerConnector@148bd6fd{HTTP/1.1}{0.0.0.0:4040}
1 answer

Spark 2.0's read.df does not work when reading files with a "," (comma) in the file name.

The data files that I created have a comma in their file names, something like these: 201448-0,004 201448-0,005 201448-0,006

After painful hours of debugging, read.df finally started reading the data once I removed the "," from the file names.
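
A minimal local sketch of that rename step, assuming the files sit on a local disk (for gs:// paths you would rename the objects in the bucket instead); the folder path here is hypothetical:

# Strip commas from any file names under the data folder so the paths
# handed to read.df no longer contain ",".
files <- list.files("myData/2014", full.names = TRUE)
bad   <- grepl(",", basename(files), fixed = TRUE)
file.rename(files[bad],
            file.path(dirname(files[bad]),
                      gsub(",", "", basename(files[bad]), fixed = TRUE)))

After the rename, the Spark 2.0 read.df call shown in the question loads the data.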

