I think you have a couple of options, depending on the Spark version you are using.
Spark >= 1.6.1
From here: https://docs.databricks.com/spark/latest/sparkr/functions/read.df.html it seems you can explicitly specify your schema to force it to use doubles:
    csvSchema <- structType(structField("carat", "double"), structField("color", "string"))
    diamondsLoadWithSchema <- read.df("/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv",
                                      source = "csv", header = "true", schema = csvSchema)
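To check that the schema was actually applied, you can print it after loading; a minimal sketch, assuming the DataFrame was loaded as above:

    # carat should now be reported as double rather than string
    printSchema(diamondsLoadWithSchema)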
Spark < 1.6.1

Consider test.csv:
    1,a,4.1234567890
    2,b,9.0987654321
You can easily make this more efficient, but I think you get the gist:
    # split each line of the text file on commas
    linesplit <- function(x) {
      tmp <- strsplit(x, ",")
      return(tmp)
    }

    # convert the split fields to the desired types: integer, character, double
    lineconvert <- function(x) {
      arow <- x[[1]]
      converted <- list(as.integer(arow[1]), as.character(arow[2]), as.double(arow[3]))
      return(converted)
    }

    rdd <- SparkR:::textFile(sc, '/path/to/test.csv')
    lnspl <- SparkR:::map(rdd, linesplit)
    ll2 <- SparkR:::map(lnspl, lineconvert)
    ddf <- createDataFrame(sqlContext, ll2)
    head(ddf)
      _1 _2           _3
    1  1  a 4.1234567890
    2  2  b 9.0987654321
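To verify that the third column really came through as a double rather than a string, you can inspect the column types; a minimal sketch, assuming the ddf built above:

    # list column names and their types; _3 should be "double"
    dtypes(ddf)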
NOTE: the SparkR::: methods are private for a reason; the docs say "be careful when you use this".