I am reading an Excel file using the com.crealytics.spark.excel package. Below is the code for reading the Excel file in Spark Java:
Dataset<Row> SourcePropertSet = sqlContext.read()
.format("com.crealytics.spark.excel")
.option("location", "D:\\5Kto10K.xlsx")
.option("useHeader", "true")
.option("treatEmptyValuesAsNulls", "true")
.option("inferSchema", "true")
.option("addColorColumns", "false")
.load(); // the input path is supplied via the "location" option above
Then I tried to write a Dataset object to an Excel file using the same package (com.crealytics.spark.excel):
SourcePropertSet.write()
.format("com.crealytics.spark.excel")
.option("useHeader", "true")
.option("treatEmptyValuesAsNulls", "true")
.option("inferSchema", "true")
.option("addColorColumns", "false")
.save("D:\\resultset.xlsx");
But I am getting the following error:
java.lang.RuntimeException: com.crealytics.spark.excel.DefaultSource does not allow creating a table as select.
I also tried the org.zuinnote.spark.office.excel package. Below is the code for this:
SourcePropertSet.write()
.format("org.zuinnote.spark.office.excel")
.option("write.locale.bcp47", "de")
.save("D:\\result");
I added the following dependencies to my pom.xml:
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>hadoopoffice-fileformat</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>spark-hadoopoffice-ds_2.11</artifactId>
    <version>1.0.3</version>
</dependency>
But I am getting the following error:
java.lang.IllegalAccessError: tried to access method org.zuinnote.hadoop.office.format.mapreduce.ExcelFileOutputFormat.getSuffix(Ljava/lang/String;)Ljava/lang/String; from class org.zuinnote.spark.office.excel.ExcelOutputWriterFactory
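An IllegalAccessError like this usually points to a binary incompatibility between two jars on the classpath, here hadoopoffice-fileformat 1.0.0 and spark-hadoopoffice-ds 1.0.3, which were likely compiled against different versions of each other. One thing worth trying is aligning the two versions; the sketch below is a guess, and the exact version number to use should be taken from the spark-hadoopoffice-ds release notes for the release you depend on:

```xml
<!-- Hypothetical fix: keep both HadoopOffice artifacts on matching
     versions so method signatures agree at runtime. The "1.0.3" here
     is an assumption, not a verified pairing. -->
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>hadoopoffice-fileformat</artifactId>
    <version>1.0.3</version>
</dependency>
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>spark-hadoopoffice-ds_2.11</artifactId>
    <version>1.0.3</version>
</dependency>
```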