I am trying to insert some data into a table that will contain 1,500 dynamic partitions, and I get this error:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
Number of dynamic partitions created is 1500, which is more than 1000.
To solve this try to set hive.exec.max.dynamic.partitions to at least 1500.
So I tried SET hive.exec.max.dynamic.partitions=2048, but I still get the same error.
How can I change this value from Spark?
The code:
this.spark.sql("SET hive.exec.dynamic.partition=true")
this.spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
this.spark.sql("SET hive.exec.max.dynamic.partitions=2048")
this.spark.sql(
"""
|INSERT INTO processed_data
|PARTITION(event, date)
|SELECT c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,event,date FROM csv_data DISTRIBUTE BY event, date
""".stripMargin
).show()
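To check whether the SET statement is actually picked up, I was thinking of reading the value back inside the same session, roughly like this (just a sketch; I'm assuming that SET with only the key echoes the value currently in effect):

// read back the session value of the property to verify the SET took effect
this.spark.sql("SET hive.exec.max.dynamic.partitions").show(false)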
I am using Spark 2.0.0 in an offline environment. Thanks!
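In case it matters, would passing the property when the session is created be the right approach instead? A minimal sketch of what I mean (the app name and builder options here are placeholders, not my actual setup):

import org.apache.spark.sql.SparkSession

// hypothetical session setup: pass the Hive option as a session config
// at creation time instead of (or in addition to) a SET statement at runtime
val spark = SparkSession.builder()
  .appName("partition-load")  // placeholder name
  .enableHiveSupport()
  .config("hive.exec.max.dynamic.partitions", "2048")
  .getOrCreate()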