I would like to update a Hive table that is stored in ORC format. I can update it from my Ambari Hive view, but I cannot run the same update statement from Scala (spark-shell).
objHiveContext.sql("select * from table_name") works and I can see the data, but when I run
objHiveContext.sql("update table_name set column_name = 'testing'") it fails with a runtime exception (invalid syntax near 'update', etc.), even though I can run the same update from the Ambari view (I have set all the required table properties, i.e. TBLPROPERTIES ("orc.compress" = "NONE", "transactional" = "true"), etc.).
I also tried an INSERT with a CASE expression as a workaround, but that didn't work either; a rough sketch of what I tried is shown below. Can we UPDATE Hive ORC tables from Spark? If so, what is the procedure?
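For context, this is roughly the shape of the INSERT ... CASE workaround I attempted from spark-shell (the column id and the condition are just placeholders for my actual schema):

objHiveContext.sql(
  """INSERT OVERWRITE TABLE table_name
    |SELECT id,
    |       CASE WHEN id = 1 THEN 'testing' ELSE column_name END AS column_name
    |FROM table_name""".stripMargin)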
Imports used:
import org.apache.spark.SparkConf
import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.orc._
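And this is how I am creating the context and running the statements in spark-shell (sc is the SparkContext that spark-shell already provides; the select works, the update throws the exception described above):

val objHiveContext = new HiveContext(sc)

// returns the rows of the ORC table as expected
objHiveContext.sql("select * from table_name").show()

// fails with the parse exception (invalid syntax near 'update')
objHiveContext.sql("update table_name set column_name = 'testing'")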
Note: I did not use partitioning or bucketing on this table. If I use bucketing, I can't even view the data when it is stored as ORC.
Hive version: 1.2.1, Spark version: 1.4.1, Scala version: 2.10.6