Scenario:
Let's say there is a table in Hive, and it is queried using Spark SQL in Apache Spark, where the table name is passed as an argument and concatenated into the query.
In a non-distributed system I have a basic understanding of the SQL injection vulnerability, and in the JDBC context I understand the use of createStatement / preparedStatement in such a scenario.
But what about this scenario in Spark SQL: is this code vulnerable? Any ideas?
def main(args: Array[String]) {
  val sconf = new SparkConf().setAppName("TestApp")
  val sparkContext = new SparkContext(sconf)
  val hiveSqlContext = new org.apache.spark.sql.hive.HiveContext(sparkContext)
  val tableName = args(0)
  // the table name from args is concatenated directly into the query text
  val df = hiveSqlContext.sql("SELECT * FROM " + tableName)
}
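To make the concern concrete, here is a minimal sketch (plain Scala, no Spark dependency; the helper name `buildQuery` is my own illustration, not part of the code above) showing how naive string concatenation lets an attacker-controlled argument change the query text itself, not just the table name:

```scala
object QuerySketch {
  // Same pattern as the snippet: the caller-supplied value is pasted
  // into the SQL text with no validation or quoting.
  def buildQuery(tableName: String): String =
    "SELECT * FROM " + tableName

  def main(args: Array[String]): Unit = {
    // Expected use: a plain table name.
    println(buildQuery("employees"))

    // Attacker-supplied argument: the "table name" smuggles in
    // extra SQL clauses, altering what the statement does.
    println(buildQuery("employees UNION ALL SELECT * FROM salaries"))
  }
}
```

Whether Spark SQL's parser and the single-statement nature of `HiveContext.sql` limit what such an injected fragment can do is exactly the question being asked; the sketch only shows that the query text is attacker-influenced.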