I am running Zeppelin 0.7.0 on an emr-5.4.0 cluster, started with the default settings. The %spark.dep interpreter is not configured by EMR.
I edited the file /etc/zeppelin/conf/interpreter.json as follows:
"2ANGGHHMQ": {
"id": "2ANGGHHMQ",
"name": "spark",
"group": "spark",
"properties": {
"spark.yarn.jar": "",
"zeppelin.spark.printREPLOutput": "true",
"master": "yarn-client",
"zeppelin.spark.maxResult": "1000",
"spark.app.name": "Zeppelin",
"zeppelin.spark.useHiveContext": "true",
"args": "",
"spark.home": "/usr/lib/spark",
"zeppelin.spark.concurrentSQL": "false",
"zeppelin.spark.importImplicit": "true",
"zeppelin.pyspark.python": "python",
"zeppelin.dep.localrepo":"/usr/lib/zeppelin/local-repo"
},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.spark.SparkInterpreter",
"name": "spark"
},
{
"class": "org.apache.zeppelin.spark.PySparkInterpreter",
"name": "pyspark"
},
{
"class": "org.apache.zeppelin.spark.SparkSqlInterpreter",
"name": "sql"
}
],
"option": {
"remote": true,
"port": -1,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false
}
}
To use %spark.dep I need to manually add the following entry and restart Zeppelin (see the sketch after the snippet):
{
  "class": "org.apache.zeppelin.spark.DepInterpreter",
  "name": "dep"
}
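For reference, this is roughly how I apply that change. It is only a minimal sketch: it assumes the standard Zeppelin 0.7 file layout (settings under a top-level interpreterSettings key), my cluster's interpreter id 2ANGGHHMQ, and that Zeppelin on EMR 5.x runs as the upstart service named zeppelin.

import json

CONF = "/etc/zeppelin/conf/interpreter.json"
DEP = {"class": "org.apache.zeppelin.spark.DepInterpreter", "name": "dep"}

with open(CONF) as f:
    conf = json.load(f)

# The spark setting sits under interpreterSettings, keyed by its id.
group = conf["interpreterSettings"]["2ANGGHHMQ"]["interpreterGroup"]

# Append the dep interpreter only if it is not already there.
if not any(entry.get("name") == "dep" for entry in group):
    group.append(DEP)
    with open(CONF, "w") as f:
        json.dump(conf, f, indent=2)

# Then restart Zeppelin so it reloads interpreter.json; on EMR 5.x I believe
# this is an upstart service, i.e.:
#   sudo stop zeppelin && sudo start zeppelin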
Is there a way to force EMR to use Zeppelin's default interpreter settings (so that this configuration is not removed)?
UPDATE
Can someone explain why the cluster that I just created this morning by cloning the original cluster has a completely different interpreterGroup configuration? (A way to compare the two files is sketched after the snippet.)
"interpreterGroup": [
{
"name": "spark",
"class": "org.apache.zeppelin.spark.SparkInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "scala",
"editOnDblClick": false
}
},
{
"name": "pyspark",
"class": "org.apache.zeppelin.spark.PySparkInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "python",
"editOnDblClick": false
}
},
{
"name": "sql",
"class": "org.apache.zeppelin.spark.SparkSqlInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "sql",
"editOnDblClick": false
}
}
]
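For what it's worth, here is a rough sketch of how the interpreterGroup sections of the two files could be compared. The path interpreter.json.old is a placeholder for a copy of the original cluster's file, and it assumes the same interpreterSettings layout as above.

import json

def interpreter_groups(path):
    with open(path) as f:
        settings = json.load(f)["interpreterSettings"]
    return {s["name"]: [i["name"] for i in s["interpreterGroup"]]
            for s in settings.values()}

old = interpreter_groups("interpreter.json.old")                 # original cluster
new = interpreter_groups("/etc/zeppelin/conf/interpreter.json")  # cloned cluster

for name in sorted(set(old) | set(new)):
    if old.get(name) != new.get(name):
        print("%s: %s -> %s" % (name, old.get(name), new.get(name)))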