I installed a cluster (YARN) using Ambari with 3 virtual machines as hosts.
Where can I find the value for HADOOP_CONF_DIR?
# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \  # can also be `yarn-client` for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000
Install Hadoop. In my case, I installed it in /usr/local/hadoop.
Set up the Hadoop environment variables:
export HADOOP_INSTALL=/usr/local/hadoop
Then set the conf directory:
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
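The two exports can be sanity-checked with plain shell expansion before pointing Spark at them; a minimal sketch, assuming the /usr/local/hadoop install path from this answer:

```shell
# Assumed install location from the answer above
export HADOOP_INSTALL=/usr/local/hadoop
# Hadoop's site files (core-site.xml, yarn-site.xml, ...) live under etc/hadoop
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop

# The variable should expand to the full path
echo "$HADOOP_CONF_DIR"   # -> /usr/local/hadoop/etc/hadoop
```

If the directory this prints does not actually contain core-site.xml and yarn-site.xml, spark-submit will not be able to find the cluster configuration.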
From /etc/spark/conf/spark-env.sh:
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
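The `${HADOOP_CONF_DIR:-/etc/hadoop/conf}` form is ordinary shell parameter expansion: it keeps the variable's value if it is already set and non-empty, and otherwise falls back to `/etc/hadoop/conf`. A quick sketch of the behavior:

```shell
# ${VAR:-default} uses an existing value, else the default
unset HADOOP_CONF_DIR
echo "${HADOOP_CONF_DIR:-/etc/hadoop/conf}"   # -> /etc/hadoop/conf

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
echo "${HADOOP_CONF_DIR:-/etc/hadoop/conf}"   # -> /usr/local/hadoop/etc/hadoop
```

This is why the spark-env.sh line is safe on both kinds of installs: a packaged cluster that never sets the variable gets the `/etc/hadoop/conf` default, while a manual install that exported its own path keeps it.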
Source: https://habr.com/ru/post/1620532/