Failed to start NameNode when configuring Hadoop for Lustre

I am trying to integrate Hadoop with Lustre. I added hadoop-lustre-plugin-3.1.0 to hadoop-2.7.3/lib/native. Lustre is mounted on /mnt/lustre. I get the following error when I start Hadoop using start-all.sh:

 [root@master hadoop]# start-all.sh
 This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
 17/04/06 17:36:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
 Starting namenodes on [ ] ...

core-site.xml:

 <property>
   <name>fs.defaultFS</name>
   <value>lustre:///</value>
 </property>
 <property>
   <name>fs.lustre.impl</name>
   <value>org.apache.hadoop.fs.LustreFileSystem</value>
 </property>
 <property>
   <name>fs.AbstractFileSystem.lustre.impl</name>
   <value>org.apache.hadoop.fs.LustreFileSystemlustre</value>
 </property>
 <property>
   <name>fs.lustrefs.mount</name>
   <value>/mnt/lustre/hadoop</value>
   <description>This is the directory on Lustre that acts as the root level for Hadoop services</description>
 </property>
 <property>
   <name>lustre.stripe.count</name>
   <value>1</value>
 </property>
 <property>
   <name>lustre.stripe.size</name>
   <value>4194304</value>
 </property>
 <property>
   <name>fs.block.size</name>
   <value>1073741824</value>
 </property>

mapred-site.xml:

 <property>
   <name>mapreduce.job.map.output.collector.class</name>
   <value>org.apache.hadoop.mapred.SharedFsPlugins$MapOutputBuffer</value>
 </property>
 <property>
   <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
   <value>org.apache.hadoop.mapred.SharedFsPlugins$Shuffle</value>
 </property>

hdfs-site.xml:

 <property>
   <name>dfs.name.dir</name>
   <value>/mnt/lustre/hadoop/hadoop_tmp/namenode</value>
   <description>true</description>
 </property>

Is there any configuration that I missed in the configuration files?

1 answer

Since fs.defaultFS is set to a URI with no host part (lustre:///), the start script cannot determine the host on which the NameNode should run.
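You can see this for yourself by running the same lookup that start-dfs.sh performs internally (a minimal sketch; it assumes $HADOOP_HOME points at your hadoop-2.7.3 directory):

 # start-dfs.sh resolves NameNode hosts via "hdfs getconf -namenodes".
 # With fs.defaultFS=lustre:/// and no dfs.namenode.rpc-address set,
 # this prints the same "Incorrect configuration" warning and an empty host list.
 $HADOOP_HOME/bin/hdfs getconf -namenodes

 # Show the value the scripts actually read for fs.defaultFS
 $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS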

Add this property to hdfs-site.xml:

 <property>
   <name>dfs.namenode.rpc-address</name>
   <value>namenode_host:port</value>
 </property>
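After filling in your actual NameNode host and RPC port (for example master:8020, which is only an assumption here, not something given in the question), you can confirm that the scripts now resolve a NameNode host:

 # Should print the configured RPC address and a non-empty NameNode host list
 $HADOOP_HOME/bin/hdfs getconf -confKey dfs.namenode.rpc-address
 $HADOOP_HOME/bin/hdfs getconf -namenodes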

Source: https://habr.com/ru/post/1266685/

