Hadoop datanode not starting

I am using Ubuntu 14.04 LTS, Java 8, and Hadoop 2.5.1. I followed this installation guide for all the components. Sorry for not using Michael Noll's guide. The problem I am facing is that when I run start-dfs.sh, I get the following output:

    oroborus@Saras-Dell-System-XPS-L502X:~$ start-dfs.sh
    14/11/12 16:12:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [localhost]
    localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-oroborus-namenode-Saras-Dell-System-XPS-L502X.out
    localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-oroborus-datanode-Saras-Dell-System-XPS-L502X.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-oroborus-secondarynamenode-Saras-Dell-System-XPS-L502X.out
    14/11/12 16:12:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Now, after running start-yarn.sh (which works fine) and then jps, I get the following output:

    oroborus@Saras-Dell-System-XPS-L502X:~$ jps
    9090 NodeManager
    5107 JobHistoryServer
    8952 ResourceManager
    12442 Jps
    11981 NameNode

The expected output should include a DataNode, but it is missing. From Googling and searching SO I learned that the cause of the failure usually shows up in the logs, so here is the datanode log (only the relevant part of the error; if you need more, let me know):

    2014-11-08 23:30:32,709 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
    2014-11-08 23:30:33,132 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /usr/local/hadoop_store/hdfs/datanode :
    EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
        at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:472)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1866)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1908)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1890)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1782)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1829)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2005)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2029)
    2014-11-08 23:30:33,134 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
    java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/local/hadoop_store/hdfs/datanode/"
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1917)

Now I am not sure how to fix this.

Help is appreciated.
P.S. I have tried many forums, but none of them could solve this problem for me. Hence the question.

3 answers

First, delete all the contents of the hdfs folder, i.e. the directory set as the value of <name>hadoop.tmp.dir</name>:

 rm -rf /usr/local/hadoop_store 
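If you remove the whole hadoop_store directory as above, recreate the namenode and datanode directories before fixing ownership. The datanode path below is the one from the error log; the namenode path assumes the usual matching layout, so adjust it if your hdfs-site.xml points elsewhere:

    sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
    sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode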

Make sure /usr/local/hadoop_store has the right ownership and permissions:

    hduser@localhost:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
    hduser@localhost:~$ sudo chmod -R 777 /usr/local/hadoop_store

Format namenode:

    hduser@localhost:~$ hadoop namenode -format
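Note: on Hadoop 2.x the hadoop namenode form still works but prints a deprecation warning; the equivalent command through the newer entry point is:

    hdfs namenode -format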

Start all the processes again:
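With a standard setup like the one in the question (the start scripts already on the PATH), that means:

    start-dfs.sh
    start-yarn.sh
    jps    # the DataNode should now appear in this list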


I also had the same problem and fixed it by changing the owner of the working directories. Even though you have 777 permissions on those two directories, the framework will not be able to use them unless you change the owner to hduser.

    $ sudo chown -R hduser:hadoop /usr/local/hadoop/yarn_data/hdfs/namenode
    $ sudo chown -R hduser:hadoop /usr/local/hadoop/yarn_data/hdfs/datanode
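A quick way to check that the ownership actually changed (the paths are the ones from the commands above; adjust them if your layout differs):

    $ ls -ld /usr/local/hadoop/yarn_data/hdfs/namenode /usr/local/hadoop/yarn_data/hdfs/datanode
    # both entries should now show hduser hadoop as owner and group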

After that, start your cluster again and you should see the datanode running.

  • First stop all the daemons such as the namenode, datanode, etc. (there is a script or command for this); see the command sketch after this list
  • Clear out the tmp directory
  • Go to /var/cache/hadoop-hdfs/hdfs/dfs/ and delete all the contents of that directory manually
  • Now format your namenode again
  • Start all the daemons again, then use the jps command to confirm that the datanode is up
  • Now run any application you like
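A minimal command sketch of the steps above, assuming the standard Hadoop 2.x start/stop scripts; the cache path is the one given in the list, so adjust it to whatever your configuration actually uses:

    stop-yarn.sh
    stop-dfs.sh
    # clear the old HDFS state under the cache directory named above (may need sudo)
    rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*
    # reformat the namenode
    hdfs namenode -format
    # bring everything back up and check that the DataNode is listed
    start-dfs.sh
    start-yarn.sh
    jps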

Hope this helps.

