Namenode not starting

I was using Hadoop in pseudo-distributed mode and everything worked fine. But then I had to restart my computer, and now when I try to start the Namenode and Datanode, only the Datanode is running. Can someone tell me a possible cause of this problem, or am I doing something wrong?

I tried both bin/start-all.sh and bin/start-dfs.sh.

+47
hadoop hdfs
21 answers

I came across this problem of the namenode not starting. I found a solution using the following steps:

  • first delete all contents from the temporary folder: rm -Rf <tmp dir> (mine was /usr/local/hadoop/tmp)
  • format the namenode: bin/hadoop namenode -format
  • start all processes again: bin/start-all.sh

You may also consider rolling back using a checkpoint (if you had it enabled).
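One way to do that, assuming a SecondaryNameNode was running and its checkpoint directory still holds a valid checkpoint, is a sketch like:

 # import the last checkpoint into the (empty) dfs.name.dir
 bin/hadoop namenode -importCheckpoint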

+92

hadoop.tmp.dir in core-site.xml defaults to /tmp/hadoop-${user.name}, which is cleared after each reboot. Change this to a directory that is not cleared on reboot.
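A minimal override in core-site.xml could look like the sketch below; the path is only an example, and any directory that survives a reboot and is writable by the hadoop user will do:

 <property>
   <name>hadoop.tmp.dir</name>
   <value>/home/youruser/hadoop/tmp</value>
 </property>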

+31

The following steps worked for me with hadoop 2.2.0.

STEP 1 stop hadoop

 hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/stop-dfs.sh 

STEP 2 delete tmp folder

 hduser@prayagupd$ sudo rm -rf /app/hadoop/tmp/ 

STEP 3 create /app/hadoop/tmp

 hduser@prayagupd$ sudo mkdir -p /app/hadoop/tmp
 hduser@prayagupd$ sudo chown hduser:hadoop /app/hadoop/tmp
 hduser@prayagupd$ sudo chmod 750 /app/hadoop/tmp

STEP 4 format namenode

 hduser@prayagupd$ hdfs namenode -format 

STEP 5 start dfs

 hduser@prayagupd$ /usr/local/hadoop-2.2.0/sbin/start-dfs.sh 

STEP 6 check jps

 hduser@prayagupd$ jps
 11342 Jps
 10804 DataNode
 11110 SecondaryNameNode
 10558 NameNode

+23

In conf/hdfs-site.xml you should have a property like:

 <property>
   <name>dfs.name.dir</name>
   <value>/home/user/hadoop/name/data</value>
 </property>

The "dfs.name.dir" property allows you to control where Hadoop writes NameNode metadata. And, giving it a different directory, and not / tmp, make sure that the NameNode data is not deleted upon reboot.

+3

Open a new terminal and run the namenode using path-to-your-hadoop-install/bin/hadoop namenode

Verify with jps; a NameNode process should be listed.

+2

If anyone is using hadoop 1.2.1 and the namenode will not start, go to core-site.xml and change dfs.default.name to fs.default.name.

Then format the namenode with $ hadoop namenode -format.

Finally, start hdfs using start-dfs.sh and check the services with jps.
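For reference, a minimal entry with the correctly named property might look like this in core-site.xml (the hdfs://localhost:9000 address is just an example for pseudo-distributed mode):

 <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
 </property>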

+1

Why do most of the answers here assume that all data needs to be deleted, reformatted, and Hadoop restarted? How do we know the namenode is not making progress, rather than just taking a long time? It will take a while when there is a lot of data in HDFS. Check progress in the logs before assuming anything is hung or stuck.

 [kadmin@hadoop-node-0 logs]$ tail hadoop-kadmin-namenode-hadoop-node-0.log
 ...
 2016-05-13 18:16:44,405 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 117/141 transactions completed. (83%)
 2016-05-13 18:16:56,968 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 121/141 transactions completed. (86%)
 2016-05-13 18:17:06,122 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 122/141 transactions completed. (87%)
 2016-05-13 18:17:38,321 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 123/141 transactions completed. (87%)
 2016-05-13 18:17:56,562 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 124/141 transactions completed. (88%)
 2016-05-13 18:17:57,690 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: 127/141 transactions completed. (90%)

That was after almost an hour of waiting on one particular system, and it was still making progress every time I looked. Have patience with Hadoop when bringing the system up, and check the logs before assuming something is hung or not progressing.

+1

In the core-site.xml file:

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:9000</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/home/yourusername/hadoop/tmp/hadoop-${user.name}</value>
   </property>
 </configuration>

and format the namenode with:

hdfs namenode -format

This worked for hadoop 2.8.1.

+1

Have you changed dfs.name.dir in conf/hdfs-site.xml?

Format the namenode after changing it.

 $ bin/hadoop namenode -format
 $ bin/start-all.sh

0

If you kept the default settings when running hadoop, the port for the namenode will be 50070. You will need to find any processes running on this port and kill them first.

  • Stop all running services with: bin/stop-all.sh

  • sudo netstat -tulpn | grep :50070 # check for any process running on port 50070; if one exists, it will appear on the right-hand side of the output

  • sudo kill -9 <process_id> # kill the process

  • sudo rm -r /app/hadoop/tmp # delete the temporary folder

  • sudo mkdir /app/hadoop/tmp # recreate it

  • sudo chmod -R 777 /app/hadoop/tmp (777 is for this example only)

  • bin/hadoop namenode -format # format the hadoop namenode

  • bin/start-all.sh # start all hadoop services

Refer to this blog.

0

If you encounter this problem after rebooting the system, the steps below will work fine.

Workaround:

1) Format the namenode: bin/hadoop namenode -format

2) Start all the processes again: bin/start-all.sh

Permanent fix:

1) Go to conf/core-site.xml and change fs.default.name to your own value.

2) Format the namenode: bin/hadoop namenode -format

3) Start all the processes again: bin/start-all.sh

0

Try this:

1) Stop all hadoop processes: stop-all.sh

2) Delete the tmp folder manually

3) Format the namenode: hadoop namenode -format

4) Start all processes again: start-all.sh

0

I faced the same problem.

(1) Always check for typing errors when editing the .xml configuration files, especially the XML tags.

(2) Go to the bin directory and run ./start-all.sh

(3) Then run jps to check whether the processes are running.

0

For me, the following worked after I changed the namenode and datanode directories in hdfs-site.xml (see the sketch after these steps).

- Before performing the following steps, stop all services using stop-all.sh (or, in my case, stop-dfs.sh to stop dfs).

  • In the newly configured directory for each node (namenode and datanode), delete every folder/file inside it (in my case, the current directory).
  • Remove the temporary Hadoop directory: $ rm -rf /tmp/hadoop-$USER
  • Format the namenode: hadoop/bin/hdfs namenode -format
  • start-dfs.sh

After completing these steps, my namenode and datanodes came up using the newly configured directories.
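The hdfs-site.xml configuration these steps assume could look roughly like this; the property names below are the Hadoop 2.x ones and the paths are only examples:

 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///home/youruser/hadoopdata/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///home/youruser/hadoopdata/hdfs/datanode</value>
 </property>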

0

Add the hadoop.tmp.dir property in core-site.xml:

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:9000</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/home/yourname/hadoop/tmp/hadoop-${user.name}</value>
   </property>
 </configuration>

and format hdfs (hadoop 2.7.1):

 $ hdfs namenode -format 

The default value in core-default.xml is /tmp/hadoop-${user.name}, which will be deleted after a reboot.

0

If your namenode is stuck in safe mode, you can ssh to the namenode host, su to the hdfs user, and run the following command to leave safe mode:

 hdfs dfsadmin -fs hdfs://server.com:8020 -safemode leave 

0

I ran $ hadoop namenode to start the namenode manually in the foreground.

From the logs I realized that port 50070 was already occupied; it is the default used by dfs.namenode.http-address. After configuring dfs.namenode.http-address in hdfs-site.xml, everything went well.
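A sketch of that override in hdfs-site.xml might look like the following; 50071 is just an example of a free port:

 <property>
   <name>dfs.namenode.http-address</name>
   <value>0.0.0.0:50071</value>
 </property>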

0

After rebooting, I came across the same thing.

For hadoop-2.7.3, all I had to do was format the namenode:

 <HadoopRootDir>/bin/hdfs namenode -format 

Then the jps command shows

 6097 DataNode
 755 RemoteMavenServer
 5925 NameNode
 6293 SecondaryNameNode
 6361 Jps

0
I found a solution and am sharing it for anyone who gets these errors:

1. First check hdfs-site.xml under /home/hadoop/etc/hadoop and verify the namenode and datanode paths:

 <property>
   <name>dfs.name.dir</name>
   <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.data.dir</name>
   <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
 </property>

2. Check the permissions, group, and user of the namenode and datanode directories (e.g. /home/hadoop/hadoopdata/hdfs/datanode), and correct any mismatches, for example:

 chown -R hadoop:hadoop in_use.lock   # change user and group
 chmod -R 755 <file_name>             # change the permissions
0

For me, the problem went away after deleting the ResourceManager data folder. Formatting the namenode alone did not solve it.

0

Instead of formatting the namenode, you could try restarting the namenode service. This worked for me:

 sudo service hadoop-master restart

If it comes up in safe mode, leave safe mode with:

 hadoop dfsadmin -safemode leave
-1


