Hadoop pseudo-distributed mode - Datanode and tasktracker not starting

I am running a Red Hat Enterprise Linux Server 6.4 (Santiago) distribution with Hadoop 1.1.2 installed on it. I made the necessary configuration changes to enable pseudo-distributed mode, but when I start Hadoop, the datanode and tasktracker do not start.

I cannot copy files to HDFS:

 [hduser@is-joshbloom-hadoop hadoop]$ hadoop dfs -put README.txt /input
 Warning: $HADOOP_HOME is deprecated.
 13/05/23 16:42:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input could only be replicated to 0 nodes, instead of 1

Also, after running hadoop-daemon.sh start datanode, I only get this message:

 starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-is-joshbloom-hadoop.out 

The same applies to the tasktracker. But when I run the same command for the namenode, secondarynamenode, and jobtracker, they seem to be running already:

 namenode running as process 32933. Stop it first. 

I tried the following solutions:

  • Reformatting the namenode
  • Reinstalling hadoop
  • Installing a different version of hadoop (1.0.4)

None of them seem to work. I followed the same installation steps on my Mac and on an Amazon Ubuntu VM, and there it works fine.

How can I get hadoop working? Thanks!

**UPDATE**

Below is an entry from the datanode log:

 2013-05-23 16:27:44,087 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
 /************************************************************
 STARTUP_MSG: Starting DataNode
 STARTUP_MSG:   host = java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 1.1.2
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
 ************************************************************/
 2013-05-23 16:27:44,382 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
 2013-05-23 16:27:44,432 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
 2013-05-23 16:27:44,446 ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...
 java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
     at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
     at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:463)
     at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:394)
     at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:390)
     at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:152)
     at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:133)
     at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
     at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1589)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
 Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
     at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
     at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
     at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
     ... 11 more
 2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
 2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
 2013-05-23 16:27:44,768 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
 2013-05-23 16:27:44,914 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
 2013-05-23 16:27:45,212 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
     at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
     at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:271)
     at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:289)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:301)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
 Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
     at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
     at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
     at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
     ... 8 more
 2013-05-23 16:27:45,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
 ************************************************************/

**UPDATE 2**

Contents of /etc/hosts:

 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
3 Answers

Modify your /etc/hosts to include a loopback mapping for your hostname:

 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 127.0.1.1   is-joshbloom-hadoop
 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Your problem is that your machine does not know how to resolve the hostname is-joshbloom-hadoop to a specific IP address. Resolution typically happens in one of two places: through a DNS server or via the local hosts file (the hosts file takes precedence).

The above amendment to your hosts file lets the machine resolve the name is-joshbloom-hadoop to the IP address 127.0.1.1. The OS treats the whole 127.0.0.0/8 range as internal loopback addresses, so you could use any address in that range here. My Ubuntu laptop uses 127.0.1.1, and I am sure it varies between OSs, but my guess is that by not using 127.0.0.1 you avoid having to hunt for it in the localhost line if you change your machine name in the future.
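As a quick sanity check (a minimal sketch; the hostname and the 127.0.1.1 address follow the example above), you can confirm that the name now resolves before trying the failing daemons again:

 getent hosts is-joshbloom-hadoop    # should print 127.0.1.1  is-joshbloom-hadoop
 ping -c 1 is-joshbloom-hadoop       # should get replies from 127.0.1.1
 hadoop-daemon.sh start datanode     # then retry the datanode...
 hadoop-daemon.sh start tasktracker  # ...and the tasktracker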


Check your core-site.xml in $HADOOP_HOME/conf. It contains the fs.default.name property, and the hostname used there must be resolvable via your /etc/hosts. Since the hostname "is-joshbloom-hadoop" is missing from /etc/hosts, use localhost instead:

 <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:54310</value>
 </property>
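If you change this value, restart the daemons so they pick up the new fs.default.name, and then check that HDFS answers (a minimal sketch using the stock Hadoop 1.x control scripts):

 stop-all.sh
 start-all.sh
 hadoop fs -ls /    # should list the HDFS root without RemoteException errors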

The problem seems to be that you have nothing in the slaves file under conf/slaves.

Check the slaves file in conf/slaves. Remove everything from it and add localhost. Then delete the name and data directories referenced by dfs.name.dir and dfs.data.dir in hdfs-site.xml.

Format the HDFS file system, and then start the daemons again, for example as sketched below.
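A minimal sketch of that sequence (the data directory paths are placeholders; substitute whatever dfs.name.dir and dfs.data.dir actually point to in your hdfs-site.xml):

 stop-all.sh
 echo "localhost" > $HADOOP_HOME/conf/slaves
 rm -rf /path/to/dfs.name.dir /path/to/dfs.data.dir   # placeholder paths: use your configured directories
 hadoop namenode -format
 start-all.sh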

