Hadoop - java.net.ConnectException: Connection refused

I want to connect to HDFS (on localhost) and I get an error:

Call From despubuntu-ThinkPad-E420/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

I followed all the steps in other posts, but it did not solve the problem. I am using hadoop 2.7 and these are the settings:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/despubuntu/hadoop/name/data</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

I ran /usr/local/hadoop/bin/hdfs namenode -format and /usr/local/hadoop/sbin/start-all.sh

But when I type "jps", the result is:

10650 Jps
4162 Main
5255 NailgunRunner
20831 Launcher

I need help...

+4
5 answers

Make sure that DFS, whose port is set in core-site.xml (9000 in many setups, 54310 in yours), is actually started. You can check that with the jps command, and you can start it with sbin/start-dfs.sh; a quick check is sketched below.
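A minimal way to verify that the NameNode is really listening on the port from core-site.xml (54310 in the question; 9000 in many guides) — just a sketch, not specific to this setup:

jps                            # NameNode, DataNode and SecondaryNameNode should be listed
ss -tlnp | grep 54310          # or: netstat -tlnp | grep 54310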

+3

Make sure Hadoop is set up correctly by going through the following steps:

Step 1: Edit .bashrc:

vi $HOME/.bashrc

Add the following lines at the end of the file (adjust the paths to your own setup):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
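After saving, a quick sanity check that the new variables are picked up (a small sketch based on the paths used above):

source $HOME/.bashrc
echo $HADOOP_HOME        # should print /usr/local/hadoop
which hadoop             # should resolve to /usr/local/hadoop/bin/hadoop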

Step 2: In hadoop-env.sh set:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
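If you are unsure what JAVA_HOME should be on your machine, one way to find it on Ubuntu is shown below (the java-6-sun path above is the tutorial's example; the OpenJDK path here is likewise only illustrative):

readlink -f $(which java)
# e.g. /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
# -> JAVA_HOME would then be /usr/lib/jvm/java-7-openjdk-amd64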

Step 3: Create the Hadoop temp directory and set its ownership:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
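To confirm the ownership and permissions match what was just set:

ls -ld /app/hadoop/tmp
# drwxr-x--- 2 hduser hadoop ... /app/hadoop/tmp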

Step 4: Add the following snippet to core-site.xml (between the <configuration> tags):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>

Step 5: Add the following snippet to mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
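Note that in Hadoop 2.x, mapred-site.xml usually does not exist yet and is created from the bundled template first (assuming the /usr/local/hadoop install path used above):

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml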

Step 6: Add the following snippet to hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Finally, format HDFS (this only needs to be done the first time you set up Hadoop):

 $ /usr/local/hadoop/bin/hadoop namenode -format
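After formatting, start the daemons and check that they come up; a minimal sketch assuming the same install path:

/usr/local/hadoop/sbin/start-dfs.sh
/usr/local/hadoop/sbin/start-yarn.sh
jps
# Expect NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager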

Hope this helps.

+2

I had the same issue. Make sure that the NameNode, DataNode, ResourceManager and NodeManager daemons are actually running. Run start-all.sh; otherwise you will not be able to connect to HDFS.

0

Make sure that all the Hadoop Java processes are running by typing jps in the terminal. The jps output should show:

  • DataNode
  • Jps
  • NameNode
  • SecondaryNameNode

If the NameNode is not listed, start the HDFS daemons with: start-dfs.sh


0

I was getting a similar error. After checking, I found that my namenode service was stopped.

Check the namenode status: sudo status hadoop-hdfs-namenode

If it is not in the started/running state,

start the namenode service: sudo start hadoop-hdfs-namenode

Keep in mind that it takes some time before the namenode service becomes fully functional after a restart, because it reads all of the HDFS edits into memory. You can follow its progress in /var/log/hadoop-hdfs/ with the command tail -f /var/log/hadoop-hdfs/{Latest log file}

0