Your client should have the hdfs-site.xml of the Hadoop cluster, as it contains the nameservice that covers both namenodes, as well as the namenode hostnames, the ports to connect on, etc.
You should set the following properties on your client, as indicated in the answer (fooobar.com/questions/10276734/...):
"dfs.nameservices", "hadooptest" "dfs.client.failover.proxy.provider.hadooptest" , "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" "dfs.ha.namenodes.hadooptest", "nn1,nn2" "dfs.namenode.rpc-address.hadooptest.nn1", "10.10.14.81:8020" "dfs.namenode.rpc-address.hadooptest.nn2", "10.10.14.82:8020"
Thus, your client will use the class org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider to find out which namenode is active and route requests to it: it first tries to connect to the first configured namenode and, if that fails, fails over to the second.
https://blog.woopi.org/wordpress/files/hadoop-2.6.0-javadoc/org/apache/hadoop/hdfs/server/namenode/ha/ConfiguredFailoverProxyProvider.html
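A minimal Java sketch of a client that uses these settings (the class name HdfsHaClientExample is made up for illustration; it assumes the hadoop-client dependency is on the classpath and reuses the nameservice and addresses from the answer):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    import java.net.URI;

    public class HdfsHaClientExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // HA settings from the answer; in practice these usually come from
            // the cluster's hdfs-site.xml on the client's classpath.
            conf.set("dfs.nameservices", "hadooptest");
            conf.set("dfs.ha.namenodes.hadooptest", "nn1,nn2");
            conf.set("dfs.namenode.rpc-address.hadooptest.nn1", "10.10.14.81:8020");
            conf.set("dfs.namenode.rpc-address.hadooptest.nn2", "10.10.14.82:8020");
            conf.set("dfs.client.failover.proxy.provider.hadooptest",
                    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

            // Connect through the logical nameservice, not a concrete namenode host;
            // the proxy provider picks the active namenode and fails over if needed.
            FileSystem fs = FileSystem.get(new URI("hdfs://hadooptest"), conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }

Note that the URI passed to FileSystem.get names the nameservice ("hdfs://hadooptest") rather than a concrete namenode host, which is what allows the client to survive a failover.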
