Running the Hadoop / YARN distributed shell example

I am trying to run the distributed shell example (built from the Hadoop SVN checkout, so the installed version is 3.0.0-SNAPSHOT):

    yarn jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0-SNAPSHOT.jar \
        org.apache.hadoop.yarn.applications.distributedshell.Client \
        -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0-SNAPSHOT.jar \
        -shell_command whoami

However, this does not work:

    12/09/03 13:44:37 FATAL distributedshell.Client: Error running CLient
    java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:128)
        at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getClusterMetrics(ClientRMProtocolPBClientImpl.java:123)
        at org.hadoop.yarn.client.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:163)
        at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:316)
        at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:164)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
    Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:456)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1732)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1728)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1726)
        at org.apache.hadoop.ipc.Client.call(Client.java:1164)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at $Proxy7.getClusterMetrics(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getClusterMetrics(ClientRMProtocolPBClientImpl.java:121)
        ... 8 more

The key problem appears to be in the nested exception:

 Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB 

Does anyone know how protocol registration works for Hadoop's protobuf RPC? Any ideas on how to debug this?

Edit: With Hadoop 2.0.1-alpha, it works a little better.

    12/09/03 18:43:14 INFO distributedshell.Client: Application did not finish. YarnState=FAILED, DSFinalStatus=FAILED. Breaking monitoring loop
    12/09/03 18:43:14 ERROR distributedshell.Client: Application failed to complete successfully

So maybe my build is not working correctly. Any ideas on what causes the problem above? I would really like to use HEAD, as I plan to do low-level experiments beyond MapReduce. Or is HEAD partially broken; does the distributed shell work for you on HEAD?

My own (not yet working ...) client still fails with the same error:

 Caused by: java.io.IOException: Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB 
1 answer

It turned out that the main problem with my own code was that I naively instantiated the Configuration class instead of YarnConfiguration. As a result, the YARN configuration files were never loaded, and the client tried to contact the servers on their default ports, which did not match my setup.
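A minimal sketch of the difference (the class name ClientSetup is a hypothetical wrapper; the Hadoop classes are real). YarnConfiguration extends Configuration and additionally registers yarn-default.xml and yarn-site.xml as default resources, so the configured ResourceManager address is picked up instead of the hard-coded defaults:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClientSetup {

    public static Configuration buildConf() {
        // Wrong: a plain Configuration only loads core-default.xml / core-site.xml,
        // not yarn-site.xml, so the client falls back to the default RM address.
        // Configuration conf = new Configuration();

        // Right: YarnConfiguration also loads yarn-default.xml and yarn-site.xml,
        // so properties like yarn.resourcemanager.address take effect.
        return new YarnConfiguration();
    }
}
```

Passing the resulting object to the YARN client code is unchanged, since YarnConfiguration is a Configuration subclass.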

The same error is present in the distributedshell example.



