Error trying to start an HBase MapReduce job

I am having a really hard time getting my HBase MapReduce job to launch with Hadoop.

I am using Hortonworks Hadoop 2, and HBase version 0.96.1-hadoop2. When I try to start my MapReduce job as follows:

hadoop jar target/invoice-aggregation-0.1.jar  start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10

Hadoop informs me that invoice-aggregation-0.1.jar was not found in its file system. Why does the jar need to be on HDFS at all?

Here is the error I get:

14/02/05 10:31:48 ERROR security.UserGroupInformation: PriviledgedActionException as:adio (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    at com.company.invoice.MapReduceStarter.main(MapReduceStarter.java:244)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I would appreciate any suggestion, help, or even a guess as to why I am getting this error.

+4
4 answers

The error is because Hadoop could not find the jars in place.

Put the jars in place and run the job again. This will solve the problem.
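To see whether the jar is actually at the location the job client resolves (the path below is copied from the error message in the question), a check like this should work:

hdfs dfs -ls hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar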

+1

Copy the jars your application needs to HDFS: hadoop fs -copyFromLocal 'myjarslocation' 'where_hdfs_needs_the_jars'. When the MapReduce job is submitted, it looks for the jars on HDFS rather than on the local file system, which is why you get this FileNotFoundException. Once the jars are at the path the job expects, the job should start.
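As a concrete sketch, assuming the default file system is hdfs://localhost:8020 as in the error above (paths taken from the error message):

hdfs dfs -mkdir -p /home/adio/workspace/projects/invoice-aggregation/target
hdfs dfs -copyFromLocal target/invoice-aggregation-0.1.jar /home/adio/workspace/projects/invoice-aggregation/target/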

0

You can also ship the dependency JARs with the "-libjars" option of hadoop jar …
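A hypothetical invocation (the dependency jar paths are placeholders; note that -libjars is only honored if the driver's main method parses generic options, e.g. via ToolRunner or GenericOptionsParser):

hadoop jar target/invoice-aggregation-0.1.jar -libjars /path/to/hbase-client.jar,/path/to/hbase-common.jar start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10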

0

Check that mapred-site.xml is present in your HADOOP_CONF_DIR.
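A quick way to verify (the /etc/hadoop/conf path is typical for a Hortonworks install, but may differ on your system):

ls $HADOOP_CONF_DIR/mapred-site.xml
# if it is missing, copy it from the cluster configuration:
cp /etc/hadoop/conf/mapred-site.xml $HADOOP_CONF_DIR/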

0

Source: https://habr.com/ru/post/1525371/

