OpenCV library loaded in Hadoop but not working

I am trying to use OpenCV with Hadoop. Below is my code. I am just checking whether the OpenCV libraries work correctly with Hadoop, that is, when I run OpenCV code inside Hadoop's public int run(String[] args) method.

I searched the Internet and found several ways to add the OpenCV native library ( libopencv_java310.so ) to Hadoop. I tried several of them, but none worked. For example, this tutorial.

It says to set JAVA_LIBRARY_PATH in hadoop-config.sh. But that did not work. I got this error:

 Exception in thread "main" java.lang.UnsatisfiedLinkError: no opencv_java310 in java.library.path 

at the line System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
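A quick way to debug this kind of UnsatisfiedLinkError is to print the directories the JVM actually searches for native libraries; the directory containing libopencv_java310.so must appear among them. A minimal diagnostic sketch (the class name is just an example):

```java
import java.io.File;

// Prints each directory on java.library.path, one per line.
// System.loadLibrary("opencv_java310") only succeeds if one of
// these directories contains libopencv_java310.so.
public class LibraryPathCheck {
    public static void main(String[] args) {
        String libPath = System.getProperty("java.library.path");
        for (String dir : libPath.split(File.pathSeparator)) {
            System.out.println(dir);
        }
    }
}
```

Running this inside the same JVM that fails (for example, at the top of the Hadoop driver) shows whether the path configured in hadoop-config.sh actually reached the process.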

Finally, I added the OpenCV native library ( libopencv_java310.so ) to this path (a solution I found on the Internet):

 $HADOOP_HOME/lib/native 

And that seemed to work: I no longer got the above error. But now I get this error:

 Exception in thread "main" java.lang.UnsatisfiedLinkError: org.opencv.objdetect.CascadeClassifier.CascadeClassifier_1(Ljava/lang/String;) 

This error is on the line:

 CascadeClassifier cad = new CascadeClassifier(); 

As far as I know, this error occurs when the native OpenCV library has not been loaded. But the library is loaded now, so I do not know what is causing this error.

 public int run(String[] args) throws Exception {
     Configuration conf = new Configuration();
     Job job = Job.getInstance(conf);
     job.setJarByClass(readVideoFile.class);
     job.setJobName("smallfilestoseqfile");
     job.setInputFormatClass(readVideoInputFormat.class);
     job.setNumReduceTasks(1);
     FileInputFormat.setInputPaths(job, new Path(args[0]));
     FileOutputFormat.setOutputPath(job, new Path(args[1]));
     job.setOutputKeyClass(Text.class);
     job.setOutputValueClass(Text.class);
     job.setMapperClass(readVideoMapper.class);
     System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
     CascadeClassifier cad = new CascadeClassifier();
     return job.waitForCompletion(true) ? 0 : 1;
 }
1 answer

I had the same problem. I used the following workaround.

You can start by using JavaCV instead, as it works well with Hadoop. Alternatively, to keep using OpenCV, build an executable jar that bundles all the OpenCV jars and native libraries inside it. The native library still has to be loaded by the operating system from a real file, so in the executable jar write code that extracts the native OpenCV library to a temporary file, loads that file, and finally deletes the temporary file.


Source: https://habr.com/ru/post/1208847/
