Need to understand some Java code

I am new to Android programming, but I am picking it up quickly. I found an interesting piece of code here: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/CameraView.java

It is about streaming from your device's camera to your browser.

But I want to know how the code works.

Here is what I want to understand:

1) How the frames end up in the web browser. I understand that an index.html file is served from the device's IP address (over Wi-Fi) and that this file makes the page reload every second. But how do they send that index.html file to the requesting browser with sockets?
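
Just to check my understanding, here is my own minimal sketch of what serving such a page over a plain socket could look like (invented names and details, not the project's actual code):

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal illustration: answer every connection with a tiny HTML page
    // that reloads itself every second and embeds the current camera frame.
    // The incoming request is ignored for brevity.
    public class TinyHttpSketch {
        public static void main(String[] args) throws Exception {
            String html = "<html><head>"
                    + "<meta http-equiv=\"refresh\" content=\"1\">" // reload every second
                    + "</head><body>"
                    + "<img src=\"/current.jpg\">"                  // the latest frame
                    + "</body></html>";
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    try (Socket client = server.accept()) {
                        OutputStream out = client.getOutputStream();
                        out.write(("HTTP/1.0 200 OK\r\n"
                                + "Content-Type: text/html\r\n"
                                + "Content-Length: " + html.length() + "\r\n\r\n"
                                + html).getBytes("US-ASCII"));
                        out.flush();
                    }
                }
            }
        }
    }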

2) http://code.google.com/p/camdroiduni/wiki/Status#save_pictures_frequently It is mentioned here that they use video, but I am still convinced that they take individual pictures and send them, because I cannot see any MediaRecorder being used.

My question now is how they keep capturing and sending these images (and saving them to the SD folder, I think). I believe this is done by the code below, but how does it work? Something like Camera.takePicture() takes far too long to save the image and restart the preview, so there would be no way to get a live stream that way.

    // Called from another thread: blocks until the preview callback
    // below has produced a freshly encoded JPEG frame.
    public synchronized byte[] getPicture() {
        try {
            while (!isPreviewOn) wait();
            isDecoding = true;
            mCamera.setOneShotPreviewCallback(this);
            while (isDecoding) wait();
        } catch (Exception e) {
            return null;
        }
        return mCurrentFrame;
    }

    // Picks view dimensions that keep the original aspect ratio.
    private LayoutParams calcResolution(int origWidth, int origHeight,
                                        int aimWidth, int aimHeight) {
        double origRatio = (double) origWidth / (double) origHeight;
        double aimRatio = (double) aimWidth / (double) aimHeight;

        if (aimRatio > origRatio)
            return new LayoutParams(origWidth, (int) (origWidth / aimRatio));
        else
            return new LayoutParams((int) (origHeight * aimRatio), origHeight);
    }

    // Despite the name, this converts an NV21 (YUV) preview frame into
    // 32-bit ARGB pixels; the JPEG encoding happens later via Bitmap.
    private void raw2jpg(int[] rgb, byte[] raw, int width, int height) {
        final int frameSize = width * height;

        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = 0;
                if (yp < raw.length) {
                    y = (0xff & ((int) raw[yp])) - 16;
                }
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    if (uvp < raw.length) {
                        v = (0xff & raw[uvp++]) - 128;
                        u = (0xff & raw[uvp++]) - 128;
                    }
                }

                int y1192 = 1192 * y;
                int r = (y1192 + 1634 * v);
                int g = (y1192 - 833 * v - 400 * u);
                int b = (y1192 + 2066 * u);

                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;

                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                        | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }

    // One-shot preview callback: convert the raw frame, compress it to JPEG
    // into mCurrentFrame, and wake up the thread waiting in getPicture().
    @Override
    public synchronized void onPreviewFrame(byte[] data, Camera camera) {
        int width = mSettings.PictureW();
        int height = mSettings.PictureH();

        // API 8 and above
        // YuvImage yuvi = new YuvImage(data, ImageFormat.NV21, width, height, null);
        // Rect rect = new Rect(0, 0, yuvi.getWidth(), yuvi.getHeight());
        // OutputStream out = new ByteArrayOutputStream();
        // yuvi.compressToJpeg(rect, 10, out);
        // byte[] ref = ((ByteArrayOutputStream) out).toByteArray();

        // API 7
        int[] temp = new int[width * height];
        OutputStream out = new ByteArrayOutputStream();
        // byte[] ref = null;
        Bitmap bm = null;
        raw2jpg(temp, data, width, height);
        bm = Bitmap.createBitmap(temp, width, height, Bitmap.Config.RGB_565);
        bm.compress(CompressFormat.JPEG, mSettings.PictureQ(), out);
        /*ref*/ mCurrentFrame = ((ByteArrayOutputStream) out).toByteArray();
        // mCurrentFrame = new byte[ref.length];
        // System.arraycopy(ref, 0, mCurrentFrame, 0, ref.length);
        isDecoding = false;
        notify();
    }

I really hope someone can explain these things as clearly as possible. That would be very helpful.

2 answers

Well, if someone is interested, I have an answer.

The code repeatedly grabs a snapshot from the camera preview by using setOneShotPreviewCallback() to invoke onPreviewFrame(). The frame arrives in YUV format, so raw2jpg() converts it into 32-bit ARGB for the JPEG encoder. NV21 is a YUV planar format.
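
For reference, the commented-out "API 8 and above" branch in onPreviewFrame() performs the same conversion with YuvImage instead of the manual raw2jpg() loop; roughly like this, wrapped in a hypothetical helper:

    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import java.io.ByteArrayOutputStream;

    class Nv21ToJpeg {
        // API 8+: compress an NV21 preview frame straight to JPEG,
        // replacing the manual raw2jpg() + Bitmap round-trip of API 7.
        static byte[] toJpeg(byte[] nv21, int width, int height, int quality) {
            YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, width, height), quality, out);
            return out.toByteArray();
        }
    }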

getPicture() is called, presumably by the application; it produces the JPEG data for the image in the private byte[] mCurrentFrame and returns that array. What happens to it afterwards is not shown in this piece of code. Note that getPicture() executes a couple of wait()s. That is because the image acquisition code runs in a separate thread from the application.
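
Here is a stripped-down model of that handshake, with invented names, just to illustrate the synchronization (the real code folds this into CameraView itself):

    // Model of the wait()/notify() handshake: the server thread blocks in
    // take() until the camera's preview callback hands over a frame via put().
    class FrameExchange {
        private byte[] frame;
        private boolean ready = false;

        // Server thread: block until the next encoded frame is available.
        public synchronized byte[] take() throws InterruptedException {
            while (!ready) wait();
            ready = false;
            return frame;
        }

        // Camera callback thread: publish a frame and wake the waiter.
        public synchronized void put(byte[] jpeg) {
            frame = jpeg;
            ready = true;
            notify();
        }
    }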

In the main activity, the public static byte[] CurrentJPEG receives this via cameraFrame.getPicture() in public void run(). In the web service it is then sent over a socket to the desired IP.
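
Continuing the hypothetical socket sketch from the question above, the reply for the image itself could look something like this (again my own illustration; the project's actual WebServer.java may differ):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;

    class FrameResponder {
        // Hypothetical handler: answer a request for /current.jpg with the
        // newest JPEG bytes obtained from CameraView.getPicture().
        static void sendCurrentFrame(Socket client, byte[] jpeg) throws IOException {
            OutputStream out = client.getOutputStream();
            out.write(("HTTP/1.0 200 OK\r\n"
                    + "Content-Type: image/jpeg\r\n" // lets the browser render it as an image
                    + "Content-Length: " + jpeg.length + "\r\n\r\n")
                    .getBytes("US-ASCII"));
            out.write(jpeg);
            out.flush();
        }
    }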

Correct me if I am wrong.

Now I am still wondering how the image is displayed in the browser as an image, since you are just sending it byte data, right? Please check this: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/WebServer.java


Nothing in this code sends any data to any URL. The getPicture() method returns a byte array, which is presumably written out by some other method or class, which then sends it to the web service through some protocol (probably UDP).
