I am new to Android programming, but I'm picking it up quickly. I found an interesting piece of code here: http://code.google.com/p/camdroiduni/source/browse/trunk/code/eclipse_workspace/camdroid/src/de/aes/camdroid/CameraView.java
It streams video from your device's camera to your browser.
But I want to know how the code works.
Here is what I want to understand:
1) How the frames get into a web browser. I understand that the app serves an index.html page at the device's IP address (over Wi-Fi), and that this page reloads itself every second. But how is the index.html file actually sent to the browser using sockets?
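I don't know exactly how camdroid implements it, but serving a page over a plain socket just means writing an HTTP response by hand. Here is a minimal sketch in desktop Java (all names, the port, and the image path are my own assumptions, not taken from the project):

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TinyHttpServer {

    // Builds a complete HTTP/1.1 response around the given HTML body.
    static String buildResponse(String html) {
        byte[] body = html.getBytes(StandardCharsets.UTF_8);
        return "HTTP/1.1 200 OK\r\n"
             + "Content-Type: text/html\r\n"
             + "Content-Length: " + body.length + "\r\n"
             + "Connection: close\r\n"
             + "\r\n"
             + html;
    }

    public static void main(String[] args) throws Exception {
        // A page that reloads itself every second, like the index.html described above.
        String html = "<html><head><meta http-equiv=\"refresh\" content=\"1\">"
                    + "</head><body><img src=\"/current.jpg\"></body></html>";

        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept()) {
                    // Ignore the request details and always answer with the page.
                    OutputStream out = client.getOutputStream();
                    out.write(buildResponse(html).getBytes(StandardCharsets.UTF_8));
                    out.flush();
                }
            }
        }
    }
}
```

The browser just connects to http://&lt;device-ip&gt;:8080/, reads that byte stream, and renders it; the meta refresh makes it re-request the page (and therefore a fresh image) every second.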
2) http://code.google.com/p/camdroiduni/wiki/Status#save_pictures_frequently It is mentioned there that they use video, but I am still convinced that they take individual pictures and send them, because I can't find any MediaRecorder in the code.
My question now is how they keep capturing these images and saving them to the SD card (I think). I believe it is done with the code below, but how does it work? Something like Camera.takePicture() takes too long to save the image and restart the preview, so it could never produce a live stream.
```java
public synchronized byte[] getPicture() {
    try {
        // Block until the preview is running.
        while (!isPreviewOn) wait();
        isDecoding = true;
        // Ask the camera for exactly one preview frame;
        // onPreviewFrame() will be invoked with it.
        mCamera.setOneShotPreviewCallback(this);
        // Block until onPreviewFrame() has decoded the frame.
        while (isDecoding) wait();
    } catch (Exception e) {
        return null;
    }
    return mCurrentFrame;
}

private LayoutParams calcResolution(int origWidth, int origHeight,
                                    int aimWidth, int aimHeight) {
    double origRatio = (double) origWidth / (double) origHeight;
    double aimRatio = (double) aimWidth / (double) aimHeight;
    if (aimRatio > origRatio)
        return new LayoutParams(origWidth, (int) (origWidth / aimRatio));
    else
        return new LayoutParams((int) (origHeight * aimRatio), origHeight);
}

// Converts an NV21 (YUV420SP) preview frame into ARGB pixels
// using fixed-point integer arithmetic.
private void raw2jpg(int[] rgb, byte[] raw, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = 0;
            if (yp < raw.length) {
                y = (0xff & ((int) raw[yp])) - 16;
            }
            if (y < 0) y = 0;
            // One V/U pair is shared by two horizontal pixels.
            if ((i & 1) == 0) {
                if (uvp < raw.length) {
                    v = (0xff & raw[uvp++]) - 128;
                    u = (0xff & raw[uvp++]) - 128;
                }
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            // Clamp to the 18-bit intermediate range before packing.
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

@Override
public synchronized void onPreviewFrame(byte[] data, Camera camera) {
    int width = mSettings.PictureW();
    int height = mSettings.PictureH();
    // (the snippet ends here in the original source)
```
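To see what raw2jpg is actually computing, here is the same fixed-point YUV-to-ARGB math pulled out for a single pixel, as plain Java you can run on a desktop (the class and method names are mine, not from the project):

```java
public class YuvPixelDemo {

    // Same integer math as raw2jpg, applied to one pixel.
    // yRaw, uRaw, vRaw are the unsigned byte values from an NV21 frame.
    static int yuvToArgb(int yRaw, int uRaw, int vRaw) {
        int y = yRaw - 16;            // video range: luma starts at 16
        if (y < 0) y = 0;
        int u = uRaw - 128;           // chroma is stored offset by 128
        int v = vRaw - 128;

        int y1192 = 1192 * y;         // ~1.164 scaled by 1024 (fixed point)
        int r = y1192 + 1634 * v;
        int g = y1192 - 833 * v - 400 * u;
        int b = y1192 + 2066 * u;

        // Clamp to the 18-bit intermediate range, then pack into ARGB_8888.
        if (r < 0) r = 0; else if (r > 262143) r = 262143;
        if (g < 0) g = 0; else if (g > 262143) g = 262143;
        if (b < 0) b = 0; else if (b > 262143) b = 262143;

        return 0xff000000 | ((r << 6) & 0xff0000)
                          | ((g >> 2) & 0xff00)
                          | ((b >> 10) & 0xff);
    }

    public static void main(String[] args) {
        // Y=16 with neutral chroma is video black, Y=235 is video white.
        System.out.printf("black: %08X%n", yuvToArgb(16, 128, 128));   // FF000000
        System.out.printf("white: %08X%n", yuvToArgb(235, 128, 128));  // FFFEFEFE
    }
}
```

So the class never calls takePicture() at all: setOneShotPreviewCallback() hands one raw preview frame to onPreviewFrame(), which is far faster than a full still capture, and the frame is then (presumably, elsewhere in the class) converted with raw2jpg and compressed to a JPEG for the browser.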
I really hope someone can explain these things as clearly as possible. That would be very helpful.