How to stream the camera from one iOS device to another using Multipeer Connectivity

How can we efficiently transfer data from one iOS device to another over Bluetooth or Wi-Fi in iOS 7? Below is the code to get the sample buffer.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get the CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}

Here we get the image that is captured by the iOS camera.

Can we send the sample buffer information directly to another device using Multipeer Connectivity, or is there a more efficient way to pass data to other iOS devices?

Thanks.

2 answers

I have a way to do this. We can use Multipeer Connectivity to stream JPEG-compressed images so that it looks like a camera stream.
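
Both snippets below assume an MCSession that is already set up and connected. In case it helps, here is a rough sketch of that setup; the class name, the property names and the "camera-stream" service type are placeholders I chose, not something from the original code:

#import <UIKit/UIKit.h>
#import <MultipeerConnectivity/MultipeerConnectivity.h>

// Sketch only. The MCSessionDelegate methods (including the didReceiveData:
// method shown below) would also be implemented on this class.
@interface CameraStreamViewController : UIViewController <MCSessionDelegate, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate>
@property (nonatomic, strong) MCPeerID *peerID;
@property (nonatomic, strong) MCSession *session;
@property (nonatomic, strong) MCNearbyServiceAdvertiser *advertiser;
@property (nonatomic, strong) MCNearbyServiceBrowser *browser;
@end

@implementation CameraStreamViewController

// Common setup for both peers.
- (void)setUpSession
{
    self.peerID  = [[MCPeerID alloc] initWithDisplayName:[UIDevice currentDevice].name];
    self.session = [[MCSession alloc] initWithPeer:self.peerID
                                  securityIdentity:nil
                              encryptionPreference:MCEncryptionNone];
    self.session.delegate = self;
}

// Called on the camera (sending) peer.
- (void)startAdvertising
{
    self.advertiser = [[MCNearbyServiceAdvertiser alloc] initWithPeer:self.peerID
                                                        discoveryInfo:nil
                                                          serviceType:@"camera-stream"];
    self.advertiser.delegate = self;
    [self.advertiser startAdvertisingPeer];
}

// Called on the viewer (receiving) peer.
- (void)startBrowsing
{
    self.browser = [[MCNearbyServiceBrowser alloc] initWithPeer:self.peerID
                                                    serviceType:@"camera-stream"];
    self.browser.delegate = self;
    [self.browser startBrowsingForPeers];
}

// MCNearbyServiceBrowserDelegate: invite any discovered peer into the session.
- (void)browser:(MCNearbyServiceBrowser *)browser foundPeer:(MCPeerID *)peerID withDiscoveryInfo:(NSDictionary *)info
{
    [browser invitePeer:peerID toSession:self.session withContext:nil timeout:30];
}

- (void)browser:(MCNearbyServiceBrowser *)browser lostPeer:(MCPeerID *)peerID
{
}

// MCNearbyServiceAdvertiserDelegate: accept incoming invitations.
- (void)advertiser:(MCNearbyServiceAdvertiser *)advertiser didReceiveInvitationFromPeer:(MCPeerID *)peerID withContext:(NSData *)context invitationHandler:(void (^)(BOOL accept, MCSession *session))invitationHandler
{
    invitationHandler(YES, self.session);
}

@end

Once the invitation is accepted, session.connectedPeers contains the other device and the snippets below can send to it.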

The peer that sends the stream uses this code in the captureOutput: delegate method:

NSData *imageData = UIImageJPEGRepresentation(cgBackedImage, 0.2);

// maybe not always the correct input? just using this to send current FPS...
AVCaptureInputPort *inputPort = connection.inputPorts[0];
AVCaptureDeviceInput *deviceInput = (AVCaptureDeviceInput *)inputPort.input;
CMTime frameDuration = deviceInput.device.activeVideoMaxFrameDuration;

NSDictionary *dict = @{ @"image": imageData,
                        @"timestamp": timestamp,
                        @"framesPerSecond": @(frameDuration.timescale) };
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:dict];

[_session sendData:data
           toPeers:_session.connectedPeers
          withMode:MCSessionSendDataReliable
             error:nil];

And on the receiving side:

- (void)session:(MCSession *)session didReceiveData:(NSData *)data fromPeer:(MCPeerID *)peerID
{
    // NSLog(@"(%@) Read %d bytes", peerID.displayName, data.length);

    NSDictionary *dict = (NSDictionary *)[NSKeyedUnarchiver unarchiveObjectWithData:data];
    UIImage *image = [UIImage imageWithData:dict[@"image"] scale:2.0];
    NSNumber *framesPerSecond = dict[@"framesPerSecond"];
}

We also receive the FPS value, so the receiver can use it to pace how the streamed images are displayed.
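
As an example of what that pacing could look like, here is a small sketch that holds each frame back until one frame interval has passed since the previous one was shown. The displayImage:framesPerSecond: helper, self.lastDisplayTime and self.imageView are placeholders of mine, not part of the answer above:

#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

// Sketch: pace display using the received framesPerSecond value.
// self.lastDisplayTime (NSTimeInterval) and self.imageView are assumed properties.
- (void)displayImage:(UIImage *)image framesPerSecond:(NSNumber *)framesPerSecond
{
    NSTimeInterval interval = 1.0 / MAX(framesPerSecond.doubleValue, 1.0);
    NSTimeInterval now = CACurrentMediaTime();

    // Never show a frame sooner than one interval after the previous one.
    NSTimeInterval delay = MAX(0.0, (self.lastDisplayTime + interval) - now);
    self.lastDisplayTime = now + delay;

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
}

You would call this from didReceiveData: instead of assigning the decoded image directly.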

Hope this helps.

Thanks.


Here is the best way to do this (and I will explain why at the end):

On an iOS device sending image data:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];

    CGImageRelease(newImage);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    if (image) {
        NSData *data = UIImageJPEGRepresentation(image, 0.7);
        NSError *err;
        [((ViewController *)self.parentViewController).session sendData:data
                                                                 toPeers:((ViewController *)self.parentViewController).session.connectedPeers
                                                                withMode:MCSessionSendDataReliable
                                                                   error:&err];
    }
}

On an iOS device receiving image data:

typedef struct {
    size_t length;
    void *data;
} ImageCacheDataStruct;

- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID
{
    dispatch_async(self.imageCacheDataQueue, ^{
        dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);

        const void *dataBuffer = [data bytes];
        size_t dataLength = [data length];

        // Allocate room for the struct itself (not just for a pointer to it)
        ImageCacheDataStruct *imageCacheDataStruct = calloc(1, sizeof(ImageCacheDataStruct));
        imageCacheDataStruct->data = (void *)dataBuffer;
        imageCacheDataStruct->length = dataLength;

        __block const void *kMyKey;
        dispatch_queue_set_specific(self.imageDisplayQueue, &kMyKey, (void *)imageCacheDataStruct, NULL);

        dispatch_sync(self.imageDisplayQueue, ^{
            ImageCacheDataStruct *queuedStruct = dispatch_queue_get_specific(self.imageDisplayQueue, &kMyKey);
            const void *dataBytes = queuedStruct->data;
            size_t length = queuedStruct->length;

            NSData *imageData = [NSData dataWithBytes:dataBytes length:length];
            UIImage *image = [UIImage imageWithData:imageData];
            free(queuedStruct);

            if (image) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
                    dispatch_semaphore_signal(self.semaphore);
                });
            } else {
                // Always signal, or the next frame would block forever
                dispatch_semaphore_signal(self.semaphore);
            }
        });
    });
}

The reason for the semaphore and the separate GCD queues is simple: you want the frames to be displayed at regular intervals. Otherwise the video sometimes appears to slow down, then speeds back up to catch up. This scheme ensures that each frame is shown one after another at a steady pace, regardless of bottlenecks in network bandwidth.
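
For completeness, the queues and the semaphore used above are assumed to already exist; they might be created roughly like this (a sketch, where the queue labels and the initial semaphore count of 1 are my own assumptions):

// Sketch: serial queues plus a semaphore so only one frame is in flight at a time.
// Assumed property types: dispatch_queue_t imageCacheDataQueue, imageDisplayQueue;
// dispatch_semaphore_t semaphore.
- (void)setUpFrameQueues
{
    self.imageCacheDataQueue = dispatch_queue_create("com.example.imageCacheDataQueue", DISPATCH_QUEUE_SERIAL);
    self.imageDisplayQueue   = dispatch_queue_create("com.example.imageDisplayQueue", DISPATCH_QUEUE_SERIAL);
    self.semaphore           = dispatch_semaphore_create(1);
}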


Source: https://habr.com/ru/post/975187/

