How to get 240 frames per second at low resolution on an iPhone 6

I am trying to process images in real time on an iPhone 6 at 240 frames per second. The problem is that at that capture rate I cannot process each frame fast enough, because I have to visit every pixel to compute an average value. Lowering the capture resolution would easily solve this, but I cannot figure out how to do it: the available AVCaptureDeviceFormat entries include a 192x144 px format, but only at 30 frames per second, and every 240 fps format has large dimensions. This is how I sample the data:

- (void)startDetection
{
    const int FRAMES_PER_SECOND = 240;
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetLow;

    // Retrieve the back camera
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *captureDevice;
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeVideo]) {
            if (device.position == AVCaptureDevicePositionBack) {
                captureDevice = device;
                break;
            }
        }
    }

    NSError *error;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];
    if (error) {
        NSLog(@"%@", error);
    }
    [self.session addInput:input];

    // Find the lowest-resolution format that supports the frame rate we want.
    AVCaptureDeviceFormat *currentFormat;
    for (AVCaptureDeviceFormat *format in captureDevice.formats) {
        NSArray *ranges = format.videoSupportedFrameRateRanges;
        AVFrameRateRange *frameRates = ranges[0];
        if (frameRates.maxFrameRate == FRAMES_PER_SECOND &&
            (!currentFormat ||
             (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width &&
              CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height))) {
            currentFormat = format;
        }
    }

    // Tell the device to use that format at the max frame rate.
    [captureDevice lockForConfiguration:nil];
    captureDevice.torchMode = AVCaptureTorchModeOn;
    captureDevice.activeFormat = currentFormat;
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    [captureDevice setVideoZoomFactor:4];
    [captureDevice unlockForConfiguration];

    // Set the output
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

    // Create a serial queue to run the capture callbacks on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // Set up our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // Configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
                                 (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];
    videoOutput.alwaysDiscardsLateVideoFrames = NO;
    [self.session addOutput:videoOutput];

    // Start the video session
    [self.session startRunning];
}
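For reference, here is a minimal sketch of the sample-buffer delegate callback that the averaging step implies, assuming the kCVPixelFormatType_32BGRA format configured above; the averaging logic itself is illustrative, not code from the question:

// Sketch: AVCaptureVideoDataOutputSampleBufferDelegate callback that visits
// every pixel of a 32BGRA frame and computes an average channel value.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    uint64_t sumB = 0, sumG = 0, sumR = 0;
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = base + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            sumB += row[x * 4];     // blue
            sumG += row[x * 4 + 1]; // green
            sumR += row[x * 4 + 2]; // red
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    double pixelCount = (double)(width * height);
    double average = (sumR + sumG + sumB) / (3.0 * pixelCount);
    // At 240 fps this loop has roughly 4 ms per frame to finish,
    // which is the bottleneck described above.
}

This per-pixel loop is what makes the full-resolution 240 fps formats too expensive, hence the need for a smaller frame.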
1 answer

Try the GPUImage library. Every filter has a forceProcessingAtSize: method, so you can resize the frame on the GPU and then pull the reduced data back with a GPUImageRawDataOutput.
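A minimal sketch of that pipeline, assuming a pass-through GPUImageFilter and a 192x144 target size (the size, session preset, and processing placeholder are illustrative choices, not requirements of the library):

// Sketch: camera -> GPU downscale -> raw BGRA bytes for CPU processing.
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh
                                        cameraPosition:AVCaptureDevicePositionBack];

GPUImageFilter *filter = [[GPUImageFilter alloc] init];   // pass-through filter
[filter forceProcessingAtSize:CGSizeMake(192.0, 144.0)];  // resize on the GPU

GPUImageRawDataOutput *rawOutput =
    [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(192.0, 144.0)
                                 resultsInBGRAFormat:YES];

__weak GPUImageRawDataOutput *weakOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    GPUImageRawDataOutput *output = weakOutput;
    [output lockFramebufferForReading];
    GLubyte *bytes = [output rawBytesForImage];       // 192x144 BGRA pixels
    NSUInteger bytesPerRow = [output bytesPerRowInOutput];
    // ... average the pixels here, as in the question, but on far fewer of them
    [output unlockFramebufferAfterReading];
}];

[camera addTarget:filter];
[filter addTarget:rawOutput];
[camera startCameraCapture];

Note that this still leaves the camera at its default frame rate; GPUImageVideoCamera exposes the underlying AVCaptureDevice through its inputCamera property, so the activeFormat and frame-duration configuration from the question should still apply.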

Using this method, I got 60 frames per second with the image processing running on the CPU.


Source: https://habr.com/ru/post/983719/

