I am trying to do something very simple: display the video preview layer full screen, and once per second update a UIImage with the CMSampleBufferRef received at that moment. However, I have run into two different problems. The first is that:
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
also changes the frame rate of the video preview layer. I thought it would only change the rate at which AVFoundation delivers frames to the delegate, but it seems to affect the whole session (which, in hindsight, makes sense). As a result, the preview itself also updates only once per second. I think I could omit these lines and instead throttle inside the delegate, so that once per second it passes a CMSampleBufferRef on to another method for processing, but I don't know if this is the right approach.
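What I have in mind is roughly the following (just a sketch of the idea, untested): leave the connection at its default frame rate and simply skip frames in the delegate until a second has passed:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Drop frames until at least one second has elapsed since the
    // last one that was processed.
    static CFAbsoluteTime lastUpdate = 0;
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    if (now - lastUpdate < 1.0) {
        return; // skip this frame
    }
    lastUpdate = now;

    // ...pass the frame on to the processing method here...
}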
My second problem is that the UIImageView is NOT updated, or it updates once and then never changes again. I use this method to update it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
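    // Body trimmed here; roughly, it converts the buffer with the
    // imageFromSampleBuffer: method shown further down and hands the
    // image to the main thread (this delegate runs on the "MyQueue"
    // dispatch queue, and imageView is the UIImageView outlet),
    // something like:
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView performSelectorOnMainThread:@selector(setImage:)
                                withObject:image
                             waitUntilDone:YES];
    NSLog(@"update");
}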
This is what I took from Apple's examples. The method is being called correctly every second; I verified that by watching the update log message. But the image does not change at all. Also, is sampleBuffer destroyed automatically, or do I need to release it?
These are the two other relevant methods. viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    } else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create input: %@", error);
    }

    if ([session canAddInput:input]) {
        [session addInput:input];
    } else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];

        //dataOutput.minFrameDuration = CMTimeMake(1, 15);
        //dataOutput.minFrameDuration = CMTimeMake(1, 1);

        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
        [connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
    } else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videoLayer.layer.position];
    [videoLayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}
Converting the CMSampleBufferRef to a UIImage:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer {
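    // (Body omitted here; what I use is essentially the standard
    // conversion from Apple's AV Foundation documentation. It assumes
    // the kCVPixelFormatType_32BGRA output format configured above.)
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Build a CGImage from the BGRA pixel data, then wrap it in a UIImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);

    return image;
}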
Thanks in advance for any help you can give me.