Want to add zoom functionality to the iPhone camera with the AVFoundation framework

I want to zoom the camera preview using a UISlider.

I did this successfully by applying an affine transform to the AVCaptureVideoPreviewLayer.

Here is the code:

    -(void)sliderAction:(UISlider *)sender {
        CGAffineTransform affineTransform = CGAffineTransformMakeTranslation(sender.value, sender.value);
        affineTransform = CGAffineTransformScale(affineTransform, sender.value, sender.value);
        affineTransform = CGAffineTransformRotate(affineTransform, 0);

        [CATransaction begin];
        [CATransaction setAnimationDuration:.025];
        // previewLayer is an object of AVCaptureVideoPreviewLayer
        [[[self captureManager] previewLayer] setAffineTransform:affineTransform];
        [CATransaction commit];
    }

But when I capture the image, I get a non-zoomed image object.

2 answers

A little late to answer, but I am adding this for future reference. What your code actually does is change the scale of the preview layer only, not the underlying output connection. For the zoom to be reflected in the captured output as well, you must also apply the scale factor to the output connection. You can use something like the following:

    -(void)sliderAction:(UISlider *)sender {
        // photoOutput is an AVCaptureStillImageOutput object, representing a
        // capture session output with a customized preset
        AVCaptureConnection *connection = [self.photoOutput connectionWithMediaType:AVMediaTypeVideo];

        CGAffineTransform affineTransform = CGAffineTransformMakeTranslation(sender.value, sender.value);
        affineTransform = CGAffineTransformScale(affineTransform, sender.value, sender.value);
        affineTransform = CGAffineTransformRotate(affineTransform, 0);

        [CATransaction begin];
        [CATransaction setAnimationDuration:.025];
        // previewLayer is an object of AVCaptureVideoPreviewLayer
        [[[self captureManager] previewLayer] setAffineTransform:affineTransform];
        if (connection) {
            connection.videoScaleAndCropFactor = sender.value;
        }
        [CATransaction commit];
    }

And that should do the trick.

Ideally, you should not set connection.videoScaleAndCropFactor in your slider handler. Instead, put that code in your image capture routine and set it only once, with the current slider value, just before calling the captureStillImageAsynchronouslyFromConnection method.
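A minimal sketch of that suggestion, assuming the same self.photoOutput (an AVCaptureStillImageOutput) used above; the zoomSlider property is a hypothetical stand-in for whatever holds your current slider value:

    // Assumes <AVFoundation/AVFoundation.h> is imported and that photoOutput and
    // zoomSlider (hypothetical name) are properties of this class.
    - (void)captureStillImage {
        AVCaptureConnection *connection =
            [self.photoOutput connectionWithMediaType:AVMediaTypeVideo];
        if (!connection) {
            return;
        }

        // Apply the zoom once, with the latest slider value, clamped to the
        // largest factor this connection supports.
        CGFloat scale = MIN((CGFloat)self.zoomSlider.value,
                            connection.videoMaxScaleAndCropFactor);
        connection.videoScaleAndCropFactor = scale;

        [self.photoOutput captureStillImageAsynchronouslyFromConnection:connection
            completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
                if (error != nil || sampleBuffer == NULL) {
                    return;
                }
                NSData *jpegData =
                    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
                UIImage *image = [UIImage imageWithData:jpegData];
                // Use or save `image`; it reflects the zoom factor set above.
            }];
    }

Clamping against videoMaxScaleAndCropFactor guards against setting a factor larger than the connection can honor.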

Hope this helps :)


First, your code scales only the contents of the preview layer, not the CMSampleBuffer. Your next job is to scale the CVPixelBuffer obtained from the CMSampleBuffer and save the scaled CMSampleBuffer to your AVWriter. You can use Accelerate.framework to scale the CVPixelBuffer.
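A rough sketch of that idea, assuming the capture output delivers kCVPixelFormatType_32BGRA frames (configured through its videoSettings) and that the source buffer comes from CMSampleBufferGetImageBuffer(); the CreateScaledPixelBuffer helper name is made up for illustration:

    #import <Accelerate/Accelerate.h>
    #import <CoreVideo/CoreVideo.h>

    // Scales a BGRA CVPixelBuffer into a newly created buffer of the given
    // size using vImage. Error handling is trimmed for brevity.
    static CVPixelBufferRef CreateScaledPixelBuffer(CVPixelBufferRef source,
                                                    size_t width,
                                                    size_t height) {
        CVPixelBufferRef scaled = NULL;
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, NULL, &scaled);
        if (scaled == NULL) {
            return NULL;
        }

        CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
        CVPixelBufferLockBaseAddress(scaled, 0);

        vImage_Buffer src = {
            .data     = CVPixelBufferGetBaseAddress(source),
            .height   = CVPixelBufferGetHeight(source),
            .width    = CVPixelBufferGetWidth(source),
            .rowBytes = CVPixelBufferGetBytesPerRow(source)
        };
        vImage_Buffer dst = {
            .data     = CVPixelBufferGetBaseAddress(scaled),
            .height   = height,
            .width    = width,
            .rowBytes = CVPixelBufferGetBytesPerRow(scaled)
        };

        // 4-channel, 8-bit-per-channel scaling; the channel order does not
        // matter for resampling, so this works for BGRA data as well.
        vImageScale_ARGB8888(&src, &dst, NULL, kvImageHighQualityResampling);

        CVPixelBufferUnlockBaseAddress(scaled, 0);
        CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
        return scaled; // Caller releases with CVPixelBufferRelease().
    }

The scaled pixel buffer can then be wrapped back into a CMSampleBuffer (for example with CMSampleBufferCreateForImageBuffer) before handing it to the writer.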


Source: https://habr.com/ru/post/949084/

