AVCaptureSession front camera video orientation

I use AVCaptureSession to record video, and an AVAssetWriterInput to write the video to a file.
My problem is the video orientation. Following Apple's RosyWriter example, I can set a transform on the AVAssetWriterInput so that the video comes out in the correct orientation:

    - (CGFloat)angleOffsetFromPortraitOrientationToOrientation:(AVCaptureVideoOrientation)orientation
    {
        CGFloat angle = 0.0;

        switch (orientation) {
            case AVCaptureVideoOrientationPortrait:
                angle = 0.0;
                break;
            case AVCaptureVideoOrientationPortraitUpsideDown:
                angle = M_PI;
                break;
            case AVCaptureVideoOrientationLandscapeRight:
                angle = -M_PI_2;
                break;
            case AVCaptureVideoOrientationLandscapeLeft:
                angle = M_PI_2;
                break;
            default:
                break;
        }

        return angle;
    }

    - (CGAffineTransform)transformFromCurrentVideoOrientationToOrientation:(AVCaptureVideoOrientation)orientation
    {
        CGAffineTransform transform = CGAffineTransformIdentity;

        // Calculate offsets from an arbitrary reference orientation (portrait)
        CGFloat orientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:orientation];
        CGFloat videoOrientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:self.videoOrientation];

        // Find the difference in angle between the passed-in orientation and the current video orientation
        CGFloat angleOffset = orientationAngleOffset - videoOrientationAngleOffset;
        transform = CGAffineTransformMakeRotation(angleOffset);

        return transform;
    }

The problem is the front camera's orientation: this code no longer gives the right result after the user switches to the front camera.

It seems the reason is that when I switch to the front camera the AVCaptureConnection changes, and the front and rear cameras report different initial video orientations.

So maybe I need to account for the difference between the initial orientations of the rear and front cameras.
I don't want to change the connection's video orientation every time the user switches cameras, because Apple says that is bad for performance (and it really doesn't look good when I do it). Apple instead suggests using the AVAssetWriterInput transform property to change the output orientation. But I'm not sure I can use transform, because I want the user to be able to switch cameras during recording, and I can't change the transform after recording has started (it fails)...
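For reference, this is roughly what I mean (a sketch with placeholder names — videoSettings and assetWriter stand in for my actual objects): the transform has to be in place before writing starts, which is why I can't simply update it when the camera is switched mid-recording.

    // Sketch: the transform is applied to the writer input up front.
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];
    writerInput.expectsMediaDataInRealTime = YES;
    writerInput.transform =
        [self transformFromCurrentVideoOrientationToOrientation:AVCaptureVideoOrientationPortrait];
    [assetWriter addInput:writerInput];
    // Once [assetWriter startWriting] has been called, assigning a new value
    // to writerInput.transform fails.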

Any ideas how to solve this?

2 answers

Since the image from the front camera is mirrored, you will also need to mirror the transform. You do that by scaling one of the axes by -1. Try playing with this:

    transform = CGAffineTransformConcat(CGAffineTransformMakeRotation(angle),
                                        CGAffineTransformMakeScale(1.0, -1.0));

Note: possible permutations (besides adjusting the angle) are scaling the X axis by -1 instead of Y, and/or swapping the order of the arguments to the concat.
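A sketch of how that might be folded into the transform method above for the front camera — whether to flip X or Y, and the order of the concat, are exactly the permutations to experiment with (the method name and the mirrored flag are mine, not part of the original code):

    // Hypothetical wrapper around the existing rotation transform that also
    // mirrors the result when the front camera is active.
    - (CGAffineTransform)writerTransformForOrientation:(AVCaptureVideoOrientation)orientation
                                              mirrored:(BOOL)mirrored
    {
        CGAffineTransform transform = [self transformFromCurrentVideoOrientationToOrientation:orientation];
        if (mirrored) {
            // Try CGAffineTransformMakeScale(-1.0, 1.0) and/or swapping the
            // concat order if the result is still flipped the wrong way.
            transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(1.0, -1.0));
        }
        return transform;
    }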

EDIT: clarification

AVCaptureSession returns image data that is not rotated to match how you hold the device; natively I believe it is landscape left or landscape right. If you want your video to be oriented correctly depending on how the device is held, you need to apply some transform to it, and in most cases a rotation is enough. The front camera, however, is a special case, because it acts like a mirror in the preview while the frames you receive are not mirrored. As a result your video comes out flipped top-to-bottom or left-to-right, or some other combination depending on which rotation you apply; hence, as you said, "I tried a lot of angles, but I just can't get it right...". Again, to simulate the mirror effect you have to scale one of the axes by -1.

EDIT: handling the camera swap (from the comments)

This is just an idea, but I think it is a good one. Don't use the asset writer's transform at all; do it yourself. Create a view with the size you want your video to be, and add an image view as a subview with all the transforms, cropping and content modes you need. From the sample buffers you create images, set them on the image view, and take a snapshot of the view's layer. Then feed that snapshot to the asset writer.

I know what you are thinking here: "overhead". Well, I don't think so. Views and image views do their transforms on the processor just like the asset writer does, so all you have done is bypass its internal transform and use your own instead. If you try this, I would really like to hear your results.

To get an image from a layer:

    - (UIImage *)imageFromLayer:(CALayer *)layer
    {
        UIGraphicsBeginImageContext([layer frame].size);
        [layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return outputImage;
    }
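To hand that snapshot to the asset writer you also need to turn the UIImage back into a pixel buffer. A rough sketch of one way to do that, assuming an AVAssetWriterInputPixelBufferAdaptor is attached to the writer input (this helper and its error handling are not from the original answer):

    // Sketch: draw a UIImage into a pixel buffer taken from the adaptor's pool.
    // The bitmap info must match the pixel format the pool was configured with.
    - (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image
                                     adaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor
    {
        CVPixelBufferRef pixelBuffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &pixelBuffer);
        if (pixelBuffer == NULL) return NULL;

        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                     CVPixelBufferGetWidth(pixelBuffer),
                                                     CVPixelBufferGetHeight(pixelBuffer),
                                                     8,
                                                     CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                     colorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        CGContextDrawImage(context,
                           CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer)),
                           image.CGImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        return pixelBuffer; // caller appends it via the adaptor, then releases it
    }

The returned buffer would then be appended with appendPixelBuffer:withPresentationTime: using the original sample buffer's timestamp.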

Not the easiest way, but I think it will definitely work if you capture the frames and do the transformations yourself. You do not need a UIView for this — AVFoundation gives you direct access to the frames and lets you manipulate them (if necessary you can substitute your own buffer when writing to the file). The only drawback I see is performance: even simply flipping an image can be quite slow on older devices and at high resolutions.
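For example, a frame can be mirrored on the fly in the capture callback with Core Image before it is appended. A rough sketch only — the ciContext and pixelBufferPool properties are assumptions, and the actual append through a pixel buffer adaptor is left out:

    // Sketch: mirror an incoming frame horizontally with Core Image, then
    // render it into a new pixel buffer that gets written instead.
    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef sourceBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *image = [CIImage imageWithCVPixelBuffer:sourceBuffer];

        // Flip around the vertical axis, then translate back into the frame.
        CGAffineTransform mirror = CGAffineTransformMakeScale(-1.0, 1.0);
        mirror = CGAffineTransformTranslate(mirror, -image.extent.size.width, 0);
        CIImage *mirrored = [image imageByApplyingTransform:mirror];

        CVPixelBufferRef outputBuffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, self.pixelBufferPool, &outputBuffer);
        if (outputBuffer) {
            [self.ciContext render:mirrored toCVPixelBuffer:outputBuffer];
            // ... append outputBuffer via an AVAssetWriterInputPixelBufferAdaptor ...
            CVPixelBufferRelease(outputBuffer);
        }
    }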

If that hurts performance, you can save the video as is and keep a separate record of the orientation of each frame. After the video is saved, open it again, apply the transform to each frame and save the result to a new file. I did something similar and it works.
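A sketch of the bookkeeping that approach implies — for every written frame, remember its timestamp and the camera state, so the post-processing pass knows which transform to apply (the property names here are mine):

    // Sketch: remember the orientation/mirroring state of every written frame,
    // so the file can be re-processed after recording finishes.
    - (void)noteOrientationForSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        NSDictionary *entry = @{
            @"time"             : [NSValue valueWithCMTime:timestamp],
            @"mirrored"         : @(self.usingFrontCamera),
            @"videoOrientation" : @(self.currentVideoOrientation)
        };
        [self.frameOrientations addObject:entry]; // NSMutableArray property
    }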

On the other hand, the AVAssetWriterInput transform property only takes effect on the iPhone. If, say, you record a video with AVFoundation, upload it somewhere and watch it in a browser, it will have the wrong orientation. So if you are after a thorough solution, doing the transform yourself is the way to go.

If you do a lot of video/image processing (flipping and rotating, as in this case), consider the OpenCV library. It takes some time to learn, but it is worth it.

