Capturing AVCaptureVideoPreviewLayer

I am using WebRTC to build a video chat between two users. I want to take a snapshot of the localView view, which shows one of the faces.

This is the relevant part of my class, with the configureLocalPreview method that connects the video streams to the UIViews:

    @IBOutlet var remoteView: RTCEAGLVideoView!
    @IBOutlet var localView: UIView!

    var captureSession: AVCaptureSession?
    var videoSource: RTCAVFoundationVideoSource?
    var videoTrack: RTCVideoTrack?

    func configureLocalPreview() {
        self.videoTrack = self.signaling.localMediaStream.videoTracks.first as? RTCVideoTrack
        self.videoSource = self.videoTrack?.source as? RTCAVFoundationVideoSource
        self.captureSession = self.videoSource?.captureSession

        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.previewLayer.frame = self.localView.bounds
        self.localView.layer.addSublayer(self.previewLayer)
        self.localView.isUserInteractionEnabled = true
        //self.localView.layer.position = CGPointMake(100, 100);
    }

In the place where I want to access the snapshot, I call:

 self.localView.pb_takeSnapshot() 

pb_takeSnapshot comes from a UIView extension I found in another post. It is defined as follows:

    extension UIView {
        func pb_takeSnapshot() -> UIImage {
            UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
            drawHierarchy(in: self.bounds, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
            return image
        }
    }

When I inspect the image in the Xcode debugger, it is completely green; the person I can see on the iPhone screen (inside that view) does not appear in it:

[Screenshot: the captured snapshot, a solid green image]

What could be the reason the person is not visible? Is it simply impossible to take a snapshot of the stream? Thanks for your help!

3 answers

You should create localView as an RTCEAGLVideoView instead of a UIView. I use the same setup for my localView and can take a snapshot with the very code snippet from your post.

Below is sample code that starts the camera and shows a local preview:

    class ViewController: UIViewController, RTCEAGLVideoViewDelegate {

        var captureSession: AVCaptureSession?
        var previewLayer: AVCaptureVideoPreviewLayer?
        var peerConnectionFactory: RTCPeerConnectionFactory!
        var videoSource: RTCAVFoundationVideoSource!
        var localTrack: RTCVideoTrack!

        @IBOutlet var myView: UIView!

        override func viewDidLoad() {
            super.viewDidLoad()
            /*myView = UIView(frame: CGRect(x: 0, y: 0,
                                            width: UIScreen.main.bounds.size.width,
                                            height: UIScreen.main.bounds.size.height))*/
            startCamera()
            // Do any additional setup after loading the view, typically from a nib.
        }

        fileprivate func startCamera() {
            peerConnectionFactory = RTCPeerConnectionFactory()
            RTCInitializeSSL()
            RTCSetupInternalTracer()
            RTCSetMinDebugLogLevel(RTCLoggingSeverity.info)

            videoSource = peerConnectionFactory.avFoundationVideoSource(with: nil)
            localTrack = peerConnectionFactory.videoTrack(with: videoSource, trackId: "ARDAMSv0")

            let localScaleX = CGFloat(1.0)
            let localView: RTCEAGLVideoView = RTCEAGLVideoView(frame: self.view.bounds)
            self.view.insertSubview(localView, at: 1)
            localView.frame = self.view.bounds
            localView.transform = CGAffineTransform(scaleX: localScaleX, y: 1)
            localTrack.add(localView)
        }

        override func didReceiveMemoryWarning() {
            super.didReceiveMemoryWarning()
            // Dispose of any resources that can be recreated.
        }

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            //previewLayer?.frame.size = myView.frame.size
        }

        func videoView(_ videoView: RTCEAGLVideoView, didChangeVideoSize size: CGSize) {
            print("Inside didChangeVideoSize")
        }
    }
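
As a usage sketch (my addition, assuming the RTCEAGLVideoView created in startCamera() is kept in a property; localVideoView is my hypothetical name for it), the snapshot call from your question then works directly on the video view:

    // Hypothetical property holding the RTCEAGLVideoView created in startCamera().
    var localVideoView: RTCEAGLVideoView?

    func snapshotLocalPreview() {
        // pb_takeSnapshot() is the UIView extension from the question.
        guard let snapshot = localVideoView?.pb_takeSnapshot() else { return }
        UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil) // e.g. save it
    }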

Since AVCaptureVideoPreviewLayer is implemented as an OpenGL layer, you cannot capture it through an ordinary Core Graphics context. I suggest accessing the raw frame data instead.

Add an AVCaptureVideoDataOutput with a delegate:

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

    let captureVideoOutput = AVCaptureVideoDataOutput()
    captureVideoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
    captureSession?.addOutput(captureVideoOutput)

    previewLayer.frame = localView.bounds
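
The UIImage conversion at the end of this answer assumes BGRA pixel data. If your session delivers a different format, you can request BGRA explicitly (a sketch of my own, not required by the original answer):

    // Assumption: ask the output for 32-bit BGRA frames so the
    // CGContext-based conversion below works unchanged.
    captureVideoOutput.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]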

Make your controller (or whatever object you prefer) conform to AVCaptureVideoDataOutputSampleBufferDelegate.
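
For example, a minimal conformance sketch, assuming the controller is the ViewController class from the sample above:

    // Declare conformance; the delegate method itself is shown below.
    extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    }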

Declare a shouldCaptureFrame variable and set it whenever you need to take a snapshot:

    var shouldCaptureFrame: Bool = false
    ...
    func takeSnapshot() {
        shouldCaptureFrame = true
    }

Then implement the delegate's didOutputSampleBuffer method:

    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        if !shouldCaptureFrame {
            return
        }
        let image = UIImage.from(sampleBuffer: sampleBuffer)
        shouldCaptureFrame = false
    }
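
The method above converts the frame but does not yet hand the image to anyone. A minimal sketch of one way to deliver it, using a hypothetical snapshotCompletion callback (my name, not part of the original answer):

    // Hypothetical: a callback set by whoever requested the snapshot.
    var snapshotCompletion: ((UIImage) -> Void)?

    // Inside captureOutput(_:didOutputSampleBuffer:from:), after the
    // conversion succeeds:
    if let image = UIImage.from(sampleBuffer: sampleBuffer) {
        snapshotCompletion?(image)   // delegate runs on the main queue here
        snapshotCompletion = nil
    }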

Finally, the UIImage extension that provides the from(sampleBuffer:) function:

    extension UIImage {
        static func from(sampleBuffer: CMSampleBuffer) -> UIImage? {
            guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return nil
            }
            CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
            let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            // BGRA frames: 32-bit little-endian with the alpha channel first.
            let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue
                | CGImageAlphaInfo.premultipliedFirst.rawValue
            let context = CGContext(
                data: baseAddress,
                width: CVPixelBufferGetWidth(imageBuffer),
                height: CVPixelBufferGetHeight(imageBuffer),
                bitsPerComponent: 8,
                bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                space: colorSpace,
                bitmapInfo: bitmapInfo
            )

            let quartzImage = context?.makeImage()
            CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))

            if let quartzImage = quartzImage {
                return UIImage(cgImage: quartzImage)
            }
            return nil
        }
    }
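
One caveat (my addition, not part of the original answer): AVCaptureVideoDataOutput delivers buffers in the sensor's native landscape orientation, so the converted UIImage may come out rotated relative to the preview. If that matters, the video orientation can be set on the output's connection when configuring the session:

    // Assumption: rotate delivered frames to portrait so the converted
    // UIImage matches the on-screen preview.
    if let connection = captureVideoOutput.connection(withMediaType: AVMediaTypeVideo),
       connection.isVideoOrientationSupported {
        connection.videoOrientation = .portrait
    }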

For WebRTC video, you must use RTCEAGLVideoView for your video views. For more information, check out the example WebRTC application, AppRTC.

