How can we get H.264 encoded video from the iPhone camera?

I use the following callback to get the video sample buffer:

- (void) writeSampleBufferStream:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType 

Now my question is: how can I get NSData with H.264 encoding from the sampleBuffer above? Please suggest.

+6
2 answers

Update for 2017:

Now you can do streaming video and audio encoding using the VideoToolbox API. Read the documentation here: VTCompressionSession
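For illustration, here is a minimal sketch of such a compression session, assuming a 1280x720 capture feed; the session, property, and callback signatures are the standard VideoToolbox ones, while the dimensions and the omitted error handling are placeholders:

#import <AVFoundation/AVFoundation.h>
#import <VideoToolbox/VideoToolbox.h>

// Output callback: each sampleBuffer delivered here already contains H.264 (AVCC-framed) NAL units.
static void DidCompressFrame(void *refCon, void *frameRefCon, OSStatus status,
                             VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer) {
    if (status != noErr || sampleBuffer == NULL) {
        return;
    }
    CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = 0;
    char *bytes = NULL;
    if (block && CMBlockBufferGetDataPointer(block, 0, NULL, &length, &bytes) == kCMBlockBufferNoErr) {
        NSData *encoded = [NSData dataWithBytes:bytes length:length];  // the compressed frame
        // ... hand `encoded` to the streaming/network code ...
        (void)encoded;
    }
}

// Create a real-time H.264 encoder session (1280x720 is a placeholder size).
static VTCompressionSessionRef CreateEncoderSession(void) {
    VTCompressionSessionRef session = NULL;
    VTCompressionSessionCreate(kCFAllocatorDefault, 1280, 720, kCMVideoCodecType_H264,
                               NULL, NULL, NULL, DidCompressFrame, NULL, &session);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTCompressionSessionPrepareToEncodeFrames(session);
    return session;
}

// In the AVCaptureVideoDataOutput delegate, feed each captured frame to the encoder:
//   CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//   CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
//   VTCompressionSessionEncodeFrame(session, pixelBuffer, pts, kCMTimeInvalid, NULL, NULL, NULL);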

Original answer (from 2013):

In short: you cannot; the sample buffer you receive is uncompressed.

Ways to get hardware-accelerated H.264 compression:

- AVAssetWriter
- AVCaptureMovieFileOutput

As you can see, both write to a file. Writing to a pipe does not work, because the encoder updates the header information only after a frame or GOP has been completely written. So you had better not touch the file while the encoder is writing to it, since it rewrites the header information at arbitrary points. Without this header information the video file will not play (the encoder updates the size field, so the first header written says the file is 0 bytes). Writing directly to memory is not supported at the moment. But you can open the encoded video file and demux the stream to get at the H.264 data (after the encoder has closed the file, of course).
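A rough sketch of that last step, assuming the encoder has already finished and the file at a placeholder fileURL is closed; asking AVAssetReader for pass-through output (nil outputSettings) returns the stored H.264 samples unchanged:

#import <AVFoundation/AVFoundation.h>

// fileURL is a placeholder for the mp4 the encoder has already finished and closed.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
// nil outputSettings requests pass-through, i.e. the stored (already compressed) H.264 samples.
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer]) != NULL) {
    // Each sample holds AVCC-framed (length-prefixed) H.264 NAL units.
    CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sample);
    size_t length = 0;
    char *bytes = NULL;
    if (block && CMBlockBufferGetDataPointer(block, 0, NULL, &length, &bytes) == kCMBlockBufferNoErr) {
        NSData *h264 = [NSData dataWithBytes:bytes length:length];
        // ... use `h264` ...
    }
    CFRelease(sample);
}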

+5

You can only get raw video frames, in BGRA or YUV color formats, from AVFoundation. However, when you write these frames to an mp4 via AVAssetWriter, they will be H.264 encoded.
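A minimal sketch of that writer setup, assuming a 1280x720 stream; outputURL and the dimensions are placeholders, and error handling is omitted:

#import <AVFoundation/AVFoundation.h>

// outputURL is a placeholder for the destination mp4 file.
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeMPEG4
                                                    error:&error];

NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                            AVVideoWidthKey  : @1280,
                            AVVideoHeightKey : @720 };
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:settings];
input.expectsMediaDataInRealTime = YES;
[writer addInput:input];
[writer startWriting];
// In a real recorder, start the session at the first captured buffer's presentation timestamp.
[writer startSessionAtSourceTime:kCMTimeZero];

// In the capture callback, append the raw BGRA/YUV sample buffers; the writer hardware-encodes them to H.264.
if (input.isReadyForMoreMediaData) {
    [input appendSampleBuffer:sampleBuffer];
}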

A good example with code showing how to do this is Apple's RosyWriter sample.

Note that after each write by AVAssetWriter, one full H.264 NAL is recorded into the mp4. You can write code that reads back that full H.264 NAL after each AVAssetWriter write, which gives you access to an H.264 encoded frame. It may take some time to get this working at a decent speed, but it is doable (I did it successfully).

By the way, to successfully decode these encoded video frames, you need the H.264 SPS and PPS information, which is located elsewhere in the mp4 file. In my case, I created a couple of test mp4 files and then extracted the SPS/PPS manually. Since they do not change unless you change the H.264 encoding settings, you can reuse them in your code.
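As an aside, a CoreMedia API added later (CMVideoFormatDescriptionGetH264ParameterSetAtIndex, iOS 7+) avoids the manual extraction described above; a sketch, assuming `sample` is a compressed H.264 sample buffer such as one read back with AVAssetReader pass-through:

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

// The SPS/PPS live in the sample buffer's format description, not in the NAL stream itself.
CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sample);

const uint8_t *sps = NULL, *pps = NULL;
size_t spsSize = 0, ppsSize = 0, parameterSetCount = 0;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(desc, 0, &sps, &spsSize, &parameterSetCount, NULL);
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(desc, 1, &pps, &ppsSize, &parameterSetCount, NULL);

NSData *spsData = [NSData dataWithBytes:sps length:spsSize];
NSData *ppsData = [NSData dataWithBytes:pps length:ppsSize];
// Prepend these (with Annex B start codes) before the frame data when sending the stream to a decoder.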

See my post SPS values for H.264 stream on the iPhone for some of the SPS/PPS values that I used in my code.

One final note: in my case I had to send the H.264 encoded frames to another endpoint for decoding/viewing, so my code had to do this fast. It was relatively fast, but in the end I switched to VP8 for encoding/decoding simply because it was faster, since everything was done in memory without reading/writing files.

Good luck, and hopefully this information helps.

+7

Source: https://habr.com/ru/post/946923/

