You can only receive raw video frames from AVFoundation in BGRA or YUV pixel formats. However, when you record those frames to an mp4 via AVAssetWriter, they get encoded as H264.
Apple's RosyWriter sample code is a good working example of how to do this.
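To make the pipeline concrete, here is a minimal sketch (not the RosyWriter code itself) of the two pieces involved: an AVCaptureVideoDataOutput configured for BGRA, and an AVAssetWriter input configured for H264. The output URL and the 1280x720 dimensions are placeholders you would supply yourself:

```swift
import AVFoundation

// Sketch only: capture BGRA frames and feed them to an AVAssetWriter
// that encodes H264 into an .mp4 file. `outputURL` is a placeholder.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]

let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
let writerInput = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,  // H264 encoding, as above
        AVVideoWidthKey: 1280,                   // example dimensions
        AVVideoHeightKey: 720
    ]
)
writerInput.expectsMediaDataInRealTime = true
writer.add(writerInput)

// Then, in your AVCaptureVideoDataOutputSampleBufferDelegate callback:
// if writerInput.isReadyForMoreMediaData {
//     writerInput.append(sampleBuffer)
// }
```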
Note that after each AVAssetWriter write, complete H264 NAL units end up in the mp4. You can write code that reads those NAL units back out after each write, which gives you access to the H264-encoded frames. It takes some effort to get this working at a decent speed, but it is doable (I did it successfully). See the sketch below.
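The original post doesn't show how the NAL units are read back, but mp4 files normally store H264 samples in AVCC form: each NAL unit is prefixed with a big-endian length field (typically 4 bytes) rather than an Annex-B start code. Assuming that layout, and assuming you already have the raw sample bytes from the file's `mdat` box, a rough walk over the length-prefixed units could look like this:

```swift
import Foundation

// Hedged sketch: split AVCC-style (length-prefixed) sample data into
// individual NAL units. Assumes a 4-byte length prefix, the common case,
// and that `data` starts at the first encoded sample.
func nalUnits(in data: Data) -> [Data] {
    var units: [Data] = []
    var offset = 0
    while offset + 4 <= data.count {
        // Read the big-endian 4-byte NAL unit length.
        let length = data.subdata(in: offset ..< offset + 4)
            .reduce(0) { ($0 << 8) | Int($1) }
        offset += 4
        guard length > 0, offset + length <= data.count else { break }
        units.append(data.subdata(in: offset ..< offset + length))
        offset += length
    }
    return units
}
```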
By the way, to successfully decode these encoded video frames you need the H264 SPS (sequence parameter set) and PPS (picture parameter set) information, which lives elsewhere in the mp4 file. In my case I created a couple of mp4 test files and extracted the SPS/PPS from them manually. Since those values don't change unless you change the H264 encoding settings, you can hard-code them in your code.
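For reference, the SPS/PPS sit in the `avcC` box (the AVCDecoderConfigurationRecord, inside the `moov` atom). If you want to extract them programmatically instead of by hand, something along these lines could work. This is only a sketch: it scans for the `avcC` fourcc rather than properly walking the atom tree, assumes a single SPS and PPS, and omits most bounds checking for brevity:

```swift
import Foundation

// Rough sketch: pull the first SPS and PPS out of an mp4's `avcC` box.
// A production parser should walk the mp4 atoms properly and
// bounds-check every read; malformed files will trip this version.
func extractSPSandPPS(from mp4: Data) -> (sps: Data, pps: Data)? {
    guard let range = mp4.range(of: Data("avcC".utf8)) else { return nil }
    var i = range.upperBound
    // Skip version(1), profile(1), compat(1), level(1), lengthSize(1).
    i += 5
    let spsCount = Int(mp4[i] & 0x1F); i += 1          // low 5 bits
    guard spsCount >= 1 else { return nil }
    let spsLen = (Int(mp4[i]) << 8) | Int(mp4[i + 1]); i += 2
    let sps = mp4.subdata(in: i ..< i + spsLen); i += spsLen
    let ppsCount = Int(mp4[i]); i += 1
    guard ppsCount >= 1 else { return nil }
    let ppsLen = (Int(mp4[i]) << 8) | Int(mp4[i + 1]); i += 2
    let pps = mp4.subdata(in: i ..< i + ppsLen)
    return (sps, pps)
}
```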
See my post on SPS values for an H264 stream on the iPhone for some of the SPS/PPS values I used in my code.
One final note: in my case I had to transfer the H264-encoded frames to another endpoint for decoding/viewing, so my code had to do all of this fast. It was reasonably fast, but in the end I switched to VP8 for encoding/decoding simply because it was faster, since everything was done in memory without any file reads/writes.
Good luck, and hopefully this information helps.