Placing an H.264 frame in AVSampleBufferDisplayLayer, but the video image is not displayed

After a detailed review of WWDC 2014 Session 513, I am trying to write an app on iOS 8.0 that decodes and displays a live H.264 stream. First, I successfully create an H.264 format description from the parameter sets. When I receive an I-frame with a 4-byte start code, like "0x00 0x00 0x00 0x01 0x65 ...", I put it into a CMBlockBuffer. Then I create a CMSampleBuffer from the CMBlockBuffer created previously. After that, I enqueue the CMSampleBuffer into an AVSampleBufferDisplayLayer. Everything succeeds (I checked the return values), except that the AVSampleBufferDisplayLayer does not show any video image. Since these APIs are quite new, I could not find anybody who has solved this problem.

The key code is given below, and I would really appreciate any help in figuring out why the video image is not displayed. Many thanks.

(1) The AVSampleBufferDisplayLayer is initialized. dspLayer is a property of my main view controller.

    @property (nonatomic, strong) AVSampleBufferDisplayLayer *dspLayer;

    if (!_dspLayer) {
        _dspLayer = [[AVSampleBufferDisplayLayer alloc] init];
        [_dspLayer setFrame:CGRectMake(90, 551, 557, 389)];
        _dspLayer.videoGravity = AVLayerVideoGravityResizeAspect;
        _dspLayer.backgroundColor = [UIColor grayColor].CGColor;

        // Drive the layer from a timebase based on the host clock.
        CMTimebaseRef tmBase = nil;
        CMTimebaseCreateWithMasterClock(NULL, CMClockGetHostTimeClock(), &tmBase);
        _dspLayer.controlTimebase = tmBase;
        CMTimebaseSetTime(_dspLayer.controlTimebase, kCMTimeZero);
        CMTimebaseSetRate(_dspLayer.controlTimebase, 1.0);

        [self.view.layer addSublayer:_dspLayer];
    }

(2) In another thread, I receive one H.264 I-frame.

    // create the H.264 format description from the parameter sets -- OK
    CMVideoFormatDescriptionRef formatDesc;
    OSStatus formatCreateResult =
        CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
                                                            ppsNum + 1,
                                                            props,
                                                            sizes,
                                                            4,
                                                            &formatDesc);
    NSLog(@"construct h264 param set: %d", (int)formatCreateResult);

// build the CMBlockBuffer. dataBuf points to the H.264 data, starting with "0x00 0x00 0x00 0x01 0x65 ..."

    CMBlockBufferRef blockBufferOut = nil;
    CMBlockBufferCreateEmpty(NULL, 0, kCMBlockBufferAlwaysCopyDataFlag, &blockBufferOut);
    CMBlockBufferAppendMemoryBlock(blockBufferOut,
                                   dataBuf,
                                   dataLen,
                                   NULL,
                                   NULL,
                                   0,
                                   dataLen,
                                   kCMBlockBufferAlwaysCopyDataFlag);

// build the CMSampleBuffer -- OK

    size_t sampleSizeArray[1] = {0};
    sampleSizeArray[0] = CMBlockBufferGetDataLength(blockBufferOut);

    CMSampleTimingInfo tmInfos[1] = {
        { CMTimeMake(5, 1), CMTimeMake(5, 1), CMTimeMake(5, 1) }
    };

    CMSampleBufferRef sampBuf = nil;
    formatCreateResult = CMSampleBufferCreate(kCFAllocatorDefault,
                                              blockBufferOut,
                                              YES,
                                              NULL,
                                              NULL,
                                              formatDesc,
                                              1,
                                              1,
                                              tmInfos,
                                              1,
                                              sampleSizeArray,
                                              &sampBuf);

// enqueue the single frame into the AVSampleBufferDisplayLayer -- but no video frame appears in my view

    if ([self.dspLayer isReadyForMoreMediaData]) {
        [self.dspLayer enqueueSampleBuffer:sampBuf];
    }
    [self.dspLayer setNeedsDisplay];
2 answers

The NAL unit start codes 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01 must be replaced with a length header.

This was clearly stated in the WWDC session you refer to: the Annex B start code has to be replaced with an AVCC-style length header. You are essentially converting from Annex B stream format to MP4 file format on the fly here (a simplified description, of course).

Your parameter-set creation call passes "4" for the NAL unit header length, so you need to prefix your VCL NAL units with a 4-byte big-endian length. You have to specify it because in AVCC format the length header may also be shorter than 4 bytes.

Anything you put into the CMSampleBuffer will be accepted; there is no sanity check whether the contents can actually be decoded. It only checks that you supply the required parameters, i.e. arbitrary data combined with timing and format information.

Basically, with the data you supplied, you told the decoder that the next VCL NAL unit is 1 byte long: the start code 0x00 0x00 0x00 0x01, read as a big-endian length, is 1. The decoder never receives a complete NAL unit and bails out with an error.
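
As a rough illustration (not your actual code), the fix can be as simple as overwriting the 4-byte start code in place with the big-endian payload length before the data goes into the CMBlockBuffer; dataBuf and dataLen are the variables from your question:

    // Sketch, assuming dataBuf holds exactly one NAL unit prefixed with a
    // 4-byte Annex B start code (0x00 0x00 0x00 0x01).
    uint32_t nalLength = (uint32_t)(dataLen - 4);               // payload size after the start code
    uint32_t bigEndianLength = CFSwapInt32HostToBig(nalLength);
    memcpy(dataBuf, &bigEndianLength, sizeof(bigEndianLength)); // overwrite start code with length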

Also make sure that the SPS and PPS you pass when creating the parameter set carry no length prefix and are likewise stripped of the Annex B start code.
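
For reference, a minimal sketch of that call with raw NAL payloads (sps, pps, spsLen and ppsLen are hypothetical buffers containing neither start codes nor length prefixes):

    const uint8_t *paramSetPointers[2] = { sps, pps };  // raw NAL payloads only
    size_t paramSetSizes[2] = { spsLen, ppsLen };
    CMVideoFormatDescriptionRef formatDesc = NULL;
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault,
        2,                   // parameter set count: one SPS + one PPS
        paramSetPointers,
        paramSetSizes,
        4,                   // NALUnitHeaderLength: 4-byte length prefix on VCL NALs
        &formatDesc);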

I would also recommend not going through AVSampleBufferDisplayLayer, but using a VTDecompressionSession instead, so you can do things like color correction or other processing you may need in a pixel shader.
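
A minimal sketch of setting up such a session from the same format description (the callback name is illustrative):

    #import <VideoToolbox/VideoToolbox.h>

    // Called once per decoded frame; imageBuffer is valid when status == noErr.
    static void didDecompress(void *decompressionOutputRefCon,
                              void *sourceFrameRefCon,
                              OSStatus status,
                              VTDecodeInfoFlags infoFlags,
                              CVImageBufferRef imageBuffer,
                              CMTime presentationTimeStamp,
                              CMTime presentationDuration)
    {
        // Render or process imageBuffer here.
    }

    VTDecompressionOutputCallbackRecord callbackRecord = { didDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                   formatDesc,  // from the parameter sets
                                                   NULL,        // decoder specification
                                                   NULL,        // destination pixel buffer attributes
                                                   &callbackRecord,
                                                   &session);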


It might be an idea to start with VTDecompressionSessionDecodeFrame, as it gives you feedback on whether decoding succeeded. If there is a problem with decoding, AVSampleBufferDisplayLayer does not tell you; it just displays nothing. I can give you some code to help with this if needed; let me know how you get on, as I am trying to do the same :)
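
For example, feeding the sampBuf from the question into a session created as in the previous answer's sketch and checking the returned status (a sketch, assuming the session already exists):

    VTDecodeInfoFlags infoFlags = 0;
    OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(
        session,
        sampBuf,
        kVTDecodeFrame_EnableAsynchronousDecompression,
        NULL,               // sourceFrameRefCon, passed through to the callback
        &infoFlags);
    if (decodeStatus != noErr) {
        NSLog(@"decode failed: %d", (int)decodeStatus);  // e.g. kVTVideoDecoderBadDataErr
    }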


Source: https://habr.com/ru/post/1203552/
