The code I describe in the answer linked above sets up a pixel buffer, extracts a matching texture from it, and binds that texture as the output of a framebuffer. You run this code once to configure the framebuffer that you will render your scene into.
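Roughly, that one-time setup looks something like the following sketch, built on Apple's CVOpenGLESTextureCache API; context is assumed to be your EAGLContext, the framebuffer object is assumed to be generated and bound already, and the names and dimensions here are placeholders rather than code from the linked answer:

    // Sketch of the one-time setup; all names and sizes are placeholders.
    int width = 640, height = 480;

    // The pixel buffer must be IOSurface-backed, or the texture cache
    // can't share its memory with OpenGL ES.
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef renderTarget = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &renderTarget);

    // Create a texture cache tied to the GL context, then extract a
    // texture that aliases the pixel buffer's memory.
    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL,
                                 &textureCache);

    CVOpenGLESTextureRef renderTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                 renderTarget, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA,
                                                 width, height,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                                 &renderTexture);

    // Bind that texture and attach it as the framebuffer's color output,
    // so everything you render lands in the pixel buffer.
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
                  CVOpenGLESTextureGetName(renderTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);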
Whenever you want to capture from this texture, you'll probably want to use glFinish() to block until all OpenGL ES rendering has completed, and then use the following:
    // Lock the pixel buffer, then grab a pointer to its BGRA bytes.
    CVPixelBufferLockBaseAddress(renderTarget, 0);
    _rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
to extract the raw bytes for the texture that contains the image of your scene.
The internal byte ordering of these iOS textures is BGRA, so you'll want to use something like the following to create a CGImageRef from those bytes:
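A minimal sketch of that conversion, assuming the bytes extracted above, might look like this; paddedWidthOfImage and heightOfImage are hypothetical placeholders for your buffer's bytes-per-row divided by 4 and its height, not names from the code above:

    // Wrap the raw BGRA bytes without copying them. The release callback
    // (discussed below) fires when Core Graphics is done with the bytes;
    // pass NULL instead if you don't need one.
    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithData(NULL, _rawBytesForImage,
                                     paddedWidthOfImage * heightOfImage * 4,
                                     dataProviderUnlockCallback);
    CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();

    // kCGBitmapByteOrder32Little with premultiplied-first alpha reads the
    // bytes as BGRA, matching the pixel buffer's layout.
    CGImageRef cgImageFromBytes =
        CGImageCreate(paddedWidthOfImage, heightOfImage, 8, 32,
                      4 * paddedWidthOfImage, defaultRGBColorSpace,
                      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                      dataProvider, NULL, NO, kCGRenderingIntentDefault);

    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(defaultRGBColorSpace);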
In the above example, I use a dataProviderUnlockCallback() function to handle unlocking the pixel buffer and safely resuming rendering, but you can probably ignore that in your case and simply pass NULL for that parameter instead.
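For reference, such a callback might look something like this sketch; its real implementation isn't shown here, and renderTarget is assumed to be reachable from the callback, for example through the provider's info pointer:

    // Hypothetical CGDataProvider release callback: fires once Core Graphics
    // is finished with the bytes, so the pixel buffer can be unlocked and
    // rendering into it can safely resume.
    void dataProviderUnlockCallback(void *info, const void *data, size_t size)
    {
        CVPixelBufferUnlockBaseAddress(renderTarget, 0);
    }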