I am using the glgrab code to try to grab a full-screen screenshot on a Mac. However, I want the bitmap data to be in GL_RGB format. That is, each pixel should be laid out as:
0x00RRGGBB
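To make that concrete, this is the layout I mean when I say 0x00RRGGBB (just a sketch of how I expect to take a pixel apart; unpack_xrgb is a made-up helper, not part of glgrab):

    #include <stdint.h>

    /* Sketch of the layout I'm after: each pixel is 0x00RRGGBB when read as a 32-bit word. */
    static void unpack_xrgb(uint32_t pixel, uint8_t *r, uint8_t *g, uint8_t *b)
    {
        *r = (pixel >> 16) & 0xFF;
        *g = (pixel >>  8) & 0xFF;
        *b =  pixel        & 0xFF;   /* the top byte is expected to stay 0x00 */
    }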
The original source code specifies the format GL_BGRA. However, changing this parameter to GL_RGB gives me a completely empty result. The code I am using is essentially:
    CGImageRef grabViaOpenGL(CGDirectDisplayID display, CGRect srcRect)
    {
        CGContextRef bitmap;
        CGImageRef image;
        void * data;
        long bytewidth;
        GLint width, height;
        long bytes;
        CGColorSpaceRef cSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

        CGLContextObj glContextObj;
        CGLPixelFormatObj pixelFormatObj;
        GLint numPixelFormats;
        //CGLPixelFormatAttribute
        int attribs[] =
        {
            // kCGLPFAClosestPolicy,
            kCGLPFAFullScreen,
            kCGLPFADisplayMask,
            NULL,                 /* Display mask bit goes here */
            kCGLPFAColorSize, 24,
            kCGLPFAAlphaSize, 0,
            kCGLPFADepthSize, 32,
            kCGLPFASupersample,
            NULL
        };

        if (display == kCGNullDirectDisplay)
            display = CGMainDisplayID();
        attribs[2] = CGDisplayIDToOpenGLDisplayMask(display);

        /* Build a full-screen GL context */
        CGLChoosePixelFormat((CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats);
        if (pixelFormatObj == NULL)    // No full screen context support
        {
            // GL didn't find any suitable pixel formats. Try again without the supersample bit:
            attribs[9] = NULL;         // index 9 is kCGLPFASupersample
            CGLChoosePixelFormat((CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats);
            if (pixelFormatObj == NULL)
            {
                qDebug("Unable to find an openGL pixel format that meets constraints");
                return NULL;
            }
        }

        CGLCreateContext(pixelFormatObj, NULL, &glContextObj);
        CGLDestroyPixelFormat(pixelFormatObj);
        if (glContextObj == NULL)
        {
            qDebug("Unable to create OpenGL context");
            return NULL;
        }

        CGLSetCurrentContext(glContextObj);
        CGLSetFullScreen(glContextObj);

        glReadBuffer(GL_FRONT);

        width = srcRect.size.width;
        height = srcRect.size.height;

        bytewidth = width * 4;              // Assume 4 bytes/pixel for now
        bytewidth = (bytewidth + 3) & ~3;   // Align to 4 bytes
        bytes = bytewidth * height;         // Total bytes in the buffer

        /* Build bitmap context */
        data = malloc(height * bytewidth);
        if (data == NULL)
        {
            CGLSetCurrentContext(NULL);
            CGLClearDrawable(glContextObj);  // disassociate from full screen
            CGLDestroyContext(glContextObj); // and destroy the context
            qDebug("OpenGL drawable clear failed");
            return NULL;
        }
        bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,
                                       cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
        CFRelease(cSpace);

        /* Read framebuffer into our bitmap */
        glFinish();                          /* Finish all OpenGL commands */
        glPixelStorei(GL_PACK_ALIGNMENT, 4); /* Force 4-byte alignment */
        glPixelStorei(GL_PACK_ROW_LENGTH, 0);
        glPixelStorei(GL_PACK_SKIP_ROWS, 0);
        glPixelStorei(GL_PACK_SKIP_PIXELS, 0);

        /*
         * Fetch the data in XRGB format, matching the bitmap context.
         */
        glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
                     GL_RGB,
    #ifdef __BIG_ENDIAN__
                     GL_UNSIGNED_INT_8_8_8_8_REV, // for PPC
    #else
                     GL_UNSIGNED_INT_8_8_8_8,     // for Intel! http://lists.apple.com/archives/quartz-dev/2006/May/msg00100.html
    #endif
                     data);

        /* Pull the bits out of the bitmap context and clean up */
        image = CGBitmapContextCreateImage(bitmap);
        CFRelease(bitmap);
        free(data);

        CGLSetCurrentContext(NULL);
        CGLClearDrawable(glContextObj);  // disassociate from full screen
        CGLDestroyContext(glContextObj); // and destroy the context

        return image;
    }
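For what it's worth, I call the function more or less like this (a sketch; grabMainDisplay is just a hypothetical wrapper of mine, not part of glgrab):

    #include <ApplicationServices/ApplicationServices.h>

    /* Sketch of how I invoke the grab: full bounds of the main display. */
    static CGImageRef grabMainDisplay(void)
    {
        CGDirectDisplayID display = CGMainDisplayID();
        CGRect bounds = CGDisplayBounds(display);   /* origin (0,0), full width/height */
        return grabViaOpenGL(display, bounds);      /* caller releases with CGImageRelease() */
    }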
I am completely new to OpenGL, so I would appreciate some pointers in the right direction. Hooray!
Update:
After some experimentation, I have managed to narrow down the problem. Although I do not want an alpha component, I do want each pixel packed on a 4-byte boundary. Now, when I pass GL_RGB or GL_BGR as the format to glReadPixels, I get pixel data packed in 3-byte blocks; when I pass GL_RGBA or GL_BGRA, I get 4-byte blocks, but always with the alpha component last.
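To illustrate what I am experimenting with, here is a rough sketch of a read-back using GL_BGRA together with GL_UNSIGNED_INT_8_8_8_8_REV and then masking the alpha byte away; I believe each 32-bit word should come back as 0xAARRGGBB, but I have not confirmed that this is the right combination:

    #include <OpenGL/gl.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch: read a block of pixels as 32-bit words and mask the alpha away.
       Assumes a current CGL context and that x/y/width/height fit the front buffer. */
    static uint32_t *readAsXRGB(GLint x, GLint y, GLint width, GLint height)
    {
        uint32_t *pixels = malloc((size_t)width * (size_t)height * 4);
        if (pixels == NULL)
            return NULL;

        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        /* Each 32-bit word is expected to read as 0xAARRGGBB. */
        glReadPixels(x, y, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

        for (long i = 0; i < (long)width * height; ++i)
            pixels[i] &= 0x00FFFFFF;    /* drop the alpha byte -> 0x00RRGGBB */

        return pixels;
    }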
I then tried changing the alpha setting passed in
    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
however, switching between kCGImageAlphaNoneSkipFirst and kCGImageAlphaNoneSkipLast does not move the alpha byte to the beginning of each pixel's block of bytes.
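For reference, this is the kind of variation I have been trying on the bitmap context; the kCGBitmapByteOrder32Little flag is something I came across in the CGImage documentation and have not verified against the glReadPixels output:

    #include <ApplicationServices/ApplicationServices.h>

    /* Sketch: a bitmap context whose pixels are 32 bits wide with the alpha byte skipped.
       kCGImageAlphaNoneSkipFirst puts the unused byte first in the logical XRGB order;
       OR-ing in kCGBitmapByteOrder32Little changes where that byte lands in memory. */
    static CGContextRef makeXRGBContext(void *data, size_t width, size_t height, size_t bytewidth)
    {
        CGColorSpaceRef cSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace,
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(cSpace);
        return ctx;
    }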
Any ideas?