Mac OS screen capture using GL_RGB format

I am using the glgrab code to try to get a full-screen screenshot on a Mac. However, I want the bitmap data to be in GL_RGB format. That is, each pixel should have the layout:

0x00RRGGBB

The source code specifies the format GL_BGRA. However, changing this parameter to GL_RGB gives me a completely empty result. The code I am using is essentially:

    CGImageRef grabViaOpenGL(CGDirectDisplayID display, CGRect srcRect)
    {
        CGContextRef bitmap;
        CGImageRef image;
        void *data;
        long bytewidth;
        GLint width, height;
        long bytes;
        CGColorSpaceRef cSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

        CGLContextObj glContextObj;
        CGLPixelFormatObj pixelFormatObj;
        GLint numPixelFormats;
        //CGLPixelFormatAttribute
        int attribs[] =
        {
            // kCGLPFAClosestPolicy,
            kCGLPFAFullScreen,
            kCGLPFADisplayMask,
            NULL,                       /* Display mask bit goes here */
            kCGLPFAColorSize, 24,
            kCGLPFAAlphaSize, 0,
            kCGLPFADepthSize, 32,
            kCGLPFASupersample,
            NULL
        };

        if (display == kCGNullDirectDisplay)
            display = CGMainDisplayID();
        attribs[2] = CGDisplayIDToOpenGLDisplayMask(display);

        /* Build a full-screen GL context */
        CGLChoosePixelFormat((CGLPixelFormatAttribute*)attribs, &pixelFormatObj, &numPixelFormats);
        if (pixelFormatObj == NULL)     // No full screen context support
        {
            // GL didn't find any suitable pixel formats. Try again without the supersample bit:
            attribs[10] = NULL;
            CGLChoosePixelFormat((CGLPixelFormatAttribute*)attribs, &pixelFormatObj, &numPixelFormats);
            if (pixelFormatObj == NULL)
            {
                qDebug("Unable to find an openGL pixel format that meets constraints");
                return NULL;
            }
        }

        CGLCreateContext(pixelFormatObj, NULL, &glContextObj);
        CGLDestroyPixelFormat(pixelFormatObj);
        if (glContextObj == NULL)
        {
            qDebug("Unable to create OpenGL context");
            return NULL;
        }

        CGLSetCurrentContext(glContextObj);
        CGLSetFullScreen(glContextObj);

        glReadBuffer(GL_FRONT);

        width = srcRect.size.width;
        height = srcRect.size.height;

        bytewidth = width * 4;              // Assume 4 bytes/pixel for now
        bytewidth = (bytewidth + 3) & ~3;   // Align to 4 bytes
        bytes = bytewidth * height;         // width * height

        /* Build bitmap context */
        data = malloc(height * bytewidth);
        if (data == NULL)
        {
            CGLSetCurrentContext(NULL);
            CGLClearDrawable(glContextObj);     // disassociate from full screen
            CGLDestroyContext(glContextObj);    // and destroy the context
            qDebug("OpenGL drawable clear failed");
            return NULL;
        }
        bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,
                                       cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
        CFRelease(cSpace);

        /* Read framebuffer into our bitmap */
        glFinish();                             /* Finish all OpenGL commands */
        glPixelStorei(GL_PACK_ALIGNMENT, 4);    /* Force 4-byte alignment */
        glPixelStorei(GL_PACK_ROW_LENGTH, 0);
        glPixelStorei(GL_PACK_SKIP_ROWS, 0);
        glPixelStorei(GL_PACK_SKIP_PIXELS, 0);

        /*
         * Fetch the data in XRGB format, matching the bitmap context.
         */
        glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
                     GL_RGB,
    #ifdef __BIG_ENDIAN__
                     GL_UNSIGNED_INT_8_8_8_8_REV, // for PPC
    #else
                     GL_UNSIGNED_INT_8_8_8_8,     // for Intel! http://lists.apple.com/archives/quartz-dev/2006/May/msg00100.html
    #endif
                     data);

        /*
         * glReadPixels generates a quadrant I raster, with origin in the lower left.
         * This isn't a problem for signal processing routines such as compressors,
         * as they can simply use a negative 'advance' to move between scanlines.
         * CGImageRef and CGBitmapContext assume a quadrant III raster, though, so we need to
         * invert it. Pixel reformatting can also be done here.
         */
        swizzleBitmap(data, bytewidth, height);

        /* Make an image out of our bitmap; does a cheap vm_copy of the bitmap */
        image = CGBitmapContextCreateImage(bitmap);

        /* Get rid of bitmap */
        CFRelease(bitmap);
        free(data);

        /* Get rid of GL context */
        CGLSetCurrentContext(NULL);
        CGLClearDrawable(glContextObj);     // disassociate from full screen
        CGLDestroyContext(glContextObj);    // and destroy the context

        /* Returned image has a reference count of 1 */
        return image;
    }

I am completely new to OpenGL, so I would appreciate some pointers in the right direction. Thanks!

Update:

After some experimentation, I managed to narrow down the problem. Although I do not want an alpha component, I do want each pixel packed on a 4-byte boundary. When I pass GL_RGB or GL_BGR to glReadPixels, I get bitmap data packed into 3-byte blocks. When I pass GL_RGBA or GL_BGRA, I get 4-byte blocks, but always with the alpha channel as the last component.
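As far as I understand the packed pixel types, GL_BGRA combined with GL_UNSIGNED_INT_8_8_8_8_REV should return each pixel as one native 32-bit word laid out as 0xAARRGGBB, which is the 0x00RRGGBB layout I want once the top byte is ignored. A sketch of that read (the pixels buffer is just a placeholder, and it assumes the GL context is already current):

    /* Sketch: each pixel arrives as a native 32-bit 0xAARRGGBB value,
       so no per-endian #ifdef should be needed for this combination. */
    uint32_t *pixels = (uint32_t *)malloc((size_t)width * height * sizeof(uint32_t));
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    uint32_t xrgb = pixels[0] & 0x00FFFFFFu;   /* mask off alpha -> 0x00RRGGBB */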

Then I tried to change the value passed in

    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);

however, neither kCGImageAlphaNoneSkipFirst nor kCGImageAlphaNoneSkipLast moves the alpha/padding byte to the beginning of each pixel's bytes.
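My understanding is that this bitmapInfo constant only tells Core Graphics how to interpret the buffer I hand it; it never rearranges the bytes that glReadPixels wrote. If the pixels are native 0xAARRGGBB words, as in the BGRA read sketched above, the declared layout that should match them is host-endian 32-bit XRGB, something like this (the byte-order flag here is my assumption):

    /* Sketch: declare the buffer as 32-bit host-endian words with the
       alpha byte first and skipped, matching native 0xAARRGGBB pixels. */
    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace,
                                   kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Host);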

Any ideas?

+4

3 answers

I'm not a Mac guy, but if you can get RGBA data and want XRGB, can't you just shift each pixel right by eight bits?

    /* Shift each 32-bit pixel right by 8 bits: 0xRRGGBBAA -> 0x00RRGGBB */
    unsigned int *pixel = pixbuf;               /* pixbuf: width * height 32-bit pixels */
    for (long i = 0; i < width * height; ++i, ++pixel)
        *pixel >>= 8;
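One caveat: the shift only yields 0x00RRGGBB if each 32-bit value already reads as 0xRRGGBBAA on the host, which is not what GL_RGBA / GL_UNSIGNED_BYTE data (R,G,B,A byte order) looks like on a little-endian Intel Mac. A byte-wise repack avoids that assumption; rgba and xrgb are placeholder buffer names:

    /* Sketch: build native 0x00RRGGBB words from R,G,B,A byte-ordered data. */
    for (long i = 0; i < width * height; ++i) {
        const unsigned char *p = rgba + i * 4;
        xrgb[i] = ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | (uint32_t)p[2];
    }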
+2

Try GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV / GL_UNSIGNED_INT_8_8_8_8.

Although it seems that what you actually want is GL_RGBA instead; with that format it should also work with 8_8_8_8_REV or 8_8_8_8.
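For example, something along these lines (just a sketch; it assumes the GL context is current and that data points to at least width * height * 4 bytes):

    /* Sketch: read tightly packed pixels, one unsigned byte per component
       (memory order R,G,B,A regardless of endianness). */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);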

+1

When I use GL_BGRA, the data comes back pre-swizzled, which I confirmed by displaying the result in a window: the colors look correct.

Contact me if you want the project that I created. Hope this helps.
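For reference, the pairing in the original glgrab code seems to be what makes this work: reading GL_BGRA with GL_UNSIGNED_INT_8_8_8_8 on Intel (and GL_UNSIGNED_INT_8_8_8_8_REV on PPC) lands the bytes in memory in A,R,G,B order on both architectures, which is the layout a kCGImageAlphaNoneSkipFirst bitmap context with the default byte order expects. A trimmed sketch of that combination (x and y stand for the capture origin):

    /* Sketch: both branches end up with A,R,G,B byte order in memory,
       matching kCGImageAlphaNoneSkipFirst (XRGB) with default byte order. */
    glReadPixels(x, y, width, height, GL_BGRA,
    #ifdef __BIG_ENDIAN__
                 GL_UNSIGNED_INT_8_8_8_8_REV,   /* PPC */
    #else
                 GL_UNSIGNED_INT_8_8_8_8,       /* Intel */
    #endif
                 data);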

0
