I am trying to use OpenGL to help process a Kinect depth map into an image. At the moment we use the Kinect as the main motion sensor: the program counts how many people walk by and takes a screenshot every time it detects someone new.
The problem is that I need to run this program without access to a display. We want to run it remotely over SSH, and there is too much network traffic from other services for X11 forwarding to be a good idea. Attaching a display to the machine the program runs on is a possibility, but we want to avoid that for power-consumption reasons.
The program creates a 2D OpenGL texture object and normally just uses GLUT to render it, then reads the pixels back and writes them to a .png file with FreeImage. The problem I am facing is that after removing the GLUT function calls, everything written to the .png files is just a black field.
I use the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I have to use Ubuntu 10.04 because of hardware limitations on the target device.
I tried using OSMesa and framebuffer objects, but I am a complete OpenGL newbie, so I did not manage to get OSMesa to render correctly in place of the GLUT functions, and my compiler cannot find any of the OpenGL framebuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2D texture. Is there a way to skip the headache of off-screen rendering in this case and write the texture straight to an image file, without having to run OpenGL at all?