Render OpenGL scene in Qt and stream it to an HTML5 interface

I was wondering whether it is possible to render an OpenGL scene in Qt and transfer it to an HTML5 interface in real time (I mean, the scene is generated on the fly, not pre-rendered).

I tried to find information on how to do this, but without success...

If this is possible, is there an existing mechanism for compressing the image and optimizing bandwidth usage? I am thinking of something along the lines of Citrix, but with an HTML5 client.

+6
4 answers

This is quite achievable, although how well depends on how far you are willing to stretch the concept of "real time". Qt, however, will not be able to help much. There are two parts to the problem:

  • 1 - Getting the rendered images out of GPU memory. This is just about the only place where Qt can help; it provides two methods out of the box. First, if your OpenGL rendering is embedded in an item inside a QQuickView (or a derived class), you can use grabWindow() to read the framebuffer back into a QImage. Second, there is the QScreen class, which provides a similar grab method, though it can be even slower than the first one. On my (fairly high-end) system, grabbing a raw 720p frame from GPU memory takes about 30 ms; the cost scales roughly quadratically with resolution, so lower resolutions are proportionally faster. If you are comfortable with OpenGL, you may want to look into vendor-specific extensions, which can reduce the overhead of copying every rendered frame from the GPU to system memory; vendors such as Sony or nVidia ship such solutions and can achieve better results.

  • 2 - Encoding the grabbed QImage data to video (preferably H.264) with FFmpeg to minimize bandwidth. You can check out this wrapper, which should work out of the box with Qt. FFmpeg can also handle the actual streaming, removing the need for an additional library, although I am not sure such a stream would be playable in an HTML5 player without a "relay" server re-streaming it. A sketch combining both steps follows this list.
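As a minimal sketch of how these two steps could fit together: grab the QQuickView framebuffer with grabWindow() on a timer and pipe the raw frames into an ffmpeg child process started through QProcess (standing in for the wrapper mentioned above). The QML file, resolution, frame rate, encoder flags, and UDP destination are all placeholder assumptions, not a recommendation:

    // Sketch only: grab frames from a QQuickView and pipe them to an ffmpeg
    // child process for H.264 encoding. Error handling omitted for brevity.
    #include <QGuiApplication>
    #include <QQuickView>
    #include <QTimer>
    #include <QProcess>
    #include <QImage>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);

        QQuickView view;
        view.setSource(QUrl("qrc:/scene.qml"));   // hypothetical QML scene
        view.resize(1280, 720);
        view.show();

        // ffmpeg reads raw RGBA frames from stdin and streams H.264 over MPEG-TS.
        QProcess encoder;
        encoder.start("ffmpeg", {
            "-f", "rawvideo", "-pixel_format", "rgba",
            "-video_size", "1280x720", "-framerate", "15", "-i", "-",
            "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
            "-f", "mpegts", "udp://127.0.0.1:1234"   // hypothetical destination
        });

        QTimer grabTimer;
        QObject::connect(&grabTimer, &QTimer::timeout, [&]() {
            // grabWindow() reads the framebuffer back into a QImage
            // (the slow readback step discussed in point 1).
            QImage frame = view.grabWindow()
                               .convertToFormat(QImage::Format_RGBA8888)
                               .scaled(1280, 720);  // keep size in sync with ffmpeg args
            encoder.write(reinterpret_cast<const char*>(frame.constBits()),
                          frame.sizeInBytes());
        });
        grabTimer.start(1000 / 15);  // ~15 fps

        return app.exec();
    }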

But you should not expect miracles. Streaming graphics already works reasonably well on the vendors' own devices, using their proprietary technologies and a fast local network. In a real-world scenario, prepare for "real time" with a latency of half a second or more. There has certainly been some effort devoted to this lately, but much of it is done simply because it can be done, not because it brings real benefits. Streaming graphics can be a viable solution if you have a 10-gigabit network and dedicated GPU hardware that can feed it directly, but that is costly and inefficient; considering that today $10 chips drawing 2-3 watts are capable of rendering OpenGL, rendering locally will always be the preferred solution. And since you mention an HTML5 browser, you may just as well go for a WebGL solution, which IMO will beat video streaming. Even better, since Qt already supports a huge number of platforms, you could implement a native rendering application and get better performance than WebGL, and potentially more rendering capabilities.

+7

This answer explains how the task can be accomplished using OpenGL, Qt, and GStreamer. But before I start, there are two points that need to be addressed right away:

  • HTML5 video streaming is still problematic. I suggest encoding to Ogg, since it is better supported by modern browsers than H.264;
  • Encoding video and transmitting it over HTTP is a rather difficult task without third-party libraries. Take a look at GStreamer (a cross-platform library for building multimedia pipelines). It is what I use here to encode a frame taken from an OpenGL framebuffer and stream it;

So what is the roadmap to implement something like this?

Start by capturing frames from the framebuffer. There are various techniques that can be used for this, and googling for opengl offscreen rendering will turn up some interesting posts and documents. I will not go into technical details, since the subject has been widely covered, but for educational purposes the code below demonstrates how to grab a frame and save it as a jpg on disk:

    // GLWidget is a class derived from QGLWidget.
    void GLWidget::paintGL()
    {
        /* Set up the FBO and RBO used for offscreen rendering */
        glGenFramebuffersEXT(1, &_fb);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fb);

        glGenRenderbuffersEXT(1, &_color_rb);
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, _color_rb);

        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, viewport[2], viewport[3]);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                     GL_RENDERBUFFER_EXT, _color_rb);

        /* Draw the scene (with transparency) */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glLoadIdentity();
        glTranslatef(-2.0f, 0.0f, -7.0f);
        glRotatef(45, 1.0f, 1.0f, 0.0f);
        _draw_cube();

        glLoadIdentity();
        glTranslatef(2.0f, 0.0f, -7.0f);
        glRotatef(30, 0.5f, 1.0f, 0.5f);
        _draw_cube();

        glFlush();

        /* Read the pixels back from the framebuffer */
        int imgsize = viewport[2] * viewport[3];
        std::cout << "* Viewport size: " << viewport[2] << "x" << viewport[3] << std::endl;

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        unsigned char* pixels = new unsigned char[imgsize * 4];
        glReadPixels(0, 0, viewport[2], viewport[3], GL_BGRA, GL_UNSIGNED_BYTE, pixels);

        // Either dump the raw BGRA data with fwrite:
        FILE* fp = fopen("dumped.bin", "w");
        fwrite(pixels, imgsize * 4, 1, fp);
        fclose(fp);

        // ... or use QImage to encode the raw data to jpg:
        QImage image((const unsigned char*)pixels, viewport[2], viewport[3], QImage::Format_RGB32);
        QImage flipped = image.mirrored();   // OpenGL stores rows bottom-up
        flipped.save("output2.jpg");

        // Unbind the FBO and RBO
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        // Release resources
        glDeleteRenderbuffersEXT(1, &_color_rb);
        glDeleteFramebuffersEXT(1, &_fb);
        delete[] pixels;
    }

A QImage is used to convert the raw GL_BGRA frame to a jpg file. The _draw_cube() calls simply draw colored cubes with transparency:

(screenshot: the two rendered cubes with transparency)

The next step is to encode the frame and stream it over HTTP. However, you probably don't want to have to save every frame from the framebuffer to disk before you can transmit it. And you're right, you don't have to! GStreamer provides an API that you can use in your application to perform the same operations gst-launch performs (shown below). There is even a Qt wrapper for this library, called QtGstreamer, to make things easier.

GStreamer 1.0 provides a command-line application called gst-launch-1.0, which can be used to test its features before you move on to writing code. Developers usually play with it to assemble a pipeline of elements and make sure it works its magic before implementing the same pipeline in code.

The following command shows how it can be used to decode a jpg, encode it to Ogg/Theora, and stream that single image over HTTP so that an HTML5 page can play it:

 gst-launch-1.0.exe -v filesrc location=output.jpg ! decodebin ! imagefreeze ! clockoverlay shaded-background=true font-desc="Sans 38" ! theoraenc ! oggmux ! tcpserversink host=127.0.0.1 port=8080 

The third and final step is to open the HTML5 page that displays the stream. This step must be done while gst-launch is still running, so copy and paste the code below into a file and open that page in your browser (I tested this in Chrome). The page connects to localhost, port 8080, and starts receiving the stream. You may have noticed that the gst-launch pipeline overlays a clock on the original image:

    <html>
      <head>
        <title>A simple HTML5 video test</title>
      </head>
      <body>
        <video autoplay controls width=320 height=240>
          <source src="http://localhost:8080" type="video/ogg">
          Your browser doesn't support the <code>video</code> element.
        </video>
      </body>
    </html>

(screenshot: the HTML5 page playing the stream, with the clock overlay visible)

The remaining piece is figuring out exactly how GStreamer can convert a raw BGRA frame to jpg (or another format) before streaming it, without intermediate files.

Update:

The problem is solved! It is possible to encode a raw BGRA frame to jpg or Ogg and stream it directly, without creating intermediate files on disk. I also capped the frame rate at 15 FPS and reduced the default theoraenc quality by 50%:

 gst-launch-1.0.exe -v filesrc location=dumped.bin blocksize=1920000 ! video/x-raw,format=BGRA,width=800,height=600,framerate=1/1 ! videoconvert ! video/x-raw,format=RGB,framerate=1/1 ! videoflip method=vertical-flip ! imagefreeze ! videorate ! video/x-raw,format=RGB,framerate=30/2 ! videoconvert ! clockoverlay shaded-background=true font-desc="Sans 38" ! theoraenc quality=24 ! oggmux ! queue ! tcpserversink host=127.0.0.1 port=8080 sync-method=2 

There are several operations in this pipeline that you may not really need. Nevertheless, a few things you can do to save bandwidth are scaling the frame down to a smaller size (400x300), setting a lower FPS limit, reducing the quality of the encoded frames, and so on:

 gst-launch-1.0.exe -v filesrc location=dumped.bin blocksize=1920000 ! video/x-raw,format=BGRA,width=800,height=600,framerate=1/1 ! videoconvert ! video/x-raw,format=RGB,framerate=1/1 ! videoflip method=vertical-flip ! videoscale ! video/x-raw,width=400,height=300 ! imagefreeze ! videorate ! video/x-raw,format=RGB,framerate=30/2 ! videoconvert ! clockoverlay shaded-background=true font-desc="Sans 38" ! theoraenc quality=24 ! oggmux ! tcpserversink host=127.0.0.1 port=8080 sync-method=2 
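Once a pipeline like this works from the command line, it can be moved into the application itself through the GStreamer C API mentioned earlier (or the QtGstreamer wrapper). Below is a minimal sketch, assuming the same dumped.bin input, resolution, and port as above; the clockoverlay element is omitted for brevity, and an appsrc element would replace filesrc once frames are pushed straight from the renderer:

    // Sketch: run (roughly) the same pipeline gst-launch-1.0 would, but from code.
    // Build with: g++ stream.cpp -o stream $(pkg-config --cflags --libs gstreamer-1.0)
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        // The pipeline description is the same text you would pass to gst-launch-1.0.
        GError *error = nullptr;
        GstElement *pipeline = gst_parse_launch(
            "filesrc location=dumped.bin blocksize=1920000 ! "
            "video/x-raw,format=BGRA,width=800,height=600,framerate=1/1 ! "
            "videoconvert ! videoflip method=vertical-flip ! imagefreeze ! "
            "videorate ! video/x-raw,framerate=15/1 ! videoconvert ! "
            "theoraenc quality=24 ! oggmux ! "
            "tcpserversink host=127.0.0.1 port=8080 sync-method=2",
            &error);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", error->message);
            g_error_free(error);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until an error or end-of-stream is posted on the bus.
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }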
+7

Ok, OTOY did a similar thing ...

I recall a simpler but working open source project, though I could not find the link. In that project, the video capture (or, in your case, the window's framebuffer) is encoded as MPEG and sent to the browser over a WebSocket connection. Client-side JavaScript then decodes the MPEG stream and displays it. This may give you more insight into the subject...

Here it is...

+3

Perhaps you can use WebSockets, with the excellent Socket.IO layer on top for maximum cross-browser compatibility, to transfer the data between your Qt application and the HTML5 client.

You will need to encode your rendered image in the Qt application somehow and send it through the socket to your HTML5 client, which will decode and display it. A rough sketch of the server half follows.
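As a minimal sketch of that server half, here is one way it could look using Qt's own QtWebSockets module instead of Socket.IO; the JPEG encoding step, the port, the frame rate, and the grabCurrentFrame() helper are all placeholder assumptions:

    // Sketch: push JPEG-encoded frames to connected browsers over a WebSocket.
    #include <QCoreApplication>
    #include <QWebSocketServer>
    #include <QWebSocket>
    #include <QHostAddress>
    #include <QBuffer>
    #include <QImage>
    #include <QTimer>
    #include <QList>

    // Placeholder for however the rendered frame is obtained
    // (e.g. QQuickWindow::grabWindow() or a glReadPixels readback).
    static QImage grabCurrentFrame()
    {
        return QImage(640, 480, QImage::Format_RGB32);
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QWebSocketServer server(QStringLiteral("frame-streamer"),
                                QWebSocketServer::NonSecureMode);
        server.listen(QHostAddress::Any, 8081);   // hypothetical port

        QList<QWebSocket*> clients;
        QObject::connect(&server, &QWebSocketServer::newConnection, [&]() {
            // A real implementation would also remove clients on disconnect.
            clients.append(server.nextPendingConnection());
        });

        QTimer frameTimer;
        QObject::connect(&frameTimer, &QTimer::timeout, [&]() {
            QImage frame = grabCurrentFrame();
            QByteArray payload;
            QBuffer buffer(&payload);
            buffer.open(QIODevice::WriteOnly);
            frame.save(&buffer, "JPG", 60);        // lower quality to save bandwidth
            for (QWebSocket *client : clients)
                client->sendBinaryMessage(payload);
        });
        frameTimer.start(1000 / 15);               // ~15 fps

        return app.exec();
    }

On the browser side, the client would listen for binary WebSocket messages and draw each received JPEG onto an img or canvas element.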

Socket.IO has a nice demo on its website doing something along these lines. Check here.

+1

Source: https://habr.com/ru/post/970028/

