GPU memory leak with framebuffer attachments

Consider the following test code:

    for (int i = 0; i < 100; ++i) {
        GLuint fboid = 0;
        GLuint colortex = 0;
        GLuint depthtex = 0;

        // create framebuffer & textures
        glGenFramebuffers(1, &fboid);
        glGenTextures(1, &colortex);
        glGenTextures(1, &depthtex);

        glBindTexture(GL_TEXTURE_2D, colortex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4000, 4000, 0, GL_BGRA, GL_UNSIGNED_BYTE, 0);

        glBindTexture(GL_TEXTURE_2D, depthtex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 4000, 4000, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, fboid);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colortex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depthtex, 0);

        assert(GL_FRAMEBUFFER_COMPLETE == glCheckFramebufferStatus(GL_FRAMEBUFFER));

        // clear it
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

        // delete everything
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, 0);
        glDeleteFramebuffers(1, &fboid);
        glDeleteTextures(1, &colortex);
        glDeleteTextures(1, &depthtex);
    }
    // put breakpoint here

You will see in Activity Monitor that the “Memory Used” figure at the bottom climbs very high (14 GB), as if the GPU were still referencing the already-released textures.

I tried the following:

  • calling glFlush() in different places
  • calling glFinish() in different places
  • changing the deletion order of the textures / FBO
  • detaching the attachments from the FBO before deleting (see the sketch after this list)
  • calling [context flushBuffer];
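
For reference, the detach attempt looked roughly like this (just a sketch; binding texture 0 to an attachment point detaches the current image before the deletes):

    // detach both attachments before deleting anything
    // (one of the attempts above; it made no difference)
    glBindFramebuffer(GL_FRAMEBUFFER, fboid);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, 0, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);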

None of them had any effect. However(!), if I remove the glClear() call, the problem disappears.

What could be the reason for this? It is also reproducible on Windows with a different implementation (unfortunately, I can’t share that one, and it’s much more complicated).

Has anyone run into similar memory leak problems?

UPDATE: it is now pretty obvious that the depth/stencil buffer is the one that leaks. If I create a depth-only texture instead, the problem disappears again!
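
For illustration, the depth-only variant of the texture setup above would look roughly like this (a sketch, assuming GL_DEPTH_COMPONENT24 as the depth-only format):

    glBindTexture(GL_TEXTURE_2D, depthtex);
    // depth-only storage instead of the packed D24S8 format
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 4000, 4000, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

    // attach to GL_DEPTH_ATTACHMENT only (no stencil attachment,
    // so GL_STENCIL_BUFFER_BIT is also dropped from the glClear call)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthtex, 0);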

UPDATE: it is easier to reproduce on Intel cards. On my late-2011 MacBook Pro the code runs fine with the discrete card (Radeon 6750M), but produces the described leak with the integrated one (HD 3000).

UPDATE: this was observed on High Sierra (10.13.x).

1 answer

Until I find a proper solution, I came up with a workaround (which, unfortunately, causes a different problem on the Radeon Pro 580 (?)).

The workaround is as follows:

  • first of all, I enable GL context sharing
  • then, whenever I want to delete a D24S8 texture, I put it into a cache instead
  • whenever the implementation requests the creation of a D24S8 buffer, I look in the cache first (don’t forget that the MSAA sample counts must match!)
  • if there is an entry in the cache that fits (>= the requested size), I take it out and return it
  • otherwise I create a new one with the requested size (see the sketch after this list)
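
A minimal sketch of that cache, with hypothetical names (CachedDepthBuffer, AcquireDepthBuffer, ReleaseDepthBuffer) and only the non-multisampled path shown; the real code also has to handle GL_TEXTURE_2D_MULTISAMPLE targets for the MSAA case:

    #include <vector>

    // a D24S8 texture we "deleted" but actually parked for reuse
    struct CachedDepthBuffer {
        GLuint  texture;
        GLsizei width;
        GLsizei height;
        GLsizei samples;    // MSAA sample count; must match on reuse!
    };

    static std::vector<CachedDepthBuffer> g_depthCache;

    // called instead of glDeleteTextures for D24S8 textures
    void ReleaseDepthBuffer(GLuint tex, GLsizei w, GLsizei h, GLsizei samples)
    {
        g_depthCache.push_back({ tex, w, h, samples });
    }

    // called instead of creating a D24S8 texture from scratch
    GLuint AcquireDepthBuffer(GLsizei w, GLsizei h, GLsizei samples)
    {
        for (size_t i = 0; i < g_depthCache.size(); ++i) {
            const CachedDepthBuffer& e = g_depthCache[i];

            // sample counts must match exactly; size only has to be >= the request
            if (e.samples == samples && e.width >= w && e.height >= h) {
                GLuint tex = e.texture;
                g_depthCache.erase(g_depthCache.begin() + i);
                return tex;
            }
        }

        // nothing suitable in the cache: allocate a fresh one
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w, h, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, 0);
        return tex;
    }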

With this solution I was able to minimize the leak on the configurations mentioned above ... aaaaaaaand break it on that AMD card (which, I suppose, is another driver bug, but for now I cannot reproduce it with a small program).


Source: https://habr.com/ru/post/1274660/
