Consider the following test code:
for (int i = 0; i < 100; ++i) {
    GLuint fboid = 0;
    GLuint colortex = 0;
    GLuint depthtex = 0;

    // create framebuffer & textures
    glGenFramebuffers(1, &fboid);
    glGenTextures(1, &colortex);
    glGenTextures(1, &depthtex);

    glBindTexture(GL_TEXTURE_2D, colortex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4000, 4000, 0, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    glBindTexture(GL_TEXTURE_2D, depthtex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, 4000, 4000, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, 0);

    glBindFramebuffer(GL_FRAMEBUFFER, fboid);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colortex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depthtex, 0);

    assert(GL_FRAMEBUFFER_COMPLETE == glCheckFramebufferStatus(GL_FRAMEBUFFER));

    // clear it
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);

    // delete everything
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    glDeleteFramebuffers(1, &fboid);
    glDeleteTextures(1, &colortex);
    glDeleteTextures(1, &depthtex);
}
// put breakpoint here
You will see in Activity Monitor that the “Memory Used” figure at the bottom climbs very high (around 14 GB), as if the GPU were still referencing the already deleted textures.
I tried the following:
- calling glFlush() in different places
- calling glFinish() in different places
- changing the deletion order of the textures / FBO
- detaching the attachments from the FBO before deleting (see the sketch after this list)
- calling [context flushBuffer];
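For illustration, the detach-before-delete attempt looked roughly like this. This is only a sketch of what I mean; the exact placement and state handling in my test may have differed slightly:

// Sketch: detach both attachments before deleting anything
glBindFramebuffer(GL_FRAMEBUFFER, fboid);

// Attaching texture 0 detaches the current attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, 0, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

// then delete as before
glDeleteFramebuffers(1, &fboid);
glDeleteTextures(1, &colortex);
glDeleteTextures(1, &depthtex);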
None of them had any effect. However (!), if I remove the glClear() call, the problem disappears.
What could be the reason for this? It is also reproducible on Windows and with a different implementation (unfortunately, I can’t share that one, and it’s much more complicated).
Has anyone else run into memory leak problems like this?
UPDATE: It is now pretty obvious that the depth/stencil buffer is what leaks. If I create a depth-only attachment instead, the problem disappears!
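A minimal sketch of what I mean by a depth-only setup; the specific GL_DEPTH_COMPONENT24 format here is just an example, not necessarily the exact one from my test:

// Sketch: replace the depth/stencil texture with a pure depth texture
glBindTexture(GL_TEXTURE_2D, depthtex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 4000, 4000, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

glBindFramebuffer(GL_FRAMEBUFFER, fboid);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthtex, 0);

// the clear then only touches color and depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);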
UPDATE: It is easier to reproduce on Intel cards. On my late-2011 MacBook Pro, the code works fine with the discrete card (Radeon 6750M), but it produces the described leak with the integrated card (HD 3000).
UPDATE: This was observed on High Sierra (10.13.x).