I recently switched from a 32-bit environment to a 64-bit one, and the move went smoothly except for one problem: the arrays I pass to glMultiDrawElements no longer work without modification under a 64-bit OS.
    glMultiDrawElements( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT,
                         reinterpret_cast< const GLvoid** >( iOffset_ ),
                         mesh().faces().size() );
I use VBOs for both the vertices and the vertex indices. fCount_ and iOffset_ are arrays of GLsizei. Since a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, the elements of iOffset_ are used as byte offsets from the start of the index VBO. This works fine under a 32-bit OS.
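For context, here is a minimal sketch of how the two arrays are filled; buildMultiDrawArrays, indexCounts, and faceCount are stand-ins for iterating mesh().faces() in my real code:

    #include <GL/gl.h>

    // Simplified sketch: one entry per face. fCount_[i] is the number of
    // indices in face i; iOffset_[i] is the byte offset of face i's first
    // index within the index VBO (indices are GL_UNSIGNED_INT, 4 bytes each).
    void buildMultiDrawArrays( const GLsizei* indexCounts, GLsizei faceCount,
                               GLsizei* fCount_, GLsizei* iOffset_ )
    {
        GLsizei runningCount = 0;
        for ( GLsizei i = 0; i < faceCount; ++i ) {
            fCount_[i]  = indexCounts[i];
            iOffset_[i] = static_cast< GLsizei >( runningCount * sizeof( GLuint ) );
            runningCount += indexCounts[i];
        }
    }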
If I change glMultiDrawElements to glDrawElements and put it in a loop, it works fine on both platforms:
    int offset = 0;
    for ( Sy_meshData::Faces::ConstIterator i = mesh().faces().constBegin();
          i != mesh().faces().constEnd(); ++i ) {
        glDrawElements( GL_LINE_LOOP, i->vertexIndices.size(), GL_UNSIGNED_INT,
                        reinterpret_cast< const GLvoid* >( sizeof( GLsizei ) * offset ) );
        offset += i->vertexIndices.size();
    }
I suspect that OpenGL is reading iOffset_ in 64-bit chunks, which produces massive bogus offsets, but glMultiDrawElements does not support any type wider than 32 bits (GL_UNSIGNED_INT), so I'm not sure how to fix it.
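To illustrate what I mean, here is a tiny standalone test with made-up offset values, showing how a 32-bit array turns into huge numbers when the same memory is read in 64-bit chunks (the aliasing cast is itself dubious, but it mirrors the cast in my draw call):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Four 32-bit byte offsets, laid out as in iOffset_.
        std::int32_t offsets[4] = { 0, 12, 24, 36 };

        // Read the same memory as 64-bit values, which is effectively what
        // a 64-bit glMultiDrawElements does if it expects pointer-sized
        // elements: adjacent 32-bit entries get fused together.
        const std::uint64_t* wide = reinterpret_cast< const std::uint64_t* >( offsets );
        for ( int i = 0; i < 2; ++i )
            std::printf( "element %d -> %llu\n",
                         i, static_cast< unsigned long long >( wide[i] ) );
        // Little-endian 64-bit output: 51539607552 (12 << 32) and
        // 154618822680 (24 + (36 << 32)) instead of 0, 12, 24, 36.
    }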
Has anyone else hit this situation and solved it? Or am I approaching this completely wrong and was just lucky on a 32-bit OS?
Update
Switching my existing code to:
    typedef void ( *testPtr )( GLenum mode, const GLsizei* count, GLenum type,
                               const GLuint* indices, GLsizei primcount );
    testPtr ptr = (testPtr)glMultiDrawElements;
    ptr( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT, iOffset_, mesh().faces().size() );
This produces exactly the same result.
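Based on the above, the next thing I plan to try (untested) is storing each byte offset in a pointer-sized slot, so every element really is 8 bytes wide on a 64-bit build; iOffsetPtr_ and appendFaceOffset below are hypothetical names:

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    // Untested idea: hold each byte offset in a pointer-sized element so a
    // 64-bit glMultiDrawElements reads exactly one offset per face.
    std::vector< const GLvoid* > iOffsetPtr_;   // would replace the GLsizei iOffset_

    void appendFaceOffset( std::size_t byteOffset )
    {
        iOffsetPtr_.push_back( reinterpret_cast< const GLvoid* >( byteOffset ) );
    }

    // The draw call could then pass the array without the reinterpret_cast:
    // glMultiDrawElements( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT,
    //                      iOffsetPtr_.data(), mesh().faces().size() );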