Is it possible to "manually" create image data to use as an OpenGL texture?

I am studying textures in OpenGL, and I don't see the traditional way of getting the actual image data before transferring it to video memory.

Out of curiosity: can you create your own matrix of pixel data and fill it with arbitrary values, instead of actually reading in a bitmap or image file?

+4
3 answers

OpenGL does not know what a file is. It has no idea about image file formats, and it has no functions for loading anything from files into textures.

The various pixel transfer functions accept a pointer to memory (or a buffer object containing the data). How you produce that data is entirely up to you: you can load it from a file, generate it with an algorithm, whatever you like.
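For instance, here is a minimal sketch (my own illustration, with made-up names and sizes; it assumes an active legacy-GL context and <GL/gl.h>) that fills a buffer with a procedural gradient and hands it straight to glTexImage2D:

#include <GL/gl.h>

#define GRADIENT_W 256
#define GRADIENT_H 256
static GLubyte gradient[GRADIENT_H][GRADIENT_W][3];  /* tightly packed RGB */

GLuint uploadGradientTexture(void)
{
    int x, y;
    for (y = 0; y < GRADIENT_H; y++) {
        for (x = 0; x < GRADIENT_W; x++) {
            gradient[y][x][0] = (GLubyte) x;   /* red ramps along x */
            gradient[y][x][1] = (GLubyte) y;   /* green ramps along y */
            gradient[y][x][2] = 128;           /* constant blue */
        }
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     /* rows of 3-byte pixels need not be 4-byte aligned */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, GRADIENT_W, GRADIENT_H,
                 0, GL_RGB, GL_UNSIGNED_BYTE, gradient);
    return tex;
}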

+9

I don’t see the traditional way of getting the actual image data before transferring it to video memory.

What do you mean by "traditional"?

Out of curiosity: can you create your own matrix of pixel data and fill it with arbitrary values, instead of actually reading in a bitmap or image file?

Take a good look at the signature of, say, glTexImage2D:

void glTexImage2D( GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid *pixels ) 

The last parameter takes a pointer to some memory. That memory must be filled by you before you hand it to OpenGL, either by reading and decoding an image file or by generating the image procedurally. In fact, most of the examples in the official OpenGL Programming Guide (the Red Book) generate their image data procedurally. For example, from chapter 9 of the 1st edition (be warned that this is not very good programming style and should not be imitated):

#define checkImageWidth 64
#define checkImageHeight 64
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
static GLuint texName;

void makeCheckImage(void)
{
    int i, j, c;
    for (i = 0; i < checkImageHeight; i++) {
        for (j = 0; j < checkImageWidth; j++) {
            /* 8x8 checkerboard: white where exactly one of the two bits is set */
            c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) * 255;
            checkImage[i][j][0] = (GLubyte) c;
            checkImage[i][j][1] = (GLubyte) c;
            checkImage[i][j][2] = (GLubyte) c;
            checkImage[i][j][3] = (GLubyte) 255;
        }
    }
}

void init(void)
{
    /* ... */
    makeCheckImage();
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);
}
+8

At least in WebGL there is. Here's an example that builds a luminance texture with 32-bit floating-point values per pixel (EDIT: not 8 bpp as originally written):

http://jsfiddle.net/greggman/upZ7V/

gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0, gl.LUMINANCE, gl.FLOAT, pixels);

where pixels is a Float32Array (required when the type is gl.FLOAT; in WebGL 1 this also needs the OES_texture_float extension).

I am fairly sure this corresponds directly to the API in the OpenGL specification:

void TexImage2D ( enum target, int level, int internalformat, sizei width, sizei height, int border, enum format, enum type, void *data )
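For completeness, here is a desktop-GL sketch of the same idea (my own example, not taken from the fiddle; it assumes a GL 3.0+ context obtained through a loader such as glad, since GL_R32F is a GL 3.0 internal format):

#include <glad/glad.h>   /* assumed loader; any header exposing GL 3.0+ works */
#include <stdlib.h>

GLuint makeFloatTexture(GLsizei width, GLsizei height)
{
    /* One 32-bit float per pixel, generated procedurally. */
    float *pixels = malloc((size_t) width * height * sizeof *pixels);
    for (GLsizei y = 0; y < height; y++)
        for (GLsizei x = 0; x < width; x++)
            pixels[y * width + x] = (float) x / (float) width;  /* horizontal ramp */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height,
                 0, GL_RED, GL_FLOAT, pixels);
    free(pixels);   /* GL has copied the data by the time glTexImage2D returns */
    return tex;
}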

0
