WebGL data packing: Float64/Int64 arrays in Chrome

[Edit: one problem is almost fixed; the question has been updated to reflect it]

I am working on a WebGL point cloud project in Chrome that displays millions of points at a time.

To make it more efficient, I tried to pack my data - six floats per point, xyz and rgb - into two 64-bit integers (xy and zrgb), planning to unpack them in the shader.

I am developing in Chrome, and afaict WebKit does not support any kind of 64-bit typed array, even in Canary. Also afaict Firefox does support 64-bit arrays, but I still get an error there.

The problem arises with this line:

    gl.bufferData(gl.ARRAY_BUFFER, new Float64Array(data.xy), gl.DYNAMIC_DRAW);

In Chrome, I get an error complaining that the data is not an ArrayBufferView or a small enough positive integer; in Firefox I get "invalid arguments".

So my question is: is there a way to send 64-bit numbers to a shader, preferably in Chrome, or failing that, in Firefox?

Also, is this kind of data packing even a good idea? Any advice?

Thanks,

John

1 answer

It is important to know that WebGL does not care at all what TypedArray type you hand it. Whatever you give it, it treats it as an opaque binary buffer. What matters is how you configure your vertexAttribPointer calls. This allows some very convenient ways of shuffling data back and forth. For example: I regularly read a Uint8Array out of a binary file and provide it as buffer data, but bind parts of it as floats and ints.
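As a minimal sketch of that idea (the file name, its 16-byte interleaved layout, and the fetch-based loading here are illustrative assumptions, not part of the original setup):

    // Load a binary file and hand its raw bytes straight to WebGL.
    // "points.bin" and its layout (12 bytes of float position + 4 bytes of
    // color per vertex) are hypothetical.
    fetch('points.bin')
        .then(function (response) { return response.arrayBuffer(); })
        .then(function (arrayBuffer) {
            var bytes = new Uint8Array(arrayBuffer);

            var buffer = gl.createBuffer();
            gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
            // WebGL only sees an opaque block of bytes here...
            gl.bufferData(gl.ARRAY_BUFFER, bytes, gl.STATIC_DRAW);

            // ...and only the later vertexAttribPointer calls decide that those
            // bytes are to be read as floats and unsigned bytes.
        });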

TypedArrays also have the handy ability to act as views into another typed array's underlying buffer, which makes it easy to mix types (as long as you don't have alignment problems). In your particular case, I would suggest doing something like this:

    var floatBuffer = new Float32Array(verts.length * 4);
    // View the same underlying memory as bytes (note the .buffer - passing the
    // Float32Array itself would copy element values instead of sharing memory).
    var byteBuffer = new Uint8Array(floatBuffer.buffer);

    for (var i = 0; i < verts.length; ++i) {
        floatBuffer[i * 4 + 0] = verts[i].x;
        floatBuffer[i * 4 + 1] = verts[i].y;
        floatBuffer[i * 4 + 2] = verts[i].z;

        // RGBA values expected as 0-255, packed into the fourth float's 4 bytes
        byteBuffer[i * 16 + 12] = verts[i].r;
        byteBuffer[i * 16 + 13] = verts[i].g;
        byteBuffer[i * 16 + 14] = verts[i].b;
        byteBuffer[i * 16 + 15] = verts[i].a;
    }

    var vertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, floatBuffer, gl.STATIC_DRAW);

This loads onto the GPU a tightly packed vertex buffer containing three 32-bit floats and one 32-bit color per vertex. Not quite as small as your proposed pair of 64-bit integers, but the GPU will most likely work with it much better. When you bind it for rendering later, you do it like this:

    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    // 16-byte stride per vertex: 12 bytes of position followed by 4 bytes of color
    gl.vertexAttribPointer(attributes.aPosition, 3, gl.FLOAT, false, 16, 0);
    // Colors are read as unsigned bytes; with "normalized" set to false they
    // arrive in the shader as 0-255, so divide by 255.0 there (or pass true).
    gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, false, 16, 12);
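For completeness, a sketch of the rest of the draw path, assuming attributes.aPosition and attributes.aColor came from gl.getAttribLocation, and that vertexCount (a name used only here) matches the number of points in the buffer:

    gl.enableVertexAttribArray(attributes.aPosition);
    gl.enableVertexAttribArray(attributes.aColor);

    // One point per interleaved vertex
    gl.drawArrays(gl.POINTS, 0, vertexCount);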

The corresponding shader code is as follows:

    attribute vec3 aPosition;
    attribute vec4 aColor;

    varying vec4 vColor;

    void main() {
        // Manipulate the position and color as needed; this minimal version
        // just passes them through.
        vColor = aColor / 255.0; // un-normalized bytes arrive as 0-255
        gl_Position = vec4(aPosition, 1.0);
    }

This way you get the benefits of an interleaved array, which GPUs like to work with, you only have to track one buffer (bonus!), and you don't waste space by using a full float for each color component. If you REALLY want to go small, you can use shorts instead of floats for the positions, but my past experience has been that desktop GPUs aren't always very fast with short attributes (a sketch of that variant follows below).
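A sketch of that 16-bit variant, assuming your positions fit a bounding box you can quantize into (quantize() here is a placeholder for whatever scale/offset mapping you choose, and that mapping has to be undone in the shader or via a uniform):

    // 3 x 16-bit position + 2 bytes of padding + 4 bytes of color = 12-byte stride
    var shortBuffer = new Int16Array(verts.length * 6);
    var byteView = new Uint8Array(shortBuffer.buffer);

    for (var i = 0; i < verts.length; ++i) {
        shortBuffer[i * 6 + 0] = quantize(verts[i].x); // quantize() is a placeholder
        shortBuffer[i * 6 + 1] = quantize(verts[i].y);
        shortBuffer[i * 6 + 2] = quantize(verts[i].z);
        // shortBuffer[i * 6 + 3] is padding to keep the colors 4-byte aligned
        byteView[i * 12 + 8]  = verts[i].r;
        byteView[i * 12 + 9]  = verts[i].g;
        byteView[i * 12 + 10] = verts[i].b;
        byteView[i * 12 + 11] = verts[i].a;
    }

    gl.vertexAttribPointer(attributes.aPosition, 3, gl.SHORT, false, 12, 0);
    gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, false, 12, 8);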

Hope this helps!

