It's important to know that WebGL doesn't care at all what TypedArray format you give it. No matter what you hand it, it treats it as an opaque binary buffer; what matters is how you configure your vertexAttribPointer calls. This lets you shuffle data back and forth in some very convenient ways. For example: I regularly read a Uint8Array from a binary file, provide it as buffer data, and then bind it as floats and ints.
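For instance, a sketch of that trick (the file name and the assumption that the file is just tightly packed vec3 positions are made up for illustration) could look like this:

fetch('model.bin')
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (rawBytes) {
    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    // WebGL just copies the bytes; it never knows we built a Uint8Array rather than a Float32Array.
    gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array(rawBytes), gl.STATIC_DRAW);
    // Only this call decides how the GPU interprets those bytes.
    gl.vertexAttribPointer(attributes.aPosition, 3, gl.FLOAT, false, 12, 0);
  });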
TypedArrays also have the remarkable ability to act as views onto the same underlying buffer as other typed arrays, which makes it easy to mix types (as long as you don't run into alignment problems). In your particular case, I would suggest doing something like this:
var floatBuffer = new Float32Array(verts.length * 4);
var byteBuffer = new Uint8Array(floatBuffer.buffer); // a view of the same memory, not a converted copy
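Filling that buffer just means writing positions through the float view and colors through the byte view. Assuming verts is an array of objects with a three-component pos and a four-component 0-255 color (adjust to however your data is actually shaped), a sketch would be:

for (var i = 0; i < verts.length; ++i) {
    // 3 floats of position per vertex (bytes 0-11 of each 16-byte vertex)
    floatBuffer[i * 4 + 0] = verts[i].pos[0];
    floatBuffer[i * 4 + 1] = verts[i].pos[1];
    floatBuffer[i * 4 + 2] = verts[i].pos[2];
    // 4 color bytes per vertex (bytes 12-15), written through the byte view
    byteBuffer[i * 16 + 12] = verts[i].color[0];
    byteBuffer[i * 16 + 13] = verts[i].color[1];
    byteBuffer[i * 16 + 14] = verts[i].color[2];
    byteBuffer[i * 16 + 15] = verts[i].color[3];
}

var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, floatBuffer, gl.STATIC_DRAW);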
This gives you a tightly packed vertex buffer for the GPU containing 3 32-bit floats and one 32-bit color per vertex. Not as small as the 64-bit packing you proposed, but the GPU will most likely be happier with it. When you bind it for rendering later, you will do so like this:
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(attributes.aPosition, 3, gl.FLOAT, false, 16, 0);          // 16-byte stride, position at offset 0
gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, false, 16, 12);    // color bytes start at offset 12
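(Both attributes also need to be enabled before drawing; assuming a plain triangle list, that's roughly:)

gl.enableVertexAttribArray(attributes.aPosition);
gl.enableVertexAttribArray(attributes.aColor);
gl.drawArrays(gl.TRIANGLES, 0, verts.length);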
The corresponding vertex shader would look something like this (passing the color on through a varying):
attribute vec3 aPosition;
attribute vec4 aColor;
varying vec4 vColor;

void main() {
    vColor = aColor / 255.0;            // aColor arrives as 0-255 since we didn't normalize in vertexAttribPointer
    gl_Position = vec4(aPosition, 1.0); // substitute your usual transform matrices here
}
So you get the benefits of interleaved arrays, which GPUs like to work with, you only have to track one buffer (bonus!), and you don't waste any space spending a full float on each color component. If you REALLY want to go small, you could use shorts instead of floats for the positions, but my past experience there has been that desktop GPUs aren't very fast when handling short vertex attributes.
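For reference, a short-based layout might look like the sketch below; note the two bytes of padding to keep the stride a multiple of four, and that you'd also have to rescale the quantized positions yourself:

// hypothetical 12-byte vertex: 3 shorts of position, 2 bytes of padding, 4 color bytes
var shortBuffer = new Int16Array(verts.length * 6);
var colorBytes = new Uint8Array(shortBuffer.buffer);

gl.vertexAttribPointer(attributes.aPosition, 3, gl.SHORT, false, 12, 0);
gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, false, 12, 8);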
Hope this helps!