Passing an object of arbitrary size to a fragment shader using UniformBuffer in Glium

My question came up while experimenting with a bunch of different techniques, none of which I have much experience with. Unfortunately, I don't even know whether I'm making a silly logic error, whether I'm misusing the glium crate, whether I'm messing something up in GLSL, and so on. Despite that, I managed to start a fresh Rust project from scratch and work it into a minimal example that shows my problem, and the problem reproduces at least on my machine.

That minimal example is still hard to explain on its own, so I'll first give an even more minimal example that does what I want, albeit by hacking bits around and being limited to 128 elements (four times 32 bits, a GLSL uvec4). From there, the step to the version where my problem shows up is quite small.

Working version, with a simple uniform and bit shifting

The program draws a single rectangle on screen, with texture coordinates running from 0.0 to 128.0 horizontally. It has one vertex shader for the rectangle and a fragment shader that uses the texture coordinates to draw vertical stripes on the rectangle: if the texture coordinate (cast to a uint) is odd it draws one color, if it is even it draws a different color.

```rust
// GLIUM, the crate I'll use to do "everything OpenGL"
#[macro_use]
extern crate glium;

// A simple struct to hold the vertices with their texture-coordinates.
// Nothing deviating much from the tutorials/crate-documentation.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}
implement_vertex!(Vertex, position, tex_coords);

// The vertex shader source. Does nothing special, except passing the
// texture coordinates along to the fragment shader.
const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;

// The fragment shader. Uses the texture coordinates to figure out which color to draw.
const FRAGMENT_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 preserved_tex_coords;
    // FIXME: Hard-coded max number of elements. Replace by uniform buffer object
    uniform uvec4 uniform_data;
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        uint offset_in_vec = tex_x / 32u;
        uint uint_to_sample_from = uniform_data[offset_in_vec];
        bool the_bit = bool((uint_to_sample_from >> tex_x) & 1u);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;

// Logic deciding whether a certain index corresponds with a 'set' bit or an 'unset' one.
// In this case, for the alternating stripes, a trivial odd/even test.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    // Sets up the vertices for a rectangle from -0.9 till 0.9 in both dimensions.
    // Texture coordinates go from 0.0 till 128.0 horizontally, and from 0.0 till
    // 1.0 vertically.
    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    // The rectangle will be drawn as a simple triangle strip using the vertices above.
    let indices_buffer = glium::IndexBuffer::new(
        &display,
        glium::index::PrimitiveType::TriangleStrip,
        &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    // Compiling the shaders defined statically above.
    let shader_program = glium::Program::from_source(
        &display,
        VERTEX_SHADER_SOURCE,
        FRAGMENT_SHADER_SOURCE,
        None).unwrap();

    // Some hacky bit-shifting to get the 128 alternating bits set up, in four u32s,
    // which glium manages to send across as an uvec4.
    let mut uniform_data = [0u32; 4];
    for idx in 0..128 {
        let single_u32 = &mut uniform_data[idx / 32];
        *single_u32 = *single_u32 >> 1;
        if bit_should_be_set_at(idx) {
            *single_u32 = *single_u32 | (1 << 31);
        }
    }

    // Trivial main loop: repeatedly clear, draw the rectangle, listen for a close event.
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: uniform_data },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() {
            if let glium::glutin::Event::Closed = e {
                return;
            }
        }
    }
}
```

But this is not good enough ...

This program works and shows the rectangle with alternating stripes, but it is clearly limited to 128 stripes (or 64 stripes, I suppose; the other 64 are the "background" of the rectangle). To allow arbitrarily many stripes (or, more generally, to pass arbitrarily much data to a fragment shader), uniform buffer objects can be used, which glium exposes. The most relevant example in the glium repo unfortunately fails to compile on my machine: the GLSL version is not supported, the buffer keyword is a syntax error in the versions that are supported, compute shaders are not supported at all (with glium, on my machine), and neither are headless render contexts.
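For orientation before the full listing: these are the pieces that a uniform buffer object adds, extracted from the program in the next section (nothing here is new relative to that code; display and bit_should_be_set_at are the same as in the working version above).

```rust
// GLSL side (inside the fragment shader): a std140 uniform block with an array member.
//
//     layout(std140) uniform;
//     uniform uniform_data {
//         uint values[128];
//     };

// Rust side: a dynamically sized struct plus the glium plumbing macros...
struct Data {
    values: [u32],
}
implement_buffer_content!(Data);
implement_uniform_block!(Data, values);

// ...an unsized buffer with room for 128 4-byte values, filled through a CPU-side
// mapping, and later passed in the `uniform!` macro as `uniform_data: &buffer`.
let mut buffer: glium::uniforms::UniformBuffer<Data> =
    glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 128).unwrap();
{
    let mut mapping = buffer.map();
    for (idx, val) in mapping.values.iter_mut().enumerate() {
        *val = bit_should_be_set_at(idx) as u32;
    }
}
```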

Not-so-working version, with a uniform buffer

So, unable to start from that example, I had to start from scratch using just the documentation. Starting from the example above, I came up with the following:

```rust
// Nothing changed here...
#[macro_use]
extern crate glium;

#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}
implement_vertex!(Vertex, position, tex_coords);

const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;
// ... up to here.

// The updated fragment shader. This one uses an entire uint per stripe, even though
// only one boolean value is stored in each.
const FRAGMENT_SHADER_SOURCE: &'static str = r#"
    #version 140
    // examples/gpgpu.rs uses
    //     #version 430
    //     buffer layout(std140);
    // but that shader version is not supported by my machine, and the second line is
    // a syntax error in #version 140

    in vec2 preserved_tex_coords;

    // Judging from the GLSL standard, this is what I have to write:
    layout(std140) uniform;
    uniform uniform_data {
        // TODO: Still hard-coded max number of elements, but now arbitrary at compile-time.
        uint values[128];
    };
    out vec4 color;

    // This one now becomes much simpler: get the coordinate, cast to uint, index into
    // the uniform using tex_x, cast to bool, choose the color.
    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        bool the_bit = bool(values[tex_x]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;

// Mostly copy-paste from the glium documentation: define a Data type that stores u32s,
// and make it implement the right traits.
struct Data {
    values: [u32],
}
implement_buffer_content!(Data);
implement_uniform_block!(Data, values);

// Same as before
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// Mostly the same as before
fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    let indices_buffer = glium::IndexBuffer::new(
        &display,
        glium::index::PrimitiveType::TriangleStrip,
        &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    let shader_program = glium::Program::from_source(
        &display,
        VERTEX_SHADER_SOURCE,
        FRAGMENT_SHADER_SOURCE,
        None).unwrap();

    // Making the UniformBuffer, with room for 128 4-byte objects (which u32s are).
    let mut buffer: glium::uniforms::UniformBuffer<Data> =
        glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 128).unwrap();
    {
        // Loop over all elements in the buffer, setting the 'bit'
        let mut mapping = buffer.map();
        for (idx, val) in mapping.values.iter_mut().enumerate() {
            *val = bit_should_be_set_at(idx) as u32;
            // This _is_ actually executed 128 times, as expected.
        }
    }
    // Iterating again, reading the buffer, reveals the alternating 'bits' are really
    // written to the buffer.

    // This loop is similar to the original one, except that it passes the buffer
    // instead of a [u32; 4].
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: &buffer },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() {
            if let glium::glutin::Event::Closed = e {
                return;
            }
        }
    }
}
```

I would expect this to draw the same striped rectangle (or to give some error, or to crash, if I did something wrong). Instead, it shows the rectangle with the rightmost quarter in solid bright red (i.e., "the bit seemed set when the fragment shader read it") and the remaining three quarters in darker red (i.e., "the bit seemed unset when the fragment shader read it").

Update since originally posting

I'm really poking around in the dark here, so I figured this could be a low-level mistake with memory ordering, alignment, buffer over-/underrun, and the like. I tried various ways of filling the "neighbouring" memory locations with easily recognisable bit patterns (for example, one bit set in every three, one in every four, two set followed by two unset, and so on). This did not change the output.
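For illustration, one of those probes looked roughly like this (the exact predicates are paraphrased; only the shape of the change matters):

```rust
// One of the probing patterns described above, roughly: set one bit in every three.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 3 == 0
}
```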

One of the obvious ways to get memory "next to" the uint values[128] is to put it inside the Data struct, just before values (after values is not allowed, since Data's values: [u32] is dynamically sized). As stated above, this does not change the output. However, putting a properly filled uvec4 inside the uniform_data buffer, and using a main function much like the one in the first example, does give the original result. This shows that glium::uniforms::UniformBuffer<Data> in itself works.
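A sketch of that experiment, for reference: the field name canary is made up for illustration, and I have not re-verified that glium accepts a fixed-size [u32; 4] field in a uniform block; the point is only that the extra field can go in front of, but not behind, the dynamically sized values.

```rust
struct Data {
    canary: [u32; 4], // filled with an easily recognisable bit pattern
    values: [u32],    // must stay last: dynamically sized
}
implement_buffer_content!(Data);
implement_uniform_block!(Data, canary, values);
```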

I therefore updated the title to reflect that the problem seems to lie somewhere else.

After Eli's answer

@Eli Friedman's answer helped me move towards a solution, but I'm not quite there yet.

Allocating and filling a buffer four times bigger changed the output, from a quarter-filled rectangle to a completely filled rectangle (a sketch of that change is at the end of this update). Oops, that's still not what I wanted, but my shader is now reading from the right words of memory. All of those words should have been filled with the right bit pattern. Still, no part of the rectangle became striped. Since bit_should_be_set_at sets every other bit, I developed the hypothesis that the following is happening:

```
Bits: 1010101010101010101010101010101010101
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: all bits set
```

To test this hypothesis, I changed bit_should_be_set_at to return true on multiples of 3, 4, 5, 6, 7 and 8. The results are consistent with my hypothesis:

```
Bits: 1001001001001001001001001001001001001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000100010001000100010001000100010001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: all bits set

Bits: 1000010000100001000010000100001000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: first bit set, then repeating four unset, one set.

Bits: 1000001000001000001000001000001000001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000000100000010000001000000100000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: first bit set, then repeating six unset, one set.

Bits: 1000000010000000100000001000000010000
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^
What it looks like: first bit set, then every other bit set.
```

Does this hypothesis make sense? And either way: does the problem seem to be located in setting the data up (on the Rust side), or in reading it back (on the GLSL side)?
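For completeness, the "four times bigger buffer" change mentioned above was roughly the following (a sketch reusing the names from the listing above; the exact size passed to empty_unsized is an assumption based on "four times bigger"):

```rust
// Room for 4 * 128 four-byte words instead of 128, filled the same way as before.
let mut buffer: glium::uniforms::UniformBuffer<Data> =
    glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 4 * 128).unwrap();
{
    let mut mapping = buffer.map();
    for (idx, val) in mapping.values.iter_mut().enumerate() {
        *val = bit_should_be_set_at(idx) as u32; // now runs 4 * 128 times
    }
}
```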

1 answer

The problem you're running into is how uniforms are laid out. uint values[128]; does not have the memory layout you think it does; it actually has the same memory layout as uvec4 values[128]. See https://www.opengl.org/registry/specs/ARB/uniform_buffer_object.txt sub-section 2.15.3.1.2.
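To illustrate the consequence (this sketch is my addition, not part of the answer): under std140 every element of a uint array occupies a 16-byte slot, so the shader only ever sampled every fourth of the tightly packed u32s. One way to make both sides agree is to declare the array as uvec4s in GLSL and pick out the right component; 128 consecutive u32s on the Rust side then line up exactly with the 32 * 16-byte array. Whether glium's block-layout check accepts the existing Data { values: [u32] } against a uvec4 array is something I have not verified, so the Rust-side type may need to mirror the layout as well.

```rust
// A sketch of a fragment shader whose std140 layout matches the tightly packed data.
const FRAGMENT_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 preserved_tex_coords;
    out vec4 color;

    layout(std140) uniform;
    uniform uniform_data {
        // 128 uints, stored as 32 four-component vectors: one uvec4 per 16-byte slot.
        uvec4 values[32];
    };

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        // Pick the vector first (tex_x / 4), then the component within it (tex_x % 4).
        bool the_bit = bool(values[tex_x / 4u][tex_x % 4u]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;
```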

