To get you started with something more specific: you need an FBO with multiple attachments and a way for your shader to write to multiple FBO attachments. Google glDrawBuffers. Your FBO attachments must be textures so the information can be passed on to a shader, and the attachments must be the same size as the screen you are rendering to. There are many ways to approach this; here is one example.
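Setting that up might look roughly like this (a sketch; the texture handles such as `diffuseTex` are assumed to already exist at screen size with the formats listed below, and error checking is omitted):

```c
/* Sketch: a geometry-buffer FBO with three color attachments plus depth. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, diffuseTex,  0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex,   0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, positionTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex,    0);

/* Tell GL that the fragment shader writes to all three color attachments. */
GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);
```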
You need two FBOs
Geometry buffer
1. Diffuse Buffer (GL_RGBA)
2. Normal Buffer (GL_RGB16F)
3. Position Buffer (GL_RGB32F)
4. Depth Buffer
Note that 3) is a huge waste, since we can use the depth buffer and the projection to reconstruct the position, which is a lot cheaper. But to start out you should use a position buffer at least; attack one problem at a time.
The normal buffer 2) can also be compressed further.
Light storage buffer
1. Light Buffer (GL_RGBA)
2. Depth Buffer
The depth buffer attached to this FBO should be the same attachment as in the geometry buffer. We don't use the depth information in this example, but you will need it sooner or later. It will always contain the depth information from the first stage.
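Sharing one depth attachment between the two FBOs could be sketched like this (`geometryFbo`, `lightFbo` and `depthTex` are assumed names):

```c
/* Attach the SAME depth texture to both FBOs, so the light pass
   can reuse the depth written by the geometry pass. */
glBindFramebuffer(GL_FRAMEBUFFER, geometryFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
```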
How to do it?
Let's start by rendering our scene with very simple shaders whose main purpose is to fill the geometry buffer: we just draw all our geometry with a very simple shader that writes into it. To keep things simple I use #version 120 shaders and no texture mapping (even though that is super trivial to add).
Vertex Shader:

Fragment Shader:

```glsl
#version 120
uniform vec4 objectColor;
```
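The vertex shader listing is missing above and only the fragment shader's header survived, so here is a hedged reconstruction of what such geometry-pass shaders could look like in GLSL 120. View-space position and normal go through varyings, and the three color attachments are written via gl_FragData (the varying names are my own):

```glsl
#version 120
varying vec3 viewPos;
varying vec3 viewNormal;

void main()
{
    // Position and normal in view space, for the geometry buffer.
    viewPos     = (gl_ModelViewMatrix * gl_Vertex).xyz;
    viewNormal  = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
#version 120
uniform vec4 objectColor;
varying vec3 viewPos;
varying vec3 viewNormal;

void main()
{
    gl_FragData[0] = objectColor;                      // diffuse
    gl_FragData[1] = vec4(normalize(viewNormal), 0.0); // normal
    gl_FragData[2] = vec4(viewPos, 0.0);               // position
}
```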
Now say we have drawn 20 objects into the geometry buffer, each with a different color. Looking at the diffuse buffer alone gives a rather boring image of plain colors (or plain textures without lighting), but we also have the view-space position, the normal and the depth of every single fragment. That will be valuable information in the next stage: lighting.
Light accumulation
Now we switch to our light storage buffer, and it's time to do some magic. For every single light, we are going to render into the light storage buffer with additive blending enabled. How you do it is not that important for the result, as long as you cover all the fragments affected by the light. You can do this naively by drawing a full-screen quad, but that is very expensive. We will only do point lights here, but that is more than enough to cover the basic principle of the lighting (and simple point lights are extremely trivial). A better way is to draw a cube or a low-poly sphere (a light volume) at the light position, scaled by the light radius. This makes rendering tons of small lights much more efficient... but don't worry about performance for now. A full-screen quad will do just fine.
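The per-light loop with additive blending could be sketched like this (`lightFbo` is an assumed handle, and `setLightUniforms`/`drawLightVolume` are hypothetical helpers):

```c
/* Accumulate one light at a time into the light storage buffer. */
glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                   /* additive: dst += src */
glDepthMask(GL_FALSE);                         /* keep the geometry-pass depth */

for (int i = 0; i < numLights; ++i) {
    setLightUniforms(lightShader, &lights[i]); /* position, color, radius */
    drawLightVolume(&lights[i]);               /* cube/sphere scaled by radius,
                                                  or just a full-screen quad */
}
```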
Now a simple principle:
- Each fragment has a stored x, y, z position, which we simply get with a texture fetch
- We have the light position
- We have the light radius
- We can tell whether the light affects this fragment simply by measuring the distance from the position in the buffer to the light position
- From there on it's pretty standard lighting calculations
Fragment shader: (this shader works with anything: light volumes, full-screen quads... anything)

```glsl
#version 120
uniform sampler2D diffuseBuffer;
uniform sampler2D positionBuffer;
uniform sampler2D normalBuffer;
uniform float lightRadius;
```
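Only the uniforms of this shader survived, but a reconstruction along these lines should work. lightPosition, lightColor and screenSize are uniforms I added; dividing gl_FragCoord by the screen size yields the texture coordinate no matter what geometry you draw:

```glsl
#version 120
uniform sampler2D diffuseBuffer;
uniform sampler2D positionBuffer;
uniform sampler2D normalBuffer;
uniform float lightRadius;
uniform vec3 lightPosition;   // assumed: view-space light position
uniform vec3 lightColor;      // assumed
uniform vec2 screenSize;      // assumed

void main()
{
    vec2 uv     = gl_FragCoord.xy / screenSize;
    vec3 pos    = texture2D(positionBuffer, uv).xyz;
    vec3 normal = normalize(texture2D(normalBuffer, uv).xyz);
    vec3 albedo = texture2D(diffuseBuffer, uv).rgb;

    vec3  toLight = lightPosition - pos;
    float dist    = length(toLight);
    if (dist >= lightRadius) discard;              // fragment outside the light

    float ndotl   = max(dot(normal, normalize(toLight)), 0.0);
    float falloff = 1.0 - dist / lightRadius;      // simple linear attenuation
    gl_FragColor  = vec4(albedo * lightColor * ndotl * falloff, 1.0);
}
```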
Repeat this for every light. The order in which the lights are rendered does not matter, since with additive blending the result is always the same. You can also make it even simpler by accumulating only the light intensity. In principle you now already have the final lit result in the light storage buffer, but you may want additional control.
Combine
You might want to tweak a few things: ambient? color correction? fog? other material post-processing? Here you combine the light storage buffer and the diffuse buffer with whatever adjustments you like. We kind of already did this in the light stage, but if you only accumulated the light intensity there, you need to do a simple diffuse * light here.
This is usually just a full-screen quad that draws the final result to the screen.
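A minimal combine pass could look like this (the texCoord varying from the quad's vertex shader and the ambient constant are my assumptions):

```glsl
#version 120
uniform sampler2D diffuseBuffer;
uniform sampler2D lightBuffer;
varying vec2 texCoord;   // assumed: passed through by the quad's vertex shader

void main()
{
    vec3 albedo  = texture2D(diffuseBuffer, texCoord).rgb;
    vec3 light   = texture2D(lightBuffer, texCoord).rgb;
    vec3 ambient = vec3(0.05);                  // tiny ambient term, pick your own
    gl_FragColor = vec4(albedo * (light + ambient), 1.0);
}
```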
Further notes
- As mentioned earlier, we want to get rid of the position buffer: use the depth buffer together with the projection to reconstruct the position.
- You do not have to use light volumes. Some people simply prefer to draw a quad large enough to cover the light's area on the screen.
- The example above does not address things like defining unique materials per object. There are many resources on gbuffer formats and options. Some people like to store a material index in an alpha channel (in the diffuse buffer), then use it to look up a row in a material texture to fetch the material properties.
- Directional lights and other lights affecting the whole scene are easily handled by rendering a full-screen quad into the light storage buffer.
- Spotlights are also nice to have, and also fairly easy to implement.
- You probably want more light properties.
- You may want some way to control how the diffuse and light buffers are combined, to support ambient and emissive materials.
- There are many ways to store normals more compactly. You can, for example, use spherical coordinates to drop one component. There are lots of articles about deferred lighting and gbuffer formats; looking at the formats people use can give you some ideas. Just make sure your gbuffer does not get too fat.
- Reconstructing the view position from a linearized depth value and your projection is not that complicated. You build a ray from the projection constants and multiply it by the depth value (0 to 1) to get the view position. There are several articles about it; it is just a couple of lines of code.
There is probably a lot to take in from this post, but hopefully it shows the general principle. None of the shaders have been compiled; they were just converted from GLSL 330 to 120 from memory.
There are several approaches to the light accumulation. You might want to reduce the number of draw calls by building VBOs containing 1000 cubes and cones and batch-drawing everything. With more modern GL versions you can also use a geometry shader to compute a quad covering the light's screen area for each light. Probably the best way is to use compute shaders, but that requires GL 4.3. The advantage there is that you can iterate over all the light information and do a single write per fragment. There are also pseudo-compute methods where you divide the screen into a coarse grid and assign a list of lights to each cell. This can be done with only a fragment shader, but it requires you to build the light lists on the CPU and upload the data to the shader, for example through UBOs.
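The coarse-grid idea can be sketched on the CPU like this (the grid size, the pre-projected ScreenLight struct and the per-cell cap are all arbitrary choices of mine):

```c
#include <assert.h>

#define GRID_W 4
#define GRID_H 4
#define MAX_LIGHTS_PER_CELL 8

/* A light already projected to screen space: center and radius in pixels. */
typedef struct { float cx, cy, radius; } ScreenLight;

typedef struct {
    int count[GRID_W * GRID_H];
    int lights[GRID_W * GRID_H][MAX_LIGHTS_PER_CELL];
} LightGrid;

/* Bin each light into every grid cell its screen-space bounding box touches. */
void build_light_grid(LightGrid *g, const ScreenLight *ls, int n,
                      int screenW, int screenH)
{
    for (int c = 0; c < GRID_W * GRID_H; ++c) g->count[c] = 0;

    float cellW = (float)screenW / GRID_W;
    float cellH = (float)screenH / GRID_H;

    for (int i = 0; i < n; ++i) {
        int x0 = (int)((ls[i].cx - ls[i].radius) / cellW);
        int x1 = (int)((ls[i].cx + ls[i].radius) / cellW);
        int y0 = (int)((ls[i].cy - ls[i].radius) / cellH);
        int y1 = (int)((ls[i].cy + ls[i].radius) / cellH);
        if (x0 < 0) x0 = 0;
        if (y0 < 0) y0 = 0;
        if (x1 >= GRID_W) x1 = GRID_W - 1;
        if (y1 >= GRID_H) y1 = GRID_H - 1;

        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                int c = y * GRID_W + x;
                if (g->count[c] < MAX_LIGHTS_PER_CELL)
                    g->lights[c][g->count[c]++] = i;
            }
    }
}
```

The per-cell lists would then be uploaded (e.g. in a UBO) so the fragment shader only loops over the lights assigned to its cell.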
The compute shader method is by far the easiest, since it removes the huge complexity of the older methods of tracking and organizing everything: you just loop over the lights and do a single write to the framebuffer.