How does deferred shading work in LWJGL?

I want to start a deferred shading project using GLSL, Java and OpenGL.

1. How does the deferred rendering pipeline work? Does it render the scene once for every image? For example, when I want to create a mirror texture, a blur texture and a shadow texture, do I need to render the scene once for each of these textures?

I have seen several pieces of code that do not seem to use multiple render passes.

2. What is a geometry buffer and what does it do? Is it something like a store of scene data that I can use for texturing without rendering the scene again?


To add something more specific so you can get started: you need an FBO with multiple attachments and a way for your shader to write to multiple FBO attachments. Look up glDrawBuffers. Your FBO attachments also need to be textures so the information can be passed on to the shaders. The FBO attachments should be the same size as the screen you are rendering to. There are many ways to approach this. Here is one example.

You need two FBOs

Geometry buffer

 1. Diffuse (GL_RGBA)
 2. Normal buffer (GL_RGB16F)
 3. Position buffer (GL_RGB32F)
 4. Depth buffer

Note that 3) is a huge waste, since we can use the depth buffer and the projection to reconstruct the position, which is a lot cheaper. But to begin with, a plain position buffer is the easiest way to start. Attack one problem at a time.

The normal buffer 2) can also be compressed further.

Light accumulation buffer

 1. Light buffer (GL_RGBA)
 2. Depth buffer

The depth buffer attachment in this FBO should be the very same attachment (the same texture) as in the geometry buffer. We do not use the depth information in this example, but you will need it sooner or later. It will always contain the depth from the first stage.
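
As a concrete illustration, a minimal LWJGL sketch of creating such a geometry-buffer FBO might look like the following. The class and helper names are my own, error handling is minimal, and in a real program you would keep the returned texture ids so you can bind them as samplers in the lighting pass:

    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.GL14;
    import org.lwjgl.opengl.GL20;
    import org.lwjgl.opengl.GL30;

    public class GBuffer {

        // Create one screen-sized texture to be used as an FBO attachment
        static int createAttachmentTexture(int internalFormat, int format, int type, int width, int height) {
            int tex = GL11.glGenTextures();
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, type, (ByteBuffer) null);
            return tex;
        }

        // Build the geometry-buffer FBO: diffuse, normal, position and depth attachments
        static int createGBuffer(int width, int height) {
            int fbo = GL30.glGenFramebuffers();
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);

            int diffuse  = createAttachmentTexture(GL11.GL_RGBA,   GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, width, height);
            int normals  = createAttachmentTexture(GL30.GL_RGB16F, GL11.GL_RGB,  GL11.GL_FLOAT,         width, height);
            int position = createAttachmentTexture(GL30.GL_RGB32F, GL11.GL_RGB,  GL11.GL_FLOAT,         width, height);
            int depth    = createAttachmentTexture(GL14.GL_DEPTH_COMPONENT24, GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, width, height);

            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL11.GL_TEXTURE_2D, diffuse,  0);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT1, GL11.GL_TEXTURE_2D, normals,  0);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT2, GL11.GL_TEXTURE_2D, position, 0);
            GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT,  GL11.GL_TEXTURE_2D, depth,    0);

            // The fragment shader below writes gl_FragData[0..2], so enable three draw buffers
            IntBuffer drawBuffers = BufferUtils.createIntBuffer(3);
            drawBuffers.put(GL30.GL_COLOR_ATTACHMENT0).put(GL30.GL_COLOR_ATTACHMENT1).put(GL30.GL_COLOR_ATTACHMENT2).flip();
            GL20.glDrawBuffers(drawBuffers);

            if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) {
                throw new IllegalStateException("G-buffer FBO is not complete");
            }
            GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
            return fbo;
        }
    }

The light accumulation FBO would be created the same way, with a single GL_RGBA color attachment and the same depth texture attached as its depth attachment.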

How to do it?

Let's start by rendering our scene with very simple shaders. Their only purpose is to fill the geometry buffer, so we just draw all our geometry with a very simple shader that writes into it. For simplicity I use #version 120 shaders and no texture mapping (though adding that is super trivial).

Vertex Shader:

    #version 120

    varying vec3 normal;
    varying vec4 position;

    void main( void )
    {
        normal = normalize(gl_NormalMatrix * gl_Normal);
        position = gl_ModelViewMatrix * gl_Vertex;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

Fragment Shader:

    #version 120

    uniform vec4 objectColor; // Color of the object you are drawing

    varying vec3 normal;
    varying vec4 position;

    void main( void )
    {
        // Use glDrawBuffers to configure the multiple render targets
        gl_FragData[0] = objectColor;                      // Diffuse
        gl_FragData[1] = vec4(normalize(normal.xyz), 0.0); // Normal
        gl_FragData[2] = vec4(position.xyz, 0.0);          // Position
    }

Say we have now drawn, for example, 20 objects into the geometry buffer, each with a different color. If we look at the diffuse buffer, it is a rather boring image of plain colors (or plain textures without lighting), but we also have the view-space position, the normal and the depth of every single fragment. This is valuable information for the lighting stage that comes next.
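
For reference, the application side of this geometry pass might look roughly like the following LWJGL sketch. Here gbufferFbo, geometryShader, screenWidth, screenHeight and drawScene() are placeholders for your own objects:

    // Geometry pass: fill the G-buffer once per frame
    void geometryPass() {
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, gbufferFbo);
        GL11.glViewport(0, 0, screenWidth, screenHeight);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDisable(GL11.GL_BLEND);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

        GL20.glUseProgram(geometryShader);
        // Set objectColor per object, then draw that object's geometry
        GL20.glUniform4f(GL20.glGetUniformLocation(geometryShader, "objectColor"), 1.0f, 0.0f, 0.0f, 1.0f);
        drawScene();

        GL20.glUseProgram(0);
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
    }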

Light accumulation

Now we switch to our light accumulation buffer, and it's time to do some light magic. For each individual light we are going to render into the light accumulation buffer with additive blending enabled. How you do it is not that important for the result, as long as you cover all the fragments affected by the light. You could do it naively by drawing a full-screen quad for every light, but that is very expensive. We will only do point lights, but that is more than enough to cover the simple principle of lighting (simple point lights are extremely trivial). A cheap way is to draw a cube or a low-poly sphere (a light volume) at the light position, scaled by the light radius. This makes rendering tons of small lights more efficient .. but don't worry about performance for now. A full-screen quad will do just fine.
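
A rough LWJGL sketch of setting up this light accumulation pass could look like this. Here lightFbo, lightShader and the three G-buffer texture ids are placeholders, and depth testing is simply disabled to keep the example minimal:

    // Light accumulation pass setup
    void beginLightPass() {
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, lightFbo);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);

        GL11.glDisable(GL11.GL_DEPTH_TEST);         // keep it simple: no depth testing of light volumes yet
        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ONE); // additive blending: lights accumulate

        GL20.glUseProgram(lightShader);

        // Expose the G-buffer textures to the light shader
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, diffuseTex);
        GL13.glActiveTexture(GL13.GL_TEXTURE1);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, positionTex);
        GL13.glActiveTexture(GL13.GL_TEXTURE2);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, normalTex);

        GL20.glUniform1i(GL20.glGetUniformLocation(lightShader, "diffuseBuffer"), 0);
        GL20.glUniform1i(GL20.glGetUniformLocation(lightShader, "positionBuffer"), 1);
        GL20.glUniform1i(GL20.glGetUniformLocation(lightShader, "normalBuffer"), 2);
        GL20.glUniform2f(GL20.glGetUniformLocation(lightShader, "screensize"), screenWidth, screenHeight);
    }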

Now the basic principle is simple:

  • Each fragment has a stored x, y, z position which we simply read with a texture fetch
  • We pass in the light position
  • We pass in the light radius
  • We can tell whether the light affects the fragment simply by measuring the distance between the position stored in the buffer and the light position
  • From there on it is pretty standard lighting calculations

Fragment shader: (This shader works for anything: light volumes, full-screen quads .. anything)

    #version 120

    uniform sampler2D diffuseBuffer;
    uniform sampler2D positionBuffer;
    uniform sampler2D normalBuffer;

    uniform float lightRadius; // Radius of our point light
    uniform vec3 lightPos;     // Position of our point light
    uniform vec4 lightColor;   // Color of our light
    uniform vec2 screensize;   // Screen resolution

    void main()
    {
        // UV for the current fragment
        vec2 uv = vec2(gl_FragCoord.x / screensize.x, gl_FragCoord.y / screensize.y);

        // Read the data from our G-buffer (sent in as textures)
        vec4 diffuse_g  = texture2D(diffuseBuffer, uv);
        vec4 position_g = texture2D(positionBuffer, uv);
        vec4 normal_g   = texture2D(normalBuffer, uv);

        // Distance from the light center to the current fragment
        float distance = length(lightPos - position_g.xyz);

        // If the fragment is NOT affected by the light we discard it!
        // PS: Don't kill me for using discard. This is for simplicity.
        if (distance > lightRadius) discard;

        // Standard light stuff: light direction from the fragment to the light,
        // then a simple N dot L diffuse term
        vec3 lightDir = normalize(lightPos - position_g.xyz);
        float nDotL = max(dot(normalize(normal_g.xyz), lightDir), 0.0);

        // Super simple attenuation placeholder
        float attenuation = 1.0 - (distance / lightRadius);

        gl_FragColor = diffuse_g * lightColor * attenuation * nDotL;
    }

Repeat this for every light. The order in which the lights are rendered does not matter, since with additive blending the result is always the same. You can also simplify this further by accumulating only the light intensity. In theory you now already have the final lit result in the light accumulation buffer, but you may want some extra adjustments.
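
The per-light loop itself might then look roughly like this sketch, where the Light class, the lights list and drawFullScreenQuad() are placeholders:

    // Per-light loop: set this light's uniforms, then cover its affected fragments
    void renderLights() {
        int posLoc    = GL20.glGetUniformLocation(lightShader, "lightPos");
        int radiusLoc = GL20.glGetUniformLocation(lightShader, "lightRadius");
        int colorLoc  = GL20.glGetUniformLocation(lightShader, "lightColor");

        for (Light light : lights) {
            // lightPos must be in the same space as the stored positions (view space here)
            GL20.glUniform3f(posLoc, light.viewX, light.viewY, light.viewZ);
            GL20.glUniform1f(radiusLoc, light.radius);
            GL20.glUniform4f(colorLoc, light.r, light.g, light.b, 1.0f);

            // Draw a full-screen quad (or a scaled light volume) so the shader
            // runs for every fragment the light might affect
            drawFullScreenQuad();
        }
    }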

Combine

You might want to tweak a few things here. Ambient? Color correction? Fog? Other post-processing. You can combine the light accumulation buffer and the diffuse buffer with whatever adjustments you like. We kind of already did this in the light stage, but if you only stored the light intensity there, you will need to do a simple diffuse * light here.

This is usually just a full-screen quad that renders the final result to the screen.
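
If you only accumulated light intensity in the light buffer, the combine pass might look roughly like the sketch below. It is minimal on purpose, and the constant ambient term is my own addition rather than part of the original answer:

    #version 120

    uniform sampler2D diffuseBuffer; // Diffuse color from the G-buffer
    uniform sampler2D lightBuffer;   // Accumulated light
    uniform vec2 screensize;

    void main()
    {
        vec2 uv = vec2(gl_FragCoord.x / screensize.x, gl_FragCoord.y / screensize.y);

        vec4 diffuse = texture2D(diffuseBuffer, uv);
        vec4 light   = texture2D(lightBuffer, uv);

        // Small constant ambient term so unlit areas are not pitch black (an assumption)
        vec4 ambient = vec4(0.1, 0.1, 0.1, 1.0);

        gl_FragColor = diffuse * (light + ambient);
    }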

More stuff

  • As mentioned earlier, we want to get rid of the position buffer. Reconstruct the position from the depth buffer and the projection instead.
  • You do not have to use light volumes. Some people prefer to simply draw a quad large enough to cover the affected area on screen.
  • The example above does not address things like defining a unique material per object. There are many resources on, and variations of, G-buffer formats. Some people prefer to store a material index in the alpha channel (of the diffuse buffer) and then do a lookup into a material texture to get the material properties.
  • Directional lights and other light types that affect the whole scene can easily be handled by rendering a full-screen quad into the light accumulation buffer.
  • Spot lights are also nice to have, and also pretty easy to implement.
  • We probably want more light properties.
  • We may want some way to control how the diffuse and light buffers are combined, to support ambient and emissive materials.
  • There are many ways to store normals more compactly. You can, for example, use spherical coordinates to drop one value. There are many articles about deferred lighting and G-buffer formats; looking at the formats people use can give you some ideas. Just make sure your G-buffer does not get too fat.
  • Reconstructing the view position from a linearized depth value and your projection is not that complicated. You build a vector using the projection constants and multiply it by the depth value (0 to 1) to get the view position. There are several articles about it; it is just a couple of lines of code (see the sketch after this list).
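
As an illustration of that last point, here is a rough GLSL sketch. It assumes you stored a linearized depth value in the 0 to 1 range (view-space distance divided by the far plane) in a single-channel texture, and that you pass in the far plane distance, the vertical field of view and the aspect ratio; those uniform names are assumptions, not part of the original answer:

    #version 120

    uniform sampler2D depthBuffer; // linear depth, 0..1 = view-space distance / farPlane (assumed)
    uniform float farPlane;        // far plane distance
    uniform float tanHalfFovY;     // tan(vertical field of view / 2)
    uniform float aspect;          // screen width / height
    uniform vec2 screensize;

    vec3 reconstructViewPos(vec2 uv)
    {
        float linearDepth = texture2D(depthBuffer, uv).r; // 0..1
        vec2 ndc = uv * 2.0 - 1.0;                        // -1..1
        // A ray through this pixel, scaled so that z = -farPlane when linearDepth == 1.0
        vec3 rayToFarPlane = vec3(ndc.x * tanHalfFovY * aspect,
                                  ndc.y * tanHalfFovY,
                                  -1.0) * farPlane;
        return rayToFarPlane * linearDepth;               // view-space position
    }

    void main()
    {
        vec2 uv = gl_FragCoord.xy / screensize;
        gl_FragColor = vec4(reconstructViewPos(uv), 1.0); // just visualize it for this sketch
    }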

There is probably a lot to pick up in this post, but hopefully it shows the general principle. None of the shaders have been compiled; they were just converted from 3.3 to 1.2 from memory.

There are several approaches to light accumulation. You might want to reduce the number of draw calls by building VBOs containing 1000 cubes and cones and batch-drawing everything. With more modern GL versions you can also use a geometry shader to calculate a quad covering the affected area for each light. Probably the best way is to use compute shaders, but that requires GL 4.3. The advantage there is that you can iterate over all the light information and do a single write per pixel. There are also pseudo-compute methods where you divide the screen into a coarse grid and assign a light list to each cell. This can be done with only a fragment shader, but then you need to build the light lists on the CPU and send the data to the shader with UBOs.

The compute shader method is by far the easiest, since it removes a lot of the complexity of the older methods for tracking and organizing everything. Just iterate over the lights and do a single write to the framebuffer.


1) Deferred shading means splitting the rendering of the scene geometry and essentially everything else into separate passes.

For example, when I want to create a mirror texture, a blur texture and a shadow texture, do I need to render the scene once for each of these textures?

For the shadow texture, probably (if you use shadow mapping, that cannot be avoided). But for everything else:

No, and that is why deferred shading can be so useful. In a deferred pipeline you render the geometry only once and store the color, normal and 3D position (the geometry buffer) for each pixel. This can be achieved in a few different ways, but the most common is using frame buffer objects (FBOs) with multiple render targets (MRTs). When using FBOs for deferred shading, you render the geometry exactly as you normally would, except with the FBO bound, with multiple outputs in your fragment shader (one per render target), and without computing any lighting. You can read more about FBOs and MRTs on the OpenGL website or with a quick Google search. Then, to light your scene, you read that data back in a shader and use it to compute the lighting just as usual. The easiest way to do this (though not the best) is to render a full-screen quad and sample the color, normal and position textures for your scene.

2) The geometry buffer is all the data needed for the lighting and any further shading in later passes. It is created during the geometry pass (the only time the geometry has to be rendered) and is usually a collection of textures. Each texture is used as a render target (see FBOs and MRTs above) while rendering the geometry. You typically have one texture for color, one for normals and one for the 3D position. It can also contain more data (for example lighting parameters) if necessary. This gives you all the data you need to light each pixel during the lighting pass.

Pseudocode may look like this:

    for all geometry {
        render to FBO
    }
    for all lights {
        read FBO and do lighting
    }
    // ... here you can read the FBO and use it for anything!

The basic idea of deferred rendering is to separate the process of transforming the mesh geometry into locations in the destination framebuffer from the process of giving the destination framebuffer's pixels their final color.

The first step is to render the geometry so that every framebuffer pixel receives information about the underlying geometry, i.e. its location in world or eye space (eye space is preferred), the transformed tangent space (normal, tangent, binormal) and other attributes, depending on what is needed later. This is the "geometry buffer" (which also answers your question 2).

Using a geometry buffer, the precalculated geometry → pixel mapping can be reused for several similar processing steps. For example, if you want to render 50 light sources, you only need to draw 50 full-screen quads (which amounts to rendering 100 triangles, child's play for a modern GPU), using different parameters for each iteration (light position, direction, shadow buffers, etc.). This contrasts with regular multi-pass rendering, where the whole geometry has to be processed again for each iteration.

And of course, each pass can be used to render a different kind of shading process (glow, blur, bokeh, halos, etc.).

The results of each iteration pass are then combined into a composite image.

