How to blur the result of a fragment shader?

I am working on a shader that generates small clouds based on some mask images. It works well, but the result feels like it is missing something, and I thought a blur would be nice. I remember the basic blur algorithm, where you convolve the image with a matrix whose entries sum to 1 (the larger the matrix, the stronger the blur). The problem is, I don't know how to treat the current shader output as an image. So basically I want to keep the shader as it is, but get a blurred version of its result. Any ideas? How can I integrate the convolution algorithm into a shader? Or does anyone know of another algorithm?
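To make the convolution idea concrete, here is a minimal CPU-side sketch in Python (illustrative only, not part of the shader): an image is convolved with a normalized box kernel whose entries sum to 1, which is exactly the "matrix of norm 1" blur described above.

```python
# Minimal CPU sketch of the blur described above: convolve an image with
# a kernel whose entries sum to 1 (here a normalized box kernel).
# Illustrative only; a shader would do the same math per fragment.

def box_blur(image, radius=1):
    """Blur a 2D grid of floats with a (2r+1) x (2r+1) box kernel."""
    h, w = len(image), len(image[0])
    size = 2 * radius + 1
    weight = 1.0 / (size * size)          # all kernel entries sum to 1
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # Clamp at the edges (like CLAMP texture addressing).
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    acc += image[sy][sx] * weight
            out[y][x] = acc
    return out

# A single bright pixel spreads into a uniform 3x3 patch; because the
# weights sum to 1, the total brightness is preserved.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 9.0
blurred = box_blur(img)
```

Because the weights sum to 1, a constant image passes through unchanged and total brightness is conserved away from the borders, which is why the normalization matters.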

Cg Code:

float Luminance( float4 Color ){
    return 0.6 * Color.r + 0.3 * Color.g + 0.1 * Color.b;
}

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv_MainTex : TEXCOORD0;
};

float4 _MainTex_ST;

v2f vert(appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}

sampler2D _MainTex;
sampler2D _Gradient;
sampler2D _NoiseO;
sampler2D _NoiseT;

float4 frag(v2f IN) : COLOR {
    half4 nO = tex2D(_NoiseO, IN.uv_MainTex);
    half4 nT = tex2D(_NoiseT, IN.uv_MainTex);
    float4 turbulence = nO + nT;
    float lum = Luminance(turbulence);
    half4 c = tex2D(_MainTex, IN.uv_MainTex);
    if (lum >= 1.0f) {
        float pos = lum - 1.0f;
        if (pos > 0.98f) pos = 0.98f;
        if (pos < 0.02f) pos = 0.02f;
        float2 texCord = float2(pos, pos);
        half4 turb = tex2D(_Gradient, texCord);
        //turb.a = 0.0f;
        return turb;
    }
    else
        return c;
}
1 answer

It seems to me that this shader emulates alpha testing between a backbuffer-like texture (sampled through sampler2D _MainTex) and the generated cloud brightness (represented by float lum) mapped onto a gradient. This makes things more complicated, because you cannot just fake the blur and let alpha blending take care of the rest. You will also need to modify the alpha-testing procedure to emulate alpha blending, or restructure the rendering pipeline accordingly. First, let's look at blurring the cloud.

The first question to ask yourself is whether you need a screen-space blur. Having seen the mechanics of this fragment shader, I would not think so: you want to blur the clouds on the actual model. Given that, it should be enough to blur the underlying textures to produce a blurred result, except that you emulate alpha cutout, so you would end up with rough edges. The question is what to do about those rough edges. That is where alpha blending comes in.

You can emulate alpha blending with a lerp (linear interpolation) between the turb color and the color c, using the lerp() function (or its equivalent, depending on which shader language you use). You probably want something like return lerp(c, turb, 1 - pos); instead of return turb; ... I expect you will want to keep tweaking it until you start getting the results you want. (For example, you may prefer lerp(c, turb, 1 - pow(pos, 4)).)
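To see what the two suggested blend weights actually do, here is a tiny CPU-side sketch in Python with hypothetical single-channel color values (in the shader, c and turb are float4 colors):

```python
# CPU sketch of the suggested alpha blend: lerp(c, turb, t) moves from
# the background color c (t = 0) to the cloud gradient color turb
# (t = 1). Values below are hypothetical, single-channel stand-ins.

def lerp(a, b, t):
    return a + (b - a) * t

c, turb = 0.2, 0.9    # background color vs. cloud gradient color
pos = 0.5             # pos is clamped to [0.02, 0.98] in the shader

blended = lerp(c, turb, 1 - pos)       # straight linear falloff
shaped  = lerp(c, turb, 1 - pos ** 4)  # 1 - pow(pos, 4): softer falloff

# 1 - pow(pos, 4) stays close to 1 for small pos, so the cloud color
# dominates longer before blending out toward the background.
```

The pow-shaped weight keeps the cloud nearly opaque over most of its range and only fades near the edge, which is usually closer to what a soft cloud boundary should look like.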

In fact, you can try this last step (just adding the lerp) before blurring the textures, to see what alpha blending alone will do for you.

Edit: I did not consider the case where the _NoiseO and _NoiseT samplers are constantly changing, so just telling you to pre-blur them was not very useful advice. You can emulate a blur with a multi-tap filter. The easiest way is to take evenly spaced samples, weight them, and add them together to produce your final color. (Usually you want the weights themselves to add up to 1.)
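To make the weighting concrete, here is a CPU-side sketch in Python (illustrative, not shader code): raw binomial weights, a cheap Gaussian approximation, are divided by their sum so the taps add up to 1 and the filter neither brightens nor darkens the image.

```python
# CPU sketch of a multi-tap filter with non-uniform weights. The raw
# weights (binomial 1-4-6-4-1, a cheap Gaussian approximation) are
# normalized to sum to 1 so the filter preserves overall brightness.

raw = [1.0, 4.0, 6.0, 4.0, 1.0]
total = sum(raw)                      # 16
weights = [w / total for w in raw]    # now the weights sum to 1

def tap_filter(samples, weights):
    """Weighted sum of evenly spaced taps around a center sample."""
    assert len(samples) == len(weights)
    return sum(s * w for s, w in zip(samples, weights))

# A constant signal passes through unchanged: the hallmark of weights
# that sum to 1.
flat = tap_filter([0.5] * 5, weights)

# A spike is spread out instead of preserved: the blur in action.
spike = tap_filter([0.0, 0.0, 1.0, 0.0, 0.0], weights)
```

The same idea carries over to the shader: each `tex2D` tap at an offset coordinate is multiplied by its weight and the products are summed.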

You may or may not want to do this on the _NoiseO and _NoiseT textures themselves; alternatively, you may want a screen-space blur, which may look more interesting to the viewer. In that case the same concept applies, but you compute an offset coordinate for each tap and then perform the weighted summation.

For example, if we went with the first case and wanted to sample the _NoiseO sampler and blur it slightly, we could use this box filter (where all the weights are the same and sum to 1, which yields an average):

 // Untested code. g_offset is assumed to hold the sampling offset
 // (e.g. one texel) between taps.
 half4 nO = 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(         0,          0))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(         0, g_offset.y))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(g_offset.x,          0))
          + 0.25 * tex2D(_NoiseO, IN.uv_MainTex + float2(g_offset.x, g_offset.y));

Alternatively, if we wanted the whole cloud output to look blurred, we would wrap the cloud generation in a function and call it, instead of tex2D(), for the taps.

 // More untested code. Note that the taps must use the tc parameter,
 // not IN.uv_MainTex, or every tap samples the same spot.
 half4 genCloud(float2 tc) {
     half4 nO = tex2D(_NoiseO, tc);
     half4 nT = tex2D(_NoiseT, tc);
     float4 turbulence = nO + nT;
     float lum = Luminance(turbulence);
     float pos = lum - 1.0;
     if (pos > 0.98f) pos = 0.98f;
     if (pos < 0.02f) pos = 0.02f;
     float2 texCord = float2(pos, pos);
     half4 turb = tex2D(_Gradient, texCord);
     // Figure out how you'd generate your alpha blending constant
     // here for your lerp.
     turb.a = ACTUAL_ALPHA;
     return turb;
 }

And the multi-tap filtering would look like this:

 // And even more untested code.
 half4 cloudcolor = 0.25 * genCloud(IN.uv_MainTex + float2(         0,          0))
                  + 0.25 * genCloud(IN.uv_MainTex + float2(         0, g_offset.y))
                  + 0.25 * genCloud(IN.uv_MainTex + float2(g_offset.x,          0))
                  + 0.25 * genCloud(IN.uv_MainTex + float2(g_offset.x, g_offset.y));
 return lerp(c, cloudcolor, cloudcolor.a);

However, this will be relatively slow to compute if you make the cloud function too complex. If you are bound by raster operations and texture reads (moving texture/buffer data to and from memory), it is unlikely to matter unless you use a much more advanced blur technique, such as successive downsampling through ping-ponged buffers, which is useful for blurs/filters that are expensive because they have many taps. But performance is a whole other question; for now you just want it to look right.
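The successive-downsampling idea can be sketched on the CPU: each pass averages 2x2 blocks into a half-resolution buffer ("ping-ponging" between a source and a destination buffer), so a few cheap passes produce a wide blur without any many-tap kernel. The code below is a hypothetical Python illustration of the concept, not the shader implementation.

```python
# Illustrative sketch of blur-by-downsampling through ping-ponged
# buffers: each pass averages 2x2 blocks into a half-size buffer, so
# after a few cheap passes the image is heavily blurred without ever
# using a wide, many-tap kernel.

def downsample_pass(src):
    """Average 2x2 blocks of src into a half-resolution buffer."""
    h, w = len(src) // 2, len(src[0]) // 2
    dst = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dst[y][x] = 0.25 * (src[2*y][2*x]   + src[2*y][2*x+1] +
                                src[2*y+1][2*x] + src[2*y+1][2*x+1])
    return dst

def blur_by_downsampling(image, passes):
    # "Ping-pong": each pass reads the previous buffer and writes a
    # fresh one; on a GPU these would be two render targets swapped
    # each pass.
    buf = image
    for _ in range(passes):
        buf = downsample_pass(buf)
    return buf

# An 8x8 checkerboard averages out to flat gray after one pass.
img = [[float((x + y) % 2) for x in range(8)] for y in range(8)]
once = blur_by_downsampling(img, 1)
```

Each pass costs only four taps per output pixel, yet the effective blur radius doubles every pass, which is why this beats a single wide kernel when many taps would otherwise be needed.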


Source: https://habr.com/ru/post/1389753/

