Unlit Dynamic Decals/Projection

Often when you do visual effects, you want things to stick to the ground, even when that floor is uneven. Or you want decals to make existing geometry more interesting, or blob shadows on uneven floors, or a few other use cases in the same direction.

(note: I used the free "Nature Starter Kit 2" from the Unity Asset Store throughout this tutorial)

Unity has the concept of "projectors" for this, but I have to admit I don't trust them very much. They work by finding all the objects in the projector's frustum and redrawing them, which means you quickly get a lot of drawcalls and overdraw (and I don't know how expensive finding the objects is in complex scenes). So what I tend to use instead is to read the depth buffer, reconstruct the world space and then the object space position from that, and then use that as the coordinates for whatever we want to do. I hope the concept becomes clearer by the end of the tutorial. The first time I came across this was the Unity command buffer example, where this technique is used to render into the GBuffers before the deferred lighting pass, though this example here is simpler: https://docs.unity3d.com/Manual/GraphicsCommandBuffers.html.

Preparing the data we need

Since this will be a simple unlit shader apart from the projection, we can start with just a transparent shader. We don't need the UVs of the mesh itself, but we do need the screen position to read from the depth buffer, as well as a ray from the camera to the object to reconstruct the position (there are other ways, but this one seemed easiest to me). I go into screen space coordinates and their implications in my tutorial on them; for the ray we subtract the world space position of the camera from the world space position of the vertex. As I explain in that tutorial, to unstretch the coordinates to screen space we have to divide by the w component of the vector.

// the data passed from the vertex to the fragment shader and interpolated by the rasterizer
struct v2f{
    float4 position : SV_POSITION;
    float4 screenPos : TEXCOORD0;
    float3 ray : TEXCOORD1;
};
// the vertex shader function
v2f vert(appdata v){
    v2f o;
    // convert the vertex positions from object space to clip space so they can be rendered correctly
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex);
    o.position = UnityWorldToClipPos(worldPos);
    // calculate the ray between the camera and the vertex
    o.ray = worldPos - _WorldSpaceCameraPos;
    // calculate the screen position
    o.screenPos = ComputeScreenPos(o.position);
    return o;
}
// the fragment shader function
fixed4 frag(v2f i) : SV_TARGET{
    // unstretch screenspace uv and get uvs from function
    float2 screenUv = i.screenPos.xy / i.screenPos.w;
    getProjectedObjectPos(screenUv, i.ray);
    //...

And with that we can start using that data.

Doing the projection

Getting the depth

I opted to put the projection code into its own function, as that seemed cleaner and easier to copy between files or into a library include file. It takes the screen position as well as the ray we just created.

We start by sampling the depth texture. This is not the same as the depth buffer; it's a separate texture that is generated if we tell the camera we want it to do so. Many postprocessing effects already enable it, but depending on your scene you might have to do it yourself. I'll assume here that the depth texture exists; if not, take a quick look at my postprocessing tutorial that uses this texture. Once it's enabled, adding sampler2D_float _CameraDepthTexture; as a uniform variable to our pass is enough to get access to it. In the function we then use the SAMPLE_DEPTH_TEXTURE macro to read from the texture, and Linear01Depth(depth) * _ProjectionParams.z to first get rid of the bias it's stored with for better encoding, and then make it range from 0 to the far clip plane instead of 0 to 1.

// get depth from depth texture
float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenPos);
depth = Linear01Depth(depth) * _ProjectionParams.z;
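In case your project doesn't generate the depth texture yet, a minimal sketch of a C# component that could enable it (the class name EnableDepthTexture is just for illustration, the depthTextureMode flag is what matters):

using UnityEngine;

// attach to the camera to make sure it generates the depth texture
[RequireComponent(typeof(Camera))]
public class EnableDepthTexture : MonoBehaviour {
    void Start() {
        // tell the camera to render a depth texture in addition to the color buffer
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}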

Naive world space reconstruction

With the depth available, we reconstruct the world position by multiplying the normalized ray by the depth. Once we have that, getting the object space position is a simple matrix multiplication.

float3 getProjectedObjectPos(float2 screenPos, float3 worldRay){
    // get depth from depth texture
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenPos);
    depth = Linear01Depth(depth) * _ProjectionParams.z;
    // reconstruct the world and object space positions
    float3 worldPos = _WorldSpaceCameraPos + normalize(worldRay) * depth;
    float3 objectPos = mul(unity_WorldToObject, float4(worldPos, 1)).xyz;
    return objectPos;
}


Cutting off the parts behind

This already looks like positions based on the depth buffer and the object position, but it doesn't only show the position where the cube is, it also shows it behind the cube. Since we know that the default Unity cube we're using here is 1x1x1 units big, we know that the coordinates inside the cube go from -0.5 to 0.5, so we're going to discard all pixels outside of that. If we were using a different model, different constraints might make more sense. The clip function discards all pixels where the value it's fed is less than 0; if we give it a vector with multiple components, it discards the pixel if any of the components is below 0. So to solve our current problem, we can subtract the absolute of the object space position (which makes every component outside the -0.5 to 0.5 range end up negative) from 0.5 (which gets interpreted as a vector where each component is 0.5) and feed the result to the clip function.

clip(0.5 - abs(objectPos));


Fixing the world space reconstruction

Now we have a square on the ground, closer than we thought, but the square doesn't quite behave: towards the corners of the screen it seems to slide around. That's because we made a small mistake in the world space reconstruction: we assumed the depth texture holds the distance from the camera, but it actually holds the distance parallel to the camera's view direction (this is a consequence of how real-time graphics do their matrix multiplications, but that's a topic for another day).


So what can we do about it? We could choose a different reconstruction approach that avoids the problem, but what I chose to do is take the dot product between the (normalized) ray and the camera's forward direction. In the middle of the screen it will be 1, and it gets smaller the less the ray points in the camera's direction. And as you can see in the wonderful visualization by Freya Holmér, the amount it gets smaller by is exactly how far one vector reaches along the other, or in our case the inverse of the length we want. So if we divide the ray by this dot product, we get a vector that is 1 unit long in the direction the camera is facing. Overall it will be longer than 1 since it also points in other directions, but that's exactly what we need.

We can get the camera's forward vector by multiplying (0, 0, 1, 0) by the view matrix, since the view matrix is constructed from the camera's transformation. But since all that happens in this matrix multiplication is a bunch of multiplications by 1 or 0, we can simplify it by just taking the third row of the matrix with -UNITY_MATRIX_V[2].xyz. (Not sure why I had to add the minus sign here, but it works.)

float3 getProjectedObjectPos(float2 screenPos, float3 worldRay){
    // get depth from depth texture
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenPos);
    depth = Linear01Depth(depth) * _ProjectionParams.z;
    // get a ray that's 1 long on the camera axis (because that's how depth is defined)
    worldRay = normalize(worldRay);
    // the third row of the view matrix has the camera forward vector encoded, so a dot product with that gives the inverse distance in that direction
    worldRay /= dot(worldRay, -UNITY_MATRIX_V[2].xyz);
    // with that, reconstruct the world and object space positions
    float3 worldPos = _WorldSpaceCameraPos + worldRay * depth;
    float3 objectPos = mul(unity_WorldToObject, float4(worldPos, 1)).xyz;
    // discard pixels where any component is beyond +-0.5
    clip(0.5 - abs(objectPos));
    return objectPos;
}


And with that all the hard work is done!

What I did next was add 0.5 to the position before returning it, so when using the default Unity cube we're in 0 to 1 space, which is a nice space to work with for textures. If you want to do cool stuff with signed distance fields, you might not want that and instead keep the center of the coordinate system at the center of the cube.

// get -0.5|0.5 space to 0|1 for nice texture stuff if that's what we want
objectPos += 0.5;

In my fragment function, we can then directly feed the x and z components (assuming y is up) of the output into the tex2D function as UV coordinates.

// the fragment shader function
fixed4 frag(v2f i) : SV_TARGET{
    // unstretch screenspace uv and get uvs from function
    float2 screenUv = i.screenPos.xy / i.screenPos.w;
    float2 uv = getProjectedObjectPos(screenUv, i.ray).xz;
    // read the texture color at the uv coordinate
    fixed4 col = tex2D(_MainTex, uv);
    // multiply the texture color and the tint color
    col *= _Color;
    // return the final color to be drawn on the screen
    return col;
}


And the last tweak, which doesn't change the images but makes the shader more robust, is to lower the render queue a bit so the decal draws before other transparent shaders, which don't write to the depth buffer and wouldn't be affected by it anyway. We also disable batching, since batching would mean the origin of the batched mesh sits at the world origin, and we need the origin to be where our original object is. Both settings go in the SubShader tags.

Tags{ "RenderType"="Transparent" "Queue"="Transparent-400" "DisableBatching"="True" }

Ideas for improvements

Right now, if we place the camera inside the decal object, the decal disappears. We can work around this by culling the front faces instead of the back faces and disabling the depth test, but that comes at the cost of more overdraw, since fragments behind walls are drawn just to be discarded. A LOD system that switches the shader to the always-drawing variant only when the camera is very close is a good fit here, as sketched below.
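A minimal sketch of what that variant could look like; only these two render states change compared to the shader in the source below, the rest stays the same:

// hypothetical "camera inside the decal" variant of the pass setup:
// render the back faces so the decal stays visible when the camera enters the cube
Cull Front
// always pass the depth test, at the cost of shading fragments that are hidden behind walls
ZTest Always

Since every covered pixel now runs the fragment shader regardless of occlusion, switching to this variant only when the camera is close keeps the common case cheap.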

Another weakness is that the decals currently affect everything that renders into the depth buffer, which includes dynamic characters, moving projectiles, opaque particles... My tool of choice to combat this would be stencil buffers, either masking out a stencil value for the decals or only allowing that value.
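As a rough sketch of the "only allow that value" option (the reference value 42 is arbitrary and just for illustration): the shaders of surfaces that should receive decals write a stencil value, and the decal pass only draws where that value is present.

// in the shaders of the surfaces that should receive decals
Stencil{
    Ref 42
    Pass Replace
}

// in the decal pass: only draw where the stencil buffer holds that value
Stencil{
    Ref 42
    Comp Equal
}

Keep in mind that the deferred rendering path already uses some stencil bits internally, so the value needs picking with care there.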

Also, the technique can look a bit rough when drawing on surfaces that are orthogonal to the decal, drawing stripes as the texture gets stretched. To combat that, you can tell the camera to also render a normals texture and compare the decal's "up" with the surface normal and fade out the decal if they're too different, or try to cheaply reconstruct the normals from the depth buffer using partial derivatives, as in the sketch below.
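A minimal sketch of the partial derivative idea, assuming it's placed inside getProjectedObjectPos after worldPos is reconstructed; the 0.3 threshold is an arbitrary illustration value, and the cross product order may need flipping depending on platform conventions:

// approximate the surface normal from how the reconstructed world position
// changes between neighboring pixels
float3 surfaceNormal = normalize(cross(ddy(worldPos), ddx(worldPos)));
// the decal's up axis is the second column of the object to world matrix
float3 decalUp = normalize(unity_ObjectToWorld._m01_m11_m21);
// discard (or fade) pixels where the surface points too far away from the decal's up axis
clip(dot(surfaceNormal, decalUp) - 0.3);

Since it's based on screen space derivatives, this gives flat per-triangle normals and gets noisy at depth discontinuities, but it avoids rendering a separate normals texture.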

And so far these are just unlit decals; you can do lit decals using the same technique, but sadly not with surface shaders, which means a lot of work, especially in the built-in render pipeline where you have multiple passes for multiple lights.

Source

Shadow"Tutorial/054_UnlitDynamicDecal"{// display values ​​to edit in the inspectorProperties{[HDR] _Color ("Hue", coro)=(0,0,0,1) _MainTex ("Texture",2D)= "branco"{}}SubShader{//material is fully transparent and renders before other transparent geometry by default (by 2500)Label{"render type"="Transparent" "fila"="Transparent-400" "Disable batch processing"="TRUE"}// Blend via alphaBlend SrcAlpha OneMinusSrcAlpha//don't write to zbuffer because we have semi-transparencyZCancelarAprobar{CGPROGRAM//include useful shading functions#include"UnityCG.cginc"// define the vertex and fragment shadow functions#pragma vertice vert#Fragmentation pragma fragment// texture and texture transformations2D sampler_MainTex;float4 _MainTex_ST;// texture tonefixo4 _Cor;//global texture containing depth informationsampler2D_float _CameraDepthTexture;// the mesh data read by the vertex shaderstructureappdata{float4 vertex:POSITION;};// the data passed from the vertex to the fragment shader and interpolated by the rasterizerstructurev2f{float position4:SV_POSITION; floating screen position4:TEXTCOORD0;radio float3:TEXCOORD1;};// the vertex shader functionv2f vert(v application data){v2f o;// convert the vertex positions from object space to clip space so they can be rendered correctlyfloat3 worldPos=mul(unity_ObjectToWorld, v.vertex);o.position=UnityWorldToClipPos(worldPos);//calculate the radius between the camera and the vertexo.ray=afterworld-_WorldSpaceCameraPos;//calculate the position of the screeno.screenPos=ComputeScreenPos(o.position);returno;}float3 getProjectedObjectPos(float2 screenPos, float3 worldRay){// get depth from depth texturefloatdepth=SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenPos); profundidade=Linear01Depth (depth)*_ProjectionParams.z;// get a radius of 1 length in the camera axis (because that's how depth is defined)lightning of the world=normalize (worldray);//the third row of the view matrix has the forward vector of the camera encoded, so a dot product with that will give the inverse distance in that directionlightning of the world/=point (worldray,-UNITY_MATRIX_V[2].xyz);// with this we reconstruct the positions of the world and space of the objectfloat3 worldPos=_WorldSpaceCameraPos+lightning of the world*depth; float3 objectPos=mul (unity_WorldToObject, float4(worldPos,1)).xyz;// discard pixels where any component is beyond +-0.5shorten(0,5 -abs(objectPos));//gets -0.5|0.5 headspace to 0|1 for a nice texture material if that's what we wantobjectPos+= 0,5;returnobjectPos;}// the fragment shader functionfixed fragment4 (v2f i):SV_OBJECTIVE{// unstretch screenspace uv and get uvs from functionfloat2uv screen=i.screenPos.xy/i.screenPos.w;float2 uv=getProjectedObjectPos(screenUv, i.ray).xz;//read the color of the texture in uv coordinatesarray 4 column=tex2D(_MainTex, uv);//multiply the texture color and the tint colorcolumn*=_Cor;//returns the final color to be drawn on the screenreturncolumn;}ENDCG}}}

I hope you enjoyed my tutorial ✨. If you want to support me further, feel free to follow me on Twitter, throw me a one-time donation via Ko-fi, or support me on Patreon (I try to post updates there too but fail most of the time, bear with me 💖).
