Directional Lights


This is the third part of a tutorial series about creating a custom scriptable render pipeline. It adds support for shading with multiple directional lights.

This tutorial is made with Unity 2019.2.12f1.

Lighting

If we want to create a more realistic scene then we’ll have to simulate how light interacts with surfaces. This requires a more complex shader than the unlit one that we currently have.

Lit Shader

Duplicate the UnlitPass HLSL file and rename it to LitPass. Adjust the include guard define and the vertex and fragment function names to match. We’ll add lighting calculations later.

#ifndef CUSTOM_LIT_PASS_INCLUDED
#define CUSTOM_LIT_PASS_INCLUDED

…

Varyings LitPassVertex (Attributes input) { … }

float4 LitPassFragment (Varyings input) : SV_TARGET { … }

#endif

Also duplicate the Unlit shader and rename it to Lit. Change its menu name, the file it includes, and the functions it uses. Let’s also change the default color to gray, as a fully white surface in a well-lit scene can appear very bright. The Universal pipeline uses a gray color by default as well.

Shader "Custom RP/Lit" {
	
	Properties {
		_BaseMap("Texture", 2D) = "white" {}
		_BaseColor("Color", Color) = (0.5, 0.5, 0.5, 1.0)
		…
	}

	SubShader {
		Pass {
			…
			#pragma vertex LitPassVertex
			#pragma fragment LitPassFragment
			#include "LitPass.hlsl"
			ENDHLSL
		}
	}
}

We’re going to use a custom lighting approach, which we’ll indicate by setting the light mode of our shader to CustomLit. Add a Tags block to the Pass, containing "LightMode" = "CustomLit".

		Pass {
			Tags {
				"LightMode" = "CustomLit"
			}

			…
		}

To render objects that use this pass we have to include it in CameraRenderer. First add a shader tag identifier for it.

	static ShaderTagId
		unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit"),
		litShaderTagId = new ShaderTagId("CustomLit");

Then add it to the passes to be rendered in DrawVisibleGeometry, like we did in DrawUnsupportedShaders.

		var drawingSettings = new DrawingSettings(
			unlitShaderTagId, sortingSettings
		) {
			enableDynamicBatching = useDynamicBatching,
			enableInstancing = useGPUInstancing
		};
		drawingSettings.SetShaderPassName(1, litShaderTagId);

Now we can create a new opaque material, though at this point it produces the same results as an unlit material.

Default opaque material.

Normal Vectors

How well an object is lit depends on multiple factors, including the relative angle between the light and surface. To know the surface’s orientation we need to access the surface normal, which is a unit-length vector pointing straight away from it. This vector is part of the vertex data, defined in object space just like the position. So add it to Attributes in LitPass.

struct Attributes {
	float3 positionOS : POSITION;
	float3 normalOS : NORMAL;
	float2 baseUV : TEXCOORD0;
	UNITY_VERTEX_INPUT_INSTANCE_ID
};

Lighting is calculated per fragment, so we have to add the normal vector to Varyings as well. We’ll perform the calculations in world space, so name it normalWS.

struct Varyings {
	float4 positionCS : SV_POSITION;
	float3 normalWS : VAR_NORMAL;
	float2 baseUV : VAR_BASE_UV;
	UNITY_VERTEX_INPUT_INSTANCE_ID
};

We can use TransformObjectToWorldNormal from SpaceTransforms to convert the normal to world space in LitPassVertex.

	float3 positionWS = TransformObjectToWorld(input.positionOS);
	output.positionCS = TransformWorldToHClip(positionWS);
	output.normalWS = TransformObjectToWorldNormal(input.normalOS);

To verify whether we get a correct normal vector in LitPassFragment we can use it as a color.

	base.rgb = input.normalWS;
	return base;
World-space normal vectors.

Negative values cannot be visualized, so they’re clamped to zero.

Interpolated Normals

Although the normal vectors are unit-length in the vertex program, linear interpolation across triangles affects their length. We can visualize the error by rendering the difference between one and the vector’s length, magnified by ten to make it more obvious.

	base.rgb = abs(length(input.normalWS) - 1.0) * 10.0;
Interpolated normal error, exaggerated.

We can smooth out the interpolation distortion by normalizing the normal vector in LitPassFragment. The difference isn’t really noticeable when just looking at the normal vectors, but it’s more obvious when used for lighting.

	base.rgb = normalize(input.normalWS);
Normalization after interpolation.

Surface Properties

Lighting in a shader is about simulating the interactions between light that hits a surface, which means that we must keep track of the surface’s properties. Right now we have a normal vector and a base color. We can split the latter in two: the RGB color and the alpha value. We’ll be using this data in a few places, so let’s define a convenient Surface struct to contain all relevant data. Put it in a separate Surface HLSL file in the ShaderLibrary folder.

#ifndef CUSTOM_SURFACE_INCLUDED
#define CUSTOM_SURFACE_INCLUDED

struct Surface {
	float3 normal;
	float3 color;
	float alpha;
};

#endif

Include it in LitPass, after Common. That way we can keep LitPass short. We’ll put specialized code in its own HLSL file from now on, to make it easier to locate the relevant functionality.

#include "../ShaderLibrary/Common.hlsl"
#include "../ShaderLibrary/Surface.hlsl"

Define a surface variable in LitPassFragment and fill it. Then the final result becomes the surface’s color and alpha.

	Surface surface;
	surface.normal = normalize(input.normalWS);
	surface.color = base.rgb;
	surface.alpha = base.a;

	return float4(surface.color, surface.alpha);

Calculating Lighting

To calculate the actual lighting we’ll create a GetLighting function that has a Surface parameter. Initially have it return the Y component of the surface normal. As this is lighting functionality we’ll put it in a separate Lighting HLSL file.

#ifndef CUSTOM_LIGHTING_INCLUDED
#define CUSTOM_LIGHTING_INCLUDED

float3 GetLighting (Surface surface) {
	return surface.normal.y;
}

#endif
Include it in LitPass, after including Surface because Lighting depends on it.

#include "../ShaderLibrary/Surface.hlsl"
#include "../ShaderLibrary/Lighting.hlsl"

Now we can get the lighting in LitPassFragment and use that for the RGB part of the fragment.

	float3 color = GetLighting(surface);
	return float4(color, surface.alpha);
Diffuse lighting from above.

At this point the result is the Y component of the surface normal, so it is one at the top of the sphere and drops down to zero at its sides. Below that the result becomes negative, reaching −1 at the bottom, but we cannot see negative values. It matches the cosine of the angle between the normal and up vectors. Ignoring the negative part, this visually matches diffuse lighting of a directional light pointing straight down. The finishing touch would be to factor the surface color into the result in GetLighting, interpreting it as the surface albedo.

float3 GetLighting (Surface surface) {
	return surface.normal.y * surface.color;
}
Albedo applied.

Lights

To perform proper lighting we also need to know the properties of the light. In this tutorial we’ll limit ourselves to directional lights only. A directional light represents a source of light so far away that its position doesn’t matter, only its direction. This is a simplification, but it’s good enough to simulate the Sun’s light on Earth and other situations where incoming light is more or less unidirectional.

Light Structure

We’ll use a struct to store the light data. For now we can suffice with a color and a direction. Put it in a separate Light HLSL file. Also define a GetDirectionalLight function that returns a configured directional light. Initially use a white color and the up vector, matching the light data that we’re currently using. Note that the light’s direction is thus defined as the direction from where the light is coming, not where it is going.

#ifndef CUSTOM_LIGHT_INCLUDED
#define CUSTOM_LIGHT_INCLUDED

struct Light {
	float3 color;
	float3 direction;
};

Light GetDirectionalLight () {
	Light light;
	light.color = 1.0;
	light.direction = float3(0.0, 1.0, 0.0);
	return light;
}

#endif

Include the file in LitPass before Lighting.

#include "../ShaderLibrary/Light.hlsl"
#include "../ShaderLibrary/Lighting.hlsl"

Lighting Functions

Add an IncomingLight function to Lighting that calculates how much incoming light there is for a given surface and light. For an arbitrary light direction we have to take the dot product of the surface normal and the direction. We can use the dot function for that. The result should be modulated by the light’s color.

float3 IncomingLight (Surface surface, Light light) {
	return dot(surface.normal, light.direction) * light.color;
}

But this is only correct when the surface is oriented toward the light. When the dot product is negative we have to clamp it to zero, which we can do via the saturate function.

float3 IncomingLight (Surface surface, Light light) {
	return saturate(dot(surface.normal, light.direction)) * light.color;
}

Add another GetLighting function, which returns the final lighting for a surface and light. For now it’s the incoming light multiplied by the surface color. Define it above the other function.

float3 GetLighting (Surface surface, Light light) {
	return IncomingLight(surface, light) * surface.color;
}

Finally, adjust the GetLighting function that only has a surface parameter so it invokes the other one, using GetDirectionalLight to provide the light data.

float3 GetLighting (Surface surface) {
	return GetLighting(surface, GetDirectionalLight());
}

Sending Light Data to the GPU

Instead of always using a white light from above we should use the light of the current scene. The default scene came with a directional light that represents the Sun, has a slightly yellowish color—FFF4D6 hexadecimal—and is rotated 50° around the X axis and −30° around the Y axis. If such a light doesn’t exist create one.

To make the light’s data accessible in the shader we’ll have to create uniform values for it, just like for shader properties. In this case we’ll define two float3 vectors: _DirectionalLightColor and _DirectionalLightDirection. Put them in a _CustomLight buffer defined at the top of Light.

CBUFFER_START(_CustomLight)
	float3 _DirectionalLightColor;
	float3 _DirectionalLightDirection;
CBUFFER_END

Use these values instead of constants in GetDirectionalLight.

Light GetDirectionalLight () {
	Light light;
	light.color = _DirectionalLightColor;
	light.direction = _DirectionalLightDirection;
	return light;
}

Now our RP must send the light data to the GPU. We’ll create a new Lighting class for that. It works like CameraRenderer but for lights. Give it a public Setup method with a context parameter, in which it invokes a separate SetupDirectionalLight method. Although not strictly necessary, let’s also give it a dedicated command buffer that we execute when done, which can be handy for debugging. The alternative would be to add a buffer parameter.

using UnityEngine;
using UnityEngine.Rendering;

public class Lighting {

	const string bufferName = "Lighting";

	CommandBuffer buffer = new CommandBuffer {
		name = bufferName
	};

	public void Setup (ScriptableRenderContext context) {
		buffer.BeginSample(bufferName);
		SetupDirectionalLight();
		buffer.EndSample(bufferName);
		context.ExecuteCommandBuffer(buffer);
		buffer.Clear();
	}

	void SetupDirectionalLight () {}
}

Keep track of the identifiers of the two shader properties.

	static int
		dirLightColorId = Shader.PropertyToID("_DirectionalLightColor"),
		dirLightDirectionId = Shader.PropertyToID("_DirectionalLightDirection");

We can access the scene’s main light via RenderSettings.sun. That gets us the most important directional light by default and it can also be explicitly configured via Window / Rendering / Lighting Settings. Use CommandBuffer.SetGlobalVector to send the light data to the GPU. The color is the light’s color in linear space, while the direction is the light transformation’s forward vector negated.

	void SetupDirectionalLight () {
		Light light = RenderSettings.sun;
		buffer.SetGlobalVector(dirLightColorId, light.color.linear);
		buffer.SetGlobalVector(dirLightDirectionId, -light.transform.forward);
	}

The light’s color property is its configured color, but lights also have a separate intensity factor. The final color is the two multiplied together.

		buffer.SetGlobalVector(
			dirLightColorId, light.color.linear * light.intensity
		);

Give CameraRenderer a Lighting instance and use it to set up the lighting before drawing the visible geometry.

	Lighting lighting = new Lighting();

	public void Render (
		ScriptableRenderContext context, Camera camera,
		bool useDynamicBatching, bool useGPUInstancing
	) {
		…

		Setup();
		lighting.Setup(context);
		DrawVisibleGeometry(useDynamicBatching, useGPUInstancing);
		DrawUnsupportedShaders();
		DrawGizmos();
		Submit();
	}
Lit by the sun.

Visible Lights

When culling Unity also figures out which lights affect the space visible to the camera. We can rely on that information instead of the global sun. To do so Lighting needs access to the culling results, so add a parameter for that to Setup and store it in a field for convenience. Then we can support more than one light, so replace the invocation of SetupDirectionalLight with a new SetupLights method.

	CullingResults cullingResults;

	public void Setup (
		ScriptableRenderContext context, CullingResults cullingResults
	) {
		this.cullingResults = cullingResults;
		buffer.BeginSample(bufferName);
		//SetupDirectionalLight();
		SetupLights();
		…
	}
	
	void SetupLights () {}

Add the culling results as an argument when invoking Setup in CameraRenderer.Render.

		lighting.Setup(context, cullingResults);

Now Lighting.SetupLights can retrieve the required data via the visibleLights property of the culling results. It’s made available as a Unity.Collections.NativeArray with the VisibleLight element type.

using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public class Lighting {
	…

	void SetupLights () {
		NativeArray<VisibleLight> visibleLights = cullingResults.visibleLights;
	}

	…
}

Multiple Directional Lights

Using the visible light data makes it possible to support multiple directional lights, but we have to send the data of all those lights to the GPU. So instead of a pair of vectors we’ll use two Vector4 arrays, plus an integer for the light count. We’ll also define a maximum amount of directional lights, which we can use to initialize two array fields to buffer the data. Let’s set the maximum to four, which should be enough for most scenes.

	const int maxDirLightCount = 4;

	static int
		//dirLightColorId = Shader.PropertyToID("_DirectionalLightColor"),
		//dirLightDirectionId = Shader.PropertyToID("_DirectionalLightDirection");
		dirLightCountId = Shader.PropertyToID("_DirectionalLightCount"),
		dirLightColorsId = Shader.PropertyToID("_DirectionalLightColors"),
		dirLightDirectionsId = Shader.PropertyToID("_DirectionalLightDirections");

	static Vector4[]
		dirLightColors = new Vector4[maxDirLightCount],
		dirLightDirections = new Vector4[maxDirLightCount];

Add an index and a VisibleLight parameter to SetupDirectionalLight. Have it set the color and direction elements with the supplied index. In this case the final color is provided via the VisibleLight.finalColor property. The forward vector can be found via the VisibleLight.localToWorldMatrix property. It’s the third column of the matrix and once again has to be negated.

	void SetupDirectionalLight (int index, VisibleLight visibleLight) {
		dirLightColors[index] = visibleLight.finalColor;
		dirLightDirections[index] = -visibleLight.localToWorldMatrix.GetColumn(2);
	}

The final color already has the light’s intensity applied, but by default Unity doesn’t convert it to linear space. We have to set GraphicsSettings.lightsUseLinearIntensity to true, which we can do once in the constructor of CustomRenderPipeline.

	public CustomRenderPipeline (
		bool useDynamicBatching, bool useGPUInstancing, bool useSRPBatcher
	) {
		this.useDynamicBatching = useDynamicBatching;
		this.useGPUInstancing = useGPUInstancing;
		GraphicsSettings.useScriptableRenderPipelineBatching = useSRPBatcher;
		GraphicsSettings.lightsUseLinearIntensity = true;
	}

Next, loop through all visible lights in Lighting.SetupLights and invoke SetupDirectionalLight for each element. Then invoke SetGlobalInt and SetGlobalVectorArray on the buffer to send the data to the GPU.

		NativeArray<VisibleLight> visibleLights = cullingResults.visibleLights;
		for (int i = 0; i < visibleLights.Length; i++) {
			VisibleLight visibleLight = visibleLights[i];
			SetupDirectionalLight(i, visibleLight);
		}

		buffer.SetGlobalInt(dirLightCountId, visibleLights.Length);
		buffer.SetGlobalVectorArray(dirLightColorsId, dirLightColors);
		buffer.SetGlobalVectorArray(dirLightDirectionsId, dirLightDirections);

But we only support up to four directional lights, so we should abort the loop when we reach that maximum. Let’s keep track of the directional light index separate from the loop’s iterator.

		int dirLightCount = 0;
		for (int i = 0; i < visibleLights.Length; i++) {
			VisibleLight visibleLight = visibleLights[i];
			SetupDirectionalLight(dirLightCount++, visibleLight);
			if (dirLightCount >= maxDirLightCount) {
				break;
			}
		}

		buffer.SetGlobalInt(dirLightCountId, dirLightCount);

Because we only support directional lights we should ignore other light types. We can do this by checking whether the lightType property of the visible lights is equal to LightType.Directional.

			VisibleLight visibleLight = visibleLights[i];
			if (visibleLight.lightType == LightType.Directional) {
				SetupDirectionalLight(dirLightCount++, visibleLight);
				if (dirLightCount >= maxDirLightCount) {
					break;
				}
			}

This works, but the VisibleLight struct is rather big. Ideally we only retrieve it once from the native array and don’t also pass it as a regular argument to SetupDirectionalLight, because that copies it. We can use the same trick that Unity uses for the ScriptableRenderContext.DrawRenderers method, which is passing the argument by reference.

				SetupDirectionalLight(dirLightCount++, ref visibleLight);

That requires us to also define the parameter as a reference.

	void SetupDirectionalLight (int index, ref VisibleLight visibleLight) { … }

Shader Loop

Adjust the _CustomLight buffer in Light so it matches our new data format. In this case we’ll explicitly use float4 for the array types. Arrays have a fixed size in shaders; they cannot be resized. Make sure to use the same maximum that we defined in Lighting.

#define MAX_DIRECTIONAL_LIGHT_COUNT 4

CBUFFER_START(_CustomLight)
	//float4 _DirectionalLightColor;
	//float4 _DirectionalLightDirection;
	int _DirectionalLightCount;
	float4 _DirectionalLightColors[MAX_DIRECTIONAL_LIGHT_COUNT];
	float4 _DirectionalLightDirections[MAX_DIRECTIONAL_LIGHT_COUNT];
CBUFFER_END

Add a function to get the directional light count and adjust GetDirectionalLight so it retrieves the data for a specific light index.

int GetDirectionalLightCount () {
	return _DirectionalLightCount;
}

Light GetDirectionalLight (int index) {
	Light light;
	light.color = _DirectionalLightColors[index].rgb;
	light.direction = _DirectionalLightDirections[index].xyz;
	return light;
}

Then adjust GetLighting for a surface so it uses a for loop to accumulate the contribution of all directional lights.

float3 GetLighting (Surface surface) {
	float3 color = 0.0;
	for (int i = 0; i < GetDirectionalLightCount(); i++) {
		color += GetLighting(surface, GetDirectionalLight(i));
	}
	return color;
}
Four directional lights.

Now our shader supports up to four directional lights. Usually only a single directional light is needed to represent the Sun or Moon, but maybe there’s a scene on a planet with multiple suns. Directional lights could also be used to approximate multiple large light rigs, for example those of a big stadium.

If your game always has a single directional light then you could get rid of the loop, or make multiple shader variants. But for this tutorial we’ll keep it simple and stick to a single general-purpose loop. The best performance is always achieved by ripping out everything that you do not need, although it doesn’t always make a significant difference.
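For example, if you always have exactly one directional light, the loop variant of GetLighting could be replaced with something like this minimal sketch, which assumes the single light always occupies index zero:

float3 GetLighting (Surface surface) {
	// Hypothetical single-light shortcut: skip the count lookup and loop.
	return GetLighting(surface, GetDirectionalLight(0));
}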

Shader Target Level

Loops with a variable length used to be a problem for shaders, but modern GPUs can handle them without issues, especially when all fragments of a draw call iterate over the same data in the same way. However, the OpenGL ES 2.0 and WebGL 1.0 graphics APIs can’t deal with such loops by default. We could make it work by incorporating a hard-coded maximum, for example by having GetDirectionalLightCount return min(_DirectionalLightCount, MAX_DIRECTIONAL_LIGHT_COUNT). That makes it possible to unroll the loop, turning it into a sequence of conditional code blocks. Unfortunately the resulting shader code is a mess and performance degrades quickly. On very old-fashioned hardware all code blocks would always get executed, their contribution controlled via conditional assignments. While we could make that work it would complicate the code, because we’d have to make other adjustments as well.

So I opt to ignore these limitations and turn off WebGL 1.0 and OpenGL ES 2.0 support in builds, for the sake of simplicity. They don’t support linear lighting anyway. We can also avoid compiling OpenGL ES 2.0 shader variants by raising the target level of our shader pass to 3.5, via the #pragma target 3.5 directive. Let’s be consistent and do this for both shaders.

			HLSLPROGRAM
			#pragma target 3.5
			…
			ENDHLSL
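For reference, the hard-coded maximum approach mentioned earlier would look something like the following sketch. We don’t use it, but it shows how the count could be clamped so the compiler can unroll the loop:

int GetDirectionalLightCount () {
	// Clamping to a compile-time constant makes loop unrolling possible.
	return min(_DirectionalLightCount, MAX_DIRECTIONAL_LIGHT_COUNT);
}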

BRDF

We’re currently using a very simplistic lighting model, appropriate for perfectly diffuse surfaces only. We can achieve more varied and realistic lighting by applying a bidirectional reflectance distribution function, BRDF for short. There are many such functions. We’ll use the same one that’s used by the Universal RP, which trades some realism for performance.

Incoming Light

When a beam of light hits a surface fragment head-on then all its energy will affect the fragment. For simplicity we’ll assume that the beam’s width matches the fragment’s. This is the case where the light direction `L` and surface normal `N` align, so `N·L = 1`. When they don’t align at least part of the beam misses the surface fragment, so less energy affects the fragment. The energy portion that affects the fragment is `N·L`. A negative result means that the surface is oriented away from the light, so it cannot be affected by it.
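In equation form, writing $C$ for the light’s color, the incoming light amount matches the IncomingLight function we created earlier:

$$E = \operatorname{saturate}(N \cdot L)\, C$$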

Incoming light portion.

Outgoing Light

We don’t see the light that arrives at a surface directly. We only see the portion that bounces off the surface and arrives at the camera or our eyes. If the surface were a perfectly flat mirror then the light would reflect off it, with an outgoing angle equal to the incoming angle. We would only see this light if the camera were aligned with it. This is known as specular reflection. It’s a simplification of light interaction, but it’s good enough for our purposes.

Perfect specular reflection.

But if the surface isn’t perfectly flat then the light gets scattered, because the fragment effectively consists of many smaller fragments that have different orientations. This splits the beam of light into smaller beams that go in different directions, which effectively blurs the specular reflection. We could end up seeing some of the scattered light even when not aligned with the perfect reflection direction.

Scattered specular reflection.

Besides that, the light also penetrates the surface, bounces around, and exits at different angles, plus other things that we don’t need to consider. Taken to the extreme, we end up with a perfectly diffuse surface that scatters light evenly in all possible directions. This is the lighting that we’re currently calculating in our shader.

Perfect diffuse reflection.

No matter where the camera is, the amount of diffused light received from the surface is the same. But this means that the light energy that we observe is far less than the amount that arrived at the surface fragment. This suggests that we should scale the incoming light by some factor. However, because the factor is always the same we can bake it into the light’s color and intensity. Thus the final light color that we use represents the amount observed when reflected from a perfectly white diffuse surface fragment illuminated head-on. This is a tiny fraction of the total amount of light that is actually emitted. There are other ways to configure lights, for example by specifying lumen or lux, which make it easier to configure realistic light sources, but we’ll stick with the current approach.

Surface Properties

Surfaces can be perfectly diffuse, perfect mirrors, or anything in between. There are multiple ways in which we could control this. We’ll use the metallic workflow, which requires us to add two surface properties to the Lit shader.

The first property is whether a surface is metallic or nonmetallic, also known as a dielectric. Because a surface can contain a mix of both we’ll add a range 0–1 slider for it, with 1 indicating that it is fully metallic. The default is fully dielectric.

The second property controls how smooth the surface is. We’ll also use a range 0–1 slider for this, with 0 being perfectly rough and 1 being perfectly smooth. We’ll use 0.5 as the default.

		_Metallic ("Metallic", Range(0, 1)) = 0
		_Smoothness ("Smoothness", Range(0, 1)) = 0.5
Material with metallic and smoothness sliders.

Add the properties to the UnityPerMaterial buffer.

UNITY_INSTANCING_BUFFER_START(UnityPerMaterial)
	UNITY_DEFINE_INSTANCED_PROP(float4, _BaseMap_ST)
	UNITY_DEFINE_INSTANCED_PROP(float4, _BaseColor)
	UNITY_DEFINE_INSTANCED_PROP(float, _Cutoff)
	UNITY_DEFINE_INSTANCED_PROP(float, _Metallic)
	UNITY_DEFINE_INSTANCED_PROP(float, _Smoothness)
UNITY_INSTANCING_BUFFER_END(UnityPerMaterial)

And also to the Surface struct.

struct Surface {
	float3 normal;
	float3 color;
	float alpha;
	float metallic;
	float smoothness;
};

Copy them to the surface in LitPassFragment.

	Surface surface;
	surface.normal = normalize(input.normalWS);
	surface.color = base.rgb;
	surface.alpha = base.a;
	surface.metallic = UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial, _Metallic);
	surface.smoothness =
		UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial, _Smoothness);

And also add support for them to PerObjectMaterialProperties.

	static int
		baseColorId = Shader.PropertyToID("_BaseColor"),
		cutoffId = Shader.PropertyToID("_Cutoff"),
		metallicId = Shader.PropertyToID("_Metallic"),
		smoothnessId = Shader.PropertyToID("_Smoothness");

	…

	[SerializeField, Range(0f, 1f)]
	float alphaCutoff = 0.5f, metallic = 0f, smoothness = 0.5f;

	…

	void OnValidate () {
		…
		block.SetFloat(metallicId, metallic);
		block.SetFloat(smoothnessId, smoothness);
		GetComponent<Renderer>().SetPropertyBlock(block);
	}
}

BRDF Properties

We’ll use the surface properties to calculate the BRDF equation. It tells us how much light we end up seeing reflected off a surface, which is a combination of diffuse reflection and specular reflection. We need to split the surface color in a diffuse and a specular part, and we’ll also need to know how rough the surface is. Let’s keep track of these three values in a BRDF struct, put in a separate BRDF HLSL file.

#ifndef CUSTOM_BRDF_INCLUDED
#define CUSTOM_BRDF_INCLUDED

struct BRDF {
	float3 diffuse;
	float3 specular;
	float roughness;
};

#endif

Add a function to get the BRDF data for a given surface. Start with a perfect diffuse surface, so the diffuse part is equal to the surface color while specular is black and roughness is one.

BRDF GetBRDF (Surface surface) {
	BRDF brdf;
	brdf.diffuse = surface.color;
	brdf.specular = 0.0;
	brdf.roughness = 1.0;
	return brdf;
}

Include BRDF after Light and before Lighting.

#include "../ShaderLibrary/Common.hlsl"
#include "../ShaderLibrary/Surface.hlsl"
#include "../ShaderLibrary/Light.hlsl"
#include "../ShaderLibrary/BRDF.hlsl"
#include "../ShaderLibrary/Lighting.hlsl"

Add a BRDF parameter to both GetLighting functions, then multiply the incoming light with the diffuse portion instead of the entire surface color.

float3 GetLighting (Surface surface, BRDF brdf, Light light) {
	return IncomingLight(surface, light) * brdf.diffuse;
}

float3 GetLighting (Surface surface, BRDF brdf) {
	float3 color = 0.0;
	for (int i = 0; i < GetDirectionalLightCount(); i++) {
		color += GetLighting(surface, brdf, GetDirectionalLight(i));
	}
	return color;
}

Finally, get the BRDF data in LitPassFragment and pass it to GetLighting.

	BRDF brdf = GetBRDF(surface);
	float3 color = GetLighting(surface, brdf);

Reflectivity

How reflective a surface is varies, but in general metals reflect all light via specular reflection and have zero diffuse reflection. So we’ll declare reflectivity to be equal to the metallic surface property. Light that gets reflected doesn’t get diffused, so we should scale the diffuse color by one minus the reflectivity in GetBRDF.

	float oneMinusReflectivity = 1.0 - surface.metallic;

	brdf.diffuse = surface.color * oneMinusReflectivity;
White spheres with metallic 0, 0.25, 0.5, 0.75, and 1.

In reality some light also bounces off dielectric surfaces, which gives them their highlights. The reflectivity of nonmetals varies, but is about 0.04 on average. Let’s define that as the minimum reflectivity and add a OneMinusReflectivity function that adjusts the range from 0–1 to 0–0.96. This range adjustment matches the Universal RP’s approach.
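In other words, writing $m$ for the metallic value, the function computes:

$$1 - R = (1 - 0.04)(1 - m)$$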

#define MIN_REFLECTIVITY 0.04

float OneMinusReflectivity (float metallic) {
	float range = 1.0 - MIN_REFLECTIVITY;
	return range - metallic * range;
}

Use that function in GetBRDF to enforce the minimum. The difference is hardly noticeable when only rendering the diffuse reflections, but will matter a lot when we add specular reflections. Without it nonmetals won’t get specular highlights.

	float oneMinusReflectivity = OneMinusReflectivity(surface.metallic);

Specular Color

Light that gets reflected one way cannot also get reflected another way. This is known as energy conservation, which means that the amount of outgoing light cannot exceed the amount of incoming light. This suggests that the specular color should be equal to the surface color minus the diffuse color.

	brdf.diffuse = surface.color * oneMinusReflectivity;
	brdf.specular = surface.color - brdf.diffuse;

However, this ignores the fact that metals affect the color of specular reflections while nonmetals don’t. The specular color of dielectric surfaces should be white, which we can achieve by using the metallic property to interpolate between the minimum reflectivity and the surface color instead.

	brdf.specular = lerp(MIN_REFLECTIVITY, surface.color, surface.metallic);

Roughness

Roughness is the opposite of smoothness, so we can simply take one minus smoothness. The Core RP Library has a function that does this, named PerceptualSmoothnessToPerceptualRoughness. We’ll use this function, to make clear that the smoothness and thus also the roughness are defined as perceptual. We can convert to the actual roughness value via the PerceptualRoughnessToRoughness function, which squares the perceptual value. This matches the Disney lighting model. It’s done this way because adjusting the perceptual version is more intuitive when editing materials.
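Combined, the conversion from perceptual smoothness $s$ to actual roughness $r$ is:

$$r = (1 - s)^2$$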

	float perceptualRoughness =
		PerceptualSmoothnessToPerceptualRoughness(surface.smoothness);
	brdf.roughness = PerceptualRoughnessToRoughness(perceptualRoughness);

These functions are defined in the CommonMaterial HLSL file of the Core RP Library. Include it in our Common file after including the core’s Common.

#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/CommonMaterial.hlsl"

#include "UnityInput.hlsl"

View Direction

To determine how well the camera is aligned with the perfect reflection direction we’ll need to know the camera’s position. Unity makes this data available via float3 _WorldSpaceCameraPos, so add it to UnityInput.

float3 _WorldSpaceCameraPos;
To get the view direction—the direction from surface to camera—in LitPassFragment we need to add the world-space surface position to Varyings.

struct Varyings {
	float4 positionCS : SV_POSITION;
	float3 positionWS : VAR_POSITION;
	…
};

Varyings LitPassVertex (Attributes input) {
	…
	output.positionWS = TransformObjectToWorld(input.positionOS);
	output.positionCS = TransformWorldToHClip(output.positionWS);
	…
}

We’ll consider the view direction to be part of the surface data, so add it to Surface.

struct Surface {
	float3 normal;
	float3 viewDirection;
	float3 color;
	float alpha;
	float metallic;
	float smoothness;
};

Assign it in LitPassFragment. It’s equal to the camera position minus the fragment position, normalized.

	surface.normal = normalize(input.normalWS);
	surface.viewDirection = normalize(_WorldSpaceCameraPos - input.positionWS);

Specular Strength

The strength of the specular reflection that we observe depends on how well our view direction matches the perfect reflection direction. We’ll use the same formula that’s used in the Universal RP, which is a variant of the Minimalist CookTorrance BRDF. The formula contains a few squares, so let’s add a convenient Square function to Common first.

float Square (float v) {
	return v * v;
}

Then add a SpecularStrength function to BRDF with a surface, BRDF data, and light as parameters. It should calculate `r² / (d² max(0.1, (L·H)²) n)`, where `r` is the roughness and all dot products are saturated. Furthermore, `d = (N·H)² (r² − 1) + 1.0001`, `N` is the surface normal, `L` is the light direction, and `H = L + V` normalized, which is the halfway vector between the light and view directions. Use the SafeNormalize function to normalize that vector, to avoid a division by zero in case the vectors are opposed. Finally, `n = 4r + 2` is a normalization term.
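In display form, the complete specular strength formula is:

$$\mathrm{spec} = \frac{r^2}{d^2\,\max\bigl(0.1, (L \cdot H)^2\bigr)\,n} \quad\text{with}\quad d = (N \cdot H)^2\,(r^2 - 1) + 1.0001 \quad\text{and}\quad n = 4r + 2$$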

float SpecularStrength (Surface surface, BRDF brdf, Light light) {
	float3 h = SafeNormalize(light.direction + surface.viewDirection);
	float nh2 = Square(saturate(dot(surface.normal, h)));
	float lh2 = Square(saturate(dot(light.direction, h)));
	float r2 = Square(brdf.roughness);
	float d2 = Square(nh2 * (r2 - 1.0) + 1.0001);
	float normalization = brdf.roughness * 4.0 + 2.0;
	return r2 / (d2 * max(0.1, lh2) * normalization);
}

Next, add a DirectBRDF that returns the color obtained via direct lighting, given a surface, BRDF, and light. The result is the specular color modulated by the specular strength, plus the diffuse color.

float3 DirectBRDF (Surface surface, BRDF brdf, Light light) {
	return SpecularStrength(surface, brdf, light) * brdf.specular + brdf.diffuse;
}

GetLighting then has to multiply the incoming light by the result of that function.

float3 GetLighting (Surface surface, BRDF brdf, Light light) {
	return IncomingLight(surface, light) * DirectBRDF(surface, brdf, light);
}
Smoothness top to bottom 0, 0.25, 0.5, 0.75, and 0.95.

We now get specular reflections, which add highlights to our surfaces. For perfectly rough surfaces the highlight mimics diffuse reflection. Smoother surfaces get a more focused highlight. A perfectly smooth surface gets an infinitesimal highlight, which we cannot see. Some scattering is needed to make it visible.

Due to energy conservation highlights can get very bright for smooth surfaces, because most of the light arriving at the surface fragment gets focused. Thus we end up seeing far more light than would be possible due to diffuse reflection where the highlight is visible. You can verify this by scaling down the final rendered color a lot.
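For example, you could temporarily replace the final return statement of LitPassFragment with something like this throwaway debug tweak:

	// Temporary debug output: darken everything drastically so the relative
	// brightness of highlights and diffuse reflection can be compared.
	return float4(color / 100.0, surface.alpha);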

Final color divided by 100.

You can also verify that metals affect the color of specular reflections while nonmetals don’t, by using a base color other than white.

Blue base color.

We now have functional direct lighting that is believable, although currently the results are too dark—especially for metals—because we don’t support environmental reflections yet. A uniform black environment would be more realistic than the default skybox at this point, but that makes our objects harder to see. Adding more lights works as well.

Four lights.

Mesh Ball

Let’s also add support for varying metallic and smoothness properties to MeshBall. This requires adding two float arrays.

	static int
		baseColorId = Shader.PropertyToID("_BaseColor"),
		metallicId = Shader.PropertyToID("_Metallic"),
		smoothnessId = Shader.PropertyToID("_Smoothness");

	…
	float[]
		metallic = new float[1023],
		smoothness = new float[1023];

	…

	void Update () {
		if (block == null) {
			block = new MaterialPropertyBlock();
			block.SetVectorArray(baseColorId, baseColors);
			block.SetFloatArray(metallicId, metallic);
			block.SetFloatArray(smoothnessId, smoothness);
		}
		Graphics.DrawMeshInstanced(mesh, 0, material, matrices, 1023, block);
	}

Let’s make 25% of the instances metallic and vary smoothness from 0.05 to 0.95 in Awake.

			baseColors[i] =
				new Vector4(
					Random.value, Random.value, Random.value,
					Random.Range(0.5f, 1f)
				);
			metallic[i] = Random.value < 0.25f ? 1f : 0f;
			smoothness[i] = Random.Range(0.05f, 0.95f);

Then make the mesh ball use a lit material.

Lit mesh ball.

Shader GUI

We now support multiple rendering modes, each requiring specific settings. To make it easier to switch between modes let’s add some buttons to our material inspector to apply preset configurations.

Custom Shader GUI

Add a CustomEditor "CustomShaderGUI" statement to the main block of the Lit shader.

Shader "Custom RP/Lit" {
	…

	CustomEditor "CustomShaderGUI"
}

That instructs the Unity editor to use an instance of the CustomShaderGUI class to draw the inspector for materials that use the Lit shader. Create a script asset for that class and put it in a new Custom RP / Editor folder.

We’ll need to use the UnityEditor, UnityEngine, and UnityEngine.Rendering namespaces. The class has to extend ShaderGUI and override the public OnGUI method, which has a MaterialEditor and a MaterialProperty array parameter. Have it invoke the base method, so we end up with the default inspector.

using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

public class CustomShaderGUI : ShaderGUI {

	public override void OnGUI (
		MaterialEditor materialEditor, MaterialProperty[] properties
	) {
		base.OnGUI(materialEditor, properties);
	}
}

Setting Properties and Keywords

To do our work we’ll need access to three things, which we’ll store in fields. First is the material editor, which is the underlying editor object responsible for showing and editing materials. Second is a reference to the materials being edited, which we can retrieve via the targets property of the editor. It’s defined as an Object array because targets is a property of the general-purpose Editor class. Third is the array of properties that can be edited.

	MaterialEditor editor;
	Object[] materials;
	MaterialProperty[] properties;

	public override void OnGUI (
		MaterialEditor materialEditor, MaterialProperty[] properties
	) {
		base.OnGUI(materialEditor, properties);
		editor = materialEditor;
		materials = materialEditor.targets;
		this.properties = properties;
	}

To set a property we first have to find it in the array, for which we can use the ShaderGUI.FindProperty method, passing it a name and the property array. We can then adjust its value by assigning to its floatValue property. Encapsulate this in a convenient SetProperty method with a name and a value parameter.

	void SetProperty (string name, float value) {
		FindProperty(name, properties).floatValue = value;
	}

Setting a keyword is a bit more involved. We’ll create a SetKeyword method for this, with a name and a boolean parameter to indicate whether the keyword should be enabled or disabled. We have to invoke either EnableKeyword or DisableKeyword on all materials, passing them the keyword name.

	void SetKeyword (string keyword, bool enabled) {
		if (enabled) {
			foreach (Material m in materials) {
				m.EnableKeyword(keyword);
			}
		}
		else {
			foreach (Material m in materials) {
				m.DisableKeyword(keyword);
			}
		}
	}

Let’s also create a SetProperty variant that toggles a property–keyword combination.

	void SetProperty (string name, string keyword, bool value) {
		SetProperty(name, value ? 1f : 0f);
		SetKeyword(keyword, value);
	}

Now we can define simple Clipping, PremultiplyAlpha, SrcBlend, DstBlend, and ZWrite setter properties.

	bool Clipping {
		set => SetProperty("_Clipping", "_CLIPPING", value);
	}

	bool PremultiplyAlpha {
		set => SetProperty("_PremultiplyAlpha", "_PREMULTIPLY_ALPHA", value);
	}

	BlendMode SrcBlend {
		set => SetProperty("_SrcBlend", (float)value);
	}

	BlendMode DstBlend {
		set => SetProperty("_DstBlend", (float)value);
	}

	bool ZWrite {
		set => SetProperty("_ZWrite", value ? 1f : 0f);
	}

Finally, the render queue is set by assigning to the RenderQueue property of all materials. We can use the RenderQueue enum for this.

	RenderQueue RenderQueue {
		set {
			foreach (Material m in materials) {
				m.renderQueue = (int)value;
			}
		}
	}

Preset Buttons

A button can be created via the GUILayout.Button method, passing it a label, which will be a preset’s name. If the method returns true then it was pressed. Before applying the preset we should register an undo step with the editor, which can be done by invoking RegisterPropertyChangeUndo on it with the name. As this code is the same for all presets, put it in a PresetButton method that returns whether the preset should be applied.

	bool PresetButton (string name) {
		if (GUILayout.Button(name)) {
			editor.RegisterPropertyChangeUndo(name);
			return true;
		}
		return false;
	}

We’ll create a separate method per preset, beginning with the default Opaque mode. Have it set the properties appropriately when activated.

	void OpaquePreset () {
		if (PresetButton("Opaque")) {
			Clipping = false;
			PremultiplyAlpha = false;
			SrcBlend = BlendMode.One;
			DstBlend = BlendMode.Zero;
			ZWrite = true;
			RenderQueue = RenderQueue.Geometry;
		}
	}

The second preset is Clip, which is a copy of Opaque with clipping turned on and the queue set to AlphaTest.

	void ClipPreset () {
		if (PresetButton("Clip")) {
			Clipping = true;
			PremultiplyAlpha = false;
			SrcBlend = BlendMode.One;
			DstBlend = BlendMode.Zero;
			ZWrite = true;
			RenderQueue = RenderQueue.AlphaTest;
		}
	}

The third preset is for standard transparency, which fades out objects so we’ll name it Fade. It’s another copy of Opaque, with adjusted blend modes and queue, plus no depth writing.

	void FadePreset () {
		if (PresetButton("Fade")) {
			Clipping = false;
			PremultiplyAlpha = false;
			SrcBlend = BlendMode.SrcAlpha;
			DstBlend = BlendMode.OneMinusSrcAlpha;
			ZWrite = false;
			RenderQueue = RenderQueue.Transparent;
		}
	}

The fourth preset is a variant of Fade that applies premultiplied alpha blending. We’ll name it Transparent as it’s for semitransparent surfaces with correct lighting.

	void TransparentPreset () {
		if (PresetButton("Transparent")) {
			Clipping = false;
			PremultiplyAlpha = true;
			SrcBlend = BlendMode.One;
			DstBlend = BlendMode.OneMinusSrcAlpha;
			ZWrite = false;
			RenderQueue = RenderQueue.Transparent;
		}
	}

Invoke the preset methods at the end of OnGUI so they show up below the default inspector.

	public override void OnGUI (
		MaterialEditor materialEditor, MaterialProperty[] properties
	) {
		…

		OpaquePreset();
		ClipPreset();
		FadePreset();
		TransparentPreset();
	}
Preset buttons.

The preset buttons won’t be used often, so let’s put them inside a foldout that is collapsed by default. This is done by invoking EditorGUILayout.Foldout with the current foldout state, label, and true to indicate that clicking it should toggle its state. It returns the new foldout state, which we should store in a field. Only draw the buttons when the foldout is open.

	bool showPresets;

	…

	public override void OnGUI (
		MaterialEditor materialEditor, MaterialProperty[] properties
	) {
		…

		EditorGUILayout.Space();
		showPresets = EditorGUILayout.Foldout(showPresets, "Presets", true);
		if (showPresets) {
			OpaquePreset();
			ClipPreset();
			FadePreset();
			TransparentPreset();
		}
	}
Preset foldout.

Presets for Unlit

We can also use the custom shader GUI for our Unlit shader.

Shader "Custom RP/Unlit" {
	…

	CustomEditor "CustomShaderGUI"
}

However, activating a preset will result in an error, because we’re trying to set a property that the shader doesn’t have. We can guard against that by adjusting SetProperty. Have it invoke FindProperty with false as an additional argument, indicating that it shouldn’t log an error if the property isn’t found. The result will then be null, so only set the value if that’s not the case. Also return whether the property exists.

	bool SetProperty (string name, float value) {
		MaterialProperty property = FindProperty(name, properties, false);
		if (property != null) {
			property.floatValue = value;
			return true;
		}
		return false;
	}

Then adjust the keyword version of SetProperty so it only sets the keyword if the relevant property exists.

	void SetProperty (string name, string keyword, bool value) {
		if (SetProperty(name, value ? 1f : 0f)) {
			SetKeyword(keyword, value);
		}
	}

No Transparency

Now the presets also work for materials that use the Unlit shader, although the Transparent mode doesn’t make much sense in this case, as the relevant property doesn’t exist. Let’s hide this preset when it’s not relevant.

First, add a HasProperty method that returns whether a property exists.

	bool HasProperty (string name) =>
		FindProperty(name, properties, false) != null;

Second, create a convenient property to check whether _PremultiplyAlpha exists.

	bool HasPremultiplyAlpha => HasProperty("_PremultiplyAlpha");

Finally, make everything of the Transparent preset conditional on that property, by checking it first in TransparentPreset.

		if (HasPremultiplyAlpha && PresetButton("Transparent")) { … }
Unlit materials lack Transparent preset.

At this point we have functional lighting, but only for the direct illumination of directional lights. There is no environmental lighting, there are no shadows, and no other light types are supported. We’ll add all that in the future. Want to know when the next tutorial gets released? Keep tabs on my Patreon page!
