Manual: Single-Pass Stereo rendering (Double-Wide rendering)


Single-pass stereo rendering is a feature for PC and PlayStation 4-based VR applications. It renders both the left-eye and right-eye images at the same time into one packed render texture that is twice the width of a single eye texture. Without this feature, Unity iterates through the scene graph twice, issuing two draw calls for each GameObject that has a Renderer component. With single-pass stereo rendering, Unity iterates through the scene graph only once while rendering for both eyes: the two eyes share the work required for culling and shadow computation, and there are fewer graphics command state switches because the GPU renders each GameObject in a ping-pong fashion (alternating rendering of objects between the eyes).

Single-pass stereo rendering allows the GPU to share the culling results for both eyes. The GPU only needs to iterate through all GameObjects in the scene once for culling, and then renders the GameObjects that survived the culling process.

The comparison images below show the difference between normal VR rendering and single-pass stereo rendering.

Normal VR rendering:

(Image: normal VR rendering)

Single-pass stereo VR rendering:

(Image: single-pass stereo VR rendering)

To enable this feature, open the Player settings (menu: Edit > Project Settings, then select the Player category). Navigate to the XR Settings panel, make sure the Virtual Reality Supported checkbox is ticked, then select Single Pass from the Stereo Rendering Method dropdown.

(Image: XR Settings panel with the Stereo Rendering Method dropdown)

Unity's built-in rendering features and Standard Assets support this feature. However, custom shaders and shaders downloaded from the Asset Store may need modification (for example, you may need to scale and offset screen space coordinates to access the correct half of the packed render texture) to add support for single-pass stereo rendering.

Adding single-pass stereo rendering support to shaders

The existing helper methods in UnityCG.cginc support single-pass stereo rendering. Whether or not your application is an XR application, you still have to perform transformations on vertices. For example, when you create any kind of application, the vertices enter the vertex shader stage in model space and come out in clip space. The vertex shader must output vertex coordinates in clip space. The set of vertices the shader works on usually starts in model space before the vertex shader converts them into clip space. However, for these vertices to reach clip space, the vertex shader first transforms them into world space and then into view space.

In XR, there are multiple view matrices: one for the left eye and one for the right. You can use the built-in method UnityWorldToClipPos to ensure that Unity accounts for the case where the computation must handle multiple view matrices. If you use UnityWorldToClipPos, the shader performs the transformation calculation correctly regardless of the platform your application runs on.
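For illustration, here is a minimal vertex shader sketch that transforms vertices through world space and lets UnityWorldToClipPos apply the view and projection matrices for the eye currently being rendered (the struct and field names here are ordinary Cg conventions, not part of the manual):

#include "UnityCG.cginc"

struct appdata { float4 vertex : POSITION; };
struct v2f    { float4 pos : SV_POSITION; };

v2f vert (appdata v)
{
    v2f o;
    // Model space -> world space with the model matrix.
    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    // World space -> clip space; UnityWorldToClipPos applies the
    // view matrix of whichever eye is being rendered, so the same
    // shader works in mono and in single-pass stereo.
    o.pos = UnityWorldToClipPos(worldPos);
    return o;
}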

UnityCG.cginc also contains the following helper methods that you can use to write stereo-aware shaders:

Property: UnityStereoScreenSpaceUVAdjust(uv, sb)
Parameters:
  uv: UV texture coordinates. Either a float2 for a standard UV, or a float4 for a packed pair of two UVs.
  sb: A float4 containing a 2D scale and a 2D bias that the shader applies to the UV, with the scale in xy and the bias in zw.
Description: If UNITY_SINGLE_PASS_STEREO is defined, this returns the result of applying the scale and bias in sb to the texture coordinates in uv. Otherwise, it returns the texture coordinates unmodified. Use this to apply a per-eye scale and bias only when in single-pass stereo rendering mode.

Property: UnityStereoTransformScreenSpaceTex(uv)
Parameters:
  uv: UV texture coordinates. Either a float2 for a standard UV, or a float4 for a packed pair of two UVs.
Description: If UNITY_SINGLE_PASS_STEREO is defined, this returns the result of applying the current eye's scale and bias to the texture coordinates in uv. Otherwise, it returns the texture coordinates unmodified.

Property: UnityStereoClamp(uv, sb)
Parameters:
  uv: UV texture coordinates. Either a float2 for a standard UV, or a float4 for a packed pair of two UVs.
  sb: A float4 containing a 2D scale and a 2D bias that the shader applies to the UV, with the scale in xy and the bias in zw.
Description: If UNITY_SINGLE_PASS_STEREO is defined, this returns the result of clamping the x value of the texture coordinates in uv using the width and offset provided by sb. Otherwise, it returns the texture coordinates unmodified. Use this to apply per-eye clamping in single-pass stereo rendering mode, to avoid color bleeding between the eyes (see the sketch after this table).
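As a quick illustration of UnityStereoClamp, the fragment shader sketch below offsets a UV horizontally (as a blur-style tap might) and then clamps it so the sample cannot cross into the other eye's half of the packed texture. This is a minimal sketch; the 0.01 offset and the reuse of the _MainTex_ST scale-and-bias are assumptions for illustration:

sampler2D _MainTex;
half4 _MainTex_ST;

fixed4 frag (v2f_img i) : SV_Target
{
    // Hypothetical horizontal tap that could leak into the other eye.
    float2 uv = i.uv + float2(0.01, 0.0);
    // Clamp x to the current eye's half of the packed render texture;
    // outside single-pass stereo this returns uv unmodified.
    uv = UnityStereoClamp(uv, _MainTex_ST);
    return tex2D(_MainTex, uv);
}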

Shaders expose the built-in constant unity_StereoEyeIndex, so you can perform eye-dependent calculations. The value of unity_StereoEyeIndex is 0 for the left-eye render and 1 for the right-eye render.

Here is an example from UnityCG.cginc that demonstrates how you can use unity_StereoEyeIndex to modify screen space coordinates:

float2 TransformStereoScreenSpaceTex (float2 uv, float w)
{
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    return uv.xy * scaleOffset.xy + scaleOffset.zw * w;
}
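You can also read unity_StereoEyeIndex directly. For example, a debugging sketch (an illustration, not from the manual) that tints each eye differently to verify which eye a fragment belongs to might look like this:

sampler2D _MainTex;

fixed4 frag (v2f_img i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    // Tint the left eye (index 0) red and the right eye (index 1)
    // green, purely as a visual sanity check during development.
    col.rgb *= (unity_StereoEyeIndex == 0) ? fixed3(1.0, 0.5, 0.5)
                                           : fixed3(0.5, 1.0, 0.5);
    return col;
}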

In most cases, you don't need to modify your shaders. However, there are situations where you may need to sample a monoscopic texture as a source for single-pass stereo rendering (for example, when creating a noise or film-grain effect for a full-screen image, where the source image should be identical for both eyes rather than a stereoscopic image). In these situations, use ComputeNonStereoScreenPos() instead of ComputeScreenPos() to calculate locations from the full source texture.
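A minimal sketch of that swap, assuming an ordinary image-effect vertex shader (the struct layout is illustrative):

#include "UnityCG.cginc"

struct v2f
{
    float4 pos       : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // ComputeScreenPos would map into the current eye's half of a
    // packed texture; ComputeNonStereoScreenPos keeps full-texture
    // coordinates, so a mono source (e.g. a noise texture) samples
    // identically for both eyes.
    o.screenPos = ComputeNonStereoScreenPos(o.pos);
    return o;
}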

Post-processing effects

Post-processing effects require a little extra work to support single-pass stereo rendering. Each post-processing effect runs once on the packed render texture (which contains both the left-eye and right-eye images), but all draw commands executed during post-processing run twice: once on the left-eye half of the destination render texture, and once on the right-eye half.

Post-processing effects do not automatically detect single-pass stereo rendering, so you must adjust any reads from the packed stereo render texture to sample only the half that corresponds to the eye currently being rendered. There are two ways to do this, depending on how the post-processing effect is rendered:

  • Using Graphics.Blit()
  • Mesh-based drawing

Without the adjustments described above, each draw command reads the entire source render texture (including both the left-eye and right-eye views) and outputs the full pair of images to both the left and right halves of the destination render texture, which incorrectly duplicates the source image in each eye.

This happens when you use Graphics.Blit or a full-screen polygon with a texture map to draw a post-processing effect. Both methods reference the full output of the previous post-processing effect in the chain. When they reference an area within a packed stereo render texture, they reference the full packed render texture rather than just the relevant half.

Graphics.Blit()

Post-processing effects rendered with Blit() do not automatically reference the correct part of packed stereo render textures. By default, they reference the entire texture, which incorrectly stretches the post-processing effect across both eyes.

For single-pass stereo rendering with Blit(), texture samplers in shaders have an additional automatically calculated variable that references the correct half of a packed stereo render texture, depending on which eye is being drawn. The variable contains the scale and offset values that allow you to transform your target coordinates to the correct location.

To access this variable, declare a half4 in your shader with the same name as your sampler, plus the suffix _ST (see the code sample below). To adjust the UV coordinates, pass your _ST variable as the scale-and-bias argument to UnityStereoScreenSpaceUVAdjust(uv, scaleAndBias). This method compiles to nothing in non-single-pass stereo builds, which means shaders modified to support this mode remain compatible with builds that don't use it.

The following examples show what you need to change in your fragment shader code to support single-pass stereo rendering.

Without single-pass stereo rendering:

uniform sampler2D _MainTex;

fixed4 frag (v2f_img i) : SV_Target
{
    fixed4 myTex = tex2D(_MainTex, i.uv);
    ...
}

With single-pass stereo rendering:

uniform sampler2D _MainTex;
half4 _MainTex_ST;

fixed4 frag (v2f_img i) : SV_Target
{
    fixed4 myTex = tex2D(_MainTex, UnityStereoScreenSpaceUVAdjust(i.uv, _MainTex_ST));
    ...
}

Mesh-based drawing

Rendering post-processing effects with meshes (for example, drawing a quad in immediate mode using the low-level graphics API) also requires you to adjust the UV coordinates for each eye. To adjust your coordinates in these circumstances, use UnityStereoTransformScreenSpaceTex(uv). This method behaves correctly for packed stereo render textures in single-pass stereo rendering mode, and compiles to a pass-through for unpacked render textures when single-pass stereo rendering mode is disabled. However, if you want to use one shader for both packed and unpacked textures in the same mode, you need two separate shaders.
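For example, a minimal fragment shader sketch for such a mesh-drawn effect (assuming the usual v2f_img input from UnityCG.cginc) could remap its UVs like this:

#include "UnityCG.cginc"

sampler2D _MainTex;

fixed4 frag (v2f_img i) : SV_Target
{
    // Remap full-texture UVs to the half of the packed render texture
    // that belongs to the eye currently being rendered; outside
    // single-pass stereo this returns the UVs unchanged.
    float2 uv = UnityStereoTransformScreenSpaceTex(i.uv);
    return tex2D(_MainTex, uv);
}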

Screen Space Effects

Screen space effects are visual effects drawn on top of a previously rendered image. Examples of screen space effects include ambient occlusion, depth of field, and bloom.

For example, consider a screen space effect that requires an image to be drawn over the screen (perhaps you are drawing splattered dirt onto the screen). Instead of applying the effect to the entire output screen, which would stretch the dirt image across both eyes, apply it twice: once for each eye. In such cases, you should convert texture coordinates that refer to the whole packed render texture into coordinates that refer to each eye's half.

The following code samples show a surface shader that tiles an input texture (named _Detail) 8 x 6 times across the output image. In the second example, the shader transforms the screen space coordinates in single-pass stereo mode so they refer to the portion of the output texture that corresponds to the eye currently being rendered.

Example 1: Detail texture without single-pass stereo support

void surf (Input IN, inout SurfaceOutput o)
{
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
    screenUV *= float2(8, 6);
    o.Albedo *= tex2D(_Detail, screenUV).rgb * 2;
}

Example 2: Detail texture with single-pass stereo support

void surf (Input IN, inout SurfaceOutput o)
{
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
    #if UNITY_SINGLE_PASS_STEREO
    // If single-pass stereo mode is active, transform the
    // coordinates to get the correct output UV for the current eye.
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    screenUV = (screenUV - scaleOffset.zw) / scaleOffset.xy;
    #endif
    screenUV *= float2(8, 6);
    o.Albedo *= tex2D(_Detail, screenUV).rgb * 2;
}
  • 2018-08-16 Page published
  • New feature in Unity 2017.3
