Literature

In this post I list and describe important articles, papers and algorithms relevant to unified volumetric rendering with clustered shading.

Volumetric Fog [wronski]

Presented at SIGGRAPH 2014, Volumetric Fog describes a compute-shader-based, ray-marching volumetric fog algorithm.

=> They decouple the important scattering steps: density estimation, in-scattering calculation, ray marching and effect application.

=> Compute shaders and UAVs are used for density estimation, lighting calculations and ray marching.

=> The voxel buffer is frustum aligned with an exponential depth slice distribution. They ended up using a 160 x 90 x 64 froxel buffer (see the sketch after this list).

=> They use quadrilinear filtering when sampling volumetric data to avoid seeing individual volume texture texels.

=> The shadow mapping technique used was exponential shadow maps (ESM), which like variance shadow maps allow prefiltering the shadow map. These produce very soft shadows, but this is often desirable since it mimics the effect of multiple scattering.

=> The density estimation was done with two simple methods: one octave of procedural Perlin noise animated by wind, and vertical attenuation.

=> “Volumetric fog can be quite heavy on both bandwidth and ALU, so it makes sense to put it in parallel with some vertex heavy pass like filling G-Buffer.”
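A minimal sketch of the exponential depth slice distribution used for such a froxel buffer, assuming a simple exponential mapping between the fog volume's near and far range (the 160 x 90 x 64 dimensions come from the talk, but this particular formula and the range constants are assumptions, not Wronski's exact mapping):

```cpp
#include <cmath>

const int   kSlices = 64;     // depth slices of the froxel buffer
const float kNear   = 0.5f;   // hypothetical near plane of the fog volume
const float kFar    = 64.0f;  // hypothetical far range of the fog volume

// Map a linear view-space depth to an exponential slice index; each slice
// covers exponentially more depth than the previous one.
float slice_from_depth(float view_z)
{
    return kSlices * std::log(view_z / kNear) / std::log(kFar / kNear);
}

// Inverse mapping, used when stepping through the buffer slice by slice.
float depth_from_slice(float slice)
{
    return kNear * std::pow(kFar / kNear, slice / kSlices);
}
```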

 

reference: http://advances.realtimerendering.com/s2014/wronski/bwronski_volumetric_fog_siggraph2014.pdf

 

Light Propagation Volumes (LPVs) [Kaplanyan]

LPV is a method for calculating first-bounce diffuse Global Illumination (GI) in real time. The idea is to use a lattice of spherical harmonics coefficients to represent the distribution of light in a scene.
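As a minimal sketch of what "spherical harmonics information" per lattice cell can mean, here is a projection of radiance arriving from a single direction into two-band SH coefficients. The basis constants are the standard real SH basis; the actual LPV injection and propagation steps are considerably more involved.

```cpp
struct SH4 { float c[4]; }; // two-band (L0 + L1) spherical harmonics

// Project intensity I arriving from unit direction (dx, dy, dz) into four
// SH coefficients for one lattice cell.
SH4 project_to_sh(float dx, float dy, float dz, float I)
{
    SH4 sh;
    sh.c[0] = 0.282095f * I;      // Y_0^0 (constant band)
    sh.c[1] = 0.488603f * dy * I; // Y_1^-1
    sh.c[2] = 0.488603f * dz * I; // Y_1^0
    sh.c[3] = 0.488603f * dx * I; // Y_1^1
    return sh;
}
```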

reference: http://www.crytek.com/download/Light_Propagation_Volumes.pdf

 

A Novel Sampling Algorithm for Fast and Stable Real-Time Volume Rendering [Bowles H., Zimmermann D.]

In this paper, Huw Bowles and Daniel Zimmermann explore new adaptive sampling techniques for volumetric ray marching.

=> First, they focus on adapting the sampling layout to camera motion. When moving the camera forward, strafing, or rotating the view, their method adapts the "layout" of the ray-march samples so they remain stable instead of swimming and aliasing.

=> Secondly, they propose methods to adaptively place the ray-march samples along a ray so as to increase visual quality close to the viewer while keeping a relatively low sample count.
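One simple member of this family of ideas, sketched under assumptions (it is not the paper's exact algorithm): pin the ray-march sampling lattice to world space along the ray direction, so that moving the camera forward does not make samples swim.

```cpp
#include <cmath>

// Offset for the first ray-march sample so that sample positions stay
// fixed in world space under forward camera motion: the sampling lattice
// is snapped to multiples of step_size along the (unit) ray direction.
float stabilized_start_offset(const float ray_origin[3],
                              const float ray_dir[3],
                              float step_size)
{
    float along = ray_origin[0] * ray_dir[0]
                + ray_origin[1] * ray_dir[1]
                + ray_origin[2] * ray_dir[2];
    float m = std::fmod(along, step_size);
    if (m < 0.0f)
        m += step_size; // fmod keeps the sign of `along`
    return step_size - m;
}
```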

reference: https://github.com/huwb/volsample

 

A Practical Analytic Single Scattering Model for Real Time Rendering [Ramamoorthi et al]

What is interesting about this paper is that they derive a compact single scattering model by analytically integrating the single scattering equations. This is essentially what the very simplest fog models do, but with a focus on the effects of "airlight": the glow around light sources or lit surfaces due to participating media.
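The integral in question looks roughly as follows (generic notation, not the paper's exact symbols), for a view ray of length d and a point light of intensity I_0 at distance d_l(x) from the ray point at parameter x:

L_{air} = \int_0^{d} \sigma_s \, p(\theta(x)) \, \frac{I_0 \, e^{-\sigma_t d_l(x)}}{d_l(x)^2} \, e^{-\sigma_t x} \, dx

where p is the phase function and \sigma_s, \sigma_t are the scattering and extinction coefficients. The paper reduces this to evaluations of a special function that can be precomputed in a small 2D table.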

reference: https://cseweb.ucsd.edu/~ravir/papers/singlescat/scattering.pdf

 

Battlefield 4, Unified Volumetric Rendering

DICE outlines their volumetrics approach in a SIGGRAPH 2015 talk.

=> Their approach uses a frustum aligned voxel buffer, aligned on tiled-rendering tiles since they are using a tiled deferred pipeline.

=> Participating media properties are voxelized into the V-buffer by means of depth fog, height fog and local fog volumes (with or without density textures), adding scattering, emissive and extinction terms and averaging the phase lobe.

=> The light in-scattering term is evaluated from local (culled) lights plus cascaded sun shadow maps, with indirect lighting from SH probes at the volume centers.

=> “no scattering? skip local lights”

=> They jitter samples temporally using a Halton sequence, offsetting all samples along the view ray by the same amount. Reprojection blends with an exponential moving average using a 5% blend factor (sketched below).
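A minimal sketch of that blend (the reprojection fetch and the validity check for disoccluded froxels are assumed details):

```cpp
// Exponential moving average blending the reprojected history value with
// the current frame's in-scattering result, using the talk's 5% blend.
float blend_scattering(float history, float current, bool history_valid)
{
    const float kBlend = 0.05f; // take 5% of the new frame each step
    return history_valid ? history + kBlend * (current - history)
                         : current;
}
```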

reference: http://www.slideshare.net/DICEStudio/physically-based-and-unified-volumetric-rendering-in-frostbite

Killzone: Shadow Fall

Amsterdam-based Guerrilla Games emphasized how important volumetric lighting is to the look and feel of Killzone. They use a straightforward view-space ray-marching solution as part of their deferred lighting pass.

=> renders at half/quarter resolution with bilateral upsampling

Rendering a full-screen pass at full resolution was too expensive, so they render at half or quarter resolution and use bilateral upsampling.


=> uses a 128-step ray march as reference.

When evaluating their real-time algorithm for volumetric lighting, they primarily compared results to those of a 128-step ray-marched render (sketched below).
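A minimal sketch of what such a reference ray march can look like: accumulate in-scattering weighted by transmittance over 128 uniform steps. The two sampling callbacks stand in for the engine's actual media evaluation; this is the general technique, not Guerrilla's code.

```cpp
#include <cmath>

float ray_march_reference(float ray_length,
                          float (*sample_inscatter)(float t),
                          float (*sample_extinction)(float t))
{
    const int   kSteps = 128;
    const float dt     = ray_length / kSteps;
    float transmittance = 1.0f; // fraction of light still reaching the eye
    float result        = 0.0f;
    for (int i = 0; i < kSteps; ++i) {
        float t = (i + 0.5f) * dt;
        result        += transmittance * sample_inscatter(t) * dt;
        transmittance *= std::exp(-sample_extinction(t) * dt);
    }
    return result;
}
```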

 

=> particle injection is done by rendering particles into a 16-layer "scattering amount buffer".

The reason to include particle injection, they argue, is that artists have a lot of control over these systems and they react to physics, player movement and wind.

[Slide: GDC Vault – Taking Killzone Shadow Fall Image Quality into the Next Generation]

The "injection" is done into a scattering amount buffer at 1/8th resolution with 16 depth slices distributed quadratically.

=> transparency is handled with “volume light intensity buffers”

They solve transparency compositing by writing the ray-march steps to a volume light intensity buffer, which they sample when rendering transparent geometry. The value is used to blend the volumetric result with the transparent object, i.e. "how much stuff is in the way here".

 

=> simple trick: don't place ray-march samples at the very beginning and end of the volume

 

reference: http://www.guerrilla-games.com/presentations/Valient_Killzone_Shadow_Fall_Demo_Postmortem.pdf

reference: http://www.slideshare.net/guerrillagames/killzone-shadow-fall-gdc2014-valient-killzonegraphics

 

 

Lords of the Fallen

Germany-based DECK13 Interactive uses volumetric lighting in their Souls-like title Lords of the Fallen.

=> straight forward view space ray-marching approach

Their implementation uses only vertex and pixel shaders instead of more complex compute-shader ray marching. The ray marching is performed in view space, as a full-screen pass for the sun and in tighter passes for local lights.

=> light bounds intersections

For box and point lights they clamp the ray march to the attenuation distance of the lights.

They only ray-march inside the light bounds/polygons, so the marched region differs per light type (a sketch of the sphere case follows).
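For the point light case, a hedged sketch of clamping the marched segment to the light's attenuation sphere via a ray/sphere intersection (not DECK13's actual code):

```cpp
#include <algorithm>
#include <cmath>

// Clamp the marched segment [t0, t1] to a point light's attenuation
// sphere. Returns false if the ray misses the sphere entirely.
bool clamp_to_light_sphere(const float ro[3], const float rd[3], // unit rd
                           const float center[3], float radius,
                           float& t0, float& t1)
{
    float oc[3] = { ro[0] - center[0], ro[1] - center[1], ro[2] - center[2] };
    float b = oc[0] * rd[0] + oc[1] * rd[1] + oc[2] * rd[2];
    float c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f)
        return false;
    float s = std::sqrt(disc);
    t0 = std::max(t0, -b - s); // entry into the sphere
    t1 = std::min(t1, -b + s); // exit out of the sphere
    return t0 < t1;
}
```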

 

=> They use Nearest Depth Upsampling from 1/2 or 1/4 resolution.

 

 

Low Resolution Rendering

Low resolution rendering is used extensively in computer graphics, for many different rendering algorithms including AO, transparency and volumetrics. It involves downsampling textures, rendering to lower-resolution render targets and then upsampling to full resolution.

NVIDIA Fast Rendering of Opacity-Mapped Particles Using DirectX 11 Tessellation and Mixed Resolutions

In this whitepaper NVIDIA introduces the concept of Nearest Depth Upsampling. When upsampling, the one of the four low-resolution texels in the bilinear footprint whose depth value is closest to the full-resolution depth is chosen. They mention how nearest-depth filtering can introduce blocky artifacts in non-edge pixels, and thus propose switching to regular bilinear filtering there, using an edge detection test and branching.
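A minimal sketch of the selection logic, with the texture fetches abstracted away into plain arrays (hypothetical stand-ins, not NVIDIA's code):

```cpp
#include <cmath>

// Among the four low-res texels of the bilinear footprint, pick the one
// whose depth is closest to the full-res depth; if all four depths are
// within epsilon (a non-edge pixel), fall back to plain bilinear.
float upsample_nearest_depth(float hi_res_depth,
                             const float lo_depth[4],
                             const float lo_color[4],
                             float bilinear_color, // pre-filtered fetch
                             float epsilon)
{
    int   best      = 0;
    float best_diff = std::fabs(lo_depth[0] - hi_res_depth);
    bool  edge      = best_diff > epsilon;
    for (int i = 1; i < 4; ++i) {
        float diff = std::fabs(lo_depth[i] - hi_res_depth);
        if (diff < best_diff) { best_diff = diff; best = i; }
        edge = edge || (diff > epsilon);
    }
    return edge ? lo_color[best] : bilinear_color;
}
```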

 

Low Resolution Effects with Depth-Aware Upsampling

This article describes how to downsample depth for use in depth-aware upsampling.

A depth buffer cannot be downsampled by averaging, since the resulting depths would float in between the real depths in the scene, without representing any actual surface. The common approach is to use a min or max filter when downsampling, usually keeping the depth nearest to the camera on the assumption that it is more important.

We want to maximise the chance that the low-resolution depth buffer contains a depth that represents the real (hi-res) depth, so in the downsampling step each texel should choose a depth different from those of its neighbors. They propose alternating min and max filters in a checkered 2×2 quad pattern, meaning the texels considered during upsampling will represent at least two of the potential surfaces (see the sketch below).

They mention that signals other than depth discontinuities, such as normals, can be used to determine whether samples belong to the same surface.
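A minimal sketch of the checkered min-max downsample (the 2×2 source fetch is abstracted into an array):

```cpp
#include <algorithm>

// Depth downsample alternating min and max in a checkerboard pattern, so
// that neighboring low-res texels represent at least two of the candidate
// surfaces. `quad` holds the four hi-res depths behind low-res texel (x, y).
float downsample_depth_checkered(const float quad[4], int x, int y)
{
    float mn = std::min(std::min(quad[0], quad[1]),
                        std::min(quad[2], quad[3]));
    float mx = std::max(std::max(quad[0], quad[1]),
                        std::max(quad[2], quad[3]));
    return (((x + y) & 1) == 0) ? mn : mx; // checkerboard selection
}
```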

reference: http://c0de517e.blogspot.ca/2016/02/downsampled-effects-with-depth-aware.html

 

NVIDIA Modern Real-Time Rendering Techniques

This presentation covers low-resolution rendering, specifically for low-res particles.

=> For Batman: Arkham Asylum, the nearest-depth filter fetches the 2×2 nearest low-res depths, compares them to the hi-res depth and keeps track of the nearest corresponding uv coordinate. The low-resolution colour sample is fetched with point filtering at that uv coordinate, but if |z_{lowres} - z_{hires}| < \epsilon for all four samples they revert to bilinear filtering at uv_{center}.

=> They use a max-filter downsample for the depth to remove z-test artifacts caused by testing against the low-res depth.

reference: http://developer.download.nvidia.com/presentations/2010/futuregameon/LouisBavoil_ModernRealTimeRenderingTechniques.pdf

 

 

Dithering

Dithering techniques are used to increase visual quality with few samples by offsetting samples spatially or temporally. The idea of interleaved sampling is to distribute evenly spaced samples over several pixels, each with a different offset. By randomising the indices inside the interleaved N×N kernel, repetitive patterns can be traded for less noticeable noise (a sketch follows).
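A generic sketch of interleaved sampling for a ray march: each pixel in an N×N block starts its march at a different, evenly spaced offset. Hashing the pixel position to randomize `index` is the noise-for-pattern trade mentioned above.

```cpp
// Per-pixel ray-march start offset from an NxN interleaved pattern.
float interleaved_offset(int px, int py, int n, float step_size)
{
    int index = (py % n) * n + (px % n);     // position inside the block
    return step_size * index / float(n * n); // evenly spaced offsets
}
```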

Lords of the Fallen

This completely view-space-based volumetric rendering approach uses interleaved sampling with randomized kernel indices and combines it with a 15-tap separable Gaussian blur to hide the artifacts.

reference: http://bglatzel.movingblocks.net/wp-content/uploads/2014/05/Volumetric-Lighting-for-Many-Lights-in-Lords-of-the-Fallen-With-Notes.pdf

 

Assassin's Creed [Wronski]

Wronski uses temporal sub-cell jittering, offsetting samples by sub-pixel amounts.

reference: http://advances.realtimerendering.com/s2014/wronski/bwronski_volumetric_fog_siggraph2014.pdf

 

 

Temporal Reprojection

With a regular rendering approach, each frame's result is thrown away when the next frame is calculated. However, frame i+1 is generally quite close to frame i, which suggests the idea of temporal reprojection. Using the camera motion, the last frame can be reprojected in screen space, allowing lookup of the corresponding pixel in the previous frame. Instead of recalculating everything, the new result can then build upon the old one. The technique is currently used in Stingray for anti-aliasing (a reprojection sketch follows).
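A hedged sketch of the reprojection step itself: transform the current pixel's reconstructed world position by the previous frame's view-projection matrix to find where it was on screen last frame (a row-vector, row-major matrix convention is assumed here):

```cpp
struct Mat4 { float m[4][4]; };

// Returns false when the point was off-screen or behind the previous
// camera, in which case the history sample cannot be used.
bool reproject_to_prev_uv(const Mat4& prev_view_proj,
                          const float world_pos[3],
                          float& u, float& v)
{
    float clip[4];
    for (int i = 0; i < 4; ++i)
        clip[i] = prev_view_proj.m[0][i] * world_pos[0]
                + prev_view_proj.m[1][i] * world_pos[1]
                + prev_view_proj.m[2][i] * world_pos[2]
                + prev_view_proj.m[3][i];
    if (clip[3] <= 0.0f)
        return false;
    u = 0.5f + 0.5f * clip[0] / clip[3]; // NDC x -> [0, 1]
    v = 0.5f - 0.5f * clip[1] / clip[3]; // NDC y -> [0, 1], flipped
    return u >= 0.0f && u <= 1.0f && v >= 0.0f && v <= 1.0f;
}
```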

Volumetric temporal reprojection is this idea applied to the in-scattering terms calculated in ray marching.

Temporal Upsampling uses reprojection when upsampling.

 

Shadow Mapping

Shadow mapping literature is interesting for three main reasons:

  • The volumetric rendering samples shadow maps for the sun and local lights
  • Several shadow mapping algorithms contain split schemes for dividing the view frustum along the z-axis
  • The shadow mapping literature addresses aliasing problems in shadow map sampling

Percentage-Closer Filtering (PCF)

A first stab at alleviating shadow map aliasing was to perform multiple shadow map comparisons and average the results (sketched below).
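A minimal four-tap sketch, with the shadow-map fetch abstracted into a callback:

```cpp
// 2x2 percentage-closer filter: do the depth comparison at four adjacent
// shadow-map texels and average the binary results, yielding fractional
// occlusion instead of a hard 0/1 shadow.
float pcf_2x2(float (*shadow_depth)(int x, int y),
              int x, int y, float receiver_depth, float bias)
{
    float sum = 0.0f;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
            sum += (shadow_depth(x + dx, y + dy) + bias < receiver_depth)
                       ? 0.0f : 1.0f;
    return sum * 0.25f; // fraction of taps that are lit
}
```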

Imperfect Shadow Mapping

reference: http://resources.mpi-inf.mpg.de/ImperfectShadowMaps/ISM.pdf

Variance Shadow Mapping & Exponential Shadow Mapping

Variance Shadow Mapping (VSM) stores the first two moments of the depth distribution and uses Chebyshev's inequality to replace the standard shadow map comparison, which makes the shadow map filterable. Exponential Shadow Mapping (ESM) builds on the same filterable-shadow-map idea but stores exp(c·z), so the shadow test becomes a product of exponentials (both tests are sketched below).
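Sketches of both occlusion tests, under the usual formulations (the clamping constants and bias details vary between implementations):

```cpp
#include <algorithm>
#include <cmath>

// VSM: upper-bound the probability that a receiver at depth t is lit,
// from the filtered moments (E[z], E[z^2]) via Chebyshev's inequality.
float vsm_visibility(float mean, float mean_sq, float t)
{
    if (t <= mean)
        return 1.0f; // receiver is in front of the occluder distribution
    float variance = std::max(mean_sq - mean * mean, 1e-6f);
    float d = t - mean;
    return variance / (variance + d * d);
}

// ESM: the filtered map stores exp(c * z_occluder); the test becomes a
// product of exponentials. `c` controls the falloff sharpness.
float esm_visibility(float exp_cz_filtered, float t, float c)
{
    return std::min(1.0f, exp_cz_filtered * std::exp(-c * t));
}
```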

ESMs can be both downsampled and filtered with a separable blur, with the disadvantage that they produce light leaking. This, however, is negligible in participating media.

ESMs are used in the Volumetric Fog solution for Assassin's Creed [Wronski]. They highlight how the CSMs contain too much detail: the shadowing is above the volume's Nyquist frequency. Their first solution was a naive 32-tap PCF low-pass filter, but this was too expensive.

PS: memory problems on consoles?

reference: http://www.cad.zju.edu.cn/home/jqfeng/papers/Exponential%20Soft%20Shadow%20Mapping.pdf
reference: http://advances.realtimerendering.com/s2014/wronski/bwronski_volumetric_fog_siggraph2014.pdf

Cascaded Shadow Mapping & Parallel-Split Shadow Maps (PSSMs)

CSM and PSSMs are common shadow mapping techniques for larger scenes: several shadow maps are laid out in a cascade and sampled depending on distance, with lower-resolution shadow maps used for more distant objects.

The techniques use a practical view-frustum z split scheme with a control parameter, \lambda, that interpolates between a theoretically optimal logarithmic split scheme, C_i^{log}, and a uniform split scheme, C_i^{uni} (see below).
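For n and f the near and far planes and m the number of splits, the scheme is:

C_i^{log} = n \left( \frac{f}{n} \right)^{i/m}, \qquad C_i^{uni} = n + (f - n) \frac{i}{m}, \qquad C_i = \lambda \, C_i^{log} + (1 - \lambda) \, C_i^{uni}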

reference: http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf
reference: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html
reference: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.290&rep=rep1&type=pdf


backlog:

Reverse Extruded Shadow Volumes: http://aras-p.info/texts/revext.html

Polygonal Light Volumes: http://www.cse.chalmers.se/~uffe/volumetricshadows.pdf

Geometry Shader Hierarchical Occlusion Shadow Volumes: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch11.html

Epipolar Sampling: http://groups.csail.mit.edu/graphics/mmvs/mmvs.pdf