Velocity Blur

Per-pixel velocity blur, reconstructing world-space before and after positions using the depth buffer, was a technique I first noticed in GPU Gems 3: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

I suspect some people will say this method has no place in modern rendering, but I don’t have much time for comments like that – this, and other old-school techniques, are still perfectly valid and useful in certain situations and can often offer a performance benefit over more ‘correct’ techniques. In particular, as mobile devices become more powerful, rendering techniques from the PS3/360 era come around again.

[Image: blur2]

The technique works as a post-process which requires the depth buffer as input as well as the rendered frame buffer.  In the shader, the projected (screen-space) X/Y position is remapped from the 0..1 range to -1..1 clip space and combined with the depth value for that pixel.  If we then apply the inverse of the ‘view projection’ matrix for the current frame, we get back a world-space position for the on-screen pixel.  If we then apply the ‘view projection’ used by the camera in the previous frame to that world-space position, we get the screen-space position of where that point was in the last rendered frame.  Now that we have a before and after, we can compute the 2D direction the pixel travelled over the two frames and blur the current frame buffer along that direction to simulate motion blur.  What a neat trick!  Here is a snippet of HLSL that may explain it better:

float sceneDepth = depthTexture.Sample(pointSamplerClamped, IN.uv).r;
// Rebuild the clip-space position: remap UV (0..1, top-down) to -1..1.
float4 currentPos = float4(IN.uv.x*2-1, (1-IN.uv.y)*2-1, sceneDepth, 1);
// Unproject to world space with the inverse of this frame's view-projection.
float4 worldPos = mul(currentPos, g_invViewProjMatrix);
worldPos /= worldPos.w;
// Reproject with last frame's view-projection to find where the pixel was.
float4 previousPos = mul(worldPos, g_viewProjMatrixPrev);
previousPos.xy /= previousPos.w;
// Clip-space movement over one frame, scaled by a tunable blur strength.
float2 velocity = (currentPos.xy - previousPos.xy) * g_velocityBlurParams.xy;
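
That gives us the velocity; a rough sketch of the blur itself, stepping along that vector and averaging frame-buffer samples, might look like the following (sceneTexture, linearSamplerClamped and NUM_SAMPLES are my own placeholder names, not from the original shader):

static const int NUM_SAMPLES = 8;   // quality/cost trade-off

float4 colour = sceneTexture.Sample(linearSamplerClamped, IN.uv);
float2 uv = IN.uv;
// velocity is a clip-space delta (-1..1 range): halve it for a UV-space step,
// and flip Y because UVs run top-down while clip-space Y runs bottom-up.
float2 uvStep = velocity * float2(0.5, -0.5) / NUM_SAMPLES;
for (int i = 1; i < NUM_SAMPLES; ++i)
{
    uv += uvStep;
    colour += sceneTexture.Sample(linearSamplerClamped, uv);
}
colour /= NUM_SAMPLES;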

It’s far from a perfect technique though – if your ‘before’ position is off-screen then you don’t have correct information along your blur path.  It also assumes everything is blurred based on camera velocity alone: an object travelling at the same velocity as the camera should not blur at all, and likewise an object travelling towards the camera should be more blurred than this method alone produces.  The fact that the blur is computed in 2D also leads to noticeable artifacts when the camera is rotating and the angular motion is not in the velocity direction.

As I said at the start though, in some cases those restrictions can be worked around, and you get the performance benefit of this method versus a per-object velocity buffer or other expensive methods like geometry fins.  Racing games in particular benefit, as the camera is often tracking straight ahead and you can dampen the effect on rotation to lessen artifacts.  In a racing game the camera is also usually locked to the player vehicle – which is travelling at roughly the same velocity as the camera.  In this case you can mask out the pixels covered by the player car so they do not blur at all, giving a perfectly in-focus player car whilst the landscape blurs.  In DX9 and GLES2 I’ve used a render-to-texture for the masked objects and passed that as an input to the post-process, but in DX11/GLES3 it can be cheaper to use the stencil buffer to mask out these interesting objects and pass that to the post-process, as DX11 lets you bind a ‘read-only’ stencil buffer to the pixel shader whilst keeping it active in the pipeline.
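
As a concrete sketch of the DX11 path: the stencil plane of a D24S8 depth buffer can be exposed to the pixel shader through a DXGI_FORMAT_X24_TYPELESS_G8_UINT shader resource view, which surfaces the stencil value in the green channel.  The names below (stencilTexture, sceneTexture) are illustrative, and IN.pos is assumed to be SV_Position:

Texture2D<uint2> stencilTexture;   // stencil SRV over the depth buffer

// Integer textures must be read with Load rather than Sample; .g holds stencil.
uint stencilBits = stencilTexture.Load(int3(IN.pos.xy, 0)).g;
if (stencilBits & 0x40)   // player bit set (see the bitfield scheme below)
    return sceneTexture.Sample(pointSamplerClamped, IN.uv);   // leave sharp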

You can also use the stencil buffer for optimization in the post-process – essentially pixels in the sky can be considered ‘infinite depth’ because they are so far away they will never blur based on camera velocity.  So by setting the stencil buffer up as bitfields – sky as 0x80, player as 0x40, dynamic landscape as 0x20, static landscape as 0x10, etc. – you can cull all sky pixels and player object pixels by rejecting anything >= 0x40.  This will probably save 50% of the pixel shader cost on average.
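
In shader terms that rejection might look like the fragment below (reusing the illustrative stencilTexture from the sketch above), though the real saving comes from the hardware stencil test (for example read mask 0xC0, reference 0 and comparison EQUAL pass only pixels with neither of the top two bits set), which rejects pixels before the shader ever runs:

static const uint STENCIL_SKY     = 0x80;
static const uint STENCIL_PLAYER  = 0x40;
static const uint STENCIL_DYNAMIC = 0x20;
static const uint STENCIL_STATIC  = 0x10;

// Sky and player pixels never need the blur, so skip the sampling loop.
uint bits = stencilTexture.Load(int3(IN.pos.xy, 0)).g;
if (bits >= STENCIL_PLAYER)
    return sceneTexture.Sample(pointSamplerClamped, IN.uv);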

[Image: blur1]