Fourier Depth of Field (SIGGRAPH 2009)
Cyril Soler, Kartic Subr, Frédo Durand, Nicolas Holzschuch & François Sillion
[Images: “Rendered” vs. “FinalDOF renderer” comparison]
Depth of field is expensive
http://developer.amd.com/media/gpu_assets/Scheuermann_DepthOfField.pdf
Oversampling
● Typically we'll oversample a pixel, in a “circle of confusion”, near its location.
[Images: “In focus” vs. “Out of focus”]
Algorithm for estimating defocus
    P = {uniformly distributed image samples}
    NA  // number of aperture samples
    for each pixel x in P
        L ← SampleLens(NA)
        for each sample y in L
            Sum ← Sum + EstimatedRadiance(x, y)
        Image(x) = Sum / NA
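A minimal runnable Python sketch of this loop. sample_lens and estimated_radiance are hypothetical stand-ins for the renderer's lens sampler and radiance estimator (a real renderer would supply its own):

    import random

    def sample_lens(n_aperture):
        """Uniform points on a unit-radius lens, by rejection sampling."""
        points = []
        while len(points) < n_aperture:
            u, v = random.uniform(-1, 1), random.uniform(-1, 1)
            if u * u + v * v <= 1.0:
                points.append((u, v))
        return points

    def estimated_radiance(x, y):
        """Placeholder: trace a ray through image sample x and lens point y."""
        return 0.0  # a real renderer would return the ray-traced radiance

    def render_defocus(P, NA):
        """P: image samples; NA: aperture samples per pixel."""
        image = {}
        for x in P:
            total = 0.0
            for y in sample_lens(NA):
                total += estimated_radiance(x, y)
            image[x] = total / NA
        return image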
We're doing more work to display less data?
● Defocused regions contain less data, at lower frequencies.
Nyquist Limit
● “If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.”
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
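A worked instance of the theorem (my arithmetic, not from the slides): a signal band-limited to B = 100 Hz is fully determined by samples spaced

    \Delta t \le \frac{1}{2B} = \frac{1}{200}\,\mathrm{s} = 5\,\mathrm{ms}

apart, i.e. at least 200 samples per second. The same bound, applied per pixel to the local bandwidth of the image, is what drives the sampling densities later in the talk.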
Summary
● Bandwidth estimation
● Sample generation over image and lens
● Estimate radiance: rays through image and lens samples
● Reconstruct image from scattered radiance estimates

http://artis.inrialpes.fr/~Kartic.Subr/Files/Pres/FourierDOFPres.ppt
Radiance function
● Given a point, the radiance function is the intensity of light in all directions.
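In standard radiometric notation (not taken from the slides), radiance is the five-dimensional function

    L(\mathbf{x}, \omega) : \mathbb{R}^3 \times S^2 \to \mathbb{R}, \qquad [\mathrm{W \cdot sr^{-1} \cdot m^{-2}}]

and the “local light field” analyzed below restricts it to the neighborhood of one central ray, parameterized by small positional and angular offsets (x, v).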
Example of light path
Local light field propagation
[Figure: local light field parameterized around a central ray]
[Durand05]
● We're not talking about the spectrum of the light transmitted.
● We're talking about measuring the local changes in the radiance function.
Fourier Transform → frequency space
Emission
● Spatial frequencies
● Angular frequencies
Transport
● Angular shear in frequency space
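A minimal flatland statement of the shear, in the spirit of [Durand05] (the derivation is mine, not copied from the slides): free-space transport over a distance d reparameterizes rays, and substituting into the Fourier integral shears the spectrum along the angular-frequency axis:

    l_d(x, v) = l_0(x - d\,v,\; v) \;\Rightarrow\; \hat{l}_d(\Omega_x, \Omega_v) = \hat{l}_0(\Omega_x,\; \Omega_v + d\,\Omega_x)

so the shear amount grows with both the travel distance d and the spatial frequency \Omega_x.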
Occluders
● Convolution (in frequency space) with the spectrum of the occluder. (Product in ray space.)
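In symbols (a standard consequence of the convolution theorem, in the same flatland notation as above): if V(x) is the occluder's binary visibility function, then

    l'(x, v) = l(x, v)\,V(x) \;\Rightarrow\; \hat{l}'(\Omega_x, \Omega_v) = (\hat{l} \otimes \hat{V})(\Omega_x, \Omega_v)

and since V depends only on position at the occluder, its spectrum \hat{V} is concentrated on the spatial-frequency axis.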
Durand 2005 goes into (much) more depth
Durand 2005, A Frequency Analysis of Light Transport
Old algorithm for estimating defocus

    P = {uniformly distributed image samples}
    NA  // number of aperture samples
    for each pixel x in P
        L ← SampleLens(NA)
        for each sample y in L
            Sum ← Sum + EstimatedRadiance(x, y)
        Image(x) = Sum / NA
Our adaptive sampling

    (P, A) ← BandwidthEstimation()
    P = {bandwidth-dependent image samples}   // ~1 to 10% of the final samples
    A = {aperture variance estimate}
    for each pixel x in P
        L ← SampleLens(N)   // N proportional to A(x)
        for each sample y in L
            Sum ← Sum + EstimatedRadiance(x, y)
        Image(x) = Sum / N
    Reconstruct(Image, P)
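The same loop in Python, reusing sample_lens and estimated_radiance from the earlier sketch. The assumption that BandwidthEstimation returns a dict A of per-pixel aperture variance, and the hand-tuned proportionality constant k, are my stand-ins rather than the paper's interface:

    def render_adaptive(bandwidth_estimation, reconstruct, k=32):
        P, A = bandwidth_estimation()    # sparse image samples + aperture variance estimate
        image = {}
        for x in P:
            N = max(1, round(k * A[x]))  # N proportional to the aperture variance A(x)
            total = 0.0
            for y in sample_lens(N):
                total += estimated_radiance(x, y)
            image[x] = total / N
        return reconstruct(image, P)     # splat the sparse estimates into a full image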
Image-space sampling density
● W, H: image dimensions
● fh, fv: field of view
● Max energy from the angular bandwidth (use the 98th percentile to avoid outliers)
● Sampling density from the Nyquist limit
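A minimal Nyquist-based bound (my paraphrase; the paper's exact density expression also folds in the quantities above): if the local image-space bandwidths along the two axes are B_h and B_v, the 1D limit of one sample per 1/(2B) interval gives a 2D density of

    N(x, y) \ge (2 B_h)(2 B_v) = 4\,B_h B_v \quad \text{samples per unit area}

with the angular bandwidth converted to image space through the image dimensions and field of view listed above.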
Generating samples from the density
● Fast Hierarchical Importance Sampling with Blue Noise Properties [Ostromoukhov et al., SIGGRAPH 2004]
● Based on Penrose tilings
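The cited blue-noise sampler is involved; as a much simpler stand-in (with no blue-noise guarantees), this sketch draws image samples from a density map by plain rejection sampling. density is a hypothetical callable returning the relative density at integer pixel coordinates:

    import random

    def sample_from_density(density, width, height, n_samples):
        """Rejection-sample points; assumes density is not identically zero."""
        d_max = max(density(px, py) for px in range(width) for py in range(height))
        samples = []
        while len(samples) < n_samples:
            x, y = random.uniform(0, width), random.uniform(0, height)
            px, py = min(int(x), width - 1), min(int(y), height - 1)
            if random.uniform(0, d_max) < density(px, py):
                samples.append((x, y))
        return samples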
Reconstruction from sparse samples
● “weighted average of a constant number of neighboring samples”
● “adaptively varying the radius of contribution of each pixel”
● “In practice, we use a Gaussian weighting term with a variance that is proportional to the square root of the local density”
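A sketch of the quoted reconstruction: each pixel averages a constant number k of nearby samples with Gaussian weights. Setting the Gaussian width from the distance to the k-th neighbor is my simplification of the paper's density-dependent variance, and the brute-force sort stands in for a spatial index:

    import math

    def reconstruct(samples, width, height, k=8):
        """samples: list of ((x, y), radiance). Gaussian-weighted k-nearest average."""
        image = [[0.0] * width for _ in range(height)]
        for py in range(height):
            for px in range(width):
                near = sorted(samples,
                              key=lambda s: (s[0][0] - px) ** 2 + (s[0][1] - py) ** 2)[:k]
                # Radius adapts to the local sample density: use the squared
                # distance to the k-th neighbor as the Gaussian variance.
                sigma2 = max((near[-1][0][0] - px) ** 2 + (near[-1][0][1] - py) ** 2, 1e-6)
                w_sum = v_sum = 0.0
                for (sx, sy), radiance in near:
                    w = math.exp(-((sx - px) ** 2 + (sy - py) ** 2) / (2.0 * sigma2))
                    w_sum += w
                    v_sum += w * radiance
                image[py][px] = v_sum / w_sum
        return image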
Results: computation time (seconds)

                            Scene 1   Scene 2   Scene 3
    Bandwidth estimation         90        45        60
    Raytracing                 4500      3150      7401
    Image reconstruction         10         3         8
Summary of phenomena
[Image grid; rows: Reference, Aperture variance, Image-space bandwidth; columns: Defocus, Reflectance, Occlusion]
Comments?
● Final number of points is the integral of the sample density, rather than a given value.
● Only ~20 times faster?
● Does it only process the spectra after the last bounce? (Or is it gathered before?)
● Uses a conservative bandwidth estimate – lots of room to tighten bounds.