Problem:
$I=\int_a^b f(x)\mathrm{d}x$
Solutions:
Rectangle rule | Trapezoidal rule | Simpson's rule |
---|---|---|
(images from wikimedia)
$I=\int_0^2 x^2\mathrm{d}x$
A Python function that computes the rectangle-rule quadrature of $f$ on the domain $[a,b]$ with $n$ subdivisions:

```python
def rectangle_rule(f, a, b, n):
    """Lower and upper rectangle sums of f on [a, b] with n subdivisions."""
    size = (b - a) / n
    area_inf, area_sup = 0, 0
    for i in range(n):
        area_inf = area_inf + size * f(a + i * size)        # heights at left endpoints
        area_sup = area_sup + size * f(a + (i + 1) * size)  # heights at right endpoints
    return area_inf, area_sup
```
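The trapezoidal rule from the table above is simply the average of the left and right rectangle sums; a minimal sketch (the function name is illustrative):

```python
def trapezoidal_rule(f, a, b, n):
    """Trapezoidal rule on [a, b] with n subdivisions: interior points are
    counted once, endpoints half -- the average of the two rectangle sums."""
    size = (b - a) / n
    area = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        area += f(a + i * size)
    return size * area

# Running example: integral of x^2 over [0, 2], whose exact value is 8/3.
approx = trapezoidal_rule(lambda x: x * x, 0.0, 2.0, 1000)
```

The error decreases as $O(1/n^2)$, versus $O(1/n)$ for the one-sided rectangle sums.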
Problem:
Solution:
Downsides:
Your mission, should you choose to accept it: reduce the variance of the estimator at any cost
$I=\int_0^2 x^2\mathrm{d}x$
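As a baseline for what follows (a minimal sketch, not taken from the slides), the running integral can be estimated with crude Monte Carlo from uniform samples:

```python
import random

def mc_estimate(f, a, b, n, rng):
    """Crude Monte Carlo: scaled average of f at n uniform samples in [a, b]."""
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n

# Running example: integral of x^2 over [0, 2] (exact value 8/3).
est = mc_estimate(lambda x: x * x, 0.0, 2.0, 100_000, random.Random(0))
```

The estimator is unbiased, but its standard deviation only shrinks as $O(1/\sqrt{n})$, which is exactly the variance the following techniques try to reduce.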
Problem:
Solution:
(Generic version: replace $\frac{g(x)}{c \cdot f(x)}$ with a function $h(x)$ taking values in $[0,1]$.)
Pros: allows sampling arbitrary distributions from uniform samples
Cons: can be very inefficient when rejections outnumber acceptances
(modified animation from eduscol)
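A minimal sketch of the accept/reject loop (function names and the example target density are illustrative):

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, c, rng):
    """Draw one sample from target_pdf by accept/reject against c * proposal_pdf.

    Requires target_pdf(x) <= c * proposal_pdf(x) everywhere.
    """
    while True:
        x = proposal_sample(rng)
        # Accept with probability target_pdf(x) / (c * proposal_pdf(x)).
        if rng.random() < target_pdf(x) / (c * proposal_pdf(x)):
            return x

# Example: sample the density p(x) = 2x on [0, 1] from a uniform proposal (c = 2).
rng = random.Random(0)
samples = [
    rejection_sample(lambda x: 2.0 * x, lambda r: r.random(), lambda x: 1.0, 2.0, rng)
    for _ in range(50_000)
]
# The acceptance rate is 1 / c, so here about half of the candidates are rejected.
```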
Problem:
Solution:
Downside:
(modified animation from eduscol)
Problem:
Solution:
Comparison of LDS sequences (from Martin Roberts blog):
Comparison of pointsets (from utk and sot):
Dart Throwing | BNOT | LDBN | SOT |
---|---|---|---|
Downside:
(modified animation from eduscol)
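As a concrete 1D example of a low-discrepancy construction, here is a minimal sketch of the base-2 radical inverse, which generates the van der Corput sequence (one of the classic LDS constructions in such comparisons):

```python
def radical_inverse(i, base=2):
    """Van der Corput radical inverse: mirror the base-b digits of i
    around the fractional point (e.g. 6 = 110_2 -> 0.011_2 = 0.375)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

# The first 2^m points exactly stratify [0, 1) into 2^m equal cells.
points = [radical_inverse(i) for i in range(16)]
```

Unlike independent uniform samples, consecutive prefixes of the sequence fill the domain evenly, which is what drives the faster convergence of quasi-Monte Carlo.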
Problem:
Solution:
Downside:
Problem:
Solution:
Condition on the weighting functions: $\sum_i w_i(x) = 1$. In practice, balance-heuristic weights are a provably good solution: $$ w_i(x) = \frac{c_i p_i(x)}{\sum_{j=1}^{k} c_j p_j(x)} $$
Downside:
The strategy with the highest density dominates the combination
BSDF sampling | Light sampling | MIS |
---|---|---|
(images from Yining Karl Li blog)
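The balance-heuristic combination above can be sketched as follows (the strategy interface is an assumption for illustration; note that the per-sample contribution $w_i(x)\,f(x)/(n_i\,p_i(x))$ simplifies because $p_i$ cancels):

```python
import random

def mis_estimate(f, strategies, n_per_strategy, rng):
    """Multi-sample MIS with balance-heuristic weights
    w_i(x) = n_i p_i(x) / sum_j n_j p_j(x): each sample contributes
    w_i(x) f(x) / (n_i p_i(x)) = f(x) / sum_j n_j p_j(x)."""
    estimate = 0.0
    for sample, _ in strategies:
        for _ in range(n_per_strategy):
            x = sample(rng)
            denom = sum(n_per_strategy * pdf(x) for _, pdf in strategies)
            estimate += f(x) / denom
    return estimate

# Toy example: integrate f(x) = x on [0, 1] by combining a uniform strategy
# with a linear one (pdf 2x, sampled by inverting its CDF); exact value 1/2.
uniform = (lambda r: r.random(), lambda x: 1.0)
linear = (lambda r: r.random() ** 0.5, lambda x: 2.0 * x)
est = mis_estimate(lambda x: x, [uniform, linear], 20_000, random.Random(1))
```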
Problem:
Solution:
“If we really accepted the idea that a sample from one distribution is a sample from any distribution (if appropriately weighted) then we should not be surprised at the next two results stated below.” - Trotter and Tukey - Conditional Monte Carlo For Normal Samples - 1954
Estimate both the integral and the bias correction term at once, and use a ratio of integrals: $$ I = \int_\Omega f(x)\mathrm{d}\mu(x) = \mathbb{E}\left[\frac{f(x)}{p(x)}\right] = \mathbb{E}\left[\frac{f(x)}{q(x)}\frac{q(x)}{p(x)}\right] \qquad \approx \qquad \frac{\mathbb{E}\left[\frac{f(x)}{q(x)}\right]}{\mathbb{E}\left[\frac{p(x)}{q(x)}\right]} = \frac{\mathbb{E}\left[w(x)\frac{f(x)}{p(x)}\right]}{\mathbb{E}\left[w(x)\right]} \quad \text{with} \quad w(x)=\frac{p(x)}{q(x)} $$
The associated self-normalizing Monte Carlo estimator (the ratio estimator) can be computed as a weighted mean: $$ \tilde{I} = \frac{ \frac{1}{n} \sum_{i=1}^n w(x_i)\frac{f(x_i)}{p(x_i)} }{ \frac{1}{n} \sum_{i=1}^n w(x_i)} = \frac{ \sum_{i=1}^n w(x_i)\frac{f(x_i)}{p(x_i)} }{ \sum_{i=1}^n w(x_i)} $$
The closer $p$ is to the perfect importance distribution w.r.t. the integrand, the better the estimator (it reaches zero variance if $f(x)=k\,p(x)$)
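A minimal sketch of the ratio estimator (names are illustrative), on an example where $f = k\,p$ so that the weighted mean returns the exact value regardless of the actual sampling distribution $q$:

```python
import random

def ratio_estimate(f, p, q_sample, q_pdf, n, rng):
    """Self-normalizing (ratio) estimator: samples come from q, the weights
    w = p / q re-target them to p, and the weighted mean cancels the
    unknown normalization of the weights."""
    num, den = 0.0, 0.0
    for _ in range(n):
        x = q_sample(rng)
        w = p(x) / q_pdf(x)
        num += w * f(x) / p(x)  # equals f(x) / q(x), written to mirror the formula
        den += w
    return num / den

# f(x) = x + 1 on [0, 2] with the perfect importance pdf p(x) = (x + 1) / 4,
# while the actual samples are merely uniform on [0, 2]; exact integral: 4.
est = ratio_estimate(
    f=lambda x: x + 1.0,
    p=lambda x: (x + 1.0) / 4.0,
    q_sample=lambda r: 2.0 * r.random(),
    q_pdf=lambda x: 0.5,
    n=1_000,
    rng=random.Random(2),
)
```

Because $f = 4p$ here, every sample carries the same ratio and the estimator is exact up to floating-point rounding.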
Downside:
Problem:
Solution:
Downside:
Scene (left half RIS, right half IS) | Light sources area RIS - 8 candidates | Light sources area IS |
---|---|---|
Problem:
Solution:
Downside:
The figure below shows several kernel density reconstructions over 10000 RIS samples.
The initial samples are drawn from the uniform distribution (orange) and resampled using different numbers of candidates $M$ to approximate a target Gaussian density (blue).
Increasing the number of candidates yields a better approximation of the target density, at the expense of sampling more candidates.
(from ReSTIR project page)
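A minimal 1D sketch of the resampling step (names are illustrative; uniform candidates are resampled toward a linear target density, mirroring the figure's uniform-to-Gaussian setup):

```python
import random

def ris_sample(target, source_sample, source_pdf, m, rng):
    """Resampled importance sampling: draw m candidates from the source
    distribution, then keep one chosen proportionally to target / source."""
    candidates = [source_sample(rng) for _ in range(m)]
    weights = [target(x) / source_pdf(x) for x in candidates]
    # Pick one candidate with probability proportional to its weight.
    u = rng.random() * sum(weights)
    acc = 0.0
    for x, w in zip(candidates, weights):
        acc += w
        if u <= acc:
            return x
    return candidates[-1]

# Uniform candidates on [0, 1] resampled toward the target density p(x) = 2x.
rng = random.Random(3)
samples = [
    ris_sample(lambda x: 2.0 * x, lambda r: r.random(), lambda x: 1.0, 32, rng)
    for _ in range(20_000)
]
```

With $M = 32$ candidates the resampled points already follow the target closely (the mean approaches $2/3$); the approximation error shrinks as $M$ grows, as the figure illustrates.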
Problem:
Solution:
Downside:
TODO: add images and examples 1D - 2D
“Why did offline-rendering people publish several papers on control variates and (to our knowledge) never published a ratio estimator, which is simpler and better?” - Eric Heitz - Combining analytic direct illumination and stochastic shadows - 2018
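For reference, a minimal control-variates estimator of the kind the quote contrasts with the ratio estimator (a sketch under assumed names: the analytically integrable control $g$ is subtracted from $f$ and its known integral added back):

```python
import random

def control_variate_estimate(f, g, g_integral, a, b, n, rng):
    """Control variates: estimate the integral of f - g by crude Monte Carlo,
    then add back the analytically known integral of the control g."""
    total = 0.0
    for _ in range(n):
        x = a + (b - a) * rng.random()
        total += f(x) - g(x)
    return g_integral + (b - a) * total / n

# Running example: f(x) = x^2 on [0, 2] with control g(x) = 2x - 1,
# whose integral over [0, 2] is 2; the exact result is 8/3.
est = control_variate_estimate(
    lambda x: x * x, lambda x: 2.0 * x - 1.0, 2.0, 0.0, 2.0, 50_000, random.Random(4)
)
```

The residual $f - g = (x-1)^2$ has much lower variance than $f$ itself, which is the whole point of the technique.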
Compile the Jupyter notebook to HTML:

```shell
jupyter nbconvert presentation.ipynb --no-input --to html --template full --no-prompt
```