It can therefore be decomposed into a multiresolution representation in which each component captures the information present at a given scale, analogous to the perception of the human visual system and the composition of real objects. Multiresolution fusion can be split into three steps: analysis, fusion, and synthesis.
The analysis decomposes an input image $I_0$ by spatial filtering into a multiresolution representation composed of approximation images $I^a_k$ and detail images $I^d_k$ at different levels $k$; the total number of levels is denoted by $n$. The fusion is then applied pixel-wise at each level $k$: at every pixel of $I^a_k$ and $I^d_k$, one value is chosen from the corresponding input decompositions according to a criterion, e.g., the maximum, minimum, or average. The criterion is application-dependent. The synthesis is the inverse transform of the analysis; it reconstructs an image from the multiresolution representation.
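A minimal sketch of this three-step pipeline, using a Laplacian pyramid and a maximum-magnitude criterion (both chosen here purely for illustration; `img1` and `img2` are assumed to be two registered, equally sized grayscale arrays):

```python
import cv2
import numpy as np

def analyze(img, n):
    """Decompose img into n detail images plus a final approximation."""
    approx, details = img.astype(np.float32), []
    for _ in range(n):
        down = cv2.pyrDown(approx)                        # next approximation level
        up = cv2.pyrUp(down, dstsize=approx.shape[1::-1])
        details.append(approx - up)                       # detail image at this level
        approx = down
    return approx, details

def fuse(a, b):
    """Pixel-wise criterion: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def synthesize(approx, details):
    """Inverse of analyze: collapse the pyramid back into a single image."""
    for d in reversed(details):
        approx = cv2.pyrUp(approx, dstsize=d.shape[1::-1]) + d
    return approx

# img1, img2: two registered grayscale images of equal size (assumed given).
a_top, a_det = analyze(img1, n=4)
b_top, b_det = analyze(img2, n=4)
fused = synthesize(fuse(a_top, b_top),
                   [fuse(x, y) for x, y in zip(a_det, b_det)])
```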
Many different techniques exist based on pyramid schemes or
wavelets, the latter having the advantage of not containing redundant
information. For a more complete survey, we refer to [13].
2.4. Edge-preserving Filters
Traditional pyramid schemes, like the Laplacian pyramid decomposition or Toet's contrast pyramid decomposition [14], use linear (Gaussian) filters to obtain the approximation images described in Section 2.3. Even with a small kernel, they introduce halo artifacts into the images by blurring across sharp edges. This is because the filter is applied iteratively, which at higher levels is equivalent to a single large filter kernel.
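The equivalence follows because Gaussian variances add under repeated convolution: k applications of a kernel with standard deviation σ equal one application with σ√k. A quick numerical check (assuming SciPy is available; the image size and σ values are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(256, 256)

# Four passes of a sigma = 1 Gaussian equal one pass with sigma = 2,
# since variances add under convolution: 4 * 1**2 == 2**2.
iterated = img
for _ in range(4):
    iterated = gaussian_filter(iterated, sigma=1.0)
direct = gaussian_filter(img, sigma=2.0)
print(np.abs(iterated - direct).max())  # tiny; differs only by kernel truncation
```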
To prevent halo artifacts, edge-preserving filters have become popular, notably in the tone-mapping community, the most widely used being the bilateral filter [15].
The bilateral filter performs well at separating an image into a base-layer, containing large-scale variations, and a detail-layer capturing small-scale variations such as texture. It is, however, less suited to the progressive coarsening of images [2].
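Such a base-/detail-layer split can be sketched in a few lines with OpenCV's `cv2.bilateralFilter` (the file name and all parameter values are illustrative choices, not taken from [15]):

```python
import cv2
import numpy as np

# Hypothetical input; any single-channel image normalized to [0, 1] works.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Base-layer: smooths small-scale variation while leaving strong edges
# intact (d: neighborhood diameter; sigmaColor, sigmaSpace: range and
# spatial supports -- all three values are illustrative only).
base = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=16)

# Detail-layer: the residual texture; img == base + detail by construction.
detail = img - base
```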
Farbman et al. [2] therefore describe an approach based on the weighted least squares (WLS) optimization framework, which yields a decomposition into a base-layer and detail-layers of different scales. Edge-preserving smoothing seeks an image u that is simultaneously as close as possible to the input image g and as smooth as possible everywhere, except across significant gradients in g. Formally:
$$ u = W_\lambda(g) = (I + \lambda L_g)^{-1} g \quad (2) $$

where $L_g = D_x^T A_x D_x + D_y^T A_y D_y$, with $D_x$ and $D_y$ being discrete difference operators and $A_x$, $A_y$ diagonal matrices containing smoothness weights that depend on g. Increasing λ makes the result progressively smoother [2].
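Equation (2) amounts to solving one sparse, symmetric positive-definite linear system per λ. The sketch below follows our reading of [2], with smoothness weights of the form a = (|∂g|^α + ε)^{-1}; the values of α and ε and the use of a direct solver are illustrative choices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(g, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving smoothing per Eq. (2): solve (I + lam * L_g) u = g."""
    r, c = g.shape
    n = r * c

    # Smoothness weights a = (|grad g|**alpha + eps)**-1: small across strong
    # gradients, so smoothing is suppressed there (weight form as in [2]).
    ay = 1.0 / (np.abs(np.diff(g, axis=0)) ** alpha + eps)   # shape (r-1, c)
    ax = 1.0 / (np.abs(np.diff(g, axis=1)) ** alpha + eps)   # shape (r, c-1)

    # Sparse forward-difference operators: one row per neighboring pixel pair.
    idx = np.arange(n).reshape(r, c)
    def diff_matrix(i0, i1):
        m = i0.size
        rows = np.repeat(np.arange(m), 2)
        cols = np.stack([i0, i1], axis=1).ravel()
        vals = np.tile([-1.0, 1.0], m)
        return sp.csr_matrix((vals, (rows, cols)), shape=(m, n))
    Dy = diff_matrix(idx[:-1, :].ravel(), idx[1:, :].ravel())
    Dx = diff_matrix(idx[:, :-1].ravel(), idx[:, 1:].ravel())

    # L_g = Dx^T Ax Dx + Dy^T Ay Dy with Ax, Ay as diagonal weight matrices.
    Lg = (Dx.T @ sp.diags(ax.ravel()) @ Dx
          + Dy.T @ sp.diags(ay.ravel()) @ Dy)
    A = (sp.eye(n) + lam * Lg).tocsc()
    return spsolve(A, g.ravel()).reshape(r, c)
```

For realistic image sizes an iterative solver (e.g., scipy.sparse.linalg.cg) is preferable to spsolve, and a multi-scale decomposition is obtained by calling wls_smooth with increasing λ and taking differences of successively smoother results.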
When using edge-preserving filters, the smoothed images should generally not be downsampled: since they are not band-limited, downsampling would introduce aliasing artifacts. In contrast to the wavelet approach, this representation is therefore overcomplete.