Deep image prior
Deep image prior is a type of convolutional neural network used to enhance a given image with no prior training data other than the image itself. A randomly initialized neural network is used as the prior to solve inverse problems such as noise reduction, super-resolution, and inpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.
Method
Background
Inverse problems such as noise reduction, super-resolution, and inpainting can be formulated as the optimization task [math]\displaystyle{ x^{*} = \mathop{\arg\min}_x E(x;x_0) + R(x) }[/math], where [math]\displaystyle{ x }[/math] is an image, [math]\displaystyle{ x_0 }[/math] a corrupted observation of that image, [math]\displaystyle{ E(x;x_0) }[/math] a task-dependent data term, and [math]\displaystyle{ R(x) }[/math] the regularizer. This forms an energy minimization problem.
Deep neural networks learn a generator/decoder [math]\displaystyle{ x=f_\theta (z) }[/math] which maps a random code vector [math]\displaystyle{ z }[/math] to an image [math]\displaystyle{ x }[/math].
The image corruption method used to generate [math]\displaystyle{ x_0 }[/math] is selected for the specific application.
Specifics
In this approach, the [math]\displaystyle{ R(x) }[/math] prior is replaced with the implicit prior captured by the neural network (where [math]\displaystyle{ R(x)=0 }[/math] for images that can be produced by a deep neural network and [math]\displaystyle{ R(x)=+\infty }[/math] otherwise). This yields the minimization problem [math]\displaystyle{ \theta^* = \mathop{\arg\min}_\theta E(f_\theta(z);x_0) }[/math] and the result of the optimization process [math]\displaystyle{ x^* = f_{\theta^*}(z) }[/math].
The minimizer [math]\displaystyle{ \theta^* }[/math] is typically found by gradient descent, starting from randomly initialized parameters and descending into a local optimum to yield the restored image [math]\displaystyle{ x^* }[/math].
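This optimization loop can be sketched with PyTorch, the library used by the reference implementation. The small three-layer generator below is an illustrative stand-in for the paper's U-Net-like architecture, and the names `make_generator`, `deep_image_prior`, and the 32-channel code tensor are assumptions made for this sketch, not the authors' actual code:

```python
import torch
import torch.nn as nn

# Toy convolutional generator -- an illustrative stand-in for the
# deeper U-Net-like architecture used in the paper.
def make_generator(channels=3):
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, channels, 3, padding=1), nn.Sigmoid(),
    )

def deep_image_prior(x0, data_term, num_iters=2000, lr=0.01):
    """Minimize E(f_theta(z); x0) over theta by gradient descent.

    x0        : corrupted image tensor of shape (1, C, H, W)
    data_term : callable E(x, x0) returning a scalar loss
    """
    f = make_generator(x0.shape[1])
    z = torch.randn(1, 32, x0.shape[2], x0.shape[3])  # fixed random code
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    # num_iters is kept finite: stopping early avoids fitting the noise.
    for _ in range(num_iters):
        opt.zero_grad()
        loss = data_term(f(z), x0)
        loss.backward()
        opt.step()
    return f(z).detach()  # x* = f_{theta*}(z)
```

For denoising, for example, `data_term` would simply be the squared difference between [math]\displaystyle{ f_\theta(z) }[/math] and [math]\displaystyle{ x_0 }[/math].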
Overfitting
With enough capacity and iterations, a parameter vector θ can be found that reproduces any image, including its noise. However, the network offers high impedance to noise and low impedance to signal, so natural-looking image content is fitted much faster than the noise. As a result, θ approaches a good-looking local optimum, provided the number of iterations in the optimization process is kept low enough not to overfit the noise.
Deep Neural Network Model
Typically, the deep neural network model for deep image prior is a U-Net-like model without the skip connections that connect the encoder blocks with the decoder blocks. The authors note in their paper that "Our findings here (and in other similar comparisons) seem to suggest that having deeper architecture is beneficial, and that having skip-connections that work so well for recognition tasks (such as semantic segmentation) is highly detrimental."[1]
Applications
Denoising
The principle of denoising is to recover an image [math]\displaystyle{ x }[/math] from a noisy observation [math]\displaystyle{ x_0 }[/math], where [math]\displaystyle{ x_0 = x + \epsilon }[/math]. The distribution of the noise [math]\displaystyle{ \epsilon }[/math] is sometimes known (e.g., from profiling sensor and photon noise[2]) and may optionally be incorporated into the model, though the method also works well for blind denoising.
The quadratic energy function [math]\displaystyle{ E(x;x_0)=\|x-x_0\|^2 }[/math] is used as the data term; plugging it into the equation for [math]\displaystyle{ \theta^* }[/math] yields the optimization problem [math]\displaystyle{ \min_\theta \|f_\theta(z)-x_0\|^2 }[/math].
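As a concrete check of this data term, a minimal NumPy sketch (the function name `denoising_data_term` is chosen for this illustration; [math]\displaystyle{ x }[/math] and [math]\displaystyle{ x_0 }[/math] are plain arrays here):

```python
import numpy as np

def denoising_data_term(x, x0):
    """Quadratic data term E(x; x0) = ||x - x0||^2."""
    return np.sum((x - x0) ** 2)

# A clean image plus additive Gaussian noise epsilon.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
x0 = x + 0.1 * rng.standard_normal((8, 8))

# The energy is zero exactly when the candidate reproduces x0.
assert denoising_data_term(x0, x0) == 0.0
assert denoising_data_term(x, x0) > 0.0
```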
Super-resolution
Super-resolution is used to generate a higher-resolution version of an image [math]\displaystyle{ x }[/math]. The data term is set to [math]\displaystyle{ E(x;x_0)=\|d(x)-x_0\|^2 }[/math], where [math]\displaystyle{ d(\cdot) }[/math] is a downsampling operator, such as Lanczos, that decimates the image by a factor [math]\displaystyle{ t }[/math].
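The same data term can be sketched in NumPy. Block averaging is used below as a simple stand-in for the Lanczos operator mentioned above (an assumption for this sketch, not the operator the paper uses); `downsample` and `sr_data_term` are illustrative names:

```python
import numpy as np

def downsample(x, t):
    """Decimate a 2-D image by factor t via block averaging --
    a simple stand-in for the Lanczos operator d(.)."""
    h, w = x.shape
    return x.reshape(h // t, t, w // t, t).mean(axis=(1, 3))

def sr_data_term(x, x0, t):
    """E(x; x0) = ||d(x) - x0||^2: compare the downsampled
    high-resolution candidate x against the low-resolution input x0."""
    return np.sum((downsample(x, t) - x0) ** 2)

# A candidate whose downsampling matches x0 has zero energy.
x = np.arange(16, dtype=float).reshape(4, 4)
x0 = downsample(x, 2)
assert sr_data_term(x, x0, 2) == 0.0
```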
Inpainting
Inpainting is used to reconstruct a missing area in an image [math]\displaystyle{ x_0 }[/math]. The missing pixels are encoded in a binary mask [math]\displaystyle{ m \in \{ 0,1 \} ^ {H \times W} }[/math], with [math]\displaystyle{ m=0 }[/math] marking missing pixels and [math]\displaystyle{ m=1 }[/math] known ones. The data term is defined as [math]\displaystyle{ E(x;x_0)=\|(x-x_0) \odot m\|^2 }[/math] (where [math]\displaystyle{ \odot }[/math] is the Hadamard product).
The intuition is that the loss is computed only on the known pixels of the image, and the network learns enough about the image to fill in the unknown regions even though those pixels never contribute to the loss. This strategy can be used to remove image watermarks by treating the watermark as missing pixels.
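A minimal NumPy sketch of the masked data term makes this explicit; the function name `inpainting_data_term` and the toy 3×3 example are illustrative assumptions:

```python
import numpy as np

def inpainting_data_term(x, x0, m):
    """E(x; x0) = ||(x - x0) .* m||^2, with m = 1 on known pixels
    and m = 0 on missing ones, so missing pixels never affect the loss."""
    return np.sum(((x - x0) * m) ** 2)

# Toy example: the centre pixel of a 3x3 image is missing.
x0 = np.ones((3, 3))
m = np.ones((3, 3))
m[1, 1] = 0.0

# Whatever value the network paints into the hole, the loss is unchanged:
x = x0.copy()
x[1, 1] = 123.0
assert inpainting_data_term(x, x0, m) == 0.0
```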
Flash–no-flash reconstruction
This approach may be extended to multiple images. A straightforward example mentioned by the authors is the reconstruction of a single image with natural lighting and clarity from a flash–no-flash pair. Video reconstruction is also possible, but it requires optimizations that take the spatial differences between frames into account.
Implementations
- A reference implementation rewritten in Python 3.6 with the PyTorch 0.4.0 library was released by the author under the Apache 2.0 license: deep-image-prior [3]
- A TensorFlow-based implementation written in Python 2 and released under the CC-SA 3.0 license: deep-image-prior-tensorflow
- A Keras-based implementation written in Python 2 and released under the GPLv3: machine_learning_denoising
References
- ↑ Ulyanov, Dmitry; Vedaldi, Andrea; Lempitsky, Victor (2018). "Deep Image Prior". https://sites.skoltech.ru/app/data/uploads/sites/25/2018/04/deep_image_prior.pdf
- ↑ jo (2012-12-11). "profiling sensor and photon noise .. and how to get rid of it.". darktable. http://www.darktable.org/2012/12/profiling-sensor-and-photon-noise/.
- ↑ "DmitryUlyanov/Deep-image-prior". 3 June 2021. https://github.com/DmitryUlyanov/deep-image-prior.
- Ulyanov, Dmitry; Vedaldi, Andrea; Lempitsky, Victor (30 November 2017). "Deep Image Prior". arXiv:1711.10925v2 [cs.CV].
Original source: https://en.wikipedia.org/wiki/Deep image prior.