
DnCNN Paper Reading Notes (MATLAB)

Paper information:


Paper code: https://github.com/cszn/DnCNN

Abstract
Proposes the DnCNN network.
Key techniques: residual learning and batch normalization.
Problems addressed:
(1) Gaussian denoising (non-blind and blind)
(2) Single image super-resolution (SISR)
(3) JPEG image deblocking
I.?Introduction
Previous approaches
(1) Various models have been exploited for modeling image priors.
    Drawbacks: the test stage involves a complex, time-consuming optimization problem; the models are generally non-convex and involve many hand-chosen hyperparameters, making it hard to reach optimal performance.
(2) Several discriminative learning methods.
    Drawbacks: they learn an explicit image prior, involve many hyperparameters that make optimal performance hard to reach, and train one model per noise level, which limits them in blind image denoising.
Three reasons for using a CNN in this paper
(1) A deep network effectively increases the capacity and flexibility for exploiting image characteristics;
(2) Considerable advances in training regularization and learning methods, e.g., the rectified linear unit (ReLU), batch normalization and residual learning, can speed up training and improve denoising performance;
(3) Parallel computation on GPUs improves run-time speed.
Contributions of this paper
(1) An end-to-end trainable CNN that adopts a residual learning strategy: the clean image is implicitly removed in the hidden layers. That is, the input is the noisy observation (noisy image) and the output is the residual (noise) image left after removing the clean image. The motivation is that residual learning of an identity mapping, or a mapping close to it, works better than directly learning the clean image;
(2) Residual learning and batch normalization are combined to speed up training and boost denoising performance;
(3) A single model can be trained for blind image denoising, and a single model can handle three general denoising tasks: blind Gaussian denoising, SISR, and JPEG deblocking. SISR and JPEG deblocking are special cases of the denoising problem, so one generalized model can solve them together.
II.?Related Work
A. Deep Neural Networks for Image Denoising (a specific model is trained for a certain noise level)

(1)the multilayer perceptron (MLP) [31]



(2)a trainable nonlinear reaction diffusion (TNRD) model [19]


B. Residual Learning and Batch Normalization
(1)Residual Learning

(2)Batch Normalization

III.The Proposed Denoising CNN Model
Training a deep CNN model for a specific task generally involves two steps:

(1) Network architecture design: modify the VGG network [26] and set the network depth.


(2) Model learning from training data: use residual learning and batch normalization to speed up training and improve denoising performance.

A. Network Depth

The filter size is 3×3 and all pooling layers are removed, so a DnCNN of depth d has a receptive field of (2d+1)×(2d+1).

Choosing the receptive field size
(1) Comparison with the receptive fields of other classical methods:
(2) In this paper:
For Gaussian denoising with a certain noise level, the receptive field size of DnCNN is set to 35×35, with a corresponding depth of 17.
For other general image denoising tasks, a larger receptive field is adopted and the depth is set to 20.
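The receptive-field arithmetic above can be checked with a short sketch (the function name is mine, not from the paper): each stacked 3×3 convolution with stride 1 grows the receptive field by 2 pixels, giving 2d+1 for depth d.

```python
def receptive_field(depth: int, kernel: int = 3) -> int:
    """Receptive field of `depth` stacked stride-1 convolutions."""
    rf = 1
    for _ in range(depth):
        rf += kernel - 1  # each 3x3 conv adds kernel-1 = 2 pixels
    return rf             # equals 2*depth + 1 for kernel=3

print(receptive_field(17))  # 35, the DnCNN-S setting
print(receptive_field(20))  # 41, the larger field for general tasks
```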
B. Network Architecture


For DnCNN, we adopt the residual learning formulation to train a residual mapping R(y) ≈ v, and then we have x = y − R(y).

The loss function (the averaged mean squared error between the desired residual images and those estimated from the noisy input) used to learn the trainable parameters Θ:

ℓ(Θ) = (1/2N) Σ_{i=1}^{N} ‖R(y_i; Θ) − (y_i − x_i)‖²

where {(y_i, x_i)}_{i=1}^{N} represents N noisy-clean training image patch pairs.
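As a plain-NumPy sketch (function name and toy shapes are mine), the residual loss compares the predicted residual R(y) against the true residual y − x:

```python
import numpy as np

def dncnn_loss(pred_residual, noisy, clean):
    """Averaged MSE between predicted residuals R(y) and true residuals y - x."""
    target = noisy - clean          # the residual the network should predict
    n = pred_residual.shape[0]      # number of patch pairs N
    return np.sum((pred_residual - target) ** 2) / (2 * n)

y = np.array([[1.0, 2.0]])   # toy noisy patch
x = np.array([[0.5, 1.5]])   # toy clean patch
r = np.array([[0.5, 0.5]])   # a perfect residual prediction
print(dncnn_loss(r, y, x))   # 0.0
```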

(1)Deep Architecture
A network of depth D contains three types of layers:
(i) Conv+ReLU: for the first layer, 64 filters of size 3×3×c are used to generate 64 feature maps, and rectified linear units (ReLU, max(0, ·)) are then utilized for nonlinearity. Here c represents the number of image channels, i.e., c = 1 for a gray image and c = 3 for a color image.

(ii) Conv+BN+ReLU: for layers 2 to (D−1), 64 filters of size 3×3×64 are used, and batch normalization is added between convolution and ReLU.

(iii) Conv: for the last layer, c filters of size 3×3×64 are used to reconstruct the output.
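The three layer types can be assembled into a minimal PyTorch sketch (the paper's original code is MATLAB/MatConvNet, so the class and argument names here are my own re-implementation assumptions, not the authors' API). Note the `padding=1` zero-padding, which keeps every feature map at the input size.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """Depth-D DnCNN: Conv+ReLU, then (D-2) Conv+BN+ReLU blocks, then Conv."""
    def __init__(self, depth: int = 17, channels: int = 1, features: int = 64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        # Predicts the residual R(y) ~ v; the clean estimate is y - R(y).
        return self.net(y)

model = DnCNN(depth=17, channels=1)
y = torch.randn(1, 1, 40, 40)   # one 40x40 noisy gray patch
residual = model(y)
print(residual.shape)           # same spatial size as the input
```

Because only the residual is predicted, the denoised image is recovered as `y - model(y)`.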

(2) Reducing Boundary Artifacts

In many low-level vision applications, the output image is usually required to have the same size as the input, which may lead to boundary artifacts.

We directly pad zeros before convolution to make sure that each feature map of the middle layers has the same size as the input image.
C. Integration of Residual Learning and Batch Normalization for Image Denoising



It is the integration of residual learning formulation and batch normalization rather than the optimization algorithms (SGD or Adam) that leads to the best denoising performance.

D. Connection With TNRD

E. Extension to General Image Denoising

(1) DnCNN for Gaussian denoising with unknown noise level

In the training stage, we use noisy images from a wide range of noise levels (e.g., σ ∈ [0, 55]) to train a single DnCNN model. Given a test image whose noise level falls in that range, the learned single DnCNN model can denoise it without estimating its noise level.
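Generating training pairs for this blind setting can be sketched as follows (the helper name and the [0, 1] intensity convention are my assumptions): for each clean patch, a noise level is drawn uniformly from [0, 55] before adding white Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blind_pair(clean_patch, sigma_max=55.0):
    """Add AWGN with a noise level drawn uniformly from [0, sigma_max]."""
    sigma = rng.uniform(0.0, sigma_max)             # unknown at test time
    noise = rng.normal(0.0, sigma / 255.0,          # scale to [0, 1] intensities
                       size=clean_patch.shape)
    return clean_patch + noise, clean_patch         # (noisy, clean) pair

clean = np.zeros((50, 50))                          # toy clean 50x50 patch
noisy, target = make_blind_pair(clean)
print(noisy.shape)                                  # (50, 50)
```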

(2) A single DnCNN model for three specific tasks, i.e., blind Gaussian denoising, SISR, and JPEG deblocking

In the training stage, we utilize the images with AWGN from a wide range of noise levels, down-sampled images with multiple upscaling factors, and JPEG images with different quality factors to train a single DnCNN model.

IV.?Experimental Results

A. Experimental Setting

1. Training and Testing Data:

(1) DnCNN-S (for Gaussian denoising with a known, specific noise level)

Three noise levels: σ = 15, 25 and 50.
Follow [19] to use 400 images of size 180×180 for training.
Set the patch size to 40×40, and crop 128×1,600 patches to train the model.

(2) DnCNN-B (a single DnCNN model for the blind gray-level Gaussian denoising task)

Set the range of noise levels as σ ∈ [0, 55].
Set the patch size to 50×50 and crop 128×3,000 patches to train the model.
Two test datasets: 68 natural images from the Berkeley segmentation dataset (BSD68) [14], and another containing the 12 images shown in Fig. 3.

(3) CDnCNN-B (a single DnCNN model for the blind color Gaussian denoising task)

Set the range of noise levels as σ ∈ [0, 55].
Set the patch size to 50×50 and crop 128×3,000 patches to train the model.
Use the color version of the BSD68 dataset for testing; the remaining 432 color images from the Berkeley segmentation dataset are adopted as training images.

(4) DnCNN-3 (a single DnCNN model for the three general image denoising tasks)

Set the patch size to 50×50 and crop 128×3,000 patches to train the model.
Rotation/flip based operations on the patch pairs are used during mini-batch learning.
The parameters are initialized with DnCNN-B.
Training set: 91 images from [43] and 200 training images from the Berkeley segmentation dataset.

The inputs for the three denoising tasks are generated as follows:
1) The noisy image is generated by adding Gaussian noise with a noise level drawn from the range [0, 55].
2) The SISR input is generated by first bicubic-downsampling and then bicubic-upsampling the high-resolution image, with downscaling factors 2, 3 and 4.
3) The JPEG deblocking input is generated by compressing the image with a quality factor ranging from 5 to 99 using the MATLAB JPEG encoder.
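The three degradations above can be sketched in Python with NumPy and Pillow (the paper uses MATLAB; these helper names are mine, and Pillow's JPEG encoder stands in for MATLAB's):

```python
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

def awgn_input(img, sigma_range=(0, 55)):
    """Task 1: add Gaussian noise with a level drawn from the range."""
    sigma = rng.uniform(*sigma_range)
    arr = np.asarray(img, dtype=np.float64)
    return arr + rng.normal(0.0, sigma, arr.shape)

def sisr_input(img, factor=3):
    """Task 2: bicubic downsample, then bicubic upsample back."""
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.BICUBIC)
    return small.resize((w, h), Image.BICUBIC)

def jpeg_input(img, quality=10):
    """Task 3: round-trip through a JPEG encoder at the given quality."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

img = Image.new("L", (96, 96), color=128)   # toy grayscale image
print(awgn_input(img).shape)                # (96, 96)
print(sisr_input(img).size)                 # (96, 96)
print(jpeg_input(img).size)                 # (96, 96)
```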

2. Parameter Setting and Network Training

Set the network depth to 17 for DnCNN-S and 20 for DnCNN-B and DnCNN-3. Initialize the weights by the method in [34] and use SGD with a weight decay of 0.0001, a momentum of 0.9, and a mini-batch size of 128. The DnCNN models are trained for 50 epochs, with the learning rate decayed exponentially from 1e−1 to 1e−4 over the 50 epochs.
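These hyperparameters can be mirrored in PyTorch (a sketch only; the original training used MatConvNet, and the stand-in one-layer model is mine). The exponential decay factor is chosen so the learning rate reaches 1e−4 after the 50 epochs:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the full DnCNN model
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-1, momentum=0.9, weight_decay=1e-4)
# Decay lr exponentially from 1e-1 to 1e-4 over 50 epochs:
gamma = (1e-4 / 1e-1) ** (1 / 50)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for epoch in range(50):
    # ... one epoch over the size-128 mini-batches would run here ...
    scheduler.step()

print(optimizer.param_groups[0]["lr"])   # ~1e-4 after 50 epochs
```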
B. Compared Methods

Two non-local similarity based methods (i.e., BM3D [2] and WNNM [15]);
One generative method (i.e., EPLL [40]);
Three discriminative training based methods (i.e., MLP [31], CSF [17] and TNRD [19]).

C. Quantitative and Qualitative Evaluation



D. Run Time



E. Experiments on Learning a Single Model for Three General Image Denoising Tasks



V.?Conclusion

In the future, we will investigate proper CNN models for denoising images with real, complex noise, and for other general image restoration tasks.
