Abatract

In this paper, we propose a novel video rain streak removal approach FastDeRain, which fully considers the discriminative (辨别)characteristics of rain streaks and the clean video in the gradient domain(梯度域).Specifically, on the one hand, rain streaks are sparse(梳) and smooth along the direction of the raindrops, whereas on the other hand, clean videos exhibit (展示出)piecewise smoothness(分段平滑) along the rain-perpendicular direction and continuity along the temporal direction.(沿着垂直方向和在时间方向连续)Theses smoothness and continuity results in the sparse distribution in the different directional gradient domain, respectively.(这些平滑度和连续性分别导致不同方向梯度域中的稀疏分布。)Thus, we minimize 1) the l1 norm(范数) to enhance the sparsity of the underlying rain streaks, 2) two l1 norm of unidirectional Total Variation (TV) regularizers to guarantee(保证) the anisotropic(各向异性) spatial (空间的)smoothness(单向全变差(TV)正则化器的两个l1范数,以保证各向异性的空间平滑度,), and 3) an l1 norm of the time-directional difference operator to characterize the temporal continuity.(时间方向差分算子的l1范数,用于表征时间连续性。)A split augmented Lagrangian shrinkage algorithm (SALSA) based algorithm is designed to solve the proposed minimization model.(基于分裂增强拉格朗日收缩算法(SALSA)的算法被设计用于求解所提出的最小化模型。)Experiments conducted(进行) on synthetic and real data demonstrate(表明) the effectiveness and efficiency of the proposed method.According to comprehensive quantitative(全面大量的) performance measures, our approach outperforms other state-of-the-art methods, especially on account of the running time.

1.Introduction

Raindrops usually introduce bright streaks into the acquired images or videos, because of their scattering of light into complementary metal–oxide–semiconductor cameras and their high velocities.
For the single-image de-raining task, Kang et al. [8] decomposed(分解) a rainy image into low-frequency (LF) and highfrequency (HF) components using a bilateral(双向) filter and then performed morphological component analysis (形态成分分析)(MCA)-based dictionary learning and sparse coding to separate the rain streaks in the HF component. To alleviate(缓和) the loss of the details when learning HF image bases, Sun et al. [9] tactfully exploited(巧妙的利用) the structural similarity of the derived(派生) HF image bases. Chen et al. [10] considered the similar and repeated patterns of the rain streaks and the smoothness of the background. Sparse coding and dictionary learning were adopted in [12–14].In their results, the details of backgrounds were well preserved.

[8] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1742–1755, 2012.
[9] S.-H. Sun, S.-P. Fan, and Y.-C. F. Wang, “Exploiting image structural similarity for single image rain removal,” in the IEEE International Conference on Image Processing (ICIP), 2014, pp. 4482–4486.
[10] Y.-L. Chen and C.-T. Hsu, “A generalized low-rank appearance model for spatio-temporally correlated rain streaks,” in the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1968–1975.
[11] J. Chen and L.-P. Chau, “A rain pixel recovery algorithm for videos with highly dynamic scenes,” IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1097–1104, 2014.
[12] D.-Y. Chen, C.-C. Chen, and L.-W. Kang, “Visual depth guided color image rain streaks removal using sparse coding,” IEEE transactions on circuits and systems for video technology, vol. 24, no. 8, pp. 1430–1455, 2014.
[13] Y. Luo, Y. Xu, and H. Ji, “Removing rain from a single image via discriminative sparse coding,” in the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3397–3405.
[14] C.-H. Son and X.-P. Zhang, “Rain removal via shrinkage(收缩) of sparse codes and learned rain dictionary,” in the IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2016, pp. 1–6.

The recent work by Li et al. [15] was the first to utilize Gaussian mixture model (GMM) patch priors for rain streak removal, with the ability to account for(支持) rain streaks of different orientations and scales. Zhu et al. [16] proposed a joint bi-layer optimization (联合双层优化)method progressively(逐步) separate rain streaks from background details, in which the gradient statistics are analyzed(梯度统计分析). Meanwhile, the directional property of rain streaks received a lot of attention in [19–21] and these methods achieved promising performances(有前途的表现). Ren et al. [23] removed the rain streaks from the image recovery perspective(透视). Wang et al. [22] took advantage the image decomposition and dictionary learning. The recently developed deep learning technique was also applied to the single image rain streaks removal task, and excellent results were obtained [24–31].

[15] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Single image rain streak decomposition using layer priors,” IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3874–3885, 2017.
[16] L. Zhu, C.-W. Fu, D. Lischinski, and P.-A. Heng, “Joint bi-layer optimization for single image rain streak removal,” in the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[17] B.-H. Chen, S.-C. Huang, and S.-Y. Kuo, “Error-optimized sparse representation for single image rain removal,” IEEE Transactions on Industrial Electronics, vol. 64, no. 8, pp. 6573–6581, 2017.
[18] S. Gu, D. Meng, W. Zuo, and L. Zhang, “Joint convolutional analysis and synthesis sparse representation for single image layer separation,” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1717–1725.
[19] Y. Chang, L. Yan, and S. Zhong, “Transformed low-rank model for line pattern noise removal,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1726–1734.
[20] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang, “A directional global sparse model for single image rain removal,” Applied Mathematical Modelling, vol. 59, pp. 662–679, 2018.
[21] S. Du, Y. Liu, M. Ye, Z. Xu, J. Li, and J. Liu, “Single image deraining via decorrelating the rain streaks and background scene in gradient domain,” Pattern Recognition, vol. 79, pp. 303–317, 2018.
[22] Y. Wang, S. Liu, C. Chen, and B. Zeng, “A hierarchical approach for rain or snow removing in a single color image,” IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3936–3950, 2017.
[23] D. Ren, W. Zuo, D. Zhang, L. Zhang, and M.-H. Yang, “Simultaneous fidelity and regularization learning for image restoration,” arXiv preprint arXiv:1804.04522, 2018.

For the video rain streaks removal, Garg et al. [32] firstly raised a video rain streaks removal method with comprehensive analysis(综合分析) of the visual effects of the rain on an imaging system. Since then, many approaches have been proposed for the video rain streaks task and obtained good rain removing performance in videos with different rain circumstances(情况). Comprehensive early existing video-based methods are summarized in [33]. Chen et al. [11] took account of the highly dynamic(动态的) scenes. Whereafter(此后), Kim et al. [34] considered the temporal correlation(时间相关) of rain streaks and the low-rank nature of clean videos. Santhaseelan et al. [35] detected(检测) and removed- the rain streaks based on phase congruency features(相位一致性特征). You et al. [36] dealt with the situations where the raindrops are adhered to(附着在) the windscreen or the window glass. In [37], a novel tensor-based video rain streak removal approach was proposed considering the directional property(方向性). Ren et al. [38] handled the video desnowing and deraining task based on matrix decomposition(矩阵分解). The rain streaks and the clean background were stochastically(随机) modeled as a mixture of Gaussians by Wei et al. [39] while Li et al. [40] utilized the multiscale convolutional sparse coding(多尺度卷积稀疏编码). For the video rain streaks removal, the deep learning based methods also started to reveal their effectiveness [41, 42].

[32] K. Garg and S. K. Nayar, “Detection and removal of rain from videos,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. I–528–I–535.
[33] A. K. Tripathi and S. Mukhopadhyay, “Removal of rain from videos: a review,” Signal, Image and Video Processing, vol. 8, no. 8, pp. 1421–1430, 2014.
[34] J.-H. Kim, J.-Y. Sim, and C.-S. Kim, “Video deraining and desnowing using temporal correlation and low-rank matrix completion,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2658–2670, 2015.
[35] V. Santhaseelan and V. K. Asari, “Utilizing local phase information to remove rain from video,” International Journal of Computer Vision, vol. 112, no. 1, pp. 71–89, 2015.
[36] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi, “Adherent raindrop modeling, detectionand removal in video,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1721–1733, 2016.
[37] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, “A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4057–4066.
[38] W. Ren, J. Tian, Z. Han, A. Chan, and Y. Tang, “Video desnowing and deraining based on matrix decomposition,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4210–4219.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[40] M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video rain streak removal by multiscale convolutional sparse coding,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6644–6653.
[41] J. Chen, C.-H. Tan, J. Hou, L.-P. Chau, and H. Li, “Robust video content alignment and compensation for rain removal in a cnn framework,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6286–6295.
[42] J. Liu, W. Yang, S. Yang, and Z. Guo, “Erase or fill? deep joint recurrent rain removal and reconstruction in videos,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3233–3242.

In general, the observation model for a rainy image is formulated as O = B+R [1],which can be generalized to the video case as: O = B+R, where O, B; and R ∈ Rmnt are three 3-mode tensors representing the observed rainy video, the unknown rain-free video and the rain streaks, respectively.
When considering the noise or error, the observation model is modified as O = B + R + N, where N is the noise or error term. The goal of video rain streak removal is to distinguish the clean video B and the rain streaks R from an input rainy video O. This is an ill-posed inverse problem, which can be handled by imposing prior information.可以通过强加先验信息来处理。
Therefore, from this point of view, the most significant issues are the rational extraction and sufficient utilization of the prior knowledge, which is helpful to wipe off the rain streaks and reconstruct the rain-free video.(最重要的问题是合理提取和充分利用先验知识)
In this paper, we mainly focus on the discriminative characteristics of rain streaks and background in different directional gradient domains(在本文中,我们主要关注不同方向梯度域中雨条纹和背景的判别特征。).
From the temporal perspective(时间方面), the clean video is continuous along the time direction, while the rain streaks do not share this property [34, 39, 43].

[34] J.-H. Kim, J.-Y. Sim, and C.-S. Kim, “Video deraining and desnowing using temporal correlation and low-rank matrix completion,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2658–2670, 2015.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[43] S. Starik and M. Werman, “Simulation of rain in videos,” in the IEEE International Conference on Computer Vision (ICCV) Texture Workshop, vol. 2, 2003, pp. 406–409.


As observed in Fig. 2, the time-directional gradient(时间方向梯度) of the rain-free video (a-2) exhibits a different histogram(直方图) compared with those of the rainy video (a-1) and the rain streaks (a-3).
The temporal gradient of the clean video is much sparser and it is corresponding to the temporal continuity of the clean video.
Therefore, we intend to minimize ‖▽tB‖1, where ▽t is the temporal differential operator(时间微分算子).(公式是什么意思?)
From the spatial perspective, it has been widely recognized that natural images are largely piecewise smooth and their gradient fields are typically sparse (自然图像基本上是分段平滑的,并且它们的梯度场通常是稀疏的)[44, 45].(没懂???

[44] X. Guo and Y. Ma, “Generalized tensor total variation minimization for visual data recovery,” in the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3603–3611.
[45] Y. Jiang, X. Jin, and Z. Wu, “Video inpainting based on joint gradient and noise minimization,” in The Pacific Rim Conference on Multimedia. Springer, 2016, pp. 407–417.

Many aforementioned de-rain methods take the spatial gradient into consideration and use the total variation (TV) to depict the property of the rain-free part [1, 10].(许多上述的降雨方法都考虑了空间梯度,并使用总变差(TV)来描述无雨部分[1,10]的特性。)(总变差是个啥,怎么描述的无雨的特性?)

[1] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2736–2744.
[10] Y.-L. Chen and C.-T. Hsu, “A generalized low-rank appearance model for spatio-temporally correlated rain streaks,” in the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1968–1975.

However, the effects of the rain streaks on the vertical gradient and horizontal gradient are different. This phenomenon was likewise(同样) noticed in [19–21]. Initially, for the sake of convenience, we assume that rain streaks are approximately vertical. The impact of the vertical rain streaks on the vertical gradient is limited(垂直降雨条纹对垂直梯度的影响是有限的). The subfigures (b-1,2,3) in Fig. 2 reveal that the vertical gradient of rain streaks are much sparser than those of the clean video and the rainy video.

[19] Y. Chang, L. Yan, and S. Zhong, “Transformed low-rank model for line pattern noise removal,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1726–1734.
[20] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang, “A directional global sparse model for single image rain removal,” Applied Mathematical Modelling, vol. 59, pp. 662–679, 2018.
[21] S. Du, Y. Liu, M. Ye, Z. Xu, J. Li, and J. Liu, “Single image deraining via decorrelating the rain streaks and background scene in gradient domain,” Pattern Recognition, vol. 79, pp. 303–317, 2018.

Nonetheless, the vertical rain streaks severely disrupt the horizontal piecewise smoothness.
尽管如此,垂直的雨水条纹严重破坏了水平的分段平滑度。
As exhibited in Fig. 2(c-1,2,3), the pixel intensity is piecewise smooth only in (c-2), whereas burrs(毛刺) frequently appear in (c-1) and (c-3). Therefore, we intend to minimize ‖▽1R‖1and ‖▽2B‖1, where ▽1 and ▽2 are respectively the vertical difference (or say vertical unidirectional TV [46–48]) operator and horizontal difference (or say horizontal unidirectional TV) operator.(这里没大明白,为什么要最小化水平和垂直的梯度
Given a real rainfall-affected scene, without the wind, the raindrops generally fall from top to bottom. Meanwhile, when not very windy, the angles between rain streaks and the vertical direction are usually not very large. Therefore, the rain streak direction can be approximated as the vertical direction, i.e. the mode-1 (column) direction of the video tensor. Actually, this assumption is reasonable for parts of the rainy sceneries. For the rain streaks that are oblique (or say far from being vertical), directly utilizing the directional property is very difficult for the digital video data, which are cubes of distinct numbers. To cope with this difficulty, in Sec. III-E, we would design the shift strategy, based on our automatical rain streaks’ direction detection method.
The contributions of this paper include three aspects.

  • We propose a video rain streaks removal model, which fully considers the discriminative prior knowledge of the rain streaks and the clean video.
  • We design a split augmented Lagrangian shrinkage algorithm (SALSA) based algorithm to efficiently and effectively solve the proposed minimization model. The convergence(收敛) of our algorithm is theoretically guaranteed(理论上保证). Meanwhile, the implementation on the graphics processing unit (GPU) device further accelerates our method.
  • To demonstrate(展示) the efficacy and the superior performance of the proposed algorithm in comparison with state-ofthe-art alternatives(选择), extensive(广泛的) experiments both on the synthetic data and the real-world rainy videos are conducted.
    This work is an extension of the material published in [37](这项工作是[37]中发表的材料的延伸). The new material is the following: a) the proposed rain streaks removal model is improved and herein introduced in more technical details(这里介绍了更多的技术细节); b) we explicitly(明确的) use the split augmented Lagrangian shrinkage algorithm(分裂增广拉格朗日收缩算法) to solve the proposed model; c) to make the proposed method more applicable, we design an automatical rain streaks’ direction detecting method and provide the shift strategy(转变策略) to deal with oblique rain streaks; d) in our experiments, we re-simulate(重新模拟) the rain streaks for the synthetic data, using two different techniques and considering the rain streaks not very vertical; e) three recent state-of-the-art methods [27, 39, 40] are brought into comparison.

[37] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, “A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4057–4066.
[27] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3855–3863.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[40] M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video rain streak removal by multiscale convolutional sparse coding,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6644–6653.

The paper organized as follows. Section II gives the preliminary on the tensor notations(初步的张量符号). In Section III, the formulation of our model is presented along with a SALSA solver. Experimental results are reported in Section IV. Finally, we draw some conclusions in Section V.

II. NOTATION(注释) AND PRELIMINARIES


0

[49] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, T.-Y. Ji, and L.-J. Deng, “Matrix factorization for low-rank tensor completion using framelet prior,” Information Sciences, vol. 436, pp. 403–417, 2018.
[50] S. Li, R. Dian, L. Fang, and J. M. Bioucas-Dias, “Fusing hyperspectral and multispectral images via coupled sparse tensor factorization,” IEEE Transactions on Image Processing, vol. 27, no. 8, pp. 4118–4130, 2018.
[51] T.-Y. Ji, N. Yokoya, X. X. Zhu, and T.-Z. Huang, “Nonlocal tensor completion for multitemporal remotely sensed images’ inpainting,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 6, pp. 3047–3061, 2018.
[52] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.

III. MAIN RESULTS

A. Problem formulation(问题的表述)

As mentioned before, a rainy video O ∈ Rmnt can be modeled as a linear superposition(线性叠加)? = B + R + N; (1) where O; B;R and N ∈Rmnt are four 3-mode tensors representing the observed rainy video, the unknown rain-free video, the rain streaks and the noise (or error) term, respectively.
Our goal is to decompose(分解) the rain-free video B and the rain streaks R from an input rainy video O. To solve this ill-posed inverse problem, we need to analyze the prior information for both B and R and then introduce corresponding regularizers(介绍相应的正则化器), which will be discussed in the next subsection.

B. Priors and regularizers**(正则化器是个什么东西,做什么用的?)**

In this subsection, we continue the discussion on the prior knowledge with the assumption that rain streaks are approximately vertical.

a) Sparsity of rain streaks:

When the rain is light, the rain streaks can naturally be considered as being sparse(雨条亮,可以考虑他是稀疏的). To boost the sparsity of rain streaks, minimizing the l1 norm of the rain streaks R is an ideal option(为了增加雨条稀疏性,最小化雨条R的l1范数是个理想的选择)(为什么最小化R就是能得到好的效果呢). When the rain is very heavy, it seems that this regularization is not proper(雨很大的时候正则化不是很恰当)(为什么雨大了就不行了呢). However, when the rain is extremely heavy, it is very difficult or even impossible to recover the rain-free part because of the huge loss of the reliable information. The rainy scenarios discussed in this paper are not that extreme, and we assume that the rain streaks always maintain lower energy than the background clean videos. Therefore, when the rain streaks are dense(稠密), the l1 norm can be viewed as a role to restrain the magnitude of the rain streaks(l1范数被看做是约束雨条大小的规则)(是不是说,l1范数的min值越小,雨越密?为什么呀,为0的点就会是雨吗?不是0的也可以是雨呀?). Meanwhile, in our model, other regularization terms(其他正则化条件可以分辨雨线) would also contribute to distinguishing the rain streaks. Thus, we can tackle the heavy raining scenarios by tuning the parameter of the sparsity term so as to reduce its effect.(我们可以通过调整稀疏度项的参数来解决大雨场景,从而减少其影响。)(怎么能通过调整这个参数就可以解决大雨的场景?

b) The horizontal direction:

In Fig. 2, (c-1,2,3) show the pixel intensities along a fixed row of the rainy video, the clean video and the rain streaks, respectively. (分别显示了沿着三个视频的每一行像素的像素强度)It is obvious that the variation of the pixel intensity is piecewise smooth only in (c-2), whereas burrs frequently appear in (c-1) and (c-3). Therefore, a horizontal unidirectional TV regularizer is a suitable candidate for B.(所以,水平单向TV(total variation总变差)调节器很合适)(意思理解了,但是水平单向TV调节器是什么不知道)

c) The vertical direction:

It can be seen from Fig. 2 that (b-3), which is the histogram of the intensity of the vertical gradient in a rain-streak frame, exhibits a distinct distribution(分布不均) with respect to (c-1) and (c-2). The long-tailed distributions in (c-1) and (c-3) indicate that the minimization of the l1 norm of ▽1R would help to distinguish the rain streaks.长尾属性是什么?)(最小化▽1R的l1范数为什么可以辨别雨线?涉及到l1范数是什么,▽1R是什么?
long-tailed distributions长尾属性

d) The temporal direction:

From the first column of Fig. 2, it can be observed that clean videos exhibit the continuity along the time axis. Sub-figures (a-1,2,3), which present the histograms of the magnitudes(大小,量纲) in the temporal directional gradient(时间方向梯度是个什么东西,是对每个像素来计算的吗?), illustrate (说明)that the clean video’s temporal gradients consist of more zero values and smaller non-zero values, whereas those of the rainy video and rain streaks tend to be long-tailed(时间方向梯度,对于clean videos,大部分为0或者接近0,rainy videos存在长尾现象). Therefore, it is natural to minimize the l1 norm of the temporal gradient of the clean video B. (所以,自然要最小化时间方向梯度的l1范数)(什么叫最小化这个值?)By the way, the low-rank regularization used in [37] is discarded(37中用的低秩正则化被丢弃) (37中的低秩正则化是什么?)since that the low-rank assumption is not reasonable for the videos captured by dynamic cameras and the rain streaks, which always share the repetitive patterns, can occasionally be more low-rank than the background along the spatial directions.(对于由动态相机捕获的视频而言,低等级假设是不合理的,并且总是共享重复模式的降雨条纹有时可能比沿着空间方向的背景更低等级。)(低等级假设是什么,共享重复模式图案是什么意思,偶尔比背景在空间方向更低等级什么意思,怎么定义这个等级?

[37] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, “A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4057–4066.

C. The proposed model

Generally, there is an angle between the vertical direction and the real falling direction of the raindrops. The rain streaks pictured in Fig. 2 are not strictly vertical and there is a 5-degree angle between the rain streaks and the y-axis. In other words, the prior knowledge discussed above are still valid when this angle is small. Large-angle cases would be discussed in Sec. III-E). Therefore, the rain streak direction is referred to as the vertical direction corresponding to the y-axis, whereas the rain-perpendicular direction is referred to as the horizontal direction corresponding
to the x-axis. Thus, as a summary of the discussion of the priors and regularizers, our model can be compactly formulated as follows:(公式似懂非懂

D. Optimization(优化)

Since the proposed model (2) is concise and convex(简洁又凸), many state-of-the-art solvers are available to solve it. Here, we apply the ADMM [53], which has been proved an effective strategy for solving large scale(大规模) optimization problems [54–56]. More specifically, we adopt SALSA [57]. After introducing four auxiliary tensors(辅助张量) the proposed model (2) is reformulated as the following equivalent constrained problem:(被重新表述为以下等效约束问题)


欧拉-拉格朗日方程

a) Vi sub-problems:


这样的问题有一个封闭解,通过软阈值获得:
封闭解
arg max
为什么时间复杂度O(mnt)

b) B and R sub-problems:


快速傅里叶变换?什么叫小于0大于O的要压缩?时间复杂度是怎么算的?

E. Discussion of the oblique rain streaks

the directional property(方向性) we utilized in our model is a double-edged sword (双刃剑)when dealing with digital videos. In this subsection, we design an automatical rain streaks’ angle detection method, and based on it, we propose the shift strategy to deal with rain streaks not vertical.

a) Rain streaks direction detection:

Based on our analysis of the prior knowledge, it’s not difficult to come up with a simple and effective method to detect the direction.In this subsection, we assume that the rain streaks are in the same direction and the angle between rain streaks and the vertical direction are denoted as θ. For a rainy video O ∈ Rmnt, our method consists of three steps:

(1)用3*3中值滤波器过滤rainy video的水平切片,R0 = O - O^(这里中值滤波的作用是什么?为什么要过滤?减去滤波之后的为什么就是雨滴部分?
(2)旋转,i°时获得Ri
(3)▽Ri 的1范数最小时能得到角大小sita(为什么是梯度的一范数最小
Fig. 3 shows an example of our detection method, where the rain streaks are simulated(模拟) with angle 45° and the detection result (labeled red) is exactly 45°. Actually, the yis are very low when θi is close to 45°, according with the discussion in III-B. Generally, the angle between the rain streaks and the vertical direction distributes in (-90°; 90°). If the angle θ^∈ (-90°; 0°), we can restrict(约束) it to the range of (0; 90) by the left-right flipping of each frame(每帧的左右翻转). If the angle θ^∈2 (45; 90), we can restrict it to the range of (0; 45) by transposing (i.e. interchanging the rows and columns of a given matrix) each rame. To save space, we only discuss the situations where ^ 2 [0; 45] in the following.

b) The shift strategy:

When the detected angle θ ^∈ [15°; 45°], we apply the shift strategy, which consists of two shifting operations, as shown in Fig. 4, for different situations. The two shift operations are detailed as follows:(这种shift方式是不是太粗略了?为什么分的这么粗略?

Different from the rotation strategy recommended in [37], the core idea of the shift strategy is to rationally slide the rows of the rainy frames(合理地滑动有雨图像的行) and make the rain streaks being approximately vertical without any degradation caused by interpolation (插值引起的退化)(插值引起退化是什么意思?)Meanwhile, it is notable(显著) that these shifting operations wouldn’t affect the prior knowledges mentioned in III-B. After shifting, the rain streaks is close to being vertical, and we can apply the algorithm 1. Finally, the result would be shifted back. The flowchart of applying our FastDeRain with the shift strategy is shown in Fig. 5.

IV. EXPERIMENTAL RESULTS

In this section, we evaluate(评估) the performance of the proposed algorithm on synthetic data and real-world rainy videos

a) Implementation details:

Throughout our experiments, color videos with dimensions of m * n * 3 * t are transformed into the YUV format. YUV is a color space that is often used as part of a color image pipeline(用作彩色图像管道的一部分). Y stands for the luma component (the brightness), and U and V are the chrominance (color) components(色度成分)2. We apply our method only to the Y channel with the dimension of m * n * t. The exhibited rain streaks are scaled for better visualization.
为什么只关心亮度成分,不关心色度成分?
Since that the graphics processing unit (GPU) device is able to speed up the large-scale computing, we implement our method on the platform of Windows 10 and Matlab (R2017a)
with an Intel® Core™ i5-4590 CPU at 3.30GHz, 16 GB RAM, and a GTX1080 GPU. The involved operations in algorithm 1 is convenient to be implemented on the GPU device [58]. If we conduct our algorithm on the CPU, the running time for dealing with a video of size 240 * 320 * 3 * 100 is about 23 seconds, while 7 seconds on the GPU device. Meanwhile, Fu et al.’s method [27] can also be accelerated by the GPU device, from 38 seconds on the CPU to 24 seconds on the GPU, dealing with a video of size 2403203*100. Thus, we only report the GPU running time of FastDeRain and Fu et al.’s method in this section.

[58] “GPU computing,” https://www.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html.

b) Compared methods:

To validate(验证) the effectiveness and efficiency of the proposed method, we compare our method (denoted as “FastDeRain”) with recent state-of-the-art methods, including one single image based method, i.e., Fu et al.’s deep detail network (DDN) method3 [27]; and three video-based mehtods, i.e., Kim et al.’s method using temporal correlation and low-rankness (TCL) 4 [34], Wei et al.’s stochastic encoding (SE) method5 [39], and Li et al.’s multiscale convolutional sparse coding (MS-CSC) method6 [40]. In fact, DDN is a single-image-based rain streak removal method, but their performance has already surpassed some video-based methods. The deep learning technique shows a great vitality and an extremely wide application prospect. Hence, the comparison with DNN is reasonable and challenging.

[27] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3855–3863.
[34] J.-H. Kim, J.-Y. Sim, and C.-S. Kim, “Video deraining and desnowing using temporal correlation and low-rank matrix completion,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2658–2670, 2015.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[40] M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video rain streak removal by multiscale convolutional sparse coding,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6644–6653.

2https://en.wikipedia.org/wiki/YUV
3http://smartdsp.xmu.edu.cn/xyfu.html
4http://mcl.korea.ac.kr/jhkim/deraining/deraining code with example.zip
5http://gr.xjtu.edu.cn/web/dymeng
6https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal

A. Synthetic data

a) Rain streak generation:

Adding rain streaks to a video is indeed a complex problem since there is not an existing algorithm nor a free software to accomplish it in one step. Meanwhile, as Starik et al. pointed out in [43] that the rain streaks can be assumed temporal independent(可以假定为时间独立的), thus we can simulate rain streaks for each frame using the synthetic method mentioned in many recently developed single image rain streaks removal approaches [8, 13, 26], i.e., using the Photoshop software with the tutorial documents [59]. The density of the simulated rain streaks by this method is mainly determined by the ratio of the amounts of dots (in step 8 of [59]) to the number of all the pixels, and for convenience, the ratio is denoted as r. Another way to synthesize the rain streaks was proposed in [39], adding rain streaks taken by photographers under black background7.

[43] S. Starik and M. Werman, “Simulation of rain in videos,” in the IEEE International Conference on Computer Vision (ICCV) Texture Workshop, vol. 2, 2003, pp. 406–409.
[59] “Adding rain to a photo with photoshop,” https://www.photoshopessentials.com/photo-effects/rain/.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.

7http://www.aigei.com/video/effect/1_rain/

Referring to [59] and [39], we generate 3 types of rain streaks as follows:



SE [39] and MS-CSC [40] are designed mainly for the videos captured by static cameras, and directly applying them on the video captured by dynamic camera would result in poor performances (see the gray values Table II). Therefore,for a fair comparison, the compared methods included DDN [26] and TCL [34] when dealing with the synthetic rainy data generated on the videos “foreman” “bus” and “waterfall”. When dealing with the rainy data simulated with the video “highway”, SE [39] and MS-CSC [40] would be brought into comparison.

b) Quantitative comparisons:

For quantitative assessment,(定量评估) the peak signal-to-noise ratio (PSNR) (峰值信噪比)of the whole video, and the structural similarity (SSIM)(结构相似性) [60], the feature similarity (FSIM)(特征相似性) [61], the visual information fidelity (VIF) (视觉信息保真度)[62], the universal image quality index (UIQI)(通用图像质量指数) [63], and the gradient magnitude similarity deviation (GMSD, smaller is better)(梯度幅度相似度偏差) [64] of each frame are calculated. The PSNR, the corresponding mean values of SSIM FSIM VIF and UIQI, and the running time are reported in Table II, in which the best quantitative values are in boldface(黑体).
As observed in Table II, our method considerably outperformed (相当跑赢)the other four state-of-the-art methods in terms of all the selected quality assessment indexes. Notably(值得注意的是), in many cases, the performances of the single-image-based deep learning method DNN [26] surpassed the those of the video-based method TCL [34]. This is in agreement with the aforementioned rationality of considering comparisons with the single-image-based method.(这与上述考虑与基于单图像的方法的比较的合理性一致。)
The running time of the our FastDeRain is extremely low. In particular, our method took less than 10 seconds when dealing with all the synthetic data. Although a tensor system might be expected to be computationally expensive(虽然张量系统可能预计在计算上很昂贵), our algorithm, with closed-form solutions to its sub-problems and a time complexity of approximately O(mntlog(mnt)) for an input video with a resolution of m*n and t frames, is expected to be efficient. In the meantime, the aforementioned(前述的) implementation on the GPU device also largely accelerated our algorithm.

c) Visual comparisons:

Fig. 6, 7 and 8 exhibit the results conducted(进行) on videos with synthetic rain streaks in case 1, case 2 and case 3, respectively. In Fig. 6, since the angles of rain streaks in case 1 increase with time, we display the frames at the beginning or end. Meanwhile, only one frame is exhibited in Fig. 7, Fig. 8 on account of that the rain streaks in every frame are of various directions.
In Fig. 6, all the methods removed almost all of the rain streaks and the proposed method maintained the best background. Many details in the background were incorrectly extracted to the rain streaks by DDN and TCL. It can be found in the 6-th row of Fig. 6, i.e., the error images of the results on the video “bus”, that little vertical patterns were mistakenly extracted as the rain streaks by the proposed method.
For the rain streaks in case 2, the denser rain streaks imply that it is more difficult than rain streaks in case 1(密集的降雨条纹意味着它比案例1中的雨条纹更难). For instance, the denser rain streaks visibly degraded the performance of SE. From Fig. 7, we can find that our method preserved the backgrounds well and other four methods erased the details of the backgrounds.
In Fig. 8, the proposed method removed most of the rain streaks and considerably(相当) preserves the background. Other methods tended to obtain over de-rain or under de-rain results. Considering the similarity of the extract rains streaks to the ground truth rain streaks, our FastDeRain held obvious advantages.
In summary, for these different types of synthetic data, our method can simultaneously(同时) remove almost all rain streaks while commendably(值得称赞的) preserving the details of the underlying clean videos.

[27] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3855–3863.
[34] J.-H. Kim, J.-Y. Sim, and C.-S. Kim, “Video deraining and desnowing using temporal correlation and low-rank matrix completion,” IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2658–2670, 2015.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we encode rain streaks in video as deterministic or stochastic?” in the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[40] M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video rain streak removal by multiscale convolutional sparse coding,” in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6644–6653.

2https://en.wikipedia.org/wiki/YUV
3http://smartdsp.xmu.edu.cn/xyfu.html
4http://mcl.korea.ac.kr/jhkim/deraining/deraining code with example.zip
5http://gr.xjtu.edu.cn/web/dymeng
6https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal





error images的颜色深浅表示什么?这是怎么画出来的?

d) Discussion of each component:

There are four components in our model (2). To elucidate(阐述) their distinct(不同) effects, we degrade(降级) our method by setting each αi (i = 1; 2; 3; 4) equal to 10^-15, respectively. These degraded methods and FastDeRain are tested on the video “waterfall” with synthetic rain streaks in case 1. We present the quantitative assessments(定量评估) in Fig. 11 and the visual results in Fig. 9.


这里的降级是什么意思?只留一个项?为什么要这样?
From Fig. 11 and Fig. 9, we can conclude that all the four components contribute to the removal of rain streaks. Specifically, (a) when setting α1 = 10^-15, the rain streaks tend to be intermittent(断断续续的) along the vertical direction; (b) the rain streaks are fatter when the sparsity term contributes little(当稀疏性项贡献很少时,雨条更加肥弱)(没懂???); © some rain streaks remain in the background when the horizontal smoothness of the background is not sufficiently enhanced(背景的水平平滑没有有效的增强时,有些雨线没有去除。就是说要增加水平平滑?结合前面分析一下,是否要增加这个?); (d) the temporal continuity seems overwhelmingly(压倒性的) important since that without this regularization term our method nearly failed.

e) Parameters:

To examine the performance of the proposed FastDeRain with respect to(关于) different parameters, we conduct a series of experiments on the rainy data on synthetic video “waterfall” with the synthetic rain streaks in case 1 and the Gaussian noise(是个什么噪声) with zero mean and standard deviation(标准差,啥叫标准差来着) 0.02. In Fig. 10, a parameter analysis is presented and the SSIM FSIM and MUIQI are selected. Based on guidance from Fig. 10, our tuning strategy is as following: (1) set α2 and α3 as 10^-5 (这个数是怎么确定的?)and other αis to 0.01, and μ = 1, (2) tune(调) α1 and α4 until the results are barely satisfactory(刚满意)(这个调参是怎么调的,为什么这样调?α1/4同时调的话怎么调?), (3) and then fix α1 and α4 and enlarge(放大) α2 and α3 to further improve the performance(为什么要把α1,4再调大?). The tuning principle is as follows: when some of the texture or detail of the clean video is extracted into the estimated rain streaks, we increase α2 and α1 or decrease α4 and α3, and we do the opposite when rain streaks remain in the estimated rain-free content. Our recommended set of candidate values for α1 through α4 is {0.00001, 0.00003, 0.0001, 0.0003, 0.001, 0.003, 0.01}(为什么这里有6个数?). The Lagrange parameter μ is suggested to be 1. In practice, the time cost for the empirical tuning of the parameters is not much.

(N是啥?noise term)

f) Discussion of the noise term N in Eq. (1):

In this paper, the noise (or error) term (N in Eq. (1)) is taken into consideration in the observation model. To illustrate its effects, we conduct a series of experiments, in which the Gaussian noises of different standard deviations are respectively added to the video “waterfall” with synthetic rain streaks in case 1. The quantitative assessments of the results obtained by the proposed method with and without the noise (or error) term N taken into consideration (denoted as “with N” and “without N”, respectively ) are reported in Table III. In addition, we also exhibit the effects of different parameters on the proposed method without N in Fig. 10.

From Table III, we can conclude our method without N would acquire a better result when the rainy video is free from the noise. However, when the video is simultaneously affected by the rain streaks and the noise, which is unavoidable in real data, our method with N got better results. Therefore, we adopt the term N in Eq. (3) which enhances the robustness of our method to the noise. Meanwhile, the solid lines and the dashed lines in Fig. 10 also demonstrate that taking the noise (or error) term N into account would contribute to the robustness of the proposed method to different parameters.
σ是什么?)(rainy、withN,withoutN,分别是什么状态?没搞明白。。

g) Comparisons with the method in the conference version:(会议版本)

To clarify(澄清阐明) the improvement of the proposed method from our conference version [37], we compared the performances of our FastDeRain and the method in [37]. To save space, results on the part of the synthetic data, which are listed in the first column of Table IV, are reported. The deraining results are exhibited in Fig. 12, and, to avoid repetition, the numbers of the frames in Fig. 12 are different from those in foregoing figures. From Table IV and Fig. 12, we can conclude our FastDeRain made substantial(大量的) progress compared with the method in the conference version [37]. These results also accord with the above discussion of the irrationality of the low-rank regularizer.(低阶正则化器的不合理性。)

B. Real data

Four real-world rainy videos are chosen in this subsection. The first one (denoted as “wall”) of size 288 368 3 171 is download from the CAVE dataset10 and the second video11(denoted as “yard”) of size 512 256 3 126 was recorded by one of the authors on a rainy day in his backyard. The background of the video “wall” is consist of regular patterns while the background of the video “yard” is more complex. The third video is clipped from the well-known film “the Matrix”. The scene in this clips changes fast so that it is more difficult to deal with this video. The last video of size 480 640 3 108 is denoted as “crossing”12, and it was captured in the crossing with complex traffic conditions.

10http://www.cs.columbia.edu/CAVE/projects/camerarain/
11https://github.com/TaiXiangJiang/FastDeRain/blob/master/yard.mp4
12https://github.com/hotndy/SPAC-SupplementaryMaterials/blob/master/
Dataset Testing RealRain/ra4 Rain.rar

Fig. 13 shows two adjacent(相邻的) frames of the results obtained on the video “wall”. There are many vertical line patterns in the background of this video. Thus, exhibiting two adjacent frames would further help to distinguish the rain streaks from the background. It can be found in the zoomed in red blocks that this rain streak with high brightness is not handled properly by DNN, SE and MS-CSC. Our method removes almost all the rain streaks and preserves the background best compared with the results by other three methods.(了解一下DNN,SE,MS-CSC的模型?

Since there is little texture or structure similar to rain streaks in the video “yard”, only one frame is exhibited in Fig. 14. DNN and SE didn’t distinguish most of the rain streaks, especially in the zoomed in red blocks. Although TCL and MS-CSC separated the majority of rain streaks, some fine structures of the background were improperly extracted. Our FastDeRain removed most of the rain streaks and well preserved the background.

In Fig. 15, two adjacent frames of the rainy video “the Matrix” and deraining results by different methods are shown.The two adjacent rainy frames reveal the rapidly changing of the scene, particularly the luminance. Once again, our FastDeRain obtained the best result, especially when dealing with the obvious rain streak on the face of Neo.

The results on the rainy video “crossing” are exhibited in Fig. 16. From the zoomed in areas, we can observe that all the methods except MS-CSC entirely removed the rain streaks. TCL extracted some the structure of the curb line into the rain streaks while DNN tended to remove all the textures with line pattern. SE erased many structural details. The extracted rain streaks by the proposed FastDeRain were visually the best among all the results.

The scenarios in these four videos are of large differences. Our method obtains the best results, both in removing rain streaks and in retaining spatial details. In addition, the running time of our method is also obviously less than other methods, especially those three video-based methods.

C. Oblique rain streaks

In this subsection, we examine the performance of our method with the shift strategy and other four methods, when the rain streaks are far away from being vertical. We simulated two rainy videos: one is rain streaks with angles varying in [15; 35] added to the video “waterfall” (captured by a dynamic camera); another one is rain streaks with angles varying in [35; 55] added to the video “highway” (captured by a static camera). As shown in Table V and Fig. 17, the shift strategy helped our method to obtains the best results when dealing with the oblique rain streaks. The superior of the proposed FastDeRain is obvious both quantitatively and visually.

V. CONCLUSION

We have proposed a novel video rain streaks removal approach: FastDeRain. The proposed method, based on directional gradient priors in combination with sparsity, outperforms a series of state-of-the-art methods both visually and quantitively. We attribute the outperforming of FastDeRain to our intensive analysis of the characteristic priors of rainy videos, clean videos and rain streaks. Besides, it notable that our method is markedly(明显的) faster than the compared methods, even including a every fast single-image-based method. Our method is not without limitation. The natural rainy scenario is sometimes mixed with haze, and how to handle the residual rain artifacts remains an open problem. These issues will be addressed in the future.

ACKNOWLEDGMENT

The authors would like to express their sincere thanks to the editor and referees for giving us so many valuable comments and suggestions for revising this paper. The authors would like to thank Dr. Xueyang Fu, Dr. Wei Wei and Dr. Minghan Li for their generous sharing of their codes. This research was supported by the National Natural Science Foundation of China (61772003, 61702083), and the Fundamental Research Funds for the Central Universities (ZYGX2016J132, ZYGX2016J129, ZYGX2016KYQD142).

REFERENCES

[1] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal
using layer priors,” in the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016, pp. 2736–2744.
[2] T. Bouwmans, “Traditional and recent approaches in background modeling
for foreground detection: An overview,” Computer Science Review,
vol. 11, pp. 31–66, 2014.
[3] M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J.
Johannesson, and A. Radmanesh, “Video-based automatic incident detection
for smart roads: the outdoor environmental challenges regarding
false alarms,” IEEE Transactions on Intelligent Transportation Systems,
vol. 9, no. 2, pp. 349–360, 2008.
[4] X. Zhang, C. Zhu, S. Wang, Y. Liu, and M. Ye, “A bayesian approach to
camouflaged moving object detection,” IEEE Transactions on Circuits
and Systems for Video Technology, vol. 27, no. 9, pp. 2001–2013, 2017.
[5] C. Ma, Z. Miao, X.-P. Zhang, and M. Li, “A saliency prior context
model for real-time object tracking,” IEEE Transactions on Multimedia,
vol. 19, no. 11, pp. 2415–2424, 2017.
[6] K. Garg and S. K. Nayar, “Vision and rain,” International Journal of
Computer Vision, vol. 75, no. 1, pp. 3–27, 2007.
[7] L. Itti, C. Koch, E. Niebur et al., “A model of saliency-based visual attention
for rapid scene analysis,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
[8] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, “Automatic single-image-based
rain streaks removal via image decomposition,” IEEE Transactions on
Image Processing, vol. 21, no. 4, pp. 1742–1755, 2012.
[9] S.-H. Sun, S.-P. Fan, and Y.-C. F. Wang, “Exploiting image structural
similarity for single image rain removal,” in the IEEE International
Conference on Image Processing (ICIP), 2014, pp. 4482–4486.
[10] Y.-L. Chen and C.-T. Hsu, “A generalized low-rank appearance model
for spatio-temporally correlated rain streaks,” in the IEEE International
Conference on Computer Vision (ICCV), 2013, pp. 1968–1975.
[11] J. Chen and L.-P. Chau, “A rain pixel recovery algorithm for videos
with highly dynamic scenes,” IEEE Transactions on Image Processing,
vol. 23, no. 3, pp. 1097–1104, 2014.
[12] D.-Y. Chen, C.-C. Chen, and L.-W. Kang, “Visual depth guided color
image rain streaks removal using sparse coding,” IEEE transactions on
circuits and systems for video technology, vol. 24, no. 8, pp. 1430–1455,2014.
[13] Y. Luo, Y. Xu, and H. Ji, “Removing rain from a single image via
discriminative sparse coding,” in the IEEE International Conference on
Computer Vision (ICCV), 2015, pp. 3397–3405.
[14] C.-H. Son and X.-P. Zhang, “Rain removal via shrinkage of sparse codes
and learned rain dictionary,” in the IEEE International Conference on
Multimedia & Expo Workshops (ICMEW), 2016, pp. 1–6.
[15] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Single image rain
streak decomposition using layer priors,” IEEE Transactions on Image
Processing, vol. 26, no. 8, pp. 3874–3885, 2017.
[16] L. Zhu, C.-W. Fu, D. Lischinski, and P.-A. Heng, “Joint bi-layer optimization for single-image rain streak removal,” in the IEEE International
Conference on Computer Vision (ICCV), Oct 2017.
[17] B.-H. Chen, S.-C. Huang, and S.-Y. Kuo, “Error-optimized sparse
representation for single image rain removal,” IEEE Transactions on
Industrial Electronics, vol. 64, no. 8, pp. 6573–6581, 2017.
[18] S. Gu, D. Meng, W. Zuo, and L. Zhang, “Joint convolutional analysis
and synthesis sparse representation for single image layer separation,” in
the IEEE International Conference on Computer Vision (ICCV), 2017,
pp. 1717–1725.
[19] Y. Chang, L. Yan, and S. Zhong, “Transformed low-rank model for line
pattern noise removal,” in the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017, pp. 1726–1734.
[20] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang, “A directional
global sparse model for single image rain removal,” Applied Mathematical
Modelling, vol. 59, pp. 662–679, 2018.
[21] S. Du, Y. Liu, M. Ye, Z. Xu, J. Li, and J. Liu, “Single image deraining via
decorrelating the rain streaks and background scene in gradient domain,”
Pattern Recognition, vol. 79, pp. 303–317, 2018.
[22] Y. Wang, S. Liu, C. Chen, and B. Zeng, “A hierarchical approach for
rain or snow removing in a single color image,” IEEE Transactions on
Image Processing, vol. 26, no. 8, pp. 3936–3950, 2017.
[23] D. Ren, W. Zuo, D. Zhang, L. Zhang, and M.-H. Yang, “Simultaneous
fidelity and regularization learning for image restoration,” arXiv preprint
arXiv:1804.04522, 2018.
[24] D. Eigen, D. Krishnan, and R. Fergus, “Restoring an image taken
through a window covered with dirt or rain,” in the IEEE International
Conference on Computer Vision (ICCV), 2013, pp. 633–640.
[25] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, “Deep joint rain
detection and removal from a single image,” in the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), July 2017.
[26] X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, “Clearing the
skies: A deep network architecture for single-image rain removal,” IEEE
Transactions on Image Processing, vol. 26, no. 6, pp. 2944–2956, 2017.
[27] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing
rain from single images via a deep detail network,” in the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), 2017,
pp. 3855–3863.
[28] H. Zhang, V. Sindagi, and V. M. Patel, “Image de-raining using a conditional
generative adversarial network,” arXiv preprint arXiv:1701.05957,2017.
[29] R. Qian, R. T. Tan, W. Yang, J. Su, and J. Liu, “Attentive generative
adversarial network for raindrop removal from a single image,” pp.
2482–2491, 2018.
[30] S. Li, W. Ren, J. Zhang, J. Yu, and X. Guo, “Fast single image rain
removal via a deep decomposition-composition network,” arXiv preprint
arXiv:1804.02688, 2018.
[31] H. Zhang and V. M. Patel, “Density-aware single image de-raining using
a multi-stream dense network,” in the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2018, pp. 695–704.
[32] K. Garg and S. K. Nayar, “Detection and removal of rain from videos,”
in the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), pp. I–528–I–535.
[33] A. K. Tripathi and S. Mukhopadhyay, “Removal of rain from videos: a
review,” Signal, Image and Video Processing, vol. 8, no. 8, pp. 1421–
1430, 2014.
[34] J.-H. Kim, J.-Y. Sim, and C.-S. Kim, “Video deraining and desnowing
using temporal correlation and low-rank matrix completion,” IEEE
Transactions on Image Processing, vol. 24, no. 9, pp. 2658–2670, 2015.
[35] V. Santhaseelan and V. K. Asari, “Utilizing local phase information to
remove rain from video,” International Journal of Computer Vision, vol.
112, no. 1, pp. 71–89, 2015.
[36] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi,
“Adherent raindrop modeling, detectionand removal in video,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 38,
no. 9, pp. 1721–1733, 2016.
[37] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, “A
novel tensor-based video rain streaks removal approach via utilizing
discriminatively intrinsic priors,” in the IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2017, pp. 4057–4066.
[38] W. Ren, J. Tian, Z. Han, A. Chan, and Y. Tang, “Video desnowing
and deraining based on matrix decomposition,” in the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4210–4219.
[39] W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should we
encode rain streaks in video as deterministic or stochastic?” in the IEEE
International Conference on Computer Vision (ICCV), 2017, pp. 2516–2525.
[40] M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video
rain streak removal by multiscale convolutional sparse coding,” in the
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2018, pp. 6644–6653.
[41] J. Chen, C.-H. Tan, J. Hou, L.-P. Chau, and H. Li, “Robust video content
alignment and compensation for rain removal in a cnn framework,” in the
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2018, pp. 6286–6295.
[42] J. Liu, W. Yang, S. Yang, and Z. Guo, “Erase or fill? deep joint recurrent
rain removal and reconstruction in videos,” in the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3233–3242.
[43] S. Starik and M. Werman, “Simulation of rain in videos,” in the IEEE
International Conference on Computer Vision (ICCV) Texture Workshop,
vol. 2, 2003, pp. 406–409.
[44] X. Guo and Y. Ma, “Generalized tensor total variation minimization for
visual data recovery,” in the IEEE Conference on Computer Vision and
Pattern Recognition, 2015, pp. 3603–3611.
[45] Y. Jiang, X. Jin, and Z. Wu, “Video inpainting based on joint gradient
and noise minimization,” in The Pacific Rim Conference on Multimedia.
Springer, 2016, pp. 407–417.
[46] Y. Chang, L. Yan, H. Fang, and H. Liu, “Simultaneous destriping and
denoising for remote sensing images with unidirectional total variation
and sparse representation,” IEEE Geoscience and Remote Sensing Letters,
vol. 11, no. 6, pp. 1051–1055, 2014.
[47] Y. Chang, L. Yan, T. Wu, and S. Zhong, “Remote sensing image
stripe noise removal: from image decomposition perspective,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp.
7018–7031, 2016.
[48] H.-X. Dou, T.-Z. Huang, L.-J. Deng, X.-L. Zhao, and J. Huang,
“Directional `0 sparse modeling for image stripe noise removal,” Remote
Sensing, vol. 10, no. 3, p. 361, 2018.
[49] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, T.-Y. Ji, and L.-J. Deng, “Matrix
factorization for low-rank tensor completion using framelet prior,”
Information Sciences, vol. 436, pp. 403–417, 2018.
[50] S. Li, R. Dian, L. Fang, and J. M. Bioucas-Dias, “Fusing hyperspectral
and multispectral images via coupled sparse tensor factorization,” IEEE
Transactions on Image Processing, vol. 27, no. 8, pp. 4118–4130, 2018.
[51] T.-Y. Ji, N. Yokoya, X. X. Zhu, and T.-Z. Huang, “Nonlocal tensor
completion for multitemporal remotely sensed images’ inpainting,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 56, no. 6, pp.
3047–3061, 2018.
[52] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,”
SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
[53] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed
optimization and statistical learning via the alternating direction method
of multipliers,” Foundations and Trends
R in Machine Learning, vol. 3,
no. 1, pp. 1–122, 2011.
[54] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, and L.-J. Deng, “A novel nonconvex
approach to recover the low-tubal-rank tensor data: when t-svd
meets pssv,” arXiv preprint arXiv:1712.05870, 2017.
[55] X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons,
“Deblurring and sparse unmixing for hyperspectral images,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 51, no. 7, pp.
4045–4058, 2013.
[56] X.-L. Zhao, F. Wang, and M. K. Ng, “A new convex optimization model
for multiplicative noise and blur removal,” SIAM Journal on Imaging
Sciences, vol. 7, no. 1, pp. 456–475, 2014.
[57] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “An augmented
lagrangian approach to the constrained optimization formulation
of imaging inverse problems,” IEEE Transactions on Image Processing,
vol. 20, no. 3, pp. 681–695, 2011.
[58] “GPU computing,” https://www.mathworks.com/help/distcomp/
run-built-in-functions-on-a-gpu.html.
[59] “Adding rain to a photo with photoshop,” https://www.
photoshopessentials.com/photo-effects/rain/.
[60] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image
quality assessment: from error visibility to structural similarity,” IEEE
Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[61] L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity
index for image quality assessment,” IEEE transactions on Image
Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[62] H. R. Sheikh and A. C. Bovik, “Image information and visual quality,”
IEEE Transactions on image processing, vol. 15, no. 2, pp. 430–444,2006.
[63] Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE
Signal Processing Letters, vol. 9, no. 3, pp. 81–84, 2002.
[64] W. Xue, L. Zhang, X. Mou, and A. C. Bovik, “Gradient magnitude
similarity deviation: A highly efficient perceptual image quality index,”
IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 684–695,2014.

FastDeRain解读相关推荐

  1. Python Re 模块超全解读!详细

    内行必看!Python Re 模块超全解读! 2019.08.08 18:59:45字数 953阅读 121 re模块下的函数 compile(pattern):创建模式对象 > import ...

  2. Bert系列(二)——源码解读之模型主体

    本篇文章主要是解读模型主体代码modeling.py.在阅读这篇文章之前希望读者们对bert的相关理论有一定的了解,尤其是transformer的结构原理,网上的资料很多,本文内容对原理部分就不做过多 ...

  3. Bert系列(三)——源码解读之Pre-train

    https://www.jianshu.com/p/22e462f01d8c pre-train是迁移学习的基础,虽然Google已经发布了各种预训练好的模型,而且因为资源消耗巨大,自己再预训练也不现 ...

  4. NLP突破性成果 BERT 模型详细解读 bert参数微调

    https://zhuanlan.zhihu.com/p/46997268 NLP突破性成果 BERT 模型详细解读 章鱼小丸子 不懂算法的产品经理不是好的程序员 ​关注她 82 人赞了该文章 Goo ...

  5. 解读模拟摇杆原理及实验

    解读模拟摇杆原理及实验 Interpreting Analog Sticks 当游戏支持控制器时,玩家可能会一直使用模拟摇杆.在整个体验过程中,钉住输入处理可能会对质量产生重大影响.让来看一些核心概念 ...

  6. 自监督学习(Self-Supervised Learning)多篇论文解读(下)

    自监督学习(Self-Supervised Learning)多篇论文解读(下) 之前的研究思路主要是设计各种各样的pretext任务,比如patch相对位置预测.旋转预测.灰度图片上色.视频帧排序等 ...

  7. 自监督学习(Self-Supervised Learning)多篇论文解读(上)

    自监督学习(Self-Supervised Learning)多篇论文解读(上) 前言 Supervised deep learning由于需要大量标注信息,同时之前大量的研究已经解决了许多问题.所以 ...

  8. 可视化反投射:坍塌尺寸的概率恢复:ICCV9论文解读

    可视化反投射:坍塌尺寸的概率恢复:ICCV9论文解读 Visual Deprojection: Probabilistic Recovery of Collapsed Dimensions 论文链接: ...

  9. 从单一图像中提取文档图像:ICCV2019论文解读

    从单一图像中提取文档图像:ICCV2019论文解读 DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regressi ...

最新文章

  1. hadoop-07-ntp服务检查
  2. 记录奥林比克/课程录制 洛谷P2255 [USACO14JAN]
  3. VTK:轮廓 Glow Pass用法实战
  4. python fillna,Pandas之Fillna填充缺失数据的方法
  5. 服务器型号惠普RX3600,384854-B21 389344-001 146G SAS 15K 3.5寸HP服务器硬盘批发
  6. 网络基础四 DNS DHCP 路由 FTP
  7. asp:树型select菜单
  8. PC管理端及评委手机打分端的浏览器兼容问题
  9. ps常用快捷键 常用的
  10. 2021年京东/淘宝/天猫/双十一红包最新优惠攻略,1111超级红包如何抢?
  11. 故障树手册(Fault Tree handbook)(5)
  12. 斯特林数 java实现_关于斯特林数
  13. 兔子会死怎么办? 古典问题:有一对兔子,从出生后第3个月起每个月都生一对兔子,假如兔子会死
  14. 云主机-本地内网通信OPEN-V
  15. 王道机组笔记IEEE754
  16. Markdown 编辑公式
  17. js 判断两个时间相差多少月_js对日期操作 获取两个日期的相差是否在几月之内...
  18. 【异常检测第一篇】DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning
  19. python 中问号表达式替代 exper and a or b
  20. Linux 硬盘与硬件管理

热门文章

  1. Docker学习——pinpoint部署
  2. iphone不显示wifi连接到服务器,iPhone出现无法连接到任何WiFi怎么办 WiFi故障解决方法...
  3. 2021年中国农民工总量、外出农民工规模及农民工平均年龄分析[图]
  4. linux 使用icc运行tcl,ICC的步骤流程方法
  5. Electron--快速入门
  6. Mac 上简体中文输入方式的键盘快捷键
  7. [字符串题-java实现]20. 有效的括号
  8. x264参数设置详解(x264 settings)
  9. 通信原理学习笔记6-2:数字解调——抽样和符号同步
  10. 市场调研及发展前景分析报告有什么作用