
Graduate Student: Kang, Ying-Xuan (康穎軒)
Thesis Title: An Effective Lightweight Attention Network for Single Image Deraining (一個應用於單張影像雨紋去除的有效輕量化注意力網路)
Advisor: Tai, Shen-Chuan (戴顯權)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2024
Academic Year of Graduation: 112 (2023-2024)
Language: English
Number of Pages: 84
Keywords: Single image deraining, Self-attention, Convolutional neural networks, Monarch Mixer matrix
Access Count: Views: 45; Downloads: 0
Abstract:

Many high-level vision tasks, such as object detection, segmentation, and tracking, require preprocessing the input image to produce a clearer, more complete image before further processing. Rain streak removal is one such preprocessing task. Images captured in rainy conditions are easily degraded by rainwater, which blurs them and creates visual obstructions; hence, developing efficient image deraining algorithms is crucial.

In recent years, many image processing tasks have relied on convolutional neural networks (CNNs), which effectively extract local features from images. However, CNNs struggle to capture global information and cannot adapt their weights to the input content. Alternatively, numerous studies have proposed methods based on self-attention, which effectively captures feature dependencies among all positions in the input sequence, modeling global dependencies within an image for better restoration into a clear and complete result. Nevertheless, the self-attention mechanism and the Multi-Layer Perceptrons (MLPs) of a conventional Transformer require a considerable number of parameters and computations on high-resolution images: attention's cost grows quadratically with the sequence length, and the MLP's grows quadratically with the model dimension, creating a bottleneck for image processing tasks. Therefore, reducing the parameter count while maintaining high-quality image restoration has become a crucial research direction in this field.
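
As a point of reference, the standard scaled dot-product attention (the textbook formulation, not a detail specific to this thesis) over $N$ tokens of dimension $d$ is

$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V,\qquad Q,K,V\in\mathbb{R}^{N\times d},$$

where forming $QK^{\top}$ costs $O(N^{2}d)$ time and $O(N^{2})$ memory, while each Transformer MLP layer with hidden width $4d$ costs $O(Nd^{2})$. For a $256\times256$ image flattened to $N=65{,}536$ tokens, $N^{2}\approx4.3\times10^{9}$, which illustrates why the quadratic terms dominate at high resolution.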

This thesis addresses these challenges by proposing an image deraining network architecture that combines a sparse self-attention mechanism with a multi-level feature aggregation module to effectively enhance restoration quality. It also adopts a novel structured matrix whose computational complexity grows sub-quadratically, significantly reducing parameter consumption while matching or even surpassing conventional Transformers at a lower cost. Experiments on two standard benchmark datasets demonstrate that the proposed method outperforms existing CNN-based and self-attention-based deraining algorithms on multiple evaluation metrics and in visual quality, while using substantially fewer parameters than the self-attention-based methods.
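
To make these two ingredients concrete, the sketch below illustrates (a) top-k sparse attention, where each query keeps only its k strongest key responses, and (b) a Monarch-style block-diagonal matrix product whose parameter count is sub-quadratic in the sequence length. This is a minimal PyTorch illustration of the general techniques only; the function names, tensor shapes, `keep` value, and block sizes are hypothetical assumptions, not the thesis's exact implementation.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, keep=8):
    # Illustrative top-k sparse attention: each query attends only to its
    # `keep` highest-scoring keys; all other logits are masked to -inf before
    # the softmax. Note the full (N x N) score matrix is still formed here;
    # the sparsity is in the softmax support, not the memory cost.
    # q, k, v: (batch, heads, N, d_head)
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale             # (B, H, N, N)
    kth_best = scores.topk(keep, dim=-1).values[..., -1:]  # k-th largest logit per query
    scores = scores.masked_fill(scores < kth_best, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

def monarch_matmul(x, left, right):
    # Illustrative Monarch-style product y = P R P L x, where L and R are
    # block-diagonal (b blocks of size b x b) and P is a block transpose.
    # With N = b * b, the two factors hold 2 * b^3 = 2 * N^1.5 parameters,
    # versus N^2 for a dense weight: sub-quadratic in N.
    # x: (batch, N); left, right: (b, b, b) indexed as (block, out, in).
    batch, n = x.shape
    b = left.shape[0]
    assert b * b == n, "N must be a perfect square for this sketch"
    x = x.view(batch, b, b)                     # split into b blocks of size b
    x = torch.einsum("kij,bkj->bki", left, x)   # block-diagonal multiply by L
    x = x.transpose(1, 2)                       # permutation P (block transpose)
    x = torch.einsum("kij,bkj->bki", right, x)  # block-diagonal multiply by R
    x = x.transpose(1, 2)                       # permutation P
    return x.reshape(batch, n)
```

In this sketch, choosing b = sqrt(N) yields the sub-quadratic O(N^1.5) parameter and compute cost referred to in the abstract; the Monarch Mixer work generalizes this construction to more block-diagonal factors for even lower asymptotic cost.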

Contents:
Abstract (Chinese)
Abstract
Acknowledgments
Contents
List of Tables
List of Figures
Chapter 1  Introduction
Chapter 2  Background and Related Works
  2.1  Rain Physical Properties & Rain Model
  2.2  Related Works
    2.2.1  Prior-based and model-based methods
    2.2.2  Learning-based methods
  2.3  Transformer
  2.4  Sparse Transformer
  2.5  Selective Kernel Network
  2.6  Restormer
  2.7  Monarch Mixer
Chapter 3  Proposed Algorithm
  3.1  Algorithm Flow
    3.1.1  Training stage flow
    3.1.2  Testing stage flow
  3.2  Proposed Network Architecture
    3.2.1  Overall pipeline
    3.2.2  Sparse attention
    3.2.3  Monarch mixer MLP
    3.2.4  Selective kernel feature fusion
    3.2.5  Fusion transformer
  3.3  Loss Function
    3.3.1  L1 loss
    3.3.2  L2 loss
Chapter 4  Experimental Results
  4.1  Experimental dataset
  4.2  Experimental settings
    4.2.1  Experimental environment
    4.2.2  Training strategy
  4.3  Experimental evaluation metrics
    4.3.1  PSNR
    4.3.2  SSIM
  4.4  Experimental Results
    4.4.1  Quantitative results
    4.4.2  Visual comparisons
  4.5  Ablation Experimental Results
Chapter 5  Conclusion and Future Work
  5.1  Conclusion
  5.2  Future Work
References

Full text: Not available for download. On campus: open access from 2029-07-31. Off campus: open access from 2029-07-31. The electronic thesis has not yet been authorized for public access; for the print copy, please consult the library catalog.