A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 11, Issue 11, Nov. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: K. Jiang, R. Wang, Y. Xiao, J. Jiang, X. Xu, and T. Lu, “Image enhancement via associated perturbation removal and texture reconstruction learning,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 11, pp. 2253–2269, Nov. 2024. doi: 10.1109/JAS.2024.124521

Image Enhancement via Associated Perturbation Removal and Texture Reconstruction Learning

doi: 10.1109/JAS.2024.124521
Funds: This work was supported by the National Natural Science Foundation of China (U23B2009, 62376201, 423B2104) and the Open Foundation (ZNXX2023MSO2, HBIR202311).
  • Degradation under challenging conditions such as rain, haze, and low light not only diminishes content visibility, but also introduces additional side effects, including detail occlusion and color distortion. However, current technologies have barely explored the correlation between perturbation removal and background restoration, and consequently struggle to generate high-naturalness content in challenging scenarios. In this paper, we rethink the image enhancement task from the perspective of joint optimization: perturbation removal and texture reconstruction. To this end, we devise an efficient yet effective image enhancement model, termed the perturbation-guided texture reconstruction network (PerTeRNet). It contains two sub-networks designed for the perturbation elimination and texture reconstruction tasks, respectively. To facilitate texture recovery, we develop a novel perturbation-guided texture enhancement module (PerTEM) to connect these two tasks, where informative background features are extracted from the input under the guidance of predicted perturbation priors. To alleviate the learning burden and computational cost, we suggest performing perturbation removal in a sub-space and exploiting super-resolution to infer high-frequency background details. Our PerTeRNet has demonstrated significant superiority over typical methods in both quantitative and qualitative measures, as evidenced by extensive experimental results on popular image enhancement and joint detection tasks. The source code is available at https://github.com/kuijiang94/PerTeRNet.
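    The abstract describes a concrete architecture: a perturbation-removal sub-network operating in a low-resolution sub-space, a PerTEM coupling that lets the predicted perturbation prior guide background feature extraction, and a super-resolution step that recovers high-frequency detail. Below is a minimal, illustrative PyTorch sketch of that joint design. It is not the authors' released code (see the GitHub link above for the official implementation); the names PerturbNet, PerTEMBlock, and PerTeRNetSketch, and every layer choice, are assumptions made for illustration only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PerturbNet(nn.Module):
        # Predicts a perturbation prior (e.g., a rain or haze layer) in a
        # pixel-unshuffled sub-space to reduce computation, echoing the
        # abstract's sub-space idea. Input H and W must be divisible by `scale`.
        def __init__(self, ch=32, scale=2):
            super().__init__()
            self.scale = scale
            self.body = nn.Sequential(
                nn.Conv2d(3 * scale * scale, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            )

        def forward(self, x):
            sub = F.pixel_unshuffle(x, self.scale)           # to low-res sub-space
            perturb_sub = self.body(sub)                     # perturbation in sub-space
            return F.pixel_shuffle(perturb_sub, self.scale)  # super-resolve to full size

    class PerTEMBlock(nn.Module):
        # Perturbation-guided texture enhancement: the perturbation prior is
        # turned into an attention map that re-weights background features --
        # a simple stand-in for the paper's PerTEM coupling.
        def __init__(self, ch=32):
            super().__init__()
            self.feat = nn.Conv2d(3, ch, 3, padding=1)
            self.gate = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.Sigmoid())
            self.fuse = nn.Conv2d(ch, ch, 3, padding=1)

        def forward(self, x, perturb):
            bg = self.feat(x - perturb)       # coarse background features
            attn = self.gate(perturb)         # prior-derived attention map
            return self.fuse(bg + bg * attn)  # emphasise perturbation-occluded textures

    class PerTeRNetSketch(nn.Module):
        # Joint perturbation removal and texture reconstruction, end to end.
        def __init__(self, ch=32):
            super().__init__()
            self.perturb_net = PerturbNet(ch)
            self.pertem = PerTEMBlock(ch)
            self.recon = nn.Conv2d(ch, 3, 3, padding=1)  # texture reconstruction head

        def forward(self, x):
            perturb = self.perturb_net(x)      # task 1: perturbation estimation
            feats = self.pertem(x, perturb)    # association between the two tasks
            return self.recon(feats), perturb  # enhanced image and perturbation prior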



    Highlights

    • This study investigates image enhancement from a fresh perspective: the joint representation of perturbation removal, texture reconstruction, and their association
    • It develops a perturbation-guided texture enhancement module (PerTEM) to associate degradation simulation with texture restoration, improving learning capability while maintaining model compactness
    • Experiments on mainstream image enhancement tasks, including image deraining, image dehazing, and low-light image enhancement, demonstrate that PerTeRNet delivers competitive performance compared with state-of-the-art methods
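    To make the joint pipeline concrete, here is a smoke test of the illustrative sketch given after the abstract (the tensor shapes and interface are assumptions; the released PerTeRNet differs in architecture and API):

    import torch

    model = PerTeRNetSketch()             # the illustrative sketch above
    rainy = torch.rand(1, 3, 128, 128)    # dummy degraded input (even H and W)
    with torch.no_grad():
        enhanced, perturb = model(rainy)
    print(enhanced.shape, perturb.shape)  # both torch.Size([1, 3, 128, 128])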
