Research article

Robust color multi-focus image fusion using quaternion sparse representation and spatial information


  • Published: 04 September 2025
Abstract: Most sparse representation (SR)-based fusion methods process color channels separately, which often causes hue distortion and reduced color saturation in the fused images. To address these issues, we propose a novel color multi-focus image fusion method based on quaternion sparse representation (QSR). QSR uses quaternion matrices to model color images holistically, fully exploiting the high correlations among the color channels and preserving the inherent color structure of the source images in the reconstruction results. We first learn a clear quaternion dictionary on a set of high-quality images and a blurry quaternion dictionary on a Gaussian-blurred version of the same set. To accurately estimate the focus information of each image patch, salience and sparsity features are computed jointly from the sparse coefficients of the current patch and its neighboring patches, derived under the QSR model with the two learned dictionaries. An activity measure for each patch is then defined from these two features. Finally, a maximum-selection fusion rule is applied to obtain the composite sparse coefficients. Experimental results show that our method successfully avoids color distortion in the fused images and outperforms several recent SR-based fusion methods both qualitatively and quantitatively.
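The pipeline described in the abstract can be sketched compactly. The following is a minimal illustrative sketch in Python/NumPy, not the authors' implementation: the pure-quaternion encoding (R, G, B mapped to the i, j, k imaginary parts) follows the standard convention in the quaternion image-processing literature, while `activity`, `fuse_codes`, and the stand-in coefficients are hypothetical simplifications; the quaternion sparse coding step itself (e.g., a quaternion OMP under the learned clear and blurry dictionaries) and the collaborative use of neighboring patches are omitted.

```python
import numpy as np

def to_pure_quaternion(rgb):
    """Encode an RGB image as a pure quaternion array: real part 0,
    with R, G, B stored in the i, j, k imaginary parts."""
    h, w, _ = rgb.shape
    q = np.zeros((h, w, 4), dtype=float)
    q[..., 1:] = rgb
    return q

def activity(alpha_c, alpha_b, eps=1e-12):
    """Toy per-patch activity: salience (code energy under the clear
    dictionary relative to the blurry one) weighted by sparsity (how
    concentrated the clear-dictionary code is)."""
    salience = np.linalg.norm(alpha_c) / (np.linalg.norm(alpha_b) + eps)
    sparsity = np.max(np.abs(alpha_c)) / (np.sum(np.abs(alpha_c)) + eps)
    return salience * sparsity

def fuse_codes(codes_a, codes_b, acts_a, acts_b):
    """Maximum-selection rule: for each patch, keep the sparse
    coefficients of the source whose activity is larger."""
    return [ca if aa >= ab else cb
            for ca, cb, aa, ab in zip(codes_a, codes_b, acts_a, acts_b)]

# Usage with random stand-in coefficients (quaternion sparse coding omitted):
rng = np.random.default_rng(0)
codes_a = [rng.normal(size=16) for _ in range(4)]  # source A, 4 patches
codes_b = [rng.normal(size=16) for _ in range(4)]  # source B, 4 patches
acts_a = [activity(c, rng.normal(size=16)) for c in codes_a]
acts_b = [activity(c, rng.normal(size=16)) for c in codes_b]
fused = fuse_codes(codes_a, codes_b, acts_a, acts_b)
```

The fused coefficients would then be decoded under the clear quaternion dictionary to reconstruct the all-in-focus color image, which is where the holistic quaternion model prevents the per-channel hue distortion discussed above.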

    Citation: Wei Liu, Wanqing Li, Xia Zhao, Fang Zhu. Robust color multi-focus image fusion using quaternion sparse representation and spatial information[J]. AIMS Electronics and Electrical Engineering, 2025, 9(4): 500-540. doi: 10.3934/electreng.2025023

  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
