Research article

Image quality assessment based on the perceived structural similarity index of an image


  • Received: 04 December 2022 Revised: 09 February 2023 Accepted: 23 February 2023 Published: 17 March 2023
  • Image quality assessment (IQA) plays an important role and has wide applications in image acquisition, storage, transmission and processing. The human visual system (HVS) characteristics introduced when designing IQA models play an important role in improving their performance. In this paper, combining image distortion characteristics with HVS characteristics and building on the structural similarity index (SSIM) model, a novel IQA model based on the perceived structural similarity index (PSIM) of an image is proposed. In the method, first, a perception model for how the HVS perceives real images is proposed, combining the contrast sensitivity, frequency sensitivity, luminance nonlinearity and masking characteristics of the HVS. Then, to simulate the HVS perceiving a real image, real images are processed with the proposed perception model to eliminate their visual redundancy, yielding the perceived images. Finally, based on the idea and modeling method of SSIM, combined with the features of the perceived images, the novel IQA model, namely PSIM, is constructed. To illustrate the performance of PSIM, 5335 distorted images with 41 distortion types from four image databases (TID2013, CSIQ, LIVE and CID) are used in simulations covering three aspects: overall IQA on each database, IQA for each distortion type, and IQA for special distortion types. The IQA results are then compared with those of 12 existing IQA models in terms of the combined benefit of accuracy, generalization performance and complexity. The experimental results show that the accuracy (PLCC) of PSIM is, on average, 9.91% higher than that of SSIM over the four databases, and that its performance is better than that of the 12 existing IQA models. Synthesizing the experimental results and theoretical analysis, the proposed PSIM model is shown to be an effective and excellent IQA model.

    Citation: Juncai Yao, Jing Shen, Congying Yao. Image quality assessment based on the perceived structural similarity index of an image[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 9385-9409. doi: 10.3934/mbe.2023412
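
A minimal illustrative sketch of the pipeline described in the abstract is given below (Python/NumPy). It maps the reference and distorted images to "perceived" images with a placeholder HVS filter and then compares them with the standard SSIM computation of Wang et al. The perceptual_filter function, its CSF-style frequency weighting and all of its constants are assumptions made here for illustration only; they are not the paper's perception model or its PSIM implementation.

```python
# Illustrative sketch only: a placeholder HVS pre-processing step followed by
# a standard SSIM comparison, mimicking the "perceive, then compare" idea.
import numpy as np
from scipy.ndimage import uniform_filter

def perceptual_filter(img, peak=4.0):
    """Placeholder HVS model: weight spatial frequencies with a simple
    CSF-like band-pass curve so visually redundant components are attenuated.
    (Illustrative assumption, not the paper's perception model.)"""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2) * max(h, w)     # radial frequency (cycles/image)
    csf = (f / peak) * np.exp(1.0 - f / peak)      # simple band-pass weighting
    csf = np.clip(csf, 0.05, 1.0)                  # keep some low-frequency energy
    return np.real(np.fft.ifft2(np.fft.fft2(img) * csf))

def ssim_index(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2, win=8):
    """Standard SSIM with local statistics from a uniform window."""
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
               ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return ssim_map.mean()

def psim_like_score(ref, dist):
    """PSIM-style pipeline: map both images to 'perceived' images with the
    HVS filter, then apply an SSIM-style structural comparison."""
    return ssim_index(perceptual_filter(ref.astype(float)),
                      perceptual_filter(dist.astype(float)))

# Toy usage: a synthetic reference image and a noisy distorted version.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (128, 128))
dist = np.clip(ref + rng.normal(0, 15, ref.shape), 0, 255)
print(psim_like_score(ref, dist))
```

In an evaluation such as the one summarized in the abstract, the scores produced by such a model would be correlated with the subjective scores (MOS) of each database, for example with the Pearson linear correlation coefficient (PLCC), which is the accuracy measure quoted above.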






  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
