Detection and localization of image forgeries using improved mask regional convolutional neural network

Beijing Key Lab of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing 100876, China.

Special Issue: Information Multimedia Hiding & Forensics based on Intelligent Devices

Research on forgery detection and localization is significant in digital forensics and has attracted increasing attention in recent years. Traditional methods mostly rely on handcrafted or shallow-learning-based features, which have limited descriptive ability and heavy computational costs. Recently, deep neural networks have been shown to be capable of extracting complex statistical features from high-dimensional inputs and of efficiently learning their hierarchical representations. In order to capture more discriminative features between tampered and non-tampered regions, in this paper we propose an improved mask regional convolutional neural network (Mask R-CNN) that attaches a Sobel filter to the mask branch of Mask R-CNN. The Sobel filter acts as an auxiliary task that encourages the predicted masks to have image gradients similar to those of the ground-truth masks. The overall network is capable of detecting two different types of image manipulation: copy-move and splicing. Experimental results on two standard datasets show that the proposed model outperforms several state-of-the-art methods.
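
The Sobel-based auxiliary task can be illustrated with a short sketch. The Python/PyTorch snippet below is a minimal illustration, not the paper's exact implementation: the function names, tensor shapes, and loss weight are assumptions. It filters both the predicted mask and the ground-truth mask with fixed Sobel kernels and penalizes the difference between the two gradient maps, which is the kind of edge-agreement term the improved mask branch adds alongside the standard mask loss.

```python
# Minimal sketch (assumed, not the authors' exact implementation) of a
# Sobel-based edge-agreement loss for the mask branch: both the predicted
# mask and the ground-truth mask are filtered with fixed Sobel kernels, and
# the L2 distance between the resulting gradient maps is penalized.
import torch
import torch.nn.functional as F

# Fixed (non-trainable) Sobel kernels for x- and y-gradients,
# stacked into a single conv weight of shape (2, 1, 3, 3).
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]])
_SOBEL_Y = _SOBEL_X.t()
_SOBEL_KERNEL = torch.stack([_SOBEL_X, _SOBEL_Y]).unsqueeze(1)  # (2, 1, 3, 3)


def sobel_edges(mask):
    """Apply the Sobel filter to a batch of single-channel masks.

    mask: tensor of shape (N, 1, H, W) with values in [0, 1].
    Returns a tensor of shape (N, 2, H, W) holding x- and y-gradient maps.
    """
    kernel = _SOBEL_KERNEL.to(mask.device, mask.dtype)
    return F.conv2d(mask, kernel, padding=1)


def sobel_edge_loss(pred_mask, gt_mask, weight=1.0):
    """Auxiliary loss encouraging the predicted mask to have image
    gradients similar to those of the ground-truth mask."""
    return weight * F.mse_loss(sobel_edges(pred_mask), sobel_edges(gt_mask))


# Hypothetical usage inside the mask branch: the edge-agreement term is
# simply added to the usual binary cross-entropy mask loss.
# total_mask_loss = F.binary_cross_entropy(pred_mask, gt_mask) \
#                   + sobel_edge_loss(pred_mask, gt_mask, weight=0.5)
```

Because the Sobel kernels are fixed, the auxiliary term adds no trainable parameters; it only supplies an extra gradient signal that sharpens the boundaries of the predicted tampered regions.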

© 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
