Research article

An anti-forensic scheme on computer graphic images and natural images using generative adversarial networks

  • Computer graphic images (CGI) can be rendered so that they closely resemble natural images (NI) by state-of-the-art algorithms in the computer graphics field. Consequently, various identification algorithms have been proposed to detect CGI. However, manipulating an individual CGI so that it withstands forensic algorithms is complicated and difficult. Moreover, encouraged by deep learning, forensics on CGI and NI has made achievements in several aspects. Although generative adversarial networks (GAN) can automatically generate high-quality CGI, GAN-based generation offers no guarantee that the output will evade forensic detection. In this paper, we propose a concise and effective GAN-based architecture that prevents the generated images from being detected by forensics on CGI and NI. The adapted characteristics allow the CGI generated by the GAN to fool the detector while keeping the end-to-end generation mode of GAN.

    Citation: Qi Cui, Ruohan Meng, Zhili Zhou, Xingming Sun, Kaiwen Zhu. An anti-forensic scheme on computer graphic images and natural images using generative adversarial networks[J]. Mathematical Biosciences and Engineering, 2019, 16(5): 4923-4935. doi: 10.3934/mbe.2019248



    Data security and privacy have become crucial concerns [1]. At the same time, image-tampering technology and image-processing tools are becoming more and more powerful, which gradually undermines the authenticity and reliability of digital images. To detect tampered or forged digital images, forensic algorithms [2,3,4,5,6] have been continuously improved. As one significant branch of forensics, forensic algorithms for natural images (NI) and computer graphic images (CGI) provide evidence about the source of a suspicious image. NIs are photos taken by terminal devices, while CGIs are produced by computer generation. With the development of computer-graphics algorithms, CGI can be crafted to an extent that is indistinguishable by the human eye. Owing to the increasing computing power of smart devices, high-quality CGI is also easy to make, and CGI may be passed off as NI for illegal benefit. Therefore, forensics on CGI and NI is significant for protecting data security and personal property. In fact, effective results have already been obtained in this field. Photo Response Non-Uniformity (PRNU) is an important feature for image-forensics tasks: Peng et al. [7] employed PRNU as a starting point to design effective algorithms, and the best error rate of detection in their experiments is 5.71%. Long et al. [8] further improved the forensic algorithm of [7] by using binary similarity measures of PRNU and achieved a higher detection rate of 99.83%. Forensics on NI and CGI has also progressed with the development of deep learning. Yang et al. [2] proposed a contrast-enhancement forensics algorithm using two convolutional neural networks, P-CNN and H-CNN. In particular, approaches based on deeper convolutional neural networks (DCNN) [9] make forensics intelligent.
DCNN provide an environment that unifies feature extraction and training; in this scheme, only the architecture of the DCNN needs to be adjusted to suit a forensic task. By modifying ResNet [10] with an added pre-processing layer that enhances effective features for forensics on NI and CGI, Cui et al. [11] achieved classification with an average accuracy of 98%. Quan et al. [12] proposed a CNN-based architecture consisting of four convolutional layers and two fully-connected layers for forensics on NI and CGI, with a classification accuracy of 98.50%. The dataset used in the above approaches is the Columbia Photographic Images dataset [13], which was built for the study of NI and CGI classification; it does not contain the latest CGI samples, especially those generated by generative adversarial networks (GAN). Besides, Wang et al. [14] provide an effective approach to identifying computer-generated images in the color quaternion wavelet domain. In this study, we combine the fixed filters in the pre-processing layer of [11] with the generic convolutional neural networks in [12] as the discriminative network in our proposed GAN-based architecture.

    Conversely, research on anti-forensics is also constantly improving. Such research uncovers defects in existing forensic approaches: a forensic method is countered by proposing a corresponding resistance algorithm for a specific detection method. The significance of anti-forensic algorithms is that they can effectively prevent an intruder from reaching a judgment with forensic methods; at the same time, the study of anti-forensics assists forensic research by upright researchers. Several main classes of anti-forensic methods exist [15,16,17,18,19]. Anti-forensics of JPEG image compression [20] is a practical and effective approach against common operations [21,22]; it makes compressed images undetectable by estimating the distribution of coefficients. Li et al. [23] propose a multiclass classification method to classify common image operations, together with a compact universal feature set. There are also schemes that actively attack the detection method [24]. To guarantee information security and protect copyright, the fields of information hiding and forensics have been studied extensively. In information hiding, there have been many forensic results on steganalysis algorithms countering steganography [25,26]. In recent years, deep learning has also been applied to information hiding. In particular, the adversarial training strategy in GAN provides a pattern that can be deployed for anti-forensics. Indeed, many algorithms exploit the similarity between GAN [27] and information hiding to combine steganography with GAN [28,29,30], employing steganalysis networks as one of the discriminators. In addition, Meng et al. [31] proposed using an object-detection method such as Faster R-CNN [32] to select safe hiding areas, making steganography more secure and robust.

    In this paper, we enhance the anti-detection capability against NI and CGI forensics by using a GAN-based model with the adversarial concept. The contributions of this paper are: (a) we define a GAN-based architecture with a reformed discriminator for NI and CGI anti-forensics; (b) we provide adversarial training that generates photorealistic computer images capable of fooling the detector to some extent.

    With the wide application of deep learning, suitable datasets are needed by various networks. To address the problem of insufficient datasets, GAN was proposed by Goodfellow et al. [27]. A GAN consists of two sub-networks, a generator and a discriminator, as shown in Figure 1. First, random noise is fed into the generator, which generates a fake image from a randomly initialized image distribution and passes it to the discriminator. At the same time, real images are fed into the discriminator as the other group of inputs. Through the confrontation between generator and discriminator, a relatively natural image is generated. The objective function of the whole network is denoted as:

    $\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P_{data}(x)}[\log(D(x))] + \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]$ (2.1)
    Figure 1.  The specific structure of the proposed architecture. NI and ICGI stand for natural images and improved computer graphic images, respectively. Each block represents a group of feature maps.

    where $x$ is the randomly sampled real data and $z$ is the randomly generated initial noise. The whole process can be regarded as two optimization problems: one optimizes the sub-network $D$, as denoted in equation (2.2); the other optimizes the sub-network $G$, as denoted in equation (2.3):

    $\max_D V(D,G) = \mathbb{E}_{x \sim P_{data}(x)}[\log(D(x))] + \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]$ (2.2)
    $\min_G V(D,G) = \mathbb{E}_{z \sim P_z(z)}[\log(1 - D(G(z)))]$ (2.3)

    In the first step, the discriminator is updated by ascending its stochastic gradient:

    $\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x_i) + \log(1 - D(G(z_i))) \right]$ (2.4)

    As the loss value of the whole network decreases gradually during training, the model approaches the global optimum to a certain extent. Since, in the research we focus on, the target distribution is that of digital images, the generated data take the form of images.
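    As a sanity check on equations (2.1) and (2.2), the following numpy sketch (a toy example on a discrete support with made-up distributions, not the paper's setup) confirms numerically that the discriminator maximizing $V(D,G)$ for a fixed generator is $D^*(x) = P_{data}(x)/(P_{data}(x) + P_g(x))$:

```python
import numpy as np

# Toy check on a discrete support {0, 1, 2} with made-up distributions.
p_data = np.array([0.2, 0.5, 0.3])    # "real" distribution
p_g    = np.array([0.5, 0.3, 0.2])    # generator's current distribution

def value(d):
    # V(D, G) = E_{P_data}[log D(x)] + E_{P_g}[log(1 - D(G(z)))], as in Eq. 2.1.
    return float(np.sum(p_data * np.log(d)) + np.sum(p_g * np.log(1 - d)))

# For a fixed generator, Eq. 2.2 is maximized pointwise by
# D*(x) = p_data(x) / (p_data(x) + p_g(x)).
d_star = p_data / (p_data + p_g)
print(np.round(d_star, 3))  # approximately [0.286, 0.625, 0.6]
```

    Because each term $p \log d + q \log(1-d)$ is concave in $d$ with its maximum at $p/(p+q)$, perturbing $D^*$ at any point can only lower $V(D,G)$.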

    Further, to alleviate the mode-collapse problem of [27], Gulrajani et al. [33] changed the measure of similarity between distributions, moving beyond the K-L and J-S divergences shown in equation (2.5) and equation (2.6).

    $KL(P_r \| P_g) = \int \log\left(\frac{P_r(x)}{P_g(x)}\right) P_r(x) \, d\mu(x)$ (2.5)
    $JS(P_r, P_g) = KL(P_r \| P_m) + KL(P_g \| P_m)$ (2.6)

    The values of equation (2.5) and equation (2.6) indicate the distance between the distributions $P_r$ and $P_g$. The term $P_m$ in equation (2.6) is $(P_r + P_g)/2$. The improved algorithm, named WGAN-GP, then introduced the gradient penalty:
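    On a discrete support, equations (2.5) and (2.6) can be evaluated directly; this small sketch (using made-up two-point distributions) illustrates that the divergence is zero exactly for identical distributions:

```python
import numpy as np

def kl(p, q):
    # Eq. 2.5 specialized to a discrete support.
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    # Eq. 2.6 with P_m = (P_r + P_g) / 2 (the paper's form, without the usual 1/2 factor).
    m = (p + q) / 2
    return kl(p, m) + kl(q, m)

p = np.array([0.4, 0.6])
q = np.array([0.5, 0.5])
print(js(p, p), js(p, q) > 0)  # 0.0 True
```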

    $\mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\right)^2\right]$ (2.7)
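    The gradient penalty of equation (2.7) can be illustrated without automatic differentiation by choosing a critic whose input gradient is known in closed form; the sketch below uses a hypothetical linear critic $f_w(x) = w \cdot x$, whose gradient with respect to the input is $w$ everywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear critic f_w(x) = w . x: its input gradient is w everywhere,
# so the penalty of Eq. 2.7 has a closed form.
w = rng.standard_normal(4)
batch = 8
grad = np.tile(w, (batch, 1))  # d f_w / d x_hat for every interpolated sample

penalty = float(np.mean((np.linalg.norm(grad, axis=1) - 1.0) ** 2))

# Rescaling the critic to ||w||_2 = 1 (i.e., exactly 1-Lipschitz) zeroes the penalty.
w_unit = w / np.linalg.norm(w)
grad_unit = np.tile(w_unit, (batch, 1))
penalty_unit = float(np.mean((np.linalg.norm(grad_unit, axis=1) - 1.0) ** 2))
print(penalty_unit < 1e-12)  # True
```

    The penalty vanishes exactly when the critic is 1-Lipschitz, which is the constraint WGAN-GP softly enforces.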

    In essence, deep learning constructs a model with multiple hidden layers. By training on large-scale sample data, representative feature information can be obtained, and new samples can be classified or regressed. The goal of deep learning is to give machines the ability to analyze and learn, and to recognize data such as text, images and sounds. In recent years, deep learning has also been applied to information hiding. The CNN-based steganalysis network proposed by Xu et al. [34] benefited from the BN layer. The normalization of the input feature maps is denoted as:

    $\hat{x}_{n,j}^{k} = \frac{x_{n,j}^{k} - \mu_k}{\sigma_k}$ (2.8)

    where $k$ indexes the feature maps, $\mu_k$ denotes the mean, $\sigma_k$ denotes the standard deviation, and $\hat{x}_{n,j}^{k}$ denotes the normalized feature map.
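    A minimal numpy version of the normalization in equation (2.8), assuming the usual NCHW layout (batch, channels, height, width) rather than the paper's exact indexing:

```python
import numpy as np

def normalize(feature_maps, eps=1e-5):
    # Eq. 2.8: per-feature-map standardization over batch and spatial axes,
    # assuming NCHW layout (batch, channels, height, width).
    mu = feature_maps.mean(axis=(0, 2, 3), keepdims=True)
    sigma = feature_maps.std(axis=(0, 2, 3), keepdims=True)
    return (feature_maps - mu) / (sigma + eps)

x = np.random.default_rng(0).normal(3.0, 2.0, size=(4, 2, 8, 8))
x_hat = normalize(x)
# After normalization every feature map has (approximately) zero mean and unit scale.
print(bool(np.allclose(x_hat.mean(axis=(0, 2, 3)), 0.0, atol=1e-6)))  # True
```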

    With the powerful feature-extraction capacity of CNNs, Baroffio et al. [35] presented a CNN-based approach for camera model identification. The convolution process of a CNN is shown as:

    $x_j^l = \sum_{i=1}^{n^{l-1}} x_i^{l-1} * w_{i,j}^{l-1} + b_j^l$ (2.9)

    where $*$ represents the convolution operation, $x$ denotes the feature maps, $w$ and $b$ denote the kernels and biases of the network respectively, $i$ and $j$ index the input and output feature maps, and $l$ indexes the layer.
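    Equation (2.9) can be made concrete with a naive loop-based valid convolution over multi-channel feature maps; this sketch is illustrative only and ignores stride and padding:

```python
import numpy as np

def conv_layer(x, w, b):
    # Eq. 2.9 made explicit: x is (in_ch, H, W), w is (out_ch, in_ch, k, k),
    # b is (out_ch,). Valid cross-correlation with stride 1.
    in_ch, H, W = x.shape
    out_ch, _, k, _ = w.shape
    out = np.zeros((out_ch, H - k + 1, W - k + 1))
    for j in range(out_ch):                     # output feature map j
        for i in range(in_ch):                  # sum over input feature maps i
            for r in range(H - k + 1):
                for c in range(W - k + 1):
                    out[j, r, c] += np.sum(x[i, r:r + k, c:c + k] * w[j, i])
        out[j] += b[j]                          # bias of output map j
    return out

x = np.ones((2, 5, 5))          # two 5x5 input maps of ones
w = np.ones((3, 2, 3, 3))       # three output maps, 3x3 kernels
b = np.zeros(3)
y = conv_layer(x, w, b)
print(y.shape)  # (3, 3, 3); each entry equals 2 * 3 * 3 = 18
```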

    In this section, we present the proposed method: generative adversarial networks for anti-forensics. The adversarial concept is central to the approach.

    The whole scheme is composed of two main sub-networks, whose detailed structures are shown in Figure 1; the specific configuration is given in Table 1. The Conv layers make up the discriminator sub-network, and the DeConv layers make up the generator sub-network. The residual block in [11] has shown effective results on the task of distinguishing NI and CGI; inspired by [11], we build each convolution layer as a convolutional residual block. Each residual block consists of two convolutional layers with an activation function between them; we choose the leaky rectified linear unit (Leaky ReLU) as the activation function. The structure of the residual block is shown in Figure 2. Weight normalization (WN) [36] follows each convolutional layer to help stabilize the training process and make the generated images more realistic.

    Table 1.  The detailed configuration of the proposed GAN. Conv layers are convolutional layers; DeConv layers are transposed convolutional layers.
    Layers Input channels Stride Padding Kernel size
    Conv1 3 2 1 4
    Conv2 64 2 1 4
    Conv3 128 2 1 4
    Conv4 256 2 1 4
    Conv5 384 2 1 4
    Conv6 512 1 0 5
    DeConv1 512 1 0 5
    DeConv2 384 2 1 4
    DeConv3 256 2 1 4
    DeConv4 128 2 1 4
    DeConv5 64 2 1 4
    DeConv6 3 2 1 4
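    As a consistency check of Table 1, the standard output-size formula $\lfloor (n + 2p - k)/s \rfloor + 1$ applied to the 160 × 160 inputs used in the experiments shows the feature maps shrinking to a 1 × 1 bottleneck at Conv6:

```python
def conv_out(size, kernel, stride, padding):
    # Standard convolution output-size formula: floor((n + 2p - k) / s) + 1.
    return (size + 2 * padding - kernel) // stride + 1

# (stride, padding, kernel) per Conv row of Table 1.
layers = [(2, 1, 4)] * 5 + [(1, 0, 5)]
size = 160  # generated-image resolution reported in the experiments
for idx, (s, p, k) in enumerate(layers, start=1):
    size = conv_out(size, k, s, p)
    print(f"Conv{idx}: {size}x{size}")
# Conv1: 80x80, Conv2: 40x40, ..., Conv5: 5x5, Conv6: 1x1
```

    The DeConv layers mirror this path, expanding the 1 × 1 bottleneck back to the 160 × 160 output resolution.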

    Figure 2.  The structure in detail of the residual block.

    As the main part of the proposed approach, we define the discriminator sub-network as the forensic component that forces the generated images to fool the detector. Two Sobel filters, F1 and F2, referenced from [11], serve as kernels in the first convolutional layer of the discriminative network; F1 and F2 concentrate on the texture and edge information of the input images. The two filters are the 3×3 arrays shown in equation (3.1). The images produced by the generator are collected as an amplified image set IG of CGI. The discriminator is then fed the images in IG as fake images and the NI images as real images. Because the forensic component is adopted during training, the ultimately generated images are expected to resist the detector.

    $Kernel_1 = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad Kernel_2 = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$ (3.1)
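    The two fixed kernels of equation (3.1) are the standard Sobel operators; the sketch below (with a toy step-edge image) shows that $Kernel_1$ responds to vertical edges while $Kernel_2$, its transpose, responds to horizontal ones:

```python
import numpy as np

# The two fixed kernels of Eq. 3.1 (Sobel operators).
K1 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])    # horizontal gradient
K2 = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])    # vertical gradient

def filter2d(img, kernel):
    # Valid 2-D cross-correlation with a 3x3 kernel.
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for r in range(H - 2):
        for c in range(W - 2):
            out[r, c] = np.sum(img[r:r + 3, c:c + 3] * kernel)
    return out

# A vertical step edge: K1 responds strongly, K2 not at all.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
print(np.abs(filter2d(img, K1)).max(), np.abs(filter2d(img, K2)).max())  # 4.0 0.0
```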

    In this section, we present the loss functions of the two sub-networks. The goal of the generator is to produce images of high quality with the characteristics of NI, so as to fool the forensic model on NI and CGI. With the diversity and stability afforded by adding weight normalization to the proposed generative adversarial networks, we follow WGAN-GP [33] in designing the loss functions. The objective functions of the two sub-networks are shown in equation (3.2) and equation (3.3).

    $L_D = \mathbb{E}_{z \sim P_z(z)}[f_w(z)] - \mathbb{E}_{x \sim P_{data}}[f_w(x)] + \lambda \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\|\nabla_{\hat{x}} f_w(\hat{x})\|_p - 1\right)^2\right]$ (3.2)
    $L_G = \mathbb{E}_{z \sim P_z(z),\, x \sim P_{data}}\left[-f_w(G(z)) + \|G(z) - x\|_1\right]$ (3.3)

    $\hat{x}$ is a random interpolation sample, given by:

    $\hat{x} = \varepsilon z + (1 - \varepsilon) x$ (3.4)
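    Equation (3.4) draws one $\varepsilon \in [0,1]$ per sample and interpolates between the generated and real batches; a minimal numpy sketch with toy-shaped batches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batches standing in for generated and real images (shapes are illustrative).
z = rng.standard_normal((8, 3, 4, 4))    # generated batch
x = rng.standard_normal((8, 3, 4, 4))    # real batch

# Eq. 3.4: one epsilon per sample, broadcast over channels and pixels.
eps = rng.uniform(0.0, 1.0, size=(8, 1, 1, 1))
x_hat = eps * z + (1 - eps) * x

# Each interpolate lies on the segment between its two endpoints.
lo, hi = np.minimum(z, x), np.maximum(z, x)
print(bool(np.all((x_hat >= lo - 1e-12) & (x_hat <= hi + 1e-12))))  # True
```

    These interpolates are exactly the points at which the gradient penalty of equation (3.2) is evaluated.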

    The function $f_w$ realizes the Wasserstein distance, defined as:

    $W(P_{data}, P_z) = \frac{1}{K} \sup_{\|f\|_L \leq K} \mathbb{E}_{x \sim P_{data}}[f_w(x)] - \mathbb{E}_{z \sim P_z(z)}[f_w(z)]$ (3.5)
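    For one-dimensional empirical samples, the Wasserstein-1 distance of equation (3.5) reduces to sorting both samples and averaging the absolute differences; the sketch below (with made-up Gaussian samples) shows the distance shrinking as the fake distribution approaches the real one:

```python
import numpy as np

def w1_empirical(a, b):
    # 1-D Wasserstein-1 between equal-size empirical samples:
    # sort both samples and average the absolute differences.
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, 10000)        # stand-in for P_data
fake_far = rng.normal(0.0, 1.0, 10000)    # poorly trained generator
fake_near = rng.normal(1.9, 1.0, 10000)   # nearly matched generator
print(w1_empirical(real, fake_near) < w1_empirical(real, fake_far))  # True
```

    This monotone behavior is why the critic's output provides a useful training signal even when the two distributions barely overlap.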

    The deep learning framework used in the experiments is PyTorch, integrated with Python. The hardware environment comprises an Intel i9 CPU with 32 GB of memory and an NVIDIA GeForce 1080 Ti GPU with 12 GB of memory. The NI dataset is the CelebA [37] face dataset, used as the generation target during training. This open-access dataset, commonly used for image generation in GAN research, contains 202,599 images of 10,177 identities. The aligned version of the dataset crops each image to 178 × 218, aligned with the main part of the face.

    In the training process, we set the learning rate to $2 \times 10^{-5}$. The batch size is set to 8, and the resolution of the generated images is set to 160 × 160. The loss function for the training process is the sigmoid binary cross-entropy loss:

    $-\frac{1}{n} \sum_{n} \left( y_n \ln(\mathrm{sigmoid}(x_n)) + (1 - y_n) \ln(1 - \mathrm{sigmoid}(x_n)) \right)$ (4.1)

    where $x$ is the output of the model, representing the generated images, and $y$ is the target sample image. We use root mean square propagation (RMSprop) [38] to descend the gradient. The parameter updates are:

    $W = W - \alpha \frac{dW}{\sqrt{S_{dW}} + \varepsilon} \qquad b = b - \alpha \frac{db}{\sqrt{S_{db}} + \varepsilon}$ (4.2)

    where $S_{dW}$ and $S_{db}$ denote the gradient momenta for the weights and biases respectively, computed as:

    $S_{dW} = \beta S_{dW} + (1 - \beta) \, dW^2 \qquad S_{db} = \beta S_{db} + (1 - \beta) \, db^2$ (4.3)
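    Equations (4.1)–(4.3) can be exercised together on a scalar toy problem; this sketch (fitting a single logit, not the paper's network) applies the RMSprop update to the sigmoid cross-entropy loss:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bce_loss(x, y):
    # Eq. 4.1: sigmoid binary cross-entropy averaged over the batch.
    p = sigmoid(x)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def rmsprop_step(w, grad, s, alpha=0.01, beta=0.9, eps=1e-8):
    # Eqs. 4.2-4.3: accumulate squared-gradient momentum, then scale the step.
    s = beta * s + (1 - beta) * grad ** 2
    w = w - alpha * grad / (np.sqrt(s) + eps)
    return w, s

# Fit a single logit w toward target y = 1 (a scalar stand-in for the model output).
w, s = -2.0, 0.0
before = bce_loss(np.array([w]), np.array([1.0]))
for _ in range(500):
    grad = sigmoid(w) - 1.0        # d BCE / d logit for target y = 1
    w, s = rmsprop_step(w, grad, s)
after = bce_loss(np.array([w]), np.array([1.0]))
print(after < before)  # True
```

    Because RMSprop divides by the root of the accumulated squared gradient, the effective step size is roughly constant even as the raw gradient shrinks near the optimum.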

    The model is trained for 400,000 iterations. Samples generated by the well-trained model are shown in Figure 3. The loss over the whole training evolution is shown in Figure 4. In our experiments, the loss value consists of two parts: the loss on the real data and the loss on the fake data, shown in Figure 5 and Figure 6 respectively. The anti-forensic capability on NI and CGI is verified by the gradually declining loss curve, which indicates that, under the adversarial training between generator and discriminator, the discriminator cannot correctly distinguish the input images.

    Figure 3.  The random sampled generated images by the well-trained model of the proposed approach.
    Figure 4.  The loss curve of the test evolution during the iterations of 400,000.
    Figure 5.  The loss curve of the discriminator on the real data during the training evolution.
    Figure 6.  The loss curve of the discriminator on the fake data during the training evolution.

    To further verify the effectiveness of the trained model, we randomly select 200 test samples as generation targets, testing the visual quality and the anti-forensic characteristics at each generation step while recording the test loss during training. The target samples and the generated images of the test process are shown in Figure 7. The loss value during the test process is shown in Figure 8. As the number of iterations increases, the loss value decreases gradually. This verifies that the images generated by our approach possess the anti-forensic property of invalidating the CNN-based forensic algorithms [11,12].

    Figure 7.  The randomly selected target real natural images (left), and the generated images by the proposed approach targeting the randomly selected real natural images (right).
    Figure 8.  The loss curve graph of the discriminator during the test evolution.

    In this paper, we present an anti-forensic scheme for high-performance CGI generation based on GAN. We first analyze deep learning in the field of forensics, including the latest CNN-based forensic algorithms and the generative adversarial learning model. By comparing the similarities between forensics and GAN, we found that the adversarial concept is suitable for anti-forensic tasks on distinguishing NI and CGI. The proposed anti-forensic scheme is effective at resisting unauthorized forensics, and the images generated by our improved model can deceive the detector effectively.

    This work is supported by the National Key R & D Program of China under grant 2018YFB1003205; by the National Natural Science Foundation of China under grant U1836208, U1536206, U1836110, 61602253, 61672294; by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20181407; by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.

    The authors declare that there are no actual or potential conflicts of interest in relation to this article.



    [1] A. Alabdulkarim, M. Al-Rodhaan, Y. Tian, et al., A privacy-preserving algorithm for clinical decision-support systems using random forest, CMC-Comput. Mater. Con., 58(2019), 585–601.
    [2] P. Yang, R. Ni, Z. Yao, et al., Robust contrast enhancement forensics using convolutional neural networks, (2018), arXiv preprint arXiv:1803.04749.
    [3] M. C. Stamm and K. J. R. Liu, Forensic estimation and reconstruction of contrast enhancement mapping, IEEE International Conference on Acoustics, Speech and Signal, (2010), 1698–1701.
    [4] G. Cao, Y. Zhao, R. Ni, et al., Contrast enhancement based forensics in digital images, IEEE T. Inf. Foren. Sec., 9(2014), 515–525.
    [5] X. Lin, C. T. Li and Y. Hu, Exposing image forgery through the detection of contrast enhancement, IEEE International Conference on Image Processing (ICIP), (2013), 4467–4471.
    [6] C. Yuan, X. Li, Q. M. Jonathan Wu, et al., Fingerprint liveness detection from different fingerprint materials using convolutional neural network and principal component analysis, CMC-Comput. Mater. Con., 53(2017), 357–371.
    [7] F. Peng and D. L. Zhou, Discriminating natural images and computer generated graphics based on the impact of CFA interpolation on the correlation of PRNU, Digit. Invest., 11(2014), 111–119.
    [8] M. Long, F. Peng and Y. Zhu, Identifying natural images and computer generated graphics based on binary similarity measures of PRNU, Multimed. Tools. Appl., 78(2019), 489–506.
    [9] A. Radford, L. Metz and S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, (2015), arXiv preprint arXiv:1511.06434.
    [10] K. He, X. Zhang, S. Ren, et al., Deep residual learning for image recognition, IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770–778.
    [11] Q. Cui, S. McIntosh and H. Sun, Identifying materials of photographic images and photorealistic computer generated graphics based on deep CNNs, CMC-Comput. Mater. Con., 55(2018), 229–241.
    [12] W. Quan, K. Wang, D. M. Yan, et al., Distinguishing between natural and computer-generated images using convolutional neural networks, IEEE T. Inf. Foren. Sec, 13(2018), 2772–2787.
    [13] T. Ng, S. Chang, J. Hsu, et al., Columbia photographic images and photorealistic computer graphics dataset, ADVENT, Columbia University, (2005).
    [14] J. Wang, T. Li, X. Luo, et al., Identifying computer generated images based on quaternion central moments in color quaternion wavelet domain, IEEE T. Circ. Syst. Vid. Tec., (2018), DOI: 10.1109/TCSVT.2018.2867786.
    [15] G. Cao, Y. Zhao, R. Ni, et al., Anti-forensics of contrast enhancement in digital images, 12th ACM Workshop on Multimedia and Security, (2010), 25–34.
    [16] K. Singh, A. Kansal and G. Singh, An improved median filtering anti-forensics with better image quality and forensic undetectability, Multidi. Syst. Sign. P., (2019), 1–24.
    [17] A. Mehrish, A. V. Subramanyam and S. Emmanuel, Joint spatial and discrete cosine transform domain-based counter forensics for adaptive contrast enhancement. IEEE Access, 7(2019), 27183–27195.
    [18] P. M. Shelke and R. S. Prasad, An improved anti-forensics JPEG compression using least cuckoo search algorithm, Imaging. Sci. J., 66(2018), 169–183.
    [19] D. Kim, H. U. Jang, S. M. Mun, et al., Median filtered image restoration and anti-forensics using adversarial networks, IEEE Signal Proc. Let., 25(2018), 278–282.
    [20] M. C. Stamm and K. R. Liu, Anti-forensics of digital image compression, IEEE T. Inf. Foren. Sec., 6(2011), 1050–1065.
    [21] P. Yang, R. Ni, Y. Zhao, et al., Robust contrast enhancement forensics using convolutional neural networks, (2018), arXiv preprint arXiv:1803.04749.
    [22] Y. Luo, H. Zi, Q. Zhang, et al., Anti-forensics of jpeg compression using generative adversarial networks, 26th European Signal Processing Conference (EUSIPCO), (2018), 952–956.
    [23] H. Li, W. Luo, X. Qiu, et al., Identification of various image operations using residual-based features, IEEE T. Circ. Syst. Vid. Tec., 28(2018), 31–45.
    [24] R. Böhme and M. Kirchner, Counter-forensics: Attacking image forensics, Digital Image Forensics, Springer, New York, (2013), 327–366.
    [25] J. Fridrich and J. Kodovsky, Rich models for steganalysis of digital images, IEEE T. Inf. Foren. Sec., 7(2012), 868–882.
    [26] T. Pevny, P. Bas and J. Fridrich, Steganalysis by subtractive pixel adjacency matrix, IEEE T. Inf. Foren. Sec., 5(2010), 215–224.
    [27] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., Generative adversarial nets, Advances in Neural Information Processing Systems, (2014), 2672–2680.
    [28] J. Hayes and G. Danezis, Generating steganographic images via adversarial training, Advances in Neural Information Processing Systems, (2017), 1954–1963.
    [29] D. Volkhonskiy, I. Nazarov, B. Borisenko, et al., Steganographic generative adversarial networks, (2017), arXiv preprint arXiv:1703.05502.
    [30] H. Shi, J. Dong, W. Wang, et al., SSGAN: Secure steganography based on generative adversarial networks, Pacific Rim Conference on Multimedia, Springer, Cham, (2017), 534–544.
    [31] R. Meng, S. G. Rice, J. Wang, et al., A fusion steganographic algorithm based on faster r-cnn, CMC-Comput. Mater. Con., 55(2018), 1–16.
    [32] S. Ren, K. He, R. Girshick, et al., Faster r-cnn: Towards real-time object detection with region proposal networks, Advances in Neural Information Processing Systems, (2015), 91–99.
    [33] I. Gulrajani, F. Ahmed, M. Arjovsky, et al., Improved training of wasserstein gans, Advances in Neural Information Processing Systems, (2017), 5767–5777.
    [34] G. Xu, H. Z. Wu, and Y. Q. Shi, Structural design of convolutional neural networks for steganalysis, IEEE Signal Proc. Let., 23(2016), 708–712.
    [35] L. Baroffio, L. Bondi, P. Bestagini, et al., Camera identification with deep convolutional networks. IEEE Signal Proc. Let., 24(2016), 259–263.
    [36] S. Xiang and H. Li, On the effect of batch normalization and weight normalization in generative adversarial networks, (2017), arXiv preprint arXiv:1704.03971.
    [37] Z. Liu, P. Luo, X. Wang, et al., Deep learning face attributes in the wild, IEEE International Conference on Computer Vision, (2015), 3730–3738.
    [38] T. Tieleman and G. Hinton, Lecture 6.5-rmsprop, coursera: neural networks for machine learning, University of Toronto, (2012).
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
