Research article Special Issues

Dynamic interaction between transmission, within-host dynamics and mosquito density


  • The central question in this paper is the character and role of the within-host and between-host interactions in vector-transmitted diseases compared to environmentally transmitted diseases. In vector-transmitted diseases, the environmental stage becomes the vector population. We link an epidemiological model for a vector-transmitted disease with a simple immunological process: the effective transmission rate from host to vector, modeled as a function of the infected cell level within the host, and a virus inoculation term that depends on the abundance of infected mosquitoes. We explore the role of infectivity (defined as the number of host target cells infected), recovery rate, and viral clearance rate in the coupled dynamics of these systems. As expected, the conditions for a disease outbreak require the average individual in the population to have an active (within-host) viral infection. However, the outbreak's nature, duration, and dynamic characteristics depend on the intensity of the within-host infection and the nature of the mosquito transmission capacity. Through the model, we establish inter-relations between the infectivity, host recovery rate, viral clearance rate, and different dynamic behavior patterns at the population level.

    Citation: Mayra Núñez-López, Jocelyn A. Castro-Echeverría, Jorge X. Velasco-Hernández. Dynamic interaction between transmission, within-host dynamics and mosquito density[J]. Mathematical Biosciences and Engineering, 2025, 22(6): 1364-1381. doi: 10.3934/mbe.2025051




    The risk of privacy leakage often arises when medical data are exchanged, especially between institutions. The auxiliary algorithms institutions rely on come from different manufacturers, and some data analysis tools even require an Internet connection. Commercial biometric estimators that embed face recognition algorithms (FRAs) may leak facial data to a third party during transmission. Biometric privacy-enhancing techniques (PETs) have been developed, but none copes with every situation. They fall into three classes. The first is partial shielding: a human can still see most of the content while specific details are hidden, and extra computation is needed to restore the image, e.g., pixelation and blurring. The second is pixel recoding: hardly anyone can recognize the content without the cipher, so a password is required whenever the image is used, e.g., chaos-based [1] or RCDP [2] methods. The third, and the most practical for daily use, is cloak perturbation: it hides specific features while remaining recognizable to humans, e.g., Fawkes [3] and federated privacy [4]. The first two classes are nearly impregnable, but in practice face images still need to stay readable for routine research; for example, when medical institutions exchange toddler images to record the relationship between eye distance and height, desensitizing the children's images deserves as much attention as the research itself. Yet if a face image is hidden too crudely, doctors may no longer be able to identify their patients.

    Current PET methods are formulated for one or a few types of FRAs, so even excellent solutions become helpless once the scenario changes. These studies have not been widely adopted because FRAs are upgraded and replaced rapidly, and the computing power and networks behind face tasks evolve just as fast. The face recognition market alone grew at a compound annual rate of 30.7% from 2010 to 2018 and is expected to exceed 7 billion dollars by 2024 [5]. More importantly, anyone can obtain and deploy FRAs easily. Commercial FRAs have proliferated since 2014, with mature development and after-sales pipelines, and their recognition accuracy now approaches 99.99%, as shown in Table 1. After the COVID-19 outbreak, FRAs designed for heavy occlusion also emerged; they can identify covered faces (e.g., wearing a gauze mask), even when more than 50% of the face is hidden. In general, we cannot tell whether a camera applies an FRA, nor which manufacturer supplied it.

    Table 1.  Accuracy of famous FRAs on difficult benchmarks. "*" indicates that the data is quoted from the original paper.
    FRA name Year Training dataset Accuracy (%) on LFW2 / IJB-C / AndyLau / CFP-FP / iQIYI-VID
    Fisherface [6] 1936 -- 93.5 [7] -- 93.6 -- --
    Phenogram [8] 1987 -- 97.6 -- 98.1 -- --
    Eigenface [9] 1991 -- 64-96* -- 95.56 -- --
    LBP [10,11] 1996 -- 87.62 -- 88.12 -- --
    Color Segmentation [12] 2001 -- 96.27 -- 93.47 -- --
    DeepFace [13] 2014 LFW1 97.35* 79.88–90.49 98.81 97.26 --
    DeepID [14] 2014 LFW1 97.45 79.91–90.88 98.81 95.08 --
    DeepID2 [15] 2014 LFW1 99.15 79.94–90.98 98.81 95.31 --
    DeepID2+ [16] 2015 LFW1 99.47 81.36–90.96 98.98 95.43 97.52
    FaceNet [17] 2015 LFW1 99.63* 84.76–90.28 91.17 -- --
    Baidu [18] 2015 -- 99.85 98.21–98.66 99.98 99.71 98.11
    Face++ [19] 2016 -- 99.6–99.8* 90.73–96.12 99.96 99.74 98.09
    SphereFace [20] 2017 LFW1 99.17–99.42 73.58–93.77 97.89 -- --
    Cosface [21] 2018 YTF/LFW1 99.33–99.51 89.25–95.96 99.27 -- --
    ArcFace [22] 2018 -- 99.41–99.5* 88.5–95.74 99.94 98.27 98.2
    PyramidBox [23] 2018 WIDER FACE 88.7–95.6 79.51–89.41 96.17 97.11 97.39
    ElasticFace [24] 2019 MS1MV2 99.8–99.82* 96.4–98.96* 99.89 98.61-98.73* 97.41
    MagFace [25] 2021 MS1MV2 99.83* 89.26–90.24* 99.68 98.46* 98.07
    MixFaceNets [26] 2021 MS1MV2 99.53–99.68* 90.43–93.08* 99.93 98.37 98.11
    PocketNet [27] 2022 MS1MV2 99.5–99.58* 90.79–91.62* 99.71 93.78–94.21* 97.75–98.01


    PETs can be roughly classified along six axes [28]: 1) applicable data, 2) mapping type, 3) biometric attributes, 4) how biometric utility is addressed, 5) whether the information can be reconstructed, and 6) the hidden biometric target (humans and/or machines). As for concrete countermeasures, only three categories are widely used and yield effective algorithms: image means, representation means, and inference means.

    Image means process pictures or videos, and their target may be humans or machines; confusion, adversarial, and synthesis techniques are used to alter the visual data and protect privacy [29,30]. Representation means are based on transformation and elimination: they suppress target aspects of the data by converting the original template into another form, or delete the meta-elements carrying the most information about the target attribute [31], and the result can be used only for a pre-defined purpose [28]. Inference means usually change the biometric template and the comparison/classification process that drives the similarity/comparison score [32]; unlike image means, this group targets automatic machine-learning models rather than humans. Our research focuses on an image-processing method that lets humans recognize who is in the picture while machines cannot, so we need to combine image and inference means. Moreover, the goal is to disable as many machine recognition algorithms as possible, not just a particular class.

    Any privacy-protection network built on neural networks is constrained by the neural network itself: however advanced a PET algorithm is, it cannot keep up with frequently upgraded FRAs, because PET algorithms are usually developed against specific targets. Take Fawkes [3] as an example: trained against ArcFace V1 it performs outstandingly, with a success probability (SP) above 90%, yet it stopped working on V2 soon after that version appeared, and even after retraining the model the SP dropped below 50%. FlowSAN [33], a representative semi-adversarial network, makes the face look increasingly strange as the threshold rises, so that some face attributes are hidden, as shown in Figure 1. We later compared several algorithms and found that few PETs can cope with multiple types of FRAs at the same time.

    Figure 1.  Illustration of the FlowSAN model: after the gender attribute is completely hidden, the generated person can hardly be recognized by humans.

    As shown in Table 2, almost no method copes with unanticipated situations. Retraining the model for each new FRA version may require rewriting the protected data, so a sustainable PET that avoids retraining is needed. Forcing convergence tends to produce complex, huge networks, and no single ideal neural network is both fast and flexible enough for multiple FRAs; a high-intensity GAN, in turn, often distorts the output to human eyes or fails against the FRAs.

    Table 2.  Characteristics of PETs.
    PET name Adversary Year SP Conclusion
    De-identifying [34] All 2005 99.9% Characteristic irreversibility. Not suitable for eyes. Key is easy to be leaked.
    Outsourced computation [35] All 2016 99% Not suitable for eyes.
    Multi-target adversary [36] All 2017 90–98% It requires a large number of training samples and huge parameters.
    Similarity-sensitive noise [37] SphereFace/Dlib 2019 95% Only used in SphereFace & Dlib.
    LightFace [38] All CNN based FRAs 2019 96.2–98.5% Only one FRA can be dealt with, slow training speed. Not suitable for eyes.
    Differential [39] Eigenface 2020 70–90% Limited to single white box applications
    Pixel perturbation [3] Black box based FRAs 2020 97–98% For specific versions of commercial FRAs only or need to retrain the model. Some algorithms fail after the image is compressed
    Synthetic face replacement [40] U-net based FRAs 2021 98.1–98.4% Become another person. Not suitable for eyes.


    In short, the key is to keep the generator continuously adapting to newly added discriminators (black-box FRAs) and, as the number of parameters grows, to find a fast gradient-descent method.

    The main contributions of this work can be summarized as follows:

    1) The model can integrate many face recognition algorithms without caring about their structures or training sets.

    2) An inverse back-coupling mechanism makes the output act in opposition to the input and automatically balances the target error.

    3) The network is kept stable with the help of the hunting principles of social animals.

    4) A validator scores the output to maintain a stable visual effect.

    The Privacy-Preserving and Security Mining Framework (PPSF) offers algorithms for data anonymity, privacy-preserving data mining, and privacy-preserving utility mining [41], and comprises 13 data anonymity algorithms. However, these state-of-the-art algorithms mainly hide sensitive information [42], rendering the protected content incomprehensible to every reader, humans included.

    Lin et al. proposed an ant colony optimization (ACO) method [43] that uses multiple objectives and transaction deletion to protect confidential and sensitive information. Since it mainly deletes transactions, it belongs to the representation means, and the protected information cannot be used without dedicated tools, e.g., for CT graphics.

    Shan et al. [3] use generative adversarial networks to cheat the ArcFace algorithm at the image level. They can thus disable that specific FRA and protect the perturbed face ID from being collected. The biggest disadvantage is that the method only works for a particular FRA; if the FRA changes, the network must be retrained.

    Wang et al. [44] introduced the nearest-neighbor method, which computes the cosine similarity of feature vectors, into edge computing, effectively improving the security of face data in the cloud. The fault tolerance of identity authentication systems is improved through secret sharing based on homomorphic techniques in distributed computing.

    Research on soft biometrics considers using feature templates to infer a person's gender, age, race, sexual orientation, and health status [32]. The server keeps the full set of face feature templates [45]; face privacy is safe within the business process, but if the server is compromised there is still a risk of privacy disclosure.

    In brief, existing privacy protection methods are designed for a particular pattern of FRA, and most of them target machine recognition only.

    Damer et al. suggested the concept of algorithm fusion in 2013 [46]: results from different sources are normalized and their scores fused to improve compatibility. In our evaluation, however, the time needed for multiple algorithms to converge to a consistent deviation is unacceptable, especially in the black-box setting. Baseline weighting algorithms [47] followed a year later; although robustness improved, the experimental results differ significantly across datasets.

    Subsequently, Damer focused on an asynchronous, combination-based, score-level weighted-sum fusion approach [48], whose focus is the trusted model rather than the attack model; we believe the attack model is the key to protecting privacy. Face morphing attacks were addressed [49] with minimum, maximum, and mean fusion rules, fusing at most three detectors under different protocols, whereas the new method fuses up to six detectors across different attacks.

    The literature [50] offers a further insight: even if several algorithms are fused, some restoration methods can still locate the altered region in the image and defeat the attack. A network structure with both an automatic mechanism and a manual intervention mechanism is therefore needed to prevent the generated attack image from being seen through.

    A face image should be covered by cloaks so that recognition by the corresponding FRAs is prevented, as shown in Figure 2. Considering the variety and self-renewal of FRAs, the algorithm must be compatible across them. The overall goal is to build a function f that applies a perturbation to the input image x so that the following properties hold:

    Figure 2.  Overall target diagram: the protected face image can disable multiple FRAs at the same time without affecting human vision.

    We assume that the facial feature template used by each algorithm is different. We define the characteristic set of each FRA as the two-dimensional representation of its feature template, and its mapping for image x is s; the original image feature satisfies c_x \in s, while the output image feature should satisfy c_y \notin s.

    If the feature sets of x in FRA1 and FRA2 are s_1 and s_2 respectively, then the output image must satisfy c_y \notin (s_1 \cup s_2).
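    To make this property concrete, the sketch below shows one way it could be checked empirically. It is only an illustration: the embedding callables and per-FRA match thresholds are hypothetical stand-ins for the FRAs' feature extractors, not an interface defined in this paper.

    ```python
    import numpy as np

    def cosine_distance(u, v):
        """1 - cosine similarity between two embedding vectors."""
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def escapes_feature_sets(x_orig, x_cloaked, embedders, thresholds):
        """Return True if the cloaked image is rejected by every FRA.

        embedders  : list of callables, each mapping an image to a feature vector
                     (stand-ins for FRA1, FRA2, ...).
        thresholds : per-FRA match thresholds; a distance above the threshold
                     means that FRA no longer links the two images.
        """
        for embed, thr in zip(embedders, thresholds):
            c_x = embed(x_orig)     # original feature, c_x in s_i
            c_y = embed(x_cloaked)  # protected feature, should leave s_i
            if cosine_distance(c_x, c_y) <= thr:
                return False        # still matched by this FRA, i.e. c_y in s_i
        return True                 # c_y lies outside s_1, s_2, ... simultaneously
    ```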

    The best way to preserve human visual recognizability is to add small perturbations to the image, so the main objective is to use cloak pixels to encrypt it. The proposed Adversarial Fusion Network (AFN) has two parts: a generative adversarial network (GAN) and an inverse back-coupling validator. To improve compatibility, adversary nozzles replace the discriminator and carry out black-box attacks: real and generated images at the nozzles are fed to the parallel FRAs and compared continuously until the FRAs judge them to be completely different. To keep the output image readable, a validator scores it to determine the generation quality, as shown in Figure 3.

    Figure 3.  Schematic representation of Adversarial Fusion Network architecture.

    The output contains random factors related to the distribution of feature points of the target FRA. The greater the Euclidean distance between the output vector and the original vector, the weaker the FRA becomes. When several FRAs must be handled at once, the output vector should stay as close as possible to the original image once the current FRA fails. The original image can be recovered by printing the random factors and recomputing them with the protected image. Following this idea, we assume that the regression function f is the feature-point mapping learned by an FRA. During training, the same mask template distribution must be obtained from images of different scales, and the complete feature space should satisfy Eq (1). The vector φ is the n-th feature distribution at scale s in the original images; Eq (2) lets the network learn the hyperparameters, and a relatively simple formulation is the translation and scaling of the target function in Eq (3).

    \phi = \sum\nolimits_{n\ge 1}\varphi_{s}^{n} (1)
    f\left(\varphi_{s_i}^{n}\right) = f\left(\varphi_{s_j}^{n}\right) (2)
    f^{*}\left(\phi\right) = {\omega}^{T}I (3)

    Here I is the input eigenvector, ω is the hyperparameter vector, the superscript * denotes the regional coordinate of a feature point, and f* is the predicted value, which should have the smallest Euclidean distance to the real value t. Eq (4) is the loss function of the adversary nozzles, which measures the difference between the generated image and the original image, and the optimization goal is defined in Eq (5).

    L_{n} = \sum\nolimits_{i}^{m}{\left(t_{i}-{\omega}^{T}I_{i}\right)}^{2} (4)
    W = \mathrm{arg}\underset{\omega}{\mathrm{min}}\sum\nolimits_{i}^{m}{\left(t_{i}-{\omega}^{T}I_{i}\right)}^{2}+\lambda{\left\|\omega\right\|}^{2} (5)
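    Eq (5) has the form of a standard ridge-regression objective, so it admits a closed-form solution. The numpy sketch below is illustrative only; the feature matrix `I` and target vector `t` are assumed inputs rather than quantities defined by the paper.

    ```python
    import numpy as np

    def fit_landmark_regressor(I, t, lam=1e-2):
        """Solve Eq (5): argmin_w sum_i (t_i - w^T I_i)^2 + lam * ||w||^2.

        I   : (m, d) matrix whose rows are the input eigenvectors I_i
        t   : (m,) vector of real landmark values t_i
        lam : regularization weight lambda
        """
        d = I.shape[1]
        # Closed-form ridge solution: w = (I^T I + lam * Id)^{-1} I^T t
        return np.linalg.solve(I.T @ I + lam * np.eye(d), I.T @ t)

    def predict(w, I):
        """Eq (3): f*(phi) = w^T I, evaluated row-wise."""
        return I @ w
    ```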

    The input images are samples of different scales and distinct faces, mapped to a fixed-size landmark template image, and the feature output is produced by the same FRA. The feature-point regions output by the algorithm are pre-processed into 4 × 4 × 3 or 4 × 4 × 1 pixel patches to reduce computation (the grey-scale result is sometimes unsatisfactory because some algorithms are sensitive to color channels). Normalizing a random facial mask model would require on the order of 30 million parameters, so a simple network is necessary. Each layer uses batch normalization to speed up training and prevent over-fitting [51]. We use 4 × 4 filters with same convolution (stride 2). After two consecutive convolutions with 16 filters, the image is compressed to 128 × 128 × 32 through a pooling layer. Several further convolution layers with 64 filters and same convolutions follow; after pooling the tensor becomes 64 × 64 × 64. Subsequently, 16 × 16 × 128 feature maps are obtained through 128 filters and two rounds of max pooling. After two fully connected layers, the result is fed to a Softmax layer for activation, as shown in Figure 4.
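    As a rough illustration of this layer pattern, the PyTorch sketch below reproduces the stated feature-map sizes (128 × 128 × 32, 64 × 64 × 64, 16 × 16 × 128). The 1024 × 1024 input size, the kernel sizes of the stride-1 convolutions, and the output width are assumptions chosen only to make the arithmetic consistent; they are not specified in the text.

    ```python
    import torch
    import torch.nn as nn

    class GeneratorSketch(nn.Module):
        """Sketch of the described generator: 4x4 stride-2 convolutions with
        batch normalization, interleaved pooling, two fully connected layers,
        and a softmax head. Sizes in comments assume a 3 x 1024 x 1024 input."""

        def __init__(self, num_outputs=1000):
            super().__init__()
            def conv(cin, cout, k, s, p):
                return nn.Sequential(
                    nn.Conv2d(cin, cout, kernel_size=k, stride=s, padding=p),
                    nn.BatchNorm2d(cout),
                    nn.ReLU(inplace=True),
                )
            self.features = nn.Sequential(
                conv(3, 16, 4, 2, 1),    # 1024 -> 512
                conv(16, 32, 4, 2, 1),   # 512 -> 256
                nn.MaxPool2d(2),         # -> 32 x 128 x 128
                conv(32, 64, 3, 1, 1),   # "same" convolution, 64 filters
                nn.MaxPool2d(2),         # -> 64 x 64 x 64
                conv(64, 128, 3, 1, 1),
                nn.MaxPool2d(2),         # -> 128 x 32 x 32
                conv(128, 128, 3, 1, 1),
                nn.MaxPool2d(2),         # -> 128 x 16 x 16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 16 * 16, 512),
                nn.ReLU(inplace=True),
                nn.Linear(512, num_outputs),
                nn.Softmax(dim=1),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # e.g. GeneratorSketch()(torch.zeros(1, 3, 1024, 1024)).shape == (1, 1000)
    ```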

    Figure 4.  The detailed architecture of the generator.

    The concept of adversarial example transferability was first proposed by Szegedy et al. [52]: models with different structures trained on different subsets can still cause the target model to misclassify with high confidence. If the trained model is general enough, feature-representation transfer [53] can serve as a node that fits different FRAs. The adversary nozzles establish the connection between the generator and the FRAs; they aggregate vectors with small feature distances into subsets, which reduces the network parameters. The set φ describes the characteristics of the FRAs, and the regrouped distribution f(x, y) is continuously fed back to the generator and compared, feature by feature, against the real image; e.g., the tip and wings of the nose are mapped to the vector τ for different FRAs, as shown in Figure 5. New FRAs can then be added, while old memories remain stored in the set φ.
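    A minimal sketch of how the nozzle's grouping step could be realized is shown below. The greedy merging rule, the Euclidean metric, and the tolerance are all illustrative assumptions, not details given in the text.

    ```python
    import numpy as np

    def group_close_features(features, tol=0.3):
        """Greedily merge feature vectors (collected from different FRAs) whose
        Euclidean distance to a group representative is below `tol`, so that the
        generator only has to fit one representative per subset.

        features : (k, d) numpy array of feature vectors
        returns  : list of index groups, e.g. [[0, 3], [1], [2, 4]]
        """
        groups = []
        for i, f in enumerate(features):
            for g in groups:
                rep = features[g[0]]              # first member acts as representative
                if np.linalg.norm(f - rep) < tol:
                    g.append(i)
                    break
            else:                                  # no existing group was close enough
                groups.append([i])
        return groups
    ```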

    Figure 5.  The principle of the adversary nozzle.

    However, feeding data only through the nozzle is very likely to make the output image unattractive. If the generated image approaches an unacceptable level, the nozzle should be closed in time. The distribution parameter θ comes from the validator, which makes the Euclidean distance between the desired distribution t* and the measurement f* as small as possible. The validator comprises six deconvolution layers and a human face tracking network: the deformation of the generated image is detected by a visual semi-supervised window [54], and if no face can be detected, training is stopped by the nozzles.

    The generated template can be regarded as a reserved room: once it is filled with noise, the image is encrypted, and applying an XOR operation between the template and the encrypted image restores the original.

    When AFN runs multi-objective tasks, training time becomes a thorny problem. The attacker, barrier, chaser, and driver are simulated through expectation, extreme value, data distribution, and minimum loss, respectively [55]. We borrow the division of labor in chimpanzee hunting to improve the cooperation of the network: each hidden layer in AFN is treated as a chimpanzee, and the characteristic distribution plane is treated as a tree.

    In nature, there are two main differences between chimpanzee groups and other biological groups:

    1) Individual diversity: within a chimpanzee group, individual ability and intelligence differ, yet all are members of the hunting team and none is discriminated against when tasks are assigned. According to their particular abilities, chimpanzees take on different hunting roles. In the algorithm, models with different curvature, slope, and intercept give the chimpanzees correspondingly different behaviors, just as in a natural hunt.

    2) Sexual motivation: besides the advantages of group hunting, research shows that chimpanzee hunting behavior is also driven by the social benefits of obtaining meat. A chimpanzee that obtains meat gains a certain reputation and can trade the meat for corresponding rewards, such as mating opportunities or being groomed by companions. Chimpanzees also shuffle their duties after each round of hunting; this chaos spurs them to obtain meat quickly, and the unconditional behavior ultimately improves exploitation and convergence speed.

    Attackers need more cognitive effort to predict the subsequent movement of the prey, so after a successful hunt they receive a larger piece of meat. This aggressive behavior is positively correlated with the initial position, empirical value, and time. In addition, chimpanzees can choose their roles in each round of hunting according to their strengths [56].

    In AFN, the different FRAs are regarded as prey. Target FRAs can appear simultaneously or separately, depending on the distance between pursuer and prey in the initial stage, and every chimpanzee constantly changes position to drive and chase its prey. The driver follows the FRA without trying to catch it; the barrier places itself in a tree to prevent the FRA from escaping (upgrading); the chaser pursues the FRA and explores its route; finally, the attacker predicts the FRA's escape route and seizes it along the shortest distance.

    The driving model d in the exploration and exploitation phases represents the relative distance between the driver position xc and the prey position xp, where t (< 500) is the iteration index and a, m, and c are coefficient vectors in [0, 2]. A smaller d is better, but it cannot be 0; its purpose is to keep the prey moving. Equations (6) and (7) are proposed accordingly.

    \bf{d} = \left|\boldsymbol{c}\cdot{\boldsymbol{x}}_{p}\left(t\right)-\boldsymbol{m}\cdot{\boldsymbol{x}}_{c}\left(t\right)\right| (6)
    {\boldsymbol{x}}_{p}\left(t+1\right) = {\boldsymbol{x}}_{p}\left(t\right)-\boldsymbol{a}\cdot\bf{d} (7)

    Based on experience, the initial value x of f is set to 0.1 and the nonlinear descent space to [3.57, 4]; r1 and r2 are random vectors in the range [0, 1]. The coefficients are given by Eqs (8)–(10), respectively.

    \boldsymbol{a} = \left(2{\boldsymbol{r}}_{1}-1\right)f (8)
    \boldsymbol{c} = {\boldsymbol{r}}_{2}\cdot f (9)
    \boldsymbol{m} = f\left(x\right)≔\mathrm{logistic}\left(x\right) (10)

    Finally, m is a logistic-mapped vector representing the effect of the chaotic stimulus (sexual motivation) of the chimpanzees after hunting; the chaotic function improves the convergence rate in complex, high-dimensional problems. All particles behave similarly in both local and global searches, so the individuals can be treated as a single population with a common search strategy.

    Chimpanzees can be divided into several hunting groups; the roles within each group are the same, and group membership can be adjusted after several rounds of hunting. In Table 3, t refers to the current iteration and T to the maximum number of iterations.

    Table 3.  The coefficient of f vector.
    Groups Plan 1 Plan 2
    1 1.95-2 \frac{t^{1 / 4}}{T^{1 / 3}} 2.5-\frac{2\mathrm{l}\mathrm{o}\mathrm{g}\left(t\right)}{\mathrm{l}\mathrm{o}\mathrm{g}\left(T\right)}
    2 1.85-3 \frac{t^{1 / 3}}{T^{1 / 4}} 2.5-2{\left(\frac{t}{T}\right)}^{3}
    3 1.5-3{\left(\frac{t}{T}\right)}^{3} 0.5+2\mathrm{e}\mathrm{x}\mathrm{p}\left[-16{\left(\frac{t}{T}\right)}^{2}\right]
    4 1.5-2{\left(\frac{t}{T}\right)}^{3} 2.5+2{\left(\frac{t}{T}\right)}^{2}-4\frac{t}{T}


    The parameters of an FRA are unknown. To liken acquiring FRA characteristics to chimpanzee hunting, we assume that any member can act as chaser and report the location x. These recommended role models are applied, and the other members update their positions based on the recommended initial values to obtain the best formation. The relational model is expressed by Eqs (11)–(13).

    {\bf{d}}_{R} = \left|{\boldsymbol{c}}_{n}{\boldsymbol{x}}_{R}-{\boldsymbol{m}}_{n}\boldsymbol{x}\right| (11)
    {\boldsymbol{x}}_{n} = {\boldsymbol{x}}_{R}-{\boldsymbol{a}}_{n}{\bf{d}}_{R} (12)
    \boldsymbol{x}\left(t+1\right) = \frac{1}{4}\sum _{\bf{1}}^{n}{\boldsymbol{x}}_{n} (13)

    The hunting role of a chimpanzee is represented by the subscript R, and n is the group number of the next change. When the prey weight c > 1, the hunting group hunts (encircles and intercepts); otherwise it divides the food or reassigns roles. This parameter also randomizes the optimization process and reduces the probability of getting trapped in a local minimum. Note that the c vector follows the same random function mapping from the beginning to the end of the iterations.
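    Putting Eqs (8)–(13) together, one hunting update can be sketched as follows. This follows the standard ChOA reading of the equations and uses the reconstructed forms of Eqs (8)–(10) above; the role positions, the coefficient f drawn from Table 3, and the logistic-map parameters are illustrative assumptions.

    ```python
    import numpy as np

    def logistic_map(x, mu=3.99, steps=3):
        """Chaotic vector for m in Eq (10); mu lies in the stated space [3.57, 4]."""
        for _ in range(steps):
            x = mu * x * (1.0 - x)
        return x

    def choa_update(x, role_positions, f, rng):
        """One position update following Eqs (8)-(13).

        x              : (d,) current chimpanzee (candidate solution) position
        role_positions : (4, d) positions of attacker, barrier, chaser, driver
        f              : coefficient for this group/iteration, taken from Table 3
        rng            : numpy random Generator
        """
        proposals = []
        for x_r in role_positions:
            r1 = rng.random(x.shape)
            r2 = rng.random(x.shape)
            a = (2.0 * r1 - 1.0) * f               # Eq (8)
            c = r2 * f                             # Eq (9)
            m = logistic_map(rng.random(x.shape))  # Eq (10), chaotic vector
            d_r = np.abs(c * x_r - m * x)          # Eq (11)
            proposals.append(x_r - a * d_r)        # Eq (12)
        return np.mean(proposals, axis=0)          # Eq (13): average of the 4 roles

    # e.g. one update of a 10-dimensional candidate
    rng = np.random.default_rng(0)
    x_new = choa_update(rng.random(10), rng.random((4, 10)), f=2.0, rng=rng)
    ```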

    Federated learning [39] and transfer learning [4] can also handle attacks on multiple FRAs, but their parameter counts are large and gradient explosion or vanishing gradients occur frequently; too many training rounds often leave the output visually unsatisfactory to humans. ChOA theory enables rapid gradient descent, and the acceptability of the output image can be optimized effectively.

    Not all feature points can be clustered separately [57], because there will always be some unknown data, and the chimpanzees need to know the focus of each FRA. Under our assumption, if the key points of an image correspond to the positions of different prey, the way to reduce the network parameters is to drive the prey to a certain position and then attack. This shortens the capture time (training time) and keeps the focus on prey that are closer to the attacker (the gradient-explosion issue).

    The attackers strike the prey on the grassland σ, the lowest gradient acceptable to the human eye. Once AFN starts, the chimpanzees enter the exploration stage: they search for prey (local extrema) separately and then attack intensively. The vector a carries a random factor in [-1, 1] (scaled by f), so searchers concentrate on prey far away from the others; when |a| > 1 the chimpanzees abandon the current route to look for new prey, as shown in Figure 6.

    Figure 6.  Fast gradient descent method based on hunting model.

    As mentioned earlier, after several iterations of hunting each chimpanzee adjusts its weight and position (its satisfaction after eating) and then relinquishes or inherits its previous responsibilities. The chaotic shuffle therefore helps balance the division of labor among the hunting groups, easing both the tendency to fall into local optima and the slow convergence typical of high-dimensional problems. Once the prey is captured, the four nearest groups of chimpanzees regroup and start a new exploitation.

    Every one-dimensional vector can be regarded as a chimpanzee representing a candidate point in AFN. The weights are the coefficients of the social roles, and the total length equals that of the vectors in \boldsymbol{\varphi} , given by Eq (14), where h is the number of neurons in the hidden layer and n is the number of inputs [58]. The ensemble training built on this analogy is shown in Figure 7. Mean squared error (MSE) measures the fitness of a chimpanzee by computing the difference between the expected value f and the evaluated value \widehat{f} over all training instances, Eq (15), where n is the number of instances in the training dataset.

    Figure 7.  Diagrammatic sketch of ensemble training.
    \mathrm{L} = \left(n\times h\right)+\left(2\times h\right)+1 (14)
    MSE = \frac{1}{n}\sum _{1}^{n}{\left(f-\widehat{f}\right)}^{2} (15)
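    For concreteness, Eqs (14) and (15) translate directly into the following small helpers (illustrative only; the variable names are ours):

    ```python
    import numpy as np

    def chimp_length(n_inputs, n_hidden):
        """Eq (14): length of one candidate (chimpanzee) vector, i.e., all
        weights and biases of a single-hidden-layer network."""
        return (n_inputs * n_hidden) + (2 * n_hidden) + 1

    def fitness_mse(expected, evaluated):
        """Eq (15): mean squared error between expected and evaluated outputs."""
        expected, evaluated = np.asarray(expected), np.asarray(evaluated)
        return np.mean((expected - evaluated) ** 2)
    ```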

    The captured prey is mapped onto a two-dimensional grassland. It not only affects the chimpanzees' social factor but is also stored in a room to generate templates. In other words, the main purpose of the attack is to determine the position of the prey on the σ plane, which continuously forms a template corresponding to the feature points collected by the FRAs; the template is then transformed into a printed patch [59] by texture visualization, as shown in Figure 8.

    Figure 8.  Template mapping.

    Because the location at which the prey is driven to the ground is random, every template file is different; combined with perturbation techniques, the templates encrypt the face images. In the generator, all pooling layers and fully connected layers are removed to improve sample quality and convergence speed [60]. This reduces the difficulty of searching σ; the only drawback is that it introduces noise pollution.

    By mapping Gaussian noise onto the template file and XOR-ing it with the original image, the features the FRAs focus on can be hidden. To make the generated images more realistic, we mirror the CelebA images smaller than 2 KB along the Y axis, convert them into Gaussian noise, and feed them to AFN. The protected image is a set of pixels in specific regions generated from random noise; XOR-ing the template file with the disturbed image again restores the clean image. The template file itself can be stored under image encryption [61] to increase its security.
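    A minimal numpy sketch of this XOR round trip is given below. The array shapes and the way the noise is quantized are assumptions, but the exact-restoration property is inherent to XOR.

    ```python
    import numpy as np

    def apply_mask(image, template, noise):
        """Fill the template's reserved regions with noise and XOR them into the
        image; XOR-ing with the same mask again restores the original exactly.

        image    : (H, W, C) uint8 array
        template : boolean (H, W) array marking the feature regions chosen by AFN
        noise    : (H, W, C) uint8 array (e.g., Gaussian noise quantized to uint8)
        """
        mask = np.where(template[..., None], noise, 0).astype(np.uint8)
        return image ^ mask, mask   # protected image and the key needed to undo it

    # Round trip: restoring is the same XOR with the saved mask.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)
    tpl = rng.random((8, 8)) > 0.7
    noise = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)
    protected, mask = apply_mask(img, tpl, noise)
    assert np.array_equal(protected ^ mask, img)   # exact restoration
    ```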

    This section compares the effects of several well-known privacy protection methods; well-known FRAs also take part in the experiments. Distributed training is carried out on an ARM Mali-T860 MP4 GPU and a development board with 16 NPU cores. The CelebA, LFW, and Andy_Lau datasets are used to assess performance.

    Privacy protection methods include chaos [62] and cryptography [2], zero-watermark [63], feature area twist [6], Fawkes [3], and ADVHAT [64] at the image level. The parameters of [65] are used in the inference experiment.

    FRAs include Baidu [18], Azure Face [66], Face++ [19], ArcFace [22], and an open-source face recognition algorithm based on Dlib [67]. For convenience they are renamed {M}_{B} , {M}_{Az} , {M}_{F} , {M}_{Ar} , and {M}_{D} .

    This experiment targets the ArcFace algorithm; the ADVHAT and Fawkes algorithms are not updated for other FRAs. Principal component analysis (PCA) [68] is used to reduce the visual features to a two-dimensional space. To preserve as much of the original data as possible, we apply the minimum-loss principle so that the first principal component has the largest variance and each subsequent component is orthogonal to the previous ones. Covering one image with each protection method shows that chaos and cryptography are the strongest while zero-watermark can hardly protect privacy; the algorithms that change the eigenvector with a cloak all work well, as shown in Figure 9. Mathematically, most algorithms therefore have the basic ingredients for hiding privacy; after printing, however, the images cannot be recognized except for ADVHAT, Fawkes, and the mask template.
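    A sketch of the 2-D projection step, assuming scikit-learn's PCA and hypothetical embedding data, might look as follows:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def to_2d_feature_space(embeddings):
        """Project FRA feature vectors to 2-D with PCA so the distance between the
        original and protected images can be inspected visually (cf. Figure 9).
        embeddings : (k, d) array of feature vectors from one FRA."""
        return PCA(n_components=2).fit_transform(embeddings)

    # e.g. distances between an original and each protected variant in the 2-D plane
    pts = to_2d_feature_space(np.random.default_rng(1).normal(size=(6, 512)))
    orig, protected = pts[0], pts[1:]
    print(np.linalg.norm(protected - orig, axis=1))
    ```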

    Figure 9.  Visual 2D feature space. The Euclidean distances are different, indicating that it has reached the expectation in principle.

    Following [65], 70% of CelebA is used as the training set and 30% as the test set; performance is evaluated on the test set, and images from LFW assess the generalization ability of AFN. For a consistent setup, the face recognition, gender, and age experiments follow the learning-based and strict (LBS) evaluation protocol [69], with the dataset organized into 5 folds. A commercial FRA will not follow these rules, so the only way to prove that the protection is effective is to check whether the attacked FRA outputs the correct face ID. We use eer to analyze the average ratio of genuine pair comparisons.

    The privacy-gain identity-loss coefficient (PIC) is used to evaluate the performance of the model, where fic' and eer' are computed from the attribute-suppressed representations and the errors fic and eer from the original (unmodified) face representations [37], as shown in Eq (16) [37]; the higher the PIC, the better the performance. The model maps the initial face representations into disentangled representations z, zid, and zatt, where z carries the template information of a face, zid the identity information, and zatt the information about biometric attributes.

    PIC = \frac{fic'-fic}{fic}-\frac{eer'-eer}{eer} (16)

    The fraction of incorrectly classified images fic and the equal error rate eer are reported for face ID and attribute recognition. In Table 4, the impact of the model on the latent space is concentrated on zid, while the age and gender information in zatt is unaffected. The results show that our model does not care about the specific attributes of the face; we only care about whether the FRA can output the face ID. Existing state-of-the-art methods, e.g., PE-MIU [32], unsupervised privacy enhancement (PE-UP) [37], and feature disentanglement (PE-FD) [65], prefer instead to hide certain key attributes while preserving the recognizability of the face.
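    For reference, Eq (16) can be computed with a one-line helper (the variable names are ours, not the paper's):

    ```python
    def pic(fic, fic_prot, eer, eer_prot):
        """Eq (16): privacy-gain identity-loss coefficient.

        fic, eer           : misclassification fraction and equal error rate on the
                             original (unmodified) face representations
        fic_prot, eer_prot : the same quantities on the attribute-suppressed
                             representations; a higher PIC means better protection.
        """
        return (fic_prot - fic) / fic - (eer_prot - eer) / eer
    ```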

    Table 4.  Privacy-gain identity-loss coefficient on CelebA, LFW and Andy_Lau. Computed over 5 folds.
    Datasets z zid zatt
    CelebA 25.077 ± 1.130 422 1
    LFW 24.478 ± 0.189 348 1
    Andy_Lau 29.635 ± 1.484 1024 256


    This implies that the total privacy cost remains unchanged; it is therefore important to apply an adaptive, iterative privacy budget allotment rather than a fixed allotment [70]. In terms of latent space, the goal is to maneuver the latent space to achieve a given image's transformation [71]. After mapping, the template coordinates of the ArcFace algorithm produced by the training network are shown in Figure 10, where xs is the original image and xt is the image template in AFN; the feature distribution of the processed image has clearly changed. All test samples are collected in Table 5, and we added liveness experiments (images of volunteers) to reflect real-world conditions.

    Figure 10.  2D coordinates of feature template. The test dataset is Andy_Lau and target FRA is Arcface v2.
    Table 5.  Change rate of feature distribution.
    Dataset {M}_{B} {M}_{Az} {M}_{F} {M}_{Ar} {M}_{D}
    CelebA 13.6% ± 1.1% 24.4% ± 0.8% 10.2% 27.1% ± 0.5% 99%
    LFW 9.3% ± 0.8% 23.6% ± 0.8% 12.4% 18.6% ± 0.1% 99%
    Andy_Lau 15.2% ± 2.7% 33.3% ± 0.1% 25.8% ± 0.9% 42.0% ± 1.2% 99%
    Liveness 5.8% 10% ± 0.2% 4.5% 38.1% ± 3.5% 99%


    The experiment shows that even against a single FRA the algorithm can change the image feature distribution and defeat the detector. The best result is obtained on the Andy_Lau dataset, with a change rate of over 40% against the ArcFace V2 algorithm. {M}_{D} is not intelligent, so its results are recorded but not included in the experimental data.

    The comparison uses the black-box method, and the evaluation index is the protection score {s}_{p} , which objectively evaluates the effect of an algorithm's output on the targeted FRA. {F}_{p} denotes the error samples on the target FRA, {T}_{p} the accuracy reported by the FRA's authors, and {F}_{t}^{n} the correctly recognized samples of the protected image on the other FRAs. \beta is the perception coefficient, a subjective weight from 0.10 to 0.99 assigned by the testers after comparing all experimental results, and n is the FRA's ID. The score is given by Eq (17); 1500 samples were tested, see Table 6.

    {s}_{p} = {\beta }^{n}\frac{{F}_{p}}{{F}_{p}+{T}_{p}+\sum _{1}^{n}{F}_{t}^{n}} (17)
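    A direct transcription of Eq (17) into code, with our own (hypothetical) argument names, is:

    ```python
    def protection_score(beta, n, false_on_target, true_on_target, false_on_others):
        """Eq (17): s_p = beta^n * F_p / (F_p + T_p + sum_k F_t^k).

        beta            : perception coefficient in [0.10, 0.99] (subjective weight)
        n               : FRA identifier used as the exponent of beta
        false_on_target : F_p, samples the target FRA got wrong after protection
        true_on_target  : T_p, accuracy reported by the FRA's authors
        false_on_others : list of F_t^k terms for the other FRAs
        """
        return (beta ** n) * false_on_target / (
            false_on_target + true_on_target + sum(false_on_others))
    ```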
    Table 6.  Cheat experiments. The values in brackets have \beta participation.
    Method {M}_{B} {M}_{Az} {M}_{F} {M}_{Ar} {M}_{D}
    Chaos 0.999(0.001) 0.999(0.001) 0.999(0.001) 0.999(0.001) 0.999(0.001)
    Zero-watermark 0.001(0.999) 0.001(0.999) 0.001(0.999) 0.001(0.999) 0.001(0.999)
    Twist 0.414(0.238) 0.230(0.138) 0.377(0.137) 0.798(0.701) 0.816(0.803)
    Fawkes 0.001(0) 0.327(0.310) 0.199(0.013) 0.986(0.837) 0.997(0.986)
    ADVHAT 0.001(0) 0.001(0) 0.333(0.334) 0.894(0.803) 0.960(0.960)
    Mask Template 0.834(0.811) 0.852(0.633) 0.883(0.675) 0.901(0.897) 0.998(0.998)


    The results indirectly prove that the proposed method remains compatible when dealing with multiple FRAs; the chaos method is more effective only if the visual effect is disregarded.

    A catastrophic forgetting (feature confusion) phenomenon occurs: the pixel coordinates clearly shift, as shown in Figure 11. Intuitively, this reflects the substantial differences in the landmark specifications of the enumerated FRAs and may be related to the initial chaotic function of the chimpanzee hunting group. Logistic chaotic mapping with values in [0.01, 0.4] was used; with Singer mapping the effect is much better. Running more chaotic-mapping experiments up front saves a lot of time, because visual degradation is often discovered only after many rounds of model training.
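    For readers who want to repeat the chaotic-map comparison, the two maps can be sketched as below. The Singer-map coefficients follow the form commonly used in chaos-initialized optimizers, and the parameter values are assumptions rather than values taken from the paper.

    ```python
    def logistic_map(x, mu=3.99, steps=100):
        """Logistic map x <- mu*x*(1-x); chaotic for mu near 4."""
        for _ in range(steps):
            x = mu * x * (1.0 - x)
        return x

    def singer_map(x, mu=1.07, steps=100):
        """Singer map, another 1-D chaotic map often used to seed optimizers."""
        for _ in range(steps):
            x = mu * (7.86 * x - 23.31 * x**2 + 28.75 * x**3 - 13.302875 * x**4)
        return x

    # Compare the sequences produced from the same seed, e.g. x0 = 0.15:
    print(logistic_map(0.15, steps=5), singer_map(0.15, steps=5))
    ```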

    Figure 11.  Catastrophic forgetting after hunting. (a) Feature confusion when logistic mapping is used. (b) The image is smoother with Singer mapping.

    The output image is the result of XOR-ing noise with the original image through the mask template shown in Figure 12; to restore the image, we simply XOR the output image with the mask template again. The mask template can be stored under image encryption, and when the image needs to be decoded the graphical mask template is recovered by decryption, which keeps the mask template key secure.

    Figure 12.  Multi model superposition reduction experiment. (a). Images generated for {M}_{B+Az+F+Ar+D} . (b). Restored image.

    The average error rate over the accumulated FRAs, {\overline{F}}_{p} , demonstrates the generality of the algorithm; even when the generated image is far from the expectation, it must still be restorable to its original state. Table 7 records the results of accumulating different FRAs, where Perception is the authors' subjective opinion and {\overline{R}}_{p} is the recognition accuracy after restoration. The results show that the number of FRAs is not directly proportional to the training time.

    Table 7.  Training results.
    FRAs Training time(min) {\overline{F}}_{p} {\overline{R}}_{p}
    {M}_{B+Az+F+Ar+D} 32,698 0.512 0.989
    {M}_{B+Az+F+D} 31,058 0.838 0.989
    {M}_{B+Az+F+Ar} 32,136 0.892 0.995
    {M}_{B+Az+F} 31,274 0.977 0.998
    {M}_{B+Az} 28,013 0.984 0.998
    {M}_{F+Ar} 29,137 0.996 0.997
    {M}_{b} 21,638 0.999 0.998
    {M}_{Ar} 19,333 0.998 0.998


    To investigate how the algorithm handles different FRA versions from the same manufacturer, V1 and V2 of {M}_{Ar} were chosen because both are easy to download and have different interfaces. Training on V1 took 19,333 minutes to generate a template, which is invalid when applied to V2; V2 was then treated directly as prey without changing the network structure or the protected image, and the attack succeeded after another 6010 minutes. In contrast, Fawkes takes 61,028 minutes to train on V1 and AdvHat takes 93,110 minutes (with two gradient explosions).

    The facial mask produced by AFN is highly efficient, but the only remedy for template coordinate drift is to change the chaotic function. The main reason is that we do not fully exploit the social behavior of the chimpanzees: when the target features are captured, the weights of the various roles in the network are not replaced effectively. Consequently, if the initial assignment of roles and tasks is unreasonable and the follow-up work is not tuned, good results can only come from a fortunate random chaotic function. In future work we will focus on optimizing task allocation and let the network adjust the weights of the hunting members automatically.

    The model proposed in this paper ensures that the targeted face recognition algorithms cannot recognize the protected face ID. It improves on existing algorithms that protect privacy only against a specific face recognition algorithm, and it can handle multiple recognition algorithms at the same time. In addition, a negative feedback network intervenes against image distortion, ensuring that humans can still recognize the protected face ID. To speed up training, the chimpanzee group-hunting optimization algorithm is introduced. Experiments show that the model can change the feature distribution of a facial image by up to 42% without obvious visual changes and can attack at least five commercial face recognition algorithms in parallel. Using a chaotic function to initialize the network parameters reduces the difficulty of network design and the risk of gradient explosion, although it introduces slight noise in the output image. Compared with other biometric privacy protection methods based on small perturbations, our method improves the training speed by a factor of 4.8.

    This research was partially funded by the Strategic Priority Research Program of the Chinese Academy of Sciences, grant number XDB41020104 and Scientific Research Project of Beijing Educational Committee, grant number KM202110005024.

    The authors declare no conflict of interest.



    [1] Z. Feng, J. X. Velasco-Hernández, B. Tapia-Santos, A mathematical model for coupling within-host and between-host dynamics in an environmentally-driven infectious disease, Math. Biosci., 241 (2013), 49–55. https://doi.org/10.1016/j.mbs.2012.09.004 doi: 10.1016/j.mbs.2012.09.004
    [2] Z. Feng, J. X. Velasco-Hernández, B. Tapia-Santos, M. Leite, A model for coupling within-host and between-host dynamics in an infectious disease, Nonlinear Dyn., 68 (2012), 401–411. https://doi.org/10.1007/s11071-011-0291-0 doi: 10.1007/s11071-011-0291-0
    [3] Z. Feng, X. Cen, Y. Zhao, J. X. Velasco-Hernández, Coupled within-host and between-host dynamics and evolution of virulence, Math. Biosci., 270 (2015), 204–212. https://doi.org/10.1016/j.mbs.2015.02.012 doi: 10.1016/j.mbs.2015.02.012
    [4] X. Cen, Z. Feng, Y. Zhao, Emerging disease dynamics in a model coupling withinhost and between-host systems, J. Theor. Biol., 361 (2014), 141–151. https://doi.org/10.1016/j.jtbi.2014.07.030 doi: 10.1016/j.jtbi.2014.07.030
    [5] M. A. Gilchrist, A. Sasaki, Modeling host-parasite coevolution: A nested approach based on mechanistic models, J. Theor. Biol., 218 (2002), 289–308. https://doi.org/10.1006/jtbi.2002.3076 doi: 10.1006/jtbi.2002.3076
    [6] A. E. S. Almocera, V. K. Nguyen, E. A. Hernández-Vargas, Multiscale model within-host and between-host for viral infectious diseases, J. Math. Biol., 77 (2018), 1432–1416. https://doi.org/10.1007/s00285-018-1241-y doi: 10.1007/s00285-018-1241-y
    [7] N. Mideo, S. Alizon, T. Day, Linking within- and between-host dynamics in the evolutionary epidemiology of infectious diseases, Trends Ecol. Evol., 23 (2008), 511–517. https://doi.org/10.1016/j.tree.2008.05.009 doi: 10.1016/j.tree.2008.05.009
    [8] M. Gilchrist, D. Coombs, Evolution of virulence: interdependence, constraints, and selection using nested models, Theor. Popul. Biol., 69 (2006), 145–153. https://doi.org/10.1016/j.tpb.2005.07.002 doi: 10.1016/j.tpb.2005.07.002
    [9] F. Saldaña, J. X. Velasco-Hernández, Modeling the COVID-19 pandemic: a primer and overview of mathematical epidemiology, SeMA J., 79 (2022), 225–251. https://doi.org/10.1007/s40324-021-00260-3 doi: 10.1007/s40324-021-00260-3
    [10] H. Gulbudak, V. L. Cannataro, N. Tuncer, M. Martcheva, Vector-borne pathogen and host evolution in a structured immuno-epidemiological system, Bull. Math. Biol., 79 (2017), 325–355. https://doi.org/10.1007/s11538-016-0239-0 doi: 10.1007/s11538-016-0239-0
    [11] M. Martcheva, N. Tuncer, Y. Kim, On the principle of host evolution in host–pathogen interactions, J. Biol. Dyn., 11 (2017), 102–119. https://doi.org/10.1080/17513758.2016.1161089 doi: 10.1080/17513758.2016.1161089
    [12] H. Gulbudak, An immuno-epidemiological vector-host model with within-vector viral kinetics, J. Biol. Syst., 28 (2020), 233–275. https://doi.org/10.1142/S0218339020400021 doi: 10.1142/S0218339020400021
    [13] R. M. Ribeiro, A. S. Perelson, The analysis of HIV dynamics using mathematical models, in AIDS and Other Manifestations of HIV Infection, 905 (2004), 912.
    [14] B. Tesla, L. R. Demakovsky, H. S. Packiam, E. A. Mordecai, A. D. Rodríguez, M. H. Bonds, et al., Estimating the effects of variation in viremia on mosquito susceptibility, infectiousness, and R0 of Zika in Aedes aegypti, PLoS Negl. Trop. Dis., 12 (2018). https://doi.org/10.1371/journal.pntd.0006733
    [15] B. W. Alto, K. Wiggins, B. Eastmond, D. Velez, L. P. Lounibos, C. C. Lord, Transmission risk of two chikungunya lineages by invasive mosquito vectors from Florida and the Dominican Republic, PLoS Negl. Trop. Dis., 11 (2017), e0005724. https://doi.org/10.1371/journal.pntd.0005724 doi: 10.1371/journal.pntd.0005724
    [16] L. Lambrechts, R. C. Reiner, M. V. Briesemeister, P. Barrera, K. C. Long, W. H. Elson, et al., Direct mosquito feedings on dengue-2 virus-infected people reveal dynamics of human infectiousness, PLoS Negl. Trop. Dis., 17 (2023), e0011593. https://doi.org/10.1371/journal.pntd.0011593 doi: 10.1371/journal.pntd.0011593
    [17] S. M. Lemon, P. F. Sparling, M. A. Hamburg, D. A. Relman, E. R. Choffnes, A. Mack, Vector-Borne Diseases: Understanding the Environmental, Human Health, and Ecological Connections: Workshop Summary, The National Academies Press, 2008. https://doi.org/10.17226/11950
    [18] N. L. González Morales, M. Núñez-López, J. Ramos-Castañeda, J. X. Velasco-Hernández, Transmission dynamics of two dengue serotypes with vaccination scenarios, Math. Biosci., 287 (2017), 54–71. https://doi.org/10.1016/j.mbs.2016.10.001 doi: 10.1016/j.mbs.2016.10.001
    [19] S. Bhatt, P. W. Gething, O. J. Brady, J. P. Messina, A. W. Farlow, C. L. Moyes, et al., The global distribution and burden of dengue, Nature, 496 (2013), 504–507. https://doi.org/10.1038/nature12060 doi: 10.1038/nature12060
    [20] C. A. Hill, F. C. Kafatos, S. K. Stansfield, F. H. Collins, Arthropod-borne diseases: vector control in the genomics era, Nat. Rev. Microbiol., 3 (2005), 262–268. https://doi.org/10.1038/nrmicro1101 doi: 10.1038/nrmicro1101
    [21] F. Beugnet, M. Jean-Lou, Emerging arthropod-borne diseases of companion animals in Europe, Vet. Parasitol., 163 (2009), 298–305. https://doi.org/10.1016/j.vetpar.2009.03.02 doi: 10.1016/j.vetpar.2009.03.02
    [22] H. J. Bremermann, H. R. R. Thieme, A competitive exclusion principle for pathogen virulence, J. Math. Biol., 27 (1989), 179–190. https://doi.org/10.1007/BF00276102 doi: 10.1007/BF00276102
    [23] S. Lion, A. J. J. Metz, Beyond R0 Maximisation: On pathogen evolution and environmental dimensions, Trends Ecol. Evol., 33 (2018), 458–473. https://doi.org/10.1007/s00285-018-1241-y doi: 10.1007/s00285-018-1241-y
    [24] I. Thapa, D. Ghersi, Modeling preferential attraction to infected hosts in vector-borne diseases, Front. Public Health, 11 (2023), 1276029. https://doi.org/10.3389/fpubh.2023.1276029 doi: 10.3389/fpubh.2023.1276029
    [25] H. Zhang, Y. Zhu, Z. Liu, Y. Peng, W. Peng, L. Tong, et al., A volatile from the skin microbiota of flavivirus-infected hosts promotes mosquito attractiveness, Cell, 185 (2022), 2510–2522. https://doi.org/10.1016/j.cell.2022.05.016 doi: 10.1016/j.cell.2022.05.016
    [26] D. W. Vaughn, S. Green, S. Kalayanarooj, B. L. Innis, S. Nimmannitya, S. Suntayakorn, et al., Dengue viremia titer, antibody response pattern, and virus serotype correlate with disease severity, J. Infect. Dis., 181 (2000), 2–9. https://doi.org/10.1086/315215 doi: 10.1086/315215
    [27] N. Haider, Y. M. Chang, M. Rahman, A. Zumla, R. A. Kock, Dengue outbreaks in Bangladesh: Historic epidemic patterns suggest earlier mosquito control intervention in the transmission season could reduce the monthly growth factor and extent of epidemics, Curr. Res. Parasitol. Vector-Borne Dis., 1 (2021), 100063. https://doi.org/10.1016/j.crpvbd.2021.100063 doi: 10.1016/j.crpvbd.2021.100063
    [28] PAHO, Integrated management strategy for dengue prevention and control in the Caribbean subregion, 2010. Available from: https://www.paho.org/sites/default/files/IMS-Dengue%20CARIBBEAN%20SUBREGION%20Integrated%20FINAL.pdf.
    [29] M. A. Nowak, R. M. May, Virus Dynamics: Mathematical Principles of Immunology and Virology, Oxford University Press, 2000.
    [30] R. Ben-Shachar, K. Koelle, Transmission-clearance trade-offs indicate that dengue virulence evolution depends on epidemiological context, Nat. Commun., 9 (2018), 29907741. https://doi.org/10.1038/s41467-018-04595-w doi: 10.1038/s41467-018-04595-w
    [31] N. J. White, Malaria parasite clearance, Malar. J., 16 (2017). https://doi.org/10.1186/s12936-017-1731-1
    [32] F. B. Agusto, M. C. Leite, M. E. Orive, The transmission dynamics of a within-and between-hosts malaria model, Ecol. Complexity, 38 (2019), 31–55. https://doi.org/10.1016/j.ecocom.2019.02.002 doi: 10.1016/j.ecocom.2019.02.002
    [33] J. M. Mansuy, C. Mengelle, C. Pasquier, S. Chapuy-Regaud, P. Delobel, M. B. Guillaume, et al., Zika virus infection and prolonged viremia in whole-blood specimens, Emerging Infect. Dis., 23 (2017), 863–865. https://doi.org/10.3201/eid2305.161631 doi: 10.3201/eid2305.161631
    [34] C. Triplett, S. Dufek, N. Niemuth, D. Kobs, C. Cirimotich, K. Mack, et al., Onset and progression of infection based on viral loads in rhesus macaques exposed to Zika virus, Appl. Microbiol., 2 (2022), 544–553. https://doi.org/10.3201/eid2305.161631 doi: 10.3201/eid2305.161631
    [35] Y. M. Tun, P. Charunwatthana, C. Duangdee, J. Satayarak, S. Suthisawat, L. Sand, et al., Virological, serological and clinical analysis of Chikungunya virus infection in Thai patients macaques exposed to Zika virus, Viruses, 14 (2022), 1805. https://doi.org/10.3390/v14081805 doi: 10.3390/v14081805
    [36] M. N. García, R. Hasbun, K. O. Murray, Persistence of West Nile virus, Microbes Infect., 17 (2015), 163–168. https://doi.org/10.1016/j.micinf.2014.12.003 doi: 10.1016/j.micinf.2014.12.003
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
