Research article

How fast is the linear chain trick? A rigorous analysis in the context of behavioral epidemiology

  • Received: 05 May 2020 Accepted: 16 July 2020 Published: 24 July 2020
  • A prototype SIR model with vaccination at birth is analyzed in terms of the stability of its endemic equilibrium. The information available on the disease influences the parents' decision on whether or not to vaccinate. This information is modeled with a delay according to the Erlang distribution. The latter includes the degenerate case of fading memory as well as the limiting case of concentrated memory. The linear chain trick is the essential tool used to investigate the general case. Besides its novel analysis and that of the concentrated case, it is shown that through the linear chain trick a distributed delay approaches a discrete delay at a linear rate. A rigorous proof is given in terms of the eigenvalues of the associated linearized problems and an extension to general models is also provided. The work is completed with several computations and relevant experimental results.

    Citation: Alessia Andò, Dimitri Breda, Giulia Gava. How fast is the linear chain trick? A rigorous analysis in the context of behavioral epidemiology[J]. Mathematical Biosciences and Engineering, 2020, 17(5): 5059-5084. doi: 10.3934/mbe.2020273



    Identifying the authenticity of an image is a significant task in several scenarios, such as the media industry, digital image forensics and academic appraisal. It is important to know whether an image has been tampered with, because this determines whether it can serve as valid evidence in a case or as the genuine result of an experiment. Tampering operations include, but are not limited to, copy-paste, blurring and scale transformation. We want to make a fast and reliable preliminary judgment on tampering, and the widely used JPEG compression algorithm gives us a good opportunity: we can design a detection algorithm based on it. Identification of JPEG compression history has received increasing attention in recent years. When an image that has been JPEG compressed is later saved in bitmap format, we can no longer access the JPEG file header containing the compression information, yet we may still need to know the image's compression history.

    Among all lossy compression algorithms, JPEG (Joint Photographic Experts Group) is one of the most popular and widely used standards. Almost all image software offers JPEG compression when saving digital images. Sometimes images that have been compressed by the JPEG method are saved as bitmaps, and we cannot tell directly from the image files whether they have been compressed, because we no longer have access to the JPEG file headers once the images are saved as bitmaps.

    However, this information may be crucial in some cases, for instance in the field of digital image forensics. If the JPEG compression history is reliably exposed, we can make a preliminary judgment that the image may have been tampered with. That is why we need to detect the compression history. Thus, detecting the compression history of bitmaps has become an important issue and received widespread attention.

    Many efforts have been made in this direction, and many solid results have been achieved. Most of these works rely on JPEG coefficients, JPEG quantization tables, the DCT, or wavelet transforms. Based on these, different approaches have been proposed.

    Thanh et al. [1] proposed a method based on combining the quantization effect with the statistics of discrete cosine transform coefficients, characterized by a statistical model. Hernandez et al. [2] proposed a method that can avoid giving false results: when their method cannot recover a quantization table, the bitmap either was not compressed or was not compressed by the JPEG algorithm. These methods revealed characteristics of JPEG coefficients that are very meaningful for further work in this area.

    There are also JPEG history detection methods that do not need to estimate the quantization table. Fan et al. [3] proposed a detection method based on block artifacts in the pixel domain: compared with uncompressed images, a previously compressed image exhibits characteristic discontinuities in pixel values across block boundaries. However, Fan's method [3] has relatively high computational complexity. Yang et al. [4] used the factor histogram to detect the JPEG compression history of bitmaps: for uncompressed bitmaps, decreasing values in the factor histogram are observed as the bin index increases, while no obvious decrease is found for decompressed JPEG images. However, Yang's method [4] suffers a sudden drop in accuracy when the compression quality factor is high, because the block artifact phenomenon is not obvious under such circumstances; in particular, when the quality factor is 98 or higher the accuracy can fall below 50%. Zhang et al. [5] found that the tetrolet transform proposed by Krommweh et al. [6], a kind of Haar wavelet transform that uses at most 117 different tetrolet coverings to decompose an image, can be used to exploit the structure of images, and proposed a detection method based on it to distinguish uncompressed bitmaps from decoded JPEG images. As far as we know, Zhang's method [5] has had the highest accuracy until now.

    Because JPEG is a lossy compression algorithm, the compressed image loses some information. As proposed in [7], the number of zero JPEG coefficients is a major factor affecting the compression quality of JPEG. For the same bitmap image, the image quality improves as the JPEG quality factor increases, while the percentage of zeros among the 64 JPEG coefficient positions decreases. We present a method based on this observation.

    In this paper, we propose a fast and reliable method to detect the compression history of bitmaps based on image information loss. Our method is faster than most existing methods because we do not need to compress the test image during processing. Many proposed methods contain a compression step because they need a comparison version of the image to obtain their results. Instead of making a compressed image with quality factor 100 as in [5], we obtain an estimated original image, created directly from the test image; this processing costs much less time than compression. Extensive experimental results demonstrate that our proposed method outperforms the state of the art with respect to both detection accuracy and computational complexity. The accuracy of our method is high, especially when the quality factor of the test image is below 97; even at quality factors as high as 98 and 99, our method still gives acceptable results. Moreover, the proposed method applies whether the test image was compressed with a standard or a non-standard JPEG quantization table: as long as the image was compressed by the JPEG method, our detection is effective.

    The remainder of the paper is organized as follows. In Section 2, we introduce the relationship between the JPEG coefficients and the image information loss caused by JPEG compression, and describe the method used to create the estimated original image. The framework and details of the algorithm are stated in Section 3. In Section 4, the experimental results are shown and discussed, and conclusions are drawn in Section 5.

    In this paper, the quality factor Q is the parameter that determines the quality of a JPEG image, and the DCT coefficients after quantization are called the JPEG coefficients, which can be read directly from the JPEG image file. The number of zero JPEG coefficients is a major factor affecting the compression quality of JPEG images. Through extensive experiments, we find that the proportion of zero JPEG coefficients at the 64 DCT positions shows a downward trend as the compression quality factor increases. In other words, for the same bitmap image, the higher the compression quality, the less image information is lost and the lower the percentage of zeros among the 64 JPEG coefficient positions. The percentage of zero JPEG coefficients at the different frequencies can therefore be defined as an index of the amount of information lost when a bitmap is JPEG compressed.

    When an image is compressed by JPEG, it is first separated into 8 × 8 blocks, and the DCT is applied to each block separately. Each block has 64 positions. The first step of our method is to count the zeros at each of the 64 positions over all blocks. Denote by n(j) the total number of zeros at the jth position and by m the number of blocks. The amount of image information loss at the 64 DCT positions can then be expressed as:

    $p(j) = \dfrac{n(j)}{m}, \quad j = 1, 2, \ldots, 64$  (2.1)

    and the average image information loss is expressed as:

    $\text{average loss} = \dfrac{1}{64} \sum_{j=1}^{64} p(j)$  (2.2)
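
    As a concrete illustration, the following Python sketch computes p(j) and the average loss for a grayscale image. Since the paper reads the JPEG coefficients directly from the file, the quantization stage is simulated here with the standard JPEG luminance table scaled by the quality factor (the common IJG convention); the function names are ours, not the paper's.

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50 baseline).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quant_table(q):
    """Scale the baseline table to quality factor q (IJG convention)."""
    s = 5000 / q if q < 50 else 200 - 2 * q
    return np.clip(np.floor((Q50 * s + 50) / 100), 1, 255)  # all ones at q = 100

def information_loss(gray, q):
    """Return p(j), j = 1..64, and the average loss (Eqs. (2.1)-(2.2))."""
    h, w = (d - d % 8 for d in gray.shape)          # trim to whole 8 x 8 blocks
    x = gray[:h, :w].astype(float) - 128.0          # JPEG level shift
    blocks = (x.reshape(h // 8, 8, w // 8, 8)
               .transpose(0, 2, 1, 3).reshape(-1, 8, 8))  # m blocks of 8 x 8
    coef = dctn(blocks, axes=(1, 2), norm='ortho')  # 2-D DCT of every block
    jpeg_coef = np.round(coef / quant_table(q))     # quantized "JPEG coefficients"
    p = (jpeg_coef == 0).mean(axis=0).ravel()       # zero fraction per position
    return p, p.mean()
```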

    Figure 1 illustrates the average information loss of a bitmap after being compressed into JPEG images with quality factors varying from 60 to 100. The average image information loss decreases as the quality factor Q grows.

    Figure 1.  The curve of average information loss with the increase of quality factor Q.
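
    A curve like the one in Figure 1 can be reproduced by sweeping the quality factor with the information_loss sketch above (the file name here is a placeholder):

```python
from PIL import Image
import numpy as np

gray = np.array(Image.open('test.bmp').convert('L'))  # hypothetical test bitmap
for q in range(60, 101, 5):
    _, avg = information_loss(gray, q)
    print(f'Q = {q:3d}  average information loss = {avg:.3f}')  # falls as Q grows
```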

    Following this observation, we obtain JPEG images from an uncompressed image Ibmp with different quality factors, and these JPEG images are then decoded into decompressed bitmap images. The uncompressed image and the decompressed images are each JPEG compressed with quality factor 100 to obtain IJPEG1 and IJPEG2, respectively. Clearly, IJPEG1 is a single JPEG compressed image while IJPEG2 undergoes double JPEG compression. We can compare the amount of image information loss between IJPEG1 and IJPEG2 to make a preliminary judgment on compression: a larger difference between IJPEG1 and IJPEG2 means higher information loss.

    Note, however, that this example assumes we have the original lossless image and then compress it, making the judgment by comparison. In a real case, the original lossless image is usually not available, so we must first estimate it.

    As proposed in [8,9], an image is separated into blocks when it undergoes JPEG compression, and these blocks are processed separately. To shrink the file size, some information loss is tolerated during quantization: certain frequency components are discarded, especially high-frequency harmonics that cause little or no perceptible change for the human visual system (HVS). These components are redundant for the HVS, yet they carry a lot of information, which is why they matter for compression detection. An image that has never been compressed, or was compressed with a relatively high quality factor, retains more of this information in the form of signals at different frequencies. A typical compressed image, by contrast, has lost a considerable number of harmonics: most high-frequency components are set to zero, and low-frequency components are also zeroed if they are small enough. Because JPEG compression is lossy, what was discarded in earlier processing cannot be recovered, but it is still possible to estimate the information that existed in the original version of the test image. During JPEG compression, the DCT and quantization are applied to each block rather than to the full image, as discussed in [10]. So even though those harmonics are lost within each separate 8 × 8 block, they still exist across the full-size image. To expose this information, we need to break the existing block artifacts. We therefore employ a method widely used in image steganalysis [11]: cutting 4 rows and 4 columns from the test image to estimate a counterpart of the original image.

    Removing the top-left 4 rows and 4 columns has been shown to be an effective way to estimate a counterpart of the original, uncompressed image, in the sense of having similar statistical features, because the cut destroys the block-based structure of JPEG. The row and column cutting are illustrated in Figure 2.

    Figure 2.  Original image estimation.
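
    In array terms the estimation step is a single crop; a minimal sketch, assuming the decoded test bitmap is a NumPy array:

```python
def estimate_original(img):
    # Dropping the top 4 rows and left 4 columns shifts the 8 x 8 JPEG grid
    # by half a block, destroying block-aligned artifacts while leaving the
    # image content essentially intact.
    return img[4:, 4:]
```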

    Based on the image information loss, we propose a novel algorithm to detect the JPEG compression history, as illustrated in Figure 3. The idea of extracting features from the JPEG file is based on [12].

    Figure 3.  The framework of algorithm based on image information loss.

    The whole process is as follows:

    ⅰ. To obtain IJPEG1, the test bitmap image is JPEG compressed with quality factor Q = 100.

    ⅱ. The counterpart of the original image is estimated by cutting 4 rows and 4 columns from the test image. The IJPEG2 is acquired by compressing the counterpart with quality factor Q = 100 as well.

    ⅲ. The features related to the image information loss are extracted from the two JPEG images, and then fed into the classifier to detect whether the test bitmap image has been compressed.
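
    Putting steps ⅰ-ⅲ together, a minimal sketch of the feature extraction could look as follows. The Q = 100 compressions are simulated here by calling information_loss with q = 100 (where the scaled quantization table collapses to all ones); a full implementation would save actual JPEG files and read back their coefficients. The name extract_feature and the epsilon guard are our additions:

```python
def extract_feature(gray):
    """64-dim feature vector for one test bitmap (grayscale NumPy array)."""
    p1, _ = information_loss(gray, 100)                     # step i:  I_JPEG1
    p2, _ = information_loss(estimate_original(gray), 100)  # step ii: I_JPEG2
    eps = 1e-6        # guard for positions where the estimate has no zeros
    return p1 / (p2 + eps)   # step iii: one ratio per DCT position (Eq. (3.1) below)
```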

    Consider the test image as a decompressed JPEG image, as illustrated in Figure 4. IJPEG1 then actually undergoes double JPEG compression, with an unknown previous quality factor followed by a quality factor of 100. Because the counterpart is estimated by cutting rows and columns, IJPEG2 can be considered a single JPEG compressed image with quality factor 100. Since the percentages of zero JPEG coefficients at the 64 DCT positions are defined as the indexes of the amount of information loss after a bitmap image is JPEG compressed, there are disparities between the corresponding indexes of IJPEG1 and IJPEG2 at the 64 DCT positions, as shown in Figure 5. Higher information loss is expected for a JPEG compressed test image. On the contrary, if the test image is uncompressed, there should be no obvious differences between the indexes at corresponding positions, as shown in Figure 6.

    Figure 4.  The original decompressed image.
    Figure 5.  The comparison of testing image and estimated original image in the case that the test image is decompressed from the JPEG image with quality factor Q = 90.
    Figure 6.  The comparison of testing image and estimated original image in the case that the test image is uncompressed.

    Let p1(j) denote the indexes of the image information loss of IJPEG1, and p2(j) those of IJPEG2. We then describe the difference in information loss as

    $p_{dif}(j) = \dfrac{p_1(j)}{p_2(j)}, \quad j = 1, 2, \ldots, 64$  (3.1)
    $p_{dif\_average} = \dfrac{1}{64} \sum_{j=1}^{64} p_{dif}(j)$  (3.2)

    pdif_average indicates how much detail is found in the estimated counterpart compared with the test image. If the test image is uncompressed, the value of pdif_average will be close to 1, meaning there is no obvious difference between the test image and the estimated original image. If the test image was compressed, this value will be much greater than 1, reflecting the bias observed between IJPEG1 and IJPEG2 after the 8 × 8 block structure of the test image is broken by cutting. After extracting this feature of the images, an SVM classifier is trained, and we can then detect the JPEG compression history of bitmaps with this model.
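
    The classification stage can then be sketched with scikit-learn's RBF-kernel SVM; training_paths, labels and 'suspect.bmp' are placeholders for the labeled training set and a test image:

```python
import numpy as np
from PIL import Image
from sklearn.svm import SVC

def load_gray(path):
    return np.array(Image.open(path).convert('L'))

# One 64-dim p_dif vector per labeled image;
# label 1 = previously JPEG compressed, 0 = never compressed.
X = np.array([extract_feature(load_gray(p)) for p in training_paths])
clf = SVC(kernel='rbf', gamma='scale')
clf.fit(X, labels)

verdict = clf.predict([extract_feature(load_gray('suspect.bmp'))])[0]
```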

    Two image databases are used in our experiments to evaluate the performance of the proposed method. First, 1338 uncompressed images from the UCID image database are used. These images are saved in TIFF format at a resolution of 512 × 384. A series of standard JPEG quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99) is applied to the images to obtain JPEG compressed images of different qualities, and these JPEG images are resaved in TIFF format for evaluating the proposed algorithm. In the following, this image dataset is named dataset1.

    The other 480 images come from the well-known Dresden database. Unlike the UCID images, the images from the Dresden database were captured by consumer cameras and originally saved as JPEG images. In our experiments, we use 480 JPEG images from 4 different cameras (Agfa DC-830i, Canon PowerShot A640, Nikon D200 and Sony DSC-W170), 120 images from each camera. Unlike the JPEG images in dataset1, these images were compressed with the custom JPEG quantization tables of the various camera models. These images, with a resolution of 3872 × 2592, are resaved as bitmap images for the experiments and named dataset2.

    We take 500 uncompressed images and 11 × 500 decompressed JPEG images from dataset1 as labeled samples to train the SVM classifier with an RBF kernel. With the resulting model we test the remaining images at the different quality factors (60, 70, 75, 80, 85, 90, 95, 96, 97, 98, 99). We also compare our proposed method with Yang's [4], Fan's [3] and Zhang's [5] methods in terms of detection accuracy and algorithmic complexity. The results are shown in Tables 1 to 3.

    Table 1.  Identification accuracy (%) of the proposed method and baselines for dataset1.

    Method      Q=60    Q=70    Q=80    Q=85    Q=90    Q=95    Q=96    Q=97    Q=98    Q=99    Original
    Fan's       97.10   96.68   96.00   95.14   89.78   69.14   59.80   48.33   25.53   17.27   84.10
    Yang's      99.90   100     100     100     99.80   98.69   96.58   88.74   78.16   39.79   96.59
    Zhang's     100     100     100     100     100     100     100     99.93   99.10   95.65   99.88
    Proposed    100     100     100     100     100     100     99.93   99.48   99.03   89.31   99.92

    Table 2.  Identification accuracy (%) of the proposed method and the baseline for images in dataset2.

    Method      Accuracy (%)
    Zhang's     34.08
    Proposed    100

    Table 3.  Average time cost per image (resolution 384 × 512).

    Method      Time cost (s)
    Fan's       2.73
    Yang's      0.91
    Zhang's     9.64
    Proposed    0.60


    As shown in Table 1, Fan's method gives relatively good results when the quality factor is at most 85, but its detection accuracy falls below 90% for quality factors of 90 and above. Yang's method has a similar shortcoming: it performs well up to quality factor 96, but its accuracy drops below 90% for higher factors. Zhang's method performs very well, with a detection accuracy of 95.65% even at a quality factor of 99. Our method outperforms Fan's and Yang's methods, and its detection results are similar to Zhang's. While Zhang's method works better at quality factor 99, our method gives results in the shortest time, as shown in Table 3; a simple computation also shows that the average time cost per pixel is stable. Time cost may not be the most important index in this field, but obtaining reliable results in less time can be a great advantage in some cases.

    Another comparison experiment is implemented between the proposed method and Zhang's method [5]. We take the 480 compressed images from dataset2 to show that our method works on all JPEG compressed images. These bitmap images were not compressed with standard JPEG quantization tables but were taken by cameras, meaning they were compressed with custom JPEG quantization tables. The results are shown in Table 2: Zhang's method turns out to be effective only for images compressed with the standard JPEG quantization tables, while the proposed method still performs well.

    The issue of detecting the compression history of images has received increasing attention in recent years. In this paper, we propose a novel and fast detection method based on a new feature reflecting image information loss: as the compression quality factor increases, the proportion of zero JPEG coefficients at the 64 DCT positions falls. We estimate the image counterpart by cutting 4 rows and 4 columns from the test image and compute the differences between the values at the 64 DCT positions. The feature extracted from these differences is fed into an SVM to train a model that classifies test bitmap images. Extensive experiments demonstrate that our proposed method outperforms the state of the art, especially in the cases of high compression quality factors and custom quantization tables, and that the proposed algorithm has lower computational complexity than previous works.

    This work is supported by the National Science Foundation of China (No. 61502076, No. 61772111).

    All authors declare no conflicts of interest in this paper.



  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)