Research article

Pricing hybrid-triggered catastrophe bonds based on copula-EVT model

  • This paper presents a hybrid-triggered catastrophe bond (CAT bond) pricing model. We take earthquake CAT bonds as an example for model construction and numerical analysis. According to the characteristics of earthquake disasters, we choose direct economic loss and magnitude as trigger indicators. The marginal distributions of the two trigger indicators are depicted using extreme value theory, and the joint distribution is established by using a copula function. Furthermore, we derive a multi-year hybrid-triggered CAT bond pricing formula under stochastic interest rates. The numerical experiments show that the bond price is negatively correlated with maturity, market interest rate and dependence of trigger indicators, and positively correlated with trigger level and coupon rate. This study can be used as a reference for formulating reasonable CAT bond pricing strategies.

    Citation: Longfei Wei, Lu Liu, Jialong Hou. Pricing hybrid-triggered catastrophe bonds based on copula-EVT model[J]. Quantitative Finance and Economics, 2022, 6(2): 223-243. doi: 10.3934/QFE.2022010




    The reading and interpretation of medical images is usually performed by medical professionals, but even for experienced experts the process of interpreting images and writing reports is prone to error. Staff shortages and excessive workloads can also lead to misjudgments in radiology reports. Writing accurate medical imaging reports is demanding for inexperienced radiologists and pathologists, especially in rural areas and areas where the quality of care is relatively low. For experienced radiologists and pathologists, writing imaging reports is tedious and time-consuming. Automated generation of medical reports can therefore effectively reduce doctors' workload and mistakes.

    There are several issues that must be addressed to automate the generation of auxiliary reports. First, a complete diagnostic report consists of several different forms of information. Second, the relevant image regions must be located and described correctly. You et al. [1] automatically extracted machine-learnable annotations from regression data, but the description results were still not ideal. Third, the description in an imaging report contains multiple sentences. Krause et al. [2] used a combined image-language structure to generate hierarchical descriptive paragraphs, but generating such long text remains difficult. Fourth, automatically generated statements are often hard to read and do not sound colloquial, as a human would. The current single-layer LSTM method cannot model long word sequences, and the traditional RNN+CNN architecture has difficulty generating long statement sequences. The multimodal recurrent model with attention (MRNA) can model long word sequences, but its accuracy is low and its output lacks readability.

    In view of the above problems, the following conclusions are drawn. 1) The verbal information of medical reports is more important than the image information. 2) The final result is often judged mainly by how well it imitates the doctor's tone. Based on this, the RCLN model is proposed in this paper. RCLN handles the multiple forms of information by establishing a multi-task framework: it treats label prediction as a multi-label classification task and long description generation as a text generation task. On the problem of region localization, a research team proposed a new real-time automatic calibration scheme based on scanning sources; the method allows accurate calibration regardless of the path-length variation caused by the non-planar topography of the sample or the scanning of the galvanometer [3]. Earlier work applied multimodal imaging technology to study density changes of melanosomes and lipofuscin granules in retinal pigment epithelium (RPE) cells [4]. There is also an efficient direct time-domain resampling scheme based on phase analysis that shows significant improvements in accuracy and speed, and silica-coated silver nanostructures can serve as excellent contrast agents for optical coherence tomography (OCT) imaging [5]. To solve the problem of image region localization, the MRNA model introduced a cooperative attention mechanism and explored the synergy of visual features and semantics while attending to images and predicted labels. In view of the difficulty of generating long text, RCLN uses a hierarchical LSTM that exploits the compositional nature of reports: combined with the cooperative attention mechanism, the hierarchical LSTM first generates high-level topics and then generates fine-grained descriptions according to those topics.

    1) Aiming at the confusion of long sentences in traditional medical report generation and the difficulty of locating diseased areas, a new cyclic sentence generation model and an attention-based word-by-word LSTM generation model are proposed, addressing the problems of long text and colloquial phrasing and achieving theoretical innovation.

    2) Through comparative experiments, the model is shown to be more effective than traditional models for chest X-ray report generation.

    The first task is to predict the labels of a given image. Label prediction is handled as a multi-label classification task. Specifically, features of the given image I are first extracted:

    p_{l,pred}(l_i = 1 | {v_n}_{n=1}^N) ∝ exp(MLC_i({v_n}_{n=1}^N))   (1)

    where I is the input image, L is the label vector, l_i = 1/0 indicates whether the ith label is present, and MLC_i represents the ith output of the multi-label classification network. A complete diagnostic report is composed of multiple parts with different forms of information: the chest X-ray report contains an impression, usually a single sentence; findings, a longer description; and tags, a list of keywords. Generating such disparate information from a unified framework is technically demanding.
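As a rough sketch (not the paper's implementation), eq. (1) can be illustrated in plain Python; the tag names and MLC scores below are invented for the example:

```python
import math

# Hypothetical MLC network outputs (logits), one score per candidate tag.
# These values are illustrative, not from the paper.
mlc_scores = {"cardiomegaly": 2.1, "effusion": -0.7, "normal": 0.3}

# Eq. (1): p(l_i = 1 | {v_n}) ∝ exp(MLC_i({v_n})).
# Normalizing the exponentials over all tags gives a distribution:
z = sum(math.exp(s) for s in mlc_scores.values())
probs = {tag: math.exp(s) / z for tag, s in mlc_scores.items()}

# For multi-label prediction, each tag can instead be thresholded
# independently through a sigmoid:
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
predicted = [tag for tag, s in mlc_scores.items() if sigmoid(s) > 0.5]
```

The per-tag sigmoid variant is what makes the task multi-label: each indicator l_i is decided on its own rather than competing in one softmax.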

    Secondly, it is still difficult to locate the lesion area in the image and attach the correct description.

    Finally, descriptions in imaging reports are often long, containing multiple sentences or even paragraphs. Suppose the report y has S sentences, the ith sentence has N_i words, and y_{i,j} is the jth word of the ith sentence. The loss ℓ(x, y) for producing distributions over each word of each sentence consists of two weighted cross-entropy terms: a sentence loss ℓ_sent on the stop distribution p_i, which shifts when generation should stop, and a word loss ℓ_word on the word distributions p_{i,j}.

    However, generating long text is indispensable for this task, and traditional methods cannot meet this need.

    Both CNN and RNN are extensions of traditional neural networks: they generate results through forward computation and update the model through backward computation. Each layer of a neural network can have multiple neurons horizontally, and multiple layers can be connected vertically. The significance of combining the two is that the combination can process large amounts of information with both spatial and temporal characteristics, such as combinations of video, images and text. There are also real-scene dialogues and dialogues accompanied by images that make text expressions more specific, and video gives a more complete description than pictures.

    Feature extraction mainly adopts a convolution kernel whose width and height are greater than 1, and which performs the cross-correlation operation only with same-size windows at each position of the image. Therefore, for an input of size n_h × n_w and a kernel of size k_h × k_w, the output size is (n_h − k_h + 1) × (n_w − k_w + 1).
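The output-size rule can be checked with a minimal cross-correlation sketch; the 4 × 4 image and 2 × 2 kernel are illustrative:

```python
import numpy as np

def corr2d(X, K):
    """Valid cross-correlation of image X (n_h x n_w) with kernel K (k_h x k_w)."""
    n_h, n_w = X.shape
    k_h, k_w = K.shape
    # Output shape is (n_h - k_h + 1) x (n_w - k_w + 1).
    Y = np.zeros((n_h - k_h + 1, n_w - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * K).sum()
    return Y

X = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 "image"
K = np.ones((2, 2))                           # a 2x2 kernel
Y = corr2d(X, K)                              # 3x3 output
```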

    ℓ(x, y) = λ_sent Σ_{i=1}^{S} ℓ_sent(p_i, I[i = S]) + λ_word Σ_{i=1}^{S} Σ_{j=1}^{N_i} ℓ_word(p_{i,j}, y_{i,j})   (2)
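A toy computation of this two-term loss might look as follows; the stop probabilities, tiny vocabularies and λ weights are invented for illustration, and cross-entropy is assumed for both terms:

```python
import math

# Toy report: S = 2 sentences; p_stop[i] is the probability of continuing
# after sentence i, p_word holds word distributions over a tiny vocabulary.
# All values are illustrative, not model outputs.
p_stop = [0.9, 0.2]          # p_i: probability of continuing after sentence i
continue_target = [1, 0]     # target: continue after sentence 1, stop after 2
p_word = [[{"the": 0.7, "heart": 0.3}], [{"lungs": 0.6, "clear": 0.4}]]
y_word = [["the"], ["lungs"]]

lam_sent, lam_word = 1.0, 1.0
bce = lambda p, t: -(t * math.log(p) + (1 - t) * math.log(1 - p))

# Eq. (2): weighted sum of a sentence-stop loss and a per-word loss.
loss = lam_sent * sum(bce(p, t) for p, t in zip(p_stop, continue_target))
loss += lam_word * sum(-math.log(p_word[i][j][y_word[i][j]])
                       for i in range(len(y_word))
                       for j in range(len(y_word[i])))
```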

    Image description technology can automatically generate text descriptions for a given image. Most of the image text models studied recently are based on CNN-RNN framework. Vinyals et al. [6] provided image features extracted from the last hidden layer of CNN to LSTM network to generate text. Fang et al. [7] first used CNN to detect anomalies in the image which were used to generate a complete sentence through the language model. Karpathy et al. [8] put forward the use of multimodal recursive neural network to fuse visual and semantic features and then generate image description.

    Scientists have studied attention in the field of cognitive neuroscience since the 19th century. Kernel regression [9], in 1964, was a simple demonstration of machine learning with an attention mechanism. Described in mathematical language, suppose there is a query q ∈ R^q and m key-value pairs (k_1, v_1), …, (k_m, v_m), where k_i ∈ R^k and v_i ∈ R^v. The attention pooling function f is expressed as a weighted sum of the values:

    h_i = f(W_i^(q) q, W_i^(k) k, W_i^(v) v) ∈ R^{p_v}   (3)

    The attention weight (a scalar) of the query q and the key k_i is obtained by mapping the two vectors to a scalar through the attention scoring function a and then applying the softmax operation; the head outputs are finally combined by a linear projection:

    W_o [h_1; …; h_h] ∈ R^{p_o}   (4)
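Under the common scaled-dot-product choice for the scoring function a (an assumption here; the text does not fix a), one attention head can be sketched as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# One attention head; dimensions and random inputs are purely illustrative.
rng = np.random.default_rng(0)
q = rng.normal(size=4)              # query q
K = rng.normal(size=(5, 4))         # m = 5 keys k_i
V = rng.normal(size=(5, 3))         # m = 5 values v_i

# Scoring function a(q, k_i): scaled dot product, then softmax -> weights.
scores = K @ q / np.sqrt(q.size)
weights = softmax(scores)

# Attention pooling: weighted sum of the values (the f of eq. (3)).
h = weights @ V
```

In multi-head attention, several such h_i are computed with separate projections W_i^(q), W_i^(k), W_i^(v), then concatenated and projected by W_o as in eq. (4).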

    Attention mechanisms have proven useful for image captioning. Xu et al. introduced a spatial visual attention mechanism over image features extracted from a CNN middle layer [10]. Wang et al. [11] proposed a semantic attention mechanism for given image tags, in order to make better use of visual features and generate semantic labels.

    The design of the LSTM network was inspired by the logic gates of computers. LSTM introduces memory cells (cells for short), whose hidden-layer outputs include both the hidden state and the memory cell. Only the hidden state is passed to the output layer, while the memory cell is entirely internal. Suppose there are h hidden units, the batch size is n, and the input dimension is d. The input is then X_t ∈ R^{n×d}, and the hidden state of the previous time step is H_{t−1} ∈ R^{n×h}. Accordingly, the gates at time step t are defined as follows: the input gate I_t ∈ R^{n×h}, the forget gate F_t ∈ R^{n×h}, and the output gate O_t ∈ R^{n×h}. They are calculated as follows:

    I_t = σ(X_t W_xi + H_{t−1} W_hi + b_i)
    F_t = σ(X_t W_xf + H_{t−1} W_hf + b_f)
    O_t = σ(X_t W_xo + H_{t−1} W_ho + b_o)   (5)

    where W_xi, W_xf, W_xo ∈ R^{d×h} and W_hi, W_hf, W_ho ∈ R^{h×h} are weight parameters, and b_i, b_f, b_o ∈ R^{1×h} are bias parameters.
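Eq. (5) can be sketched directly; the dimensions and random weights below are illustrative stand-ins, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Dimensions: batch n = 2, input d = 3, hidden h = 4 (illustrative).
n, d, h = 2, 3, 4
rng = np.random.default_rng(1)
Xt = rng.normal(size=(n, d))        # input X_t
Ht_1 = np.zeros((n, h))             # previous hidden state H_{t-1}

# One (W_x, W_h, b) triple per gate, matching the shapes below eq. (5).
W = {g: (rng.normal(size=(d, h)), rng.normal(size=(h, h)), np.zeros((1, h)))
     for g in ("i", "f", "o")}

# Eq. (5): each gate is a sigmoid of the same affine form.
gates = {g: sigmoid(Xt @ Wx + Ht_1 @ Wh + b) for g, (Wx, Wh, b) in W.items()}
It, Ft, Ot = gates["i"], gates["f"], gates["o"]
```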

    As an improved recurrent neural network, LSTM can solve the long-distance dependence problem in medical report generation, which a plain RNN cannot handle [12]. Tong et al. [13] studied dense captioning, requiring the model to generate a text description for each detected image region. Lei et al. [14] generated paragraph descriptions for images through a hierarchical LSTM.

    The visual features of the image and the semantic features of the previous sentence are combined in a multimodal cyclic generation network model (MRNA) that generates the next sentence. The RCLN model proposed in this paper introduces a new cyclic generation scheme that produces results sentence by sentence, in which each subsequent sentence is conditioned on multimodal input, including the preceding sentence and the original image [15]. The multimodal model adopts an attention mechanism to improve performance. The overall architecture presented in this paper takes medical images from multiple views as input and generates radiology reports with impressions and findings. To generate the findings paragraphs, this paper first uses an encoder-decoder model, which takes image pairs as input and generates the first sentence. The first sentence is then input into a sentence-encoding network to output its semantic representation [16]. Suppose a findings paragraph containing L sentences is being generated; the probability of generating the ith sentence of length T satisfies:

    P(S_i = w_1, w_2, …, w_T | V; θ)
    = P(S_1 | V) ∏_{j=2}^{i−1} P(S_j | V, S_1, …, S_{j−1}) · P(w_1 | V, S_{i−1}) ∏_{t=2}^{T} P(w_t | V, S_{i−1}, w_1, …, w_{t−1})   (6)

    where V is the given medical image, θ is the model parameter (omitted on the right-hand side), S_i represents the ith sentence, and w_t is the tth token in the ith sentence. Similar to the n-gram assumption in language modeling, this paper adopts a Markov assumption to obtain a 2-gram model at the sentence level, meaning the sentence currently being generated depends only on its previous sentence and the image. This simplifies the probability estimate:

    P̂(S_i = w_1, w_2, …, w_T | V; θ) = P(S_1 | V) ∏_{j=2}^{i−1} P(S_j | V, S_{j−1}) · P(w_1 | V, S_{i−1}) ∏_{t=2}^{T} P(w_t | V, S_{i−1}, w_1, …, w_{t−1})   (7)
    Figure 1.  RCLN model flowchart.
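The sentence-level Markov assumption can be illustrated with toy conditional probabilities (invented values, with the image V left implicit):

```python
# Toy conditional probabilities for a 3-sentence report, illustrating the
# sentence-level 2-gram (Markov) assumption of eq. (7): each sentence depends
# only on the image V and its immediately preceding sentence.
p_first = {"s1": 0.6, "s2": 0.4}                     # P(S_1 | V)
p_next = {("s1", "s2"): 0.7, ("s1", "s3"): 0.3,      # P(S_j | V, S_{j-1})
          ("s2", "s3"): 0.5, ("s2", "s1"): 0.5}

def paragraph_prob(sentences):
    """Chain the first-sentence probability with 2-gram transitions."""
    prob = p_first[sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        prob *= p_next[(prev, cur)]
    return prob

p = paragraph_prob(["s1", "s2", "s3"])  # 0.6 * 0.7 * 0.5
```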

    It can be noted that for small-scale datasets, the verbal information of medical reports is more important than the image information, and the final evaluation tends to focus more on how well the doctor's tone is imitated.

    The medical reporting task is closely related to the Image2Text task, so this paper uses the image captioning approach to solve it. In this model, an image encoder is applied to extract global and regional visual features from the input image. The background variable c output by the image encoder encodes the information of the entire input sequence x_1, …, x_T. Given the output sequence y_1, y_2, …, y_{T′} in the training samples, for each time step t′ the conditional probability of the decoder output y_{t′} is based on the previous output sequence y_1, …, y_{t′−1} and the background variable c, i.e., P(y_{t′} | y_1, …, y_{t′−1}, c). Another recurrent neural network can then be used as the decoder for time step t′ of the output sequence: the decoder takes the output y_{t′−1} of the previous time step and the background variable c as input, and transforms them, together with the hidden state s_{t′−1} of the previous time step, into the hidden state s_{t′} of the current time step. Therefore, a function g (a recurrent neural network unit) can be used to express the transformation of the decoder's hidden layer:

    s_{t′} = g(y_{t′−1}, c, s_{t′−1})   (8)
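As a minimal sketch of eq. (8), g can be any recurrent unit; a single tanh layer with random weights is assumed here purely for illustration:

```python
import numpy as np

def g(y_prev, c, s_prev, Wy, Wc, Ws):
    """A minimal stand-in for g in eq. (8): the decoder's new hidden state
    s_t' is computed from the previous output y_{t'-1}, the background
    variable c, and the previous hidden state s_{t'-1}."""
    return np.tanh(y_prev @ Wy + c @ Wc + s_prev @ Ws)

rng = np.random.default_rng(2)
dim_y, dim_c, dim_s = 3, 5, 4            # illustrative dimensions
Wy = rng.normal(size=(dim_y, dim_s))
Wc = rng.normal(size=(dim_c, dim_s))
Ws = rng.normal(size=(dim_s, dim_s))

s = np.zeros(dim_s)                      # initial decoder state
c = rng.normal(size=dim_c)               # encoder's background variable
for _ in range(3):                       # unroll three decoding steps
    y_prev = rng.normal(size=dim_y)      # previous output embedding (toy)
    s = g(y_prev, c, s, Wy, Wc, Ws)
```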

    The image encoder automatically extracts hierarchical CNN visual features from images. The image encoder of this model uses a pre-trained Resnet-152 [10]. In this paper, the input image is resized to 224 × 224 to stay consistent with the images used to pre-train the Resnet encoder. Then the local feature matrix f ∈ R^{1024×196} (reshaped from 1024 × 14 × 14) is taken from a res layer of Resnet [17]. Each column of f is a regional feature vector, so each image has 196 sub-regions. At the same time, this paper extracts the global feature vector f ∈ R^2048 from the last mean-pooling layer of Resnet. For multiple input images from multiple views (for example, the frontal and lateral views shown in the body text), their regional and global features are concatenated accordingly before being fed into the following layers [18]. For efficiency, all parameters in the layers built from Resnet-152 are fixed during training. Then a max-pooling operation is applied to the feature maps extracted from each convolution layer to generate 1024-dimensional feature vectors. The final sentence feature is a concatenation of feature vectors from different layers. To generate a long paragraph description, a hierarchical recurrent network was chosen in this paper. A two-level RNN is generally used for paragraph generation: first, some topics are generated by a paragraph-level RNN, which are then taken as input by a sentence-level RNN to generate sentences. A pre-trained dense captioning model can be used to detect the semantic regions of images.
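The reshaping from a 1024 × 14 × 14 feature map to the 1024 × 196 local feature matrix can be sketched as follows; the random array stands in for real Resnet activations:

```python
import numpy as np

# Simulated Resnet-152 intermediate feature map: 1024 channels on a 14 x 14
# spatial grid (real values would come from the pre-trained network).
fmap = np.random.default_rng(3).normal(size=(1024, 14, 14))

# Local feature matrix f in R^{1024 x 196}: each column is the feature
# vector of one of the 196 sub-regions.
f_local = fmap.reshape(1024, 14 * 14)

# A global feature via mean pooling over the spatial grid (analogous to the
# last average-pooling layer; Resnet's real global feature is 2048-d).
f_global = fmap.mean(axis=(1, 2))
```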

    Natural language is a complex system used to express the human mind, in which words are the basic units of meaning. As the name suggests, a word vector is a vector used to represent the meaning of a word, and can also be considered a feature vector or representation of the word. The technique of mapping words to real vectors is called word embedding, which in recent years has become basic knowledge in natural language processing. Each word is mapped to a fixed-length vector that better expresses similarities and analogies between different words. Word embedding includes two models, skip-gram and continuous bag of words (CBOW). To obtain semantically meaningful representations, their training relies on conditional probabilities, which can be seen as using some words in a corpus to predict other words [19]. Word embedding models are self-supervised, since they are trained on unlabeled data.
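A minimal skip-gram-style conditional probability, assuming randomly initialized embeddings and an invented four-word vocabulary, can be sketched as:

```python
import numpy as np

# Tiny vocabulary and embedding tables (randomly initialized; real embeddings
# would be learned with skip-gram or CBOW on a corpus).
vocab = ["lung", "heart", "clear", "normal"]
idx = {w: i for i, w in enumerate(vocab)}
rng = np.random.default_rng(4)
center_vecs = rng.normal(size=(len(vocab), 8))   # center-word vectors v_c
context_vecs = rng.normal(size=(len(vocab), 8))  # context-word vectors u_o

def skipgram_prob(center, context):
    """Skip-gram conditional probability P(context | center): a softmax
    over dot products of the center vector with all context vectors."""
    scores = context_vecs @ center_vecs[idx[center]]
    e = np.exp(scores - scores.max())
    return (e / e.sum())[idx[context]]

total = sum(skipgram_prob("lung", w) for w in vocab)
```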

    For the impression and findings descriptions of medical reports, a QA + hierarchical RNN method was used in this paper [20]. By introducing hidden state variables to store past information alongside the current input, the current output can be determined. The hidden state models the way data are generated: generation is divided into two steps, first selecting a hidden state and then producing an observation from that hidden state [21]. "Hidden" means that at run time only the observation sequence is visible, not the hidden state sequence, although the hidden states are available during training [22]. All of this is done in the basic unit of the time step, which is the time interval of the load sub-step within a load step [23]. In rate-independent analyses such as static and (static) nonlinear analysis, the time step within a load step does not reflect real time; it is accumulated to reflect the sequence of load sub-steps [24]. However, in rate-dependent analyses such as transient analysis, the size of the time step reflects the actual length of time.

    The original dataset was collected from the Open-i chest radiography open data, which contains 3955 radiology reports from two large hospital systems in the Indiana Patient Care Network database and 7470 associated chest X-rays from the hospital image archiving systems.

    Figure 2.  Sample dataset picture.

    First, the original dataset contained 7470 images, with 3391 frontal-lateral chest radiograph pairs and 3631 reports containing more than 4 sentences. To retain the largest usable subset of the data, the maximum number of sentences was set to 8, since more than 90% of reports contain between 4 and 8 sentences; 3111 samples met both conditions. Second, the training and validation sets were split 2811/300, making the validation set about one tenth of the data, and the Adam optimizer based on stochastic gradient descent was used. The unused part of the dataset then serves as the test set: 300 reports were randomly selected to form the test set, on which all evaluations were performed.

    Some common image caption evaluation metrics, including bilingual evaluation understudy (BLEU), metric for evaluation of translation with explicit ordering (METEOR), and recall-oriented understudy for gisting evaluation (ROUGE), are used to provide quantitative comparisons in this paper. BLEU-1 measures the accuracy of individual words in medical reports, while higher-order BLEU measures the fluency of sentences. For a sentence to be translated, the candidate translation and the corresponding group of reference translations can each be represented as sets of n-word phrases (n-grams), from which matching statistics are computed.
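The core of BLEU-1, modified unigram precision, can be sketched as follows (each candidate word's count is clipped by its count in the reference; the sentences are invented examples):

```python
from collections import Counter

def bleu1(candidate, reference):
    """Modified unigram precision (the core of BLEU-1): each candidate word
    is credited at most as many times as it appears in the reference."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

# Illustrative report sentences (not from the dataset).
score = bleu1("the lungs are clear", "the lungs are clear bilaterally")
```

Full BLEU additionally combines higher-order n-gram precisions with a brevity penalty; this sketch shows only the unigram term.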

    The purpose of METEOR is to prevent mistranslation of the reported results due to synonyms [25]. The METEOR measure is based on the weighted harmonic mean of unigram precision and unigram recall. To calculate METEOR, a set of alignments must be given in advance, based on the WordNet thesaurus. The alignments are constructed by minimizing the number of contiguous, ordered chunks in the corresponding statements; METEOR is then computed as the harmonic mean of precision and recall between the best candidate translation and the reference translations:

    Pen = γ (ch / m)^θ   (9)
    F_mean = P_m R_m / (α P_m + (1 − α) R_m)   (10)
    P_m = |m| / Σ_k h_k(c_i)   (11)
    R_m = |m| / Σ_k h_k(s_ij)   (12)
    METEOR = (1 − Pen) · F_mean   (13)

    where α, γ and θ are the default evaluation parameters. The final METEOR score is therefore based on a harmonic mean of chunk-level matching quality and, unlike BLEU, contains a penalty coefficient Pen. Precision and recall over the whole corpus are taken into account to obtain the final measure.
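Eqs. (9)-(13) can be chained in a few lines; the match counts and the parameter values α = 0.9, γ = 0.5, θ = 3 are assumptions for illustration (commonly used defaults, not taken from this paper):

```python
# Illustrative METEOR computation following eqs. (9)-(13).
alpha, gamma, theta = 0.9, 0.5, 3.0   # assumed default parameters

m = 5.0    # number of matched unigrams (toy value)
ch = 2.0   # number of contiguous matched chunks (toy value)
len_cand, len_ref = 7.0, 6.0          # candidate / reference lengths

P_m = m / len_cand                                       # eq. (11)
R_m = m / len_ref                                        # eq. (12)
F_mean = P_m * R_m / (alpha * P_m + (1 - alpha) * R_m)   # eq. (10)
Pen = gamma * (ch / m) ** theta                          # eq. (9)
meteor = (1 - Pen) * F_mean                              # eq. (13)
```

Fewer, longer chunks (small ch) give a small penalty, rewarding candidates whose matched words appear in the same order as the reference.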

    ROUGE evaluates abstracts based on the co-occurrence information of n-grams: it measures the recall of n-grams shared between a generated abstract and reference abstracts [26]. The basic idea is that several experts each write an abstract to form a standard reference set. The quality of the automatic abstract generated by the system is then evaluated by counting the number of overlapping basic units (n-grams, word sequences and word pairs) between the system output and the manual references.

    ROUGE-N = (Σ_{S ∈ ReferenceSummaries} Σ_{gram_n ∈ S} Count_match(gram_n)) / (Σ_{S ∈ ReferenceSummaries} Σ_{gram_n ∈ S} Count(gram_n))   (14)

    The stability and robustness of the evaluation can be improved by comparing against the expert manual abstracts. The neural machine translation (NMT) approach used in this paper is more powerful than its predecessor, statistical machine translation (SMT): word order in the generated medical reports is usually correct, but the frequency of errors increases, so a recall-oriented indicator such as ROUGE is needed to evaluate the error frequency.
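Eq. (14) can be sketched as an n-gram recall; the sentences are invented examples, not dataset reports:

```python
from collections import Counter

def rouge_n(candidate, references, n=1):
    """ROUGE-N as in eq. (14): n-gram recall of the candidate summary
    against a set of reference summaries."""
    def ngrams(words, n):
        return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    cand = ngrams(candidate.split(), n)
    match = total = 0
    for ref in references:
        ref_counts = ngrams(ref.split(), n)
        # Numerator: reference n-grams also found in the candidate (clipped).
        match += sum(min(c, cand[g]) for g, c in ref_counts.items())
        # Denominator: all n-grams in the references.
        total += sum(ref_counts.values())
    return match / total

score = rouge_n("no acute disease", ["no acute cardiopulmonary disease"], n=1)
```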

    First, an image encoder is used to extract global and regional visual features from the input image. The image encoder is a CNN, which automatically extracts hierarchical visual features from images. More specifically, the input image is resized to 224 × 224 (corresponding to the image size parameter).

    As shown in Figures 3–5, a dropout layer (corresponding to the dropout rate parameter) with a value of 0.3, 0.5 or 0.7 was added to the network to reduce overfitting; the dropout rate is the probability that a unit's output is discarded.

    Figure 3.  dropout = 0.3.
    Figure 4.  dropout = 0.5.
    Figure 5.  dropout = 0.7.
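The dropout layer's behavior at the three rates in Figures 3-5 can be sketched with inverted dropout (a standard formulation assumed here, not necessarily the exact implementation used in the experiments):

```python
import numpy as np

def dropout_layer(X, rate, rng):
    """Inverted dropout: each activation is zeroed with probability `rate`,
    and survivors are scaled by 1/(1-rate) to preserve the expected value."""
    mask = rng.random(X.shape) >= rate
    return X * mask / (1.0 - rate)

rng = np.random.default_rng(5)
X = np.ones((1000, 100))
for rate in (0.3, 0.5, 0.7):            # the three settings in Figures 3-5
    Y = dropout_layer(X, rate, rng)
    kept = (Y != 0).mean()              # fraction of surviving activations
```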

    Word embedding is mainly responsible for processing the title of each image given as input during training. The output of the word embedding is a vector of size 1 × 256 (corresponding to the word_embedding_size parameter), which is another input to the decoder sequence.

    For training, the batch size is set to 32 (parameter batch size), the Adam optimizer decays the learning rate from 1e-2 to 1e-4 (parameter learning rate), and training runs for 50 epochs (parameter epoch num).

    The effect of the dropout probability (the probability that a layer's output is discarded) on accuracy is discussed below.

    In the following two model tests, the data output of both the first and second tests met the evaluation benchmark range. The label position of the model was adjusted before the second test: when the label at the end of the whole sentence was moved to the middle of the sentence and the training time was increased, all indicators improved. The time complexity of this model is O(n²). As shown in Figures 6 and 7, the minimum values of the baseline range are all 0. Each score index is below the corresponding maximum value, showing that this model can generate relatively standard medical reports.

    Figure 6.  Comparison between RCLN model data and reference data in two experiments; The horizontal axis represents different score names, and the vertical axis represents the score value.
    Figure 7.  Model input and output test examples.

    In this paper, two comparative models for medical report generation are also implemented. The same Resnet pre-training model was used for pre-training. The data results are shown in Table 1.

    Table 1.  Comparison of the model.
    Model        BLEU_1  BLEU_2  BLEU_3  BLEU_4  METEOR  ROUGE
    CNN-RNN      0.3063  0.2026  0.1480  0.0994  0.1525  0.3273
    CNN-RNN-Att  0.3235  0.2374  0.1197  0.1084  0.1484  0.3256
    MRNA         0.3773  0.2436  0.1726  0.1284  0.1635  0.3263
    RCLN         0.4341  0.3336  0.2623  0.1373  0.2034  0.3663


    CNN-RNN: the prototype CNN was published by LeCun in 1998 [27], who formally applied backpropagation to neural networks and proposed the convolutional neural network. Ronald Williams and David Zipser proposed real-time recurrent learning for RNNs in 1989 [28].

    CNN-RNN-Att: an attention mechanism was added to the previous model. The attention mechanism was published by the Google DeepMind team in 2014 [29]. In 2017, the article "Attention is All You Need" was published by the Google Machine Translation team, in which the self-attention mechanism was used extensively to learn text representations.

    Comparing the other models with the RCLN model shows that the model based on the multi-attention mechanism is superior to similar models in terms of BLEU, METEOR and ROUGE, indicating the effectiveness of the multi-attention mechanism for medical report generation [30]. The scores of the RCLN model were much higher than those of the CNN-RNN series and higher than the MRNA model, proving its effectiveness. Some statements in reports generated by other models are continuous but not coherent. In contrast, the model proposed in this paper is more coherent in context and more colloquial.

    This paper mainly focuses on generating detailed findings for chest radiograph medical reports. For impression generation, classification-based methods may be better at distinguishing anomalies and then drawing final conclusions. From the results, the findings and impressions in the first line are consistent with the actual situation, whereas those generated in the second line leave out some abnormality descriptions. The main reason may be that the model was trained on a small training set with few abnormal samples, together with inconsistencies caused by real noise in the original reports. Furthermore, the current model does not create high-quality new sentences that never appear in the training set. The reason may be that it is difficult to learn correct grammar from a small corpus, because syntactic correctness is not considered in the training objective function.

    In conclusion, it is believed that with more controlled datasets and better noise-reduction preprocessing, better results will appear [31]. At the same time, multiple loop-processing statements can increase depth and make the results more accurate. In the data labeling process, adding more high-quality sentences is expected to effectively enhance the quality of the results.

    The research is supported by the National Natural Science Foundation of China (No.12105120, No.72174079, No.72101045), Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No.19KJB520004, No.21KJB520033), Jiangsu Province "333" project (BRA2020261), Jiangsu Qinglan Project, Lianyungang "521 project", Science and Technology project of Lianyungang High-tech Zone (No.ZD201912).

    The authors declare that there is no conflict of interest.



    [1] Acero FJ, Parey S, Garcia JA, et al. (2018) Return level estimation of extreme rainfall over the Iberian Peninsula: Comparison of methods. Water 10: 179. https://doi.org/10.3390/w10020179
    [2] Balkema AA, de Haan L (1974) Residual life time at great age. Ann Probab 2: 792–804. https://www.jstor.org/stable/2959306
    [3] Bokusheva R (2014) Improving the effectiveness of weather-based insurance: An application of copula approach. MPRA Paper 62339, University Library of Munich, Germany. https://mpra.ub.uni-muenchen.de/62339/
    [4] Bouriaux S, MacMinn R (2009) Securitization of catastrophe risk: New developments in insurance-linked securities and derivatives. J Insur Iss 32: 1–34. http://www.jstor.org/stable/41946289
    [5] Braun A (2011) Pricing catastrophe swaps: A contingent claims approach. Insur Math Econ 49: 520–536. https://doi.org/10.1016/j.insmatheco.2011.08.003
    [6] Cai Y, Cai J, Xu L, et al. (2019) Integrated risk analysis of water-energy nexus systems based on systems dynamics, orthogonal design and copula analysis. Renew Sust Energ Rev 99: 125–137. https://doi.org/10.1016/j.rser.2018.10.001
    [7] Chao W (2021) Valuing multirisk catastrophe reinsurance based on the Cox-Ingersoll-Ross (CIR) model. Discrete Dyn Nat Soc 2021: 8818486. https://doi.org/10.1155/2021/8818486
    [8] Chao W, Zou HW (2018) Multiple-event catastrophe bond pricing based on CIR-Copula-POT model. Discrete Dyn Nat Soc 2018: 5068480. https://doi.org/10.1155/2018/5068480
    [9] Chebbi A, Hedhli A (2020) Revisiting the accuracy of standard VaR methods for risk assessment: Using the copula-EVT multidimensional approach for stock markets in the MENA region. Q Rev Econ Financ. https://doi.org/10.1016/j.qref.2020.09.005
    [10] Chen JF, Liu GY, Yang L, et al. (2013) Pricing and simulation for extreme flood catastrophe bonds. Water Resour Manag 27: 3713–3725. https://doi.org/10.1007/s11269-013-0376-2
    [11] Chukwudum QC, Mwita P, Mung'atu JK (2020) Optimal threshold determination based on the mean excess plot. Commun Stat-Theor M 49: 5948–5963. https://doi.org/10.1080/03610926.2019.1624772
    [12] Cox SH, Pedersen HW (2000) Catastrophe risk bonds. N Am Actuar J 4: 56–82. https://doi.org/10.1080/10920277.2000.10595938
    [13] Cox JC, Ingersoll JE, Ross SA (1985) A theory of the term structure of interest rates. Econometrica 53: 385–407. https://doi.org/10.2307/1911242
    [14] Cummins JD, Weiss MA (2009) Convergence of insurance and financial markets: Hybrid and securitized risk-transfer solutions. J Risk Insur 76: 493–545. https://doi.org/10.1111/j.1539-6975.2009.01311.x
    [15] Deng GQ, Liu SQ, Deng CS (2020) Research on the pricing of global drought catastrophe bonds. Math Probl Eng 2020: 3898191. https://doi.org/10.1155/2020/3898191
    [16] Frees EW, Valdez EA (1998) Understanding relationships using copulas. N Am Actuar J 2: 1–25. https://doi.org/10.1080/10920277.1998.10595667
    [17] Gu YK, Fan CJ, Liang LQ, et al. (2019) Reliability calculation method based on the copula function for mechanical systems with dependent failure. Ann Oper Res 311: 99–116. https://doi.org/10.1007/s10479-019-03202-5
    [18] Kurniawan H, Putri ER, Imron C, et al. (2021) Monte Carlo method to valuate CAT bonds of flood in Surabaya under jump diffusion process. J Phys Conf Ser 1821: 012026. https://doi.org/10.1088/1742-6596/1821/1/012026
    [19] Lee JP, Yu MT (2002) Pricing default-risky CAT bonds with moral hazard and basis risk. J Risk Insur 69: 25–44. https://doi.org/10.1111/1539-6975.00003
    [20] Lee JP, Yu MT (2007) Valuation of catastrophe reinsurance with catastrophe bonds. Insur Math Econ 41: 264–278. https://doi.org/10.1016/j.insmatheco.2006.11.003
    [21] Litzenberger RH, Beaglehole DR, Reynolds CE (1996) Assessing catastrophe reinsurance-linked securities as a new asset class. J Portfolio Manage 23: 76–86. https://doi.org/10.3905/jpm.1996.076
    [22] Liu XH, Meng SW, Li ZX (2019) Copula-mixed distribution model and its application in modeling earthquake loss in China. Syst Eng Theor Pract 39: 1855–1866. https://doi.org/10.12011/1000-6788-2017-2116-12
    [23] Lo CL, Lee JP, Yu MT (2013) Valuation of insurers' contingent capital with counterparty risk and price endogeneity. J Bank Financ 37: 5025–5035. https://doi.org/10.1016/j.jbankfin.2013.09.007
    [24] Ma N, Bai YB, Meng SW (2021) Return period evaluation of the largest possible earthquake magnitudes in mainland China based on extreme value theory. Sensors 21: 3519. https://doi.org/10.3390/s21103519
    [25] Ma ZG, Ma CQ (2013) Pricing catastrophe risk bonds: A mixed approximation method. Insur Math Econ 52: 243–254. https://doi.org/10.1016/j.insmatheco.2012.12.007
    [26] Ma ZG, Ma CQ, Xiao SS (2017) Pricing zero-coupon catastrophe bonds using EVT with doubly stochastic Poisson arrivals. Discrete Dyn Nat Soc 2017: 3279647. https://doi.org/10.1155/2017/3279647
    [27] McNeil AJ, Frey R (2000) Estimation of tail-related risk measures for heteroscedastic financial time series: An extreme value approach. J Empir Financ 7: 271–300. https://doi.org/10.1016/S0927-5398(00)00012-8
    [28] Merton RC (1976) Option prices when underlying stock returns are discontinuous. J Financ Econ 3: 125–144. https://doi.org/10.1016/0304-405X(76)90022-2
    [29] Mousavi M, Akkar S, Erdik M (2019) A candidate proxy to be used in intensity-based triggering mechanism for parametric CAT-bond insurance: Istanbul case study. Earthq Spectra 35: 565–588. https://doi.org/10.1193/081018EQS201M
    [30] Nowak P, Romaniuk M (2013) Pricing and simulations of catastrophe bonds. Insur Math Econ 52: 18–28. https://doi.org/10.1016/j.insmatheco.2012.10.006
    [31] Pickands J (1975) Statistical inference using extreme order statistics. Ann Stat 3: 119–131. https://www.jstor.org/stable/2958083
    [32] Reshetar G (2008) Pricing of multiple-event coupon paying CAT bond. SSRN Electron J. https://doi.org/10.2139/ssrn.1059021
    [33] Romaniuk M (2017) Analysis of the insurance portfolio with an embedded catastrophe bond in a case of uncertain parameter of the insurer's share. Adv Intel Syst Comput 524: 33–43. https://doi.org/10.1007/978-3-319-46592-0_3
    [34] Shao J, Pantelous A, Papaioannou AD (2015) Catastrophe risk bonds with applications to earthquakes. Eur Actuar J 5: 113–138. https://doi.org/10.1007/s13385-015-0104-9
    [35] Shen L, Zhang Y, Zhuang X, et al. (2018) Reliability modeling for gear door lock system with dependent failures based on copula. ASME J Risk Uncertainty Part B 4: 041003. https://doi.org/10.1115/1.4039941
    [36] Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges. Publ Inst Statist Univ Paris 8: 229–231.
    [37] Smack L (2016) Catastrophe bonds: Regulating a growing asset class. Risk Manage Insur Rev 19: 105–125. https://doi.org/10.1111/rmir.12057
    [38] Swiss Re (2020) Natural catastrophes in 2020: Secondary perils in the spotlight, but don't forget primary-peril risks. Sigma 1/2021, Zurich, Switzerland. Available from: https://www.swissre.com/institute/research/sigma-research/sigma-2021-01.html
    [39] Tao Z (2011) Zero-beta characteristic of CAT bonds. 2011 Fourth International Conference on Business Intelligence and Financial Engineering, 641–644. https://doi.ieeecomputersociety.org/10.1109/BIFE.2011.159
    [40] Woo G (2004) A catastrophe bond niche: Multiple event risk. Working Paper, NBER Insurance Group Workshop, Cambridge, UK. Available from: https://conference.nber.org/confer/2004/insw04/woo.pdf
    [41] Xu LY, Wang HM, Chen JF (2013) Research of drought disaster risk assessment based on copula-EVT model. Appl Stat Manage 32: 284–294.
    [42] Yao CZ, Sun BY, Lin JN (2017) A study of correlation between investor sentiment and stock market based on copula model. Kybernetes 46: 550–571. https://doi.org/10.1108/K-10-2016-0297
    [43] Zhang XL, Tsai CCL (2018) The optimal write-down coefficients in a percentage for a catastrophe bond. N Am Actuar J 22: 1–21. https://doi.org/10.1080/10920277.2017.1283236
    [44] Zimbidis AA, Frangos NE, Pantelous AA (2007) Modeling earthquake risk via extreme value theory and pricing the respective catastrophe bonds. Astin Bull 37: 163–183. https://doi.org/10.1017/S0515036100014793
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
