
This article presents a method to calibrate a 16-channel 40 GS/s time-interleaved analog-to-digital converter (TI-ADC) based on channel equalization and the Monte Carlo method. First, the channel mismatch is estimated by the Monte Carlo method; each channel is then equalized to meet the calibration requirement. The method requires no additional hardware circuits, and every channel can be compensated. The calibration structure is simple and converges quickly; moreover, the ADC operates in background mode, so conversion is not interrupted. The prototype, implemented in 28 nm CMOS, reaches a 41 dB SFDR with a 1.2 GHz, 5 dBm input signal after the proposed background offset and gain mismatch calibration. Compared with previous works, the spurious-free dynamic range (SFDR) and effective number of bits (ENOB) are better, the estimation accuracy is higher, the error is smaller, and the faster convergence improves the efficiency of signal processing.
Citation: Yongjie Zhao, Sida Li, Zhiping Huang. TI-ADC multi-channel mismatch estimation and calibration in ultra-high-speed optical signal acquisition system[J]. Mathematical Biosciences and Engineering, 2021, 18(6): 9050-9075. doi: 10.3934/mbe.2021446
Reinsurance is an effective risk management tool for an insurer to mitigate the underwriting risk by transferring part of the risk exposure to a reinsurer. Starting from [1,2], the study of optimal reinsurance has remained a fascinating topic in actuarial science. Most of the existing literature on optimal reinsurance takes the insurer's point of view. For example, by maximizing the expected concave utility function of an insurer's wealth, Arrow [3] showed that the optimal reinsurance for an insurer is a stop-loss reinsurance. The result has been extended to different settings (see, e.g., [4,5] and references therein). It is well known that the optimal reinsurance for an insurer, which minimizes the variance of the insurer's loss, is also a stop-loss reinsurance (see [6]). However, Vajda [7] showed that the optimal reinsurance for a reinsurer, which minimizes the variance of the reinsurer's loss with a fixed net reinsurance premium, is a quota-share reinsurance among a class of ceded loss functions that includes stop-loss reinsurance. Kaluszka and Okolewski [8] showed that if an insurer wants to maximize his expected utility with the maximal possible claim premium principle, the optimal form of reinsurance for the insurer is a limited stop-loss reinsurance. In recent years, Cai et al. [9,10] introduced two classes of optimal reinsurance models by minimizing the value-at-risk (VaR) and the conditional tail expectation (CTE) of the insurer's total risk exposure. Cai et al. [10] proved that, depending on the confidence level of the risk measure, the optimal reinsurance for an insurer, which minimizes the VaR and CTE of the total risk of the insurer, can take the form of a stop-loss reinsurance, a quota-share reinsurance, or a change-loss reinsurance under the expected value principle and among the increasing convex ceded loss functions. Recent references on VaR-minimization and CTE-minimization reinsurance models can be found in [11,12,13,14,15,16,17] and references therein.
However, a reinsurance contract involves two parties, an insurer and a reinsurer, whose interests conflict. As pointed out by Borch [18], a reinsurance contract that is optimal for an insurer may not be optimal for a reinsurer and might even be unacceptable to it. Therefore, an interesting question about optimal reinsurance is how to design a reinsurance contract that considers the interests of both the insurer and the reinsurer. Borch [1] first discussed the optimal quota-share retention and stop-loss retention that maximize the product of the expected utility functions of the two parties' wealth. Cai et al. derived the optimal reinsurance contracts that maximize the joint survival probability and joint profitable probability of the two parties, and gave sufficient conditions for optimal reinsurance contracts within a wide class of reinsurance policies and under a general reinsurance premium principle, see [19,20]. Cai et al. [21] studied optimal reinsurance strategies that minimize a convex combination of the VaRs of the insurer and the reinsurer under two types of constraints. Lo [22] discussed generalized versions of the problems in [21] by using the Neyman-Pearson approach. Based on the optimal reinsurance strategy of [21], Jiang et al. [23] proved that the optimal reinsurance strategy is a Pareto-optimal reinsurance policy and gave optimal reinsurance strategies using a geometric method. Cai et al. [24] studied Pareto optimality of reinsurance arrangements under general model settings and obtained the explicit forms of the Pareto-optimal reinsurance contracts under the TVaR risk measure and the expected value premium principle. Using a geometric approach, Fang et al. [25] studied Pareto-optimal reinsurance policies under general premium principles and gave the explicit parameters of the optimal ceded loss functions under the Dutch premium principle and Wang's premium principle.
Lo and Tang [26] characterized the set of Pareto-optimal reinsurance policies analytically and visualized the insurer-reinsurer trade-off structure geometrically. Huang and Yin [27] studied two classes of optimal reinsurance models from the perspectives of both insurers and reinsurers by minimizing their convex combination, where the risk is measured by a distortion risk measure and the premium is given by a distortion premium principle.
In this paper, we study optimal reinsurance models that minimize the insurer's and the reinsurer's total costs under a loss-function criterion, assuming that the reinsurance premium principles satisfy risk loading and stop-loss ordering preserving. The loss function is defined by the joint VaR based on the binary lower-orthant value-at-risk and the binary upper-orthant value-at-risk proposed by Embrechts and Puccetti [28]. Methodologically, we determine the optimal reinsurance forms using the geometric approach of [11] over three ceded loss function sets: the class of increasing convex ceded loss functions, the class of ceded loss functions for which both the ceded and the retained loss functions are increasing, and the class of increasing concave ceded loss functions.1
1 Throughout this paper, the terms "increasing function" and "decreasing function" mean "non-decreasing function" and "non-increasing function", respectively.
The rest of the paper is organized as follows. In Section 2, we give definitions and propose an optimal reinsurance problem that takes into consideration the interests of both an insurer and a reinsurer. In Section 3, we derive optimal reinsurance forms over three ceded loss function sets by the geometric approach of [11], assuming that the reinsurance premium principles satisfy risk loading and stop-loss ordering preserving. In Section 4 and Section 5, we determine the corresponding optimal parameters under expectation premium principle and Dutch premium principle respectively. In Section 6, we provide four numerical examples. Conclusions are given in Section 7.
Let X be the loss or claim initially assumed by an insurer in a fixed time period. We assume that X is a nonnegative random variable with distribution function F(x)=P{X≤x}, survival function S(x)=P{X>x} and mean μ=E(X) (0<μ<∞). Under a reinsurance contract, a reinsurer will cover part of the loss, say f(X) with 0≤f(X)≤X, and the insurer will retain the rest of the loss, which is denoted by If(X)=X−f(X). The losses If(X) and f(X) are called the retained loss and ceded loss, respectively. Since the reinsurer shares the risk X, the insurer will pay an additional cost in the form of a reinsurance premium to the reinsurer. We denote by Πf(X) the reinsurance premium corresponding to a ceded loss function f(X). The total cost TfI of the insurer is composed of two components, the retained loss If(X) and the reinsurance premium Πf(X), that is
TfI=If(X)+Πf(X), | (2.1) |
and the total cost of the reinsurer is
TfR=f(X). | (2.2) |
For an individual company, an important issue is to determine the maximum aggregate loss that can occur with a given probability; value-at-risk (VaR) serves this purpose.
Definition 2.1. For 0<α<1, the VaR of a non-negative random variable X with distribution function F(x)=P{X≤x} at confidence level α is defined as
VaRX(α)=inf{x∈R:F(x)≥α}=F−1(α), | (2.3) |
where, F−1 is the generalized inverse function of the distribution function F(x).
The VaR defined by (2.3) is the maximum loss which is not exceeded at a given probability α. We list several properties of the VaR or the generalized inverse function F−1.
Proposition 2.1. For any α∈(0,1) and any nonnegative random variable X with distribution function F(x), the following properties hold:
(1) F(F−1(α))≥α.
(2) F−1(F(x))≤x for x≥0.
(3) If h is an increasing and left-continuous function, then VaRh(X)(α)=h(VaRX(α)).
Proof. Properties (1) and (2) follow immediately from Lemma 2.13 of [29] and the definition of the generalized inverse function, while for property (3), see the proof of Theorem 1 in [30].
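As a quick numerical illustration (not part of the paper's formal development), the generalized inverse F−1 and property (3) can be checked on simulated data. The exponential loss model, the rate parameter and the sample size below are illustrative assumptions:

```python
import math
import random

random.seed(0)
lam, alpha = 0.5, 0.95
n = 200_000
xs = sorted(random.expovariate(lam) for _ in range(n))

# Empirical generalized inverse F^{-1}(alpha): the smallest sample
# value x with empirical F(x) >= alpha (Definition 2.1).
def var_empirical(sorted_sample, a):
    k = math.ceil(a * len(sorted_sample))  # smallest k with k/n >= a
    return sorted_sample[k - 1]

v_emp = var_empirical(xs, alpha)
v_true = -math.log(1.0 - alpha) / lam      # closed-form VaR for Exp(lam)

# Property (3): for an increasing, left-continuous h,
# VaR_{h(X)}(alpha) = h(VaR_X(alpha)).
h = lambda t: 2.0 * t + 1.0
v_h = var_empirical(sorted(h(x) for x in xs), alpha)

print(round(v_emp, 2), round(v_true, 2))
```

Since h is monotone, sorting commutes with applying h, so the identity in property (3) holds exactly on the sample, while the empirical VaR converges to the closed-form value.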
In this paper, we assume that the initial loss X has a continuous and strictly increasing distribution function on (0,∞) with a possible mass at 0 and α∈(F(0),1) to avoid trivial cases, then
F(F−1(α))=α. | (2.4) |
For the insurer or the reinsurer, Definition 2.1 can be used to determine the maximum aggregate cost that can occur with a given probability α. However, if the insurer and the reinsurer are considered as partners, then the total cost Tf is a two-dimensional random vector (TfI,TfR). For this case, Definition 2.1 does not make sense since, even for a one-to-one continuous distribution function, there are possibly infinitely many vectors (x,y)∈[0,∞)×[0,∞) at which Gf(x,y)=α, where
Gf(x,y)=P{TfI≤x, TfR≤y} |
is the distribution function of (TfI,TfR). Hence we use the definition of multivariate value-at-risk proposed by Embrechts and Puccetti (see [28]).
Definition 2.2. For α∈(0,1), the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) is the boundary of its α-level set, defined as
VaR_f(α):=∂{(x,y)∈R2+:Gf(x,y)≥α}. |
Analogously, the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) is defined as
¯VaRf(α):=∂{(x,y)∈R2+:¯Gf(x,y)≤1−α}, |
where
¯Gf(x,y)=P{TfI>x, TfR>y}. |
We now provide further analysis on the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) and the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) over the following three admissible sets of ceded loss functions:
F1≜{0⩽f(x)⩽x:f(x) is an increasing convex function }, | (2.5) |
F2≜{0⩽f(x)⩽x: both If(x) and f(x) are increasing functions }, | (2.6) |
F3≜{0⩽f(x)⩽x:f(x) is an increasing concave function }. | (2.7) |
In the set F2, the increasing condition on both the ceded and retained loss functions is interesting and important. Both the insurer and the reinsurer are obligated to pay more for a larger loss X, which potentially reduces moral hazard. In addition, in a reinsurance contract, sometimes in order to better protect the insurer, the loss proportion paid by the reinsurer is made to increase in the loss (see [7]). Mathematically, f(x)/x is assumed to be an increasing function; if f(x) is increasing and convex, then f(x)/x is increasing. On the other hand, under reinsurance policies with no upper limit on the indemnity, the reinsurer may face a heavy financial burden, especially when the insurer suffers a large unexpected loss. Therefore, reinsurance contracts in practice sometimes involve an upper limit on the indemnity, and in such situations the ceded loss function needs to be concave rather than convex. Motivated by these observations, we consider ceded loss functions in the sets F1 and F3.
Note that F1⊂F2 (see [12]) and F3⊂F2. In addition, if f∈Fi,i=1,2,3, then If and f are increasing and continuous. Thus, from Proposition 2.1, we have
VaRTfI(α)=If(VaRX(α))+Πf(X), | (2.8) |
VaRTfR(α)=f(VaRX(α)). | (2.9) |
Based on the above analysis, we obtain the following theorem.
Theorem 2.1. For α∈(0,1), the binary lower-orthant value-at-risk at confidence level α for the distribution function Gf(x,y) is
VaR_f(α)=∂{(x,y)∈R2+:x≥VaRTfI(α) and y≥VaRTfR(α)}, |
and the binary upper-orthant value-at-risk at confidence level α for the tail function ¯Gf(x,y) is
¯VaRf(α)=∂{(x,y)∈R2+:x≥VaRTfI(α) or y≥VaRTfR(α)}. |
Proof. Let S1={(x,y)∈R2+:Gf(x,y)≥α} and S2={(x,y)∈R2+:x≥VaRTfI(α) and y≥VaRTfR(α)}. First, it is easy to see that S1⊆S2. Second, note that
Gf(VaRTfI(α),VaRTfR(α))=P{TfI≤VaRTfI(α), TfR≤VaRTfR(α)}=P{If(X)≤If(VaRX(α)), f(X)≤f(VaRX(α))}≥P{X≤VaRX(α)}=α,
then for any (x,y)∈S2, we have Gf(x,y)≥Gf(VaRTfI(α),VaRTfR(α))≥α, thus we get S2⊆S1.
Similarly, let D1={(x,y)∈R2+:¯Gf(x,y)≤1−α} and D2={(x,y)∈R2+:x≥VaRTfI(α) or y≥VaRTfR(α)}. For any (x,y)∈D2, if x≥VaRTfI(α), then
¯Gf(x,y)=P{TfI>x, TfR>y}≤P{TfI>x}≤P{TfI>VaRTfI(α)}≤1−α. | (2.10) |
By the same arguments, we know that if y≥VaRTfR(α), then ¯Gf(x,y)≤1−α holds as well. Hence, D2⊆D1.
On the other hand, for any (x,y)∈¯D2, we have x<VaRTfI(α) and y<VaRTfR(α). Since TfI and TfR are co-monotonic, we have
¯Gf(x,y)=P{TfI>x, TfR>y}=min{P{TfI>x},P{TfR>y}}. | (2.11) |
Notice that for any random variable Y, if y<VaRY(α), we get P{Y≤y}<α. (Otherwise, suppose P{Y≤y}≥α, then from the definition of VaR, we get y≥VaRY(α).) Then, we have
P{TfI>x}>1−α and P{TfR>y}>1−α | (2.12) |
which implies ¯Gf(x,y)>1−α. Therefore, we have ¯D2⊆¯D1, and hence D2=D1.
The binary lower-orthant value-at-risk VaR_f(α) and the binary upper-orthant value-at-risk ¯VaRf(α) are illustrated in Figures 1 and 2.
Note that the joint VaR (VaRTfI(α),VaRTfR(α)) determines VaR_f(α) and ¯VaRf(α). From the point of view of both the insurer and the reinsurer, the smaller the maximum aggregate cost Tf that can occur with a given probability, the better; that is, the closer VaR_f(α) and ¯VaRf(α) are to the origin, the better. This motivates us to consider the loss function
L(f)=√[VaRTfI(α)]2+[VaRTfR(α)]2, | (2.13) |
and the optimization criteria for seeking the optimal reinsurance contract:
f∗=argminf L(f). | (2.14) |
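To make the criterion concrete, the following sketch evaluates the loss function (2.13) via formulas (2.8)-(2.9) for a quota-share contract f(x)=bx. The expectation premium principle (introduced in Section 4), the exponential loss model and all parameter values are illustrative assumptions, not part of the model specification here:

```python
import math

lam, alpha, theta = 0.5, 0.95, 0.2
var_x = -math.log(1.0 - alpha) / lam   # VaR_X(alpha) for an Exp(lam) loss
mean_x = 1.0 / lam

# Quota-share f(x) = b*x is increasing and continuous, so (2.8)-(2.9) apply.
b = 0.5
premium = (1.0 + theta) * b * mean_x   # expectation principle: (1+theta)E[f(X)]
var_ti = var_x - b * var_x + premium   # VaR of the insurer's total cost, Eq. (2.8)
var_tr = b * var_x                     # VaR of the reinsurer's total cost, Eq. (2.9)

loss = math.hypot(var_ti, var_tr)      # loss function (2.13): distance to the origin
print(round(var_ti, 3), round(var_tr, 3), round(loss, 3))
```

Different ceded loss functions move the joint VaR point (var_ti, var_tr) in the plane; the optimization (2.14) picks the contract whose point is closest to the origin.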
In the rest of the paper, we will derive the optimal solutions corresponding to the reinsurance model (2.14) under the admissible ceded loss function sets Fi,i=1,2,3.
In this section, we consider the general reinsurance premium principles which satisfy the following two properties:
1. Risk loading: Π(X)≥E[X];
2. Stop-loss ordering preserving: Π(Y)≤Π(X) if Y is smaller than X in the stop-loss order (Y≤slX).2
2 A random variable Y is said to be smaller than a random variable X in the stop-loss order sense, notation Y≤slX, if and only if Y has lower stop-loss premiums than X: E(Y−d)+≤E(X−d)+, −∞<d<+∞.
We emphasize that many premium principles satisfy these two properties, such as the expectation principle, p-mean value principle, Dutch principle, Wang's principle and exponential principle.
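The risk-loading property can be checked numerically for two of these principles. The sketch below uses one common parametrization of the Dutch principle, Π(X)=E[X]+θE[(X−E[X])+]; this form, the exponential sample and the value of θ are illustrative assumptions:

```python
import random

random.seed(1)
theta = 0.2
x = [random.expovariate(0.5) for _ in range(500_000)]  # Exp loss with mean 2
m = sum(x) / len(x)                                    # Monte Carlo E[X]

# Expectation principle: Pi(X) = (1 + theta) * E[X]
prem_expectation = (1.0 + theta) * m

# Dutch principle (one common form): Pi(X) = E[X] + theta * E[(X - E[X])_+]
prem_dutch = m + theta * sum(max(v - m, 0.0) for v in x) / len(x)

# Risk loading: both premiums exceed the pure premium E[X].
print(prem_expectation > m, prem_dutch > m)
```

Both premiums strictly exceed E[X] whenever θ>0 and the loss is non-degenerate, which is exactly the risk-loading property stated above.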
In this subsection, we derive the optimal reinsurance policies under the condition that the ceded loss function f∈F1. First, we define a ceded loss function set H1, which consists of all ceded loss functions h(x)=b(x−d)+ with 0≤b≤1 and d≥0. Note that H1 is a subclass of F1. Second, we show that the optimal ceded loss functions which minimize the loss function in the subclass H1 also optimally minimize the loss function in F1. We give the following proposition using the geometric method proposed by [11].
Proposition 3.1. For any f∈F1, there always exists a function h∈H1 such that L(h)≤L(f).
Proof. If f∈F1 is identically zero on [0,VaRX(α)], we consider h:=0∈H1. It is easy to see that h(X)≤f(X) in the usual stochastic order. It further leads to h(X)≤slf(X) according to the theory of stochastic orders in [31]. Then we have Πh(X)≤Πf(X). Consequently, from formulas (2.8) and (2.9), we obtain
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=0=h(VaRX(α))=VaRThR(α). |
Hence we have L(h)≤L(f).
If f∈F1 is not identically zero on [0,VaRX(α)], let f′−(VaRX(α)) and f′+(VaRX(α)) be the left-hand and right-hand derivatives of f at VaRX(α). Let b be any number in [f′−(VaRX(α)),f′+(VaRX(α))]; then 0<b≤1. Let d=VaRX(α)−f(VaRX(α))/b and define h(x)=b(x−d)+, x≥0. Then h∈H1, f(VaRX(α))=h(VaRX(α)) and f(x)≥h(x) for all x≥0 since f is convex. Hence we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Therefore, L(h)≤L(f) holds. The geometric interpretation of this proof can be seen from Figure 3.
Based on Proposition 3.1, we know that change-loss reinsurance of the form f(x)=b(x−d)+ with 0≤b≤1 and d≥0 is optimal in F1 in the sense that it minimizes the loss function L(f). The optimal parameters b∗ and d∗ will be given under specific reinsurance premium principles in the following sections.
In this subsection, we focus on the loss function minimization model for any ceded loss function f∈F2. As shown in [12], the ceded loss function f∈F2 is Lipschitz continuous, i.e.,
0⩽f(x2)−f(x1)⩽x2−x1,∀0⩽x1⩽x2. |
Let H2 denote the class of ceded loss functions with the representation h(x)=(x−a)+−(x−VaRX(α))+, a⩽VaRX(α). It is easy to see that H2 is a subclass of F2. In fact, h(x) is a layer reinsurance with deductible a and upper limit VaRX(α). We will prove that the optimal functions which minimize the loss function in the subclass H2 also minimize the loss function in F2.
Proposition 3.2. Let f∈F2 be a ceded function. There always exists a function h∈H2 such that L(h)⩽L(f).
Proof. For any f∈F2, define a=VaRX(α)−f(VaRX(α))⩾0 and h(x)=(x−a)+−(x−VaRX(α))+=min{(x−(VaRX(α)−f(VaRX(α))))+, f(VaRX(α))}, x⩾0. Then we have h∈H2 and f(VaRX(α))=h(VaRX(α)).
Furthermore, recall that the ceded loss function f∈F2 is non-negative and Lipschitz continuous, hence inequality f(x)⩾(x+f(VaRX(α))−VaRX(α))+ holds for x∈[0,VaRX(α)]. On the other hand, the increasing property of f(x) leads to h(x)=f(VaRX(α))⩽f(x) for all x>VaRX(α). Thus, inequality h(x)⩽f(x) holds for all x⩾0. Since the reinsurance premium preserves stop-loss order, we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Thus, L(h)≤L(f) holds. The geometric interpretation of this proof can be seen from Figure 4.
By Proposition 3.2, we know that layer reinsurance with deductible a and upper limit VaRX(α) is optimal among F2 in the sense that it minimizes the loss function L(f).
In this subsection, we derive the optimal solution to problem (2.14) over F3. Let H3 be the class of non-negative functions h(x) defined on [0,∞) with
h(x)=c(x−(x−VaRX(α))+), | (3.1) |
where 0⩽c⩽1. Note that H3⊂F3 and H3 contains the null function h(x)=0. The following result shows that the optimal ceded loss functions in F3 which minimize L(f) must take the form of (3.1).
Proposition 3.3. For any f∈F3, there always exists a function h∈H3, such that L(h)⩽L(f).
Proof. For any f∈F3, let c=f(VaRX(α))/VaRX(α), then c∈[0,1]. Define h(x)=c(x−(x−VaRX(α))+); obviously h∈H3 and h(VaRX(α))=f(VaRX(α)).
In addition, recall that the ceded loss function f∈F3 is increasing and concave, hence f(x)⩾[f(VaRX(α))/VaRX(α)]x=h(x) for x∈[0,VaRX(α)]. On the other hand, the increasing property of f(x) leads to h(x)=f(VaRX(α))⩽f(x) for x>VaRX(α). Since the reinsurance premium preserves stop-loss order, we have
VaRTfI(α)=VaRX(α)−f(VaRX(α))+Πf(X)≥VaRX(α)−h(VaRX(α))+Πh(X)=VaRThI(α), |
and
VaRTfR(α)=f(VaRX(α))=h(VaRX(α))=VaRThR(α). |
Thus, we have L(h)⩽L(f). The geometric interpretation of this proof can be seen from Figure 5.
From Proposition 3.3, we know that the quota-share reinsurance with a policy limit is always optimal among F3 in the sense that it minimizes the loss function L(f).
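As a numerical sanity check of Proposition 3.3 (not part of the paper), the sketch below takes an illustrative concave ceded function f(x)=x/(1+x), builds h∈H3 as in the proof, and compares L(h) with L(f) under the expectation premium principle of Section 4 with an exponential loss; all parameter choices are assumptions for illustration:

```python
import math
import random

random.seed(7)
lam, alpha, theta = 1.0, 0.95, 0.3
var_x = -math.log(1.0 - alpha) / lam     # VaR_X(alpha) for Exp(lam)

f = lambda x: x / (1.0 + x)              # increasing, concave, 0 <= f(x) <= x
c = f(var_x) / var_x                     # c as in the proof of Proposition 3.3
h = lambda x: c * min(x, var_x)          # h(x) = c(x - (x - VaR_X(alpha))_+)

# Monte Carlo premiums under the expectation principle, same sample for both
# contracts so the pointwise ordering h <= f carries over to the premiums.
sample = [random.expovariate(lam) for _ in range(200_000)]
prem_f = (1.0 + theta) * sum(map(f, sample)) / len(sample)
prem_h = (1.0 + theta) * sum(map(h, sample)) / len(sample)

def loss(ceded_at_var, prem):
    # L via (2.8), (2.9) and (2.13)
    ti = var_x - ceded_at_var + prem
    tr = ceded_at_var
    return math.hypot(ti, tr)

l_f = loss(f(var_x), prem_f)
l_h = loss(h(var_x), prem_h)
print(l_h <= l_f)
```

Since h and f agree at VaRX(α) while h lies below f everywhere, h carries a smaller premium and hence a loss value no larger than that of f, exactly as the proposition asserts.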
In this section, we consider the expectation reinsurance premium principle, i.e.,
Πf(X)=(1+θ)E[f(X)], | (4.1) |
where, θ>0 is the safety loading.
As a result of Proposition 3.1, we can deduce optimal ceded loss functions by confining attention to H1. For a change-loss reinsurance with b∈[0,1] and d∈[0,∞), the total costs of the insurer and the reinsurer are
Tb,dI=X−b(X−d)++ΠE(b,d), |
Tb,dR=b(X−d)+, |
where ΠE(b,d)=(1+θ)E[b(X−d)+]=(1+θ)b∫_d^∞ S(x)dx is the reinsurance premium. Then the VaRs of Tb,dI and Tb,dR at confidence level α are
VaRTb,dI(α)=VaRX(α)−b(VaRX(α)−d)++ΠE(b,d), | (4.2) |
VaRTb,dR(α)=b(VaRX(α)−d)+. | (4.3) |
Hence, the loss function is
LE(b,d) = √{[(1−b)VaRX(α)+bd+ΠE(b,d)]²+[b(VaRX(α)−d)]²} if d≤VaRX(α), and LE(b,d) = VaRX(α)+ΠE(b,d) if d>VaRX(α). | (4.4) |
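The piecewise loss (4.4) is easy to evaluate once a loss distribution is fixed. A minimal sketch for an exponential loss, where the tail integral ∫_d^∞ S(x)dx has the closed form e^(−λd)/λ; the distribution and parameter values are illustrative assumptions:

```python
import math

lam, alpha, theta = 1.0, 0.95, 0.3
var_x = -math.log(1.0 - alpha) / lam   # VaR_X(alpha) for Exp(lam)

def loss_LE(b, d):
    # L_E(b,d) from (4.4); for Exp(lam), int_d^inf S(x)dx = exp(-lam*d)/lam.
    prem = (1.0 + theta) * b * math.exp(-lam * d) / lam
    if d <= var_x:
        ti = (1.0 - b) * var_x + b * d + prem   # insurer's VaR, Eq. (4.2)
        tr = b * (var_x - d)                    # reinsurer's VaR, Eq. (4.3)
        return math.hypot(ti, tr)
    return var_x + prem

# No reinsurance (b = 0) vs. full stop-loss with zero deductible (b = 1, d = 0):
print(round(loss_LE(0.0, 0.0), 3), round(loss_LE(1.0, 0.0), 3))
```

At these illustrative parameters neither corner contract is best, which is why the stationary-point analysis of this section is needed.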
Lemma 4.1. Optimal ceded functions which minimize the loss function LE(b,d) in the class H1 exist.
Proof. Note that the function ΠE(b,d) is increasing with respect to b. Then the loss function LE(b,d) attains its minimum value over [0,1]×(VaRX(α),∞) at b=0 (the ceded function is h(x)≡0) and the minimum value is VaRX(α). Hence, the study of optimal ceded functions which minimize the loss function LE(b,d) in the class H1 reduces to a two-parameter minimization problem over the closed subset [0,1]×[0,VaRX(α)]. Since LE(b,d) is continuous, the minimum of LE(b,d) over [0,1]×[0,VaRX(α)] must be attained at a stationary point or on the boundary.
First, we define A≜[0,1]×[0,VaRX(α)]. In this subsection, we will identify the minimum points of LE(b,d) over A and discuss the optimal ceded function f∗1(x). We split A into five disjoint subsets, A=A1∪A2∪A3∪A4∪A5, where A1={(0,d):0≤d≤VaRX(α)}, A2={(b,d):0<b<1,0<d<VaRX(α)}, A3={(1,d):0<d<VaRX(α)}, A4={(b,0):0<b⩽1} and A5={(b,VaRX(α)):0<b≤1}.
If (b,d)∈A, the loss function is
LE(b,d)=√[(1−b)VaRX(α)+bd+ΠE(b,d)]2+[b(VaRX(α)−d)]2. | (4.5) |
Let HE(b,d)=L2E(b,d)=[(1−b)VaRX(α)+bd+ΠE(b,d)]2+[b(VaRX(α)−d)]2, then HE(b,d) and LE(b,d) have the same minimum points. Thus, we will study the minimization problem of HE(b,d) on A in the rest of this subsection. Note that HE(b,d) is differentiable with partial derivatives
∂HE(b,d)/∂b = 2[(VaRX(α)−g(d))²+(VaRX(α)−d)²]b + 2VaRX(α)(g(d)−VaRX(α)),
∂HE(b,d)/∂d = 2b[(1−b)VaRX(α)+bg(d)]g′(d) − 2b²(VaRX(α)−d), | (4.6) |
where g(d)=d+(1+θ)∫_d^∞ S(x)dx.
Next, we divide the following analysis into five cases.
● First, we demonstrate that HE(b,d) has no minimum points on A5. For any (b,d)∈A5, HE(b,d)>[VaRX(α)]2=HE(0,d)=minA1HE(b,d), then the minimum value of HE(b,d) over A is not attainable in A5.
● The minimum points of HE(b,d) are located in A1 if and only if
min_{d∈[0,VaRX(α)]} g(d) ≥ VaRX(α). | (4.7) |
In fact, if inequality (4.7) holds, then it follows from the expression of ∂HE(b,d)/∂b in (4.6) that ∂HE(b,d)/∂b>0. Thus, HE(b,d) is strictly increasing with respect to b. Furthermore, for any d∈[0,VaRX(α)], HE(0,d)≡[VaRX(α)]². As a result, the minimum value of HE(b,d) over A is attained at any point (0,d) in A1.
Conversely, if min_{d∈[0,VaRX(α)]} g(d)<VaRX(α), then there exists a ˜d∈[0,VaRX(α)] such that ∂HE(b,˜d)/∂b<0 in a right neighborhood of b=0. That is to say, (0,˜d) is not a minimum point of HE(b,d). Since HE(0,d)=HE(0,˜d)≡[VaRX(α)]² for any (0,d)∈A1, no minimum points of HE(b,d) are located in A1.
● If (b∗,d∗)∈A2 is a minimum point of HE(b,d), then (b∗,d∗) is a stationary point of HE(b,d). Therefore, we have
∂HE(b,d)/∂b |_{(b,d)=(b∗,d∗)} = 0 and ∂HE(b,d)/∂d |_{(b,d)=(b∗,d∗)} = 0. | (4.8) |
By straightforward algebra, we know that d∗ is a root of equation q(d)=0, where
q(d)=S(d)(VaRX(α)−d)−∫_d^∞ S(x)dx. | (4.9) |
Substituting d∗ in the second equation of (4.8) yields
b∗ = VaRX(α)g′(d∗) / {VaRX(α)−d∗+[VaRX(α)−g(d∗)]g′(d∗)}. | (4.10) |
Furthermore, b∗ must lie in (0,1), which is equivalent to
p(d∗)>0, | (4.11) |
where, the function p(d) is given by p(d)=VaRX(α)−d−g(d)g′(d).
● If (1,ˉd)∈A3 is a minimum point of HE(b,d), then Fermat's theorem implies
∂HE(1,d)/∂d |_{d=ˉd} = 0 and ∂HE(b,ˉd)/∂b |_{b=1} ≤ 0, | (4.12) |
which is equivalent to
p(ˉd)=0 and g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]²≤0. | (4.13) |
● If (ˉb,0)∈A4 is a minimum point of HE(b,d), then ˉb must satisfy the following conditions
∂HE(b,0)/∂b |_{b=ˉb} = 0 and ∂HE(ˉb,d)/∂d |_{d=0} ≥ 0. | (4.14) |
From (4.14), we obtain
ˉb = VaRX(α)[VaRX(α)−g(0)] / {[VaRX(α)−g(0)]²+[VaRX(α)]²} | (4.15) |
and
[(1−ˉb)VaRX(α)+ˉbg(0)]g′(0)−ˉbVaRX(α)≥0. | (4.16) |
Based on the above arguments, we have analyzed the conditions under which the minimum points of HE(b,d) are located in the sets Ai, i=1,2,3,4. The results are summarized in the following theorem.
Theorem 4.1. The optimal solutions to reinsurance problem (2.14) are given as follows.
(1) If one of the three conditions (C1)–(C3) holds, then the optimal ceded loss function is given by f∗1(x)=0, where d0=S−1(1/(1+θ)) and
(C1): α≤θ/(1+θ); (C2): F(0)<θ/(1+θ)<α and g(d0)≥VaRX(α); (C3): S(0)≤1/(1+θ) and (1+θ)μ≥VaRX(α).
(2) If condition (C4) or (C5) holds, then the optimal ceded loss function is given by f∗1(x)=b∗(x−d∗)+, where d∗ is the unique solution of equation q(d)=0, b∗ is given by (4.10) and
(C4): F(0)<θ/(1+θ)<α, g(d0)<VaRX(α) and p(d∗)>0; (C5): S(0)≤1/(1+θ), μ<S(0)VaRX(α) and p(d∗)>0.
(3) If condition (C6) or (C7) holds, then the optimal ceded loss function is given by f∗1(x)=(x−ˉd)+, where ˉd is the unique solution of equation p(d)=0 and
(C6): F(0)<θ/(1+θ)<α, g(d0)<VaRX(α) and p(d∗)≤0; (C7): S(0)≤1/(1+θ), μ<S(0)VaRX(α) and p(d∗)≤0.
(4) If condition (C8) holds, then the optimal ceded loss function is given by f∗1(x)=ˉbx, where ˉb is given by (4.15) and
(C8): S(0)≤1/(1+θ) and S(0)VaRX(α)≤μ<VaRX(α)/(1+θ).
Proof. (1) If one of the three conditions (C1)–(C3) holds, it is easy to show that
min_{d∈[0,VaRX(α)]} g(d) ≥ VaRX(α).
Then the minimum points of HE(b,d) are located in A1. That is to say, the optimal ceded loss function is f∗1(x)=0.
(2) If condition (C4) holds, then g′(d)<0 for any d∈[0,d0). From the expression of ∂HE(b,d)/∂d in (4.6), we have ∂HE(b,d)/∂d<0 for any (b,d)∈(0,1]×[0,d0]. Thus, the minimum points are not located in [0,1]×[0,d0]. Furthermore, let d1>d0 be such that g(d1)=VaRX(α); from the expression of ∂HE(b,d)/∂b in (4.6), we have ∂HE(b,d)/∂b>0 for any (b,d)∈(0,1]×[d1,VaRX(α)]. Thus, the minimum points are also not located in [0,1]×[d1,VaRX(α)]. As a result, the minimum points of HE(b,d) over A are located in (0,1]×(d0,d1), and the minimum must be attained at some stationary point (b∗,d∗) or lie on the right boundary at some point (1,ˉd). Note that q′(d)=S′(d)(VaRX(α)−d)<0, q(d0)=S(d0)(VaRX(α)−d0)−∫_{d0}^∞ S(x)dx=[VaRX(α)−g(d0)]/(1+θ)>0 and q(d1)=S(d1)(VaRX(α)−d1)−∫_{d1}^∞ S(x)dx=[(1+θ)S(d1)−1]∫_{d1}^∞ S(x)dx<0. Thus, the equation q(d)=0 has a unique solution d∗ in (d0,d1). Substituting d∗ in the second equation of (4.8) yields
b∗ = VaRX(α)g′(d∗) / {VaRX(α)−d∗+[VaRX(α)−g(d∗)]g′(d∗)}.
It is easy to show 0<b∗<1 since p(d∗)>0. Thus HE(b,d) has a unique stationary point (b∗,d∗). In the following, we show that HE(b,d) attains the minimum at the stationary point (b∗,d∗).
Suppose, to the contrary, that the minimum of HE(b,d) is attained at (1,ˉd) when condition (C4) holds. Then we have p(ˉd)=0 and g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]²≤0. Since g′(ˉd)>0, we obtain [g(ˉd)[g(ˉd)−VaRX(α)]+[VaRX(α)−ˉd]²]g′(ˉd)≤0. Straightforward algebra leads to q(ˉd)≥0. Note that q′(d)<0 and q(d∗)=0, so ˉd≤d∗. However, since p′(d)<0, p(d∗)>0 and p(ˉd)=0, we have d∗<ˉd. This is a contradiction. Thus, if condition (C4) holds, the function HE(b,d) attains its minimum at the stationary point (b∗,d∗); that is to say, the optimal ceded loss function is f∗1(x)=b∗(x−d∗)+.
If condition (C5) holds, then ∂HE(b,d)/∂b>0 for any (b,d)∈(0,1]×[d1,VaRX(α)]. Thus, the minimum points are not located in [0,1]×[d1,VaRX(α)]. As a result, the minimum points of HE(b,d) over A are located in (0,1]×[0,d1), and the minimum must be attained at some stationary point (b∗,d∗), or lie on the right boundary at some point (1,ˉd), or lie on the lower boundary at some point (ˉb,0). In the following, we consider the equation q(d)=0. Note that q′(d)<0, q(0)=S(0)VaRX(α)−μ>0 and
q(d1)=S(d1)(VaRX(α)−d1)−∫_{d1}^∞ S(x)dx=S(d1)(g(d1)−d1)−∫_{d1}^∞ S(x)dx=S(d1)(1+θ)∫_{d1}^∞ S(x)dx−∫_{d1}^∞ S(x)dx=[S(d1)(1+θ)−1]∫_{d1}^∞ S(x)dx<0. | (4.17) |
Thus, the equation q(d)=0 has a unique solution d∗ in (0,d1). Further, HE(b,d) has a unique stationary point (b∗,d∗) if condition (C5) holds. By the same argument as above, the minimum of HE(b,d) is not attained at (1,ˉd) if p(d∗)>0 holds. Meanwhile, we show that the minimum of HE(b,d) is not attained at (ˉb,0) if condition (C5) holds. Suppose, to the contrary, that the minimum value of HE(b,d) is attained at (ˉb,0) when condition (C5) holds. Then conditions (4.15) and (4.16) hold. Substituting (4.15) into (4.16), we get μ−S(0)VaRX(α)≥0, which contradicts the second inequality of condition (C5). Thus, if condition (C5) holds, the function HE(b,d) attains its minimum at the stationary point (b∗,d∗).
In summary, if condition (C4) or (C5) holds, the optimal ceded loss function is given by f_{1}^{*}(x) = b^{*}(x-d^{*})_{+} .
(3) If condition (C6) or (C7) holds, from the above arguments in (2), we know that H_{E}(b, d) has no stationary points because p(d^{*})\leq 0 . Furthermore, if the second inequality of (C7) holds, the minimum value of H_{E}(b, d) is not attainable at (\bar{b}, 0) . Thus, the function H_{E}(b, d) attains the minimum at the boundary point (1, \bar{d}) if condition (C6) or (C7) holds, that is to say, the optimal ceded loss function is given by f_{1}^{*}(x) = (x-\bar{d})_{+} .
(4) If condition (C8) holds, then \frac{\partial H_{E}(b, d)}{\partial b} > 0 for any (b, d)\in (0, 1]\times [d_{1}, {\rm VaR}_{X}(\alpha)] . Thus, the minimum points are not located in [0, 1]\times [d_{1}, {\rm VaR}_{X}(\alpha)] . As a result, the minimum points of H_{E}(b, d) over A are located in (0, 1]\times [0, d_{1}) , and the minimum must be attained at some stationary point (b^{*}, d^{*}) , or lie on the right boundary at some point (1, \bar{d}) , or lie on the lower boundary at some point (\bar{b}, 0) . In the following, we consider the equation q(d) = 0 . Note that q(0) = S(0){\rm VaR}_{X}(\alpha)-\mu\leq 0 and q'(d) < 0 , then the equation q(d) = 0 has no solutions in (0, d_{1}) , namely, the function H_{E}(b, d) has no stationary points. Thus, the minimum point of H_{E}(b, d) over A must lie on the right boundary at some point (1, \bar{d}) or on the lower boundary at some point (\bar{b}, 0) . If the minimum of H_{E}(b, d) is attainable at (1, \bar{d}) , then conditions (4.13) hold. Since g'(\bar{d}) > 0 , we obtain [g(\bar{d})[g(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}]g'(\bar{d})\leq 0 . Straightforward algebra leads to q(\bar{d})\geq 0 . This contradicts q(0)\leq 0 and q'(d) < 0 . Thus, the minimum point of H_{E}(b, d) over A must lie on the lower boundary at the point (\bar{b}, 0) , namely, the optimal ceded loss function is given by f_{1}^{*}(x) = \bar{b}x .
As a result of Proposition 3.2, we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{2} . For a layer reinsurance policy h(x) = (x-a)_+-(x-{\rm VaR}_{X}(\alpha))_+ with a\in [0, {\rm VaR}_{X}(\alpha)] , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = {\rm VaR}_{X}(\alpha)-a. \end{equation*} |
Hence, the loss function is
\begin{equation*} L_{E}(a) = \sqrt{[a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2}. \end{equation*} |
Theorem 4.2. The optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by
\begin{equation} f_{2}^{*}(x) = \left\{ \begin{aligned} &(x-a_{1}^*)_+-(x-{\rm VaR}_{X}(\alpha))_+, &\frac{\theta}{1+\theta} \lt \alpha, \\ &0, &otherwise, \end{aligned} \right. \end{equation} | (4.18) |
where a_{1}^* is the unique solution of equation (4.19)
\begin{equation} [a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx][1-(1+\theta)S(a)]-[{\rm VaR}_{X}(\alpha)-a] = 0. \end{equation} | (4.19) |
Proof. Let H_{E}(a) = [a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2 , then
\begin{equation} H_{E}'(a) = 2(a+(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx)(1-(1+\theta)S(a))-2({\rm VaR}_{X}(\alpha)-a), \end{equation} | (4.20) |
\begin{equation} H_{E}''(a) \gt 0. \end{equation} | (4.21) |
If \alpha \leqslant \frac{\theta}{1+\theta} holds, it is easy to show that H_{E}'({\rm VaR}_{X}(\alpha))\leqslant 0 . According to (4.20) and (4.21), H_{E}(a) and L_{E}(a) attain their minimum at a = {\rm VaR}_{X}(\alpha) . In this case, f_{2}^{*}(x)\equiv 0 .
If \alpha > \frac{\theta}{1+\theta} holds, then from (4.19) and (4.21), we have
\begin{equation*} H_{E}'(a)\gtreqqless 0\quad for\quad a_{1}^*\lesseqqgtr a. \end{equation*} |
Recall that 0\leqslant a \leqslant {\rm VaR}_{X}(\alpha) and H_{E}'(0) = 2(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx(1-(1+\theta)S(0))- 2{\rm VaR}_{X}(\alpha) . If 1-(1+\theta)S(0)\leqslant 0 , then H_{E}'(0) < 0 and if 1-(1+\theta)S(0) > 0 , then H_{E}'(0)\leqslant 2(1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(0)dx- 2{\rm VaR}_{X}(\alpha) < 0 . So H_{E}'(0) < 0 and H_{E}'({\rm VaR}_{X}(\alpha)) > 0 imply that a_{1}^* exists and is the only minimum point of H_{E}(a) and L_{E}(a) .
As a result of Proposition 3.3 , we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{3} . For a quota-share reinsurance with a policy limit h(x) = c(x-(x-{\rm VaR}_{X}(\alpha))_+) , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = (1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = c{\rm VaR}_{X}(\alpha). \end{equation*} |
Hence, the loss function is
\begin{equation*} L_{E}(c) = \sqrt{[(1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2}. \end{equation*} |
Theorem 4.3. The optimal ceded loss function that solves (2.14) with \mathcal{F}^3 constraint is given by
\begin{equation} f_{3}^{*}(x) = \left\{ \begin{aligned} &c_{1}^*(x-(x-{\rm VaR}_{X}(\alpha))_+), &\phi({\rm VaR}_{X}(\alpha)) \lt 0, \\ &0, &otherwise, \end{aligned} \right. \end{equation} | (4.22) |
where \phi({\rm VaR}_{X}(\alpha)) = (1+\theta)\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx-{\rm VaR}_{X}(\alpha) and c_{1}^* = \frac {-\phi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2} .
Proof. Let H_{E}(c) = [(1-c){\rm VaR}_{X}(\alpha)+(1+\theta)c\int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2 , then
\begin{equation} H_{E}'(c) = 2c[({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2]+2\phi({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha), \end{equation} | (4.23) |
\begin{equation} H_{E}''(c) = 2[({\rm VaR}_{X}(\alpha))^2+(\phi ({\rm VaR}_{X}(\alpha)))^2] \gt 0. \end{equation} | (4.24) |
If \phi({\rm VaR}_{X}(\alpha))\geqslant 0 , according to (4.23), we have \frac {\partial H_{E}(c)}{\partial c}\geqslant 0 . Thus, L_{E}(c) attains its minimum at c = 0 . Therefore, the optimal ceded loss function is given by f_{3}^{*}(x) = 0 .
If \phi({\rm VaR}_{X}(\alpha)) < 0 , according to (4.23) and (4.24), L_{E}(c) attains its minimum at c = c_{1}^* . Thus, the optimal ceded loss function is given by f_{3}^{*}(x) = c_{1}^*(x-(x-{\rm VaR}_{X}(\alpha))_+) .
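Though not part of the proof, the closed form c_{1}^* can be sanity-checked by comparing it with a direct grid minimization of L_{E}(c) . The sketch below (standard-library Python, our own illustration) assumes an exponential loss S(x) = e^{-0.001x} with \alpha = 0.95 and \theta = 0.2 , the setting used later in Example 6.1.

```python
import math

# Grid check of the closed-form c_1^* of Theorem 4.3 against a direct
# minimization of L_E(c). Parameters are illustrative assumptions:
# exponential loss S(x) = exp(-0.001 x), alpha = 0.95, theta = 0.2.
ALPHA, THETA = 0.95, 0.2
VAR = -math.log(1 - ALPHA) / 0.001                 # VaR_X(0.95) = 2995.73
INT_S = 1000.0 * (1 - math.exp(-0.001 * VAR))      # \int_0^VaR S(x) dx = 950

phi = (1 + THETA) * INT_S - VAR                    # phi(VaR_X(alpha)) < 0 here
c_closed = -phi * VAR / (VAR ** 2 + phi ** 2)      # closed form of Theorem 4.3

# L_E(c) as displayed above Theorem 4.3
L = lambda c: math.hypot((1 - c) * VAR + c * (1 + THETA) * INT_S, c * VAR)
c_grid = min((i / 10 ** 5 for i in range(10 ** 5 + 1)), key=L)

print(round(c_closed, 4), round(c_grid, 4))
```

Both values agree to four decimals (about 0.4477 ), matching the quota-share coefficient reported in Example 6.1.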
In this section, we determine the optimal reinsurance policies among the ceded loss function sets \mathcal{F}^i (i = 1, 2, 3) under the Dutch premium principle. The Dutch premium principle is given by
\begin{equation} \Pi_{f}(X) = E[f(X)]+\beta E[f(X)-E[f(X)]]_+, \end{equation} | (5.1) |
where 0 \lt \beta \leqslant 1 .
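As a concrete illustration (our own numerical sketch, not from the text), the Dutch premium of the full cession f(X) = X can be evaluated directly from (5.1), using E[f(X)] = \int_0^\infty S(x)dx and E[f(X)-E[f(X)]]_+ = \int_{E[X]}^\infty S(x)dx , for an assumed exponential loss with mean 1000 and \beta = 0.5 :

```python
import math

# Dutch premium (5.1) of the full cession f(X) = X for an assumed
# exponential loss S(x) = exp(-0.001 x) (mean 1000) and beta = 0.5.
BETA = 0.5
S = lambda x: math.exp(-0.001 * x)

def integral(f, lo, hi, n=200000):
    """Composite trapezoidal rule for the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + sum(f(lo + i * h) for i in range(1, n)) + 0.5 * f(hi))

mean = integral(S, 0.0, 50000.0)        # E[X] = \int_0^infty S(x)dx, tail truncated
stop_loss = integral(S, mean, 50000.0)  # E[(X - E[X])_+] = \int_{E[X]}^infty S(x)dx
premium = mean + BETA * stop_loss       # approx 1000 + 0.5 * 1000/e = 1183.94
print(round(premium, 2))
```

In this setting the value 1183.94 coincides with k(0) of Section 5, which reappears in Example 6.3 via u(0) = {\rm VaR}_{X}(\alpha)-k(0) = 1811.79 .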
From Proposition 2 , we know that the optimal ceded loss function in \mathcal{F}^1 can be determined by confining attention to \mathcal{H}^{1} . For a change-loss reinsurance with b\in [0, 1] and d\in [0, \infty) , the total costs of the insurer and the reinsurer under the Dutch premium principle are
T_{I}^{b, d} = X-b(X-d)_{+}+\Pi_{D}(b, d), |
T_{R}^{b, d} = b(X-d)_{+}, |
where \Pi_{D}(b, d) = b\int^{\infty}_dS(x)dx+\beta b\int^{\infty}_{d+\int^{\infty}_dS(x)dx}S(x)dx is the reinsurance premium. Then the VaR of T_{I}^{b, d} and T_{R}^{b, d} at confidence level \alpha are
\begin{eqnarray} {\rm VaR}_{T_{I}^{b, d}}(\alpha)& = &{\rm VaR}_{X}(\alpha)-b({\rm VaR}_{X}(\alpha)-d)_{+}+\Pi_{D}(b, d), \end{eqnarray} | (5.2) |
\begin{eqnarray} {\rm VaR}_{T_{R}^{b, d}}(\alpha)& = &b({\rm VaR}_{X}(\alpha)-d)_{+}. \end{eqnarray} | (5.3) |
Hence, the loss function is
\begin{eqnarray*} L_{D}(b, d) = \left \{ \begin{array}{ll} \sqrt{\big[(1-b){\rm VaR}_{X}(\alpha)+bd+\Pi_{D}(b, d)\big]^{2}+\big[b({\rm VaR}_{X}(\alpha)-d)\big]^{2}}, \quad &d\leq {\rm VaR}_{X}(\alpha), \\\\ {\rm VaR}_{X}(\alpha)+\Pi_{D}(b, d), \qquad \qquad \qquad \quad &d \gt {\rm VaR}_{X}(\alpha). \end{array} \right. \end{eqnarray*} |
Let H_{D}(b, d) = [(1-b){\rm VaR}_{X}(\alpha)+bd+\Pi_{D}(b, d)\big]^{2}+\big[b({\rm VaR}_{X}(\alpha)-d)]^{2} , then
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, d)}{\partial b} = 2\big[\big({\rm VaR}_{X}(\alpha)-k(d)\big)^{2}+\big({\rm VaR}_{X}(\alpha)-d\big)^{2}\big]b+2{\rm VaR}_{X}(\alpha)\big(k(d)-{\rm VaR}_{X}(\alpha)\big)}, \\\\ {\frac{\partial H_{D}(b, d)}{\partial d} = 2b\big[(1-b){\rm VaR}_{X}(\alpha)+bk(d)\big]k'(d)-2b^{2}({\rm VaR}_{X}(\alpha)-d)}, \end{array} \right. \end{eqnarray} | (5.4) |
where, k(d) = d+\int^{\infty}_d S(x)dx+\beta\int^{\infty}_{d+\int^{\infty}_dS(x)dx}S(x)dx .
Theorem 5.1. The optimal ceded loss function to reinsurance problem (2.14) is given as follows.
(1) If condition (M1) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = 0 , where
(M1): \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}\leqslant \beta. |
(2) If condition (M2) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = b^{*}(x-d^{*})_{+} , where,
\begin{eqnarray*} (M2): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} , \\ {u(0) \gt 0}, \\ {v(d^{*}) \gt 0, } \end{array} \right. \end{eqnarray*} |
b^{*} = \frac{{\rm VaR}_{X}(\alpha)k'(d^{*})}{{\rm VaR}_{X}(\alpha)-d^{*}+[{\rm VaR}_{X}(\alpha)-k(d^{*})]k'(d^{*})} , d^{*} is the unique solution of equation u(d) = 0 and u(d) = {\rm VaR}_{X}(\alpha)-k(d)-k'(d)({\rm VaR}_{X}(\alpha)-d) , v(d) = {\rm VaR}_{X}(\alpha)-d-k(d)k'(d) .
(3) If condition (M3) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = (x-\bar{d})_{+} , where \bar{d} is the unique solution of equation v(d) = 0 and
\begin{eqnarray*} (M3): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}, \\ {u(0) \gt 0}, \\ {v(d^{*})\leqslant 0.} \end{array} \right. \end{eqnarray*} |
(4) If condition (M4) holds, then the optimal ceded loss function is given by f_{4}^{*}(x) = \bar{b}x , where \bar{b} = \frac{{\rm VaR}_{X}(\alpha)[{\rm VaR}_{X}(\alpha)-k(0)]}{[{\rm VaR}_{X}(\alpha)-k(0)]^{2}+[{\rm VaR}_{X}(\alpha)]^{2}} and
\begin{eqnarray*} (M4): \left \{ \begin{array}{ll} \beta \lt \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx}, \\ {u(0)\leqslant 0.} \end{array} \right. \end{eqnarray*} |
Proof. Similar to the proof of Lemma 4.1, the function \Pi_{D}(b, d) is an increasing function with respect to b . Then the study of optimal ceded loss functions which minimize the loss function L_{D}(b, d) in the class \mathcal{H}^{1} is simplified to solving the two-parameter minimization problem over the closed subset [0, 1]\times [0, {\rm VaR}_{X}(\alpha)] . Since L_{D}(b, d) is continuous, the minimum of L_{D}(b, d) over [0, 1]\times [0, {\rm VaR}_{X}(\alpha)] must be attained at some stationary point or lie on the boundary.
(1) Note that the function k(d) is an increasing function. If condition (M1) holds, it is easy to show that
k(d)\geq {\rm VaR}_{X}(\alpha), \ {\rm for \ all} \ d \in [0, {\rm VaR}_{X}(\alpha)]. |
Then from the expression of \frac{\partial H_{D}(b, d)}{\partial b} in (5.4), we know that H_{D}(b, d) is an increasing function with respect to b . Thus the minimum points of H_{D}(b, d) are located in A_{1} .
Conversely, if condition (M1) does not hold, then there exists a \tilde{d} \in [0, {\rm VaR}_{X}(\alpha)] such that \frac{\partial H_{D}(b, \tilde{d})}{\partial b} < 0 holds in a right neighborhood of b = 0 . That is to say, (0, \tilde{d}) is not a minimum point of H_{D}(b, d) . Since H_{D}(0, d) = H_{D}(0, \tilde{d})\equiv [{\rm VaR}_{X}(\alpha)]^{2} for any (0, d)\in A_{1} , then no minimum points of H_{D}(b, d) are located in A_{1} .
That is to say, the minimum points of H_{D}(b, d) are located in A_{1} if and only if condition (M1) holds. In this case the optimal ceded loss function is f_{4}^{*}(x) = 0 .
(2) We first consider the stationary points of H_{D}(b, d) . Let
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, d)}{\partial b} = 0}, \\ {\frac{\partial H_{D}(b, d)}{\partial d} = 0}. \end{array} \right. \end{eqnarray} | (5.5) |
By straightforward algebra, we obtain
\begin{eqnarray} \left \{ \begin{array}{ll} {u(d) = {\rm VaR}_{X}(\alpha)-k(d)-k'(d)({\rm VaR}_{X}(\alpha)-d) = 0}, \\ {b = \frac{{\rm VaR}_{X}(\alpha)k'(d)}{{\rm VaR}_{X}(\alpha)-d+[{\rm VaR}_{X}(\alpha)-k(d)]k'(d)}}. \end{array} \right. \end{eqnarray} | (5.6) |
If condition (M2) holds, then u(0) > 0 and u({\rm VaR}_{X}(\alpha)) < 0 hold. Since u'(d)\leq 0 for any d\in[0, {\rm VaR}_{X}(\alpha)] , then the equation u(d) = 0 has a unique root d^{*} in (0, {\rm VaR}_{X}(\alpha)) . Substituting d^{*} in the second equation of (5.6) yields
b^{*} = \frac{{\rm VaR}_{X}(\alpha)k'(d^{*})}{{\rm VaR}_{X}(\alpha)-d^{*}+[{\rm VaR}_{X}(\alpha)-k(d^{*})]k'(d^{*})}. |
Since v(d^{*}) > 0 , then we have 0 < b^{*} < 1 . Thus H_{D}(b, d) has a unique stationary point (b^{*}, d^{*}) . In the following, we show that H_{D}(b, d) attains the minimum at the stationary point (b^{*}, d^{*}) .
Conversely, if the minimum value of H_{D}(b, d) is attainable at some point (1, \bar{d}) on the right boundary, then Fermat's theorem implies
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(1, d)}{\partial d}|_{d = \bar{d}} = 0}, \\ {\frac{\partial H_{D}(b, \bar{d})}{\partial b}|_{b = 1}\leq 0}, \end{array} \right. \end{eqnarray} | (5.7) |
which is equivalent to
\begin{eqnarray} \left \{ \begin{array}{ll} {v(\bar{d}) = 0}, \\ {k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}\leq 0}. \end{array} \right. \end{eqnarray} | (5.8) |
Since k'(\bar{d}) > 0 , we obtain [k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}]k'(\bar{d})\leq 0 . Straightforward algebra leads to u(\bar{d})\geq 0 . Note that u'(d) < 0 and u(d^{*}) = 0 , then we have \bar{d}\leq d^{*} . However, since v'(d) < 0 , v(d^{*}) > 0 and v(\bar{d}) = 0 , we have d^{*} < \bar{d} . This leads to a contradiction. Thus, if condition (M2) holds, the function H_{D}(b, d) does not attain the minimum at the right boundary.
If the minimum value of H_{D}(b, d) is attainable at some point (\bar{b}, 0) on the lower boundary, then \bar{b} must satisfy the following conditions
\begin{eqnarray} \left \{ \begin{array}{ll} {\frac{\partial H_{D}(b, 0)}{\partial b}|_{b = \bar{b}} = 0}, \\ {\frac{\partial H_{D}(\bar{b}, d)}{\partial d}|_{d = 0}\geq 0}. \end{array} \right. \end{eqnarray} | (5.9) |
From (5.9), we obtain
\begin{eqnarray} \left \{ \begin{array}{ll} {\bar{b} = \frac{{\rm VaR}_{X}(\alpha)[{\rm VaR}_{X}(\alpha)-k(0)]}{[{\rm VaR}_{X}(\alpha)-k(0)]^{2}+[{\rm VaR}_{X}(\alpha)]^{2}}}, \\ {[(1-\bar{b}){\rm VaR}_{X}(\alpha)+\bar{b}k(0)]k'(0)-\bar{b}{\rm VaR}_{X}(\alpha)\geq 0}, \end{array} \right. \end{eqnarray} | (5.10) |
which means u(0)\leq 0 , contradicting condition (M2).
In summary, if condition (M2) holds, the minimum of the function H_{D}(b, d) must be attained at the unique stationary point (b^{*}, d^{*}) , i.e., the optimal ceded loss function is given by f_{4}^{*}(x) = b^{*}(x-d^{*})_{+} .
(3) If condition (M3) holds, from the above arguments in (2), we know that H_{D}(b, d) has no stationary points because v(d^{*})\leq 0 , and H_{D}(b, d) does not attain the minimum at (\bar{b}, 0) because u(0) > 0 . Thus, the function H_{D}(b, d) attains the minimum at the boundary point (1, \bar{d}) if condition (M3) holds, that is to say, the optimal ceded loss function is given by f_{4}^{*}(x) = (x-\bar{d})_{+} .
(4) If condition (M4) holds, then the equation u(d) = 0 has no solutions in (0, {\rm VaR}_{X}(\alpha)) , namely, the function H_{D}(b, d) has no stationary points. Thus, the minimum point of H_{D}(b, d) over \mathcal{A} must lie on the right boundary at some point (1, \bar{d}) or on the lower boundary at some point (\bar{b}, 0) . If the minimum of H_{D}(b, d) is attainable at (1, \bar{d}) , then the conditions in (5.7) hold. Since k'(\bar{d}) > 0 , we obtain [k(\bar{d})[k(\bar{d})-{\rm VaR}_{X}(\alpha)]+[{\rm VaR}_{X}(\alpha)-\bar{d}]^{2}]k'(\bar{d})\leq 0 . Straightforward algebra leads to u(\bar{d})\geq 0 . This contradicts u(0) \leq 0 and u'(d) < 0 . Thus, the minimum point of H_{D}(b, d) over \mathcal{A} must lie on the lower boundary at the point (\bar{b}, 0) , namely, the optimal ceded loss function is given by f_{4}^{*}(x) = \bar{b}x .
For a layer reinsurance with a \in [0, {\rm VaR}_{X}(\alpha)] , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation*} VaR_{T_{I}^{f}}(\alpha) = t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx, \end{equation*} |
\begin{equation*} VaR_{T_{R}^{f}}(\alpha) = {\rm VaR}_{X}(\alpha)-a, \end{equation*} |
where t(a) = a+\int^{{\rm VaR}_{X}(\alpha)}_aS(x)dx . Hence, the loss function is
\begin{equation*} L_{D}(a) = \sqrt{[t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2}. \end{equation*} |
Theorem 5.2. The optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by
\begin{equation} f_{5}^*(x) = (x-a_{2}^*)_+-(x-{\rm VaR}_{X}(\alpha))_+, \end{equation} | (5.11) |
where a_{2}^* is the unique solution of equation
\begin{equation} \begin{aligned} [t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx] [1-S(a)][1-\beta S(t(a))]- ({\rm VaR}_{X}(\alpha)-a) = 0. \end{aligned} \end{equation} | (5.12) |
Proof. Let H_{D}(a) = L_{D}^2(a) = [t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx]^2+[{\rm VaR}_{X}(\alpha)-a]^2 , then
\begin{equation} H_{D}'(a) = 2\{[t(a)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(a)}S(x)dx] [1-S(a)][1-\beta S(t(a))]- ({\rm VaR}_{X}(\alpha)-a)\}, \end{equation} | (5.13) |
\begin{equation} H_{D}''(a) \gt 0. \end{equation} | (5.14) |
From Eq (5.13), we know that
\begin{equation*} H_{D}'({\rm VaR}_{X}(\alpha)) = 2{\rm VaR}_{X}(\alpha)(1-S({\rm VaR}_{X}(\alpha)))(1-\beta S({\rm VaR}_{X}(\alpha))) \gt 0, \end{equation*} |
and
\begin{equation*} \begin{aligned} H_{D}'(0)& = 2\{(t(0)+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx)(1-S(0))(1-\beta S(t(0)))-{\rm VaR}_{X}(\alpha)\}\\ & \lt 2\{t(0)+{\rm VaR}_{X}(\alpha)-t(0)-{\rm VaR}_{X}(\alpha)\}\\ & = 0. \end{aligned} \end{equation*} |
Hence, from (5.12) and (5.14), we have
\begin{equation*} H_{D}'(a)\gtreqqless 0 \Leftrightarrow a_{2}^*\lesseqqgtr a. \end{equation*} |
Therefore, a_{2}^* is the unique minimum point of H_{D}(a) . Since L_{D}(a) and H_{D}(a) have the same minimum points, the optimal ceded loss function that solves (2.14) with \mathcal{F}^2 constraint is given by (5.11) and (5.12).
From Proposition 4 , we can deduce optimal ceded loss functions by confining attention to \mathcal{H}^{3} . For a quota-share reinsurance with a policy limit h(x) = c(x-(x-{\rm VaR}_{X}(\alpha))_+) , the total costs of the insurer and the reinsurer under the VaR risk measure are
\begin{equation} \begin{aligned} VaR_{T_{I}^{f}}(\alpha)& = (1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx, \\ VaR_{T_{R}^{f}}(\alpha)& = c{\rm VaR}_{X}(\alpha). \end{aligned} \end{equation} | (5.15) |
Hence, the loss function is
\begin{equation*} L_{D}(c) = \sqrt{[(1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2}. \end{equation*} |
Theorem 5.3. The optimal ceded loss function that solves (2.14) with \mathcal{F}^3 constraint is given by
\begin{equation} f_{6}^{*}(x) = c_{2}^*(x-(x-{\rm VaR}_{X}(\alpha))_+), \end{equation} | (5.16) |
where
\begin{equation*} c_{2}^* = \frac {-\varphi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha))^2} \end{equation*} |
and \varphi({\rm VaR}_{X}(\alpha)) = \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx-{\rm VaR}_{X}(\alpha) .
Proof. Let H_{D}(c) = L_{D}^2(c) = [(1-c){\rm VaR}_{X}(\alpha)+ct(0)+\beta c\int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx]^2+[c{\rm VaR}_{X}(\alpha)]^2 , then L_{D}(c) and H_{D}(c) have the same minimum points. Taking the derivative of H_{D}(c) , we obtain
\begin{equation} H_{D}'(c) = 2c[({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2]+2\varphi({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha), \end{equation} | (5.17) |
\begin{equation} H_{D}''(c) = 2[({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2] \gt 0. \end{equation} | (5.18) |
Note that
\begin{equation} \begin{aligned} \varphi({\rm VaR}_{X}(\alpha))& = \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+\beta \int^{{\rm VaR}_{X}(\alpha)}_{t(0)}S(x)dx-{\rm VaR}_{X}(\alpha)\\ & \lt \int^{{\rm VaR}_{X}(\alpha)}_0S(x)dx+{\rm VaR}_{X}(\alpha)-t(0)-{\rm VaR}_{X}(\alpha)\\ & = 0. \end{aligned} \end{equation} | (5.19) |
Then, according to (5.17), (5.18) and (5.19), H_{D}(c) and L_{D}(c) attain their minimum at c = c_{2}^* , where c_{2}^* = \frac {-\varphi ({\rm VaR}_{X}(\alpha)){\rm VaR}_{X}(\alpha)}{({\rm VaR}_{X}(\alpha))^2+(\varphi ({\rm VaR}_{X}(\alpha)))^2}\leq \frac{1}{2}.
In this section, we construct four numerical examples to illustrate the optimal reinsurance policies that we derived in the previous sections. Let the confidence level \alpha = 0.95 , safety loading parameters \theta = 0.2 and \beta = 0.5 .
Example 6.1. Assume that the reinsurance premium is calculated by the expectation premium principle and the loss variable X has an exponential distribution with survival function S(x) = e^{-0.001x} , then F(0) = 0 < \frac{\theta}{1+\theta} = 0.1667 < \alpha = 0.95 , {\rm VaR}_{X}(\alpha) = 2995.73 > 1182.32 = g(d_{0}) , d^{*} = 1995.73 , p(d^{*}) = -806.73 . By Theorems 4.1, 4.2 and 4.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{1}^{*} = (x-1599.90)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{2}^{*} = (x-1622.55)_{+}-(x-2995.73)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{3}^{*} = 0.4477(x-(x-2995.73)_{+}) .
Example 6.2. Assume that the reinsurance premium is calculated by the expectation premium principle and the loss variable X has a Pareto distribution with survival function S(x) = (\frac{2000}{x+2000})^{3} , then F(0) = 0 < \frac{\theta}{1+\theta} = 0.1667 < \alpha = 0.95 , {\rm VaR}_{X}(\alpha) = 3428.84 > 1187.98 = g(d_{0}) , d^{*} = 1619.22 , p(d^{*}) = 226.05 . By Theorems 4.1, 4.2 and 4.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{1}^{*} = 0.9236(x-1619.22)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{2}^{*} = (x-1801.98)_{+}-(x-3428.84)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{3}^{*} = 0.4692(x-(x-3428.84)_{+}) .
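The retentions in Examples 6.1 and 6.2 can be reproduced numerically. A minimal standard-library Python sketch, assuming q(d) = S(d)({\rm VaR}_{X}(\alpha)-d)-\int^{\infty}_dS(x)dx as can be read off from (4.17), with a hypothetical bisection helper:

```python
import math

# Numerical reproduction of d^* and a_1^* in Examples 6.1-6.2
# (alpha = 0.95, theta = 0.2; q(d) as inferred from (4.17)).
ALPHA, THETA = 0.95, 0.2

def bisect(f, lo, hi, tol=1e-9):
    # plain bisection; assumes a sign change of f on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Exponential loss: S(x) = exp(-0.001 x), \int_d^infty S(x)dx = 1000 S(d).
S_exp = lambda x: math.exp(-0.001 * x)
tail_exp = lambda d: 1000.0 * S_exp(d)
var_exp = -math.log(1 - ALPHA) / 0.001           # VaR_X(0.95) = 2995.73

q_exp = lambda d: S_exp(d) * (var_exp - d) - tail_exp(d)
d_star = bisect(q_exp, 0.0, var_exp)             # stationary retention, Example 6.1

def eq419(a):
    # left-hand side of Eq (4.19) for the layer retention a_1^*
    layer = tail_exp(a) - tail_exp(var_exp)      # \int_a^VaR S(x) dx
    return (a + (1 + THETA) * layer) * (1 - (1 + THETA) * S_exp(a)) - (var_exp - a)
a1_star = bisect(eq419, 0.0, var_exp)

# Pareto loss: S(x) = (2000/(x+2000))^3, \int_d^infty S(x)dx = 4e9/(d+2000)^2.
S_par = lambda x: (2000.0 / (x + 2000.0)) ** 3
tail_par = lambda d: 4e9 / (d + 2000.0) ** 2
var_par = 2000.0 * ((1 - ALPHA) ** (-1.0 / 3) - 1)   # VaR_X(0.95) = 3428.84
q_par = lambda d: S_par(d) * (var_par - d) - tail_par(d)
d_star_par = bisect(q_par, 0.0, var_par)         # stationary retention, Example 6.2

print(round(d_star, 2), round(a1_star, 2), round(d_star_par, 2))
```

Bisection returns d^{*} = 1995.73 and a_{1}^* \approx 1622.55 for the exponential case and d^{*} = 1619.22 for the Pareto case, in line with the examples.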
Example 6.3. Assume that the reinsurance premium is calculated by the Dutch premium principle and the loss variable X has an exponential distribution with survival function S(x) = e^{-0.001x} , then {\rm VaR}_{X}(\alpha) = 2995.73 , \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} = 5.4250 > 0.5 = \beta , u(0) = 1811.79 , d^{*} = 1950.79 , v(d^{*}) = -689.40 . By Theorems 5.1, 5.2 and 5.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{4}^{*} = (x-1607.99)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{5}^{*} = (x-2994.81)_{+}-(x-2995.73)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{6}^{*} = 0.4500(x-(x-2995.73)_{+}) .
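Example 6.3's quantities can likewise be checked from the definitions of k(d) , u(d) and v(d) in Theorem 5.1. The sketch below assumes the analytic derivative k'(d) = (1-S(d))(1-\beta S(t(d))) with t(d) = d+\int^{\infty}_dS(x)dx , obtained by the chain rule (our own computation, worth verifying independently):

```python
import math

# Numerical check of Example 6.3: exponential loss S(x) = exp(-0.001 x),
# alpha = 0.95, beta = 0.5, Dutch premium principle.
ALPHA, BETA = 0.95, 0.5
S = lambda x: math.exp(-0.001 * x)
tail = lambda d: 1000.0 * S(d)                    # \int_d^infty S(x) dx
VAR = -math.log(1 - ALPHA) / 0.001                # VaR_X(0.95) = 2995.73

t = lambda d: d + tail(d)                         # inner threshold of Pi_D
k = lambda d: t(d) + BETA * tail(t(d))            # k(d) from (5.4)
dk = lambda d: (1 - S(d)) * (1 - BETA * S(t(d)))  # k'(d), chain rule (assumed)

u = lambda d: VAR - k(d) - dk(d) * (VAR - d)      # stationarity condition
v = lambda d: VAR - d - k(d) * dk(d)              # right-boundary condition

def bisect(f, lo, hi, tol=1e-9):
    # plain bisection; assumes a sign change of f on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d_star = bisect(u, 0.0, VAR)       # stationary retention, approx 1950.79
d_bar = bisect(v, 0.0, VAR)        # retention of f_4^*, approx 1607.99

# Theorem 5.3: varphi and c_2^*, where t(0) there equals \int_0^VaR S(x)dx.
t0 = tail(0.0) - tail(VAR)                        # 950
phi = t0 + BETA * (tail(t0) - tail(VAR)) - VAR    # varphi(VaR_X(alpha))
c2_star = -phi * VAR / (VAR ** 2 + phi ** 2)      # approx 0.4500

print(round(u(0), 2), round(d_star, 2), round(v(d_star), 2),
      round(d_bar, 2), round(c2_star, 4))
```

The computed values u(0) = 1811.79 , d^{*} = 1950.79 , v(d^{*}) = -689.40 , \bar{d} = 1607.99 and c_{2}^* = 0.4500 agree with Example 6.3.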
Example 6.4. Assume that the reinsurance premium is calculated by the Dutch premium principle and the loss variable X has a Pareto distribution with survival function S(x) = (\frac{2000}{x+2000})^{3} , then {\rm VaR}_{X}(\alpha) = 3428.84 , \frac{{\rm VaR}_{X}(\alpha)-\mu}{\int^{\infty}_\mu S(x)dx} = 5.4649 > 0.5 = \beta , u(0) = 2206.61 , d^{*} = 1525.01 , v(d^{*}) = 397.65 . By Theorems 5.1, 5.2 and 5.3, we know that the optimal ceded loss function among \mathcal{F}^1 is f_{4}^{*} = 0.8676(x-1525.01)_{+} , the optimal ceded loss function among \mathcal{F}^2 is f_{5}^{*} = (x-3427.91)_{+}-(x-3428.84)_{+} and the optimal ceded loss function among \mathcal{F}^3 is f_{6}^{*} = 0.4690(x-(x-3428.84)_{+}) .
Remark 6.1. Note that the risks X have the same mean and the same parameters in the above four examples. For the exponential case, the optimal reinsurance policy is a stop-loss reinsurance when f\in \mathcal{F}^1 , while for the Pareto case, the optimal reinsurance policy is a change-loss reinsurance when f\in \mathcal{F}^1 . Therefore, the form of the optimal reinsurance policy depends on the distribution of the loss variable X .
The optimal reinsurance policies from the perspective of both the insurer and the reinsurer have remained a fascinating topic in actuarial science, and many interesting optimal reinsurance models have been proposed. In contrast to the existing literature, we provide two new findings on optimal reinsurance models for both the insurer and the reinsurer in this paper. First, we propose an optimization criterion that minimizes their total costs under a loss function defined by the joint value-at-risk. Second, we extend the premium principle to a much wider class of premium principles satisfying two axioms: risk loading and stop-loss ordering preserving. Under these conditions, we derive the optimal reinsurance policies over three ceded loss function sets: (i) the change-loss reinsurance is optimal among the class of increasing convex ceded loss functions; (ii) when the constraints on both ceded and retained loss functions are relaxed to increasing functions, the layer reinsurance is shown to be optimal; (iii) the quota-share reinsurance with a limit is always optimal when the ceded loss functions are in the class of increasing concave functions. We further use the expectation premium principle and the Dutch premium principle to illustrate the application of our results by deriving the optimal parameters.
We also wish to point out that further research on this topic is needed. First, for reinsurance, the challenges of classical insurance are amplified, particularly when it comes to dealing with extreme situations such as large claims and rare events. We have to rethink classical models in order to cope successfully with the respective challenges; one promising way is to focus on modelling and statistics, see [32,33] for related literature. Second, most optimal reinsurance problems assume that the distributions of the insurer's risks are known. However, in practice, only incomplete information on the distributions is available. How to obtain optimal reinsurance contracts with incomplete information is also an interesting topic; one attempt at such a problem is to use statistical methods, see [34,35]. Third, although some papers have been devoted to deriving optimal reinsurance under model uncertainty, optimal reinsurance with uncertainty still lacks available analysis tools; we may draw support from sub-linear expectation, for details, see [36,37]. We hope that these important open problems can be addressed in future research. We also believe that this article can and will foster further research in this direction.
The research was supported by Project of Shandong Province Higher Educational Science and Technology Program (J18KA249) and Social Science Planning Project of Shandong Province (20CTJJ02).
The authors declare that there is no conflict of interest.
[1] |
M. I. Zahoor, Z. Dou, S. B. H. Shah, I. U. Khan, S. Ayub, T. R. Gadekallu, Pilot decontamination using asynchronous fractional pilot scheduling in massive MIMO systems, Sensors, 20 (2020), 6213. doi: 10.3390/s20216213
![]() |
[2] |
M. H. Abidi, H. Alkhalefah, K. Moiduddin, M. Alazab, M. K. Mohammed, W. Ameen, et al., Optimal 5G network slicing using machine learning and deep learning concepts, Comput. Stand. Interfaces, 76 (2021), 103518. doi: 10.1016/j.csi.2021.103518
![]() |
[3] | N. Ning, Z. Sui, J. Li, S. Wu, H. Chen, S. Xu, et al., Multi scaling coefficients technique for noisy signal based gain error background calibration, in 2012 IEEE International Conference on Electron Devices and Solid State Circuit (EDSSC), (2012), 1–2. |
[4] |
N. Ning, Z. Sui, J. Li, S. Wu, H. Chen, S. Xu, et al., Multiscaling coefficients technique for gain error background calibration in pipelined ADC, J. Circuits Syst. Comput., 23 (2014), 1450034. doi: 10.1142/S0218126614500340
![]() |
[5] | C. C. Hsu, F. C. Huang, C. Y. Shih, C. C. Huang, Y. H. Lin, C. C. Lee, et al., An 11b 800MS/s time-interleaved ADC with digital background calibration, in 2007 IEEE International Solid-State Circuits Conference, (2017), 464–615. |
[6] | D. Wang, J. P. Keane, P. J. Hurst, B. C. Levy, S. H. Lewis, Convergence analysis of a background interstage gain calibration technique for pipelined ADCs, in 2005 IEEE International Symposium on Circuits and Systems, (2005), 4058–4061. |
[7] |
L. Guo, S. Tian, Z. Wang, Estimation and correction of gain mismatch and timing error in time-interleaved ADCs based on DFT, Metrol. Meas. Syst., 21 (2014), 535–544. doi: 10.2478/mms-2014-0045
![]() |
[8] | S. Liu, N. Lyu, J. Cui, Y. Zou, Improved blind timing skew estimation based on spectrum sparsity and ApFFT in time-interleaved ADCs, IEEE Trans. Instrum. Meas., 68 (2018), 73–86. |
[9] |
S. M. Jamal, D. Fu, N. C. J. Chang, P. J. Hurst, S. H. Lewis, A 10-b 120-Msample/s time-interleaved analog-to-digital converter with digital background calibration, IEEE J. Solid State Circuits, 37 (2002), 1618–1627. doi: 10.1109/JSSC.2002.804327
![]() |
[10] | M. Guo, J. Mao, S. W. Sin, H. Wei, R. P. Martins, A 5 GS/s 29 mW interleaved SAR ADC with 48.5 dB SNDR using digital-mixing background timing-skew calibration for direct sampling applications, IEEE Access, 8 (2002), 138944–138954. |
[11] | D. Xing, Y. Zhu, C. H. Chan, S. W. Sin, F. Ye, J. Ren, et al., Seven-bit 700-MS/s four-way time-interleaved SAR ADC with partial Vcm-based switching, IEEE Trans. VLSI Syst., 25 (2016), 1168–1172. |
[12] | N. Le Dortz, J. P. Blanc, T. Simon, S. Verhaeren, E. Rouat, P. Urard, et al., A 1.62GS/s time-interleaved SAR ADC with fully digital background mismatch calibration achieving interleaving spurs below 70dBFS, in Proceedings of the 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers, (2014), 386–388. |
[13] |
S. W. Sin, U. F. Chio, U. Seng-Pan, R. P. Martins, Statistical spectra and distortion analysis of time-interleaved sampling bandwidth mismatch, IEEE Trans. Circuits Syst. Ⅱ Express Briefs, 55 (2008), 648–652. doi: 10.1109/TCSII.2008.921600
![]() |
[14] |
K. C. Dyer, J. P. Keane, S. H. Lewis, Calibration and dynamic matching in data converters: part 1: linearity calibration and dynamic-matching techniques, IEEE Solid State Circuits Mag., 10 (2018, ), 46–55. doi: 10.1109/MSSC.2017.2771106
![]() |
[15] |
D. Dermit, M. Shrivas, K. Bunsen, J. L. Benites, J. Craninckx, E. Martens, A 1.67-GSps TI 10-Bit Ping-Pong SAR ADC With 51-dB SNDR in 16-nm FinFET, IEEE Solid State Circuits Lett., 3 (2020), 150–153. doi: 10.1109/LSSC.2020.3008264
![]() |
[16] | B. Razavi, Problem of timing mismatch in interleaved ADCs, Proc. Cust. Integr. Circuits Conf., 2012 (2012), 1–8. |
[17] |
B. Razavi, Design considerations for interleaved ADCs, IEEE J. Solid State Circuits, 48 (2013), 1806–1817. doi: 10.1109/JSSC.2013.2258814
![]() |
[18] |
D. Wang, X. Zhu, X. Guo, J. Luan, L. Zhou, D. Wu, et al., A 2.6 GS/s 8-Bit time-interleaved SAR ADC in 55 nm CMOS technology, Electronics, 8 (2019), 305. doi: 10.3390/electronics8020161
![]() |
[19] |
C. C. Huang, C. Y. Wang, J. T. Wu, A CMOS 6-bit 16-GS/s time-interleaved ADC using digital background calibration techniques, IEEE J. Solid State Circuits, 46 (2011), 848–858. doi: 10.1109/JSSC.2011.2109511
![]() |
[20] | H. Le Duc, D. M. Nguyen, C. Jabbour, T. Graba, P. Desgreys, O. Jamin, Hardware implementation of all digital calibration for undersampling TIADCs, in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), (2015), 2181–2184. |
[21] |
M. Bagheri, F. Schembari, H. Zare-Hoseini, R. B. Staszewski, A. Nathan, Interchannel mismatch calibration techniques for time-interleaved SAR ADCs, IEEE Open J. Circuits Syst., 2 (2021), 420–433. doi: 10.1109/OJCAS.2021.3083680
![]() |
[22] |
K. Seong, D. K. Jung, D. H. Yoon, J. S. Han, J. E. Kim, T. T. H. Kim, Time-interleaved SAR ADC with background timing-skew calibration for UWB wireless communication in IoT systems, Sensors, 20 (2020), 2430. doi: 10.3390/s20082430
![]() |
[23] |
S. R. Khan, A. A. Hashmi, G. S. Choi, A fully digital background calibration technique for M-channel time-interleaved ADCs, Circuits Syst. Signal Process., 36 (2017), 3303–3319. doi: 10.1007/s00034-016-0456-7
![]() |
[24] |
Y. Qiu, Y. J. Liu, J. Zhou, G. Zhang, D. Chen, N. Du, All-digital blind background calibration technique for any channel time-interleaved ADC, IEEE Trans. Circuits Syst. Ⅰ Regul. Pap., 65 (2018), 2503–2514. doi: 10.1109/TCSI.2018.2794529
![]() |
[25] | H. Niu, J. Yuan, An efficient spur-aliasing-free spectral calibration technique in time-interleaved ADCs, IEEE Trans. Circuits Syst. Ⅰ Regul. Pap., 67 (2020), 2229–2238. doi: 10.1109/TCSI.2020.2975304 |
[26] | Waveform Measurement and Analysis Technical Committee, IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters, 2011. Available from: http://www2.imse-cnm.csic.es/elec_esi/asignat/LME/pdf/temas/IEEE_ADC.pdf. |
[27] | S. Singh, L. Anttila, M. Epp, W. Schlecker, M. Valkama, Analysis, blind identification, and correction of frequency response mismatch in two-channel time-interleaved ADCs, IEEE Trans. Microw. Theory Tech., 63 (2015), 1721–1734. |
[28] | W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika, 57 (1970), 97–109. doi: 10.1093/biomet/57.1.97 |
[29] | J. Lemley, F. Jagodzinski, R. Andonie, Big holes in big data: A Monte Carlo algorithm for detecting large hyper-rectangles in high dimensional data, in 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), (2016), 563–571. |
[30] | A. Dubey, A. Lohiya, V. Narwal, A. K. Jha, P. Agarwal, G. Schaefer, Natural image interpolation using extreme learning machine, in International Conference on Soft Computing and Pattern Recognition, (2016), 340–350. |
[31] | T. Saramaki, R. Bregovic, Multirate Systems and Filterbanks, in Multirate systems: design and applications, IGI Global, (2002), 27–85. |
[32] | Y. Yin, G. Yang, H. Chen, A novel gain error background calibration algorithm for time-interleaved ADCs, in 2014 International Conference on Anti-Counterfeiting, Security and Identification (ASID), (2014), 1–4. |
[33] | S. Saleem, C. Vogel, LMS-based identification and compensation of timing mismatches in a two-channel time-interleaved analog-to-digital converter, in Norchip 2007, (2007), 7–10. |
[34] | J. Wu, J. Wu, Background calibration of capacitor mismatch and gain error in pipelined-SAR ADC using partially split structure, in 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), (2021), 1882–1885. |