Research article

A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter


  • Received: 14 May 2023 Revised: 14 June 2023 Accepted: 27 June 2023 Published: 21 July 2023
  • Medical image fusion is a crucial technology for biomedical diagnoses. However, current fusion methods struggle to balance algorithm design, visual effects, and computational efficiency. To address these challenges, we introduce a novel medical image fusion method based on the multi-scale shearing rolling weighted guided image filter (MSRWGIF). Inspired by the rolling guided filter, we construct the rolling weighted guided image filter (RWGIF) based on the weighted guided image filter. This filter offers progressive smoothing filtering of the image, generating smooth and detailed images. Then, we construct a novel image decomposition tool, MSRWGIF, by replacing non-subsampled shearlet transform's non-sampling pyramid filter with RWGIF to extract richer detailed information. In the first step of our method, we decompose the original images under MSRWGIF to obtain low-frequency subbands (LFS) and high-frequency subbands (HFS). Since LFS contain a large amount of energy-based information, we propose an improved local energy maximum (ILGM) fusion strategy. Meanwhile, HFS employ a fast and efficient parametric adaptive pulse coupled-neural network (AP-PCNN) model to combine more detailed information. Finally, the inverse MSRWGIF is utilized to generate the final fused image from fused LFS and HFS. To test the proposed method, we select multiple medical image sets for experimental simulation and confirm its advantages by combining seven high-quality representative metrics. The simplicity and efficiency of the method are compared with 11 classical fusion methods, illustrating significant improvements in the subjective and objective performance, especially for color medical image fusion.

    Citation: Fang Zhu, Wei Liu. A novel medical image fusion method based on multi-scale shearing rolling weighted guided image filter[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 15374-15406. doi: 10.3934/mbe.2023687




    The Hopfield neural network (HNN) is a powerful class of artificial neural networks (ANNs), introduced in the literature by John Hopfield in 1982. Since then, its investigation has become a worldwide focus [1,2,3,4,5]. All of those models are real-valued neural networks (NNs). However, complex-valued neural networks (CVNNs) can handle problems that cannot be handled by real-valued networks [6,7]. Recently, some authors have started the dynamical analysis of CVNNs. For example, in [8], Ali et al. studied the finite-time stability analysis of delayed fractional-order memristive CVNNs by using the Gronwall inequality, the Hölder inequality and inequality scaling techniques. In [9], Zhang and Cao investigated the existence and global exponential stability of periodic solutions of neutral-type CVNNs by combining the Lyapunov functional method with coincidence degree theory as well as graph theory. In [21], the authors dealt with the problem of stability of impulsive CVNNs with time delay.

    In the functioning of ANNs, time delays always exist in the signal transmission between neurons owing to the limited speed of signal exchange and transmission. Thus, the presence of delays is one of the main causes of poor performance and instability in a system. Therefore, in recent years, the analysis of the stability of delayed ANNs has received great attention from researchers, and various results have been reported in the literature [16,17,18,19,20,21,22].

    ANNs with neutral-type delays, called neutral-type ANNs, have been comparatively seldom discussed in the existing literature. Nevertheless, they have been the subject of in-depth studies by several researchers; for instance, Guo and Du [17] were concerned with the global exponential stability of periodic solutions for neutral-type CVNNs.

    The theory of passivity is a significant notion in automatic control for the analysis and control of models whose input/output characteristics are established in terms of energy criteria. The notions of passivity are adapted to several scientific fields and are effective for the regulation of electrical, mechanical, and electromechanical systems present in several fields of engineering, such as robotics, power electronics, aeronautics, etc. Many results have concentrated on the passivity of NN systems. In [11], Li and Zheng proved the global exponential passivity of delayed quaternion-valued memristor-based NNs. Ge et al. [12] studied the robust passivity analysis for a class of uncertain NNs subject to mixed delays. The problem of passivity analysis for uncertain bidirectional associative memory NNs in the presence of mixed delays is discussed in [13]. In [35], Khonchaiyaphum et al. studied the finite-time passivity (FTP) analysis of neutral-type NNs.

    In contrast with the Lyapunov theory of stability, which pays attention to the asymptotic behavior of systems over an infinite time interval, many researchers have been interested in finite-time control problems of dynamical systems. Recently, many interesting results have been published. In [14], Thuan et al. discussed the robust FTP for fractional-order NNs with uncertainties. In [19], Wei et al. focused on a class of coupled quaternion-valued NNs with several delayed couplings to study fixed-time passivity.

    The main objective of this manuscript is to deal with the FTP for the following neutral-type CVNNs:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{z}(t) = -Az(t)+Bf(z(t))+Cf(z(t-\tau(t)))+D\dot{z}(t-h(t))+\omega(t),\\ u(t) = K_{1}z(t)+K_{2}f(z(t)). \end{array} \right. \end{eqnarray} (1.1)

    Here, we have

    z(\cdot) = [z_{1}(\cdot), z_{2}(\cdot), \ldots, z_{n}(\cdot)]^{T}\in\mathbb{C}^{n} is the complex-valued state vector,

    \omega(\cdot)\in\mathbb{C}^{n} is the disturbance input, which belongs to L_{2}[0, +\infty),

    u(\cdot)\in\mathbb{C}^{n} is the control output,

    f(z(\cdot)) = [f(z_{1}(\cdot)), f(z_{2}(\cdot)), \ldots, f(z_{n}(\cdot))]^{T}\in\mathbb{C}^{n} is the complex-valued activation function,

    A = \mathrm{diag}\{a_{1}, a_{2}, \ldots, a_{n}\} > 0\in\mathbb{R}^{n\times n} is a positive diagonal matrix,

    B = (b_{jk})\in\mathbb{C}^{n\times n}, C = (c_{jk})\in\mathbb{C}^{n\times n} and D = (d_{jk})\in\mathbb{C}^{n\times n} are, respectively, the connection weight matrix, the delayed connection weight matrix and the neutral-type connection weight matrix,

    K_{1} is a known real constant matrix with appropriate dimension,

    K_{2} is a known complex constant matrix with appropriate dimension.

    The initial condition of (1.1) is given as

    \begin{eqnarray} z(s) = \psi(s),\;\;s\in[-\rho,\;0], \end{eqnarray} (1.2)

    where \rho = \max\{\sup_{t\in\mathbb{R}}\tau(t),\;\sup_{t\in\mathbb{R}}h(t)\}.

    The main contributions of our article are as follows:

    We use a more adequate hypothesis for the complex-valued activation functions (CVAF) considered in our model. Based on this assumption, a tractable model is formed by dividing the system into real and imaginary parts, and we give a sufficient condition to realize FTP. This result is more general than the existing passivity results on real-valued NNs [35,36,37].

    A Lyapunov–Krasovskii functional containing triple, quadruple, and quintuple integral terms is introduced, and the Wirtinger-type inequality technique and the convex combination approach are employed.

    A new set of sufficient conditions in terms of LMIs is derived to guarantee the finite-time boundedness (FTB) and FTP results. These conditions can be readily checked with the Matlab LMI toolbox.

    The main contents of the manuscript are outlined as follows: in Section 2, we establish new assumptions, definitions, and lemmas for the dynamic system (1.1), which will be used later. New sufficient conditions for FTB and FTP are discussed in Section 3. Two examples are presented in Section 4 to verify our results. Finally, in Section 5, we end with the conclusion and perspectives.

    In this section, we present new necessary assumptions and some definitions and lemmas, which are utilized in the next section.

    Assumption 1: The delays \tau(\cdot) and h(\cdot) are differentiable functions satisfying the inequalities below:

    0\leq\tau(t)\leq\bar{\tau},\quad \dot{\tau}(t)\leq\mu,\quad 0\leq h(t)\leq\bar{h},\quad \dot{h}(t)\leq h_{1}.

    Assumption 2: The neuron activation functions f_{k}(\cdot) satisfy the following Lipschitz condition:

    \begin{eqnarray} |f_{k}(z_{1})-f_{k}(z_{2})|\leq l_{k}|z_{1}-z_{2}|,\quad l_{k} > 0\;(k = 1,2,\ldots,n). \end{eqnarray} (2.1)

    Furthermore, by means of Assumption 2, one can readily obtain

    \begin{eqnarray} (f(z_{1})-f(z_{2}))^{*}(f(z_{1})-f(z_{2}))\leq(z_{1}-z_{2})^{*}L^{T}L(z_{1}-z_{2}), \end{eqnarray} (2.2)

    where L = \mathrm{diag}\{l_{1}, l_{2}, \ldots, l_{n}\}.

    Assumption 3: The neuron activation functions f() can be divided into two parts as follows:

    f(z) = f^{R}(x,y)+if^{I}(x,y),

    where z = x+iy, i denotes the imaginary unit, and the real and imaginary parts satisfy the following conditions:

    1) The partial derivatives \frac{\partial f}{\partial x} and \frac{\partial f}{\partial y} exist and are continuous.

    2) There exist positive constants \gamma_{k}^{RR}, \gamma_{k}^{RI}, \gamma_{k}^{IR}, \gamma_{k}^{II} such that

    \bigg|\frac{\partial f_{k}^{R}}{\partial x}\bigg|\leq\gamma_{k}^{RR},\quad \bigg|\frac{\partial f_{k}^{R}}{\partial y}\bigg|\leq\gamma_{k}^{RI},\quad \bigg|\frac{\partial f_{k}^{I}}{\partial x}\bigg|\leq\gamma_{k}^{IR},\quad \bigg|\frac{\partial f_{k}^{I}}{\partial y}\bigg|\leq\gamma_{k}^{II}.

    Thus, one can obtain that, for any x_{1},x_{2},y_{1},y_{2}\in\mathbb{R},

    |f_{k}^{R}(x_{1},y_{1})-f_{k}^{R}(x_{2},y_{2})|\leq\gamma_{k}^{RR}|x_{1}-x_{2}|+\gamma_{k}^{RI}|y_{1}-y_{2}|,\quad |f_{k}^{I}(x_{1},y_{1})-f_{k}^{I}(x_{2},y_{2})|\leq\gamma_{k}^{IR}|x_{1}-x_{2}|+\gamma_{k}^{II}|y_{1}-y_{2}|,

    for all k = 1,2,\ldots,n.

    Remark 1: As is known, in many works [29,30,31], after the separation of the activation functions into real and imaginary parts, \frac{\partial f}{\partial x} and \frac{\partial f}{\partial y} are still assumed to exist and to be continuous and bounded. Under these conditions, one can show that they satisfy the same inequalities given in Assumption 3 by considering the mean value theorem for multivariable functions. Here, we remove these constraints on the partial derivatives and relax the hypothesis so that the real and imaginary parts of the activation functions only need to satisfy the inequalities in Assumption 3. Consequently, our study covers a broader class of CVAF.

    Remark 2: By applying the modulus of a complex number together with some simple inequalities, it is worth mentioning that Assumption 2 is equivalent to Assumption 3. Therefore, for convenience of presentation, we will assume that the activation functions satisfy Assumption 2 when studying our main results.

    Assumption 4: The neuron activation function fk() may be divided into two parts according to the complex number z as follows

    f_{k}(z) = f_{k}^{R}(\mathrm{Re}(z))+if_{k}^{I}(\mathrm{Im}(z)),

    where f_{k}^{R}(\cdot), f_{k}^{I}(\cdot): \mathbb{R}\rightarrow\mathbb{R}; then, for any k = 1,2,\ldots,n, there exist constants \check{l}_{k}^{R},\hat{l}_{k}^{R},\check{l}_{k}^{I},\hat{l}_{k}^{I} such that

    \begin{eqnarray} \check{l}_{k}^{R}\leq\frac{f_{k}^{R}(\kappa_{1})-f_{k}^{R}(\kappa_{2})}{\kappa_{1}-\kappa_{2}}\leq\hat{l}_{k}^{R},\quad \check{l}_{k}^{I}\leq\frac{f_{k}^{I}(\kappa_{1})-f_{k}^{I}(\kappa_{2})}{\kappa_{1}-\kappa_{2}}\leq\hat{l}_{k}^{I}, \end{eqnarray} (2.3)

    where f_{k}^{R}(0) = 0, f_{k}^{I}(0) = 0, \kappa_{1},\kappa_{2}\in\mathbb{R}, \kappa_{1}\neq\kappa_{2}.

    For presentation convenience, we denote

    \hat{L}^{R} = \mathrm{diag}\{\hat{l}_{1}^{R},\ldots,\hat{l}_{n}^{R}\},\;\check{L}^{R} = \mathrm{diag}\{\check{l}_{1}^{R},\ldots,\check{l}_{n}^{R}\},\;\hat{L}^{I} = \mathrm{diag}\{\hat{l}_{1}^{I},\ldots,\hat{l}_{n}^{I}\},\;\check{L}^{I} = \mathrm{diag}\{\check{l}_{1}^{I},\ldots,\check{l}_{n}^{I}\},\; L_{1} = \hat{L}^{R}\check{L}^{R},\;L_{2} = \hat{L}^{I}\check{L}^{I},\;L_{3} = \frac{\hat{L}^{R}+\check{L}^{R}}{2},\;L_{4} = \frac{\hat{L}^{I}+\check{L}^{I}}{2},\;\bar{L}_{1} = \mathrm{diag}\{L_{1}, L_{2}\},\;\bar{L}_{2} = \mathrm{diag}\{L_{3}, L_{4}\}.

    Remark 3: In [23,24,25], conditions of the type of Assumptions 3 and 4 are imposed on activation functions that are separated into real and imaginary parts. Moreover, we can even say that Assumption 3 is quite strict and that it is a special case of Assumption 2. This fact has already been mentioned in [26] and [27]. In addition, Assumption 4 is also a strong constraint. For instance, if the activation functions f_{k}(z)\;(k = 1,2,\ldots,n) satisfy Assumption 4, we have:

    |f_{k}^{R}(x_{1})-f_{k}^{R}(x_{2})|\leq l_{k}^{R}|x_{1}-x_{2}|,\quad |f_{k}^{I}(y_{1})-f_{k}^{I}(y_{2})|\leq l_{k}^{I}|y_{1}-y_{2}|,

    where z_{1} = x_{1}+iy_{1}, z_{2} = x_{2}+iy_{2}, l_{k}^{R} = \max\{|\check{l}_{k}^{R}|,|\hat{l}_{k}^{R}|\} and l_{k}^{I} = \max\{|\check{l}_{k}^{I}|,|\hat{l}_{k}^{I}|\}; then one has

    |f_{k}(z_{1})-f_{k}(z_{2})| = |(f_{k}^{R}(x_{1})+if_{k}^{I}(y_{1}))-(f_{k}^{R}(x_{2})+if_{k}^{I}(y_{2}))| \leq|f_{k}^{R}(x_{1})-f_{k}^{R}(x_{2})|+|f_{k}^{I}(y_{1})-f_{k}^{I}(y_{2})| \leq l_{k}^{R}|x_{1}-x_{2}|+l_{k}^{I}|y_{1}-y_{2}| \leq\sqrt{(l_{k}^{R})^{2}+(l_{k}^{I})^{2}}\,|z_{1}-z_{2}|.

    Thus, the activation functions f_{k}(z)\;(k = 1,2,\ldots,n) satisfy the Lipschitz condition of Assumption 2. Therefore, Assumptions 3 and 4 imply Assumption 2; that is, Assumptions 3 and 4 are special cases of Assumption 2.

    Remark 4: Indeed, an activation function of the form f_{k}(z) = f_{k}^{R}(x,y)+if_{k}^{I}(x,y) includes f_{k}(z) = f_{k}^{R}(\mathrm{Re}(z))+if_{k}^{I}(\mathrm{Im}(z)) as a special case. Therefore, in the following section, we assume that f_{k}(z) is of the type with distinct real-imaginary activation functions. Let z(t) = x(t)+iy(t), z(t-\tau(t)) = z^{\tau}(t), x(t-\tau(t)) = x^{\tau}(t), y(t-\tau(t)) = y^{\tau}(t), x(t-h(t)) = x^{h}(t), y(t-h(t)) = y^{h}(t), B = B^{R}+iB^{I}, C = C^{R}+iC^{I}, D = D^{R}+iD^{I}, K_{2} = K_{2}^{R}+iK_{2}^{I}, f(z(t)) = f^{R}(x(t),y(t))+if^{I}(x(t),y(t)), f(z^{\tau}(t)) = f^{R}(x^{\tau}(t),y^{\tau}(t))+if^{I}(x^{\tau}(t),y^{\tau}(t)), \omega(t) = \omega^{R}(t)+i\omega^{I}(t) and u(t) = u^{R}(t)+iu^{I}(t); then model (1.1) can be divided into real and imaginary parts in the following form:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{x}(t) = -Ax(t)+B^{R}f^{R}(x(t),y(t))-B^{I}f^{I}(x(t),y(t))+C^{R}f^{R}(x^{\tau}(t),y^{\tau}(t))-C^{I}f^{I}(x^{\tau}(t),y^{\tau}(t))+D^{R}\dot{x}^{h}(t)-D^{I}\dot{y}^{h}(t)+\omega^{R}(t),\\ \dot{y}(t) = -Ay(t)+B^{I}f^{R}(x(t),y(t))+B^{R}f^{I}(x(t),y(t))+C^{I}f^{R}(x^{\tau}(t),y^{\tau}(t))+C^{R}f^{I}(x^{\tau}(t),y^{\tau}(t))+D^{R}\dot{y}^{h}(t)+D^{I}\dot{x}^{h}(t)+\omega^{I}(t),\\ u^{R}(t) = K_{1}x(t)+K_{2}^{R}f^{R}(x(t),y(t))-K_{2}^{I}f^{I}(x(t),y(t)),\\ u^{I}(t) = K_{1}y(t)+K_{2}^{I}f^{R}(x(t),y(t))+K_{2}^{R}f^{I}(x(t),y(t)). \end{array} \right. \end{eqnarray} (2.4)

    The initial condition of (2.4) is given by:

    \begin{eqnarray*} \left\{ \begin{array}{ll} x(s) = \varphi^{R}(s),\;\;s\in[-\rho,\;0],\\ y(s) = \varphi^{I}(s),\;\;s\in[-\rho,\;0], \end{array} \right. \end{eqnarray*}

    where \varphi^{R}(s),\varphi^{I}(s)\in C([-\rho,\;0],\mathbb{R}^{n}).

    Let \theta = \left(\begin{array}{c} x \\ y \\ \end{array} \right), \; \bar{f}(\theta) = \left(\begin{array}{c} f^{R}(x,y) \\ f^{I}(x,y) \\ \end{array} \right), \; \bar{f}(\theta^{\tau}) = \left(\begin{array}{c} f^{R}(x^{\tau},y^{\tau}) \\ f^{I}(x^{\tau},y^{\tau}) \\ \end{array} \right), \; \bar{\omega} = \left(\begin{array}{c} \omega^{R} \\ \omega^{I} \\ \end{array} \right), \; \bar{A} = \left(\begin{array}{cc} A & 0 \\ 0 & A \\ \end{array} \right), \; \bar{B} = \left(\begin{array}{cc} B^{R} & -B^{I} \\ B^{I} & B^{R} \\ \end{array} \right), \; \bar{C} = \left(\begin{array}{cc} C^{R} & -C^{I} \\ C^{I} & C^{R} \\ \end{array} \right), \; \bar{D} = \left(\begin{array}{cc} D^{R} & -D^{I} \\ D^{I} & D^{R} \\ \end{array} \right), \; \bar{u} = \left(\begin{array}{c} u^{R} \\ u^{I} \\ \end{array} \right), \; \bar{K}_{1} = \left(\begin{array}{cc} K_{1} & 0 \\ 0 & K_{1} \\ \end{array} \right), \; \bar{K}_{2} = \left(\begin{array}{cc} K_{2}^{R} & -K_{2}^{I} \\ K_{2}^{I} & K_{2}^{R} \\ \end{array} \right).

    Model (2.4) can be rewritten as:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\theta}(t) = -\bar{A}\theta(t)+\bar{B}\bar{f}(\theta(t))+\bar{C}\bar{f}(\theta^{\tau}(t))+\bar{D}\dot{\theta}^{h}(t)+\bar{\omega}(t),\\ \bar{u}(t) = \bar{K}_{1}\theta(t)+\bar{K}_{2}\bar{f}(\theta(t)), \end{array} \right. \end{eqnarray} (2.5)

    with

    \theta(s) = \phi(s),\;\;s\in[-\rho,\;0],

    where \phi(s) = \left(\begin{array}{c} \varphi^{R}(s) \\ \varphi^{I}(s) \\ \end{array} \right)\in C([-\rho,\;0],\mathbb{R}^{2n}).
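
    The real-imaginary stacking above is straightforward to reproduce numerically. The sketch below (Python/NumPy; the helper name realify and the sanity check are our own illustration, with the matrices A and B borrowed from Example 1 only for concreteness) builds the block matrices \bar{A} and \bar{B} from the complex data and verifies that they act on \theta = (x^{T},y^{T})^{T} exactly as A and B act on z = x+iy.

```python
import numpy as np

def realify(M):
    # Map a complex matrix M = M^R + i M^I to the real block matrix
    # [[M^R, -M^I], [M^I, M^R]] used in the real-imaginary decomposition.
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

# Illustrative data (taken from Example 1 below).
A = np.diag([1.9, 1.2])
B = np.array([[0.5 + 0.5j, 0.03 + 0.03j],
              [-0.4 - 0.4j, -0.1 - 0.1j]])

A_bar = np.block([[A, np.zeros_like(A)], [np.zeros_like(A), A]])  # diag(A, A), since A is real
B_bar = realify(B)

# State stacking: theta = [x; y] for z = x + i y.
z = np.array([1.0 + 2.0j, -0.5 + 0.3j])
theta = np.concatenate([z.real, z.imag])

# Sanity check: B_bar acting on theta reproduces B acting on z.
Bz = B @ z
assert np.allclose(B_bar @ theta, np.concatenate([Bz.real, Bz.imag]))
```

    The same helper applies to \bar{C}, \bar{D} and \bar{K}_{2}, while \bar{A} and \bar{K}_{1} are simply block-diagonal because A and K_{1} are real.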

    The norm is defined as follows: for every \phi(\cdot)\in C([-\rho,\;0],\mathbb{R}^{2n}), \|\phi\|_{\rho} = \sup\limits_{-\rho\leq s\leq0}\big\{\|\phi(s)\|,\;\|\dot{\phi}(s)\|\big\}.

    It is clear from (2.3) that:

    \begin{eqnarray} \bar{L}_{k}^{-}\leq\frac{\bar{f}_{k}(\kappa_{1})-\bar{f}_{k}(\kappa_{2})}{\kappa_{1}-\kappa_{2}}\leq \bar{L}_{k}^{+}, \end{eqnarray} (2.6)

    where, \bar{L}^{-} = \left(\begin{array}{cc} \check{L}^{R} & 0 \\ 0 & \check{L}^{I} \\ \end{array} \right), \; \; \bar{L}^{+} = \left(\begin{array}{cc} \hat{L}^{R} & 0 \\ 0 & \hat{L}^{I} \\ \end{array} \right).

    Assumption 5: Given positive values b^{R} and b^{I}, the disturbances \omega^{R}(t) and \omega^{I}(t) satisfy

    \begin{eqnarray} &&\int_{0}^{T_{1}}(\omega^{R})^{T}(t)\omega^{R}(t)dt\leq b^{R},\;\;\int_{0}^{T_{1}}(\omega^{I})^{T}(t)\omega^{I}(t)dt\leq b^{I},\; T_{1}\geq 0,\;b^{R}\geq 0,\;b^{I}\geq 0,\\ &&\Rightarrow \int_{0}^{T_{1}}\bar{\omega}^{T}(t)\bar{\omega}(t)dt\leq \bar{b}, \end{eqnarray} (2.7)

    where \bar{b} = \left(\begin{array}{c} b^{R} \\ b^{I} \\ \end{array} \right).

    Definition 1 (FTB): For a known constant T_{1} > 0, system (1.1) is FTB with regard to (\bar{c}_{1}, \; \bar{c}_{2}, \; T_{1}, \; \bar{L}, \; \bar{b}), where \bar{\omega}(\cdot) satisfies (2.7), if \sup\limits_{t_{0}\in[-\bar{\tau}, 0]}\big\{\theta^{T}(t_{0})\bar{L}\theta(t_{0}), \; \dot{\theta}^{T}(t_{0})\bar{L}\dot{\theta}(t_{0})\big\}\leq \bar{c}_{1} \Rightarrow \theta^{T}(t)\bar{L}\theta(t)\leq \bar{c}_{2}, for t\in[0, \; T_{1}], where \bar{c}_{1} = \left(\begin{array}{c} c_{11} \\ c_{12} \\ \end{array} \right), \; \bar{c}_{2} = \left(\begin{array}{c} c_{21} \\ c_{22} \\ \end{array} \right), \; 0 < c_{11} < c_{21}, \; 0 < c_{12} < c_{22}, and \bar{L} > 0 is a matrix.

    Definition 2 (FTP): For a known constant T_{1} > 0, system (1.1) is said to be FTP with regard to (\bar{c}_{1}, \; \bar{c}_{2}, \; T_{1}, \; \bar{L}, \; \bar{b}) if the following conditions hold:

    (ⅰ) Model (1.1) is FTB with regard to (\bar{c}_{1}, \; \bar{c}_{2}, \; T_{1}, \; \bar{L}, \; \bar{b}).

    (ⅱ) For a known constant \gamma > 0 and under the zero initial condition, the following relation holds:

    2\int_{0}^{T_{1}}\bar{u}^{T}(t)\bar{\omega}(t)dt\geq -\gamma\int_{0}^{T_{1}}\bar{\omega}^{T}(t)\bar{\omega}(t)dt,

    whenever \bar{\omega}(\cdot) satisfies (2.7).

    Lemma 1 [28]: For a symmetric matrix \vartheta = \vartheta^{T}\geq0 and scalars e_{1} > e_{2} > 0 such that the integrations below are well defined, we have

    \begin{eqnarray} &&-(e_{1}-e_{2}) \int_{t-e_{1}}^{t-e_{2}}\theta^{T}(s)\vartheta\theta(s)ds\leq-\bigg( \int_{t-e_{1}}^{t-e_{2}}\theta(s)ds\bigg)^{T}\vartheta\bigg( \int_{t-e_{1}}^{t-e_{2}}\theta(s)ds\bigg),\\ &&-\frac{(e_{1}^{2}-e_{2}^{2})}{2} \int_{-e_{1}}^{-e_{2}}\int_{t+s}^{t}\theta^{T}(u)\vartheta\theta(u)duds\leq-\bigg( \int_{-e_{1}}^{-e_{2}}\int_{t+s}^{t}\theta(u)duds\bigg)^{T}\vartheta\bigg( \int_{-e_{1}}^{-e_{2}}\int_{t+s}^{t}\theta(u)duds\bigg),\\ &&-\frac{(e_{1}^{3}-e_{2}^{3})}{6} \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{t+u}^{t}\theta^{T}(v)\vartheta\theta(v)dvduds\leq-\bigg( \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{t+u}^{t}\theta(v)dvduds\bigg)^{T}\vartheta\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\times\bigg( \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{t+u}^{t}\theta(v)dvduds\bigg). \end{eqnarray} (2.8)

    Lemma 2: For a symmetric matrix \vartheta = \vartheta^{T}\geq0 and scalars e_{1} > e_{2} > 0 such that the integrations below are well defined, we have

    \begin{eqnarray} &&-\frac{(e_{1}^{4}-e_{2}^{4})}{24} \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\theta^{T}(v)\vartheta\theta(v)dvdud\lambda ds\leq-( \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\theta(v)dvdud\lambda ds)^{T}\vartheta\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\times( \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\theta(v)dvdud\lambda ds). \end{eqnarray} (2.9)

    Proof: The proof of Lemma 2 is inspired by the proof of Lemma 2 in [28].

    For any symmetric matrix \vartheta = \vartheta^{T}\geq0, we have

    \left[ \begin{array}{cc} \theta^{T}(v)\vartheta\theta(v) & \theta^{T}(v) \\ \theta(v) & \vartheta^{-1} \\ \end{array} \right]\geq0,

    then after integration of it from t+u to t, from \lambda to 0, from s to 0, and from -e_{1} to -e_{2} in turn, we can obtain

    \left( \begin{array}{cc} \Pi_{11} & \Pi_{12} \\ \Pi_{12}^{T} & \Pi_{22} \\ \end{array} \right)\geq0,

    where

    \begin{eqnarray*} \Pi_{11}& = & \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\theta^{T}(v)\vartheta\theta(v)dvdud\lambda ds,\\ \Pi_{12}& = & \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\theta^{T}(v)dvdud\lambda ds,\\ \Pi_{22}& = & \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}\int_{t+u}^{t}\vartheta^{-1}dvdud\lambda ds\\ & = & \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\int_{\lambda}^{0}(-u) \vartheta^{-1}dud\lambda ds\\ & = & \int_{-e_{1}}^{-e_{2}}\int_{s}^{0}\bigg[-\frac{u^{2}}{2} \vartheta^{-1}\bigg]_{\lambda}^{0}d\lambda ds\\ & = & \int_{-e_{1}}^{-e_{2}}\bigg[\frac{\lambda^{3}}{6} \vartheta^{-1}\bigg]_{s}^{0}ds\\ & = & \bigg[-\frac{s^{4}}{24} \vartheta^{-1}\bigg]_{-e_{1}}^{-e_{2}} = \frac{e_{1}^{4}-e_{2}^{4}}{24} \vartheta^{-1}. \end{eqnarray*}

    According to the Schur complement, the above matrix inequality is equivalent to the following conditions: \Pi_{22} > 0, \; \Pi_{11}-\Pi_{12}\Pi_{22}^{-1}\Pi_{12}^{T}\geq0 \Rightarrow \Pi_{11}\geq\Pi_{12}\Pi_{22}^{-1}\Pi_{12}^{T}, which is equivalent to inequality (2.9).

    This completes the proof.

    Lemma 3 [38]: For a given symmetric positive definite matrix \xi > 0 and any differentiable function \zeta(\cdot): [e_{1}, \; e_{2}]\rightarrow \mathbb{R}^{n}, the following inequality holds:

    \begin{eqnarray*} \int_{e_{1}}^{e_{2}}\dot{\zeta}^{T}(s)\xi\dot{\zeta}(s)ds\geq \frac{1}{e_{2}-e_{1}}\left( \begin{array}{c} \zeta(e_{2}) \\ \zeta(e_{1}) \\ \chi \\ \end{array} \right)^{T} \times \Xi_{2}(\xi)\left( \begin{array}{c} \zeta(e_{2}) \\ \zeta(e_{1}) \\ \chi \\ \end{array} \right), \end{eqnarray*}

    where \chi = \frac{1}{e_{2}-e_{1}} \int_{e_{1}}^{e_{2}}\zeta(s)ds, \Xi_{2}(\xi) = \left(\begin{array}{ccc} \xi & -\xi & 0 \\ \star & \xi & 0 \\ 0 & 0 & 0 \\ \end{array} \right)+\frac{\pi^{2}}{4}\left(\begin{array}{ccc} \xi & \xi & -2\xi \\ \star & \xi & -2\xi \\ \star & \star & 4\xi \\ \end{array} \right).

    Remark 5: In this manuscript, our goal is to determine sufficient conditions for the FTB and FTP of the proposed model. To achieve the desired objectives, a tractable model (2.5) is first formed by dividing the initial model into real and imaginary parts, thereby obtaining an equivalent real-valued model. Second, using the Lyapunov functional approach, the design procedure can be easily performed by checking LMIs, and the expected conditions will be obtained in the next section.
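
    To make the phrase "checking LMIs" concrete, the following is a minimal sketch (Python with CVXPY; the system matrix and tolerance are arbitrary illustrative choices, and the LMI shown is a standard Lyapunov inequality rather than the much larger matrix \Omega of Theorem 1) of how an LMI feasibility problem is posed and solved numerically.

```python
import numpy as np
import cvxpy as cp

# Toy Hurwitz matrix (illustrative only).
A = np.array([[-1.9, 0.2],
              [0.1, -1.2]])
n = A.shape[0]

# Feasibility problem: find P = P^T > 0 with A^T P + P A < 0.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)  # 'optimal' indicates that the LMIs are feasible
print(P.value)      # a feasible Lyapunov matrix P
```

    In the same spirit, the conditions (3.2) and (3.3) of Theorem 1 would be encoded as semidefinite constraints on P, Q_{1}, Q_{2}, R_{01}, \ldots, T_{3} and passed to an SDP solver; in this paper the Matlab LMI toolbox plays that role.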

    In this section, we will concentrate on the problem of FTB and FTP.

    In this first part, our goal is to study the FTB of the following system:

    \begin{eqnarray} \dot{\theta}(t) = -\bar{A}\theta(t)+\bar{B}\bar{f}(\theta(t))+\bar{C}\bar{f}(\theta^{\tau}(t))+\bar{D}\dot{\theta}^{h}(t)+\bar{\omega}(t). \end{eqnarray} (3.1)

    Theorem 1: Suppose that Assumptions 1–5 hold. Let \bar{\tau}, \; \mu, \; \bar{h}, \; h_{1} and \delta be positive scalars; then system (3.1) is FTB with respect to (\bar{c}_{1}, \bar{c}_{2}, T_{1}, \bar{L}, \bar{b}) if there exist symmetric positive definite matrices P, Q_{1}, Q_{2}, R_{01}, R_{1}, R_{02}, R_{2}, R_{3}, R_{4}, R_{5}, R_{6}, T_{01}, T_{2}, T_{3} and diagonal matrices M_{1} > 0, M_{2} > 0, M_{3} > 0 such that the following LMIs hold:

    \begin{eqnarray} \Omega = \left[ \begin{array}{cccccccccccccc} \eta_{1,1} & \bar{L}_{1}M_{3} & \eta_{1,3} & \eta_{1,4} & \eta_{1,5} & G_{1}D & \eta_{1,7} & \eta_{1,8} & 0 & \eta_{1,11} & \frac{\Pi^{2}}{2\bar{h}}R_{6} & \frac{\bar{\tau}^{2}}{2}T_{2} & \frac{\bar{\tau}^{3}}{6}T_{3} & G_{1} \\ \star& \eta_{2,2} & 0 & 0 & 0 & 0 & -M_{3}^{T}F_{2}^{T} & \eta_{2,8} & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \eta_{3,3} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\Pi^{2}}{2\bar{\tau}}R_{4} & 0 & 0 & 0 & 0 \\ \star & \star & \star & \eta_{4,4} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\Pi^{2}}{2\bar{h}}R_{6} & 0 & 0 & 0 \\ \star & \star & \star & \star & \eta_{5,5} & G_{2}\bar{D} & \eta_{5,7} & G_{2}\bar{C} & 0 & 0 & 0 & 0 & 0 & G_{2} \\ \star & \star & \star & \star & \star & \eta_{6,6} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \eta_{7,7} & M_{3} & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \eta_{8,8} & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \eta_{9,9} & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \eta_{10,10} & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \eta_{11,11} & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{2} & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{3} & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -\delta I \\ \end{array} \right] < 0,\qquad \end{eqnarray} (3.2)
    \begin{eqnarray} \bar{c}_{1}\Gamma+\bar{b}(1-e^{-\delta T_{1}}) < \bar{c}_{2}\lambda_{1}e^{-\delta T_{1}}, \end{eqnarray} (3.3)

    where

    \begin{array}{l} \eta_{1,1} = R_{01}+R_{1}-R_{4}-\frac{\Pi^{2}}{4}R_{4}+\bar{\tau}^{2}R_{5}-R_{6}-\frac{\Pi^{2}}{4}R_{6}-\bar{\tau}^{2}T_{01}-\frac{\bar{\tau}^{4}}{4}T_{2} -\frac{\bar{\tau}^{6}}{6}T_{3}-\bar{L}_{1}M_{1}-\bar{L}_{1}M_{3}-G_{1}\bar{A}-\bar{A}^{T}G_{1}^{T}-\delta P, \\ \eta_{1,3} = R_{4}-\frac{\pi^{2}}{4}R_{4}, \eta_{1,4} = R_{6}-\frac{\Pi^{2}}{4}R_{6}, \\ \eta_{1,5} = P-\bar{L}^{-}Q_{1}+\bar{L}^{+}Q_{2}-G_{1}-\bar{A}^{T}G_{2}^{T}, \\ \eta_{1,7} = \bar{L}_{2}M_{1}+\bar{L}_{2}M_{3}+G_{1}\bar{B},\;\eta_{1,11} = \frac{\Pi^{2}}{2\bar{\tau}}R_{4}+\bar{\tau} T_{01}, \\ \eta_{2,2} = -(1-\mu)R_{01}-\bar{L}_{1}M_{2}-\bar{L}_{1}M_{3},\;\eta_{2,8} = \bar{L}_{2}M_{2}+M_{3}^{T}L_{2}^{T}, \\ \eta_{3,3} = -R_{4}-\frac{\pi^{2}}{4}R_{4}-R_{1},\;\eta_{4,4} = -R_{6}-\frac{\Pi^{2}}{4}R_{6}, \\ \eta_{5,5} = R_{3}+\bar{\tau}^{2}R_{4}+\bar{h}^{2}R_{6}+\frac{\bar{\tau}^{4}}{4}T_{01} +\frac{\bar{\tau}^{6}}{36}T_{2}+\frac{\bar{\tau}^{8}}{576}T_{3}-G_{2}-G_{2}^{T}, \\ \eta_{5,7} = Q_{1}^{T}-Q_{2}^{T}+G_{2}\bar{B}, \eta_{6,6} = -(1-h_{1})R_{3},\;\eta_{7,7} = R_{02}+R_{2}-M_{1}-M_{3}, \\ \eta_{8,8} = -(1-\mu)R_{02}-M_{2}-M_{3},\;\eta_{9,9} = -R_{2}, \eta_{10,10} = \frac{-\Pi^{2}}{\bar{\tau}^{2}}R_{4}-R_{5}-T_{01}, \\ \eta_{11,11} = -\frac{\Pi^{2}}{\bar{h}^{2}}R_{6}. \end{array}

    Proof: We consider the Lyapunov functional below:

    \begin{eqnarray} V(\theta(t)) = \sum\limits_{i = 1}^{5}V_{i}(\theta(t)), \end{eqnarray} (3.4)

    where

    \begin{eqnarray*} V_{1}(\theta(t))& = &\theta^{T}(t)P\theta(t),\\ V_{2}(\theta(t))& = &2 \int_{0}^{\theta(t)}Q_{1}(\bar{f}(s)-\bar{L}^{-}s)+Q_{2}(\bar{L}^{+}s-\bar{f}(s))ds,\\\\ V_{3}(\theta(t))& = & \int_{t-\tau(t)}^{t}\theta^{T}(s)R_{01}\theta(s)ds+ \int_{t-\tau}^{t}\theta^{T}(s)R_{1}\theta(s)ds\\ &+& \int_{t-\tau(t)}^{t}\bar{f}^{T}(\theta(s))R_{02}\bar{f}(\theta(s))ds+ \int_{t-\bar{\tau}}^{t}\bar{f}^{T}(\theta(s))R_{2}\bar{f}(\theta(s))ds,\\ V_{4}(\theta(t))& = & \int_{t-h(t)}^{t}\dot{\theta}^{T}(s)R_{3}\dot{\theta}(s)ds+\bar{\tau} \int_{-\bar{\bar{\tau}}}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)R_{4}\dot{\theta}(s)dsd\beta\\ &+&\bar{\tau} \int_{-\bar{\tau}}^{0}\int_{t+\beta}^{t}\theta^{T}(s)R_{5}\theta(s)dsd\beta+\bar{h} \int_{-\bar{h}}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)R_{6}\dot{\theta}(s)dsd\beta,\\ V_{5}(\theta(t))& = &\frac{\bar{\tau}^{2}}{2} \int_{-\bar{\tau}}^{0}\int_{\gamma}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)T_{01}\dot{\theta}(s)dsd\beta d\gamma\\ &+&\frac{\bar{\tau}^{3}}{6} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\gamma}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)T_{2}\dot{\theta}(s)dsd\beta d\gamma d\lambda\\ &+&\frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{\gamma}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)T_{3}\dot{\theta}(s)dsd\beta d\gamma d\alpha d\lambda. \end{eqnarray*}

    Calculating the time-derivative of V(\theta(\cdot)) along any trajectory of model (3.1), we obtain

    \begin{eqnarray} \dot{V}(\theta(t)) = \sum\limits_{i = 1}^{5}\dot{V}_{i}(\theta(t)), \end{eqnarray} (3.5)

    where

    \begin{eqnarray*} \dot{V}_{1}(\theta(t))& = &2\theta^{T}(t)P\dot{\theta}(t),\\ \dot{V}_{2}(\theta(t))& = &2(\bar{f}(\theta(t))-\bar{L}^{-}\theta(t))^{T}Q_{1}\dot{\theta}(t)+2(\bar{L}^{+}\theta(t)-\bar{f}(\theta(t)))^{T}Q_{2}\dot{\theta}(t),\\ & = &2\bar{f}(\theta(t))^{T}Q_{1}\dot{\theta}(t)-2\theta(t)^{T}\bar{L}^{-}Q_{1}\dot{\theta}(t)+2\theta^{T}(t)\bar{L}^{+}Q_{2}\dot{\theta}(t)\\ &-&2\bar{f}(\theta(t))^{T}Q_{2}\dot{\theta}(t),\\ \dot{V}_{3}(\theta(t))&\leq&\theta^{T}(t)[R_{01}+R_{1}]\theta(t)+\bar{f}^{T}(\theta(t))[R_{02}+R_{2}]\bar{f}(\theta(t))\\ &-&(1-\mu)\bar{f}^{T}(\theta(t-\tau(t)))R_{02}\bar{f}(\theta(t-\tau(t)))-\theta^{T}(t-\bar{\tau})R_{1}\theta(t-\bar{\tau})\\ &-&(1-\mu)\theta^{T}(t-\tau(t))R_{01}\theta(t-\tau(t))-\bar{f}^{T}(\theta(t-\bar{\tau}))R_{2}\bar{f}(\theta(t-\bar{\tau})), \end{eqnarray*}
    \begin{eqnarray*} \dot{V}_{4}(\theta(t))& = &\dot{\theta}^{T}(t)R_{3}\dot{\theta}(t)-(1-\dot{h}(t))\dot{\theta}^{T}(t-h(t))R_{3}\dot{\theta}(t-h(t))\\ &+&\bar{\tau} \int_{-\bar{\tau}}^{0}\dot{\theta}^{T}(t)R_{4}\dot{\theta}(t)-\dot{\theta}^{T}(t+\beta)R_{4}\dot{\theta}(t+\beta)d\beta\\ &+&\bar{\tau}\int_{-\bar{\tau}}^{0}\theta^{T}(t)R_{5}\theta(t)-\theta^{T}(t+\beta)R_{5}\theta(t+\beta)d\beta\\ &+&\bar{h} \int_{-\bar{h}}^{0}\dot{\theta}^{T}(t)R_{6}\dot{\theta}(t)-\dot{\theta}^{T}(t+\beta)R_{6}\dot{\theta}(t+\beta)d\beta\\ &\leq&\dot{\theta}^{T}(t)R_{3}\dot{\theta}(t)-(1-h_{1})\dot{\theta}^{T}(t-h(t))R_{3}\dot{\theta}(t-h(t))\\ &+&\bar{\tau}^{2}\dot{\theta}^{T}(t)R_{4}\dot{\theta}(t)-\bar{\tau} \int_{t-\bar{\tau}}^{t}\dot{\theta}^{T}(s)R_{4}\dot{\theta}(s)ds+\bar{\tau}^{2}\theta^{T}(t)R_{5}\theta(t)\\ &-&\bar{\tau} \int_{t-\bar{\tau}}^{t}\theta^{T}(s)R_{5}\theta(s)ds+\bar{h}^{2}\dot{\theta}^{T}(t)R_{6}\dot{\theta}(t)- \bar{h} \int_{t-\bar{h}}^{t}\dot{\theta}^{T}(s)R_{6}\dot{\theta}(s)ds. \end{eqnarray*}

    Applying Lemma 3 to the above inequality, we get

    \begin{eqnarray*} -\bar{\tau} \int_{t-\bar{\tau}}^{t}\dot{\theta}^{T}(s)R_{4}\dot{\theta}(s)ds\leq-\left( \begin{array}{c} \theta(t) \\ \theta(t-\bar{\tau}) \\ \frac{1}{\bar{\tau}} \int_{t-\bar{\tau}}^{t}\theta(s)ds \\ \end{array} \right)^{T} \times \Xi_{2}(R_{4})\left( \begin{array}{c} \theta(t) \\ \theta(t-\bar{\tau}) \\ \frac{1}{\bar{\tau}} \int_{t-\bar{\tau}}^{t}\theta(s)ds \\ \end{array} \right), \end{eqnarray*}

    where

    \begin{eqnarray*} \Xi_{2}(R_{4}) = \left( \begin{array}{ccc} R_{4} & -R_{4} & 0 \\ \star & R_{4} & 0 \\ 0 & 0 & 0 \\ \end{array} \right)+\frac{\Pi^{2}}{4}\left( \begin{array}{ccc} R_{4} & R_{4} & -2R_{4} \\ \star & R_{4} & -2R_{4} \\ \star & \star & 4R_{4} \\ \end{array} \right), \end{eqnarray*}
    \begin{eqnarray*} -\bar{\tau} \int_{t-\bar{\tau}}^{t}\dot{\theta}^{T}(s)R_{4}\dot{\theta}(s)ds&\leq&-\bigg[\theta^{T}(t)(R_{4}+\frac{\Pi^{2}}{4}R_{4})\theta(t)+2\theta^{T}(t)(-R_{4}+\frac{\Pi^{2}}{4}R_{4})\theta(t-\bar{\tau})\nonumber\\ &+&2\theta^{T}(t)(-\frac{\Pi^{2}}{2\bar{\tau}}R_{4}) \int_{t-\bar{\tau}}^{t}\theta(s)ds+\theta^{T}(t-\bar{\tau})(R_{4}+\frac{\Pi^{2}}{4}R_{4})\theta(t-\bar{\tau})\nonumber\\ &+&2\theta^{T}(t-\bar{\tau})(-\frac{\Pi^{2}}{2\bar{\tau}}R_{4}) \int_{t-\bar{\tau}}^{t}\theta(s)ds+( \int_{t-\bar{\tau}}^{t}\theta^{T}(s)ds)\frac{\Pi^{2}}{\bar{\tau}^{2}}R_{4} \int_{t-\bar{\tau}}^{t}\theta(s)ds\bigg]. \end{eqnarray*}

    Similarly,

    \begin{eqnarray*} -\bar{h} \int_{t-\bar{h}}^{t}\dot{\theta}^{T}(s)R_{6}\dot{\theta}(s)ds&\leq& -\bigg[\theta^{T}(t)(R_{6}+\frac{\Pi^{2}}{4}R_{6})\theta(t)+2\theta^{T}(t)(-R_{6}+\frac{\Pi^{2}}{4}R_{6})\theta(t-\bar{h})\nonumber\\ &+&2\theta^{T}(t)(-\frac{\Pi^{2}}{2\bar{h}}R_{6}) \int_{t-\bar{h}}^{t}\theta(s)ds+\theta^{T}(t-\bar{h})(R_{6}+\frac{\Pi^{2}}{4}R_{6})\theta(t-\bar{h})\nonumber\\ &+&2\theta^{T}(t-\bar{h})(-\frac{\Pi^{2}}{2\bar{h}}R_{6}) \int_{t-\bar{h}}^{t}\theta(s)ds\nonumber\\ &+& \int_{t-\bar{h}}^{t}\theta^{T}(s)ds(\frac{\Pi^{2}}{\bar{h}^{2}}R_{6}) \int_{t-\bar{h}}^{t}\theta(s)ds\bigg], \end{eqnarray*}

    and by applying Lemma 1,

    \begin{eqnarray*} -\bar{\tau} \int_{t-\bar{\tau}}^{t}\theta^{T}(s)R_{5}\theta(s)ds&\leq&-( \int_{t-\bar{\tau}}^{t}\theta^{T}(s)ds)^{T} R_{5}( \int_{t-\bar{\tau}}^{t}\theta(s)ds). \end{eqnarray*}
    \begin{eqnarray*} \dot{V}_{5}(\theta(t))& = &\frac{\bar{\tau}^{4}}{4}\dot{\theta}^{T}(t)T_{01}\dot{\theta}(t)-\frac{\bar{\tau}^{2}}{2} \int_{-\bar{\tau}}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{01}\dot{\theta}(s)dsd\gamma\nonumber\\ &+&\frac{\bar{\tau}^{6}}{36}\dot{\theta}^{T}(t)T_{2}\dot{\theta}(t)-\frac{\bar{\tau}^{3}}{6} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{2}\dot{\theta}(s)dsd\gamma d\lambda\nonumber\\ &+&\frac{\bar{\tau}^{8}}{576}\dot{\theta}^{T}(t)T_{3}\dot{\theta}(t)-\frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{3}\dot{\theta}(s)dsd\gamma d\alpha d\lambda. \end{eqnarray*}

    Applying Lemmas 1 and 2 to the above inequality, we get

    \begin{eqnarray} \mathbf{A}_{1}& = &-\frac{\bar{\tau}^{2}}{2} \int_{-\bar{\tau}}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{01}\dot{\theta}(s)dsd\gamma \\ \mathbf{A}_{1}&\leq&-\big( \int_{-\bar{\tau}}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)ds\big)T_{01}\big( \int_{-\bar{\tau}}^{0}\int_{t+\gamma}^{t}\dot{\theta}(s)dsd\gamma\big)\\ & = &-[\bar{\tau }\theta^{T}(t)-\int_{t-\bar{\tau}}^{t}\theta^{T}(s)ds]T_{1}[\bar{\tau} \theta(t)- \int_{t-\bar{\tau}}^{t}\theta(s)ds]\\ & = &-\bar{\tau}^{2}\theta^{T}(t)T_{01}\theta(t)+2\bar{\tau} \theta^{T}(t)T_{01} \int_{t-\bar{\tau}}^{t}\theta(s)ds-\big( \int_{t-\bar{\tau}}^{t}\theta^{T}(s)ds\big)T_{1}\big( \int_{t-\bar{\tau}}^{t}\theta(s)ds\big).\\ \\ \mathbf{A}_{2}& = &-\frac{\bar{\tau}^{3}}{6}\int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{2}\dot{\theta}(s)dsd\gamma d\lambda\leq\big( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)dsd\gamma d\lambda\big)\\ &\times& T_{2}\big( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\gamma}^{t}\dot{\theta}(s)dsd\gamma d\lambda\big)\\ & = &-\big[\frac{\bar{\tau}^{2}}{2}\theta^{T}(t)- \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta^{T}(s)dsd\lambda\big] T_{2}\big[\frac{\bar{\tau}^{2}}{2}\theta(t)- \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta(s)dsd\lambda\big]\\ & = &-\frac{\bar{\tau}^{4}}{4}\theta^{T}(t)T_{2}\theta(t)+2\theta^{T}(t)(\frac{\bar{\tau}^{2}}{2}T_{2}) \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta(s)dsd\lambda\\ &-&\big( \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta^{T}(s)dsd\lambda\big) T_{2}\big( \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta(s)dsd\lambda\big).\\ \end{eqnarray}
    \begin{eqnarray} \mathbf{A}_{3}& = &-\frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)T_{3}\dot{\theta}(s)dsd\gamma d\alpha d\lambda\\ \mathbf{A}_{3}&\leq&-\bigg( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{t+\gamma}^{t}\dot{\theta}^{T}(s)dsd\gamma d\alpha d\lambda\bigg) T_{3}\bigg(\int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{t+\gamma}^{t}\dot{\theta}(s)dsd\gamma d\alpha d\lambda\bigg)\\ & = &-\big[\frac{\bar{\tau}^{3}}{6}\theta^{T}(t)- \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta^{T}(s)dsd\alpha d\lambda\big] T_{3}\big[\frac{\bar{\tau}^{3}}{6}\theta(t)- \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta(s)dsd\alpha d\lambda\big]\\ && = -\frac{\bar{\tau}^{6}}{36}\theta^{T}(t)T_{3}\theta(t)+2\theta^{T}(t)(\frac{\bar{\tau}^{3}}{6}T_{3})\bigg( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta(s)dsd\alpha d\lambda\bigg)\\ &&-\bigg( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta^{T}(s)dsd\alpha d\lambda\bigg)T_{3}\bigg( \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta(s)dsd\alpha d\lambda\bigg). \end{eqnarray} (3.6)

    For any \rho_{1k} > 0, \; \rho_{2k} > 0, \; \rho_{3k} > 0, \; k = 1, \; 2, \cdots, n, it follows from (2.3) that

    \begin{eqnarray*} &&[\bar{f}_{k}(\theta_{k}(t))-\bar{L}_{k}^{-}\theta_{k}(t)]^{T}\rho_{1k}[\bar{L}_{k}^{+}\theta_{k}(t)-\bar{f}_{k}(\theta_{k}(t))]\geq0,\\ \nonumber\\ &&[\bar{f}_{k}(\theta_{k}(t-\tau(t)))-\bar{L}_{k}^{-}\theta_{k}(t-\tau(t))]^{T}\rho_{2k}[\bar{L}_{k}^{+}\theta_{k}(t-\tau(t))-\bar{f}_{k}(\theta_{k}(t-\tau(t)))]\geq0,\\ \nonumber\\ &&[\bar{f}_{k}(\theta_{k}(t))-\bar{f}_{k}(\theta_{k}(t-\tau(t)))-\bar{L}_{k}^{-}\big(\theta_{k}(t)-\theta_{k}(t-\tau(t))\big)]^{T}\rho_{3k}[\bar{L}_{k}^{+}(\theta_{k}(t)-\theta_{k}(t-\tau(t)))-\big(\bar{f}_{k}(\theta_{k}(t))\nonumber\\ &&+\bar{f}_{k}(\theta_{k}(t-\tau(t)))\big)]\geq0, \end{eqnarray*}

    which implies

    \begin{eqnarray} &&\left( \begin{array}{c} \theta^{T}(t) \\ \bar{f}^{T}(\theta(t)) \\ \end{array} \right)\left( \begin{array}{cc} -\bar{L}_{1}M_{1} & \bar{L}_{2}M_{1} \\ \star & -M_{1} \\ \end{array} \right)\times\left( \begin{array}{c} \theta(t) \\ \bar{f}(\theta(t)) \\ \end{array} \right)\geq0, \end{eqnarray} (3.7)
    \begin{eqnarray} &\left( \begin{array}{c} \theta^{T}(t-\tau(t)) \\ \bar{f}^{T}(\theta(t-\tau(t))) \\ \end{array} \right)\left( \begin{array}{cc} -\bar{L}_{1}M_{2} & \bar{L}_{2}M_{2} \\ \star & -M_{2} \\ \end{array} \right)\times\left( \begin{array}{c} \theta(t-\tau(t)) \\ \bar{f}(\theta(t-\tau(t))) \\ \end{array} \right)\geq0,\quad \end{eqnarray} (3.8)

    and

    \begin{eqnarray} &&\left( \begin{array}{c} \theta^{T}(t) \\ \bar{f}^{T}(\theta(t)) \\ \theta^{T}(t-\tau(t)) \\ \bar{f}^{T}(\theta(t-\tau(t))) \\ \end{array} \right)\times\left( \begin{array}{cccc} -\bar{L}_{1}M_{3} & \bar{L}_{2}M_{3} & \bar{L}_{1}M_{3} & -\bar{L}_{1}M_{3} \\ \star & -M_{3} & -\bar{L}_{2}M_{3} & \bar{L}_{3} \\ \star & \star & -\bar{L}_{1}M_{3} & \bar{L}_{1}M_{3} \\ \star & \star & \star & -M_{3} \\ \end{array} \right)\left( \begin{array}{c} \theta(t) \\ \bar{f}(\theta(t)) \\ \theta(t-\tau(t)) \\ \bar{f}(\theta(t-\tau(t))) \\ \end{array} \right)\geq0,\;\; \end{eqnarray} (3.9)

    where M_{1} = diag\{\rho_{11}, \; \rho_{12}, \cdots\rho_{1n}\}, \; M_{2} = diag\{\rho_{21}, \; \rho_{22}, \cdots\rho_{2n}\}, M_{3} = diag\{\rho_{31}, \; \rho_{32}, \cdots\rho_{3n}\}.

    Moreover, for any matrices G_{1} and G_{2} with appropriate dimensions, it is true that,

    \begin{eqnarray} &&2\big[\theta^{T}(t)G_{1}+\dot{\theta}^{T}(t)G_{2}\big][-\dot{\theta}(t)-\bar{A}\theta(t)+\bar{B}\bar{f}(\theta(t))+\bar{C}\bar{f}(\theta(t-\tau(t)))+\bar{D}\dot{\theta}(t-h(t))+\bar{\omega}(t)] = 0\\ && = -2\theta^{T}(t)G_{1}\dot{\theta}(t)-2\theta^{T}(t)G_{1}\bar{A}\theta(t)+2\theta^{T}(t)G_{1}\bar{B}\bar{f}(\theta(t))+2\theta^{T}(t)G_{1}\bar{C}\bar{f}(\theta(t-\tau(t)))\\ &&+2\theta^{T}(t)G_{1}\bar{D}\dot{\theta}(t-h(t))+2\theta^{T}(t)G_{1}\bar{\omega}(t)-2\dot{\theta}^{T}(t)G_{2}\dot{\theta}(t)-2\dot{\theta}^{T}(t)G_{2}\bar{A}\theta(t)\\ &&+2\dot{\theta}^{T}(t)G_{2}\bar{B}\bar{f}(\theta(t))+2\dot{\theta}^{T}(t)G_{2}\bar{C}f(\theta(t-\tau(t)))+2\dot{\theta}^{T}(t)G_{2}\bar{D}\dot{\theta}(t-h(t))\\ &&+2\dot{\theta}^{T}(t)G_{2}\bar{\omega}(t). \end{eqnarray} (3.10)

    Combining (3.5)–(3.10), we can say that

    \begin{eqnarray} \dot{V}(\theta(t))-\delta V_{1}(\theta(t))-\delta\bar{\omega}^{T}(t)\bar{\omega}(t)\leq\xi^{T}(t)\Omega\xi(t) < 0, \end{eqnarray} (3.11)

    where

    \begin{eqnarray*} \xi(t)& = &\bigg[\theta^{T}(t)\quad \theta^{T}(t-\tau(t))\quad \theta^{T}(t-\bar{\tau})\quad\theta^{T}(t-\bar{h})\quad \dot{\theta}^{T}(t)\quad\dot{\theta}^{T}(t-h(t))\\ &\;&\bar{f}^{T}(\theta(t)) \quad \bar{f}^{T}(\theta(t-\tau(t)))\quad \bar{f}^{T}(\theta(t-\bar{\tau})) \quad \int_{t-\bar{\tau}}^{t}\theta^{T}(s)ds\quad \int_{t-\bar{h}}^{t}\theta^{T}(s)ds\\ &\:& \int_{-\bar{\tau}}^{0}\int_{t+\lambda}^{t}\theta^{T}(s)dsd\lambda\quad \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{t+\alpha}^{t}\theta^{T}(s)dsd\alpha d\lambda \quad\bar{\omega}(t)\bigg]^{T}, \end{eqnarray*}

    and \Omega is given in (3.2). Hence, it follows from (3.11) that

    \begin{eqnarray} \dot{V}(\theta(t))&\leq& \delta V_{1}(\theta(t))+\delta \bar{\omega}^{T}(t)\bar{\omega}(t)\\ & < &\delta V(\theta(t))+\delta \bar{\omega}^{T}(t)\bar{\omega}(t) \end{eqnarray} (3.12)

    Multiplying (3.12) by e^{-\delta t}, we can obtain

    \begin{eqnarray} &&e^{-\delta t}\dot{V}(\theta(t))-\delta e^{-\delta t} V(\theta(t)) < \delta e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t),\\ &&\frac{d}{dt}(e^{-\delta t}V(\theta(t))) < \delta e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t). \end{eqnarray} (3.13)

    By integrating (3.13) from 0 to t, where t\in [0, \; T_{1}], we can write

    \begin{eqnarray} &&e^{-\delta t}V(\theta(t))-V(\theta(0)) < \delta \int_{0}^{t}e^{-\delta s}\bar{\omega}^{T}(s)\bar{\omega}(s)ds,\\ &&V(\theta(t)) < e^{\delta t}[V(\theta(0))+ \delta\int_{0}^{t}e^{-\delta s}\bar{\omega}^{T}(s)\bar{\omega}(s)ds].\qquad \end{eqnarray} (3.14)

    So,

    \begin{eqnarray*} &&V(\theta(0)) = \theta^{T}(0)P\theta(0)+2\int_{0}^{\theta(0)}Q_{1}(\bar{f}(s)-\bar{L}^{-}s)+Q_{2}(\bar{L}^{+}s-\bar{f}(s))ds\\ &&+\int_{-\tau(t)}^{0}\theta^{T}(s)R_{01}\theta(s)ds+\int_{-\bar{\tau}}^{0}\theta^{T}(s)R_{1}\theta(s)ds+ \int_{-\bar{\tau}}^{0}\bar{f}^{T}(\theta(s))R_{2}\bar{f}(\theta(s))ds\\ &&+ \int_{-\tau(t)}^{0}\bar{f}^{T}(\theta(s))R_{02}\bar{f}(\theta(s))ds+ \int_{-h(t)}^{0}\dot{\theta}^{T}(s)R_{3}\dot{\theta}(s)ds+\bar{\tau}\int_{-\bar{\tau}}^{0}\int_{\beta}^{t}\dot{\theta}^{T}(s)R_{4}\dot{\theta}(s)dsd\beta\\ &&+\bar{\tau}\int_{-\bar{\tau}}^{0}\int_{\beta}^{0}\theta^{T}(s)R_{5}\theta(s)dsd\beta+\bar{h} \int_{-\bar{h}}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)R_{6}\dot{\theta}(s)dsd\beta\\ &&+\frac{\bar{\tau}^{2}}{2} \int_{-\bar{\tau}}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)T_{01}\dot{\theta}(s)dsd\beta d\gamma+\frac{\bar{\tau}^{3}}{6} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)T_{2}\dot{\theta}(s)dsd\beta d\gamma d\lambda\\ &&+\frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)T_{3}\dot{\theta}(s)dsd\beta d\gamma d\alpha d\lambda. \end{eqnarray*}

    Letting

    \begin{eqnarray*} &&\hat{P} = \bar{L}^{-\frac{1}{2}}P\bar{L}^{-\frac{1}{2}},\; \hat{Q_{1}} = \bar{L}^{-\frac{1}{2}}Q_{1}\bar{L}^{-\frac{1}{2}},\;\hat{Q_{2}} = \bar{L}^{-\frac{1}{2}}Q_{2}\bar{L}^{-\frac{1}{2}},\;\hat{R_{01}} = \bar{L}^{-\frac{1}{2}}R_{01}\bar{L}^{-\frac{1}{2}},\\ &&\hat{R_{1}} = \bar{L}^{-\frac{1}{2}}R_{1}\bar{L}^{-\frac{1}{2}},\;\hat{R_{02}} = \bar{L}^{-\frac{1}{2}}R_{02}\bar{L}^{-\frac{1}{2}},\;\hat{R_{2}} = \bar{L}^{-\frac{1}{2}}R_{2}\bar{L}^{-\frac{1}{2}},\;\hat{R_{3}} = \bar{L}^{-\frac{1}{2}}R_{3}\bar{L}^{-\frac{1}{2}},\\ &&\hat{R_{4}} = \bar{L}^{-\frac{1}{2}}R_{4}\bar{L}^{-\frac{1}{2}},\;\hat{R_{5}} = \bar{L}^{-\frac{1}{2}}R_{5}\bar{L}^{-\frac{1}{2}},\;\hat{R_{6}} = \bar{L}^{-\frac{1}{2}}R_{6}\bar{L}^{-\frac{1}{2}},\;\hat{T}_{01} = \bar{L}^{-\frac{1}{2}}T_{01}\bar{L}^{-\frac{1}{2}},\\ &&\hat{T}_{2} = \bar{L}^{-\frac{1}{2}}T_{2}\bar{L}^{-\frac{1}{2}}, \hat{T}_{3} = \bar{L}^{-\frac{1}{2}}T_{3}\bar{L}^{-\frac{1}{2}},\;\lambda_{1} = \lambda_{\min}(\hat{P}),\; \lambda_{2} = \lambda_{\max}(\hat{P}),\\ &&\lambda_{3} = \lambda_{\max}(\hat{Q}_{1}),\;\lambda_{4} = \lambda_{\max}(\hat{Q}_{2}),\;\lambda_{5} = \lambda_{\max}(\hat{R}_{01}),\;\lambda_{6} = \lambda_{\max}(\hat{R}_{1}),\\ &&\lambda_{7} = \lambda_{\max}(\hat{R}_{02}),\;\lambda_{8} = \lambda_{\max}(\hat{R}_{2}),\;\lambda_{9} = \lambda_{\max}(\hat{R}_{3}),\;\lambda_{10} = \lambda_{\max}(\hat{R}_{4}),\\ &&\lambda_{11} = \lambda_{\max}(\hat{R}_{5}),\;\lambda_{12} = \lambda_{\max}(\hat{R}_{6}),\;\lambda_{13} = \lambda_{\max}(\hat{T}_{01}),\;\lambda_{14} = \lambda_{\max}(\hat{T}_{2}),\\ &&\lambda_{15} = \lambda_{\max}(\hat{T}_{3}), \end{eqnarray*}

    we obtain

    \begin{eqnarray*} V(\theta(0))& = &\theta^{T}(0)\bar{L}^{\frac{1}{2}}\hat{P}\bar{L}^{\frac{1}{2}}\theta(0)+2 \int_{0}^{\theta(0)}\bar{L}^{\frac{1}{2}}\hat{Q}_{1}\bar{L}^{\frac{1}{2}}(\bar{f}(s)-\bar{L}^{-}s)\\ &+&\bar{L}^{\frac{1}{2}}\hat{Q}_{2}\bar{L}^{\frac{1}{2}}(\bar{L}^{+}s-\bar{f}(s))ds+\int_{-\tau(t)}^{0}\theta^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{01}\bar{L}^{\frac{1}{2}}\theta(s)ds\\ &+&\int_{-\bar{\tau}}^{0}\theta^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{1}\bar{L}^{\frac{1}{2}}\theta(s)ds+ \int_{-\bar{\tau}}^{0}((\bar{L}^{+})^{2}\theta^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{2}\bar{L}^{\frac{1}{2}}\theta(s)ds\\ &+& \int_{-\tau(t)}^{0}((\bar{L}^{+})^{2}\theta^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{02}\bar{L}^{\frac{1}{2}}\theta(s)ds+ \int_{-h(t)}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{3}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)ds\\ &+&\bar{\tau} \int_{-\bar{\tau}}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{4}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)dsd\beta+\bar{\tau} \int_{-\bar{\tau}}^{0}\int_{\beta}^{0}\theta^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{5}\bar{L}^{\frac{1}{2}}\theta(s)dsd\beta\\ &+&\bar{h}\int_{-\bar{h}}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{R}_{6}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)dsd\beta+\frac{\bar{\tau}^{2}}{2} \int_{-\bar{\tau}}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{T}_{1}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)dsd\beta d\gamma\\ &+&\frac{\bar{\tau}^{3}}{6} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{T}_{2}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)dsd\beta d\gamma d\lambda\\ &+&\frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{\gamma}^{0}\int_{\beta}^{0}\dot{\theta}^{T}(s)\bar{L}^{\frac{1}{2}}\hat{T}_{3}\bar{L}^{\frac{1}{2}}\dot{\theta}(s)dsd\beta d\gamma d\alpha d\lambda\\ &\leq&\bigg\{\lambda_{\max}(\hat{P})\theta^{T}(0)\bar{L}\theta(0)+2\lambda_{\max}(\hat{Q_{1}})[\max\{|\bar{L}^{+},\;\bar{L}^{-}|^{2}-\bar{L}^{-}\}]\\ &+&2\lambda_{\max}(\hat{Q_{2}})[\max\{\bar{L}^{+}-|\bar{L}^{+},\;\bar{L}^{-}|^{2}\}]+\bar{\tau}[\lambda_{\max}(\hat{R_{01}})+\lambda_{\max}(\hat{R_{1}})]\\ &+&\bar{\tau} [\max\{|\bar{L}^{+},\;\bar{L}^{-}|^{2}\}][\lambda_{\max}(\hat{R_{02}})+\lambda_{\max}(\hat{R_{2}})]+\bar{h}\lambda_{\max}(\hat{R_{3}})+\frac{\bar{\tau}^3}{2}\lambda_{\max}(\hat{R_{4}})\\ &+&\frac{\bar{\tau}^3}{2}\lambda_{\max}(\hat{R_{5}})+\frac{\bar{h}^3}{2}\lambda_{\max}(\hat{R_{6}})+\frac{\bar{\tau}^5}{12}\lambda_{\max}(\hat{T_{1}})+\frac{\bar{\tau}^7}{144}\lambda_{\max}(\hat{T_{2}})+\frac{\bar{\tau}^9}{2880}\lambda_{\max}(\hat{T_{3}})\bigg\}\\ &&\times \sup\limits_{t_{0}\in[-\bar{\tau},0]}\big\{\theta^{T}(t_{0})\bar{L}\theta(t_{0}),\;\dot{\theta}^{T}(t_{0})\bar{L}\dot{\theta}(t_{0})\big\},\\ &&V(\theta(0))\leq \bar{c}_{1}\Gamma, \end{eqnarray*}

    where

    \begin{eqnarray*} \Gamma& = &[\lambda_{2}+\lambda_{3}[\tilde{L}^{2}-\bar{L}^{-}]+\lambda_{4}[\bar{L}^{+}-\tilde{L}^{2}]+\bar{\tau}[\lambda_{5}+\lambda_{6}]+\bar{\tau} \tilde{L}^{2}[\lambda_{7}+\lambda_{8}]+\bar{h}\lambda_{9}+\frac{\bar{\tau}^{3}}{2}[\lambda_{10}\\ &+&\lambda_{11}]+\frac{\bar{h}^{3}}{2}\lambda_{12}+\frac{\bar{\tau}^{5}}{12}\lambda_{13}+\frac{\bar{\tau}^7}{144}\lambda_{14}+\frac{\bar{\tau}^9}{2880}\lambda_{15}, \end{eqnarray*}

    where \tilde{L} = \max\{|\bar{L}^{+}|,\;|\bar{L}^{-}|\}.

    Furthermore, it follows from the definition of V(\theta(t)) in (3.4) that

    \begin{eqnarray} V(\theta(t))\geq\theta^{T}(t)P\theta(t)\geq\lambda_{min}(\hat{P})\theta^{T}(t)\bar{L}\theta(t) = \lambda_{1}\theta^{T}(t)\bar{L}\theta(t). \end{eqnarray} (3.15)

    Because of inequalities (3.14) and (3.15), we obtain

    \begin{eqnarray} &&\lambda_{1} \theta^{T}(t)\bar{L}\theta(t)\leq e^{\delta t}[V(\theta(0))+\delta\int_{0}^{t}e^{-\delta s}\bar{\omega}^{T}(s)\bar{\omega}(s)ds],\\ &&\Rightarrow \theta^{T}(t)\bar{L}\theta(t)\leq \frac{e^{\delta T_{1}}[\bar{c}_{1}\Gamma+\bar{b}(1-e^{-\delta T_{1}})]}{\lambda_{1}}. \end{eqnarray} (3.16)

    From condition (3.3), we arrive at \theta^{T}(t)\bar{L}\theta(t) < \bar{c}_{2}. From Definition 1, the model (3.1) is FTB with regard to (\bar{c}_{1}, \; \bar{c}_{2}, \; T_{1}, \; \bar{L}, \; \bar{b}).

    This completes the proof.

    Remark 6: In Theorem 1, sufficient conditions are established to guarantee that model (3.1) is FTB; Theorem 2 will now present the FTP conditions.

    In this second part, we study the FTP of the following model:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\theta}(t) = -\bar{A}\theta(t)+\bar{B}\bar{f}(\theta(t))+\bar{C}\bar{f}(\theta(t-\tau(t)))+\bar{D}\dot{\theta}(t-h(t))+\bar{\omega}(t), \\ \bar{u}(t) = \bar{K}_{1}\theta(t)+\bar{K}_{2}\bar{f}(\theta(t)). \end{array} \right. \end{eqnarray} (3.17)

    Theorem 2: Suppose that Assumptions 1–5 hold. Let \bar{\tau}, \; \mu, \; \bar{h}, \; h_{1} and \delta be positive scalars; then system (3.17) is FTP with respect to (\bar{c}_{1}, \bar{c}_{2}, T_{1}, \bar{L}, \bar{b}) if there exist symmetric positive definite matrices P, Q_{1}, Q_{2}, R_{01}, R_{1}, R_{02}, R_{2}, R_{3}, R_{4}, R_{5}, R_{6}, T_{01}, T_{2}, T_{3}, diagonal matrices M_{1} > 0, M_{2} > 0, M_{3} > 0, and a scalar \beta > 0 such that the following LMIs hold:

    \begin{eqnarray} \tilde{\Omega} = \left[ \begin{array}{cccccccccccccc} \eta_{1,1} & \bar{L}_{1}M_{3} & \eta_{1,3} & \eta_{1,4} & \eta_{1,5} & G_{1}D & \eta_{1,7} & \eta_{1,8} & 0 & \eta_{1,10} & \frac{\Pi^{2}}{2h}R_{6} & \frac{\bar{\tau}^{2}}{2}T_{2} & \frac{\bar{\tau}^{3}}{6}T_{3} & G_{1}-\beta I-\bar{K}_{1} \\ \star& \eta_{2,2} & 0 & 0 & 0 & 0 & -M_{3}^{T}\bar{L}_{2}^{T} & \eta_{2,8} & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \eta_{3,3} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\Pi^{2}}{2\bar{\tau}}R_{4} & 0 & 0 & 0 & 0 \\ \star & \star & \star & \eta_{4,4} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\Pi^{2}}{2h}R_{6} & 0 & 0 & 0 \\ \star & \star & \star & \star & \eta_{5,5} & G_{2}\bar{D} & \eta_{5,7} & G_{2}\bar{C} & 0 & 0 & 0 & 0 & 0 & G_{2}\\ \star & \star & \star & \star & \star & \eta_{6,6} & 0 &0 & 0 & 0 & 0 & 0 & 0 & 0\\ \star & \star & \star & \star & \star & \star & \eta_{7,7} & M_{3} & 0 & 0 & 0 & 0 & 0 & -\bar{K}_{2}\\ \star & \star & \star & \star & \star & \star & \star & \eta_{8,8} & 0 & 0 & 0 & 0 & 0 & 0\\ \star & \star & \star & \star & \star & \star & \star & \star & \eta_{9,9} & 0 & 0 & 0 & 0 & 0\\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \eta_{10,10} & 0 & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \eta_{11,11} & 0 & 0 & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{2} & 0 & 0\\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -T_{3} & 0 \\ \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -\beta I \\ \end{array} \right] < 0,\qquad \end{eqnarray} (3.18)
    \begin{eqnarray*} \bar{c}_{1}\Gamma+\bar{b}(1-e^{-\delta T_{1}}) < \bar{c}_{2}\lambda_{1}e^{-\delta T_{1}}, \end{eqnarray*}

    where the parameters are the same as those in Theorem 1.

    Proof: Select the Lyapunov functional used in Theorem 1. We get

    \begin{eqnarray*} &&\dot{V}(\theta(t))-\delta V_{1}(\theta(t))-2\bar{u}^{T}(t)\bar{\omega}(t)-\beta\bar{\omega}^{T}(t)\bar{\omega}(t)\leq\xi^{T}(t)\tilde{\Omega}\xi(t) < 0,\\ &&\Rightarrow \dot{V}(\theta(t))-\delta V(\theta(t))-2\bar{u}^{T}(t)\bar{\omega}(t)-\beta\bar{\omega}^{T}(t)\bar{\omega}(t) < 0, \end{eqnarray*}

    where \tilde{\Omega} is given in (3.18), then

    \begin{eqnarray} \dot{V}(\theta(t))-\delta V(\theta(t))&\leq&2\bar{u}^{T}(t)\bar{\omega}(t)+\beta\bar{\omega}^{T}(t)\bar{\omega}(t). \end{eqnarray} (3.19)

    Multiplying (3.19) by e^{-\delta t}, we can get

    \begin{eqnarray*} e^{-\delta t}\dot{V}(\theta(t))-\delta e^{-\delta t} V(\theta(t))\leq2e^{-\delta t}\bar{u}^{T}(t)\bar{\omega}(t)+\beta e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t), \end{eqnarray*}
    \begin{eqnarray*} \frac{d}{dt}\big(e^{-\delta t}V(\theta(t))\big)\leq2e^{-\delta t}\bar{u}^{T}(t)\bar{\omega}(t)+\beta e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t). \end{eqnarray*}

    Integrating the above inequality from 0 to T_{1}, we can write

    \begin{eqnarray*} e^{-\delta T_{1}}V(\theta(t))-V(\theta(0))&\leq&2\int_{0}^{T_{1}}e^{-\delta t}\bar{u}^{T}(t)\bar{\omega}(t)dt+\beta \int_{0}^{T_{1}}e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t)dt. \end{eqnarray*}

    Considering the zero initial condition \theta_{0} = 0, we have V(\theta(0)) = 0; then

    \begin{eqnarray*} e^{-\delta T_{1}}V(\theta(t))&\leq&2\int_{0}^{T_{1}}e^{-\delta t}\bar{u}^{T}(t)\bar{\omega}(t)dt+\beta\int_{0}^{T_{1}}e^{-\delta t}\bar{\omega}^{T}(t)\bar{\omega}(t)dt, \end{eqnarray*}
    \begin{eqnarray} V(\theta(t))&\leq&e^{\delta T_{1}}\int_{0}^{T_{1}}[2\bar{u}^{T}(t)\bar{\omega}(t)+\beta\bar{\omega}^{T}(t)\bar{\omega}(t)]dt. \end{eqnarray} (3.20)

    Since V(\theta(t))\geq 0, we can say from (3.20) that

    \begin{eqnarray} 2\int_{0}^{T_{1}}\bar{u}^{T}(t)\bar{\omega}(t)dt\geq-\beta \int_{0}^{T_{1}}\bar{\omega}^{T}(t)\bar{\omega}(t)dt. \end{eqnarray} (3.21)

    Finally, we can conclude that system (3.17) is FTP. This completes the proof.

    Remark 7: It should be emphasized that the LMI approach proposed in this paper is useful for reducing the conservatism of the delay system, which may lead to less conservative results. This shows the advantage of our proposed method.

    Remark 8: We note that the optimal value of \bar{c}_{2} depends on the parameter \delta; hence, the optimal minimum value of \bar{c}_{2} is determined by the minimum value of \delta for which the LMIs remain feasible.
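
    The search over \delta described in Remark 8 can be automated by a simple bisection. The sketch below is only illustrative: it assumes a user-supplied function lmis_feasible(delta) (hypothetical here) that encodes the LMIs (3.2) and (3.3) for a fixed \delta and returns True when they are feasible, and it assumes that feasibility is monotone in \delta.

```python
def minimize_delta(lmis_feasible, delta_lo=1e-4, delta_hi=10.0, tol=1e-3):
    # Bisection for the smallest delta in [delta_lo, delta_hi] such that
    # lmis_feasible(delta) is True, assuming monotone feasibility in delta.
    if not lmis_feasible(delta_hi):
        raise ValueError("LMIs are infeasible even for the largest delta tried")
    if lmis_feasible(delta_lo):
        return delta_lo
    while delta_hi - delta_lo > tol:
        mid = 0.5 * (delta_lo + delta_hi)
        if lmis_feasible(mid):
            delta_hi = mid   # feasible: try a smaller delta
        else:
            delta_lo = mid   # infeasible: increase delta
    return delta_hi
```

    Once the smallest feasible \delta is found, the corresponding optimal \bar{c}_{2} is read off from condition (3.3).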

    Remark 9: In the majority of published works, the Lyapunov function theory is the most effective approach to investigating the problems of stability and passivity for diverse dynamical models. Moreover, it can be seen that the existing literature [11,12,13,15,19,32,33,34,35] contains Lyapunov functions with single, double, triple, and quadruple integral terms. However, in this article, we give a Lyapunov–Krasovskii functional with quintuple integral terms such as \frac{\bar{\tau}^{4}}{24} \int_{-\bar{\tau}}^{0}\int_{\lambda}^{0}\int_{\alpha}^{0}\int_{\gamma}^{0}\int_{t+\beta}^{t}\dot{\theta}^{T}(s)T_{3}\dot{\theta}(s)dsd\beta d\gamma d\alpha d\lambda. Different from the published work, this is the first time the problem of FTP of neutral-type CVNNs has been studied. Via a Lyapunov–Krasovskii functional with triple, quadruple, and quintuple integral terms, and by utilizing Jensen's inequality and the Wirtinger-type inequality technique, new sufficient conditions for FTB and FTP are obtained in terms of LMIs, which is effective in reducing conservatism.

    In this section, two examples are given to show the feasibility of our results.

    Example 1: Consider the following model

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{z}(t) = -Az(t)+Bf(z(t))+Cf(z(t-\tau(t)))+D\dot{z}(t-h(t))+\omega(t),\\ z(s) = \psi(s),\;\;s\in[-\rho,\;0], \end{array} \right. \end{eqnarray} (4.1)

    where

    \begin{eqnarray*} A& = &\left( \begin{array}{cc} 1.9 & 0 \\ 0 & 1.2 \\ \end{array} \right),\;B = \left( \begin{array}{cc} 0.5+0.5i & 0.03+0.03i \\ -0.4-0.4i & -0.1-0.1i \\ \end{array} \right),\;C = \left( \begin{array}{cc} 1.1+1.1i & 0.03+0.03i \\ 0.07+0.07i & 0.1+0.1i \\ \end{array} \right),\\ D& = &\left( \begin{array}{cc} 0.1+0.1i & 0 \\ 0 & 0.1+0.1i \\ \end{array} \right),\;\bar{A} = \left( \begin{array}{cccc} 1.9 & 0 & 0 & 0 \\ 0 & 1.2 & 0 & 0 \\ 0 & 0 & 1.9 & 0 \\ 0 & 0 & 0 & 1.2 \\ \end{array} \right),\;\bar{B} = \left( \begin{array}{cccc} 0.5 & 0.03 & -0.5 & -0.03 \\ -0.4 & -0.1 & 0.4 & 0.1 \\ 0.5 & 0.03 & 0.5 & 0.03 \\ -0.4 & -0.1 & -0.4 & -0.1 \\ \end{array} \right),\\ \bar{C}& = &\left( \begin{array}{cccc} 1.1 & 0.03 & -1.1 & -0.03 \\ 0.07 & 0.1 & -0.07 & -0.1 \\ 1.1 & 0.03 & 1.1 & 0.03 \\ 0.07 & 0.1 & 0.07 & 0.1 \\ \end{array} \right),\;\bar{D} = \left( \begin{array}{cccc} 0.1 & 0 & -0.1 & 0 \\ 0 & 0.1 & 0 & -0.1 \\ 0.1 & 0 & 0.1 &0 \\ 0 & 0.1 & 0 & 0.1 \\ \end{array} \right), \end{eqnarray*}

    f_{j}(z) = 0.3(|x_{j}+1|-|x_{j}-1|)+i0.3(|y_{j}+1|-|y_{j}-1|). Thus, L_{1} = L_{2} = 0_{2\times 2} and L_{3} = L_{4} = \left(\begin{array}{cc} 0.3 & 0 \\ 0 & 0.3 \\ \end{array} \right), which gives \bar{L}_{1} = 0_{4\times 4} and \bar{L}_{2} = 0.3 I_{4}. The delays and the disturbance are \tau(t) = 0.15(1-\sin(2t)), \; h(t) = 0.15(1-\cos(2t)) and \omega(t) = 0.9\sin(\pi t)e^{-0.5t}+\sin(\pi t)e^{-0.3t}i. The remaining parameters are \delta = 1, \; \bar{c}_{1} = \left(\begin{array}{c} 0.5 \\ 0.5 \\ \end{array} \right) 0.5, \; \bar{b} = \left(\begin{array}{c} 0.81 \\ 0.60 \\ \end{array} \right), \; T_{1} = 15 and the matrix \bar{L} = I. Using the Matlab LMI toolbox to solve the LMIs (3.2) and (3.3) in Theorem 1, we obtain feasible solutions with an optimal minimum value of \bar{c}_{2} = \left(\begin{array}{c} 9.8208 \\ 9.4072 \\ \end{array} \right). The trajectories of the solution of system (4.1) in Example 1 are shown in Figures 1 and 2, and the time histories of (Re(z))^{T}L(Re(z)) and (Im(z))^{T}L(Im(z)) are given in Figure 3. Hence, it can be concluded that system (4.1) is FTB.

    Figure 1.  The trajectories of the real parts of the solution of model (4.1) in Example 1 with 4 initial conditions.
    Figure 2.  The trajectories of the imaginary parts of the solution of model (4.1) in Example 1 with 4 initial conditions.
    Figure 3.  Time history of (Re(z))^{T}L(Re(z)) and (Im(z))^{T}L(Im(z)) of model (4.1) in Example 1 with 4 initial conditions.
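    The boundedness behaviour illustrated in Figures 1–3 can be reproduced numerically. Below is a minimal Python sketch (not the authors' code) that integrates the separated real-imaginary form of (4.1) with a fixed-step Euler scheme, under the following assumptions: \theta = (\mathrm{Re}\,z; \mathrm{Im}\,z), a constant initial function on [-\rho, 0] satisfying the \bar{c}_{1} bound, both complex components of the disturbance sharing the scalar signal given above, and the delayed state and delayed derivative read from stored history buffers.

    import numpy as np

    # Separated system matrices of Example 1 (copied from the text).
    A_bar = np.diag([1.9, 1.2, 1.9, 1.2])
    B_bar = np.array([[ 0.5,  0.03, -0.5, -0.03],
                      [-0.4, -0.1,   0.4,  0.1 ],
                      [ 0.5,  0.03,  0.5,  0.03],
                      [-0.4, -0.1,  -0.4, -0.1 ]])
    C_bar = np.array([[1.1 , 0.03, -1.1 , -0.03],
                      [0.07, 0.1 , -0.07, -0.1 ],
                      [1.1 , 0.03,  1.1 ,  0.03],
                      [0.07, 0.1 ,  0.07,  0.1 ]])
    D_bar = np.array([[0.1, 0.0, -0.1,  0.0],
                      [0.0, 0.1,  0.0, -0.1],
                      [0.1, 0.0,  0.1,  0.0],
                      [0.0, 0.1,  0.0,  0.1]])

    def f(v):                      # activation, applied component-wise
        return 0.3 * (np.abs(v + 1.0) - np.abs(v - 1.0))

    def omega(t):                  # separated disturbance (assumed equal per component)
        re = 0.9 * np.sin(np.pi * t) * np.exp(-0.5 * t)
        im = np.sin(np.pi * t) * np.exp(-0.3 * t)
        return np.array([re, re, im, im])

    tau = lambda t: 0.15 * (1.0 - np.sin(2.0 * t))   # discrete delay
    h   = lambda t: 0.15 * (1.0 - np.cos(2.0 * t))   # neutral delay

    dt, T1, rho = 1e-3, 15.0, 0.3
    n_hist, N = int(round(rho / dt)), int(round(T1 / dt))

    theta  = np.zeros((N + 1, 4))
    dtheta = np.zeros((N + 1, 4))
    theta[0] = np.array([0.4, -0.3, 0.35, -0.2])     # one admissible initial state (assumption)
    hist_th  = np.tile(theta[0], (n_hist, 1))        # constant initial function on [-rho, 0)
    hist_dth = np.zeros((n_hist, 4))                 # and its derivative

    def delayed(hist, run, k, delay):
        """Sample a buffered signal at time t_k - delay (history buffer used for t < 0)."""
        j = k - int(round(delay / dt))
        return run[j] if j >= 0 else hist[j]         # negative j indexes the history tail

    for k in range(N):
        t = k * dt
        th_tau = delayed(hist_th, theta, k, tau(t))
        dth_h  = delayed(hist_dth, dtheta, k, h(t))
        dtheta[k] = (-A_bar @ theta[k] + B_bar @ f(theta[k])
                     + C_bar @ f(th_tau) + D_bar @ dth_h + omega(t))
        theta[k + 1] = theta[k] + dt * dtheta[k]

    re_energy = np.sum(theta[:, :2] ** 2, axis=1)    # (Re z)^T L (Re z), with L = I
    im_energy = np.sum(theta[:, 2:] ** 2, axis=1)    # (Im z)^T L (Im z), with L = I
    print("max real-part energy on [0, T1]:", re_energy.max())   # compare with 9.8208
    print("max imag-part energy on [0, T1]:", im_energy.max())   # compare with 9.4072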

    Example 2: Consider the following system:

    \begin{eqnarray} \left\{ \begin{array}{lll} \dot{z}(t) = -Az(t)+Bf(z(t))+Cf(z(t-\tau(t)))+D\dot{z}(t-h(t))+\omega(t),\\ u(t) = K_{1}z(t)+K_{2}f(z(t)),\\ z(s) = \psi(s),\;\;s\in[-\rho,\;0], \end{array} \right. \end{eqnarray} (4.2)

    where

    \begin{eqnarray*} A& = &\left( \begin{array}{cc} 3.8 & 0 \\ 0 & 2.4 \\ \end{array} \right),\;B = \left( \begin{array}{cc} 1+i & 0.06+0.06i \\ -0.8-0.8i & -0.2-0.2i \\ \end{array} \right),\;C = \left( \begin{array}{cc} 2.2+2.2i & 0.06+0.06i \\ 0.14+0.14i & 0.2+0.2i \\ \end{array} \right),\\D& = &\left( \begin{array}{cc} 0.2+0.2i & 0 \\ 0 & 0.2+0.2i \\ \end{array} \right),\;K_{1} = \left( \begin{array}{cc} 1.2 & 1.6 \\ 1.25 & 1 \\ \end{array} \right),\;K_{2} = \left( \begin{array}{cc} 1+i & 0.5+0.6i \\ 0.5-0.4i & 1-i \\ \end{array} \right),\\ \bar{A}& = &\left( \begin{array}{cccc} 3.8 & 0 & 0 & 0 \\ 0 & 2.4 & 0 & 0 \\ 0 & 0 & 3.8 & 0 \\ 0 & 0 & 0 & 2.4 \\ \end{array} \right),\;\bar{B} = \left( \begin{array}{cccc} 1 & 0.06 & -1 & -0.06 \\ -0.8 & -0.2 & 0.8 & 0.2 \\ 1 & 0.06 & 1 & 0.06 \\ -0.8 & -0.2 & -0.8 & -0.2 \\ \end{array} \right),\; \bar{C} = \left( \begin{array}{cccc} 2.2 & 0.06 & -2.2 & -0.06 \\ 0.14 & 0.2 & -0.14 & -0.2 \\ 2.2 & 0.06 & 2.2 & 0.06 \\ -0.14 & 0.2 & 0.14 & 0.2 \\ \end{array} \right),\\\bar{D}& = &\left( \begin{array}{cccc} 0.2 & 0 & -0.2 & 0 \\ 0 & 0.2 & 0 & -0.2 \\ 0.2 & 0 & 0.2 &0 \\ 0 & 0.2 & 0 & 0.2 \\ \end{array} \right),\;\bar{K}_{1} = \left( \begin{array}{cccc} 1.2 & -1.6 & 0 & 0 \\ -1.25 & 1 & 0 & 0 \\ 0 & 0 & 1.2 & -1.6 \\ 0 & 0 & -1.25 & 1 \\ \end{array} \right),\; \bar{K}_{2} = \left( \begin{array}{cccc} 1 & 0.5 & 1 & -0.6 \\ 0.5 & 1 & 0.4 & 1 \\ -1 & 0.6 & 1 & 0.5 \\ -0.4 & -1 & 0.5 & 1 \\ \end{array} \right), \end{eqnarray*}

    f_{j}(z) = 0.6\tanh(x_{j})+i0.7\tanh(y_{j}). Thus, L_{1} = L_{2} = 0_{2\times 2}, L_{3} = \left(\begin{array}{cc} 0.3 & 0 \\ 0 & 0.3 \\ \end{array} \right) and L_{4} = \left(\begin{array}{cc} 0.35 & 0 \\ 0 & 0.35 \\ \end{array} \right), which gives \bar{L}_{1} = 0_{4\times 4} and \bar{L}_{2} = \left(\begin{array}{cccc} 0.3 & 0 & 0 & 0 \\ 0 & 0.3 & 0 & 0 \\ 0 & 0 & 0.35 & 0 \\ 0 & 0 & 0 & 0.35 \\ \end{array} \right). The delays and the disturbance are \tau(t) = 0.15(1-\cos(2t)), \; h(t) = 0.15(1-\sin(2t)) and \omega(t) = 0.2e^{-0.5t}+0.3e^{-0.3t}i. The remaining parameters are \delta = 1, \; \bar{c}_{1} = \left(\begin{array}{c} 0.1 \\ 0.1 \\ \end{array} \right) 0.5, \; \bar{b} = \left(\begin{array}{c} 0.019 \\ 0.053 \\ \end{array} \right), \; T_{1} = 12 and the matrix \bar{L} = I. Using the Matlab LMI toolbox to solve the LMIs (3.2) and (3.3) in Theorem 2, we obtain feasible solutions with an optimal minimum value of \bar{c}_{2} = \left(\begin{array}{c} 12.4431 \\ 15.0562 \\ \end{array} \right). From Table 1, the results presented in this manuscript are significantly better than those of [32,33,34,35], which confirms the validity of our work.

    Table 1.  The maximum allowable upper bounds of \bar{\tau} for \mu = 0.8 and \mu = 0.9 in Example 2.

    Method                        \mu = 0.8    \mu = 0.9
    [32]                          3.9212       2.3901
    [33]                          5.2403       3.5211
    [34]                          5.6384       3.7718
    [35]                          6.5411       4.5074
    \bar{\tau} of our result      6.9307       5.2113

    The trajectories of the solution of model (4.2) in Example 2 are given in Figures 4 and 5. The time histories of (Re(z))^{T}L(Re(z)) and (Im(z))^{T}L(Im(z)) are given in Figure 6. Hence, it can be concluded that the system (4.2) is FTP.

    Figure 4.  The trajectories of the real parts of the solution of model (4.2) in Example 2 with 4 initial conditions.
    Figure 5.  The trajectories of the imaginary parts of the solution of model (4.2) in Example 2 with 4 initial conditions.
    Figure 6.  Time history of (Re(z))^{T}L(Re(z)) and (Im(z))^{T}L(Im(z)) of system (4.2) in Example 2 with 4 initial conditions.
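    The FTP conclusion can also be checked numerically: inequality (3.21) compares 2\int_{0}^{T_{1}}\bar{u}^{T}(t)\bar{\omega}(t)dt with -\beta\int_{0}^{T_{1}}\bar{\omega}^{T}(t)\bar{\omega}(t)dt. Below is a minimal Python sketch (not the authors' code) that evaluates both sides by trapezoidal integration from sampled trajectories; it assumes that \theta(t), and hence \bar{u}(t) = \bar{K}_{1}\theta(t)+\bar{K}_{2}\bar{f}(\theta(t)), has been obtained from an Euler simulation of (4.2) analogous to the Example 1 sketch, that both complex components of \omega(t) share the scalar signal given above, and that \beta is the passivity level returned by the LMIs.

    import numpy as np

    def trapz(y, t):
        """Simple trapezoidal rule on a (possibly non-uniform) grid."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

    def check_ftp(t, u_bar, w_bar, beta):
        """Evaluate both sides of inequality (3.21).

        t      : (N,) sample times in [0, T1]
        u_bar  : (N, 4) samples of the separated output
        w_bar  : (N, 4) samples of the separated disturbance
        beta   : passivity level obtained from the LMIs
        """
        lhs = 2.0 * trapz(np.sum(u_bar * w_bar, axis=1), t)
        rhs = -beta * trapz(np.sum(w_bar * w_bar, axis=1), t)
        return lhs, rhs, lhs >= rhs

    def omega_bar(t):
        """Separated disturbance of Example 2 (assumed equal per component)."""
        re = 0.2 * np.exp(-0.5 * t)
        im = 0.3 * np.exp(-0.3 * t)
        return np.array([re, re, im, im])

    # Usage sketch: theta_samples and u_samples would come from an Euler simulation
    # of (4.2) analogous to the Example 1 sketch, with
    #   u_samples[k] = K1_bar @ theta_samples[k] + K2_bar @ f_bar(theta_samples[k]).
    # t_grid = np.arange(0.0, 12.0 + 1e-3, 1e-3)
    # w_samples = np.array([omega_bar(t) for t in t_grid])
    # lhs, rhs, ok = check_ftp(t_grid, u_samples, w_samples, beta)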

    This paper has focused on the FTP problem of neutral-type complex-valued NNs in the presence of time-varying delays. By using Lyapunov functionals, the Wirtinger inequality and LMIs, new sufficient conditions are obtained to guarantee the finite-time boundedness and finite-time passivity of our model. Finally, two examples are presented to demonstrate the effectiveness of our main results. In future work, we will study the finite-time dissipativity of stochastic complex NNs with mixed delays via a non-separation approach, as well as the fixed-time passivity of coupled Clifford-valued NNs subject to multiple delayed couplings.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare there is no conflict of interest.



    [1] M. M. Emam, E. H. Houssein, R. M. Ghoniem, A modified reptile search algorithm for global optimization and image segmentation: Case study brain MRI images, Comput. Biol. Med., 152 (2023), 106404. https://doi.org/10.1016/j.compbiomed.2022.106404 doi: 10.1016/j.compbiomed.2022.106404
    [2] E. H. Houssein, D. A. Abdelkareem, M. M. Emam, M. A. Hameed, M. Younan, An efficient image segmentation method for skin cancer imaging using improved golden jackal optimization algorithm, Comput. Biol. Med., 149 (2022), 106075. https://doi.org/10.1016/j.compbiomed.2022.106075 doi: 10.1016/j.compbiomed.2022.106075
    [3] W. Zhu, L. Liu, F. Kuang, L. Li, S. Xu, Y. Liang, An efficient multi-threshold image segmentation for skin cancer using boosting whale optimizer, Comput. Biol. Med., 151 (2022), 106227. https://doi.org/10.1016/j.compbiomed.2022.106227 doi: 10.1016/j.compbiomed.2022.106227
    [4] L. Nie, L. Zhang, L. Meng, X. Song, X. Chang, X. Li, Modeling disease progression via multisource multitask learners: A case study with Alzheimer's disease, IEEE Trans. Neural Networks Learn. Syst., 28 (2017), 1508–1519. https://doi.org/10.1109/TNNLS.2016.2520964 doi: 10.1109/TNNLS.2016.2520964
    [5] J. Tang, Q. Sun, Z. Wang, Y. Cao, Perfect-reconstruction 4-tap size-limited filter banks for image fusion application, in 2007 International Conference on Mechatronics and Automation, (2007), 255–260. https://doi.org/10.1109/ICMA.2007.4303550
    [6] J. Tang, A contrast based image fusion technique in the DCT domain, Digital Signal Process., 14 (2004), 218–226. https://doi.org/10.1016/j.dsp.2003.06.001 doi: 10.1016/j.dsp.2003.06.001
    [7] E. Candès, L. Demanet, D. Donoho, L. Ying, Fast discrete curvelet transforms, Multiscale Model. Simul., 5 (2006), 861–899. https://doi.org/10.1137/05064182X doi: 10.1137/05064182X
    [8] B. Yu, B. Jia, L. Ding, Z. Cai, Q. Wu, R. Law, et al., Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion, Neurocomputing, 182 (2016), 1–9. https://doi.org/10.1016/j.neucom.2015.10.084 doi: 10.1016/j.neucom.2015.10.084
    [9] Z. Zhu, M. Zheng, G. Qi, D. Wang, Y. Xiang, A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain, IEEE Access, 7 (2019), 20811–20824. https://doi.org/10.1109/ACCESS.2019.2898111 doi: 10.1109/ACCESS.2019.2898111
    [10] M. Yin, X. Liu, Y. Liu, X. Chen, Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain, IEEE Trans. Instrum. Meas., 68 (2019), 49–64. https://doi.org/10.1109/TIM.2018.2838778 doi: 10.1109/TIM.2018.2838778
    [11] H. Ullah, B. Ullah, L. Wu, F. Y. O. Abdalla, G. Ren, Y. Zhao, Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-Laplacian in non-subsampled shearlet transform domain, Biomed. Signal Process. Control, 57 (2020), 101724. https://doi.org/10.1016/j.bspc.2019.101724 doi: 10.1016/j.bspc.2019.101724
    [12] Z. Zhou, B. Wang, S. Li, M. Dong, Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters, Inf. Fusion, 30 (2016), 15–26. https://doi.org/10.1016/j.inffus.2015.11.003 doi: 10.1016/j.inffus.2015.11.003
    [13] X. Qiu, M. Li, L. Zhang, X. Yuan, Guided filter-based multi-focus image fusion through focus region detection, Signal Process. Image Commun., 72 (2019), 35–46. https://doi.org/10.1016/j.image.2018.12.004 doi: 10.1016/j.image.2018.12.004
    [14] L. Caraffa, J. P. Tarel, P. Charbonnier, The guided bilateral filter: when the joint/cross bilateral filter becomes robust, IEEE Trans. Image Process., 24 (2015), 1119–1208. https://doi.org/10.1109/TIP.2015.2389617 doi: 10.1109/TIP.2015.2389617
    [15] L. Jian, X. Yang, Z. Zhou, K. Zhou, K. Liu, Multi-scale image fusion through rolling guidance filter, Future Gener. Comput. Syst., 83 (2018), 310–325. https://doi.org/10.1016/j.future.2018.01.039 doi: 10.1016/j.future.2018.01.039
    [16] J. Du, W. Li, B. Xiao, Fusion of anatomical and function images using parallel saliency features, Inf. Sci., 430–431 (2018), 567–576. https://doi.org/10.1016/j.ins.2017.12.008 doi: 10.1016/j.ins.2017.12.008
    [17] R. J. Jevnisek, S. Avidan, Co-occurrence filter, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 3816–3824. https://doi.org/10.1109/CVPR.2017.406
    [18] Z. Li, J. Zheng, Z. Zhu, W. Yao, S. Wu, Weighted guided image filtering, IEEE Trans. Image Process., 24 (2015), 120–129. https://doi.org/10.1109/TIP.2014.2371234 doi: 10.1109/TIP.2014.2371234
    [19] H. Yin, Y. Gong, G. Qiu, Side window guided filtering, Signal Process., 165 (2019), 315–330. https://doi.org/10.1016/j.sigpro.2019.07.026 doi: 10.1016/j.sigpro.2019.07.026
    [20] M. Diwakar, P. Singh, A. Shankar, Multi-modal medical image fusion framework using co-occurrence filter and local extrema in NSST domain, Biomed. Signal Process. Control, 68 (2021), 102788. https://doi.org/10.1016/j.bspc.2021.102788 doi: 10.1016/j.bspc.2021.102788
    [21] W. Liu, Z. Wang, A novel multi-focus image fusion method using multiscale shearing non-local guided averaging filter, Signal Process., 166 (2020), 107252. https://doi.org/10.1016/j.sigpro.2019.107252 doi: 10.1016/j.sigpro.2019.107252
    [22] B. Meher, S. Agrawal, R. Panda, A. Abraham, A survey on region based image fusion methods, Inf. Fusion, 48 (2019), 119–132. https://doi.org/10.1016/j.inffus.2018.07.010 doi: 10.1016/j.inffus.2018.07.010
    [23] X. Li, F. Zhou, H. Tan, W. Zhang, C. Zhao, Multimodal medical image fusion based on joint bilateral filter and local gradient energy, Inf. Sci., 569 (2021), 302–325. https://doi.org/10.1016/j.ins.2021.04.052 doi: 10.1016/j.ins.2021.04.052
    [24] C. Xing, Z. Wang, Q. Quyang, C. Dong, C. Duan, Image fusion method based on spatially masked convolutional sparse representation, Image Vision Comput., 90 (2019), 103806. https://doi.org/10.1016/j.imavis.2019.08.010 doi: 10.1016/j.imavis.2019.08.010
    [25] S. Maqsood, U. Javed, Multi-modal medical image fusion based on two-scale image decomposition and sparse representation, Biomed. Signal Process. Control, 57 (2020), 101810. https://doi.org/10.1016/j.bspc.2019.101810 doi: 10.1016/j.bspc.2019.101810
    [26] S. Goyal, V. Singh, A. Rani, N. Yadav, FPRSGF denoised non-subsampled shearlet transform-based image fusion using sparse representation, Signal Image Video Process., 14 (2020), 719–726. https://doi.org/10.1007/s11760-019-01597-z doi: 10.1007/s11760-019-01597-z
    [27] F. Zhou, X. Li, M. Zhou, Y. Chen, H. Tan, A new dictionary construction based multimodal medical image fusion framework, Entropy, 21 (2019), 267. https://doi.org/10.3390/e21030267 doi: 10.3390/e21030267
    [28] Y. Liu, X. Chen, R. K. Ward, Z. J. Wang, Medical image fusion via convolutional sparsity based morphological component analysis, IEEE Signal Process. Lett., 26 (2019), 485–489. https://doi.org/10.1109/lsp.2019.2895749 doi: 10.1109/LSP.2019.2895749
    [29] Y. Zhang, Y. Liu, P. Sun, H. Yan, X. Zhao, L. Zhang, IFCNN: A general image fusion framework based on convolutional neural network, Inf. Fusion, 54 (2020), 99–118. https://doi.org/10.1016/j.inffus.2019.07.011 doi: 10.1016/j.inffus.2019.07.011
    [30] H. Li, Y. Wang, Z. Yang, R. Wang, X. Li, D. Tao, Discriminative dictionary learning-based multiple component decomposition for detail-preserving noisy image fusion, IEEE Trans. Instrum. Meas., 69 (2020), 1082–1102. https://doi.org/10.1109/TIM.2019.2912239 doi: 10.1109/TIM.2019.2912239
    [31] H. Li, M. Yang, Z. Yu, Joint image fusion and super-resolution for enhanced visualization via semi-coupled discriminative dictionary learning and advantage embedding, Neurocomputing, 422 (2021), 62–84. https://doi.org/10.1016/j.neucom.2020.09.024 doi: 10.1016/j.neucom.2020.09.024
    [32] Q. Hu, S. Hu, F. Zhang, Multi-modality medical image fusion based on separable dictionary learning and Gabor filtering, Signal Process. Image Commun., 83 (2020), 115758. https://doi.org/10.1016/j.image.2019.115758 doi: 10.1016/j.image.2019.115758
    [33] J. Ma, H. Xu, J. Jiang, X. Mei, X. Zhang, DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion, IEEE Trans. Image Process., 29 (2020), 4980–4995. https://doi.org/10.1109/TIP.2020.2977573 doi: 10.1109/TIP.2020.2977573
    [34] H. Zhang, H. Xu, X. Tian, J. Jiang, J. Ma, Image fusion meets deep learning: A survey and perspective, Inf. Fusion, 76 (2021), 323–336. https://doi.org/10.1016/j.inffus.2021.06.008 doi: 10.1016/j.inffus.2021.06.008
    [35] K. Zhan, J. Shi, H. Wang, Y. Xie, Q. Li, Computational mechanisms of pulse-coupled neural networks: A comprehensive review, Arch. Computat. Methods Eng., 24 (2017), 573–588. https://doi.org/10.1007/s11831-016-9182-3 doi: 10.1007/s11831-016-9182-3
    [36] Y. Chen, S. Park, Y. Ma, R. Ala, A new automatic parameter setting method of a simplified PCNN for image segmentation, IEEE Trans. Neural Networks, 22 (2011). https://doi.org/10.1109/TNN.2011.2128880 doi: 10.1109/TNN.2011.2128880
    [37] G. Qu, D. Zhang, P. Yan, Information measure for performance of image fusion, Electron. Lett., 38 (2002), 313–315. https://doi.org/10.1049/EL:20020212 doi: 10.1049/el:20020212
    [38] C. S. Xydeas, V. Petrovic, Objective image fusion performance measure, Electron. Lett., 36 (2000), 308–309. https://doi.org/10.1049/el:20000267 doi: 10.1049/el:20000267
    [39] Y. Han, Y. Cai, Y. Cao, X. Xu, A new image fusion performance metric based on visual information fidelity, Inf. Fusion, 14 (2013), 127–135. https://doi.org/10.1016/j.inffus.2011.08.002 doi: 10.1016/j.inffus.2011.08.002
    [40] Y. Chen, R. S. Blum, A new automated quality assessment algorithm for image fusion, Image Vision Comput., 27 (2009), 1421–1432. https://doi.org/10.1016/j.imavis.2007.12.002 doi: 10.1016/j.imavis.2007.12.002
    [41] M. B. A. Haghighat, A. Aghagolzadeh, H. Seyedarabi, A non-reference image fusion metric based on mutual information of image features, Comput. Electr. Eng., 37 (2011), 744–756. https://doi.org/10.1016/j.compeleceng.2011.07.012 doi: 10.1016/j.compeleceng.2011.07.012
    [42] L. Zhang, H. Li, SR-SIM: A fast and high performance IQA index based on spectral residual, in 2012 19th IEEE International Conference on Image Processing, 19 (2012), 6467149. https://doi.org/10.1109/ICIP.2012.6467149
    [43] Z. Liu, E. Blasch, Z. Xue, J. Zhao, R. Laganiere, W. Wu, Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study, IEEE Trans. Pattern Anal. Mach. Intell., 34 (2012), 94–109. https://doi.org/10.1109/TPAMI.2011.109 doi: 10.1109/TPAMI.2011.109
    [44] Z. Zhu, Y. Chai, H. Yin, Y. Li, Z. Liu, A novel dictionary learning approach for multi-modality medical image fusion, Neurocomputing, 214 (2016), 471–482. https://doi.org/10.1016/j.neucom.2016.06.036 doi: 10.1016/j.neucom.2016.06.036
    [45] F. Zhou, X. Li, M. Zhou, Y. Chen, H. Tan, A new dictionary construction based multimodal medical image fusion framework, Entropy, 21 (2019), 1–20. https://doi.org/10.3390/e21030267 doi: 10.3390/e21030267
    [46] M. Kim, D. K. Han, H. Ko, Joint patch clustering-based dictionary learning for multimodal image fusion, Inf. Fusion, 27 (2016), 198–214. https://doi.org/10.1016/j.inffus.2015.03.003 doi: 10.1016/j.inffus.2015.03.003
    [47] C. He, Q. Liu, H. Li, H. Wang, Multimodal medical image fusion based on IHS and PCA, Procedia Eng., 7 (2010), 280–285. https://doi.org/10.1016/j.proeng.2010.11.045 doi: 10.1016/j.proeng.2010.11.045
    [48] Z. Xu, Medical image fusion using multi-level local extrema, Inf. Fusion, 19 (2014), 38–48. https://doi.org/10.1016/j.inffus.2013.01.001 doi: 10.1016/j.inffus.2013.01.001
    [49] J. Du, W. Li, B. Xiao, Anatomical-Functional image fusion by information of interest in local Laplacian filtering domain, IEEE Trans. Image Process., 26 (2017), 5855–5866. https://doi.org/10.1109/TIP.2017.2745202 doi: 10.1109/TIP.2017.2745202
    [50] J. Tang, Q. Sun, K. Agyepong, An image enhancement algorithm based on a new contrast measure in the wavelet domain for screening mammograms, in 2007 IEEE International Conference on Image Processing, 5 (2007), 16–19. https://doi.org/10.1109/ICIP.2007.4379757
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).