
Cardiovascular disease (CVD) is a leading cause of mortality worldwide, and it is of utmost importance to accurately assess the risk of cardiovascular disease for prevention and intervention purposes. In recent years, machine learning has shown significant advancements in the field of cardiovascular disease risk prediction. In this context, we propose a novel framework known as CVD-OCSCatBoost, designed for the precise prediction of cardiovascular disease risk and the assessment of various risk factors. The framework utilizes Lasso regression for feature selection and incorporates an optimized category-boosting tree (CatBoost) model. Furthermore, we propose the opposition-based learning cuckoo search (OCS) algorithm. By integrating OCS with the CatBoost model, our objective is to develop OCSCatBoost, an enhanced classifier offering improved accuracy and efficiency in predicting CVD. Extensive comparisons with popular algorithms like the particle swarm optimization (PSO) algorithm, the seagull optimization algorithm (SOA), the cuckoo search algorithm (CS), K-nearest-neighbor classification, decision tree, logistic regression, grid-search support vector machine (SVM), grid-search XGBoost, default CatBoost, and grid-search CatBoost validate the efficacy of the OCSCatBoost algorithm. The experimental results demonstrate that the OCSCatBoost model achieves superior performance compared to other models, with overall accuracy, recall, and AUC values of 73.67%, 72.17%, and 0.8024, respectively. These outcomes highlight the potential of CVD-OCSCatBoost for improving cardiovascular disease risk prediction.
Citation: Zhaobin Qiu, Ying Qiao, Wanyuan Shi, Xiaoqian Liu. A robust framework for enhancing cardiovascular disease risk prediction using an optimized category boosting model[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 2943-2969. doi: 10.3934/mbe.2024131
[1] Yanxin Li, Shangkun Liu, Jia Li, Weimin Zheng. Congestion tracking control of multi-bottleneck TCP networks with input-saturation and dead-zone. AIMS Mathematics, 2024, 9(5): 10935-10954. doi: 10.3934/math.2024535
[2] Hadil Alhazmi, Mohamed Kharrat. Echo state network-based adaptive control for nonstrict-feedback nonlinear systems with input dead-zone and external disturbance. AIMS Mathematics, 2024, 9(8): 20742-20762. doi: 10.3934/math.20241008
[3] Shihua Zhang, Xiaohui Qi, Sen Yang. A cascade dead-zone extended state observer for a class of systems with measurement noise. AIMS Mathematics, 2023, 8(6): 14300-14320. doi: 10.3934/math.2023732
[4] Xiaoling Liang, Chen Xu, Duansong Wang. Adaptive neural network control for marine surface vehicles platoon with input saturation and output constraints. AIMS Mathematics, 2020, 5(1): 587-602. doi: 10.3934/math.2020039
[5] Mohamed Kharrat, Moez Krichen, Loay Alkhalifa, Karim Gasmi. Neural networks-based adaptive command filter control for nonlinear systems with unknown backlash-like hysteresis and its application to single link robot manipulator. AIMS Mathematics, 2024, 9(1): 959-973. doi: 10.3934/math.2024048
[6] Xiaohang Su, Peng Liu, Haoran Jiang, Xinyu Yu. Neighbor event-triggered adaptive distributed control for multiagent systems with dead-zone inputs. AIMS Mathematics, 2024, 9(4): 10031-10049. doi: 10.3934/math.2024491
[7] Miao Xiao, Zhe Lin, Qian Jiang, Dingcheng Yang, Xiongfeng Deng. Neural network-based adaptive finite-time tracking control for multiple inputs uncertain nonlinear systems with positive odd integer powers and unknown multiple faults. AIMS Mathematics, 2025, 10(3): 4819-4841. doi: 10.3934/math.2025221
[8] Kairui Chen, Yongping Du, Shuyan Xia. Adaptive state observer event-triggered consensus control for multi-agent systems with actuator failures. AIMS Mathematics, 2024, 9(9): 25752-25775. doi: 10.3934/math.20241258
[9] Yihang Kong, Xinghui Zhang, Yaxin Huang, Ancai Zhang, Jianlong Qiu. Prescribed-time adaptive stabilization of high-order stochastic nonlinear systems with unmodeled dynamics and time-varying powers. AIMS Mathematics, 2024, 9(10): 28447-28471. doi: 10.3934/math.20241380
[10] Mohamed Kharrat, Hadil Alhazmi. Neural networks-based adaptive fault-tolerant control for a class of nonstrict-feedback nonlinear systems with actuator faults and input delay. AIMS Mathematics, 2024, 9(6): 13689-13711. doi: 10.3934/math.2024668
In the last few years, uncertainties have been widely studied in nonlinear systems, and they frequently appear together with constraints. Adaptive control was employed in [1], and the parametric uncertainty of nonlinear systems was addressed in [2]. In practical systems, owing to realistic needs and the difficulty of exact modeling, unknown continuous functions are approximated by fuzzy logic systems (FLSs) or neural networks (NNs) in [3,4,5,6,7]. Based on NNs or FLSs, adaptive control algorithms have been developed for nonlinear single-input single-output (SISO) systems [8], and the approach also applies to MIMO nonlinear systems with unknown functions [9] and to discrete-time systems. Besides, a multilayer NN estimator was first developed in [10] to improve the compensation accuracy of model-based feedforward control terms. However, the aforementioned results do not consider constraints.
As one of the most important factors restricting system performance, constraints exist extensively in actual systems, such as robotic manipulator systems and nonuniform gantry cranes. Constraints cannot be ignored; otherwise, they may damage the equipment and cause unexpected behavior. Thus, constraint control has become a significant branch of nonlinear control. As is well known, the barrier Lyapunov function (BLF) is an important tool for handling constrained problems. It has been widely applied to nonlinear systems with output constraints in [11] and with full state constraints in [12,13], and its effectiveness has been verified. NNs or FLSs were used to design adaptive controllers for nonlinear systems with constant constraints in [5]. Nevertheless, none of the above methods address time-varying constraints [14]. In this article, time-varying full state constraints are further studied. Adaptive controllers were designed for time-varying output constraints and time-varying full state constraints in [15], respectively. However, the above backstepping recursion methods ignore the feasibility conditions of the virtual controllers, namely, that the virtual controllers must remain within the given constraint bounds. In [16], a new coordinate transformation was introduced to completely remove this limitation.
The constraints in the above research are all imposed directly on the states, whether full state constraints or other types of constraints. Subsequent studies are not limited to state constraints. At present, a state transfer function is introduced for coordinate transformation in [17] to handle constraints indirectly, for example, by constraining the tracking error. Compared with the above-mentioned methods, the advantage of this approach is that it is independent of the initial tracking condition and is also suitable for asymmetric time-varying constraints, as studied in [15]. However, in all of these cases, the effect of the dead-zone on the constraints is ignored.
Among nonlinear inputs, the most common are dead-zones, saturation, time delays, and so on. One innovative approach is dead-zone compensation in motion control systems using adaptive fuzzy logic control. In this paper, the dead-zone nonlinear input is our focus. An existing dead-zone prevents the desired control results from being achieved, and the problems it causes are serious. For instance, if a robot servo system contains nonlinear links such as friction and an unknown dead-zone, it not only reduces the efficiency of the control system but may also lead to instability. In recent years, the study of dead-zones has become a focus of control research. In [18], for discrete-time plants with an unknown dead-zone, a new control structure with an adaptive dead-zone inverse was put forward. For this purpose, several adaptive control methods have been proposed, such as neural network control and adaptive fuzzy sliding mode control. Adaptive tracking of asymmetric dead-zone input nonlinear systems with uncertain parameters has also been studied. For dead-zones in multi-input multi-output nonlinear systems with lower triangular and asymmetric structures, a new control method has been proposed. To eliminate dead-zone effects, dead-zone compensation control has been implemented for precision instrument control.
In this paper, an adaptive neural network control method is proposed to achieve the control objective and to facilitate controller design. Compared with the above achievements, the proposed control scheme has the following advantages.
1) Compared with previous adaptive neural network control methods, this article considers a more complex case. The effects of delayed constraints and dead-zones on system performance are taken into account, which is more in line with the needs of practical systems.
2) A state transition function is introduced, and an appropriate time node is selected to ensure that the states end up within the constraint bounds; that is, the initial states may be out of bounds and are then driven into the bounds. Delayed constraints are not mentioned in [16], whereas they are considered in this paper. This provides convenience for error tracking, which has not been widely addressed in previous research on nonlinear adaptive control.
3) In this paper, based on the introduction of the BLF, the potential dead-zone in the system, which affects the stability of the system and increases the steady-state error, is handled. With the dead-zone slope m(t) known and the bounded term b(t) unknown, the difficult problem of controller design is solved.
Consider the following class of nonlinear strict-feedback systems with a dead-zone input:
\left\{ \begin{gathered} {{\dot x}_i} = {g_i}\left( {{{\bar x}_i}} \right){x_{i + 1}} + {f_i}\left( {{{\bar x}_i}} \right),\;i = 1, \ldots ,n - 1 \\ {{\dot x}_n} = {g_n}\left( {{{\bar x}_n}} \right)D\left( {u\left( t \right)} \right) + {f_n}\left( {{{\bar x}_n}} \right) \\ y = {x_1} \\ \end{gathered} \right. | (2.1)
where x = {\left[ {{x_1}, \ldots ,{x_n}} \right]^T} \in {R^n} with {\bar x_i} = {\left[ {{x_1}, \ldots ,{x_i}} \right]^T},\;i = 1, \ldots ,n; {x_i} \in R, D\left( {u\left( t \right)} \right) \in R and y \in R are the state variables, the input and the output of the system, respectively; {g_i}\left( {{{\bar x}_i}} \right) are unknown control coefficients and {f_i}\left( {{{\bar x}_i}} \right) are unknown smooth functions. The states are required to satisfy {x_i} \in \left( { - {{\underline k }_{ci}}\left( t \right),{{\bar k}_{ci}}\left( t \right)} \right) only after a delay: {x_i} is unconstrained when t \in \left[ {0,T} \right), whereas for t \in \left[ {T,\infty } \right), {x_i} must remain within the given bounds. To ensure the validity of the constraint, the value of T is crucial. Let - {\underline k _{ci}}\left( t \right) < {x_i}\left( t \right) < {\bar k_{ci}}\left( t \right), where {\underline k _{ci}}\left( t \right) and {\bar k_{ci}}\left( t \right) are given.
Remark 1: There are many factors that affect system performance, and constraints are among the most significant. To ensure that constraints are not violated during the control process, adaptive control strategies have been proposed in [19,20,21,22,23,24,25,26,27,28,29,30]. In this paper, to make sure the constraints are not violated, a transfer function is introduced for coordinate transformation. Since the constraints only take effect some time later, the system stability is improved.
A delayed constraint means that the constraint takes effect over a period of time and does not have to restrict the signal at all times. By designing an appropriate controller, the signal satisfies the constraint condition after a certain time.
All state variables of the system (2.1) are constrained in a compact set. D\left( {u\left( t \right)} \right) is a dead-zone defined as:
D\left( {u\left( t \right)} \right) = \left\{ \begin{gathered} {m_r}\left( {u\left( t \right) - {b_r}} \right),\;u\left( t \right) \geqslant {b_r} \\ 0,\; - {b_l} < u\left( t \right) < {b_r} \\ {m_l}\left( {u\left( t \right) + {b_l}} \right),\;u\left( t \right) \leqslant - {b_l} \\ \end{gathered} \right. | (2.2)
where u\left( t \right) is the dead-zone input, {m_r} and {m_l} stand for the right and left slopes of the dead-zone, respectively, and {b_r} and {b_l} represent the right and left break-points of the input nonlinearity. The detailed structure of the dead-zone is given in Figure 1.
The aforementioned model can be transformed into the following form:
D\left( {u\left( t \right)} \right) = m\left( t \right)u\left( t \right) + b\left( t \right) | (2.3)
where
m\left( t \right) = \left\{ \begin{gathered} {m_r},\;u\left( t \right) > 0 \\ {m_l},\;u\left( t \right) \leqslant 0 \\ \end{gathered} \right.
and
b\left( t \right) = \left\{ \begin{gathered} - {m_r}{b_r},\;u\left( t \right) \geqslant {b_r} \\ - m\left( t \right)u\left( t \right),\; - {b_l} < u\left( t \right) < {b_r} \\ {m_l}{b_l},\;u\left( t \right) \leqslant - {b_l} \\ \end{gathered} \right.
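To make the decomposition (2.3) concrete, the following Python sketch (an illustration only, not part of the original derivation) evaluates the dead-zone output D(u) of (2.2) and the equivalent pair (m(t), b(t)); the default slope and break-point values are borrowed from the later simulation example (5.2).

```python
def dead_zone(u, m_r=0.2, m_l=0.3, b_r=0.3, b_l=0.3):
    """Dead-zone output D(u) as in (2.2)."""
    if u >= b_r:
        return m_r * (u - b_r)
    if u <= -b_l:
        return m_l * (u + b_l)
    return 0.0

def dead_zone_decomposition(u, m_r=0.2, m_l=0.3, b_r=0.3, b_l=0.3):
    """Return (m, b) such that D(u) = m*u + b, following (2.3)."""
    m = m_r if u > 0 else m_l
    if u >= b_r:
        b = -m_r * b_r
    elif u <= -b_l:
        b = m_l * b_l
    else:
        b = -m * u          # cancels m*u so that D(u) = 0 inside the dead band
    return m, b

# quick consistency check of (2.2) versus (2.3)
for u in (-1.0, -0.2, 0.0, 0.2, 1.0):
    m, b = dead_zone_decomposition(u)
    assert abs(dead_zone(u) - (m * u + b)) < 1e-12
```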
Assumption 1. The function m\left( t \right) is known, b\left( t \right) is unknown and its upper bound is \bar b\left( t \right), namely, \left| {b\left( t \right)} \right| \leqslant \bar b\left( t \right).
The task is to design an adaptive controller u such that the system output y tracks a desired trajectory {y_d}\left( t \right), all the signals in the closed-loop system are bounded, and the full state constraints are not violated. It holds that - {\underline y _d}\left( t \right) \leqslant {y_d}\left( t \right) \leqslant {\bar y_d}\left( t \right), where {\underline y _d}\left( t \right) and {\bar y_d}\left( t \right) are continuous positive functions, with {\underline k _{c1}}\left( t \right) \geqslant {\underline y _d}\left( t \right) and {\bar k_{c1}}\left( t \right) > {\bar y_d}\left( t \right).
In this paper, owing to the approximation ability of radial basis function neural networks (RBFNNs), they are chosen to approximate unknown continuous functions.
Consider a continuous function h\left( z \right):{R^q} \to R; it can be expressed in the following form:
{h_{nn}}\left( z \right) = {\theta ^{*T}}S\left( z \right) | (3.1)
where the input variable z \in {\Omega _z} \subset {R^q}, {\theta ^*} = {\left[ {{\theta _1},{\theta _2}, \ldots ,{\theta _l}} \right]^T} \in {R^l} is the ideal weight vector and l > 1 is the NN node number. In addition, S\left( z \right) = {\left[ {{s_1}\left( z \right), \ldots ,{s_l}\left( z \right)} \right]^T}, where each basis function is often chosen as a Gaussian function of the following form:
{s_i}\left( z \right) = \exp \left[ { - \frac{{{{\left( {z - {\mu _i}} \right)}^T}\left( {z - {\mu _i}} \right)}}{{\eta _i^2}}} \right],\;i = 1,2, \ldots ,l
where {\mu _i} = \left[ {{\mu _{i1}},{\mu _{i2}}, \ldots ,{\mu _{iq}}} \right] is the center of the receptive field and {\eta _i} is the width of the Gaussian function.
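As an illustration of (3.1) only, the sketch below evaluates the Gaussian regressor S(z) and the network output {\theta ^{*T}}S\left( z \right); the particular centers, widths and zero weights in the example are placeholder assumptions, not values used in the paper.

```python
import numpy as np

def rbf_basis(z, centers, widths):
    """Gaussian basis S(z) with s_i(z) = exp(-||z - mu_i||^2 / eta_i^2)."""
    z = np.atleast_1d(z)
    diff = centers - z              # (l, q) array of mu_i - z (sign irrelevant after squaring)
    return np.exp(-np.sum(diff**2, axis=1) / widths**2)

def rbf_nn(z, theta, centers, widths):
    """NN output theta^T S(z) as in (3.1)."""
    return theta @ rbf_basis(z, centers, widths)

# example: l = 10 nodes with centers evenly spaced on [-1, 1], width eta_i = 1
centers = np.linspace(-1.0, 1.0, 10).reshape(-1, 1)   # q = 1 input dimension
widths = np.ones(10)
theta = np.zeros(10)                                   # weights to be adapted online
print(rbf_nn(0.3, theta, centers, widths))
```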
According to the approximation property of NNs, any continuous unknown function can be represented as
h\left( z \right) = {\theta ^{*T}}S\left( z \right) + \varepsilon \left( z \right) | (3.2)
where \varepsilon \left( z \right) is referred to as the least approximation error and \left| {\varepsilon \left( z \right)} \right| \leqslant \varepsilon with \varepsilon > 0 for any z \in {\Omega _z}. The ideal weight vector {\theta ^*} has the following representation:
{\theta ^*} \triangleq \arg \mathop {\min }\limits_{\theta \in {R^l}} \left\{ {\mathop {\sup }\limits_{z \in {\Omega _z}} \left| {h\left( z \right) - {\theta ^T}S\left( z \right)} \right|} \right\}
Assumption 2. The functions {g_i}\left( {{{\bar x}_i}} \right) are unknown, time-varying and bounded away from zero. Their upper and lower bounds {\bar g_i} and {\underline g _i} are also unknown, and satisfy 0 < {\underline g _i} \leqslant \left| {{g_i}\left( {{{\bar x}_i}} \right)} \right| \leqslant {\bar g_i}, \forall {\bar x_i} \in \Omega \subset {R^n}.
A new asymmetric Barrier Lyapunov function is introduced
V = \frac{{{\zeta ^2}\left( t \right)}}{{\left( {{F_1}\left( t \right) + \zeta \left( t \right)} \right)\left( {{F_2}\left( t \right) - \zeta \left( t \right)} \right)}} | (3.3)
where {F_1}\left( t \right) and {F_2}\left( t \right) are positive functions and \zeta \left( t \right) will be defined later. Note that V is valid on the interval - {F_1}\left( t \right) < \zeta \left( t \right) < {F_2}\left( t \right).
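A minimal numerical illustration of (3.3), assuming placeholder bounds F_1 = 0.5 and F_2 = 1.0: V stays small near \zeta = 0 and grows without bound as \zeta approaches either end of the interval, which is what makes it a barrier function.

```python
def asymmetric_blf(zeta, F1, F2):
    """Asymmetric BLF (3.3); finite only for -F1 < zeta < F2 and
    growing without bound as zeta approaches either boundary."""
    assert -F1 < zeta < F2, "BLF is only defined inside (-F1, F2)"
    return zeta**2 / ((F1 + zeta) * (F2 - zeta))

# V is small near zeta = 0 and blows up near the asymmetric bounds
for zeta in (-0.49, -0.25, 0.0, 0.5, 0.99):
    print(zeta, asymmetric_blf(zeta, F1=0.5, F2=1.0))
```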
Then, the transfer function is introduced to better constrain the states. The function is defined as
{\zeta _i}\left( t \right) = \left\{ \begin{gathered} 0,\;t = 0 \\ \tau \left( t \right){z_i}\left( t \right),\;0 < t < T \\ {z_i}\left( t \right),\;t \geqslant T \\ \end{gathered} \right.\;\left( {i = 1, \ldots ,n} \right) | (3.4)
where
\tau \left( t \right) = \left\{ \begin{gathered} 1 - {\left( {\frac{{T - t}}{T}} \right)^{n + 2}},\;0 \leqslant t < T \\ 1,\;t \geqslant T \\ \end{gathered} \right. | (3.5)
The effect of (3.4) is to handle the problem of uncertain initial conditions. Moreover, \tau \left( t \right) is a continuous and differentiable function.
Remark 2: T is a prescribed time node. Two crucial features of \tau \left( t \right) are \tau \left( 0 \right) = 0 and \tau \left( t \right) = 1 for t \geqslant T. These two properties play an important role in the conversion of initial values, that is, they transform a nonzero initial value into a zero initial value. In the end, the transformed signals converge to a finite bound.
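The behaviour of the shifting mechanism can be checked numerically. The sketch below evaluates \tau \left( t \right) of (3.5) and {\zeta _i}\left( t \right) of (3.4) along a hypothetical error trajectory z_i(t) = 0.2cos(3t) (chosen only for illustration), with T = 1 and n = 2 as in the later simulation.

```python
import numpy as np

def tau(t, T=1.0, n=2):
    """Shifting function (3.5): tau(0) = 0, tau(t) = 1 for t >= T."""
    return 1.0 - ((T - t) / T) ** (n + 2) if t < T else 1.0

def shifted_error(t, z, T=1.0, n=2):
    """Shifted error zeta_i(t) of (3.4): tau(t)*z_i(t) before T, z_i(t) after."""
    return tau(t, T, n) * z if t < T else z

# illustrative (hypothetical) error trajectory: z(t) = 0.2*cos(3 t)
for t in np.linspace(0.0, 2.0, 9):
    z = 0.2 * np.cos(3 * t)
    print(f"t={t:.2f}  tau={tau(t):.3f}  zeta={shifted_error(t, z):+.4f}")
```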
In the following study, the adaptive control method and backstepping recursion are used to design the virtual controllers {\alpha _i}, the actual controller u and the adaptive laws for {\hat w_i}. The adaptive control process has a total of n steps, and the following coordinate transformation is introduced:
{z_i} = {x_i} - {\alpha _{i - 1}}\;\left( {i = 2, \ldots ,n} \right) | (4.1)
In step 1, the virtual controller {\alpha _1} and the adaptive law for {\hat w_1} are defined as follows:
{\alpha _1} = - {k_1}{z_1} - {\psi _1} - {\hat w_1}{P_1} | (4.2)
{\dot {\hat w}_1} = {\lambda _1}{G_1}{\zeta _1}\tau {P_1} - {\delta _1}{\hat w_1} | (4.3)
where {k_1} > 0, {\delta _1} > 0 and {\lambda _1} > 0 are constants and {\hat w_1} stands for the adaptive parameter. In addition, {\psi _1}, {P_1} and {G_1} will be defined later.
In step i\;\left( {2 \leqslant i \leqslant n - 1} \right), the virtual controller {\alpha _i} and the adaptive law for {\hat w_i} are defined as follows:
{\alpha _i} = - {k_i}{z_i} - {\psi _i} - {\hat w_i}{P_i} | (4.4)
{\dot {\hat w}_i} = {\lambda _i}{G_i}{\zeta _i}\tau {P_i} - {\delta _i}{\hat w_i} | (4.5)
where {k_i} > 0, {\delta _i} > 0 and {\lambda _i} > 0 are constants and {\hat w_i} stands for the adaptive parameter. In addition, {\psi _i}, {P_i} and {G_i} will be defined later.
In the last step, the actual controller u and the adaptive law for {\hat w_n} are defined as follows:
u = \frac{1}{m}\left( { - {k_n}{z_n} - {\psi _n} - {\hat w_n}{P_n}} \right) | (4.6)
{\dot {\hat w}_n} = {\lambda _n}{G_n}{\zeta _n}\tau {P_n} - {\delta _n}{\hat w_n} | (4.7)
where {k_n} > 0, {\delta _n} > 0 and {\lambda _n} > 0 are constants and {\hat w_n} stands for the adaptive parameter. In addition, {\psi _n}, {P_n} and {G_n} will be defined later.
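Putting these pieces together, one generic design step can be sketched as follows; the expressions \psi _i = {G_i}{\zeta _i}\tau, {P_i} = {G_i}{\zeta _i}\tau S_i^TS_i and the form of {G_i} are taken from the appendix, while every numerical argument in the example call is a placeholder.

```python
import numpy as np

def step_i(z, zeta, tau_t, F1, F2, k, w_hat, lam, delta, S):
    """One generic backstepping step (4.4)-(4.5).

    S is the regressor S_i(z_i); psi_i = G_i*zeta_i*tau and
    P_i = G_i*zeta_i*tau*S^T S follow the appendix definitions.
    """
    G = (2*F1*F2 - F1*zeta + F2*zeta) / ((F1 + zeta) * (F2 - zeta)) ** 2
    psi = G * zeta * tau_t
    P = G * zeta * tau_t * float(S @ S)
    alpha = -k * z - psi - w_hat * P                          # virtual control (4.4)
    w_hat_dot = lam * G * zeta * tau_t * P - delta * w_hat    # adaptive law (4.5)
    return alpha, w_hat_dot

# placeholder values just to exercise the formulas
S = np.ones(10) * 0.1
print(step_i(z=0.05, zeta=0.04, tau_t=0.8, F1=0.5, F2=0.6,
             k=10.0, w_hat=0.2, lam=0.01, delta=0.1, S=S))
```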
Let us consider the tracking error {z_1} = {x_1} - {y_d}. From the first equation of (2.1), we get the derivative of {z_1}:
{\dot z_1} = {g_1}{x_2} + {f_1} - {\dot y_d} = {g_1}\left( {{z_2} + {\alpha _1}} \right) + {f_1} - {\dot y_d} | (4.8)
From (3.4), we obtain
{\dot \zeta _1} = \dot \tau {z_1} + \tau {g_1}\left( {{z_2} + {\alpha _1}} \right) + \tau \left( {{f_1} - {{\dot y}_d}} \right) = {g_1}\tau {\alpha _1} + {g_1}{\zeta _2} + \dot \tau {z_1} + \tau \left( {{f_1} - {{\dot y}_d}} \right)
Remark 3: Here we only consider 0 < t < T, so we take {\zeta _i}\left( t \right) = \tau \left( t \right){z_i}\left( t \right). The states are constrained indirectly through the constraints on the transformed errors, as will be discussed below.
From {z_i} = {x_i} - {\alpha _{i - 1}}\;\left( {i = 2, \ldots ,n} \right), we get
{\dot z_i} = {g_i}{x_{i + 1}} + {f_i} - {\dot \alpha _{i - 1}} = {g_i}{x_{i + 1}} + {f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{i - 1}} = {g_i}\left( {{z_{i + 1}} + {\alpha _i}} \right) + {f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{i - 1}} | (4.9)
where
{\dot \alpha _{i - 1}} = \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} + \Delta {\alpha _{i - 1}} | (4.10)
with
\Delta {\alpha _{i - 1}} = \sum\limits_{j = 0}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial y_d^{\left( j \right)}}}y_d^{\left( {j + 1} \right)}} + \sum\limits_{j = 1}^{i - 1} {\sum\limits_{k = 0}^{i - j} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial F_{j2}^{\left( k \right)}}}F_{j2}^{\left( {k + 1} \right)}} } + \sum\limits_{j = 0}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {\tau ^{\left( j \right)}}}}{\tau ^{\left( {j + 1} \right)}}} + \sum\limits_{j = 1}^{i - 1} {\sum\limits_{k = 0}^{i - j} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial F_{j1}^{\left( k \right)}}}F_{j1}^{\left( {k + 1} \right)}} }
From (3.4), we obtain the following dynamics
{\dot \zeta _i} = \dot \tau {z_i} + \tau {g_i}\left( {{z_{i + 1}} + {\alpha _i}} \right) + \tau \left( {{f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{i - 1}}} \right) = \dot \tau {z_i} + {g_i}{\zeta _{i + 1}} + {g_i}\tau {\alpha _i} + \tau \left( {{f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{i - 1}}} \right)
Substituting (2.3) into the derivative of {z_n} = {x_n} - {\alpha _{n - 1}}, one deduces that
{\dot z_n} = {g_n}\left( {mu + b} \right) + {f_n} - \sum\limits_{j = 1}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{n - 1}} | (4.11)
where
{\dot \alpha _{n - 1}} = \sum\limits_{j = 1}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} + \Delta {\alpha _{n - 1}} | (4.12)
with
\Delta {\alpha _{n - 1}} = \sum\limits_{j = 0}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial y_d^{\left( j \right)}}}y_d^{\left( {j + 1} \right)}} + \sum\limits_{j = 1}^{n - 1} {\sum\limits_{k = 0}^{n - j} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial F_{j2}^{\left( k \right)}}}F_{j2}^{\left( {k + 1} \right)}} } + \sum\limits_{j = 1}^{n - 1} {\sum\limits_{k = 0}^{n - j} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial F_{j1}^{\left( k \right)}}}F_{j1}^{\left( {k + 1} \right)}} } + \sum\limits_{j = 0}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {\tau ^{\left( j \right)}}}}{\tau ^{\left( {j + 1} \right)}}}
Then, we obtain the following dynamics:
{\dot \zeta _n} = \dot \tau {z_n} + \tau \left( {{f_n} - \sum\limits_{j = 1}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_j}}}\left( {{g_j}{x_{j + 1}} + {f_j}} \right)} - \Delta {\alpha _{n - 1}}} \right) + \tau {g_n}\left( {mu + b} \right)
Please see the appendix for the detailed derivation process.
Theorem 1: Consider the class of nonlinear systems (2.1) with the dead-zone (2.2) under Assumptions 1–2, with the virtual controllers designed in (4.2) and (4.4), the actual controller designed in (4.6) and the adaptation laws constructed in (4.3), (4.5) and (4.7). If the design parameters are chosen appropriately, then all signals in the closed-loop system are uniformly ultimately bounded (UUB), and the system output y tracks the desired trajectory {y_d}\left( t \right).
Proof. It is obvious that {\tilde w_j}{\hat w_j} = {\tilde w_j}\left( {{w_j} - {{\tilde w}_j}} \right) \leqslant - \frac{{\tilde w_j^2}}{2} + \frac{{w_j^2}}{2}, and we can derive that
2{F_{j1}}{F_{j2}} - {F_{j1}}{\xi _j} + {F_{j2}}{\xi _j} \geqslant {F_{j1}}{F_{j2}} - {F_{j1}}{\xi _j} + {F_{j2}}{\xi _j} - \xi _j^2 = \left( {{F_{j1}} + {\xi _j}} \right)\left( {{F_{j2}} - {\xi _j}} \right) > 0 | (4.13)
Then, we get
{G_j}\xi _j^2 = \frac{{2{F_{j1}}{F_{j2}} - {F_{j1}}{\xi _j} + {F_{j2}}{\xi _j}}}{{{{\left[ {\left( {{F_{j1}} + {\xi _j}} \right)\left( {{F_{j2}} - {\xi _j}} \right)} \right]}^2}}}\xi _j^2 \geqslant \frac{{\xi _j^2}}{{\left( {{F_{j1}} + {\xi _j}} \right)\left( {{F_{j2}} - {\xi _j}} \right)}} | (4.14)
\frac{{2{F_{j1}}{F_{j2}} - {F_{j1}}{\xi _j} + {F_{j2}}{\xi _j}}}{{\left( {{F_{j1}} + {\xi _j}} \right)\left( {{F_{j2}} - {\xi _j}} \right)}} \geqslant 1 | (4.15)
Hence, we have the following inequality:
{\dot V_n} \leqslant - \sum\limits_{j = 1}^n {{k_j}{\underline g _j}\frac{{\xi _j^2}}{{\left( {{F_{j1}} + {\xi _j}} \right)\left( {{F_{j2}} - {\xi _j}} \right)}}} - \sum\limits_{j = 1}^n {\frac{{{\underline g _j}{\delta _j}}}{{2{\lambda _j}}}\tilde w_j^2} + \sum\limits_{j = 1}^n {\frac{{{\underline g _j}{\delta _j}}}{{2{\lambda _j}}}w_j^2} + \sum\limits_{j = 1}^n {{\Xi _j}} \leqslant - \rho {V_n} + c | (4.16)
where \rho = \min \left\{ {{k_j}{\underline g _j},{\delta _j}} \right\} > 0 and c = \sum\limits_{j = 1}^n {\frac{{{\underline g _j}{\delta _j}}}{{2{\lambda _j}}}w_j^2} + \sum\limits_{j = 1}^n {{\Xi _j}}.
Besides, from (4.16), we get
0 \leqslant {V_n}\left( t \right) \leqslant \frac{c}{\rho } + \left( {{V_n}\left( 0 \right) - \frac{c}{\rho }} \right){e^{ - \rho t}} | (4.17)
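For completeness, (4.17) follows from (4.16) by a standard comparison argument. Multiplying {\dot V_n} + \rho {V_n} \leqslant c by {e^{\rho t}} and integrating over \left[ {0,t} \right] gives
\frac{d}{{dt}}\left( {{e^{\rho t}}{V_n}\left( t \right)} \right) = {e^{\rho t}}\left( {{{\dot V}_n}\left( t \right) + \rho {V_n}\left( t \right)} \right) \leqslant c{e^{\rho t}},\quad {e^{\rho t}}{V_n}\left( t \right) - {V_n}\left( 0 \right) \leqslant \frac{c}{\rho }\left( {{e^{\rho t}} - 1} \right)
and dividing by {e^{\rho t}} yields the bound in (4.17).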
Hence, we obtain that {V_n}\left( t \right) \in {l_\infty }, which implies that {\hat w_1} \in {l_\infty } and {V_{i0}}\left( t \right) \in {l_\infty }. The lower bound of {\zeta _i}\left( t \right) is - {F_{i1}}\left( t \right) and the upper bound is {F_{i2}}\left( t \right), so - {F_{i1}}\left( t \right) < {\zeta _i}\left( t \right) < {F_{i2}}\left( t \right). Based on the shifting function \tau \left( t \right), when t \in \left[ {0,T} \right), \tau \left( t \right) is strictly increasing and \tau \left( t \right) = 0 if and only if t = 0; thus {\zeta _i}\left( 0 \right) = \tau \left( 0 \right){z_i}\left( 0 \right) = 0 and - {F_{i1}}\left( 0 \right) < {\zeta _i}\left( 0 \right) < {F_{i2}}\left( 0 \right). When 0 < t < T, \tau \left( t \right) is nonzero; from (3.4), {z_i}\left( t \right) = {\zeta _i}\left( t \right)/\tau \left( t \right), so {z_i}\left( t \right) is bounded because {\zeta _i}\left( t \right) is bounded. When t \geqslant T, \tau \left( t \right) = 1 and {z_i}\left( t \right) = {\zeta _i}\left( t \right), so - {F_{i1}}\left( t \right) < {z_i}\left( t \right) < {F_{i2}}\left( t \right). In conclusion, {z_i}\left( t \right) is bounded in all cases. According to the definitions of {F_{11}} and {F_{12}}, the functions {F_{11}}, {F_{12}}, {\dot F_{11}} and {\dot F_{12}} are bounded. From {z_1} = {x_1} - {y_d}, {z_1} is bounded and {x_1} is also bounded. Because {F_{11}}, {F_{12}}, {\dot F_{11}}, {\dot F_{12}}, {\hat w_1}, {y_d}, {\dot y_d}, \tau and \dot \tau are bounded, it follows that the virtual controller {\alpha _1} and the adaptive law {\dot {\hat w}_1} are bounded. Then, from the definitions of {F_{21}} and {F_{22}}, the functions {F_{21}}, {F_{22}}, {\dot F_{21}} and {\dot F_{22}} are bounded. From {z_2} = {x_2} - {\alpha _1}, {z_2} is bounded and {x_2} is also bounded. In the same way, {\alpha _2} and {\dot {\hat w}_2} are also bounded. Continuing this derivation, we can obtain that {x_i}\;\left( {i = 3, \ldots ,n} \right), {\alpha _i}\;\left( {i = 3, \ldots ,n - 1} \right), {\dot {\hat w}_i}\;\left( {i = 3, \ldots ,n} \right) and the control input u are bounded. In a word, all signals in the closed-loop system are bounded. The proof is completed.
In this section, to verify the effectiveness of the proposed method, a simulation example is given. Consider the following nonlinear system:
\left\{ \begin{gathered} {{\dot x}_1} = {x_2} \\ {{\dot x}_2} = - \frac{{mgl\sin \left( {{x_1}} \right)}}{{2M}} - \frac{{B{x_2}}}{M} + \frac{{D\left( {u\left( t \right)} \right)}}{M} \\ y = {x_1} \\ \end{gathered} \right. | (5.1)
where y and u are the output and the input of the system, respectively. Meanwhile, D\left( {u\left( t \right)} \right) is the output of the dead-zone, described as
D\left( {u\left( t \right)} \right) = \left\{ \begin{gathered} 0.2\left( {u - 0.3} \right),\;u > 0.3 \\ 0,\; - 0.3 \leqslant u \leqslant 0.3 \\ 0.3\left( {u + 0.3} \right),\;u < - 0.3 \\ \end{gathered} \right. | (5.2)
and the states {x_1} and {x_2} are constrained by - {\underline k _{c1}}\left( t \right) < {x_1}\left( t \right) < {\bar k_{c1}}\left( t \right) and - {\underline k _{c2}}\left( t \right) < {x_2}\left( t \right) < {\bar k_{c2}}\left( t \right) with {\bar k_{c1}}\left( t \right) = 0.07 + 0.01\sin \left( {2t} \right), {\underline k _{c1}}\left( t \right) = 0.11 + 0.05\sin \left( {4t} \right), {\bar k_{c2}}\left( t \right) = 0.3\sin \left( {3t} \right) + 0.6, {\underline k _{c2}}\left( t \right) = 0.4\sin \left( {2t} \right) + 0.72. The desired trajectory is chosen as {y_d} = 0.05\cos \left( {6t} \right). The initial states are chosen as {x_1}\left( 0 \right) = 0.16, {x_2}\left( 0 \right) = 0.001, {\hat w_1}\left( 0 \right) = 0.6, {\hat w_2}\left( 0 \right) = 0.279. The controllers and adaptive laws of the simulation system are chosen as follows:
{\alpha _1} = - {k_1}{z_1} - {G_1}{\zeta _1}\tau - {\hat w_1}{G_1}{\zeta _1}\tau S_1^T\left( {{z_1}} \right){S_1}\left( {{z_1}} \right)
u = \frac{1}{m}\left( { - {k_2}{z_2} - {G_2}{\zeta _2}\tau - {\hat w_2}{G_2}{\zeta _2}\tau S_2^T\left( {{z_2}} \right){S_2}\left( {{z_2}} \right)} \right)
{\dot {\hat w}_1} = {\lambda _1}{G_1}{\zeta _1}\tau {P_1} - {\delta _1}{\hat w_1}
{\dot {\hat w}_2} = {\lambda _2}{G_2}{\zeta _2}\tau {P_2} - {\delta _2}{\hat w_2}
In the simulation, the first NN includes 10 nodes with the centers {\mu _1} evenly spaced on \left[ { - 1,1} \right] \times \left[ { - 1,1} \right] \times \left[ { - 1,1} \right] with width {\eta _1} = 1, and the second NN includes 16 nodes with the centers {\mu _2} evenly spaced on \left[ { - 3,3} \right] \times \left[ { - 3,3} \right] \times \left[ { - 3,3} \right] \times \left[ { - 3,3} \right] \times \left[ { - 3,3} \right] with width {\eta _2} = 1. The parameters of the simulation system are given as m = 1 kg, g = 9.8 m/s^2, l = 1 m, M = 0.5 kg·m^2, B = 1 N·m·s. The control parameters are selected as {k_1} = 29, {k_2} = 44, {\lambda _1} = 0.01, {\lambda _2} = 0.01, {\delta _1} = 0.1, {\delta _2} = 0.5. The delay constraint time is T = 1: {x_1} and {x_2} may be out of bounds when t \in \left( {0,1} \right), while for t \geqslant 1, {x_1} and {x_2} are expected to lie completely within the bounds. The delay constraint time is selected to obtain the minimum tracking error and the best tracking performance. Besides, according to the desired trajectory, we obtain {F_{11}} = 0.01\sin \left( {2t} \right) - 1.93, {F_{12}} = 0.05\sin \left( {4t} \right) - 1.89, {F_{21}} = 0.222 + 0.1\sin \left( {5t} \right) and {F_{22}} = 0.25 + 0.05\sin \left( {10t} \right).
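For readers who wish to reproduce the setup, the following skeleton integrates the closed loop of (5.1) with the controllers above by a simple forward-Euler scheme. Several details are assumptions of this sketch (one-dimensional NN inputs z_1 and z_2, the slope m in u resolved from the sign of the bracketed term, and a short demonstration horizon), so it is not claimed to reproduce Figures 2–7.

```python
import numpy as np

# plant parameters of (5.1) and controller parameters from the text
m_, g, l, M, B = 1.0, 9.8, 1.0, 0.5, 1.0
k1, k2 = 29.0, 44.0
lam1, lam2, d1, d2, T = 0.01, 0.01, 0.1, 0.5, 1.0
mr, ml = 0.2, 0.3                                  # dead-zone slopes of (5.2)
c1 = np.linspace(-1, 1, 10)                        # NN centers (1-D layout assumed)
c2 = np.linspace(-3, 3, 16)

def dz(u):                                         # dead-zone (5.2)
    return 0.2*(u-0.3) if u > 0.3 else (0.3*(u+0.3) if u < -0.3 else 0.0)

def tau(t):                                        # shifting function (3.5), n = 2
    return 1.0 - ((T-t)/T)**4 if t < T else 1.0

def G(zeta, F1, F2):                               # gain G_i from the appendix
    return (2*F1*F2 - F1*zeta + F2*zeta) / ((F1+zeta)*(F2-zeta))**2

x1, x2, w1, w2 = 0.16, 0.001, 0.6, 0.279           # initial conditions
dt, t_final = 1e-4, 0.01                           # short demo horizon; extend to study tracking
for step in range(int(t_final/dt)):
    t = step*dt
    yd = 0.05*np.cos(6*t)
    F11, F12 = 0.01*np.sin(2*t) - 1.93, 0.05*np.sin(4*t) - 1.89
    F21, F22 = 0.222 + 0.1*np.sin(5*t), 0.25 + 0.05*np.sin(10*t)
    z1 = x1 - yd
    zeta1 = tau(t)*z1
    S1 = np.exp(-(c1 - z1)**2)                     # eta_1 = 1
    G1 = G(zeta1, F11, F12)
    alpha1 = -k1*z1 - G1*zeta1*tau(t) - w1*G1*zeta1*tau(t)*(S1 @ S1)
    z2 = x2 - alpha1
    zeta2 = tau(t)*z2
    S2 = np.exp(-(c2 - z2)**2)                     # eta_2 = 1
    G2 = G(zeta2, F21, F22)
    bracket = -k2*z2 - G2*zeta2*tau(t) - w2*G2*zeta2*tau(t)*(S2 @ S2)
    u = bracket / (mr if bracket > 0 else ml)      # u = (1/m)(...), m(t) from sign(u)
    # adaptive laws with P_i = G_i*zeta_i*tau*S_i^T S_i
    P1 = G1*zeta1*tau(t)*(S1 @ S1)
    P2 = G2*zeta2*tau(t)*(S2 @ S2)
    w1 += dt*(lam1*G1*zeta1*tau(t)*P1 - d1*w1)
    w2 += dt*(lam2*G2*zeta2*tau(t)*P2 - d2*w2)
    # plant (5.1), forward-Euler integration
    x1, x2 = x1 + dt*x2, x2 + dt*(-m_*g*l*np.sin(x1)/(2*M) - B*x2/M + dz(u)/M)
print(f"t={t_final}: x1={x1:.4f}, x2={x2:.4f}, w1={w1:.4f}, w2={w2:.4f}")
```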
The simulation results are given in Figures 2–7. Figure 2 shows the trajectory yd, output y and constrained intervals with a good tracking performance. Figure 3 shows the trajectory of error z1. The trajectory of the dead-zone D(u) is given in Figure 4. The trajectory of state x2 is displayed in Figure 5. The trajectory of error z2 and adaptive laws are shown in Figure 6 and Figure 7, respectively. Simulation results prove that all signals are bounded.
An adaptive neural network tracking control method has been introduced for strict-feedback nonlinear systems with unknown dead-zones and full state constraints. The parameters of the dead-zone are unknown but bounded. By combining the backstepping technique with neural networks, it is ensured that the state constraints are not violated, and the feasibility of the control algorithm is proved. In the meantime, all signals in the closed-loop system are UUB. In this paper, the delayed constraint and the BLF are combined; in other words, the constraint takes effect after a period of time rather than from the beginning. Furthermore, the tracking error converges to a small neighborhood of zero. In the end, the simulation results verify the effectiveness of the design. We will further investigate the control performance under different delay times, such as time-varying delays, which will enrich the applicable range of the proposed control method. Besides, since reliability is an interesting topic and has been well discussed in [23,24,25,26,27], it will be taken into account in our future work.
Step 1: In order to achieve the desired control objective, the following Barrier Lyapunov function is chosen
{V_{10}} = \frac{{\zeta _1^2\left( t \right)}}{{\left( {{F_{11}}\left( t \right) + {\zeta _1}\left( t \right)} \right)\left( {{F_{12}}\left( t \right) - {\zeta _1}\left( t \right)} \right)}} |
where {F_{11}}\left( t \right) and {F_{12}}\left( t \right) are time-varying continuous functions whose definitions will be given later. The scope of {\zeta _1}\left( t \right) is - {F_{11}}\left( t \right) < {\zeta _1}\left( t \right) < {F_{12}}\left( t \right). The time-varying functions {F_{11}}\left( t \right) and {F_{12}}\left( t \right) are chosen such that
{F_{11}}\left( t \right) = {\underline k_{c1}}\left( t \right) - n{\underline y _{d}}, \;{F_{12}}\left( t \right) = {\bar k_{c1}}\left( t \right) - n{\bar y_d} |
where n is a positive constant.
Consider the Barrier Lyapunov function as
{V_1} = {V_{10}} + \frac{{\underline g _1}}{{2{\lambda _1}}}\tilde w_1^2 | (7.1)
where {\lambda _1} > 0 is a constant, {w_1} = {\max _{k \in K}}\left\{ {\left. {{{\left\| {\theta _{K1}^*} \right\|}^2}} \right\}} \right., {\tilde w_1} = {w_1} - {\hat w_1} stands for the estimation error, with {\hat w_1} being the estimation of {w_1}. Based on (3.4), we get the time derivative of {V_1}
\begin{gathered} {{\dot V}_1} = {G_1}{\zeta _1}\left( {{{\dot \zeta }_1} + {N_1}{\zeta _1}} \right) - \frac{{\underline g _1}}{{{\lambda _1}}}{{\tilde w}_1}{{\dot {\hat w}}_1} \\ = {G_1}{\zeta _1}[{g_1}\tau {\alpha _1} + {g_1}{\zeta _2} + \dot \tau {z_1} + \tau \left( {{f_1} - {{\dot y}_d}} \right) + {\zeta _1}{N_1}] - \frac{{\underline g _1}}{{{\lambda _1}}}{{\tilde w}_1}{{\dot {\hat w}}_1} \\ \end{gathered} | (7.2) |
where
{G_1} = \frac{{2{F_{11}}{F_{12}} - {F_{11}}{\zeta _1} + {F_{12}}{\zeta _1}}}{{{{\left[ {\left( {{F_{11}} + {\zeta _1}} \right)\left( {{F_{12}} - {\zeta _1}} \right)} \right]}^2}}} |
and
{N_1} = \frac{{ - {{\dot F}_{11}}{F_{12}} - {F_{11}}{{\dot F}_{12}} + \left( {{{\dot F}_{11}} - {{\dot F}_{12}}} \right){\zeta _1}}}{{2{F_{11}}{F_{12}} - {F_{11}}{\zeta _1} + {F_{12}}{\zeta _1}}}. |
In this way, the design of the controller and the adaptive law is simplified.
Combining Assumption 2 with Young's inequality, we get
{G_1}{g_1}{\zeta _1}{\zeta _2} \leqslant {\underline g _2}G_1^2\zeta _1^2\zeta _2^2 + \frac{{g_1^2}}{{4{\underline g _2}}} | (7.3) |
{G_1}{\zeta _1}\dot \tau {z_1} \leqslant {\underline g _1}G_1^2\zeta _1^2{\dot \tau ^2}z_1^2 + \frac{1}{4{{\underline g _1}}} | (7.4) |
{G_1}\zeta _1^2{N_1} \leqslant {\underline g _1}G_1^2\zeta _1^4N_1^2 + \frac{1}{4{{\underline g _1}}} | (7.5) |
Substituting (7.3), (7.4) and (7.5) into (7.2) leads to
{\dot V_1} \leqslant {G_1}{\zeta _1}\tau ({g_1}{\alpha _1} + {\underline g _1}{G_1}{\dot \tau ^2}z_1^3 + {\underline g _1}{G_1}\zeta _1^2N_1^2{z_1} + {f_1} - {\dot y_d}) + {\underline g _2}G_1^2\zeta _1^2\zeta _2^2 - \frac{{\underline g _1}}{{{\lambda _1}}}{\tilde w_1}{\dot {\hat w}_1} + \frac{{g_1^2}}{{4{\underline g _2}}} + \frac{1}{2{{\underline g _1}}} | (7.6) |
Let
{h_1}({z_1}) = {\underline g _1}{G_1}{\dot \tau ^2}z_1^3 + {\underline g _1}{G_1}\zeta _1^2N_1^2{z_1} + {f_1} - {\dot y_d} | (7.7) |
The unknown continuous function {h_1}({z_1}) can be approximated by an RBF NN as
{h_1}({z_1}) = {\theta _1}^{*T}{S_1}\left( {{z_1}} \right) + {\varepsilon _1}\left( {{z_1}} \right) | (7.8) |
Substituting (7.7) and (7.8) into (7.6) and using Young’s inequality, one gets
{G_1}{\zeta _1}\tau {\varepsilon _1}\left( {{z_1}} \right) \leqslant {\underline g _1}G_1^2\zeta _1^2{\tau ^2} + \frac{{\varepsilon _1^2}}{4{{\underline g _1}}} | (7.9) |
{G_1}{\zeta _1}\tau {\theta _1}^{*T}{S_1}\left( {{z_1}} \right) \leqslant {\underline g _1}G_1^2\zeta _1^2{\tau ^2}{w_1}S_1^T\left( {{z_1}} \right){S_1}\left( {{z_1}} \right) + \frac{1}{4{{\underline g _1}}} | (7.10) |
Then, we obtain
{\dot V_1} \leqslant {G_1}{\zeta _1}\tau ({g_1}{\alpha _1} + {\underline g _1}{\psi _1} + {\underline g _1}{w_1}{P_1}) + {\underline g _2}{G_1}^2\zeta _1^2\zeta _2^2 - \frac{{\underline g _1}}{{{\lambda _1}}}{\tilde w_1}{\dot {\hat w}_1} + {\Xi _1} | (7.11) |
with
{P_1} = {G_1}{\zeta _1}\tau S_1^T\left( {{z_1}} \right){S_1}\left( {{z_1}} \right) |
{\psi _1} = {G_1}{\zeta _1}\tau |
{\Xi _1} = \frac{{g_1^2}}{{4{\underline g _2}}} + \frac{3}{4{{\underline g _1}}} + \frac{{\varepsilon _1^2}}{4{{\underline g _1}}} |
Substituting the virtual controller (4.2) and using Assumption 2, we get
\begin{gathered} {g_1}{G_1}{\zeta _1}\tau {\alpha _1} = - {k_1}{g_1}{G_1}{\zeta _1}^2 - {g_1}{G_1}{\zeta _1}\tau {\psi _1} - {g_1}{G_1}{\zeta _1}\tau {{\hat w}_1}{P_1} \\ \leqslant - {k_1}{\underline g _1}{G_1}{\zeta _1}^2 - {\underline g _1}{G_1}{\zeta _1}\tau {\psi _1} - {\underline g _1}{G_1}{\zeta _1}\tau {{\hat w}_1}{P_1} \\ \end{gathered} |
then
{\dot V_1} \leqslant - {k_1}{\underline g _1}{G_1}\zeta _1^2 + {\underline g _2}G_1^2\zeta _1^2\zeta _2^2 + \frac{{\underline g _1}}{{{\lambda _1}}}{\delta _1}{\tilde w_1}{\hat w_1} + {\Xi _1} | (7.12) |
Step i \left({i = 2, ..., n - 1} \right): The following Barrier Lyapunov function is chosen as
{V_{i0}} = \frac{{\zeta _i^2\left( t \right)}}{{\left( {{F_{i1}}\left( t \right) + {\zeta _i}\left( t \right)} \right)\left( {{F_{i2}}\left( t \right) - {\zeta _i}\left( t \right)} \right)}} |
Consider the BLF as
{V_i} = {V_{i - 1}} + {V_{i0}} + \frac{{{\underline g _i}}}{{2{\lambda _i}}}\tilde w_i^2 | (7.13) |
where {\lambda _i} > 0 is a constant, {w_i} = {\max _{k \in K}}\left\{ {\left. {{{\left\| {\theta _{Ki}^*} \right\|}^2}} \right\}} \right., and {\tilde w_i} = {w_i} - {\hat w_i} stands for the estimation error, with {\hat w_i} being the estimate of {w_i}. Based on (3.4), we get the time derivative of {V_i}
\begin{gathered} {{\dot V}_i} \leqslant - \sum\limits_{j = 1}^{i - 1} {{k_j}{\underline g _j}{G_j}\zeta _j^2} + {\underline g _i}G_{i - 1}^2\zeta _{i - 1}^2\zeta _i^2 + \sum\limits_{j = 1}^{i - 1} {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} + \sum\limits_{j = 1}^{i - 1} {{\Xi _j} + {G_i}{\zeta _i}\left( {{{\dot \zeta }_i} + {N_i}{\zeta _i}} \right)} - \frac{{{\underline g _i}}}{{{\lambda _i}}}{{\tilde w}_i}{{\dot {\hat w}}_i} \\ \leqslant - \sum\limits_{j = 1}^{i - 1} {{k_j}{\underline g _j}{G_j}\zeta _j^2} + \sum\limits_{j = 1}^{i - 1} {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} + \sum\limits_{j = 1}^{i - 1} {{\Xi _j}} - \frac{{{\underline g _i}}}{{{\lambda _i}}}{{\tilde w}_i}{{\dot {\hat w}}_i} \\ + {G_i}{\zeta _i}\left( {{N_i}{\zeta _i} + {g_i}\tau {\alpha _i} + {g_i}{\zeta _{i + 1}} + \dot \tau {z_i} + \frac{{{\underline g _i}G_{i - 1}^2\zeta _{i - 1}^2{\zeta _i}}}{{{G_i}}}} \right) \\ + {G_i}{\zeta _i}\tau \left( {{f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}} \left( {{g_j}{x_{j + 1}} + {f_j}} \right) - \Delta {\alpha _{i - 1}}} \right) \\ \end{gathered} | (7.14) |
where
{G_i} = \frac{{2{F_{i1}}{F_{i2}} - {F_{i1}}{\zeta _i} + {F_{i2}}{\zeta _i}}}{{{{\left[ {\left( {{F_{i1}} + {\zeta _i}} \right)\left( {{F_{i2}} - {\zeta _i}} \right)} \right]}^2}}} |
and
{N_i} = \frac{{ - {{\dot F}_{i1}}{F_{i2}} - {F_{i1}}{{\dot F}_{i2}} + \left( {{{\dot F}_{i1}} - {{\dot F}_{i2}}} \right){\zeta _i}}}{{2{F_{i1}}{F_{i2}} - {F_{i1}}{\zeta _i} + {F_{i2}}{\zeta _i}}} |
According to Young's inequality, it yields
{G_i}{g_i}{\zeta _i}{\zeta _{i + 1}} \leqslant {\underline g _{i+1}}G_i^2\zeta _i^2\zeta _{i + 1}^2 + \frac{{g_i^2}}{{4{{\underline g _{i+1}}}}} | (7.15) |
{G_i}{\zeta _i}\dot \tau {z_i} \leqslant {\underline g _i}G_i^2\zeta _i^2{\dot \tau ^2}z_i^2 + \frac{1}{{4{\underline g _i}}} | (7.16) |
{G_i}\zeta _i^2{N_i} \leqslant {\underline g _i}G_i^2\zeta _i^4N_i^2 + \frac{1}{{4{\underline g _i}}} | (7.17) |
{\rm{Let}}~{h_i}({z_i}) = {f_i} - \sum\limits_{j = 1}^{i - 1} {\frac{{\partial {\alpha _{i - 1}}}}{{\partial {x_j}}}} \left( {{g_j}{x_{j + 1}} + {f_j}} \right) - \Delta {\alpha _{i - 1}} + {\underline g _i}{G_i}{\dot \tau ^2}z_i^3 + {\underline g _i}{G_i}\zeta _i^2N_i^2{z_i} + \frac{{{\underline g _i}G_{i - 1}^2\zeta _{i - 1}^2{z_i}}}{{{G_i}}} | (7.18) |
The unknown continuous function {h_i}({z_i}) can be approximated by an RBF NN as
{h_i}({z_i}) = {\theta _i}^{*T}{S_i}\left( {{z_i}} \right) + {\varepsilon _i}\left( {{z_i}} \right) | (7.19) |
Substituting (7.18) and (7.19) into (7.14) and using Young’s inequality, one gets
{G_i}{\zeta _i}\tau {\varepsilon _i} \leqslant {\underline g _i}G_i^2\zeta _i^2{\tau ^2} + \frac{{\varepsilon _i^2}}{{4{\underline g _i}}} | (7.20) |
{G_i}{\zeta _i}\tau {\theta _i}^{*T}{S_i}\left( {{z_i}} \right) \leqslant {\underline g _i}G_i^2\zeta _i^2{\tau ^2}{w_i}S_i^T\left( {{z_i}} \right){S_i}\left( {{z_i}} \right) + \frac{1}{{4{\underline g _i}}} | (7.21) |
Then, we obtain
\begin{gathered} {{\dot V}_i} \leqslant - \sum\limits_{j = 1}^{i - 1} {{k_j}{\underline g _j}{G_j}\zeta _j^2} + \sum\limits_{j = 1}^{i - 1} {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} + {{\underline g _{i+1}}}G_i^2\zeta _i^2\zeta _{i + 1}^2 \\ + {G_i}{\zeta _i}\tau \left( {{g_i}{\alpha _i} + {\underline g _i}{\psi _i} + {\underline g _i}{w_i}{P_i}} \right) - \frac{{{\underline g _i}}}{{{\lambda _i}}}{{\tilde w}_i}{{\dot {\hat w}}_i} + \sum\limits_{j = 1}^{i - 1} {{\Xi _j}} \\ \\ \end{gathered} | (7.22) |
with
{P_i} = {G_i}{\zeta _i}\tau S_i^T\left( {{z_i}} \right){S_i}\left( {{z_i}} \right) |
{\psi _i} = {G_i}{\zeta _i}\tau |
{\Xi _i} = \frac{{g_i^2}}{{4{{\underline g _{i+1}}}}} + \frac{3}{{4{\underline g _i}}} + \frac{{\varepsilon _i^2}}{{4{\underline g _i}}} |
Substituting the virtual controller (4.4) and using Assumption 2, we get
{g_i}{G_i}{\zeta _i}\tau {\alpha _i} = - {g_i}{k_i}{G_i}{\zeta _i}^2 - {g_i}{G_i}{\zeta _i}\tau {\psi _i} - {g_i}{G_i}{\zeta _i}\tau {\hat w_i}{P_i} \leqslant - {\underline g _i}{k_i}{G_i}{\zeta _i}^2 - {\underline g _i}{G_i}{\zeta _i}\tau {\psi _i} - {\underline g _i}{G_i}{\zeta _i}\tau {\hat w_i}{P_i}
Then, we get
{\dot V_i} \leqslant - \sum\limits_{j = 1}^i {{k_j}{\underline g _j}{G_j}\zeta _j^2} + \sum\limits_{j = 1}^i {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} + \sum\limits_{j = 1}^i {{\Xi _j}} + {\underline g _{i+1}}G_i^2\zeta _i^2\zeta _{i + 1}^2 | (7.23) |
Step n: Since the derivation of {\dot V_n} is similar to that of {\dot V_i}, we can obtain the following:
\begin{gathered} {{\dot V}_n} \leqslant {G_n}{\zeta _n}\left( {\frac{{{\underline g _n}G_{n - 1}^2\zeta _{n - 1}^2{\zeta _n}}}{{{G_n}}} + \tau {g_n}\left( {mu + b} \right) + \dot \tau {z_n}} \right) + {G_n}{N_n}{\zeta _n}^2 - \frac{{{\underline g _n}}}{{{\lambda _n}}}{{\tilde w}_n}{{\dot {\hat w}}_n} + \sum\limits_{j = 1}^{n - 1} {{\Xi _j}} + \sum\limits_{j = 1}^{n - 1} {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} \\ + {G_n}{\zeta _n}\tau \left( {{f_n} - \sum\limits_{j = 1}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_j}}}} \left( {{g_j}{x_{j + 1}} + {f_j}} \right) - \Delta {\alpha _{n - 1}}} \right) - \sum\limits_{j = 1}^{n - 1} {{k_j}{\underline g _j}{G_j}\zeta _j^2} \\ \end{gathered} | (7.24) |
Since b is bounded, one obtains b/m \leqslant \bar b/m \triangleq \eta, where \eta is a constant. Using Young's inequality, we get the following:
{G_n}{\xi _n}\tau {g_n}\left( {mu + b} \right) = {G_n}{\xi _n}\tau {g_n}mu + {G_n}{\xi _n}\tau {g_n}m\frac{b}{m} \leqslant {G_n}{\xi _n}\tau {g_n}mu + G_n^2\xi _n^2{\tau ^2}m + \frac{{m{\eta ^2}g_n^2}}{4} | (7.25) |
According to Young's inequality, we have
{G_n}{\zeta _n}\dot \tau {z_n} \leqslant {\underline g _n}G_n^2\zeta _n^2{\dot \tau ^2}z_n^2 + \frac{1}{{4{\underline g _n}}} | (7.26) |
{G_n}\zeta _n^2{N_n} \leqslant {\underline g _n}G_n^2\zeta _n^4N_n^2 + \frac{1}{{4{\underline g _n}}} | (7.27) |
Denote
{h_n}({z_n}) = {f_n} - \sum\limits_{j = 1}^{n - 1} {\frac{{\partial {\alpha _{n - 1}}}}{{\partial {x_j}}}} \left( {{g_j}{x_{j + 1}} + {f_j}} \right) - \Delta {\alpha _{n - 1}} + {G_n}{\zeta _n}\tau m + {\underline g _n}{G_n}{\dot \tau ^2}z_n^3 + {\underline g _n}{G_n}\zeta _n^2N_n^2{z_n} + \frac{{{\underline g _n}G_{n - 1}^2\zeta _{n - 1}^2{z_n}}}{{{G_n}}} | (7.28) |
The unknown continuous function {h_n}({z_n}) can be approximated by an RBF NN as
{h_n}({z_n}) = {\theta _n}^{*T}{S_n}\left( {{z_n}} \right) + {\varepsilon _n}\left( {{z_n}} \right) | (7.29) |
Substituting (7.28) and (7.29) into (7.24), and using Young's inequality, it yields
{G_n}{\zeta _n}\tau {\varepsilon _n} \leqslant {\underline g _n}G_n^2\zeta _n^2{\tau ^2} + \frac{{\varepsilon _n^2}}{{4{\underline g _n}}} | (7.30) |
{G_n}{\zeta _n}\tau {\theta _n}^{*T}{S_n}\left( {{z_n}} \right) \leqslant {\underline g _n}G_n^2\zeta _n^2{\tau ^2}{w_n}S_n^T\left( {{z_n}} \right){S_n}\left( {{z_n}} \right) + \frac{1}{{4{\underline g _n}}} | (7.31) |
Then, using the above inequalities we get
{\dot V_n} \leqslant - \sum\limits_{j = 1}^{n - 1} {{k_j}{\underline g _j}{G_j}\zeta _j^2} + \sum\limits_{j = 1}^{n - 1} {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j} + \sum\limits_{j = 1}^{n - 1} {{\Xi _j}} } + {G_n}{\zeta _n}\tau \left( {{g_n}mu + {\underline g _n}{\psi _n} + {\underline g _n}{w_n}{P_n}} \right) - \frac{{{\underline g _n}}}{{{\lambda _n}}}{\tilde w_n}{\dot {\hat w}_n} | (7.32) |
where
{P_n} = {G_n}{\zeta _n}\tau S_n^T\left( {{z_n}} \right){S_n}\left( {{z_n}} \right) |
{\psi _n} = {G_n}{\zeta _n}\tau |
{\Xi _n} = \frac{3}{{4{\underline g _n}}} + \frac{{\varepsilon _n^2}}{{4{\underline g _n}}} + \frac{{m{\eta ^2}g_n^2}}{4} |
with
{G_n}{\zeta _n}\tau {g_n}mu = - {k_n}{g_n}{G_n}{\zeta _n}^2 - {g_n}{G_n}{\zeta _n}\tau {\psi _n} - {g_n}{G_n}{\zeta _n}\tau {\hat w_n}{{\rm P}_n} \leqslant - {k_n}{\underline g _n}{G_n}{\zeta _n}^2 - {\underline g _n}{G_n}{\zeta _n}\tau {\psi _n} - {\underline g _n}{G_n}{\zeta _n}\tau {\hat w_n}{{\rm P}_n} |
Then, we get
{\dot V_n} \leqslant - \sum\limits_{j = 1}^n {{k_j}{\underline g _j}{G_j}\zeta _j^2} + \sum\limits_{j = 1}^n {\frac{{{\underline g _j}}}{{{\lambda _j}}}{\delta _j}{{\tilde w}_j}{{\hat w}_j}} + \sum\limits_{j = 1}^n {{\Xi _j}} | (7.33) |
All authors declare no conflicts of interest.