
The attitude stabilization problem has attracted extensive attention in recent years because of its significant applications in spacecraft navigation, satellite formation flying and the recycling of space debris. To achieve satisfactory tracking performance, system uncertainties and other nonlinear dynamics of the controlled spacecraft should be handled effectively. A wide range of control techniques has been proposed to address these issues, including adaptive control [1], output feedback control [2] and robust control [3]. Moreover, spacecraft systems commonly experience actuator constraints such as saturation and degradation, which can severely undermine their practicality and reliability. Several methods have been developed in [4,5,6] to deal with these actuator nonlinearities. However, the majority of the above-mentioned strategies can only realize asymptotic stabilization, meaning that the tracking errors converge only as time tends to infinity and the convergence time cannot be specified by users; this conflicts with the rapid-convergence requirement of many real-time space missions.
To improve the convergence rate of the aforementioned strategies, the concept of finite-time control was initially proposed in [7] and has since been applied to a variety of nonlinear systems [8,9,10]. However, the convergence time cannot be predefined as desired, and its upper bound grows unboundedly with the initial states. This deficiency was addressed by the development of fixed-time control [11], which has the appealing merit that the settling time is independent of the initial configuration. Sliding mode methods are widely employed to achieve such convergence properties. In [12], a nonsingular terminal sliding mode surface (NTSMS) is designed by using a piecewise continuous function, and an NTSMS-based adaptive controller is proposed for spacecraft formations, allowing for fixed-time coordinated attitude tracking. In [13], a robust fixed-time attitude controller is established through the use of a faster fixed-time sliding mode surface, and a fixed-time observer is designed for the lumped uncertainty. In addition to sliding mode control, the backstepping technique can also be used to construct fixed-time controllers. In [14], a fixed-time backstepping controller is constructed by virtue of a command filter for a class of nonstrict-feedback nonlinear systems, and a fuzzy logic system is introduced to approximate the uncertainties, input saturation and dead zones. In [15], a fixed-time control protocol is proposed for hypersonic vehicles by using a tracking differentiator to compute the derivatives of the virtual control law. In [16], a command-filter-based backstepping control scheme is introduced to avoid the computational complexity of differentiating the virtual control law in conventional backstepping schemes. In [17], the backstepping technique is combined with a dynamic surface method to lighten the computational burden. Nevertheless, none of the above methods yields an explicit relationship between the tunable parameters and the settling time.
Recently, the stabilization of systems with predefined-time convergence has become a research hotspot, and many meaningful studies have been conducted on this topic because, unlike finite/fixed-time control schemes, it drives the states to the origin within a settling time that is explicitly equal to a user-tuned parameter. Sliding mode control and backstepping are two of the most commonly used methods for designing predefined-time controllers, and some representative work on predefined-time control of spacecraft systems has been reported in [18,19,20,21]. However, the estimate of the convergence time bound in these works is somewhat conservative, resulting in the actual settling time being several times shorter than the estimate.
Another important goal of controller design is to achieve desirable transient performance. To this end, the prescribed performance control (PPC) protocol was first developed in [22] and has been widely applied to spacecraft systems [23,24,25] in recent years. In [25], combining PPC and an NTSMS, a fixed-time sliding mode attitude controller is designed for flexible spacecraft. Unlike the conventional exponentially convergent prescribed performance function (PPF), a novel predefined-time PPF is designed in [26], with the more appealing property that the convergence time is prescribed by users. Chen et al. [27] utilized a polynomial function to design a PPF with predefined-time convergence, which mitigates the chattering problem caused by exponential functions. Bu et al. [28] constructed a finite-time prescribed performance controller for waverider vehicles in which no fuzzy/neural systems are required to estimate the unknown dynamics. In [29] and [30], two new types of finite-time PPFs are explored with the purpose of minimizing the overshoot and overcoming the fragility problem caused by actuator saturation. Nevertheless, most of the above PPF-based controllers require that the PPFs be larger than the tracking errors at the initial stage.
To the best of the authors' knowledge, developing a predefined-time controller for spacecraft systems with inertia perturbation, space-environment disturbances and actuator faults is an open topic. Inspired by the existing work, we have designed a radial basis function neural network (RBFNN)-based controller with prescribed performance and appointed-time convergence which ensures that the tracking error will converge to a prescribed small region in the vicinity of the origin within the predefined time. The main contributions can be summarized as follows.
1) By employing the proposed PPC control scheme, both the convergence time and tracking accuracy can be arbitrarily predefined by users. The designed controller presents great robustness against input saturation, actuator misalignment and unexpected disturbance.
2) By applying a novel shifting function to conventional PPC, we remove the requirement that the initial tracking errors be smaller than the initial PPF values. Additionally, the shifting function allows for an improved handling of input saturation.
3) Following the representative backstepping design methodology, we propose an attitude controller with prescribed performance and appointed-time convergence for spacecraft systems. The singularity problem associated with virtual control law is avoided via the design of a piecewise continuous function.
4) RBFNN and minimum learning parameter (MLP) techniques are combined to estimate the system uncertainty and the derivative of virtual control law. Moreover, the fixed-time convergence of the learning parameter is ensured by constructing the adaptive law.
Lemma 1. [11] Consider the nonlinear system
\begin{equation} \dot{\boldsymbol x} = f(\boldsymbol x, t) \end{equation} | (2.1) |
Suppose that there exist a positive-definite Lyapunov function V and scalars \gamma_1 > 0 , \gamma_2 > 0 , p > 1 , 0 < q < 1 and \Delta > 0 such that the following property holds:
\begin{equation} \dot V \le -\gamma_1 V^{p} - \gamma_2 V^{q} + \Delta \end{equation} | (2.2) |
Then, the equilibrium of (2.1) is practically fixed-time stable, with the settling time bounded by T_f \le \frac{1}{\gamma_1\kappa(p-1)} + \frac{1}{\gamma_2\kappa(1-q)} , where 0 < \kappa < 1 . The solution of (2.1) will converge to the residual set
\begin{equation} \left\{\boldsymbol x \mid V \le \min\left\{\left(\frac{\Delta}{\gamma_1(1-\kappa)}\right)^{\frac{1}{p}}, \left(\frac{\Delta}{\gamma_2(1-\kappa)}\right)^{\frac{1}{q}}\right\}\right\} \end{equation} | (2.3) |
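To make Lemma 1 concrete, the short Python sketch below evaluates the settling-time bound T_f and the residual-set level for illustrative values of \gamma_1 , \gamma_2 , p , q , \Delta and \kappa ; the function name and all numbers are placeholders for illustration, not quantities taken from the paper.

```python
def fixed_time_bounds(gamma1, gamma2, p, q, delta, kappa):
    """Settling-time bound and residual-set level suggested by Lemma 1."""
    Tf = 1.0 / (gamma1 * kappa * (p - 1.0)) + 1.0 / (gamma2 * kappa * (1.0 - q))
    V_res = min((delta / (gamma1 * (1.0 - kappa))) ** (1.0 / p),
                (delta / (gamma2 * (1.0 - kappa))) ** (1.0 / q))
    return Tf, V_res

# Illustrative numbers only
print(fixed_time_bounds(gamma1=2.0, gamma2=1.5, p=1.2, q=0.8, delta=0.05, kappa=0.5))
```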
Lemma 2. [31] For x,y∈R, the following relationship holds:
\begin{equation} |x|^{m}|y|^{n} \le \frac{m}{m+n}c|x|^{m+n} + \frac{n}{m+n}c^{-\frac{m}{n}}|y|^{m+n} \end{equation} | (2.4) |
where m>0, n>0 and c>0.
Lemma 3. [32] For y>x and l>0, we have
\begin{equation} x\left(y - x\right)^{l} \le \frac{l}{1+l}\left(y^{1+l} - x^{1+l}\right) \end{equation} | (2.5) |
Lemma 4. [33] For variables x_1, x_2, \ldots, x_n > 0 , 0 < y_1 \le 1 and y_2 > 1 , the following inequalities hold:
\begin{equation} \sum\limits_{i = 1}^{n}x_i^{y_1} \ge \left(\sum\limits_{i = 1}^{n}x_i\right)^{y_1} \end{equation} | (2.6) |
\begin{equation} \sum\limits_{i = 1}^{n}x_i^{y_2} \ge n^{1-y_2}\left(\sum\limits_{i = 1}^{n}x_i\right)^{y_2} \end{equation} | (2.7) |
Notation 1. In this paper, {\rm sig}^{\alpha}(\boldsymbol\beta) = \left[|\beta_1|^{\alpha}{\rm sgn}(\beta_1), |\beta_2|^{\alpha}{\rm sgn}(\beta_2), \ldots, |\beta_n|^{\alpha}{\rm sgn}(\beta_n)\right]^T , where \boldsymbol\beta = [\beta_1, \beta_2, \ldots, \beta_n]^T and {\rm sgn}(\cdot) denotes the sign function. For n = 3 , the skew-symmetric matrix is defined as \boldsymbol\beta^{\times} = \left[ {\begin{array}{*{20}{c}} 0 & -\beta_3 & \beta_2 \\ \beta_3 & 0 & -\beta_1 \\ -\beta_2 & \beta_1 & 0 \end{array}} \right] .
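The two operators in Notation 1 can be written in a few lines of Python; the sketch below is only an illustration under the definitions above (function names and sample values are placeholders).

```python
import numpy as np

def sig(beta, alpha):
    """Elementwise sig^alpha(beta) = |beta_i|^alpha * sgn(beta_i) from Notation 1."""
    beta = np.asarray(beta, dtype=float)
    return np.abs(beta) ** alpha * np.sign(beta)

def skew(beta):
    """Skew-symmetric matrix beta^x of a 3-vector, so that skew(a) @ b equals the cross product a x b."""
    b1, b2, b3 = beta
    return np.array([[0.0, -b3,  b2],
                     [ b3, 0.0, -b1],
                     [-b2,  b1, 0.0]])

print(sig([0.5, -2.0, 0.0], 0.8))
print(skew([1.0, 2.0, 3.0]) @ np.array([0.0, 1.0, 0.0]))   # equals np.cross([1, 2, 3], [0, 1, 0])
```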
The kinematic equation and dynamics of a spacecraft can be presented as
\begin{equation} \left[ {\begin{array}{*{20}{c}} \dot q_0 \\ \dot{\boldsymbol q}_v \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} -\frac{1}{2}\boldsymbol q_v^T \\ \frac{1}{2}\left(\boldsymbol q_v^{\times} + q_0\boldsymbol I_3\right) \end{array}} \right]\boldsymbol\omega \end{equation} | (2.8) |
\begin{equation} \boldsymbol J\dot{\boldsymbol\omega} + \boldsymbol\omega^{\times}\boldsymbol J\boldsymbol\omega = \boldsymbol\tau + \boldsymbol d \end{equation} | (2.9) |
where \boldsymbol q = [q_0, q_1, q_2, q_3]^T = [q_0, \boldsymbol q_v^T]^T \in \mathbb{R}^4 denotes the unit quaternion used to parameterize the orientation of the spacecraft, satisfying the identity q_0^2 + \boldsymbol q_v^T\boldsymbol q_v = 1 ; \boldsymbol\omega \in \mathbb{R}^3 represents the angular velocity of the spacecraft; \boldsymbol d \in \mathbb{R}^3 denotes the unknown environmental disturbance torque; \boldsymbol J = \boldsymbol J_0 + \Delta\boldsymbol J \in \mathbb{R}^{3\times3} denotes the inertia matrix of the spacecraft system, with \boldsymbol J_0 and \Delta\boldsymbol J being the nominal and perturbed components, respectively; \boldsymbol\tau \in \mathbb{R}^3 is the control torque acting on the spacecraft.
The relationship between the actual control torque and the command input can be given by [34]:
\begin{equation} \boldsymbol\tau = \boldsymbol E{\rm sat}(\boldsymbol u_c) + \boldsymbol\sigma \end{equation} | (2.10) |
where \boldsymbol u_c = [u_{c1}, u_{c2}, u_{c3}]^T \in \mathbb{R}^3 is the command torque generated by the controller; \boldsymbol E = {\rm diag}\{e_1(t), e_2(t), e_3(t)\} \in \mathbb{R}^{3\times3} is the failure-coefficient matrix indicating the effectiveness of the actuators, with 0 \le e_i(t) \le 1 ; \boldsymbol\sigma = [\sigma_1, \sigma_2, \sigma_3]^T \in \mathbb{R}^3 is the bias-fault vector. The saturation characteristic of an actuator [35] can be formulated as {\rm sat}(u_{ci}) = {\rm sgn}(u_{ci})\cdot\min\{|u_{ci}|, u_{\max i}\} , where u_{\max i} denotes the maximum permissible torque generated by the actuators.
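A minimal Python sketch of the fault/saturation model (2.10) is given below; the sample numbers (a 60% effective first axis, a small bias on the third axis and a 7.5 N·m limit) only loosely mirror the simulation settings used later and are illustrative, not prescribed.

```python
import numpy as np

def actual_torque(u_c, E_diag, sigma, u_max):
    """Actual torque tau = E * sat(u_c) + sigma from (2.10), with per-axis saturation."""
    u_c = np.asarray(u_c, dtype=float)
    sat_u = np.sign(u_c) * np.minimum(np.abs(u_c), u_max)      # sat(u_ci)
    return np.asarray(E_diag) * sat_u + np.asarray(sigma)

print(actual_torque([9.0, -2.0, 1.0], E_diag=[0.6, 1.0, 1.0],
                    sigma=[0.0, 0.0, -0.1], u_max=7.5))
```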
Define \boldsymbol q_d = [q_{d0}, q_{d1}, q_{d2}, q_{d3}]^T = [q_{d0}, \boldsymbol q_{dv}^T]^T \in \mathbb{R}^4 as the expected attitude. The attitude error described in unit-quaternion form [36] is given by \boldsymbol q_e = [q_{e0}, q_{e1}, q_{e2}, q_{e3}]^T = [q_{e0}, \boldsymbol q_{ev}^T]^T , with \boldsymbol q_{ev} = q_{d0}\boldsymbol q_v - \boldsymbol q_{dv}^{\times}\boldsymbol q_v - q_0\boldsymbol q_{dv} , q_{e0} = \boldsymbol q_{dv}^T\boldsymbol q_v + q_0 q_{d0} and q_{e0}^2 + \boldsymbol q_{ev}^T\boldsymbol q_{ev} = 1 . Considering the rest-to-rest attitude maneuvering case, we have \boldsymbol\omega_e = \boldsymbol\omega in this paper.
The attitude error dynamics of the spacecraft systems is given by
\begin{equation} \left[ {\begin{array}{*{20}{c}} \dot q_{e0} \\ \dot{\boldsymbol q}_{ev} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} -\frac{1}{2}\boldsymbol q_{ev}^T \\ \frac{1}{2}\left(\boldsymbol q_{ev}^{\times} + q_{e0}\boldsymbol I_3\right) \end{array}} \right]\boldsymbol\omega_e \end{equation} | (2.11) |
\begin{equation} \dot{\boldsymbol\omega}_e = \boldsymbol J_0^{-1}\left(\boldsymbol M + \boldsymbol\tau + \boldsymbol N + \boldsymbol d\right) \end{equation} | (2.12) |
where \boldsymbol M = -\boldsymbol\omega_e^{\times}\boldsymbol J_0\boldsymbol\omega_e and \boldsymbol N = -\Delta\boldsymbol J\dot{\boldsymbol\omega}_e - \boldsymbol\omega_e^{\times}\Delta\boldsymbol J\boldsymbol\omega_e .
Define \boldsymbol x_1 = \boldsymbol q_{ev} and \boldsymbol x_2 = \dot{\boldsymbol q}_{ev} ; then (2.11) and (2.12) can be reconstructed as
\begin{equation} \left\{ \begin{aligned} &\dot{\boldsymbol x}_1 = \boldsymbol x_2 \\ &\dot{\boldsymbol x}_2 = \boldsymbol G + \boldsymbol\Pi + \boldsymbol d_2 + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol u_c \end{aligned} \right. \end{equation} | (2.13) |
where \boldsymbol G = \dot{\boldsymbol F}(\boldsymbol q_{ev})\boldsymbol\omega_e + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol M , \boldsymbol\Pi = \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol N + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\left(\boldsymbol E - \boldsymbol I_3\right)\boldsymbol u_c + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol\sigma + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol E\left({\rm sat}(\boldsymbol u_c) - \boldsymbol u_c\right) , \boldsymbol d_2 = \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol d and \boldsymbol F(\boldsymbol q_{ev}) = \frac{1}{2}\left(\boldsymbol q_{ev}^{\times} + q_{e0}\boldsymbol I_3\right) .
Assumption 1. The perturbed part of the inertia matrix, the external disturbance and the faulty torque are unknown but bounded. The lumped disturbance is bounded and satisfies \|\boldsymbol d_2\| \le \bar d , where \bar d is a positive constant.
The primary objective is to design an adaptive controller for spacecraft systems in the presence of actuator fault so as to achieve the prespecified tracking accuracy within a predefined time and satisfy the prescribed performance boundaries throughout the entire process, as well as to ensure the fixed-time boundedness of other closed-loop signals. Both the settling time and tracking precision can be defined according to the specific requirements of users, irrespective of the initial conditions.
To guarantee that attitude trajectory of the spacecraft remains within the prescribed boundaries with desirable transient and static performance, the following constraints are constructed first:
\begin{equation} -\underline\delta_i\rho_i(t) < q_{evi}(t) < \bar\delta_i\rho_i(t) \end{equation} | (3.1) |
where \bar\delta_i, \underline\delta_i > 0\ (i = 1, 2, 3) are two adjustable parameters and \rho_i(t) is the PPF.
In this paper, we propose a novel predefined-time PPF as
\begin{equation} \rho_i(t) = \left\{ \begin{aligned} &\left(\rho_{i0} - \rho_{i\infty}\right)\cos^{a_i}\left(\frac{\pi t}{2T_c}\right) + \rho_{i\infty}, &t < T_c \\ &\rho_{i\infty}, &t \ge T_c \end{aligned} \right. \end{equation} | (3.2) |
where \rho_{i0} > \rho_{i\infty} > 0 are constants, T_c > 0 is the predefined maximum allowable convergence time, which can be arbitrarily specified by users, and a_i > 0 is a preset constant that adjusts the convergence rate. \rho_i(t) is a monotonically decreasing smooth function that converges from \rho_{i0} to \rho_{i\infty} within T_c .
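The following short Python sketch evaluates the PPF (3.2); the sample values \rho_{i0} = 0.4 , \rho_{i\infty} = 0.0025 , T_c = 10 and a_i = 1.2 mirror the ones used later in the simulations, but the function itself is only an illustration.

```python
import numpy as np

def rho(t, rho0, rho_inf, Tc, a):
    """Predefined-time performance function (3.2): decays from rho0 to rho_inf by t = Tc."""
    t = np.asarray(t, dtype=float)
    tt = np.minimum(t, Tc)                                        # clamp so the cosine stays in [0, pi/2]
    decay = (rho0 - rho_inf) * np.cos(np.pi * tt / (2.0 * Tc)) ** a + rho_inf
    return np.where(t < Tc, decay, rho_inf)

print(rho([0.0, 5.0, 10.0, 20.0], rho0=0.4, rho_inf=0.0025, Tc=10.0, a=1.2))
```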
The traditional PPC method requires that the initial tracking errors satisfy condition (3.1). Under this constraint, the values of \bar\delta_i , \underline\delta_i and even \rho_i need to be reassigned whenever the initial tracking error exceeds the initial value of the PPF, which is challenging considering that the initial configurations may be unavailable. To overcome this weakness, we introduce a shifting function that maps the initial tracking error into the interval [-\underline\delta_i\rho_{i0}, \bar\delta_i\rho_{i0}] :
\begin{equation} \eta_i = \frac{2k_i}{\pi}\arctan\left(q_{evi}\right) \end{equation} | (3.3) |
where k_i = \left\{ \begin{aligned} &\bar\delta_i\rho_{i0}, &q_{evi} \ge 0 \\ &\underline\delta_i\rho_{i0}, &q_{evi} < 0 \end{aligned} \right. . Note that we set \bar\delta_i = \underline\delta_i = 1 in the remainder of the paper for simplicity of analysis.
Remark 1. From (3.3), it can be seen that \lim_{q_{evi}\to-\infty}\eta_i = -\rho_{i0} and \lim_{q_{evi}\to+\infty}\eta_i = \rho_{i0} , which indicates that, no matter how large the attitude errors are, they will not violate the prescribed boundary requirement (3.1) at the outset. Moreover, if \lim_{t\to T_c}\eta_i = 0 , then \lim_{t\to T_c}q_{evi} = 0 , meaning that predefined-time attitude maneuvering can be achieved by driving \eta_i to zero within the prescribed interval.
Since ηi satisfies the boundary conditions, we have
\begin{equation} -\rho_i(t) < \eta_i < \rho_i(t) \end{equation} | (3.4) |
From (3.3), the prescribed boundary for the attitude error q_{evi} is shifted into
\begin{equation} -h_i(t) < q_{evi} < h_i(t) \end{equation} | (3.5) |
where h_i(t) = \tan\left(\frac{\pi}{2k_i}\rho_i(t)\right)\ (i = 1, 2, 3) . h_i(t) is a monotonically decreasing function with \lim_{t\to T_c}h_i = \tan\left(\frac{\pi}{2k_i}\rho_{i\infty}\right) .
To convert the constraint on qevi into its unconstrained counterpart, the transformation function is defined as
\begin{equation} T(\varepsilon_i) = \frac{2}{\pi}\arctan\left(\varepsilon_i\right) \end{equation} | (3.6) |
Obviously, T(\varepsilon_i) is a monotonically increasing function with the following properties: (1) -1 < T(\varepsilon_i) < 1 ; (2) \lim_{\varepsilon_i\to+\infty}T(\varepsilon_i) = 1 ; (3) \lim_{\varepsilon_i\to-\infty}T(\varepsilon_i) = -1 ; (4) T(0) = 0 .
In what follows, we define
\begin{equation} \eta_i = \rho_i T(\varepsilon_i) \end{equation} | (3.7) |
Therefore, the transformed error \varepsilon_i is introduced as
\begin{equation} \varepsilon_i = T^{-1}(\xi_i) = \tan\left(\frac{\pi}{2}\xi_i\right) \end{equation} | (3.8) |
where \xi_i = \frac{\eta_i}{\rho_i} is the normalized error.
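The chain shift (3.3) → normalization → transformation (3.8) is easy to prototype; the Python sketch below applies it to a single error component under the \bar\delta_i = \underline\delta_i = 1 convention (the function name and the sample numbers are illustrative placeholders).

```python
import numpy as np

def transformed_error(q_evi, rho_i, rho_i0):
    """Shift (3.3), normalize and transform (3.8) one attitude-error component."""
    k_i = rho_i0                                      # shifting-function coefficient (delta_bar = delta_low = 1)
    eta_i = 2.0 * k_i / np.pi * np.arctan(q_evi)      # shifted error, |eta_i| < rho_i0
    xi_i = eta_i / rho_i                              # normalized error
    return np.tan(np.pi / 2.0 * xi_i)                 # transformed error epsilon_i

# Even a large initial error is mapped inside the initial envelope by the shift
print(transformed_error(q_evi=0.79, rho_i=0.4, rho_i0=0.4))
```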
During the period of actuator saturation, the attitude errors may grow to exceed the prescribed envelopes, which can result in the transformed error εi approaching infinity. Consequently, the actuator will be kept saturated, compromising the stability and reliability of the system. To this end, we redesign the coefficient of our proposed shifting function as follows:
\begin{equation} k_i = \left\{ \begin{aligned} &\rho_i, &\left|{\rm sat}(u_{ci}) - u_{ci}\right| > 0 \\ &s_i, &\left|{\rm sat}(u_{ci}) - u_{ci}\right| = 0 \end{aligned} \right. \end{equation} | (3.9) |
where 0 < s_i < \frac{\pi}{2} is a positive constant.
Remark 2. According to the properties of T(εi), it is obvious that the desired performance for the shifted attitude errors ηi prescribed in (3.4) can be achieved when the boundedness of εi is ensured. In this respect, the problem of (3.4) is converted into its equivalent of stabilizing the transformed state εi by designing the controller.
Remark 3. Unlike the previous finite-time PPFs proposed in [26,27,28,37], the setting of \rho_{i0} is independent of q_{evi}(0) . With the assistance of the shifting function provided in (3.3), the PPF defined in (3.2) does not require prior knowledge of the initial errors to design the parameters. The removal of restrictions on initial conditions simplifies the design process and contributes to the reliability and practicality of the proposed PPC scheme.
Remark 4. When there is input saturation, the shifting function ensures that the shifted error \eta_i remains within the appointed boundary [-\rho_i, \rho_i] . With a smaller \eta_i , the value of the normalized error \xi_i is reduced, resulting in a smaller transformed error \varepsilon_i and a decline in the control input. When the actuator exits its saturation zone, the coefficient of the shifting function changes to s_i . Compared with the existing strategies [37,38,39], this method can reduce the control output by decreasing the absolute value of the normalized error \xi_i . (The proof can be seen in the Appendix.)
To facilitate the implementation of backstepping methods, we can define
\begin{equation} \left\{ \begin{aligned} &\boldsymbol z_1 = \boldsymbol x_1 \\ &\boldsymbol z_2 = \boldsymbol x_2 - \boldsymbol\alpha_2 \end{aligned} \right. \end{equation} | (4.1) |
From (2.13), the time derivative of \boldsymbol z_1 is
\begin{equation} \dot{\boldsymbol z}_1 = \boldsymbol x_2 \end{equation} | (4.2) |
To remove the initial value constraints, we apply the shifting function to \boldsymbol z_1 and obtain the shifted error signals:
\begin{equation} \eta_{1i} = \frac{2k_{1i}}{\pi}\arctan\left(z_{1i}\right) \end{equation} | (4.3) |
Define the first normalized tracking error \xi_{1i} = \frac{\eta_{1i}}{\rho_{1i}} ; then, using the transformation function defined in (3.6), we can obtain
\begin{equation} \varepsilon_{1i} = \tan\left(\frac{\pi}{2}\xi_{1i}\right) \end{equation} | (4.4) |
The time derivative of \varepsilon_{1i} is
\begin{equation} \dot\varepsilon_{1i} = \frac{\partial\varepsilon_{1i}}{\partial\xi_{1i}}\cdot\dot\xi_{1i} = \frac{\pi}{2}\sec^2\left(\frac{\pi}{2}\xi_{1i}\right)\cdot\frac{\dot\eta_{1i}\rho_{1i} - \eta_{1i}\dot\rho_{1i}}{\rho_{1i}^2} = \frac{\pi}{2\rho_{1i}}\sec^2\left(\frac{\pi}{2}\xi_{1i}\right)\cdot\left(\frac{2k_{1i}}{\pi\sqrt{1+z_{1i}^2}}\dot z_{1i} - \frac{\eta_{1i}\dot\rho_{1i}}{\rho_{1i}}\right) = \psi_{1i}\left(g_{1i}x_{2i} - f_{1i}\right) \end{equation} | (4.5) |
where \psi_{1i} = \frac{\pi}{2\rho_{1i}}\sec^2\left(\frac{\pi}{2}\xi_{1i}\right) , g_{1i} = \frac{2k_{1i}}{\pi\sqrt{1+z_{1i}^2}} and f_{1i} = \frac{\eta_{1i}\dot\rho_{1i}}{\rho_{1i}} .
We can rewrite (4.5) in vector form as
\begin{equation} \dot{\boldsymbol\varepsilon}_1 = \boldsymbol\psi_1\left(\boldsymbol g_1\boldsymbol x_2 - \boldsymbol f_1\right) \end{equation} | (4.6) |
where \boldsymbol\psi_1 = {\rm diag}\{\psi_{1i}\} , \boldsymbol g_1 = {\rm diag}\{g_{1i}\} and \boldsymbol f_1 = [f_{11}, f_{12}, f_{13}]^T .
Then, the virtual control law \boldsymbol\alpha_2 = [\alpha_{21}, \alpha_{22}, \alpha_{23}]^T can be established as
\begin{equation} \boldsymbol\alpha_2 = -\left(\boldsymbol g_1\right)^{-1}\left(\boldsymbol\psi_1\right)^{-1}\left(k_1{\rm sig}^{p}(\boldsymbol\varepsilon_1) + k_2\boldsymbol\phi_1 - \boldsymbol\psi_1\boldsymbol f_1\right) \end{equation} | (4.7) |
where k_1 > 0 , k_2 > 0 , p > 1 and \boldsymbol\phi_1 = [\phi_{11}, \phi_{12}, \phi_{13}]^T is a piecewise continuous function designed as
\begin{equation} \phi_{1i} = \left\{ \begin{aligned} &{\rm sig}^{q}(\varepsilon_{1i}), &{\rm if}\ |\varepsilon_{1i}| > \mu \\ &l_1{\rm sig}(\varepsilon_{1i})\left(\mu^2\right)^{\frac{q-1}{2}} + l_2{\rm sig}^{2}(\varepsilon_{1i})\left(\mu^2\right)^{\frac{q}{2}-1} + l_3{\rm sig}^{3}(\varepsilon_{1i})\left(\mu^2\right)^{\frac{q-3}{2}}, &{\rm if}\ |\varepsilon_{1i}| \le \mu \end{aligned} \right. \end{equation} | (4.8) |
where 0 < q < 1 , l_1 = 0.5q^2 - 2.5q + 3 , l_2 = -q^2 + 4q - 3 , l_3 = 0.5q^2 - 1.5q + 1 and \mu is a tiny positive constant.
Remark 5. If \boldsymbol\alpha_2 were designed as \boldsymbol\alpha_2 = -\left(\boldsymbol g_1\right)^{-1}\left(\boldsymbol\psi_1\right)^{-1}\left(k_1{\rm sig}^{p}(\boldsymbol\varepsilon_1) + k_2{\rm sig}^{q}(\boldsymbol\varepsilon_1) - \boldsymbol\psi_1\boldsymbol f_1\right) , its derivative \dot{\boldsymbol\alpha}_2 would contain terms proportional to {\rm sig}^{q-1}(\varepsilon_{1i})\dot\varepsilon_{1i} . Because 0 < q < 1 , a singularity may occur in \dot{\boldsymbol\alpha}_2 when \varepsilon_{1i} = 0 and \dot\varepsilon_{1i} \ne 0 . To avoid this problem, we design the above piecewise function around the switching point \mu . The values of l_1 , l_2 and l_3 are selected to ensure the continuity of \phi_{1i} and of its first and second derivatives.
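A quick numerical way to see the smooth switching of (4.8) is to evaluate \phi_{1i} on both sides of \mu ; the Python sketch below does this (it uses the equivalent powers \mu^{q-1} , \mu^{q-2} , \mu^{q-3} in place of (\mu^2)^{\frac{q-1}{2}} , etc., and the numbers are illustrative).

```python
import numpy as np

def phi(eps, q, mu):
    """Piecewise continuous function (4.8) used in the virtual control law."""
    l1 = 0.5 * q**2 - 2.5 * q + 3.0
    l2 = -q**2 + 4.0 * q - 3.0
    l3 = 0.5 * q**2 - 1.5 * q + 1.0
    sig = lambda x, a: np.abs(x) ** a * np.sign(x)
    if abs(eps) > mu:
        return sig(eps, q)
    return (l1 * sig(eps, 1) * mu ** (q - 1)
            + l2 * sig(eps, 2) * mu ** (q - 2)
            + l3 * sig(eps, 3) * mu ** (q - 3))

q, mu = 0.8, 0.01
for e in (0.999 * mu, 1.001 * mu):       # values straddling the switching point
    print(e, phi(e, q, mu))              # nearly identical values -> no jump at |eps| = mu
```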
The first Lyapunov function candidate is defined as
\begin{equation} V_1 = \frac{1}{2}\boldsymbol\varepsilon_1^T\boldsymbol\varepsilon_1 \end{equation} | (4.9) |
Differentiating V_1 yields
\begin{equation} \dot V_1 = \boldsymbol\varepsilon_1^T\dot{\boldsymbol\varepsilon}_1 = \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\left[\boldsymbol g_1\left(\boldsymbol z_2 + \boldsymbol\alpha_2\right) - \boldsymbol f_1\right] \end{equation} | (4.10) |
Substituting (4.7) into (4.10), when |\varepsilon_{1i}| > \mu , we have
\begin{equation} \dot V_1 = \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\left[\boldsymbol g_1\left(\boldsymbol z_2 - \left(\boldsymbol g_1\right)^{-1}\left(\boldsymbol\psi_1\right)^{-1}\left(k_1{\rm sig}^{p}(\boldsymbol\varepsilon_1) + k_2{\rm sig}^{q}(\boldsymbol\varepsilon_1) - \boldsymbol\psi_1\boldsymbol f_1\right)\right) - \boldsymbol f_1\right] = \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} \end{equation} | (4.11) |
When |\varepsilon_{1i}| \le \mu , we can obtain
\begin{equation} \begin{aligned} \dot V_1 & = \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\left(l_1\left(\mu^2\right)^{\frac{q-1}{2}}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{2} + l_2\left(\mu^2\right)^{\frac{q}{2}-1}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{3} + l_3\left(\mu^2\right)^{\frac{q-3}{2}}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{4}\right) \\ & \le \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} + k_2\left(\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} + l_1\left(\mu^2\right)^{\frac{q-1}{2}}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{2} + l_2\left(\mu^2\right)^{\frac{q}{2}-1}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{3} + l_3\left(\mu^2\right)^{\frac{q-3}{2}}\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{4}\right) \\ & \le \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} + 3k_2\left(\left(\mu^2\right)^{\frac{q+1}{2}} + \left(l_1 + l_2 + l_3\right)\left(\mu^2\right)^{\frac{q+1}{2}}\right) \\ & \le \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} + 6k_2\left(\mu^2\right)^{\frac{q+1}{2}} \end{aligned} \end{equation} | (4.12) |
Note that, when |\varepsilon_{1i}| \le \mu , only the bounded term 6k_2\left(\mu^2\right)^{\frac{q+1}{2}} is added to the structure of (4.11).
Taking the derivative of z2, we can obtain
\begin{equation} \dot{\boldsymbol z}_2 = \boldsymbol G + \boldsymbol\Pi + \boldsymbol d_2 + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol u_c - \dot{\boldsymbol\alpha}_2 \end{equation} | (4.13) |
Similarly, we can relax the feasibility condition by introducing the shifting function to z2 and obtain the shifted error signals:
\begin{equation} \eta_{2i} = \frac{2k_{2i}}{\pi}\arctan\left(z_{2i}\right) \end{equation} | (4.14) |
Define the second normalized tracking error \xi_{2i} = \frac{\eta_{2i}}{\rho_{2i}} ; then the i th component of the transformed error vector can be defined as
\begin{equation} \varepsilon_{2i} = \tan\left(\frac{\pi}{2}\xi_{2i}\right) \end{equation} | (4.15) |
The time derivative of \varepsilon_{2i} is
\begin{equation} \dot\varepsilon_{2i} = \frac{\partial\varepsilon_{2i}}{\partial\xi_{2i}}\cdot\dot\xi_{2i} = \frac{\pi}{2}\sec^2\left(\frac{\pi}{2}\xi_{2i}\right)\cdot\frac{\dot\eta_{2i}\rho_{2i} - \eta_{2i}\dot\rho_{2i}}{\rho_{2i}^2} = \frac{\pi}{2\rho_{2i}}\sec^2\left(\frac{\pi}{2}\xi_{2i}\right)\cdot\left(\frac{2k_{2i}}{\pi\sqrt{1+z_{2i}^2}}\dot z_{2i} - \frac{\eta_{2i}\dot\rho_{2i}}{\rho_{2i}}\right) = \psi_{2i}\left(g_{2i}\dot z_{2i} - f_{2i}\right) \end{equation} | (4.16) |
where \psi_{2i} = \frac{\pi}{2\rho_{2i}}\sec^2\left(\frac{\pi}{2}\xi_{2i}\right) , g_{2i} = \frac{2k_{2i}}{\pi\sqrt{1+z_{2i}^2}} and f_{2i} = \frac{\eta_{2i}\dot\rho_{2i}}{\rho_{2i}} .
The vector form can be rewritten as
\begin{equation} \dot{\boldsymbol\varepsilon}_2 = \boldsymbol\psi_2\left(\boldsymbol g_2\dot{\boldsymbol z}_2 - \boldsymbol f_2\right) \end{equation} | (4.17) |
where \boldsymbol\psi_2 = {\rm diag}\{\psi_{2i}\} , \boldsymbol g_2 = {\rm diag}\{g_{2i}\} and \boldsymbol f_2 = [f_{21}, f_{22}, f_{23}]^T .
Substituting (4.13) into (4.17), we have
\begin{equation} \dot{\boldsymbol\varepsilon}_2 = \boldsymbol\psi_2\left(\boldsymbol g_2\left(\boldsymbol G + \boldsymbol\Pi + \boldsymbol d_2 + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol u_c - \dot{\boldsymbol\alpha}_2\right) - \boldsymbol f_2\right) \end{equation} | (4.18) |
Choose the second Lyapunov function candidate as
\begin{equation} V_2 = \frac{1}{2}\boldsymbol\varepsilon_2^T\boldsymbol\varepsilon_2 \end{equation} | (4.19) |
Differentiating V_2 with respect to time yields
\begin{equation} \dot V_2 = \boldsymbol\varepsilon_2^T\boldsymbol\psi_2\left[\boldsymbol g_2\left(\boldsymbol G + \boldsymbol\Pi + \boldsymbol d_2 + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol u_c - \dot{\boldsymbol\alpha}_2\right) - \boldsymbol f_2\right] = \boldsymbol\varepsilon_2^T\boldsymbol D + \boldsymbol\varepsilon_2^T\boldsymbol\psi_2\left[\boldsymbol g_2\left(\boldsymbol G + \boldsymbol d_2 + \boldsymbol F(\boldsymbol q_{ev})\boldsymbol J_0^{-1}\boldsymbol u_c\right) - \boldsymbol f_2\right] \end{equation} | (4.20) |
where \boldsymbol D = \boldsymbol\psi_2\boldsymbol g_2\left(\boldsymbol\Pi - \dot{\boldsymbol\alpha}_2\right) = [D_1, D_2, D_3]^T is the lumped disturbance, which can be approximated with the aid of the following RBFNN:
\begin{equation} \boldsymbol D(\boldsymbol Z) = \boldsymbol W^T\boldsymbol S(\boldsymbol Z) + \boldsymbol\zeta \end{equation} | (4.21) |
where \boldsymbol W \in \mathbb{R}^{n\times3} denotes the optimal weight matrix, n represents the number of network neurons, \boldsymbol Z = [\boldsymbol q_{ev}^T, \boldsymbol\omega_e^T, \boldsymbol\varepsilon_1^T]^T is the input vector, \boldsymbol\zeta \in \mathbb{R}^3 is the approximation error vector with \|\boldsymbol\zeta\| \le \zeta_m and \boldsymbol S(\boldsymbol Z) = \frac{\left[S_1(\boldsymbol Z), S_2(\boldsymbol Z), ..., S_n(\boldsymbol Z)\right]^T}{\sum_{i = 1}^{n}S_i(\boldsymbol Z)} \in \mathbb{R}^n is the basis function vector with
\begin{equation} S_i = \exp\left(\frac{-\left(\boldsymbol Z - \boldsymbol\beta_i\right)^T\left(\boldsymbol Z - \boldsymbol\beta_i\right)}{H^2}\right), \quad i = 1, 2, ..., n \end{equation} | (4.22) |
where βi and H are the receptive field center and width of the neural cell, respectively.
By defining \theta = \|\boldsymbol W\|^2 , we apply the MLP technique. In this way, we regulate the norm of the ideal weight matrix rather than its individual elements, and only one learning parameter needs to be updated for the execution of the neural network. Therefore, the computational burden and the complexity of the proposed strategy can be significantly reduced.
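For illustration, the Python sketch below builds a normalized Gaussian basis vector in the spirit of (4.22) and checks the property \boldsymbol S^T\boldsymbol S \le 1 used later in the analysis; the number of neurons, the centers and the width are placeholder choices, not values from the paper.

```python
import numpy as np

def rbf_features(Z, centers, H):
    """Normalized Gaussian basis vector built from (4.22); normalization keeps S^T S <= 1."""
    Z = np.asarray(Z, dtype=float)
    d2 = np.sum((centers - Z) ** 2, axis=1)    # squared distance to each receptive-field center
    S = np.exp(-d2 / H ** 2)
    return S / np.sum(S)

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(9, 9))  # 9 neurons; input Z = [q_ev, w_e, eps_1] has 9 components
S = rbf_features(np.zeros(9), centers, H=2.0)
print(S.shape, float(S @ S))                   # S^T S stays below 1
```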
The RBFNN-based adaptive controller can be designed as
\begin{equation} \boldsymbol u_c = -\boldsymbol J_0\boldsymbol F(\boldsymbol q_{ev})^{-1}\left(\boldsymbol g_2\right)^{-1}\left(\boldsymbol\psi_2\right)^{-1}\left(r_1{\rm sig}^{p}(\boldsymbol\varepsilon_2) + r_2{\rm sig}^{q}(\boldsymbol\varepsilon_2) + r_3\boldsymbol\varepsilon_2 + \boldsymbol\psi_2\boldsymbol g_2\boldsymbol G - \boldsymbol\psi_2\boldsymbol f_2 + \frac{\boldsymbol\varepsilon_2}{\|\boldsymbol\varepsilon_2\|^2}\boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 + \frac{\hat\theta\boldsymbol\varepsilon_2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z)\right) \end{equation} | (4.23) |
where r_1 > 0 , r_2 > 0 , r_3 > \frac{R+1}{2} and R = \|\boldsymbol\psi_2\|^2\|\boldsymbol g_2\|^2 . The adaptive law for the learning parameter \theta is developed as
\begin{equation} \dot{\hat\theta} = -w_1\hat\theta - w_2\hat\theta^{q} + \frac{\lambda\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z) \end{equation} | (4.24) |
where w_1 > 0 , w_2 > 0 , \lambda > 0 and h > 0 .
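The scalar adaptive law (4.24) is an ordinary differential equation in \hat\theta ; in a digital implementation it would typically be propagated with a simple integration step, as in the hedged Python sketch below (the explicit-Euler discretization, the step size and the placeholder values of \boldsymbol\varepsilon_2 and \boldsymbol S are illustrative assumptions, not prescribed by the paper).

```python
import numpy as np

def theta_hat_step(theta_hat, eps2, S, w1, w2, q, lam, h, dt):
    """One explicit-Euler step of the adaptive law (4.24) for the single learning parameter."""
    dtheta = (-w1 * theta_hat - w2 * theta_hat ** q
              + lam * float(eps2 @ eps2) / (2.0 * h ** 2) * float(S @ S))
    return theta_hat + dt * dtheta

# Paper-style gains (w1 = 2, w2 = 1, lambda = 10, h = 1); eps2 and S are placeholders
print(theta_hat_step(0.1, np.array([0.2, -0.1, 0.05]), np.ones(9) / 9.0,
                     w1=2.0, w2=1.0, q=0.8, lam=10.0, h=1.0, dt=0.01))
```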
Substituting (4.23) into (4.20), we have
\begin{equation} \dot V_2 \le \boldsymbol\varepsilon_2^T\boldsymbol\psi_2\boldsymbol g_2\boldsymbol d_2 + \boldsymbol\varepsilon_2^T\boldsymbol D - r_1\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{p+1} - r_2\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{q+1} - r_3\|\boldsymbol\varepsilon_2\|^2 - \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - \frac{\hat\theta\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z) \end{equation} | (4.25) |
With the help of Young's inequality and the property that 0 \le \boldsymbol S^T\boldsymbol S \le 1 , we have
\begin{equation} \boldsymbol\varepsilon_2^T\boldsymbol\psi_2\boldsymbol g_2\boldsymbol d_2 \le \frac{\|\boldsymbol\varepsilon_2\|^2\|\boldsymbol\psi_2\|^2\|\boldsymbol g_2\|^2}{2} + \frac{\|\boldsymbol d_2\|^2}{2} \le \frac{R\|\boldsymbol\varepsilon_2\|^2}{2} + \frac{\bar d^2}{2} \end{equation} | (4.26) |
\begin{equation} \begin{aligned} \boldsymbol\varepsilon_2^T\boldsymbol D & = \boldsymbol\varepsilon_2^T\boldsymbol W^T\boldsymbol S(\boldsymbol Z) + \boldsymbol\varepsilon_2^T\boldsymbol\zeta \le \|\boldsymbol\varepsilon_2\|\|\boldsymbol W\|\|\boldsymbol S(\boldsymbol Z)\| + \sum\limits_{i = 1}^{3}\varepsilon_{2i}\zeta_i \\ & \le \frac{\theta\|\boldsymbol\varepsilon_2\|^2\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z)}{2h^2} + \frac{h^2}{2} + \frac{\|\boldsymbol\varepsilon_2\|^2}{2} + \frac{3\zeta_m^2}{2} \le \frac{\theta\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z) + \frac{h^2}{2} + \frac{\|\boldsymbol\varepsilon_2\|^2}{2} + \frac{3\zeta_m^2}{2} \end{aligned} \end{equation} | (4.27) |
Theorem 1. Consider the spacecraft system (2.13) with the controller (4.23) and the adaptive law (4.24); then the practical fixed-time boundedness of all of the closed-loop signals is ensured. Besides, for any constants \nu and T , if the PPF (3.2) parameters are respectively set as \rho_{1i\infty} = \frac{2s_{1i}}{\pi}\arctan\left(\nu\right) and T_c = T , the tracking error will converge into the predefined region |q_{evi}| \le \nu within the predefined time T , irrespective of the initial conditions.
Proof. Choose the third Lyapunov function for the whole system:
\begin{equation} V_3 = V_1 + V_2 + \frac{1}{2\lambda}\tilde\theta^2 \end{equation} | (4.28) |
where \tilde\theta = \theta - \hat\theta .
Differentiating V3 yields
\begin{equation} \dot V_3 = \dot V_1 + \dot V_2 - \frac{1}{\lambda}\tilde\theta\dot{\hat\theta} \end{equation} | (4.29) |
Together with (4.11), (4.12), (4.24), (4.25), (4.26) and (4.27), one has
\begin{equation} \begin{aligned} \dot V_3 \le\; & \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} + 6k_2\left(\mu^2\right)^{\frac{q+1}{2}} + \frac{R\|\boldsymbol\varepsilon_2\|^2}{2} + \frac{\bar d^2}{2} + \frac{\theta\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z) + \frac{h^2}{2} + \frac{\|\boldsymbol\varepsilon_2\|^2}{2} + \frac{3\zeta_m^2}{2} \\ & - r_1\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{p+1} - r_2\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{q+1} - r_3\|\boldsymbol\varepsilon_2\|^2 - \boldsymbol\varepsilon_1^T\boldsymbol\psi_1\boldsymbol g_1\boldsymbol z_2 - \frac{\hat\theta\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z) - \frac{\tilde\theta}{\lambda}\left(-w_1\hat\theta - w_2\hat\theta^{q} + \frac{\lambda\|\boldsymbol\varepsilon_2\|^2}{2h^2}\boldsymbol S^T(\boldsymbol Z)\boldsymbol S(\boldsymbol Z)\right) \\ \le\; & -k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} - r_1\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{p+1} - r_2\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{q+1} + \frac{w_1}{\lambda}\tilde\theta\hat\theta + \frac{w_2}{\lambda}\tilde\theta\hat\theta^{q} + 6k_2\left(\mu^2\right)^{\frac{q+1}{2}} + \frac{\bar d^2}{2} + \frac{h^2}{2} + \frac{3\zeta_m^2}{2} \end{aligned} \end{equation} | (4.30) |
With the help of Young's inequality, the following inequality is true.
\begin{equation} \frac{w_1}{\lambda}\tilde\theta\hat\theta = \frac{w_1}{\lambda}\tilde\theta\left(\theta - \tilde\theta\right) = -\frac{w_1}{\lambda}\tilde\theta^2 + \frac{w_1}{\lambda}\tilde\theta\theta \le -\frac{w_1}{2\lambda}\tilde\theta^2 + \frac{w_1}{2\lambda}\theta^2 \end{equation} | (4.31) |
By invoking (4.31), (4.30) can be rewritten as
\begin{equation} \dot V_3 \le -k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} - r_1\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{p+1} - r_2\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{q+1} - \frac{w_1}{2\lambda}\tilde\theta^2 + \frac{w_1}{2\lambda}\theta^2 + \frac{w_2}{\lambda}\tilde\theta\hat\theta^{q} + \Delta_1 \end{equation} | (4.32) |
where \Delta_1 = 6k_2\left(\mu^2\right)^{\frac{q+1}{2}} + \frac{\bar d^2}{2} + \frac{h^2}{2} + \frac{3\zeta_m^2}{2} .
Applying Lemma 2 and selecting x = \frac{w_1}{2\lambda}\tilde\theta^2 , y = 1 , m = \frac{1+p}{2} , n = \frac{1-p}{2} and c = \frac{2}{p+1} yields
\begin{equation} \left(\frac{w_1}{2\lambda}\tilde\theta^2\right)^{\frac{p+1}{2}} \le \frac{w_1}{2\lambda}\tilde\theta^2 + \frac{1-p}{2}\left(\frac{2}{p+1}\right)^{-\frac{1+p}{1-p}} \end{equation} | (4.33) |
In view of Lemma 3, one has
\begin{equation} \frac{w_2}{\lambda}\tilde\theta\hat\theta^{q} = \frac{w_2}{\lambda}\tilde\theta\left(\theta - \tilde\theta\right)^{q} \le \frac{w_2 q}{\lambda(1+q)}\left(\theta^{q+1} - \tilde\theta^{q+1}\right) \end{equation} | (4.34) |
Hence, substituting (4.33) and (4.34) into (4.32), we can obtain
\begin{equation} \begin{aligned} \dot V_3 \le\; & -k_1\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{p+1} - k_2\sum\limits_{i = 1}^{3}|\varepsilon_{1i}|^{q+1} - r_1\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{p+1} - r_2\sum\limits_{i = 1}^{3}|\varepsilon_{2i}|^{q+1} - \left(\frac{w_1}{2\lambda}\tilde\theta^2\right)^{\frac{p+1}{2}} - \frac{w_2 q}{\lambda(1+q)}\tilde\theta^{q+1} \\ & + \frac{w_1}{2\lambda}\theta^2 + \frac{1-p}{2}\left(\frac{2}{p+1}\right)^{-\frac{1+p}{1-p}} + \frac{w_2 q}{\lambda(1+q)}\theta^{q+1} + \Delta_1 \\ \le\; & -a_1\left(\frac{\boldsymbol\varepsilon_1^T\boldsymbol\varepsilon_1}{2}\right)^{\frac{p+1}{2}} - b_1\left(\frac{\boldsymbol\varepsilon_1^T\boldsymbol\varepsilon_1}{2}\right)^{\frac{q+1}{2}} - a_2\left(\frac{\boldsymbol\varepsilon_2^T\boldsymbol\varepsilon_2}{2}\right)^{\frac{p+1}{2}} - b_2\left(\frac{\boldsymbol\varepsilon_2^T\boldsymbol\varepsilon_2}{2}\right)^{\frac{q+1}{2}} - a_3\left(\frac{\tilde\theta^2}{2\lambda}\right)^{\frac{p+1}{2}} - b_3\left(\frac{\tilde\theta^2}{2\lambda}\right)^{\frac{q+1}{2}} + \Delta \\ \le\; & -\gamma_1 V_3^{\frac{p+1}{2}} - \gamma_2 V_3^{\frac{q+1}{2}} + \Delta \end{aligned} \end{equation} | (4.35) |
where a_1 = k_1 2^{\frac{p+1}{2}} , b_1 = k_2 2^{\frac{q+1}{2}}3^{\frac{1-q}{2}} , a_2 = r_1 2^{\frac{p+1}{2}} , b_2 = r_2 2^{\frac{q+1}{2}}3^{\frac{1-q}{2}} , a_3 = w_1^{\frac{p+1}{2}} , b_3 = \frac{2^{\frac{q+1}{2}}w_2 q\lambda^{\frac{q+1}{2}}}{\lambda(1+q)} , \gamma_1 = \min\{a_1, a_2, a_3\} , \gamma_2 = \min\{3^{\frac{1-q}{2}}b_1, 3^{\frac{1-q}{2}}b_2, 3^{\frac{1-q}{2}}b_3\} and \Delta = \frac{w_1}{2\lambda}\theta^2 + \frac{1-p}{2}\left(\frac{2}{p+1}\right)^{-\frac{1+p}{1-p}} + \frac{w_2 q}{\lambda(1+q)}\theta^{q+1} + \Delta_1 .
(1) In light of Lemma 1, V_3 will converge to the region \Omega_v = \min\left\{\left(\frac{\Delta}{\gamma_1(1-\kappa)}\right)^{\frac{2}{p+1}}, \left(\frac{\Delta}{\gamma_2(1-\kappa)}\right)^{\frac{2}{q+1}}\right\} within the fixed time T_f . The settling time is bounded by T_f \le T_{\max} = \frac{2}{\gamma_1\kappa(p-1)} + \frac{2}{\gamma_2\kappa(1-q)} .
Apparently, \varepsilon_{1i} and \varepsilon_{2i} will converge to the following regions, respectively:
\begin{equation} \Omega_{\varepsilon_{1i}} = \min\left\{\sqrt{2\left(\frac{\Delta}{\gamma_1(1-\kappa)}\right)^{\frac{2}{p+1}}}, \sqrt{2\left(\frac{\Delta}{\gamma_2(1-\kappa)}\right)^{\frac{2}{q+1}}}\right\} \end{equation} | (4.36) |
\begin{equation} \Omega_{\varepsilon_{2i}} = \min\left\{\sqrt{2\left(\frac{\Delta}{\gamma_1(1-\kappa)}\right)^{\frac{2}{p+1}}}, \sqrt{2\left(\frac{\Delta}{\gamma_2(1-\kappa)}\right)^{\frac{2}{q+1}}}\right\} \end{equation} | (4.37) |
Based on (4.4) and (4.15), we can further obtain the residual sets \Omega_{\eta_{1i}} and \Omega_{\eta_{2i}} to which \eta_{1i} and \eta_{2i} will respectively converge within T_f :
\begin{equation} \Omega_{\eta_{1i}} = \left\{\eta_{1i} \mid |\eta_{1i}| \le \rho_{1i}(T_f)\frac{2}{\pi}\arctan\left(\Omega_{\varepsilon_{1i}}\right)\right\} \end{equation} | (4.38) |
\begin{equation} \Omega_{\eta_{2i}} = \left\{\eta_{2i} \mid |\eta_{2i}| \le \rho_{2i}(T_f)\frac{2}{\pi}\arctan\left(\Omega_{\varepsilon_{2i}}\right)\right\} \end{equation} | (4.39) |
(2) In view of (3.4) and the property of \rho_{1i}(t) , the inequality -\rho_{1i}(\infty) < \eta_{1i} < \rho_{1i}(\infty) is satisfied when t \ge T . By designing \rho_{1i}(\infty) = \rho_{1i\infty} = \frac{2s_{1i}}{\pi}\arctan\left(\nu\right) and T_c = T , we have
\begin{equation} |q_{evi}| \le \nu \end{equation} | (4.40) |
Therefore, the attitude error converges to the prescribed region \Omega = \{q_{evi} \mid |q_{evi}| \le \nu\} within the predefined time T . A flowchart of the process for generating the proposed control action is shown in Figure 1.
Remark 6. The tracking accuracy and convergence time can be explicitly and arbitrarily set in advance, irrespective of the initial conditions, by tuning \nu and T , respectively. A smaller \nu contributes to improved precision, and a smaller T to a shorter convergence period. However, it should be noted that the setting of \nu and T is ultimately a trade-off between ambitious performance goals and what the actuators can practically deliver.
Remark 7. The control parameters p, q, r1, r2, k1 and k2 can be selected by trial-and-error methods to ensure that all other closed-loop signals are fixed-time bounded. The setting of these parameters does not necessarily require taking the values of ν and T into consideration.
The nominal component of the inertia matrix is defined as
\boldsymbol J_0 = \left[ {\begin{array}{*{20}{c}} 20 & 1.2 & 0.9 \\ 1.2 & 17 & 1.4 \\ 0.9 & 1.4 & 15 \end{array}} \right] \rm{kg\cdot m^2}.
The uncertain part of the inertia matrix is
\Delta \boldsymbol J = \left[ {\begin{array}{*{20}{c}} 4.2 & 0.9 & 0.6 \\ 0.9 & -7 & 2.5 \\ 0.6 & 2.5 & 5.89 \end{array}} \right] \rm{kg\cdot m^2}.
The external disturbance is set to be
\boldsymbol d = \left[ {\begin{array}{*{20}{c}} {{\rm{ - 4 + 4cos(0}}{\rm{.2}}t{\rm{) - cos(0}}{\rm{.4}}t{\rm{)}}}\\ {{\rm{3 + 3sin(0}}{\rm{.2}}t{\rm{) - 2cos(0}}{\rm{.4}}t{\rm{)}}}\\ {{\rm{ - 3 + 4sin(0}}{\rm{.2t) - 3cos(0}}{\rm{.4}}t{\rm{)}}} \end{array}} \right]\times 10^{-2} \rm {N\cdot m}. |
The actuator misalignment takes the form of
{e_1} = \left\{{\begin{array}{*{20}{l}} 1\\ 0.6 \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }}\; t \le 2}\\ {{\rm{ if }}\; t > 2} \end{array}{\rm{ }} {e_2} = \left\{ {\begin{array}{*{20}{l}} 1\\ {0.4} \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }}\; t \le 4}\\ {{\rm{ if }}\; t > 4} \end{array}{\rm{ }} \\ {e_3} = \left\{ {\begin{array}{*{20}{l}} 1\\ {0.5} \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }}\; t \le 5}\\ {{\rm{ if }}\; t > 5} \end{array} |
{\sigma_1} = \left\{ {\begin{array}{*{20}{l}} 0\\ {-0.2} \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }} \; t \le 3}\\ {{\rm{ if }} \; t > 3} \end{array}{\rm{ }} {\sigma_2} = \left\{ {\begin{array}{*{20}{l}} 0\\ { 0.1} \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }} \; t \le 4}\\ {{\rm{ if }} \; t > 4} \end{array}{\rm{ }} {\sigma_3} = \left\{ {\begin{array}{*{20}{l}} 0\\ { - 0.1} \end{array}} \right.\begin{array}{*{20}{l}} {{\rm{ if }} \;t \le 6}\\ {{\rm{ if }} \; t > 6} \end{array} |
The desired attitude is {\boldsymbol q}_{d} = \left[1, 0, 0, 0\right]^{T} . We consider two groups of different initial values to perform the simulation. Case 1: {\boldsymbol q}\left(0 \right) = \left[{\rm{ 0.6698}}, {\rm{-0.5158}}, {\rm{0.4716}}, {\rm{0.2508}} \right]^T ; Case 2: {\boldsymbol q}\left(0 \right) = \left[{\rm{0.1737}}, {\rm{-0.2632}}, {\rm{0.7896}}, {\rm{-0.5264}} \right]^T . The maximum control torque is considered to be u_{maxi} = 7.5 N \cdot m.
For the virtual control law (4.7) and the actual control law (4.23), the parameters are selected as k_{1} = 1 , k_{2} = 2 , p = 1.2 , q = 0.8 , r_{1} = 10 , r_{2} = 5 and \mu = 0.01 . The parameters of the update law (4.24) are chosen as w_{1} = 2 , w_{2} = 1 , \lambda = 10 and h = 1 . The shifting function parameters are given as s_{1i} = s_{2i} = 0.4 . The initial PPFs are set as \rho_{1i0} = \rho_{2i0} = 0.4 . It is noteworthy that the traditional initial-condition constraint is violated here, since the initial errors q_{e2}(0) and q_{e3}(0) in both Case 1 and Case 2 are larger than \rho_{1i0} .
In this section, Cases 1 and 2 are considered to demonstrate the efficacy of our proposed approach when it comes to handling attitude tracking problems with a predefined convergence time independent of the original states. The predefined-time PPF parameters are given as \nu = 0.01 , T = 10 , \rho_{1i\infty} = \frac{2s_{1i}}{\pi}\arctan\left(\nu\right) = 0.0025 , \rho_{2i\infty} = 0.1 and a_{1i} = a_{2i} = 1.2 .
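For readers who want to reproduce the setup, the hedged Python sketch below collects the nominal inertia, the disturbance profile and the main gains listed above into one place; it is only a configuration skeleton (variable names and the dictionary layout are illustrative), not the authors' simulation code.

```python
import numpy as np

# Nominal inertia matrix (kg*m^2) and the external disturbance torque (N*m) from the numerical example
J0 = np.array([[20.0, 1.2, 0.9],
               [ 1.2, 17.0, 1.4],
               [ 0.9, 1.4, 15.0]])

def disturbance(t):
    """External disturbance d(t) as specified for the simulations."""
    return 1e-2 * np.array([-4.0 + 4.0 * np.cos(0.2 * t) - np.cos(0.4 * t),
                             3.0 + 3.0 * np.sin(0.2 * t) - 2.0 * np.cos(0.4 * t),
                            -3.0 + 4.0 * np.sin(0.2 * t) - 3.0 * np.cos(0.4 * t)])

params = dict(k1=1.0, k2=2.0, p=1.2, q=0.8, r1=10.0, r2=5.0, mu=0.01,
              w1=2.0, w2=1.0, lam=10.0, h=1.0,
              rho0=0.4, Tc=10.0, a=1.2, nu=0.01, u_max=7.5)
print(disturbance(0.0), params["nu"])
```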
The simulation results are shown in Figures 2–7. Figures 2 and 3 show that the proposed controller performs fairly well under different initial conditions, and that the actual settling time is 7.5 s, which is shorter than the predefined one. With the implementation of the shifting function, the proposed controller is able to maintain attitude errors within prescribed envelopes despite the tracking errors exceeding the PPFs at the beginning. As shown in Figure 4, different control torques are required under different initial conditions to provide the desired performance. It can be seen in Figures 5 and 6 that the transformed errors \varepsilon_{1i} and \varepsilon_{2i} are fixed-time bounded. The boundedness of the adaptive parameter is shown in Figure 7.
To further illustrate that the attitude maneuvering performance of spacecraft systems can be prescribed with our proposed method in terms of convergence time, we present the results of the simulation with two different convergence times T = 10 and T = 15 . Case 1 is considered for the initial attitude value. Other parameters of the PPFs and the proposed controller remain unchanged from Simulation one.
The corresponding results are shown in Figures 8–10. It is observed in Figure 8 that the proposed controller will render attitude error into the predefined region \left| q_{evi} \right| \le 0.01 within T . Figures 8 and 9 also show that the convergence time of attitude errors with our proposed methods can be directly and arbitrarily set by selecting different values of T . In general, a smaller T indicates a shorter stabilization period but a greater control burden as shown in Figure 10.
To demonstrate that the tracking performance of spacecraft systems can be prescribed with our proposed controller in terms of control accuracy, we present the simulation with \nu = 0.01 and \nu = 0.001 and the same prescribed settling time T = 5 . Hence, the \rho_{1i\infty} values are set as \rho_{1i\infty} = 0.0025 and \rho_{1i\infty} = 2.5 \times 10^{-4} , respectively, while the other adjustable parameters remain the same as those in Simulation one. We consider Case 2 as the initial value of the quaternion.
The results are depicted in Figures 11 and 12. It is shown in Figure 11 that the attitude error is stabilized into the prescribed region \left| q_{evi } \right| \le \nu within T no matter whether the initial values of the quaternion violate the prescribed constraints. Generally, a smaller \nu contributes to improved precision in attitude maneuvering at the expense of a heavier burden on the controller, as shown in Figure 12.
To illustrate the advantage of our proposed controller, the fault-tolerant fast fixed-time convergent attitude control (FTFFTCAC) scheme proposed in [38] is considered for the comparative study. The preset convergence time and prescribed accuracy are respectively given as T = 10 and \nu = 0.01 . Other control parameters remain unchanged. We select Case 2 for the initial tracking errors.
For the modified prescribed performance function (MPPF) developed in [38] and given below, the parameters are chosen as k = 0.4 , T_{m} = 10 , \rho_{0} = 1 and \rho_{\infty} = 0.01 . The other control parameters are chosen as in [38].
\begin{equation} \rho _{i}(t) = \left\{ \begin{aligned} &\left(\rho_{0}-\rho_{\infty}\left(1+t/T_{m}\right)\right)\exp\left(\frac{-k t}{T_{m}-t}\right)+\rho_{\infty} &, t < T_{m}\\ &\rho_{\infty} &, t \ge T_{m} \end{aligned} \right. \end{equation} | (5.1) |
It can be seen in Figures 13 and 14 that our proposed control scheme exhibits better tracking performance than the FTFFTCAC scheme, with faster convergence and higher accuracy. Figure 15 shows that the control consumption of the designed controller is significantly less than that of FTFFTCAC, and that the control action is smoother.
To demonstrate the robustness of our proposed control scheme, an additional disturbance is imposed on the spacecraft during the period of 13–18 s with the term {\boldsymbol d}_{sud} = \left[{\begin{array}{*{20}{c}} 2+0.5\sin(0.2t)\\ 2+0.5\sin(0.2t)\\ 2+0.5\sin(0.2t) \end{array}} \right] \rm{N\cdot m} .
Figure 16 shows that, in the presence of an unexpected disturbance, the proposed controller still guarantees the prescribed performance of the tracking errors in terms of convergence time and steady-state precision. The shifted errors are always kept within the constraints, which verifies the robustness of our proposed controller. Figure 17 shows that the control torque is bounded and free of chattering when the disturbance changes suddenly. As shown in Figure 18, the attitude error q_{e1} reaches the guaranteed performance boundary at around t = 14 s, resulting in a loss of efficacy for the FTFFTCAC controller.
In this article, a novel adaptive predefined-time prescribed performance controller is presented for spacecraft systems. By employing a predefined-time PPF, we guarantee that the attitude errors will satisfy the prescribed tracking accuracy within a predefined time. By introducing a novel shifting function, we eliminate the constraints on initial errors, enabling the proposed method to be implemented even if the attitude errors violate the prescribed boundaries initially. RBFNN and MLP techniques have been introduced to approximate the lumped uncertain dynamics, and the adaptive law has been designed to ensure the fixed-time convergence of the learning parameter. Our proposed method has the notable merit of allowing the settling time and the tracking precision to be directly prespecified by setting two adjustable parameters. The proposed control scheme exhibits excellent performance against input saturation, actuator misalignment and unexpected disturbances.
This research was supported by the National Natural Science Foundation (NNSF) of China under Grants 61333008, 61603320, 61733017 and 61673327 and Xiamen Key Laboratory Of Big Data Intelligent Analysis and Decision.
The authors declare that there is no conflict of interest.
To achieve \left | e \right| \le v , the PPF parameter should be designed as \rho_{\infty } = v according to previous PPC schemes [37,38,39].
The PPF defined in (3.2) and in [37,38,39] can be rewritten in a general form as follows:
\begin{equation} \rho(t) = r\rho_{0}+(1-r)v \end{equation} | (A1) |
where 0 \le r \le 1 refers to the monotonically decreasing component of the PPFs. For a given time, r is a constant.
For our proposed PPC strategy, according to Theorem 1, we need to design \rho_{\infty} = \frac{{2s}}{\pi}\arctan\left(\nu\right) to guarantee that the tracking error converges to the region \left|e\right| \le v . Therefore, the PPF defined in (3.2) can be rewritten as
\begin{equation} \rho(t) = r\rho_{0}+(1-r)k\arctan(v) \end{equation} | (A2) |
where 0 < k = \frac{{2s}}{\pi} < 1 .
The traditional formulation of normalized error in [37,38,39] can be written as
\begin{equation} \xi_{1} = \frac{e}{\rho} = \frac{e}{r\rho_{0}+(1-r)v} \end{equation} | (A3) |
In our proposed scheme, the new normalized error is defined as
\begin{equation} \xi_{2} = \frac{\eta}{\rho} = \frac{k\arctan(e)}{r\rho_{0}+(1-r)k\arctan(v)} \end{equation} | (A4) |
Letting f(e) = \left|\xi_{2}\right|-\left|\xi_{1} \right| = \xi_{2}\left(\left|e\right|\right)-\xi_{1}\left(\left|e\right|\right) yields
\begin{equation} \begin{aligned} f & = \frac{k\arctan\left(\left|e\right|\right)}{r\rho_{0}+(1-r)k\arctan(v)}-\frac{\left|e\right|}{r\rho_{0}+(1-r)v} \\ & = \frac{k\arctan\left(\left|e\right|\right)\left(r\rho_{0}+(1-r)v\right)-\left|e\right|\left(r\rho_{0}+(1-r)k\arctan(v)\right)}{\left(r\rho_{0}+(1-r)k\arctan(v)\right)\left(r\rho_{0}+(1-r)v\right)} \end{aligned} \end{equation} | (A5) |
Define g(\left|e\right|) = k\arctan(\left|e\right|)\left(r\rho_{0}+(1-r)v\right)-\left|e\right|\left(r\rho_{0}+(1-r)k\arctan(v)\right) . Differentiating g with respect to e , we can obtain
\begin{equation} \begin{aligned} \dot g & = \frac{{\rm sgn}(e)k\left(r\rho_{0}+(1-r)v\right)}{\sqrt{1+e^{2}}}-{\rm sgn}(e)\left(r\rho_{0}+(1-r)k\arctan(v)\right)\\ & = \frac{{\rm sgn}(e)k\left(r\rho_{0}+(1-r)v\right)-{\rm sgn}(e)\left(r\rho_{0}+(1-r)k\arctan(v)\right)\sqrt{1+e^{2}}}{\sqrt{1+e^{2}}} \end{aligned} \end{equation} | (A6) |
When v \to 0 , we have \arctan(v) \approx v . Thus, \dot g can be rewritten as
\begin{equation} \dot g = \frac{{\rm sgn}(e)r\rho_{0}\left(k-\sqrt{1+e^{2}}\right)+{\rm sgn}(e)k(1-r)v\left(1-\sqrt{1+e^{2}}\right)}{\sqrt{1+e^{2}}} \end{equation} | (A7) |
Given that 0 < k < 1 , when e < 0 , it is obvious that \dot g > 0 . Similarly, when e > 0 , we can obtain \dot g < 0 . Therefore, we have that f(e) \le f(0) = 0 for any e \in \mathfrak{R} . From this perspective, the absolute value of normalized error is reduced by our method. In addition, due to the property that the transformed function is monotonically increasing, a decrease in the transformed error and the control torque output can be achieved with the same error e .
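The claim f(e) \le 0 , i.e., that the proposed normalized error never exceeds the traditional one in magnitude, can also be spot-checked numerically; the Python sketch below does so on a small grid with k = \frac{2s}{\pi} for s = 0.4 (the grid and the value of r are arbitrary illustrative choices).

```python
import numpy as np

def xi1(e, r, rho0, v):
    """Traditional normalized error (A3)."""
    return e / (r * rho0 + (1.0 - r) * v)

def xi2(e, r, rho0, v, k):
    """Proposed normalized error (A4) built on the shifting function."""
    return k * np.arctan(e) / (r * rho0 + (1.0 - r) * k * np.arctan(v))

e = np.linspace(-2.0, 2.0, 9)
k = 2.0 * 0.4 / np.pi
f = np.abs(xi2(e, r=0.5, rho0=0.4, v=0.01, k=k)) - np.abs(xi1(e, r=0.5, rho0=0.4, v=0.01))
print(np.all(f <= 1e-12))        # True: |xi2| <= |xi1| on this grid
```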