
This paper mainly studied the stochastic stability and the design of state feedback controllers for nonlinear singular continuous semi-Markov jump systems under false data injection attacks. Based on the Lyapunov function and the implicit function theorem, a basic stochastic stability condition of the system was given to ensure that the nonlinear singular semi-Markov jump system under attack was regular, impulse-free, unique, and stochastically stable. On this basis, the stochastic admissible linear matrix inequality conditions of the system were obtained by using the singular value decomposition of the matrix and Schur's complement lemma. To design the state feedback controller, based on the upper and lower bounds of the time-varying transition probability of the semi-Markov jump system and the singular value decomposition method, the stochastic stable linear matrix inequality condition of the closed-loop system under the false data injection attack was established. Finally, the validity and feasibility of the results were verified by numerical examples.
Citation: Yang Song, Beiyan Yang, Jimin Wang. Stability analysis and security control of nonlinear singular semi-Markov jump systems[J]. Electronic Research Archive, 2025, 33(1): 1-25. doi: 10.3934/era.2025001
Let D⊂ℜn be a nonempty, closed and convex set and f:ℜn→ℜn be a given continuous and monotone mapping. The variational inequality problem, denoted by Ⅵ(D,f), is to find a vector x∗∈D such that
⟨x−x∗,f(x∗)⟩≥0,∀x∈D. | (1.1) |
The Ⅵ(D,f) has found many important applications in areas such as nonlinear complementarity problems (where D=ℜn+) [1], traffic equilibrium and economic problems [2,3]. For recent applications and numerical methods of the Ⅵ(D,f), we refer the reader to [4,5,6] and the references therein.
In this paper, we consider a special case of the general Ⅵ problem, where the set D is assumed to have the following form:
D={x∈ℜn|x∈X,Ax=b}, | (1.2) |
here A∈ℜm×n is a given matrix, b∈ℜm is a given vector, X⊂ℜn is a nonempty, closed and convex subset. The solution set of (1.1) and (1.2), denoted by Ω∗, is assumed to be nonempty. Note that the Ⅵ problem (1.1) and (1.2) is closely related to the convex optimization problem with linear equality constraints:
min θ(x) s.t. Ax=b, x∈X. | (1.3) |
To show this, we recall the first-order optimality conditions of problem (1.3). Let x∗ be a minimum point of the convex function θ(x) over D and ξ∗ be any vector in ∂θ(x∗), where ∂θ(⋅) denotes the subdifferential operator of θ(⋅). Then for any feasible direction d∈ℜn at x∗, we have (ξ∗)Td≥0. This means that x∗ solves the following variational inequality problem:
⟨x−x∗,ξ∗⟩≥0,∀x∈D. | (1.4) |
Thus, solving the Ⅵ problem (1.4) amounts to solving (1.3). This Ⅵ characterization is also used in e.g., [7,8].
The Ⅵ problem (1.1) and (1.2) or its equivalent form (1.3) is one of the fundamental problems in convex optimization. In particular, it includes linear programming, conic and semidefinite optimization as special cases. It finds many applications in compressed sensing, image processing and data mining, see, e.g., [9,10,11]. We refer to [12] for recent examples and discussions. To solve the Ⅵ problem (1.1) and (1.2), some proximal-like algorithms have been developed over the past years, see e.g., [13] for a review. One benchmark method for nonlinear problems is the augmented Lagrangian method (ALM) [14,15]. The ALM is applicable to solving Ⅵ(D,f) (1.1) and (1.2). More specifically, for a given uk=(xk,λk), ALM uses the following procedure to generate the new iterate uk+1=(xk+1,λk+1)∈X×ℜm:
Find xk+1∈X such that
⟨x−xk+1,f(xk+1)−ATλk+βAT(Axk+1−b)⟩≥0,∀x∈X, | (1.5a) |
then update λk+1 via
λk+1=λk−β(Axk+1−b), | (1.5b) |
where β>0 is a given penalty parameter for the violation of the linear constraints. To make ALM (1.5) more efficient and flexible, some alternative strategies can be used. For example, self-adaptive rules can be employed to adjust the parameter β based on certain strategies [16,17,18,19]. We can also apply correction techniques to the output point [20,21]. Let ˜uk denote the predictor generated by ALM. A simple and effective correction scheme is
uk+1=uk−αk(uk−˜uk), | (1.6) |
where 0<αk<2 is the step size parameter; see, e.g., [22,23] for details.
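As a concrete illustration of the scheme (1.5)–(1.6), the following sketch (our own illustrative code, not from the paper; the quadratic objective, variable names, and parameter values are assumptions made for the example) applies ALM with the relaxed correction to min (1/2)‖x−c‖² s.t. Ax=b, i.e., f(x)=x−c with X=ℜn, so that subproblem (1.5a) reduces to a linear system:

```python
import numpy as np

def alm_corrected(A, b, c, beta=1.0, alpha=1.5, iters=2000):
    """ALM (1.5) with correction (1.6) for f(x) = x - c on X = R^n.
    Subproblem (1.5a) reduces to (I + beta*A^T A) x = c + A^T lam + beta*A^T b."""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    K = np.eye(n) + beta * (A.T @ A)
    rhs_fix = beta * (A.T @ b)
    for _ in range(iters):
        x_t = np.linalg.solve(K, c + A.T @ lam + rhs_fix)   # predictor from (1.5a)
        lam_t = lam - beta * (A @ x_t - b)                  # predictor from (1.5b)
        x = x - alpha * (x - x_t)                           # correction (1.6), u = (x, lam)
        lam = lam - alpha * (lam - lam_t)
    return x, lam

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = A @ rng.standard_normal(5)
c = rng.standard_normal(5)
x, lam = alm_corrected(A, b, c)
# At a solution, Ax = b and x - c = A^T lam (the KKT conditions).
```

Because the subproblem is solved exactly here, the relaxed step α=1.5 simply over-relaxes an already convergent iteration; the point of the paper is precisely the harder case where (1.5a) has no such closed form.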
The main computational cost at each iteration of ALM is solving the subproblem (1.5a), which is still a variational inequality with the same structure as the original problem. In many cases, the subproblem (1.5a) is easy to solve only if A is an identity matrix: in this case, its solution corresponds to the proximal mapping of f, which usually has a closed-form solution or can be evaluated efficiently to high precision. However, in many applications in sparse optimization, we often encounter the case where A is not an identity matrix, and the resulting subproblem (1.5a) no longer has a closed-form solution. In this case, solving the subproblem (1.5a) can be computationally intensive because of the costly evaluation of (ATA+(1/β)f)−1(Aυ). Therefore, efficient numerical methods with an implementable iterative scheme are highly desired.
Recently, several techniques attempting to overcome this difficulty have been proposed. In the framework of the proximal point algorithm (PPA), there are two relevant approaches. The first one is regularization. By adding a customized regularization term to the saddle-point reformulation of (1.1) and (1.2), the primal subproblem becomes easy as it only involves a simple evaluation, see the customized PPA algorithms proposed in e.g., [24,25,26,27]. We refer the reader to e.g., [28,29] for the linearized regularization term in the separable case of problem (1.2). The second one employs a prediction-correction technique, which adds an asymmetric proximal term to make a prediction and then introduces a simple correction step to guarantee convergence, see e.g., [30,31,32]. In this paper, we propose a new prediction-correction method for the Ⅵ(D,f). At each iteration, the method first makes a simple prediction step to obtain a predictor, and then generates the new iterate via a minor correction of the predictor. The reduced subproblems are easy to solve when the resolvent mapping of f can be efficiently evaluated. As can be seen in Section 5, the proposed method converges faster, with fewer iterations needed to achieve the same accuracy in most numerical cases.
The rest of this paper is organized as follows. In Section 2, we review some preliminaries which are useful for further analysis. In Section 3, we present the implementable prediction-correction method for Ⅵ(D,f). In Section 4, we establish the global convergence of the proposed method. The computational experiments are presented in Section 5. Finally, we conclude in Section 6.
In this section, we reformulate the Ⅵ problem (1.1) and (1.2) in succinct form. Let Ω=X×ℜm. By attaching a Lagrange multiplier vector λ∈ℜm to the linear constraints Ax=b, the Ⅵ problem (1.1) and (1.2) can be converted to the following form:
⟨(x−x∗; λ−λ∗),(f(x∗)−ATλ∗; Ax∗−b)⟩≥0,∀(x,λ)∈Ω. | (2.1) |
By denoting
u:=(x; λ),F(u):=(f(x)−ATλ; Ax−b), | (2.2) |
we can rewrite (2.1) into the following more compact form
⟨u−u∗,F(u∗)⟩≥0,∀u∈Ω. | (2.3) |
Henceforth, we will denote the Ⅵ problem (2.2) and (2.3) by Ⅵ(Ω,F). Now, we make some basic assumptions and summarize some well known results of Ⅵ, which will be used in subsequent analysis.
(A1) X is a simple closed convex set.
A set is said to be simple if the projection onto it can be easily obtained. Here, the projection of a point a onto the closed convex set X, denoted by PX(a), is defined as the nearest point x∈X to a, i.e.,
PX(a)=argmin{‖x−a‖ |x∈X}. |
(A2) The mapping f is point-to-point, monotone and continuous.
A mapping F:ℜn→ℜn is said to be monotone on Ω if
⟨u−υ,F(u)−F(υ)⟩≥0,∀u,υ∈Ω. | (2.4) |
(A3) The solution set of (2.2) and (2.3), denoted by Ω∗, is nonempty.
Remark 1. The mapping F(⋅) defined in (2.2) is monotone with respect to Ω since
(F(u)−F(˜u))T(u−˜u)≥0, ∀u,˜u∈Ω. | (2.5) |
Proof.
(F(u)−F(˜u))T(u−˜u)=(f(x)−f(˜x)−AT(λ−˜λ); A(x−˜x))T(x−˜x; λ−˜λ)=(f(x)−f(˜x))T(x−˜x)≥0, | (2.6) |
where the last inequality follows from the monotone property of f.
Let G be a symmetric positive definite matrix. The G-norm of the vector u is denoted by ‖u‖G:=√⟨u,Gu⟩. In particular, when G=I, ‖u‖:=√⟨u,u⟩ is the Euclidean norm of u. For a matrix A, ‖A‖ denotes its norm ‖A‖:=max{‖Ax‖:‖x‖≤1}.
The following well-known properties of the projection operator will be used in the coming analysis. The proofs can be found in textbooks, e.g., [2,33].
Lemma 1. Let X⊂ℜn be a nonempty closed convex set and PX(⋅) be the projection operator onto X under the G-norm. Then
⟨y−PX(y),G(x−PX(y))⟩≤0,∀y∈ℜn,∀x∈X. | (2.7) |
‖PX(x)−PX(y)‖G≤‖x−y‖G,∀x,y∈ℜn. | (2.8) |
‖x−PX(y)‖2G≤‖x−y‖2G−‖y−PX(y)‖2G,∀y∈ℜn,∀x∈X. | (2.9) |
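To make these properties concrete, here is a small numerical check (our own illustrative sketch, with G=I and X a box, one of the "simple" sets mentioned above; the set and the sample points are assumptions of the example) of (2.7)–(2.9):

```python
import numpy as np

def proj_box(a, lo=-1.0, hi=1.0):
    """Euclidean projection onto the simple set X = [lo, hi]^n."""
    return np.clip(a, lo, hi)

rng = np.random.default_rng(1)
x = proj_box(rng.standard_normal(5))          # an arbitrary point of X
y, z = 3.0 * rng.standard_normal(5), 3.0 * rng.standard_normal(5)
p = proj_box(y)

# Each quantity below is <= 0 by the corresponding projection property.
check_27 = np.dot(y - p, x - p)                                                   # (2.7)
check_28 = np.linalg.norm(proj_box(y) - proj_box(z)) - np.linalg.norm(y - z)      # (2.8)
check_29 = np.linalg.norm(x - p)**2 - (np.linalg.norm(x - y)**2
                                       - np.linalg.norm(y - p)**2)                # (2.9)
```

All three checks hold for any choice of x∈X and y,z∈ℜn, not just these random draws; that is exactly the content of Lemma 1.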
For an arbitrary positive scalar β and u∈Ω, let e(u,β) denote the residual function associated with the mapping F, i.e.,
e(u,β)=u−PΩ[u−βF(u)]. | (2.10) |
Lemma 2. Solving VI(Ω,F) is equivalent to finding the zero point of the mapping
e(u,β):=(e1(u,β); e2(u,β))=(x−PX{x−β[f(x)−ATλ]}; β(Ax−b)). | (2.11) |
Proof. See [2], p. 267.
We now formally present the new prediction-correction method for the Ⅵ(Ω,F) (Algorithm 1). The method can be viewed as a relaxed generalization of [31].
Algorithm 1: A prediction-correction based method (PCM) for the Ⅵ(Ω,F). |
Step 0 (Initialization): Given a small tolerance ϵ>0, take γ∈(0,2) and u0∈ℜn+m; set k=0. Choose the parameters r>0, s>0 such that rs>(1/4)‖ATA‖. (3.1)
Step 1 (Prediction): Generate the predictor ˜xk by solving the projection equation ˜xk=PX[xk−(1/r)(f(˜xk)−ATλk)]. (3.2a) Then update ˜λk∈ℜm via ˜λk=λk−(1/s)(A˜xk−b). (3.2b)
Step 2 (Correction): Generate the new iterate uk+1=(xk+1,λk+1) via xk+1=xk−γ(xk−˜xk)−(γ/(2r))AT(λk−˜λk), (3.3a) and λk+1=λk+(γ/(2s))A(xk−˜xk)−γ(I−(1/(2rs))AAT)(λk−˜λk). (3.3b)
Step 3 (Convergence verification): If ‖uk+1−uk‖≤ϵ, stop; otherwise set k:=k+1 and go to Step 1.
Remark 2. Note that the regularized projection Eq (3.2a) amounts to solving the following VI problem
⟨x−˜xk,(1/r)(f(˜xk)−ATλk)+(˜xk−xk)⟩≥0,∀x∈X, | (3.4) |
which is to find ˜xk∈X such that
0∈˜xk+(1/r)f(˜xk)−((1/r)ATλk+xk). | (3.5) |
We can rewrite the above inclusion as
˜xk∈(I+(1/r)f)−1((1/r)ATλk+xk). | (3.6) |
Thus, the subproblem (3.2a) is equivalent to an evaluation of the resolvent operator (I+(1/r)f)−1 when X is the whole space ℜn [34]. Notice that ALM (1.5a) needs to evaluate the operator (ATA+(1/β)f)−1. Therefore, the resulting subproblem (3.2a) can be much easier to solve than (1.5a). On the other hand, the correction step consists only of elementary operations. Thus, the resulting method (3.2a)–(3.3b) is easily implementable.
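To illustrate, the following sketch (our own illustrative code, not from the paper) implements Algorithm 1 for the affine monotone mapping f(x)=Mx+q with X=ℜn, so that the prediction (3.2a) is exactly one resolvent evaluation, here a linear solve; the test data, sizes, and parameter values are assumptions of the example:

```python
import numpy as np

def pcm(Mmat, q, A, b, gamma=1.0, s=1.0, iters=20000, tol=1e-12):
    """Algorithm 1 (PCM) for f(x) = Mmat @ x + q on X = R^n."""
    m, n = A.shape
    r = 1.01 * np.linalg.norm(A.T @ A, 2) / (4.0 * s)   # step size condition (3.1)
    x, lam = np.zeros(n), np.zeros(m)
    K = np.eye(n) + Mmat / r                            # resolvent matrix of (I + f/r)
    for _ in range(iters):
        # prediction (3.2a)-(3.2b): x_t solves x_t = x - (f(x_t) - A^T lam)/r
        x_t = np.linalg.solve(K, x + (A.T @ lam - q) / r)
        lam_t = lam - (A @ x_t - b) / s
        dx, dlam = x - x_t, lam - lam_t
        if max(np.linalg.norm(dx), np.linalg.norm(dlam)) < tol:
            break
        # correction (3.3a)-(3.3b)
        x = x - gamma * dx - (gamma / (2 * r)) * (A.T @ dlam)
        lam = lam + (gamma / (2 * s)) * (A @ dx) - gamma * (dlam - (A @ (A.T @ dlam)) / (2 * r * s))
    return x, lam

rng = np.random.default_rng(0)
n, m = 6, 3
P = rng.standard_normal((n, n))
Mmat = P.T @ P + np.eye(n)          # positive definite, so f is (strongly) monotone
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)
x, lam = pcm(Mmat, q, A, b)
# At a solution: Mmat @ x + q = A^T lam and A @ x = b.
```

For this affine f the resolvent is a fixed linear solve, so the per-iteration cost is one back-substitution plus a few matrix-vector products, in contrast to the (ATA+(1/β)f)−1 evaluation required by ALM.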
Remark 3. The parameters 1/r and 1/s in the prediction step can be viewed as step sizes for the projection step (3.2a) and the dual step, respectively. Under the step size condition rs>(1/4)‖ATA‖ of this algorithm, the parameters 1/r and 1/s can take larger values than under the condition rs>‖ATA‖ required by some other primal-dual algorithms, e.g., the linearized ALM and customized PPA algorithms. Larger step sizes are usually beneficial to the effectiveness and robustness of an algorithm. In Section 5, we will see empirically that our algorithm is significantly faster than some other primal-dual algorithms because it allows larger step sizes.
In the following, we focus our attention on solving Ⅵ(Ω,F). To make the notation in the proofs more succinct, we first define the matrices:
G=[rI, (1/2)AT; (1/2)A, sI], Q=[rI, AT; 0, sI]. | (4.1) |
Obviously, when rs>(1/4)‖ATA‖, G is a positive definite matrix. We now prove the global convergence of the sequence {uk}. To this end, we follow [35] to interpret the algorithm as a prediction-correction method and establish its convergence results. We first prove some lemmas. The first lemma quantifies the discrepancy between the point ˜uk and a solution point of Ⅵ(Ω,F).
Lemma 3. Let {uk} and {˜uk} be generated by PCM (3.2) and (3.3), and let the matrix Q be given in (4.1). Then we have
⟨u−˜uk,F(˜uk)−Q(uk−˜uk)⟩≥0,∀u∈Ω. | (4.2) |
Proof. Note that the sequence {˜uk} generated by (3.2) is actually solutions of the following VIs:
⟨x−˜xk,f(˜xk)−ATλk−r(xk−˜xk)⟩≥0,∀x∈X, | (4.3) |
and
⟨λ−˜λk,A˜xk−b−s(λk−˜λk)⟩≥0,∀λ∈ℜm. | (4.4) |
Combining (4.3) and (4.4) yields
⟨(x−˜xk; λ−˜λk),(f(˜xk)−AT˜λk−AT(λk−˜λk)−r(xk−˜xk); A˜xk−b−s(λk−˜λk))⟩≥0,∀u∈Ω. | (4.5) |
It can be rewritten as
⟨(x−˜xk; λ−˜λk), (f(˜xk)−AT˜λk; A˜xk−b)−[rI, AT; 0, sI](xk−˜xk; λk−˜λk)⟩≥0,∀u∈Ω. | (4.6) |
Using the notation of F(u) in (2.2) and Q in (4.1), the assertion (4.2) is proved.
The following lemma characterizes the correction step by a matrix form.
Lemma 4. Let {uk} and {˜uk} be generated by PCM (3.2) and (3.3). Then we have
uk−uk+1=γM(uk−˜uk), | (4.7) |
where
M=[I, (1/(2r))AT; −(1/(2s))A, I−(1/(2rs))AAT]. | (4.8) |
Proof. From the correction Step (3.3a) and (3.3b), we have
(xk+1; λk+1)=(xk; λk)−γ((xk−˜xk)+(1/(2r))AT(λk−˜λk); −(1/(2s))A(xk−˜xk)+(I−(1/(2rs))AAT)(λk−˜λk)), | (4.9) |
which can be written as
(xk−xk+1; λk−λk+1)=γ[I, (1/(2r))AT; −(1/(2s))A, I−(1/(2rs))AAT](xk−˜xk; λk−˜λk). | (4.10) |
By noting the matrix M (4.8), the proof is completed.
Using the matrices Q (4.1) and M (4.8), we define
H=QM−1, | (4.11) |
then we have Q=HM. The inequality (4.2) can be written as
γ⟨u−˜uk,F(˜uk)−HM(uk−˜uk)⟩≥0,∀u∈Ω. | (4.12) |
Substituting (4.7) into (4.12) and using the monotonicity of F (see (2.5)), we obtain
˜uk∈Ω,⟨u−˜uk,F(u)⟩≥⟨u−˜uk,H(uk−uk+1)⟩,∀u∈Ω. | (4.13) |
Now, we prove a simple fact for the matrix H in the following lemma.
Lemma 5. The matrix H defined in (4.11) is positive definite for any r>0, s>0 with rs>(1/4)‖ATA‖.
Proof. For the matrix Q defined by (4.1), we have
Q−1=[(1/r)I, −(1/(rs))AT; 0, (1/s)I].
Thus, it follows from (4.1) that
H−1=MQ−1=[I, (1/(2r))AT; −(1/(2s))A, I−(1/(2rs))AAT][(1/r)I, −(1/(rs))AT; 0, (1/s)I]=[(1/r)I, −(1/(2rs))AT; −(1/(2rs))A, (1/s)I]. | (4.14) |
For any r>0, s>0 satisfying rs>(1/4)‖ATA‖, H−1 is positive definite, and the positive definiteness of H follows directly.
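As a quick numerical sanity check (our own illustrative snippet with randomly generated data; the sizes and parameter values are assumptions of the example), one can assemble Q, M, and H for a random A and confirm both the formula (4.14) and the positive definiteness of H:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
s = 2.0
r = 1.01 * np.linalg.norm(A.T @ A, 2) / (4.0 * s)   # so that r*s > ||A^T A||/4
In, Im = np.eye(n), np.eye(m)

Q = np.block([[r * In, A.T], [np.zeros((m, n)), s * Im]])                 # Eq (4.1)
M = np.block([[In, A.T / (2 * r)], [-A / (2 * s), Im - A @ A.T / (2 * r * s)]])  # Eq (4.8)
H = Q @ np.linalg.inv(M)                                                  # H = Q M^{-1}, Eq (4.11)

# Closed form of H^{-1} from Eq (4.14); H @ Hinv should be the identity.
Hinv = np.block([[In / r, -A.T / (2 * r * s)], [-A / (2 * r * s), Im / s]])
ok_formula = np.allclose(H @ Hinv, np.eye(m + n))
min_eig = np.linalg.eigvalsh((H + H.T) / 2).min()   # > 0: H is positive definite
```

The same check fails (min_eig can become nonpositive) if r is shrunk so that rs falls below ‖ATA‖/4, which is consistent with the role of condition (3.1).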
We next show how to deal with the right-hand side of (4.13). The following lemma gives an equivalent expression in terms of ‖u−uk‖H and ‖u−uk+1‖H.
Lemma 6. Let {uk} and {˜uk} be generated by PCM (3.2) and (3.3). Then we have
⟨u−˜uk,H(uk−uk+1)⟩=(1/2)(‖u−uk+1‖2H−‖u−uk‖2H)+(1/2)‖uk−˜uk‖2G, ∀u∈Ω, | (4.15) |
where the matrix G=γ(QT+Q)−γ2MTHM.
Proof. Applying the identity
⟨a−b,H(c−d)⟩=(1/2)(‖a−d‖2H−‖a−c‖2H)+(1/2)(‖c−b‖2H−‖d−b‖2H), | (4.16) |
to the right term of (4.13) with a=u,b=˜uk,c=uk,d=uk+1, we obtain
⟨u−˜uk,H(uk−uk+1)⟩=(1/2)(‖u−uk+1‖2H−‖u−uk‖2H)+(1/2)(‖uk−˜uk‖2H−‖uk+1−˜uk‖2H). | (4.17) |
For the last term of (4.17), we have
‖uk−˜uk‖2H−‖uk+1−˜uk‖2H=‖uk−˜uk‖2H−‖(uk−˜uk)−(uk−uk+1)‖2H
=‖uk−˜uk‖2H−‖(uk−˜uk)−γM(uk−˜uk)‖2H (by (4.7))
=2γ⟨uk−˜uk,HM(uk−˜uk)⟩−γ2⟨uk−˜uk,MTHM(uk−˜uk)⟩
=⟨uk−˜uk,(γ(QT+Q)−γ2MTHM)(uk−˜uk)⟩, (by (4.11)) | (4.18) |
and the assertion is proved.
Now, we investigate the positive definiteness for the matrix G. Using (4.11), we have
G=γ(QT+Q)−γ2MTHM=γ(QT+Q)−γ2MTQ
=γ[2rI, AT; A, 2sI]−γ2[I, −(1/(2s))AT; (1/(2r))A, I−(1/(2rs))AAT][rI, AT; 0, sI]
=γ[2rI, AT; A, 2sI]−γ2[rI, (1/2)AT; (1/2)A, sI]
=(2γ−γ2)[rI, (1/2)AT; (1/2)A, sI]. | (4.19) |
Thus, when rs>(1/4)‖ATA‖ and γ∈(0,2), the matrix G is guaranteed to be positive definite, and we can easily obtain the contraction property of the algorithm. This is given by the following theorem.
Theorem 1. Suppose the condition
rs>(1/4)‖ATA‖ | (4.20) |
holds. Let the relaxation factor γ∈(0,2). Then, for any u∗=(x∗,λ∗)∈Ω∗, the sequence {uk} generated by PCM satisfies the following inequality:
‖uk+1−u∗‖2H≤‖uk−u∗‖2H−‖uk−˜uk‖2G. | (4.21) |
Proof.
Combining (4.13) and (4.15), we have
˜uk∈Ω,⟨u−˜uk,F(u)⟩≥(1/2)(‖u−uk+1‖2H−‖u−uk‖2H)+(1/2)‖uk−˜uk‖2G,∀u∈Ω. | (4.22) |
Note that u∗∈Ω. We get
‖uk−u∗‖2H−‖uk+1−u∗‖2H≥‖uk−˜uk‖2G+2⟨˜uk−u∗,F(u∗)⟩. |
Since u∗ is a solution of (2.3) and ˜uk∈Ω, we have
⟨˜uk−u∗,F(u∗)⟩≥0, | (4.23) |
and thus
‖uk−u∗‖2H−‖uk+1−u∗‖2H≥‖uk−˜uk‖2G. |
The assertion (4.21) follows directly.
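The explicit form (4.19) of G behind this contraction can also be checked numerically; the snippet below (our own illustrative code; the random instance, sizes, and γ=1.5 are assumptions of the example) verifies the identity and the positive definiteness of G:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n))
s, gamma = 2.0, 1.5
r = 1.01 * np.linalg.norm(A.T @ A, 2) / (4.0 * s)   # condition (4.20)
In, Im = np.eye(n), np.eye(m)

Q = np.block([[r * In, A.T], [np.zeros((m, n)), s * Im]])
M = np.block([[In, A.T / (2 * r)], [-A / (2 * s), Im - A @ A.T / (2 * r * s)]])
H = Q @ np.linalg.inv(M)

G_def = gamma * (Q.T + Q) - gamma**2 * (M.T @ H @ M)    # definition used in Lemma 6
G_explicit = (2 * gamma - gamma**2) * np.block([[r * In, A.T / 2], [A / 2, s * Im]])  # Eq (4.19)
ok_identity = np.allclose(G_def, G_explicit)
min_eig = np.linalg.eigvalsh((G_def + G_def.T) / 2).min()   # > 0 for gamma in (0, 2)
```

The factor (2γ−γ2) makes explicit why γ must lie in (0,2): at γ=0 or γ=2 the factor vanishes and the contraction in (4.21) degenerates.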
Based on the above results, we are now ready to prove the global convergence of the algorithm.
Theorem 2. Let {uk} be the sequence generated by PCM for the Ⅵ problem (2.2) and (2.3). Then, for any r>0, s>0 satisfying rs>(1/4)‖ATA‖ and γ∈(0,2), the sequence {uk} converges to a solution of Ⅵ(Ω,F).
Proof. We follow the standard analytic framework of contraction-type methods to prove the convergence of the proposed algorithm. It follows from Theorem 1 that {uk} is bounded and, by summing the inequality (4.21) over k, that
limk→∞‖uk−˜uk‖G=0. | (4.24) |
Consequently,
limk→∞‖xk−˜xk‖=0, | (4.25) |
and
limk→∞‖A˜xk−b‖=limk→∞‖s(λk−˜λk)‖=0. | (4.26) |
Since
˜xk=PX[˜xk−(1/r)(f(˜xk)−AT˜λk)+(xk−˜xk)+(1/r)AT(λk−˜λk)], | (4.27) |
and
˜λk=λk−(1/s)(A˜xk−b), | (4.28) |
it follows from (4.25) and (4.26) that
limk→∞ ˜xk−PX[˜xk−(1/r)(f(˜xk)−AT˜λk)]=0, | (4.29a) |
limk→∞ A˜xk−b=0. | (4.29b) |
Because {˜uk} is also bounded, it has at least one cluster point. Let u∞ be a cluster point of {˜uk} and let {˜ukj} be a subsequence converging to u∞. It follows from (4.29) that
limj→∞ ˜xkj−PX[˜xkj−(1/r)(f(˜xkj)−AT˜λkj)]=0, | (4.30a) |
limj→∞ A˜xkj−b=0. | (4.30b) |
Consequently, we have
x∞−PX[x∞−(1/r)(f(x∞)−ATλ∞)]=0, | (4.31a) |
Ax∞−b=0. | (4.31b) |
Using the continuity of F and of the projection operator PΩ(⋅), (4.31) shows that u∞ is a solution of Ⅵ(Ω,F). On the other hand, (4.24) and limj→∞˜ukj=u∞ imply limj→∞ukj=u∞. Since u∞∈Ω∗, the contraction property (4.21) holds with u∗=u∞, and thus, for any k>kj,
‖uk−u∞‖H≤‖ukj−u∞‖H. |
Thus, the sequence {uk} converges to u∞, which is a solution of Ⅵ(Ω,F).
In this section, we test the performance of PCM (3.2a)–(3.3b) on the basis pursuit (BP) and matrix completion problems. All simulations are performed on a Lenovo laptop with a 2.81 GHz Intel CPU and 16 GB of RAM, using MATLAB R2015b.
The BP problem arises in areas such as information theory, signal processing, statistics, and machine learning. It seeks to recover a sparse representation of a signal through a relatively small number of linear equations. The BP problem can be cast as the following equality-constrained l1 minimization problem
min‖x‖1s.t.Ax=b, | (5.1) |
where x∈ℜn, the data A∈ℜm×n with m<n and b∈ℜm are given. Here we assume that A has full row rank. By invoking the first-order optimality condition, BP is equivalent to the Ⅵ (1.1) and (1.2) with f(x)=∂‖x‖1. Applying PCM with γ=1 to this problem, we get the following iterative scheme:
˜xk∈Pℜn[xk−(1/r)(∂‖˜xk‖1−ATλk)], | (5.2a) |
˜λk=λk−(1/s)(A˜xk−b), | (5.2b) |
xk+1=˜xk−(1/(2r))AT(λk−˜λk), | (5.2c) |
λk+1=λk+(1/(2s))A(xk−˜xk)−(I−(1/(2rs))AAT)(λk−˜λk). | (5.2d) |
Note that the projection (5.2a) is equivalent to the l1 shrinkage operation:
˜xk:=Shrink(xk+(1/r)ATλk, 1/r),
where the l1 shrinkage operator, denoted by Shrink(M,ξ), is defined as
[Shrink(M,ξ)]i := Mi−ξ if Mi>ξ; Mi+ξ if Mi<−ξ; 0 if |Mi|≤ξ; for i=1,2,…,n.
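In code, the shrinkage operator is a one-liner; the following is a minimal sketch in Python rather than the authors' MATLAB, with illustrative input values:

```python
import numpy as np

def shrink(v, xi):
    """Componentwise l1 shrinkage (soft-thresholding) with threshold xi."""
    return np.sign(v) * np.maximum(np.abs(v) - xi, 0.0)
```

For example, shrink(np.array([3.0, -0.5, 1.2]), 1.0) gives approximately [2.0, 0.0, 0.2]: entries above the threshold are pulled toward zero by xi, and entries within [−xi, xi] are set exactly to zero.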
In our experiments, we compare our algorithm with the linearized ALM (L-ALM) [36] and the customized PPA (C-PPA) in [27] to verify its efficiency. Like PCM, L-ALM and C-PPA also rely on the Shrink operation, so they are equally easy to implement.
The data used in this experiment are similar to those employed in [37]. The basic setup of the problem is as follows. The data matrix A is an i.i.d. standard Gaussian matrix generated by the randn(m, n) function in MATLAB with m=n/2. The original sparse signal xtrue is sampled from i.i.d. standard Gaussian distributions with m/5 nonzero values, and the observation is then created as b=Axtrue. It is desirable to test problems that have a precisely known solution; in fact, when the original signal xtrue is sufficiently sparse, it is the unique minimizer of (5.1). The parameters used in the numerical experiments are similar to those in [19,38]: we set the relaxation factor γ=1, s∈(25,100), r=1.01‖ATA‖/(4s) for PCM, and s∈(25,100), r=1.01‖ATA‖/s for L-ALM and C-PPA. (In order to ensure the convergence of L-ALM and C-PPA, the parameters r,s should satisfy rs>‖ATA‖.) We set the criterion error as min{relL=‖xk−xk−1‖, relS=‖Axk−b‖} and declare successful recovery when this error is less than Tol=10−3. In all the tests, the initial iterate is (x0,λ0)=(0,1).
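This setup, together with the scheme (5.2), can be sketched as follows (our own small-scale Python reproduction rather than the paper's MATLAB code; the problem size, seed, and s=50 are illustrative assumptions):

```python
import numpy as np

def shrink(v, xi):
    return np.sign(v) * np.maximum(np.abs(v) - xi, 0.0)

rng = np.random.default_rng(0)
n = 100; m = n // 2
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, m // 5, replace=False)     # m/5 nonzero entries
x_true[support] = rng.standard_normal(m // 5)
b = A @ x_true

s = 50.0
r = 1.01 * np.linalg.norm(A.T @ A, 2) / (4.0 * s)  # step size condition (3.1)
x, lam = np.zeros(n), np.zeros(m)
err = np.inf
for k in range(2000):
    x_t = shrink(x + A.T @ lam / r, 1.0 / r)       # (5.2a) via shrinkage
    lam_t = lam - (A @ x_t - b) / s                # (5.2b)
    dx, dlam = x - x_t, lam - lam_t
    x_new = x_t - A.T @ dlam / (2.0 * r)           # (5.2c), gamma = 1
    lam = lam + A @ dx / (2.0 * s) - (dlam - A @ (A.T @ dlam) / (2.0 * r * s))  # (5.2d)
    err = min(np.linalg.norm(x_new - x), np.linalg.norm(A @ x_new - b))
    x = x_new
    if err < 1e-3:                                 # stopping criterion of the paper
        break
```

With these sizes the loop typically stops well before the iteration cap, consistent with the iteration counts reported in Tables 1–3.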
We first test the sensitivity of PCM to γ. We fix s=50 and choose different values of γ in the interval [0.6,1.8]. The numbers of iterations required are reported in Figure 1. The curves in Figure 1 indicate that γ∈(1,1.4) is preferable when implementing PCM in practice.
In order to investigate the stability and efficiency of our algorithm, we test eight groups of problems with different n; each model is generated 10 times and the average results are reported. The comparisons of these algorithms for small BP problems are presented in Tables 1–3.
PCM | L-ALM | C-PPA | ||||||||||
n | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) |
100 | 81 | 4.2e-04 | 1.9e-04 | 0.03 | 161 | 1.6e-04 | 7.5e-04 | 0.03 | 232 | 8.2e-05 | 8.6e-04 | 0.04 |
300 | 88 | 5.1e-04 | 7.3e-04 | 0.05 | 207 | 7.5e-05 | 6.3e-04 | 0.07 | 311 | 5.3e-05 | 9.4e-04 | 0.10 |
600 | 110 | 9.2e-05 | 7.9e-04 | 0.54 | 262 | 7.2e-05 | 8.9e-04 | 0.62 | 374 | 6.0e-05 | 9.4e-04 | 1.01 |
1000 | 141 | 4.4e-05 | 9.8e-04 | 1.81 | 291 | 3.3e-05 | 7.9e-04 | 2.21 | 411 | 3.2e-05 | 9.4e-04 | 2.86 |
1500 | 139 | 6.7e-05 | 6.6e-04 | 3.95 | 359 | 7.0e-05 | 9.4e-04 | 5.04 | 484 | 6.9e-05 | 9.0e-04 | 6.54 |
2000 | 151 | 6.3e-05 | 7.9e-04 | 6.55 | 406 | 1.2e-05 | 9.7e-04 | 9.24 | 536 | 1.3e-05 | 9.7e-04 | 12.08 |
2500 | 171 | 3.9e-05 | 8.5e-04 | 11.57 | 519 | 1.3e-05 | 8.9e-04 | 18.60 | 635 | 1.8e-05 | 9.8e-04 | 22.69 |
3000 | 199 | 2.9e-05 | 8.8e-04 | 18.82 | 585 | 1.7e-05 | 9.8e-04 | 29.46 | 709 | 1.7e-05 | 9.0e-04 | 35.61 |
PCM | L-ALM | C-PPA | ||||||||||
n | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) |
100 | 80 | 5.9e-04 | 7.0e-04 | 0.03 | 151 | 3.7e-04 | 9.6e-04 | 0.03 | 224 | 1.4e-04 | 9.4e-04 | 0.04 |
300 | 87 | 6.1e-04 | 8.1e-04 | 0.05 | 203 | 1.2e-04 | 8.5e-04 | 0.09 | 299 | 3.3e-05 | 9.0e-04 | 0.10 |
600 | 110 | 1.1e-04 | 6.1e-04 | 0.49 | 249 | 3.5e-05 | 9.9e-04 | 0.63 | 331 | 5.7e-05 | 8.6e-04 | 0.87 |
1000 | 137 | 4.0e-05 | 8.5e-04 | 1.75 | 266 | 3.2e-05 | 7.9e-04 | 1.80 | 370 | 3.1e-05 | 8.3e-04 | 2.62 |
1500 | 135 | 3.5e-05 | 8.6e-04 | 3.54 | 312 | 3.6e-05 | 8.5e-04 | 4.35 | 424 | 3.4e-05 | 8.2e-04 | 5.84 |
2000 | 146 | 5.4e-05 | 7.6e-04 | 6.38 | 331 | 1.3e-05 | 9.8e-04 | 7.57 | 445 | 1.5e-05 | 9.5e-04 | 10.19 |
2500 | 143 | 4.1e-05 | 9.7e-04 | 9.67 | 375 | 1.4e-05 | 9.3e-04 | 13.41 | 491 | 1.5e-05 | 9.1e-04 | 17.63 |
3000 | 170 | 3.1e-05 | 9.7e-04 | 16.00 | 418 | 1.5e-05 | 9.5e-04 | 20.70 | 533 | 1.6e-05 | 8.9e-04 | 26.42 |
PCM | L-ALM | C-PPA | ||||||||||
n | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) | Iter. | relL | relS | CPU(s) |
100 | 82 | 5.3e-04 | 5.7e-04 | 0.02 | 164 | 2.7e-04 | 6.2e-04 | 0.03 | 236 | 6.3e-05 | 9.2e-04 | 0.04 |
300 | 99 | 1.9e-04 | 7.9e-04 | 0.05 | 213 | 9.4e-05 | 5.9e-04 | 0.07 | 284 | 4.2e-05 | 9.2e-04 | 0.09 |
600 | 117 | 6.7e-05 | 8.9e-04 | 0.55 | 226 | 1.1e-04 | 9.9e-04 | 0.58 | 316 | 8.5e-05 | 9.3e-04 | 0.84 |
1000 | 179 | 4.2e-05 | 8.7e-04 | 2.35 | 253 | 5.4e-05 | 9.5e-04 | 1.68 | 369 | 1.9e-05 | 9.5e-04 | 2.47 |
1500 | 159 | 3.4e-05 | 9.3e-04 | 4.20 | 299 | 3.4e-05 | 8.1e-04 | 4.13 | 399 | 3.5e-05 | 7.9e-04 | 5.39 |
2000 | 142 | 6.6e-05 | 9.4e-04 | 6.18 | 313 | 1.3e-05 | 9.0e-04 | 7.32 | 410 | 1.9e-05 | 9.7e-04 | 9.56 |
2500 | 139 | 2.9e-05 | 9.8e-04 | 9.41 | 337 | 1.4e-05 | 1.0e-03 | 12.20 | 444 | 1.1e-05 | 9.8e-04 | 16.00 |
3000 | 168 | 3.7e-05 | 9.0e-04 | 15.92 | 370 | 1.6e-05 | 8.2e-04 | 18.54 | 464 | 2.2e-05 | 9.0e-04 | 23.22 |
From Tables 1–3, it can be seen that PCM performs best in terms of both the number of iterations and the CPU time for all test cases. These numerical results illustrate that relaxing the step size condition can indeed yield larger step sizes, which accelerates the convergence of the algorithm.
To further examine the performance of our algorithm, we plot the approximation errors relL=‖xk−xk−1‖ and relS=‖Axk−b‖ achieved for n=1500, s=50 by each of the algorithms versus the iteration number k in Figures 2 and 3, respectively. It is clear that PCM significantly outperforms the other algorithms in terms of the number of iterations.
The matrix completion (MC) problem arises in many fields such as signal processing, statistics, and machine learning. It aims to recover a low-rank matrix from an incomplete subset of its known entries. Mathematically, its convex formulation is as follows:
min‖X‖∗s.t.Xij=Mij,(i,j)∈Ω, | (5.3) |
where ‖X‖∗ is the nuclear norm of X, M is the unknown matrix with p available sampled entries, and Ω is the set of index pairs of the p available entries. By invoking the first-order optimality condition, MC is also equivalent to the Ⅵ (1.1) and (1.2) with f(X)=∂‖X‖∗.
The basic setup of the problem is as follows. We first generate two random matrices ML∈ℜn×ra and MR∈ℜn×ra with i.i.d. standard Gaussian entries, and then set the low-rank matrix M=MLMTR. The index set Ω of available entries is sampled uniformly at random among all index sets of cardinality p. We denote by OS=|Ω|/(ra(2n−ra)) the oversampling factor, i.e., the ratio of the sample size to the number of degrees of freedom in an asymmetric matrix of rank ra. The relative error of the approximation X is defined as
relative error=‖XΩ−MΩ‖F/‖MΩ‖F. | (5.4) |
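The generation of a test instance and the error metric (5.4) can be sketched as follows (our own illustrative snippet; the sizes, seed, and OS value are assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, ra, OS = 50, 5, 3.0
ML = rng.standard_normal((n, ra))
MR = rng.standard_normal((n, ra))
M = ML @ MR.T                              # rank-ra ground truth

p = int(OS * ra * (2 * n - ra))            # sample size from the oversampling factor
flat = np.zeros(n * n, dtype=bool)
flat[rng.choice(n * n, p, replace=False)] = True
Omega = flat.reshape(n, n)                 # boolean mask of the sampled entries

def rel_err(X):
    """Relative error (5.4), measured on the sampled entries only."""
    return np.linalg.norm((X - M)[Omega]) / np.linalg.norm(M[Omega])
```

By construction, rel_err(M) is 0 for the exact matrix and rel_err of the zero matrix is 1, which makes the 10^{-5} tolerance used below a meaningful target.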
We set Tol=10−5 as the tolerance on the relative error for all algorithms. In all tests, the initial iterate is (X0,Λ0)=(0,0). The parameters used in the numerical experiments are set as follows: r=0.006, s=1.01/(4r), γ=1 for PCM, and s=1.01/r for L-ALM and C-PPA.
Tables 4–6 list the comparisons among these algorithms for three different OS values. The results confirm that PCM outperforms the other methods in terms of both computation time and number of iterations in all cases.
Problems | PCM | L-ALM | C-PPA | ||||
n | ra | Rel.err. | Iter. | Rel.err. | Iter. | Rel.err. | Iter. |
100 | 5 | 9.71e-06 | 60 | 6.22e-05 | 101 | 5.83e-05 | 101 |
100 | 10 | 9.74e-06 | 18 | 9.17e-06 | 38 | 9.15e-06 | 37 |
200 | 5 | 9.40e-06 | 69 | 1.41e-04 | 101 | 1.33e-04 | 101 |
200 | 10 | 8.88e-06 | 27 | 9.14e-06 | 56 | 9.15e-06 | 55 |
500 | 10 | 9.67e-06 | 43 | 9.29e-06 | 72 | 8.96e-06 | 71 |
500 | 15 | 8.93e-06 | 31 | 9.87e-06 | 50 | 9.06e-06 | 49 |
1000 | 10 | 9.21e-06 | 71 | 9.23e-06 | 119 | 9.52e-06 | 117 |
1500 | 10 | 9.92e-06 | 95 | 9.67e-06 | 166 | 9.41e-06 | 165 |
Problems | PCM | L-ALM | C-PPA | ||||
n | ra | Rel.err. | Iter. | Rel.err. | Iter. | Rel.err. | Iter. |
100 | 5 | 8.67e-06 | 14 | 7.66e-06 | 29 | 7.67e-06 | 28 |
100 | 10 | 8.39e-06 | 13 | 9.10e-06 | 27 | 9.12e-06 | 26 |
200 | 5 | 6.84e-06 | 21 | 8.85e-06 | 36 | 8.67e-06 | 35 |
200 | 10 | 5.78e-06 | 7 | 6.17e-06 | 13 | 8.32e-06 | 10 |
500 | 10 | 9.13e-06 | 24 | 8.22e-06 | 39 | 8.86e-06 | 38 |
500 | 15 | 8.34e-06 | 17 | 9.13e-06 | 26 | 9.74e-06 | 25 |
1000 | 10 | 8.22e-06 | 48 | 9.25e-06 | 77 | 9.33e-06 | 76 |
1500 | 10 | 9.41e-06 | 66 | 9.39e-06 | 112 | 9.40e-06 | 111 |
Problems | PCM | L-ALM | C-PPA | ||||
n | ra | Rel.err. | Iter. | Rel.err. | Iter. | Rel.err. | Iter. |
100 | 5 | 9.62e-06 | 11 | 8.92e-06 | 23 | 8.99e-06 | 22 |
100 | 10 | 9.62e-06 | 13 | 9.15e-06 | 27 | 9.20e-06 | 26 |
200 | 5 | 4.73e-06 | 13 | 9.27e-06 | 21 | 8.34e-06 | 20 |
200 | 10 | 7.98e-06 | 6 | 6.15e-06 | 13 | 8.34e-06 | 10 |
500 | 10 | 8.15e-06 | 17 | 8.51e-06 | 25 | 9.93e-06 | 24 |
500 | 15 | 4.58e-06 | 10 | 4.22e-06 | 14 | 7.19e-06 | 13 |
1000 | 10 | 9.35e-06 | 35 | 9.38e-06 | 56 | 9.81e-06 | 55 |
1500 | 10 | 9.75e-06 | 50 | 9.20e-06 | 84 | 9.46e-06 | 83 |
This paper proposes a new prediction-correction method for solving monotone variational inequalities with linear constraints. In the prediction step, the implementation is carried out by a simple projection; in the correction step, a simple update generates the new iterate. We establish the global convergence of the method. The convergence condition also allows larger step sizes, which can potentially make the algorithm converge faster numerically. The numerical experiments confirm the efficiency of the proposed method. Future work includes combining self-adaptive techniques with the method and exploring further applications.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was funded by NSF of Shaanxi Province grant number 2020-JQ-485.
The authors declare no conflict of interest.
| n | PCM Iter. | PCM relL | PCM relS | PCM CPU(s) | L-ALM Iter. | L-ALM relL | L-ALM relS | L-ALM CPU(s) | C-PPA Iter. | C-PPA relL | C-PPA relS | C-PPA CPU(s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 81 | 4.2e-04 | 1.9e-04 | 0.03 | 161 | 1.6e-04 | 7.5e-04 | 0.03 | 232 | 8.2e-05 | 8.6e-04 | 0.04 |
| 300 | 88 | 5.1e-04 | 7.3e-04 | 0.05 | 207 | 7.5e-05 | 6.3e-04 | 0.07 | 311 | 5.3e-05 | 9.4e-04 | 0.10 |
| 600 | 110 | 9.2e-05 | 7.9e-04 | 0.54 | 262 | 7.2e-05 | 8.9e-04 | 0.62 | 374 | 6.0e-05 | 9.4e-04 | 1.01 |
| 1000 | 141 | 4.4e-05 | 9.8e-04 | 1.81 | 291 | 3.3e-05 | 7.9e-04 | 2.21 | 411 | 3.2e-05 | 9.4e-04 | 2.86 |
| 1500 | 139 | 6.7e-05 | 6.6e-04 | 3.95 | 359 | 7.0e-05 | 9.4e-04 | 5.04 | 484 | 6.9e-05 | 9.0e-04 | 6.54 |
| 2000 | 151 | 6.3e-05 | 7.9e-04 | 6.55 | 406 | 1.2e-05 | 9.7e-04 | 9.24 | 536 | 1.3e-05 | 9.7e-04 | 12.08 |
| 2500 | 171 | 3.9e-05 | 8.5e-04 | 11.57 | 519 | 1.3e-05 | 8.9e-04 | 18.60 | 635 | 1.8e-05 | 9.8e-04 | 22.69 |
| 3000 | 199 | 2.9e-05 | 8.8e-04 | 18.82 | 585 | 1.7e-05 | 9.8e-04 | 29.46 | 709 | 1.7e-05 | 9.0e-04 | 35.61 |
| n | PCM Iter. | PCM relL | PCM relS | PCM CPU(s) | L-ALM Iter. | L-ALM relL | L-ALM relS | L-ALM CPU(s) | C-PPA Iter. | C-PPA relL | C-PPA relS | C-PPA CPU(s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 80 | 5.9e-04 | 7.0e-04 | 0.03 | 151 | 3.7e-04 | 9.6e-04 | 0.03 | 224 | 1.4e-04 | 9.4e-04 | 0.04 |
| 300 | 87 | 6.1e-04 | 8.1e-04 | 0.05 | 203 | 1.2e-04 | 8.5e-04 | 0.09 | 299 | 3.3e-05 | 9.0e-04 | 0.10 |
| 600 | 110 | 1.1e-04 | 6.1e-04 | 0.49 | 249 | 3.5e-05 | 9.9e-04 | 0.63 | 331 | 5.7e-05 | 8.6e-04 | 0.87 |
| 1000 | 137 | 4.0e-05 | 8.5e-04 | 1.75 | 266 | 3.2e-05 | 7.9e-04 | 1.80 | 370 | 3.1e-05 | 8.3e-04 | 2.62 |
| 1500 | 135 | 3.5e-05 | 8.6e-04 | 3.54 | 312 | 3.6e-05 | 8.5e-04 | 4.35 | 424 | 3.4e-05 | 8.2e-04 | 5.84 |
| 2000 | 146 | 5.4e-05 | 7.6e-04 | 6.38 | 331 | 1.3e-05 | 9.8e-04 | 7.57 | 445 | 1.5e-05 | 9.5e-04 | 10.19 |
| 2500 | 143 | 4.1e-05 | 9.7e-04 | 9.67 | 375 | 1.4e-05 | 9.3e-04 | 13.41 | 491 | 1.5e-05 | 9.1e-04 | 17.63 |
| 3000 | 170 | 3.1e-05 | 9.7e-04 | 16.00 | 418 | 1.5e-05 | 9.5e-04 | 20.70 | 533 | 1.6e-05 | 8.9e-04 | 26.42 |
| n | PCM Iter. | PCM relL | PCM relS | PCM CPU(s) | L-ALM Iter. | L-ALM relL | L-ALM relS | L-ALM CPU(s) | C-PPA Iter. | C-PPA relL | C-PPA relS | C-PPA CPU(s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 82 | 5.3e-04 | 5.7e-04 | 0.02 | 164 | 2.7e-04 | 6.2e-04 | 0.03 | 236 | 6.3e-05 | 9.2e-04 | 0.04 |
| 300 | 99 | 1.9e-04 | 7.9e-04 | 0.05 | 213 | 9.4e-05 | 5.9e-04 | 0.07 | 284 | 4.2e-05 | 9.2e-04 | 0.09 |
| 600 | 117 | 6.7e-05 | 8.9e-04 | 0.55 | 226 | 1.1e-04 | 9.9e-04 | 0.58 | 316 | 8.5e-05 | 9.3e-04 | 0.84 |
| 1000 | 179 | 4.2e-05 | 8.7e-04 | 2.35 | 253 | 5.4e-05 | 9.5e-04 | 1.68 | 369 | 1.9e-05 | 9.5e-04 | 2.47 |
| 1500 | 159 | 3.4e-05 | 9.3e-04 | 4.20 | 299 | 3.4e-05 | 8.1e-04 | 4.13 | 399 | 3.5e-05 | 7.9e-04 | 5.39 |
| 2000 | 142 | 6.6e-05 | 9.4e-04 | 6.18 | 313 | 1.3e-05 | 9.0e-04 | 7.32 | 410 | 1.9e-05 | 9.7e-04 | 9.56 |
| 2500 | 139 | 2.9e-05 | 9.8e-04 | 9.41 | 337 | 1.4e-05 | 1.0e-03 | 12.20 | 444 | 1.1e-05 | 9.8e-04 | 16.00 |
| 3000 | 168 | 3.7e-05 | 9.0e-04 | 15.92 | 370 | 1.6e-05 | 8.2e-04 | 18.54 | 464 | 2.2e-05 | 9.0e-04 | 23.22 |
| n | ra | PCM Rel.err. | PCM Iter. | L-ALM Rel.err. | L-ALM Iter. | C-PPA Rel.err. | C-PPA Iter. |
|---|---|---|---|---|---|---|---|
| 100 | 5 | 9.71e-06 | 60 | 6.22e-05 | 101 | 5.83e-05 | 101 |
| 100 | 10 | 9.74e-06 | 18 | 9.17e-06 | 38 | 9.15e-06 | 37 |
| 200 | 5 | 9.40e-06 | 69 | 1.41e-04 | 101 | 1.33e-04 | 101 |
| 200 | 10 | 8.88e-06 | 27 | 9.14e-06 | 56 | 9.15e-06 | 55 |
| 500 | 10 | 9.67e-06 | 43 | 9.29e-06 | 72 | 8.96e-06 | 71 |
| 500 | 15 | 8.93e-06 | 31 | 9.87e-06 | 50 | 9.06e-06 | 49 |
| 1000 | 10 | 9.21e-06 | 71 | 9.23e-06 | 119 | 9.52e-06 | 117 |
| 1500 | 10 | 9.92e-06 | 95 | 9.67e-06 | 166 | 9.41e-06 | 165 |
| n | ra | PCM Rel.err. | PCM Iter. | L-ALM Rel.err. | L-ALM Iter. | C-PPA Rel.err. | C-PPA Iter. |
|---|---|---|---|---|---|---|---|
| 100 | 5 | 8.67e-06 | 14 | 7.66e-06 | 29 | 7.67e-06 | 28 |
| 100 | 10 | 8.39e-06 | 13 | 9.10e-06 | 27 | 9.12e-06 | 26 |
| 200 | 5 | 6.84e-06 | 21 | 8.85e-06 | 36 | 8.67e-06 | 35 |
| 200 | 10 | 5.78e-06 | 7 | 6.17e-06 | 13 | 8.32e-06 | 10 |
| 500 | 10 | 9.13e-06 | 24 | 8.22e-06 | 39 | 8.86e-06 | 38 |
| 500 | 15 | 8.34e-06 | 17 | 9.13e-06 | 26 | 9.74e-06 | 25 |
| 1000 | 10 | 8.22e-06 | 48 | 9.25e-06 | 77 | 9.33e-06 | 76 |
| 1500 | 10 | 9.41e-06 | 66 | 9.39e-06 | 112 | 9.40e-06 | 111 |
| n | ra | PCM Rel.err. | PCM Iter. | L-ALM Rel.err. | L-ALM Iter. | C-PPA Rel.err. | C-PPA Iter. |
|---|---|---|---|---|---|---|---|
| 100 | 5 | 9.62e-06 | 11 | 8.92e-06 | 23 | 8.99e-06 | 22 |
| 100 | 10 | 9.62e-06 | 13 | 9.15e-06 | 27 | 9.20e-06 | 26 |
| 200 | 5 | 4.73e-06 | 13 | 9.27e-06 | 21 | 8.34e-06 | 20 |
| 200 | 10 | 7.98e-06 | 6 | 6.15e-06 | 13 | 8.34e-06 | 10 |
| 500 | 10 | 8.15e-06 | 17 | 8.51e-06 | 25 | 9.93e-06 | 24 |
| 500 | 15 | 4.58e-06 | 10 | 4.22e-06 | 14 | 7.19e-06 | 13 |
| 1000 | 10 | 9.35e-06 | 35 | 9.38e-06 | 56 | 9.81e-06 | 55 |
| 1500 | 10 | 9.75e-06 | 50 | 9.20e-06 | 84 | 9.46e-06 | 83 |