Research article

Pointwise Jacobson type necessary conditions for optimal control problems governed by impulsive differential systems

  • Received: 31 December 2023 Revised: 29 February 2024 Accepted: 01 March 2024 Published: 07 March 2024
  • This work explores pointwise Jacobson-type necessary conditions for optimal control problems governed by differential systems with impulses at fixed times; the pointwise Jacobson-type necessary optimality conditions are a type of pointwise second-order necessary optimality condition for optimal singular control in the classical sense. By introducing an impulsive linear matrix Riccati differential equation, we derive an integral representation of the second-order variation of the cost functional. Based on this, the integral form of the second-order necessary conditions and the pointwise Jacobson-type necessary conditions are obtained. Incidentally, we also establish the Legendre-Clebsch condition and the pointwise Legendre-Clebsch condition. Finally, an example is provided to illustrate the effectiveness of the main result.

    Citation: Huifu Xia, Yunfei Peng. Pointwise Jacobson type necessary conditions for optimal control problems governed by impulsive differential systems[J]. Electronic Research Archive, 2024, 32(3): 2075-2098. doi: 10.3934/era.2024094




    Impulsive differential equations are employed for the analysis of real-world phenomena characterized by instantaneous changes of the system state. They play a vital role in many areas, such as sampled-data control, communication networks, industrial robots, biology, and so on. Due to the widespread presence of impulsive perturbations, extensive research has been dedicated to the stability of impulsive differential systems (see [1,2,3,4]). Meanwhile, some scholars have focused on the optimal impulsive control problem, which has yielded numerous interesting findings (see [5,6,7,8,9]).

    Finding the necessary optimality conditions is one of the central tasks in optimal control theory. Pontryagin and his co-authors have made milestone contributions (see [10]). As pointed out in [11]: "The mathematical significance of the maximum principle lies in that maximizing the Hamiltonian is much easier than the original control problem that is infinite-dimensional". Essentially, the Hamiltonian maximization occurs pointwise. Since Pontryagin's maximum principle was discovered, the first-order and second-order necessary conditions for optimal control problems in both finite- and infinite-dimensional spaces have been extensively researched (see [11,12]).

    However, it is not always possible to find the optimal control by pointwise maximization of the Hamiltonian, and such problems are often referred to as singular control problems. For singular control problems, the first task is to discover new necessary conditions that distinguish optimal singular controls from other singular controls; one way to do this is to look for second-order conditions. It is common to seek a second-order necessary condition requiring a quadratic functional to be non-negative. Ideally, one would prefer second-order necessary conditions with a pointwise character similar to that of Pontryagin's maximum principle, i.e., conditions that amount to the pointwise maximization of a particular function. The pointwise necessary optimality conditions are reviewed in [13]; the Jacobson conditions and the Goh conditions are generally considered the two types of pointwise second-order necessary optimality conditions for optimal singular control problems. In addition, references [14,15,16] are very comprehensive sources on the singular control problem. Regarding the Goh conditions, the original contribution can be found in [17]. In short, the Goh conditions are obtained by applying Goh's transformation, which is designed to convert the original singular problem into a new nonsingular one; the classical Legendre-Clebsch condition may then be applied to the nonsingular optimal control problem. In this process, Goh's transformation may be used several times; therefore, the Goh conditions are also called the generalized Legendre-Clebsch conditions. Recent results in this direction are abundant; see, for example, [18,19,20,21] and the references therein. By contrast, limited attention has been given to the Jacobson-type conditions.

    There is an interesting story about the Jacobson-type necessary conditions. Until Jacobson discovered them, it was thought that there was no Riccati-type matrix differential equation for singular control problems. In fact, Jacobson introduced a linear matrix Riccati differential equation analogous to the nonlinear matrix Riccati differential equation of the standard LQ problem, and a "new" necessary condition for singular control was thus obtained in [22]. Afterwards, Jacobson found that this new condition differs from the generalized Legendre-Clebsch condition, and demonstrated in [23] that these two necessary conditions are generally insufficient for optimality. Therefore, matrix Riccati differential equations are useful not only for nonsingular problems but also for important optimal singular control problems; singular control and nonsingular control can even be considered under a unified framework (see Theorem 4.2 in [15,24]).

    Recent studies [25,26] have established Jacobson-type pointwise second-order necessary optimality conditions for deterministic and stochastic optimal singular control problems, respectively. In particular, relaxation methods for control problems have been used in [25] to handle singular control problems lacking a linear structure in the control, and pointwise second-order necessary conditions were obtained there. When discussing the second-order necessary conditions, only the admissible set of singular controls is considered, rather than the original admissible control set, which can greatly reduce the computational expense of singular control problems. The ideas of [25,26] have also been applied to derive pointwise second-order necessary optimality conditions for singular control problems with constraints in finite- or infinite-dimensional spaces; for further details, please refer to [13,27,28,29,30,31]. Theorem 4.3 in [25] gives the Jacobson-type second-order necessary optimality condition, in the Pontryagin sense, for singular control problems governed by pulseless controlled systems. For the definitions of singular control in the classical sense and in the Pontryagin sense, see Definitions 1 and 2 in [14], as well as (4.4) and (4.5) in [25].

    Regarding the importance of impulsive systems, the second-order necessary conditions for optimal control problems governed by impulsive differential systems have been studied in [32], which focuses on impulsive systems with multi-point nonlocal and integral boundary conditions; second-order necessary conditions in integral form were obtained there by directly introducing an impulsive matrix function. When discussing the second-order necessary conditions for singular control in [32], the perturbation control takes its values in the full space rather than in the singular control region (see (3.13)); therefore, the pointwise Jacobson-type necessary conditions are not reached there. In addition, references [33,34,35] consider second-order necessary conditions in which the impulsive control is a measure; the system can then experience an infinite number of jumps in a finite amount of time, but this differs from what we consider here.

    Inspired by the above discussions, we aimed to study the following optimal control problem, as governed by impulsive differential systems, which differs from the previous works.

    Problem P: Let $T>0$ and $\Lambda=\{t_i\mid 0<t_1<t_2<\cdots<t_k<T\}\subset(0,T)$ be given; $U\subset\mathbb{R}^m$ is a nonempty bounded convex open set.

    $$\min J(u(\cdot))=\int_0^T l(t,y(t),u(t))\,dt+G(y(T)), \quad (1.1)$$

    subject to

    $$\begin{cases}\dot y(t)=f(t,y(t),u(t)), & t\in[0,T]\setminus\Lambda,\\ y(t_i^+)-y(t_i)=J_i(y(t_i)), & t_i\in\Lambda,\\ y(0)=y_0,\end{cases} \quad (1.2)$$

    where $u(\cdot)\in U_{ad}=\{u(\cdot)\mid u(\cdot)\ \text{is measurable},\ u(t)\in U\}$, and $l$, $G$, $f$, and $J_i\ (i=1,2,\dots,k)$ are given maps.

    Here, we generalize the control model of [15,25] to an impulsive controlled system. For Problem P, the difficulty lies in introducing an appropriate impulsive adjoint matrix differential equation, which we overcome by borrowing the method presented in [25]. The main conclusions are Theorems 3.1 and 4.6: the former generalizes Theorem 4.2 in [15] to impulsive controlled systems; the latter is analogous to Theorem 4.3 in [25], the difference being that Theorem 4.6 concerns impulsive controlled systems and singular controls in the classical sense, whereas Theorem 4.3 in [25] concerns pulseless controlled systems and singular controls in the Pontryagin sense.

    The main novelties and contributions of this paper can be summarized as follows: (i) pulseless controlled systems are generalized to impulsive controlled systems, which broadens the applicability of the model; (ii) by using the fact from functional analysis that $C([0,T])$ is a dense subspace of $L^1([0,T])$, the assumptions needed for the conclusions are weakened; (iii) pointwise Jacobson-type necessary conditions are obtained, which facilitate computation and help to distinguish the optimal singular control from other singular controls in the classical sense (see Definition 2.3).

    The outline of this paper is as follows. Some preliminaries are proposed in Section 2. In Section 3, the integral-form second-order necessary conditions are derived. In Section 4, the pointwise Jacobson-type necessary conditions are given. An example is considered to elucidate the proposed main results in Section 5, and Section 6 concludes this paper.

    In this section, we present some preliminaries, which include the basic assumptions, the definition of singular control in the classical sense, the solvability of the impulsive system, and a lemma obtained via functional analysis.

    Let $B^\top$ denote the transpose of a matrix $B$. Define $C^1([0,T]\setminus\Lambda,\mathbb{R}^n)=\{y:[0,T]\to\mathbb{R}^n\mid y\ \text{is continuously differentiable at}\ t\in[0,T]\setminus\Lambda\}$ and $PC_l([0,T],\mathbb{R}^n)$ (resp. $PC_r([0,T],\mathbb{R}^n)$) $=\{y:[0,T]\to\mathbb{R}^n\mid y$ is continuous at $t\in[0,T]\setminus\Lambda$, and $y$ is left (right) continuous with a right (left) limit at $t\in\Lambda\}$. Endowed with the norm $\|y\|_{PC}=\sup\{\|y(t^+)\|,\|y(t^-)\|\mid t\in[0,T]\}$, $PC_l([0,T],\mathbb{R}^n)$ and $PC_r([0,T],\mathbb{R}^n)$ are Banach spaces.

    Let us assume the following:

    (A1) $U\subset\mathbb{R}^m$ is a nonempty bounded convex open set.

    (A2) The functions $f$ and $l$, collected as $F=(f,l)^\top:[0,T]\times\mathbb{R}^n\times U\to\mathbb{R}^{n+1}$, are measurable in $t$ and twice continuously differentiable in $(y,u)$; for any $\rho>0$, there exists a constant $L(\rho)>0$ such that, for all $y,\hat y\in\mathbb{R}^n$ and $u,\hat u\in U$ with $\|y\|,\|\hat y\|,\|u\|,\|\hat u\|\le\rho$, and all $t\in[0,T]$,

    $$\begin{cases}\|F(t,y,u)-F(t,\hat y,\hat u)\|\le L(\rho)\big(\|y-\hat y\|+\|u-\hat u\|\big),\\ \|F_y(t,y,u)-F_y(t,\hat y,\hat u)\|\le L(\rho)\big(\|y-\hat y\|+\|u-\hat u\|\big),\\ \|F_u(t,y,u)-F_u(t,\hat y,\hat u)\|\le L(\rho)\big(\|y-\hat y\|+\|u-\hat u\|\big);\end{cases}$$

    moreover, there is a constant $h>0$ such that

    $$\|F(t,y,u)\|\le h(1+\|y\|)\quad\text{for all}\ (t,u)\in[0,T]\times U, \quad (2.1)$$

    where $F_y(t,y,u)$ (resp. $F_u(t,y,u)$) denotes the Jacobian matrix of $F$ with respect to $y$ (resp. $u$).

    (A3) The functions $J_i$ and $G$, collected as $\tilde J_i=(J_i,G)^\top:\mathbb{R}^n\to\mathbb{R}^{n+1}$ $(i=1,2,\dots,k)$, are twice continuously differentiable in $y$, and, for any $\rho>0$, there exists a constant $L(\rho)>0$ such that, for all $y,\hat y\in\mathbb{R}^n$ with $\|y\|,\|\hat y\|\le\rho$, we have

    $$\begin{cases}\|\tilde J_i(y)-\tilde J_i(\hat y)\|\le L(\rho)\|y-\hat y\|,\\ \|\tilde J_{iy}(y)-\tilde J_{iy}(\hat y)\|\le L(\rho)\|y-\hat y\|,\end{cases}$$

    where $\tilde J_{iy}(y)$ denotes the Jacobian matrix of $\tilde J_i$ with respect to $y$.

    Denote $H(t)=l(t,y(t),u(t))+\langle f(t,y(t),u(t)),\varphi(t)\rangle$. To simplify the notation, $[t]$ is used to replace $(t,\bar y(t),\bar\varphi(t),\bar u(t))$ when evaluating the dynamics $f$ and the Hamiltonian $H$; for example, $f[t]=f(t,\bar y(t),\bar u(t))$ and $H[t]=l(t,\bar y(t),\bar u(t))+\langle f(t,\bar y(t),\bar u(t)),\bar\varphi(t)\rangle$.

    Remark 2.1. $H(t)=l(t,y(t),u(t))+\langle f(t,y(t),u(t)),\varphi(t)\rangle$ is often referred to as the Hamiltonian function, or simply the Hamiltonian. In many references the Hamiltonian is defined with the opposite sign convention and denoted $\tilde H$; the difference is that Pontryagin's maximum principle maximizes the Hamiltonian $\tilde H$, whereas one minimizes the Hamiltonian $H$. Accordingly, the sign of the optimality inequality in the maximum principle also differs. In this article, we use $H$ to denote the Hamiltonian; in other words, we need to minimize the Hamiltonian $H$.

    Now, we will prove an elementary theorem that will be useful in the following sections.

    Theorem 2.2. Let (A1)–(A3) hold; then, for any fixed $u\in U_{ad}$, the control system (1.2) has a unique solution $y_u\in PC_l([0,T],\mathbb{R}^n)$ given by

    $$y_u(t)=y_0+\int_0^t f(s,y_u(s),u(s))\,ds+\sum_{0<t_i<t}J_i(y_u(t_i)), \quad (2.2)$$

    and there exists a constant $M=M(h,T,y_0,J_1(0),J_2(0),\dots,J_k(0))$ such that

    $$\|y_u(t)\|\le M\quad\text{for all}\ (t,u)\in[0,T]\times U_{ad}. \quad (2.3)$$

    Proof. By the qualitative theory of differential equations, it follows from (A1)–(A3) that the system of equations

    $$\begin{cases}\dot y(t)=f(t,y(t),u(t)), & t\in[0,t_1],\\ y(0)=y_0,\end{cases}$$

    has a unique solution $y_u\in C([0,t_1],\mathbb{R}^n)$ given by

    $$y_u(t)=y_0+\int_0^t f(s,y_u(s),u(s))\,ds,\quad t\in[0,t_1],$$

    and

    $$\|y_u(t)\|\le e^{ht}\big(ht_1+\|y_0\|\big)\quad\text{for all}\ (t,u)\in[0,t_1]\times U_{ad}.$$

    Let

    $$y_1=y_u(t_1)+J_1(y_u(t_1));$$

    then one can also infer that the system of equations

    $$\begin{cases}\dot y(t)=f(t,y(t),u(t)), & t\in(t_1,t_2],\\ y(t_1^+)=y_1,\end{cases}$$

    has a unique solution $y_u\in C((t_1,t_2],\mathbb{R}^n)$ given by

    $$y_u(t)=y_0+\int_0^t f(s,y_u(s),u(s))\,ds+J_1(y_u(t_1)),\quad t\in(t_1,t_2],$$

    and

    $$\|y_u(t)\|\le e^{h(t-t_1)}\big(h(t_2-t_1)+\|y_1\|\big)\quad\text{for all}\ (t,u)\in(t_1,t_2]\times U_{ad}.$$

    Using a step-by-step argument, together with

    $$\|J_i(y_u(t_i))\|\le L(\|y_u(t_i)\|)\,\|y_u(t_i)\|+\|J_i(0)\|,\quad i=1,2,\dots,k,$$

    it is not difficult to claim that there exists a constant M such that (2.2) and (2.3) hold. Therefore, we have completed the proof of Theorem 2.2.
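    The step-by-step construction used in this proof also suggests a direct way to compute solutions of (1.2) and the cost (1.1) numerically: integrate the dynamics on each interval between impulse times and apply the jump map $J_i$ at each $t_i$. The following Python sketch illustrates this idea; the data $f$, $l$, $G$, the jump maps, the impulse times, and the control used below are hypothetical placeholders chosen only for illustration, and explicit Euler integration is used purely for simplicity.

```python
# A minimal numerical sketch of Problem P under hypothetical data
# (f, l, G, the jump maps J_i, and the control below are placeholders).

T = 1.0
impulse_times = [0.3, 0.7]                       # the set Lambda = {t_1, t_2}
f = lambda t, y, u: -y + u                       # dynamics f(t, y, u)
l = lambda t, y, u: 0.5 * (y ** 2 + u ** 2)      # running cost l(t, y, u)
G = lambda y: y ** 2                             # terminal cost G(y(T))
jumps = [lambda y: 0.1 * y, lambda y: -0.2 * y]  # jump maps J_1, J_2

def solve_and_cost(u_func, y0, dt=1e-3):
    """Integrate (1.2) piecewise and evaluate the cost (1.1).

    Between consecutive impulse times the ODE y' = f(t, y, u) is treated as an
    ordinary initial value problem (explicit Euler here); at each t_i the state
    is reset by y(t_i^+) = y(t_i) + J_i(y(t_i)), mirroring the step-by-step
    construction in the proof of Theorem 2.2.
    """
    breakpoints = [0.0] + impulse_times + [T]
    y, cost = y0, 0.0
    for k in range(len(breakpoints) - 1):
        a, b = breakpoints[k], breakpoints[k + 1]
        steps = max(1, int(round((b - a) / dt)))
        h = (b - a) / steps
        t = a
        for _ in range(steps):
            u = u_func(t)
            cost += l(t, y, u) * h               # accumulate the running cost
            y += f(t, y, u) * h                  # Euler step of the dynamics
            t += h
        if k < len(impulse_times):               # apply the impulse at t_{k+1}
            y = y + jumps[k](y)
    return y, cost + G(y)

yT, J = solve_and_cost(lambda t: 0.5, y0=1.0)
print(f"y(T) = {yT:.4f},  J = {J:.4f}")
```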

    To establish the main conclusion of Theorem 4.6, we now introduce the definition of singular control in the classical sense (see Definition 2 in [14], as well as (4.4) in [25]).

    Definition 2.3. We refer to the elements of the following set as singular controls in the classical sense:

    $$\bar U_{ad}=\big\{v(\cdot)\in U_{ad}\ \big|\ H_u(t,\bar y(t),v(t),\bar\varphi(t))=0,\ H_{uu}(t,\bar y(t),v(t),\bar\varphi(t))=0\big\}.$$

    Remark 2.4. If the Hamiltonian $H$ is linear in the control $u$, then $\bar U_{ad}$ is a singular control set in the classical sense.
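    For a concrete problem, membership in $\bar U_{ad}$ can be checked numerically along a candidate trajectory by evaluating $H_u$ and $H_{uu}$ on a time grid and testing whether both vanish up to a tolerance. The sketch below is one possible such check for a scalar state and control; it assumes that the trajectory $\bar y$ and the adjoint $\bar\varphi$ (defined by (3.2) in Section 3) have already been computed on the grid, and all function arguments are user-supplied placeholders, not quantities fixed by the paper.

```python
def is_classically_singular(ts, ybar, phibar, v, l_u, l_uu, f_u, f_uu, tol=1e-8):
    """Check the two conditions of Definition 2.3 on a time grid (scalar case).

    With H(t, y, u, phi) = l(t, y, u) + f(t, y, u) * phi for scalar state and
    control, H_u = l_u + f_u * phi and H_uu = l_uu + f_uu * phi; a candidate
    control v(.) is singular in the classical sense if both vanish along the
    trajectory ybar with adjoint phibar (here: up to the tolerance tol).
    """
    for t, y, phi in zip(ts, ybar, phibar):
        u = v(t)
        H_u = l_u(t, y, u) + f_u(t, y, u) * phi
        H_uu = l_uu(t, y, u) + f_uu(t, y, u) * phi
        if abs(H_u) > tol or abs(H_uu) > tol:
            return False
    return True

# Example of Remark 2.4: if f(t, y, u) = u and l(t, y, u) = y, then H = y + u*phi,
# so H_uu = 0 identically and H_u = phi; v(.) is singular wherever phi vanishes.
```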

    Now, we shall introduce a fundamental lemma, as derived from functional analysis, that will be utilized to establish the necessary condition for Problem P.

    Lemma 2.5. Let $h(t)$ be an $n$-dimensional piecewise continuous vector-valued function on $[0,T]$, and suppose that

    $$\int_0^T\langle h(t),a(t)\rangle\,dt=0$$

    for all $n$-dimensional piecewise continuous vector-valued functions $a(t)$ on $[0,T]$; then, $h(t)=0$ at every continuity point of $h(t)$ on $[0,T]$.

    Proof. Suppose, to the contrary, that $h(\bar t)\ne0$ at some continuity point $\bar t$ of $h(t)$. Since $h(t)$ is an $n$-dimensional piecewise continuous vector-valued function, there exists an interval $I_{\bar t}$ containing $\bar t$ such that

    $$h(t)\ne0,\quad t\in I_{\bar t}.$$

    In this case, take

    $$a(t)=\begin{cases}h(t), & t\in I_{\bar t},\\ 0, & t\in[0,T]\setminus I_{\bar t};\end{cases}$$

    then

    $$\int_0^T\langle h(t),a(t)\rangle\,dt=\int_{I_{\bar t}}h(t)^\top h(t)\,dt=\int_{I_{\bar t}}\|h(t)\|^2\,dt>0.$$

    This is a contradiction. Therefore, we have finished the proof of Lemma 2.5.

    The purpose of this section is to prove Theorem 3.1, which establishes the integral form of the second-order necessary conditions for Problem P. The basic idea is that non-negativity of the second-order variation is a necessary condition for optimality. We borrow the method adopted in [25] and introduce a linear impulsive adjoint matrix equation to prove it.

    Theorem 3.1. Let (A1)–(A3) hold and let $\bar u$ represent the optimal control of $J$ over $U_{ad}$; it is necessary that there exist functions $(\bar y,\bar\varphi,\bar W,\bar\Phi)\in PC_l([0,T],\mathbb{R}^n)\times PC_r([0,T],\mathbb{R}^n)\times PC_r([0,T],\mathbb{R}^{n\times n})\times PC_l([0,T],\mathbb{R}^{n\times n})$ such that the following equations and inequality hold:

    $$\begin{cases}\dot{\bar y}(t)=f(t,\bar y(t),\bar u(t)), & t\in[0,T]\setminus\Lambda,\\ \bar y(t_i^+)-\bar y(t_i)=J_i(\bar y(t_i)), & t_i\in\Lambda,\\ \bar y(0)=y_0;\end{cases} \quad (3.1)$$
    $$\begin{cases}\dot{\bar\varphi}(t)=-f_y(t,\bar y(t),\bar u(t))^\top\bar\varphi(t)-l_y(t,\bar y(t),\bar u(t)), & t\in[0,T]\setminus\Lambda,\\ \bar\varphi(t_i^-)=\bar\varphi(t_i)+J_{iy}(\bar y(t_i))^\top\bar\varphi(t_i), & t_i\in\Lambda,\\ \bar\varphi(T)=G_y(\bar y(T));\end{cases} \quad (3.2)$$
    $$\begin{cases}\dot{\bar W}(t)=-f_y(t,\bar y(t),\bar u(t))^\top\bar W(t)-\bar W(t)f_y(t,\bar y(t),\bar u(t))-\bar\varphi(t)f_{yy}(t,\bar y(t),\bar u(t))-l_{yy}(t,\bar y(t),\bar u(t)), & t\in[0,T]\setminus\Lambda,\\ \bar W(t_i^-)=\bar W(t_i)+J_{iy}(\bar y(t_i))^\top\bar W(t_i)+\bar W(t_i)J_{iy}(\bar y(t_i))+J_{iy}(\bar y(t_i))^\top\bar W(t_i)J_{iy}(\bar y(t_i))+\bar\varphi(t_i)J_{iyy}(\bar y(t_i)), & t_i\in\Lambda,\\ \bar W(T)=G_{yy}(\bar y(T));\end{cases} \quad (3.3)$$
    $$\begin{cases}\dot{\bar\Phi}(t)=f_y(t,\bar y(t),\bar u(t))\bar\Phi(t), & t\in[0,T]\setminus\Lambda,\\ \bar\Phi(t_i^+)=\bar\Phi(t_i)+J_{iy}(\bar y(t_i))\bar\Phi(t_i), & t_i\in\Lambda,\\ \bar\Phi(0)=I;\end{cases} \quad (3.4)$$
    $$\frac12\int_0^T\big\langle H_{uu}[t][u(t)-\bar u(t)],\,u(t)-\bar u(t)\big\rangle\,dt+\int_0^T\!\!\int_0^t\big\langle\big[H_{uy}[t]+\bar W(t)f_u[t]\big][u(t)-\bar u(t)],\,\bar\Phi(t)\bar\Phi(s)^{-1}f_u[s][u(s)-\bar u(s)]\big\rangle\,ds\,dt\ \ge\ 0\quad\text{for all}\ u(\cdot)\in U_{ad}.$$

    Proof. Now, let $(\bar y(\cdot),\bar u(\cdot))$ be the given optimal pair and $\varepsilon\in(0,1]$. For an arbitrary but fixed $u(\cdot)\in U_{ad}$, let $u_\varepsilon(\cdot)=\bar u(\cdot)+\varepsilon(u(\cdot)-\bar u(\cdot))$. It follows from assumption (A1) that $u_\varepsilon(\cdot)\in U_{ad}$; according to Theorem 2.2, $u_\varepsilon$ determines the unique admissible state $y_\varepsilon(\cdot)$; then, we can get from (A2), (A3), and Theorem 2.2 (see (2.2) and (2.3)) that

    $$\begin{aligned}\|y_\varepsilon(t)-\bar y(t)\|&\le\int_0^t\|f(s,y_\varepsilon(s),u_\varepsilon(s))-f(s,\bar y(s),\bar u(s))\|\,ds+\sum_{0<t_i<t}\|J_i(y_\varepsilon(t_i))-J_i(\bar y(t_i))\|\\ &\le L(M)\int_0^t\big(\|y_\varepsilon(s)-\bar y(s)\|+\varepsilon\|u(s)-\bar u(s)\|\big)\,ds+L(M)\sum_{0<t_i<t}\|y_\varepsilon(t_i)-\bar y(t_i)\|.\end{aligned}$$

    Using the impulsive integral inequality (see [1]), we have

    $$\lim_{\varepsilon\to0}\|y_\varepsilon-\bar y\|_{PC}=0. \quad (3.5)$$

    Let

    $$Y(t)=\lim_{\varepsilon\to0}Y_\varepsilon(t)=\lim_{\varepsilon\to0}\frac{y_\varepsilon(t)-\bar y(t)}{\varepsilon}.$$

    In the same way as for (3.5), it is easy to show that

    $$\lim_{\varepsilon\to0}\|Y_\varepsilon-Y\|_{PC}=0, \quad (3.6)$$

    and Y solves the following system of variational equations:

    $$\begin{cases}\dot Y(t)=f_y[t]Y(t)+f_u[t](u(t)-\bar u(t)), & t\in[0,T]\setminus\Lambda,\\ Y(t_i^+)=Y(t_i)+J_{iy}(\bar y(t_i))Y(t_i), & t_i\in\Lambda,\\ Y(0)=0.\end{cases} \quad (3.7)$$

    To obtain the first-order necessary condition for Problem P, the following proposition will be used.

    Proposition 3.2. Let (A2) and (A3) hold, and let $\bar\varphi\in PC_r([0,T],\mathbb{R}^n)$ be the solution of the impulsive adjoint equation (3.2). Then

    $$\int_0^T\big\langle f_u(t,\bar y(t),\bar u(t))^\top\bar\varphi(t),\,u(t)-\bar u(t)\big\rangle\,dt=\int_0^T\big\langle l_y(s,\bar y(s),\bar u(s)),Y(s)\big\rangle\,ds+\big\langle G_y(\bar y(T)),Y(T)\big\rangle. \quad (3.8)$$

    Proof. Since $C([0,T])$ is a dense subspace of $L^1([0,T])$, there exist function sequences $\{f_y^\alpha\},\{l_y^\alpha\}\subset C([0,T])$ such that

    $$f_y^\alpha(\cdot)\to f_y(\cdot,\bar y(\cdot),\bar u(\cdot))\quad\text{and}\quad l_y^\alpha(\cdot)\to l_y(\cdot,\bar y(\cdot),\bar u(\cdot))\quad\text{in}\ L^1([0,T])\ \text{as}\ \alpha\to\infty. \quad (3.9)$$

    Moreover, it follows from (A3) that the system of linear impulsive differential equations given by

    $$\begin{cases}\dot\varphi_\alpha(t)=-f_y^\alpha(t)^\top\varphi_\alpha(t)-l_y^\alpha(t), & t\in[0,T]\setminus\Lambda,\\ \varphi_\alpha(t_i^-)=\varphi_\alpha(t_i)+J_{iy}(\bar y(t_i))^\top\varphi_\alpha(t_i), & t_i\in\Lambda,\\ \varphi_\alpha(T)=G_y(\bar y(T)),\end{cases} \quad (3.10)$$

    has a unique solution $\varphi_\alpha\in PC_r([0,T],\mathbb{R}^n)\cap C^1([0,T]\setminus\Lambda,\mathbb{R}^n)$, and there is a constant $\beta>0$ such that

    $$\|\varphi_\alpha\|_{PC}\le\beta\quad\text{for all}\ \alpha.$$

    Hence, we have

    $$\begin{aligned}\|\varphi_\alpha(t)-\bar\varphi(t)\|&\le\int_t^T\|l_y^\alpha(s)-l_y(s,\bar y(s),\bar u(s))\|\,ds+\int_t^T\|f_y^\alpha(s)-f_y(s,\bar y(s),\bar u(s))\|\,\|\varphi_\alpha(s)\|\,ds\\ &\quad+\int_t^T\|f_y(s,\bar y(s),\bar u(s))\|\,\|\varphi_\alpha(s)-\bar\varphi(s)\|\,ds+\sum_{t<t_i<T}\|J_{iy}(\bar y(t_i))\|\,\|\varphi_\alpha(t_i)-\bar\varphi(t_i)\|.\end{aligned}$$

    Therefore, using the same method as for (3.5), it is not difficult to show that

    $$\lim_{\alpha\to\infty}\|\varphi_\alpha-\bar\varphi\|_{PC}=0. \quad (3.11)$$

    Consequently, we can infer from (3.7) and (3.10) that

    $$\begin{aligned}\int_0^T\big\langle f_u(t,\bar y(t),\bar u(t))^\top\varphi_\alpha(t),\,u(t)-\bar u(t)\big\rangle\,dt&=\int_0^T\big\langle\varphi_\alpha(t),\,f_u(t,\bar y(t),\bar u(t))(u(t)-\bar u(t))\big\rangle\,dt\\ &=\int_0^T\big\langle\varphi_\alpha(t),\,\dot Y(t)-f_y(t,\bar y(t),\bar u(t))Y(t)\big\rangle\,dt\\ &=\big\langle\varphi_\alpha(T),Y(T)\big\rangle-\sum_{i=1}^k\big[\big\langle\varphi_\alpha(t_i),Y(t_i^+)\big\rangle-\big\langle\varphi_\alpha(t_i^-),Y(t_i)\big\rangle\big]-\int_0^T\big\langle\dot\varphi_\alpha(t)+f_y^\alpha(t)^\top\varphi_\alpha(t),\,Y(t)\big\rangle\,dt\\ &=\big\langle G_y(\bar y(T)),Y(T)\big\rangle-\int_0^T\big\langle\dot\varphi_\alpha(t)+f_y^\alpha(t)^\top\varphi_\alpha(t),\,Y(t)\big\rangle\,dt\\ &=\big\langle G_y(\bar y(T)),Y(T)\big\rangle+\int_0^T\big\langle l_y^\alpha(t),Y(t)\big\rangle\,dt.\end{aligned}$$

    Let $\alpha\to\infty$ in the above expression; using (3.9) and (3.11), we obtain (3.8). Therefore, we have finished the proof of Proposition 3.2.

    Based on the above proposition, we now continue to prove Theorem 3.1.

    By the optimality of ˉu, one can ascertain from the assumptions (A2) and (A3), (3.2), (3.6), (3.7), and Proposition 3.2 (see (3.8)) that

    $$\begin{aligned}0&\le\lim_{\varepsilon\to0}\frac{J(u_\varepsilon)-J(\bar u)}{\varepsilon}\\ &=\lim_{\varepsilon\to0}\int_0^T\Big\langle\int_0^1 l_y\big(t,\bar y(t)+\tau(y_\varepsilon(t)-\bar y(t)),\bar u(t)\big)\,d\tau,\,Y_\varepsilon(t)\Big\rangle\,dt+\lim_{\varepsilon\to0}\int_0^T\Big\langle\int_0^1 l_u\big(t,y_\varepsilon(t),\bar u(t)+\tau\varepsilon(u(t)-\bar u(t))\big)\,d\tau,\,u(t)-\bar u(t)\Big\rangle\,dt\\ &\quad+\lim_{\varepsilon\to0}\Big\langle\int_0^1 G_y\big(\bar y(T)+\tau(y_\varepsilon(T)-\bar y(T))\big)\,d\tau,\,Y_\varepsilon(T)\Big\rangle\\ &=\int_0^T\big\langle l_y(t,\bar y(t),\bar u(t)),Y(t)\big\rangle\,dt+\int_0^T\big\langle l_u(t,\bar y(t),\bar u(t)),u(t)-\bar u(t)\big\rangle\,dt+\big\langle G_y(\bar y(T)),Y(T)\big\rangle\\ &=\int_0^T\big\langle l_u(t,\bar y(t),\bar u(t))+f_u(t,\bar y(t),\bar u(t))^\top\bar\varphi(t),\,u(t)-\bar u(t)\big\rangle\,dt,\end{aligned}$$

    which leads to the following optimal inequality:

    $$\int_0^T\big\langle l_u(t,\bar y(t),\bar u(t))+f_u(t,\bar y(t),\bar u(t))^\top\bar\varphi(t),\,u(t)-\bar u(t)\big\rangle\,dt\ge0.$$

    Because of the arbitrariness of $u\in U_{ad}$, we get

    $$\int_0^T\big\langle l_u(t,\bar y(t),\bar u(t))+f_u(t,\bar y(t),\bar u(t))^\top\bar\varphi(t),\,u(t)-\bar u(t)\big\rangle\,dt=0. \quad (3.12)$$

    Moreover, combining this with (A1) and Proposition 3.2, it follows that, for all continuous time points $t\in[0,T]$ of $\bar u(t)$, we have

    $$H_u[t]=l_u(t,\bar y(t),\bar u(t))+f_u(t,\bar y(t),\bar u(t))^\top\bar\varphi(t)=0.$$

    We define

    $$\bar U(t)=\big\{v\in U\ \big|\ l_u(t,\bar y(t),v)+f_u(t,\bar y(t),v)^\top\bar\varphi(t)=0\big\}. \quad (3.13)$$

    We refer to this as the singular control region in the classical sense, which will be used later.

    Let

    $$Z_\varepsilon(\cdot)\equiv\begin{pmatrix}Z_\varepsilon^1(\cdot)\\ Z_\varepsilon^2(\cdot)\\ \vdots\\ Z_\varepsilon^n(\cdot)\end{pmatrix}=\frac{Y_\varepsilon(\cdot)-Y(\cdot)}{\varepsilon},$$

    and

    $$Z^k(t)=\lim_{\varepsilon\to0}Z_\varepsilon^k(t).$$

    In the same way as for (3.5), one can also claim from (A2) and (A3) that

    $$\lim_{\varepsilon\to0}\|Z_\varepsilon-Z\|_{PC([0,T])}=0, \quad (3.14)$$

    and $Z(\cdot)$ is the solution of the following system of equations:

    $$\begin{cases}\dot Z(t)=f_y(t,\bar y(t),\bar u(t))Z(t)+\frac12\big\langle f_{uu}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],[u(t)-\bar u(t)]\big\rangle+\big\langle f_{uy}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],Y(t)\big\rangle+\frac12\big\langle f_{yy}(t,\bar y(t),\bar u(t))Y(t),Y(t)\big\rangle, & t\in(0,T]\setminus\Lambda,\\ Z(t_i^+)=Z(t_i)+J_{iy}(\bar y(t_i))Z(t_i)+\frac12\big\langle J_{iyy}(\bar y(t_i))Y(t_i),Y(t_i)\big\rangle, & t_i\in\Lambda,\\ Z(0)=0,\end{cases} \quad (3.15)$$

    where

    $$\frac12\big\langle f_{uu}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],[u(t)-\bar u(t)]\big\rangle=\frac12\begin{pmatrix}\big\langle f^1_{uu}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],[u(t)-\bar u(t)]\big\rangle\\ \big\langle f^2_{uu}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],[u(t)-\bar u(t)]\big\rangle\\ \vdots\\ \big\langle f^n_{uu}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],[u(t)-\bar u(t)]\big\rangle\end{pmatrix},$$
    $$\frac12\big\langle f_{yy}(t,\bar y(t),\bar u(t))Y(t),Y(t)\big\rangle=\frac12\begin{pmatrix}\big\langle f^1_{yy}(t,\bar y(t),\bar u(t))Y(t),Y(t)\big\rangle\\ \big\langle f^2_{yy}(t,\bar y(t),\bar u(t))Y(t),Y(t)\big\rangle\\ \vdots\\ \big\langle f^n_{yy}(t,\bar y(t),\bar u(t))Y(t),Y(t)\big\rangle\end{pmatrix},$$

    and

    $$\big\langle f_{uy}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],Y(t)\big\rangle=\begin{pmatrix}\big\langle f^1_{uy}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],Y(t)\big\rangle\\ \big\langle f^2_{uy}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],Y(t)\big\rangle\\ \vdots\\ \big\langle f^n_{uy}(t,\bar y(t),\bar u(t))[u(t)-\bar u(t)],Y(t)\big\rangle\end{pmatrix}.$$

    Meanwhile, one can infer from (3.7) that $X:=YY^\top$ is the solution to the following system of equations:

    $$\begin{cases}\dot X(t)=f_y(t,\bar y(t),\bar u(t))X(t)+X(t)f_y(t,\bar y(t),\bar u(t))^\top+f_u(t,\bar y(t),\bar u(t))(u(t)-\bar u(t))Y(t)^\top+Y(t)\big(f_u(t,\bar y(t),\bar u(t))(u(t)-\bar u(t))\big)^\top, & t\in[0,T]\setminus\Lambda,\\ X(t_i^+)=X(t_i)+J_{iy}(\bar y(t_i))X(t_i)+X(t_i)J_{iy}(\bar y(t_i))^\top+J_{iy}(\bar y(t_i))X(t_i)J_{iy}(\bar y(t_i))^\top, & t_i\in\Lambda,\\ X(0)=0.\end{cases} \quad (3.16)$$

    The subsequent proposition plays a crucial role in obtaining the integral form of the second-order necessary conditions.

    Proposition 3.3. Let (A2) and (A3) hold, and let $\bar W\in PC_r([0,T],\mathbb{R}^{n\times n})$ and $\bar\Phi\in PC_l([0,T],\mathbb{R}^{n\times n})$ be the solutions of (3.3) and (3.4), respectively. Then,

    $$\int_0^T\big\langle\bar W(t)f_u(t,\bar y(t),\bar u(t))(u(t)-\bar u(t)),\,Y(t)\big\rangle\,dt=\frac12\int_0^T\big\langle\big[l_{yy}(t,\bar y(t),\bar u(t))+\bar\varphi(t)f_{yy}(t,\bar y(t),\bar u(t))\big]Y(t),Y(t)\big\rangle\,dt+\frac12\big\langle G_{yy}(\bar y(T))Y(T),Y(T)\big\rangle+\frac12\sum_{i=1}^k\big\langle\bar\varphi(t_i)J_{iyy}(\bar y(t_i))Y(t_i),Y(t_i)\big\rangle, \quad (3.17)$$

    and

    $$\begin{aligned}&\int_0^T\big\langle\big(l_{uy}(t,\bar y(t),\bar u(t))+\bar\varphi(t)f_{uy}(t,\bar y(t),\bar u(t))+\bar W(t)f_u(t,\bar y(t),\bar u(t))\big)[u(t)-\bar u(t)],\,Y(t)\big\rangle\,dt\\ &=\int_0^T\!\!\int_0^t\big\langle\big(l_{uy}(t,\bar y(t),\bar u(t))+\bar\varphi(t)f_{uy}(t,\bar y(t),\bar u(t))+\bar W(t)f_u(t,\bar y(t),\bar u(t))\big)[u(t)-\bar u(t)],\,\bar\Phi(t)\bar\Phi(s)^{-1}f_u(s,\bar y(s),\bar u(s))[u(s)-\bar u(s)]\big\rangle\,ds\,dt.\end{aligned} \quad (3.18)$$

    Proof. Since $C([0,T])$ is a dense subspace of $L^1([0,T])$, there exist function sequences $\{f_y^\alpha\}$, $\{f_{yy}^\alpha\}$, $\{f_{uy}^\alpha\}$, $\{f_u^\alpha\}$, $\{l_y^\alpha\}$, $\{l_{yy}^\alpha\}$, $\{l_{uy}^\alpha\}\subset C([0,T])$ such that

    $$\begin{cases}f_y^\alpha(\cdot)\to f_y(\cdot,\bar y(\cdot),\bar u(\cdot)), & l_y^\alpha(\cdot)\to l_y(\cdot,\bar y(\cdot),\bar u(\cdot)),\\ f_{yy}^\alpha(\cdot)\to f_{yy}(\cdot,\bar y(\cdot),\bar u(\cdot)), & l_{yy}^\alpha(\cdot)\to l_{yy}(\cdot,\bar y(\cdot),\bar u(\cdot)),\\ f_{uy}^\alpha(\cdot)\to f_{uy}(\cdot,\bar y(\cdot),\bar u(\cdot)), & l_{uy}^\alpha(\cdot)\to l_{uy}(\cdot,\bar y(\cdot),\bar u(\cdot)),\\ f_u^\alpha(\cdot)\to f_u(\cdot,\bar y(\cdot),\bar u(\cdot)), & \end{cases}\quad\text{in}\ L^1([0,T])\ \text{as}\ \alpha\to\infty. \quad (3.19)$$

    Consequently, by (A3) and (3.19), one can infer that the systems of linear impulsive matrix differential equations given by

    $$\begin{cases}\dot W_\alpha(t)=-f_y^\alpha(t)^\top W_\alpha(t)-W_\alpha(t)f_y^\alpha(t)-\bar\varphi(t)f_{yy}^\alpha(t)-l_{yy}^\alpha(t), & t\in[0,T]\setminus\Lambda,\\ W_\alpha(t_i^-)=W_\alpha(t_i^+)+J_{iy}(\bar y(t_i))^\top W_\alpha(t_i^+)+W_\alpha(t_i^+)J_{iy}(\bar y(t_i))+J_{iy}(\bar y(t_i))^\top W_\alpha(t_i^+)J_{iy}(\bar y(t_i))+\bar\varphi(t_i)J_{iyy}(\bar y(t_i)), & t_i\in\Lambda,\\ W_\alpha(T)=G_{yy}(\bar y(T)),\end{cases} \quad (3.20)$$

    and

    $$\begin{cases}\dot\Phi_\alpha(t)=f_y^\alpha(t)\Phi_\alpha(t), & t\in[0,T]\setminus\Lambda,\\ \Phi_\alpha(t_i^+)=\Phi_\alpha(t_i)+J_{iy}(\bar y(t_i))\Phi_\alpha(t_i), & t_i\in\Lambda,\\ \Phi_\alpha(0)=I,\end{cases} \quad (3.21)$$

    each have a unique solution $W_\alpha\in PC_r([0,T],\mathbb{R}^{n\times n})\cap C^1([0,T]\setminus\Lambda,\mathbb{R}^{n\times n})$ and $\Phi_\alpha\in PC_l([0,T],\mathbb{R}^{n\times n})\cap C^1([0,T]\setminus\Lambda,\mathbb{R}^{n\times n})$, respectively. Moreover, there exists a constant $\gamma>0$ such that

    $$\|W_\alpha\|_{PC}\le\gamma\quad\text{and}\quad\|\Phi_\alpha\|_{PC}\le\gamma\quad\text{for all}\ \alpha.$$

    Moreover, we have

    $$\begin{aligned}\|W_\alpha(t)-\bar W(t)\|&\le\int_t^T\|l_{yy}^\alpha(s)-l_{yy}(s,\bar y(s),\bar u(s))\|\,ds+\int_t^T\|\bar\varphi(s)\|\,\|f_{yy}^\alpha(s)-f_{yy}(s,\bar y(s),\bar u(s))\|\,ds+2\gamma\int_t^T\|f_y^\alpha(s)-f_y(s,\bar y(s),\bar u(s))\|\,ds\\ &\quad+2\int_t^T\|W_\alpha(s)-\bar W(s)\|\,\|f_y(s,\bar y(s),\bar u(s))\|\,ds+\sum_{t<t_i<T}\big(2\|J_{iy}(\bar y(t_i))\|+\|J_{iy}(\bar y(t_i))\|^2\big)\|W_\alpha(t_i)-\bar W(t_i)\|,\end{aligned}$$

    and

    $$\|\Phi_\alpha(t)-\bar\Phi(t)\|\le\gamma\int_0^t\|f_y^\alpha(s)-f_y(s,\bar y(s),\bar u(s))\|\,ds+\int_0^t\|f_y(s,\bar y(s),\bar u(s))\|\,\|\Phi_\alpha(s)-\bar\Phi(s)\|\,ds+\sum_{0<t_i<t}\|J_{iy}(\bar y(t_i))\|\,\|\Phi_\alpha(t_i)-\bar\Phi(t_i)\|.$$

    In the same way as for (3.5), we obtain

    $$\lim_{\alpha\to\infty}\|W_\alpha-\bar W\|_{PC}=0\quad\text{and}\quad\lim_{\alpha\to\infty}\|\Phi_\alpha-\bar\Phi\|_{PC}=0. \quad (3.22)$$

    In addition, it is obvious from (3.20) that $W_\alpha^\top$ is also a solution of (3.20). This means that

    $$W_\alpha(t)=W_\alpha(t)^\top\quad\text{for all}\ t\in[0,T]. \quad (3.23)$$

    Since $\mathrm{tr}(AB)=\mathrm{tr}(BA)$ for every $k\times j$ matrix $A$ and $j\times k$ matrix $B$, we can get from (3.2), (3.7), (3.16), (3.19), (3.20), (3.22), and (3.23) that

    $$\begin{aligned}&2\int_0^T\big\langle\bar W(t)f_u(t,\bar y(t),\bar u(t))(u(t)-\bar u(t)),\,Y(t)\big\rangle\,dt\\ &=2\lim_{\alpha\to\infty}\int_0^T\big\langle W_\alpha(t)\big(\dot Y(t)-f_y(t,\bar y(t),\bar u(t))Y(t)\big),\,Y(t)\big\rangle\,dt\\ &=\lim_{\alpha\to\infty}\mathrm{tr}\int_0^T\Big[W_\alpha(t)\big(\dot Y(t)-f_y(t,\bar y(t),\bar u(t))Y(t)\big)Y(t)^\top+W_\alpha(t)Y(t)\big(\dot Y(t)-f_y(t,\bar y(t),\bar u(t))Y(t)\big)^\top\Big]\,dt\\ &=\lim_{\alpha\to\infty}\mathrm{tr}\int_0^TW_\alpha(t)\Big[\dot X(t)-f_y(t,\bar y(t),\bar u(t))X(t)-X(t)f_y(t,\bar y(t),\bar u(t))^\top\Big]\,dt\\ &=\lim_{\alpha\to\infty}\Big\{-\int_0^T\big\langle\big[\dot W_\alpha(t)+W_\alpha(t)f_y(t,\bar y(t),\bar u(t))+f_y(t,\bar y(t),\bar u(t))^\top W_\alpha(t)\big]Y(t),Y(t)\big\rangle\,dt+\big\langle W_\alpha(T)Y(T),Y(T)\big\rangle+\sum_{i=1}^k\big[\big\langle W_\alpha(t_i^-)Y(t_i),Y(t_i)\big\rangle-\big\langle W_\alpha(t_i)Y(t_i^+),Y(t_i^+)\big\rangle\big]\Big\}\\ &=\lim_{\alpha\to\infty}\Big\{-\int_0^T\big\langle\big[\dot W_\alpha(t)+W_\alpha(t)f_y^\alpha(t)+f_y^\alpha(t)^\top W_\alpha(t)\big]Y(t),Y(t)\big\rangle\,dt+\big\langle W_\alpha(T)Y(T),Y(T)\big\rangle\\ &\qquad+\sum_{i=1}^k\big[\big\langle\big(W_\alpha(t_i^-)-W_\alpha(t_i)\big)Y(t_i),Y(t_i)\big\rangle-\big\langle\big(J_{iy}(\bar y(t_i))^\top W_\alpha(t_i)+W_\alpha(t_i)J_{iy}(\bar y(t_i))\big)Y(t_i),Y(t_i)\big\rangle-\big\langle J_{iy}(\bar y(t_i))^\top W_\alpha(t_i)J_{iy}(\bar y(t_i))Y(t_i),Y(t_i)\big\rangle\big]\Big\}\\ &=\lim_{\alpha\to\infty}\Big\{\int_0^T\big\langle\big[l_{yy}^\alpha(t)+\bar\varphi(t)f_{yy}^\alpha(t)\big]Y(t),Y(t)\big\rangle\,dt+\big\langle W_\alpha(T)Y(T),Y(T)\big\rangle\\ &\qquad+\sum_{i=1}^k\big[\big\langle\big(W_\alpha(t_i^-)-W_\alpha(t_i)\big)Y(t_i),Y(t_i)\big\rangle-\big\langle\big(J_{iy}(\bar y(t_i))^\top W_\alpha(t_i)+W_\alpha(t_i)J_{iy}(\bar y(t_i))\big)Y(t_i),Y(t_i)\big\rangle-\big\langle J_{iy}(\bar y(t_i))^\top W_\alpha(t_i)J_{iy}(\bar y(t_i))Y(t_i),Y(t_i)\big\rangle\big]\Big\}\\ &=\int_0^T\big\langle\big[l_{yy}(t,\bar y(t),\bar u(t))+\bar\varphi(t)f_{yy}(t,\bar y(t),\bar u(t))\big]Y(t),Y(t)\big\rangle\,dt+\big\langle G_{yy}(\bar y(T))Y(T),Y(T)\big\rangle+\sum_{i=1}^k\big\langle\bar\varphi(t_i)J_{iyy}(\bar y(t_i))Y(t_i),Y(t_i)\big\rangle,\end{aligned}$$

    i.e., (3.17) holds.

    Now, let us prove (3.18). By (3.4), (3.7), (3.21), and (3.22), the variation-of-constants formula for the impulsive variational system (3.7) yields the representation $Y(t)=\int_0^t\bar\Phi(t)\bar\Phi(s)^{-1}f_u(s,\bar y(s),\bar u(s))[u(s)-\bar u(s)]\,ds$; substituting this representation for $Y(t)$ on the left-hand side shows that (3.18) holds. Therefore, we have finished the proof of Proposition 3.3.

    Based on the above propositions, we now continue to prove Theorem 3.1.

    Since $\bar u$ represents the optimal control of $J$ over $U_{ad}$, together with Proposition 3.2 (see (3.8)) and (3.12), we have

    (3.24)

    Taken together with (A2), (A3), and (3.24), one can get

    Then, combining this with (3.6) and (3.14), the above expression, (A2), and (A3) leads to the following:

    By (3.2), we have

    (3.25)

    Since $C([0,T])$ is a dense subspace of $L^1([0,T])$, there exist function sequences such that

    (3.26)

    It follows immediately from (3.15), (3.19), and (3.26) that the system of equations given by

    (3.27)

    has a unique solution and

    (3.28)

    where $Z$ is the solution to (3.15).

    Moreover, we can infer from (3.19) and (3.26)–(3.28) that

    (3.29)

    Taken together with (3.29), we deduce from (3.25) that

    (3.30)

    The following inequality follows from (3.30) and (3.17):

    (3.31)

    Then, by (3.18) and (3.31), we can show that

    (3.32)

    Thus, we have finished the proof of Theorem 3.1.

    Remark 3.4. Theorem 3.1 does not establish whether the optimal control of Problem P is singular or nonsingular; it is a unified conclusion, similar to the equations in (4.5.2) of Theorem 4.2 in [15]. Therefore, Theorem 3.1 is a generalization of Theorem 4.2 in [15] to impulsive controlled systems. Based on Theorem 3.1, singular control and nonsingular control in the classical sense can be considered under a unified framework for Problem P.
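    For computations, the impulsive adjoint equation (3.2) and the matrix equation (3.3) can be integrated backward in time with their resets applied at each impulse time, the variational state (3.7) integrated forward, and, by (3.18), the double integral in Theorem 3.1 evaluated as $\int_0^T\langle[H_{uy}[t]+\bar W(t)f_u[t]](u(t)-\bar u(t)),Y(t)\rangle\,dt$. The following Python sketch shows this structure for a scalar state and control on a uniform grid; the dictionary of problem data, the candidate pair $(\bar y,\bar u)$, and the explicit Euler discretization are hypothetical choices made only for illustration and are not part of the paper's analysis.

```python
def second_order_form(ts, ybar, ubar, du, data):
    """Evaluate the quadratic form of Theorem 3.1 (scalar state and control).

    ts, ybar, ubar: uniform time grid and candidate trajectory/control values;
    du: grid values of the perturbation u - ubar;
    data: dict with scalar derivative functions f_y, f_u, f_yy, f_uy, f_uu,
    l_y, l_yy, l_uy, l_uu (arguments (t, y, u)), terminal derivatives G_y, G_yy,
    jump derivatives J_y[i], J_yy[i], and impulse grid indices data['imp'].
    """
    n = len(ts)
    h = ts[1] - ts[0]
    imp = data['imp']
    phi = [0.0] * n
    W = [0.0] * n
    phi[-1] = data['G_y'](ybar[-1])
    W[-1] = data['G_yy'](ybar[-1])
    for k in range(n - 1, 0, -1):
        a = (ts[k], ybar[k], ubar[k])
        # backward Euler-type step for (3.2): phi' = -f_y*phi - l_y
        phi[k - 1] = phi[k] + h * (data['f_y'](*a) * phi[k] + data['l_y'](*a))
        # backward step for (3.3): W' = -2*f_y*W - phi*f_yy - l_yy (scalar case)
        W[k - 1] = W[k] + h * (2.0 * data['f_y'](*a) * W[k]
                               + phi[k] * data['f_yy'](*a) + data['l_yy'](*a))
        if k - 1 in imp:
            i = imp.index(k - 1)
            Jy = data['J_y'][i](ybar[k - 1])
            Jyy = data['J_yy'][i](ybar[k - 1])
            # resets at t_i from (3.3) and then (3.2), using right-hand values
            W[k - 1] = (1.0 + Jy) ** 2 * W[k - 1] + phi[k - 1] * Jyy
            phi[k - 1] = (1.0 + Jy) * phi[k - 1]
    Y = 0.0
    Q = 0.0
    for k in range(n - 1):
        a = (ts[k], ybar[k], ubar[k])
        H_uu = data['l_uu'](*a) + phi[k] * data['f_uu'](*a)
        H_uy = data['l_uy'](*a) + phi[k] * data['f_uy'](*a)
        # integrand of Theorem 3.1, with the double integral replaced via (3.18)
        Q += h * (0.5 * H_uu * du[k] ** 2 + (H_uy + W[k] * data['f_u'](*a)) * du[k] * Y)
        # forward Euler step for the variational equation (3.7)
        Y += h * (data['f_y'](*a) * Y + data['f_u'](*a) * du[k])
        if k + 1 in imp:
            i = imp.index(k + 1)
            Y = Y + data['J_y'][i](ybar[k + 1]) * Y   # jump of Y at t_i from (3.7)
    return Q   # Theorem 3.1 requires Q >= 0 for every admissible perturbation
```

    Under these assumptions, a negative value of the returned quantity for some admissible perturbation would indicate, in the spirit of Theorem 3.1, that the candidate control cannot be optimal.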

    In this section, on the basis of Theorem 3.1 in the previous section, we first obtain the Legendre-Clebsch condition; then, we give a corollary for the integral form of the second-order necessary optimality conditions for optimal singular control; finally, we give the pointwise Jacobson type necessary conditions and the pointwise Legendre-Clebsch condition.

    Corollary 4.1. Let (A1)–(A3) hold and let $\bar u$ denote the optimal control of $J$ over $U_{ad}$; it is necessary that there exist a pair of functions $(\bar y,\bar\varphi)$ such that (3.1), (3.2), and

    (4.1)

    hold.

    Proof. To prove Corollary 4.1, let ; take the special control variational problem as follows:

    (4.2)

    where is a constant vector, is a sufficiently small positive number, is any continuous time of . For , satisfies (3.4); then, the solution of the variational problem (3.7) is given by

    Then

    where . Utilizing the continuity of the mean value theorem for integrals, we have

    (4.3)

    Substituting (4.3) into (3.32), we have

    Observe that is independent of and can be arbitrarily small; we have

    Since is an arbitrary vector and denotes arbitrary continuous time, holds.

    Remark 4.2. Condition (4.1) represents the Legendre-Clebsch condition for the optimal control problem Problem P. At the same time, it also shows the rationality of the conventional hypothesis .

    Remark 4.3. For the LQ problem, where , when , the problem is called a nonsingular problem; when , the problem is called the totally singular case; when , the problem is called the partially singular case (see Definitions (4.4)–(4.6) in [15]). Whether or not the optimal control problem is singular, and regardless of the kind of singularity, this classification standard is often adopted.

    The following is a corollary of Theorem 3.1 in the case in which , that is, Problem P is a totally singular problem according to the definitions in [15].

    Corollary 4.4. Let (A1)–(A3) hold and let $\bar u$ denote the optimal singular control of $J$ over $U_{ad}$; it is necessary that there exist functions $(\bar y,\bar\varphi,\bar W,\bar\Phi)$ that satisfy (3.1)–(3.4) and

    Remark 4.5. Corollary 4.4 is the analogue of Theorem 4.3 in [25] for Problem P.

    The following is the pointwise Jacobson-type second-order necessary optimality condition for Problem P. The conclusion is similar to that of Theorem 4.3 in [25]. Note that the set (see (3.13)) of values for the perturbation control differs from the set of values described in [32]; thus, it essentially confirms the pointwise characteristic. The author of [25] has proven similar conclusions under the weaker assumption that the control region is a Polish space. In fact, under our basic assumption (A1), we can use the method on page 93 in [15] to prove it.

    Theorem 4.6. Let (A1)–(A3) hold and let $\bar u$ denote the optimal singular control of $J$ over $U_{ad}$; it is necessary that there exist functions $(\bar y,\bar\varphi,\bar W,\bar\Phi)$ that satisfy (3.1)–(3.4), and, for all continuous time of $\bar u$, we have

    (4.4)

    Proof. Noting the conditions implied by Definition 2.3 and applying the same control perturbation as for (4.2), we have

    (4.5)

    and the dominant term in the expansion of (4.5) for sufficiently small is given by

    Since can be chosen as any continuous time of , let ; thus, (4.4) holds and we have finished the proof of Theorem 4.6.

    Using the same idea as in Theorem 4.6, we can also obtain the pointwise Legendre-Clebsch necessary optimality condition corresponding to Corollary 4.1.

    Corollary 4.7. Let (A1)–(A3) hold and let $\bar u$ denote the optimal control of $J$ over $U_{ad}$; it is necessary that there exist a pair of functions $(\bar y,\bar\varphi)$ such that (3.1), (3.2), and, for all continuous time of $\bar u$,

    (4.6)

    hold.

    Remark 4.8. Comparing Corollaries 4.7 and 4.1, it can be found that if the pointwise condition is satisfied, is not required, as only (4.6) needs to be satisfied.

    In this section, we will give an example to illustrate the effectiveness of Theorem 4.6.

    Let

    subject to

    Obviously, the Hamiltonian satisfies the conditions for linear control, as denoted by and . According to Remark 2.4, the problem is singular, i.e., . It is not difficult to assert that denotes singular control; this is because, by , we can get that , and ; consequently, , , and ; by Definition 2.3, denotes singular control. The question is whether it is optimal singular control. Now, let us use Theorem 4.6 to show that it is not optimal singular control.

    By (3.2), we have

    Using (3.3), we have

    and

    and

    Substituting , and directly into the above equations, the following results can be obtained directly

    and

    By (4.4), the necessary condition for the singular control to be the optimal control scheme is given by

    (5.1)

    But, because of the above equations, (5.1) cannot be true for arbitrary but fixed . Therefore, regarding singular control , according to Theorem 4.6, it must not be optimal singular control.

    In this paper, we have investigated the pointwise Jacobson-type necessary conditions for Problem P. By introducing an impulsive linear matrix Riccati differential equation, we derived an integral representation of the second-order variation of the cost functional. On this basis, we obtained the integral form of the second-order necessary conditions and the pointwise Jacobson-type necessary conditions for optimal singular control in the classical sense. Incidentally, the Legendre-Clebsch condition and the pointwise Legendre-Clebsch condition were also obtained. These conclusions were derived under weaker conditions, thereby enriching the existing results. In the future, we will continue to study pointwise Jacobson-type second-order necessary optimality conditions in the Pontryagin sense.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the anonymous referees for their helpful comments and valuable suggestions which have improved the quality of the manuscript. This work was supported by the National Natural Science Foundation of China (No. 12061021 and No. 11161009).

    The authors declare that there is no conflict of interest.



    [1] V. Lakshmikantham, D. D. Bainov, P. S. Simeonov, Theory of Impulsive Differential Equations, World Scientific, London, 1989.
    [2] A. M. Samoilenko, N. A. Perestyuk, Y. Chapovsky, Impulsive Differential Equations, World Scientific, Hong Kong, 1995.
    [3] T. Yang, Impulsive Control Theory, Springer-Verlag, Hong Kong, 2001.
    [4] S. I. Gurgula, N. A. Perestyuk, On stability of solutions of impulsive systems, Vestn. Kiev Univ., Mat. Mekh, 23 (1981), 33–40.
    [5] A. Bensoussan, Optimal impulsive control theory, Lect. Notes Control Inf. Sci., (1979), 17–41. https://doi.org/10.1007/bfb0009374
    [6] P. L. Lions, B. Perthame, Quasi-variational inequalities and ergodic impulse control, SIAM J. Control Optim., 24 (1986), 604–615. https://doi.org/10.1137/0324036 doi: 10.1137/0324036
    [7] Y. Wang, J. Lu, Some recent results of analysis and control for impulsive systems, Commun. Nonlinear Sci. Numer. Simul., 80 (2019), 1–15. https://doi.org/10.1016/j.cnsns.2019.104862 doi: 10.1016/j.cnsns.2019.104862
    [8] N. U. Ahmed, Existence of optimal controls for a general class of impulsive systems on Banach spaces, SIAM J. Control Optim., 42 (2003), 669–685. https://doi.org/10.1137/S0363012901391299 doi: 10.1137/S0363012901391299
    [9] J. P. Belfo, J. M. Lemos, Optimal Impulsive Control for Cancer Therapy, Springer, Switzerland, 2021. https://doi.org/10.1007/978-3-030-50488-5
    [10] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, E. F. Mischenko, The Mathematical Theory of Optimal Processes, Wiley-Interscience, New York, 1962.
    [11] J. Yong, X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer, New York, 1999.
    [12] X. Li, J. Yong, Optimal Control Theory for Infinite Dimensional Systems, Birkhauser, Boston, 1995.
    [13] H. Frankowska, D. Hoehener, Jacobson type necessary optimality conditions for general control systems, in 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 54 (2015), 1304–1309. https://doi.org/10.1109/cdc.2015.7402391
    [14] R. F. Gabasov, F. M. Kirillova, High order necessary conditions for optimality, SIAM J. Control, 10 (1972), 127–168. https://doi.org/10.1137/0310012 doi: 10.1137/0310012
    [15] D. J. Bell, D. H. Jacobson, Singular Optimal Control Problems, Academic Press, NewYork, 1975.
    [16] J. L. Speyer, D. H. Jacobson, Primer on Optimal Control Theory, Society for Industrial and Applied Mathematics, Philadelphia, 2010.
    [17] B. S. Goh, Necessary conditions for singular extremals involving multiple control variables, SIAM J. Control, 4 (1966), 716–731. https://doi.org/10.1137/0304052 doi: 10.1137/0304052
    [18] M. Aronna, J. Bonnans, A. V. Dmitruk, P. Lotito, Quadratic order conditions for bang-singular extremals, preprint, arXiv: 1107.0161.
    [19] H. Frankowska, D. Tonon, Pointwise second-order necessary optimality conditions for the Mayer problem with control constraints, SIAM J. Control Optim., 51 (2013), 3814–3843. https://doi.org/10.1137/130906799 doi: 10.1137/130906799
    [20] A. J. Krener, The high order maximal principle and its application to singular extremals, SIAM J. Control Optim., 15 (1977), 256–293. https://doi.org/10.1137/0315019 doi: 10.1137/0315019
    [21] H. Schättler, U. Ledzewicz, Geometric Optimal Control: Theory, Methods and Examples, Springer, New York, 2012.
    [22] D. H. Jacobson, A new necessary condition of optimality for singular control problems, SIAM J. Control, 7 (1969), 578–595. https://doi.org/10.1137/0307042 doi: 10.1137/0307042
    [23] D. H. Jacobson, On conditions of optimality for singular control problems, IEEE Trans. Autom. Control, 15 (1970), 109–110. https://doi.org/10.1109/tac.1970.1099361 doi: 10.1109/tac.1970.1099361
    [24] D. H. Jacobson, Sufficient conditions for nonnegativity of the second variation in singular and nonsingular control problems, SIAM J. Control, 8 (1970), 403–423. https://doi.org/10.1137/0308029 doi: 10.1137/0308029
    [25] H. Lou, Second-order necessary/sufficient conditions for optimal control problem in the absence of linear structure, preprint, arXiv: 1008.1020.
    [26] S. Tang, A second-order maximum principle for singular optimal stochastic controls, Discrete Contin. Dyn. Syst. Ser. B, 14 (2010), 1581–1599. https://doi.org/10.3934/dcdsb.2010.14.1581 doi: 10.3934/dcdsb.2010.14.1581
    [27] H. Frankowska, Q. Lü, Second order necessary conditions for optimal control problems of evolution equations involving final point equality constraints, ESAIM. Control Optim. Calc. Var., 27 (2021), 1–38. https://doi.org/10.1051/cocv/2021065 doi: 10.1051/cocv/2021065
    [28] H. Frankowska, D. Hoehener, Pointwise second-order necessary optimality conditions and second-order sensitivity relations in optimal control, J. Differ. Equations, 262 (2017), 5735–5772. https://doi.org/10.1016/j.jde.2017.02.013 doi: 10.1016/j.jde.2017.02.013
    [29] D. Hoehener, Variational approach to second-order optimality conditions for control problems with pure state constraints, SIAM J. Control Optim., 50 (2012), 1139–1173. https://doi.org/10.1137/110828320 doi: 10.1137/110828320
    [30] Q. Cui, L. Deng, X. Zhang, Pointwise second-order necessary conditions for optimal control problems evolved on Riemannian manifolds, C. R. Math., 354 (2016), 191–194. https://doi.org/10.1016/j.crma.2015.09.032 doi: 10.1016/j.crma.2015.09.032
    [31] L. Deng, X. Zhang, Second order necessary conditions for endpoints-constrained optimal control problems on Riemannian manifolds, J. Differ. Equations, 272 (2021), 854–910. https://doi.org/10.1016/j.jde.2020.10.005 doi: 10.1016/j.jde.2020.10.005
    [32] A. Ashyralyev, Y. A. Sharifov, Optimal control problem for impulsive systems with integral boundary conditions, Electron. J. Differ. Equations, 2013 (2013), 1–11. https://doi.org/10.1063/1.4747627 doi: 10.1063/1.4747627
    [33] A. V. Arutyunov, D. Y. Karamzin, F. L. Pereira, N. Y. Chernikova, Second-order necessary optimality conditions in optimal impulsive control problems, Differ. Equations, 54 (2018), 1083–1101. https://doi.org/10.1134/s0012266118080086 doi: 10.1134/s0012266118080086
    [34] V. A. Dykhta, The variational maximum principle and second-order optimality conditions for impulse processes and singular processes, Sib. Math. J., 35 (1994), 65–76. https://doi.org/10.1007/bf02104948 doi: 10.1007/bf02104948
    [35] A. Arutyunov, V. Jacimovic, F. Pereira, Second order necessary conditions for optimal impulsive control problems, J. Dyn. Control Syst., 9 (2003), 131–153. https://doi.org/10.1023/A:1022111402527 doi: 10.1023/A:1022111402527
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
