Research article

Analysis of tensile properties in tempered martensite steels with different cementite particle size distributions

  • Received: 31 July 2024 Revised: 15 October 2024 Accepted: 28 October 2024 Published: 01 November 2024
  • In this study, the tensile properties of tempered martensite steel were analyzed using a combination of an experimental approach and deep learning. The martensite steels were tempered in two stages, and fine and coarse cementite particles were mixed through two-stage tempering. The samples were heated to 923 and 973 K and held isothermally for 30, 45, and 60 min. They were then cooled to 723, 773, and 823 K; held isothermally for 30, 45, and 60 min; and furnace-cooled to room temperature (296 ± 2 K). The combination of low tempering temperature and short holding time in the first stage resulted in high tensile strength. When the tempering temperature at the first stage was 923 K, the combination of low tempering temperature and long holding time at the second stage resulted in high total elongation. This means that decreasing the number of coarse cementite particles and increasing the number of fine cementite particles improve the strength–ductility balance. Using the results obtained by the experimental approach, an image-based regression model was constructed that can accurately suggest the relationship between the microstructure and tensile properties of tempered martensite steel. We succeeded in developing image-based regression models with high accuracy using a convolutional neural network (CNN). Moreover, gradient-weighted class activation mapping (Grad-CAM) suggested that fine cementite particles and coarse and spheroidal cementite particles are the dominant factors for tensile strength and total elongation, respectively.

    Citation: Kengo Sawai, Keiya Sugiura, Toshio Ogawa, Ta-Te Chen, Fei Sun, Yoshitaka Adachi. Analysis of tensile properties in tempered martensite steels with different cementite particle size distributions[J]. AIMS Materials Science, 2024, 11(5): 1056-1064. doi: 10.3934/matersci.2024050




    Impulsive differential equations are employed to analyze real-world phenomena characterized by instantaneous changes of state. They have been found to play a vital role in many areas, such as sampled-data control, communication networks, industrial robots, biology, and so on. Due to the widespread existence of impulsive perturbations, extensive research has been dedicated to the stability of impulsive differential systems (see [1,2,3,4]). Meanwhile, some scholars have focused on the optimal impulsive control problem, which has resulted in numerous interesting findings (see [5,6,7,8,9]).

    Finding the necessary optimality conditions is one of the central tasks in optimal control theory. Pontryagin and his co-authors have made milestone contributions (see [10]). As pointed out in [11]: "The mathematical significance of the maximum principle lies in that maximizing the Hamiltonian is much easier than the original control problem that is infinite-dimensional". Essentially, the Hamiltonian maximization occurs pointwise. Since Pontryagin's maximum principle was discovered, the first-order and second-order necessary conditions for optimal control problems in both finite- and infinite-dimensional spaces have been extensively researched (see [11,12]).

    However, it is not always possible to find the optimal control by pointwise maximization of the Hamiltonian; such problems are often referred to as singular control problems. For singular control problems, the first task is to discover new necessary conditions that distinguish optimal singular controls from other singular controls; one way to do this is to look for second-order conditions. It is common to seek a second-order necessary condition requiring a certain quadratic functional to be non-negative. Ideally, we would prefer the second-order necessary conditions to have a pointwise character similar to Pontryagin's maximum principle, that is, to be expressed as the pointwise maximization of a particular function. The pointwise necessary optimality conditions are reviewed in [13]; the Jacobson conditions and the Goh conditions are generally regarded as the two types of pointwise second-order necessary optimality conditions for the optimal singular control problem. In addition, references [14,15,16] are very comprehensive references on the singular control problem. Regarding the Goh conditions, the original contribution can be found in [17]. In short, the Goh conditions are obtained by applying Goh's transformation, which turns the original singular problem into a new nonsingular one; the classical Legendre-Clebsch condition may then be applied to the nonsingular optimal control problem. In this process, Goh's transformation may be used several times, and therefore the Goh conditions are also called the generalized Legendre-Clebsch conditions; recent results are abundant, see, for example, [18,19,20,21] and the references therein. Conversely, limited attention has been given to investigating the Jacobson-type conditions.

    There is an interesting story behind the Jacobson-type necessary conditions. Until Jacobson discovered them, it was thought that there was no Riccati-type matrix differential equation for singular control problems. In fact, Jacobson introduced a linear matrix Riccati differential equation analogous to the nonlinear matrix Riccati differential equation of the standard LQ problem, and thus a "new" necessary condition for singular control was obtained in [22]. Afterwards, Jacobson found that this new condition is different from the generalized Legendre-Clebsch condition and demonstrated in [23] that these two necessary conditions are generally insufficient for optimality. Therefore, matrix Riccati differential equations can be used not only for nonsingular problems but also for important optimal singular control problems; singular control and nonsingular control can even be considered under a unified framework (see Theorem 4.2 in [15,24]).

    Recent studies [25,26] have established Jacobson-type pointwise second-order necessary optimality conditions for deterministic and stochastic optimal singular control problems, respectively. In particular, relaxation methods were used in [25] to overcome the lack of a linear structure in the control problem, and the pointwise second-order necessary conditions were obtained there. When discussing the second-order necessary conditions, only the admissible set of singular controls is considered, rather than the original admissible control set, which can greatly reduce the computational expense of singular control problems. The approaches of [25,26] have also been applied to derive pointwise second-order necessary optimality conditions for singular control problems with constraints in finite- or infinite-dimensional spaces; for further details, please refer to [13,27,28,29,30,31]. Theorem 4.3 in [25] gives the Jacobson-type second-order necessary optimality condition, in the sense of Pontryagin, for singular control problems governed by impulse-free controlled systems. For the definitions of singular control in the classical sense and in the Pontryagin sense, see Definitions 1 and 2 in [14], as well as (4.4) and (4.5) in [25].

    Regarding the importance of impulsive systems, the second-order necessary conditions for the optimal control problem governed by impulsive differential systems were studied in [32], which focuses on impulsive systems with multi-point nonlocal and integral boundary conditions; the second-order necessary conditions in integral form were obtained there by directly introducing an impulsive matrix function. When discussing the second-order necessary conditions for singular control in [32], the perturbation control takes its values in the whole space, not in the singular control region (see (3.13)); therefore, that result does not yield pointwise Jacobson-type necessary conditions. In addition, references [33,34,35] consider second-order necessary conditions in which the impulsive control is a measure; the system can then experience an infinite number of jumps in a finite amount of time, which is different from the setting considered here.

    Inspired by the above discussions, we aimed to study the following optimal control problem, as governed by impulsive differential systems, which differs from the previous works.

    Problem P: Let T>0 and \Lambda = \{t_i\mid 0<t_1<t_2<\cdots<t_k<T\}\subset(0,T) be given; U\subseteq\mathbb{R}^{m} is a nonempty bounded convex open set.

    \begin{eqnarray} \min J(u(\cdot)) = \int_{0}^{T}l\left(t,y(t),u(t)\right)dt+G\left(y(T)\right), \end{eqnarray} (1.1)

    subject to

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{y}(t) = f\left(t,y(t),u(t)\right), & t\in[0,T]\setminus\Lambda,\\ y(t_i+)-y(t_i) = J_{i}\left(y(t_i)\right), & t_i\in\Lambda,\\ y(0) = y_{0}, \end{array}\right. \end{eqnarray} (1.2)

    where u(\cdot)\in\mathcal{U}_{ad} = \left\{u(\cdot)\mid u(\cdot)\ \text{measurable},\ u(t)\in U\right\} , and l , G , f , and J_{i}\ (i = 1,2,\ldots,k) are the given maps.
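    Before turning to the theory, the following minimal numerical sketch illustrates how a trajectory of (1.2) and the cost (1.1) can be evaluated for a given control. The dynamics, jump maps, costs, and grid size below are illustrative assumptions for this sketch only, not data from this paper.

    import numpy as np

    # Illustrative problem data (assumptions for this sketch only):
    # f(t, y, u) = -y + u, jump maps J_i(y) = 0.5*y at t_1 = 0.4 and t_2 = 0.7,
    # running cost l(t, y, u) = y**2 + u**2, terminal cost G(y) = y**2.
    f = lambda t, y, u: -y + u
    jumps = {0.4: lambda y: 0.5 * y, 0.7: lambda y: 0.5 * y}
    l = lambda t, y, u: y**2 + u**2
    G = lambda y: y**2

    def cost_J(u_func, y0=1.0, T=1.0, n=2000):
        """Forward-Euler integration of (1.2) with state jumps at t_i; returns J(u) of (1.1)."""
        ts = np.linspace(0.0, T, n + 1)
        dt = ts[1] - ts[0]
        y, J = y0, 0.0
        for t in ts[:-1]:
            u = u_func(t)
            J += l(t, y, u) * dt                  # accumulate the running cost
            y += f(t, y, u) * dt                  # continuous evolution between impulses
            for ti, Ji in jumps.items():          # apply y(ti+) = y(ti) + J_i(y(ti))
                if t < ti <= t + dt:
                    y += Ji(y)
        return J + G(y)

    print(cost_J(lambda t: 0.0))    # cost of the zero control
    print(cost_J(lambda t: -0.5))   # cost of a constant control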

    Here, we generalize the control model of [15,25] to an impulsive controlled system. For Problem P, the difficulty lies in introducing the appropriate impulsive adjoint matrix differential equation, which we overcome by borrowing the method presented in [25]. The main conclusions are Theorems 3.1 and 4.6: the former is a generalization of Theorem 4.2 in [15] to impulsive controlled systems; the latter is analogous to Theorem 4.3 in [25], the difference being that Theorem 4.6 concerns impulsive controlled systems and singular controls in the classical sense, whereas Theorem 4.3 in [25] concerns impulse-free controlled systems and singular controls in the Pontryagin sense.

    The main novelties and contributions of this paper can be summarized as follows: (i) the generalization of impulse-free controlled systems to impulsive controlled systems, which increases the applicability of the model; (ii) by using the density of C([0,T]) in L^{1}([0,T]) , a result from functional analysis, the assumptions required for the conclusions are weakened; (iii) pointwise Jacobson-type necessary conditions are obtained, which facilitates computation and makes it possible to distinguish the optimal singular control from other singular controls in the classical sense (see Definition 2.3).

    The outline of this paper is as follows. Some preliminaries are proposed in Section 2. In Section 3, the integral-form second-order necessary conditions are derived. In Section 4, the pointwise Jacobson-type necessary conditions are given. An example is considered to elucidate the proposed main results in Section 5, and Section 6 concludes this paper.

    In this section, we present some preliminaries, which include the basic assumptions, the definition of singular control in the classical sense, the solvability of the impulsive system, and a lemma obtained via functional analysis.

    Let B^{\top} denote the transpose of a matrix B . Define C^{1}\left([0,T]\setminus\Lambda,\mathbb{R}^{n}\right) = \{y:[0,T]\rightarrow\mathbb{R}^{n}\mid y \text{ is continuously differentiable at } t\in[0,T]\setminus\Lambda\} and PC_{l}\left([0,T],\mathbb{R}^{n}\right)\ \left(PC_{r}\left([0,T],\mathbb{R}^{n}\right)\right) = \{y:[0,T]\rightarrow\mathbb{R}^{n}\mid y \text{ is continuous at } t\in[0,T]\setminus\Lambda,\ y \text{ is left (right) continuous and has a right (left) limit at } t\in\Lambda\} . Endowed with the norm \|y\|_{PC} = \sup\{\|y(t+)\|,\|y(t)\|\mid t\in[0,T]\} , PC_{l}\left([0,T],\mathbb{R}^{n}\right) and PC_{r}\left([0,T],\mathbb{R}^{n}\right) are Banach spaces.

    Let us assume the following:

    (A1) U\subseteq\mathbb{R}^{m} is a nonempty bounded convex open set.

    (A2) The function F = (f,l)^{\top}:[0,T]\times\mathbb{R}^{n}\times U\rightarrow\mathbb{R}^{n+1} is measurable in t and twice continuously differentiable in (y,u) ; for any \rho>0 , there exists a constant L(\rho)>0 such that, for all y,\hat{y}\in\mathbb{R}^{n} and u,\hat{u}\in U with \|y\|,\|\hat{y}\|,\|u\|,\|\hat{u}\|\leq\rho , and for all t\in[0,T] such that

    \begin{eqnarray*} \left\{ \begin{array}{ll} \|F(t,y,u)-F(t,\hat{y},\hat{u})\|\leq L(\rho)\left(\|y-\hat{y}\|+\|u-\hat{u}\|\right),\\ \|F_{y}(t,y,u)-F_{y}(t,\hat{y},\hat{u})\|\leq L(\rho)\left(\|y-\hat{y}\|+\|u-\hat{u}\|\right),\\ \|F_{u}(t,y,u)-F_{u}(t,\hat{y},\hat{u})\|\leq L(\rho)\left(\|y-\hat{y}\|+\|u-\hat{u}\|\right); \end{array}\right. \end{eqnarray*}

    there is a constant h>0 such that

    \begin{eqnarray} \|F(t,y,u)\|\leq h\left(1+\|y\|\right),\quad \text{for all } (t,u)\in[0,T]\times U, \end{eqnarray} (2.1)

    where F_{y}(t,y,u) (or F_{u}(t,y,u) ) denotes the Jacobi matrix of F in y (or u ).

    (A3) The functions \tilde{J}_{i} = (J_{i},G)^{\top}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n+1}\ (i = 1,2,\ldots,k) are twice continuously differentiable in y , and, for any \rho>0 , there exists a constant L(\rho)>0 such that, for all y,\hat{y}\in\mathbb{R}^{n} with \|y\|,\|\hat{y}\|\leq\rho , we have

    \begin{eqnarray*} \left\{ \begin{array}{ll} \|\tilde{J}_{i}(y)-\tilde{J}_{i}(\hat{y})\|\leq L(\rho)\|y-\hat{y}\|,\\ \|\tilde{J}_{iy}(y)-\tilde{J}_{iy}(\hat{y})\|\leq L(\rho)\|y-\hat{y}\|, \end{array}\right. \end{eqnarray*}

    where \tilde{J}_{iy}(y) denotes the Jacobi matrix of \tilde{J}_{i} in y .

    Denote H(t) = l\left(t,y(t),u(t)\right)+\langle f\left(t,y(t),u(t)\right),\varphi(t)\rangle . To simplify the notation, [t] is used to replace (t,\bar{y}(t),\overline{\varphi}(t),\bar{u}(t)) when evaluating the dynamics f and the Hamiltonian H ; for example, f[t] = f(t,\bar{y}(t),\bar{u}(t)) and H[t] = l(t,\bar{y}(t),\bar{u}(t))+\langle f(t,\bar{y}(t),\bar{u}(t)),\overline{\varphi}(t)\rangle .

    Remark 2.1. H(t) = l\left(t,y(t),u(t)\right)+\langle f\left(t,y(t),u(t)\right),\varphi(t)\rangle is often referred to as the Hamiltonian function, or, simply, the Hamiltonian. In many works the Hamiltonian is instead taken as \tilde{H}(t) = -l\left(t,y(t),u(t)\right)+\langle f\left(t,y(t),u(t)\right),\varphi(t)\rangle ; the difference between the two is that Pontryagin's maximum principle maximizes the Hamiltonian \tilde{H} , whereas one minimizes the Hamiltonian H . Accordingly, this also leads to a difference between positivity and negativity of the optimality inequality in the maximum principle. In this article, we use H to denote the Hamiltonian; in other words, we need to minimize the Hamiltonian H .

    Now, we will prove an elementary theorem that will be useful in the following sections.

    Theorem 2.2. Let (A1)–(A3) hold; for any fixed u\in\mathcal{U}_{ad} , the control system (1.2) has a unique solution y^{u}\in PC_{l}\left([0,T],\mathbb{R}^{n}\right) given by

    \begin{eqnarray} y^{u}(t) = y_{0}+\int_{0}^{t}f\left(s,y^{u}(s),u(s)\right)ds+\sum\limits_{0<t_i<t}J_{i}\left(y^{u}(t_i)\right), \end{eqnarray} (2.2)

    and there exists a constant M = M\left(h,T,y_{0},\|J_{1}(0)\|,\|J_{2}(0)\|,\ldots,\|J_{k}(0)\|\right) such that

    \begin{eqnarray} \|y^{u}(t)\|\leq M\quad \text{for all}\ (t,u)\in[0,T]\times\mathcal{U}_{ad}. \end{eqnarray} (2.3)

    Proof. By the qualitative theory of differential equations, it follows from (A1)–(A3) that the system of equations

    \begin{eqnarray*} \left\{ \begin{array}{ll} \dot{y}(t) = f\left(t,y(t),u(t)\right), & t\in[0,t_1],\\ y(0) = y_{0}, \end{array}\right. \end{eqnarray*}

    has a unique solution y^{u}\in C\left([0,t_1],\mathbb{R}^{n}\right) given by

    \begin{eqnarray*} y^{u}(t) = y_{0}+\int_{0}^{t}f\left(s,y^{u}(s),u(s)\right)ds,\quad t\in[0,t_1], \end{eqnarray*}

    and

    \begin{eqnarray*} \|y^{u}(t)\|\leq e^{ht}\left(ht_{1}+\|y_{0}\|\right)\ \text{ for all }\ (t,u)\in[0,t_1]\times\mathcal{U}_{ad}. \end{eqnarray*}

    Let

    \begin{eqnarray*} y_{1} = y^{u}(t_1)+J_{1}\left(y^{u}(t_1)\right), \end{eqnarray*}

    then, one can also infer that the system of equations

    \begin{eqnarray*} \left\{ \begin{array}{ll} \dot{y}(t) = f\left(t,y(t),u(t)\right), & t\in(t_1,t_2],\\ y(t_1+) = y_{1}, \end{array}\right. \end{eqnarray*}

    has a unique solution y^{u}\in C\left((t_1,t_2],\mathbb{R}^{n}\right) given by

    \begin{eqnarray*} y^{u}(t) = y_{0}+\int_{0}^{t}f\left(s,y^{u}(s),u(s)\right)ds+J_{1}\left(y^{u}(t_1)\right),\quad t\in(t_1,t_2], \end{eqnarray*}

    and

    \begin{eqnarray*} \|y^{u}(t)\|\leq e^{h(t-t_1)}\left(h(t_2-t_1)+\|y_{1}\|\right)\ \text{ for all }\ (t,u)\in(t_1,t_2]\times\mathcal{U}_{ad}. \end{eqnarray*}

    Using a step-by-step method, together with

    \begin{eqnarray*} \|J_{i}\left(y^{u}(t_i)\right)\|\leq L\left(\|y^{u}(t_i)\|\right)\|y^{u}(t_i)\|+\|J_{i}(0)\|,\quad i = 1,2,\ldots,k, \end{eqnarray*}

    it is not difficult to claim that there exists a constant M such that (2.2) and (2.3) hold. Therefore, we have completed the proof of Theorem 2.2.

    To establish the main conclusion of Theorem 4.6, we now introduce the definition of singular control in the classical sense (see Definition 2 in [14], as well as (4.4) in [25]).

    Definition 2.3. We refer to the elements in the following equation as singular controls in the classical sense:

    \begin{eqnarray*} \overline{\mathcal{U}}_{ad} = \left\{v(\cdot)\in\mathcal{U}_{ad}\;\big|\;H_{u}\left(t,\bar{y}(t),v(t),\overline{\varphi}(t)\right) = 0,\ H_{uu}\left(t,\bar{y}(t),v(t),\overline{\varphi}(t)\right) = 0\right\}. \end{eqnarray*}

    Remark 2.4. If the Hamiltonian H is linear in the control u , then H_{uu}\equiv 0 , and \overline{\mathcal{U}}_{ad} is a set of singular controls in the classical sense.
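    As a minimal illustration of this remark (a generic control-affine Hamiltonian, not the data of any specific problem in this paper), take

    \begin{eqnarray*} H(t,y,u,\varphi) = l_{0}(t,y)+\langle c(t,y),u\rangle+\langle f_{0}(t,y)+B(t,y)u,\varphi\rangle, \end{eqnarray*}

    so that H_{u}(t,y,u,\varphi) = c(t,y)+B(t,y)^{\top}\varphi and H_{uu}(t,y,u,\varphi) = 0 . Every admissible control along which c(t,\bar{y}(t))+B(t,\bar{y}(t))^{\top}\overline{\varphi}(t) = 0 is therefore singular in the sense of Definition 2.3, and the first two derivatives of H in u give no way to single out the optimal one; this is exactly the situation addressed by the second-order conditions below.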

    Now, we shall introduce a fundamental lemma, as derived from functional analysis, that will be utilized to establish the necessary condition for Problem P.

    Lemma 2.5. Let h(t) be an n -dimensional piecewise continuous vector-valued function on [0,T] , and suppose that

    \begin{eqnarray*} \int_{0}^{T}\langle h(t),a(t)\rangle dt = 0 \end{eqnarray*}

    for all n -dimensional piecewise continuous vector-valued functions a(t) on [0,T] ; then, h(t) = 0 at every continuity point of h(t) on [0,T] .

    Proof. Suppose, to the contrary, that there is a continuity point \bar{t} of h(t) with h(\bar{t})\neq 0 . Because h(t) is an n -dimensional piecewise continuous vector-valued function, there exists an interval I_{\bar{t}} containing \bar{t} such that

    \begin{eqnarray*} h(t)\neq 0,\quad t\in I_{\bar{t}}. \end{eqnarray*}

    In this case, taking

    \begin{eqnarray*} a(t) = \left\{ \begin{array}{ll} h(t), & t\in I_{\bar{t}},\\ 0, & t\in[0,T]\setminus I_{\bar{t}}, \end{array}\right. \end{eqnarray*}

    we obtain

    \begin{eqnarray*} \int_{0}^{T}\langle h(t),a(t)\rangle dt = \int_{I_{\bar{t}}}h(t)^{\top}h(t)dt = \int_{I_{\bar{t}}}\|h(t)\|^{2}dt>0. \end{eqnarray*}

    This is a contradiction. Therefore, we have finished the proof of Lemma 2.5.

    The purpose of this section is to prove Theorem 3.1, which establishes the integral form of the second-order necessary conditions for Problem P. The basic idea is that the condition of being second-order variational and non-negative is a necessary condition for an optimal control problem. We will borrow the method adopted in [25] and introduce a linear impulsive adjoint matrix to prove it.

    Theorem 3.1. Let (A1)–(A3) hold and \bar{u} represent the optimal control of J over \mathcal{U}_{ad} ; it is necessary that there exist functions (\overline{y},\overline{\varphi},\overline{W},\overline{\Phi})\in PC_{l}\left([0,T],\mathbb{R}^{n}\right)\times PC_{r}\left([0,T],\mathbb{R}^{n}\right)\times PC_{r}\left([0,T],\mathbb{R}^{n\times n}\right)\times PC_{l}\left([0,T],\mathbb{R}^{n\times n}\right) such that the following equations and inequality hold:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\bar{y}}(t) = f\left(t,\bar{y}(t),\bar{u}(t)\right), & t\in[0,T]\setminus\Lambda,\\ \bar{y}(t_i+)-\bar{y}(t_i) = J_{i}\left(\bar{y}(t_i)\right), & t_i\in\Lambda,\\ \bar{y}(0) = y_{0}; \end{array}\right. \end{eqnarray} (3.1)

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\overline{\varphi}}(t) = -f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t)-l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right), & t\in[0,T]\setminus\Lambda,\\ \overline{\varphi}(t_i-) = \overline{\varphi}(t_i)+J_{iy}(\bar{y}(t_i))\overline{\varphi}(t_i), & t_i\in\Lambda,\\ \overline{\varphi}(T) = G_{y}\left(\bar{y}(T)\right); \end{array}\right. \end{eqnarray} (3.2)

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\overline{W}}(t) = -f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{W}(t)-\overline{W}(t)f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}-\overline{\varphi}(t)^{\top}f_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)-l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right), & t\in[0,T]\setminus\Lambda,\\ \overline{W}(t_i-) = \overline{W}(t_i)+J_{iy}(\bar{y}(t_i))\overline{W}(t_i)+\overline{W}(t_i)J_{iy}(\bar{y}(t_i))^{\top}+J_{iy}(\bar{y}(t_i))\overline{W}(t_i)J_{iy}(\bar{y}(t_i))^{\top}+\overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_i)), & t_i\in\Lambda,\\ \overline{W}(T) = G_{yy}\left(\bar{y}(T)\right); \end{array}\right. \end{eqnarray} (3.3)

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\overline{\Phi}}(t) = f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\overline{\Phi}(t), & t\in[0,T]\setminus\Lambda,\\ \overline{\Phi}(t_i+) = \overline{\Phi}(t_i)+J_{iy}(\bar{y}(t_i))^{\top}\overline{\Phi}(t_i), & t_i\in\Lambda,\\ \overline{\Phi}(0) = I; \end{array}\right. \end{eqnarray} (3.4)

    \begin{eqnarray*} &&\frac{1}{2}\int_{0}^{T}\langle H_{uu}[t][u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\\ &&+\int_{0}^{T}\int_{0}^{t}\langle\left[H_{uy}[t]+\overline{W}(t)f_{u}[t]^{\top}\right][u(t)-\bar{u}(t)],\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}[s]^{\top}[u(s)-\bar{u}(s)]\rangle ds\,dt\geq0\quad \text{for all}\ u(\cdot)\in\mathcal{U}_{ad}. \end{eqnarray*}

    Proof. Now, let (\bar{y}(\cdot),\bar{u}(\cdot)) be the given optimal pair and \varepsilon\in(0,1] . For an arbitrary but fixed u(\cdot)\in\mathcal{U}_{ad} , let u^{\varepsilon}(\cdot) = \bar{u}(\cdot)+\varepsilon\left(u(\cdot)-\bar{u}(\cdot)\right) . It follows from assumption (A1) that u^{\varepsilon}(\cdot)\in\mathcal{U}_{ad} ; according to Theorem 2.2, u^{\varepsilon} determines the unique admissible state y^{\varepsilon}(\cdot) ; then, we can get from (A2), (A3), and Theorem 2.2 (see (2.2) and (2.3)) that

    \begin{eqnarray*} \|y^{\varepsilon}(t)-\bar{y}(t)\|&\leq&\int_{0}^{t}\|f\left(s,y^{\varepsilon}(s),u^{\varepsilon}(s)\right)-f\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds+\sum\limits_{0<t_i<t}\|J_{i}\left(y^{\varepsilon}(t_i)\right)-J_{i}\left(\bar{y}(t_i)\right)\|\\ &\leq& L(M)\int_{0}^{t}\left(\|y^{\varepsilon}(s)-\bar{y}(s)\|+\varepsilon\|u(s)-\bar{u}(s)\|\right)ds+L(M)\sum\limits_{0<t_i<t}\|y^{\varepsilon}(t_i)-\bar{y}(t_i)\|. \end{eqnarray*}

    Using the impulse integral inequality (see [1]), we have

    \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow 0}\|y^{\varepsilon}-\bar{y}\|_{PC} = 0. \end{eqnarray} (3.5)

    Let

    \begin{eqnarray*} Y(t) = \lim\limits_{\varepsilon\rightarrow 0}Y^{\varepsilon}(t) = \lim\limits_{\varepsilon\rightarrow 0}\frac{y^{\varepsilon}(t)-\bar{y}(t)}{\varepsilon}. \end{eqnarray*}

    In the same way as for (3.5), it is easy to show that

    \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow 0}\|Y^{\varepsilon}-Y\|_{PC} = 0, \end{eqnarray} (3.6)

    and Y solves the following system of variational equations:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{Y}(t) = f_{y}[t]^{\top}Y(t)+f_{u}[t]^{\top}\left(u(t)-\bar{u}(t)\right), & t\in[0,T]\setminus\Lambda,\\ Y(t_i+) = Y(t_i)+J_{iy}(\bar{y}(t_i))^{\top}Y(t_i), & t_i\in\Lambda,\\ Y(0) = 0. \end{array}\right. \end{eqnarray} (3.7)

    To obtain the first-order necessary condition for Problem P, the following proposition will be used.

    Proposition 3.2. Let (A2) and (A3) hold and \overline{\varphi}\in PC_{r}\left([0,T],\mathbb{R}^{n}\right) be the solution of the impulsive adjoint equation given by (3.2). Then

    \begin{eqnarray} \int_{0}^{T}\langle f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t),u(t)-\bar{u}(t)\rangle dt = \int_{0}^{T}\langle l_{y}\left(s,\bar{y}(s),\bar{u}(s)\right),Y(s)\rangle ds+\langle G_{y}\left(\bar{y}(T)\right),Y(T)\rangle. \end{eqnarray} (3.8)

    Proof. Since C([0,T]) is a dense subspace of L^{1}([0,T]) , there exist function sequences \{f^{\alpha}_{y}\},\{l^{\alpha}_{y}\}\subseteq C([0,T]) such that

    \begin{eqnarray} f^{\alpha}_{y}(\cdot)\rightarrow f_{y}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot))\ \text{ and }\ l^{\alpha}_{y}(\cdot)\rightarrow l_{y}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot))\ \text{ in }L^{1}([0,T])\ \text{ as }\alpha\rightarrow\infty. \end{eqnarray} (3.9)

    Moreover, it follows from (A3) that the system of linear impulsive differential equations given by

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\varphi}_{\alpha}(t) = -f^{\alpha}_{y}(t)\varphi_{\alpha}(t)-l^{\alpha}_{y}(t), & t\in[0,T]\setminus\Lambda,\\ \varphi_{\alpha}(t_i-) = \varphi_{\alpha}(t_i)+J_{iy}(\bar{y}(t_i))\varphi_{\alpha}(t_i), & t_i\in\Lambda,\\ \varphi_{\alpha}(T) = G_{y}\left(\bar{y}(T)\right), \end{array}\right. \end{eqnarray} (3.10)

    has a unique solution \varphi_{\alpha}\in PC_{r}\left([0,T],\mathbb{R}^{n}\right)\bigcap C^{1}\left([0,T]\setminus\Lambda,\mathbb{R}^{n}\right) , and that there is a constant \beta>0 such that

    \begin{eqnarray*} \|\varphi_{\alpha}\|_{PC}\leq\beta\ \text{ for all }\alpha. \end{eqnarray*}

    Hence, we have

    \begin{eqnarray*} \|\varphi_{\alpha}(t)-\overline{\varphi}(t)\|&\leq&\int_{t}^{T}\|l^{\alpha}_{y}(s)-l_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds+\int_{t}^{T}\|f^{\alpha}_{y}(s)-f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|\|\varphi_{\alpha}(s)\|ds\\ &&+\int_{t}^{T}\|f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|\|\varphi_{\alpha}(s)-\overline{\varphi}(s)\|ds+\sum\limits_{t<t_i<T}\|J_{iy}(\bar{y}(t_i))\|\|\varphi_{\alpha}(t_i)-\overline{\varphi}(t_i)\|. \end{eqnarray*}

    Therefore, using the same method as for (3.5), it is not difficult to show that

    \begin{eqnarray} \lim\limits_{\alpha\rightarrow\infty}\|\varphi_{\alpha}-\overline{\varphi}\|_{PC} = 0. \end{eqnarray} (3.11)

    Consequently, we can infer from (3.7) and (3.10) that

    \begin{eqnarray*} &&\int_{0}^{T}\langle f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\varphi_{\alpha}(t),u(t)-\bar{u}(t)\rangle dt = \int_{0}^{T}\langle \varphi_{\alpha}(t),f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\left(u(t)-\bar{u}(t)\right)\rangle dt\\ & = &\int_{0}^{T}\langle \varphi_{\alpha}(t),\dot{Y}(t)-f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}Y(t)\rangle dt\\ & = &\langle \varphi_{\alpha}(T),Y(T)\rangle-\sum\limits_{i = 1}^{k}\left[\langle \varphi_{\alpha}(t_i),Y(t_i+)\rangle-\langle \varphi_{\alpha}(t_i-),Y(t_i)\rangle\right]-\int_{0}^{T}\langle \dot{\varphi}_{\alpha}(t)+f^{\alpha}_{y}(t)\varphi_{\alpha}(t),Y(t)\rangle dt\\ & = &\langle G_{y}\left(\bar{y}(T)\right),Y(T)\rangle-\int_{0}^{T}\langle \dot{\varphi}_{\alpha}(t)+f^{\alpha}_{y}(t)\varphi_{\alpha}(t),Y(t)\rangle dt\\ & = &\langle G_{y}\left(\bar{y}(T)\right),Y(T)\rangle+\int_{0}^{T}\langle l^{\alpha}_{y}(t),Y(t)\rangle dt. \end{eqnarray*}

    Let \alpha\rightarrow\infty in the above expression; using (3.9) and (3.11), we have (3.8). Therefore, we have finished the proof of Proposition 3.2.

    Based on the above proposition, we now continue to prove Theorem 3.1.

    By the optimality of \bar{u} , one can ascertain from the assumptions (A2) and (A3), (3.2), (3.6), (3.7), and Proposition 3.2 (see (3.8)) that

    \begin{eqnarray*} 0&\leq&\lim\limits_{\varepsilon\rightarrow 0}\frac{J\left(u^{\varepsilon}\right)-J\left(\bar{u}\right)}{\varepsilon}\\ & = &\lim\limits_{\varepsilon\rightarrow 0}\int_{0}^{T}\langle\int_{0}^{1}l_{y}\left(t,\bar{y}(t)+\tau\left(y^{\varepsilon}(t)-\bar{y}(t)\right),\bar{u}(t)\right)d\tau,Y^{\varepsilon}(t)\rangle dt\\ &&+\lim\limits_{\varepsilon\rightarrow 0}\int_{0}^{T}\langle\int_{0}^{1}l_{u}\left(t,y^{\varepsilon}(t),\bar{u}(t)+\tau\varepsilon\left(u(t)-\bar{u}(t)\right)\right)d\tau,u(t)-\bar{u}(t)\rangle dt\\ &&+\lim\limits_{\varepsilon\rightarrow 0}\langle\int_{0}^{1}G_{y}\left(\bar{y}(T)+\tau\left(y^{\varepsilon}(T)-\bar{y}(T)\right)\right)d\tau,Y^{\varepsilon}(T)\rangle\\ & = &\int_{0}^{T}\langle l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right),Y(t)\rangle dt+\int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right),u(t)-\bar{u}(t)\rangle dt+\langle G_{y}\left(\bar{y}(T)\right),Y(T)\rangle\\ & = &\int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)+f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t),u(t)-\bar{u}(t)\rangle dt, \end{eqnarray*}

    which leads to the following optimal inequality:

    \begin{eqnarray*} \int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)+f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t),u(t)-\bar{u}(t)\rangle dt\geq0. \end{eqnarray*}

    Because of the arbitrariness of u\in\mathcal{U}_{ad} , we get

    \begin{eqnarray} \int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)+f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t),u(t)-\bar{u}(t)\rangle dt = 0. \end{eqnarray} (3.12)

    Moreover, combining this with (A1) and Lemma 2.5, it follows that, at every continuity point t\in[0,T] of \bar{u}(t) , we have

    \begin{eqnarray*} H_{u}[t] = l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)+f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\overline{\varphi}(t) = 0. \end{eqnarray*}

    We define

    \begin{eqnarray} \overline{U}(t) = \left\{v\in U\mid l_{u}\left(t,\bar{y}(t),v\right)+f_{u}\left(t,\bar{y}(t),v\right)\overline{\varphi}(t) = 0\right\}. \end{eqnarray} (3.13)

    We refer to this as the singular control region in the classical sense, which will be used later.

    Let

    \begin{eqnarray*} Z^{\varepsilon}(\cdot) = \left(Z^{\varepsilon}_{1}(\cdot),Z^{\varepsilon}_{2}(\cdot),\ldots,Z^{\varepsilon}_{n}(\cdot)\right)^{\top} = \frac{Y^{\varepsilon}(\cdot)-Y(\cdot)}{\varepsilon}, \end{eqnarray*}

    and

    \begin{eqnarray*} Z_{k}(t) = \lim\limits_{\varepsilon\rightarrow 0}Z^{\varepsilon}_{k}(t). \end{eqnarray*}

    In the same way as for (3.5), one can also claim from (A2) and (A3) that

    \begin{eqnarray} \lim\limits_{\varepsilon\rightarrow 0}\|Z^{\varepsilon}-Z\|_{PC([0,T])} = 0, \end{eqnarray} (3.14)

    and Z(\cdot) is the solution of the following system of equations:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{Z}(t) = f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}Z(t)+\frac{1}{2}\langle f_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],[u(t)-\bar{u}(t)]\rangle\\ \qquad\quad +\langle f_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],Y(t)\rangle+\frac{1}{2}\langle f_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)Y(t),Y(t)\rangle, & t\in(0,T]\setminus\Lambda,\\ Z(t_i+) = Z(t_i)+J_{iy}(\bar{y}(t_i))^{\top}Z(t_i)+\frac{1}{2}\langle J_{iyy}(\bar{y}(t_i))Y(t_i),Y(t_i)\rangle, & t_i\in\Lambda,\\ Z(0) = 0, \end{array}\right. \end{eqnarray} (3.15)

    where

    \begin{eqnarray*} \frac{1}{2}\langle f_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],[u(t)-\bar{u}(t)]\rangle = \frac{1}{2}\left(\begin{array}{c} \langle f^{1}_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],[u(t)-\bar{u}(t)]\rangle\\ \langle f^{2}_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],[u(t)-\bar{u}(t)]\rangle\\ \vdots\\ \langle f^{n}_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],[u(t)-\bar{u}(t)]\rangle \end{array}\right), \end{eqnarray*}

    \begin{eqnarray*} \frac{1}{2}\langle f_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)Y(t),Y(t)\rangle = \frac{1}{2}\left(\begin{array}{c} \langle f^{1}_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)Y(t),Y(t)\rangle\\ \langle f^{2}_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)Y(t),Y(t)\rangle\\ \vdots\\ \langle f^{n}_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)Y(t),Y(t)\rangle \end{array}\right), \end{eqnarray*}

    and

    \begin{eqnarray*} \langle f_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],Y(t)\rangle = \left(\begin{array}{c} \langle f^{1}_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],Y(t)\rangle\\ \langle f^{2}_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],Y(t)\rangle\\ \vdots\\ \langle f^{n}_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],Y(t)\rangle \end{array}\right). \end{eqnarray*}

    Meanwhile, one can infer from (3.7) that X: = YY^{\top} is the solution to the following system of equations:

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{X}(t) = f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}X(t)+X(t)f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\\ \qquad\quad +f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\left(u(t)-\bar{u}(t)\right)Y(t)^{\top}+Y(t)\left(f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\left(u(t)-\bar{u}(t)\right)\right)^{\top}, & t\in[0,T]\setminus\Lambda,\\ X(t_i+) = X(t_i)+J_{iy}(\bar{y}(t_i))^{\top}X(t_i)+X(t_i)J_{iy}(\bar{y}(t_i))+J_{iy}(\bar{y}(t_i))^{\top}X(t_i)J_{iy}(\bar{y}(t_i)), & t_i\in\Lambda,\\ X(0) = 0. \end{array}\right. \end{eqnarray} (3.16)

    The subsequent proposition plays a crucial role in obtaining the integral form of the second-order necessary conditions.

    Proposition 3.3. Let (A2) and (A3) hold and \overline{W}\in PC_{r}\left([0,T],\mathbb{R}^{n\times n}\right) , \overline{\Phi}\in PC_{l}\left([0,T],\mathbb{R}^{n\times n}\right) be the solutions of (3.3) and (3.4), respectively. Then,

    \begin{eqnarray} &&\int_{0}^{T}\langle \overline{W}(t)f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\left(u(t)-\bar{u}(t)\right),Y(t)\rangle dt\\ & = &\frac{1}{2}\int_{0}^{T}\langle\left[l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)\right]Y(t),Y(t)\rangle dt\\ &&+\frac{1}{2}\langle G_{yy}\left(\bar{y}(T)\right)Y(T),Y(T)\rangle+\frac{1}{2}\sum\limits_{i = 1}^{k}\langle\overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_i))Y(t_i),Y(t_i)\rangle, \end{eqnarray} (3.17)

    and

    \begin{eqnarray} &&\int_{0}^{T}\langle\left(l_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{W}(t)f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\right)[u(t)-\bar{u}(t)],Y(t)\rangle dt\\ & = &\int_{0}^{T}\int_{0}^{t}\langle\left(l_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{W}(t)f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\right)[u(t)-\bar{u}(t)],\\ &&\qquad\qquad\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}\left(s,\bar{y}(s),\bar{u}(s)\right)^{\top}[u(s)-\bar{u}(s)]\rangle ds\,dt. \end{eqnarray} (3.18)

    Proof. Since C([0,T]) is a dense subspace of L^{1}([0,T]) , there exist function sequences \{f^{\alpha}_{y}\} , \{f^{\alpha}_{yy}\} , \{f^{\alpha}_{uy}\} , \{f^{\alpha}_{u}\} , \{l^{\alpha}_{y}\} , \{l^{\alpha}_{yy}\} , \{l^{\alpha}_{uy}\}\subseteq C([0,T]) such that

    \begin{eqnarray} \left\{ \begin{array}{ll} f^{\alpha}_{y}(\cdot)\rightarrow f_{y}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)), & l^{\alpha}_{y}(\cdot)\rightarrow l_{y}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)),\\ f^{\alpha}_{yy}(\cdot)\rightarrow f_{yy}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)), & l^{\alpha}_{yy}(\cdot)\rightarrow l_{yy}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)),\\ f^{\alpha}_{uy}(\cdot)\rightarrow f_{uy}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)), & l^{\alpha}_{uy}(\cdot)\rightarrow l_{uy}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)),\\ f^{\alpha}_{u}(\cdot)\rightarrow f_{u}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot)), & \end{array}\right.\ \text{ in }L^{1}([0,T])\ \text{ as }\alpha\rightarrow\infty. \end{eqnarray} (3.19)

    Consequently, by (A3) and (3.19), one can infer that the systems of linear impulsive matrix differential equations given by

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{W}_{\alpha}(t) = -f^{\alpha}_{y}(t)W_{\alpha}(t)-W_{\alpha}(t)f^{\alpha}_{y}(t)^{\top}-\overline{\varphi}(t)^{\top}f^{\alpha}_{yy}(t)-l^{\alpha}_{yy}(t), & t\in[0,T]\setminus\Lambda,\\ W_{\alpha}(t_i-) = W_{\alpha}(t_i)+J_{iy}(\bar{y}(t_i))W_{\alpha}(t_i)+W_{\alpha}(t_i)J_{iy}(\bar{y}(t_i))^{\top}+J_{iy}(\bar{y}(t_i))W_{\alpha}(t_i)J_{iy}(\bar{y}(t_i))^{\top}+\overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_i)), & t_i\in\Lambda,\\ W_{\alpha}(T) = G_{yy}\left(\bar{y}(T)\right), \end{array}\right. \end{eqnarray} (3.20)

    and

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{\Phi}_{\alpha}(t) = f^{\alpha}_{y}(t)^{\top}\Phi_{\alpha}(t), & t\in[0,T]\setminus\Lambda,\\ \Phi_{\alpha}(t_i+) = \Phi_{\alpha}(t_i)+J_{iy}(\bar{y}(t_i))^{\top}\Phi_{\alpha}(t_i), & t_i\in\Lambda,\\ \Phi_{\alpha}(0) = I, \end{array}\right. \end{eqnarray} (3.21)

    each have a unique solution W_{\alpha}\in PC_{r}\left([0,T],\mathbb{R}^{n\times n}\right)\bigcap C^{1}\left([0,T]\setminus\Lambda,\mathbb{R}^{n\times n}\right) and \Phi_{\alpha}\in PC_{l}\left([0,T],\mathbb{R}^{n\times n}\right)\bigcap C^{1}\left([0,T]\setminus\Lambda,\mathbb{R}^{n\times n}\right) , respectively. Not only that, there exists a constant \gamma>0 such that

    \begin{eqnarray*} \|W_{\alpha}\|_{PC}\leq\gamma\quad \text{and}\quad \|\Phi_{\alpha}\|_{PC}\leq\gamma\ \text{ for all }\alpha. \end{eqnarray*}

    Moreover, we have

    \begin{eqnarray*} \|W_{\alpha}(t)-\overline{W}(t)\|&\leq&\int_{t}^{T}\|l^{\alpha}_{yy}(s)-l_{yy}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds+\int_{t}^{T}\|\overline{\varphi}(s)\|\|f^{\alpha}_{yy}(s)-f_{yy}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds\\ &&+2\gamma\int_{t}^{T}\|f^{\alpha}_{y}(s)-f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds+2\int_{t}^{T}\|W_{\alpha}(s)-\overline{W}(s)\|\|f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds\\ &&+2\sum\limits_{t<t_i<T}\left(\|J_{iy}(\bar{y}(t_i))\|+\|J_{iy}(\bar{y}(t_i))\|^{2}\right)\|W_{\alpha}(t_i)-\overline{W}(t_i)\|, \end{eqnarray*}

    and

    \begin{eqnarray*} \|\Phi_{\alpha}(t)-\overline{\Phi}(t)\|&\leq&\gamma\int_{0}^{t}\|f^{\alpha}_{y}(s)-f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|ds+\int_{0}^{t}\|f_{y}\left(s,\bar{y}(s),\bar{u}(s)\right)\|\|\Phi_{\alpha}(s)-\overline{\Phi}(s)\|ds\\ &&+\sum\limits_{0<t_i<t}\|J_{iy}(\bar{y}(t_i))\|\|\Phi_{\alpha}(t_i)-\overline{\Phi}(t_i)\|. \end{eqnarray*}

    In the same way as for (3.5), we obtain

    \begin{eqnarray} \lim\limits_{\alpha\rightarrow\infty}\|W_{\alpha}-\overline{W}\|_{PC} = 0\quad \text{and}\quad \lim\limits_{\alpha\rightarrow\infty}\|\Phi_{\alpha}-\overline{\Phi}\|_{PC} = 0. \end{eqnarray} (3.22)

    In addition, it is obvious from (3.20) that W_{\alpha}^{\top} is also a solution of (3.20). This means that

    \begin{eqnarray} W_{\alpha}(t)^{\top} = W_{\alpha}(t)\ \text{ for all }t\in[0,T]. \end{eqnarray} (3.23)

    Since \operatorname{tr}(AB) = \operatorname{tr}(BA) for every k\times j matrix A and j\times k matrix B , we can get from (3.2), (3.7), (3.16), (3.19), (3.20), (3.22), and (3.23) that

    \begin{eqnarray*} &&2\int_{0}^{T}\langle \overline{W}(t)f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}\left(u(t)-\bar{u}(t)\right),Y(t)\rangle dt\\ & = &2\lim\limits_{\alpha\rightarrow \infty}\int_{0}^{T}\langle W_{\alpha}(t)\left(\dot{Y}(t)-f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}Y(t)\right),Y(t)\rangle dt\\ & = &\lim\limits_{\alpha\rightarrow \infty}\operatorname{tr}\int_{0}^{T}\left[W_{\alpha}(t)\left(\dot{Y}(t)-f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}Y(t)\right)Y(t)^{\top}+W_{\alpha}(t)Y(t)\left(\dot{Y}(t)^{\top}-Y(t)^{\top}f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\right)\right]dt\\ & = &\lim\limits_{\alpha\rightarrow \infty}\operatorname{tr}\int_{0}^{T}W_{\alpha}(t)\left[\dot{X}(t)-f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}X(t)-X(t)f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\right]dt\\ & = &\lim\limits_{\alpha\rightarrow \infty}\left\{-\int_{0}^{T}\langle\left[\dot{W}_{\alpha}(t)+W_{\alpha}(t)f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)^{\top}+f_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)W_{\alpha}(t)\right]Y(t),Y(t)\rangle dt\right.\\ &&\left.+\langle W_{\alpha}(T)Y(T),Y(T)\rangle+\sum\limits_{i = 1}^{k}\left[\langle W_{\alpha}(t_i-)Y(t_i),Y(t_i)\rangle-\langle W_{\alpha}(t_i)Y(t_i+),Y(t_i+)\rangle\right]\right\}\\ & = &\lim\limits_{\alpha\rightarrow \infty}\left\{-\int_{0}^{T}\langle\left[\dot{W}_{\alpha}(t)+W_{\alpha}(t)f^{\alpha}_{y}(t)^{\top}+f^{\alpha}_{y}(t)W_{\alpha}(t)\right]Y(t),Y(t)\rangle dt+\langle W_{\alpha}(T)Y(T),Y(T)\rangle\right.\\ &&\left.+\sum\limits_{i = 1}^{k}\left[\langle\left(W_{\alpha}(t_i-)-W_{\alpha}(t_i)\right)Y(t_i),Y(t_i)\rangle-\langle\left(J_{iy}(\bar{y}(t_i))W_{\alpha}(t_i)+W_{\alpha}(t_i)J_{iy}(\bar{y}(t_i))^{\top}\right)Y(t_i),Y(t_i)\rangle-\langle J_{iy}(\bar{y}(t_i))W_{\alpha}(t_i)J_{iy}(\bar{y}(t_i))^{\top}Y(t_i),Y(t_i)\rangle\right]\right\}\\ & = &\lim\limits_{\alpha\rightarrow \infty}\left\{\int_{0}^{T}\langle\left[l^{\alpha}_{yy}(t)+\overline{\varphi}(t)^{\top}f^{\alpha}_{yy}(t)\right]Y(t),Y(t)\rangle dt+\langle W_{\alpha}(T)Y(T),Y(T)\rangle+\sum\limits_{i = 1}^{k}\langle\overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_i))Y(t_i),Y(t_i)\rangle\right\}\\ & = &\int_{0}^{T}\langle\left[l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)\right]Y(t),Y(t)\rangle dt+\langle G_{yy}\left(\bar{y}(T)\right)Y(T),Y(T)\rangle+\sum\limits_{i = 1}^{k}\langle\overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_i))Y(t_i),Y(t_i)\rangle, \end{eqnarray*}

    i.e., (3.17) holds.

    Now, let us prove (3.18). By (3.4), (3.7), (3.21), and (3.22), we have

    \begin{eqnarray*} Y(t)& = &\overline{\Phi}(t)\int_{0}^{t}\overline{\Phi}(s)^{-1}f_{u}[s]^{\top}(u(s)-\bar{u}(s))ds\\ & = &\lim\limits_{\alpha\rightarrow \infty}\Phi_{\alpha}(t)\int_{0}^{t}\Phi_{\alpha}(s)^{-1}f_{u}[s]^{\top}(u(s)-\bar{u}(s))ds,\\ \end{eqnarray*}

    which means that (3.18) holds. Therefore, we have finished the proof of Proposition 3.3.

    Based on the above propositions, we now continue to prove Theorem 3.1.

    Since \bar{u} represents the optimal control of J over \mathcal{U}_{ad} , together with Proposition 3.2 (see (3.8)) and (3.12), we have

    \begin{eqnarray} &&\int_{0}^{T} \langle l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right),Y(t) \rangle ds+\int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right),u(t)-\bar{u}(t)\rangle dt+ \langle G_{y}\left(\bar{y}(T)\right),Y(T)\rangle\\ & = &\int_{0}^{T}\langle l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)+f_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\varphi(t),u(t)-\bar{u}(t)\rangle dt\\ & = &0\mbox{ for all }u\in \mathcal{U}_{ad}. \end{eqnarray} (3.24)

    Taken together with (A2), (A3), and (3.24), one can get

    \begin{eqnarray*} &&\frac{J\left(u^{\varepsilon}(\cdot)\right)-J\left(\bar{u}(\cdot)\right)}{\varepsilon}\\ & = &\int_{0}^{T}\langle\int_{0}^{1}\left(l_{u}\left(t,y^{\varepsilon}(t),\bar{u}(t)+\tau \varepsilon\left(u(t)-\bar{u}(t)\right)\right)-l_{u}\left(t,\bar{y}(t),\bar{u}(t)\right)\right)d\tau,u(t)-\bar{u}(t)\rangle dt\\ &&+\int_{0}^{T}\langle\int_{0}^{1}\left(l_{y}\left(t,\bar{y}(t)+\tau\left(y^{\varepsilon}(t)-\bar{y}(t)\right),\bar{u}(t)\right)-l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right)\right)d\tau,Y^{\varepsilon}(t)\rangle dt\\ &&+\int_{0}^{T} \langle l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right),Y^{\varepsilon}(t)-Y(t) \rangle dt+\langle G_{y}\left(\bar{y}(T)\right),Y^{\varepsilon}(T)-Y(T)\rangle\\ &&+\langle\int_{0}^{1} \left(G_{y}\left(\bar{y}(T)+\tau\left(y^{\varepsilon}(T)-\bar{y}(T)\right)\right)-G_{y}\left(\bar{y}(T)\right)\right)d\tau,Y^{\varepsilon}(T)\rangle\\ & = &\varepsilon\int_{0}^{T}\langle\int_{0}^{1}\tau\int_{0}^{1}l_{uu}\left(t,y^{\varepsilon}(t),\bar{u}(t)+\nu\tau \varepsilon\left(u(t)-\bar{u}(t)\right)\right)d\nu d\tau[u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\\ &&+\varepsilon\int_{0}^{T}\langle\int_{0}^{1}l_{uy}\left(t,\bar{y}(t)+\tau\left(y^{\varepsilon}(t)-\bar{y}(t)\right),\bar{u}(t)\right) (u(t)-\bar{u}(t))d\tau ,Y^{\varepsilon}(t)\rangle dt\\ &&+\varepsilon\int_{0}^{T}\langle\int_{0}^{1}\tau\int_{0}^{1}l_{yy}\left(t,\bar{y}(t)+\nu\tau\left(y^{\varepsilon}(t)-\bar{y}(t)\right),\bar{u}(t)\right)d\nu d\tau Y^{\varepsilon}(t),Y^{\varepsilon}(t)\rangle dt\\ &&+\varepsilon\int_{0}^{T} \langle l_{y}\left(t,\bar{y}(t),\bar{u}(t)\right),Z^{\varepsilon}(t)\rangle dt+\varepsilon\langle G_{y}\left(\bar{y}(T)\right),Z^{\varepsilon}(T)\rangle\\ &&+\varepsilon\langle\int_{0}^{1} \tau\int_{0}^{1}G_{yy}\left(\bar{y}(T)+\nu\tau\left(y^{\varepsilon}(T)-\bar{y}(T)\right)\right)d\nu d\tau Y^{\varepsilon}(T),Y^{\varepsilon}(T)\rangle. \end{eqnarray*}

    Then, combining this with (3.6) and (3.14), the above expression, (A2), and (A3) leads to the following:

    \begin{eqnarray*} \label{3.28} &&\frac{1}{2}\int_{0}^{T}\langle l_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\nonumber\\ &&+\int_{0}^{T}\langle l_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right) (u(t)-\bar{u}(t)),Y(t)\rangle ds\nonumber\\ &&+\frac{1}{2}\int_{0}^{T}\langle l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right) Y(t),Y(t)\rangle dt+\frac{1}{2}\langle G_{yy}\left(\bar{y}(T)\right) Y(T),Y(T)\rangle\\ &&+\int_{0}^{T} \langle l_{y}(t,\bar{y}(t),\bar{u}(t)) ,Z(t)\rangle ds+\langle G_{y}\left(\bar{y}(T)\right),Z(T)\rangle\geq0\mbox{ for all }u\in \mathcal{U}_{ad}.\nonumber \end{eqnarray*}

    By (3.2), we have

    \begin{eqnarray} &&\frac{1}{2}\int_{0}^{T}\langle l_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)[u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\\ &&+\int_{0}^{T}\langle l_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right) (u(t)-\bar{u}(t)),Y(t)\rangle ds\\ &&+\frac{1}{2}\int_{0}^{T}\langle l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right) Y(t),Y(t)\rangle dt+\frac{1}{2}\langle G_{yy}\left(\bar{y}(T)\right) Y(T),Y(T)\rangle\\ &&-\int_{0}^{T} \langle \dot{\overline{\varphi}}(t)+f_{y}(t,\bar{y}(t),\bar{u}(t))\overline{\varphi}(t) ,Z(t)\rangle ds+\langle G_{y}\left(\bar{y}(T)\right),Z(T)\rangle\geq0\mbox{ for all }u\in \mathcal{U}_{ad}. \end{eqnarray} (3.25)

    Since C([0, T]) is a dense subspace in L^{1}([0, T]) , there exist function sequences \left\{u^{\alpha}\right\}, \left\{f_{uu}^{\alpha}\right\}\subseteq C([0, T]) such that

    \begin{eqnarray} \lim\limits_{\alpha\rightarrow \infty}\left\|u^{\alpha}-[u-\bar{u}]\right\|_{L^{1}} = 0\mbox{ and }\lim\limits_{\alpha\rightarrow \infty}\left\|f_{uu}^{\alpha}(\cdot)-f_{uu}(\cdot,\bar{y}(\cdot),\bar{u}(\cdot))\right\|_{L^{1}} = 0. \end{eqnarray} (3.26)

    It follows immediately from (3.15), (3.19), and (3.26) that the system of equations given by

    \begin{eqnarray} \left\{ \begin{array}{ll} \dot{Z}^{\alpha}(t) = f^{\alpha}_{y}(t)^\top Z^{\alpha}(t)+\frac{1}{2}u^{\alpha}(t)^\top f^{\alpha}_{uu}(t)u^{\alpha}(t)\\ \qquad\qquad + u^{\alpha}(t)^\top f^{\alpha}_{yu}(t)Y(t) +\frac{1}{2}Y(t)^\top f^{\alpha}_{yy}(t)Y(t),\qquad \qquad t\in [0,T]\setminus\Lambda, \\ Z^{\alpha}(t_i+) = Z^{\alpha}(t_i)+\frac{1}{2}Y(t_{i}))^{\top}J_{iyy}(\bar{y}(t_{i}))Y(t_{i}))+J_{iy}(\bar{y}(t_{i}))^{\top}Z^{\alpha}(t_i),\qquad t_i\in \Lambda,\\ Z^{\alpha}(0) = 0, \end{array}\right. \end{eqnarray} (3.27)

    has a unique solution Z^{\alpha}\in PC_{l}\left([0, T], \mathbb{R}^{n}\right)\bigcap C^{1}\left([0, T]\setminus\Lambda, \mathbb{R}^{n}\right) and

    \begin{eqnarray} \lim\limits_{\alpha\rightarrow \infty}\left\|Z^{\alpha}-Z\right\|_{PC} = 0, \end{eqnarray} (3.28)

    where Z(\cdot) is the solution to (3.15).

    Moreover, we can infer from (3.19) and (3.26)–(3.28) that

    \begin{eqnarray} &&-\int_{0}^{T} \langle \dot{\overline{\varphi}}(t)+f_{y}(t,\bar{y}(t),\bar{u}(t))\overline{\varphi}(t) ,Z(t)\rangle dt+\langle G_{y}\left(\bar{y}(T)\right),Z(T)\rangle\\ & = &-\lim\limits_{\alpha\rightarrow \infty}\int_{0}^{T} \langle \dot{\overline{\varphi}}(t)+f^{\alpha}_{y}(t)\overline{\varphi}(t) ,Z^{\alpha}(t)\rangle dt+\langle G_{y}\left(\bar{y}(T)\right),Z(T)\rangle\\ & = &\lim\limits_{\alpha\rightarrow \infty}\left\{\int_{0}^{T} \langle \overline{\varphi}(t), \dot{Z}^{\alpha}(t)-f^{\alpha\top}_{y}(t)Z^{\alpha}(t) \rangle dt-\sum\limits_{i = 1}^{k}\left[\langle \overline{\varphi}(t_i-),Z^{\alpha}(t_i) \rangle-\langle\overline{\varphi}(t_i),Z^{\alpha}(t_i+)\rangle\right]\right\}\\ & = &\frac{1}{2} \lim\limits_{\alpha\rightarrow \infty}\int_{0}^{T} \langle \overline{\varphi}(t),u^{\alpha}(t)^\top f^{\alpha}_{uu}(t)u^{\alpha}(t)+ 2u^{\alpha}(t)^\top f^{\alpha}_{yu}(t)Y(t) +Y(t)^\top f^{\alpha}_{yy}(t)Y(t)\rangle dt\\ &&+\frac{1}{2}\sum\limits_{i = 1}^{k}\langle \overline{\varphi}(t_i), Y(t_{i}))^{\top}J_{iyy}(\bar{y}(t_{i}))Y(t_{i}))\rangle\\ & = &\frac{1}{2} \int_{0}^{T} \langle \overline{\varphi}(t),[u(t)-\bar{u}(t)]^{\top}f_{uu}(t,\bar{y}(t),\bar{u}(t))[u(t)-\bar{u}(t)]\rangle dt\\ &&+\int_{0}^{T} \langle \overline{\varphi}(t), (u(t)-\bar{u}(t))^{\top}(t)f_{uy}(t,\bar{y}(t),\bar{u}(t))Y(t)\rangle dt\\ &&+\frac{1}{2} \int_{0}^{T} \langle \overline{\varphi}(t),Y(t)^{\top}f_{yy}(t,\bar{y}(t),\bar{u}(t))Y(t)\rangle dt\\ &&+\frac{1}{2}\sum\limits_{i = 1}^{k}\langle \overline{\varphi}(t_i),Y(t_{i})^{\top}J_{iyy}(\bar{y}(t_{i}))Y(t_{i})))\rangle. \end{eqnarray} (3.29)

    Taken together with (3.29), we deduce from (3.25) that

    \begin{eqnarray} &&\frac{1}{2}\int_{0}^{T}\langle \left[l_{uu}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{uu}(t,\bar{y}(t),\bar{u}(t))\right][u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle ds\\ &&+\int_{0}^{T}\langle \left[l_{uy}\left(t,\bar{y}(t),\bar{u}(t)\right) +\overline{\varphi}(t)^{\top}f_{uy}(t,\bar{y}(t),\bar{u}(t))\right][u(t)-\bar{u}(t)],Y(t)\rangle dt\\ &&+\frac{1}{2}\int_{0}^{T}\langle \left[l_{yy}\left(t,\bar{y}(t),\bar{u}(t)\right)+\overline{\varphi}(t)^{\top}f_{yy}(t,\bar{y}(t),\bar{u}(t))\right] Y(t),Y(t)\rangle dt\\ &&+\frac{1}{2}\langle G_{yy}\left(\bar{y}(T)\right) Y(T),Y(T)\rangle+\frac{1}{2}\sum\limits_{i = 1}^{k}\langle \overline{\varphi}(t_i)^{\top}J_{iyy}(\bar{y}(t_{i}))Y(t_{i})),Y(t_{i}))\rangle \geq0\mbox{ for all }u\in \mathcal{U}_{ad}. \end{eqnarray} (3.30)

    The following inequality follows from (3.30) and (3.17):

    \begin{eqnarray} &&\frac{1}{2}\int_{0}^{T}\langle [l_{uu}(t,\bar{y}(t),\bar{u}(t))+\overline{\varphi}(t)^{\top}f_{uu}(t,\bar{y}(t),\bar{u}(t))][u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\\ &&+\int_{0}^{T}\langle [l_{uy}(t,\bar{y}(t),\bar{u}(t)) +\overline{\varphi}(t)^{\top}f_{uy}(t,\bar{y}(t),\bar{u}(t))+\overline{W}(t)f_{u}(t,\bar{y}(t),\bar{u}(t))^{\top}][u(t)-\bar{u}(t)],. \\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad Y(t)\rangle dt \geq0\mbox{ for all }u\in \mathcal{U}_{ad}. \end{eqnarray} (3.31)

    Then, by (3.18) and (3.31), we can show that

    \begin{eqnarray} &&\frac{1}{2}\int_{0}^{T}\langle [l_{uu}(t,\bar{y}(t),\bar{u}(t))+\overline{\varphi}(t)^{\top}f_{uu}(t,\bar{y}(t),\bar{u}(t))][u(t)-\bar{u}(t)],u(t)-\bar{u}(t)\rangle dt\\ &&+\int_{0}^{T}\int_{0}^{t}\langle [l_{uy}(t,\bar{y}(t),\bar{u}(t)) +\overline{\varphi}(t)^{\top}f_{uy}(t,\bar{y}(t),\bar{u}(t))+\overline{W}(t)f_{u}(t,\bar{y}(t),\bar{u}(t))^{\top}][u(t)-\bar{u}(t)],. \\ &&\qquad\qquad\qquad\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}(s,\bar{y}(s),\bar{u}(s))^{T}[u(s)-\bar{u}(s)]ds\rangle dt \geq0\mbox{ for all }u\in \mathcal{U}_{ad}, \end{eqnarray} (3.32)

    Thus, we have finished the proof of Theorem 3.1.

    Remark 3.4. Theorem 3.1 does not establish whether the optimal control of Problem P is singular or nonsingular; it is a unified conclusion, similar to the equations in (4.5.2) of Theorem 4.2 in [15]. Therefore, Theorem 3.1 is a generalization of Theorem 4.2 in [15] to impulsive controlled systems. Based on Theorem 3.1, singular control and nonsingular control in the classical sense can be considered under a unified framework for Problem P.

    In this section, on the basis of Theorem 3.1 in the previous section, we first obtain the Legendre-Clebsch condition; then, we give a corollary for the integral form of the second-order necessary optimality conditions for optimal singular control; finally, we give the pointwise Jacobson type necessary conditions and the pointwise Legendre-Clebsch condition.

    Corollary 4.1. Let (A1)–(A3) hold and \bar{u} denote the optimal control of J over \mathcal{U}_{ad} ; it is necessary that there exist a pair of functions (\overline{y}, \overline{\varphi})\in PC_{l}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n}\right) such that (3.1), (3.2), and

    \begin{eqnarray} H_{uu}[t]\geq0,\;\mathit{\text{for all the continuous time of}}\; \bar{u}(t), t\in [0,T], \end{eqnarray} (4.1)

    hold.

    Proof. To prove Corollary 4.1, choose u(\cdot)\in \mathcal{U}_{ad} via the following special control variation:

    \begin{eqnarray} u(t)-\bar{u}(t) = \left\{\begin{array}{ll} 0,\; & t\in[t_{0},\bar{t}),\\ h,\; & t\in[\bar{t}, \bar{t}+\varepsilon ),\\ 0,\; &t\in[\bar{t}+\varepsilon , T],\\ \end{array}\right.\qquad \end{eqnarray} (4.2)

    where h\in \mathbb{R}^{m} is a constant vector (small enough that u(\cdot)\in\mathcal{U}_{ad} ), \varepsilon is a sufficiently small positive number, and \bar{t} is any continuity point of \bar{u}(t) . For u(t)-\bar{u}(t) , \overline{\Phi}(t) satisfies (3.4); then, the solution Y(t) of the variational equation (3.7) is given by

    \begin{eqnarray*} Y(t) = \left\{\begin{array}{ll} 0,\; & t\in[t_{0},\bar{t}),\\ \int_{\bar{t}}^{t}\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}[s]^{\top}hds,\; & t\in[\bar{t}, \bar{t}+\varepsilon ),\\ \int_{\bar{t}}^{\bar{t}+\varepsilon}\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}[s]^{\top}hds,\; & t\in[\bar{t}+\varepsilon , T].\\ \end{array}\right.\qquad \end{eqnarray*}

    Then

    \begin{eqnarray*} \|Y(t)\| = \left\{\begin{array}{ll} 0,\; & t\in[t_{0},\bar{t}),\\ O(\varepsilon),\; & t\in[\bar{t}, T],\\ \end{array}\right.\qquad \end{eqnarray*}

    where \lim\limits_{\varepsilon\rightarrow 0}\frac{O(\varepsilon)}{\varepsilon} = C\; (\text{a nonzero constant}) . Utilizing continuity and the mean value theorem for integrals, we have

    \begin{eqnarray} \int_{0}^{T}Y(t)^{\top}H_{yy}[t]Y(t)dt& = &\int_{\bar{t}}^{\bar{t}+\varepsilon}Y(t)^{\top}H_{yy}[t]Y(t)dt = O(\varepsilon^{2}) = o(\varepsilon),\\ \int_{0}^{T}Y(t)^{\top}H_{uy}[t][u(t)-\bar{u}(t)]dt & = &\int_{\bar{t}}^{\bar{t}+\varepsilon}Y(t)^{\top}H_{uy}[t]hdt\\ & = &\varepsilon Y(\bar{t})^{\top}H_{uy}[\bar{t}]h+ o(\varepsilon),\\ \int_{0}^{T} [u(t)-\bar{u}(t)]^{\top}H_{uu}[t][u(t)-\bar{u}(t)]dt& = &\int_{\bar{t}}^{\bar{t}+\varepsilon}h^{\top}H_{uu}[t]hdt\\ & = &\varepsilon h^{\top}H_{uu}[\bar{t}]h+ o(\varepsilon). \end{eqnarray} (4.3)

    Substituting (4.3) into (3.32), we have

    \begin{eqnarray*} \varepsilon h^{\top}H_{uu}[\bar{t}]h+ o(\varepsilon )\geq 0.\nonumber \end{eqnarray*}

    Observe that h^{\top}H_{uu}[\bar{t}]h is independent of \varepsilon and \varepsilon can be arbitrarily small; we have

    \begin{eqnarray*} \label{4.4} h^{\top}H_{uu}[\bar{t}]h\geq 0. \end{eqnarray*}

    Since h\in \mathbb{R}^{m} is an arbitrary vector and \bar{t} is an arbitrary continuity point of \bar{u}(t) , (4.1) holds.

    Remark 4.2. Condition (4.1) is the Legendre-Clebsch condition for Problem P. At the same time, it also justifies the conventional hypothesis H_{uu}[t]\geq 0,\ t\in [t_{0}, T] .

    Remark 4.3. For the LQ problem, where R[t] = H_{uu}[t] : when R[t] > 0, \forall t\in [t_{0}, T] , the problem is called a nonsingular problem; when R[t] = 0, \forall t\in [t_{0}, T] , the problem is called the totally singular case; and when R[t]\geq0, \forall t\in [t_{0}, T] , the problem is called the partially singular case (see Definitions (4.4)–(4.6) in [15]). This classification is commonly adopted to describe whether an optimal control problem is singular and, if so, what kind of singularity it has; a simple scalar illustration is given below.
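    For orientation only (a minimal scalar sketch, not an example taken from [15]), consider

    \begin{eqnarray*} \min\ J(u(\cdot)) = \frac{1}{2}\int_{0}^{T}y(t)^{2}dt,\qquad \dot{y}(t) = u(t),\qquad y(0) = y_{0}, \end{eqnarray*}

    for which H(t,y,u,\varphi) = \frac{1}{2}y^{2}+\varphi u and R[t] = H_{uu}[t]\equiv 0 , so the problem is totally singular; adding the control cost \frac{1}{2}\int_{0}^{T}u(t)^{2}dt to J changes this to R[t] = H_{uu}[t]\equiv 1>0 , a nonsingular problem.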

    The following is a corollary of Theorem 3.1 in the case in which H_{uu}[t]\equiv0 , that is, Problem P is a totally singular problem according to the definitions in [15].

    Corollary 4.4. Let (A1)–(A3) hold and \bar{u}(\cdot)\in \overline{\mathcal{U}}_{ad} denote the optimal singular control of J over \mathcal{U}_{ad} ; it is necessary that there exist functions (\overline{y}, \overline{\varphi}, \overline{W}, \overline{\Phi})\in PC_{l}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n\times n}\right) \times PC_{l}\left([0, T], \mathbb{R}^{n\times n}\right) that satisfy (3.1)–(3.4) and

    \begin{eqnarray*} \label{4.5} &&\int_{0}^{T}dt\int_{0}^{t}\langle \left(\overline{W}(t)f_{u}[t]^{\top}+H_{uy}[t]\right) [u(t)-\bar{u}(t)],\overline{\Phi}(t)\overline{\Phi}(s)^{-1}f_{u}[s]^{\top}[u(s)-\bar{u}(s)]\rangle ds\nonumber\\ &&\geq 0,\quad \forall u\in \overline{\mathcal{U}}_{ad}. \end{eqnarray*}

    Remark 4.5. Corollary 4.4 is the analogue of Theorem 4.3 in [25] for Problem P.

    The following is the pointwise Jacobson-type second-order necessary optimality condition for Problem P. The conclusion is similar to that of Theorem 4.3 in [25]. Note that the set \overline{U}(t) (see (3.13)) of values for v differs from the set \mathbb{R}^{m} of values used in [32]; this is what gives the condition its essentially pointwise character. The author of [25] has proven similar conclusions under the weaker assumption that the control region U is a Polish space. In fact, under our basic assumption (A1), we can use the method on page 93 in [15] to prove it.

    Theorem 4.6. Let (A1)–(A3) hold and \bar{u}(\cdot)\in \overline{\mathcal{U}}_{ad} denote the optimal singular control of J over \mathcal{U}_{ad} ; it is necessary that there exist functions (\overline{y}, \overline{\varphi}, \overline{W}, \overline{\Phi})\in PC_{l}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n\times n}\right) \times PC_{l}\left([0, T], \mathbb{R}^{n\times n}\right) that satisfy (3.1)–(3.4), and, for all continuous time of \bar{u}(t), t\in [0, T] , we have

    \begin{eqnarray} \langle \left(\overline{W}(t)f_{u}[t]^{\top}+H_{uy}[t]\right)[v-\bar{u}(t)], f_{u}[t]^{\top}[v-\bar{u}(t)]\rangle \geq 0,\quad \forall v\in \overline{U}(t). \end{eqnarray} (4.4)

    Proof. Note that Definition 2.3 implies that H_{uu}[t]\equiv0 \; \text{for all} \; t\in [0, T] . Applying the same control perturbation as in (4.2), we have

    \begin{eqnarray} &&\int_{0}^{T}\langle \left(\overline{W}(t)f_{u}[t]^{\top}+H_{yu}[t]\right) h,Y(t)\rangle dt \\ & = &\int_{\bar{t}}^{\bar{t}+\varepsilon}\langle \left(\overline{W}(t)f_{u}[t]^{\top}+H_{yu}[t]\right) h,Y(t)\rangle dt, \end{eqnarray} (4.5)

    and the dominant term in the expansion of (4.5) for sufficiently small \varepsilon is given by

    (\varepsilon )^{2}\langle \left(\overline{W}(t)f_{u}[t]^{\top}+H_{yu}[t]\right)h,f_{u}[t]^{\top}h\rangle \big|_{\bar{t}}.

    Since \bar{t} can be chosen as any continuous time of \bar{u}(t), t\in [0, T] , let h = v-\bar{u}(t), \forall v\in \overline{U}(t) ; thus, (4.4) holds and we have finished the proof of Theorem 4.6.

    Using the same idea as in Theorem 4.6, we can also obtain the pointwise Legendre-Clebsch necessary optimality condition corresponding to Corollary 4.1.

    Corollary 4.7. Let (A1)–(A3) hold and \bar{u} denote the optimal control of J over \mathcal{U}_{ad} ; it is necessary that there exist a pair of functions (\overline{y}, \overline{\varphi})\in PC_{l}\left([0, T], \mathbb{R}^{n}\right) \times PC_{r}\left([0, T], \mathbb{R}^{n}\right) such that (3.1), (3.2), and, for all continuous time of \bar{u}(t), t\in [0, T]

    \begin{eqnarray} (v-\bar{u}(t))^{\top}H_{uu}[t](v-\bar{u}(t))\geq0, \forall v\in \overline{U}(t), \end{eqnarray} (4.6)

    hold.

    Remark 4.8. Comparing Corollaries 4.7 and 4.1, it can be found that if the pointwise condition is satisfied, H_{uu}\geq 0 is not required, as only (4.6) needs to be satisfied.

    In this section, we will give an example to illustrate the effectiveness of Theorem 4.6.

    Let

    \begin{eqnarray*} \label{1.1} \min J(u(\cdot)) = y_{2}(1), \end{eqnarray*}

    subject to

    \begin{eqnarray*} \left\{ \begin{array}{ll} \dot{y}_{1}(t) = u,&t\in[0,1]\setminus\{0.5\},\\ \dot{y}_{2}(t) = -y_{1}^{2},&t\in[0,1]\setminus\{0.5\},\\ y_{1}(0.5+) = y_{1}(0.5)+y_{1}(0.5),\\ y_{2}(0.5+) = y_{2}(0.5),\\ y_{1}(0) = 0,\\ y_{2}(0) = 0. \end{array}\right. \end{eqnarray*}

    Obviously, the Hamiltonian H(t, y, u, \varphi) = \varphi_{1}u-\varphi_{2}y_{1}^{2} is linear in the control u , with u\in U = \mathbb{R} . According to Remark 2.4, the problem is singular, i.e., \overline{U}(t) = \mathbb{R} . It is not difficult to see that \bar{u}\equiv 0 is a singular control: by \bar{u}\equiv 0 , we get \bar{y}_{1}\equiv 0, \bar{y}_{2}\equiv 0 , and \overline{\varphi}_{1} = 0, t\in [0, 1] ; consequently, H(t, y, u, \varphi) = \overline{\varphi}_{1}\bar{u}-\overline{\varphi}_{2}\bar{y}_{1}^{2}\equiv 0 , H_{u}\equiv 0 , and H_{uu}\equiv 0 , so, by Definition 2.3, \bar{u}\equiv 0, t\in [0, 1] is a singular control. The question is whether it is an optimal singular control. Now, let us use Theorem 4.6 to show that it cannot be an optimal singular control.

    By (3.2), we have

    \begin{eqnarray*} \left\{ \begin{array}{ll} \dot{\overline{\varphi}}_{1}(t) = 2\overline{\varphi}_{2}\bar{y}_{1},&t\in[0,1]\setminus 0.5,\\ \dot{\overline{\varphi}}_{2}(t) = 0,&t\in[0,1]\setminus 0.5,\\ \overline{\varphi}_{1}(0.5-) = \overline{\varphi}_{1}(0.5),\\ \overline{\varphi}_{2}(0.5-) = \overline{\varphi}_{1}(0.5)+\overline{\varphi}_{2}(0.5),\\ \overline{\varphi}_{1}(1) = 0,\\ \overline{\varphi}_{2}(1) = 1.\\ \end{array}\right. \end{eqnarray*}

    Using (3.3), we have

    \begin{eqnarray*} \dot{\overline{W}}(t)& = &\left[{\begin{array}{cc} \dot{\overline{w}}_{11}(t) & \dot{\overline{w}}_{12}(t)\\ \dot{\overline{w}}_{21}(t) & \dot{\overline{w}}_{22}(t)\\ \end{array}}\right] = -\left[{\begin{array}{cc} 0 & -2\bar{y}_{1}\\ 0 & 0\\ \end{array}}\right] \left[{\begin{array}{cc} \overline{w}_{11}(t) & \overline{w}_{12}(t)\\ \overline{w}_{21}(t) & \overline{w}_{22}(t)\\ \end{array}}\right]\\ && -\left[{\begin{array}{cc} \overline{w}_{11}(t) & \overline{w}_{12}(t)\\ \overline{w}_{21}(t) & \overline{w}_{22}(t)\\ \end{array}}\right] \left[{\begin{array}{cc} 0 & 0\\ -2\bar{y}_{1} & 0\\ \end{array}}\right] +\left[{\begin{array}{cc} \overline{\varphi}_{2}(t) & 0\\ 0 & 0\\ \end{array}}\right], \end{eqnarray*}

    and

    \begin{eqnarray*} \left[{\begin{array}{cc} \overline{w}_{11}(1) & \overline{w}_{12}(1)\\ \overline{w}_{21}(1) & \overline{w}_{22}(1)\\ \end{array}}\right] = \left[{\begin{array}{cc} 0 & 0\\ 0 & 0\\ \end{array}}\right], \end{eqnarray*}

    and

    \begin{eqnarray*} \overline{W}(0.5-)& = &\left[{\begin{array}{cc} \overline{w}_{11}(0.5-) & \overline{w}_{12}(0.5-)\\ \overline{w}_{21}(0.5-) & \overline{w}_{22}(0.5-)\\ \end{array}}\right]\\ & = &\left[{\begin{array}{cc} 0 & 0\\ 1 & 0\\ \end{array}}\right] \left[{\begin{array}{cc} \overline{w}_{11}(0.5) & \overline{w}_{12}(0.5)\\ \overline{w}_{21}(0.5) & \overline{w}_{22}(0.5)\\ \end{array}}\right]\\ &+& \left[{\begin{array}{cc} \overline{w}_{11}(0.5) & \overline{w}_{12}(0.5)\\ \overline{w}_{21}(0.5) & \overline{w}_{22}(0.5)\\ \end{array}}\right] \left[{\begin{array}{cc} 0 & 1\\ 0 & 0\\ \end{array}}\right]\\ &+&\left[{\begin{array}{cc} 0 & 0\\ 1 & 0\\ \end{array}}\right] \left[{\begin{array}{cc} \overline{w}_{11}(0.5) & \overline{w}_{12}(0.5)\\ \overline{w}_{21}(0.5) & \overline{w}_{22}(0.5)\\ \end{array}}\right] \left[{\begin{array}{cc} 0 & 1\\ 0 & 0\\ \end{array}}\right]. \end{eqnarray*}

    Substituting \bar{u}\equiv 0, \bar{y}_{1}\equiv 0 , and \bar{y}_{2}\equiv 0 directly into the above equations, the following results can be obtained:

    \begin{eqnarray*} \left\{ \begin{array}{ll} \overline{\varphi}_{1}(t)\equiv 0,\\ \overline{\varphi}_{2}(t)\equiv 1, \end{array}\right. \quad t\in[0,1], \end{eqnarray*}

    and

    \begin{eqnarray*} \left\{ \begin{array}{ll} \overline{w}_{11} = t-1, t\in[0,1],\\ \overline{w}_{12} = \overline{w}_{21} = \overline{w}_{22} = \left\{\begin{array}{ll} 0,&t\in[0.5,1],\\ -0.5,&t\in[0,0.5).\\ \end{array}\right.\\ \end{array}\right. \end{eqnarray*}

    By (4.4), the necessary condition for the singular control \bar{u}\equiv 0 to be optimal is given by

    \begin{eqnarray} (t-1)v^{2}\geq 0, \forall v\; \in \overline{U}(t). \end{eqnarray} (5.1)

    However, by the above computations, (5.1) cannot hold for any fixed t\in (0, 1) and nonzero v , since t-1<0 . Therefore, according to Theorem 4.6, the singular control \bar{u}\equiv 0 cannot be an optimal singular control. A numerical illustration of this conclusion is sketched below.
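    The following short numerical sketch (an illustration under the assumption that forward Euler on a fine grid is adequate; it is not part of the proof) confirms the conclusion: the singular control \bar{u}\equiv 0 gives J = y_{2}(1) = 0 , while any nonzero constant control drives y_{2}(1) strictly below zero.

    import numpy as np

    def cost(u_const, n=20000):
        """Simulate the Section 5 example with a constant control and return J(u) = y2(1).

        Dynamics: y1' = u, y2' = -y1**2 on [0,1] away from t = 0.5, with the state jump
        y1(0.5+) = y1(0.5) + y1(0.5) and y2 continuous; y1(0) = y2(0) = 0.
        """
        ts = np.linspace(0.0, 1.0, n + 1)
        dt = ts[1] - ts[0]
        y1, y2 = 0.0, 0.0
        for t in ts[:-1]:
            y2 += -y1**2 * dt          # forward Euler step for y2
            y1 += u_const * dt         # forward Euler step for y1
            if t < 0.5 <= t + dt:      # impulse at t = 0.5
                y1 += y1
        return y2

    print(cost(0.0))     # J(0)    = 0: the singular control
    print(cost(1.0))     # J(1)    < 0, so u = 0 is not optimal
    print(cost(-0.5))    # J(-0.5) < 0 as well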

    In this paper, we have investigated the pointwise Jacobson-type necessary conditions for Problem P. By introducing an impulsive linear matrix Riccati differential equation, we derived an integral representation of the second-order variation of the cost functional. On this basis, we obtained the integral form of the second-order necessary conditions and the pointwise Jacobson-type necessary conditions for optimal singular control in the classical sense. Incidentally, the Legendre-Clebsch condition and the pointwise Legendre-Clebsch condition were also obtained. These conclusions have been derived under weaker assumptions, thereby enriching the existing results. In the future, we will continue to study pointwise Jacobson-type second-order necessary optimality conditions in the Pontryagin sense.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the anonymous referees for their helpful comments and valuable suggestions which have improved the quality of the manuscript. This work was supported by the National Natural Science Foundation of China (No. 12061021 and No. 11161009).

    The authors declare that there is no conflict of interest.


