Research article

Neutral differential equations with noncanonical operator: Oscillation behavior of solutions

  • Received: 03 November 2020 Accepted: 31 December 2020 Published: 15 January 2021
  • MSC : 34C10, 34K11

  • The objective of this work is to study the oscillatory behavior of neutral differential equations with several delays. Using both the Riccati substitution technique and comparison with first-order delay equations, we establish new oscillation criteria. Our new criteria simplify and complement some related results published in the literature. Moreover, some examples are given to show the applicability of our results.

    Citation: Elmetwally M. Elabbasy, Amany Nabih, Taher A. Nofal, Wedad R. Alharbi, Osama Moaaz. Neutral differential equations with noncanonical operator: Oscillation behavior of solutions[J]. AIMS Mathematics, 2021, 6(4): 3272-3287. doi: 10.3934/math.2021196



    It is well known that the optimal feedback control can be synthesized by the dynamic programming (DP) approach, and finding the optimal control in feedback form is often regarded as the Holy Grail of control theory ([16]). However, before one arrives at the feedback control by this approach, two difficulties must be faced. The notion of viscosity solution to partial differential equations (PDEs) overcomes the first one, namely that the HJB equation usually has no classical solution regardless of the smoothness of its coefficients ([7,8,9]). Under this framework, such a weak solution enjoys both existence and uniqueness: the value function of the optimal control problem is the unique viscosity solution to the HJB equation. Nevertheless, for most optimal control problems it is usually impossible to find the viscosity solution analytically, which is the second difficulty hindering the use of the DP approach. Numerical solution is almost the only practically significant way to find the optimal control via the HJB equation. Luckily, there is nowadays a body of literature devoted to these topics, in which the analysis of DP methods for deterministic and stochastic control problems is presented in detail ([3,6,11,12]).

    Besides the Bellman DP principle, another milestone of control theory is the Pontryagin maximum principle, which establishes a necessary optimality condition satisfied by the optimal control system under investigation, consisting of the state equation and its adjoint system ([23,26]). Nevertheless, in essence, the control provided by the Pontryagin maximum principle is open-loop, which is usually less than satisfactory. Moreover, it is notorious for its sophistication and for the fact that it rarely gives insight into the structure of the optimal controls ([24]). Generally speaking, a feedback control system is considered superior to an open-loop system: many unnecessary disturbances and noise signals from outside the system can be rejected by using feedback ([4]). Furthermore, even from the numerical point of view, for the two-point boundary value problem obtained from the necessary optimality condition, the well-known shooting method has to overcome the difficulty of "guessing" the initial data to start the iterative numerical process ([14,27]), which is generally not an easy job.

    Great efforts are still needed to develop more efficient algorithms for finding the optimal control, for two clear reasons. One is the lack of analytical solutions to practical optimal control problems: the problem under investigation must be solved numerically, and the ultimate aim of control theory is to be used in the real world, which also coincides with the requirements of the information age. The other reason is the curse of dimensionality, the inherent defect of the DP approach ([22]). Thanks to many model reduction techniques ([1,3,13,20,19]), this problem can be partially circumvented, so that effective algorithms for low-dimensional problems, initially confined to "toy'' problems, can be generalized to deal with much more complicated practical optimal control problems. The upwind finite-difference scheme is such a well-adapted algorithm and has been successfully applied to many examples ([13,14,15]). Most importantly, its convergence has been rigorously proven in [28].

    Certainly, it is not easy to find an optimal feedback control by the HJB-based approach, and many complicated computations are usually involved. Even so, there are still many interesting results available on this topic, such as [1,5,17,19,22,29], to name just a few; we can select only the more relevant research to mention here. For two practical optimal control problems, [14,15] obtain the optimal feedback controls by repeatedly calling the algorithm for the HJB equation. In [2,10,18], the HJB equation is first solved to obtain the value function, and then a convex combination of the numerical value function is applied to obtain an approximation of the optimal feedback control; note that the obtained numerical control function is completely ignored and is not directly applied to obtain the optimal feedback control. A class of explicit Markov chain approximation methods is introduced and studied in detail in [21]; these methods are tailored to solving the HJB equations for both stochastic and deterministic continuous-time nonlinear optimal control problems on finite and infinite horizons. The paper [20] considers a distributed volume control problem on an infinite time region and obtains the optimal control by the convex combination of the control values at each point of the polyhedron, which can be interpreted as a special interpolation. The idea of using interpolation to find the optimal control is also mentioned in [13], but it is kept mostly to one-dimensional linear interpolation for a one-dimensional problem; multi-dimensional problems are not involved in the analysis.

    In this paper, we are concerned with a feedback design for the numerical solution of the optimal control problem on a finite time horizon under the framework of the DPVS approach. To synthesize the optimal feedback control, we solve the HJB equation by the upwind finite-difference scheme. Differently from the usual existing algorithms, the numerical control function is interpolated in turn to obtain an approximation of the optimal feedback control-trajectory pair, which avoids solving the HJB equation repeatedly and thus effectively improves the computational efficiency. Moreover, the design can deal with optimal control problems equipped with multiple controls. More generally, the adopted interpolation is not limited to one-dimensional linear interpolation: other interpolation techniques, including multi-dimensional linear interpolation, cubic spline interpolation, nearest-neighbor interpolation and cubic-Hermite interpolation, work equally well in this new design.

    The remainder of this paper is organized as follows. In Section 2, we revisit the procedure of the DPVS approach and present the upwind finite-difference scheme for the HJB equation. An interpolation algorithm then follows, which constitutes the core of this paper. The effectiveness of the algorithm is demonstrated in Section 3 by solving five quite distinct examples, before some concluding comments are made in Section 4.

    Set $T>0$ and let $U\subset\mathbb{R}^m$ be a closed, bounded and convex set. Denote by $\mathcal{U}[0,T]=L^\infty([0,T];U)$ the admissible control set. We consider the following control system

    $$\begin{cases}\dfrac{d}{dt}y(t)=f(t,y(t),u(t)), & t\in(0,T],\\ y(0)=y_0,\end{cases} \tag{2.1}$$

    where $y:[0,T]\to\mathbb{R}^n$ is the state, the control $u(\cdot)\in\mathcal{U}[0,T]$, and $y_0\in\mathbb{R}^n$ is a given initial value. Assume that the mapping $f:[0,T]\times\mathbb{R}^n\times U\to\mathbb{R}^n$ is continuous. Take the cost functional as

    $$J(u(\cdot))=\int_0^T L(s,y(s),u(s))\,ds+h(y(T)), \tag{2.2}$$

    where both the running cost $L:[0,T]\times\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$ and the terminal cost $h:\mathbb{R}^n\to\mathbb{R}$ are continuous. The optimal control problem is to minimize (2.2) over $\mathcal{U}[0,T]$, i.e.,

    $$\inf_{u\in\mathcal{U}[0,T]}J(u(\cdot)) \tag{2.3}$$

    subject to the control system (2.1).

    In order to present the DP principle, we consider a family of dynamic optimal control problems. Namely, over $\mathcal{U}[t,T]=L^\infty([t,T];U)$, we minimize the cost functional

    $$J(u(\cdot);t,x)=\int_t^T L(s,y(s),u(s))\,ds+h(y(T)),$$

    subject to

    $$\begin{cases}\dfrac{d}{ds}y(s)=f(s,y(s),u(s)), & s\in(t,T],\\ y(t)=x,\end{cases}$$

    where $(t,x)\in[0,T)\times\mathbb{R}^n$ is the initial time and state. Define the value function by

    $$v(t,x)=\inf_{u(\cdot)\in\mathcal{U}[t,T]}J(u(\cdot);t,x),\quad (t,x)\in[0,T)\times\mathbb{R}^n;\qquad v(T,x)=h(x),\quad x\in\mathbb{R}^n, \tag{2.4}$$

    which is the associated minimal cost. It satisfies the DP principle ([3])

    $$v(t,x)=\inf_{u(\cdot)\in\mathcal{U}[t,\tau]}\left\{\int_t^\tau L(s,y(s;t,x),u(s))\,ds+v(\tau,y(\tau;t,x))\right\},$$

    for any $\tau\in[t,T)$.
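As a quick numerical sanity check of this identity, one can evaluate both sides along an optimal pair with a known value function. The sketch below (Python, with illustrative helper names; the paper's own experiments use MATLAB) takes the linear-quadratic data of Example 2 in Section 3, where $v(t,x)=x^2/(1+e^{2(t-1)})$:

```python
import numpy as np

# Check the DP principle numerically along the optimal pair of the
# linear-quadratic problem of Example 2: ydot = y + u, L = u^2/2,
# value function v(t,x) = x^2 / (1 + e^{2(t-1)}).
def v(t, x):
    return x ** 2 / (1.0 + np.exp(2.0 * (t - 1.0)))

def y_star(t):                      # optimal trajectory from y(0) = 1
    return (np.exp(t) + np.exp(2.0 - t)) / (1.0 + np.exp(2.0))

def u_star(t):                      # optimal control
    return -2.0 * np.exp(2.0 - t) / (1.0 + np.exp(2.0))

tau = 0.5
s = np.linspace(0.0, tau, 10001)
L = 0.5 * u_star(s) ** 2
running = np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(s))   # trapezoid rule
lhs = v(0.0, 1.0)                    # value at (t, x) = (0, 1)
rhs = running + v(tau, y_star(tau))  # DP right-hand side at this tau
```

Both sides agree to quadrature accuracy, which is exactly the statement of the DP principle at $(t,x)=(0,1)$ with $\tau=1/2$.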

    Then, if the value function $v$ is smooth enough, say $v\in C^1([0,T]\times\mathbb{R}^n)$, the following HJB equation can be derived from the DP principle:

    $$-\partial_t v(t,x)-\inf_{u\in U}\{f(t,x,u)\cdot\nabla_x v(t,x)+L(t,x,u)\}=0,\quad (t,x)\in[0,T)\times\mathbb{R}^n;\qquad v(T,x)=h(x),\quad x\in\mathbb{R}^n, \tag{2.5}$$

    where $\nabla_x$ denotes the gradient with respect to $x$. Having solved the HJB equation (2.5), we next turn to the feedback synthesis, i.e., the construction of the optimal control $u^*(t)$ together with the optimal state $y^*(t)$. For any mapping $\bar u(t,x)$ satisfying

    $$\bar u(t,x)\in\arg\inf_{u\in U}\{f(t,x,u)\cdot\nabla_x v(t,x)+L(t,x,u)\},$$

    we set

    $$u^*(t)=\bar u(t,y^*(t)), \tag{2.6}$$

    for almost all $t\in[0,T]$, where $y^*(t)$ satisfies

    $$\frac{d}{dt}y^*(t)=f(t,y^*(t),\bar u(t,y^*(t))),\quad t\in(0,T], \tag{2.7}$$

    with $y^*(0)=y_0$. Then we have

    $$f(t,y^*(t),u^*(t))\cdot\nabla_x v(t,y^*(t))+L(t,y^*(t),u^*(t))=\inf_{u\in U}\{f(t,y^*(t),u)\cdot\nabla_x v(t,y^*(t))+L(t,y^*(t),u)\}$$

    for almost all $t\in[0,T]$. Thus $u^*(\cdot)$ is an optimal control of (2.3) by [3]. Here, $\bar u$ in (2.6) is called the control function or the optimal feedback mapping. When the mapping $\bar u$ in (2.7) is known, the system (2.7) is a closed-loop system.

    The DPVS approach tells us that once a viscosity solution of the HJB equation (2.5) is obtained numerically, we are able to find the numerical solution of the feedback law by the feedback synthesis above.
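The feedback synthesis above reduces, at each $(t,x)$, to minimizing the Hamiltonian $f(t,x,u)\cdot\nabla_x v(t,x)+L(t,x,u)$ over $U$. A minimal Python sketch, replacing the exact minimization by a grid search over a discretized control set (function names are illustrative; the test data are taken from Example 2 of Section 3, whose exact minimizer is known):

```python
import numpy as np

def feedback_control(f, L, grad_v, t, x, u_grid):
    """Approximate the feedback mapping ubar(t, x) by a grid search:
    pick the u in u_grid minimizing f(t,x,u)*grad_v(t,x) + L(t,x,u)."""
    H = np.array([f(t, x, u) * grad_v(t, x) + L(t, x, u) for u in u_grid])
    return u_grid[np.argmin(H)]

# Data of Example 2: f = x + u, L = u^2/2, v = x^2/(1+e^{2(t-1)}),
# so grad_v = 2x/(1+e^{2(t-1)}) and the exact minimizer is -grad_v.
g = lambda t: 1.0 / (1.0 + np.exp(2.0 * (t - 1.0)))
f = lambda t, x, u: x + u
L = lambda t, x, u: 0.5 * u ** 2
grad_v = lambda t, x: 2.0 * x * g(t)

u_grid = np.linspace(-2.0, 2.0, 4001)
u_bar = feedback_control(f, L, grad_v, 0.0, 1.0, u_grid)
```

Here the grid search recovers the analytic minimizer $-\nabla_x v(0,1)=-2g(0)$ up to the grid spacing; in practice any scalar or constrained optimizer can replace the grid.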

    Next we revisit the upwind finite-difference scheme for the HJB equation (2.5). It is a well-adapted algorithm and has been successfully applied to many examples ([13,14,15]). Most importantly, its convergence has been rigorously proven in [28].

    Take a polygonal region $[0,T]\times\prod_{k=1}^n[a_k,b_k]$ as the computational domain. Set the time step $\Delta t=T/N$ and denote $t_j=T-j\Delta t$, $j=0,1,2,\dots,N$, which divides the time domain $[0,T]$ into $N$ equal parts. Similarly, for each $k=1,2,\dots,n$, divide $[a_k,b_k]$ into $M_k$ equal parts and denote by $\Delta x_k=(b_k-a_k)/M_k$, $k=1,2,\dots,n$, the space steps. Let $i=(i_1,i_2,\dots,i_n)$, $i_{k+}=(i_1,\dots,i_{k-1},i_k+1,i_{k+1},\dots,i_n)$, $i_{k-}=(i_1,\dots,i_{k-1},i_k-1,i_{k+1},\dots,i_n)$, where $i_k=0,1,\dots,M_k$, $k=1,2,\dots,n$, and denote the grid point $x_i=(a_1+i_1\Delta x_1,a_2+i_2\Delta x_2,\dots,a_n+i_n\Delta x_n)$. Then the upwind finite-difference scheme for the HJB equation (2.5) can be written as

    $$-\frac{v_i^{j+1}-v_i^{j}}{\Delta t}+\sum_{k=1}^{n}\left(\frac{1+\operatorname{sign}f_{k,i}^{j}}{2}\,\frac{v_{i_{k+}}^{j}-v_{i}^{j}}{\Delta x_k}+\frac{1-\operatorname{sign}f_{k,i}^{j}}{2}\,\frac{v_{i}^{j}-v_{i_{k-}}^{j}}{\Delta x_k}\right)f_{k,i}^{j}+L_{i}^{j}=0,$$
    $$u_i^{j+1}=\arg\inf_{u\in U}\left(\sum_{k=1}^{n}f_k(t_{j+1},x_i,u)\,\frac{v_{i}^{j+1}-v_{i_{k-}}^{j+1}}{\Delta x_k}+L(t_{j+1},x_i,u)\right), \tag{2.8}$$

    for $i_k=j+1,j+2,\dots,M_k-j-1$, $k=1,2,\dots,n$ and $j=0,1,\dots,N-1$. Here, $\operatorname{sign}z$ denotes the sign of $z$, and $u_i^j,v_i^j$ are, respectively, the approximations of $\bar u(t_j,x_i)$ and $v(t_j,x_i)$, where we denote

    $$f_{k,i}^{j}=f_k(t_j,x_i,u_i^j),\qquad L_i^j=L(t_j,x_i,u_i^j).$$
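In one space dimension the scheme takes a particularly simple form: each node is updated with the forward or backward difference according to the sign of $f$. A hedged Python sketch of a single backward-in-time sweep (the function name and array layout are illustrative; the paper's implementation is in MATLAB):

```python
import numpy as np

def upwind_sweep(v, f_vals, L_vals, dt, dx):
    """One explicit sweep of the 1-D analogue of scheme (2.8):
    v holds v^j at all nodes; f_vals and L_vals hold f^j and L^j there.
    Returns v^{j+1} on the interior nodes (the stencil loses one node
    per side, cf. the shrinking index range below (2.8)).
    Stability requires the CFL condition dt*|f|/dx <= 1."""
    fi, Li = f_vals[1:-1], L_vals[1:-1]
    fwd = (v[2:] - v[1:-1]) / dx        # forward difference, used if f > 0
    bwd = (v[1:-1] - v[:-2]) / dx       # backward difference, used if f <= 0
    Dv = np.where(fi > 0, fwd, bwd)     # upwind choice
    return v[1:-1] + dt * (fi * Dv + Li)

# transport sanity check: f = 1, L = 0, v = x gives exactly v + dt
x = np.linspace(0.0, 1.0, 11)
out = upwind_sweep(x.copy(), np.ones(11), np.zeros(11), 0.05, 0.1)
```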

    In the following, we construct an HJB-based feedback algorithm for (2.3). Its detailed steps are listed as follows.

    Step 1. Initial partition of time and space. Divide the computational domain into a mesh as described in Section 2 above. Denote by $(u_j,y_j)$ the optimal control-trajectory pair at $t_j=j\Delta t$ for $j=0,1,\dots,N$.

    Step 2. Compute the approximations of the value function and the control function. Based on the numerical scheme (2.8), set $\gamma_k=\Delta t/\Delta x_k$ for $k=1,2,\dots,n$.

    Sub-step 2.1. With the same space steps $\Delta x_k$, $k=1,2,\dots,n$, extend the initial solution region $\prod_{k=1}^n[a_k,b_k]$ to the larger region $\prod_{k=1}^n[a_k-(N+1)\Delta x_k,\,b_k+(N+1)\Delta x_k]$ by adding $2(N+1)$ nodes per dimension. (This is a rough expansion; more precise extension steps can be found in [29].) Renumber the points of the new region as in Section 2: let $ii=(ii_1,ii_2,\dots,ii_n)$, $ii_{k+}=(ii_1,\dots,ii_{k-1},ii_k+1,ii_{k+1},\dots,ii_n)$, $ii_{k-}=(ii_1,\dots,ii_{k-1},ii_k-1,ii_{k+1},\dots,ii_n)$, where $ii_k=0,1,\dots,M_k+2(N+1)$ and $k=1,2,\dots,n$, and define $x_{ii}=(a_1+(ii_1-N-1)\Delta x_1,\,a_2+(ii_2-N-1)\Delta x_2,\dots,a_n+(ii_n-N-1)\Delta x_n)$.

    Sub-step 2.2. Initialize $\{v_{ii}^0\}$ and $\{u_{ii}^0\}$, $ii_k=0,1,\dots,M_k+2(N+1)$, as follows:

    $$v_{ii}^0=h(x_{ii}),\qquad u_{ii}^0=\arg\inf_{u\in U}\left(\sum_{k=1}^{n}f_k(t_0,x_{ii},u)\,\frac{v_{ii}^{0}-v_{ii_{k-}}^{0}}{\Delta x_k}+L(t_0,x_{ii},u)\right).$$

    Sub-step 2.3. For $j=0$ to $N-1$, compute $v_{ii}^{j+1}$ and $u_{ii}^{j+1}$ as follows. For $ii_k=j+1$ to $M_k+2N-j+1$, $k=1,2,\dots,n$, set

    $$v_{ii}^{j+1}=\Big(1-\sum_{k=1}^{n}\gamma_k|f_{k,ii}^{j}|\Big)v_{ii}^{j}+\sum_{k=1}^{n}\frac{1+\operatorname{sign}f_{k,ii}^{j}}{2}\,\gamma_k f_{k,ii}^{j}\,v_{ii_{k+}}^{j}-\sum_{k=1}^{n}\frac{1-\operatorname{sign}f_{k,ii}^{j}}{2}\,\gamma_k f_{k,ii}^{j}\,v_{ii_{k-}}^{j}+\Delta t\,L_{ii}^{j},$$
    $$u_{ii}^{j+1}=\arg\inf_{u\in U}\left(\sum_{k=1}^{n}f_k(t_{j+1},x_{ii},u)\,\frac{v_{ii}^{j+1}-v_{ii_{k-}}^{j+1}}{\Delta x_k}+L(t_{j+1},x_{ii},u)\right).$$

    Sub-step 2.4. Extract the numerical value function and control function over $[0,T]\times\prod_{k=1}^n[a_k,b_k]$, denoted by $\{\{v_i^j\}_{i_k=0}^{M_k}\}_{j=0}^{N}$ and $\{\{u_i^j\}_{i_k=0}^{M_k}\}_{j=0}^{N}$, $k=1,2,\dots,n$, respectively. Here, $v_i^j$ approximates the value function $v(t_j,x_i)$ and $u_i^j$ approximates the control function $\bar u(t_j,x_i)$.

    Step 3. Compute the optimal pairs $\{(u_j,y_j)\}_{j=0}^N$. For the initial setting, let $y_0$ be the given initial value and set $j=0$.

    Sub-step 3.1. Calculate the $j$-th optimal feedback control $u_j$ from the numerical control function $\{u_i^{N-j}\}_{i_k=0}^{M_k}$, $k=1,2,\dots,n$, obtained in Step 2. Apply linear interpolation (or cubic spline, nearest-neighbor, or cubic-Hermite interpolation) at the point $y_j$, using the values $\{u_i^{N-j}\}_{i_k=0}^{M_k}$ at the known points $\{x_i\}_{i_k=0}^{M_k}$, $k=1,2,\dots,n$. The value obtained by interpolation is an approximation of $\bar u(t_j,y_j)$, denoted by $\tilde u(t_j,y_j)$. Then we set

    $$u_j=\tilde u(t_j,y_j).$$

    Sub-step 3.2. Calculate the $(j+1)$-th optimal trajectory value $y_{j+1}$. Discretize the state equation by the forward difference:

    $$y_{j+1}=y_j+\Delta t\,f(t_j,y_j,u_j).$$

    With the known $u_j$ and $y_j$, we solve the finite-difference equation above to obtain the $(j+1)$-th optimal trajectory value $y_{j+1}$.

    Sub-step 3.3. Iterate to the next time instant. Set $j=j+1$. If $j>N$, end the procedure; otherwise, go to Sub-step 3.1. Continue until all optimal feedback control-trajectory pairs $\{(u_j,y_j)\}_{j=0}^N$ are obtained.

    Remark 1. In Sub-step 3.1 we can also adopt other interpolation methods, such as polynomial interpolation, average interpolation, Newton interpolation, Lagrange interpolation, and so on ([25]). Similarly, other difference methods can be used to discretize the state equation in Sub-step 3.2.
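To make Steps 1-3 concrete, here is a condensed end-to-end Python sketch (the paper's code is MATLAB) for a one-dimensional problem of the same form as Example 1 in the next section ($\dot y=yu$, $L=0$, $h(x)=-x$, $U=[0,1]$). The control set is discretized by a grid, and, deviating from Sub-step 2.1, the domain is extended by a fixed margin chosen so that the CFL condition $\Delta t\,|f|/\Delta x\le 1$ still holds on the extended region; all grid sizes are illustrative:

```python
import numpy as np

# Step 1: partition time and (extended) space.
T, N = 1.0, 50
dt = T / N                                   # time step
a, b, dx = -1.5, 1.5, 0.1                    # base region and space step
margin = 3.0                                 # fixed extension (not (N+1)*dx)
xs = np.arange(a - margin, b + margin + dx / 2, dx)
Ugrid = np.linspace(0.0, 1.0, 11)            # discretized control set

def control_row(v):
    """u_i = arg inf_u f(x_i, u) * (v_i - v_{i-1})/dx   (here L = 0)."""
    u = np.zeros_like(xs)
    Dv = np.empty_like(xs)
    Dv[1:] = (v[1:] - v[:-1]) / dx
    Dv[0] = Dv[1]
    for i in range(len(xs)):
        u[i] = Ugrid[np.argmin(xs[i] * Ugrid * Dv[i])]
    return u

# Step 2: backward-in-time upwind sweeps; u_tab[j] approximates the
# control function at scheme level j, i.e. at time T - j*dt.
v = -xs.copy()                               # v^0 = h at t = T
u_tab = [control_row(v)]
for j in range(N):
    un, vn = u_tab[-1], v.copy()
    for i in range(1, len(xs) - 1):
        f = xs[i] * un[i]
        Dv = (v[i + 1] - v[i]) / dx if f > 0 else (v[i] - v[i - 1]) / dx
        vn[i] = v[i] + dt * f * Dv
    v = vn
    u_tab.append(control_row(v))

# Step 3: closed-loop rollout, interpolating the control table
# (u_tab[N - j] is the control function at trajectory time j*dt).
y = 0.5
for j in range(N):
    uj = np.interp(y, xs, u_tab[N - j])      # Sub-step 3.1
    y = y + dt * y * uj                      # Sub-step 3.2, forward Euler
```

For $y_0=1/2$ the computed feedback stays at $u=1$ along the trajectory, so the rollout ends near the exact $y^*(1)=e/2$, while the interpolated control at negative states is $0$; both match the exact feedback (3.1) of Example 1.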

    This section is devoted to five numerical examples verifying the effectiveness of the algorithm. The simulations are implemented in MATLAB on a desktop computer with an (i5-6600M) processor and 8 GB of random access memory.

    Example 1. Consider the following control problem ([31]):

    $$\begin{cases}\dot y(t)=y(t)u(t), & t\in(0,1],\\ y(0)=y_0,\end{cases}\qquad J(u(\cdot))=-y(1),\qquad \min_{u(\cdot)\in\mathcal{U}}J(u(\cdot)),$$

    where $\mathcal{U}=L^\infty([0,1];U)$, $U=[0,1]$, and $y_0\in\mathbb{R}$ is a given initial value. Its corresponding HJB equation is

    $$-\partial_t v(t,x)-\inf_{u\in[0,1]}\{xu\,\partial_x v(t,x)\}=0,\quad (t,x)\in[0,1)\times\mathbb{R};\qquad v(1,x)=-x,\quad x\in\mathbb{R}.$$

    Its exact value function $v$ is

    $$v(t,x)=\begin{cases}-x e^{1-t}, & \text{if } x>0,\\ -x, & \text{if } x\le 0,\end{cases}$$

    and the control function is

    $$\bar u(t,x)=\begin{cases}1, & \text{if } x>0,\\ 0, & \text{if } x\le 0.\end{cases} \tag{3.1}$$

    Here, we choose the region $D=[0,1]\times[-3/2,3/2]$, i.e., $0\le t\le 1$ and $-3/2\le x\le 3/2$. The number of nodes $N$ on the $t$ axis is chosen as 41, and the number of nodes $M$ on the $x$ axis is set as 31, so the time step is $\Delta t=1/40$ and the space step is $\Delta x=1/10$. Adopting linear interpolation, we numerically solve this example by the interpolation algorithm presented in Section 2 above, and the results are plotted in Figures 1 and 2.

    Figure 1.  Numerical and exact solutions in Example 1 with initial state $y_0=-1/2$.
    Figure 2.  Numerical and exact solutions in Example 1 with initial state $y_0=1/2$.

    Moreover, in order to show the performance of the presented interpolation algorithm, two existing algorithms are additionally chosen to solve Example 1, simultaneously. The corresponding results are also given in Figures 1 and 2. And the comparisons on numerical simulations are listed in Table 1.

    Table 1.  The maximum norm error estimates and CPU times for Example 1 with initial states $y_0=-1/2$ and $y_0=1/2$, respectively.

    Algorithm       Initial state   Control      Trajectory   CPU(s)
    Interpolation   $y_0=-1/2$      6.6000e-05   3.3001e-05   0.005615
                    $y_0=1/2$       6.6000e-05   0.016695     0.005484
    Algorithm 1     $y_0=-1/2$      6.6000e-05   3.3001e-05   0.015370
                    $y_0=1/2$       6.6000e-05   0.016695     0.016507
    Algorithm 2     $y_0=-1/2$      6.6000e-05   3.3001e-05   0.025182
                    $y_0=1/2$       6.6000e-05   0.016695     0.025733


    Actually, one chosen algorithm is the minimization of the Hamiltonian; see Algorithm 3.2 in [13] for details. For convenience, we denote it by Algorithm 1. In the other algorithm, the optimal feedback control is also obtained by minimizing the Hamiltonian function, but it differs from Algorithm 1 in that the value function is obtained by a convex combination of the numerical value function ([18]). We call it Algorithm 2.

    When $y_0=-1/2$, it is easy to obtain the optimal control $u^*(t)\equiv 0$, $t\in[0,1]$, and the optimal trajectory $y^*(t)\equiv -1/2$, $t\in[0,1]$. Figure 1(a) shows that the numerical solutions of the optimal feedback control agree well with the exact solution for all three algorithms. In Figure 1(b), the numerical solution of the optimal trajectory also agrees well with its exact solution. When $y_0=1/2$, the optimal control is $u^*(t)\equiv 1$, $t\in[0,1]$, and the optimal trajectory is $y^*(t)=\frac{1}{2}e^{t}$, $t\in[0,1]$. Figure 2 shows that both the numerical optimal feedback control and trajectory agree well with the exact solutions for the initial state $y_0=1/2$.

    For these three algorithms, Table 1 lists the errors in the maximum norm between the numerical and exact solutions, including the optimal feedback control $u^*$ and the optimal trajectory $y^*$, with initial states $y_0=-1/2$ and $y_0=1/2$. Such small error estimates tell us that the numerical solutions obtained by the interpolation algorithm and by Algorithms 1 and 2 are very close to the exact solutions.

    Moreover, the CPU times are listed in Table 1. They are the computing times of the three algorithms after the numerical control function and numerical value function have been obtained by the upwind finite-difference method. Comparing the CPU times, we find that the interpolation algorithm takes the least time among the three while achieving similar error estimates. This illustrates that linear interpolation is meaningful for finding an approximation of the optimal feedback control, even though the control function (3.1) is discontinuous. In all, the numerical results show that the interpolation algorithm is not only feasible but also efficient for obtaining the optimal feedback control.

    Example 2. Consider the following control problem ([30]):

    $$\begin{cases}\dot y(t)=y(t)+u(t), & t\in(0,1],\\ y(0)=1,\end{cases}\qquad J(u(\cdot))=\frac{1}{2}\int_0^1 u^2(t)\,dt+\frac{1}{2}y^2(1),\qquad \min_{u(\cdot)\in\mathcal{U}}J(u(\cdot)),$$

    where $\mathcal{U}=L^\infty([0,1];U)$, $U=\mathbb{R}$. Its corresponding HJB equation reads

    $$-\partial_t v(t,x)-\inf_{u\in\mathbb{R}}\Big\{(x+u)\,\partial_x v(t,x)+\frac{1}{2}u^2\Big\}=0,\quad (t,x)\in[0,1)\times\mathbb{R};\qquad v(1,x)=\frac{1}{2}x^2,\quad x\in\mathbb{R}. \tag{3.2}$$

    The exact solution to the HJB equation (3.2) is

    $$v(t,x)=\frac{x^2}{1+e^{2(t-1)}},$$

    and the control function is

    $$\bar u(t,x)=-\frac{2x}{1+e^{2(t-1)}}. \tag{3.3}$$

    With (2.6), (2.7) and (3.3), we obtain the optimal feedback control $u^*(t)=-\dfrac{2e^{2-t}}{1+e^{2}}$ and the optimal trajectory $y^*(t)=\dfrac{e^{t}+e^{2-t}}{1+e^{2}}$.
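This pair can be checked directly: $y^*$ solves the closed-loop equation (2.7) with $\bar u$ from (3.3), and $u^*(t)=\bar u(t,y^*(t))$. A small Python check by central differences (illustrative, not part of the paper's MATLAB code):

```python
import numpy as np

# Consistency check for Example 2: the stated optimal pair satisfies
# ydot = y + ubar(t, y) with ubar from (3.3), and u*(t) = ubar(t, y*(t)).
ubar = lambda t, x: -2.0 * x / (1.0 + np.exp(2.0 * (t - 1.0)))
y_star = lambda t: (np.exp(t) + np.exp(2.0 - t)) / (1.0 + np.exp(2.0))
u_star = lambda t: -2.0 * np.exp(2.0 - t) / (1.0 + np.exp(2.0))

t = np.linspace(0.0, 1.0, 1001)
h = 1e-6
ydot = (y_star(t + h) - y_star(t - h)) / (2.0 * h)   # central difference
closed_loop = y_star(t) + ubar(t, y_star(t))         # right-hand side of (2.7)
err_ode = np.max(np.abs(ydot - closed_loop))
err_ctrl = np.max(np.abs(u_star(t) - ubar(t, y_star(t))))
```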

    Now we numerically solve Example 2. Choose the computational region $[0,1]\times[0,1]$, i.e., $0\le t\le 1$ and $0\le x\le 1$. Here we choose a uniform mesh along the $t$ axis with $N=41$ nodes, i.e., $\Delta t=1/40$, and a uniform mesh along the $x$ axis with $M=21$ nodes, i.e., $\Delta x=1/20$. In this example, the chosen interpolation is again linear. The obtained numerical solutions are plotted in Figure 3, from which we see that both the numerical optimal feedback control and trajectory agree well with the exact solutions. Moreover, in Table 2 we list the maximum errors between the numerical and exact solutions of the optimal feedback control and trajectory; they are, respectively, 0.025896 and 0.017297, which are very small. This also shows that the algorithm is effective.

    Figure 3.  Numerical and exact solutions in Example 2 with initial state y0=1.
    Table 2.  The maximum norm error estimates for Example 2 with initial state $y_0=1$.

    Error     Control    Trajectory
    $y_0=1$   0.025896   0.017297


    In addition, the control problems in Examples 1 and 2 are all one-dimensional. Next, we present two further optimal control examples with two space dimensions.

    Example 3. Consider the following two-dimensional control problem ([29]) with two state variables and one control:

    $$\begin{cases}\begin{pmatrix}\dot x(t)\\ \dot y(t)\end{pmatrix}=\begin{pmatrix}x(t)\\ y(t)\end{pmatrix}u(t), & t\in(0,1],\\ x(0)=x_0,\quad y(0)=y_0,\end{cases}\qquad J(u(\cdot))=-x(1)-y(1),\qquad \min_{u(\cdot)\in\mathcal{U}}J(u(\cdot)),$$

    where $\mathcal{U}=L^\infty([0,1];U)$, $U=[0,1]$, $(x_0,y_0)\in\mathbb{R}^2$. Its corresponding HJB equation is

    $$-\partial_t v(t,x,y)-\inf_{u\in[0,1]}\{xu\,v_x(t,x,y)+yu\,v_y(t,x,y)\}=0,\quad (t,x,y)\in[0,1)\times\mathbb{R}^2;\qquad v(1,x,y)=-(x+y),\quad (x,y)\in\mathbb{R}^2. \tag{3.4}$$

    In this example, the exact value function is

    $$v(t,x,y)=\begin{cases}-(x+y)e^{1-t}, & \text{if } x+y>0,\\ -(x+y), & \text{if } x+y\le 0,\end{cases}$$

    and the control function is

    $$\bar u(t,x,y)=\begin{cases}1, & \text{if } x+y>0,\\ 0, & \text{if } x+y\le 0.\end{cases} \tag{3.5}$$

    From (2.6), (2.7) and (3.5), when the initial states are $(x_0,y_0)=(-1/2,-1/2)$, the optimal feedback control is $u^*(t)\equiv 0$, $t\in[0,1]$, and the optimal trajectory is $(x^*(t),y^*(t))\equiv(-1/2,-1/2)$, $t\in[0,1]$. When the initial states are $(x_0,y_0)=(1/2,1/2)$, the optimal feedback control is $u^*(t)\equiv 1$ and the optimal trajectory is $(x^*(t),y^*(t))=(\tfrac12 e^t,\tfrac12 e^t)$, $t\in[0,1]$.
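A quick Python check of this claim: integrating the closed loop with the feedback law (3.5) from $(x_0,y_0)=(1/2,1/2)$ must reproduce the value $v(0,1/2,1/2)=-(x_0+y_0)e=-e$ predicted by the exact solution (simple explicit Euler with a fine step; illustrative only):

```python
import numpy as np

# Closed-loop rollout for Example 3 with the feedback law (3.5).
ubar = lambda x, y: 1.0 if x + y > 0 else 0.0
dt, n = 1e-4, 10000
x, y = 0.5, 0.5
for _ in range(n):
    u = ubar(x, y)
    x, y = x + dt * x * u, y + dt * y * u    # Euler on (xdot, ydot) = (x, y) u
J = -(x + y)                                  # cost J = -(x(1) + y(1))
# J is close to v(0, 1/2, 1/2) = -e
```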

    Next we use the interpolation algorithm to solve the HJB equation (3.4) over the region $[0,1]\times[-3/2,3/2]^2$, i.e., $0\le t\le 1$, $-3/2\le x\le 3/2$, $-3/2\le y\le 3/2$. We still choose uniform meshes on the $t$, $x$ and $y$ axes. The number of nodes $N$ on the $t$ axis is set as 41, i.e., $\Delta t=1/40$; the numbers of nodes on the $x$ and $y$ axes are set as $M_1=M_2=31$, i.e., $\Delta x=\Delta y=1/10$. Again, linear interpolation is applied in the algorithm.

    From Figures 4 and 5, we see that, even with different initial values, the numerical solutions agree well with the exact optimal feedback control and trajectory.

    Figure 4.  Numerical and exact solutions in Example 3 with initial states $(x_0,y_0)=(-1/2,-1/2)$.
    Figure 5.  Numerical and exact solutions in Example 3 with initial states $(x_0,y_0)=(1/2,1/2)$.

    Example 3 shows that, even for a two-dimensional control problem, linear interpolation is meaningful for finding an approximation of the optimal feedback control. On the other hand, we find that the discontinuity of the control function (3.5) does not hinder us from finding the optimal feedback control.

    Furthermore, all three examples above are equipped with a single control, and only linear interpolation has been adopted in the algorithm. To show the effectiveness and efficiency of the interpolation algorithm in an even better fashion, we consider an optimal control problem with two states and two controls in Example 4, in which four interpolations, namely linear, cubic spline, nearest-neighbor and cubic-Hermite interpolation, are successively adopted.

    Example 4. Consider the following optimal control problem with two state variables and two controls:

    $$\begin{cases}\begin{pmatrix}\dot x_1(t)\\ \dot x_2(t)\end{pmatrix}=\begin{pmatrix}x_1(t)+u_1(t)\\ u_2(t)\end{pmatrix}, & t\in(0,1],\\ x_1(0)=x_{10},\quad x_2(0)=x_{20},\end{cases}\qquad J(u_1(\cdot),u_2(\cdot))=\frac{1}{2}\int_0^1\big[u_1^2(t)+u_2^2(t)+x_2^2(t)\big]\,dt+\frac{1}{2}x_1^2(1),\qquad \min_{(u_1(\cdot),u_2(\cdot))\in\mathcal{U}}J(u_1(\cdot),u_2(\cdot)), \tag{3.6}$$

    where $(x_{10},x_{20})\in\mathbb{R}^2$, $\mathcal{U}=L^\infty([0,1];U)$, $U=\{(p,q)\in\mathbb{R}^2:\ |p|\le 1,\ |q|\le 1,\ |p+q^2|\le 1\}$, and $|\cdot|$ denotes the absolute value. The HJB equation of (3.6) can be written as

    $$-\partial_t v(t,x_1,x_2)-\inf_{|u_1|\le 1,\,|u_2|\le 1,\,|u_1+u_2^2|\le 1}\Big\{(x_1+u_1)v_{x_1}(t,x_1,x_2)+u_2 v_{x_2}(t,x_1,x_2)+\frac{1}{2}u_1^2+\frac{1}{2}u_2^2+\frac{1}{2}x_2^2\Big\}=0,\quad (t,x_1,x_2)\in[0,1)\times\mathbb{R}^2;\qquad v(1,x_1,x_2)=\frac{1}{2}x_1^2,\quad (x_1,x_2)\in\mathbb{R}^2. \tag{3.7}$$

    Here we choose the computational region $[0,1]\times[-1,1]^2$, i.e., $0\le t\le 1$, $-1\le x_1\le 1$ and $-1\le x_2\le 1$. The number of nodes $N$ on the $t$ axis is chosen as 41, and the numbers of nodes on the $x_1$ and $x_2$ axes are set as $M_1=M_2=41$, so the time step is $\Delta t=1/40$ and the space steps are $\Delta x_1=\Delta x_2=1/20$. To visualize the numerical results, we plot the value function $v$, the two control functions $\bar u_1$ and $\bar u_2$, as well as $\bar u_1+\bar u_2^2$, at different cross-sections in Figures 6-8. From Figures 6(b-d), 7(b-d) and 8(b-d), we see that the numerical controls $\bar u_1$ and $\bar u_2$, as well as $\bar u_1+\bar u_2^2$, satisfy the given constraints.

    Figure 6.  Numerical solutions of the HJB equation (3.7) in Example 4.
    Figure 7.  Numerical solutions of the HJB equation (3.7) in Example 4 at $x_1=0$.
    Figure 8.  Numerical solutions of the HJB equation (3.7) in Example 4 at $t=1/2$.

    Moreover, in this example, the exact value function $v$ and control function $(\bar u_1,\bar u_2)$ of the HJB equation (3.7) are unknown in the literature. Although the control variables are subject to several constraints, including a nonlinear one, when $(x_{10},x_{20})=(1/2,-1/2)$ a direct calculation gives the optimal controls $u_1^*(t)=-\dfrac{e^{2-t}}{1+e^{2}}$, $u_2^*(t)=\dfrac{e^{2-t}-e^{t}}{2(1+e^{2})}$, $t\in[0,1]$, and the optimal trajectories $x_1^*(t)=\dfrac{e^{t}+e^{2-t}}{2(1+e^{2})}$, $x_2^*(t)=-\dfrac{e^{t}+e^{2-t}}{2(1+e^{2})}$, $t\in[0,1]$. Furthermore, we utilize both linear interpolation and cubic spline interpolation to find the optimal feedback control; their respective numerical results are plotted in Figures 9 and 10. From these two figures we see that, whether the linear or the cubic spline interpolation is used, all numerical solutions are in good agreement with the exact solutions.
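Since no exact value function is available here, the stated optimal quadruple can at least be verified against the dynamics and the constraint set. A small Python check by central differences (illustrative):

```python
import numpy as np

# Check for Example 4: the stated optimal quadruple satisfies the
# dynamics x1dot = x1 + u1, x2dot = u2 and the constraints
# |u1| <= 1, |u2| <= 1, |u1 + u2^2| <= 1.
c = 1.0 + np.exp(2.0)
u1 = lambda t: -np.exp(2.0 - t) / c
u2 = lambda t: (np.exp(2.0 - t) - np.exp(t)) / (2.0 * c)
x1 = lambda t: (np.exp(t) + np.exp(2.0 - t)) / (2.0 * c)
x2 = lambda t: -(np.exp(t) + np.exp(2.0 - t)) / (2.0 * c)

t = np.linspace(0.0, 1.0, 1001)
h = 1e-6
d = lambda g: (g(t + h) - g(t - h)) / (2.0 * h)    # central difference
err1 = np.max(np.abs(d(x1) - (x1(t) + u1(t))))
err2 = np.max(np.abs(d(x2) - u2(t)))
feasible = (np.abs(u1(t)) <= 1).all() and (np.abs(u2(t)) <= 1).all() \
    and (np.abs(u1(t) + u2(t) ** 2) <= 1).all()
```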

    Figure 9.  Numerical and exact solutions in Example 4 with initial states $(x_{10},x_{20})=(1/2,-1/2)$ (linear interpolation case).
    Figure 10.  Numerical and exact solutions in Example 4 with initial states $(x_{10},x_{20})=(1/2,-1/2)$ (cubic spline interpolation case).
    Table 3.  The maximum norm error estimates for Example 3 with initial states $(x_0,y_0)=(-1/2,-1/2)$ and $(x_0,y_0)=(1/2,1/2)$, respectively.

    Errors                    Control      Trajectory
    $(x_0,y_0)=(-1/2,-1/2)$   6.6000e-05   4.6671e-05
    $(x_0,y_0)=(1/2,1/2)$     6.6000e-05   0.023611


    Then, in order to investigate the influence of different interpolations on the algorithm, we provide the error results of four interpolation cases for this example. In Table 4, we list the computational errors in the maximum norm between the numerical and exact solutions, including the optimal control $(u_1^*,u_2^*)$ and the optimal trajectory $(x_1^*,x_2^*)$, with initial states $x_{10}=1/2$ and $x_{20}=-1/2$.

    Table 4.  The maximum norm error estimates for Example 4 with initial states $(x_{10},x_{20})=(1/2,-1/2)$ in four interpolation cases.
    Error Control Trajectory
    Linear 0.038706 0.02681
    Cubic spline 0.038706 0.026801
    Nearest-neighbor 0.069388 0.030588
    Cubic-Hermite 0.038706 0.026804


    In the linear interpolation case they are 0.038706 and 0.02681, respectively; in the cubic spline case, 0.038706 and 0.026801. In the nearest-neighbor and cubic-Hermite interpolation cases we also obtain very small errors. Although the errors in the nearest-neighbor case are slightly larger than the others, all of them show a good approximation to the exact solutions. Relatively speaking, Table 4 shows that nearest-neighbor interpolation is the worst of the four, while the other three perform almost equally well. From these small error estimates we conclude that the numerical solutions obtained by the interpolation algorithm are very close to the exact solutions. The experimental results of Example 4 demonstrate again that the interpolation algorithm is very effective.

    We emphasize that, in the four examples above, all optimal controls are continuous; the case of an optimal control with jumps has not been considered. Next, we use the interpolation algorithm to solve a control problem whose optimal control has jumps, in order to show that the algorithm is also feasible for this kind of problem.

    Example 5. Consider the following optimal control problem with two state variables and one control:

    $$\begin{cases}\begin{pmatrix}\dot x_1(t)\\ \dot x_2(t)\end{pmatrix}=\begin{pmatrix}x_2(t)\\ -x_1(t)+\frac{1}{4}u(t)\end{pmatrix}, & t\in(0,2\pi],\\ x_1(0)=0,\quad x_2(0)=0,\end{cases}\qquad J(u(\cdot))=x_2(2\pi),\qquad \min_{u(\cdot)\in\mathcal{U}}J(u(\cdot)), \tag{3.8}$$

    where $\mathcal{U}=L^\infty([0,2\pi];U)$, $U=[-1,1]$. The HJB equation of (3.8) can be written as

    $$-\partial_t v(t,x_1,x_2)-\inf_{|u|\le 1}\Big\{x_2 v_{x_1}(t,x_1,x_2)+\Big(-x_1+\frac{1}{4}u\Big)v_{x_2}(t,x_1,x_2)\Big\}=0,\quad (t,x_1,x_2)\in[0,2\pi)\times\mathbb{R}^2;\qquad v(2\pi,x_1,x_2)=x_2,\quad (x_1,x_2)\in\mathbb{R}^2.$$

    Here we choose the computational region $(t,x_1,x_2)\in[0,2\pi]\times[-3/2,3/2]^2$. The number of nodes $N$ on the $t$ axis is chosen as 41, and the numbers of nodes on the $x_1$ and $x_2$ axes are set as $M_1=M_2=11$, so the time step is $\Delta t=\pi/20$ and the space steps are $\Delta x_1=\Delta x_2=3/10$. In this case, for the given initial state $(x_{10},x_{20})=(0,0)$, the unique optimal control $u^*(t)$ and the corresponding optimal trajectory $(x_1^*(t),x_2^*(t))$ of the system (3.8) can be obtained analytically as

    $$u^*(t)=\begin{cases}-1, & 0\le t<\frac{\pi}{2},\\ 1, & \frac{\pi}{2}\le t<\frac{3\pi}{2},\\ -1, & \frac{3\pi}{2}\le t\le 2\pi,\end{cases} \tag{3.9}$$

    and

    $$\begin{pmatrix}x_1^*(t)\\ x_2^*(t)\end{pmatrix}=\begin{cases}\frac{1}{4}\begin{pmatrix}\cos t-1\\ -\sin t\end{pmatrix}, & 0\le t<\frac{\pi}{2},\\[4pt] \frac{1}{4}\begin{pmatrix}\cos t-2\sin t+1\\ -2\cos t-\sin t\end{pmatrix}, & \frac{\pi}{2}\le t<\frac{3\pi}{2},\\[4pt] \frac{1}{4}\begin{pmatrix}\cos t-4\sin t-1\\ -4\cos t-\sin t\end{pmatrix}, & \frac{3\pi}{2}\le t\le 2\pi.\end{cases}$$
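These piecewise formulas can be verified mechanically: continuity at the two switching times, the dynamics on each arc, and the terminal value $x_2^*(2\pi)=-1$ (so the optimal cost is $-1$). A small Python check (illustrative):

```python
import numpy as np

# Verify the piecewise optimal pair of Example 5 against the dynamics
# x1dot = x2, x2dot = -x1 + u/4 with the bang-bang control (3.9).
def u(t):
    return -1.0 if (t < np.pi / 2 or t >= 3 * np.pi / 2) else 1.0

def state(t):
    if t < np.pi / 2:
        return 0.25 * np.array([np.cos(t) - 1.0, -np.sin(t)])
    if t < 3 * np.pi / 2:
        return 0.25 * np.array([np.cos(t) - 2.0 * np.sin(t) + 1.0,
                                -2.0 * np.cos(t) - np.sin(t)])
    return 0.25 * np.array([np.cos(t) - 4.0 * np.sin(t) - 1.0,
                            -4.0 * np.cos(t) - np.sin(t)])

# continuity at the switching times
gap1 = np.linalg.norm(state(np.pi / 2 - 1e-9) - state(np.pi / 2))
gap2 = np.linalg.norm(state(3 * np.pi / 2 - 1e-9) - state(3 * np.pi / 2))

def residual(t):
    """Dynamics residual by central differences, away from the switches."""
    h = 1e-6
    x1d, x2d = (state(t + h) - state(t - h)) / (2.0 * h)
    x1, x2 = state(t)
    return max(abs(x1d - x2), abs(x2d - (-x1 + 0.25 * u(t))))
```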

    From (3.9), we see that the optimal control is not continuous: it has two jumps in $[0,2\pi]$. For this example, we execute the computation by the interpolation algorithm above, choosing linear interpolation and the Heun method in Sub-step 3.2. The simulation results are plotted in Figure 11, in which all numerical solutions show good agreement with the exact optimal control and trajectory. The computational errors in the maximum norm between the numerical and exact solutions of the optimal control $u^*$ and the optimal trajectory $(x_1^*,x_2^*)$ are, respectively, 4.8000e-05 and 0.0911. These small error estimates demonstrate that the interpolation algorithm is likewise effective for control problems whose optimal control has jumps.
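The Heun (improved Euler) update used here in Sub-step 3.2 replaces the forward difference with a two-stage, second-order step. A hedged Python sketch on the first arc of Example 5, where the control is the constant $u=-1$ and the exact solution is $x_1=\frac14(\cos t-1)$, $x_2=-\frac14\sin t$ (function names illustrative):

```python
import numpy as np

def f(t, x):                       # closed-loop right-hand side, u = -1
    return np.array([x[1], -x[0] - 0.25])

def heun_step(t, x, dt):
    """One Heun step: average the slopes at the start point and at the
    Euler-predicted end point (second-order accurate)."""
    k1 = f(t, x)
    k2 = f(t + dt, x + dt * k1)
    return x + 0.5 * dt * (k1 + k2)

dt, n = np.pi / 20, 10             # 10 steps reach t = pi/2
x = np.array([0.0, 0.0])
for j in range(n):
    x = heun_step(j * dt, x, dt)
exact = np.array([np.cos(np.pi / 2) - 1.0, -np.sin(np.pi / 2)]) / 4.0
err = np.linalg.norm(x - exact)    # O(dt^2), much smaller than Euler's O(dt)
```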

    Figure 11.  Numerical and exact solutions in Example 5 (Linear interpolation case).
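    To illustrate the Heun (explicit trapezoidal) time-stepping used in Sub-step 3.2, the sketch below integrates the state equation of (3.8) under the exact bang-bang control (3.9); it is our own illustration, not the authors' implementation. From the piecewise formulas above, the exact terminal state is $(x_1(2\pi),x_2(2\pi))=(0,-1)$, so the optimal cost is $J=x_2(2\pi)=-1$, and the integration should reproduce this.

```python
import math

def u_exact(t):
    # Bang-bang optimal control (3.9).
    return -1.0 if (t < math.pi / 2 or t >= 3 * math.pi / 2) else 1.0

def f(x1, x2, u):
    # Right-hand side of the state equation in (3.8).
    return (x2, -x1 + 0.25 * u)

def heun(n_steps=400):
    # Heun's method: predictor (Euler) + trapezoidal corrector.
    dt = 2 * math.pi / n_steps
    t, x1, x2 = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        k1 = f(x1, x2, u_exact(t))
        k2 = f(x1 + dt * k1[0], x2 + dt * k1[1], u_exact(t + dt))
        x1 += 0.5 * dt * (k1[0] + k2[0])
        x2 += 0.5 * dt * (k1[1] + k2[1])
        t += dt
    return x1, x2
```

Calling `heun()` returns an approximation of $(x_1(2\pi),x_2(2\pi))\approx(0,-1)$; the scheme is second-order accurate on each smooth piece of the control.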

    In this paper, we construct an interpolation algorithm for finding an approximation of the optimal feedback control. Five numerical examples with different constraints on the control (including nonlinear constraints) are successfully solved, demonstrating the effectiveness of the presented interpolation algorithm. The numerical results show that the algorithm with interpolation is very efficient for solving practical optimal control problems.

    The authors would like to thank the editor and the anonymous referees for their very careful reading and constructive suggestions, which improved the manuscript substantially.



  • This article has been cited by:

    1. Sida Lin, Lixia Meng, Jinlong Yuan, Changzhi Wu, An Li, Chongyang Liu, Jun Xie, Sequential adaptive switching time optimization technique for maximum hands-off control problems, Electronic Research Archive, 32 (2024), 2229. doi: 10.3934/era.2024101
    2. Vadim Azhmyakov, Warodom Werapun, Jakapan Suaboot, On the Hamilton–Jacobi–Bellman approach to the relaxed optimal feedback control problems, Journal of the Franklin Institute, 2025, 107667. doi: 10.1016/j.jfranklin.2025.107667
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)