The importance of bridges is undeniable, and their presence in human life goes back a long time. Thanks to bridges, road and railway traffic runs without interruption over rivers and hazardous areas, time and fuel are saved, congestion on roads is minimized, distances between places are reduced, and many accidents are avoided, since bridges reduce the number of bends and zig-zags in roads. As a result, many economies have grown and many societies have become connected. However, bridges also bring challenges, such as collapse and instability caused by natural hazards such as wind, earthquakes, etc. To overcome these difficulties, engineers and scientists have worked to find the best possible designs and models. Our aim in this work is to investigate the following plate problem

\begin{equation} \left\{ \begin{array}{ll} u_{tt}+\Delta^2 u+\alpha(t)g(u_t) = u\vert u\vert^{\beta}, & \text{in}\ \Omega\times(0,T),\\ u(0,y,t) = u_{xx}(0,y,t) = u(\pi,y,t) = u_{xx}(\pi,y,t) = 0, & (y,t)\in(-d,d)\times(0,T),\\ u_{yy}(x,\pm d,t)+\sigma u_{xx}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ u_{yyy}(x,\pm d,t)+(2-\sigma)u_{xxy}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ u(x,y,0) = u_0(x,y),\quad u_t(x,y,0) = u_1(x,y), & \text{ in }\ \Omega\times(0,T), \end{array} \right. \end{equation} (1.1)

where \Omega = (0,\pi)\times(-d,d), d,\beta > 0, g:\mathbb{R}\rightarrow\mathbb{R} and \alpha:[0,+\infty)\rightarrow(0,+\infty) is a nonincreasing differentiable function, u is the vertical displacement of the bridge and \sigma is the Poisson ratio. This is a weakly damped nonlinear suspension-bridge problem, in which the damping is modulated by a time-dependent coefficient \alpha(t). Firstly, we prove the local existence using the Faedo-Galerkin method and the Banach fixed point theorem. Secondly, we prove the global existence by using the potential well method. Finally, we establish an explicit and general decay result, depending on g and \alpha, for which the exponential and polynomial decay rate estimates are only special cases. The proof is based on the multiplier method and makes use of some properties of convex functions, including the general Young inequality and Jensen's inequality.

The famous report by Claude-Louis Navier [1] was the only mathematical treatise on suspension bridges for several decades. Another milestone theoretical contribution was the monograph by Melan [2]. After the Tacoma Narrows collapse, engineers felt it necessary to introduce the time variable into mathematical models and equations in order to attempt an explanation of what had occurred. As a matter of fact, in Appendix VI of the Federal Report [3], a model of inextensible cables is derived and the linearized Melan equation is obtained. Other important contributions were the works by Smith-Vincent and the analysis of vibrations in suspension bridges presented by Bleich-McCullough-Rosecrans-Vincent [4]. In all these historical references, the bridge was modelled linearly as a beam suspended from a cable; hence, all the equations were linear. Mathematicians showed little interest in suspension bridges until recently. In 1987, McKenna introduced the first nonlinear models to study them from a theoretical point of view, and he was followed by several other mathematicians (see [5,6]). McKenna's main idea was to consider the slackening of the hangers as a nonlinear phenomenon, a viewpoint which is by now well established among engineers as well [7,8]. The slackening phenomenon was analyzed in various complex beam models by several authors (see [9,10,11]). Motivated by the wonderful book of Rocard [12], where it was pointed out that the correct way to model a suspension bridge is through a thin plate, Ferrero-Gazzola [13] introduced the following hyperbolic problem:

\begin{equation} \left\{ \begin{array}{ll} u_{tt}(x,y,t)+\eta u_t+\Delta^2 u(x,y,t)+h(x,y,u) = f, & \text{in}\ \Omega\times\mathbb{R}^+,\\ u(0,y,t) = u_{xx}(0,y,t) = u(\pi,y,t) = u_{xx}(\pi,y,t) = 0, & (y,t)\in(-\ell,\ell)\times\mathbb{R}^+,\\ u_{yy}(x,\pm\ell,t)+\sigma u_{xx}(x,\pm\ell,t) = 0, & (x,t)\in(0,\pi)\times\mathbb{R}^+,\\ u_{yyy}(x,\pm\ell,t)+(2-\sigma)u_{xxy}(x,\pm\ell,t) = 0, & (x,t)\in(0,\pi)\times\mathbb{R}^+,\\ u(x,y,0) = u_0(x,y),\quad u_t(x,y,0) = u_1(x,y), & \text{ in }\ \Omega\times\mathbb{R}^+, \end{array} \right. \end{equation} (2.1)

where \Omega = (0,\pi)\times(-\ell,\ell) is a planar rectangular plate, \sigma is the well-known Poisson ratio, \eta is the damping coefficient, h is the nonlinear restoring force of the hangers and f is an external force. After the appearance of the above model, many mathematicians became interested in investigating variants of it, using different kinds of damping with the aim of obtaining stability of the bridge modeled by the above problem. Messaoudi [14] considered the following nonlinear Petrovsky equation

\begin{equation} u_{tt}+\Delta^2 u+a u_t\vert u_t\vert^{m-2} = b u\vert u\vert^{p-2}, \end{equation} (2.2)

and proved the existence of a local weak solution, showed that this solution is global if m\geq p, and that it blows up in finite time if p > m and the energy is negative. Wang [15] considered the equation

\begin{equation} u_{tt}+\delta u_t+\Delta^2 u+a u = u\vert u\vert^{p-2}, \end{equation} (2.3)

where a = a(x,y,t), together with the above initial and boundary conditions. After showing the uniqueness and existence of local solutions, he gave sufficient conditions for global existence and finite-time blow-up of solutions. Mukiawa [16] considered a plate equation modeling a suspension bridge with weak damping and hanger restoring force. He proved the well-posedness and established an explicit and general decay result without imposing restrictive growth conditions on the frictional damping term. Messaoudi and Mukiawa [17] studied problem (2.3), where the linear frictional damping is replaced by a nonlinear one, established the existence of a global weak solution and proved exponential and polynomial stability results. Audu et al. [18] considered a plate equation as a model for a suspension bridge with a general nonlinear internal feedback and a time-varying weight. Under some conditions on the feedback and the coefficient functions, the authors established a general decay estimate. For more existence results on similar problems, we mention the work of Xu et al. [19], in which the local existence of a weak solution is proved by the Galerkin method and the global existence by the potential well method. He et al. [20] considered the following Kirchhoff type equation

\begin{equation} -\left(a+b\int_{\Omega}\vert\nabla u\vert^2 dx\right)\Delta u = f(u)+h,\quad \text{in}\ \Omega, \end{equation} (2.4)

where \Omega\subset\mathbb{R}^3 is a bounded domain or \Omega = \mathbb{R}^3, 0\le h\in L^2(\Omega) and f\in C(\mathbb{R},\mathbb{R}). The authors proved the existence of at least one or two positive solutions by using the monotonicity trick, and a nonexistence criterion is also established by virtue of the corresponding Pohozaev identity. Recently, Wang et al. [21] considered the fractional Rayleigh-Stokes problem with a nonlinearity satisfying certain critical conditions and proved the local existence, uniqueness and continuous dependence upon the initial data of \varepsilon-regular mild solutions. More results in this direction can be found in [22,23,24,25,26,27]. The paper is organized as follows. In Section 3, we present some preliminaries and essential lemmas. We prove the local existence in Section 4 and the global existence in Section 5. The statement and the proof of our stability result are given in Section 6.

    In this section, we present some material needed in the proofs of our results. First, we introduce the following space

\begin{equation} H^2_*(\Omega) = \left\{w\in H^2(\Omega): w = 0\ \text{on}\ \{0,\pi\}\times(-d,d)\right\}, \end{equation} (3.1)

    together with the inner product

\begin{equation} (u,v)_{H^2_*} = \int_{\Omega}\left(\Delta u\,\Delta v+(1-\sigma)\left(2u_{xy}v_{xy}-u_{xx}v_{yy}-u_{yy}v_{xx}\right)\right)dx. \end{equation} (3.2)

It is well known that \left(H^2_*(\Omega),(\cdot,\cdot)_{H^2_*}\right) is a Hilbert space, and the norm \Vert\cdot\Vert_{H^2_*} is equivalent to the usual H^2 norm; see [13]. Throughout this paper, c is used to denote a generic positive constant.

Lemma 3.1. [15] Let u\in H^2_*(\Omega) and assume that 1\le p < \infty. Then there exists a positive constant C_e = C_e(\Omega,p) > 0 such that

\Vert u\Vert_p\le C_e\Vert u\Vert_{H^2_*(\Omega)}.

Lemma 3.2. (Jensen's inequality) Let \psi:[a,b]\rightarrow\mathbb{R} be a convex function. Assume that the functions f:(0,L)\rightarrow[a,b] and r:(0,L)\rightarrow\mathbb{R} are integrable, r(x)\ge 0 for any x\in(0,L), and \int_0^L r(x)dx = k > 0. Then,

\begin{equation} \psi\left(\frac{1}{k}\int_0^L f(x)r(x)dx\right)\le\frac{1}{k}\int_0^L\psi(f(x))r(x)dx. \end{equation} (3.3)
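For instance, taking \psi(x) = x^2 and r\equiv 1 on (0,L) (so that k = L), inequality (3.3) reduces to

\left(\frac{1}{L}\int_0^L f(x)dx\right)^2\le\frac{1}{L}\int_0^L f^2(x)dx,

that is, the square of the mean is dominated by the mean of the square. In Section 6 we also use the concave counterpart of (3.3), obtained by applying it to -\psi when \psi is concave (for example \psi = G^{-1}), which allows G^{-1} to be pulled inside an average, as in (6.31).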

    We consider the following hypotheses:

(H1). The function g:\mathbb{R}\rightarrow\mathbb{R} is a nondecreasing C^0 function such that there exist \varepsilon, c_1, c_2 > 0 with

\begin{equation} \left\{ \begin{array}{ll} c_1\vert s\vert\le\vert g(s)\vert\le c_2\vert s\vert, & \text{if}\ \vert s\vert\ge\varepsilon,\\ s^2+g^2(s)\le G^{-1}(sg(s)), & \text{if}\ \vert s\vert\le\varepsilon, \end{array} \right. \end{equation} (3.4)

where G:\mathbb{R}^+\rightarrow\mathbb{R}^+ is a C^1 function which is either linear or a strictly increasing and strictly convex C^2 function on [0,\varepsilon] with G(0) = 0 and G'(0) = 0. In addition, the function g satisfies, for some \vartheta > 0,

\begin{equation} \left(g(s_1)-g(s_2)\right)(s_1-s_2)\ge\vartheta\vert s_1-s_2\vert^2. \end{equation} (3.5)

(H2). The function \alpha:\mathbb{R}^+\rightarrow\mathbb{R}^+ is a nonincreasing differentiable function such that \int_0^{\infty}\alpha(t)dt = \infty.

Remark 3.3. Hypothesis (H1) implies that sg(s) > 0 for all s\neq 0. It was introduced and employed by Lasiecka and Tataru [28], where it was shown that the monotonicity and continuity of g guarantee the existence of a function G with the properties stated in (H1).

    Remark 3.4. As in [28], we use Condition (3.5) to prove the uniqueness of the solution.
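For concreteness, we record a standard example of a damping term g and a function G satisfying (3.4), together with admissible weights for (H2); the specific constants below are ours. For 0 < \varepsilon\le 1 and p > 1, take

g(s) = \vert s\vert^{p-1}s\ \ \text{for}\ \vert s\vert\le\varepsilon,\qquad g(s) = \varepsilon^{p-1}s\ \ \text{for}\ \vert s\vert\ge\varepsilon,\qquad G(r) = 2^{-\frac{p+1}{2}}r^{\frac{p+1}{2}}.

Then g is continuous and nondecreasing, the first line of (3.4) holds with c_1 = c_2 = \varepsilon^{p-1}, and, for \vert s\vert\le\varepsilon,

s^2+g^2(s) = s^2+\vert s\vert^{2p}\le 2s^2 = G^{-1}\left(\vert s\vert^{p+1}\right) = G^{-1}(sg(s)),

while G is strictly increasing and strictly convex on [0,\varepsilon] with G(0) = G'(0) = 0. Concerning (H2), any \alpha(t) = a(1+t)^{-\theta} with a > 0 and 0\le\theta\le 1 is nonincreasing, differentiable and satisfies \int_0^{\infty}\alpha(t)dt = \infty, whereas \alpha(t) = ae^{-t} is not admissible.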

    The following lemmas will be of essential use in establishing our main results.

Lemma 3.5. [29] Let E:\mathbb{R}^+\rightarrow\mathbb{R}^+ be a nonincreasing function and \gamma:\mathbb{R}^+\rightarrow\mathbb{R}^+ be a strictly increasing C^1 function, with \gamma(t)\rightarrow+\infty as t\rightarrow+\infty. Assume that there exists c > 0 such that

\int_S^{+\infty}\gamma'(t)E(t)dt\le cE(S),\qquad 1\le S < +\infty.

    Then there exist positive constants k and ω such that

E(t)\le k e^{-\omega\gamma(t)}.
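In Section 6, Lemma 3.5 is applied with \gamma(t) = \int_0^t\alpha(s)ds, which is strictly increasing, of class C^1 and tends to +\infty by (H2). Two sample choices of \alpha (ours, for orientation; the hypothesis of the lemma is verified, in the case of linear G, by (6.1) below) illustrate the rates this produces:

\alpha\equiv\alpha_0 > 0:\quad \gamma(t) = \alpha_0 t,\quad E(t)\le k e^{-\omega\alpha_0 t};\qquad\qquad \alpha(t) = \frac{a}{1+t}:\quad \gamma(t) = a\ln(1+t),\quad E(t)\le k(1+t)^{-\omega a}.

Thus, even when the damping g is linear, the decay of E is dictated by the behavior of the coefficient \alpha.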

Lemma 3.6. [30] Let E:\mathbb{R}^+\rightarrow\mathbb{R}^+ be a differentiable and nonincreasing function and \chi:\mathbb{R}^+\rightarrow\mathbb{R}^+ be a convex and increasing function such that \chi(0) = 0. Assume that

\begin{equation} \int_s^{+\infty}\chi(E(t))dt\le E(s),\qquad\forall\, s\ge 0. \end{equation} (3.6)

    Then, E satisfies the following estimate

\begin{equation} E(t)\le\psi^{-1}\left(h(t)+\psi(E(0))\right),\qquad\forall\, t\ge 0, \end{equation} (3.7)

where \psi(t) = \int_t^1\frac{1}{\chi(s)}ds, and

\begin{equation*} \left\{ \begin{array}{ll} h(t) = 0, & 0\le t\le\dfrac{E(0)}{\chi(E(0))},\\[2mm] h^{-1}(t) = t+\dfrac{\psi^{-1}\left(t+\psi(E(0))\right)}{\chi\left(\psi^{-1}(t+\psi(E(0)))\right)}, & t > 0. \end{array} \right. \end{equation*}
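To see how (3.7) encodes explicit rates, consider the model case \chi(s) = cs with c > 0 (a sketch; the genuinely nonlinear case is treated in Theorem 6.4). Then

\psi(t) = \int_t^1\frac{ds}{cs} = -\frac{1}{c}\ln t,\qquad \psi^{-1}(y) = e^{-cy},\qquad h^{-1}(t) = t+\frac{1}{c},

so h(t) = t-\frac{1}{c} for t > \frac{E(0)}{\chi(E(0))} = \frac{1}{c}, and (3.7) gives

E(t)\le\psi^{-1}\left(h(t)+\psi(E(0))\right) = E(0)\,e^{-c\left(t-\frac{1}{c}\right)},\qquad t\ge\frac{1}{c},

that is, exponential decay.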

    In this section, we state and prove the local existence of weak solutions of problem (1.1). Similar results can be found in [31,32]. To this end, we consider the following problem

\begin{equation} \left\{ \begin{array}{ll} u_{tt}(x,y,t)+\Delta^2 u(x,y,t)+\alpha(t)g(u_t) = f(x,t), & \text{in}\ \Omega\times(0,T),\\ u(0,y,t) = u_{xx}(0,y,t) = u(\pi,y,t) = u_{xx}(\pi,y,t) = 0, & (y,t)\in(-d,d)\times(0,T),\\ u_{yy}(x,\pm d,t)+\sigma u_{xx}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ u_{yyy}(x,\pm d,t)+(2-\sigma)u_{xxy}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ u(x,y,0) = u_0(x,y),\quad u_t(x,y,0) = u_1(x,y), & \text{ in }\ \Omega\times(0,T), \end{array} \right. \end{equation} (4.1)

where f\in L^2(\Omega\times(0,T)) and (u_0,u_1)\in H^2_*(\Omega)\times L^2(\Omega). Then, we prove the following theorem:

Theorem 4.1. Let (u_0,u_1)\in H^2_*(\Omega)\times L^2(\Omega). Assume that (H1) and (H2) hold. Then, problem (4.1) has a unique local weak solution

u\in L^{\infty}\left([0,T),H^2_*(\Omega)\right),\quad u_t\in L^{\infty}\left([0,T),L^2(\Omega)\right),\quad u_{tt}\in L^{\infty}\left([0,T),\mathcal{H}(\Omega)\right),

where \mathcal{H}(\Omega) is the dual space of H^2_*(\Omega).

Proof. Uniqueness: Suppose that (4.1) has two weak solutions u and v. Then, w = u-v satisfies

\begin{equation} \left\{ \begin{array}{ll} w_{tt}(x,y,t)+\Delta^2 w(x,y,t)+\alpha(t)g(u_t)-\alpha(t)g(v_t) = 0, & \text{in}\ \Omega\times(0,T),\\ w(0,y,t) = w_{xx}(0,y,t) = w(\pi,y,t) = w_{xx}(\pi,y,t) = 0, & (y,t)\in(-d,d)\times(0,T),\\ w_{yy}(x,\pm d,t)+\sigma w_{xx}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ w_{yyy}(x,\pm d,t)+(2-\sigma)w_{xxy}(x,\pm d,t) = 0, & (x,t)\in(0,\pi)\times(0,T),\\ w(x,y,0) = w_t(x,y,0) = 0, & \text{ in }\ \Omega\times(0,T). \end{array} \right. \end{equation} (4.2)

Multiplying (4.2) by w_t and integrating over \Omega, we get

\begin{equation} \frac{1}{2}\frac{d}{dt}\left[\int_{\Omega}\left(w_t^2+\vert\Delta w\vert^2\right)dx\right]+\alpha(t)\int_{\Omega}\left(g(u_t)-g(v_t)\right)(u_t-v_t)dx = 0. \end{equation} (4.3)

    Integrating (4.3) over (0,t), we obtain

\begin{equation} \int_{\Omega}\left(w_t^2+\vert\Delta w\vert^2\right)dx+2\int_0^t\alpha(s)\int_{\Omega}\left(g(u_t)-g(v_t)\right)(u_t-v_t)dx\,ds = 0. \end{equation} (4.4)

Using Condition (3.5) and (H2), the second term in (4.4) is nonnegative; hence, we have

\begin{equation} \int_{\Omega}\left(w_t^2+\vert\Delta w\vert^2\right)dx = 0. \end{equation} (4.5)

We conclude that w = 0, i.e., u = v on \Omega\times(0,T), which proves the uniqueness of the solution of problem (4.1). Existence: To prove the existence of a solution of problem (4.1), we use the Faedo-Galerkin method as follows. First, we consider an orthonormal basis \{v_j\}_{j = 1}^{\infty} of H^2_*(\Omega) and define, for all k\ge 1, the sequence u^k in V_k = \text{span}\{v_1,v_2,...,v_k\}\subset H^2_*(\Omega) given by

u^k(x,t) = \sum\limits_{j = 1}^{k}a_j(t)v_j(x),

for all x\in\Omega and t\in(0,T), which satisfies the following approximate problem

\begin{equation} \left\{ \begin{array}{ll} \int_{\Omega}u^k_{tt}(x,t)v_j\,dx+\int_{\Omega}\Delta u^k(x,t)\Delta v_j\,dx+\alpha(t)\int_{\Omega}g(u^k_t)v_j\,dx = \int_{\Omega}f(x,t)v_j\,dx, & \text{in}\ \Omega\times(0,T),\\ u^k(x,y,0) = u^k_0(x,y),\quad u^k_t(x,y,0) = u^k_1(x,y), & \text{ in }\ \Omega\times(0,T), \end{array} \right. \end{equation} (4.6)

    for all j=1,2,...,k,

u^k(0) = u^k_0 = \sum\limits_{i = 1}^{k}\langle u_0,v_i\rangle v_i,\qquad u^k_t(0) = u^k_1 = \sum\limits_{i = 1}^{k}\langle u_1,v_i\rangle v_i, \qquad (4.7)

    such that

u^k_0\rightarrow u_0\ \text{in}\ H^2_*(\Omega),\qquad u^k_1\rightarrow u_1\ \text{in}\ L^2(\Omega). \qquad (4.8)

For any k\ge 1, problem (4.6) generates a system of k nonlinear ordinary differential equations. Standard ODE existence theory ensures the existence of a unique local solution u^k of problem (4.6) on [0,T_k), with 0 < T_k\le T. Next, we show, by a priori estimates, that T_k = T for all k\ge 1. Now, multiplying (4.6) by a_j'(t), using Green's formula and the boundary conditions, and then summing over j, we obtain, for all 0 < t\le T_k,

\begin{equation} \frac{1}{2}\frac{d}{dt}\left[\int_{\Omega}\left(\vert u^k_t\vert^2+\vert\Delta u^k\vert^2\right)dx\right]+\alpha(t)\int_{\Omega}u^k_t g(u^k_t)dx = \int_{\Omega}f(x,t)u^k_t(x,t)dx. \end{equation} (4.9)
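In more detail (a routine step, spelled out for clarity): multiplying the j-th equation of (4.6) by a_j^{\prime}(t) and summing over j = 1,...,k replaces the test function v_j by u^k_t = \sum_{j = 1}^{k}a_j^{\prime}(t)v_j, and the first two terms become exact time derivatives,

\int_{\Omega}u^k_{tt}\,u^k_t\,dx = \frac{1}{2}\frac{d}{dt}\int_{\Omega}\vert u^k_t\vert^2 dx,\qquad \int_{\Omega}\Delta u^k\,\Delta u^k_t\,dx = \frac{1}{2}\frac{d}{dt}\int_{\Omega}\vert\Delta u^k\vert^2 dx,

which yields (4.9).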

    Then, integrating (4.9) over (0,t) leads to

\begin{equation} \frac{1}{2}\int_{\Omega}\left(\vert u^k_t\vert^2+\vert\Delta u^k\vert^2\right)dx+\int_0^t\int_{\Omega}\alpha(s)u^k_t g(u^k_t)dx\,ds = \frac{1}{2}\int_{\Omega}\left(\vert u^k_1\vert^2+\vert\Delta u^k_0\vert^2\right)dx+\int_0^t\int_{\Omega}f(x,s)u^k_t(x,s)dx\,ds. \end{equation} (4.10)

From the convergence (4.8), using the fact that f\in L^2(\Omega\times(0,T)) and exploiting Young's inequality, (4.10) becomes, for any \varepsilon > 0, some C_{\varepsilon} > 0 and any t\in[0,T_k),

\begin{equation} \begin{aligned} &\frac{1}{2}\int_{\Omega}\left[\vert u^k_t\vert^2+\vert\Delta u^k\vert^2\right]dx+\int_0^t\int_{\Omega}\alpha(s)u^k_t g(u^k_t)dx\,ds\\ &\quad\le\frac{1}{2}\int_{\Omega}\left[\vert u^k_1\vert^2+\vert\Delta u^k_0\vert^2\right]dx+\varepsilon\int_0^t\int_{\Omega}\vert u^k_t\vert^2 dx\,ds+C_{\varepsilon}\int_0^t\int_{\Omega}\vert f(x,s)\vert^2 dx\,ds\\ &\quad\le C_{\varepsilon}+\varepsilon\sup\limits_{(0,T_{k})}\int_{\Omega}\vert u^k_t\vert^2 dx. \end{aligned} \end{equation} (4.11)

    Therefore, we obtain

    \begin{equation} \begin{aligned} & \frac{1}{2} \sup\limits_{(0,T_{k}) } \int_{\Omega } \vert u^{k}_{t}\vert ^{2}dx+ \frac{1}{2} \sup\limits_{(0,T_{k}) } \int_{\Omega } \vert \Delta u^{k}\vert ^{2}dx + \frac{1}{2} \sup\limits_{(0,T_{k}) } \int_0^{t_k}\int_{\Omega } \alpha (s) u^{k}_{t}(x,s) g\left( u^{k}_{t}(x,s)\right) dxds \\ & \leq C_{\varepsilon}+ \varepsilon \sup\limits_{(0,T_{k}) }\int_{\Omega } \vert u^{k}_{t}\vert ^{2}dx. \end{aligned} \end{equation} (4.12)

Choosing \varepsilon = \frac{1}{4}, estimate (4.12) yields, for all T_{k}\leq T and some C > 0 ,

    \begin{equation} \begin{aligned} & \sup\limits_{(0,T_{k}) }\int_{\Omega } \vert u^{k}_{t}\vert ^{2}dx+ \sup\limits_{(0,T_{k}) } \int_{\Omega } \vert \Delta u^{k}\vert ^{2}dx + \sup\limits_{(0,T_{k}) } \int_0^{t_k}\int_{\Omega } \alpha (s) u^{k}_{t}(x,s) g\left( u^{k}_{t}(x,s)\right) dxds \leq C. \end{aligned} \end{equation} (4.13)

    Consequently, the solution u^{k} can be extended to (0, T) , for any k\geq 1. In addition, we have

    \begin{equation*} (u^{k} )\ \text{is bounded in} \ L^{\infty}((0,T), H^2_* (\Omega))\; \text{and}\; (u_{t}^{k} ) \ \text{is bounded in} \ L^{\infty}((0,T),L^{2}(\Omega)). \end{equation*}

    Therefore, we can extract a subsequence, denoted by (u^{\ell}) such that, when \ell \rightarrow \infty, we have

    \begin{equation*} u^{\ell} \rightarrow u \ \ \text{weakly * in} \ L^{\infty} ((0,T),H^2_* (\Omega)) \; \text{and}\; u_{t}^{\ell} \rightarrow u_{t} \ \text{weakly * in} \ L^{\infty} ((0,T),L^{2}(\Omega)). \\ \end{equation*}

    Next, we prove that g(u^\ell_t) is bounded in L^2\left((0, T);L^{2}(\Omega)\right) . For this purpose, we consider two cases:

    Case 1. G is linear on [0, \varepsilon] . Then using (H1) and Young's inequality, we get

    \begin{equation} \begin{aligned} &\int_{\Omega}g^2(u^\ell_t) dx \le c\int_{\Omega}u^\ell_t g(u^\ell_t)dx-\int_{\Omega}\vert u_t^\ell\vert^2dx\\ & \quad \le \frac{c}{4 \delta_0}\int_{\Omega}\vert u^\ell_t \vert^2 dx+\delta_0\int_{\Omega}g^2(u^\ell_t)dx, \end{aligned} \end{equation} (4.14)

    for a suitable choice of \delta_0 and using the fact that u^\ell_t is bounded in L^{2}((0, T), L^{2}(\Omega)) , we obtain

    \begin{equation} \int_{0}^{T}\int_{\Omega}g^2(u^\ell_t) dxdt \le c. \end{equation} (4.15)

    Case 2. G is nonlinear. Let 0 < \varepsilon_1 \le \varepsilon such that

    \begin{equation} sg(s) \le \min\{ {\varepsilon, G(\varepsilon)}\}\text{ for all }\vert s\vert \le \varepsilon_1. \end{equation} (4.16)

Then, using (3.4), the continuity of g, and the fact that sg(s) > 0 for s\neq 0 (so that \vert g(s)\vert/\vert s\vert has a positive minimum and a finite maximum on the compact set \varepsilon_1\le\vert s\vert\le\varepsilon), one can show that

    \begin{equation} \begin{aligned} \begin{cases} &s^{2}+g^{2}(s)\le G^{-1}(sg(s)) \quad \text{for all} \quad \vert s \vert \le \varepsilon_1 \\ &c^{\prime}_{1}\vert s \vert \le \vert g(s) \vert \le c^{\prime}_{2}\vert s \vert \quad \text{for all} \quad \vert s \vert \ge \varepsilon_1. \end{cases} \end{aligned} \end{equation} (4.17)

    Define the following sets

    \begin{equation} \Omega_1 = \{x\in \Omega:\vert u^\ell_t\vert \le \varepsilon_1\},\; \text{and}\; \Omega_2 = \{x\in \Omega: \vert u^\ell_t \vert > \varepsilon_1\}. \end{equation} (4.18)

    Then, using (4.17) and (4.18) leads for some c^{\prime}_{2} > 0 ,

\begin{equation} \begin{aligned} &\int_{\Omega}g^{2}(u^\ell_t)dx = \int_{\Omega_2}g^{2}(u^\ell_t)dx+\int_{\Omega_1}g^{2}(u^\ell_t)dx\\ & \quad \le c^{\prime}_2\int_{\Omega_2}\vert u^\ell_t \vert ^2 dx+ \int_{\Omega_1}\left( \vert u^\ell_t\vert^2+g^2(u^\ell_t)\right)dx-\int_{\Omega_1}\vert u_t^\ell \vert^2 dx\\ & \quad \le c^{\prime}_{2}\int_{\Omega_2}\vert u^\ell_t \vert ^2 dx+\int_{\Omega_1}G^{-1}(u^\ell_t g(u^\ell_t))dx. \end{aligned} \end{equation} (4.19)

    Let

    J^\ell(t): = \int_{\Omega_1}u^\ell_t g(u^\ell_t)dx,
    \begin{equation} E^\ell(t) = \frac{1}{2}\left(\| u_t^\ell\|_2^2+\| u^\ell\|_{H^2_*(\Omega )}^2\right)-\frac{1}{\beta+2}\|u^\ell\|_{\beta+2}^{\beta+2}, \end{equation} (4.20)

    and

    \begin{equation} \left(E^\ell \right)^{\prime}(t) = -\alpha(t) \int_{\Omega}u_t^\ell g(u_t^\ell)dx \le 0. \end{equation} (4.21)

    Using (4.19) and Jensen's inequality, we obtain

    \begin{equation} \begin{aligned} &\int_{\Omega}g^{2}(u^\ell_t)dx\le c\int_{\Omega}\vert u^\ell_t \vert ^2 dx+G^{-1}(J^\ell(t))\\ & \quad = c\int_{\Omega}\vert u^\ell_t \vert ^2 dx+\frac{ G^{\prime}\left(\varepsilon_0\frac{E^\ell(t)}{E^\ell(0)}\right)}{ G^{\prime}\left(\varepsilon_0\frac{E^\ell(t)}{E^\ell(0)}\right)}G^{-1}\left(J^\ell(t)\right). \end{aligned} \end{equation} (4.22)

    Using the convexity of G ( G^\prime is increasing), we obtain for t \in (0, T) ,

    G^{\prime}\left(\varepsilon_0\frac{E^\ell(t)}{E^\ell(0)}\right)\ge G^{\prime}\left(\varepsilon_0\frac{E^\ell(T)}{E^\ell(0)}\right) = c.

    Let G^{*} be the convex conjugate of G in the sense of Young (see [33], pp. 61–64), then, for s\in(0, G^{\prime}(\varepsilon)],

    \begin{equation} \begin{aligned} G^{*}(s) = s(G^{\prime})^{-1}(s)-G[(G^{\prime})^{-1}(s)]\le s(G^{\prime})^{-1}(s). \end{aligned} \end{equation} (4.23)

    Using the general Young inequality

    AB \le G^{*}(A)+G(B), \quad \text{if} \quad A\in (0,G^{\prime}(\varepsilon)], \quad B\in(0,\varepsilon],

    for

    A = G^{\prime}\left(\varepsilon_{0}\frac{E^\ell(t)}{E^\ell(0)}\right) \quad \text{and} \quad B = G^{-1}\left(J^\ell(t)\right),

    and using the fact that E^{\ell}(t)\le E^{\ell}(0) , we get

\begin{equation} \begin{aligned} &\int_{\Omega}g^{2}(u^\ell_t)dx\le c\int_{\Omega}\vert u^\ell_t \vert ^2 dx+c\varepsilon_0 \frac{E^\ell(t)}{E^\ell(0)}G^{\prime}\left(\varepsilon_0\frac{E^\ell(t)}{E^\ell(0)}\right)-C(E^\ell)^{\prime}(t)\\ & \quad \le c\int_{\Omega}\vert u^\ell_t \vert ^2 dx+c-C(E^\ell)^{\prime}(t). \end{aligned} \end{equation} (4.24)

    Integrating (4.24) over (0, T) , we obtain

    \begin{equation} \begin{aligned} \int_{0}^{T}\int_{\Omega}g^{2}(u^\ell_t)dxdt\le c\int_{0}^{T}\int_{\Omega}\vert u^\ell_t \vert ^2 dxdt+cT-C\left(E^{\ell}(T)-E^{\ell}(0)\right). \end{aligned} \end{equation} (4.25)

    Using (4.21) and the fact that u^\ell_t is bounded in L^{2}\left((0, T);L^{2}(\Omega)\right) , we conclude that g(u^\ell_t) is bounded in L^{2}\left((0, T);L^{2}(\Omega)\right) . So, we find, up to a subsequence, that

    \begin{equation} g(u^\ell_t) {\rightharpoonup} \chi\text{ in } L^{2}\left((0,T);L^{2}(\Omega)\right). \end{equation} (4.26)

    Now, we have to show that \chi = g(u_t) . In (4.6), we use u^{\ell} instead of u^{k} and then integrate over (0, t) to get

    \begin{equation} \begin{array}{ll} \int_{\Omega }u_{t}^{\ell}v_{j}dx-\int_{\Omega }u_{1}^{\ell}v_{j}dx+\int_0^t \int_{\Omega } \Delta u^{\ell} \Delta v_{j}dxds+ \int_0^t\int_{\Omega } \alpha (s) g (u_t^{\ell}) v_{j}dxds = \int_0^t\int_{\Omega }f v_{j}dxds, \; j < \ell. \end{array} \end{equation} (4.27)

    As \ell \rightarrow + \infty , we easily check that

    \begin{equation} \begin{array}{ll} \int_{\Omega }u_{t}v_{j}dx-\int_{\Omega }u_{1}v_{j}dx+\int_0^t \int_{\Omega } \Delta u \Delta v_{j}dxds+ \int_0^t\int_{\Omega } \alpha (s)\chi v_{j}dxds = \int_0^t\int_{\Omega }f v_{j}dxds, \; j\geq 1. \end{array} \end{equation} (4.28)

    Hence, for v \in H^2_*(\Omega), we have

    \begin{equation} \begin{array}{ll} \int_{\Omega }u_{t}v dx-\int_{\Omega }u_{1}v dx+\int_0^t \int_{\Omega } \Delta u \Delta v dxds+ \int_0^t\int_{\Omega } \alpha (s) \chi v dxds = \int_0^t\int_{\Omega }f v dxds. \end{array} \end{equation} (4.29)

Since all terms define absolutely continuous functions, we get, for a.e. \; t\in[0, T] and for v \in H^2_*(\Omega), the following

\begin{equation} \begin{array}{ll} \frac{d}{dt}\int_{\Omega }u_{t}v dx+ \int_{\Omega } \Delta u \Delta v dx+ \int_{\Omega } \alpha (t) \chi v dx = \int_{\Omega }f v dx. \end{array} \end{equation} (4.30)

    This implies that

    \begin{equation} u_{tt}+\Delta^2 u+\alpha(t)\chi = f, \; \text{in}\; D'\left(\Omega \times (0,T)\right). \end{equation} (4.31)

    Using (H1), we see that

    \begin{equation} X^{\ell}: = \int_0^T \int_{\Omega } \alpha (s) \left(u_t^{\ell}-v \right) \left(g (u_t^{\ell})-g(v) \right) dx dt \geq 0,\; v \in L^{2}\left((0,T);L^{2}(\Omega)\right). \end{equation} (4.32)

    So, by using (4.6) and replacing u^{k} by u^{\ell} , we get

\begin{equation} \begin{aligned} X^{\ell}& = \int_0^T \int_{\Omega } f u_t^{\ell}dx dt+ \frac{1}{2} \int_{\Omega }\left( \vert u_1^{\ell} \vert^2+ \vert \Delta u_0^{\ell} \vert^2 \right) dx- \frac{1}{2} \int_{\Omega } \vert u_t^{\ell}(x,T) \vert^2 dx\\&- \frac{1}{2} \int_{\Omega } \vert \Delta u^{\ell}(x,T) \vert^2 dx- \int_0^T \int_{\Omega } \alpha (t)g (u_t^{\ell})vdxdt-\int_0^T \int_{\Omega } \alpha (t) g (v)\left(u_t^{\ell}-v\right)dxdt. \end{aligned} \end{equation} (4.33)

    Taking \ell \rightarrow +\infty , we obtain

\begin{equation} \begin{aligned} 0 &\leq \lim \sup\limits_{\ell} X^{\ell} \leq \int_0^T \int_{\Omega } f u_t dxdt+ \frac{1}{2} \int_{\Omega }\left( \vert u_1 \vert^2+ \vert \Delta u_0 \vert^2 \right) dx\\&- \frac{1}{2} \int_{\Omega } \vert u_t(x,T) \vert^2 dx- \frac{1}{2} \int_{\Omega } \vert \Delta u(x,T) \vert^2 dx- \int_0^T \int_{\Omega } \alpha (t) \chi vdxdt\\&-\int_0^T \int_{\Omega } \alpha (t) g (v)\left(u_t-v\right)dxdt. \end{aligned} \end{equation} (4.34)

    Replacing v by u_t in (4.30) and integrating over (0, T) , we obtain

    \begin{equation} \begin{aligned} \int_0^T \int_{\Omega } f u_t dxdt& = \frac{1}{2} \int_{\Omega }\left( \vert u_t(x,T) \vert^2 dx+ \vert \Delta u(x,T) \vert^2 \right)dx - \frac{1}{2} \int_{\Omega } \vert u_1 \vert^2 dx\\&- \frac{1}{2} \int_{\Omega } \vert \Delta u_0 \vert^2 dx+ \int_0^T \int_{\Omega }\alpha (t) \chi u_t dxdt. \end{aligned} \end{equation} (4.35)

Adding (4.34) and (4.35), we get

    \begin{equation} \begin{aligned} 0 \leq \lim \sup\limits_{\ell} X^{\ell} \leq \int_0^T \int_{\Omega }\alpha(t) \chi u_t dxdt- \int_0^T \int_{\Omega }\alpha(t) \chi v dxdt- \int_0^T \int_{\Omega }\alpha (t) g(v) (u_t-v) dxdt.\end{aligned} \end{equation} (4.36)

    This gives

    \begin{equation} \begin{aligned} \int_0^T \int_{\Omega }\alpha (t)\left(\chi-g(v)\right) (u_t-v)dxdt \geq 0,\; v\in L^2\left((0,T),L^2 (\Omega)\right).\end{aligned} \end{equation} (4.37)

    Hence,

    \begin{equation} \begin{aligned} \int_0^T \int_{\Omega }\alpha (t)\left(\chi-g(v)\right) (u_t-v)dxdt \geq 0,\; v\in L^2\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.38)

    Let v = \lambda w + u_t , where \lambda > 0 and w \in L^2\left(\Omega \times (0, T)\right) . Then, we get

    \begin{equation} \begin{aligned} -\lambda\int_0^T \int_{\Omega }\alpha (t)\left(\chi-g(\lambda w + u_t)\right) w dxdt \geq 0,\; w\in L^2\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.39)

Dividing by \lambda > 0 , we have

\begin{equation} \begin{aligned} \int_0^T \int_{\Omega }\alpha (t)\left(\chi-g(\lambda w + u_t)\right) w dxdt \leq 0,\; w\in L^2\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.40)

Letting \lambda \rightarrow 0^+ and using the continuity of g with respect to \lambda , we get

\begin{equation} \begin{aligned} \int_0^T \int_{\Omega }\alpha (t)\left(\chi-g(u_t)\right) w dxdt \leq 0,\; w\in L^2\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.41)

    Similarly, for \lambda < 0 , we get

\begin{equation} \begin{aligned} \int_0^T \int_{\Omega }\alpha (t) \left(\chi-g(u_t)\right) w dxdt \geq 0,\; w\in L^2\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.42)

    This implies that \chi = g(u_t) . Hence, (4.30) becomes

    \begin{equation} \begin{aligned} \int_{\Omega }\left(u_{tt}v +\Delta u \Delta v + \alpha (t) g(u_t) v\right) dx = \int_{\Omega } f v dx,\; v\in L^2\left((0,T); H^2_*(\Omega )\right).\end{aligned} \end{equation} (4.43)

This gives

    \begin{equation} \begin{aligned} u_{tt} +\Delta^2 u+ \alpha (t) g(u_t) = f ,\; \text{in}\; D'\left(\Omega \times (0,T)\right).\end{aligned} \end{equation} (4.44)

    To handle the initial conditions of problem (4.1), we first note that

    \begin{equation} \begin{aligned} u^{\ell}{\rightharpoonup} u \quad \text{weakly * }\text{in} \quad L^{\infty}(0,T;H^2_*(\Omega ))\\ u_{t}^{\ell}{\rightharpoonup} u_{t} \quad \text{weakly * in} \quad L^{\infty}(0,T;L^2(\Omega )). \end{aligned} \end{equation} (4.45)

Thus, using Lions' lemma and (4.6), we easily obtain u^{\ell} \rightarrow u \in C\left([0, T];L^2(\Omega)\right). Therefore, u^{\ell}(x, 0) makes sense and u^{\ell}(x, 0)\rightarrow u(x, 0) \in L^2(\Omega). Also, we see that

    u^{\ell}(x,0) = u_0^{\ell}\rightarrow u_0(x) \in H^2_*(\Omega ).

    Hence, u(x, 0) = u_0(x) . As in [34], let \phi\in C^{\infty}_{0}(0, T), and replacing u^{k} by u^{\ell} , we obtain from (4.6) and for any j \leq \ell

    \begin{equation} \left\{ \begin{array}{ll} -\int_0^T \int_{\Omega }u_{t}^{\ell}(x,t) v_{j}(x) \phi'(t) dxdt = -\int_0^T\int_{\Omega } \Delta u^{\ell}(x,t) \Delta v_{j}(x) \phi(t)dx dt\\- \int_0^T\int_{\Omega } \alpha (t) g (u_t^{\ell}) v_{j}(x)\phi(t)dxdt +\int_0^T\int_{\Omega }f(x,t)v_{j}(x)\phi(t) dxdt. \end{array}\right. \end{equation} (4.46)

    As \ell\to +\infty , we have for any \phi\in C^{\infty}_{0}((0, T)),

    \begin{equation} \left\{ \begin{array}{ll} -\int_0^T \int_{\Omega }u_{t}(x,t) v_{j}(x) \phi'(t) dxdt = -\int_0^T\int_{\Omega } \Delta u(x,t) \Delta v_{j}(x) \phi(t)dx dt\\- \int_0^T\int_{\Omega } \alpha (t) g (u_t) v_{j}(x)\phi(t)dxdt +\int_0^T\int_{\Omega }f(x,t)v_{j}(x)\phi(t) dxdt, \end{array}\right. \end{equation} (4.47)

    for all j\geq1. This implies that

    \begin{equation} \begin{array}{ll} -\int_0^T \int_{\Omega }u_{t}(x,t) v(x) \phi'(t) dxdt = \int_0^T\int_{\Omega } \left[-\Delta^2 u(x,t) - \alpha (t) g (u_t)+f(x,t)\right]v(x) \phi(t)dx dt, \end{array} \end{equation} (4.48)

    for all v \in H^2_*(\Omega) . This means that u_{tt}\in L^{\infty}((0, T);\mathcal{H}(\Omega)) and u solves the equation

    \begin{equation} \begin{aligned} u_{tt} +\Delta^2 u+ \alpha (t) g(u_t) = f. \end{aligned} \end{equation} (4.49)

    Thus

    u_{t}\in L^{\infty}((0,T);L^2(\Omega)), \; u_{tt}\in L^{\infty}((0,T);\mathcal{H}(\Omega)).

Consequently, u_t\in C((0, T);\mathcal{H}(\Omega)). So, u^{\ell}_{t}(x, 0) makes sense, and it follows that

    u^{\ell}_{t}(x,0)\to u_{t}(x,0)\text{ in }\mathcal{H}(\Omega)

    and since

    u^{\ell}_{t}(x,0) = u^{\ell}_{1}(x)\to u_{1}(x)\text{ in }L^{2}(\Omega),

    then

    u_{t}(x,0) = u_{1}(x).

    This ends the proof of Theorem 4.1.

    Now, we proceed to establish the local existence result for problem (1.1).

Theorem 4.2. Let (u_{0}, u_{1}) \in H^2_*(\Omega)\times L^{2}(\Omega) be given. Then problem (1.1) has a unique local weak solution

    u\in L^{\infty}\left([0,T),H^2_{*}(\Omega)\right), \quad u_t\in L^{\infty}\left([0,T),L^2(\Omega)\right), \quad u_{tt}\in L^{\infty}\left([0,T),\mathcal{H}(\Omega)\right).

    Remark 4.3. In this remark, we point out four cases regarding the solution of problem (1.1):

    1) If \beta = 0 , g is linear and (u_0, u_1)\in \left(H^4(\Omega)\cap H^2_{*}(\Omega)\right)\times H^2_{*}(\Omega) , then problem (1.1) has a unique classical solution

    u\in C^2\left([0,T),H^2_{*}(\Omega)\right), \quad u_t\in C^1\left([0,T),L^2(\Omega)\right), \quad u_{tt} \in C\left([0,T),\mathcal{H}(\Omega)\right).

    2) If \beta = 0 , g is linear and (u_0, u_1)\in H^2_{*}(\Omega)\times L^2(\Omega) , then problem (1.1) has a unique weak solution

    u\in C^1\left([0,T),H^2_{*}(\Omega)\right), \quad u_t\in C\left([0,T),L^2(\Omega)\right), \quad u_{tt} \in L^{\infty}\left([0,T),\mathcal{H}(\Omega)\right).

    3) If \beta > 0 or g is nonlinear and (u_0, u_1)\in H^2_{*}(\Omega)\times L^2(\Omega) , then problem (1.1) has a unique weak solution

    u\in L^{\infty}\left([0,T),H^2_{*}(\Omega)\right), \quad u_t\in L^{\infty}\left([0,T),L^2(\Omega)\right), \quad u_{tt} \in L^{\infty}\left([0,T),\mathcal{H}(\Omega)\right).

    4) If \beta > 0 or g is nonlinear and (u_0, u_1)\in \left(H^4(\Omega)\cap H^2_{*}(\Omega)\right)\times H^2_{*}(\Omega) , then problem (1.1) has a unique strong solution

u\in L^{\infty}\left([0,T),H^4(\Omega)\cap H^2_{*}(\Omega)\right), \quad u_t\in L^{\infty}\left([0,T),H^2_{*}(\Omega)\right), \quad u_{tt} \in L^{\infty}([0,T),L^2(\Omega)).

    Proof. To prove Theorem 4.2, we first let v \in L^{\infty}\left([0, T), H^2_{*}(\Omega)\right) and \widetilde{f}(v) = \vert v \vert^\beta v. Then, by the embedding Lemma 3.1, we have

    \begin{equation} \begin{aligned} \vert \vert \widetilde{f}(v) \vert \vert^2_2 = \int_{\Omega} \vert v \vert^{2(\beta+1)} dx < + \infty. \end{aligned} \end{equation} (4.50)

    Hence,

    \widetilde{f}(v)\in L^{\infty}([0,T),L^2(\Omega)) \subset L^2(\Omega \times (0,T)).

    Therefore, for each v \in L^{\infty}([0, T), H^2_{*}(\Omega)), there exists a unique solution

    u\in L^{\infty}([0,T),H^2_{*}(\Omega)), \quad u_t\in L^{\infty}([0,T),L^2(\Omega))

    satisfying the following nonlinear problem

    \begin{equation} \begin{aligned} \left\{ \begin{array}{lcr} u_{tt}+\Delta ^2 u +\alpha(t) g(u_t) = \widetilde{f}(v),\; \text{in} \;\Omega \times (0, T), \\ u(0,y,t) = u_{xx}(0,y,t) = u(\pi ,y,t) = u_{xx}(\pi ,y,t) = 0,\;(y,t)\in (-d, d)\times (0, T), \\ u_{yy}(x,\pm d,t)+\sigma u_{xx}(x,\pm d, t) = 0,\;(x,t)\in (0,\pi )\times (0, T), \\ u_{yyy}(x,\pm d,t)+(2-\sigma )u_{xxy}(x,\pm d, t) = 0,\;(x,t)\in (0,\pi )\times (0, T),\\ u(x,y,0) = u_0(x,y),\; u_t(x,y,0) = u_1(x,y),\; \text{ in } \;\Omega \times (0, T), \end{array} \right. \end{aligned} \end{equation} (4.51)

    Now, let

    W_{T} = \left\lbrace w \in L^{\infty}((0,T),H^2_*(\Omega ))/ w_{t} \in L^{\infty}((0,T),L^{2} (\Omega)) \right\rbrace,

    and define the map K : W_{T} \longrightarrow W_{T} by K(v) = u. We note that W_{T} is a Banach space with respect to the following norm

    || w||_{W_{T}} = || w||_{L^{\infty}((0,T),H^2_*(\Omega ))}+|| w_t||_{L^{\infty}((0,T),L^{2} (\Omega))}.

Multiplying (4.51) by u_t and integrating over \Omega \times (0, t) , we get, for all t \leq T,

    \begin{equation} \begin{aligned} &\frac{1}{2} \int_{\Omega} u_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta u \vert^2 dx+\int_0^t \int_{\Omega} \alpha (s) u_t g(u_t) dxds\\& = \frac{1}{2} \int_{\Omega} u_1^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta u_0 \vert^2 dx+ \int_0^t\int_{\Omega} \vert v \vert^\beta v u_t dxds. \end{aligned} \end{equation} (4.52)

    Using Young's inequality and the embedding Lemma 3.1, we have

    \begin{equation} \begin{aligned} \int_{\Omega} \vert v \vert^\beta v u_t dx &\leq \frac{\varepsilon}{4}\int_{\Omega} u_t^2dx+ \frac{4}{\varepsilon}\int_{\Omega} \vert v \vert^{2(\beta+1)}dx\\ & \leq \frac{\varepsilon}{4}\int_{\Omega} u_t^2dx+ \frac{4 C_e}{\varepsilon} \vert \vert v \vert\vert_{H^2_*}^{2(\beta+1)}. \end{aligned} \end{equation} (4.53)

    Thus, (4.52) becomes

    \begin{equation} \begin{aligned} \frac{1}{2} \int_{\Omega} u_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta u \vert^2 dx \leq \lambda_0 + \frac{\varepsilon T}{4}\sup\limits_{(0,T)}\int_{\Omega} u_t^2dx+\frac{C_e}{\varepsilon}\int_0^T \vert \vert v \vert\vert_{H^2_*}^{2(\beta+1)}dt, \end{aligned} \end{equation} (4.54)

    where \lambda_0 = \frac{1}{2} \vert \vert u_1\vert \vert_2^2+\frac{1}{2} \vert \vert \Delta u_0\vert \vert_2^2 and C_e is the embedding constant. Choosing \varepsilon such that \frac{\varepsilon T}{2} = \frac{1}{4}, we get

    ||u||_{W_{T}}^2 \leq \lambda +T b \vert \vert v \vert\vert_{W_{T}}^{2(\beta+1)}.

Suppose that \vert \vert v \vert\vert_{W_{T}} \leq M . Then, for M^2 > \lambda and T \leq T_0 < \frac{M^2-\lambda}{bM^{2(\beta+1)}} , we conclude that

    ||u||_{W_{T}}^2 \leq \lambda +T b M^{2(\beta+1)}\leq M^2.

    Therefore, we deduce that K:B \longrightarrow B, where

    B = \left\lbrace w \in L^{\infty}((0,T),H^2_*(\Omega ))/ w_{t} \in L^{\infty}((0,T),L^{2} (\Omega)); \vert \vert w \vert\vert_{W_{T}} \leq M\right\rbrace.

    Next, we prove, for T_0 (even smaller), K is a contraction. For this purpose, let u_1 = K(v_1) and u_2 = K(v_2) and set u = u_1-u_2, then u satisfies the following

    \begin{equation} \begin{aligned} u_{tt}+\Delta ^2 u +\alpha(t) g(u_1{_t})-\alpha(t) g(u_2{_t}) = \vert v_1 \vert^\beta v_1-\vert v_2 \vert^\beta v_2. \end{aligned} \end{equation} (4.55)

    Multiplying (4.55) by u_t and integrating over \Omega\times (0, t) we get, for all t \leq T,

    \begin{equation} \begin{aligned} \frac{1}{2} \int_{\Omega} u_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta u \vert^2 dx &+\int_0^t \int_{\Omega} \left(\alpha(t) g(u_1{_t})-\alpha(t) g(u_2{_t})\right) \left(u_1{_t}- u_2{_t}\right)dxds\\& = \int_0^t \int_{\Omega}\left(\widetilde{f}(v_1)-\widetilde{f}(v_2)\right)u_t dxds. \end{aligned} \end{equation} (4.56)

    Using (3.5) and (H2), we have

    \begin{equation} \begin{aligned} \frac{1}{2} \int_{\Omega} u_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta u \vert^2 dx \leq \int_0^t \int_{\Omega}\left(\widetilde{f}(v_1)-\widetilde{f}(v_2)\right)u_t dxds. \end{aligned} \end{equation} (4.57)

    Now, we evaluate

    \begin{equation} \begin{aligned} \Lambda: = \int_{\Omega}\vert \widetilde{f}(v_1)-\widetilde{f}(v_2)\vert \vert u_t \vert dx = \int_{\Omega}\vert \widetilde{f}'(\xi)\vert \vert v \vert\vert u_t \vert dx, \end{aligned} \end{equation} (4.58)

    where v = v_1-v_2 , \xi = \tau v_1 +(1-\tau)v_2 , 0\leq \tau \leq1 , and \widetilde{f}'(\xi) = (\beta+1) \vert \xi \vert^\beta .

    Young's inequality implies

\begin{equation} \begin{aligned} \Lambda & \leq \frac{\delta}{2} \int_{\Omega} u_t^2dx+ \frac{2}{\delta} \int_{\Omega}\vert \widetilde{f}'(\xi)\vert^2 \vert v \vert^2 dx \leq \frac{\delta}{2} \int_{\Omega} u_t^2dx+ \frac{2 (\beta+1)^2}{\delta} \int_{\Omega}\vert \tau v_1 +(1-\tau)v_2 \vert^{2\beta} \vert v \vert^2 dx\\ & \leq \frac{\delta}{2} \int_{\Omega} u_t^2dx +C_\delta \left(\int_{\Omega}\vert v \vert^{\frac{2n}{n-2}}dx \right)^{\frac{n-2}{n}} \left(\int_{\Omega}\vert \tau v_1 +(1-\tau)v_2 \vert^{n\beta}dx \right)^{\frac{2}{n}}. \end{aligned} \end{equation} (4.59)

    Using the embedding Lemma 3.1, we arrive at

\begin{equation} \begin{aligned} \Lambda &\leq \frac{\delta}{2} \int_{\Omega} u_t^2dx+C_\delta C_e \vert \vert v \vert\vert^2_{H^2_*} \left(\vert \vert v_1 \vert\vert^{2\beta}_{H^2_*}+\vert \vert v_2 \vert\vert^{2\beta}_{H^2_*} \right)\\& \leq \frac{\delta}{2} \int_{\Omega} u_t^2dx+4C_\delta C_e M^{2\beta}\vert \vert v \vert\vert^{2}_{H^2_*}. \end{aligned} \end{equation} (4.60)

    Therefore, (4.57) takes the form

\begin{equation} \begin{aligned} \frac{1}{2} \vert \vert u \vert\vert^2_{W_{T}} \leq \frac{\delta T_0}{2} \vert \vert u \vert\vert^2_{W_{T}}+C_\delta M^{2\beta}T_0 \vert \vert v \vert\vert^{2}_{W_{T}}. \end{aligned} \end{equation} (4.61)

    Choosing \delta sufficiently small, we see that

\begin{equation} \begin{aligned} \vert\vert u \vert \vert^2_{W_{T}} \leq 4C_\delta M^{2\beta}T_0 \vert \vert v \vert\vert^{2}_{W_{T}} = \gamma_0 T_0 \vert \vert v \vert\vert^{2}_{W_{T}}. \end{aligned} \end{equation} (4.62)

    Taking T_0 small enough so that,

\begin{equation} \begin{aligned} \vert \vert u \vert\vert^2_{W_{T}} \leq \nu \vert \vert v \vert\vert^{2}_{W_{T}},\ \text{for some}\ 0 < \nu < 1. \end{aligned} \end{equation} (4.63)

    Thus, K is a contraction. The Banach fixed point theorem implies the existence of a unique u \in B satisfying K(u) = u. Thus, u is a local solution of (1.1).

Uniqueness: Suppose that problem (1.1) has two weak solutions u and v, and set w = u-v. Then w satisfies, for all t\in \left(0, T\right),

\begin{equation} \begin{aligned} \left\{ \begin{array}{lcr} w_{tt}+\Delta ^2 w +\alpha(t) g(u_t)-\alpha(t) g(v_t) = u \vert u\vert^\beta -v \vert v\vert^\beta,\\ w(0,y,t) = w_{xx}(0,y,t) = w(\pi ,y,t) = w_{xx}(\pi ,y,t) = 0,\;(y,t)\in (-d, d)\times (0, T),\\ w(x,0) = w_t(x,0) = 0,\; \text{ in }\; \Omega. \end{array} \right. \end{aligned} \end{equation} (4.64)

    Multiplying (4.64) by w_t and integrating over \Omega \times (0, t) , we obtain

    \begin{equation} \begin{aligned} &\frac{1}{2} \int_{\Omega} w_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta w \vert^2 dx+\int_0^t \int_{\Omega} \left( \alpha(t)g(u_t)- \alpha(t)g(v_t)\right) \left(u_t-v_t\right)dxds\\& = \int_0^t\int_{\Omega} \left(u \vert u\vert^\beta -v \vert v\vert^\beta\right) w_t dxds. \end{aligned} \end{equation} (4.65)

    Using (3.5) and (H2) implies that

    \begin{equation} \begin{aligned} \frac{1}{2} \int_{\Omega} w_t^2dx + \frac{1}{2} \int_{\Omega} \vert \Delta w \vert^2 dx \leq \int_0^t\int_{\Omega} \left(u \vert u\vert^\beta -v \vert v\vert^\beta\right) w_t dxds. \end{aligned} \end{equation} (4.66)

Repeating the estimates above, we obtain

    \begin{equation} \begin{aligned} \int_{\Omega} \left(w_t^2dx + \vert \Delta w \vert^2\right) dx = 0. \end{aligned} \end{equation} (4.67)

    This gives w \equiv 0. The proof of the uniqueness is completed.

    In this section, we prove that problem (1.1) has a global solution. For this purpose, we introduce the following functionals. The energy functional associated with problem (1.1) is

    \begin{equation} E(t) = \frac{1}{2}\left(\| u_t\|_2^2+\| u\|_{H^2_*(\Omega )}^2\right)-\frac{1}{\beta+2}\|u\|_{\beta+2}^{\beta+2}. \end{equation} (5.1)

    Direct differentiation of (5.1), using (1.1), leads to

    \begin{equation} E^{\prime}(t) = -\alpha(t) \int_{\Omega}u_t g(u_t)dx \le 0. \end{equation} (5.2)
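For completeness, here is the formal computation behind (5.2): multiplying (1.1) by u_t, integrating over \Omega and using the boundary conditions (which allow us to write (u,u_t)_{H^2_*(\Omega)} = \int_{\Omega}\Delta^2 u\,u_t\,dx), we obtain

E^{\prime}(t) = \int_{\Omega}u_t u_{tt}\,dx+(u,u_t)_{H^2_*(\Omega)}-\int_{\Omega}\vert u\vert^{\beta}u\,u_t\,dx = \int_{\Omega}u_t\left(u_{tt}+\Delta^2 u-u\vert u\vert^{\beta}\right)dx = -\alpha(t)\int_{\Omega}u_t\,g(u_t)\,dx.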
We also introduce the functionals
\begin{equation} J(t) = \frac{1}{2}\| u\|_{H^2_*(\Omega )}^2-\frac{1}{\beta+2}\|u\|_{\beta+2}^{\beta+2} \end{equation} (5.3)

    and

    \begin{equation} I(t) = \| u\|_{H^2_*(\Omega )}^2-\|u\|_{\beta+2}^{\beta+2}. \end{equation} (5.4)

    Clearly, we have

    \begin{equation} E(t) = J(t)+\frac{1}{2}\|u_t\|_2^2. \end{equation} (5.5)

    Lemma 5.1. Suppose that (H1) and (H2) hold and (u_0, u_1)\in H^2_{*}(\Omega)\times L^2(\Omega) , such that

    \begin{equation} 0 < \gamma = C_e^{\beta+2} \left(\frac{2(\beta+2)}{\beta}E(0)\right)^{\frac{\beta}{2}} < 1 , \quad I(u_0) > 0, \end{equation} (5.6)

    then I(u(t)) > 0, \; \forall t > 0 .

Proof. Since I(u_0) > 0 , there exists (by continuity) T_m < T such that I(u(t))\ge 0 for all t\in [0, T_m] , which gives

    \begin{equation} \begin{aligned} J(t)& = \frac{1}{2}\| u\|_{H^2_*(\Omega )}^2-\frac{1}{\beta+2}\|u\|_{\beta+2}^{\beta+2}\\ & = \frac{\beta}{2(\beta+2)} \| u\|_{H^2_*(\Omega )}^2+\frac{1}{\beta+2}I(t)\\ &\ge \frac{\beta}{2(\beta+2)} \| u\|_{H^2_*(\Omega )}^2. \end{aligned} \end{equation} (5.7)

    By using (5.2), (5.5) and (5.7), we have

    \begin{equation} \begin{aligned} \| u\|_{H^2_*(\Omega )}^2 \le \frac{2(\beta+2)}{\beta}J(t)\le \frac{2(\beta+2)}{\beta} E(t)\le \frac{2(\beta+2)}{\beta} E(0), \quad \forall t\in [0,T_m]. \end{aligned} \end{equation} (5.8)

    The embedding theorem, (5.6) and (5.8) give, \forall t\in [0, T_m],

    \begin{equation} \|u\|_{\beta+2}^{\beta+2}\le C_e^{\beta+2} \| u\|_{H^2_*(\Omega )}^{\beta+2}\le C_e^{\beta+2} \| u\|_{H^2_*(\Omega )}^{\beta} \| u\|_{H^2_*(\Omega )}^{2}\le \gamma \| u\|_{H^2_*(\Omega )}^2 < \| u\|_{H^2_*(\Omega )}^{2}. \end{equation} (5.9)

    Therefore,

    I(t) = \| u\|_{H^2_*(\Omega )}^2-\|u\|_{\beta+2}^{\beta+2} > 0, \quad \forall t\in [0,T_m].

    By repeating this procedure, and using the fact that

    \lim\limits_{t\to T_m} {C_e^{\beta+2}\left(\frac{2(\beta+2)}{\beta}E(t)\right)^{\frac{\beta}{2}} }\le \gamma < 1,

    T_m is extended to T.

    Remark 5.2. The restriction (5.6) on the initial data will guarantee the nonnegativeness of E(t).

Proposition 5.3. Suppose that (H1) and (H2) hold. Let (u_0, u_1)\in {H^2_*(\Omega)}\times L^2(\Omega) be given, satisfying (5.6). Then the solution of (1.1) is global and bounded.

    Proof. It suffices to show that \| u\|_{H^2_*(\Omega)}^{2}+\|u_t\|_2^2 is bounded independently of t . To achieve this, we use (5.2), (5.4) and (5.5) to get

\begin{equation} \begin{aligned} E(0)\ge E(t)& = J(t)+\frac{1}{2}\|u_t\|_2^2\\ &\ge \frac{\beta}{2(\beta+2)}\| u\|_{H^2_*(\Omega )}^{2}+\frac{1}{2}\|u_t\|_2^2+\frac{1}{\beta+2}I(t)\\ &\ge \frac{\beta}{2(\beta+2)}\| u\|_{H^2_*(\Omega )}^{2}+\frac{1}{2}\|u_t\|_2^2, \end{aligned} \end{equation} (5.10)

    since I(t) is positive. Therefore

    \| u\|_{H^2_*(\Omega )}^{2}+\|u_t\|_2^2\le CE(0),

    where C is a positive constant, which depends only on \beta .

    In this section, we state and prove our stability result. For this purpose, we establish some lemmas.

Lemma 6.1. (Case: G is linear) Let u be the solution of (1.1). Then, for T > S \geq 0 , the energy functional satisfies

    \begin{equation} \begin{aligned} &\int_S^T \alpha(t) E(t) dt \leq c E(S). \end{aligned} \end{equation} (6.1)

    Proof. We multiply (1.1) by \alpha u and integrate over \Omega \times (S, T) to get

    \begin{equation} \begin{aligned} 0 & = \int_{S}^{T}\alpha (t)\int_{\Omega }\left( uu_{tt}+u\Delta^2 u+\alpha (t)ug(u_t)-\vert u\vert^{\beta+2}\right) dxdt \\ & = \int_{S}^{T}\alpha (t)\int_{\Omega }\left( \left( uu_{t}\right) _{t}-u_{t}^{2}+\alpha (t)ug(u_t)-\vert u\vert^{\beta+2}\right) dxdt+\int_{S}^{T}\alpha (t)\| u\|_{H_*^2(\Omega)}^{2} dt\\ & = \int_{S}^{T}\alpha (t)\frac{d}{dt}\left( \int_{\Omega }uu_{t}dx\right) dt+\int_{S}^{T}\alpha (t)\int_{\Omega } u_{t}^{2} dxdt\\ & \quad +\int_{S}^{T}\alpha (t)\| u\|_{H_*^2(\Omega)}^{2} dt-2\int_{S}^{T}\alpha (t)\int_{\Omega }u_{t}^{2}dxdt\\ & \quad +\int_{S}^{T}\alpha ^{2}(t)\int_{\Omega }ug(u_t)dxdt-\int_{S}^{T}\alpha (t) \|u\|_{\beta+2}^{\beta+2}dt. \end{aligned} \end{equation} (6.2)

    Adding and subtracting the following terms

    \gamma \int_{S}^{T}\alpha(t) \| u\|_{H_*^2(\Omega)}^{2}dt+(1+\gamma)\int_{S}^{T}\alpha(t)\|u_t\|_2^2dt,\text{ where }\gamma \text{ is defined in (5.6)},

    to (6.2), and recalling (5.9), we arrive at

    \begin{equation} \begin{aligned} &\int_{S}^{T}\alpha (t)\frac{d}{dt}\left( \int_{\Omega }uu_{t}dx\right) dt+(1-\gamma)\int_{S}^{T}\alpha(t) \left( \| u\|_{H_*^2(\Omega)}^{2}+\|u_t\|_2^2\right)dt\\ & \quad -(2-\gamma)\int_{S}^{T}\alpha (t)\int_{\Omega }u_{t}^{2}dxdt+\int_{S}^{T}\alpha ^{2}(t)\int_{\Omega }ug(u_t)dxdt\\ & \quad = -\int_{S}^{T}\alpha(t)\left(\gamma \| u\|_{H_*^2(\Omega)}^{2}-\|u\|_{\beta+2}^{\beta+2}\right)dt\le 0. \end{aligned} \end{equation} (6.3)

    Integrating the first term of (6.3) by parts and using (5.1), then (6.3) becomes

    \begin{equation} \begin{aligned} (1-\gamma)\int_{S}^{T}\alpha Edt&\le (1-\gamma)\int_{S}^{T}\alpha \left( \| u\|_{H_*^2(\Omega)}^{2}+\|u_t\|_2^2\right)dt\\ &\le -\left[ \alpha \int_{\Omega }uu_{t}dx \right] _{S}^{T}+\int_{S}^{T}\alpha ^{\prime }\int_{\Omega }uu_{t}dxdt\\ & \quad +(2-\gamma)\int_{S}^{T}\alpha \int_{\Omega }u_{t}^{2}dxdt-\int_{S}^{T}\alpha ^{2} \int_{\Omega }ug(u_t)dxdt. \end{aligned} \end{equation} (6.4)

    Now, we estimate the terms in the right-hand side of (6.4) as follows:

    1) Estimate for -\left[ \alpha \int_{\Omega }uu_{t}dx \right] _{S}^{T}.

    Using Lemma 3.1 and Young's inequality, we obtain

\begin{equation} \int_{\Omega }uu_{t}dx\leq \frac{1}{2}\int_{\Omega }\left( u^{2}+u_{t}^{2}\right) dx\leq c\left(\|u\|^2_{H^2_*(\Omega)}+\|u_{t}\|^2_2\right) \leq cE(t), \end{equation} (6.5)

    which implies that

    \begin{equation} -\left[ \alpha \int_{\Omega }uu_{t}dx \right] _{S}^{T}\leq c[-\alpha (T)E(T)+\alpha (S)E(S)]\leq c\alpha (S) E(S)\le cE(S) . \end{equation} (6.6)

    2) Estimate for \int_{S}^{T}\alpha ^{\prime }\int_{\Omega }uu_{t}dxdt .

    The use of (6.5) and (H2) leads to

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha ^{\prime }\int_{\Omega }uu_{t}dxdt &\leq c\left\vert \int_{S}^{T}\alpha ^{\prime }Edt\right\vert \leq cE(S)\left\vert \int_{S}^{T}\alpha ^{\prime }dt\right\vert \le cE(S). \end{aligned} \end{equation} (6.7)

    3) Estimate for \int_{S}^{T}\alpha \left(\int_{\Omega}u_t^2dx\right)dt.

    Using (H1), (5.2) and recalling that G is linear, we have

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha \left(\int_{\Omega}u_t^2dx\right)dt&\le \frac{1}{c_1} \int_{S}^{T}\alpha(t)\int_{\Omega}u_tg(u_t)dxdt\\ &\le -\int_{S}^{T}cE^{\prime}(t)dt\\ &\le cE(S). \end{aligned} \end{equation} (6.8)

    4) Estimate for -\int_{S}^{T}\alpha ^{2}(t) \int_{\Omega }ug(u_t)dxdt.

Using (H1), Lemma 3.1, Hölder's inequality and recalling that G is linear, we obtain

    \begin{equation} \begin{aligned} \alpha^2(t)\int_{\Omega}ug(u_t)dx &\le \alpha^2(t)\left(\int_{\Omega}\vert u\vert^{2}dx\right)^{\frac{1}{2}} \left(\int_{\Omega}\vert g(u_t)\vert^{2}dx\right)^{\frac{1}{2}}\\ &\le \alpha^{\frac{3}{2}} (t) \|u\|_{H^2_*(\Omega)} \left(\alpha(t)\int_{\Omega}u_t g(u_t) dx\right)^{\frac{1}{2}}\\ &\le c \alpha(t) E^{\frac{1}{2}}(t)\left(-E^\prime(t)\right)^{\frac{1}{2}}. \end{aligned} \end{equation} (6.9)

Applying Young's inequality to E^{\frac{1}{2}}(t) (-E'(t))^{\frac{1}{2}} with p = 2 and p^* = 2 , we get

    \begin{equation} \begin{aligned} \alpha^2(t)\int_{\Omega}ug(u_t)dx &\le c\alpha(t)\left(\varepsilon E(t) -C_{\varepsilon} E^{\prime}(t)\right)\\ &\le c\varepsilon \alpha E(t)-C_{\varepsilon}E^{\prime}(t), \end{aligned} \end{equation} (6.10)

    which implies that

    \begin{equation} \begin{aligned} &\int_{S}^{T}\alpha^2 (t) \left(\int_{\Omega}(-ug(u_t))dx\right)dt\\ & \quad \le c\varepsilon \int_{S}^{T}\alpha(t) E(t) dt+C_{\varepsilon}E(S). \end{aligned} \end{equation} (6.11)

    Combining the above estimates and taking \varepsilon small enough, we get (6.1).

Lemma 6.2. (Case: G is nonlinear) Let u be the solution of (1.1). Then, for T > S \geq 0 , the energy functional satisfies

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t) \tilde{\phi}\left(E(t)\right)dt\le c \tilde{\phi}(E(S))+c\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega}\left(\vert u_t\vert^2+\vert u g(u_t)\vert \right)dxdt, \end{aligned} \end{equation} (6.12)

    where \tilde{\phi}:\mathbb{R}^+ \to \mathbb{R}^+ is any convex, increasing and of class C^1[0, \infty) function such that \tilde{\phi}(0) = 0 .

    Proof. We multiply (1.1) by \alpha(t) \frac{\tilde{\phi}(E)}{E}u and integrate over \Omega \times (S, T) to get

    \begin{equation} \begin{aligned} 0 & = \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega }\left( \left( uu_{t}\right) _{t}-u_{t}^{2}+\alpha (t)ug(u_t)-\vert u\vert^{\beta+2}\right) dxdt\\ & \quad +\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\| u\|_{H_*^2(\Omega)}^{2} dt\\ & = \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\frac{d}{dt}\left( \int_{\Omega }uu_{t}dx\right) dt+\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\| u\|_{H_*^2(\Omega)}^{2} dt\\ & \quad +\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega } u_{t}^{2} dxdt-2\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega } u_{t}^{2} dxdt\\ & \quad +\int_{S}^{T}\alpha ^{2}(t)\frac{\tilde{\phi}(E)}{E}\int_{\Omega }ug(u_t)dxdt-\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\|u\|_{\beta+2}^{\beta+2}. \end{aligned} \end{equation} (6.13)

    Adding and subtracting to (6.13) the following terms

    \gamma \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\| u\|_{H_*^2(\Omega)}^{2} dt+(1+\gamma)\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\|u_t\|_2^2dt, \text{ where }\gamma \text{ is defined in (5.6)},

    we arrive at

    \begin{equation} \begin{aligned} &(1-\gamma)\int_{S}^{T}\alpha(t) \tilde{\phi}(E)dt\le -\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\frac{d}{dt}\left( \int_{\Omega }uu_{t}dx\right) dt\\ & \quad +(2-\gamma)\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega } u_{t}^{2} dxdt-\int_{S}^{T}\alpha ^{2}(t)\frac{\tilde{\phi}(E)}{E}\int_{\Omega }ug(u_t)dxdt\\ & \quad -\int_{S}^{T}\alpha \frac{\tilde{\phi}(E)}{E} \left(\gamma \| u\|_{H_*^2(\Omega)}^{2}-\|u\|_{\beta+2}^{\beta+2}\right). \end{aligned} \end{equation} (6.14)

    Using (5.9), it is easy to deduce that -\int_{S}^{T}\alpha \frac{\tilde{\phi}(E)}{E} \left(\gamma \| u\|_{H_*^2(\Omega)}^{2}-\|u\|_{\beta+2}^{\beta+2}\right)dt\le 0.

    Integrating by parts in the first term, in the right-hand side of (6.14), we get

    \begin{equation} \begin{aligned} (1-\gamma)\int_{S}^{T}\alpha(t) \tilde{\phi}(E)dt\le &-\left[ \alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega }uu_{t}dx \right] _{S}^{T}\\ &+\int_{S}^{T}\int_{\Omega}u_t\left(\alpha^{\prime}(t) \frac{\tilde{\phi}(E)}{E}u+\alpha(t)\left( \frac{\tilde{\phi}(E)}{E}\right)^{\prime}u\right)dxdt\\ &+(2-\gamma)\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega } u_{t}^{2} dxdt\\ &-\int_{S}^{T} \alpha^2(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega }ug(u_t)dxdt. \end{aligned} \end{equation} (6.15)

Using the Cauchy-Schwarz inequality, Lemmas 3.1 and 5.1, we obtain

    \begin{equation} \begin{aligned} \int_{\Omega}uu_t dx &\le \left(\int_{\Omega}\vert u\vert ^2dx\right)^{\frac{1}{2}} \left(\int_{\Omega}\vert u_t\vert ^2dx\right)^{\frac{1}{2}}\\ &\le c\|u\|_{H_*^2(\Omega)}\|u_t\|_2\le cE(t). \end{aligned} \end{equation} (6.16)

    Using (6.16), the properties of \alpha(t) and the fact that the function s\to \frac{\tilde{\phi}(s)}{s} is non-decreasing and E is non-increasing, we have

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha^{\prime}(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega}uu_t dxdt &\le c\int_{S}^{T}\alpha^{\prime}(t)\frac{\tilde{\phi}(E)}{E}Edt\\ &\le c \tilde{\phi}(E(S))\int_{S}^{T}\alpha^{\prime}(t) dt\le c\tilde{\phi}(E(S)). \end{aligned} \end{equation} (6.17)

    Similarly, we get

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t)\left( \frac{\tilde{\phi}(E)}{E}\right)^{\prime}\int_{\Omega}uu_tdxdt &\le E(S) \int_{S}^{T}\alpha(t)\left( \frac{\tilde{\phi}(E)}{E}\right)^{\prime}dt\\ &\le E(S) \left[\alpha(t) \frac{\tilde{\phi}(E)}{E}\right]_{S}^{T}-E(S)\int_{S}^{T}\alpha^{\prime}(t) \frac{\tilde{\phi}(E)}{E}dt\\ &\le E(S) \left(\alpha(T) \frac{\tilde{\phi}(E(T))}{E(T)}-\alpha(S) \frac{\tilde{\phi}(E(S))}{E(S)}\right)\\ & \quad - E(S)\frac{\tilde{\phi}(E(S))}{E(S)}\int_{S}^{T}\alpha^{\prime}(t)dt\\ &\le E(S) \alpha(T) \frac{\tilde{\phi}(E(T))}{E(T)}- \tilde{\phi}(E(S))\left(\alpha(T)-\alpha(S)\right)\\ &\le E(S) \alpha(S) \frac{\tilde{\phi}(E(S))}{E(S)}+\tilde{\phi}(E(S))\alpha(S)\le c\tilde{\phi}(E(S)). \end{aligned} \end{equation} (6.18)

    A combination of (6.15)–(6.18) leads to (6.12).

    In order to finalize the proof of our result, we let

    \tilde{\phi}(s) = 2\varepsilon_0 s G^{\prime}(\varepsilon^2_0 s), \text{ and} \quad G_1(s) = G(s^2),

    where \varepsilon_0 > 0 is small enough and G^* and G_1^* denote the dual functions of the convex functions G and G_1 respectively in the sense of Young (see, Arnold [33], pp. 64).

Lemma 6.3. Suppose that G is nonlinear. Then the following estimates

    \begin{equation} G^{*}\left(\frac{\tilde{\phi}(s)}{s} \right)\le \frac{\tilde{\phi}(s)}{s}\left( G^{\prime}\right)^{-1}\left( \frac{\tilde{\phi}(s)}{s}\right) \end{equation} (6.19)

    and

    \begin{equation} G_1^{*}\left(\frac{\tilde{\phi}(s)}{\sqrt{s}}\right)\le \varepsilon_0 \tilde{\phi}(\sqrt{s}). \end{equation} (6.20)

    hold, where \tilde{\phi} is defined earlier in Lemma 6.2.

    Proof. Since G^* and G_1^* are the dual functions of the convex functions G and G_1 respectively, then

    \begin{equation} \begin{aligned} G^*(s) = s(G^{\prime})^{-1}(s)-G\left[(G^{\prime})^{-1}(s)\right]\le s(G^{\prime})^{-1}(s) \end{aligned} \end{equation} (6.21)

    and

    \begin{equation} \begin{aligned} G_1^*(s) = s(G_1^{\prime})^{-1}(s)-G_1\left[(G_1^{\prime})^{-1}(s)\right]\le s(G_1^{\prime})^{-1}(s). \end{aligned} \end{equation} (6.22)

    Using (6.21) and the definition of \tilde{\phi} , we obtain (6.19). For the proof of (6.20), we use (6.22) and the definitions of G_1 and \tilde{\phi} to obtain

    \begin{equation} \begin{aligned} \frac{\tilde{\phi}(s)}{\sqrt{s}}(G_1^{\prime})^{-1}\left( \frac{\tilde{\phi}(s)}{\sqrt{s}}\right)&\le 2\varepsilon_0 \sqrt{s} G^{\prime}(\varepsilon^2_0 s)(G_1^{\prime})^{-1}\left(2\varepsilon_0 \sqrt{s} G^{\prime}(\varepsilon^2_0 s)\right)\\ & = 2\varepsilon_0 \sqrt{s} G^{\prime}(\varepsilon^2_0 s)(G_1^{\prime})^{-1}\left( G_1^{\prime}(\varepsilon_0 \sqrt{s}) \right)\\ & = 2\varepsilon^2_0 s G^{\prime}(\varepsilon^2_0 s)\\ & = \varepsilon_0 \tilde{\phi}(\sqrt{s}). \end{aligned} \end{equation} (6.23)

    Now, we state and prove our main decay results.

    Theorem 6.4. Let (u_0, u_1)\in H^2_{*}(\Omega)\times L^2(\Omega) . Assume that (H1) and (H2) hold. Then there exist positive constants k and c such that, for t large, the solution of (1.1) satisfies

    \begin{equation} \begin{aligned} E(t)\le ke^{-c\int_{0}^{t}\alpha (s)ds},\qquad \qquad \qquad \mathit{\text{if G is linear,}} \end{aligned} \end{equation} (6.24)
    \begin{equation} \begin{aligned} E(t)\le \psi^{-1}\left(h(\tilde{\alpha}(t))+\psi\left(E(0)\right)\right),\qquad \mathit{\text{if G is nonlinear,}} \end{aligned} \end{equation} (6.25)

    where

    \tilde{\alpha}(t) = \int_{0}^{t}\alpha(s)ds,\quad \psi(t) = \int_{t}^{1}\frac{1}{\chi(s)}ds,\quad\text{and}\quad\chi(s) = 2\varepsilon_0 c s G^{\prime}(\varepsilon^2_0 s)

    and

    \begin{equation*} \begin{aligned} \begin{cases} h(t) = 0, \quad 0\le t \le \frac{E(0)}{\chi(E(0))},\\\\ h^{-1}(t) = t+\frac{\psi^{-1}\left(t+\psi(E(0))\right)}{\chi\left(\psi^{-1}(t+\psi(E(0)))\right)}, \quad t > 0. \end{cases} \end{aligned} \end{equation*}

    Proof. To establish (6.24), we use (6.1) and Lemma 3.5 with \gamma(t) = \int_{0}^{t}\alpha(s)ds , and the result follows. For the proof of (6.25), we re-estimate the terms of (6.12) as follows. Consider the partition of \Omega :

    \begin{equation*} \Omega_1 = \{x\in \Omega: \vert u_t\vert \ge \varepsilon_1\}, \quad \Omega_2 = \{x\in \Omega: \vert u_t\vert < \varepsilon_1\}. \end{equation*}

    So,

    \begin{equation*} \begin{aligned} \int_{S}^{T}\alpha(t)\frac{\tilde{\phi}(E)}{E}&\int_{\Omega_1}\left(\vert u_t\vert ^{2}+\vert u g(u_t)\vert \right)dxdt\\ & = \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega_1}\vert u_t\vert ^{2}dxdt+\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega_1}\vert u g(u_t)\vert dxdt\\ &: = I_1+I_2. \end{aligned} \end{equation*}

    Using the definition of \Omega_1 , (3.4) and (5.2), we have

    \begin{equation} \begin{aligned} I_1&\le c\int_{S}^{T}\alpha(t)\frac{\tilde{\phi}(E)}{E}\int_{\Omega_1}u_t g(u_t)dxdt\\ &\le c\int_{S}^{T}\frac{\tilde{\phi}(E)}{E}\left(-E^{\prime}(t)\right)dt\le c\tilde{\phi}(E(S)). \end{aligned} \end{equation} (6.26)

    After applying Hölder's and Young's inequalities and Lemma 3.1, we obtain, for any \varepsilon > 0 ,

    \begin{equation} \begin{aligned} I_2&\le \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\left(\int_{\Omega_1}\vert u\vert^{2}dx\right)^{\frac{1}{2}}\left( \int_{\Omega_1}\vert g(u_t) \vert^{2}dx \right)^{\frac{1}{2}}dt\\ &\le \varepsilon \int_{S}^{T}\alpha(t)\frac{\tilde{\phi}^{2}(E)}{E^{2}}\|u\|^{2}_{H^2_*(\Omega)}dt+c(\varepsilon)\int_{S}^{T}\alpha(t)\int_{\Omega_1}\vert g(u_t)\vert^{2}dxdt. \end{aligned} \end{equation} (6.27)

    The definition of \Omega_1 , (3.4), (5.1), (5.2) and (6.27) lead to

    \begin{equation} \begin{aligned} I_2&\le \varepsilon \int_{S}^{T}\alpha(t)\frac{\tilde{\phi}^{2}(E)}{E}dt+c(\varepsilon)\int_{S}^{T}\alpha(t)\int_{\Omega_1}u_t g(u_t)dxdt\\ &\le\varepsilon \int_{S}^{T}\alpha(t)\frac{\tilde{\phi}^{2}(E)}{E}dt+c(\varepsilon)E(S). \end{aligned} \end{equation} (6.28)

    Using the definition of \tilde{\phi} and the convexity of G , (6.28) becomes

    \begin{equation} \begin{aligned} &I_2\le \varepsilon \int_{S}^{T}\alpha(t)\frac{\tilde{\phi}^{2}(E)}{E}dt+cE(S)\\ & = 2\varepsilon \varepsilon_0 \int_{S}^{T}\alpha(t) \tilde{\phi}(E) G^{\prime}\left(\varepsilon_0^2E(t)\right)dt+cE(S)\\ &\le 2\varepsilon \varepsilon_0 \int_{S}^{T}\alpha(t) \tilde{\phi}(E) G^{\prime}\left(\varepsilon_0^2E(0)\right)dt+cE(S)\\ &\le 2c\varepsilon \varepsilon_0 \int_{S}^{T}\alpha(t) \tilde{\phi}(E)dt+cE(S). \end{aligned} \end{equation} (6.29)

    Combining (6.12), (6.26) and (6.29) and choosing \varepsilon small enough, we obtain

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t) \tilde{\phi}(E)dt \le &c\tilde{\phi}(E(S))+cE(S)+c\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega_2}\left(\vert u_t\vert^2+\vert u g(u_t)\vert\right)dxdt. \end{aligned} \end{equation} (6.30)

    Using Young's inequality and Jensen's inequality, together with (3.3), (3.4) and (5.1), we get

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t) &\frac{\tilde{\phi}(E)}{E}\int_{\Omega_2}\left(\vert u_t\vert^2+\vert u g(u_t)\vert\right)dxdt\le \int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\int_{\Omega_2}G^{-1}\left( u_t g(u_t)\right)dxdt\\ & \quad +\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\|u\|_{H^2_*(\Omega)}\left(\int_{\Omega_2}G^{-1}(u_tg(u_t))dx\right)^{\frac{1}{2}}dt\\ &\le \vert \Omega \vert\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right)dt\\ & \quad +\int_{S}^{T}\alpha(t) \frac{\tilde{\phi}(E)}{E}\sqrt{E}\sqrt{\vert \Omega\vert G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right)}dt. \end{aligned} \end{equation} (6.31)

    Applying the generalized Young inequality

    \begin{equation*} AB\le G^*(A)+G(B) \end{equation*}

    to the first term of (6.31), with A = \frac{\tilde{\phi}(E)}{E} and B = G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right) , we easily see that

    \begin{equation} \begin{aligned} \frac{\tilde{\phi}(E)}{E}G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right)\le G^*\left(\frac{\tilde{\phi}(E)}{E}\right)+ \frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx. \end{aligned} \end{equation} (6.32)

    Then we apply it to the second term of (6.31), with A = \frac{\tilde{\phi}(E)}{E} \sqrt{E} and B = \sqrt{\vert \Omega\vert G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right)} to obtain

    \begin{equation} \begin{aligned} \frac{\tilde{\phi}(E)}{E} \sqrt{E}\sqrt{\vert \Omega\vert G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right)}\le G_1^*\left( \frac{\tilde{\phi}(E)}{E} \sqrt{E} \right) +\vert \Omega\vert G^{-1}\left(\frac{1}{\vert \Omega \vert}\int_{\Omega}u_tg(u_t)dx\right). \end{aligned} \end{equation} (6.33)

    Combining (6.31)–(6.33) and using (6.19) and (6.20), we arrive at

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t) &\frac{\tilde{\phi}(E)}{E}\int_{\Omega_2}\left(\vert u_t\vert^2+\vert u g(u_t)\vert\right)dxdt\\ &\le c \int_{S}^{T} \alpha(t) \left( G_1^*\left(\frac{\tilde{\phi}(E)}{E} \sqrt{E}\right)+G^*\left( \frac{\tilde{\phi}(E)}{E}\right)\right)dt+c\int_{S}^{T}\alpha(t)\int_{\Omega}u_t g(u_t)dxdt\\ &\le c \int_{S}^{T} \alpha(t) \left(\varepsilon_0+\frac{(G^{\prime})^{-1}\left(\frac{\tilde{\phi}(E)}{E}\right)}{E}\right)\tilde{\phi}(E)dt+cE(S). \end{aligned} \end{equation} (6.34)

    Using the definition of \tilde{\phi} and the fact that s \to (G^{\prime})^{-1}(s) is non-decreasing, we deduce that, for 0 < \varepsilon_0\le \frac{1}{2} ,

    \begin{equation} \frac{(G^{\prime})^{-1}\left(\frac{\tilde{\phi}(E)}{E}\right)}{E} = \frac{(G^{\prime})^{-1}\left( 2\varepsilon_0 G^{\prime}(\varepsilon_0^2 E)\right)}{E}\le \varepsilon_0^2. \end{equation} (6.35)
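    Indeed, since 0 < 2\varepsilon_0\le 1 and G^{\prime}\ge 0 , we have 2\varepsilon_0 G^{\prime}(\varepsilon_0^2 E)\le G^{\prime}(\varepsilon_0^2 E) , and applying the non-decreasing function (G^{\prime})^{-1} gives

    \begin{equation*} (G^{\prime})^{-1}\left( 2\varepsilon_0 G^{\prime}(\varepsilon_0^2 E)\right)\le (G^{\prime})^{-1}\left( G^{\prime}(\varepsilon_0^2 E)\right) = \varepsilon_0^2 E, \end{equation*}

    which yields (6.35) after dividing by E .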

    Combining (6.34) and (6.35) leads to

    \begin{equation} \begin{aligned} \int_{S}^{T}\alpha(t) &\frac{\tilde{\phi}(E)}{E}\int_{\Omega_2}\left(\vert u_t\vert^2+\vert u g(u_t)\vert\right)dxdt\le c\varepsilon_0 \int_{S}^{T}\alpha(t) \tilde{\phi}(E)dt+cE(S). \end{aligned} \end{equation} (6.36)

    Then, choosing \varepsilon_0 small enough, we deduce from (6.30) and (6.36) that

    \begin{equation*} \begin{aligned} \int_{S}^{T}\alpha(t) \tilde{\phi}(E(t))dt\le c \left( 1+\frac{\tilde{\phi}(E(S))}{E(S)}\right)E(S). \end{aligned} \end{equation*}

    Using the facts that E is non-increasing and s \to \frac{\tilde{\phi}(s)}{s} is non-decreasing (so that \frac{\tilde{\phi}(E(S))}{E(S)}\le \frac{\tilde{\phi}(E(0))}{E(0)}\le c ), and then letting T\to +\infty , we obtain

    \begin{equation} \int_{S}^{+\infty}\alpha(t) \tilde{\phi}(E(t))dt \le cE(S). \end{equation} (6.37)

    Let \tilde{E} = E\circ \tilde{\alpha}^{-1} , where \tilde{\alpha}(t) = \int_{0}^{t}\alpha(s)ds . Then, using the change of variables t = \tilde{\alpha}(\eta) , we deduce from (6.37) that

    \begin{equation*} \begin{aligned} \int_{S}^{\infty} \tilde{\phi}(\tilde{E}(t))dt& = \int_{S}^{\infty} \tilde{\phi}(E(\tilde{\alpha}^{-1}(t)))dt\\ & = \int_{\tilde{\alpha}^{-1}(S)}^{\infty} \alpha(\eta)\tilde{\phi}(E(\eta))d\eta\\ &\le cE\left(\tilde{\alpha}^{-1}(S) \right)\le c \tilde{E}(S). \end{aligned} \end{equation*}

    Using Lemma 3.6 for \tilde{E} and \chi(s) = \frac{1}{c} \tilde{\phi}(s) , we deduce from (3.6) the following estimate

    \begin{equation*} \tilde{E}(t)\le \psi^{-1}\left(h(t)+\psi\left(E(0)\right)\right), \end{equation*}

    which gives (6.25), by using the definition of \tilde{E} and the change of variables.

    Remark 6.5. The stability result (6.25) is indeed a decay result. To see this, write r = \psi(E(0)) . Then

    \begin{equation*} \begin{aligned} h^{-1}(t)& = t+\frac{\psi^{-1}\left(t+\psi(E(0))\right)}{\chi\left(\psi^{-1}(t+\psi(E(0)))\right)}\\ & = t+\frac{1}{2\varepsilon_0 c\, G^{\prime}\left(\varepsilon_0^2 \psi^{-1}(t+r) \right)}\\ &\ge t+\frac{1}{2\varepsilon_0 c\, G^{\prime}\left(\varepsilon_0^2 \psi^{-1}(r) \right)}\\ &\ge t+\tilde{c}, \end{aligned} \end{equation*}

    where we used that \psi^{-1} is non-increasing and G^{\prime} is non-decreasing.

    Hence, \lim_{t\to\infty}h^{-1}(t) = \infty , which implies that \lim_{t \to\infty}h(t) = \infty . Using the convexity of G , we have

    \begin{equation*} \begin{aligned} \psi(t)& = \int_{t}^{1}\frac{1}{\chi(s)}ds = \int_{t}^{1}\frac{c}{2\varepsilon_0 sG^{\prime}\left(\varepsilon_0^2s\right)}ds\ge \int_{t}^{1}\frac{c}{sG^{\prime}\left(\varepsilon_0^2\right)}ds\ge c \left[\ln{s}\right]_{t}^{1} = -c\ln{t}. \end{aligned} \end{equation*}

    Therefore, \lim_{t\to 0^+}\psi(t) = \infty which leads to \lim_{t\to \infty}\psi^{-1}(t) = 0 .

    Examples

    1) Let g(s) = s^m , where m\ge 1 . Then the function G is defined in the neighborhood of zero by

    G(s) = c s^{\frac{m+1}{2}}

    which gives, near zero

    \chi(s) = \frac{c(m+1)}{2} s^{\frac{m+1}{2}}.
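    Indeed, since here G^{\prime}(s) = c\frac{m+1}{2}s^{\frac{m-1}{2}} , the definition \chi(s) = 2\varepsilon_0 c s G^{\prime}(\varepsilon^2_0 s) from Theorem 6.4 gives

    \begin{equation*} \chi(s) = 2\varepsilon_0 c s\cdot c\frac{m+1}{2}\left(\varepsilon_0^2 s\right)^{\frac{m-1}{2}} = c\,\varepsilon_0^{m}(m+1)\, s^{\frac{m+1}{2}}, \end{equation*}

    which is the expression above once the powers of \varepsilon_0 are absorbed into the generic constant c .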

    So, we obtain

    \begin{equation*} \psi(t) = c \int_{t}^{1}\frac{2}{(m+1)s^{\frac{m+1}{2}}}ds = \begin{cases} \dfrac{c}{t^{\frac{m-1}{2}}}, & \text{if } m > 1;\\[2mm] -c\ln{t}, & \text{if } m = 1, \end{cases} \end{equation*}

    and then, in the neighborhood of \infty

    \begin{equation*} \psi^{-1}(t) = \begin{cases} c t^{-\frac{2}{m-1}}, & \text{if } m > 1;\\[2mm] c e^{-t}, & \text{if } m = 1. \end{cases} \end{equation*}

    Using the fact that h(t) is of the order of t for t large, we obtain from (6.24) and (6.25)

    \begin{equation*} E(t)\le \begin{cases} c \left(\int_{0}^{t}\alpha(s)ds\right)^{-\frac{2}{m-1}}, & \text{if } m > 1;\\[2mm] c e^{-\int_{0}^{t}\alpha(s)ds}, & \text{if } m = 1. \end{cases} \end{equation*}

    2) Let g(s) = s^m \sqrt{-\ln{s}} , where m\ge 1 . Then the function G is defined in the neighborhood of zero by

    G(s) = c s^{\frac{m+1}{2}} \sqrt{-\ln{\sqrt{s}}}

    which gives, near zero

    \chi(s) = cs^{\frac{m+1}{2}}\left(-\ln{\sqrt{s}}\right)^{-\frac{1}{2}}\left(\frac{m+1}{2}\left(-\ln{\sqrt{s}}\right)-\frac{1}{4}\right).
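    This follows from a direct differentiation of G : writing -\ln{\sqrt{s}} = -\frac{1}{2}\ln{s} , one gets

    \begin{equation*} G^{\prime}(s) = c\, s^{\frac{m-1}{2}}\left(-\ln{\sqrt{s}}\right)^{-\frac{1}{2}}\left(\frac{m+1}{2}\left(-\ln{\sqrt{s}}\right)-\frac{1}{4}\right), \end{equation*}

    and \chi(s) = 2\varepsilon_0 c s G^{\prime}(\varepsilon^2_0 s) then takes, near zero and up to generic constants, the form displayed above.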

    Therefore, using the change of variable \tau = \frac{1}{\sqrt{s}} , we get

    \begin{equation*} \begin{aligned} \psi(t) &= c \int_{t}^{1}\frac{1}{ s^{\frac{m+1}{2}}\left(-\ln{\sqrt{s}}\right)^{-\frac{1}{2}}\left(\frac{m+1}{2}\left(-\ln{\sqrt{s}}\right)-\frac{1}{4}\right)}ds\\ &= c\int_{1}^{\frac{1}{\sqrt{t}}}\frac{\tau^{m-2}}{(\ln{\tau})^{-\frac{1}{2}} \left(\frac{m+1}{2}\ln{\tau}-\frac{1}{4} \right)}d\tau\\ &= \begin{cases} \dfrac{c}{t^{\frac{m-1}{2}}\sqrt{-\ln{t} }}, & \text{if } m > 1;\\[2mm] c\sqrt{-\ln{t}}, & \text{if } m = 1, \end{cases} \end{aligned} \end{equation*}

    and then, in the neighborhood of \infty , we have

    \begin{equation*} \psi^{-1}(t) = \begin{cases} c t^{-\frac{2}{m-1}}\left( \ln{t} \right)^{-\frac{1}{m-1}}, & \text{if } m > 1;\\[2mm] c e^{-t^2}, & \text{if } m = 1. \end{cases} \end{equation*}

    Using the fact that h(t) is of the order of t for t large, we obtain

    \begin{equation*} E(t)\le \begin{cases} c \left(\int_{0}^{t}\alpha(s)ds\right)^{-\frac{2}{m-1}} \left(\ln{\left(\int_{0}^{t}\alpha(s)ds\right)}\right)^{-\frac{1}{m-1}}, & \text{if } m > 1;\\[2mm] c e^{- \left(\int_{0}^{t}\alpha(s)ds\right)^2}, & \text{if } m = 1. \end{cases} \end{equation*}

    The authors would like to express their profound gratitude to King Fahd University of Petroleum and Minerals (KFUPM) for its continuous support. The authors also thank the referees for their very careful reading and valuable comments. This work was funded by KFUPM under Project #SB201003.

    The authors declare that there is no conflict of interest.


