As is well known, the cylindrically symmetric Navier-Stokes equations take the form
\begin{equation} \rho_t+\frac{(r\rho u)_r}{r} = 0, \end{equation} | (1.1) |
\begin{equation} \rho(u_t+uu_r)-\frac{\rho v^2}{r}+P_r = \left(\frac{\lambda (ru)_r}{r}\right)_r-\frac{2u\mu_r}{r}, \end{equation} | (1.2) |
\begin{equation} \rho(v_t+uv_r)+\frac{\rho uv}{r} = (\mu v_r)_r+\frac{2\mu v_r}{r}-\frac{(\mu v)_r}{r}-\frac{\mu v}{r^2}, \end{equation} | (1.3) |
\begin{equation} \rho(w_t+uw_r) = (\mu w_r)_r+\frac{\mu w_r}{r}, \end{equation} | (1.4) |
\begin{equation} \rho(e_t+ue_r)+\frac{P(ru)_r}{r} = \frac{(\kappa r\theta_r)_r}{r}+Q, \end{equation} | (1.5) |
where $\rho(r,t)$ is the density, $u(r,t)$, $v(r,t)$, and $w(r,t)$ are the radial, angular, and axial components of the velocity, respectively, $\theta(r,t)$ is the temperature, and the pressure $P$ and the internal energy $e$ are related to the density and temperature by
\begin{equation} P = P(\rho,\theta) = R\rho\theta \qquad \text{and} \qquad e = e(\rho,\theta) = c_v\theta, \end{equation} | (1.6) |
where the specific gas constant $R$ and the specific heat at constant volume $c_v$ are positive constants; the symbol $Q$ denotes
\begin{equation} Q = \frac{\lambda (ru)_r^2}{r^2}-\frac{4\mu uu_r}{r}+\mu w_r^2+\mu\left(v_r-\frac{v}{r}\right)^2, \end{equation} | (1.7) |
μ and λ are viscosity coefficients, and κ is the heat conductivity coefficient.
Without loss of generality, we shall consider the system (1.1)–(1.5) with the following initial-boundary data:
\begin{equation} \begin{cases} (\rho,u,v,w,\theta)|_{t=0} = (\rho_0,u_0,v_0,w_0,\theta_0)(r), & 0<a\leq r\leq b<\infty,\\ (u,v,w,\partial_r\theta)|_{r=a} = (u,v,w,\partial_r\theta)|_{r=b} = 0, & t\geq0. \end{cases} \end{equation} | (1.8) |
Our main goal is to show the large-time behavior of global solutions to the initial-boundary value problem (1.1)–(1.8) with large initial data. For this purpose, it is convenient to rewrite the initial-boundary value problem (1.1)–(1.8) in Lagrangian coordinates. We introduce the Lagrangian coordinates $(t,x)$ and denote $(\tilde\rho,\tilde u,\tilde v,\tilde w,\tilde\theta)(t,x) = (\rho,u,v,w,\theta)(t,r)$, where
\begin{equation} r = r(t,x) = r_0(x)+\int_0^tu(s,r(s,x))\,\mathrm{d}s, \end{equation} | (1.9) |
and
\begin{equation*} r_0(x):=f^{-1}(x), \qquad f(r):=\int_a^ry\rho_0(y)\,\mathrm{d}y. \end{equation*}
Note that the function $f$ is invertible on $[a,b]$ provided that $\rho_0(y)>0$ for each $y\in[a,b]$ (which will be assumed in Theorem 1.1). Due to (1.1) and (1.8), we see
\begin{equation*} \frac{\partial}{\partial t}\int_a^{r(t,x)}y\rho(t,y)\,\mathrm{d}y = 0. \end{equation*}
Then it is easy to check
\begin{equation} \int_a^{r(t,x)}y\rho(t,y)\,\mathrm{d}y = f(r_0(x)) = x \qquad \text{and} \qquad \int_b^{r(t,1)}y\rho(t,y)\,\mathrm{d}y = 0, \end{equation} | (1.10) |
which transforms the domain $[0,T]\times[a,b]$ into $[0,T]\times[0,1]$. Hereafter, we denote $(\tilde\rho,\tilde u,\tilde v,\tilde w,\tilde\theta)$ by $(\rho,u,v,w,\theta)$ for simplicity. The identities (1.9) and (1.10) imply
\begin{equation} r_t(t,x) = u(t,x), \qquad r_x(t,x) = r^{-1}\tau(t,x), \end{equation} | (1.11) |
where $\tau:=\rho^{-1}$ is the specific volume. By means of the identities (1.11), the system (1.1)–(1.8) is transformed into
\begin{equation} \tau_t = (ru)_x, \end{equation} | (1.12) |
\begin{equation} u_t-\frac{v^2}{r}+rP_x = r\left(\frac{\lambda(ru)_x}{\tau}\right)_x-2u\mu_x, \end{equation} | (1.13) |
\begin{equation} v_t+\frac{uv}{r} = r\left(\frac{\mu rv_x}{\tau}\right)_x+2\mu v_x-(\mu v)_x-\frac{\mu\tau v}{r^2}, \end{equation} | (1.14) |
\begin{equation} w_t = r\left(\frac{\mu rw_x}{\tau}\right)_x+\mu w_x, \end{equation} | (1.15) |
\begin{equation} e_t+P(ru)_x = \left(\frac{\kappa r^2\theta_x}{\tau}\right)_x+Q, \end{equation} | (1.16) |
where $t>0$, $x\in\Omega = (0,1)$, $P = \frac{R\theta}{\tau}$, $e = \frac{R\theta}{\gamma-1}$, and
\begin{equation*} Q = \frac{\lambda(ru)_x^2}{\tau}-4\mu uu_x+\frac{\mu r^2w_x^2}{\tau}+\mu\tau\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2. \end{equation*}
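For the reader's convenience, here is a quick check of (1.11) (it is not spelled out above, but follows directly from (1.9) and (1.10)):
\begin{equation*} r_t(t,x) = u(t,r(t,x)) \quad \text{by (1.9)}, \qquad 1 = \partial_x\int_a^{r(t,x)}y\rho(t,y)\,\mathrm{d}y = r\rho\,r_x \ \Longrightarrow\ r_x = \frac{1}{r\rho} = r^{-1}\tau. \end{equation*}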
Throughout this paper, we assume that μ,λ, and κ are power functions of absolute temperature as follows:
\begin{equation} \mu = \tilde\mu\theta^{\alpha}, \qquad \lambda = \tilde\lambda\theta^{\alpha}, \qquad \kappa = \tilde\kappa\theta^{\beta}, \end{equation} | (1.17) |
where $\tilde\mu$, $\tilde\lambda$, $\tilde\kappa$, $\alpha$, and $\beta$ are positive constants.
The objective of this paper is to study the global existence and stability of the solutions to an initial-boundary value problem of (1.12)–(1.16) with the initial data:
\begin{equation} (\tau,u,v,w,\theta)(x,0) = (\tau_0,u_0,v_0,w_0,\theta_0), \qquad x\in(0,1), \end{equation} | (1.18) |
and the boundary conditions:
\begin{equation} (u,v,w,\theta_x)(0,t) = (u,v,w,\theta_x)(1,t) = 0, \qquad t\geq0. \end{equation} | (1.19) |
Using the Navier-Stokes equations as a model to describe fluid motion has been widely accepted by the physics community. In recent years, significant progress has been made in the study of the Navier-Stokes equations with constant viscosity coefficients. When the initial data are suitably small and no vacuum state is present, the global existence, uniqueness, and large-time behavior of the solutions have been established [1,2,3,4,5,6,7,8]. However, the problem with large initial data is very challenging, and the first significant breakthrough was achieved by Lions [9]. Moreover, by assuming that the initial data are only sufficiently small in the energy space, Hoff [10,11] confirmed the existence of global weak solutions. A vacuum state is often involved in the study of fluid motion, which makes the analysis far more complex. The results in [12,13] indicate that the Cauchy problem of the Navier-Stokes equations with constant coefficients is not well-posed in the presence of a vacuum state; this is reflected by the fact that the solutions of the system do not depend continuously on the initial data. Based on physical considerations, Liu-Xin-Yang [12] studied the Cauchy problem of the Navier-Stokes equations with density-dependent viscosity and proved its local well-posedness. However, real fluids can be regarded as ideal fluids (with constant viscosity coefficients) only when the temperature and density vary within a suitable range; when the temperature or density changes greatly, the viscosity of a real fluid also varies greatly [14].
On the other hand, the Navier-Stokes equations can be derived via the Chapman-Enskog expansion of the Boltzmann equation, a microscopic model of particle collisions. Consequently, the viscosity depends on the temperature. However, compared with the abundant research on the classical model, studies of the physically relevant case of temperature-dependent viscosity are lacking. Because the viscosity and heat conductivity are both temperature-dependent, degeneracy and strong nonlinearity may appear. Pan-Zhang [15] and Huang-Shi [16] obtained global strong solutions and their large-time behavior in bounded domains for the one-dimensional Navier-Stokes equations when $\alpha = 0$ and $0<\beta<1$. Liu-Yang-Zhao-Zhou [17] and Wan-Wang [18] also obtained global solutions of the Navier-Stokes equations in the one-dimensional and cylindrically symmetric cases, respectively, under the requirement that $|\gamma-1|$ is small enough. Wang-Zhao [19] removed the smallness condition on $|\gamma-1|$ and established global classical solutions to the Navier-Stokes equations in the one-dimensional whole space when $\mu$ and $\kappa$ satisfy:
\begin{equation*} \mu = \tilde\mu h(\tau)\theta^{\alpha}, \qquad \kappa = \tilde\kappa h(\tau)\theta^{\alpha}, \end{equation*}
where $\alpha$ is small enough. In their setting, the viscosity and heat conductivity depend on both temperature and density, and to overcome the difficulties caused by the density, the following condition could not be removed:
\begin{equation*} \|h(\tau)^{-1}\tau^{-1}\|_{L^\infty(\Omega)}+\|h(\tau)^{-1}\tau\|_{L^\infty(\Omega)}\leq C. \end{equation*}
This means that the estimate of $\|\tau_x\|_{L^2(\Omega)}$ can be obtained directly, without upper and lower bounds on the density, as long as the coefficient $\mu^{-1}$ or $\kappa^{-1}$ appears. However, if $h(\tau)$ is constant, then the constants $l_1 = l_2 = 0$ and the result of this case cannot be established using the model in [19]. Recently, Sun-Zhang-Zhao [20] considered an initial-boundary value problem for the compressible Navier-Stokes equations of one-dimensional viscous and heat-conducting ideal polytropic fluids with temperature-dependent transport coefficients, and established the global-in-time existence of strong solutions. In that paper, the initial data can be large if $\alpha\geq0$ is small, while the growth exponent $\beta\geq0$ can be arbitrarily large. It is worth mentioning that the smallness of $\alpha>0$ depends on the size of the initial data; unfortunately, [20] did not provide a specific relationship between $\alpha$ and the initial data. Our main result is stated as follows.
Theorem 1.1. For given positive constants M0,V0>0, assume that
\begin{equation} \|(\tau_0,u_0,v_0,w_0,\theta_0)\|_{H^2(\Omega)}\leq M_0, \qquad \inf\limits_{x\in(0,1)}\{\tau_0,\theta_0\}\geq V_0. \end{equation} | (1.20) |
Then there exist $\epsilon_0>0$ and $C_0$, depending only on $\beta$, $M_0$, and $V_0$, such that the initial-boundary value problem (1.12)–(1.19) with $0\leq\alpha\leq\epsilon_0:=\min\{|\alpha_1|,|\alpha_2|\}$ and $\beta>0$ admits a unique global-in-time strong solution $(\tau,u,v,w,\theta)$ on $[0,1]\times[0,+\infty)$ satisfying
\begin{equation*} C_0^{-1}\leq\tau(x,t)\leq C_0, \qquad C_1^{-1}\leq\theta(x,t)\leq C_1, \end{equation*}
and
\begin{equation*} (\tau-\bar\tau,u,v,w,\theta-E_0)\in C([0,+\infty);H^2(\Omega)), \end{equation*}
where $\alpha_1,\alpha_2$, defined in what follows, depend only on $\beta$, $M_0$, and $V_0$ (see (3.2), (3.5), and (3.6) for details). Moreover, for any $t>0$, the exponential decay rate is
\begin{equation} \|(\tau-\bar\tau,u,v,w,\theta-E_0)\|_{H^1}^2+\|r-\bar r\|_{H^2}^2\leq C\text{e}^{-\gamma_0t}, \end{equation} | (1.21) |
where
\begin{equation*} \bar\tau = \int_0^1\tau\,\mathrm{d}x, \qquad E_0 = \int_0^1\left(\theta_0+\frac{u_0^2+v_0^2+w_0^2}{2c_v}\right)\mathrm{d}x, \qquad \bar r = \left[a^2+2\bar\tau x\right]^{\frac12}. \end{equation*}
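It is worth pointing out why $\bar\tau$ and $E_0$ are the natural asymptotic states in (1.21). By (1.12) and the boundary conditions (1.19), the mean specific volume is conserved:
\begin{equation*} \frac{\mathrm{d}}{\mathrm{d}t}\int_0^1\tau\,\mathrm{d}x = \int_0^1(ru)_x\,\mathrm{d}x = (ru)\Big|_{x=0}^{x=1} = 0, \qquad \text{so} \qquad \bar\tau = \int_0^1\tau_0\,\mathrm{d}x. \end{equation*}
Similarly, a standard computation (multiplying (1.13)–(1.15) by $u$, $v$, $w$, adding the results to (1.16), and integrating over $(0,1)$) shows that the total energy $\int_0^1\big(c_v\theta+\frac{u^2+v^2+w^2}{2}\big)\,\mathrm{d}x$ is conserved in time, and its common value is exactly $c_vE_0$.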
A few remarks are in order.
Remark 1. For $k = 1,2$ and $1\leq p\leq\infty$, we adopt the following simplified notations for the standard Sobolev spaces:
\begin{equation*} \|\cdot\|:=\|\cdot\|_{L^2(\Omega)}, \qquad \|\cdot\|_k:=\|\cdot\|_{H^k(\Omega)}, \qquad \|f\|_\infty:=\max\limits_{x\in\Omega}|f(x)|, \qquad \|\cdot\|_{L^p}:=\|\cdot\|_{L^p(\Omega)}. \end{equation*}
Remark 2. We remark here that the growth exponent $\beta\in(0,+\infty)$ can be arbitrarily large, and the choice of $\epsilon_0>0$ depends only on $\beta$, $V_0$, and the $H^2$-norm of the initial data.
The outline of this paper is as follows. Section 2 is devoted to a number of a priori estimates independent of time, which are needed to extend the local solution to all time. Based on these estimates, the main result, Theorem 1.1, is proved in Section 3.
Remark 3. In this paper, $c$, $C$, and $C_i$ $(i = 0,1,\cdots,16)$ denote positive constants which depend only on $\beta$, $M_0$, and $V_0$, but not on the time $t$. Furthermore, $c$ and $C$ may differ from line to line.
First of all, define
\begin{equation*}
\begin{split}
X(t_1,t_2;m_1,m_2;N):=\Big\{(\tau,u,v,w,\theta)\in C([t_1,t_2];H^2(\Omega)):\ &\tau_x\in L^2(t_1,t_2;H^1(\Omega)),\ (u_x,v_x,w_x,\theta_x)\in L^2(t_1,t_2;H^2(\Omega)),\\
&\tau_t\in C([t_1,t_2];H^1(\Omega))\cap L^2(t_1,t_2;H^1(\Omega)),\\
&(u_t,v_t,w_t,\theta_t)\in C([t_1,t_2];L^2(\Omega))\cap L^2(t_1,t_2;H^1(\Omega)),\\
&\tau\geq m_1,\ \theta\geq m_2,\ E(t_1,t_2)\leq N^2,\ \forall(x,t)\in[0,1]\times[t_1,t_2]\Big\},
\end{split}
\end{equation*}
where N, m1, m2, and t1,t2(t2>t1) are constants and
\begin{equation*} E(t_1,t_2):=\sup\limits_{t_1\leq t\leq t_2}\left[\|(\tau_x,u_x,\theta_x)\|_{1}^2+\|\theta_t\|^2\right]+\int_{t_1}^{t_2}\|\theta_t\|^2\,\mathrm{d}t \end{equation*}
with
\begin{equation*} \theta_t|_{t = t_1}:=\frac{1}{c_v}\left[-P(ru)_x+\left(\frac{\kappa r^2\theta_x}{\tau}\right)_x+Q\right]\bigg|_{t = t_1}, \end{equation*}
\begin{equation*} \theta_{xt}|_{t = t_1}:=\frac{1}{c_v}\left[-P(ru)_x+\left(\frac{\kappa r^2\theta_x}{\tau}\right)_x+Q\right]_x\bigg|_{t = t_1}. \end{equation*}
The main purpose of this section is to derive the global t-independent estimates of the solutions (τ,u,v,w,θ)∈X(0,T;m1,m2,N).
We start with the following basic energy estimate.
Lemma 2.1. Assume that the conditions listed in Theorem 1.1 hold. Then there exists a constant 0<ϵ1≤1 depending only on M0 and V0, such that if
\begin{equation} m_2^{-\alpha}\leq2, \qquad N^{\alpha}\leq2, \qquad \alpha H(m_1,m_2,N)\leq\epsilon_1, \end{equation} | (2.1) |
where
\begin{equation*} H(m_1,m_2,N):=(m_1+m_2+N+1)^{5}, \end{equation*}
then for T≥0,
\begin{equation} \int_0^1\eta_{\hat\theta}(\tau,u,v,w,\theta)(x,t)\,\mathrm{d}x+\int_0^T\!\!\int_0^1\left[\frac{\tau u^2}{\theta}+\frac{u_x^2+w_x^2}{\tau\theta}+\frac{\theta^\beta\theta_x^2}{\tau\theta^2}+\frac{\tau}{\theta}\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2\right]\mathrm{d}x\,\mathrm{d}s\leq C, \end{equation} | (2.2) |
where
\begin{equation*} \eta_{\hat\theta}(\tau,u,v,w,\theta):=\hat\theta\phi\left(\frac{\tau}{\bar\tau}\right)+\frac{u^2+v^2+w^2}{2}+c_v\hat\theta\phi\left(\frac{\theta}{\hat\theta}\right), \qquad \phi(z):=z-\log z-1. \end{equation*}
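Note that $\phi$ is nonnegative and strictly convex on $(0,\infty)$, with its minimum at $z = 1$, so $\eta_{\hat\theta}$ is a nonnegative entropy-type functional; indeed,
\begin{equation*} \phi'(z) = 1-\frac{1}{z}, \qquad \phi''(z) = \frac{1}{z^2}>0, \qquad \phi(z)\geq\phi(1) = 0, \qquad \phi(z)\to+\infty \ \text{as}\ z\to0^+ \ \text{or}\ z\to+\infty. \end{equation*}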
Proof. Multiplying (1.12)–(1.16) by Rˆθ(ˉτ−1−τ−1), u, v, w, and (1−ˆθθ−1), respectively, integrating over [0,1], and adding them together, one obtains
\begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\int_0^1\eta_{\hat\theta}(\tau,u,v,w,\theta)\,\mathrm{d}x+\int_0^1\left[\frac{\tilde\kappa r^2\theta^\beta\theta_x^2}{\tau\theta^2}+\frac{Q}{\theta}\right]\mathrm{d}x = 0, \end{equation} | (2.3) |
where $Q = \frac{\lambda(ru)_x^2}{\tau}-4\mu uu_x+\frac{\mu r^2w_x^2}{\tau}+\mu\tau\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2$.
Apparently, by means of λ=2μ+λ′, one has
\begin{equation*}
\begin{split}
\lambda(ru)_x^2-4\tau\mu uu_x& = (2\mu+\lambda')r^2u_x^2+(2\mu+\lambda')\frac{\tau^2u^2}{r^2}+2\lambda'\tau uu_x\\
& = 2\mu r^2u_x^2+2\mu\frac{\tau^2u^2}{r^2}+\frac{2\mu+3\lambda'}{3}\left[ru_x+\frac{\tau u}{r}\right]^2-\frac{2\mu}{3}\left[ru_x+\frac{\tau u}{r}\right]^2\\
& = \frac{2}{3}\mu\left(r^2u_x^2+\frac{\tau^2u^2}{r^2}\right)+\frac{2\mu+3\lambda'}{3}\left[ru_x+\frac{\tau u}{r}\right]^2+\frac{2\mu}{3}\left[ru_x-\frac{\tau u}{r}\right]^2\\
&\geq \frac{2}{3}\mu\left(r^2u_x^2+\frac{\tau^2u^2}{r^2}\right).
\end{split}
\end{equation*}
Thus, one has
\begin{equation*} Q\geq\frac{Cu_x^2}{\tau}+C\tau u^2+\frac{Cw_x^2}{\tau}+C\tau\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2, \end{equation*}
which combined with (2.1) and (2.3) yields
\begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\int_0^1\eta_{\hat\theta}(\tau,u,v,w,\theta)(t,x)\,\mathrm{d}x+c\int_0^1\left(\frac{\tau u^2}{\theta}+\frac{u_x^2+w_x^2}{\tau\theta}+\frac{\theta^\beta\theta_x^2}{\tau\theta^2}+\frac{\tau}{\theta}\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2\right)\mathrm{d}x\leq0. \end{equation} | (2.4) |
Integrating (2.4) over $(0,T)$, we obtain (2.2) from the initial data $(\tau_0,u_0,v_0,w_0,\theta_0)$.
Next, by means of Lemma 2.1, we derive the upper and lower bounds of τ.
Lemma 2.2. Assume that the conditions of Lemma 2.1 hold. Then for (x,t)∈Ω×[0,∞),
\begin{equation*} C_0^{-1}\leq\tau(x,t)\leq C_0. \end{equation*}
Proof. The proof is divided into three steps.
Step 1 (Representation of the formula for τ).
It follows from (1.13) that
(ur)t+u2−v2r2+2uμxr+Px=(λ(lnτ)t)x=λ(lnτ)xt+λx(ru)xτ. |
that is
(uλr)t+g+(λ−1P)x=(lnτ)xt, | (2.5) |
where
g:=u2−v2λr2+2uμxλr−(λ−1)xP−λx(ru)xλτ−(λ−1)tur. |
Integrating (2.5) over [0,t]×[x1(t),x], we have
∫xx1(t)(uλr−u0λ0r0)dξ+∫t0∫xx1(t)gdξds+∫t0λ−1P(x)−λ−1P(x1)ds=lnτ(x,t)−lnτ(x1(t),t)−[lnτ0(x)−lnτ(x1(t),0)], | (2.6) |
where $x_1(t)\in[0,1]$ will be determined in the process below. Next, for convenience, we define
F:=(ru)xτ−λ−1P−∫x0g(ξ)dξ,φ:=∫t0F(x,s)ds+∫x0u0λ0r0dξ. |
It follows from the definitions above that
φx=uλr,φt=F. | (2.7) |
By the definition of F and (1.12), one has
∫t0[λ−1P(x1(t),s)+∫x1(t)0g(ξ,s)dξ]ds=∫t0((ru)xτ−F)(x1(t),s)ds=lnτ(x1(t),t)−lnτ(x1(t),0)−∫t0F(x1(t),s)ds. | (2.8) |
Due to (1.12) and (2.7), we have
(τφ)t−(ruφ)x=τφt−ruφx=τF−u2λ=(ru)x−τPλ−τ∫x0g(ξ)dξ−u2λ. | (2.9) |
Integrating (2.9) over [0,t]×Ω, one has
∫10φτdx=∫10τ0∫x0(u0λ0r0)(ξ)dξdx−∫t0∫10[τλP+τ∫x0gdξ+u2λ]dxds. | (2.10) |
Hence, by virtue of the mean value theorem, there exists $x_1(t)\in[0,1]$ such that $\varphi(x_1(t),t) = \int_0^1\varphi\tau\,\mathrm{d}x$. By the definition of $\varphi$, (2.8), and (2.10), one obtains
∫t0F(x1(t),s)ds=φ(x1(t),t)−∫x1(t)0u0λ0r0(ξ)dξ=∫10τ0∫x0u0λ0r0(ξ)dξdx−∫t0∫10(τλP+τ∫x0gdξ+u2λ)dxds−∫x1(t)0u0λ0r0(ξ)dξ. | (2.11) |
Putting (2.11) into (2.8), it follows that
∫t0(Pλ(x1(t),s)+∫x1(t)0g(ξ,s)dξ)ds=lnτ(x1(t),t)−lnτ(x1(t),0)−∫10τ0∫x0u0λ0r0(ξ)dξdx+∫x1(t)0u0λ0r0(ξ)dξ+∫t0∫10(τλP+τ∫x0gdξ+u2λ)dxds. | (2.12) |
Inserting (2.12) into (2.6), we derive
∫t0Pλds+∫t0∫x0gdξds−∫t0∫10(τλP+τ∫x0gdξ+u2λ)dxds+∫xx1(t)(uλr−u0λ0r0)dξ+∫10τ0∫x0u0λ0r0dξdx−∫x1(t)0u0λ0r0dξ=lnτ−lnτ0. | (2.13) |
Let
g=u2−v2λr2+g1, |
where
g1:=2uμxλr−(λ−1)xP−λx(ru)xλτ−(λ−1)tur. |
It follows from (2.13) that
τ=B−1AD, | (2.14) |
where
A:=exp{∫t0[Pλ(x,s)+∫x0(g1(ξ,s)+u2λr2)dξ+∫10τ∫x0(v2λr2−g1)dξdx]ds},B:=exp{∫t0[∫10(τλP+τ∫x0u2λr2(ξ)dξ+u2λ)dx+∫x0v2λr2dξ]ds},D:=τ0exp{∫10τ0∫x0u0λ0r0dξdx−∫x1(t)0u0λ0r0dξ+∫xx1(t)(uλr−u0λ0r0)(ξ)dξ}. |
By (2.14), one has
τD−1B=A. | (2.15) |
Define that
J:=Pλ(x,s)+∫x0(g1(ξ,s)+u2λr2)dξ+∫10τ∫x0(v2λr2−g1)dξdx. |
Then, multiplying (2.15) by J gives
τD−1BJ=ddtA. |
Since A(0)=1, integrating the above equality over (0,t) about time, one has
τ=DB−1+1λ∫t0B(s)B(t)D(t)D(s)τ[Pλ(x,s)+∫x0(g1(ξ,s)+u2λr2)dξ+∫10τ∫x0(v2λr2−g1)dξdx]ds. | (2.16) |
Step 2 (Lower bound for τ). First of all, by means of (2.1) and (2.2), one has
\begin{equation} C^{-1}\leq D\leq C. \end{equation} | (2.17) |
Next, we estimate B. Employing Jensen's inequality to the convex function ϕ, we have
\begin{equation} \int_0^1z\,\mathrm{d}x-\log\int_0^1z\,\mathrm{d}x-1\leq\int_0^1\phi(z)\,\mathrm{d}x. \end{equation} | (2.18) |
By (2.18) and Lemma 2.1, one obtains
\begin{equation} C^{-1}\leq\int_0^1\tau\,\mathrm{d}x, \qquad \bar\theta:=\int_0^1\theta\,\mathrm{d}x\leq C, \end{equation} | (2.19) |
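One way to see (2.19): applying (2.18) with $z = \theta/\hat\theta$ and invoking the bound on $\int_0^1\phi(\theta/\hat\theta)\,\mathrm{d}x$ contained in (2.2),
\begin{equation*} \phi\left(\frac{\bar\theta}{\hat\theta}\right) = \frac{\bar\theta}{\hat\theta}-\log\frac{\bar\theta}{\hat\theta}-1\leq\int_0^1\phi\left(\frac{\theta}{\hat\theta}\right)\mathrm{d}x\leq C, \end{equation*}
and since $\phi(z)\to+\infty$ as $z\to+\infty$, this forces $\bar\theta\leq C$; the lower bound on $\int_0^1\tau\,\mathrm{d}x$ follows from $\int_0^1\tau\,\mathrm{d}x = \int_0^1\tau_0\,\mathrm{d}x\geq V_0$, which is a consequence of (1.12) and (1.19).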
which means that
\begin{equation} C^{-1}\leq\int_0^1\frac{\tau}{\lambda}P\,\mathrm{d}x\leq C. \end{equation} | (2.20) |
Hence, by means of the definition of $B$ and (2.20), choosing $\epsilon_1$ suitably small, there exist two positive constants $c_1$ and $c_2$ such that
\begin{equation} \text{e}^{c_1t}\leq B(t)\leq\text{e}^{c_2t}. \end{equation} | (2.21) |
That is,
e−c1(t−s)≤B(s)B(t)≤e−c2(t−s). | (2.22) |
Apparently, by means of (2.1) and (2.19), we deduce
|τ∫10τ∫x0g1dξdx|≤C|α|‖τ‖2∞(‖θ−1‖∞‖θx‖‖u‖+‖θ−ατ−1‖∞‖θx‖+‖θ−1τ−1‖∞‖θx‖‖u‖1+‖θ−1τ−1‖∞‖θt‖‖u‖)≤Cε1. | (2.23) |
Similarly, one also has
‖∫x0g1dξ‖∞≤Cε1. | (2.24) |
Thus, for t≤t0<∞,
τ≥DB−1−Cε1∫t0e−c2(t−s)ds=DB−1−Cε1c2(1−e−c2t)≥Ce−ct0−ε2(1−e−c2t0). |
For sufficiently large $t$, we have
infx∈Ωτ(x,t)≥C∫t0B(s)B(t)θds−ε2(1−e−c2t). | (2.25) |
So, we need estimates of $\theta$ and $\frac{B(s)}{B(t)}$. By the mean value theorem and (2.19), there exists $x_2(t)\in[0,1]$ such that
\begin{equation} C^{-1}\leq\theta(x_2(t),t)\leq C. \end{equation} | (2.26) |
By Cauchy-Schwarz's inequality and (2.19), one has
|[ln(θ+1)]β2+1−[ln(θ(x2(t),t)+1)]β2+1|=|∫xx2(ln(θ+1))β2θx√τ(θ+1)√τ(ξ)dξ|≤(∫10(ln(θ+1))βθ2xτ(θ+1)2dx)12(∫10τdx)12≤C(∫10θβθ2xτθ2dx)1/2, |
which means that
\begin{equation} \theta\geq C-C\int_0^1\frac{\theta^\beta\theta_x^2}{\tau\theta^2}\,\mathrm{d}x. \end{equation} | (2.27) |
By (2.16)–(2.17), (2.23)–(2.24), (2.21), Lemma 2.1, and (2.19), one has
∫10τdx≤Ce−ct+C∫t0B(s)B(t)ds, |
that is
\begin{equation} \int_0^t\frac{B(s)}{B(t)}\,\mathrm{d}s\geq C-C\text{e}^{-ct}. \end{equation} | (2.28) |
Putting (2.27) into (2.25), by (2.22), (2.28), and Lemma 2.1, for sufficiently large $t$, one has
∫t0B(s)B(t)θds≥C∫t0B(s)B(t)(1−∫10θβθ2xτθ2dx)ds≥C−Ce−ct−C(∫t/20+∫tt/2)B(s)B(t)∫10θβθ2xτθ2dxds≥C−Ce−ct−C∫t/20e−c(t−s)∫10θβθ2xτθ2dxds−C∫tt/2∫10θβθ2xτθ2dxds≥C−Ce−ct−Ce−ct/2−C∫tt/2∫10θβθ2xτθ2dxds≥C. | (2.29) |
Inserting (2.29) into (2.25), it follows that there is a sufficiently large time $T_0$ such that for $t>T_0$,
\begin{equation*} \inf\limits_{x\in\Omega}\tau(x,t)\geq C. \end{equation*}
Step 3 (Upper bound for τ). By (2.17), (2.22)–(2.24), and Lemma 2.1, one obtains
‖τ‖∞≤C+C∫t0e−c2(t−s)‖τ‖∞(∫10θβθ2xτθ2dx+1)ds, | (2.30) |
where we have used the results
{‖θ‖∞≤C+C‖τ‖∞∫10θβθ2xτθ2dxwhen0<β≤1,‖θ‖∞≤C+C∫10θβθ2xτθ2dxwhen1<β<∞. | (2.31) |
In fact, by Hölder's inequality, for 0<β≤1,
|θ1/2(x,t)−θ1/2(x2(t),t)|≤∫10θ−12θxdx≤‖τ‖1/2∞(∫10θβθ2xτθ2dx)1/2(∫10θ1−βdx)1/2≤‖τ‖1/2∞(∫10θβθ2xτθ2dx)1/2. | (2.32) |
For 1<β<∞,
|θβ/2(x,t)−θβ/2(x2(t),t)|≤∫10θβ/2θxθdx≤(∫10θβθ2xτθ2dx)1/2(∫10τdx)1/2. | (2.33) |
By means of (2.26) and (2.32)–(2.33), we can obtain (2.31).
Thus, the inequality (2.30) combined with Gronwall's inequality and Lemma 2.1 yields that for any t≥0,
\begin{equation*} \sup\limits_{t\geq0}\|\tau(x,t)\|_\infty\leq C. \end{equation*}
However, we cannot get the time-space estimate of vx in Lemma 2.1. To obtain this estimate, we need the following result.
Lemma 2.3. Assume that the conditions listed in Lemma 2.1 hold. Then for any p>0 and T≥0,
\begin{equation} \int_0^1\theta^{1-p}\,\mathrm{d}x+\int_0^T\!\!\int_0^1\left(\frac{\theta^\beta\theta_x^2}{\theta^{p+1}}+\frac{\theta^\alpha(u^2+u_x^2+w_x^2)}{\theta^p}+\frac{\theta^\alpha\tau}{\theta^p}\left(\frac{rv_x}{\tau}-\frac{v}{r}\right)^2\right)\mathrm{d}x\,\mathrm{d}s\leq C. \end{equation} | (2.34) |
Proof. By Lemma 2.1, the result of (2.34) has been established for p=1. In the following steps, we do the estimate for p>0 and p≠1. Multiplying (1.16) by θ−p, integrating over [0,1], and using integration by parts gives
cvp−1ddt∫10θ1−pdx+p∫10˜κr2θβθ2xτθp+1dx+∫10Qθpdx=R∫10θ1−pτ(ru)xdx=R∫10θ1−p−E0τ(ru)xdx+RE0∫10(ru)xτdx. | (2.35) |
Apparently, there exists constant C(p) depending on p such that
|θ1−p−E0|≤C(p)|θ1/2−E1/20|(E1/20+θ12−p). | (2.36) |
By means of (2.35), (2.36), Lemma 2.2, (1.13), and (1.12), we deduce
cvp−1ddt∫10θ1−pdx+p∫10˜κr2θβθ2xτθp+1dx+∫10Qθpdx≤C(p)‖θ1/2−E1/20‖∞∫10(E1/20+θ12−p)(|u|+|ux|)dx+RE0ddt∫10lnτdx≤C(p)‖θ1/2−E1/20‖∞[(∫10u2+u2xθdx)12(∫10θdx)12+(∫10θ1−pdx)12(∫10u2+u2xτθpdx)12]+RE0ddt∫10lnτdx≤C(p)‖θ1/2−E1/20‖2∞+C(p)∫10u2+u2xθdx+δ∫10u2+u2xτθpdx+C(δ,p)‖θ1/2−E1/20‖2∞∫10θ1−pdx+RE0ddt∫10lnτdx. | (2.37) |
Thus, employing the fact that
∫t0‖θ1/2−E1/20‖2∞ds≤C, | (2.38) |
we can conclude from (2.37), Grönwall's inequality, and Lemma 2.2 that (2.34) is correct. In fact,
‖θ1/2−E1/20‖∞≤‖θ1/2−ˉθ1/2‖∞+‖ˉθ1/2−E1/20‖∞. | (2.39) |
By virtue of Lemmas 2.1–2.2 and (2.19), one has
|ˉθζ−Eζ0|=|∫10ddη{[∫10(θ+ηu2+v2+w22cv)dx]ζ}dη|=|ζ∫10[∫10(θ+ηu2+v2+w22cv)dx]ζ−1dη∫10(u,v,w)22dx|≤C‖(u,v,w)‖‖(u,v,w)‖∞≤C∫10|(ux,(vr)x,wx)|dx≤C(∫10[u2xθ+1θ(rvxτ−vr)2+w2xθdx])1/2(∫10θdx)1/2≤C(∫10[u2xθ+1θ(rvxτ−vr)2+w2xθ]dx)1/2, | (2.40) |
where we have used the fact that
\begin{equation*} \left(\frac{v}{r}\right)_x = \frac{\tau}{r^2}\left(\frac{rv_x}{\tau}-\frac{v}{r}\right). \end{equation*}
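This identity can be checked directly from $r_x = r^{-1}\tau$ in (1.11):
\begin{equation*} \left(\frac{v}{r}\right)_x = \frac{v_x}{r}-\frac{v\,r_x}{r^2} = \frac{v_x}{r}-\frac{\tau v}{r^3} = \frac{\tau}{r^2}\left(\frac{rv_x}{\tau}-\frac{v}{r}\right). \end{equation*}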
For β<1, it follows from Lemma 2.1 and (2.19) that
‖θ1/2−ˉθ1/2‖∞≤C∫10θ−12|θx|dx≤C(∫10θβθ2xθ2dx)12(∫10θ1−βdx)12≤C(∫10θβθ2xθ2dx)12. | (2.41) |
For 1≤β<∞,
‖θ12−ˉθ12‖∞≤C‖θβ2−ˉθβ2‖∞≤C∫10θβ2−1|θx|dx≤C(∫10θβθ2xθ2dx)12. | (2.42) |
Hence, by (2.39)–(2.42) and Lemmas 2.1–2.2, we can derive (2.38). The proof of Lemma 2.3 is thus complete.
According to Lemmas 2.1–2.3, we can conclude that the following results have been established.
Corollary 2.1. Assume that the conditions listed in Lemma 2.1 hold. Then for −∞<q<1, 0<p<∞, and T≥0,
\begin{equation} \begin{split} &C^{-1}\leq\tau\leq C, \qquad C^{-1}\leq\int_0^1\tau\,\mathrm{d}x\leq C, \qquad C^{-1}\leq\int_0^1\theta\,\mathrm{d}x\leq C, \qquad \int_0^1\left(|\ln\tau|+|\ln\theta|+\theta^q+u^2+v^2+w^2\right)\mathrm{d}x\leq C_3,\\ &\int_0^T\!\!\int_0^1\left[(u^2+u_x^2+v^2+v_x^2+w_x^2+\tau_t^2)(1+\theta^{-p})+\frac{\theta^\beta\theta_x^2}{\theta^{1+p}}\right]\mathrm{d}x\,\mathrm{d}s\leq C. \end{split} \end{equation} | (2.43) |
Here, we have taken p=α in (2.34) to obtain the time-space estimates of v and vx.
Using the result above, we establish the following estimate about τx.
Lemma 2.4. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1\tau_x^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1\tau_x^2(1+\theta)\,\mathrm{d}x\,\mathrm{d}s\leq C_2. \end{equation*}
Proof. According to the chain rule, one has
(λτxτ)t=(λτtτ)x+λθτ(τxθt−τtθx). | (2.44) |
By means of (1.12), (1.13), and (2.44), we have
(λτxτ)t=utr+Px−v2r2+2uμxr+λθτ(τxθt−τtθx). | (2.45) |
Multiplying (2.45) by λτxτ, integrating over [0,1] about x, and using (1.12) and (2.44), we obtain
ddt∫10[12(λτxτ)2−λuτxrτ]dx+∫10Rλθτ2xτ3dx=∫10(ur)xλτtτdx+∫10Rλτxθxτ2dx+∫10λτx(u2−v2)τr2dx+∫102uμxλτxrτdx+∫10λθτ2(λτx−r−1uτ)(τxθt−τtθx)dx:=5∑i=1Ii. | (2.46) |
By Hölder's inequality, (2.1), (1.12), and Corollary 2.1, one has
I1=∫10(uxr−τur3)λτtτdx≤C‖(u,ux,τt)‖2≤C‖(u,ux)‖2. | (2.47) |
Using Corollary 2.1 and taking p=β, one has
∫T0∫10θ2xθdxds≤C. | (2.48) |
Hence, we argue the term I2 as the following
I2≤δ∫10τ2xθτ3dx+C(δ)∫10θ2xθdx. | (2.49) |
By means of integration by parts, Corollary 2.1, and (2.1), one can derive
I3=−∫10logτ(λr2(u2−v2))xdx≤C‖lnτ‖∞∫10|(|α|θxu2,|α|θxv2,θαu2,θαv2,θαu2x,θαv2x)|dx≤C∫10(u2,v2,u2x,v2x)dx. | (2.50) |
By virtue of (2.1), we derive
I4≤C|α|m−322N(∫10u2dx)1/2(∫10τ2xθτ3dx)1/2≤δ∫10τ2xθτ3dx+C(δ)∫10u2dx. | (2.51) |
By means of (2.1), Corollary 2.1, and (1.16), one can deduce
I5≤C∫10|α||θ−1||(τ2xθt,uτxθt,τxuθx,u2θx,τxuxθx,uuxθx)|dx≤C|α|m−22N∫10τ2xθτ3dx+C|α|m−322N(∫10u2+u2xdx)1/2(∫10τ2xθτ2dx)1/2+C|α|m−12N∫10u2+u2xdx≤ε∫10τ2xθτ3dx+C(ε)∫10u2+u2xdx. | (2.52) |
Inserting (2.47) and (2.49)–(2.52) into (2.46), and choosing ε suitable small, we obtain
\begin{equation} \frac{\mathrm{d}}{\mathrm{d}t}\int_0^1\left[\frac12\left(\frac{\lambda\tau_x}{\tau}\right)^2-\frac{\lambda u\tau_x}{r\tau}\right]\mathrm{d}x+c\int_0^1\theta\tau_x^2\,\mathrm{d}x\leq C\|(u,u_x,\theta_x/\sqrt{\theta},v,v_x)\|^2. \end{equation} | (2.53) |
Integrating (2.53) over [0,t], using Cauchy-Schwarz's inequality, (2.48), and Corollary 2.1, for any t≥0, one has
\begin{equation} \int_0^1\tau_x^2\,\mathrm{d}x+\int_0^t\!\!\int_0^1\tau_x^2\theta\,\mathrm{d}x\,\mathrm{d}s\leq C. \end{equation} | (2.54) |
By virtue of (2.54), we have
ˉθ∫10τ2xdx=∫10τ2x(ˉθ−θ)dx+∫10τ2xθdx≤ˉθ2∫10τ2xdx+12ˉθ‖θ−ˉθ‖2∞∫10τ2xdx+∫10τ2xθdx≤ˉθ2∫10τ2xdx+C‖θ−ˉθ‖2∞+∫10τ2xθdx. | (2.55) |
It follows from (2.19) and (2.48) that
∫T0‖θ−ˉθ‖2∞ds≤C∫T0∫10θ2xθdx∫10θdxdt≤C. | (2.56) |
Thus, it follows from (2.55)–(2.56) that
∫T0∫10τ2xdxdt≤C. | (2.57) |
The proof of Lemma 2.4 has been completed by (2.54) and (2.57).
Next, based on the estimate of $\tau_x$, we derive the first-order derivative estimate of $w$.
Lemma 2.5. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1w_x^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1w_{xx}^2\,\mathrm{d}x\,\mathrm{d}t\leq C_3. \end{equation*}
Proof. Multiplying (1.15) by wxx and integrating over [0,1] about x, we find from (2.1) and Lemma 2.4 that
12ddt‖wx‖2+∫10μr2w2xxτdx=−∫10rwxxwx(μrτ)xdx−∫10μwxxwxdx≤C∫10|wxxwx|(|α|m−12|θx|+1+|τx|)dx≤ε‖wxx‖2+C(ε)‖wx‖2+C(ε)‖τx‖2‖wx‖2∞≤ε‖wxx‖2+C(ε)‖wx‖2. | (2.58) |
Taking ε suitably small in (2.58) finds
\begin{equation} \frac12\frac{\mathrm{d}}{\mathrm{d}t}\|w_x\|^2+c\int_0^1w_{xx}^2\,\mathrm{d}x\leq C\|w_x\|^2. \end{equation} | (2.59) |
The proof of Lemma 2.5 is complete by integrating (2.59) over (0,t) about time and choosing ε suitably small.
Based on the above result, we have the following uniform first-order derivatives estimates on the velocity (u,v).
Lemma 2.6. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1(u_x^2+v_x^2+\tau_t^2)\,\mathrm{d}x+\int_0^T\!\!\int_0^1(u_{xx}^2+v_{xx}^2+\theta_x^2+u_t^2+v_t^2+w_t^2+\tau_{tx}^2)\,\mathrm{d}x\,\mathrm{d}t\leq C_4. \end{equation*}
Proof. Multiplying (1.13) and (1.14) by uxx and vxx, respectively, and integrating over Ω about x, by integration by parts, one has
12ddt∫10(u2x+v2x)dx+∫10r2τ(λu2xx+μv2xx)dx=∫10uxxrPxdx+∫10(vxxuvr−uxxv2r)dx−∫10uxxr[(λ(ru)xτ)x−λruxxτ]dx+2∫10uμxuxxdx−∫10vxx[rvx(μrτ)x+2μvx−(μv)x−μτvr2]dx:=5∑i=1IIi. | (2.60) |
By Cauchy-Schwarz's inequality, one has
II1≤ε‖uxx‖2+C(ε)‖(θx,τx)‖2. | (2.61) |
It follows from Sobolev's inequality, the boundary condition of v, and Corollary 2.1, that we have
II2≤ε‖(uxx,vxx)‖2+C(ε)‖v‖2∞‖(u,v)‖2≤ε‖(uxx,vxx)‖2+C(ε)‖vx‖2. | (2.62) |
Direct computation from (2.1) yields
II3≤ε‖uxx‖2+C(ε)∫10[τ2xu2x+(1+|α|m−22θ2x)|(ux,uτx,u)|2]dx≤ε‖uxx‖2+C(ε)‖(ux,u)‖2+C(ε)‖τx‖2‖(ux,u)‖2∞≤2ε‖uxx‖2+C(ε)‖(ux,u)‖2, | (2.63) |
II4≤ε‖uxx‖2+C(ε)|α|N2m−22‖u‖2≤ε‖uxx‖2+C(ε)‖u‖2, | (2.64) |
and
II5≤ε‖vxx‖2+C(ε)∫10[v2x(1+|α|m−22θ2x+τ2x)+v2]dx≤2ε‖vxx‖2+C(ε)‖(vx,v)‖2. | (2.65) |
Putting (2.61)–(2.65) into (2.60) and taking ε suitably small gives
12ddt∫10(u2x+v2x)dx+c∫10(u2xx+v2xx)dx≤C‖(θx,τx,vx,ux,u,v)‖2. | (2.66) |
Integrating (2.66) over (0,T) about time, and using Lemma 2.4 and Corollary 2.1, we find
∫10(u2x+v2x)dx+∫T0∫10(u2xx+v2xx)dxdt≤C+C∫T0∫10θ2xdxdt. | (2.67) |
For β>1, we take p=β−1 in (2.43), and then
\begin{equation} \int_0^T\!\!\int_0^1\theta_x^2\,\mathrm{d}x\,\mathrm{d}t\leq C. \end{equation} | (2.68) |
Substituting (2.68) into (2.67), it follows for β>1 that
\begin{equation} \int_0^1(u_x^2+v_x^2)\,\mathrm{d}x+\int_0^T\!\!\int_0^1(u_{xx}^2+v_{xx}^2+\theta_x^2)\,\mathrm{d}x\,\mathrm{d}t\leq C. \end{equation} | (2.69) |
Next, we need to estimate the L2(Ω×(0,t))-norm of θx for 0<β≤1. We deduce from multiplying (1.16) by θ1−β2 and integration by parts that
2cv4−βddt∫10θ2−β2dx+2−β2∫10˜κr2θβ2θ2xτdx=−R∫10θ2−β2τ(ru)xdx+∫10θ1−β2Qdx=R∫10ˉθ2−β2−θ2−β2τ(ru)xdx−Rˉθ2−β2∫10(ru)xτdx+∫10θ1−β2Qdx≤C∫10|ˉθ2−β2−θ2−β2||(u,ux)|dx−Rˉθ2−β2ddt∫10lnτdx+∫10θ1−β2Qdx. | (2.70) |
Notice that
∫10|ˉθ2−β2−θ2−β2||(u,ux)|dx≤C‖ˉθ1−β4−θ1−β4‖∞(∫10(1+θ2−β2)dx)1/2(∫10(u2+u2x)dx)1/2≤C(∫10θ−β4|θx|dx)2+C∫10(1+θ2−β2)dx∫10(u2+u2x)dx≤C∫10θ1−β2dx∫10θ2xθdx+C∫10(1+θ2−β2)dx∫10(u2+u2x)dx≤C∫10θ2xθdx+C∫10(1+θ2−β2)dx∫10(u2+u2x)dx, | (2.71) |
and
∫10θ1−β2Qdx≤C(‖ˉθ1−β2−θ1−β2‖∞+1)∫10(u2+u2x+v2+v2x+w2x)dx≤C∫10θ−β2|θx|dx∫10(u2x+v2x+w2x)dx+C∫10(u2x+v2x+w2x)dx≤C∫10(θ−12+θβ4)|θx|dx∫10(u2x+v2x+w2x)dx+C∫10(u2x+v2x+w2x)dx≤ε∫10θβ2θ2xdx+C(ε)(∫10(u2x+v2x+w2x)dx)2+C∫10(θ2xθ+u2x+v2x+w2x)dx. | (2.72) |
We can conclude from (2.70)–(2.72) that
∫10θ2−β2dx+∫T0∫10θβ/2θ2xdxdt≤C+C∫T0(∫10(u2x+v2x+w2x)dx)2ds, |
which combined with Young's inequality and Corollary 2.1 yields
∫T0∫10θ2xdxdt≤C∫T0∫10θβθ2xθ2dxds+C∫T0∫10θβ/2θ2xdxds≤C+C∫T0(∫10(u2x+v2x+w2x)dx)2dt. | (2.73) |
By means of Lemma 2.5, (2.67), and (2.73), we find for 0<β≤1,
\begin{equation} \int_0^1(u_x^2+v_x^2)\,\mathrm{d}x+\int_0^T\!\!\int_0^1(u_{xx}^2+v_{xx}^2+\theta_x^2)\,\mathrm{d}x\,\mathrm{d}t\leq C. \end{equation} | (2.74) |
By virtue of (1.12)–(1.16), (2.1), Corollary 2.1, Lemma 2.4, (2.69), and (2.74), it follows that
\begin{equation} \int_0^1\tau_t^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1(u_t^2+v_t^2+w_t^2+\tau_{tx}^2)\,\mathrm{d}x\,\mathrm{d}s\leq C. \end{equation} | (2.75) |
To obtain the first-order derivative estimate of the temperature, we need to first establish the uniform upper and lower bounds of θ.
Lemma 2.7. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} C_1^{-1}\leq\theta\leq C_1. \end{equation*}
Proof. First of all, multiplying (1.16) by θ, and integrating over [0,1] about x, yields
cv2ddt∫10θ2dx+∫10˜κr2θβθ2xτdx=∫10θQdx−R∫10θ2(ru)xτdx≤C‖(u,ux,v,vx,wx)‖2∞∫10θdx+‖ux‖2∞∫10θ2dx. | (2.76) |
Applying Gronwall's inequality to (2.76), we can obtain
\begin{equation} \int_0^1\theta^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1\theta^\beta\theta_x^2\,\mathrm{d}x\,\mathrm{d}t\leq C. \end{equation} | (2.77) |
Based on the estimate above, we can get the bound of ∫10θβθ2xdx which will be used to obtain the upper bound of θ. Multiplying (1.16) by θβθt and integrating over (0,1) about x, it follows that
cv∫10θβθ2tdx+R∫10θβ+1θt(ru)xτdx−∫10θβθtQdx=∫10(˜κr2θβθxτ)xθβθtdx. | (2.78) |
By integration by parts, one has
∫10(˜κr2θβθxτ)xθβθtdx=−∫10˜κr2θβθxτ(θβθx)tdx=−˜κ2ddt∫10r2τ(θβθx)2dx+˜κ2∫10(2ruτ−ruτ−r3uxτ2)(θβθx)2dx. | (2.79) |
Inserting (2.79) into (2.78), we can deduce that
˜κ2ddt∫10r2τ(θβθx)2dx+cv∫10θβθ2tdx=−R∫10θβ+1θt(ru)xτdx+∫10θβθtQdx+˜κ2∫10(ruτ−r3uxτ2)(θβθx)2dx≤cv2∫10θβθ2tdx+C∫10θβ+2(u2+u2x)dx+C∫10θβ(u4+u4x+v4+v4x+w4x)dx+C‖(u,ux)‖∞∫10(θβθx)2dx≤cv2∫10θβθ2tdx+C‖(u2,u2x,u4,u4x,v4,v4x,w4x)‖∞+C(∫10θβθ2tdx)2. | (2.80) |
By Sobolev's inequality, Corollary 2.1, and Lemmas 2.5–2.6, one can find that
∫T0‖(u2,u2x,u4,u4x,v4,v4x,w4x)‖∞ds≤C. | (2.81) |
By virtue of (2.80), Grönwall's inequality, and (2.81), we can obtain
\begin{equation} \int_0^1(\theta^\beta\theta_x)^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1\theta^\beta\theta_t^2\,\mathrm{d}x\,\mathrm{d}s\leq C. \end{equation} | (2.82) |
Thanks to (2.82), it follows that
\begin{equation} \|\theta^{\beta+1}-\bar\theta^{\beta+1}\|_\infty\leq C\left(\int_0^1(\theta^\beta\theta_x)^2\,\mathrm{d}x\right)^{\frac12}\leq C. \end{equation} | (2.83) |
That is, for t≥0,
\begin{equation} \|\theta\|_\infty\leq C. \end{equation} | (2.84) |
Thanks to (2.77) and (2.84), one has
∫T0∫10(θβ+1−ˉθβ+1)2dxdt≤∫T0∫10θ2βθ2xdxdt≤Csup0≤t≤T‖θ‖β∞∫T0∫10θβθ2xdxdt≤C. | (2.85) |
Combining (2.83) and (2.84), one has
∫T0|ddt∫10(θβ+1−ˉθβ+1)2dx|dt≤C∫T0∫10(θβ+1−ˉθβ+1)2dxdt+C∫T0‖θβθt‖2dt≤Csup0≤t≤T‖θ‖β∞∫T0∫10θβθ2xdxdt≤C. | (2.86) |
So, from (2.83), (2.85), and (2.86), one has
\begin{equation*} \lim\limits_{t\to+\infty}\int_0^1(\theta^{\beta+1}-\bar\theta^{\beta+1})^2\,\mathrm{d}x = 0. \end{equation*}
From (2.83), when t→+∞,
\begin{equation*} \|\theta^{\beta+1}-\bar\theta^{\beta+1}\|_\infty^2\leq C\|\theta^{\beta+1}-\bar\theta^{\beta+1}\|\,\|\theta^\beta\theta_x\|\to0, \end{equation*}
and we can obtain that there exists some time T0≫1 such that when t>T0,
θ(x,t)≥γ12. | (2.87) |
Fixing T0 in (2.87), multiplying (1.16) by θ−p, p>2, and integrating over [0,1] about x yield
cvp−1ddt‖θ−1‖p−1p−1+p∫10˜κr2θβθ2xτθp+1dx+∫10Qθpdx=R∫10θτθp(ru)xdx≤12∫10u2+u2xτθpdx+C‖θ−1‖p−2Lp−1. |
Hence,
\begin{equation*} \frac{\mathrm{d}}{\mathrm{d}t}\|\theta^{-1}\|_{L^{p-1}}\leq C, \end{equation*}
where $C$ is a generic positive constant independent of $p$. Thus, integrating the above inequality over $(0,t)$ and letting $p\to\infty$, we arrive at
\begin{equation*} \theta^{-1}(x,t)\leq C(T_0+1)\ \Longleftrightarrow\ \theta(x,t)\geq[C(T_0+1)]^{-1}, \qquad \forall(x,t)\in[0,1]\times[T_0,+\infty). \end{equation*}
The proof of Lemma 2.7 is complete.
Lemma 2.8. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1\theta_x^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1(\theta_{xx}^2+\theta_t^2)\,\mathrm{d}x\,\mathrm{d}s\leq C_5. \end{equation*}
Proof. Multiplying (1.16) by θxx, integrating over [0,1] on x, and by Hölder's, Poincaré's, and Cauchy-Schwarz's inequalities, Corollary 2.1, Lemma 2.4, and Lemma 2.7, we have
cv2ddt∫10θ2xdx+∫10κr2θ2xxτdx=∫10θxx[Rθτ(ru)x−θx(κr2τ)x−Q]dx≤ε∫10θ2xxdx+C(ε)∫10[θ2(ru)2x−θ2x(κr2τ)2x−Q2]dx≤ε∫10θ2xxdx+C(ε)‖ux‖2‖θ‖2∞+C(ε)‖θx‖2+C(ε)‖θx‖2∞‖τx‖2+C(ε)∫10(u4+v4+u4x+v4x+w4x)dx≤ε‖θxx‖2+C(ε)(‖ux‖2+‖θx‖2+‖θx‖‖θxx‖+‖u‖2‖u‖2+‖v‖2∞‖v‖2)+C(ε)(‖ux‖2∞‖ux‖2+‖vx‖2∞‖vx‖2+‖wx‖2∞‖wx‖2)≤ε‖θxx‖2+C(ε)‖(ux,vx,wx,uxx,vxx,wxx)‖2+C(ε)‖θx‖2. | (2.88) |
Choosing ε suitably small in (2.88) gives
\begin{equation} \frac{c_v}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_0^1\theta_x^2\,\mathrm{d}x+c\int_0^1\theta_{xx}^2\,\mathrm{d}x\leq C\|(u_x,v_x,w_x)\|_{1}^2+C\|\theta_x\|^2. \end{equation} | (2.89) |
Integrating (2.89) and using Lemmas 2.5–2.6, one has
\begin{equation} \|\theta_x(t)\|^2+\int_0^T\|\theta_{xx}\|^2\,\mathrm{d}s\leq C. \end{equation} | (2.90) |
Hence, similar to (2.75), by means of (1.16), Corollary 2.1, Lemmas 2.4–2.7, and (2.90), one can deduce that
\begin{equation*} \int_0^T\!\!\int_0^1\theta_t^2\,\mathrm{d}x\,\mathrm{d}t\leq C. \end{equation*}
Next, we derive the second-order derivatives estimates of (τ,u,v,w,θ).
Lemma 2.9. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1(u_t^2+v_t^2+w_t^2+\theta_t^2+u_{xx}^2+v_{xx}^2+w_{xx}^2+\theta_{xx}^2+\tau_{xt}^2)\,\mathrm{d}x+\int_0^T\!\!\int_0^1(u_{xt}^2+\tau_{tt}^2+v_{xt}^2+w_{xt}^2+\theta_{xt}^2)\,\mathrm{d}x\,\mathrm{d}s\leq C_6. \end{equation*}
Proof. Applying ∂t to (1.13) and multiplying by ut in L2, one has
12ddt∫10u2tdx+∫10˜λr2θαu2xtτdx=∫10ruxt[Pt−(λτ)t(ru)x−λτ((ru)xt−ruxt)]dx−∫10rxut[λ(ru)xτ]tdx+∫10ut[(v2r)t−rtPx+rxPt+rt(λ(ru)xτ)x−2(uμx)t]dx:=3∑i=1IIIi. | (2.91) |
Applying ∂t to (1.14) and multiplying by vt in L2, one has
12ddt∫10v2tdx+∫10˜μr2θαv2xtτdx=∫10{vt[2(μvx)t−rx(μrvxτ)t−(μv)xt]−rvxtvx[μrτ]t}dx+∫10vt[rt(μrvxτ)x−(uvr)t−(μτvr2)t]dx:=5∑i=4IIIi. | (2.92) |
Applying ∂t to (1.15) and multiplying by wt in L2, one has
12ddt∫10w2tdx+∫10˜μr2θαw2xtτdx=∫10wtrt(μrwxτ)xdx−∫10{rwxwxt(μrτ)t+wt[rx(μrwxτ)t−(μwx)t]}dx:=7∑i=6IIIi. | (2.93) |
Adding (2.91)–(2.93) together, we get
\begin{equation} \frac12\frac{\mathrm{d}}{\mathrm{d}t}\int_0^1(u_t^2+v_t^2+w_t^2)\,\mathrm{d}x+\int_0^1\frac{\tilde\lambda r^2\theta^\alpha u_{xt}^2+\tilde\mu r^2\theta^\alpha v_{xt}^2+\tilde\mu r^2\theta^\alpha w_{xt}^2}{\tau}\,\mathrm{d}x = \sum\limits_{i = 1}^7III_i. \end{equation} | (2.94) |
Before the computations of III1 to III7, we need to keep in mind the following facts:
\begin{equation*} \begin{split} &\|(u,v,w)\|_\infty\leq C, \qquad a\leq r\leq b, \qquad r_x = r^{-1}\tau, \qquad r_t = u, \qquad r_{tx} = u_x, \qquad C^{-1}\leq\tau\leq C, \qquad C^{-1}\leq\theta\leq C,\\ &|(ru)_x|\leq C|(u,u_x)|, \qquad |(ru)_{xt}-ru_{xt}|\leq C|(u^2,u_t,uu_x)|, \qquad |(ru)_{xt}|\leq C|(u^2,u_t,uu_x,u_{xt})|. \end{split} \end{equation*}
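Most of these facts are immediate consequences of (1.11) and Lemmas 2.2 and 2.7. For instance, the last two follow from differentiating $(ru)_x = ru_x+\frac{\tau u}{r}$ with respect to $t$ and using (1.12):
\begin{equation*} (ru)_{xt}-ru_{xt} = uu_x+\frac{(ru)_xu+\tau u_t}{r}-\frac{\tau u^2}{r^2}, \end{equation*}
whose right-hand side is bounded by $C|(u^2,u_t,uu_x)|$ once $a\leq r\leq b$, $\tau\leq C$, and $|(ru)_x|\leq C|(u,u_x)|$ are taken into account.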
Then, by Hölder's, Sobolev's, and Cauchy-Schwarz's inequalities, one has
III1≤C∫10|uxt||(θt,τt,θtux,τtux,u,ut,ux)|dx≤ε‖uxt‖2+C(ε)‖(θt,τt,u,ut)‖2+C(ε)‖(θt,τt)‖2∞‖ux‖2≤ε‖uxt‖2+δ‖θxt‖2+C(ε,δ)‖(θt,τt,u,ut,τxt)‖2, | (2.95) |
and
III2≤C∫10|ut||(θt,θtux,u2,ut,ux,uxt,τt,τ2t)|dx≤ε‖uxt‖2+C(ε)‖(ut,θt,u,ux,τt)‖2+ε‖θt‖2∞‖ux‖2+C(ε)‖τt‖2∞‖τt‖2≤ε‖uxt‖2+δ‖θxt‖2+C(ε,δ)‖(ut,θt,u,ux,τt,τxt)‖2. | (2.96) |
By virtue of (1.13), one has
|(λ(ru)xτ)x|≤C|(ut,v2,θx,τx)|. |
Thus, it follows from Hölder's, Sobolev's, and Cauchy-Schwarz's inequalities that
III3≤C∫10|ut||(vt,v2,θx,τx,τt,θt,ut,utθx,θxθt,θxt)|dx≤ε‖θxt‖2+C(ε)‖(vt,ut,v,θx,θt,θt,τx,τt)‖2+ε‖(ut,θt)‖2∞‖θx‖2≤ε‖(uxt,θxt)‖2+C(ε)‖(vt,ut,θt,τt,v,θx,τx)‖2, | (2.97) |
and
III4≤C∫10|vt||(θtvx,vx,vxt,vxτt,θxθt,θxt,θxvt)|dx+C∫10|vxtvx||(θt,v,τt)|dx≤ε2‖(vxt,θxt)‖2+C(ε)‖(vt,vx)‖2+C(ε)‖(θt,τt,vt)‖2∞‖(vx,θx)‖2≤ε‖(vxt,θxt)‖2+C(ε)‖(vt,vx,θt,τt,τtx)‖2. | (2.98) |
It follows from (1.14) that
|(μrvxτ)x|≤C|(vt,v,vx,θx)|. |
Then
III5≤C∫10|vt||(vt,v,θx,vx,ut,θt,τt)|dx≤C‖(vt,v,θx,vx,ut,θt,τt)‖2. | (2.99) |
By virtue of (1.15), we can obtain
III6≤C∫10|wt||u||(μrwxτ)x|dx≤C∫10|wt||(wt,wx)|dx≤C‖(wt,wx)‖2, | (2.100) |
and
III7≤C∫10|wxt||(wxθt,wxτt,wx)|+|wt||(θtwx,τtwx,wx,wxt)|dx≤ε‖wxt‖2+C(ε)‖(wx,wt)‖2+C(ε)‖(θt,τt)‖2∞‖wx‖2≤ε‖(wxt,θxt)‖2+C(ε)‖(wx,wt,θt,τt,τxt)‖2. | (2.101) |
Putting (2.95)–(2.101) into (2.94) gives
12ddt‖(ut,vt,wt)‖2+c‖(uxt,vxt,wxt)‖2≤ε‖(uxt,vxt,wxt,θxt)‖2+C(ε)‖(ut,vt,wt,θt,wx,τx,θx)‖2+C(ε)‖(τt,u,v)‖21. | (2.102) |
Applying ∂t to (1.16) and multiplying by θt in L2, it follows that
cv2ddt∫10θ2tdx+∫10˜κθβr2θ2xtτdx=∫10θt[Qt−(P(ru)x)t]−θxθxt(κr2τ)tdx. | (2.103) |
First of all, by means of the definition of Q, one has
|θtQt|≤C|θt||(u,τt,ut,ux,uxt,uxτt,uxut,u2x,uxuxt)|+C|θt||(θt,θtu2x,τtu2x,θtw2x,w2x,τtw2x,wxwxt,θtv2x,τtv2x)|+C|θt||(v2x,vxt,τtvx,vt,v,vx,vxvxt,vxvt)|≤C(ε)|(θt,u,τt,ut,ux,wx,vx,vt,v,vx)|2+ε|(uxt,wxt,vxt)|2+C(ε)|(τt,ut,ux)|2|(ux,wx,vx)|2+C(ε)|θt|2|(ux,vx,wx)|2. | (2.104) |
Using (2.104) and Sobolev's inequality, we can derive from (2.103) that
cv2ddt∫10θ2tdx+∫10˜κθβr2θ2xtτdx≤C∫10(|θt||(Qt,θt,τt,θtux,τtux,u,ut,ux,uxt)|+|θx||(θxtθt,θxt,θxtτt)|)dx≤C(ε)‖(θt,u,τt,ut,ux,θx,wx,vx,vt,v,vx)‖2+ε‖(uxt,wxt,vxt,θxt)‖2+C(ε)‖(τt,θt,ut,ux)‖2∞‖(ux,wx,vx,θx)‖2+C(ε)‖τt‖2∞‖θt‖2≤C(ε)‖(θt,u,τt,ut,ux,wx,vx,vt,v,τtx,uxx)‖2+ε‖(uxt,wxt,vxt,θxt)‖2+C(ε)‖(ux,vx,wx)‖21‖θt‖2+C(ε)‖τt‖21‖θt‖2. | (2.105) |
Adding (2.102) to (2.105) and choosing ε>0 suitably small, it follows that
12ddt‖(√cvθt,ut,vt,wt)‖2+c‖(uxt,vxt,wxt,θxt)‖2≤C‖(ut,vt,wt,θt,wx,τx,θx)‖2+C‖(ux,τt,v)‖21+C‖(ux,vx,wx)‖21‖θt‖2+C‖τt‖21‖θt‖2. | (2.106) |
By means of (2.106) and Grönwall's inequality, we deduce
\begin{equation} \|(u_t,v_t,w_t,\theta_t)\|^2+\int_0^T\|(u_{xt},v_{xt},w_{xt},\theta_{xt})\|^2\,\mathrm{d}s\leq C. \end{equation} | (2.107) |
According to (1.13), one has
\begin{equation*} \frac{\lambda r^2u_{xx}}{\tau} = u_t-\frac{v^2}{r}+rP_x+2u\mu_x-r\left[\left(\frac{\lambda(ru)_x}{\tau}\right)_x-\frac{\lambda ru_{xx}}{\tau}\right], \end{equation*}
which means that
\begin{equation*} |u_{xx}|\leq C\left|\left(u_t,v,\theta_x,\tau_x,\theta_xu_x,\tau_xu_x,u,u_x\right)\right|. \end{equation*}
Hence, by means of (2.107), we obtain
\begin{equation*} \|u_{xx}\|^2\leq C. \end{equation*}
Similarly, using the equations (1.12)–(1.16), we can also derive
\begin{equation} \|(v_{xx},w_{xx},\theta_{xx},\tau_{tx})\|^2+\int_0^T\|\tau_{tt}\|^2\,\mathrm{d}s\leq C_7. \end{equation} | (2.108) |
Here, we omit the details of (2.108). The proof of Lemma 2.9 is complete.
Lemma 2.10. Assume that the conditions listed in Lemma 2.1 hold. Then for T≥0,
\begin{equation*} \int_0^1\tau_{xx}^2\,\mathrm{d}x+\int_0^T\!\!\int_0^1(\tau_{xx}^2+\tau_{xxt}^2+u_{xxx}^2+v_{xxx}^2+w_{xxx}^2+\theta_{xxx}^2)\,\mathrm{d}x\,\mathrm{d}s\leq C_7. \end{equation*}
Proof. Apply ∂x to (2.45) and multiply by (λτx/τ)x in L2 to get
12ddt∫10(λτxτ)2xdx+∫10Rθλτ(λτxτ)2xdx=∫10(λτxτ)x[λθτ(τxθt−τtθx)]xdx+∫10(λτxτ)x(utr−v2r2+2uμxr)xdx−R∫10(λτxτ)x[2θxτxτ2−θxxτ−2θτ2xτ3−θτxλτ(λτ)x]dx≤C(ε)∫10[|(τx,θx)|2|(τxθt,τtθx)|2+|(τxxθt,τxθxt,τtxθx,τtθxx)|2]dx+C(ε)∫10[|(uxt,ut,vx,v,uxθx,θxx,θ2x,θx)|2+|(θxx,τ2x,θxτx)|2]dx+ε∫10(λτxτ)2xdx:=9∑i=8IIIi+ε∫10(λτxτ)2xdx, | (2.109) |
where the following fact has been used:
\begin{equation*} \begin{split} \left(\frac{\theta}{\tau}\right)_{xx} & = \frac{\theta_{xx}}{\tau}-2\frac{\theta_x\tau_x}{\tau^2}+2\frac{\theta\tau_x^2}{\tau^3}-\frac{\theta\tau_{xx}}{\tau^2} \\& = \frac{\theta_{xx}}{\tau}-2\frac{\theta_x\tau_x}{\tau^2}+2\frac{\theta\tau_x^2}{\tau^3} -\frac{\theta}{\lambda\tau}\left[\left(\frac{\lambda\tau_x}{\tau}\right)_x-\frac{\lambda_x\tau_x}{\tau} +\frac{\lambda\tau_x^2}{\tau^2}\right]. \end{split} \end{equation*} |
By Sobolev's inequality and Lemmas 2.6–2.9, we have
\begin{equation} \begin{split} III_8 \leq& C( \varepsilon)\Big( \|(\tau_x, \theta_x)\|_\infty^4\|(\tau_t, \theta_t)\|^2 +\|\theta_t\|_\infty^2\|\tau_{xx}\|^2 \\& +\|\tau_x\|_\infty^2\|\theta_{xt}\|^2 +\|\tau_t\|_\infty^2\|\theta_{xx}\|^2 +\|\theta_x\|_\infty^2\|\tau_{xt}\|^2 \Big) \\\leq& C( \varepsilon) \Big( \|(\tau_x, \theta_x)\|^4 +\|(\tau_x, \theta_x)\|^2\|(\tau_{xx}, \theta_{xx})\|^2 \\&+\|\theta_t\|_{1}^2\|\tau_{xx}\|^2 +\|\theta_{xt}\|^2\|\tau_x\|^2 +\|\tau_t\|_{1}^2+\|\theta_x\|_{1}^2 \Big) \\\leq& C( \varepsilon)\Big(\|(\tau_x, \theta_x, \tau_t)\|^2+\|\theta_t\|^2\|\tau_{xx}\|^2\Big), \end{split} \end{equation} | (2.110) |
and
\begin{equation} \begin{split} III_9 &\leq C( \varepsilon)\|(u_{xt}, u_t, v_x, v, \theta_{xx}, \theta_x)\|^2 +C( \varepsilon)\|(u_x, \theta_x, \tau_x)\|_\infty^2\|(\theta_x, \tau_x)\|^2 \\&\leq \varepsilon \|\tau_{xx}\|^2+C( \varepsilon)\|(u_{xt}, u_t, v_x, v, \theta_{xx}, \theta_x, \tau_x)\|^2. \end{split} \end{equation} | (2.111) |
Noting that
\begin{equation*} |\tau_{xx}|\leq C\left|\left(\frac{\lambda\tau_x}{\tau}\right)_x\right| +C\Big|\Big(\theta_x\tau_x, \tau_x^2\Big)\Big|, \end{equation*} |
we can derive from Sobolev's inequality and Lemma 2.4 that
\begin{equation*} \begin{split} \|\tau_{xx}\|^2&\leq C\left\|\left(\frac{\lambda\tau_x}{\tau}\right)_x\right\|^2 +C\|\theta_x\|_\infty^2\|\tau_x\|^2+C\|\tau_x\|_4^4 \\&\leq C\left\|\left(\frac{\lambda\tau_x}{\tau}\right)_x\right\|^2 +C\|\theta_x\|_{1}^2+C\|\tau_x\|^4+C\|\tau_x\|^3\|\tau_{xx}\|. \end{split} \end{equation*} |
So, it follows from Cauchy-Schwarz's inequality and Lemma 2.4 that
\begin{equation} \|\tau_{xx}\|^2\leq C\|\theta_{x}\|_{1}^2+C\|\tau_x\|^2+C\left\|\left(\frac{\lambda\tau_x}{\tau}\right)_x\right\|^2. \end{equation} | (2.112) |
Taking \varepsilon suitably small, putting (2.110)–(2.111) into (2.109), and using Lemmas 2.4, 2.8, and 2.9, we find
\begin{equation} \begin{split} &\frac{1}{2}\frac{ \mathrm{d}}{ \mathrm{d} t}\int_0^1\left(\frac{\lambda\tau_x}{\tau}\right)_x^2 \mathrm{d} x +c\int_0^1\left(\frac{\lambda\tau_x}{\tau}\right)_x^2 \mathrm{d} x \\&\quad\leq C\|(\tau_t, \theta_t, \theta_{x}, u_t, v)\|_{1}^2+C\|\tau_x\|^2+C\|\theta_{t}\|_{1}^2\left\|\left(\frac{\lambda\tau_x}{\tau}\right)_x\right\|^2. \end{split} \end{equation} | (2.113) |
By (2.113), Grönwall's inequality, Corollary 2.1, and Lemmas 2.6 and 2.8–2.9, one obtains
\begin{equation} \int_0^1\left(\frac{\lambda\tau_x}{\tau}\right)_x^2 \mathrm{d} x +\int_0^T\int_0^1\left(\frac{\lambda\tau_x}{\tau}\right)_x^2 \mathrm{d} x \mathrm{d} s\leq C. \end{equation} | (2.114) |
It follows from (2.112) and (2.114) that
\begin{equation} \|\tau_{xx}\|^2+\int_0^T\|\tau_{xx}\|^2 \mathrm{d} s\leq C. \end{equation} | (2.115) |
Letting \partial_x act on (1.13) gives
\begin{equation} \begin{split} &\frac{\tilde\lambda\theta^2 r^2u_{xxx}}{\tau}+r_x\left(\frac{\lambda\tau_t}{\tau}\right)_x +r\left[\left(\frac{\lambda}{\tau}\right)_x\tau_t\right]_x+r\left(\frac{\lambda}{\tau}\right)_x(ru)_{xx} \\&\quad = u_{xt}-\left(\frac{v^2}{r}\right)_x+(rP_x)_x+2\Big(u\mu_x\Big)_x-\frac{r\lambda}{\tau}[(ru)_{xxx}-ru_{xxx}]. \end{split} \end{equation} | (2.116) |
It follows from (2.115) and (2.116) that
\begin{equation*} \begin{split} &\int_0^T\|u_{xxx}\|^2 \mathrm{d} s \\&\quad\leq C\int_0^T\|(\theta_x\tau_t, \tau_{xt}, \tau_t\tau_x)\|^2 \mathrm{d} s +C\int_0^T\|(\theta_x^2\tau_t, \theta_{xx}\tau_t, \theta_x\tau_{tx}, \theta_x\tau_t\tau_x)\|^2 \mathrm{d} s \\&\qquad+C\int_0^T\|(\tau_{xx}\tau_t, \tau_x\tau_{tx}, \tau_x^2\tau_t)\|^2 \mathrm{d} s +C\int_0^T\|(\theta_x, \theta_x\tau_x, \theta_xu_x, \theta_xu_{xx})\|^2 \mathrm{d} s \\&\qquad+C\int_0^T\|(u_{xt}, v_x, v, \theta_x, \tau_x, \theta_{xx}, \theta_x\tau_x, \tau_{xx})\|^2 \mathrm{d} s +C\int_0^T\|(u_x\theta_x, \theta_x^2, \theta_{xx})\|^2 \mathrm{d} s \\&\qquad+C\int_0^T\|(u, \tau_x, \tau_{xx}, u_x, \tau_xu_x, u_{xx})\|^2 \mathrm{d} s \\&\quad\leq C\int_0^T\|(\tau_t, \tau_{xx}, \tau_{tx}, \theta_x, u_x, u_{xt}, v_x, v, \theta_{xx}, u, \tau_x, u_{xx})\|^2 \mathrm{d} s \\&\quad \leq C, \end{split} \end{equation*} |
where the following fact has been used:
\begin{equation*} \|(\theta_x, \tau_x, \theta_x^2, \tau_t, \theta_x\tau_t, \tau_x^2)\|_\infty \leq C+C\|(\theta_x, \tau_t, \tau_x)\|_{1}^2 \leq C. \end{equation*} |
Similarly, using (1.14)–(1.15), we also have
\begin{equation*} \int_0^T\|(v_{xxx}, w_{xxx})\|^2 \mathrm{d} s\leq C. \end{equation*} |
Letting \partial_x act on (1.16) gives
\begin{equation} \frac{\tilde{\kappa}\theta^\beta r^2\theta_{xxx}}{\tau} = c_v\theta_{xt}+(P\tau_t)_x-\left(\frac{\kappa r^2}{\tau}\right)_{xx}\theta_x-2\left(\frac{\kappa r^2}{ \tau}\right)_x\theta_{xx}-Q_x. \end{equation} | (2.117) |
It follows from (2.114) and (2.117) that
\begin{equation} \begin{split} &\int_0^T\|\theta_{xxx}\|^2 \mathrm{d} s \\&\quad\leq C\int_0^T\|(\theta_{xt}, \theta_x\tau_t, \tau_x\tau_t, \tau_{tx})\|^2 \mathrm{d} s \\&\qquad +C\int_0^T\|(\theta_x^3, \theta_{xx}\theta_x, \theta_x^2, \theta_x^2\tau_x, \theta_x\tau_x, \theta_x\tau_x^2, \theta_x\tau_{xx})\|^2 \mathrm{d} s \\&\qquad +C\int_0^T\|(\theta_{xx}, \tau_x\theta_{xx})\|^2+\|Q_x\|^2 \mathrm{d} s. \end{split} \end{equation} | (2.118) |
By the definition of Q , one has
\begin{equation} \begin{split} &\int_0^T\|Q_x\|^2 \mathrm{d} s \\&\quad \leq C\int_0^T\|(\theta_x, \theta_xu_x, u, \tau_x, u_x, u_{xx}, u_x\tau_x, u_x^2, u_xu_{xx})\|^2 \mathrm{d} s \\&\qquad +C\int_0^T\|(\theta_xw_x^2, w_x^2, w_xw_{xx}, w_x^2\tau_x)\|^2 \mathrm{d} s \\&\qquad +C\int_0^T\|(\theta_xv_x^2, \tau_xv_x^2, v_x^2, v_xv_{xx}, v_x^2\tau_x, v_x, v_{xx}, v_x\tau_x, v)\|^2 \mathrm{d} s. \end{split} \end{equation} | (2.119) |
Since the following estimates have been obtained:
\begin{equation*} \|(\theta_x, \tau_x, u_x, w_x, v_x)\|_\infty\leq C\|(\theta_x, \tau_x, u_x, w_x, v_x)\|_{1}\leq C, \end{equation*} |
putting (2.119) into (2.118) yields
\begin{equation*} \begin{split} &\int_0^T\|\theta_{xxx}\|^2 \mathrm{d} s \\&\quad \leq C\int_0^T\|(\theta_{xt}, \tau_t, \tau_{xt}, \theta_x, \theta_{xx}, \tau_x, \tau_{xx}, u_x, u, u_{xx}, w_x, w_{xx}, v_x, v_{xx}, v)\|^2 \mathrm{d} s \\&\quad\leq C. \end{split} \end{equation*} |
The proof of Lemma 2.10 is complete.
With all the a priori estimates from Section 2 at hand, we can complete the proof of Theorem 1.1. For this purpose, we first note that the existence and uniqueness of local solutions to the initial-boundary value problem (1.12)–(1.19) can be obtained by using the Banach fixed-point theorem and the contractivity of the operator defined by the linearization of the problem on a small time interval.
Lemma 3.1. Assume that (1.20) holds. Then there exists T_0 = T_0(V_0, V_0, M_0) > 0 , depending only on \beta , V_0 , and M_0 , such that the initial-boundary value problem (1.12)–(1.19) has a unique solution (\tau, u, v, w, \theta)\in X(0, T_0;\frac{1}{2}V_0, \frac{1}{2}V_0, 2M_0) .
Proof of Theorem 1.1: First, according to (1.20), one has
\begin{eqnarray*} &&\tau_0\geq V_0, \theta_0\geq V_0, \qquad \forall x\in \Omega, \\&&\|(\tau_0, u_0, v_0, w_0, \theta_0)\|_{H^2}\leq M_0. \end{eqnarray*} |
Combined with Lemma 3.1, there exists t_1 = T_0(V_0, V_0, M_0) such that (\tau, u, v, w, \theta)\in X(0, t_1;\frac{1}{2}V_0, \frac{1}{2}V_0, 2M_0).
We find the positive constant |\alpha| \leq \alpha_1 , where \alpha_1 satisfies
\begin{equation} \left(\frac{1}{2}V_0\right)^{-|\alpha_1|}\leq 2, \qquad (2M_0)^{|\alpha_1|}\leq 2, \qquad |\alpha_1|H(\frac{1}{2}V_0, \frac{1}{2}V_0, 2M_0)\leq \epsilon_1, \end{equation} | (3.1) |
where \epsilon_1 is chosen in Lemma 2.1. That means that one can choose
\begin{equation} |\alpha_1|: = \min\left\{\frac{\ln 2}{|\ln 2-\ln V_0|}, \frac{\ln 2}{|\ln 2+\ln M_0|}, \epsilon_1 H^{-1}\left(\frac{1}{2}V_0, \frac{1}{2}V_0, 2M_0\right)\right\}. \end{equation} | (3.2) |
One deduces from Lemmas 2.1–2.10 with T = t_1 that for each t\in [0, t_1] , the local solution (\tau, u, v, w, \theta) satisfies
\begin{equation} C_0^{-1}\leq \tau(x, t)\leq C_0, \qquad C_1^{-1}\leq \theta(x, t)\leq C_1, \qquad x\in (0, 1), \end{equation} | (3.3) |
and
\begin{equation} \sup\limits_{0\leq t\leq t_1}\|(\tau, u, v, w, \theta)\|_{2}^2+\int_{0}^{t_1}\|\theta_{t}\|^2 \mathrm{d} t\leq C_8^2, \end{equation} | (3.4) |
where C_i(i = 2, \cdots, 7) is chosen in Section 2 and C_8^2: = \sum_{i = 2}^{7}C_i . It follows from Lemma 2.9 and Lemma 2.10 that (\tau, u, v, w, \theta)\in C([0, T); H^2). If one takes (\tau, u, v, w, \theta)(\cdot, t_1) as the initial data and applies Lemma 3.1 again, the local solution (\tau, u, v, w, \theta) can be extended to the time interval [t_1, t_1+t_2] with t_2(C_0, C_1, C_8) such that (\tau, u, v, w, \theta)\in X(t_1, t_1+t_2; \frac{1}{2}C_0, \frac{1}{2}C_1, \frac{1}{2}C_8). Moreover, for all (x, t)\in [0, 1]\times [0, t_1+t_2] , one gets
\begin{equation*} \frac{1}{2}C_0\leq \tau(x, t), \qquad \frac{1}{2}C_1\leq \theta(x, t), \end{equation*}
and
\begin{equation*} \sup\limits_{t_1\leq t\leq t_1+t_2}\|(\tau, u, v, w, \theta)\|_{2}^2+\int_{t_1}^{t_1+t_2}\|\theta_{t}\|^2 \mathrm{d} t\leq 4C_8^2, \end{equation*} |
which combined with (3.3) and (3.4) implies that for all t\in [0, t_1+t_2],
\begin{equation*} \frac{1}{2}C_0\leq \tau(x, t), \qquad \frac{1}{2}C_1\leq \theta(x, t), \end{equation*}
\begin{equation*} \sup\limits_{0\leq t\leq t_1+t_2}\|(\tau, u, v, w, \theta)\|_{2}^2+\int_{0}^{t_1+t_2}\|\theta_{t}\|^2 \mathrm{d} t\leq 5C_8^2. \end{equation*} |
Take \alpha\leq \min \{\alpha_1, \alpha_2\}, where \alpha_i(i = 1, 2) are positive constants satisfying (3.1) and
\begin{equation*} \left(\frac{1}{2}C_0\right)^{-\alpha_2}\leq 2, \qquad (\sqrt{5}C_8)^{\alpha_2}\leq 2, \qquad \alpha_2H(\frac{1}{2}C_0, \frac{1}{2}C_1, \sqrt{5}C_8)\leq \epsilon_1, \end{equation*} |
where the value of \epsilon_1 is chosen in Lemma 2.1. That means that we can choose
\begin{equation} |\alpha_2|: = \min\left\{\frac{\ln 2}{|\ln 2-\ln C_0|}, \frac{\ln 2}{|\ln \sqrt{5}+\ln C_8|}, \epsilon_1 H^{-1}\left(\frac{1}{2}C_0, \frac{1}{2}C_1, \sqrt{5}C_8\right)\right\}. \end{equation} | (3.5) |
Then one can employ Lemmas 2.1–2.10 with T = t_1+t_2 to infer that the local solution (\tau, u, v, w, \theta) satisfies (3.3) and (3.4).
Choosing
\begin{equation} \epsilon_0 = \min \{\alpha_1, \alpha_2\}, \end{equation} | (3.6) |
and repeating the above procedure, one can extend the solution (\tau, u, v, w, \theta) step-by-step to a global one provided that |\alpha|\leq \epsilon_0 . Furthermore,
\begin{equation*} \|(\tau, u, v, w, \theta)\|_{H^2}^2+\int_{0}^{+\infty}\left[\|(u_x, v_x, w_x, \theta_{x})\|^2+\|\tau_x\|^2\right] \mathrm{d} t\leq C_9^2, \end{equation*}
from which we derive that the solution (\tau, u, v, w, \theta)\in X(0, +\infty; C_0, C_1, C_9).
The large-time behavior (1.21) follows from Lemmas 2.4–2.10 by using a standard argument [21].
First, thanks to (1.15), (2.1), (2.43), (2.55), (2.62), (2.73), Corollary 2.1, and Lemmas 2.4–2.10, taking \hat{\theta} = E_0 , one has
\begin{eqnarray} &&\frac{ \mathrm{d}}{ \mathrm{d} t}\int_0^1\eta_{E_0}(\tau, u, v, w, \theta) \mathrm{d} x +c_1\|(u, v)\|_{1}^2+c_1\|(w_x, \theta_x)\|^2\leq0, \end{eqnarray} | (3.7) |
\begin{eqnarray} && \frac{ \mathrm{d}}{ \mathrm{d} t}\int_0^1\left[\frac{1}{2}\left(\frac{\lambda\tau_x}{\tau}\right)^2-\frac{\lambda u\tau_x}{r \tau}\right] \mathrm{d} x +c_2\|\tau_x\|^2 \leq C_{10}\|(u, u_x, \theta_x, v, v_x)\|^2, \end{eqnarray} | (3.8) |
\begin{eqnarray} && \frac{ \mathrm{d}}{ \mathrm{d} t}\|(u_x, v_x, w_x)\|^2 +c_3\|(u_{xx}, v_{xx}, w_{xx})\|^2 \leq C_{11}\|(\theta_x, \tau_x, v_x, u_x, u, v, w_x)\|^2, \end{eqnarray} | (3.9) |
\begin{eqnarray} && \frac{ \mathrm{d}}{ \mathrm{d} t}\| \theta_x\|^2 +c_4\| \theta_{xx}\|^2 \leq C_{12}\|(u_x, v_x, w_x)\|_{1}^2+C_{12}\| \theta_x\|^2. \end{eqnarray} | (3.10) |
By Cauchy-Schwarz's inequality, one has
\begin{equation} \left|\frac{\lambda u\tau_x}{r \tau}\right|\leq \frac{1}{4}\left(\frac{\lambda\tau_x}{\tau}\right)^2+C\|u\|^2. \end{equation} | (3.11) |
Hence, by means of (3.11), Poincaré's inequalities, Corollary 2.1, and Lemma 2.7, one can deduce
\begin{equation*} c\|\tau_x\|^2-C_{13}\|u\|^2\leq\int_0^1\left[\frac{1}{2}\left(\frac{\lambda\tau_x}{\tau}\right)^2-\frac{\lambda u\tau_x}{r \tau}\right] \mathrm{d} x\leq C\|(\tau_x, u_x)\|^2. \end{equation*} |
Multiplying (3.7)–(3.9) by C_{14} , C_{15} , and C_{16} , respectively, and adding them together with (3.10), one has
\begin{equation} \frac{ \mathrm{d}}{ \mathrm{d} t}\mathcal{A} +c\|(u_x, v_x, w_x, \theta_x)\|_{H^1}^2+c\|\tau_x\|^2\leq 0, \end{equation} | (3.12) |
where we have defined
\begin{equation*} \mathcal{A}: = \int_0^1C_{14}\eta_{E_0}(\tau, u, v, w, \theta)+C_{15}\left[\frac{1}{2}\left(\frac{\lambda\tau_x}{\tau}\right)^2-\frac{\lambda u\tau_x}{r \tau}\right] \mathrm{d} x+C_{16}\|(u_x, v_x, w_x)\|^2+\|\theta_x\|^2, \end{equation*} |
and chosen constants C_{14} > C_{15} > C_{16} > 0 suitably large such that
c_1C_{14}-C_{10}C_{15}-C_{11}C_{16}-C_{12} > 0, |
c_2C_{15}-C_{11}C_{16}-C_{12} > 0, |
c_3C_{16}-C_{12} > 0. |
Taking \frac{C_{14}}{2} > C_{13} and using Poincaré's inequality gives
\begin{equation} c\|(\tau-\bar\tau, u, v, w, \theta-E_0)\|^2\leq\mathcal{A}\leq C\|(u_x, v_x, w_x, \theta_x)\|_1^2+C\|\tau_x\|^2, \end{equation} | (3.13) |
where we have used the facts
\begin{equation*} \|\theta-E_0\|^2\leq C\int_0^1|\theta-\bar\theta|^2 \mathrm{d} x+C\|(u, v, w)\|^2\leq C\|(\theta_x, u_x, v_x, w_x)\|^2. \end{equation*} |
By means of (3.12) and (3.13), we can derive that
\begin{equation} \|(\tau-\bar\tau, u, v, w, \theta-E_0)(t)\|_{H^1( \Omega)}^2\leq C\text{e}^{-ct}. \end{equation} | (3.14) |
By means of \bar r , one has
\begin{equation} r^2-\bar r^2 = 2\int_0^x\tau-\bar\tau \mathrm{d}\xi. \end{equation} | (3.15) |
By means of (3.14) and (3.15), we have
\begin{equation*} \|r-\bar r\|_2^2\leq C\text{e}^{-ct}. \end{equation*} |
The proof is thus complete.
Dandan Song: Writing-original draft, Writing-review & editing, Supervision, Formal Analysis; Xiaokui Zhao: Writing-review & editing, Methodology, Supervision.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors are grateful to the referees for their helpful suggestions and comments on the manuscript. This work was supported by the NNSFC (Grant No. 12101200), the Doctoral Scientific Research Foundation of Henan Polytechnic University (No. B2021-53) and the China Postdoctoral Science Foundation (Grant No. 2022M721035).
The authors declare there is no conflict of interest.
[1] |
F. Cicirelli, A. Forestiero, A Giordano, C. Mastroianni, Transparent and efficient parallelization of swarm algorithms, ACM Trans. Auton. Adapt. Syst., 11 (2016), 1-26. https://doi.org/10.1145/2897373 doi: 10.1145/2897373
![]() |
[2] |
A. M. Lal, S. M. Anouncia, Modernizing the multi-temporal multispectral remotely sensed image change detection for global maxima through binary particle swarm optimization, J. King Saud Univ., Comput. Inf. Sci., 34 (2022), 95-103. https://doi.org/10.1016/j.jksuci.2018.10.010 doi: 10.1016/j.jksuci.2018.10.010
![]() |
[3] |
A. Faramarzi, M. Heidarinejad, S. Mirjalili, A. H. Gandomi, Marine predators algorithm: A nature-inspired metaheuristic, Expert Syst. Appl., 152 (2020), 113377. https://doi.org/10.1016/j.eswa.2020.113377 doi: 10.1016/j.eswa.2020.113377
![]() |
[4] |
D. Sarkar, S. Choudhury, A. Majumder, Enhanced-Ant-AODV for optimal route selection in mobile ad-hoc network, J. King Saud Univ., Comput. Inf. Sci., 33 (2021), 1186-1201. https://doi.org/10.1016/j.jksuci.2018.08.013 doi: 10.1016/j.jksuci.2018.08.013
![]() |
[5] |
G. G. Wang, S. Deb, L. D. S. Coelho, Earthworm optimisation algorithm: a bio-inspired metaheuristic algorithm for global optimisation problems, Int. J. Bio-Inspired Comput., 12 (2018), 1-22. https://doi.org/10.1504/IJBIC.2018.093328 doi: 10.1504/IJBIC.2018.093328
![]() |
[6] |
D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim., 39 (2007), 459-471. https://doi.org/10.1007/s10898-007-9149-x doi: 10.1007/s10898-007-9149-x
![]() |
[7] |
Z. Wang, H. Ding, B. Li, L. Bao, Z. Yang, An energy efficient routing protocol based on improved artificial bee colony algorithm for wireless sensor networks, IEEE Access, 8 (2020), 133577-133596. https://doi.org/10.1109/ACCESS.2020.3010313 doi: 10.1109/ACCESS.2020.3010313
![]() |
[8] |
S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowl.-Based Syst., 89 (2015), 228-249. https://doi.org/10.1016/j.knosys.2015.07.006 doi: 10.1016/j.knosys.2015.07.006
![]() |
[9] | X. S. Yang, Firefly algorithm, in Engineering Optimization: An Introduction with Metaheuristic Applications, (2010), 221-230. https://doi.org/10.1002/9780470640425.ch17 |
[10] |
Z. Wang, H. Ding, B. Li, L. Bao, Z. Yang, Q. Liu, Energy efficient cluster based routing protocol for WSN using firefly algorithm and ant colony optimization, Wireless Pers. Commun., 2022. https://doi.org/10.1007/s11277-022-09651-9 doi: 10.1007/s11277-022-09651-9
![]() |
[11] |
S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51-67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
![]() |
[12] |
X. S. Yang, S. Deb, Cuckoo search: recent advances and applications, Neural Comput. Appl., 24 (2014), 169-174. https://doi.org/10.1007/s00521-013-1367-1 doi: 10.1007/s00521-013-1367-1
![]() |
[13] |
S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46-61. https://doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
![]() |
[14] |
K. P. B. Resma, M. S. Nair, Multilevel thresholding for image segmentation using Krill Herd Optimization algorithm, J. King Saud Univ., Comput. Inf. Sci., 33 (2021), 528-541. https://doi.org/10.1016/j.jksuci.2018.04.007 doi: 10.1016/j.jksuci.2018.04.007
![]() |
[15] |
A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, H. Chen, Harris hawks optimization: Algorithm and applications, Future Gener. Comput. Syst., 97 (2019), 849-872. https://doi.org/10.1016/j.future.2019.02.028 doi: 10.1016/j.future.2019.02.028
![]() |
[16] |
G. G. Wang, S. Deb, Z. Cui, Monarch butterfly optimization, Neural Comput. Appl., 31 (2019), 1995-2014. https://doi.org/10.1007/s00521-015-1923-y doi: 10.1007/s00521-015-1923-y
![]() |
[17] |
S. Saremi, S. Mirjalili, A. Lewis, Grasshopper optimisation algorithm: theory and application, Adv. Eng. Software, 105 (2017), 30-47. https://doi.org/10.1016/j.advengsoft.2017.01.004 doi: 10.1016/j.advengsoft.2017.01.004
![]() |
[18] |
W. Li, G. G.Wang, A. H. Alavi, Learning-based elephant herding optimization algorithm for solving numerical optimization problems, Knowl.-Based Syst., 195 (2020), 105675. https://doi.org/10.1016/j.knosys.2020.105675 doi: 10.1016/j.knosys.2020.105675
![]() |
[19] S. Li, H. Chen, M. Wang, A. A. Heidari, S. Mirjalili, Slime mould algorithm: A new method for stochastic optimization, Future Gener. Comput. Syst., 111 (2020), 300-323. https://doi.org/10.1016/j.future.2020.03.055
[20] Y. Yang, H. Chen, A. A. Heidari, A. H. Gandomi, Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts, Expert Syst. Appl., 177 (2021), 114864. https://doi.org/10.1016/j.eswa.2021.114864
[21] I. Ahmadianfar, A. A. Heidari, A. H. Gandomi, X. Chu, H. Chen, RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method, Expert Syst. Appl., 181 (2021), 115079. https://doi.org/10.1016/j.eswa.2021.115079
[22] J. Tu, H. Chen, M. Wang, A. H. Gandomi, The colony predation algorithm, J. Bionic Eng., 18 (2021), 674-710. https://doi.org/10.1007/s42235-021-0050-y
[23] I. Ahmadianfar, A. A. Heidari, S. Noshadian, H. Chen, A. H. Gandomi, INFO: An efficient optimization algorithm based on weighted mean of vectors, Expert Syst. Appl., 195 (2022), 116516. https://doi.org/10.1016/j.eswa.2022.116516
[24] S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Software, 114 (2017), 163-191. https://doi.org/10.1016/j.advengsoft.2017.07.002
[25] Z. Wang, H. Ding, Z. Yang, B. Li, Z. Guan, L. Bao, Rank-driven salp swarm algorithm with orthogonal opposition-based learning for global optimization, Appl. Intell., 52 (2022), 7922-7964. https://doi.org/10.1007/s10489-021-02776-7
[26] A. A. Ewees, M. A. Al-qaness, M. Abd Elaziz, Enhanced salp swarm algorithm based on firefly algorithm for unrelated parallel machine scheduling with setup times, Appl. Math. Modell., 94 (2021), 285-305. https://doi.org/10.1016/j.apm.2021.01.017
[27] Q. Tu, Y. Liu, F. Han, X. Liu, Y. Xie, Range-free localization using reliable anchor pair selection and quantum-behaved salp swarm algorithm for anisotropic wireless sensor networks, Ad Hoc Networks, 113 (2021), 102406. https://doi.org/10.1016/j.adhoc.2020.102406
[28] M. Tubishat, N. Idris, L. Shuib, M. A. Abushariah, S. Mirjalili, Improved salp swarm algorithm based on opposition based learning and novel local search algorithm for feature selection, Expert Syst. Appl., 145 (2020), 113122. https://doi.org/10.1016/j.eswa.2019.113122
[29] B. Nautiyal, R. Prakash, V. Vimal, G. Liang, H. Chen, Improved salp swarm algorithm with mutation schemes for solving global optimization and engineering problems, Eng. Comput., 2021. https://doi.org/10.1007/s00366-020-01252-z
[30] M. M. Saafan, E. M. El-Gendy, IWOSSA: An improved whale optimization salp swarm algorithm for solving optimization problems, Expert Syst. Appl., 176 (2021), 114901. https://doi.org/10.1016/j.eswa.2021.114901
[31] D. Bairathi, D. Gopalani, An improved salp swarm algorithm for complex multi-modal problems, Soft Comput., 25 (2021), 10441-10465. https://doi.org/10.1007/s00500-021-05757-7
[32] E. Çelik, N. Öztürk, Y. Arya, Advancement of the search process of salp swarm algorithm for global optimization problems, Expert Syst. Appl., 182 (2021), 115292. https://doi.org/10.1016/j.eswa.2021.115292
[33] Q. Zhang, Z. Wang, A. A. Heidari, W. Gui, Q. Shao, H. Chen, et al., Gaussian barebone salp swarm algorithm with stochastic fractal search for medical image segmentation: a COVID-19 case study, Comput. Biol. Med., 139 (2021), 104941. https://doi.org/10.1016/j.compbiomed.2021.104941
[34] H. Zhang, T. Liu, X. Ye, A. A. Heidari, G. Liang, H. Chen, et al., Differential evolution-assisted salp swarm algorithm with chaotic structure for real-world problems, Eng. Comput., 2022. https://doi.org/10.1007/s00366-021-01545-x
[35] S. Zhao, P. Wang, X. Zhao, H. Turabieh, M. Mafarja, H. Chen, Elite dominance scheme ingrained adaptive salp swarm algorithm: a comprehensive study, Eng. Comput., 2021. https://doi.org/10.1007/s00366-021-01464-x
[36] J. Song, C. Chen, A. A. Heidari, J. Liu, H. Yu, H. Chen, Performance optimization of annealing salp swarm algorithm: frameworks and applications for engineering design, J. Comput. Des. Eng., 9 (2022), 633-669. https://doi.org/10.1093/jcde/qwac021
[37] S. Zhao, P. Wang, A. A. Heidari, H. Chen, W. He, S. Xu, Performance optimization of salp swarm algorithm for multi-threshold image segmentation: Comprehensive study of breast cancer microscopy, Comput. Biol. Med., 139 (2021), 105015. https://doi.org/10.1016/j.compbiomed.2021.105015
[38] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput., 1 (1997), 67-82. https://doi.org/10.1109/4235.585893
[39] J. J. Jena, S. C. Satapathy, A new adaptive tuned Social Group Optimization (SGO) algorithm with sigmoid-adaptive inertia weight for solving engineering design problems, Multimedia Tools Appl., 2021. https://doi.org/10.1007/s11042-021-11266-4
[40] A. Naik, S. C. Satapathy, A. S. Ashour, N. Dey, Social group optimization for global optimization of multimodal functions and data clustering problems, Neural Comput. Appl., 30 (2018), 271-287. https://doi.org/10.1007/s00521-016-2686-9
[41] P. Sun, H. Liu, Y. Zhang, L. Tu, Q. Meng, An intensify atom search optimization for engineering design problems, Appl. Math. Modell., 89 (2021), 837-859. https://doi.org/10.1016/j.apm.2020.07.052
[42] W. Zhao, L. Wang, Z. Zhang, Atom search optimization and its application to solve a hydrogeologic parameter estimation problem, Knowl.-Based Syst., 163 (2019), 283-304. https://doi.org/10.1016/j.knosys.2018.08.030
[43] L. Ma, C. Wang, N. Xie, M. Shi, Y. Ye, L. Wang, Moth-flame optimization algorithm based on diversity and mutation strategy, Appl. Intell., 51 (2021), 5836-5872. https://doi.org/10.1007/s10489-020-02081-9
[44] Y. Li, Y. Zhao, J. Liu, Dynamic sine cosine algorithm for large-scale global optimization problems, Expert Syst. Appl., 177 (2021), 114950. https://doi.org/10.1016/j.eswa.2021.114950
[45] S. Mirjalili, SCA: A sine cosine algorithm for solving optimization problems, Knowl.-Based Syst., 96 (2016), 120-133. https://doi.org/10.1016/j.knosys.2015.12.022
[46] W. Long, J. Jiao, X. Liang, T. Wu, M. Xu, S. Cai, Pinhole-imaging-based learning butterfly optimization algorithm for global optimization and feature selection, Appl. Soft Comput., 103 (2021), 107146. https://doi.org/10.1016/j.asoc.2021.107146
[47] R. Salgotra, U. Singh, S. Singh, G. Singh, N. Mittal, Self-adaptive salp swarm algorithm for engineering optimization problems, Appl. Math. Modell., 89 (2021), 188-207. https://doi.org/10.1016/j.apm.2020.08.014
[48] H. Ren, J. Li, H. Chen, C. Li, Stability of salp swarm algorithm with random replacement and double adaptive weighting, Appl. Math. Modell., 95 (2021), 503-523. https://doi.org/10.1016/j.apm.2021.02.002
[49] G. I. Sayed, G. Khoriba, M. H. Haggag, A novel chaotic salp swarm algorithm for global optimization and feature selection, Appl. Intell., 48 (2018), 3462-3481. https://doi.org/10.1007/s10489-018-1158-6
[50] M. H. Qais, H. M. Hasanien, S. Alghuwainem, Enhanced salp swarm algorithm: application to variable speed wind generators, Eng. Appl. Artif. Intell., 80 (2019), 82-96. https://doi.org/10.1016/j.engappai.2019.01.011
[51] M. Braik, A. Sheta, H. Turabieh, H. Alhiary, A novel lifetime scheme for enhancing the convergence performance of salp swarm algorithm, Soft Comput., 25 (2021), 181-206. https://doi.org/10.1007/s00500-020-05130-0
[52] A. G. Hussien, An enhanced opposition-based salp swarm algorithm for global optimization and engineering problems, J. Ambient Intell. Humaniz. Comput., 13 (2022), 129-150. https://doi.org/10.1007/s12652-021-02892-9
[53] F. A. Ozbay, B. Alatas, Adaptive salp swarm optimization algorithms with inertia weights for novel fake news detection model in online social media, Multimedia Tools Appl., 80 (2021), 34333-34357. https://doi.org/10.1007/s11042-021-11006-8
[54] S. Kaur, L. K. Awasthi, A. L. Sangal, G. Dhiman, Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization, Eng. Appl. Artif. Intell., 90 (2020), 103541. https://doi.org/10.1016/j.engappai.2020.103541
[55] S. Dhargupta, M. Ghosh, S. Mirjalili, R. Sarkar, Selective opposition based grey wolf optimization, Expert Syst. Appl., 151 (2020), 113389. https://doi.org/10.1016/j.eswa.2020.113389
[56] A. Faramarzi, M. Heidarinejad, B. Stephens, S. Mirjalili, Equilibrium optimizer: A novel optimization algorithm, Knowl.-Based Syst., 191 (2020), 105190. https://doi.org/10.1016/j.knosys.2019.105190
[57] L. Abualigah, A. Diabat, S. Mirjalili, M. A. Elaziz, A. H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609
[58] F. A. Hashim, K. Hussain, E. H. Houssein, M. S. Mabrouk, W. Al-Atabany, Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems, Appl. Intell., 51 (2021), 1531-1551. https://doi.org/10.1007/s10489-020-01893-z
[59] M. H. Nadimi-Shahraki, S. Taghian, S. Mirjalili, An improved grey wolf optimizer for solving engineering problems, Expert Syst. Appl., 166 (2021), 113917. https://doi.org/10.1016/j.eswa.2020.113917
[60] W. Shan, Z. Qiao, A. A. Heidari, H. Chen, H. Turabieh, Y. Teng, Double adaptive weights for stabilization of moth flame optimizer: balance analysis, engineering cases, and medical diagnosis, Knowl.-Based Syst., 214 (2021), 106728. https://doi.org/10.1016/j.knosys.2020.106728
[61] X. Yu, W. Xu, C. Li, Opposition-based learning grey wolf optimizer for global optimization, Knowl.-Based Syst., 226 (2021), 107139. https://doi.org/10.1016/j.knosys.2021.107139
[62] B. S. Yildiz, N. Pholdee, S. Bureerat, A. R. Yildiz, S. M. Sait, Enhanced grasshopper optimization algorithm using elite opposition-based learning for solving real-world engineering problems, Eng. Comput., 2021. https://doi.org/10.1007/s00366-021-01368-w
[63] I. Ahmadianfar, O. Bozorg-Haddad, X. Chu, Gradient-based optimizer: A new metaheuristic optimization algorithm, Inf. Sci., 540 (2020), 131-159. https://doi.org/10.1016/j.ins.2020.06.037
[64] P. R. Singh, M. A. Elaziz, S. Xiong, Modified spider monkey optimization based on Nelder-Mead method for global optimization, Expert Syst. Appl., 110 (2018), 264-289. https://doi.org/10.1016/j.eswa.2018.05.040
[65] L. Gu, R. J. Yang, C. H. Tho, M. Makowski, O. Faruque, Y. Li, Optimisation and robustness for crashworthiness of side impact, Int. J. Veh. Des., 26 (2001), 348-360. https://doi.org/10.1504/IJVD.2001.005210
[66] C. A. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Comput. Ind., 41 (2000), 113-127. https://doi.org/10.1016/S0166-3615(99)00046-9
[67] H. Eskandar, A. Sadollah, A. Bahreininejad, M. Hamdi, Water cycle algorithm - A novel metaheuristic optimization method for solving constrained engineering optimization problems, Comput. Struct., 110-111 (2012), 151-166. https://doi.org/10.1016/j.compstruc.2012.07.010
[68] A. H. Gandomi, X. S. Yang, A. H. Alavi, Mixed variable structural optimization using firefly algorithm, Comput. Struct., 89 (2011), 2325-2336. https://doi.org/10.1016/j.compstruc.2011.08.002
[69] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: A gravitational search algorithm, Inf. Sci., 179 (2009), 2232-2248. https://doi.org/10.1016/j.ins.2009.03.004
[70] X. S. Yang, A new metaheuristic bat-inspired algorithm, in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), (2010), 65-74. https://doi.org/10.1007/978-3-642-12538-6_6
[71] M. Azizi, Atomic orbital search: A novel metaheuristic algorithm, Appl. Math. Modell., 93 (2021), 657-683. https://doi.org/10.1016/j.apm.2020.12.021
[72] S. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-verse optimizer: A nature-inspired algorithm for global optimization, Neural Comput. Appl., 27 (2016), 495-513. https://doi.org/10.1007/s00521-015-1870-7
[73] M. Y. Cheng, D. Prayogo, Symbiotic organisms search: a new metaheuristic optimization algorithm, Comput. Struct., 139 (2014), 98-112. https://doi.org/10.1016/j.compstruc.2014.03.007
[74] S. Mirjalili, The ant lion optimizer, Adv. Eng. Software, 83 (2015), 80-98. https://doi.org/10.1016/j.advengsoft.2015.01.010
[75] H. Chickermane, H. Gea, Structural optimization using a new local approximation method, Int. J. Numer. Methods Eng., 39 (1996), 829-846. https://doi.org/10.1002/(SICI)1097-0207(19960315)39:5<829::AID-NME884>3.0.CO;2-U
[76] Z. Wang, Q. Luo, Y. Zhou, Hybrid metaheuristic algorithm using butterfly and flower pollination base on mutualism mechanism for global optimization problems, Eng. Comput., 37 (2021), 3665-3698. https://doi.org/10.1007/s00366-020-01025-8
[77] Y. J. Zheng, Water wave optimization: A new nature-inspired metaheuristic, Comput. Oper. Res., 55 (2015), 1-11. https://doi.org/10.1016/j.cor.2014.10.008
[78] P. Savsani, V. Savsani, Passing vehicle search (PVS): A novel metaheuristic algorithm, Appl. Math. Modell., 40 (2016), 3951-3978. https://doi.org/10.1016/j.apm.2015.10.040
[79] F. Gul, I. Mir, L. Abualigah, P. Sumari, A. Forestiero, A consolidated review of path planning and optimization techniques: technical perspectives and future directions, Electronics, 10 (2021), 2250. https://doi.org/10.3390/electronics10182250
[80] Z. Wang, H. Ding, J. Yang, J. Wang, B. Li, Z. Yang, P. Hou, Advanced orthogonal opposition-based learning-driven dynamic salp swarm algorithm: framework and case studies, IET Control Theory Appl., 2022. https://doi.org/10.1049/cth2.12277
[81] D. Agarwal, P. S. Bharti, Implementing modified swarm intelligence algorithm based on slime moulds for path planning and obstacle avoidance problem in mobile robots, Appl. Soft Comput., 107 (2021), 107372. https://doi.org/10.1016/j.asoc.2021.107372