
The paper surveys, classifies, and investigates, theoretically and numerically, the main classes of line search methods for unconstrained optimization. Quasi-Newton (QN) and conjugate gradient (CG) methods are considered as representative classes of effective numerical methods for solving large-scale unconstrained optimization problems. In this paper, we investigate, classify, and compare the main QN and CG methods to present a global overview of scientific advances in this field, and we survey some of its most recent trends. A number of numerical experiments are performed with the aim of giving an experimental answer to how different QN and CG methods compare numerically with one another.
Citation: Predrag S. Stanimirović, Branislav Ivanov, Haifeng Ma, Dijana Mosić. A survey of gradient methods for solving nonlinear optimization[J]. Electronic Research Archive, 2020, 28(4): 1573-1624. doi: 10.3934/era.2020115
In this paper, we consider the following initial-boundary value problem
$$\begin{cases}u_t-\Delta u_t-\Delta u=|x|^{\sigma}|u|^{p-1}u, & x\in\Omega,\ t>0,\\ u(x,t)=0, & x\in\partial\Omega,\ t>0,\\ u(x,0)=u_0(x), & x\in\Omega\end{cases}\tag{1}$$
and its corresponding steady-state problem
$$\begin{cases}-\Delta u=|x|^{\sigma}|u|^{p-1}u, & x\in\Omega,\\ u=0, & x\in\partial\Omega,\end{cases}\tag{2}$$
where
$$1<p<\begin{cases}\infty, & n=1,2;\\ \dfrac{n+2}{n-2}, & n\geq 3,\end{cases}\qquad \sigma>\begin{cases}-n, & n=1,2;\\ \dfrac{(p+1)(n-2)}{2}-n, & n\geq 3.\end{cases}\tag{3}$$
Equation (1) is called a homogeneous (resp. inhomogeneous) pseudo-parabolic equation when
The homogeneous problem, i.e.
Li and Du [12] studied the Cauchy problem for the equation in (1) with
(1) If
(2) If
$$\Phi_{\alpha}:=\Big\{\xi(x)\in BC(\mathbb{R}^n):\xi(x)\geq 0,\ \liminf_{|x|\uparrow\infty}|x|^{\alpha}\xi(x)>0\Big\},$$
and
$$\Phi^{\alpha}:=\Big\{\xi(x)\in BC(\mathbb{R}^n):\xi(x)\geq 0,\ \limsup_{|x|\uparrow\infty}|x|^{\alpha}\xi(x)<\infty\Big\}.$$
Here
In view of the above introduction, we find that
(1) for Cauchy problem in
(2) for zero Dirichlet problem in a bounded domain
The difficulty of allowing
$$\sigma>\underbrace{\frac{(p+1)(n-2)}{2}-n}_{<0\ \text{if}\ n\geq 3}$$
for
The main results of this paper can be summarized as follows: Let
(1) (the case
(2) (the case
(3) (arbitrary initial energy level) For any
(4) Moreover, under suitable assumptions, we show the exponential decay of global solutions and estimate the lifespan (i.e., the upper bound of the blow-up time) of the blowing-up solutions.
The organization of the remaining part of this paper is as follows. In Section 2, we introduce the notation used in this paper and state the main results; in Section 3, we give some preliminaries which will be used in the proofs; in Section 4, we give the proofs of the main results.
Throughout this paper we denote the norm of
$$\|\phi\|_{L^{\gamma}}=\begin{cases}\Big(\displaystyle\int_{\Omega}|\phi(x)|^{\gamma}\,dx\Big)^{\frac{1}{\gamma}}, & \text{if } 1\leq\gamma<\infty;\\ \operatorname*{ess\,sup}_{x\in\Omega}|\phi(x)|, & \text{if } \gamma=\infty.\end{cases}$$
We denote the
$$L^{p+1}_{\sigma}(\Omega):=\Big\{\phi:\phi \text{ is measurable on } \Omega \text{ and } \|\phi\|_{L^{p+1}_{\sigma}}<\infty\Big\},\tag{4}$$
where
$$\|\phi\|_{L^{p+1}_{\sigma}}:=\Big(\int_{\Omega}|x|^{\sigma}|\phi(x)|^{p+1}\,dx\Big)^{\frac{1}{p+1}},\quad \phi\in L^{p+1}_{\sigma}(\Omega).\tag{5}$$
By standard arguments as the space
We denote the inner product of
$$(\phi,\varphi)_{H_0^1}:=\int_{\Omega}\big(\nabla\phi(x)\cdot\nabla\varphi(x)+\phi(x)\varphi(x)\big)\,dx,\quad \phi,\varphi\in H_0^1(\Omega).\tag{6}$$
The norm of
$$\|\phi\|_{H_0^1}:=\sqrt{(\phi,\phi)_{H_0^1}}=\sqrt{\|\nabla\phi\|_{L^2}^2+\|\phi\|_{L^2}^2},\quad \phi\in H_0^1(\Omega).\tag{7}$$
An equivalent norm of
$$\|\nabla\phi\|_{L^2}\leq\|\phi\|_{H_0^1}\leq\sqrt{\frac{\lambda_1+1}{\lambda_1}}\,\|\nabla\phi\|_{L^2},\quad \phi\in H_0^1(\Omega),\tag{8}$$
where
$$\lambda_1=\inf_{\phi\in H_0^1(\Omega)}\frac{\|\nabla\phi\|_{L^2}^2}{\|\phi\|_{L^2}^2}.\tag{9}$$
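As a quick numerical illustration of (9) (a sketch with an assumed domain, not part of the paper): on $\Omega=(0,1)$ the infimum is the first Dirichlet eigenvalue $\pi^2\approx 9.87$, and any admissible trial function vanishing on the boundary gives an upper bound for it. The trial function $\phi(x)=x(1-x)$ below is a hypothetical choice whose Rayleigh quotient is exactly $10$.

```python
import math

# Midpoint-rule Rayleigh quotient ||phi'||^2_{L2} / ||phi||^2_{L2} on Omega = (0, 1),
# where the first Dirichlet eigenvalue is known to be pi^2 ~ 9.8696.
def rayleigh_quotient(phi, dphi, n=20000):
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    num = sum(dphi(x) ** 2 for x in xs) * h   # ||phi'||_{L2}^2
    den = sum(phi(x) ** 2 for x in xs) * h    # ||phi||_{L2}^2
    return num / den

# Hypothetical trial function phi(x) = x(1 - x), which vanishes on the boundary.
q = rayleigh_quotient(lambda x: x * (1 - x), lambda x: 1 - 2 * x)

# Any admissible phi bounds lambda_1 = inf of the quotient from above.
assert q >= math.pi ** 2
```

For this particular $\phi$, $\|\phi'\|_{L^2}^2=1/3$ and $\|\phi\|_{L^2}^2=1/30$, so the quotient is $10$, slightly above $\pi^2$.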
Moreover, by Theorem 3.2, we have
$$\text{for } p \text{ and } \sigma \text{ satisfying (3)},\quad H_0^1(\Omega)\hookrightarrow L^{p+1}_{\sigma}(\Omega) \text{ continuously and compactly.}\tag{10}$$
Then we let
$$C_{p\sigma}=\sup_{\phi\in H_0^1(\Omega)\setminus\{0\}}\frac{\|\phi\|_{L^{p+1}_{\sigma}}}{\|\nabla\phi\|_{L^2}}.\tag{11}$$
We define two functionals
$$J(\phi):=\frac12\|\nabla\phi\|_{L^2}^2-\frac{1}{p+1}\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}\tag{12}$$
and
$$I(\phi):=\|\nabla\phi\|_{L^2}^2-\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}.\tag{13}$$
By (3) and (10), we know that
We denote the mountain-pass level
$$d:=\inf_{\phi\in\mathcal{N}}J(\phi),\tag{14}$$
where
$$\mathcal{N}:=\big\{\phi\in H_0^1(\Omega)\setminus\{0\}:I(\phi)=0\big\}.\tag{15}$$
By Theorem 3.3, we have
$$d=\frac{p-1}{2(p+1)}\,C_{p\sigma}^{-\frac{2(p+1)}{p-1}},\tag{16}$$
where
For
$$J^{\rho}=\big\{\phi\in H_0^1(\Omega):J(\phi)<\rho\big\}.\tag{17}$$
Then, we define the set
$$\mathcal{N}_{\rho}=\Big\{\phi\in\mathcal{N}:\|\nabla\phi\|_{L^2}^2<\frac{2(p+1)\rho}{p-1}\Big\},\quad \rho>d.\tag{18}$$
For
$$\lambda_{\rho}:=\inf_{\phi\in\mathcal{N}_{\rho}}\|\phi\|_{H_0^1},\qquad \Lambda_{\rho}:=\sup_{\phi\in\mathcal{N}_{\rho}}\|\phi\|_{H_0^1},\tag{19}$$
and two sets
$$S_{\rho}:=\big\{\phi\in H_0^1(\Omega):\|\phi\|_{H_0^1}\leq\lambda_{\rho},\ I(\phi)>0\big\},\qquad S^{\rho}:=\big\{\phi\in H_0^1(\Omega):\|\phi\|_{H_0^1}\geq\Lambda_{\rho},\ I(\phi)<0\big\}.\tag{20}$$
Remark 1. There are two remarks on the above definitions.
(1) By the definitions of
(2) By Theorem 3.4, we have
$$\sqrt{\frac{2(p+1)d}{p-1}}\leq\lambda_{\rho}\leq\Lambda_{\rho}\leq\sqrt{\frac{2(p+1)(\lambda_1+1)\rho}{\lambda_1(p-1)}}.\tag{21}$$
Then the sets
$$\begin{aligned}&\|s\phi\|_{H_0^1}\leq\sqrt{\frac{2(p+1)d}{p-1}}\ \Leftrightarrow\ s\leq\delta_1:=\sqrt{\frac{2(p+1)d}{p-1}}\,\|\phi\|_{H_0^1}^{-1},\\ &I(s\phi)=s^2\|\nabla\phi\|_{L^2}^2-s^{p+1}\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}>0\ \Leftrightarrow\ s<\delta_2:=\Big(\frac{\|\nabla\phi\|_{L^2}^2}{\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}},\\ &\|s\phi\|_{H_0^1}\geq\sqrt{\frac{2(p+1)(\lambda_1+1)\rho}{\lambda_1(p-1)}}\ \Leftrightarrow\ s\geq\delta_3:=\sqrt{\frac{2(p+1)(\lambda_1+1)\rho}{\lambda_1(p-1)}}\,\|\phi\|_{H_0^1}^{-1},\\ &I(s\phi)<0\ \Leftrightarrow\ s>\delta_2.\end{aligned}$$
So,
$$\big\{s\phi:0<s<\min\{\delta_1,\delta_2\}\big\}\subset S_{\rho},\qquad \big\{s\phi:s>\max\{\delta_2,\delta_3\}\big\}\subset S^{\rho}.$$
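The scaling argument above can be sanity-checked numerically. The sketch below (not from the paper) plugs in hypothetical values for the three norms of a fixed $\phi$ and for $d$, $\lambda_1$, $\rho$, and verifies that small multiples $s\phi$ satisfy the defining conditions of $S_\rho$ and large multiples those of $S^\rho$.

```python
import math

# Hypothetical data standing in for the norms of a fixed phi:
p = 3.0
grad2 = 2.0      # ||grad phi||_{L2}^2
lp1 = 1.0        # ||phi||_{L^{p+1}_sigma}^{p+1}
h10 = 2.5        # ||phi||_{H10}  (>= ||grad phi||_{L2}, consistent with (7))
d_, lam1, rho = 0.4, 9.8696, 0.6   # sample values with rho > d

delta1 = math.sqrt(2 * (p + 1) * d_ / (p - 1)) / h10
delta2 = (grad2 / lp1) ** (1 / (p - 1))
delta3 = math.sqrt(2 * (p + 1) * (lam1 + 1) * rho / (lam1 * (p - 1))) / h10

def I(s):  # I(s*phi) = s^2 ||grad phi||^2 - s^{p+1} ||phi||^{p+1}
    return s ** 2 * grad2 - s ** (p + 1) * lp1

s_small = 0.5 * min(delta1, delta2)   # below both thresholds -> in S_rho
s_large = 2.0 * max(delta2, delta3)   # above both thresholds -> in S^rho
assert s_small * h10 <= math.sqrt(2 * (p + 1) * d_ / (p - 1)) and I(s_small) > 0
assert s_large * h10 >= math.sqrt(2 * (p + 1) * (lam1 + 1) * rho / (lam1 * (p - 1))) and I(s_large) < 0
```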
In this paper we consider weak solutions to problem (1); their local existence can be obtained by Galerkin's method (see, for example, [22, Chapter II, Sections 3 and 4]) and a standard limit process, and the details are omitted.
Definition 2.1. Assume
$$\int_{\Omega}\big(u_tv+\nabla u_t\cdot\nabla v+\nabla u\cdot\nabla v-|x|^{\sigma}|u|^{p-1}uv\big)\,dx=0\tag{22}$$
holds for any
$$u(\cdot,0)=u_0(\cdot)\quad \text{in } H_0^1(\Omega).\tag{23}$$
Remark 2. There are some remarks on the above definition.
(1) Since
(2) Denote by
(3) Taking
$$\|u(\cdot,t)\|_{H_0^1}^2=\|u_0\|_{H_0^1}^2-2\int_0^tI(u(\cdot,s))\,ds,\quad 0\leq t\leq T,\tag{24}$$
where
(4) Taking
$$J(u(\cdot,t))=J(u_0)-\int_0^t\|u_s(\cdot,s)\|_{H_0^1}^2\,ds,\quad 0\leq t\leq T,\tag{25}$$
where
Definition 2.2. Assume (3) holds. A function
$$\int_{\Omega}\big(\nabla u\cdot\nabla v-|x|^{\sigma}|u|^{p-1}uv\big)\,dx=0\tag{26}$$
holds for any
Remark 3. There are some remarks to the above definition.
(1) By (10), we know all the terms in (26) are well-defined.
(2) If we denote by
$$\Phi=\big\{\phi\in H_0^1(\Omega):J'(\phi)=0 \text{ in } H^{-1}(\Omega)\big\}\subset\big(\mathcal{N}\cup\{0\}\big),\tag{27}$$
where
With the set
Definition 2.3. Assume (3) holds. A function
$$J(u)=\inf_{\phi\in\Phi\setminus\{0\}}J(\phi).$$
With the above preparations, now we can state the main results of this paper. Firstly, we consider the case
(1)
(2)
(3)
Theorem 2.4. Assume (3) holds and
$$\|\nabla u(\cdot,t)\|_{L^2}\leq\sqrt{\frac{2(p+1)J(u_0)}{p-1}},\quad 0\leq t<\infty,\tag{28}$$
where
$$V:=\big\{\phi\in H_0^1(\Omega):J(\phi)\leq d,\ I(\phi)>0\big\}.\tag{29}$$
If, in addition,
$$\|u(\cdot,t)\|_{H_0^1}\leq\|u_0\|_{H_0^1}\exp\Big[-\frac{\lambda_1}{\lambda_1+1}\Big(1-\Big(\frac{J(u_0)}{d}\Big)^{\frac{p-1}{2}}\Big)t\Big].\tag{30}$$
Remark 4. Since
$$J(u_0)>\frac{p-1}{2(p+1)}\|\nabla u_0\|_{L^2}^2>0.$$
So the inequality (28) makes sense.
Theorem 2.5. Assume (3) holds and
$$\lim_{t\uparrow T_{\max}}\int_0^t\|u(\cdot,s)\|_{H_0^1}^2\,ds=\infty,$$
where
$$W:=\big\{\phi\in H_0^1(\Omega):J(\phi)\leq d,\ I(\phi)<0\big\}\tag{31}$$
and
$$T_{\max}\leq\frac{4p\|u_0\|_{H_0^1}^2}{(p-1)^2(p+1)\big(d-J(u_0)\big)}.\tag{32}$$
Remark 5. There are two remarks.
(1) If
(2) The sets
$$f(s)=J(s\phi)=\frac{s^2}{2}\|\nabla\phi\|_{L^2}^2-\frac{s^{p+1}}{p+1}\|\phi\|_{L^{p+1}_{\sigma}}^{p+1},\qquad g(s)=I(s\phi)=s^2\|\nabla\phi\|_{L^2}^2-s^{p+1}\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}.$$
Then (see Fig. 2)
(a)
$$\max_{s\in[0,\infty)}f(s)=f(s_3^*)=\frac{p-1}{2(p+1)}\Big(\frac{\|\nabla\phi\|_{L^2}}{\|\phi\|_{L^{p+1}_{\sigma}}}\Big)^{\frac{2(p+1)}{p-1}}\geq\underbrace{d}_{\text{by }(14)\text{, since }s_3^*\phi\in\mathcal{N}},\tag{33}$$
(b)
$$\max_{s\in[0,\infty)}g(s)=g(s_1^*)=\frac{p-1}{p+1}\Big(\frac{2}{p+1}\Big)^{\frac{2}{p-1}}\Big(\frac{\|\nabla\phi\|_{L^2}}{\|\phi\|_{L^{p+1}_{\sigma}}}\Big)^{\frac{2(p+1)}{p-1}},$$
(c)
$$f(s_2^*)=g(s_2^*)=\frac{p-1}{2p}\Big(\frac{p+1}{2p}\Big)^{\frac{2}{p-1}}\Big(\frac{\|\nabla\phi\|_{L^2}}{\|\phi\|_{L^{p+1}_{\sigma}}}\Big)^{\frac{2(p+1)}{p-1}},$$
where
$$s_1^*:=\Big(\frac{2\|\nabla\phi\|_{L^2}^2}{(p+1)\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}}<s_2^*:=\Big(\frac{(p+1)\|\nabla\phi\|_{L^2}^2}{2p\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}}<s_3^*:=\Big(\frac{\|\nabla\phi\|_{L^2}^2}{\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}}<s_4^*:=\Big(\frac{(p+1)\|\nabla\phi\|_{L^2}^2}{2\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}}.$$
So,
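The relations in Remark 5 can be checked numerically. The sketch below (not from the paper) uses hypothetical values $a=\|\nabla\phi\|_{L^2}^2$ and $b=\|\phi\|_{L^{p+1}_\sigma}^{p+1}$ and verifies the ordering $s_1^*<s_2^*<s_3^*<s_4^*$ together with the closed forms in (a)–(c):

```python
# Hypothetical data: p = 3, a = ||grad phi||_{L2}^2 = 2, b = ||phi||^{p+1} = 1.
p, a, b = 3.0, 2.0, 1.0
f = lambda s: s ** 2 / 2 * a - s ** (p + 1) / (p + 1) * b   # f(s) = J(s phi)
g = lambda s: s ** 2 * a - s ** (p + 1) * b                 # g(s) = I(s phi)

s1 = (2 * a / ((p + 1) * b)) ** (1 / (p - 1))
s2 = ((p + 1) * a / (2 * p * b)) ** (1 / (p - 1))
s3 = (a / b) ** (1 / (p - 1))
s4 = ((p + 1) * a / (2 * b)) ** (1 / (p - 1))
assert s1 < s2 < s3 < s4                                    # the stated ordering

# (||grad phi|| / ||phi||_{L^{p+1}_sigma})^{2(p+1)/(p-1)}:
ratio4 = (a ** 0.5 / b ** (1 / (p + 1))) ** (2 * (p + 1) / (p - 1))
assert abs(f(s3) - (p - 1) / (2 * (p + 1)) * ratio4) < 1e-9                             # (a)
assert abs(g(s1) - (p - 1) / (p + 1) * (2 / (p + 1)) ** (2 / (p - 1)) * ratio4) < 1e-9  # (b)
assert abs(f(s2) - g(s2)) < 1e-9                                                        # (c)
assert abs(f(s2) - (p - 1) / (2 * p) * ((p + 1) / (2 * p)) ** (2 / (p - 1)) * ratio4) < 1e-9
```

With these sample values the four scales come out as $s_1^*=1$, $s_2^*\approx 1.155$, $s_3^*=\sqrt2$, $s_4^*=2$, and $f(s_2^*)=g(s_2^*)=8/9$.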
Theorem 2.6. Assume (3) holds and
$$G:=\big\{\phi\in H_0^1(\Omega):J(\phi)=d,\ I(\phi)=0\big\}.\tag{34}$$
Remark 6. There are two remarks on the above theorem.
(1) Unlike Remark 5, it is not easy to show
(2) To prove the above theorem, we only need to show
Theorem 2.7. Assume (3) holds and let
Secondly, we consider the case
Theorem 2.8. Assume (3) holds and the initial value
(i): If
(ii): If
Here
Next, we show that the solution of problem (1) can blow up at an arbitrary initial energy level (Theorem 2.10). To this end, we first introduce the following theorem.
Theorem 2.9. Assume (3) holds and
$$T_{\max}\leq\frac{8p\|u_0\|_{H_0^1}^2}{(p-1)^2\Big(\dfrac{\lambda_1(p-1)}{\lambda_1+1}\|u_0\|_{H_0^1}^2-2(p+1)J(u_0)\Big)}\tag{35}$$
and
$$\lim_{t\uparrow T_{\max}}\int_0^t\|u(\cdot,s)\|_{H_0^1}^2\,ds=\infty,$$
where
$$\hat{W}:=\Big\{\phi\in H_0^1(\Omega):J(\phi)<\frac{\lambda_1(p-1)}{2(\lambda_1+1)(p+1)}\|\phi\|_{H_0^1}^2\Big\}\tag{36}$$
and
By using the above theorem, we get the following theorem.
Theorem 2.10. For any
The following lemma can be found in [11].
Lemma 3.1. Suppose that
$$F''(t)F(t)-(1+\gamma)\big(F'(t)\big)^2\geq 0$$
for some constant
$$T\leq\frac{F(0)}{\gamma F'(0)}<\infty$$
and
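A model function showing the concavity lemma is sharp (a sketch under assumed parameters, not from [11]): $F(t)=(T-t)^{-1/\gamma}$ satisfies the differential inequality with equality and blows up exactly at $t=T=F(0)/(\gamma F'(0))$.

```python
# Model function F(t) = (T - t)^(-1/gamma), which blows up exactly at t = T.
gamma, T = 0.5, 2.0   # hypothetical sample parameters

def F(t):   return (T - t) ** (-1 / gamma)
def Fp(t):  return (1 / gamma) * (T - t) ** (-1 / gamma - 1)                   # F'
def Fpp(t): return (1 / gamma) * (1 / gamma + 1) * (T - t) ** (-1 / gamma - 2) # F''

# F'' F - (1 + gamma) (F')^2 vanishes identically for this F:
for t in [0.0, 0.5, 1.0, 1.5]:
    assert abs(Fpp(t) * F(t) - (1 + gamma) * Fp(t) ** 2) < 1e-8

# and the lemma's bound T <= F(0) / (gamma F'(0)) is attained with equality:
assert abs(F(0) / (gamma * Fp(0)) - T) < 1e-12
```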
Theorem 3.2. Assume
Proof. Since
We divide the proof into three cases. We will use the notation
Case 1.
$$H_0^1(\Omega)\hookrightarrow L^{p+1}(\Omega) \text{ continuously and compactly.}\tag{37}$$
Then we have, for any
$$\|u\|_{L^{p+1}_{\sigma}}^{p+1}=\int_{\Omega}|x|^{\sigma}|u|^{p+1}\,dx\leq R^{\sigma}\|u\|_{L^{p+1}}^{p+1}\lesssim\|u\|_{H_0^1}^{p+1},$$
which, together with (37), implies
Case 2.
$$H_0^1(\Omega)\hookrightarrow L^{\frac{(p+1)r}{r-1}}(\Omega) \text{ continuously and compactly,}\tag{38}$$
for any
$$\|u\|_{L^{p+1}_{\sigma}}^{p+1}=\int_{\Omega}|x|^{\sigma}|u|^{p+1}\,dx\leq\Big(\int_{B(0,R)}|x|^{\sigma r}\,dx\Big)^{\frac{1}{r}}\Big(\int_{\Omega}|u|^{\frac{(p+1)r}{r-1}}\,dx\Big)^{\frac{r-1}{r}}\leq\begin{cases}\Big(\dfrac{2}{\sigma r+1}R^{\sigma r+1}\Big)^{\frac{1}{r}}\|u\|_{L^{\frac{(p+1)r}{r-1}}}^{p+1}\lesssim\|u\|_{H_0^1}^{p+1}, & n=1;\\[2mm] \Big(\dfrac{2\pi}{\sigma r+2}R^{\sigma r+2}\Big)^{\frac{1}{r}}\|u\|_{L^{\frac{(p+1)r}{r-1}}}^{p+1}\lesssim\|u\|_{H_0^1}^{p+1}, & n=2,\end{cases}$$
which, together with (38), implies
Case 3.
$$-\frac{\sigma}{n}<\frac{1}{r}<1-\frac{(p+1)(n-2)}{2n}.$$
By the second inequality of the above inequalities, we have
$$\frac{(p+1)r}{r-1}=\frac{p+1}{1-\frac{1}{r}}<\frac{p+1}{\frac{(p+1)(n-2)}{2n}}=\frac{2n}{n-2}.$$
So,
$$H_0^1(\Omega)\hookrightarrow L^{\frac{(p+1)r}{r-1}}(\Omega) \text{ continuously and compactly.}\tag{39}$$
Then by Hölder's inequality, for any
$$\|u\|_{L^{p+1}_{\sigma}}^{p+1}=\int_{\Omega}|x|^{\sigma}|u|^{p+1}\,dx\leq\Big(\int_{B(0,R)}|x|^{\sigma r}\,dx\Big)^{\frac{1}{r}}\Big(\int_{\Omega}|u|^{\frac{(p+1)r}{r-1}}\,dx\Big)^{\frac{r-1}{r}}\leq\Big(\frac{\omega_{n-1}}{\sigma r+n}R^{\sigma r+n}\Big)^{\frac{1}{r}}\|u\|_{L^{\frac{(p+1)r}{r-1}}}^{p+1}\lesssim\|u\|_{H_0^1}^{p+1},$$
which, together with (39), implies
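The radial integral used above can be checked numerically. The sketch below (with assumed sample values $n=3$, $R$, and exponent $\sigma r$, none taken from the paper) compares a midpoint-rule quadrature of $\int_{B(0,R)}|x|^{\sigma r}dx=\omega_{n-1}\int_0^R\rho^{\sigma r+n-1}d\rho$ with the closed form $\frac{\omega_{n-1}}{\sigma r+n}R^{\sigma r+n}$:

```python
import math

# Sample data: n = 3 (omega_2 = 4*pi), R = 1.5, sigma*r = -1.2 with sr + n > 0.
n, R, sr = 3, 1.5, -1.2
omega = 4 * math.pi   # surface area of the unit sphere in R^3

# Midpoint rule for omega * int_0^R rho^{sr + n - 1} drho.
m = 20000
h = R / m
numeric = omega * sum(((i + 0.5) * h) ** (sr + n - 1) for i in range(m)) * h
closed = omega / (sr + n) * R ** (sr + n)
assert abs(numeric - closed) / closed < 1e-3
```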
Theorem 3.3. Assume
$$d=\frac{p-1}{2(p+1)}\,C_{p\sigma}^{-\frac{2(p+1)}{p-1}},$$
where
Proof. Firstly, we show
$$\inf_{\phi\in\mathcal{N}}J(\phi)=\min_{\phi\in H_0^1(\Omega)\setminus\{0\}}J(s^*_{\phi}\phi),\tag{40}$$
where
$$s^*_{\phi}:=\Big(\frac{\|\nabla\phi\|_{L^2}^2}{\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big)^{\frac{1}{p-1}}.\tag{41}$$
By the definition of
On one hand, since
$$\min_{\phi\in H_0^1(\Omega)\setminus\{0\}}J(s^*_{\phi}\phi)\leq\min_{\phi\in\mathcal{N}}J(s^*_{\phi}\phi)=\min_{\phi\in\mathcal{N}}J(\phi).$$
On the other hand, since
$$\inf_{\phi\in\mathcal{N}}J(\phi)\leq\inf_{\phi\in H_0^1(\Omega)\setminus\{0\}}J(s^*_{\phi}\phi).$$
Then (40) follows from the above two inequalities.
By (40), the definition of
$$d=\min_{\phi\in H_0^1(\Omega)\setminus\{0\}}J(s^*_{\phi}\phi)=\frac{p-1}{2(p+1)}\min_{\phi\in H_0^1(\Omega)\setminus\{0\}}\Big(\frac{\|\nabla\phi\|_{L^2}}{\|\phi\|_{L^{p+1}_{\sigma}}}\Big)^{\frac{2(p+1)}{p-1}}=\frac{p-1}{2(p+1)}\,C_{p\sigma}^{-\frac{2(p+1)}{p-1}}.$$
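As a sanity check of this computation (with hypothetical norm values $a=\|\nabla\phi\|_{L^2}^2$, $b=\|\phi\|_{L^{p+1}_\sigma}^{p+1}$, not data from the paper), one can verify numerically that $J(s^*_\phi\phi)$ equals the stated closed form and that $s^*_\phi$ maximizes $s\mapsto J(s\phi)$:

```python
# For each hypothetical pair (a, b), compare J(s* phi) against the closed form
# (p-1)/(2(p+1)) * (||grad phi|| / ||phi||)^{2(p+1)/(p-1)} and check maximality.
p = 3.0
errs = []
for a, b in [(2.0, 1.0), (1.0, 3.0), (5.0, 0.7)]:
    s = (a / b) ** (1 / (p - 1))                      # s*_phi from (41)
    J = s ** 2 / 2 * a - s ** (p + 1) / (p + 1) * b   # J(s*_phi phi)
    ratio = a ** 0.5 / b ** (1 / (p + 1))             # ||grad phi|| / ||phi||
    closed = (p - 1) / (2 * (p + 1)) * ratio ** (2 * (p + 1) / (p - 1))
    errs.append(abs(J - closed))
    # s*_phi also maximizes s -> J(s phi) over s > 0 (grid check):
    grid_max = max(t ** 2 / 2 * a - t ** (p + 1) / (p + 1) * b
                   for t in [i / 1000 for i in range(1, 5000)])
    assert grid_max <= J + 1e-9
assert max(errs) < 1e-9
```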
Theorem 3.4. Assume (3) holds. Let
$$\sqrt{\frac{2(p+1)d}{p-1}}\leq\lambda_{\rho}\leq\Lambda_{\rho}\leq\sqrt{\frac{2(p+1)(\lambda_1+1)\rho}{\lambda_1(p-1)}}.\tag{42}$$
Proof. Let
$$\lambda_{\rho}\leq\Lambda_{\rho}.\tag{43}$$
Since
$$d=\inf_{\phi\in\mathcal{N}}J(\phi)=\frac{p-1}{2(p+1)}\inf_{\phi\in\mathcal{N}}\|\nabla\phi\|_{L^2}^2\leq\frac{p-1}{2(p+1)}\inf_{\phi\in\mathcal{N}_{\rho}}\|\phi\|_{H_0^1}^2=\frac{p-1}{2(p+1)}\lambda_{\rho}^2,$$
which implies
$$\lambda_{\rho}\geq\sqrt{\frac{2(p+1)d}{p-1}}.$$
On the other hand, by (8) and (18), we have
$$\Lambda_{\rho}=\sup_{\phi\in\mathcal{N}_{\rho}}\|\phi\|_{H_0^1}\leq\sqrt{\frac{\lambda_1+1}{\lambda_1}}\sup_{\phi\in\mathcal{N}_{\rho}}\|\nabla\phi\|_{L^2}\leq\sqrt{\frac{\lambda_1+1}{\lambda_1}}\sqrt{\frac{2(p+1)\rho}{p-1}}.$$
Combining the above two inequalities with (43), we get (42), and the proof is complete.
Theorem 3.5. Assume (3) holds and
Proof. We only prove the invariance of
For any
$$\|\nabla\phi\|_{L^2}^2<\|\phi\|_{L^{p+1}_{\sigma}}^{p+1}\leq C_{p\sigma}^{p+1}\|\nabla\phi\|_{L^2}^{p+1},$$
which implies
$$\|\nabla\phi\|_{L^2}>C_{p\sigma}^{-\frac{p+1}{p-1}}.\tag{44}$$
Let
$$I(u(\cdot,t))<0,\quad t\in[0,\varepsilon].\tag{45}$$
Then by (24),
$$J(u(\cdot,t))<d\quad \text{for } t\in(0,\varepsilon].\tag{46}$$
We argue by contradiction. Since
$$J(u(\cdot,t_0))<d\tag{47}$$
(note (25) and (46),
$$\|\nabla u(\cdot,t_0)\|_{L^2}\geq C_{p\sigma}^{-\frac{p+1}{p-1}}>0,$$
which, together with
$$J(u(\cdot,t_0))\geq d,$$
which contradicts (47). So the conclusion holds.
Theorem 3.6. Assume (3) holds and
$$\|\nabla u(\cdot,t)\|_{L^2}^2\geq\frac{2(p+1)}{p-1}d,\quad 0\leq t<T_{\max},\tag{48}$$
where
Proof. Let
By the proof in Theorem 3.3,
$$\begin{aligned}d=\min_{\phi\in H_0^1(\Omega)\setminus\{0\}}J(s^*_{\phi}\phi)&\leq\min_{\phi\in\mathcal{N}^-}J(s^*_{\phi}\phi)\leq J(s^*_u u(\cdot,t))\\ &=\frac{(s^*_u)^2}{2}\|\nabla u(\cdot,t)\|_{L^2}^2-\frac{(s^*_u)^{p+1}}{p+1}\|u(\cdot,t)\|_{L^{p+1}_{\sigma}}^{p+1}\\ &\leq\Big(\frac{(s^*_u)^2}{2}-\frac{(s^*_u)^{p+1}}{p+1}\Big)\|\nabla u(\cdot,t)\|_{L^2}^2,\end{aligned}$$
where we have used
Then
$$d\leq\max_{0\leq s\leq 1}\Big(\frac{s^2}{2}-\frac{s^{p+1}}{p+1}\Big)\|\nabla u(\cdot,t)\|_{L^2}^2=\Big(\frac{s^2}{2}-\frac{s^{p+1}}{p+1}\Big)\bigg|_{s=1}\|\nabla u(\cdot,t)\|_{L^2}^2=\frac{p-1}{2(p+1)}\|\nabla u(\cdot,t)\|_{L^2}^2,$$
and (48) follows from the above inequality.
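The maximization in the last step can be confirmed on a grid (a quick numerical check with the sample exponent $p=3$, not part of the proof): the map $s\mapsto s^2/2-s^{p+1}/(p+1)$ is increasing on $[0,1]$, so its maximum is $(p-1)/(2(p+1))$ at $s=1$.

```python
# Grid check that max_{0<=s<=1} (s^2/2 - s^{p+1}/(p+1)) is attained at s = 1
# with value (p-1)/(2(p+1)), for the hypothetical exponent p = 3.
p = 3.0
h = lambda s: s ** 2 / 2 - s ** (p + 1) / (p + 1)
grid = [i / 10000 for i in range(10001)]
best = max(grid, key=h)
assert best == 1.0
assert abs(h(1.0) - (p - 1) / (2 * (p + 1))) < 1e-12
```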
Theorem 3.7. Assume (3) holds and
Proof. Firstly, we show
$$\frac12\|\nabla u_0\|_{L^2}^2-\frac{1}{p+1}\|u_0\|_{L^{p+1}_{\sigma}}^{p+1}=J(u_0)<\frac{\lambda_1(p-1)}{2(\lambda_1+1)(p+1)}\|u_0\|_{H_0^1}^2\leq\frac{p-1}{2(p+1)}\|\nabla u_0\|_{L^2}^2,$$
which implies
$$I(u_0)=\|\nabla u_0\|_{L^2}^2-\|u_0\|_{L^{p+1}_{\sigma}}^{p+1}<0.$$
Secondly, we prove
$$J(u_0)<\frac{\lambda_1(p-1)}{2(\lambda_1+1)(p+1)}\|u_0\|_{H_0^1}^2<\frac{\lambda_1(p-1)}{2(\lambda_1+1)(p+1)}\|u(\cdot,t_0)\|_{H_0^1}^2\leq\frac{p-1}{2(p+1)}\|\nabla u(\cdot,t_0)\|_{L^2}^2.\tag{49}$$
On the other hand, by (24), (12), (13) and
$$J(u_0)\geq J(u(\cdot,t_0))=\frac{p-1}{2(p+1)}\|\nabla u(\cdot,t_0)\|_{L^2}^2,$$
which contradicts (49). The proof is complete.
Proof of Theorem 2.4. Let
$$J(u_0)\geq J(u(\cdot,t))\geq\frac{p-1}{2(p+1)}\|\nabla u(\cdot,t)\|_{L^2}^2,\quad 0\leq t<T_{\max},$$
which implies
$$\|\nabla u(\cdot,t)\|_{L^2}\leq\sqrt{\frac{2(p+1)J(u_0)}{p-1}},\quad 0\leq t<\infty.\tag{50}$$
Next, we prove
$$\begin{aligned}\frac{d}{dt}\big(\|u(\cdot,t)\|_{H_0^1}^2\big)&=-2I(u(\cdot,t))=-2\big(\|\nabla u(\cdot,t)\|_{L^2}^2-\|u(\cdot,t)\|_{L^{p+1}_{\sigma}}^{p+1}\big)\\ &\leq-2\big(1-C_{p\sigma}^{p+1}\|\nabla u(\cdot,t)\|_{L^2}^{p-1}\big)\|\nabla u(\cdot,t)\|_{L^2}^2\\ &\leq-2\Big(1-C_{p\sigma}^{p+1}\Big(\sqrt{\tfrac{2(p+1)J(u_0)}{p-1}}\Big)^{p-1}\Big)\|\nabla u(\cdot,t)\|_{L^2}^2\\ &=-2\Big(1-\Big(\tfrac{J(u_0)}{d}\Big)^{\frac{p-1}{2}}\Big)\|\nabla u(\cdot,t)\|_{L^2}^2\\ &\leq-\frac{2\lambda_1}{\lambda_1+1}\Big(1-\Big(\tfrac{J(u_0)}{d}\Big)^{\frac{p-1}{2}}\Big)\|u(\cdot,t)\|_{H_0^1}^2,\end{aligned}$$
which leads to
$$\|u(\cdot,t)\|_{H_0^1}^2\leq\|u_0\|_{H_0^1}^2\exp\Big[-\frac{2\lambda_1}{\lambda_1+1}\Big(1-\Big(\frac{J(u_0)}{d}\Big)^{\frac{p-1}{2}}\Big)t\Big].$$
The proof is complete.
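The last step is a Grönwall-type argument: any nonnegative quantity satisfying $y'\leq-ky$ decays at least like $e^{-kt}$. A toy numerical check (the ODE below is an assumed stand-in with an extra dissipative term, not the PDE itself):

```python
import math

# Euler simulation of y' = -k y - 0.1 y^2, whose right-hand side is <= -k y
# for y >= 0; the iterates must stay below the Gronwall bound y0 * exp(-k t).
k, y0, dt, steps = 0.8, 1.0, 1e-3, 5000
y, t = y0, 0.0
for _ in range(steps):
    y += dt * (-k * y - 0.1 * y ** 2)
    t += dt
    assert y <= y0 * math.exp(-k * t) + 1e-6
```

The comparison holds for every Euler step because $y_{n+1}=y_n(1-k\,\Delta t-0.1y_n\Delta t)\leq y_n e^{-k\Delta t}$.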
Proof of Theorem 2.5. Let
Firstly, we consider the case
$$\xi(t):=\Big(\int_0^t\|u(\cdot,s)\|_{H_0^1}^2\,ds\Big)^{\frac12},\qquad \eta(t):=\Big(\int_0^t\|u_s(\cdot,s)\|_{H_0^1}^2\,ds\Big)^{\frac12},\quad 0\leq t<T_{\max}.\tag{51}$$
For any
$$F(t):=\xi^2(t)+(T^*-t)\|u_0\|_{H_0^1}^2+\beta(t+\alpha)^2,\quad 0\leq t\leq T^*.\tag{52}$$
Then
$$F(0)=T^*\|u_0\|_{H_0^1}^2+\beta\alpha^2>0,\tag{53}$$
$$F'(t)=\|u(\cdot,t)\|_{H_0^1}^2-\|u_0\|_{H_0^1}^2+2\beta(t+\alpha)=2\Big(\frac12\int_0^t\frac{d}{ds}\|u(\cdot,s)\|_{H_0^1}^2\,ds+\beta(t+\alpha)\Big),\quad 0\leq t\leq T^*,\tag{54}$$
and (by (24), (12), (13), (48), (25))
$$F''(t)=-2I(u(\cdot,t))+2\beta=(p-1)\|\nabla u(\cdot,t)\|_{L^2}^2-2(p+1)J(u(\cdot,t))+2\beta\geq 2(p+1)\big(d-J(u_0)\big)+2(p+1)\eta^2(t)+2\beta,\quad 0\leq t\leq T^*.\tag{55}$$
Since
$$F'(t)\geq 2\beta(t+\alpha).$$
Then
$$F(t)=F(0)+\int_0^tF'(s)\,ds\geq T^*\|u_0\|_{H_0^1}^2+\beta\alpha^2+2\alpha\beta t+\beta t^2,\quad 0\leq t\leq T^*.\tag{56}$$
By (6), Schwarz's inequality and Hölder's inequality, we have
$$\frac12\int_0^t\frac{d}{ds}\|u(\cdot,s)\|_{H_0^1}^2\,ds=\int_0^t\big(u(\cdot,s),u_s(\cdot,s)\big)_{H_0^1}\,ds\leq\int_0^t\|u(\cdot,s)\|_{H_0^1}\|u_s(\cdot,s)\|_{H_0^1}\,ds\leq\xi(t)\eta(t),\quad 0\leq t\leq T^*,$$
which, together with the definition of
$$\begin{aligned}\big(F(t)-(T^*-t)\|u_0\|_{H_0^1}^2\big)\big(\eta^2(t)+\beta\big)&=\big(\xi^2(t)+\beta(t+\alpha)^2\big)\big(\eta^2(t)+\beta\big)\\ &=\xi^2(t)\eta^2(t)+\beta\xi^2(t)+\beta(t+\alpha)^2\eta^2(t)+\beta^2(t+\alpha)^2\\ &\geq\xi^2(t)\eta^2(t)+2\xi(t)\eta(t)\beta(t+\alpha)+\beta^2(t+\alpha)^2\\ &\geq\big(\xi(t)\eta(t)+\beta(t+\alpha)\big)^2\\ &\geq\Big(\frac12\int_0^t\frac{d}{ds}\|u(\cdot,s)\|_{H_0^1}^2\,ds+\beta(t+\alpha)\Big)^2,\quad 0\leq t\leq T^*.\end{aligned}$$
Then it follows from (54) and the above inequality that
$$\big(F'(t)\big)^2=4\Big(\frac12\int_0^t\frac{d}{ds}\|u(\cdot,s)\|_{H_0^1}^2\,ds+\beta(t+\alpha)\Big)^2\leq 4F(t)\big(\eta^2(t)+\beta\big),\quad 0\leq t\leq T^*.\tag{57}$$
In view of (55), (56), and (57), we have
$$F(t)F''(t)-\frac{p+1}{2}\big(F'(t)\big)^2\geq F(t)\Big(2(p+1)\big(d-J(u_0)\big)-2p\beta\Big),\quad 0\leq t\leq T^*.$$
If we take
$$0<\beta\leq\frac{p+1}{p}\big(d-J(u_0)\big),\tag{58}$$
then
$$T^*\leq\frac{F(0)}{\big(\frac{p+1}{2}-1\big)F'(0)}=\frac{T^*\|u_0\|_{H_0^1}^2+\beta\alpha^2}{(p-1)\alpha\beta}.$$
Then for
$$\alpha\in\Big(\frac{\|u_0\|_{H_0^1}^2}{(p-1)\beta},\infty\Big),\tag{59}$$
we get
$$T^*\leq\frac{\beta\alpha^2}{(p-1)\alpha\beta-\|u_0\|_{H_0^1}^2}.$$
Minimizing the above inequality for
$$T^*\leq\frac{\beta\alpha^2}{(p-1)\alpha\beta-\|u_0\|_{H_0^1}^2}\bigg|_{\alpha=\frac{2\|u_0\|_{H_0^1}^2}{(p-1)\beta}}=\frac{4\|u_0\|_{H_0^1}^2}{(p-1)^2\beta}.$$
Minimizing the above inequality for
$$T^*\leq\frac{4p\|u_0\|_{H_0^1}^2}{(p-1)^2(p+1)\big(d-J(u_0)\big)}.$$
By the arbitrariness of
$$T_{\max}\leq\frac{4p\|u_0\|_{H_0^1}^2}{(p-1)^2(p+1)\big(d-J(u_0)\big)}.$$
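The two minimizations above can be verified numerically. The sketch below (with hypothetical values $p=3$, $c=\|u_0\|_{H_0^1}^2=1$, $\beta=1$, not data from the paper) checks that $\alpha^*=2c/((p-1)\beta)$ minimizes $\beta\alpha^2/((p-1)\alpha\beta-c)$ over the admissible range, with minimum value $4c/((p-1)^2\beta)$:

```python
# Hypothetical data: p = 3, c = ||u0||_{H10}^2 = 1, beta = 1.
p, c, beta = 3.0, 1.0, 1.0

def bound(alpha):
    return beta * alpha ** 2 / ((p - 1) * alpha * beta - c)

alpha_star = 2 * c / ((p - 1) * beta)
assert abs(bound(alpha_star) - 4 * c / ((p - 1) ** 2 * beta)) < 1e-12

# alpha_star minimizes over the admissible range alpha > c / ((p-1) beta):
lo = c / ((p - 1) * beta)
for alpha in [lo + 0.01 * i for i in range(1, 500)]:
    assert bound(alpha) >= bound(alpha_star) - 1e-12
```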
Secondly, we consider the case
Proof of Theorems 2.6 and 2.7. Since Theorem 2.6 follows from Theorem 2.7 directly, we only need to prove Theorem 2.7.
Firstly, we show
$$d=\inf_{\phi\in\mathcal{N}}J(\phi)=\frac{p-1}{2(p+1)}\inf_{\phi\in\mathcal{N}}\|\nabla\phi\|_{L^2}^2.$$
Then a minimizing sequence
$$\lim_{k\uparrow\infty}J(\phi_k)=\frac{p-1}{2(p+1)}\lim_{k\uparrow\infty}\|\nabla\phi_k\|_{L^2}^2=d,\tag{60}$$
which implies
(1)
(2)
Now, in view of
We claim
$$\|\nabla\varphi\|_{L^2}^2=\|\varphi\|_{L^{p+1}_{\sigma}}^{p+1},\quad \text{i.e. } I(\varphi)=0.\tag{62}$$
In fact, if the claim is not true, then by (61),
$$\|\nabla\varphi\|_{L^2}^2<\|\varphi\|_{L^{p+1}_{\sigma}}^{p+1}.$$
By the proof of Theorem 3.3, we know that
$$J(s^*_{\varphi}\varphi)\geq d,\tag{63}$$
where
On the other hand, since
$$J(s^*_{\varphi}\varphi)=\frac{p-1}{2(p+1)}(s^*_{\varphi})^2\|\nabla\varphi\|_{L^2}^2<\frac{p-1}{2(p+1)}\|\nabla\varphi\|_{L^2}^2\leq\frac{p-1}{2(p+1)}\liminf_{k\uparrow\infty}\|\nabla\phi_k\|_{L^2}^2=d,$$
which contradicts (63). So the claim is true, i.e.
$$\lim_{k\uparrow\infty}\|\nabla\phi_k\|_{L^2}^2=\|\varphi\|_{L^{p+1}_{\sigma}}^{p+1},$$
which, together with
Second, we prove
Then
$$A:=\big\{\tau(s)(\varphi+sv):s\in(-\varepsilon,\varepsilon)\big\}$$
is a curve on
where
$$\xi:=\frac{2\int_{\Omega}\nabla(\varphi+sv)\cdot\nabla v\,dx}{\|\varphi+sv\|_{L^{p+1}_{\sigma}}^{p+1}},\qquad \eta:=\frac{(p+1)\int_{\Omega}|x|^{\sigma}|\varphi+sv|^{p-1}(\varphi+sv)v\,dx}{\|\nabla(\varphi+sv)\|_{L^2}^2}.$$
By (62), we get
$$\tau'(0)=\frac{1}{(p-1)\|\varphi\|_{L^{p+1}_{\sigma}}^{p+1}}\Big(2\int_{\Omega}\nabla\varphi\cdot\nabla v\,dx-(p+1)\int_{\Omega}|x|^{\sigma}|\varphi|^{p-1}\varphi v\,dx\Big).\tag{65}$$
Let
$$\varrho(s):=J\big(\tau(s)(\varphi+sv)\big)=\frac{\tau^2(s)}{2}\|\nabla(\varphi+sv)\|_{L^2}^2-\frac{\tau^{p+1}(s)}{p+1}\|\varphi+sv\|_{L^{p+1}_{\sigma}}^{p+1},\quad s\in(-\varepsilon,\varepsilon).$$
Since
$$\begin{aligned}0=\varrho'(0)&=\Big(\tau(s)\tau'(s)\|\nabla(\varphi+sv)\|_{L^2}^2+\tau^2(s)\int_{\Omega}\nabla(\varphi+sv)\cdot\nabla v\,dx\Big)\Big|_{s=0}\\ &\quad-\Big(\tau^p(s)\tau'(s)\|\varphi+sv\|_{L^{p+1}_{\sigma}}^{p+1}+\tau^{p+1}(s)\int_{\Omega}|x|^{\sigma}|\varphi+sv|^{p-1}(\varphi+sv)v\,dx\Big)\Big|_{s=0}\\ &=\int_{\Omega}\nabla\varphi\cdot\nabla v\,dx-\int_{\Omega}|x|^{\sigma}|\varphi|^{p-1}\varphi v\,dx.\end{aligned}$$
So,
Finally, in view of Definition 2.3 and
$$d=\inf_{\phi\in\Phi\setminus\{0\}}J(\phi).\tag{66}$$
In fact, by the above proof and (27), we have
$$d=\inf_{\phi\in\mathcal{N}}J(\phi)$$
and
Proof of Theorem 2.8. Let
$$\omega(u_0)=\bigcap_{t\geq 0}\overline{\{u(\cdot,s):s\geq t\}}^{\,H_0^1(\Omega)}$$
the
(i) Assume
$$v(x,t)=\begin{cases}u(x,t), & \text{if } 0\leq t\leq t_0;\\ 0, & \text{if } t>t_0\end{cases}$$
is a global weak solution of problem (1), and the proof is complete.
We claim that
$$I(u(\cdot,t))>0,\quad 0\leq t<T_{\max}.\tag{67}$$
Since
$$I(u(\cdot,t))>0,\quad 0\leq t<t_0\tag{68}$$
and
$$I(u(\cdot,t_0))=0,\tag{69}$$
which together with the definition of
$$\|u(\cdot,t_0)\|_{H_0^1}\geq\lambda_{\rho}.\tag{70}$$
On the other hand, it follows from (24), (68) and
$$\|u(\cdot,t)\|_{H_0^1}<\|u_0\|_{H_0^1}\leq\lambda_{\rho},$$
which contradicts (70). So (67) is true. Then by (24) again, we get
$$\|u(\cdot,t)\|_{H_0^1}\leq\|u_0\|_{H_0^1},\quad 0\leq t<T_{\max},$$
which implies
By (24) and (67),
$$\lim_{t\uparrow\infty}\|u(\cdot,t)\|_{H_0^1}=c.$$
Taking
$$\int_0^{\infty}I(u(\cdot,s))\,ds\leq\frac12\big(\|u_0\|_{H_0^1}^2-c^2\big)<\infty.$$
Note that
$$\lim_{n\uparrow\infty}I(u(\cdot,t_n))=0.\tag{71}$$
Let
$$u(\cdot,t_n)\to\omega \text{ in } H_0^1(\Omega) \text{ as } n\uparrow\infty.\tag{72}$$
Then by (71), we get
$$I(\omega)=\lim_{n\uparrow\infty}I(u(\cdot,t_n))=0.\tag{73}$$
As the above, one can easily see
$$\|\omega\|_{H_0^1}<\lambda_{\rho}\leq\lambda_{J(u_0)},\qquad \underbrace{J(\omega)<J(u_0)}_{\Rightarrow\ \omega\in J^{J(u_0)}},$$
which implies
$$\lim_{t\uparrow\infty}\|u(\cdot,t)\|_{H_0^1}=\lim_{n\uparrow\infty}\|u(\cdot,t_n)\|_{H_0^1}=\|\omega\|_{H_0^1}=0.$$
(ii) Assume
$$I(u(\cdot,t))<0,\quad 0\leq t<T_{\max}.\tag{74}$$
Since
$$I(u(\cdot,t))<0,\quad 0\leq t<t_0\tag{75}$$
and
$$I(u(\cdot,t_0))=0.\tag{76}$$
Since (75), by (44) and
which, together with the definition of
$$\|u(\cdot,t_0)\|_{H_0^1}\leq\Lambda_{\rho}.\tag{77}$$
On the other hand, it follows from (24), (75) and
which contradicts (77). So (74) is true.
Suppose by contradiction that
Taking
Note
$$\lim_{n\uparrow\infty}I(u(\cdot,t_n))=0.\tag{78}$$
Let
$$u(\cdot,t_n)\to\omega \text{ in } H_0^1(\Omega) \text{ as } n\uparrow\infty.\tag{79}$$
Since
Then by (78), we get
$$I(\omega)=\lim_{n\uparrow\infty}I(u(\cdot,t_n))=0.\tag{80}$$
By (24), (25) and (74), one can easily see
which implies
Proof of Theorem 2.9. Let
(81) |
The remaining proof is similar to the proof of Theorem 2.5. For any
$$F''(t)\geq\frac{\lambda_1(p-1)}{\lambda_1+1}\|u_0\|_{H_0^1}^2-2(p+1)J(u_0)+2(p+1)\eta^2(t)+2\beta,\quad 0\leq t\leq T^*.\tag{82}$$
We also have (56) and (57). Then it follows from (56), (57) and (82) that
If we take
$$0<\beta\leq\frac{1}{2p}\Big(\frac{\lambda_1(p-1)}{\lambda_1+1}\|u_0\|_{H_0^1}^2-2(p+1)J(u_0)\Big),\tag{83}$$
then
Then for
$$\alpha\in\Big(\frac{\|u_0\|_{H_0^1}^2}{(p-1)\beta},\infty\Big),\tag{84}$$
we get
Minimizing the above inequality for
Minimizing the above inequality for
By the arbitrariness of
Proof of Theorem 2.10. For any
(85) |
For such
(86) |
where (see Remark 5)
which can be done since
and
By Remark 5 again,
(87) |
By (87) and (86), we can choose
and (note (85))
Let
[1] |
A new reprojection of the conjugate directions. Numer. Algebra Control Optim. (2019) 9: 157-171. ![]() |
[2] |
Descent property and global convergence of the Fletcher-Reeves method with inexact line search. IMA J. Numer. Anal. (1985) 5: 121-124. ![]() |
[3] | An unconstrained optimization test functions collection. Adv. Model. Optim. (2008) 10: 147-161. |
[4] |
An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms (2006) 42: 63-73. ![]() |
[5] |
A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues. Numer. Algorithms (2018) 77: 1273-1282. ![]() |
[6] | N. Andrei, Relaxed gradient descent and a new gradient descent methods for unconstrained optimization, Visited August 19, (2018). |
[7] |
N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization, 1st edition, Springer International Publishing, 2020. doi: 10.1007/978-3-030-42950-8
![]() |
[8] |
Minimization of functions having Lipschitz continuous first partial derivatives. Pacific J. Math. (1966) 16: 1-3. ![]() |
[9] |
The Dai-Liao nonlinear conjugate gradient method with optimal parameter choices. European J. Oper. Res. (2014) 234: 625-630. ![]() |
[10] |
B. Baluch, Z. Salleh, A. Alhawarat and U. A. M. Roslan, A new modified three-term conjugate gradient method with sufficient descent property and its global convergence, J. Math., 2017 (2017), Article ID 2715854, 12 pages. doi: 10.1155/2017/2715854
![]() |
[11] |
Two-point step-size gradient method. IMA J. Numer. Anal. (1988) 8: 141-148. ![]() |
[12] |
M. Bastani and D. K. Salkuyeh, On the GSOR iteration method for image restoration, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020013
![]() |
[13] |
An evaluation of quasi-Newton methods for application to FSI problems involving free surface flow and solid body contact. Computers & Structures (2016) 173: 71-83. ![]() |
[14] |
CUTE: Constrained and unconstrained testing environments. ACM Trans. Math. Softw. (1995) 21: 123-160. ![]() |
[15] |
A classification of quasi-Newton methods. Numer. Algorithms (2003) 33: 123-135. ![]() |
[16] |
A conjugate gradient algorithm and its applications in image restoration. Appl. Numer. Math. (2020) 152: 243-252. ![]() |
[17] |
A two-term PRP-based descent method. Numer. Funct. Anal. Optim. (2007) 28: 1217-1230. ![]() |
[18] |
A sufficient descent conjugate gradient method and its global convergence. Optim. Methods Softw. (2016) 31: 577-590. ![]() |
[19] |
Stepsize analysis for descent methods. J. Optim. Theory Appl. (1981) 33: 187-205. ![]() |
[20] |
Convergence of quasi-Newton matrices generated by the symmetric rank one update. Math. Programming (1991) 50: 177-195. ![]() |
[21] |
Y.-H. Dai, Nonlinear Conjugate Gradient Methods, Wiley Encyclopedia of Operations Research and Management Science, (2011). doi: 10.1002/9780470400531.eorms0183
![]() |
[22] | Y.-H. Dai, Alternate Step Gradient Method, Report AMSS–2001–041, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, 2001. |
[23] | A nonmonotone conjugate gradient algorithm for unconstrained optimization. J. Syst. Sci. Complex. (2002) 15: 139-145. |
[24] | Y.-H. Dai and R. Fletcher, On the Asymptotic Behaviour of some New Gradient Methods, Numerical Analysis Report, NA/212, Dept. of Math. University of Dundee, Scotland, UK, 2003. |
[25] |
A nonlinear conjugate gradient algorithm with an optimal property and an improved wolfe line search. SIAM. J. Optim. (2013) 23: 296-320. ![]() |
[26] |
New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. (2001) 43: 87-101. ![]() |
[27] |
R-linear convergence of the Barzilai and Borwein gradient method. IMA J. Numer. Anal. (2002) 22: 1-10. ![]() |
[28] | Testing different conjugate gradient methods for large-scale unconstrained optimization. J. Comput. Math. (2003) 21: 311-320. |
[29] |
Another improved Wei–Yao–Liu nonlinear conjugate gradient method with sufficient descent property. Appl. Math. Comput. (2012) 218: 7421-7430. ![]() |
[30] |
A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. (1999) 10: 177-182. ![]() |
[31] |
An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. (2001) 103: 33-47. ![]() |
[32] | Y. H. Dai and Y. Yuan, A class of Globally Convergent Conjugate Gradient Methods, Research report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998. |
[33] | A class of globally convergent conjugate gradient methods. Sci. China Ser. A (2003) 46: 251-261. |
[34] |
Modified two-point step-size gradient methods for unconstrained optimization. Comput. Optim. Appl. (2002) 22: 103-109. ![]() |
[35] |
Alternate minimization gradient method. IMA J. Numer. Anal. (2003) 23: 377-393. ![]() |
[36] |
Analysis of monotone gradient methods. J. Ind. Manag. Optim. (2005) 1: 181-192. ![]() |
[37] | Y.-H. Dai and H. Zhang, An Adaptive Two-Point Step-size gradient method, Research report, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 2001. |
[38] |
The conjugate gradient method for linear and nonlinear operator equations. SIAM J. Numer. Anal. (1967) 4: 10-26. ![]() |
[39] |
S. Delladji, M. Belloufi and B. Sellami, Behavior of the combination of PRP and HZ methods for unconstrained optimization, Numerical Algebra, Control and Optimization, (2020). doi: 10.3934/naco.2020032
![]() |
[40] | Investigation of quasi-Newton methods for unconstrained optimization. International Journal of Computer Application (2010) 29: 48-58. |
[41] |
Two modifications of the method of the multiplicative parameters in descent gradient methods. Appl. Math, Comput. (2012) 218: 8672-8683. ![]() |
[42] |
S. S. Djordjević, Unconstrained Optimization Methods: Conjugate Gradient Methods and Trust-Region Methods, Book Chapter, 2019. doi: 10.5772/intechopen.84374
![]() |
[43] |
Benchmarking optimization software with performance profiles. Math. Program. (2002) 91: 201-213. ![]() |
[44] |
The application of quasi-Nnewton methods in fluid mechanics. Internat. J. Numer. Methods Engrg. (1981) 17: 707-718. ![]() |
[45] | D. K. Faddeev and I. S. Sominskiǐ, Collection of Problems on Higher Algebra. Gostekhizdat, 2nd edition, Moscow, 1949. |
[46] |
A. G. Farizawani, M. Puteh, Y. Marina and A. Rivaie, A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches, Journal of Physics: Conference Series, 1529 (2020). doi: 10.1088/1742-6596/1529/2/022040
![]() |
[47] | R. Fletcher, Practical Methods of Optimization, Unconstrained Optimization, 1st edition, Wiley, New York, 1987. |
[48] |
Function minimization by conjugate gradients. Comput. J. (1964) 7: 149-154. ![]() |
[49] |
Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. (1992) 2: 21-42. ![]() |
[50] |
On steepest descent. J. SIAM Control Ser. A (1965) 3: 147-151. ![]() |
[51] |
A nonmonotone line search technique for Newton's method. SIAM J. Numer. ANAL. (1986) 23: 707-716. ![]() |
[52] |
A class of nonmonotone stability methods in unconstrained optimization. Numer. Math. (1991) 59: 779-805. ![]() |
[53] |
A truncated Newton method with nonmonotone line search for unconstrained optimization. J. Optim. Theory Appl. (1989) 60: 401-419. ![]() |
[54] |
Nonmonotone globalization techniques for the Barzilai-Borwein gradient method. Comput. Optim. Appl. (2002) 23: 143-169. ![]() |
[55] |
A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. (2005) 16: 170-192. ![]() |
[56] |
Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Software (2006) 32: 113-137. ![]() |
[57] | A survey of nonlinear conjugate gradient methods. Pac. J. Optim. (2006) 2: 35-58. |
[58] | Combining quasi-Newton and Cauchy directions. Int. J. Appl. Math. (2003) 12: 167-191. |
[59] |
A new type of quasi-Newton updating formulas based on the new quasi-Newton equation. Numer. Algebra Control Optim. (2020) 10: 227-235. ![]() |
[60] |
Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Standards (1952) 49: 409-436. ![]() |
[61] |
Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. (1991) 71: 399-405. ![]() |
[62] |
Fixed points by a new iteration method. Proc. Am. Math. Soc. (1974) 44: 147-150. ![]() |
[63] |
B. Ivanov, P. S. Stanimirović, G. V. Milovanović, S. Djordjević and I. Brajević, Accelerated multiple step-size methods for solving unconstrained optimization problems, Optimization Methods and Software, (2019). doi: 10.1080/10556788.2019.1653868
![]() |
[64] |
Applications of the conjugate gradient method in optimal surface parameterizations. Int. J. Comput. Math. (2010) 87: 1032-1039. ![]() |
[65] |
A hybrid conjugate gradient method with descent property for unconstrained optimization. Appl. Math. Model. (2015) 39: 1281-1290. ![]() |
[66] |
S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory Appl., 2013 (2013), Article number: 69, 10 pp. doi: 10.1186/1687-1812-2013-69
![]() |
[67] | Novel hybrid algorithm in solving unconstrained optimizations problems. International Journal of Novel Research in Physics Chemistry & Mathematics (2017) 4: 36-42. |
[68] |
Implementation of gradient methods for optimization of underage costs in aviation industry. University Thought, Publication in Natural Sciences (2016) 6: 71-74. ![]() |
[69] |
A continuous-time approach to online optimization. J. Dyn. Games (2017) 4: 125-148. ![]() |
[70] |
On a two phase approximate greatest descent method for nonlinear optimization with equality constraints. Numer. Algebra Control Optim. (2018) 8: 315-326. ![]() |
[71] |
A modified BFGS method and its global convergence in nonconvex minimization. J. Comput. Appl. Math. (2001) 129: 15-35. ![]() |
[72] |
A modified PRP conjugate gradient algorithm with trust region for optimization problems. Numer. Funct. Anal. Optim. (2011) 32: 496-506. ![]() |
[73] |
A projected preconditioned conjugate gradient method for the linear response eigenvalue problem. Numer. Algebra Control Optim. (2018) 8: 389-412. ![]() |
[74] |
A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations. Numer. Algebra Control Optim. (2011) 1: 71-82. ![]() |
[75] |
Approximate greatest descent in neural network optimization. Numer. Algebra Control Optim. (2018) 8: 327-336. ![]() |
[76] |
J. Liu, S. Du and Y. Chen, A sufficient descent nonlinear conjugate gradient method for solving -tensor equations, J. Comput. Appl. Math., 371 (2020), 112709, 11 pp. doi: 10.1016/j.cam.2019.112709
![]() |
[77] |
A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. (2015) 70: 2442-2453. ![]() |
[78] |
Efficient generalized conjugate gradient algorithms, part 1: Theory. J. Optim. Theory Appl. (1991) 69: 129-137. ![]() |
[79] |
A risk minimization problem for finite horizon semi-Markov decision processes with loss rates. J. Dyn. Games (2018) 5: 143-163. ![]() |
[80] |
I. E. Livieris and P. Pintelas, A descent Dai-Liao conjugate gradient method based on a modified secant equation and its global convergence, ISRN Computational Mathematics, 2012 (2012), Article ID 435495. doi: 10.5402/2012/435495
![]() |
[81] |
M. Lotfi and S. M. Hosseini, An efficient Dai-Liao type conjugate gradient method by reformulating the CG parameter in the search direction equation, J. Comput. Appl. Math., 371 (2020), 112708, 15 pp. doi: 10.1016/j.cam.2019.112708
![]() |
[82] |
Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Applied Soft Computing (2008) 8: 1068-1073. ![]() |
[83] |
Mean value methods in iterations. Proc. Amer. Math. Soc. (1953) 4: 506-510. ![]() |
[84] |
Scalar correction method for solving large scale unconstrained minimization problems. J. Optim. Theory Appl. (2011) 151: 304-320. ![]() |
[85] |
S. K. Mishra and B. Ram, Introduction to Unconstrained Optimization with R, 1st edition, Springer Singapore, Springer Nature Singapore Pte Ltd., 2019. doi: 10.1007/978-981-15-0894-3
![]() |
[86] | I. S. Mohammed, M. Mamat, I. Abdulkarim and F. S. Bt. Mohamad, A survey on recent modifications of conjugate gradient methods, Proceedings of the UniSZA Research Conference 2015 (URC 5), Universiti Sultan Zainal Abidin, 14–16 April 2015. |
[87] |
A brief survey of methods for solving nonlinear least-squares problems. Numer. Algebra Control Optim. (2019) 9: 1-13. ![]() |
[88] | A survey of sufficient descent conjugate gradient methods for unconstrained optimization. SUT J. Math. (2014) 50: 167-203. |
[89] |
J. L. Nazareth, Conjugate-Gradient Methods, In: C. Floudas and P. Pardalos (eds), Encyclopedia of Optimization, 2 edition, Springer, Boston, 2009. doi: 10.1007/978-0-387-74759-0
![]() |
[90] |
J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, 1999. doi: 10.1007/b98874
![]() |
[91] |
W. F. H. W. Osman, M. A. H. Ibrahim and M. Mamat, Hybrid DFP-CG method for solving unconstrained optimization problems, Journal of Physics: Conf. Series, 890 (2017), 012033. doi: 10.1088/1742-6596/890/1/012033
![]() |
[92] Initial improvement of the hybrid accelerated gradient descent process, Bull. Aust. Math. Soc., 98 (2018), 331-338.
[93] A modified conjugate gradient algorithm, Oper. Res., 26 (1978), 1073-1078.
[94] An accelerated double step-size method in unconstrained optimization, Appl. Math. Comput., 250 (2015), 309-319.
[95] Determination of accelerated factors in gradient descent iterations based on Taylor's series, University Thought, Publication in Natural Sciences, 7 (2017), 41-45.
[96] Hybridization of accelerated gradient descent method, Numer. Algorithms, 79 (2018), 769-786.
[97] A note on hybridization process applied on transformed double step size model, Numer. Algorithms, 85 (2020), 449-465.
[98] M. J. Petrović and P. S. Stanimirović, Accelerated double direction method for solving unconstrained optimization problems, Math. Probl. Eng., 2014 (2014), Article ID 965104, 8 pages. doi: 10.1155/2014/965104
[99] M. J. Petrović, P. S. Stanimirović, N. Kontrec and J. Mladenović, Hybrid modification of accelerated double direction method, Math. Probl. Eng., 2018 (2018), Article ID 1523267, 8 pages. doi: 10.1155/2018/1523267
[100] A new class of efficient and globally convergent conjugate gradient methods in the Dai-Liao family, Optim. Methods Softw., 30 (2015), 843-863.
[101] Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 6 (1890), 145-210.
[102] E. Polak and G. Ribière, Note sur la convergence des méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle, 3 (1969), 35-43.
[103] The conjugate gradient method in extreme problems, USSR Comput. Math. and Math. Phys., 9 (1969), 94-112.
[104] Algorithms for nonlinear constraints that use Lagrangian functions, Math. Programming, 14 (1978), 224-248.
[105] On the Barzilai and Borwein choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321-326.
[106] The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26-33.
[107] Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21 (2002), 155-167.
[108] A new family of conjugate gradient coefficient with application, International Journal of Engineering & Technology, 7 (2018), 36-43.
[109] Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 157 (2004), 393-405.
[110] The application of new conjugate gradient methods in estimating data, International Journal of Engineering & Technology, 7 (2018), 25-27.
[111] Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization, Numer. Algebra Control Optim., 8 (2018), 377-387.
[112] New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl., 178 (2018), 860-884.
[113] Computation of {2,4} and {2,3}-inverses based on rank-one updates, Linear Multilinear Algebra, 66 (2018), 147-166.
[114] Computing {2,4} and {2,3}-inverses by using the Sherman-Morrison formula, Appl. Math. Comput., 273 (2015), 584-603.
[115] Accelerated gradient descent methods with line search, Numer. Algor., 54 (2010), 503-520.
[116] P. S. Stanimirović, G. V. Milovanović, M. J. Petrović and N. Z. Kontrec, A transformation of accelerated double step-size method for unconstrained optimization, Math. Probl. Eng., 2015 (2015), Article ID 283679, 8 pages. doi: 10.1155/2015/283679
[117] Global convergence of nonmonotone descent methods for unconstrained optimization problems, J. Comp. Appl. Math., 146 (2002), 89-98.
[118] Identification of Hessian matrix in distributed gradient based multi agent coordination control systems, Numer. Algebra Control Optim., 9 (2019), 297-318.
[119] W. Sun and Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, 1st edition, Springer, New York, 2006. doi: 10.1007/b106451
[120] Non-monotone trust-region algorithm for nonlinear optimization subject to convex constraints, Math. Prog., 77 (1997), 69-94.
[121] Efficient hybrid conjugate gradient techniques, J. Optim. Theory Appl., 64 (1990), 379-397.
[122] A class of gradient unconstrained minimization algorithms with adaptive step-size, J. Comp. Appl. Math., 114 (2000), 367-386.
[123] New quasi-Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188.
[124] New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems, Appl. Math. Comput., 179 (2006), 407-430.
[125] The convergence properties of some new conjugate gradient methods, Appl. Math. Comput., 183 (2006), 1341-1350.
[126] Convergence conditions for ascent methods, SIAM Rev., 11 (1969), 226-235.
[127] Survey of derivative-free optimization, Numer. Algebra Control Optim., 10 (2020), 537-555.
[128] A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310-319.
[129] Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl., 28 (2004), 203-225.
[130] X. Yang, Z. Luo and X. Dai, A global convergence of LS-CD hybrid conjugate gradient method, Adv. Numer. Anal., 2013 (2013), Article ID 517452, 5 pp. doi: 10.1155/2013/517452
[131] S. Yao, X. Lu and Z. Wei, A conjugate gradient method with global convergence for large-scale unconstrained optimization problems, J. Appl. Math., 2013 (2013), Article ID 730454, 9 pp. doi: 10.1155/2013/730454
[132] Convergence analysis of a modified BFGS method on convex minimizations, Comp. Optim. Appl., 47 (2010), 237-255.
[133] A note about WYL's conjugate gradient method and its applications, Appl. Math. Comput., 191 (2007), 381-388.
[134] A new stepsize for the steepest descent method, J. Comput. Math., 24 (2006), 149-156.
[135] A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems, Appl. Numer. Math., 147 (2020), 129-141.
[136] G. Yuan, T. Li and W. Hu, A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration, J. Inequal. Appl., 2019 (2019), Article number 247, 25 pp. doi: 10.1186/s13660-019-2192-6
[138] |
An improved Wei-Yao-Liu nonlinear conjugate gradient method for optimization computation. Appl. Math. Comput. (2009) 215: 2269-2274. ![]() |
[139] |
Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems. J. Optim. Theory Appl. (2017) 175: 502-509. ![]() |
[140] |
Semi-local convergence of the Newton-HSS method under the center Lipschitz condition. Numer. Algebra Control Optim. (2019) 9: 85-99. ![]() |
[141] |
A nonlinear conjugate gradient method based on the MBFGS secant condition. Optim. Methods Softw. (2006) 21: 707-714. ![]() |
[142] | G. Zoutendijk, Nonlinear programming, computational methods, In: J. Abadie (eds.): Integer and Nonlinear Programming, North-Holland, Amsterdam, (1970), 37–86. |
[143] | A new gradient method for solving linear regression model. International Journal of Recent Technology and Engineering (2019) 7: 624-630. |