Research article

Normalized solution for a kind of coupled Kirchhoff systems

  • Received: 22 November 2024 Revised: 04 January 2025 Accepted: 15 January 2025 Published: 10 February 2025
  • In this paper, we investigate the existence of a normalized solution for the following Kirchhoff system in the entire space $\mathbb{R}^N$ ($N \ge 3$):

    $\begin{cases} -\left(1+\int_{\mathbb{R}^N}|\nabla u|^2\,dx\right)\Delta u=\lambda_1 u+\mu_1|u|^{p-2}u+\beta r_1|u|^{r_1-2}u|v|^{r_2}, \\ -\left(1+\int_{\mathbb{R}^N}|\nabla v|^2\,dx\right)\Delta v=\lambda_2 v+\mu_2|v|^{q-2}v+\beta r_2|u|^{r_1}|v|^{r_2-2}v, \end{cases} \qquad (P)$

    under the constraints $\int_{\mathbb{R}^N}|u|^2\,dx=m_1$ and $\int_{\mathbb{R}^N}|v|^2\,dx=m_2$, where $m_1,m_2>0$ are prescribed. The parameters satisfy $\mu_1,\mu_2,\beta>0$, $2<p,q<2+\frac{8}{N}$, $r_1,r_2>1$, and $r_1+r_2=2^*=\frac{2N}{N-2}$. The frequencies $\lambda_1,\lambda_2$ appear as Lagrange multipliers. With the help of the Pohožaev manifold and the minimization of the energy functional over a combination of the mass constraints and closed balls, we obtain a positive ground state solution to (P). We mainly extend the results of Yang (Normalized ground state solutions for Kirchhoff-type systems) on the above problem from a single critical nonlinearity to a coupled critical nonlinearity.

    Citation: Shiyong Zhang, Qiongfen Zhang. Normalized solution for a kind of coupled Kirchhoff systems[J]. Electronic Research Archive, 2025, 33(2): 600-612. doi: 10.3934/era.2025028




    Solving systems of nonlinear equations is an important part of optimization theory and algorithms and is essential to applications in machine learning, artificial intelligence, economic planning, and other important fields. The theoretical study of algorithms for nonlinear systems of equations is an important research topic in computational mathematics, operations research and optimal control, and numerical algebra. As early as the 1970s, the monographs of Ortega [1] and Rheinboldt [2] systematically studied nonlinear systems of equations in terms of both theory and solution methods.

    An early and very famous method is Newton's method [3]. Its advantage is that the algorithm converges quadratically when the initial point is chosen close to a solution; however, Newton's method does not necessarily converge when the initial point is far from a solution, and it requires the computation and storage of Jacobian matrices. Many researchers have therefore proposed quasi-Newton methods [4,5]. Such algorithms use approximate Jacobian matrices while inheriting the fast convergence of Newton's method. In addition, methods for solving nonlinear least-squares problems include the Gauss-Newton method, the Levenberg-Marquardt algorithm, and their various modified forms [6,7,8]. All of the above methods require computing and storing Jacobian matrices or approximate Jacobian matrices at each iteration step. The nonlinear conjugate gradient method is a classical first-order optimization method with a simple structure, small storage requirements, and low computational complexity, and it has therefore been widely studied by the optimization community [9,10,11,12,13]. Derivative-free algorithms are among the popular algorithms for solving large-scale nonlinear systems of equations; they combine the hyperplane projection technique [14] with the structure of first-order optimization methods and achieve an R-linear convergence rate under appropriate conditions. These algorithms have a simple structure, small storage requirements, and low computational complexity, and they do not require derivative information, so they are favored by many researchers.

    To ensure a descent search direction, adjusting the structure of the search direction has become another important line of research on nonlinear conjugate gradient methods. Yuan et al. [15] proposed an improved weak Wolfe-Powell line search and proved the method's global convergence for general functions under appropriate conditions. The work in [16] proposed an adaptive scaled damped BFGS method (SDBFGS) for non-convex objective functions whose gradients are not Lipschitz continuous; the approach is attractive because the algorithm has strong self-correcting properties for large eigenvalues. In recent years, influenced by [17], first-order optimization methods exemplified by conjugate gradient (CG) techniques have been widely adopted for large-scale unconstrained optimization because they are straightforward and require little memory [9,10,11,12,13]. These extensions, together with freshly developed techniques, are variants of well-known conjugate gradient algorithms, another key numerical tool for unconstrained optimization [18,19,20,21].

    The three-term conjugate gradient method [22] is considered to have attractive numerical behavior and good theoretical properties. Yuan [23] presented an adaptive three-term Polak-Ribière-Polyak (PRP) method for non-convex functions whose gradients are not Lipschitz continuous. An efficient conjugate gradient algorithm is expected to deliver both rapid convergence and high precision. The work in [24] proposed a mixed inertia spectral conjugate gradient projection method for solving constrained nonlinear monotone equations, which excels at solving large-scale equations and at recovering blurred images contaminated by Gaussian noise. The work in [25] described two families of self-tuning spectral hybrid DL conjugate gradient methods, whose search directions are improved by integrating negative spectral gradients into a final search direction with a convex combination structure. The work in [26] proposed a biased stochastic conjugate gradient (BSCG) algorithm with adaptive step size that combines the stochastic recursive gradient method (SARAH) and the improved Barzilai-Borwein (BB) technique within a typical stochastic gradient algorithm. The work in [27] applied an improved three-term conjugate gradient method and a line search technique to machine learning, obtaining the same convergence rate as the stochastic conjugate gradient algorithm (SCGA) under weaker assumptions.

    The idea of this paper is to combine the family of weak conjugate gradient methods proposed in [28] with the new parametric formulation of the HS conjugate gradient algorithm proposed in [29]. An efficient HS conjugate gradient method (EHSCG) is proposed and applied to image restoration problems and machine learning. The basic features of the algorithm are as follows:

    ● The new algorithm has sufficient descent and trust-region properties without any additional conditions.

    ● It converges globally under suitable conditions.

    ● It can be applied to image restoration problems, nonlinear monotone equations, and machine learning tasks.

    In Section 2, we present the procedure for solving the nonlinear equation model and establish the related properties. In Section 3, the global convergence of the algorithm for non-convex functions is proved under the weak Wolfe-Powell line-search condition and standard assumptions. In Section 4, we apply the algorithms to image restoration, machine learning tasks, and nonlinear monotone equations.

    Consider the following nonlinear model:

    $g(x)=0, \qquad (2.1)$

    where $g:\mathbb{R}^n\to\mathbb{R}^n$ is a continuously differentiable monotone function and $x\in\mathbb{R}^n$. Monotonicity of $g$ means that the following inequality holds:

    $(g(m_1)-g(m_2))^T(m_1-m_2)\ge 0,\quad \forall\, m_1,m_2\in\mathbb{R}^n. \qquad (2.2)$

    Scholars have developed a number of interesting theoretical results for this model.

    To solve this problem, one typically uses the iteration $x_{k+1}=x_k+\alpha_k d_k$, where $x_k$ denotes the current iterate, $x_{k+1}$ the next iterate, $\alpha_k$ the step length, and $d_k$ the search direction, which is framed as

    $d_k=-g_k+\beta_{k-1}d_{k-1}. \qquad (2.3)$

    The projection technique covered in [14] for solving large-scale nonlinear equation systems tightly couples the search direction and the step size. In particular, hyperplane and projection technologies are leveraged to obtain the next iteration point:

    $x_{k+1}=\begin{cases} w_k, & \|g(w_k)\|=0, \\ x_k-\dfrac{g(w_k)^T(x_k-w_k)}{\|g(w_k)\|^2}\,g(w_k), & \text{otherwise}, \end{cases} \qquad (2.4)$

    where $w_k=x_k+\alpha_k d_k$.

    Furthermore, a line search was presented in [30] to determine the step length $\alpha_k$ in the iteration, as follows:

    $-g_{k+1}^T d_k \ge \varpi \alpha_k \|g_{k+1}\| \|d_k\|^2, \qquad (2.5)$

    where $g_{k+1}=g(x_k+\alpha_k d_k)$, the step length $\alpha_k=\max\{\tilde q,\tilde q\dot t,\tilde q\dot t^2,\ldots\}$ satisfies (2.5), and $\tilde q>0$, $\dot t\in(0,1)$, $\varpi>0$.

    In [28], a modified search direction is adopted,

    $d_k^{\ast}=-g_k+r_k d_k=-(1+r_k)g_k+r_k\beta_{k-1}d_{k-1}, \qquad (2.6)$

    where $d_k$ is defined in (2.3), and $r_k$ is designated as follows:

    $r_k=\vartheta\dfrac{\|g_k\|}{\|d_k\|}.$

    The authors show that, under the conjugacy condition, this direction can be equated with that of the traditional HS conjugate gradient method. They executed numerous tests that illustrate the superiority of the formulation in tackling large-scale optimization problems.

    Yuan et al. [29] updated the HS conjugate gradient algorithm with the following parametric formulation:

    $\hat d_k=\begin{cases} -\tau_1 g_k+\dfrac{g_k^T y_{k-1} d_{k-1}-d_{k-1}^T g_k\, y_{k-1}}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|,\ \tau_3|d_{k-1}^T y_{k-1}|\}}, & k\ge 2, \\ -\tau_1 g_k, & k=1, \end{cases} \qquad (2.7)$

    where $y_{k-1}=g_k-g_{k-1}$ and $\tau_1,\tau_2,\tau_3>0$. The authors prove that their method fulfills a sufficient descent condition and converges globally.

    Inspired by (2.6) and (2.7), and considering the excellent theoretical and numerical performance of both algorithms, we obtain the direction below:

    $\begin{aligned} d_k &= -g_k+\vartheta\frac{\|g_k\|}{\|\hat d_k\|}\hat d_k \\ &= -g_k+\vartheta\frac{\|g_k\|}{\|\hat d_k\|}\left(-\tau_1 g_k+\frac{g_k^T y_{k-1} d_{k-1}-d_{k-1}^T g_k\, y_{k-1}}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|,\ \tau_3|d_{k-1}^T y_{k-1}|\}}\right) \\ &= -\left(1+\frac{\nu_1\|g_k\|}{\ell_k}\right)g_k+\frac{\|g_k\|\left(g_k^T y_{k-1} d_{k-1}-d_{k-1}^T g_k\, y_{k-1}\right)}{\ell_k \max\{\nu_2\|d_{k-1}\|\|y_{k-1}\|,\ \nu_3|d_{k-1}^T y_{k-1}|\}}, \end{aligned}$

    where $\ell_k=\sqrt{\tau_1^2\|g_k\|^2+\dfrac{\left(|g_k^T y_{k-1}|\|d_{k-1}\|+|d_{k-1}^T g_k|\|y_{k-1}\|\right)^2}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|,\ \tau_3|d_{k-1}^T y_{k-1}|\}^2}}$ stands in for $\|\hat d_k\|$.

    On account of the above derivation, we finally arrive at the improved formulation of $d_{k+1}$ used in this paper:

    $d_{k+1}=\begin{cases} -\left(1+\dfrac{\nu_1\|g_{k+1}\|}{\ell_{k+1}}\right)g_{k+1}+\dfrac{\|g_{k+1}\|\left(g_{k+1}^T y_k d_k-d_k^T g_{k+1}\, y_k\right)}{\ell_{k+1}\max\{\nu_2\|d_k\|\|y_k\|,\ \nu_3|d_k^T y_k|\}}, & k\ge 1, \\ -\left(1+\dfrac{\nu_1\|g_{k+1}\|}{\ell_{k+1}}\right)g_{k+1}, & k=0, \end{cases} \qquad (2.8)$

    where $\ell_{k+1}=\sqrt{\tau_1^2\|g_{k+1}\|^2+\dfrac{\left(|g_{k+1}^T y_k|\|d_k\|+|d_k^T g_{k+1}|\|y_k\|\right)^2}{\max\{\tau_2\|d_k\|\|y_k\|,\ \tau_3|d_k^T y_k|\}^2}}$, $y_k=g_{k+1}-g_k$, $\nu_1=\vartheta\tau_1$, $\nu_2=\tau_2/\vartheta$, $\nu_3=\tau_3/\vartheta$, with $\vartheta>0$, $\nu_i>0$, and $\tau_i>0$, $i=1,2,3$. Based on (2.8), combined with the line search (2.5), the sufficient descent and trust-region properties of the algorithm are provided below, and the global convergence of the algorithm can be certified. Meanwhile, the numerical results also confirm the effectiveness and feasibility of the algorithm.
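    To make the construction concrete, the following Python sketch (an illustration in our notation, not the MATLAB code used in the experiments; the tau defaults are placeholder assumptions) evaluates the direction (2.8). Because $g_{k+1}$ is orthogonal to the numerator $g_{k+1}^Ty_kd_k-d_k^Tg_{k+1}y_k$, the identity $g_{k+1}^Td_{k+1}=-\left(1+\nu_1\|g_{k+1}\|/\ell_{k+1}\right)\|g_{k+1}\|^2$ used in the proofs holds by construction:

        import numpy as np

        def ehscg_direction(g_new, g_old=None, d_old=None, vartheta=0.8,
                            tau=(1.0, 2400.0, 800.0), eps=1e-12):
            # Direction (2.8) with nu1 = vartheta*tau1, nu2 = tau2/vartheta,
            # nu3 = tau3/vartheta; the `tau` values are illustrative placeholders.
            t1, t2, t3 = tau
            gnorm = np.linalg.norm(g_new)
            if d_old is None:                    # k = 0 branch of (2.8)
                ell = max(t1 * gnorm, eps)       # ell reduces to tau1*||g|| here
                return -(1.0 + vartheta * t1 * gnorm / ell) * g_new
            y = g_new - g_old                    # y_k = g_{k+1} - g_k
            a = np.linalg.norm(d_old) * np.linalg.norm(y)
            b = abs(float(d_old @ y))
            m = max(t2 * a, t3 * b, eps)         # max{tau2*||d||*||y||, tau3*|d'y|}
            gy, dg = float(g_new @ y), float(d_old @ g_new)
            num = gy * d_old - dg * y            # numerator of (2.8)
            ell = max(np.sqrt((t1 * gnorm) ** 2 +
                              ((abs(gy) * np.linalg.norm(d_old)
                                + abs(dg) * np.linalg.norm(y)) / m) ** 2), eps)
            return (-(1.0 + vartheta * t1 * gnorm / ell) * g_new
                    + vartheta * gnorm / (ell * m) * num)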

    Assumption 2.1. (i) The solution set of problem (2.1) is nonempty. (ii) $g(x)$ is Lipschitz continuous; that is, there exists $L>0$ such that

    $\|g(m_1)-g(m_2)\|\le L\|m_1-m_2\|,\quad \forall\, m_1,m_2\in\mathbb{R}^n. \qquad (2.9)$

    Algorithm 1 Efficient HS conjugate gradient method (EHSCG)
        1: Choose an initial point $x_0\in\mathbb{R}^n$; constants $\varpi,\vartheta,\tilde q>0$; $\dot t,\varepsilon\in(0,1)$; $\nu_1,\nu_2,\nu_3>0$. Let $k=1$.
        2: If $\|g_k\|\le\varepsilon$, stop; otherwise, compute $d_k$ by (2.8).
        3: Select the step size $\alpha_k$ by the line search (2.5).
        4: Set the trial point $w_k=x_k+\alpha_k d_k$.
        5: If $\|g(w_k)\|\le\varepsilon$, stop and set $x_{k+1}=w_k$. Otherwise, construct the next iterate by $x_{k+1}=x_k-\dfrac{g(w_k)^T(x_k-w_k)}{\|g(w_k)\|^2}\,g(w_k)$.
        6: Let $k=k+1$ and go to Step 2.
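    A compact sketch of the whole of Algorithm 1, reusing ehscg_direction from above, might then read as follows (the cap on backtracking trials is an implementation safeguard added here, not part of the algorithm):

        def solve_monotone(g, x0, eps=1e-5, varpi=0.01, q=1.0, tdot=0.5,
                           max_iter=100000):
            # Derivative-free projection scheme: direction (2.8),
            # line search (2.5), hyperplane projection step (2.4).
            x = np.asarray(x0, dtype=float)
            gx, g_old, d = g(x), None, None
            for _ in range(max_iter):
                if np.linalg.norm(gx) <= eps:
                    break
                d = ehscg_direction(gx, g_old, d)
                alpha = q
                for _ in range(60):          # trial steps alpha = q * tdot^a
                    w = x + alpha * d
                    gw = g(w)
                    if -float(gw @ d) >= varpi * alpha * np.linalg.norm(gw) * float(d @ d):
                        break                # line search (2.5) holds
                    alpha *= tdot
                if np.linalg.norm(gw) <= eps:
                    return w
                g_old = gx
                x = x - float(gw @ (x - w)) / float(gw @ gw) * gw   # projection (2.4)
                gx = g(x)
            return x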

    This assumption implies that there exists a constant $\omega>0$ such that

    $\|g_k\|\le\omega. \qquad (2.10)$

    Theorem 2.1. If $d_k$ satisfies Eq (2.8), then

    (1) sufficient descent:

    $g_{k+1}^T d_{k+1}\le-(1-\vartheta)\|g_{k+1}\|^2, \qquad (2.11)$

    and (2) trust region:

    $\|d_{k+1}\|\le(1+\vartheta)\|g_{k+1}\|. \qquad (2.12)$

    Proof. A remarkably similar proof is given in Yuan's work [28] and is not reiterated here.

    Lemma 2.2. Suppose Assumption 2.1 holds and $g$ is monotone. Let the iterative sequence $\{x_k\}$ be generated by Algorithm 1, and let the point $x^{\ast}$ satisfy $g(x^{\ast})=0$. Furthermore, if the sequence $\{x_k\}$ is infinite, then

    $\sum_{k=0}^{\infty}\|x_{k+1}-x_k\|^2<\infty. \qquad (2.13)$

    Proof. A detailed proof is available in [14].

    Theorem 2.3. If Assumption 2.1 holds, then at every iteration Algorithm 1 determines a step size $\alpha_k$ after finitely many backtracking trials.

    Proof. We argue by contradiction. Denote the index set $\varphi=\mathbb{N}\cup\{0\}$, take any $k\in\varphi$, and consider the iteration at $x_k$. Assume that no step size satisfying the line search exists; that is, every trial step $\alpha=\tilde q\dot t^{a}$ satisfies

    $-\mathring g^T d_k<\varpi\alpha\|\mathring g\|\|d_k\|^2, \qquad (2.14)$

    where $\mathring g=g(x_k+\alpha d_k)$. Referring to (2.9) and (2.14), we have

    $\begin{aligned} \|g_k\|^2 &= -\frac{1}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\,g_k^T d_k = \frac{1}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\left((\mathring g-g_k)^T d_k-\mathring g^T d_k\right) \\ &\le \frac{1}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\left(\|\mathring g-g_k\|\|d_k\|+\varpi\alpha\|\mathring g\|\|d_k\|^2\right) \le \frac{1}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\left(L\|x_k+\alpha d_k-x_k\|\|d_k\|+\varpi\alpha\|\mathring g\|\|d_k\|^2\right) \\ &= \frac{1}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\left(L\alpha\|d_k\|^2+\varpi\alpha\|\mathring g\|\|d_k\|^2\right) = \frac{\alpha}{1+\vartheta\frac{\|g_k\|}{\ell_k}}\left(L+\varpi\|\mathring g\|\right)\|d_k\|^2. \end{aligned}$

    By (2.9) and (2.10), then

    $\|\mathring g\|\le\|\mathring g-g_k\|+\|g_k\|\le L\|x_k+\alpha d_k-x_k\|+\omega=L\alpha\|d_k\|+\omega\le L\alpha(1+\vartheta)\|g_k\|+\omega\le\left(L\tilde q(1+\vartheta)+1\right)\omega.$

    Further, we obtain

    $\alpha\ge\frac{\left(1+\vartheta\frac{\|g_k\|}{\ell_k}\right)\|g_k\|^2}{\left(L+\varpi\|g(x_k+\alpha d_k)\|\right)\|d_k\|^2}\ge\frac{\left(1+\vartheta\frac{\|g_k\|}{\ell_k}\right)\|g_k\|^2}{\left(L+\varpi(L\tilde q(1+\vartheta)+1)\omega\right)\|d_k\|^2}\ge\frac{2+\vartheta}{\left(L+\varpi(L\tilde q(1+\vartheta)+1)\omega\right)(1+\vartheta)^3}>0.$

    The above inequality shows that the trial step sizes are bounded below by a positive constant, which contradicts $\alpha=\tilde q\dot t^{a}\to0$ as $a\to\infty$. Hence the line search (2.5) is satisfied after finitely many backtracking trials. $\square$

    Theorem 2.4. If Assumption 2.1 holds and Algorithm 1 generates $\{\alpha_k\}$, $\{d_k\}$, $\{x_k\}$, and $\{g_k\}$, then $\liminf_{k\to\infty}\|g_k\|=0$.

    Proof. This theorem is also proved by contradiction. Denote the index set $\varphi=\mathbb{N}\cup\{0\}$ and let $k\in\varphi$. Suppose there exist $\varrho,n_0>0$ such that

    $\|g_k\|\ge\varrho \quad \text{for all } k\ge n_0, \qquad (2.15)$

    where $\varrho$ is a constant and $n_0$ is an index. According to (2.12) and (2.10), we have

    $\|d_{k+1}\|\le(1+\vartheta)\|g_{k+1}\|\le(1+\vartheta)\omega.$

    Combining Eq (2.11) with (2.12), we have

    $\|g_k\|\|d_k\|\ge-g_k^T d_k=\left(1+\vartheta\frac{\|g_k\|}{\ell_k}\right)\|g_k\|^2,$
    $\|d_k\|\ge\left(1+\frac{\vartheta\varrho}{\|d_k\|}\right)\varrho,$
    $\|d_k\|^2-\varrho\|d_k\|\ge\vartheta\varrho^2,$
    $\left(\|d_k\|-\frac{\varrho}{2}\right)^2\ge\left(\vartheta+\frac14\right)\varrho^2,$
    $\|d_k\|\ge\sqrt{\vartheta+\frac14}\,\varrho+\frac{\varrho}{2},$
    $\sqrt{\vartheta+\frac14}\,\varrho+\frac{\varrho}{2}\le\|d_k\|\le(1+\vartheta)\omega.$

    It follows that the directions $\{d_k\}$ are bounded, so, passing to a subsequence if necessary, we may suppose $\lim_{k\to\infty}d_k=d^{\ast}$; by (2.13) and Theorem 2.3, the iteration points satisfy $\lim_{k\to\infty}x_k=x^{\ast}$. Since the step sizes are bounded, we have

    $\sum_{k=0}^{\infty}\|x_{k+1}-x_k\|^2=\sum_{k=0}^{\infty}\|\alpha_k d_k\|^2<\infty, \qquad (2.16)$
    $\|\alpha_k d_k\|\to0,$
    $\alpha_k\to0.$

    Consequently, for all sufficiently large $k$, the trial step $\alpha_k/\dot t$ fails the line search (2.5); that is,

    $-g\!\left(x_k+\tfrac{\alpha_k}{\dot t}d_k\right)^T d_k<\varpi\tfrac{\alpha_k}{\dot t}\left\|g\!\left(x_k+\tfrac{\alpha_k}{\dot t}d_k\right)\right\|\|d_k\|^2.$

    Taking limits on both sides of this inequality yields

    $-(g^{\ast})^T d^{\ast}\le0.$

    On the other hand, letting $k$ tend to infinity in $g_{k+1}^T d_{k+1}=-\left(1+\vartheta\frac{\|g_{k+1}\|}{\ell_{k+1}}\right)\|g_{k+1}\|^2$ gives

    $-(g^{\ast})^T d^{\ast}\ge\|g^{\ast}\|^2,$

    which implies

    $g^{\ast}=0,$

    but this contradicts (2.15). Thus, the conclusion of the theorem holds. $\square$

    We now apply (2.8) to large-scale unconstrained optimization. Consider the problem

    $\min_{x\in\mathbb{R}^n} f(x), \qquad (3.1)$

    where $f:\mathbb{R}^n\to\mathbb{R}$ is a continuously differentiable function. Set $\nabla f(x)=\digamma(x)$.

    Motivated by the derivation in Section 2, the following formula for $d_k$ is proposed:

    $d_k=\begin{cases} -\left(1+\dfrac{\nu_1\|\digamma_k\|}{\ell_k}\right)\digamma_k+\dfrac{\|\digamma_k\|\left(\digamma_k^T\hat y_{k-1}d_{k-1}-d_{k-1}^T\digamma_k\,\hat y_{k-1}\right)}{\ell_k\max\{\nu_2\|d_{k-1}\|\|\hat y_{k-1}\|,\ \nu_3|d_{k-1}^T\hat y_{k-1}|\}}, & k\ge 2, \\ -\left(1+\dfrac{\nu_1\|\digamma_k\|}{\ell_k}\right)\digamma_k, & k=1, \end{cases} \qquad (3.2)$

    where $\ell_k=\sqrt{\tau_1^2\|\digamma_k\|^2+\dfrac{\left(|\digamma_k^T\hat y_{k-1}|\|d_{k-1}\|+|d_{k-1}^T\digamma_k|\|\hat y_{k-1}\|\right)^2}{\max\{\tau_2\|d_{k-1}\|\|\hat y_{k-1}\|,\ \tau_3|d_{k-1}^T\hat y_{k-1}|\}^2}}$, $\hat y_k=\digamma_{k+1}-\digamma_k$, $\nu_1=\vartheta\tau_1$, $\nu_2=\tau_2/\vartheta$, $\nu_3=\tau_3/\vartheta$, $\vartheta>0$, and $\tau_i>0$, $i=1,2,3$. Analogously to the algorithm of Section 2, $d_k$ has the following properties:

    $\digamma_k^T d_k\le-(1-\vartheta)\|\digamma_k\|^2, \qquad (3.3)$

    and

    $\|d_k\|\le(\vartheta+1)\|\digamma_k\|, \qquad (3.4)$

    where $\vartheta>0$ is a constant. The proofs of the above properties were given in Section 2 and are not repeated in this section.

    This section presents the algorithm and its global convergence analysis.

    Algorithm 2
    1: Choose a point $x_0\in\mathbb{R}^n$; constants $Eps\in(0,1)$, $0<\zeta<\frac12$, $\zeta<\xi<1$; $\vartheta,\nu_1,\nu_2,\nu_3>0$. Let $k=1$.
    2: If $\|\digamma_k\|\le Eps$, stop; otherwise, compute $d_k$ by (3.2).
    3: Select the step size $\alpha_k$ satisfying
                                                $f_{k+1}\le f_k+\zeta\alpha_k\digamma_k^T d_k, \qquad (3.5)$
    and
                                                    $\digamma_{k+1}^T d_k\ge\xi\digamma_k^T d_k. \qquad (3.6)$
    4: Set the new point $x_{k+1}=x_k+\alpha_k d_k$.
    5: Let $k:=k+1$ and go to Step 2.
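    For completeness, a standard bisection realization of the weak Wolfe-Powell conditions (3.5) and (3.6) is sketched below; this is a common textbook scheme, not necessarily the exact routine used in our experiments, and the default parameters are illustrative assumptions:

        import numpy as np

        def weak_wolfe_step(f, grad, x, d, zeta=1e-4, xi=0.9, alpha0=1.0,
                            max_iter=60):
            # Bisection search for a step satisfying (3.5) and (3.6);
            # requires d to be a descent direction, e.g., (3.2).
            lo, hi = 0.0, float('inf')
            alpha = alpha0
            fx = f(x)
            gTd = float(grad(x) @ d)
            for _ in range(max_iter):
                if f(x + alpha * d) > fx + zeta * alpha * gTd:    # (3.5) fails
                    hi = alpha
                elif float(grad(x + alpha * d) @ d) < xi * gTd:   # (3.6) fails
                    lo = alpha
                else:
                    return alpha
                alpha = 2.0 * alpha if hi == float('inf') else 0.5 * (lo + hi)
            return alpha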

    Assumption 3.1. (i) The level set $\Theta=\{x\,|\,f(x)\le f(x_0)\}$ is bounded.

    (ii) $f(x)\in C^2$ is bounded below and its gradient is Lipschitz continuous, which implies that there exists a constant $L>0$ such that

    $\|\digamma(m_1)-\digamma(m_2)\|\le L\|m_1-m_2\|,\quad \forall\, m_1,m_2\in\mathbb{R}^n. \qquad (3.7)$

    Theorem 3.1. If Assumption 3.1 holds and Algorithm 2 generates $\{x_k\}$, $\{\alpha_k\}$, $\{d_k\}$, and $\{\digamma_k\}$, then

    $\lim_{k\to\infty}\|\digamma_k\|=0. \qquad (3.8)$

    Proof. We establish this by contradiction. Assume the theorem does not hold; then there exist a constant $\varepsilon>0$ and $\bar k>0$ such that, for all $k\ge\bar k$,

    $\|\digamma_k\|\ge\varepsilon.$

    By the sufficient descent property (3.3) and the line search (3.5),

    $f_{k+1}-f_k\le\zeta\alpha_k\digamma_k^T d_k\le-\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2.$

    Consequently, summing over $k$ from $0$ to $\infty$,

    $\sum_{k=0}^{\infty}\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2\le\sum_{k=0}^{\infty}(f_k-f_{k+1})=f_0-\lim_{k\to\infty}f_k<\infty, \qquad (3.9)$

    where the finiteness on the right follows from Assumption 3.1(ii). Significantly, Eq (3.9) implies

    $\lim_{k\to\infty}\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2=0.$

    Through the line search (3.6) together with Assumption 3.1,

    $(\digamma_k-\digamma_{k-1})^T d_{k-1}\ge(\xi-1)\digamma_{k-1}^T d_{k-1}\ge(1-\xi)(1-\vartheta)\|\digamma_{k-1}\|^2,$
    $(\digamma_k-\digamma_{k-1})^T d_{k-1}\le\|\digamma_k-\digamma_{k-1}\|\|d_{k-1}\|\le L\alpha_{k-1}\|d_{k-1}\|^2\le L(1+\vartheta)^2\alpha_{k-1}\|\digamma_{k-1}\|^2.$

    In conclusion, we have

    $\alpha_{k-1}\ge\frac{(1-\xi)(1-\vartheta)\|\digamma_{k-1}\|^2}{L(1+\vartheta)^2\|\digamma_{k-1}\|^2}=\frac{(1-\xi)(1-\vartheta)}{L(1+\vartheta)^2}>0,$

    where these inequalities follow from (3.3) and (3.4). Hence $\alpha_k$ is bounded away from zero, which together with $\alpha_k\|\digamma_k\|^2\to0$ forces $\|\digamma_k\|\to0$, contradicting $\|\digamma_k\|\ge\varepsilon$. Hence we obtain (3.8). The proof is complete. $\square$

    All methods are implemented in MATLAB R2021b on a PC running Windows 10 with an Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz (1.80 GHz).

    We compare Algorithm 1 with the modified HS method [29], the three-term PRP method [31], and the traditional PRP method [21,32]. The termination condition for each of the eleven test problems is

    $\|g_k\|\le10^{-5}.$

    The parameters in Algorithm 1 are chosen as $\varpi=0.01$, $\vartheta=0.8$, $\tilde q=1$, $\dot t=0.5$, $\nu_1=1.8$, $\nu_2=3000$, and $\nu_3=1000$. The test problems are taken from [33] and are listed here, with matching initial points $x_0$, for the completeness of this paper, where

    $g(x)=(g_1(x),g_2(x),\ldots,g_s(x))^T.$

    Problem 4.1. Exponential function 1:

    $g_1(x)=e^{x_1-1}-1, \qquad g_t(x)=t\left(e^{x_t-1}-x_t\right), \quad t=2,3,\ldots,s,$

    with initial point $x_0=\left(\frac{s}{s-1},\frac{s}{s-1},\ldots,\frac{s}{s-1}\right)^T$.

    Problem 4.2. Singular function:

    $g_1(x)=\tfrac13 x_1^3+\tfrac12 x_2^2, \qquad g_t(x)=-\tfrac12 x_t^2+\tfrac{t}{3}x_t^3+\tfrac12 x_{t+1}^2, \quad t=2,\ldots,s-1, \qquad g_s(x)=-\tfrac12 x_s^2+\tfrac{s}{3}x_s^3,$

    with initial point $x_0=(1,1,\ldots,1)^T$.

    Problem 4.3.

    $g_t(x)=\ln(x_t+1)-\frac{x_t}{s}, \quad t=1,2,\ldots,s,$

    with initial point $x_0=(1,1,\ldots,1)^T$.

    Problem 4.4. Trigexp function:

    $g_1(x)=3x_1^3+2x_2-5+\sin(x_1-x_2)\sin(x_1+x_2),$
    $g_t(x)=-x_{t-1}e^{x_{t-1}-x_t}+x_t(4+3x_t^2)+2x_{t+1}+\sin(x_t-x_{t+1})\sin(x_t+x_{t+1})-8, \quad t=2,\ldots,s-1,$
    $g_s(x)=-x_{s-1}e^{x_{s-1}-x_s}+4x_s-3,$

    with initial point $x_0=(0,0,\ldots,0)^T$.

    Problem 4.5.

    $g_t(x)=e^{x_t}-1, \quad t=1,2,\ldots,s,$

    with initial point $x_0=\left(\frac1s,\frac2s,\ldots,\frac{s}{s}\right)^T$.

    Problem 4.6. Penalty I function:

    $g_t(x)=\sqrt{10^{-5}}\,(x_t-1), \quad t=1,2,\ldots,s-1,$
    $g_s(x)=\frac{1}{4s}\left(x_1^2+x_2^2+\cdots+x_s^2\right)-\frac14,$

    with initial point $x_0=\left(\frac13,\frac13,\ldots,\frac13\right)^T$.

    Problem 4.7. Variable dimensioned function:

    $g_t(x)=x_t-1, \quad t=1,2,\ldots,s-2,$
    $g_{s-1}(x)=(x_1-1)+2(x_2-1)+3(x_3-1)+\cdots+(s-2)(x_{s-2}-1),$
    $g_s(x)=\left((x_1-1)+2(x_2-1)+3(x_3-1)+\cdots+(s-2)(x_{s-2}-1)\right)^2,$

    with initial point $x_0=\left(1-\frac1s,1-\frac2s,\ldots,1-\frac{s}{s}\right)^T$.

    Problem 4.8. Tridiagonal system:

    $g_1(x)=4(x_1-x_2^2),$
    $g_t(x)=8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2), \quad t=2,3,\ldots,s-1,$
    $g_s(x)=8x_s(x_s^2-x_{s-1})-2(1-x_s),$

    with initial point $x_0=(12,12,\ldots,12)^T$.

    Problem 4.9. Five-diagonal system:

    $g_1(x)=4(x_1-x_2^2)+x_2-x_3^2,$
    $g_2(x)=8x_2(x_2^2-x_1)-2(1-x_2)+4(x_2-x_3^2)+x_3-x_4^2,$
    $g_t(x)=8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2)+x_{t-1}^2-x_{t-2}+x_{t+1}-x_{t+2}^2, \quad t=3,4,\ldots,s-2,$
    $g_{s-1}(x)=8x_{s-1}(x_{s-1}^2-x_{s-2})-2(1-x_{s-1})+4(x_{s-1}-x_s^2)+x_{s-2}^2-x_{s-3},$
    $g_s(x)=8x_s(x_s^2-x_{s-1})-2(1-x_s)+x_{s-1}^2-x_{s-2},$

    with initial point $x_0=(-2,-2,\ldots,-2)^T$.

    Problem 4.10. Seven-diagonal system:

    $g_1(x)=4(x_1-x_2^2)+x_2-x_3^2+x_3-x_4^2,$
    $g_2(x)=8x_2(x_2^2-x_1)-2(1-x_2)+4(x_2-x_3^2)+x_1^2+x_3-x_4^2+x_4-x_5^2,$
    $g_3(x)=8x_3(x_3^2-x_2)-2(1-x_3)+4(x_3-x_4^2)+x_2^2-x_1+x_4-x_5^2+x_1^2+x_5-x_6^2,$
    $g_t(x)=8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2)+x_{t-1}^2-x_{t-2}+x_{t+1}-x_{t+2}^2+x_{t-2}^2+x_{t+2}-x_{t-3}-x_{t+3}^2, \quad t=4,5,\ldots,s-3,$
    $g_{s-1}(x)=8x_{s-1}(x_{s-1}^2-x_{s-2})-2(1-x_{s-1})+4(x_{s-1}-x_s^2)+x_{s-2}^2-x_{s-3}+x_s+x_{s-3}^2-x_{s-4},$
    $g_s(x)=8x_s(x_s^2-x_{s-1})-2(1-x_s)+x_{s-1}^2-x_{s-2}+x_{s-2}^2-x_{s-3},$

    with initial point $x_0=(-3,-3,\ldots,-3)^T$.

    Problem 4.11. Troesch problem:

    $g_1(x)=2x_1+\rho h^2\sinh(\rho x_1)-x_2,$
    $g_t(x)=2x_t+\rho h^2\sinh(\rho x_t)-x_{t-1}-x_{t+1}, \quad t=2,3,\ldots,s-1,$
    $g_s(x)=2x_s+\rho h^2\sinh(\rho x_s)-x_{s-1},$

    where $\rho=10$ and $h=\frac{1}{s+1}$, with initial point $x_0=(0,0,\ldots,0)^T$.
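    To illustrate how these test problems plug into the solver, the sketch of Algorithm 1 given in Section 2 can be run on Problem 4.3 as follows (a minimal example, under the assumption that the iterates stay in the region $x_t>-1$ where the logarithm is defined):

        import numpy as np

        s = 1000
        def g_log(x):
            # Problem 4.3: g_t(x) = ln(x_t + 1) - x_t / s, with root x = 0
            return np.log(x + 1.0) - x / s

        x = solve_monotone(g_log, np.ones(s), eps=1e-5)
        print(np.linalg.norm(g_log(x)))      # <= 1e-5 on success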

    For each problem, four dimensions were considered: 1000, 5000, 10,000, and 50,000. In order to provide a graphical and well-rounded comparison of these methods, this paper utilizes the performance profiles of [34], which convey rich information on both efficiency and robustness: for each method, the profile plots the proportion $P$ of test problems on which that method is within a factor $\tau$ of the best method. The performance profiles are depicted in Figures 1-3. The numerical results are tabulated in Table 1, where "Dim" denotes the dimension and $\|g_k\|$ denotes the final gradient norm.

    Figure 1.  Performance profile based on NI (number of iterations).
    Figure 2.  Performance profile based on NF (number of function evaluations).
    Figure 3.  Performance profile based on CPU time (CPU seconds consumed).
    Table 1.  Numerical results of the considered methods.
    Algorithm 1 Modified HS Three-term PRP PRP
    NO Dim NI/NG/CPU/$\|g_k\|$ NI/NG/CPU/$\|g_k\|$ NI/NG/CPU/$\|g_k\|$ NI/NG/CPU/$\|g_k\|$
    1 1000 293/451/0.140625/9.941213e-06 395/518/0.421875/9.960606e-06 176/177/0.078125/9.999271e-06 187/188/0.062500/9.923051e-06
    1 5000 165/257/0.921875/9.904049e-06 199/270/1.359375/9.989538e-06 105/106/0.609375/9.978384e-06 109/110/0.421875/9.912454e-06
    1 10,000 126/198/1.781250/9.824845e-06 155/211/2.000000/9.923827e-06 85/86/1.125000/9.994038e-06 86/87/1.156250/9.942374e-06
    1 50,000 55/92/1.875000/9.742663e-06 82/113/2.515625/9.859854e-06 47/48/1.000000/9.970994e-06 50/51/1.812500/9.812461e-06
    2 1000 10251/12764/6.437500/9.988723e-06 13248/13441/7.421875/9.984307e-06 24134/29845/14.515625/9.996621e-06 23798/29299/14.640625/9.996136e-06
    2 5000 14093/17753/114.421875/9.991092e-06 14934/15561/110.281250/9.991112e-06 24335/42282/243.843750/9.994119e-06 23669/41392/236.500000/9.999623e-06
    2 10,000 52760/199219/1710.968750/9.997897e-06 15418/16372/218.046875/9.996137e-06 20300/49021/466.765625/9.996696e-06 24989/53363/558.859375/9.990332e-06
    2 50,000 14291/40231/991.046875/9.992350e-06 12791/15588/472.906250/9.997958e-06 22215/103750/2252.015625/9.997605e-06 22660/103815/2278.734375/9.998269e-06
    3 1000 8/16/0.015625/2.973142e-06 29/110/0.031250/9.327860e-06 32/2250/0.375000/1.017432e-08 41/2195/0.343750/3.078878e-06
    3 5000 8/16/0.046875/6.327004e-06 58/300/0.734375/8.380619e-06 68/6691/16.625000/2.462019e-07 76/6599/16.578125/2.149830e-06
    3 10,000 8/16/0.125000/8.892321e-06 80/460/2.968750/8.076240e-06 96/10516/62.625000/1.147079e-09 106/10419/62.609375/1.607070e-06
    3 50,000 10/22/0.640625/3.745708e-06 166/1152/15.500000/6.854968e-06 211/28900/385.796875/5.328693e-08 222/28783/388.671875/2.820361e-06
    4 1000 21/126/0.046875/6.619060e-06 76/484/0.125000/9.029869e-06 4107/375958/66.140625/5.376997e-06 263/25956/4.500000/9.461107e-06
    4 5000 21/126/0.328125/6.579547e-06 101/752/1.843750/6.951950e-06 2534/236972/613.250000/9.063035e-06 286/32183/82.093750/9.754050e-06
    4 10,000 21/126/0.875000/6.570692e-06 121/983/6.312500/8.222816e-06 2505/237950/1454.921875/6.132250e-06 319/38772/230.062500/9.991339e-06
    4 50,000 21/127/1.796875/8.942224e-06 212/2078/27.140625/8.051826e-06 3857/378376/5362.843750/7.595834e-06 441/67359/953.593750/9.805361e-06
    5 1000 10/23/0.015625/1.345156e-06 20/71/0.015625/7.757674e-06 69/1450/0.250000/9.446592e-06 27/1188/0.218750/6.659137e-06
    5 5000 10/23/0.140625/2.987337e-06 37/176/0.437500/6.260531e-06 81/3869/9.406250/8.030132e-06 48/3627/8.281250/1.916037e-06
    5 10,000 10/23/0.343750/4.221010e-06 48/252/1.828125/6.741102e-06 91/5980/36.359375/8.876877e-06 64/5747/32.640625/2.819940e-06
    5 50,000 11/26/0.421875/1.273572e-06 100/668/8.031250/7.429674e-06 139/16176/210.640625/9.126806e-06 131/15985/206.812500/9.985455e-06
    6 1000 5713/5714/1.671875/9.994632e-06 6856/6857/2.031250/9.996719e-06 10285/10286/2.921875/9.999930e-06 10295/10296/2.781250/9.999854e-06
    6 5000 17431/17432/91.203125/9.999935e-06 20918/20919/96.125000/9.999824e-06 31379/31380/151.609375/9.999607e-06 31389/31390/129.140625/9.999784e-06
    6 10,000 27627/27628/324.546875/9.999054e-06 33153/33154/369.937500/9.999230e-06 49731/49732/579.281250/9.999510e-06 49741/49742/613.953125/9.999691e-06
    6 50,000 72200/72201/1871.421875/9.999723e-06 86640/86641/1789.953125/9.999971e-06 129963/129964/3045.625000/9.999862e-06 129974/129975/3245.187500/9.999835e-06
    7 1000 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00
    7 5000 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00
    7 10,000 1/2/0.000000/0.000000e+00 1/2/0.062500/0.000000e+00 1/2/0.031250/0.000000e+00 1/2/0.000000/0.000000e+00
    7 50,000 1/2/0.031250/0.000000e+00 1/2/0.015625/0.000000e+00 1/2/0.031250/0.000000e+00 1/2/0.046875/0.000000e+00
    8 1000 7517/52847/9.609375/9.705240e-06 6256/51420/8.187500/9.885174e-06 3867/637967/91.906250/8.900084e-06 3993/663519/92.890625/9.960198e-06
    8 5000 7379/51894/130.171875/9.832652e-06 6535/56778/130.609375/9.978493e-06 4801/844439/1665.171875/8.187114e-06 4368/814256/1623.625000/9.935312e-06
    8 10,000 8284/58431/381.203125/9.808342e-06 6762/61024/383.843750/9.976333e-06 4595/896457/5102.390625/8.054508e-06 4581/921975/5151.968750/9.842915e-06
    8 50,000 7311/51836/708.437500/9.997653e-06 7554/78279/862.500000/9.988038e-06 10463/1986698/21108.781250/9.966481e-06 5525/1401847/15546.796875/9.902653e-06
    9 1000 1315/8569/1.500000/9.980071e-06 1724/11004/2.187500/9.995043e-06 514/65030/9.437500/9.645362e-06 332/43592/6.500000/9.394169e-06
    9 5000 1327/8684/21.875000/9.904454e-06 1465/9883/24.593750/9.959200e-06 574/84298/172.421875/9.690180e-06 421/66124/138.812500/9.520384e-06
    9 10,000 1335/8771/57.531250/9.914197e-06 1537/10704/71.359375/9.886892e-06 2870/356611/2051.125000/9.956931e-06 493/85160/479.187500/9.995629e-06
    9 50,000 1426/9690/137.171875/9.912140e-06 1792/14088/163.593750/9.996782e-06 988/194924/2254.156250/9.358423e-06 884/182395/2254.734375/9.294612e-06
    10 1000 5788/46645/7.625000/9.518479e-06 345/3005/0.671875/8.406790e-06 422/70159/11.015625/4.510865e-06 425/70205/11.125000/9.836083e-06
    10 5000 5669/45772/118.531250/9.521868e-06 608/5639/14.140625/4.101871e-06 597/110926/237.421875/6.498110e-06 500/97170/209.906250/8.589743e-06
    10 10,000 5661/45800/298.265625/9.734609e-06 686/6678/41.671875/8.466836e-06 671/135838/784.437500/8.765321e-06 596/124310/694.171875/9.913071e-06
    10 50,000 5472/44114/616.703125/9.804676e-06 892/10433/118.531250/6.674902e-06 1030/255267/3083.531250/8.165186e-06 1065/260264/3294.484375/9.877143e-06
    11 1000 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00
    11 5000 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.062500/0.000000e+00 0/1/0.000000/0.000000e+00
    11 10,000 0/1/0.000000/0.000000e+00 0/1/0.062500/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00
    11 50,000 0/1/0.015625/0.000000e+00 0/1/0.015625/0.000000e+00 0/1/0.031250/0.000000e+00 0/1/0.031250/0.000000e+00


    As can be seen from Figure 1, Algorithm 1 is the most efficient in terms of the number of iterations, solving 50% of the problems with the fewest iterations. From Figure 2, the most efficient method in terms of the number of function evaluations is also Algorithm 1, which solves 75% of the problems with the fewest function evaluations. Figure 3 shows that Algorithm 1 is likewise the most efficient algorithm in terms of CPU time.

    It can be seen that Algorithm 1 performs very well on large-scale monotone equations. In conclusion, the numerical investigations indicate that the suggested algorithm is an effective tool for solving systems of convex constrained monotone equations.

    Numerical results for the proposed Algorithm 2, the modified HS method [29], the three-term PRP method [31], and the conventional PRP method [21,32] are reported separately. The fifty test problems were taken from [35], and the initial point for every problem is given in Table 2. All algorithms use the weak Wolfe-Powell line-search technique. The termination condition for each test problem is

    $\frac{|f_k-f_{k+1}|}{|f_k|}\le10^{-5},$

    or

    $\|\digamma_k\|\le10^{-6}.$

    The parameters in Algorithm 2 are chosen as $\zeta=0.5$, $\xi=0.95$, $\nu_1=2$, $\nu_2=30{,}000$, and $\nu_3=1000$. For each problem, we considered three dimensions: 30,000, 12,000, and 300,000. The performance profiles are shown in Figures 4-6.

    Table 2.  The test problem.
    NO Problem x0 NO Problem x0
    1 Extended Freudenstein [0.5,2,...,0.5,2] 26 Extended Tridiagonal-2 Function [1,1,...,1]
    2 Extended Trigonometric Function [0.2,0.2,...,0.2] 27 ARWHEAD (CUTE) [1,1,...,1]
    3 Extended Beale Function U63 (MatrixRom) [1,0.8,...,1,0.8] 28 NONDIA (Shanno-78) (CUTE) [1,1,...,1]
    4 Extended Penalty Function [1,2,3,...,n] 29 EG2 (CUTE) [1,1,...,1]
    5 Raydan 1 Function [1,1,...,1] 30 DIXMAANB (CUTE) [2,2,...,2]
    6 Raydan 2 Function [1,1,...,1] 31 DIXMAANC (CUTE) [2,2,...,2]
    7 Diagonal 1 Function [1/n,1/n,...,1/n] 32 DIXMAANE (CUTE) [2,2,...,2]
    8 Diagonal 2 Function [1/1,1/2,...,1/n] 33 Broyden Tridiagonal [1,1,...,1]
    9 Diagonal 3 Function [1,1,...,1] 34 EDENSCH Function (CUTE) [0,0,...,0]
    10 Hager Function [1,1,...,1] 35 VARDIM Function (CUTE) [1-1/n,1-2/n,...,1-n/n]
    11 Generalized Tridiagonal-1 Function [2,2,...,2] 36 LIARWHD (CUTE) [4,4,...,4]
    12 Extended Three Exponential Terms [0.1,0.1,...,0.1] 37 DIAGONAL 6 [1,1,...,1]
    13 Generalized Tridiagonal-2 [1,1,...,1,1] 38 DIXMAANF (CUTE) [2,2,...,2]
    14 Diagonal 4 Function [1,1,...,1,1] 39 DIXMAANG (CUTE) [2,2,...,2]
    15 Diagonal 5 Function (MatrixRom) [1.1,1.1,...,1.1] 40 DIXMAANH (CUTE) [2,2,...,2]
    16 Extended Himmelblau Function [1,1,...,1] 41 DIXMAANI (CUTE) [2,2,...,2]
    17 Generalized PSC1 Function [3,0.1,...,3,0.1] 42 DIXMAANJ (CUTE) [2,2,...,2]
    18 Extended PSC1 Function [3,0.1,...,3,0.1] 43 DIXMAANK (CUTE) [2,2,...,2]
    19 Extended Block Diagonal BD1 Function [0.1,0.1,...,0.1] 44 DIXMAANL (CUTE) [2,2,...,2]
    20 Extended Cliff [0,1,...,0,1] 45 DIXMAAND (CUTE) [2,2,...,2]
    21 Extended Wood Function [-3,-1,-3,-1,...] 46 ENGVAL1 (CUTE) [2,2,...,2]
    22 Extended Quadratic Penalty QP1 Function [1,1,...,1] 47 FLETCHCR (CUTE) [0,0,...,0]
    23 Extended Quadratic Penalty QP2 Function [1,1,...,1] 48 COSINE (CUTE) [1,1,...,1]
    24 A Quadratic Function QF2 [0.5,0.5,...,0.5] 49 Extended DENSCHNB (CUTE) [1,1,...,1]
    25 Extended EP1 Function [1.5,1.5,...,1.5] 50 Extended DENSCHNF (CUTE) [2,0,2,0,...,2,0]

    Figure 4.  Performance profile based on NI (number of iterations).
    Figure 5.  Performance profile based on NFG (the sum of the total iterations and gradient iterations).
    Figure 6.  Performance profile based on CPU time (CPU seconds consumed).

    We next apply the algorithms to image restoration problems. It is well known that in bio-engineering, medicine, and other fields of science and engineering, image restoration techniques play a pivotal role. A common image degradation model is defined by the following:

    $\min_x\ \breve v(x)=\frac12\|\upsilon-\Xi x\|_2^2+\iota\|x\|_1, \qquad (4.1)$

    where $\upsilon\in\mathbb{R}^{m_1}$ is the observation, $\Xi$ is a linear operator of order $m_1\times m_2$, and $\iota$ is a constant greater than zero.

    The regularized model (4.1) has attracted much attention, and several scholars have proposed various iterative methods to deal with it [36,37,38].

    Figueiredo et al. [39] developed a gradient projection program by reformulating (4.1) as a constrained quadratic program. The reformulation splits the vector $x$ into two parts, i.e., $x=s-t$, where $s_i=(x_i)_+$, $t_i=(-x_i)_+$, and $(\cdot)_+=\max\{0,\cdot\}$. Then (4.1) is transformed into

    $\min_{\kappa\ge0}\ \frac12\kappa^T\Phi\kappa+\gamma^T\kappa, \qquad (4.2)$
    $\kappa=\begin{bmatrix}s\\ t\end{bmatrix}, \qquad \gamma=\iota e_{2m_2}+\begin{bmatrix}-\Xi^T\upsilon\\ \Xi^T\upsilon\end{bmatrix}, \qquad \Phi=\begin{bmatrix}\Xi^T\Xi & -\Xi^T\Xi\\ -\Xi^T\Xi & \Xi^T\Xi\end{bmatrix}.$

    Further, Xiao et al. [40] found (4.2) to be equivalent to the following system of nonlinear equations:

    $H(\kappa)=\min\{\kappa,\Phi\kappa+\gamma\}=0. \qquad (4.3)$

    Pang [41] proved that $H$ satisfies (2.9), while Xiao et al. [40] proved that it satisfies (2.2) as well.
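    The pieces of the reformulation (4.2)-(4.3) are easy to assemble for a small dense instance. The following sketch is illustrative only (a random matrix stands in for the blurring operator $\Xi$); it returns the monotone map $H$, to which Algorithm 1 can then be applied:

        import numpy as np

        def make_H(Xi, upsilon, iota):
            # Build Phi and gamma from (4.2) and return H(kappa) of (4.3).
            m2 = Xi.shape[1]
            XtX = Xi.T @ Xi
            Xtv = Xi.T @ upsilon
            Phi = np.block([[XtX, -XtX], [-XtX, XtX]])
            gamma = iota * np.ones(2 * m2) + np.concatenate([-Xtv, Xtv])
            return lambda kappa: np.minimum(kappa, Phi @ kappa + gamma)

        # Example: a root kappa* = [s; t] of H recovers the signal x = s - t.
        rng = np.random.default_rng(0)
        H = make_H(rng.standard_normal((8, 5)), rng.standard_normal(8), 0.1)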

    The stopping condition of the codes is

    $\frac{|\breve v(u_{k+1})-\breve v(u_k)|}{|\breve v(u_k)|}\le10^{-3},$

    or

    $\frac{\|u_{k+1}-u_k\|}{\|u_k\|}\le10^{-3}.$

    In the experiments, "Man", "Baboon", "colorcheckertestimage", and "car" are the tested images. Algorithm 1, the modified HS method [29], the three-term PRP method [31], and NPRP2 [42] are compared on images contaminated with 30% and 70% noise. The parameters in Algorithm 1 are chosen as $\varpi=0.01$, $\vartheta=0.8$, $\tilde q=1$, $\dot t=0.5$, $\nu_1=2.2$, $\nu_2=3000$, and $\nu_3=1200$. The time taken by each algorithm is summarized in Tables 3 and 4. From Tables 3 and 4, the CPU time of Algorithm 1 for both gray and color images is less than that of the other algorithms, which shows that the algorithm proposed in this paper is more advantageous in dealing with gray and color image restoration problems.

    Table 3.  CPU time results for different algorithms for gray images.
    Image (size) Noise Algorithm 1 Modified HS Three-term PRP NPRP2
    Man (1024 × 1024) 0.3 6.6875 24.28125 16.375 33.303140
    0.7 13.09375 32.3125 42.3125 95.854068
    Baboon (512 × 512) 0.3 1.6875 6.140625 3.90625 7.276346
    0.7 2.765625 9.28125 12.14062 22.251381

    Table 4.  CPU time results for different algorithms for color images.
    Image (size) Noise Algorithm 1 Modified HS Three-term PRP NPRP2
    colorcheckertestimage (1541 × 1024) 0.3 9.077376 12.136426 17.377901 16.661125
    0.7 14.084278 37.827917 37.333502 30.093593
    car (3504 × 2336) 0.3 41.74019 62.762366 96.119989 91.296092
    0.7 55.1054 160.855 219.161032 163.100240


    Figures 7 and 8 show that Algorithm 1 performs well in tackling gray image restoration. To visualize the color results, Figures 9 and 10 are given; they show that the algorithm of this paper is competitive in dealing with color image restoration in terms of speed. In image restoration experiments, the PSNR and SSIM values are usually used to estimate the quality of the processed image. The higher the PSNR value, the less distorted the image. The higher the SSIM value, the better the structural similarity; when two images are identical, the SSIM value is 1. As can be seen from Figures 9 and 10, the SSIM values obtained by Algorithm 1 are somewhat smaller than those of the traditional algorithms, so the restoration quality is not as good as theirs in that respect. This aspect will be investigated in the future.

    Figure 7.  From front to back for each row: the images with 30% image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 8.  From front to back for each row: the images with 70% image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 9.  From front to back for each row: the images with 30% image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 10.  From front to back for each row: the images with 70% image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.

    We now embed the improved HS method into the stochastic subspace algorithm of [43] by replacing its search direction with (2.8), thereby building improved HS algorithms based on the stochastic subspace and variance-reduced SCGN schemes (called mSCGN and mSDSCGN, respectively). Next, we test them against the stochastic gradient descent algorithm (SGD) [44] and the stochastic variance reduced gradient algorithm (SVRG) [45] on the following two learning models:

    (i) Nonconvex SVM model with a sigmoid loss function:

    $\min_{x\in\mathbb{R}^d}\ \frac1t\sum_{s=1}^{t}f_s(x)+\lambda\|x\|^2, \qquad (4.4)$

    where $f_s(x)=1-\tanh(\omega_s\langle x,\varpi_s\rangle)$, and $\varpi_s$ and $\omega_s\in\{1,-1\}$ signify the feature vectors and the corresponding labels, respectively. Support vector machine models are implemented in various areas of pattern recognition, currently focusing on information categorization, encompassing text and images.

    (ii) Nonconvex regularized ERM model with a nonconvex sigmoid loss function:

    $\min_{x\in\mathbb{R}^d}\ \frac1t\sum_{s=1}^{t}f_s(x)+\frac{\lambda}{2}\|x\|^2, \qquad (4.5)$

    where $f_s(x)=\dfrac{1}{1+\exp(\epsilon_s\varepsilon_s^T x)}$, $\epsilon_s\in\{1,-1\}$ is the target value of the $s$-th sample, $\varepsilon_s\in\mathbb{R}^d$ is the feature vector of the $s$-th sample, and $\lambda>0$ is the regularization parameter. Binary classification models are instrumental in practical problems. While a non-convex ERM model concentrates on minimizing classification error, the sigmoid loss has also been shown to be usually superior to alternative loss functions. Accordingly, nonconvex ERM models with sigmoid losses are valuable for mathematical modeling.
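    In code, the two objectives (4.4) and (4.5) amount to a few lines each. The sketch below (in our own notation: the rows of A are the samples and y holds the ±1 labels) shows the functions that the stochastic solvers are pointed at:

        import numpy as np

        def svm_objective(x, A, y, lam):
            # Model (4.4): sigmoid-loss SVM, f_s(x) = 1 - tanh(y_s * <x, a_s>)
            return float(np.mean(1.0 - np.tanh(y * (A @ x))) + lam * (x @ x))

        def erm_objective(x, A, y, lam):
            # Model (4.5): nonconvex ERM, f_s(x) = 1 / (1 + exp(y_s * a_s^T x))
            return float(np.mean(1.0 / (1.0 + np.exp(y * (A @ x)))) + 0.5 * lam * (x @ x))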

    The parameters in mSCGN and mSDSCGN are chosen as $\nu_1=3.8$, $\nu_2=300$, and $\nu_3=100$. The datasets come from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. All algorithms were executed on three large datasets, with the specific details of the datasets shown in Table 5. In all experiments, the regularization parameter was chosen as $\lambda=10^{-5}$ and the same batch size $m$ was used in each iteration. The gradient $\nabla f(x)$ and the Hessian matrix $\nabla^2 f(x)$ are estimated by

    $\nabla f(x)_s=\frac{f(x+\sigma e_s)-f(x-\sigma e_s)}{2\sigma},$
    $\nabla^2 f(x)_{s,r}=\frac{f(x+\sigma e_s+\sigma e_r)-f(x+\sigma e_s)-f(x+\sigma e_r)+f(x)}{\sigma^2},$

    where $\sigma=10^{-4}$ and $e_s$ is the $s$-th unit vector. Let the regularization factor $\lambda$ of the learning model be $10^{-4}$; the convergence results of the algorithms are shown in Figures 11 and 12. In this experiment, the maximum number of inner iterations is 30. In principle, the lower the value of the curve, the better the convergence of the corresponding algorithm.
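    These zeroth-order estimates translate directly into code; the following sketch is illustrative (a practical implementation would vectorize the coordinate loop):

        import numpy as np

        def fd_gradient(f, x, sigma=1e-4):
            # Central-difference estimate of the gradient, coordinate by coordinate.
            g = np.zeros_like(x)
            for s in range(x.size):
                e = np.zeros_like(x)
                e[s] = sigma
                g[s] = (f(x + e) - f(x - e)) / (2.0 * sigma)
            return g

        def fd_hessian_entry(f, x, s, r, sigma=1e-4):
            # Forward-difference estimate of the (s, r) Hessian entry.
            es = np.zeros_like(x); es[s] = sigma
            er = np.zeros_like(x); er[r] = sigma
            return (f(x + es + er) - f(x + es) - f(x + er) + f(x)) / sigma ** 2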

    Table 5.  Details of data sets.
    Data set Training samples (t) Dimensions (d)
    Adult 32,562 123
    IJCNN 49,990 22
    Covtype 581,012 54

     | Show Table
    DownLoad: CSV
    Figure 11.  Numerical performance of SGD and mSCGN for solving models 1 and 2 in three datasets.
    Figure 12.  Numerical performance of SVRG and mSDSCGN for solving models 1 and 2 in three datasets.

    The behavior of the different algorithms is revealed by plotting the objective value against the iteration count. Figure 11 illustrates the behavior of mSCGN and SGD in solving (4.4) and (4.5) with alternative decreasing step sizes. As Figure 11 verifies, both algorithms manage to solve the models: the function values decline quickly at first and then gradually level off. Notice that when the mSCGN algorithm and the SGD algorithm share the same step size on a dataset, the former decreases the function value faster. This indicates that the algorithm designed here benefits from its sufficient descent and trust-region properties. Observe that for mSCGN, $\alpha_k=10/k^{0.9}$ converges faster; our algorithm exhibits superiority at appropriate step sizes, and the experimental results are better at larger decreasing step sizes.

    Figure 12 illustrates the behavior of mSDSCGN and SVRG in solving (4.4) and (4.5) at different constant step sizes. Analyzing Figure 12, it is clear that even with its best step size, SVRG does not perform as well as mSDSCGN on any of the three datasets. We observe that, among those step sizes, 0.04 is best suited to mSDSCGN. Overall, our algorithm shows more promise and efficiency than the others. Based on this analysis, one can conclude that our suggested method is useful for machine learning.

    We propose a modified HS conjugate gradient method with the following characteristics: 1) it fulfills sufficient descent and trust-region properties; 2) its global convergence for non-convex functions can be easily established; 3) experiments on image restoration, machine learning, and nonlinear monotone equations indicate that the algorithm has promising numerical performance for tackling these problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the Guangxi Science and Technology Base and Talent Project (Grant No. AD22080047), the National Natural Science Foundation of Guangxi Province (Grant No. 2023GXNFSBA026063), the major talent project of Guangxi (GXR-6BG242404), the Bagui Scholars Program of Guangxi, and the 2024 Graduate Innovative Programs Establishment (Grant No. ZX01030031124006). We are sincerely grateful to the editors and reviewers for their valuable comments, which have further enhanced this paper.

    The authors declare there are no conflicts of interest.



    [1] G. Kirchhoff, Mechanik, Teubner, Leipzig, 1883.
    [2] W. Bao, Y. Cai, Mathematical theory and numerical methods for Bose-Einstein condensation, Kinet. Relat. Models, 6 (2013), 1–135. https://doi.org/10.3934/krm.2013.6.1 doi: 10.3934/krm.2013.6.1
    [3] N. Soave, Normalized ground states for the NLS equation with combined nonlinearities, J. Differ. Equations, 269 (2020), 6941–6987. https://doi.org/10.1016/j.jde.2020.05.016 doi: 10.1016/j.jde.2020.05.016
    [4] B. D. Esry, C. H. Greene, J. P. Burke Jr., J. L. Bohn, Hartree-Fock theory for double condensates, Phys. Rev. Lett., 78 (1997), 3594–3597. https://doi.org/10.1103/PhysRevLett.78.3594 doi: 10.1103/PhysRevLett.78.3594
    [5] Z. Yang, Normalized ground state solutions for Kirchhoff type systems, J. Math. Phys., 62 (2021), 031504. https://doi.org/10.1063/5.0028551 doi: 10.1063/5.0028551
    [6] X. Cao, J. Xu, J. Wang, The existence of solutions with prescribed $L^2$-norm for Kirchhoff type system, J. Math. Phys., 58 (2017), 041502. https://doi.org/10.1063/1.4982037 doi: 10.1063/1.4982037
    [7] G. Che, H. Chen, Existence and asymptotic behavior of positive ground state solutions for coupled nonlinear fractional Kirchhoff-type systems, Comput. Math. Appl., 77 (2019), 173–188. https://doi.org/10.1016/j.camwa.2018.09.020 doi: 10.1016/j.camwa.2018.09.020
    [8] D. Lü, S. Peng, Existence and asymptotic behavior of vector solutions for coupled nonlinear Kirchhoff-type systems, J. Differ. Equations, 263 (2017), 8947–8978. https://doi.org/10.1016/j.jde.2017.08.062 doi: 10.1016/j.jde.2017.08.062
    [9] Y. Jalilian, Existence and multiplicity of solutions for a coupled system of Kirchhoff type equations, Acta Math. Sci., 40 (2020), 1831–1848. https://doi.org/10.1007/s10473-020-0614-7 doi: 10.1007/s10473-020-0614-7
    [10] G. Che, H. Chen, Existence and multiplicity of systems of Kirchhoff-type equations with general potentials, Math. Methods Appl. Sci., 40 (2017), 775–785. https://doi.org/10.1002/mma.4007 doi: 10.1002/mma.4007
    [11] H. Li, W. Zou, Normalized ground states for semilinear elliptic systems with critical and subcritical nonlinearities, J. Fixed Point Theory Appl., 23 (2021), 43. https://doi.org/10.1007/s11784-021-00878-w doi: 10.1007/s11784-021-00878-w
    [12] T. Bartsch, H. Li, W. Zou, Existence and asymptotic behavior of normalized ground states for Sobolev critical Schrödinger systems, Calculus Var. Partial Differ. Equations, 62 (2023), 9. https://doi.org/10.1007/s00526-022-02355-9 doi: 10.1007/s00526-022-02355-9
    [13] M. Liu, X. Fang, Normalized solutions for the Schrödinger systems with mass supercritical and double Sobolev critical growth, Z. Angew. Math. Phys., 73 (2022), 108. https://doi.org/10.1007/s00033-022-01757-1 doi: 10.1007/s00033-022-01757-1
    [14] T. Bartsch, L. Jeanjean, Normalized solutions for nonlinear Schrödinger systems, Proc. R. Soc. Edinburgh Sect. A: Math., 148 (2018), 225–242. https://doi.org/10.1017/s0308210517000087 doi: 10.1017/s0308210517000087
    [15] L. Jeanjean, S. Lu, A mass supercritical problem revisited, Calculus Var. Partial Differ. Equations, 59 (2020), 174. https://doi.org/10.1007/s00526-020-01828-z doi: 10.1007/s00526-020-01828-z
    [16] N. Ikoma, Compactness of minimizing sequences in nonlinear Schrödinger systems under multiconstraint conditions, Adv. Nonlinear Stud., 14 (2014), 115–136. https://doi.org/10.1515/ans-2014-0104 doi: 10.1515/ans-2014-0104
    [17] T. Cazenave, Semilinear Schrödinger Equations, American Mathematical Society, Rhode Island, 2003. https://doi.org/10.1090/cln/010
    [18] N. Ghoussoub, Duality and Perturbation Methods in Critical Point Theory, Cambridge University Press, Cambridge, 1993. https://doi.org/10.1017/CBO9780511551703
    [19] X. Luo, Q. Wang, Existence and asymptotic behavior of high energy normalized solutions for the Kirchhoff type equations in $\mathbb{R}^3$, Nonlinear Anal. Real World Appl., 33 (2017), 19–32. https://doi.org/10.1016/j.nonrwa.2016.06.001 doi: 10.1016/j.nonrwa.2016.06.001
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
