Research article

On a class of weakly Landsberg metrics composed by a Riemannian metric and a conformal 1-form

  • Received: 14 July 2023; Revised: 06 September 2023; Accepted: 08 September 2023; Published: 26 September 2023
  • MSC : 53B40, 53C60

  • Without the quadratic restriction, there are many non-Riemannian geometric quantities in Finsler geometry. Among these geometric quantities, Berwald curvature, Landsberg curvature and mean Landsberg curvature are related directly to the famous "unicorn problem" in Finsler geometry. In this paper, Finsler metrics with vanishing weakly Landsberg curvature (i.e., weakly Landsberg metrics) are studied. For the general (α,β)-metrics, which are composed by a Riemannian metric α and a 1-form β, we found that if the expression of the metric function doesn't depend on the dimension n, then any weakly Landsberg (α,β)-metric with a conformal 1-form must be a Landsberg metric. In the two-dimensional case, the weakly Landsberg case is equivalent to the Landsberg case. Further, we classified two-dimensional Berwald general (α,β)-metrics with a conformal 1-form.

    Citation: Fangmin Dong, Benling Li. On a class of weakly Landsberg metrics composed by a Riemannian metric and a conformal 1-form[J]. AIMS Mathematics, 2023, 8(11): 27328-27346. doi: 10.3934/math.20231398




Consider the following sum-of-linear-ratios optimization problem:

$$(\mathrm{FP}):\quad\begin{cases}\min\ G(x)=\sum\limits_{i=1}^{p}\dfrac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}\\[1ex] \ \text{s.t.}\ \ x\in D=\{x\in\mathbb{R}^n\mid Ax\le b,\ x\ge 0\},\end{cases}$$

where $p\ge 2$, $A$ is an $m\times n$ real matrix, $b$ is an $m$-dimensional column vector, $D$ is a nonempty bounded polyhedron, $c_{ij},f_i,d_{ij},g_i\in\mathbb{R}$ for $i=1,2,\ldots,p$, $j=1,2,\ldots,n$, and $\sum_{j=1}^{n}d_{ij}x_j+g_i\ne 0$ for any $x\in D$.
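To make the notation concrete, the objective $G(x)$ of (FP) is easy to evaluate directly. The following Python sketch does so on a tiny hypothetical instance (all data invented for illustration; the paper's instances are much larger).

```python
import numpy as np

def G(x, c, f, d, g):
    """Objective of (FP): sum of p linear ratios evaluated at x.

    c, d are (p, n) arrays; f, g are (p,) arrays."""
    num = c @ x + f          # numerators  sum_j c_ij x_j + f_i
    den = d @ x + g          # denominators sum_j d_ij x_j + g_i
    assert np.all(den != 0), "denominators must be nonzero on D"
    return float(np.sum(num / den))

# toy instance with p = 2 ratios and n = 2 variables
c = np.array([[1.0, 2.0], [0.5, 1.0]])
f = np.array([1.0, 0.0])
d = np.array([[1.0, 0.0], [0.0, 1.0]])
g = np.array([1.0, 1.0])
x = np.array([1.0, 1.0])
print(G(x, c, f, d, g))  # (1+2+1)/(1+1) + (0.5+1)/(1+1) = 2.75
```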

The problem (FP) has attracted the attention of many researchers and practitioners for decades. One reason is that the problem (FP) and its special forms have a wide range of applications in computer vision, portfolio optimization, information theory, and so on [1,2,3]. Another reason is that the problem (FP) is a global optimization problem, which generally has multiple locally optimal solutions that are not globally optimal. In the past several decades, many algorithms have been proposed for globally solving the problem (FP) and its special forms. According to their characteristics, these algorithms can be classified into the following categories: parametric simplex algorithms [4], image space analysis methods [5], monotonic optimization algorithms [6], branch-and-bound algorithms [7,8,9,10,11], polynomial-time approximation algorithms [12], etc. Jiao et al. [13,14] presented several branch-and-bound algorithms for solving sums of linear or nonlinear ratios problems; Huang, Shen et al. [15,16] proposed two spatial branch-and-bound algorithms for solving sums of linear ratios problems; Jiao et al. [17] designed an outer space method for globally solving the min-max linear fractional programming problem; Jiao et al. [18,19,20,21] proposed several outer space methods for globally solving the generalized linear fractional programming problem and its special forms. In addition, several novel optimization algorithms [22,23,24] have also been proposed for fractional optimization problems. However, the methods reviewed above struggle to solve the problem (FP) with a large number of variables, so it is still necessary to put forward a new algorithm for the problem (FP).

In this paper, based on the branch-and-bound framework, a new linearizing technique, and an image space region reduction technique, an image space branch-and-bound algorithm is proposed for globally solving the problem (FP). Compared with existing methods, the algorithm has the following advantages. First, the branching search takes place in the image space $\mathbb{R}^p$ of the ratios rather than in the space $\mathbb{R}^n$ of the variable $x$; since $n$ usually far exceeds $p$, this economizes the required computations. Second, based on the characteristics of the problem (EP1) and the structure of the algorithm, an image space region reduction technique is proposed to improve the convergence speed of the algorithm. Third, the computational complexity of the algorithm is analyzed and the maximum number of iterations is estimated for the first time, which is not available in other articles. In addition, numerical results indicate the computational superiority of the algorithm. Finally, a practical application problem in education investment is solved to verify the usefulness of the proposed algorithm.

    The structure of this paper is as follows. In Section 2, we give the equivalent problem (EP1) of problem (FP) and its linear relaxation problem (LRP). In Section 3, an image space branch-and-bound algorithm is presented, the convergence of the algorithm is proved, and its computational complexity is analysed. Numerical results are reported in Section 4. A practical application from education investment problem is solved to verify the usefulness of the algorithm in Section 5. Finally, some conclusions are given in Section 6.

To find a global optimal solution of the problem (FP), we first transform the problem (FP) into the equivalent problems (EP) and (EP1). The fundamental task is then to globally solve the problem (EP1). To this end, for each $i=1,2,\ldots,p$, we need to compute the minimum value $\alpha_i^0=\min_{x\in D}\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}$ and the maximum value $\beta_i^0=\max_{x\in D}\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}$ of each linear ratio. We first consider the following linear fractional programs:

$$\alpha_i^0=\min_{x\in D}\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i},\quad i=1,2,\ldots,p. \tag{1}$$

Since any linear ratio is quasi-convex, problem (1) attains its minimum at some vertex of $D$. Since $\sum_{j=1}^{n}d_{ij}x_j+g_i\ne 0$, without loss of generality we may suppose that $\sum_{j=1}^{n}d_{ij}x_j+g_i>0$. Thus, to solve problem (1), for any $i\in\{1,2,\ldots,p\}$ let $t_i=\frac{1}{\sum_{j=1}^{n}d_{ij}x_j+g_i}$ and $z_j=t_ix_j$; then problem (1) is converted into the following linear programming problem:

$$\begin{cases}\min\ \ \sum\limits_{j=1}^{n}c_{ij}z_j+f_it_i\\ \ \text{s.t.}\ \ \sum\limits_{j=1}^{n}d_{ij}z_j+g_it_i=1,\\ \qquad\ Az\le bt_i.\end{cases}\tag{2}$$

Obviously, $x^*$ is a global optimal solution of problem (1) if and only if $(z^*,t_i^*)$ is a global optimal solution of problem (2) with $z^*=t_i^*x^*$, and problems (1) and (2) have the same optimal value. Therefore, $\alpha_i^0$ can be obtained by solving the linear programming problem (2). Similarly, we can compute the maximum value $\beta_i^0$ of each linear ratio over $D$.
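As a sketch of this transformation, each bound $\alpha_i^0$ (and, with the objective sign flipped, $\beta_i^0$) can be computed with an off-the-shelf LP solver. The snippet below uses SciPy's `linprog` on a small hypothetical feasible region (not an instance from the paper).

```python
import numpy as np
from scipy.optimize import linprog

def ratio_bound(c_i, f_i, d_i, g_i, A, b, sense=1):
    """Min (sense=+1) or max (sense=-1) of (c_i.x + f_i)/(d_i.x + g_i) over
    D = {x | Ax <= b, x >= 0}, via the substitution z = t*x with
    t = 1/(d_i.x + g_i), i.e., by solving the linear program (2)."""
    n = len(c_i)
    obj = sense * np.append(c_i, f_i)            # variables (z_1..z_n, t)
    A_eq = [np.append(d_i, g_i)]                 # d_i.z + g_i*t = 1
    A_ub = np.hstack([A, -b.reshape(-1, 1)])     # A z <= b t
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(b)),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    return sense * res.fun

# hypothetical region D = {x in R^2 | x1 + x2 <= 1, x >= 0}; ratio (x1+1)/(x2+1)
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
c_i = np.array([1.0, 0.0]); f_i = 1.0
d_i = np.array([0.0, 1.0]); g_i = 1.0
alpha = ratio_bound(c_i, f_i, d_i, g_i, A, b, sense=+1)  # min, attained at x = (0, 1)
beta  = ratio_bound(c_i, f_i, d_i, g_i, A, b, sense=-1)  # max, attained at x = (1, 0)
print(alpha, beta)  # approximately 0.5 and 2.0
```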

Let $\Omega^0=\{\omega\in\mathbb{R}^p\mid \alpha_i^0\le\omega_i\le\beta_i^0,\ i=1,2,\ldots,p\}$ be the initial image-space rectangle. We can then state the equivalent problem (EP) of the problem (FP) as follows:

$$(\mathrm{EP}):\quad\begin{cases}\min\ \Psi(x,\omega)=\sum\limits_{i=1}^{p}\omega_i\\ \ \text{s.t.}\ \ \omega_i=\dfrac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i},\ i=1,2,\ldots,p,\\ \qquad\ x\in D,\ \omega\in\Omega^0.\end{cases}$$

Obviously, letting $\omega_i^*=\frac{\sum_{j=1}^{n}c_{ij}x_j^*+f_i}{\sum_{j=1}^{n}d_{ij}x_j^*+g_i}$, $i=1,2,\ldots,p$: if $x^*$ is a global optimal solution to the problem (FP), then $(x^*,\omega^*)$ is a global optimal solution to the problem (EP); conversely, if $(x^*,\omega^*)$ is a global optimal solution to the problem (EP), then $x^*$ is a global optimal solution to the problem (FP). Furthermore, since $\sum_{j=1}^{n}d_{ij}x_j+g_i\ne 0$, the problem (EP) can be reformulated as the following equivalent problem (EP1):

$$(\mathrm{EP1}):\quad\begin{cases}\min\ \Psi(x,\omega)=\sum\limits_{i=1}^{p}\omega_i\\ \ \text{s.t.}\ \ \omega_i\Big(\sum\limits_{j=1}^{n}d_{ij}x_j+g_i\Big)=\sum\limits_{j=1}^{n}c_{ij}x_j+f_i,\ i=1,2,\ldots,p,\\ \qquad\ x\in D,\ \omega\in\Omega^0.\end{cases}$$

    In the following, for globally solving the problem (EP1), we need to construct its linear relaxation problem, which can offer a reliable lower bound in the branch-and-bound searching process. The detailed deriving process of the linear relaxation problem is given as follows.

For any $x\in D$ and $\omega\in\Omega=\{\omega\in\mathbb{R}^p\mid \alpha_i\le\omega_i\le\beta_i,\ i=1,2,\ldots,p\}\subseteq\Omega^0$, we have

$$\omega_i\Big(\sum_{j=1}^{n}d_{ij}x_j+g_i\Big)\ \ge\ \sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\alpha_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\beta_ix_j+g_i\omega_i$$

    and

$$\omega_i\Big(\sum_{j=1}^{n}d_{ij}x_j+g_i\Big)\ \le\ \sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\beta_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\alpha_ix_j+g_i\omega_i.$$

    Consequently, we can construct the linear relaxation problem (LPΩ) of the problem (EP1) over Ω as follows, which is a linear programming problem.

$$(\mathrm{LP}_\Omega):\quad\begin{cases}\min\ \Psi(x,\omega)=\sum\limits_{i=1}^{p}\omega_i\\ \ \text{s.t.}\ \ \sum\limits_{j=1,\,d_{ij}>0}^{n}d_{ij}\alpha_ix_j+\sum\limits_{j=1,\,d_{ij}<0}^{n}d_{ij}\beta_ix_j+g_i\omega_i\le\sum\limits_{j=1}^{n}c_{ij}x_j+f_i,\ i=1,2,\ldots,p,\\ \qquad\ \sum\limits_{j=1,\,d_{ij}>0}^{n}d_{ij}\beta_ix_j+\sum\limits_{j=1,\,d_{ij}<0}^{n}d_{ij}\alpha_ix_j+g_i\omega_i\ge\sum\limits_{j=1}^{n}c_{ij}x_j+f_i,\ i=1,2,\ldots,p,\\ \qquad\ x\in D,\ \omega\in\Omega.\end{cases}$$

For any $\Omega=\{\omega\in\mathbb{R}^p\mid \alpha_i\le\omega_i\le\beta_i,\ i=1,2,\ldots,p\}\subseteq\Omega^0$, by the construction of the problem $(\mathrm{LP}_\Omega)$, all feasible points of the problem (EP1) over $\Omega$ are feasible to the problem $(\mathrm{LP}_\Omega)$, and the optimal value of the problem $(\mathrm{LP}_\Omega)$ is less than or equal to that of the problem (EP1) over $\Omega$. Thus, the optimal value of the problem $(\mathrm{LP}_\Omega)$ provides a valid lower bound for that of the problem (EP1) over $\Omega$.

Without loss of generality, for any $\Omega=\{\omega\in\mathbb{R}^p\mid \alpha_i\le\omega_i\le\beta_i,\ i=1,2,\ldots,p\}\subseteq\Omega^0$, define

$$\begin{aligned}\psi_i(x,\omega_i)&=\omega_i\Big(\sum_{j=1}^{n}d_{ij}x_j+g_i\Big)=\sum_{j=1}^{n}d_{ij}\omega_ix_j+g_i\omega_i,\\ \underline{\psi}_i(x,\omega_i)&=\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\alpha_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\beta_ix_j+g_i\omega_i,\\ \overline{\psi}_i(x,\omega_i)&=\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\beta_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\alpha_ix_j+g_i\omega_i,\end{aligned}$$

    then we have the following Theorem 1.

Theorem 1. For any $i\in\{1,2,\ldots,p\}$, let $\psi_i(x,\omega_i)$, $\underline{\psi}_i(x,\omega_i)$ and $\overline{\psi}_i(x,\omega_i)$ be defined as above, and let $\Delta\omega_i=\beta_i-\alpha_i$. Then we have

$$\psi_i(x,\omega_i)-\underline{\psi}_i(x,\omega_i)\to 0\quad\text{and}\quad \overline{\psi}_i(x,\omega_i)-\psi_i(x,\omega_i)\to 0\quad\text{as}\ \ \Delta\omega_i\to 0.$$

Proof. By the definitions of $\overline{\psi}_i(x,\omega_i)$, $\underline{\psi}_i(x,\omega_i)$ and $\psi_i(x,\omega_i)$, we get

$$\begin{aligned}\psi_i(x,\omega_i)-\underline{\psi}_i(x,\omega_i)&=\omega_i\Big(\sum_{j=1}^{n}d_{ij}x_j+g_i\Big)-\Big[\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\alpha_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\beta_ix_j+g_i\omega_i\Big]\\ &=\sum_{j=1,\,d_{ij}>0}^{n}(\omega_i-\alpha_i)d_{ij}x_j-\sum_{j=1,\,d_{ij}<0}^{n}(\beta_i-\omega_i)d_{ij}x_j\\ &\le(\beta_i-\alpha_i)\Big(\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}x_j-\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}x_j\Big),\end{aligned}$$

    which implies that

$$\psi_i(x,\omega_i)-\underline{\psi}_i(x,\omega_i)\to 0\quad\text{as}\ \ \Delta\omega_i\to 0.$$

    Similarly, we also have

$$\begin{aligned}\overline{\psi}_i(x,\omega_i)-\psi_i(x,\omega_i)&=\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}\beta_ix_j+\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}\alpha_ix_j+g_i\omega_i-\omega_i\Big(\sum_{j=1}^{n}d_{ij}x_j+g_i\Big)\\ &=\sum_{j=1,\,d_{ij}>0}^{n}(\beta_i-\omega_i)d_{ij}x_j-\sum_{j=1,\,d_{ij}<0}^{n}(\omega_i-\alpha_i)d_{ij}x_j\\ &\le(\beta_i-\alpha_i)\Big(\sum_{j=1,\,d_{ij}>0}^{n}d_{ij}x_j-\sum_{j=1,\,d_{ij}<0}^{n}d_{ij}x_j\Big),\end{aligned}$$

    which implies that

$$\big|\overline{\psi}_i(x,\omega_i)-\psi_i(x,\omega_i)\big|\to 0\quad\text{as}\ \ \Delta\omega_i\to 0.$$

    The proof is completed.

From Theorem 1, the functions $\underline{\psi}_i(x,\omega_i)$ and $\overline{\psi}_i(x,\omega_i)$ approximate the function $\psi_i(x,\omega_i)$ arbitrarily closely as $\|\beta-\alpha\|\to 0$, which ensures that the problem $(\mathrm{LP}_\Omega)$ approximates the problem (EP1) over $\Omega$ arbitrarily closely as $\|\beta-\alpha\|\to 0$.
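Theorem 1 is easy to check numerically. The snippet below (random hypothetical data, a single row $d_i$ with mixed signs) verifies the sandwich property $\underline{\psi}_i\le\psi_i\le\overline{\psi}_i$ and shows the estimation gap shrinking with the box width.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
d = rng.uniform(-1.0, 1.0, n)       # a row d_i with mixed signs
g = 0.3
x = rng.uniform(0.0, 2.0, n)        # x >= 0, as for points of D

def estimators(alpha, beta, omega):
    """Under/over-estimators of psi(x, omega_i) = omega_i * (d.x + g)."""
    pos, neg = d > 0, d < 0
    lower = alpha * d[pos] @ x[pos] + beta * d[neg] @ x[neg] + g * omega
    upper = beta * d[pos] @ x[pos] + alpha * d[neg] @ x[neg] + g * omega
    return lower, upper

gaps = []
for width in (1.0, 0.1, 0.01):
    alpha, beta = 2.0, 2.0 + width
    omega = 0.5 * (alpha + beta)    # any omega in [alpha, beta]
    psi = omega * (d @ x + g)
    lo, up = estimators(alpha, beta, omega)
    assert lo - 1e-9 <= psi <= up + 1e-9   # sandwich property
    gaps.append(up - lo)
print(gaps[0] > gaps[1] > gaps[2] >= 0.0)  # gap -> 0 as the box shrinks
```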

    In this section, based on the branch-and-bound framework, the linear relaxation problem, and the image space region reduction technique, we propose an image space branch-and-bound algorithm for globally solving the problem (FP).

To improve the convergence speed of the algorithm, for any investigated image-space rectangle $\Omega^k$, without losing the global optimal solution of the problem (EP1), the region-reduction technique aims at replacing $\Omega^k$ by a smaller rectangle $\bar{\Omega}^k$, or at concluding that the rectangle $\Omega^k$ contains no global optimal solution of the problem (EP1). For this purpose, let $\hat{\Phi}^k=\sum_{i=1}^{p}\alpha_i^k$; then the smaller rectangle $\bar{\Omega}^k$ can be derived by the following theorem.

Theorem 2. Let $\mathrm{UB}^k$ be the best currently known upper bound at the $k$th iteration. For any rectangle $\Omega^k=[\alpha^k,\beta^k]\subseteq\Omega^0$, we have the following conclusions:

(i) If $\hat{\Phi}^k>\mathrm{UB}^k$, then there exists no global optimal solution to the problem (EP1) over $\Omega^k$.

(ii) If $\hat{\Phi}^k\le\mathrm{UB}^k$ and $\alpha_\rho^k\le\tau_\rho^k\le\beta_\rho^k$ for some $\rho\in\{1,2,\ldots,p\}$, then there is no global optimal solution to the problem (EP1) over $\hat{\Omega}^k$, where

$$\hat{\Omega}^k=\{\omega\in\mathbb{R}^p\mid \tau_\rho^k<\omega_\rho\le\beta_\rho^k,\ \alpha_i^k\le\omega_i\le\beta_i^k,\ i=1,2,\ldots,p,\ i\ne\rho\},$$

with

$$\tau_\rho^k=\mathrm{UB}^k-\hat{\Phi}^k+\alpha_\rho^k,\quad \rho\in\{1,2,\ldots,p\}.$$

Proof. For any $\Omega^k=[\alpha^k,\beta^k]\subseteq\Omega^0$, we consider the following two cases:

(i) If $\hat{\Phi}^k>\mathrm{UB}^k$, then for any feasible solution $(\check{x},\check{\omega})$ to the problem (EP1) over $\Omega^k$, the corresponding objective value $\Psi(\check{x},\check{\omega})$ satisfies

$$\Psi(\check{x},\check{\omega})=\sum_{i=1}^{p}\check{\omega}_i\ \ge\ \sum_{i=1}^{p}\alpha_i^k=\hat{\Phi}^k>\mathrm{UB}^k.$$

Thus, there is no global optimal solution to the problem (EP1) over $\Omega^k$.

(ii) If $\hat{\Phi}^k\le\mathrm{UB}^k$ and $\alpha_\rho^k\le\tau_\rho^k\le\beta_\rho^k$ for some $\rho\in\{1,2,\ldots,p\}$, then for any feasible solution $(\check{x},\check{\omega})$ of the problem (EP1) over $\hat{\Omega}^k$, we have

$$\Psi(\check{x},\check{\omega})=\sum_{i=1}^{p}\check{\omega}_i>\sum_{i=1,\,i\ne\rho}^{p}\alpha_i^k+\tau_\rho^k=\hat{\Phi}^k-\alpha_\rho^k+\tau_\rho^k=\hat{\Phi}^k-\alpha_\rho^k+\mathrm{UB}^k-\hat{\Phi}^k+\alpha_\rho^k=\mathrm{UB}^k.$$

Thus, there exists no global optimal solution to the problem (EP1) over $\hat{\Omega}^k$.

From Theorem 2, the investigated image-space rectangle $\Omega^k$ can either be replaced by the smaller rectangle $\bar{\Omega}^k=\Omega^k\setminus\hat{\Omega}^k$, or be discarded because it contains no global optimal solution of the problem (EP1).
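A minimal sketch of the Theorem 2 reduction rule follows (the function name and test data are ours, not the paper's): either the whole box is discarded (case (i)), or each edge $\rho$ is cut at $\tau_\rho=\mathrm{UB}-\hat{\Phi}+\alpha_\rho$ (case (ii)).

```python
import numpy as np

def reduce_rectangle(alpha, beta, UB):
    """Theorem 2 region reduction: return None if the box can be discarded,
    otherwise a (possibly) smaller box [alpha, beta]."""
    alpha, beta = alpha.astype(float).copy(), beta.astype(float).copy()
    Phi = alpha.sum()                        # hat Phi^k = sum_i alpha_i^k
    if Phi > UB:
        return None                          # case (i): no optimum in the box
    for rho in range(len(alpha)):
        tau = UB - Phi + alpha[rho]
        if alpha[rho] <= tau <= beta[rho]:
            beta[rho] = tau                  # case (ii): cut off (tau, beta_rho]
    return alpha, beta

alpha = np.array([1.0, 2.0]); beta = np.array([4.0, 5.0])
print(reduce_rectangle(alpha, beta, UB=2.5))   # Phi = 3 > 2.5, box discarded: None
print(reduce_rectangle(alpha, beta, UB=4.0))   # upper corners cut to (2, 3)
```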

Definition 1. Let $x^k$ be a known feasible solution of the problem (FP), and let $v$ denote the global optimal value of the problem (FP). If $G(x^k)-v\le\varepsilon$, then $x^k$ is called a global $\varepsilon$-optimal solution for the problem (FP).

    The basic steps of the proposed image space branch-and-bound algorithm are given as follows.

Step 0. Given the termination error $\varepsilon>0$ and the initial rectangle $\Omega^0$, solve the problem $(\mathrm{LP}(\Omega^0))$ to obtain its optimal solution $(x^0,\hat{\omega}^0)$ and optimal value $\Psi(x^0,\hat{\omega}^0)$. Set $\mathrm{LB}_0=\Psi(x^0,\hat{\omega}^0)$, let $\omega_i^0=\frac{\sum_{j=1}^{n}c_{ij}x_j^0+f_i}{\sum_{j=1}^{n}d_{ij}x_j^0+g_i}$, $i=1,2,\ldots,p$, and set $\mathrm{UB}_0=\Psi(x^0,\omega^0)$. If $\mathrm{UB}_0-\mathrm{LB}_0\le\varepsilon$, then the algorithm stops and $x^0$ is a global $\varepsilon$-optimal solution to the problem (FP). Otherwise, let $F=\{(x^0,\omega^0)\}$ be the set of feasible points, set $k=0$, and let $T_0=\{\Omega^0\}$ be the set of all active nodes.

Step 1. Using the maximum-edge bisection rule for rectangles, subdivide $\Omega^k$ into two new sub-rectangles $\Omega^{k1}$ and $\Omega^{k2}$, and let $H=\{\Omega^{k1},\Omega^{k2}\}$.

Step 2. For each rectangle $\Omega^{k\sigma}$ $(\sigma=1,2)$, use the image-space region-reduction technique proposed in Section 3.1 to compress its range, and still denote the remaining region by $\Omega^{k\sigma}$. If $\Omega^{k\sigma}\ne\emptyset$, then solve the problem $(\mathrm{LP}(\Omega^{k\sigma}))$ to obtain its optimal solution $(x^{\Omega^{k\sigma}},\hat{\omega}^{\Omega^{k\sigma}})$ and optimal value $\Psi(x^{\Omega^{k\sigma}},\hat{\omega}^{\Omega^{k\sigma}})$. Let $\mathrm{LB}(\Omega^{k\sigma})=\Psi(x^{\Omega^{k\sigma}},\hat{\omega}^{\Omega^{k\sigma}})$, $\omega_i^{\Omega^{k\sigma}}=\frac{\sum_{j=1}^{n}c_{ij}x_j^{\Omega^{k\sigma}}+f_i}{\sum_{j=1}^{n}d_{ij}x_j^{\Omega^{k\sigma}}+g_i}$, $i=1,2,\ldots,p$, and $F=F\cup\{(x^{\Omega^{k\sigma}},\omega^{\Omega^{k\sigma}})\}$. If $\mathrm{UB}_k<\mathrm{LB}(\Omega^{k\sigma})$, then let $H=H\setminus\{\Omega^{k\sigma}\}$, $F=F\setminus\{(x^{\Omega^{k\sigma}},\omega^{\Omega^{k\sigma}})\}$ and $T_k=T_k\setminus\{\Omega^{k\sigma}\}$. Update the upper bound by $\mathrm{UB}_k=\min_{(x,\omega)\in F}\Psi(x,\omega)$ and denote $(x^k,\omega^k)=\arg\min_{(x,\omega)\in F}\Psi(x,\omega)$. Let $T_k=(T_k\setminus\{\Omega^k\})\cup H$ and $\mathrm{LB}_k=\min\{\mathrm{LB}(\Omega)\mid \Omega\in T_k\}$.

Step 3. Set $T_{k+1}=\{\Omega\mid \mathrm{UB}_k-\mathrm{LB}(\Omega)>\varepsilon,\ \Omega\in T_k\}$. If $T_{k+1}=\emptyset$, then the algorithm stops and $x^k$ is a global $\varepsilon$-optimal solution to the problem (FP). Otherwise, select the rectangle $\Omega^{k+1}$ satisfying $\Omega^{k+1}=\arg\min_{\Omega\in T_{k+1}}\mathrm{LB}(\Omega)$, set $k=k+1$, and return to Step 1.
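Steps 0 to 3 can be assembled into a compact prototype. The sketch below follows the ISBBA scheme on a tiny invented instance ($p=n=2$, data chosen so the global minimum is $2$, attained at any $x_1=x_2$); it uses SciPy's `linprog` for the relaxations and a best-first heap in place of the explicit node set $T_k$, and it omits the Theorem 2 reduction for brevity, so it is an illustration rather than the authors' implementation.

```python
import heapq
import numpy as np
from scipy.optimize import linprog

# hypothetical instance of (FP): p = 2 ratios, n = 2 variables
c = np.array([[1.0, 0.0], [0.0, 1.0]]); f = np.array([1.0, 1.0])
d = np.array([[0.0, 1.0], [1.0, 0.0]]); g = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]]);             b = np.array([1.0])
p, n = c.shape

def ratios(x):                              # the p linear ratios omega_i(x)
    return (c @ x + f) / (d @ x + g)

def relax(alpha, beta):
    """Solve LP(Omega), the linear relaxation of (EP1) over [alpha, beta]."""
    obj = np.concatenate([np.zeros(n), np.ones(p)])      # min sum_i omega_i
    rows, rhs = [], []
    for i in range(p):
        lo = np.where(d[i] > 0, d[i] * alpha[i], d[i] * beta[i])
        up = np.where(d[i] > 0, d[i] * beta[i], d[i] * alpha[i])
        e = np.zeros(p); e[i] = g[i]
        rows.append(np.concatenate([lo - c[i], e]));  rhs.append(f[i])   # psi_low <= c.x+f
        rows.append(np.concatenate([c[i] - up, -e])); rhs.append(-f[i])  # psi_up  >= c.x+f
    for Ai, bi in zip(A, b):                                             # x in D
        rows.append(np.concatenate([Ai, np.zeros(p)])); rhs.append(bi)
    bnds = [(0, None)] * n + list(zip(alpha, beta))
    return linprog(obj, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bnds)

eps = 1e-2
alpha0, beta0 = np.array([0.5, 0.5]), np.array([2.0, 2.0])  # ratio bounds over D
r0 = relax(alpha0, beta0)
UB, best_x = float(ratios(r0.x[:n]).sum()), r0.x[:n]        # incumbent (Step 0)
heap, tick, steps = [(r0.fun, 0, alpha0, beta0)], 1, 0
while heap and steps < 3000:
    steps += 1
    LB, _, a, bb = heapq.heappop(heap)      # best-first node selection (Step 3)
    if UB - LB <= eps:
        break                               # epsilon-optimality reached
    j = int(np.argmax(bb - a))              # bisect the widest omega-edge (Step 1)
    mid = 0.5 * (a[j] + bb[j])
    bl = bb.copy(); bl[j] = mid
    ar = a.copy();  ar[j] = mid
    for ca, cb in ((a, bl), (ar, bb)):      # bound the two children (Step 2)
        r = relax(ca, cb)
        if r.status != 0:
            continue                        # infeasible box: fathomed
        val = float(ratios(r.x[:n]).sum())
        if val < UB:
            UB, best_x = val, r.x[:n]       # update the incumbent
        if r.fun <= UB:                     # keep boxes that may still improve
            heapq.heappush(heap, (r.fun, tick, ca, cb)); tick += 1
print(round(UB, 4))                         # close to the global minimum 2
```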

    In the following, we will discuss the global convergence of the algorithm.

Theorem 3. Let $v$ be the global optimal value of the problem (FP). Algorithm ISBBA either terminates at a global optimal solution of the problem (FP), or generates an infinite sequence of feasible solutions whose every accumulation point is a global optimal solution of the problem (FP).

Proof. If Algorithm ISBBA terminates finitely after $k$ iterations, then upon termination we obtain a feasible solution $x^k$ of the problem (FP) and a feasible solution $(x^k,\omega^k)$ of the problem (EP) with $\omega_i^k=\frac{\sum_{j=1}^{n}c_{ij}x_j^k+f_i}{\sum_{j=1}^{n}d_{ij}x_j^k+g_i}$, $i=1,2,\ldots,p$. By the termination conditions, the updating method of the upper bound, and the steps of Algorithm ISBBA, we get the following inequalities:

$$\mathrm{LB}_k\le v,\qquad v\le\Psi(x^k,\omega^k),\qquad G(x^k)=\Psi(x^k,\omega^k)=v_k,\qquad v_k-\varepsilon\le\mathrm{LB}_k.$$

    By combining the above equalities and inequalities, we can get

$$G(x^k)-\varepsilon=\Psi(x^k,\omega^k)-\varepsilon\le\mathrm{LB}_k\le v\le\Psi(x^k,\omega^k)=G(x^k).$$

Therefore, $x^k$ is a global $\varepsilon$-optimal solution of the problem (FP).

Suppose instead that Algorithm ISBBA produces an infinite sequence of feasible solutions $\{x^k\}$ for the problem (FP) and an infinite sequence of feasible solutions $\{(x^k,\omega^k)\}$ for the problem (EP) with $\omega_i^k=\frac{\sum_{j=1}^{n}c_{ij}x_j^k+f_i}{\sum_{j=1}^{n}d_{ij}x_j^k+g_i}$, $i=1,2,\ldots,p$. Without loss of generality, let $x^*$ be an accumulation point of $\{x^k\}$ and assume $\lim_{k\to\infty}x^k=x^*$.

By the continuity of $\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}$, the relations $\frac{\sum_{j=1}^{n}c_{ij}x_j^k+f_i}{\sum_{j=1}^{n}d_{ij}x_j^k+g_i}=\omega_i^k\in[\alpha_i^k,\beta_i^k]$, $i=1,2,\ldots,p$, and the exhaustiveness of the partitioning method, we get

$$\frac{\sum_{j=1}^{n}c_{ij}x_j^*+f_i}{\sum_{j=1}^{n}d_{ij}x_j^*+g_i}=\lim_{k\to\infty}\frac{\sum_{j=1}^{n}c_{ij}x_j^k+f_i}{\sum_{j=1}^{n}d_{ij}x_j^k+g_i}=\lim_{k\to\infty}\omega_i^k=\bigcap_{k}[\alpha_i^k,\beta_i^k]=\omega_i^*.$$

Thus, $(x^*,\omega^*)$ is a feasible solution for the problem (EP). Also, since $\{\mathrm{LB}_k\}$ is an increasing lower-bound sequence satisfying $\mathrm{LB}_k\le v$, it follows that

$$\Psi(x^*,\omega^*)\ \ge\ v\ \ge\ \lim_{k\to\infty}\mathrm{LB}_k=\lim_{k\to\infty}\Psi(x^k,\hat{\omega}^k)=\Psi(x^*,\omega^*). \tag{3}$$

    Hence, by the method of updating upper bound and the continuity of G(x), we can get that

$$\lim_{k\to\infty}v_k=\lim_{k\to\infty}\sum_{i=1}^{p}\omega_i^k=\lim_{k\to\infty}\Psi(x^k,\omega^k)=\Psi(x^*,\omega^*)=G(x^*)=\lim_{k\to\infty}G(x^k). \tag{4}$$

    From (3) and (4), we can get that

$$\lim_{k\to\infty}v_k=v=G(x^*)=\lim_{k\to\infty}G(x^k)=\Psi(x^*,\omega^*)=\lim_{k\to\infty}\mathrm{LB}_k.$$

Therefore, any accumulation point $x^*$ of the infinite sequence $\{x^k\}$ of feasible solutions is a global optimal solution for the problem (FP), and the proof of the theorem is completed.

In this subsection, by analysing the computational complexity of Algorithm ISBBA, we estimate its maximum number of iterations. For convenience, we first define the size of a rectangle

$$\Omega=\{\omega\in\mathbb{R}^p\mid \alpha_i\le\omega_i\le\beta_i,\ i=1,2,\ldots,p\}\subseteq\Omega^0$$

as

$$\delta(\Omega):=\max\{\beta_i-\alpha_i,\ i=1,2,\ldots,p\}.$$

Theorem 4. For any given termination error $\varepsilon>0$, if there exists a rectangle $\Omega^k$, formed by Algorithm ISBBA at the $k$th iteration, satisfying $\delta(\Omega^k)\le\frac{\varepsilon}{p}$, then we have

$$\mathrm{UB}_k-\mathrm{LB}(\Omega^k)\le\varepsilon,$$

where $\mathrm{LB}(\Omega^k)$ denotes the optimal value of the problem $(\mathrm{LP}(\Omega^k))$, and $\mathrm{UB}_k$ denotes the currently known best upper bound on the global optimal value of the problem (EP).

Proof. Without loss of generality, assume that $(x^k,\hat{\omega}^k)$ is the optimal solution of the linear relaxation program $(\mathrm{LP}(\Omega^k))$, and let $\omega_i^k=\frac{\sum_{j=1}^{n}c_{ij}x_j^k+f_i}{\sum_{j=1}^{n}d_{ij}x_j^k+g_i}$, $i=1,2,\ldots,p$; then $(x^k,\omega^k)$ is a feasible solution to the problem $(\mathrm{EP}(\Omega^k))$.

By the definitions of $\mathrm{UB}_k$ and $\mathrm{LB}(\Omega^k)$, we have

$$\Psi(x^k,\omega^k)\ \ge\ \mathrm{UB}_k\ \ge\ \mathrm{LB}(\Omega^k)=\Psi(x^k,\hat{\omega}^k).$$

    Thus, by steps of Algorithm ISBBA, we can follow that

$$\mathrm{UB}_k-\mathrm{LB}(\Omega^k)\le\Psi(x^k,\omega^k)-\Psi(x^k,\hat{\omega}^k)=\sum_{i=1}^{p}\omega_i^k-\sum_{i=1}^{p}\hat{\omega}_i^k\le\sum_{i=1}^{p}(\beta_i^k-\alpha_i^k)\le\sum_{i=1}^{p}\delta(\Omega^k)=p\,\delta(\Omega^k).$$

Furthermore, from the above formula and $\delta(\Omega^k)\le\frac{\varepsilon}{p}$, we get

$$\mathrm{UB}_k-\mathrm{LB}(\Omega^k)\le\sum_{i=1}^{p}\delta(\Omega^k)=p\,\delta(\Omega^k)\le\varepsilon,$$

    and the proof of the theorem is completed.

According to Step 3 of Algorithm ISBBA and Theorem 4, if $\delta(\Omega^k)\le\frac{\varepsilon}{p}$, then the rectangle $\Omega^k$ will be deleted. Thus, if the sizes of all sub-rectangles $\Omega$ generated by Algorithm ISBBA satisfy $\delta(\Omega)\le\frac{\varepsilon}{p}$, then Algorithm ISBBA stops. The maximum number of iterations of Algorithm ISBBA can then be estimated using Theorem 4; see Theorem 5 for details.

Theorem 5. Given the termination error $\varepsilon>0$, Algorithm ISBBA can find an $\varepsilon$-global optimal solution to the problem (FP) after at most

$$\Lambda=2^{\sum_{i=1}^{p}\left\lceil\log_2\frac{p(\beta_i^0-\alpha_i^0)}{\varepsilon}\right\rceil}-1$$

iterations, where $\Omega^0=\{\omega\in\mathbb{R}^p\mid \alpha_i^0\le\omega_i\le\beta_i^0,\ i=1,2,\ldots,p\}$.

Proof. Without loss of generality, assume that the $i$th edge of the rectangle $\Omega^0$ is continuously selected for division $\gamma_i$ times, and suppose that after $\gamma_i$ iterations there exists a sub-interval $\Omega_i^{\gamma_i}=[\alpha_i^{\gamma_i},\beta_i^{\gamma_i}]$ of the interval $\Omega_i^0=[\alpha_i^0,\beta_i^0]$ such that

$$\beta_i^{\gamma_i}-\alpha_i^{\gamma_i}\le\frac{\varepsilon}{p},\quad\text{for every}\ i=1,2,\ldots,p. \tag{5}$$

From the partitioning process of Algorithm ISBBA, we have

$$\beta_i^{\gamma_i}-\alpha_i^{\gamma_i}=\frac{1}{2^{\gamma_i}}(\beta_i^0-\alpha_i^0),\quad\text{for every}\ i=1,2,\ldots,p. \tag{6}$$

    From (5) and (6), we can get that

$$\frac{1}{2^{\gamma_i}}(\beta_i^0-\alpha_i^0)\le\frac{\varepsilon}{p},\quad\text{for every}\ i=1,2,\ldots,p,$$

    i.e.,

$$\gamma_i\ge\log_2\frac{p(\beta_i^0-\alpha_i^0)}{\varepsilon},\quad\text{for every}\ i=1,2,\ldots,p.$$

    Next, we let

$$\bar{\gamma}_i=\left\lceil\log_2\frac{p(\beta_i^0-\alpha_i^0)}{\varepsilon}\right\rceil,\quad i=1,2,\ldots,p.$$

Let $\Lambda_1=\sum_{i=1}^{p}\bar{\gamma}_i$; then after $\Lambda_1$ iterations, Algorithm ISBBA generates at most $\Lambda_1+1$ sub-rectangles, denoted $\Omega^1,\Omega^2,\ldots,\Omega^{\Lambda_1+1}$, which satisfy

$$\delta(\Omega^t)=2^{\Lambda_1-t}\delta(\Omega^{\Lambda_1})=2^{\Lambda_1-t}\delta(\Omega^{\Lambda_1+1}),\quad t=\Lambda_1,\Lambda_1-1,\ldots,2,1,$$

where $\delta(\Omega^{\Lambda_1})=\delta(\Omega^{\Lambda_1+1})=\max\{\beta_i^{\bar{\gamma}_i}-\alpha_i^{\bar{\gamma}_i},\ i=1,2,\ldots,p\}$ and

$$\Omega^0=\bigcup_{t=1}^{\Lambda_1+1}\Omega^t. \tag{7}$$

Furthermore, put these $\Lambda_1+1$ sub-rectangles into the set $T^{\Lambda_1+1}$, i.e.,

$$T^{\Lambda_1+1}=\{\Omega^t,\ t=1,2,\ldots,\Lambda_1+1\}.$$

    By (5), we have that

$$\delta(\Omega^{\Lambda_1})=\delta(\Omega^{\Lambda_1+1})\le\frac{\varepsilon}{p}. \tag{8}$$

Thus, by (8), Theorem 4, and Step 3 of Algorithm ISBBA, the sub-rectangles $\Omega^{\Lambda_1}$ and $\Omega^{\Lambda_1+1}$ have been examined completely and should be discarded from the partitioning set $T^{\Lambda_1+1}$. The remaining sub-rectangles are placed in the set $T^{\Lambda_1}$, where

$$T^{\Lambda_1}=T^{\Lambda_1+1}\setminus\{\Omega^{\Lambda_1},\Omega^{\Lambda_1+1}\}=\{\Omega^t,\ t=1,\ldots,\Lambda_1-1\},$$

and the remaining sub-rectangles $\Omega^t$ $(t=1,\ldots,\Lambda_1-1)$ will be examined further.

Next, consider the sub-rectangle $\Omega^{\Lambda_1-1}$. By the branching rule, we can subdivide $\Omega^{\Lambda_1-1}$ into two sub-rectangles $\Omega^{\Lambda_1-1,1}$ and $\Omega^{\Lambda_1-1,2}$ satisfying

$$\Omega^{\Lambda_1-1}=\Omega^{\Lambda_1-1,1}\cup\Omega^{\Lambda_1-1,2}$$

and

$$\delta(\Omega^{\Lambda_1-1})=2\delta(\Omega^{\Lambda_1-1,1})=2\delta(\Omega^{\Lambda_1-1,2})=2\delta(\Omega^{\Lambda_1})=2\delta(\Omega^{\Lambda_1+1}),$$

so that $\delta(\Omega^{\Lambda_1-1,1})=\delta(\Omega^{\Lambda_1-1,2})\le\frac{\varepsilon}{p}$.

Therefore, after $\Lambda_1+(2^1-1)$ iterations, the sub-rectangle $\Omega^{\Lambda_1-1}$ has been examined completely. By (8), Theorem 4, and Step 3 of Algorithm ISBBA, $\Omega^{\Lambda_1-1}$ should be discarded from the partitioning set $T^{\Lambda_1}$. The remaining sub-rectangles are placed in the set $T^{\Lambda_1-1}$, where

$$T^{\Lambda_1-1}=T^{\Lambda_1}\setminus\{\Omega^{\Lambda_1-1}\}=T^{\Lambda_1+1}\setminus\{\Omega^{\Lambda_1-1},\Omega^{\Lambda_1},\Omega^{\Lambda_1+1}\}=\{\Omega^t,\ t=1,\ldots,\Lambda_1-2\}.$$

Similarly, after Algorithm ISBBA has executed $\Lambda_1+(2^1-1)+(2^2-1)$ iterations, the sub-rectangle $\Omega^{\Lambda_1-2}$ has been examined completely and should be discarded from the partitioning set $T^{\Lambda_1-1}$. The remaining sub-rectangles are put into the set $T^{\Lambda_1-2}$, where

$$T^{\Lambda_1-2}=T^{\Lambda_1-1}\setminus\{\Omega^{\Lambda_1-2}\}=T^{\Lambda_1+1}\setminus\{\Omega^{\Lambda_1-2},\Omega^{\Lambda_1-1},\Omega^{\Lambda_1},\Omega^{\Lambda_1+1}\}=\{\Omega^t,\ t=1,\ldots,\Lambda_1-3\}.$$

Repeating the above procedure until all sub-rectangles $\Omega^t$ $(t=1,2,\ldots,\Lambda_1+1)$ are completely eliminated from $\Omega^0$, by (7), after at most

$$\Lambda=\Lambda_1+(2^1-1)+(2^2-1)+(2^3-1)+\cdots+(2^{\Lambda_1-1}-1)=2^{\Lambda_1}-1=2^{\sum_{i=1}^{p}\left\lceil\log_2\frac{p(\beta_i^0-\alpha_i^0)}{\varepsilon}\right\rceil}-1$$

iterations, Algorithm ISBBA stops, and the proof of the theorem is completed.

Remark 1. By Theorem 5 and the above complexity analysis, the running time of Algorithm ISBBA for finding an $\varepsilon$-global optimal solution of the problem (FP) is bounded by $2\Lambda\,T(m+2p,\ n+p)$, where $T(m+2p,\ n+p)$ denotes the time required to solve a linear programming problem with $(n+p)$ variables and $(m+2p)$ constraints.
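The bound of Theorem 5 is straightforward to evaluate; a small helper follows (the sample box and tolerance are illustrative numbers of ours, not data from the paper).

```python
import math

def max_iterations(alpha0, beta0, eps):
    """Worst-case iteration count from Theorem 5:
    Lambda = 2**(sum_i ceil(log2(p*(beta_i^0 - alpha_i^0)/eps))) - 1."""
    p = len(alpha0)
    L1 = sum(math.ceil(math.log2(p * (b - a) / eps))
             for a, b in zip(alpha0, beta0))
    return 2 ** L1 - 1

# a p = 2 box [0.5, 2]^2 with eps = 0.1
print(max_iterations([0.5, 0.5], [2.0, 2.0], 0.1))  # 1023
```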

In this section, we numerically compare Algorithm ISBBA with the software BARON [25] and the branch-and-bound algorithm presented in Jiao and Liu [10], denoted Algorithm BBA-J. All algorithms are coded in MATLAB R2014a, and all test problems are solved on the same microcomputer with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 16 GB RAM. We set the maximum time limit for all algorithms to 4000 seconds. The test problems and their numerical results are listed below.

Test Problem 1 involves a large number of variables; with the given termination error $\varepsilon=10^{-2}$, numerical comparisons among Algorithms ISBBA, BBA-J, and BARON are listed in Table 1. Test Problem 2 involves a large number of ratios; with the given termination error $\varepsilon=10^{-3}$, numerical comparisons between Algorithm ISBBA and BARON are listed in Table 2. For test Problems 1 and 2, we solved ten independently generated test examples and recorded the best, worst, and average results among these ten examples, and we highlighted in bold the winners of the average results. Here "–" indicates that the algorithm failed to terminate within 4000 s. From the numerical results for Problem 1 in Table 1, we first see that BARON is more time-consuming than Algorithm ISBBA, although the number of iterations for BARON is smaller. Second, in terms of computational performance, Algorithm ISBBA outperforms Algorithm BBA-J. In particular, with $m=100$ fixed, for $p=2$ and $n=8000$, $10000$ or $20000$, or for $p=3$ and $n=8000$, BARON failed to terminate within 4000 s on all ten independently generated examples; for $p=3$ and $n=8000$, both Algorithm BBA-J and BARON failed to terminate within 4000 s on all ten examples, yet in all cases Algorithm ISBBA globally solved all ten independently generated examples.

Table 1. Numerical comparisons among Algorithms ISBBA, BBA-J, and BARON on Problem 1 ("–": failed to terminate within 4000 s).

| (p, m, n) | algorithm | iter. min | iter. ave | iter. max | time (s) min | time (s) ave | time (s) max |
|---|---|---|---|---|---|---|---|
| (2, 100, 5000) | BBA-J | 40 | 104.8 | 244 | 186.21 | 530.14 | 1244.53 |
| | BARON | 1 | 1.2 | 3 | 920.05 | 1083.93 | 1408.27 |
| | ISBBA | 30 | 37.5 | 46 | 139.76 | 194.13 | 253.43 |
| (2, 100, 8000) | BBA-J | 32 | 84.9 | 139 | 276.25 | 802.90 | 1323.32 |
| | BARON | – | – | – | – | – | – |
| | ISBBA | 29 | 38.2 | 48 | 261.68 | 355.27 | 487.13 |
| (2, 100, 10000) | BBA-J | 35 | 76.6 | 112 | 405.80 | 933.54 | 1414.22 |
| | BARON | – | – | – | – | – | – |
| | ISBBA | 31 | 36 | 45 | 355.57 | 405.93 | 510.31 |
| (2, 100, 20000) | BBA-J | 41 | 69.4 | 105 | 1239.04 | 2216.69 | 3495.84 |
| | BARON | – | – | – | – | – | – |
| | ISBBA | 32 | 36.2 | 41 | 950.13 | 1075.0 | 1233.14 |
| (3, 100, 5000) | BBA-J | – | – | – | – | – | – |
| | BARON | 3 | 9.8 | 31 | 1320.47 | 2310.83 | 3113.8 |
| | ISBBA | 65 | 249.9 | 482 | 398.56 | 1815.11 | 3600.96 |
| (3, 100, 8000) | BBA-J | – | – | – | – | – | – |
| | BARON | – | – | – | – | – | – |
| | ISBBA | 95 | 217.9 | 338 | 1030.95 | 2564.92 | 3985.77 |
Table 2. Numerical comparisons between Algorithm ISBBA and BARON on Problem 2 ("–": failed to terminate within 4000 s).

| (p, m, n) | algorithm | iter. min | iter. ave | iter. max | time (s) min | time (s) ave | time (s) max |
|---|---|---|---|---|---|---|---|
| (10, 100, 300) | BARON | 3 | 9.2 | 13 | 8.28 | 12.66 | 17.64 |
| | ISBBA | 9 | 13.6 | 19 | 5.28 | 8.87 | 12.7 |
| (10, 100, 400) | BARON | 9 | 35.8 | 93 | 22.28 | 30.86 | 42.33 |
| | ISBBA | 10 | 16 | 25 | 6.90 | 12.65 | 20.66 |
| (10, 100, 500) | BARON | – | – | – | – | – | – |
| | ISBBA | 10 | 17.4 | 30 | 8.07 | 15.89 | 26.52 |
| (15, 100, 400) | BARON | 11 | 34 | 157 | 36.14 | 47.92 | 79.81 |
| | ISBBA | 50 | 121.6 | 201 | 46.78 | 118.75 | 201.66 |
| (15, 100, 500) | BARON | – | – | – | – | – | – |
| | ISBBA | 49 | 118.1 | 258 | 54.57 | 137.92 | 303.49 |
| (20, 100, 300) | BARON | 5 | 14 | 17 | 22.53 | 36.14 | 51.11 |
| | ISBBA | 157 | 321.2 | 861 | 126.19 | 255.46 | 694.20 |
| (20, 100, 400) | BARON | – | – | – | – | – | – |
| | ISBBA | 99 | 399.9 | 1134 | 99.06 | 425.77 | 1199.2 |

From the numerical results for Problem 2 in Table 2, we see that for $p=10$ and $n=500$, $p=15$ and $n=500$, or $p=20$ and $n=400$, BARON failed to terminate within 4000 s on all ten independently generated examples, whereas Algorithm ISBBA successfully found the global optimal solutions of all ten. It should be noted that, when $p$ is larger in Problem 2, Algorithm BBA-J failed to solve all ten tests within 4000 s; therefore, Table 2 reports only the comparison between Algorithm ISBBA and BARON. This indicates the robustness and stability of Algorithm ISBBA.

Problem 1.

$$\begin{cases}\min\ \ \sum\limits_{i=1}^{p}\dfrac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}\\ \ \text{s.t.}\ \ Ax\le b,\ \ x\ge 0,\end{cases}$$

where $c_{ij},d_{ij},f_i,g_i\in\mathbb{R}$, $i=1,2,\ldots,p$; $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^m$; $c_{ij}$, $d_{ij}$, and all elements of $A$ are randomly generated from $[0,10]$; all elements of $b$ are equal to 10; $f_i$ and $g_i$, $i=1,2,\ldots,p$, are randomly generated from $[0,1]$.

Problem 2.

$$\begin{cases}\min\ \ \sum\limits_{i=1}^{p}\dfrac{\sum_{j=1}^{n}\gamma_{ij}x_j+\xi_i}{\sum_{j=1}^{n}\delta_{ij}x_j+\eta_i}\\ \ \text{s.t.}\ \ Ax\le b,\ \ x\ge 0,\end{cases}$$

where $\gamma_{ij},\xi_i,\delta_{ij},\eta_i\in\mathbb{R}$, $i=1,2,\ldots,p$, $j=1,2,\ldots,n$; $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^m$; all $\gamma_{ij}$ and $\delta_{ij}$ are randomly generated from $[-0.1,0.1]$; all elements of $A$ are randomly generated from $[0.01,1]$; all elements of $b$ are equal to 10; all $\xi_i$ and $\eta_i$ satisfy $\sum_{j=1}^{n}\gamma_{ij}x_j+\xi_i>0$ and $\sum_{j=1}^{n}\delta_{ij}x_j+\eta_i>0$.
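For reproducibility experiments, instances following the Problem 1 recipe can be generated as below (the generator function and the seed are ours; the paper does not specify a random number generator).

```python
import numpy as np

def generate_problem1(p, m, n, seed=0):
    """Random instance following the Problem 1 recipe: c_ij, d_ij and the
    entries of A drawn from [0, 10]; b_i = 10; f_i, g_i drawn from [0, 1]."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(0, 10, (p, n)); d = rng.uniform(0, 10, (p, n))
    f = rng.uniform(0, 1, p);       g = rng.uniform(0, 1, p)
    A = rng.uniform(0, 10, (m, n)); b = np.full(m, 10.0)
    return c, f, d, g, A, b

c, f, d, g, A, b = generate_problem1(p=2, m=5, n=4)
# x = 0 is feasible (A @ 0 <= b) and every denominator g_i is positive there
print(((d @ np.zeros(4) + g) > 0).all())  # True
```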

Consider finding the optimal solution of the education investment problem, whose mathematical model is given as follows:

$$\begin{cases}\min\ G(x)=\sum\limits_{j=1}^{p}\dfrac{\sum_{i=1}^{n}c_{ji}x_i}{\sum_{i=1}^{n}d_{ji}x_i}=\sum\limits_{j=1}^{p}\dfrac{c_j^{T}x}{d_j^{T}x}\\ \ \text{s.t.}\ \ \sum\limits_{i=1}^{n}x_i\le 1,\ \ Ax\le b,\ x\ge 0,\end{cases}$$

where $c_{ji}$ ($j=1,2,\ldots,p$, $i=1,2,\ldots,n$) is the $i$th invested fund of the $j$th education investment, $x_i$ ($i=1,2,\ldots,n$) is the investment percentage of the $i$th education investment, and $d_{ji}$ ($j=1,2,\ldots,p$, $i=1,2,\ldots,n$) is the $i$th output fund of the $j$th education investment.

The parameters of an education investment problem are given as follows:

$$p=2;\quad n=3;\quad c=\begin{bmatrix}0.1&0.2&0.4\\0.1&0.1&0.2\end{bmatrix};\quad d=\begin{bmatrix}0.1&0.1&0.1\\0.1&0.3&0.1\end{bmatrix};\quad A=\begin{bmatrix}1&1&1\\1&1&1\\12&5&12\\12&12&7\\6&1&1\end{bmatrix};\quad b=\begin{bmatrix}1\\1\\34.8\\29.1\\4.1\end{bmatrix}.$$

Using the algorithm presented in this article to solve the above problem, the global optimal solution obtained is

$$x^*=(0.7286,\ 0.0000,\ 0.2714).$$

That is, the optimal investment percentages of the three kinds of education investment are 0.7286, 0.0000, and 0.2714, respectively.

We studied the problem (FP). Based on the image-space search, a new linearizing technique, and the image-space region-reduction technique, we proposed an image space branch-and-bound algorithm. In contrast to existing algorithms, the proposed algorithm finds an $\varepsilon$-approximate global optimal solution of the problem (FP) in at most $2^{\sum_{i=1}^{p}\lceil\log_2\frac{p(\beta_i^0-\alpha_i^0)}{\varepsilon}\rceil}-1$ iterations. Numerical results demonstrate the computational superiority of the algorithm.

A potential field for future research lies in investigating the existence of analogous linear or convex relaxation problems with closed-form solutions in cases where both the numerators and denominators are nonlinear functions. Furthermore, there is also a need to design efficient algorithms for globally solving generalized nonlinear-ratio optimization problems with non-convex feasible regions, as well as more general non-convex ratio optimization problems under uncertain variable environments.

All authors of this article contributed equally. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

This work was supported by the National Natural Science Foundation of China (12001167), the China Postdoctoral Science Foundation (2018M642753), the Key Scientific and Technological Research Projects in Henan Province (232102211085, 202102210147), and the Research Project of Nanyang Normal University (2024ZX014).

The authors declare no conflict of interest.



    [1] T. Aikou, Some remarks on the geometry of tangent bundle of Finsler manifolds, Tensor, 52 (1993), 234–242.
    [2] G. S. Asanov, Finsleroid-Finsler space with Berwald and Landsberg conditions, Rep. Math. Phys., 58 (2006), 275–300. https://doi.org/10.1016/S0034-4877(06)80053-4 doi: 10.1016/S0034-4877(06)80053-4
    [3] D. Bao, S. S. Chern, A note on the Gauss-Bonnet theorem for Finsler spaces, Ann. Math., 143 (1996), 233–252. https://doi.org/10.2307/2118643 doi: 10.2307/2118643
    [4] D. Bao, S. S. Chern, Z. Shen, An introduction to Riemann-Finsler geometry, New York: Springer, 2000. https://doi.org/10.1007/978-1-4612-1268-3
    [5] S. Bácsó, M. Matsumoto, Reduction theorems of certain Landsberg spaces to Berwald spaces, Publ. Math. Debrecen, 48 (1996), 357–366.
    [6] D. Bao, Z. Shen, On the volume of unit tangent spheres in a Finsler space, Results Math., 26 (1994), 1–17. https://doi.org/10.1007/BF03322283 doi: 10.1007/BF03322283
    [7] M. Crampin, On Landsberg spaces and the Landsberg-Berwald problem, Houston J. Math., 37 (2011), 1103–1124.
    [8] L. Huang, X. Mo, On some explicit constructions of dually flat Finsler metrics, J. Math. Anal. Appl., 405 (2013), 565–573. https://doi.org/10.1016/j.jmaa.2013.04.028 doi: 10.1016/j.jmaa.2013.04.028
    [9] B. Li, Z. Shen, On a class of weakly Landsberg metrics, Sci. China Ser. A, 50 (2007), 573–589. https://doi.org/10.1007/s11425-007-0021-8 doi: 10.1007/s11425-007-0021-8
    [10] M. Matsumoto, Remarks on Berwald and Landsberg spaces, Contemp. Math., 1996.
    [11] V. S. Matveev, On "All regular Landsberg metrics are always Berwald" by Z. I. Szabó, Balkan J. Geom. Appl., 14 (2008), 50–52.
    [12] X. Mo, L. Zhou, The curvatures of spherically symmetric Finsler metrics in Rn, 2012, arXiv: 1202.4543. https://doi.org/10.48550/arXiv.1202.4543
    [13] G. Randers, On an asymmetric metric in the four-space of general relativity, Phys. Rev., 59 (1941), 195. https://doi.org/10.1103/PhysRev.59.195 doi: 10.1103/PhysRev.59.195
    [14] Z. Shen, Differential geometry of Spray and Finsler spaces, Dordrecht: Springer, 2001. https://doi.org/10.1007/978-94-015-9727-2
    [15] Z. Shen, Finsler manifolds with nonpositive flag curvature and constant S-curvature, Math. Z., 249 (2005), 625–639. https://doi.org/10.1007/s00209-004-0725-1 doi: 10.1007/s00209-004-0725-1
    [16] Z. Shen, On a class of Landsberg metrics in Finsler geometry, Can. J. Math., 61 (2009), 1357–1374. https://doi.org/10.4153/CJM-2009-064-9 doi: 10.4153/CJM-2009-064-9
    [17] Z. Shen, H. Xing, On randers metrics with isotropic S-curvature, Acta. Math. Sin.-English Ser., 24 (2008), 789–796. https://doi.org/10.1007/s10114-007-5194-0 doi: 10.1007/s10114-007-5194-0
    [18] Z. I. Szabó, Positive definite Berwald spaces. Structure theorem on Berwald spaces, Tensor (NS), 35 (1981), 25–39.
    [19] Z. I. Szabó, All regular Landsberg metrics are Berwald, Ann. Glob. Anal. Geom., 34 (2008), 381–386. https://doi.org/10.1007/s10455-008-9115-y doi: 10.1007/s10455-008-9115-y
    [20] Z. I. Szabó, Correction to "All regular Landsberg metrics are Berwald", Ann. Glob. Anal. Geom., 35 (2009), 227–230. https://doi.org/10.1007/s10455-008-9131-y doi: 10.1007/s10455-008-9131-y
    [21] M. Xu, V. S. Matveev, Proof of Laugwitz Conjecture and Landsberg Unicorn Conjecture for Minkowski norms with SO(k)×SO(nk)-symmetry, Can. J. Math., 74 (2022), 1486–1516. https://doi.org/10.4153/S0008414X21000304 doi: 10.4153/S0008414X21000304
    [22] C. Yu, H. Zhu, On a new class of Finsler metrics, Differ. Geom. Appl., 29 (2011), 244–254. https://doi.org/10.1016/j.difgeo.2010.12.009 doi: 10.1016/j.difgeo.2010.12.009
    [23] L. Zhou, Projective spherically symmetric Finsler metrics with constant flag curvature in Rn, Geom. Dedicata, 158 (2012), 353–364. https://doi.org/10.1007/s10711-011-9639-3 doi: 10.1007/s10711-011-9639-3
    [24] L. Zhou, The Finsler surface with K=0 and J=0, Differ. Geom. Appl., 35 (2014), 370–380. https://doi.org/10.1016/j.difgeo.2014.02.003 doi: 10.1016/j.difgeo.2014.02.003
    [25] S. Zhou, B. Li, On Landsberg general (α, β)-metrics with a conformal 1-form, Differ. Geom. Appl., 59 (2018), 46–65. https://doi.org/10.1016/j.difgeo.2018.04.001 doi: 10.1016/j.difgeo.2018.04.001
    [26] M. Zohrehvand, H. Maleki, On general (α,β)-metrics of Landsberg type, Int. J. Geom. Methods M., 13 (2016), 1650085. https://doi.org/10.1142/S0219887816500857 doi: 10.1142/S0219887816500857
  • This article has been cited by:

    1. Yihui Gong, Qi Yang, On Entire Solutions of Two Fermat-Type Differential-Difference Equations, 2025, 51, 1017-060X, 10.1007/s41980-024-00942-4
  • Reader Comments
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
通讯作者: 陈斌, bchen63@163.com
  • 1. 

    沈阳化工大学材料科学与工程学院 沈阳 110142

  1. 本站搜索
  2. 百度学术搜索
  3. 万方数据库搜索
  4. CNKI搜索

Metrics

Article views(1207) PDF downloads(46) Cited by(0)

Other Articles By Authors

/

DownLoad:  Full-Size Img  PowerPoint
Return
Return

Catalog