Research article

Meteorological effects on trace element solubility in Mississippi coastal wetlands

  • Trace elements (TEs) vary in toxicity and should be closely monitored to manage their accumulation in marine organisms, from which they may pass into the food web. This study utilizes the National Oceanic and Atmospheric Administration's (NOAA) Data Integration Visualization Exploration and Reporting (DIVER) system, the National Centers for Environmental Information (NCEI), and Drought Monitor systems to acquire arsenic (As), cadmium (Cd), chromium (Cr), lead (Pb), and zinc (Zn) concentrations in marsh soils along the Mississippi Gulf Coast, together with precipitation and drought data, for two sampling periods in 2010 and 2011. Results revealed that Zn soil concentrations correlated strongly with daily rainfall amounts (r = 0.80, p < 0.05), whereas elevated Pb concentrations in soil appeared after heavy storms. There was also significant variation between months for Cd, Cr, and Pb over the sampling periods. Arsenic remained relatively stable and statistically similar in the sediment, averaging 21.09 µg/kg. Coastal soils had significantly higher Cd and Cr concentrations in 2010 (28.89 µg/kg for Cd and 28.91 µg/kg for Cr) than in 2011 (9.16 µg/kg for both). Cd and Cr showed negative correlations (r = −0.63 and −0.27) with increasing drought intensity. Most of these responses can be explained by redox processes in which TEs are sequestered in, or released from, solids as soils become saturated or dry out. Soils in Mississippi's Gulf Coast study areas can thus act as a sink or reservoir under various weather conditions, entrapping or releasing TEs that then become more available to organisms.

    Citation: Joseph A. Kazery, Dayakar P. Nittala, H. Anwar Ahmad. Meteorological effects on trace element solubility in Mississippi coastal wetlands[J]. AIMS Environmental Science, 2019, 6(1): 1-13. doi: 10.3934/environsci.2019.1.1



    We mainly consider the linear multiplicative problem of the following form:

    $$\text{(LMP)}:\begin{cases}\min\ f(y)=\displaystyle\prod_{i=1}^{p}\big(c_i^Ty+d_i\big)^{\alpha_i},\\ \text{s.t.}\ y\in Y=\{y\in\mathbb{R}^n\mid Ay\le b\},\end{cases}$$

    where $c_i$ is an $n$-dimensional column vector, $d_i$ is a real number, and $\alpha_i$ is a nonzero real number. $T$ denotes the transpose of a vector, $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^m$, and $Y$ is a non-empty bounded set. In this article, for any $y\in Y$, we suppose that $c_i^Ty+d_i>0$, $i=1,\ldots,p$.
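    For concreteness, the following minimal Python sketch (with made-up data for $c_i$, $d_i$, $\alpha_i$, not taken from the paper) evaluates the LMP objective at a point and checks the positivity assumption:

```python
# A minimal sketch with hypothetical data (p = 2, n = 2): evaluate the LMP
# objective f(y) = prod_i (c_i^T y + d_i)^{alpha_i} at a feasible point.
import numpy as np

c = np.array([[1.0, 2.0],      # c_1
              [3.0, -1.0]])    # c_2
d = np.array([2.0, 5.0])       # d_1, d_2
alpha = np.array([1.5, -0.5])  # nonzero real exponents, signs may differ

def f(y):
    terms = c @ y + d          # c_i^T y + d_i for each i
    assert np.all(terms > 0), "the assumption c_i^T y + d_i > 0 must hold"
    return float(np.prod(terms ** alpha))

print(f(np.array([0.5, 0.5])))  # objective value at one feasible point
```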

    LMP is a typical non-convex optimization problem with important applications in real life. It and its variants have been applied in fields such as robust optimization [1], financial optimization [2], VLSI chip design [3], decision tree optimization [4], network flow optimization [5], supply chain problems [6], and portfolio investment [7]. Moreover, LMP is an NP-hard problem [8] that may have multiple local solutions that are not global, which increases its computational complexity. Hence, researching this problem holds great significance.

    Various solution algorithms for LMP and its special forms have been proposed by numerous experts and scholars. These algorithms can be broadly categorized into the following groups: branch-and-bound algorithms [9,10,11,12,13,14], finite algorithms [15], heuristic methods [16], approximation algorithms [17], polynomial-time algorithms [18], the parameterized simplex method [19], the cutting-plane method [20], level set algorithms [21], etc. Despite the advancements made by these approaches, solving high-dimensional LMP remains challenging. In the past 20 years, searching for global solutions of LMP in an outer space using different relaxation methods has become a hot topic among scholars [22,23,24,25,26,27,28,29]. For example, the authors of references [22,27] used new convex relaxation techniques to develop different outer space branch-and-bound algorithms for solving LMP. References [23,25,26,28,29,30] adopted various linear relaxation programming problems and proposed novel outer space branch-and-bound algorithms, respectively.

    In this paper, an outer space branch-and-bound algorithm is designed to solve large-scale LMP. The major characteristics of the algorithm are as follows: First, p auxiliary variables are introduced to transform LMP into an equivalent problem, where p is the number of linear functions. Second, based on the properties of exponential and logarithmic functions, the second equivalent problem (ELMP1) is established. Third, a novel linear relaxation approach is employed to derive the linear relaxation programming problem for ELMP1. Moreover, the branching rule in p-dimensional outer space is given, and the corresponding outer space branch-and-bound algorithm is developed by embedding the rectangular compression technique into the branch-and-bound framework. Finally, the computational complexity of the algorithm is analyzed to estimate the number of iterations in the worst case, which also implies that our algorithm is convergent.

    Compared with existing methods [10,23,26,28], the proposed algorithm has the following advantages:

    (1) The LMP considered is general: the exponents of its objective function are arbitrary nonzero real numbers rather than being limited to 1 or to positive values only.

    (2) After the first equivalent conversion, the exponents of the objective function are all positive. Therefore, when linearly approximating the objective function of the equivalent problem, there is no need to consider the case of negative coefficients, which simplifies the implementation of the linear relaxation.

    (3) The branching process takes place only in the p-dimensional outer space of the linear function. This leads to cost savings compared to the branching operation in the n-dimensional decision variable space, as p is often much smaller than n in practical problems.

    (4) To demonstrate the efficiency of the algorithm, we compare it with the methods in references [10,23,26,28]; the results show that our algorithm is suitable for solving LMP with large-scale decision variables.

    The rest of this paper is organized as follows: Section 2 presents the equivalent problems of LMP and establishes its linear relaxation problem. In Section 3, the branching operation and compression technology are discussed in detail. Moreover, the outer space branch-and-bound algorithm and its properties are described. Section 4 provides numerical comparison results for some problems. Finally, a brief summary of this paper is presented in Section 5.

    In order to search for the global optimal solution of LMP, we transform it into the equivalent problem (ELMP). For convenience, we first define the following index sets:

    $$I^{+}=\{i\mid\alpha_i>0,\ i\in\{1,\ldots,p\}\},\qquad I^{-}=\{i\mid\alpha_i<0,\ i\in\{1,\ldots,p\}\}.$$

    For any $i\in\{1,\ldots,p\}$, denote

    $$0<\underline{t}_i^0=\min_{y\in Y}\ c_i^Ty+d_i,\quad \overline{t}_i^0=\max_{y\in Y}\ c_i^Ty+d_i,\quad i\in I^+;\qquad 0<\underline{t}_i^0=\frac{1}{\max_{y\in Y}\ c_i^Ty+d_i},\quad \overline{t}_i^0=\frac{1}{\min_{y\in Y}\ c_i^Ty+d_i},\quad i\in I^-.$$

    Since $c_i^Ty+d_i$ is a bounded linear function, by solving the above $2p$ linear programs we can easily obtain $\underline{t}_i^0$ and $\overline{t}_i^0$. Simultaneously, the initial hyper-rectangle

    $$H^0=\{t\in\mathbb{R}^p\mid \underline{t}_i^0\le t_i\le\overline{t}_i^0,\ i=1,\ldots,p\}$$

    is constructed.
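    These $2p$ bounding linear programs can be sketched as follows, assuming SciPy's linprog is available and that $Y=\{y\mid Ay\le b\}$ is nonempty and bounded, as supposed above:

```python
# A sketch of the bounding step: for each i, the extrema of c_i^T y + d_i
# over Y give the initial box H^0 (interval endpoints are inverted for
# i in I^- since t_i = 1/(c_i^T y + d_i) there).
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(c, d, alpha, A, b):
    lower, upper = [], []
    for ci, di, ai in zip(c, d, alpha):
        lo = linprog(ci, A_ub=A, b_ub=b, bounds=(None, None)).fun + di   # min
        hi = -linprog(-ci, A_ub=A, b_ub=b, bounds=(None, None)).fun + di # max
        if ai > 0:                       # i in I^+
            lower.append(lo); upper.append(hi)
        else:                            # i in I^-: t_i = 1/(c_i^T y + d_i)
            lower.append(1.0 / hi); upper.append(1.0 / lo)
    return np.array(lower), np.array(upper)
```

    Thus, let us consider the following equivalent problem: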

    $$\text{(ELMP)}:\begin{cases}\min\ \displaystyle\prod_{\alpha_i>0,\ i\in\{1,\ldots,p\}}t_i^{\alpha_i}\prod_{\alpha_i<0,\ i\in\{1,\ldots,p\}}t_i^{-\alpha_i}\\ \text{s.t.}\ t_i=c_i^Ty+d_i,\ i\in I^+,\\ \quad\ t_i=\dfrac{1}{z_i},\ i\in I^-,\\ \quad\ z_i=c_i^Ty+d_i,\ i\in I^-,\\ \quad\ y\in Y,\ t\in H^0.\end{cases}$$

    We denote the feasible region of ELMP as $V=\{(y,t,z)\mid t_i=c_i^Ty+d_i,\ i\in I^+;\ t_i=\frac{1}{z_i},\ z_i=c_i^Ty+d_i,\ i\in I^-;\ y\in Y,\ t\in H^0\}$. It is evident that $V$ is a nonempty bounded compact set if and only if $Y\neq\emptyset$. Theorem 1 below establishes the equivalence between LMP and ELMP.

    Theorem 1. $(y^*,t^*,z^*)$ is a global optimal solution for ELMP if and only if $y^*$ is a global optimal solution for LMP, where $t_i^*=c_i^Ty^*+d_i$ when $i\in I^+$, and $t_i^*=\frac{1}{z_i^*}$ with $z_i^*=c_i^Ty^*+d_i$ when $i\in I^-$.

    Proof. We develop the proof in two directions. On the one hand, for any $y\in Y$, let $t_i=c_i^Ty+d_i$ for $i\in I^+$, and $t_i=\frac{1}{z_i}$ with $z_i=c_i^Ty+d_i$ for $i\in I^-$; thus $(y,t,z)\in V$. If $y^*$ is a global optimal solution for LMP, then with $t_i^*=c_i^Ty^*+d_i$ for $i\in I^+$, and $t_i^*=\frac{1}{z_i^*}$, $z_i^*=c_i^Ty^*+d_i$ for $i\in I^-$, we have $(y^*,t^*,z^*)\in V$, which shows that $(y^*,t^*,z^*)$ is a feasible solution to ELMP, and by the optimality of $y^*$ the following relations hold:

    $$\prod_{i\in I^+}(t_i^*)^{\alpha_i}\prod_{i\in I^-}(t_i^*)^{-\alpha_i}=\prod_{i\in I^+}(t_i^*)^{\alpha_i}\prod_{i\in I^-}(z_i^*)^{\alpha_i}=\prod_{i=1}^{p}(c_i^Ty^*+d_i)^{\alpha_i}\le\prod_{i=1}^{p}(c_i^Ty+d_i)^{\alpha_i}=\prod_{i\in I^+}t_i^{\alpha_i}\prod_{i\in I^-}z_i^{\alpha_i}=\prod_{i\in I^+}t_i^{\alpha_i}\prod_{i\in I^-}t_i^{-\alpha_i}.$$

    Thus, $(y^*,t^*,z^*)$ is a global optimal solution for ELMP.

    On the other hand, if $(y^*,t^*,z^*)$ is a global optimal solution of ELMP, where $t^*,z^*$ satisfy $t_i^*=c_i^Ty^*+d_i$ for $i\in I^+$, and $t_i^*=\frac{1}{z_i^*}$, $z_i^*=c_i^Ty^*+d_i$ for $i\in I^-$, suppose for contradiction that there exists $y\in Y$ such that $\prod_{i=1}^{p}(c_i^Ty+d_i)^{\alpha_i}<\prod_{i=1}^{p}(c_i^Ty^*+d_i)^{\alpha_i}$. For $i\in I^+$ let $t_i=c_i^Ty+d_i$, and for $i\in I^-$ let $t_i=\frac{1}{z_i}$ with $z_i=c_i^Ty+d_i$; it follows that

    $$\prod_{i\in I^+}t_i^{\alpha_i}\prod_{i\in I^-}t_i^{-\alpha_i}<\prod_{i\in I^+}(t_i^*)^{\alpha_i}\prod_{i\in I^-}(t_i^*)^{-\alpha_i}.$$

    This contradicts the optimality of $(y^*,t^*,z^*)$; thus $y^*$ is a global optimal solution of LMP.

    For the sake of convenience, we denote $\beta_i=-\alpha_i>0$ for $i\in I^-$. Meanwhile, $\ln(\cdot)$ and $\exp(\cdot)$ represent the logarithmic and exponential functions, respectively. The objective function of ELMP is further equivalently transformed according to the properties of the exponential and logarithmic functions, namely,

    $$\prod_{i\in I^+}t_i^{\alpha_i}\prod_{i\in I^-}t_i^{-\alpha_i}=\prod_{i\in I^+}t_i^{\alpha_i}\prod_{\beta_i>0,\ i\in\{1,\ldots,p\}}t_i^{\beta_i}=\exp\bigg(\ln\bigg(\prod_{i\in I^+}t_i^{\alpha_i}\prod_{\beta_i>0,\ i\in\{1,\ldots,p\}}t_i^{\beta_i}\bigg)\bigg)=\exp\bigg(\sum_{i=1,\ i\in I^+}^{\kappa}\alpha_i\ln t_i+\sum_{i=\kappa+1,\ \beta_i>0}^{p}\beta_i\ln t_i\bigg)=\exp\bigg(\sum_{i=1}^{p}\zeta_i\ln t_i\bigg),$$

    where $\zeta\in\mathbb{R}^p$ and $\zeta=[\alpha_1,\alpha_2,\ldots,\alpha_\kappa,\beta_{\kappa+1},\ldots,\beta_p]^T$; here, without loss of generality, the indices are ordered so that $I^+=\{1,\ldots,\kappa\}$. Hence, ELMP is reformulated in the following form:

    $$\text{(ELMP1)}:\begin{cases}\min\ L(y,t,z)=\displaystyle\sum_{i=1}^{p}\zeta_i\ln t_i\\ \text{s.t.}\ t_i=c_i^Ty+d_i,\ i\in I^+,\\ \quad\ z_i=\dfrac{1}{t_i},\ z_i=c_i^Ty+d_i,\ i\in I^-,\\ \quad\ y\in Y,\ t\in H^k,\end{cases}$$

    where $H^k$ represents $H^0$ or a sub-rectangle of $H^0$. Obviously, since $\exp(\cdot)$ is strictly increasing, the optimal solution of ELMP1 is the same as that of ELMP. Therefore, we shift our focus to solving ELMP1; however, ELMP1 cannot be solved directly because of the nonlinearity of the objective function and of the constraints $z_i=\frac{1}{t_i}$ for $i\in I^-$. To address this, we propose a linear relaxation technique to obtain a lower bounding problem for ELMP1.

    Theorem 2. For $i=1,\ldots,p$ and $t_i\in[\underline{t}_i,\overline{t}_i]$, define

    $$g(t_i)=\ln t_i,\qquad \underline{g}(t_i)=\ln\underline{t}_i+k_i(t_i-\underline{t}_i),\qquad \underline{L}(y,t,z)=\sum_{i=1}^{p}\zeta_i\,\underline{g}(t_i),$$

    where $k_i=\dfrac{\ln\overline{t}_i-\ln\underline{t}_i}{\overline{t}_i-\underline{t}_i}$. Then we have the following conclusions:

    (i) $\underline{g}(t_i)\le g(t_i)$; (ii) $\underline{L}(y,t,z)\le L(y,t,z)$.

    Proof. Since $g(t_i)=\ln t_i$ is a monotonically increasing concave function of $t_i$ on $[\underline{t}_i,\overline{t}_i]$ and $\underline{g}(t_i)$ is its secant approximation, (i) and (ii) obviously hold (recall that $\zeta_i>0$).
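    A quick numeric spot-check of conclusion (i) on an arbitrary interval (a sketch, not part of the algorithm):

```python
# The secant of ln between the interval endpoints never exceeds ln itself,
# since ln is concave; the gap g - g_lower is nonnegative on the interval.
import numpy as np

t_lo, t_hi = 0.5, 4.0
k = (np.log(t_hi) - np.log(t_lo)) / (t_hi - t_lo)
t = np.linspace(t_lo, t_hi, 1001)
gap = np.log(t) - (np.log(t_lo) + k * (t - t_lo))
print(gap.min() >= -1e-12)  # True: the secant under-estimates ln on the box
```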

    Theorem 3. For each $i\in I^-$, define

    $$\underline{\psi}(t_i)=-\frac{1}{\underline{t}_i\overline{t}_i}t_i+\frac{2}{\sqrt{\underline{t}_i\overline{t}_i}},\qquad \overline{\psi}(t_i)=-\frac{1}{\underline{t}_i\overline{t}_i}t_i+\frac{1}{\underline{t}_i}+\frac{1}{\overline{t}_i}.$$

    Then the functions $\underline{\psi}(t_i)$ and $\overline{\psi}(t_i)$ satisfy the conclusions below:

    (i) $\underline{\psi}(t_i)\le\dfrac{1}{t_i}\le\overline{\psi}(t_i)$;

    (ii) Denote $\Delta t_i=\overline{t}_i-\underline{t}_i$; then $\lim_{\Delta t_i\to0}\big(\frac{1}{t_i}-\underline{\psi}(t_i)\big)=0$ and $\lim_{\Delta t_i\to0}\big(\overline{\psi}(t_i)-\frac{1}{t_i}\big)=0$.

    Proof. (i) For each $i\in I^-$, since $t_i\in[\underline{t}_i,\overline{t}_i]$ and $t_i>0$, it follows from the definitions of $\underline{\psi}(t_i)$ and $\overline{\psi}(t_i)$ that

    $$\frac{1}{t_i}-\underline{\psi}(t_i)=\frac{1}{t_i}+\frac{1}{\underline{t}_i\overline{t}_i}t_i-\frac{2}{\sqrt{\underline{t}_i\overline{t}_i}}=\frac{\underline{t}_i\overline{t}_i-2t_i\sqrt{\underline{t}_i\overline{t}_i}+t_i^2}{t_i\,\underline{t}_i\overline{t}_i}=\frac{\big(t_i-\sqrt{\underline{t}_i\overline{t}_i}\big)^2}{t_i\,\underline{t}_i\overline{t}_i}\ge0$$

    and

    $$\overline{\psi}(t_i)-\frac{1}{t_i}=-\frac{1}{\underline{t}_i\overline{t}_i}t_i+\frac{1}{\underline{t}_i}+\frac{1}{\overline{t}_i}-\frac{1}{t_i}=\frac{-t_i^2+\underline{t}_it_i+\overline{t}_it_i-\underline{t}_i\overline{t}_i}{\underline{t}_i\overline{t}_it_i}=\frac{(\overline{t}_i-t_i)(t_i-\underline{t}_i)}{\underline{t}_i\overline{t}_it_i}\ge0.$$

    (ii) From (i), when $\Delta t_i\to0$ for each $i\in I^-$, the following relations are valid:

    $$\lim_{\Delta t_i\to0}\Big(\frac{1}{t_i}-\underline{\psi}(t_i)\Big)=\lim_{\Delta t_i\to0}\bigg(\frac{\sqrt{\underline{t}_i\overline{t}_i}-t_i}{t_i\sqrt{\underline{t}_i\overline{t}_i}}+\frac{t_i-\sqrt{\underline{t}_i\overline{t}_i}}{\underline{t}_i\overline{t}_i}\bigg)\le\lim_{\Delta t_i\to0}\bigg(\frac{|\overline{t}_i-\underline{t}_i|}{t_i\sqrt{\underline{t}_i\overline{t}_i}}+\frac{|\overline{t}_i-\underline{t}_i|}{\underline{t}_i\overline{t}_i}\bigg)=\lim_{\Delta t_i\to0}\bigg(\frac{1}{t_i\sqrt{\underline{t}_i\overline{t}_i}}+\frac{1}{\underline{t}_i\overline{t}_i}\bigg)\Delta t_i\le\lim_{\Delta t_i\to0}\frac{2\Delta t_i}{\min\{\underline{t}_i^2,\ \underline{t}_i\overline{t}_i\}}=0$$

    and

    $$\lim_{\Delta t_i\to0}\Big(\overline{\psi}(t_i)-\frac{1}{t_i}\Big)=\lim_{\Delta t_i\to0}\frac{(\overline{t}_i-t_i)(t_i-\underline{t}_i)}{\underline{t}_i\overline{t}_it_i}\le\lim_{\Delta t_i\to0}\frac{\Delta t_i^2}{\min\{\underline{t}_i^2\overline{t}_i,\ \overline{t}_i^2\underline{t}_i\}}=0.$$
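    Both conclusions can be spot-checked numerically (a sketch with arbitrary interval endpoints):

```python
# psi_lower and psi_upper sandwich 1/t on [t_lo, t_hi], and both gaps
# shrink as the interval narrows, illustrating Theorem 3(i)-(ii).
import numpy as np

def gaps(t_lo, t_hi):
    t = np.linspace(t_lo, t_hi, 1001)
    lo = -t / (t_lo * t_hi) + 2.0 / np.sqrt(t_lo * t_hi)   # psi_lower
    hi = -t / (t_lo * t_hi) + 1.0 / t_lo + 1.0 / t_hi      # psi_upper
    assert np.all(lo <= 1.0 / t + 1e-12) and np.all(1.0 / t <= hi + 1e-12)
    return (1.0 / t - lo).max(), (hi - 1.0 / t).max()

print(gaps(1.0, 2.0))    # visible gaps on a wide interval
print(gaps(1.0, 1.01))   # both gaps nearly vanish on a narrow one
```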

    Consequently, we obtain the linear relaxation program of ELMP1:

    $$\text{(LRP)}:\begin{cases}\min\ \underline{L}(y,t,z)\\ \text{s.t.}\ t_i=c_i^Ty+d_i,\ i\in I^+,\\ \quad\ \underline{\psi}(t_i)\le z_i\le\overline{\psi}(t_i),\ z_i=c_i^Ty+d_i,\ i\in I^-,\\ \quad\ y\in Y,\ t\in H^k.\end{cases}$$

    In the constraints of LRP, we substitute $z_i=c_i^Ty+d_i$ into the inequalities $z_i\le\overline{\psi}(t_i)$ and $\underline{\psi}(t_i)\le z_i$, respectively; that is,

    $$c_i^Ty+\frac{1}{\underline{t}_i\overline{t}_i}t_i\le\frac{1}{\underline{t}_i}+\frac{1}{\overline{t}_i}-d_i,\qquad c_i^Ty+\frac{1}{\underline{t}_i\overline{t}_i}t_i\ge\frac{2}{\sqrt{\underline{t}_i\overline{t}_i}}-d_i,\qquad i\in I^-.$$

    Since $\zeta_i>0$ for each $i=1,\ldots,p$, LRP is reformulated as

    $$\text{(LRP}(H^k)):\begin{cases}\min\ \displaystyle\sum_{i=1}^{p}\zeta_i\big(\ln\underline{t}_i+k_i(t_i-\underline{t}_i)\big)\\ \text{s.t.}\ c_i^Ty-t_i=-d_i,\ i\in I^+,\\ \quad\ c_i^Ty+\dfrac{1}{\underline{t}_i\overline{t}_i}t_i\le\dfrac{1}{\underline{t}_i}+\dfrac{1}{\overline{t}_i}-d_i,\ i\in I^-,\\ \quad\ c_i^Ty+\dfrac{1}{\underline{t}_i\overline{t}_i}t_i\ge\dfrac{2}{\sqrt{\underline{t}_i\overline{t}_i}}-d_i,\ i\in I^-,\\ \quad\ y\in Y,\ t\in H^k.\end{cases}$$

    We have derived the linear relaxation problem for ELMP1 through a series of relaxation processes. This relaxation enables us to simplify the problem by loosening certain constraints while providing a reliable lower bound for the global optimal value of ELMP1, facilitating informed decision-making in subsequent optimization steps.

    In this section, we present an efficient deterministic algorithm for solving ELMP1 by combining the linear relaxation problem proposed in Section 2 with subsequent branching operation in Section 3.1 and rectangle compression technique in Section 3.2. The algorithm requires solving a series of linear relaxation problems on the subdivided rectangles of H0. Furthermore, we provide rigorous proofs for the convergence and complexity of the algorithm based on the employed branching operation.

    For any selected sub-rectangle $H^k=\{t\in\mathbb{R}^p\mid\underline{t}_i\le t_i\le\overline{t}_i\}\subseteq H^0$, the following branching rules are given (a small code sketch follows the rule):

    (i) Let $\tau=\arg\max\{\overline{t}_i-\underline{t}_i,\ i=1,\ldots,p\}$;

    (ii) $H^k$ is divided into two sub-rectangles

    $$H^{k1}=\prod_{i=1}^{\tau-1}H_i\times\Big[\underline{t}_\tau,\frac{\underline{t}_\tau+\overline{t}_\tau}{2}\Big]\times\prod_{i=\tau+1}^{p}H_i,\qquad H^{k2}=\prod_{i=1}^{\tau-1}H_i\times\Big[\frac{\underline{t}_\tau+\overline{t}_\tau}{2},\overline{t}_\tau\Big]\times\prod_{i=\tau+1}^{p}H_i,$$

    where $H_i=\{t_i\in\mathbb{R}\mid\underline{t}_i\le t_i\le\overline{t}_i\}$, $i=1,\ldots,p$, $i\neq\tau$.
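    A minimal sketch of this bisection step (the helper name branch is ours, not the authors'):

```python
# Split a box, given as arrays of lower/upper bounds, along its longest
# edge tau at the midpoint, per rules (i)-(ii).
import numpy as np

def branch(lower, upper):
    tau = int(np.argmax(upper - lower))   # rule (i): longest edge
    mid = 0.5 * (lower[tau] + upper[tau])
    lo1, up1 = lower.copy(), upper.copy()
    lo2, up2 = lower.copy(), upper.copy()
    up1[tau] = mid                        # H^{k1}: t_tau in [lower_tau, mid]
    lo2[tau] = mid                        # H^{k2}: t_tau in [mid, upper_tau]
    return (lo1, up1), (lo2, up2)
```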

    We introduce a compression technique to enhance the convergence speed of the algorithm. When the algorithm reaches its $k$th iteration, multiple sub-rectangles have been obtained through rectangular subdivision. For any sub-rectangle $\tilde H^k\subseteq H^k$, we further investigate whether ELMP1 attains a global minimum over $\tilde H^k$, where $\tilde H^k=\tilde H_1\times\tilde H_2\times\cdots\times\tilde H_p$ and $\tilde H_i=\{t_i\in\mathbb{R}\mid\underline{t}_i\le t_i\le\overline{t}_i\}$, $i=1,\ldots,p$. The compression technology embedded in the algorithm replaces sub-rectangles with smaller rectangles while ensuring that the global optimal solution of ELMP1 remains intact and unaffected.

    Theorem 4. When the algorithm reaches its $k$th iteration, let $\widehat{UB}$ be the current best upper bound on the global optimum of ELMP1. Denote

    $$\Xi=\sum_{i=1}^{p}\zeta_i\ln\underline{t}_i,\qquad \pi_\iota=\exp\bigg(\frac{\widehat{UB}-\Xi+\zeta_\iota\ln\underline{t}_\iota}{\zeta_\iota}\bigg),\quad \iota\in\{1,2,\ldots,p\}.$$

    For any sub-rectangle $\tilde H^k\subseteq H^k$, it can be inferred that:

    (i) If $\Xi>\widehat{UB}$, then ELMP1 has no global optimal solution over $\tilde H^k$;

    (ii) If $\Xi<\widehat{UB}$, then ELMP1 has no global optimal solution over $\ddot H$, where

    $$\ddot H=\tilde H_1\times\cdots\times\tilde H_{\iota-1}\times\ddot H_\iota\times\tilde H_{\iota+1}\times\cdots\times\tilde H_p$$

    with $\ddot H_\iota=\{t_\iota\in\mathbb{R}\mid\pi_\iota\le t_\iota\le\overline{t}_\iota\}\cap\tilde H_\iota$.

    Proof. (i) If $\Xi>\widehat{UB}$, then

    $$\min_{t\in\tilde H^k}\sum_{i=1}^{p}\zeta_i\ln t_i=\sum_{i=1}^{p}\zeta_i\ln\underline{t}_i=\Xi>\widehat{UB}.$$

    Therefore, $\tilde H^k$ does not contain a global optimal solution of ELMP1.

    (ii) If $\Xi<\widehat{UB}$, then for any $t\in\ddot H$,

    $$\min_{t\in\ddot H}\sum_{i=1}^{p}\zeta_i\ln t_i=\min_{t\in\ddot H}\sum_{i=1,\ i\neq\iota}^{p}\zeta_i\ln t_i+\min_{t\in\ddot H}\zeta_\iota\ln t_\iota\ge\sum_{i=1,\ i\neq\iota}^{p}\zeta_i\ln\underline{t}_i+\zeta_\iota\ln\pi_\iota=\sum_{i=1,\ i\neq\iota}^{p}\zeta_i\ln\underline{t}_i+\zeta_\iota\ln\bigg(\exp\bigg(\frac{\widehat{UB}-\Xi+\zeta_\iota\ln\underline{t}_\iota}{\zeta_\iota}\bigg)\bigg)=\sum_{i=1,\ i\neq\iota}^{p}\zeta_i\ln\underline{t}_i+\zeta_\iota\ln\underline{t}_\iota+\widehat{UB}-\Xi=\widehat{UB}.$$

    Therefore, $\ddot H$ does not contain a global optimal solution of ELMP1.
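    The reduction of Theorem 4 can be sketched as follows (the helper name compress is ours; it returns None when case (i) applies):

```python
# Given the incumbent upper bound UB (UB-hat in the text) and a box
# [lower, upper], either discard the box or cut each edge from above.
import numpy as np

def compress(lower, upper, zeta, UB):
    Xi = float(np.sum(zeta * np.log(lower)))
    if Xi > UB:                      # case (i): the box holds no optimum
        return None
    upper = upper.copy()
    for i in range(len(zeta)):       # case (ii): remove [pi_i, upper_i]
        pi_i = np.exp((UB - Xi + zeta[i] * np.log(lower[i])) / zeta[i])
        if pi_i < upper[i]:          # since Xi <= UB, pi_i >= lower[i] holds
            upper[i] = pi_i
    return lower, upper
```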

    The branching operation proposed in Section 3.1 partitions the initial rectangle $H^0$ into smaller sub-rectangles, enabling the algorithm to search for optimal solutions of ELMP1 over $V$ in a manner that necessarily retains the global minimal solution of ELMP1. For the $k$th iteration of the algorithm, we introduce the following notation. $Q_k$ denotes the list of active nodes. For each branching node $H\in Q_k$, $(y(H),t(H))$ and $LB(H)$ represent the optimal solution and the optimal value of LRP$(H)$, respectively. The current best lower bound for ELMP1 is denoted $LB_k=\min\{LB(H)\mid H\in Q_k\}$. $v_k$ represents the objective function value of the best feasible solution of ELMP1 found so far, and the current best upper bound is denoted $UB$. We choose a rectangle $H^k$ from $Q_k$ satisfying $LB(H^k)=LB_k$ and split it into two sub-rectangles $H^{k1}$ and $H^{k2}$ by the branching operation. These sub-rectangles are then added to the set $T$, which is updated as $T=(T\setminus\{H^k\})\cup\{H^{k1},H^{k2}\}$. Let $F$ be the set of feasible points, and let $\epsilon$ denote the algorithmic tolerance. More precisely, the presented outer space branch-and-bound algorithm is described as follows:

    Step 0. Initialize the best known solution as null and the best known upper bound as infinity. Create a root node and initialize the list of active nodes with this root node. Set the algorithmic tolerance to the desired value.

    $$F=\emptyset,\quad UB=+\infty,\quad Q_0=\{H^0\},\quad \epsilon\ge0,\quad k=0.$$

    Step 1. Solve the relaxation problem LRP$(H^0)$ to obtain a lower bound (or prove infeasibility). If the problem is feasible, update the incumbent if necessary. Let

    $$LB_0=LB(H^0),\qquad (y^0,t^0,z^0)=(y(H^0),t(H^0),z(H^0)).$$

    If $(y^0,t^0,z^0)$ is a feasible solution of ELMP1, then let $UB=L(y^0,t^0,z^0)$ and $F=F\cup\{(y^0,t^0,z^0)\}$. If $UB-LB_0\le\epsilon$, the algorithm terminates with the global $\epsilon$-optimal solution $(y^0,t^0,z^0)$ for ELMP1. Otherwise, set $T=\{H^0\}$.

    Step 2. Split the current node $H^k$ into two new nodes $H^{kj}\ (j=1,2)$ and reduce them using the compression technique; the reduced rectangles are still denoted $H^{kj}\ (j=1,2)$. Set $T=(T\setminus\{H^k\})\cup\{H^{k1},H^{k2}\}$.

    Step 3. For each child node $H^{kj}\in T\ (j=1,2)$, obtain the optimal value $LB(H^{kj})$ and optimal solution $(y(H^{kj}),t(H^{kj}))$ by solving LRP$(H^{kj})$. Set $F=F\cup\{(y(H^{kj}),t(H^{kj}),z(H^{kj}))\}$, $(\hat y^k,\hat t^k,\hat z^k)=\arg\min_{(y,t,z)\in F}L(y,t,z)$, and $v_k=L(\hat y^k,\hat t^k,\hat z^k)$. If $v_k<UB$, then update the upper bound $UB=v_k$, update the current best solution for ELMP1 as $(y^k,t^k,z^k)=(\hat y^k,\hat t^k,\hat z^k)$, and set $F=\emptyset$. If $LB(H^{kj})>UB$, then remove it from $T$, i.e., $T=T\setminus\{H^{kj}\}$. Set $Q_k=(Q_k\setminus\{H^k\})\cup T$ and update the lower bound $LB_k=\min\{LB(H)\mid H\in Q_k\}$.

    Step 4. Let the list of active nodes be $Q_{k+1}=\{H\mid UB-LB(H)>\epsilon,\ H\in Q_k\}$. If $Q_{k+1}$ is empty, return the best known solution as a global $\epsilon$-optimal solution. Otherwise, select an active node $H^{k+1}\in\arg\min\{LB(H)\mid H\in Q_{k+1}\}$, set $k=k+1$, and go back to Step 2.
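    To make the overall flow concrete, the following self-contained Python sketch assembles Steps 0–4. For readability it assumes $I^-=\emptyset$, so $\zeta=\alpha$ and the only coupling constraints are $t_i=c_i^Ty+d_i$; it reuses branch() and compress() from the sketches above, and all helper names are ours, not the authors'.

```python
# A compact sketch of the outer space branch-and-bound loop (not the
# authors' implementation); solve_lrp builds LRP(H) over variables (y, t).
import numpy as np
from scipy.optimize import linprog

def solve_lrp(c, d, zeta, A, b, lo, up):
    """Solve LRP(H); return (LB(H), y, t) or None if infeasible."""
    p, n = c.shape
    k = (np.log(up) - np.log(lo)) / np.maximum(up - lo, 1e-12)  # secant slopes
    obj = np.concatenate([np.zeros(n), zeta * k])     # linear part of L_lower
    A_eq = np.hstack([c, -np.eye(p)])                 # c_i^T y - t_i = -d_i
    A_ub = np.hstack([A, np.zeros((A.shape[0], p))])  # A y <= b, padded in t
    bounds = [(None, None)] * n + list(zip(lo, up))   # t in H^k
    res = linprog(obj, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=-d, bounds=bounds)
    if not res.success:
        return None
    const = float(np.sum(zeta * (np.log(lo) - k * lo)))  # constant of L_lower
    return res.fun + const, res.x[:n], res.x[n:]

def outer_space_bb(c, d, zeta, A, b, lo0, up0, eps=1e-6):
    first = solve_lrp(c, d, zeta, A, b, lo0, up0)     # Step 1: root relaxation
    if first is None:
        return None                                   # Y is empty
    LB0, y0, t0 = first
    UB, best = float(np.sum(zeta * np.log(t0))), y0   # incumbent L(y0, t0)
    nodes = [(LB0, lo0, up0)]
    while nodes:
        nodes.sort(key=lambda nd: nd[0])              # node with least bound
        LB, lo, up = nodes.pop(0)
        if UB - LB <= eps:                            # global eps-optimality
            break
        for clo, cup in branch(lo, up):               # Step 2: bisect
            red = compress(clo, cup, zeta, UB)        # Theorem 4 reduction
            if red is None:
                continue
            sol = solve_lrp(c, d, zeta, A, b, *red)
            if sol is None:
                continue
            lb, y, t = sol
            val = float(np.sum(zeta * np.log(t)))     # feasible value L(y, t)
            if val < UB:
                UB, best = val, y                     # Step 3: new incumbent
            if lb < UB - eps:
                nodes.append((lb, *red))              # Step 4: keep active
    return UB, best
```

    Note that the returned bound lives on the logarithmic scale of ELMP1, so $\exp(UB)$ recovers the LMP objective value; sorting the node list each pass is the simplest selection rule, and a heap would be the idiomatic choice at scale.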

    Definition 1 provides the concept of the global $\epsilon$-optimal solution used by the proposed algorithm.

    Definition 1. Given $\epsilon>0$, a feasible solution $(\hat y,\hat t,\hat z)$ is considered a global $\epsilon$-optimal solution for ELMP1 if $L(\hat y,\hat t,\hat z)\le v+\epsilon$, where $v$ represents the global optimal value of ELMP1.

    This subsection discusses the convergence analysis of the algorithm. Supposing the algorithm is infinite, according to the branching operation there exists an infinite rectangular sequence $\{H^k\}_{k=1}^{\infty}$ such that for each $k=1,2,\ldots$ we have $H^{k+1}\subseteq H^k\subseteq H^0$, where $H^k\subseteq\mathbb{R}^p$. The following Lemma 1 provides the theoretical basis for Theorem 5.

    Lemma 1. For any $t\in H$, we have $L(y,t,z)-\underline{L}(y,t,z)\to0$ as $\overline{t}_i-\underline{t}_i\to0$, $i=1,\ldots,p$.

    Proof. It follows from Theorem 2 that

    $$L(y,t,z)-\underline{L}(y,t,z)=\sum_{i=1}^{p}\zeta_i\big[\ln t_i-\big(\ln\underline{t}_i+k_i(t_i-\underline{t}_i)\big)\big]\le\sum_{i=1}^{p}\max_i\{\zeta_i\}\max_{t_i\in H_i}\big\{\ln t_i-\big(\ln\underline{t}_i+k_i(t_i-\underline{t}_i)\big)\big\}\le\sum_{i=1}^{p}\max_i\{\zeta_i\}\Big[\ln\overline{t}_i-\ln\underline{t}_i+\frac{\ln\overline{t}_i-\ln\underline{t}_i}{\overline{t}_i-\underline{t}_i}(\overline{t}_i-\underline{t}_i)\Big]=\sum_{i=1}^{p}2\max_i\{\zeta_i\}(\ln\overline{t}_i-\ln\underline{t}_i)\le\sum_{i=1}^{p}\frac{2\max_i\{\zeta_i\}}{\underline{t}_i}(\overline{t}_i-\underline{t}_i).$$

    Therefore, for any $t\in H$, $L(y,t,z)-\underline{L}(y,t,z)\to0$ as $\overline{t}_i-\underline{t}_i\to0$, $i=1,\ldots,p$.

    Theorem 5. Given $\epsilon>0$ and assuming that the feasible domain of ELMP1 is non-empty, the algorithm either obtains a global $\epsilon$-optimal solution of ELMP1 within a finite number of iterations, or produces a sequence of feasible solutions $\{(y^k,t^k,z^k)\}$ each of whose accumulation points is a global $\epsilon$-optimal solution of ELMP1.

    Proof. Suppose first that the algorithm terminates within a finite number of iterations; without loss of generality, assume it terminates at the $k$th iteration, so that $UB-LB_k\le\epsilon$. According to Step 3 of the algorithm, $UB=v_k=L(\hat y^k,\hat t^k,\hat z^k)$; thus

    $$L(\hat y^k,\hat t^k,\hat z^k)-LB_k\le\epsilon.\qquad(3.1)$$

    It follows from (3.1) that

    $$L(\hat y^k,\hat t^k,\hat z^k)\le LB_k+\epsilon\le v+\epsilon.$$

    Therefore, $(\hat y^k,\hat t^k,\hat z^k)$ is a global $\epsilon$-optimal solution of ELMP1.

    Now suppose the algorithm iterates an infinite number of times, producing an infinite sequence of rectangles $\{H^k\}_{k=1}^{\infty}$ with $H^k=\prod_{i=1}^{p}[\underline{t}_i^k,\overline{t}_i^k]\subseteq\mathbb{R}^p$. Without loss of generality, suppose that $\lim_{k\to\infty}H^k=H$; then for a subsequence $K$ of $\{1,2,\ldots\}$ we have

    $$\lim_{k\in K,\ k\to\infty}H^k=H.\qquad(3.2)$$

    For any $k\in K$, according to Step 3 of the algorithm, the lower bound satisfies

    $$LB(H^k)=\min_{(y,t,z)\in V}\underline{L}(y,t,z)=\underline{L}(y^k,t^k,z^k)\le v_k=L(\hat y^k,\hat t^k,\hat z^k)\le L(y^k,t^k,z^k).$$

    For any $k\in K$ it follows that $t^k\in H^k\subseteq H^0$. Therefore, $\{t^k\}_{k=1}^{\infty}$ has a convergent subsequence $\{t^k\}_{k\in K}$ and, by formula (3.2), the limit of $\{t^k\}_{k\in K}$ lies in $H$. Let

    $$\lim_{k\in K,\ k\to\infty}t^k=\hat t,\qquad(3.3)$$

    where $\hat t$ is an accumulation point of $\{t^k\}_{k\in K}$. Since $L(y,t,z)$ is continuous, combining with (3.3) we have

    $$\lim_{k\in K,\ k\to\infty}L(y^k,t^k,z^k)=L(\hat y,\hat t,\hat z).\qquad(3.4)$$

    For any $k\in K$ with $t^k\in H^k$, it follows from Lemma 1 that

    $$\lim_{k\in K,\ k\to\infty}\big(L(y^k,t^k,z^k)-\underline{L}(y^k,t^k,z^k)\big)=0.\qquad(3.5)$$

    Hence, we have

    $$\lim_{k\in K,\ k\to\infty}\underline{L}(y^k,t^k,z^k)=L(\hat y,\hat t,\hat z).\qquad(3.6)$$

    Combining (3.4)–(3.6), we obtain $L(\hat y,\hat t,\hat z)=\lim_{k\in K}L(y^k,t^k,z^k)=\lim_{k\in K}\underline{L}(y^k,t^k,z^k)$. For each $k\in K$ we have $L(\hat y^k,\hat t^k,\hat z^k)=\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)$; therefore

    $$\lim_{k\to\infty}v_k=\lim_{k\to\infty}\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)=v.\qquad(3.7)$$

    Because $\{\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)\}_{k\in K}$ is a subsequence of $\{\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)\}_{k=1}^{\infty}$, it follows from (3.7) that $\lim_{k\in K}\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)=v$. From the continuity of $L(y,t,z)$ and formula (3.5), we have

    $$\lim_{k\in K,\ k\to\infty}\sum_{i=1}^{p}\zeta_i\ln(\hat t_i^k)=\sum_{i=1}^{p}\zeta_i\ln(\hat t_i).$$

    So we get

    $$\sum_{i=1}^{p}\zeta_i\ln(\hat t_i)=v.\qquad(3.8)$$

    Combining Eq (3.8), we conclude that $(\hat y,\hat t,\hat z)$ is a global optimal solution of ELMP1.

    We use $\Omega(H)$ to denote the size of the sub-rectangle $H=\{t\in\mathbb{R}^p\mid\underline{t}_i\le t_i\le\overline{t}_i,\ i=1,\ldots,p\}$, i.e.,

    $$\Omega(H)=\max\{\overline{t}_i-\underline{t}_i,\ i=1,\ldots,p\}.$$

    In addition,

    $$\theta=2\max\{\zeta_i,\ i=1,\ldots,p\},\qquad \lambda=\max\Big\{\frac{1}{\underline{t}_i},\ i=1,\ldots,p\Big\}.$$

    Lemma 2. Given any $\epsilon>0$ and a sub-rectangle $H\subseteq H^0$ with $\Omega(H)<\epsilon$, for any optimal solution $(y,t,z)$ of LRP$(H)$ we have

    $$|L(y,t,z)-\underline{L}(y,t,z)|\le p\theta\lambda\epsilon.$$

    Proof. Clearly, $(y,t,z)$ is a feasible solution of ELMP1; it follows from Lemma 1 that

    $$L(y,t,z)-\underline{L}(y,t,z)\le\sum_{i=1}^{p}\frac{2\max_i\{\zeta_i\}}{\underline{t}_i}(\overline{t}_i-\underline{t}_i)\le p\theta\lambda\,\Omega(H)\le p\theta\lambda\epsilon.$$

    The following Theorem 6 illustrates an estimation of the maximum number of iterations for the proposed algorithm in the worst case, indirectly indicating that the algorithm will eventually find the global optimal solution for LMP.

    Theorem 6. For any $\epsilon_0\in(0,1)$, the algorithm iterates at most

    $$2^{\sum_{i=1}^{p}\left\lceil\log_2\frac{p\theta\lambda\,\Omega(H^0)}{\epsilon_0}\right\rceil}-1$$

    times to obtain a global $\epsilon_0$-optimal solution in the worst case.

    Proof. Let $\epsilon=\epsilon_0$, and suppose that during the $k$th iteration, when the algorithm reaches Step 3, $H\subseteq H^0$ is the rectangle selected by the branching rule to be subdivided and it satisfies

    $$\overline{t}_i-\underline{t}_i\le\Omega(H)\le\frac{\epsilon_0}{p\theta\lambda},\quad i=1,\ldots,p;$$

    then the algorithm terminates. In the worst case, when the algorithm terminates at Step 4 of the $k$th iteration, splitting the initial rectangle $H^0$ results in $k+1$ sub-rectangles. Let $H^\iota$ be any one of these sub-rectangles; it satisfies

    $$\overline{t}_i^\iota-\underline{t}_i^\iota\le\Omega(H^\iota)\le\frac{1}{2^{\iota_i}}\Omega(H^0),\quad i=1,\ldots,p,$$

    where $\iota_i$ denotes the number of subdivisions of the initial interval $[\underline{t}_i^0,\overline{t}_i^0]$ performed to produce $[\underline{t}_i^\iota,\overline{t}_i^\iota]$. From Lemma 2, it follows that

    $$\frac{1}{2^{\iota_i}}\Omega(H^0)\le\frac{\epsilon_0}{p\theta\lambda},\quad i=1,\ldots,p.\qquad(3.9)$$

    Since the subdivision of $H^0$ yields no more than $\prod_{i=1}^{p}2^{\iota_i}$ sub-rectangles, if every sub-rectangle satisfies (3.9), the algorithm must terminate. By Eq (3.9) we have

    $$\iota_i\ge\log_2\frac{p\theta\lambda\,\Omega(H^0)}{\epsilon_0},\quad i=1,\ldots,p.$$

    Let $\chi_i=\left\lceil\log_2\frac{p\theta\lambda\,\Omega(H^0)}{\epsilon_0}\right\rceil$, $i=1,\ldots,p$. The initial rectangle is split into $k+1$ sub-rectangles with $k+1=\prod_{i=1}^{p}2^{\chi_i}$, at which point the algorithm terminates. Thus, the algorithm iterates at most $2^{\sum_{i=1}^{p}\chi_i}-1$ times. Furthermore, it can be derived that

    $$\epsilon_0\ge|L(y,t,z)-\underline{L}(y,t,z)|\ge L(y,t,z)-LB(H)\ge L(y,t,z)-L(y^*,t^*,z^*)\ge0,\qquad(3.10)$$

    where $(y^*,t^*,z^*)$ is a global optimal solution of ELMP1; for $i\in I^+$, $t_i^*=c_i^Ty^*+d_i$, and for $i\in I^-$, $z_i^*=\frac{1}{t_i^*}$ and $z_i^*=c_i^Ty^*+d_i$. Based on the bounding process, let $\hat y^k$ be the best feasible solution obtained so far, and let $\{f(\hat y^k)\}$ be the decreasing sequence satisfying $f(\hat y^k)\ge f(y^*)$. Combined with (3.10), we obtain

    $$\epsilon_0\ge f(y)-f(y^*)\ge f(\hat y^k)-f(y^*).$$

    Therefore, the algorithm terminates, and $\hat y^k$ is a global $\epsilon_0$-optimal solution of LMP.

    In this section, several problems are employed to demonstrate the feasibility and effectiveness of the proposed algorithm. All linear programming subproblems are solved by the dual-simplex method, and all tests are carried out in MATLAB 9.2.0.538062 (R2017a) on a machine with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz, 4 GB of memory, and the 64-bit Windows 10 operating system.

    Initially, we use the existing branch-and-bound algorithms [10,23,26,28] and the proposed algorithm to compute the deterministic Problems 1–8 with a predefined convergence tolerance, in order to demonstrate the feasibility of our algorithm. To validate its efficiency, we conduct tests on the random Problems 9–11 with a tolerance of $10^{-6}$.

    Problem 1 [10,26,28]

    $$\begin{cases}\min\ (y_1+2y_2+2)(4y_1-3y_2+4)(3y_1-4y_2+5)^{-1}(2y_1+y_2+3)^{-1}\\ \text{s.t.}\ y_1+y_2\le1.5,\ 0\le y_1\le1,\ 0\le y_2\le1.\end{cases}$$

    Problem 2 [10,26,28]

    $$\begin{cases}\min\ (y_1+y_2)(y_1-y_2+7)\\ \text{s.t.}\ 2y_1+y_2\le14,\ y_1+y_2\le10,\ -4y_1+y_2\le0,\ 2y_1+y_2\ge6,\\ \quad\ y_1+2y_2\ge6,\ y_1-y_2\le3,\ y_1+y_2\ge0,\ y_1-y_2+7\ge0,\ y_1,y_2\ge0.\end{cases}$$

    Problem 3 [10,26,28]

    $$\begin{cases}\min\ (y_1+y_2+1)^{2.5}(2y_1+y_2+1)^{1.1}(y_1+2y_2+1)^{1.9}\\ \text{s.t.}\ y_1+2y_2\le6,\ 2y_1+2y_2\le8,\ 1\le y_1\le3,\ 1\le y_2\le3.\end{cases}$$

    Problem 4 [26,28]

    $$\begin{cases}\min\ (3y_1-4y_2+5)(y_1+2y_2-1)^{0.5}(2y_1-y_2+4)(y_1-2y_2+8)^{0.5}(2y_1+y_2-1)\\ \text{s.t.}\ 5y_1-8y_2\le24,\ 5y_1+8y_2\le44,\ 6y_1-3y_2\le15,\ 4y_1+5y_2\le44,\\ \quad\ 1\le y_1\le3,\ 0\le y_2\le1.\end{cases}$$

    Problem 5 [10,26,28]

    $$\begin{cases}\min\ (3y_1-2y_2-2)^{\frac{2}{3}}(y_1+2y_2+2)^{\frac{2}{5}}\\ \text{s.t.}\ 2y_1-y_2\ge2,\ y_1-2y_2\le2,\ y_1+y_2\le5,\ 3\le y_1\le5,\ 1\le y_2\le3.\end{cases}$$

    Problem 6 [26,28]

    $$\begin{cases}\min\ \big(y_1+\tfrac{1}{9}y_3\big)\big(y_2+\tfrac{1}{9}y_3+2\big)\\ \text{s.t.}\ 9y_1+9y_2+2y_3\le81,\ 8y_1+y_2+8y_3\le72,\ y_1+8y_2+8y_3\le72,\\ \quad\ 7y_1+y_2+y_3\ge9,\ y_1+7y_2+y_3\ge9,\ y_1+y_2+7y_3\ge9,\\ \quad\ 0\le y_1\le8,\ 0\le y_2\le9,\ 0\le y_3\le9.\end{cases}$$

    Problem 7 [23,26,28]

    $$\begin{cases}\min\ (4y_1-4y_4+3y_5+21)(-4y_1+2y_2+3y_3-4y_4+4y_5+3)\\ \qquad\times(3y_1+4y_2+2y_3-2y_4+2y_5-7)(-2y_1+y_2-2y_3+2y_5+11)\\ \text{s.t.}\ 4y_1+4y_2+5y_3+3y_4+y_5\le25,\ y_1-5y_2+2y_3+3y_4+y_5\le2,\\ \quad\ y_1+2y_2+y_3-2y_4+2y_5\le6,\ 4y_2+3y_3-8y_4+11y_5\ge8,\\ \quad\ y_1+y_2+y_3+y_4+y_5\le6,\ y_1+y_2+7y_3\ge9,\\ \quad\ y_1,y_2,y_3,y_4,y_5\ge1.\end{cases}$$

    Problem 8 [23,26,28]

    $$\begin{cases}\min\ (0.813396y_1+0.67440y_2+0.305038y_3+0.129742y_4+0.217796)\\ \qquad\times(0.224508y_1+0.063458y_2+0.932230y_3+0.528736y_4+0.091947)\\ \text{s.t.}\ 0.488509y_1+0.063458y_2+0.945686y_3+0.210704y_4\le3.562809,\\ \quad\ 0.324014y_1-0.501754y_2-0.719204y_3+0.099562y_4\ge0.052215,\\ \quad\ 0.445225y_1-0.346896y_2+0.637939y_3-0.257623y_4\le0.427920,\\ \quad\ 0.202821y_1+0.647361y_2+0.920135y_3-0.983091y_4\le0.840950,\\ \quad\ 0.886420y_1-0.802444y_2-0.305441y_3-0.180123y_4\le1.353686,\\ \quad\ 0.515399y_1-0.424820y_2+0.897498y_3+0.187268y_4\le2.137251,\\ \quad\ 0.591515y_1+0.060581y_2-0.427365y_3+0.579388y_4\ge0.290987,\\ \quad\ 0.423524y_1+0.940496y_2-0.437944y_3-0.742941y_4\le0.373620,\\ \quad\ y_1,y_2,y_3,y_4\ge0.\end{cases}$$

    Problem 9 [10,23]

    $$\min\ \prod_{i=1}^{2}\Big(\sum_{j=1}^{n}\hat c_{ij}y_j+1\Big)\qquad \text{s.t.}\ \hat Ay\le\hat b,\ y\ge0,$$

    where $\hat c_{ij}$ is generated randomly in the interval $[0,1]$, $i=1,2$, $j=1,\ldots,n$. All elements $\hat a_{ij}$ of the matrix $\hat A$ are randomly generated in the interval $[-1,1]$, i.e., $\hat a_{ij}=2\hat\lambda-1$, where $\hat\lambda$ is randomly generated in $[0,1]$. Each component of $\hat b$ is set to $\sum_{j=1}^{n}\hat a_{ij}+2\hat t$, where $\hat t$ is randomly generated in $[0,1]$.
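    A sketch of this generation scheme follows (we draw $\hat t$ once per row of $\hat b$, which is one reading of the description above):

```python
# Generate a random Problem 9 instance: c-hat in [0,1], A-hat in [-1,1],
# and b-hat_i = sum_j a-hat_ij + 2*t-hat, which keeps y = 1 feasible.
import numpy as np

def random_problem9(m, n, seed=0):
    rng = np.random.default_rng(seed)
    c_hat = rng.random((2, n))                 # c-hat_ij in [0, 1]
    A_hat = 2.0 * rng.random((m, n)) - 1.0     # a-hat_ij = 2*lambda - 1
    b_hat = A_hat.sum(axis=1) + 2.0 * rng.random(m)
    return c_hat, A_hat, b_hat

c_hat, A_hat, b_hat = random_problem9(20, 30)
print(np.all(A_hat @ np.ones(30) <= b_hat))    # True: y = 1 is feasible
```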

    Problem 10 [23,26]

    $$\begin{cases}\min\ \displaystyle\prod_{i=1}^{p}\sum_{j=1}^{n}\tilde c_{ij}y_j\\ \text{s.t.}\ \displaystyle\sum_{j=1}^{n}\tilde a_{mj}y_j\le\tilde b_m,\ m=1,\ldots,M,\\ \quad\ 0\le y_j\le1,\ j=1,\ldots,n,\end{cases}$$

    where $\tilde c_{ij}$ is randomly generated in $[0,1]$, $\tilde a_{mj}$ is randomly generated in $[-1,1]$, $m=1,\ldots,M$, $j=1,\ldots,n$, and $\tilde b_m=\sum_{j=1}^{n}\tilde a_{mj}+2\tilde t$, where $\tilde t$ is randomly generated in $[0,1]$.

    Problem 11 [26,28]

    $$\begin{cases}\min\ \displaystyle\prod_{i=1}^{p}\Big(\sum_{j=1}^{n}\bar c_{ij}y_j+\bar d_i\Big)^{\bar\alpha_i}\\ \text{s.t.}\ \bar Ay\le\bar b,\ y\ge0,\end{cases}$$

    where $\bar c_{ij}$ and $\bar d_i$ are randomly generated in $[0,1]$, $i=1,\ldots,p$, $j=1,\ldots,n$. Each element of the matrix $\bar A$ and each $\bar\alpha_i$ are randomly generated in $[-1,1]$. The components of the vector $\bar b$ are generated by $\sum_{j=1}^{n}\bar a_{ij}+2\bar t$, where $\bar t$ is generated randomly in $[0,1]$.

    Table 1 shows the numerical comparison between some algorithms and our algorithm on Problems 1–8.

    Table 1.  Numerical comparisons among some other algorithms and our algorithm on Problems 1–8.
    Problems Algorithms Optimal solution Opt.val Iter Time Tolerance
    1 Algorithm in [10] (0, 0) 0.5333 290 41.4735 $10^{-3}$
    Algorithm in [26] (0, 0) 0.5333 68 1.1780 $10^{-6}$
    Algorithm in [28] (0, 0) 0.5333 67 1.2418 $10^{-6}$
    Our algorithm (0, 0) 0.5333 3 0.0350 $10^{-6}$
    2 Algorithm in [10] (1.9975, 8) 9.9725 122 17.1740 $10^{-3}$
    Algorithm in [26] (2, 8) 10 4 0.05146 $10^{-6}$
    Algorithm in [28] (2, 8) 10 1 0.00003 $10^{-6}$
    Our algorithm (2, 8) 10 1 0.00004 $10^{-6}$
    3 Algorithm in [10] (1, 1) 997.6613 29 4.0872 $10^{-3}$
    Algorithm in [26] (1.0, 1.0) 997.6613 1 0.0000351 $10^{-6}$
    Algorithm in [28] (1.0, 1.0) 997.6613 1 0.0000403 $10^{-6}$
    Our algorithm (1.0, 1.0) 997.6613 1 0.0000389 $10^{-6}$
    4 Algorithm in [26] (1.25, 1.00) 263.78893 2 0.010338 $10^{-6}$
    Algorithm in [28] (1.25, 1.00) 263.78893 2 0.013769 $10^{-6}$
    Our algorithm (1.25, 1.00) 263.78893 2 0.00876 $10^{-6}$
    5 Algorithm in [10] (3.0000, 1.9990) 5.01105 48 6.66627 $10^{-3}$
    Algorithm in [26] (3, 2) 5.00931 1 0.0000338 $10^{-6}$
    Algorithm in [28] (3, 2) 5.00931 1 0.0000348 $10^{-6}$
    Our algorithm (3, 2) 5.00931 1 0.0000317 $10^{-6}$
    6 Algorithm in [26] (0, 8, 1) 0.90123 10 0.1977365 $10^{-6}$
    Algorithm in [28] (8, 0, 1) 0.90123 9 0.17455 $10^{-6}$
    Our algorithm (8, 0, 1) 0.90123 9 0.1405001 $10^{-6}$
    7 Algorithm in [26] (1, 2.000, 1, 1, 1) 9504.0 5 0.0826865 $10^{-6}$
    Algorithm in [28] (1, 2, 1, 1, 1) 9504.0 1 0.0000387 $10^{-6}$
    Algorithm in [23] (1, 2, 1, 1, 1) 9503.9999 2 0.069 $10^{-6}$
    Our algorithm (1, 2, 1, 1, 1) 9504.0 1 0.0000542 $10^{-6}$
    8 Algorithm in [26] (1.3148, 0.1396, 0, 0.4233) 0.8902 2 0.0194 $10^{-6}$
    Algorithm in [28] (1.3148, 0.1396, 0, 0.4233) 0.8902 2 0.0394 $10^{-6}$
    Algorithm in [23] (1.3148, 0.1396, 0, 0.4233) 0.8902 1 0.0266 $10^{-6}$
    Our algorithm (1.3148, 0.1396, 0, 0.4233) 0.8902 2 0.0093 $10^{-6}$


    For stochastic Problems 9–11, we solve 10 randomly generated problems for each set of parameters $(p,m,n)$ and report their average number of iterations and average CPU time in Tables 2–6. Specifically, Problem 9 represents an LMP with only two linear functions and exponents of 1. Table 2 shows the results of numerical comparisons between our algorithm and the algorithms proposed in [10,23]. Problem 10 is an LMP with multiple linear functions and exponents of 1. Tables 3 and 4 display the numerical results of our algorithm compared with the algorithms in [23,26]. Additionally, Figures 1 and 2 plot some of the data results in Table 3. Problem 11 is an LMP with real exponents and multiple linear functions. Tables 5 and 6 show the numerical results of our algorithm compared with the algorithms in [26,28]. Figures 3–6 depict some of the data results from Tables 5 and 6.

    Table 2.  Numerical comparisons among the algorithms in [10,23] and our algorithm on Problem 9.
    (m,n) Our algorithm Algorithm in [10] Algorithm in [23]
    Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time
    (10, 20) 10.3(3.4655) 0.1712(0.0666) 14.2 (1.5492) 0.6062 (0.0695) 2.6 (6.2561) 0.2083 (0.3861)
    (20, 20) 8.9(3.7802) 0.2018(0.1548) 17.4 (1.7127) 0.8368 (0.0756) 4.8 (6.9793) 0.2814 (0.5504)
    (22, 20) 8.8(3.8678) 0.1375(0.0726) 18.5 (1.9003) 0.9460 (0.1235) 6.0 (12.2564) 0.3231 (0.9257)
    (20, 30) 12.3(3.8223) 0.2059(0.0737) 19.9 (0.5676) 1.0781 (0.0674) 6.4 (7.4951) 0.3302 (0.4899)
    (35, 50) 11.6(1.8) 0.2234(0.0404) 21.2 (0.4316) 1.8415 (0.1338) 8.1 (11.6772) 0.4267(0.8646)
    (45, 60) 11.3(2.1471) 0.3004(0.1370) 23.0 (0.6667) 2.4338 (0.1016) 8.7 (14.2688) 0.4867 (0.8930)
    (40,100) 11.3(0.9) 0.3790(0.1208) 35.7 (1.1595) 5.1287 (0.0935) 11.9 (12.3809) 0.6049 (0.9664)
    (60,100) 11.9(2.3854) 0.4781(0.1569) 36.1 (0.7379) 6.8143 (0.1713) 9.7 (12.9822) 0.7955 (1.2783)
    (70,100) 11.1(1.7578) 0.4682(0.1673) 36.6 (1.2649) 8.1967 (0.2121) 8.3 (11.6638) 0.8152 (1.3057)
    (70,120) 11.9(3.7802) 0.5736(0.4449) 39.1 (1.6633) 9.5642 (0.2975) 10.1 (14.6462) 0.9693 (1.3529)
    (100,100) 9.1(1.8682) 0.5069(0.2585) 37.5 (2.1731) 13.0578 (0.3543) 11.1 (9.0549) 1.1889 (1.2506)

    Table 3.  Numerical comparisons among the algorithms in [23,26] and our algorithm on medium-sized Problem 10.
    p (m,n) Our algorithm Algorithm in [23] Algorithm in [26]
    Avg.Iter Avg.Time Avg.Iter Avg.Time Avg.Iter Avg.Time
    4 (10, 20) 45.3 0.5224 10.8 0.6654 66.1 0.8137
    (20, 40) 56.4 0.7212 25.1 1.1668 86.8 1.1674
    (30, 60) 61.1 0.8918 32.6 1.8145 89.4 1.3799
    (40, 80) 54.9 0.9787 46.4 2.2780 89.1 1.6630
    (50,100) 59.8 1.3421 42.6 2.5277 89.3 2.1077
    (60,120) 64.1 1.8703 76.5 6.5797 96.7 2.9471
    (70,140) 62.9 2.3383 62.2 8.1493 95.1 3.6654
    (80,160) 73.7 3.4148 43.8 14.7688 109.5 5.2437
    (90,180) 70.1 4.1754 37.2 19.2281 110.7 6.8075
    (100,200) 69.7 5.2313 79.2 46.8494 111.7 8.6532
    5 (10, 20) 77.8 0.9546 45.7 1.2609 120.9 1.5939
    (20, 40) 89.5 1.2246 45.7 1.3929 142.1 2.0816
    (30, 60) 104.8 1.6322 184.6 7.6766 180.5 3.0199
    (40, 80) 100.8 1.9369 117.6 10.4108 154.9 3.1245
    (50,100) 100 2.3850 167 15.5902 178.9 4.4852
    (60,120) 117.2 3.5787 143.4 18.3711 203.8 6.5421
    (70,140) 123.3 4.5434 212.5 38.7537 231.6 9.0070
    (80,160) 97.6 4.6827 310.2 81.1816 175.8 8.7630
    (90,180) 116.1 6.8339 298.4 93.8722 211.9 12.8401
    (100,200) 98.8 7.4864 230.6 114.7671 176.9 13.7859
    6 (10, 20) 88.5 1.0468 43.8 1.7287 158.5 1.9954
    (20, 40) 116.9 1.5515 77.0 6.6506 204.6 2.8660
    (30, 60) 171 2.5808 101.9 11.1125 334.9 5.3399
    (40, 80) 175.5 3.2212 133.1 16.3945 342.3 6.6506
    (50,100) 163.8 3.8391 271.2 22.0331 307.7 7.5644
    (60,120) 147.3 4.3689 387.1 25.2491 289.2 9.0304
    (70,140) 168.6 6.3815 766.5 63.8334 327.5 12.8848
    (80,160) 187.2 8.7670 315.4 98.5674 384 18.5743
    (90,180) 166.5 9.8465 444.2 110.7434 310.2 18.9719
    (100,200) 173.4 13.2959 657.8 120.0034 323.5 25.7361
    7 (10, 20) 133.6 1.5839 458.8 19.2006 284.1 3.6194
    (20, 40) 140.4 1.8923 496.6 23.8151 265.2 3.7999
    (30, 60) 182.1 2.8005 1013.5 64.3865 330.2 5.3898
    (40, 80) 202.7 3.7863 1177.3 86.1229 470.7 9.3001
    (50,100) 250.7 5.8942 1398 127.1028 542.5 13.4583
    (60,120) 225.4 6.8489 640.2 155.5248 456.0 14.5520
    (70,140) 275.6 10.6270 1197.4 185.5591 604.2 24.1463
    (80,160) 258.9 12.7324 891.1 278.3208 542.2 27.5974
    (90,180) 211.3 13.1138 816.7 293.8722 463.1 29.6745
    (100,200) 234.4 18.4294 469.3 321.4063 510.6 41.1299

    Table 4.  Numerical comparisons among the algorithms in [23,26] and our algorithm on large-scale Problem 10.
    (p,m,n) Our algorithm Algorithm in [23] Algorithm in [26]
    Avg.Iter Avg.Time Avg.Iter Avg.Time Avg.Iter Avg.Time
    (2, 10, 1000) 27.7 4.1565 15.5 2.6293 39.7 6.2742
    (2, 10, 2000) 33.9 18.1454 28.5 14.0012 48.6 26.5702
    (3, 10, 1000) 106.1 16.8598 101.8 19.3235 193.7 31.8366
    (3, 10, 2000) 173.4 98.8698 185.4 90.3898 352.7 204.3262
    (4, 10, 1000) 361.8 59.9727 757.6 156.5649 878.4 149.7120
    (4, 10, 2000) 552.6 332.8818 1352.1 995.4707 1519.3 921.7023

    Table 5.  Numerical comparisons among the algorithms in [26,28] and our algorithm on medium-scale Problem 11.
    p (m,n) Our algorithm Algorithm in [26] Algorithm in [28]
    Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time
    4 (10, 20) 31.9(32.8312) 0.4150(0.4459) 120.1(118.6612) 2.0917(2.0855) 112.2(97.3764) 1.8487(1.5777)
    (20, 40) 35.4(28.3944) 0.5178(0.4354) 142(109.9582) 2.6908(2.1186) 114.3(92.8041) 2.0269(1.6408)
    (30, 60) 46.6(56.5194) 0.8169(1.0093) 93.1(58.7409) 1.9714(1.3202) 93.0(69.5054) 1.8041(1.2743)
    (40, 80) 34.5(33.5894) 0.7204(0.7070) 85.6(74.8441) 2.0867(1.8616) 63.5(54.9641) 1.4810(1.3050)
    (50,100) 40.8(39.5596) 1.0568(1.0764) 91.8(99.0372) 2.6335(2.8371) 71.1(54.9735) 1.8772(1.4905)
    (60,120) 21.8(17.0458) 0.6746(0.5693) 89.2(61.4684) 3.2295(2.2926) 68.5(51.9870) 2.3788(1.8911)
    (70,140) 16.0(11.1893) 0.6336(0.4974) 78.7(80.9519) 3.6159(3.7190) 63.7(76.3375) 2.7302(3.2902)
    (80,160) 27.8(19.9038) 1.4628(1.0860) 88.7(53.3555) 5.1639(3.3558) 74.6(49.2020) 4.2010(2.9670)
    (90,180) 21.2(21.6601) 1.3846(1.5300) 67.0(62.3217) 4.6670(4.5688) 59.6(54.8584) 3.9347(3.8613)
    (100,200) 37.3(30.3942) 3.3410(3.0137) 122.5(55.1693) 11.0229(5.1801) 96.9(53.5807) 8.3233(4.7203)
    5 (10, 20) 123.2(173.7980) 1.6461(2.3781) 160.5(122.7992) 2.6874(2.0730) 98.4(70.0303) 1.5646(1.0756)
    (20, 40) 57.8(77.3586) 0.8393(1.16481) 101.5(90.8947) 1.8805(1.7787) 71.6(67.9591) 1.3059(1.3715)
    (30, 60) 63.2(39.3568) 1.1174(0.7884) 155.5(135.708) 3.3789(3.0502) 98.4(88.7437) 2.0061(1.7969)
    (40, 80) 54.9(95.2979) 1.3205(2.362) 203.9(146.9084) 5.4791(4.0027) 145.5(118.9128) 3.7432(3.0424)
    (50,100) 42.3(33.77) 1.1217(0.9049) 343.5(382.1019) 10.8698(11.9164) 267.8(361.9276) 8.0793(10.2894)
    (60,120) 51.0(47.8623) 1.8003(1.8125) 268.0(146.4281) 10.6335(6.2604) 191(131.6716) 7.2388(5.0948)
    (70,140) 31.8(40.5038) 1.3226(1.8188) 117.8(114.8893) 5.4516(5.4995) 86.9(82.3619) 3.8601(3.8865)
    (80,160) 30.4(29.2684) 1.5684(1.5981) 67.5(55.9897) 3.86(3.2299) 46.0(21.5639) 2.4507(1.2596)
    (90,180) 31.9(31.3989) 2.0009(2.0368) 96.6(77.3139) 6.8571(5.7645) 68.4(65.7529) 4.6769(4.6839)
    (100,200) 33.4(33.6309) 3.1802(3.3664) 198.4(169.6061) 19.0294(16.5882) 119.5(105.2675) 11.0382(9.6674)
    6 (10, 20) 99.3(117.1632) 2.1315(2.2123) 466.3(514.9322) 12.2852(14.0562) 356(416.9149) 8.8802(9.4876)
    (20, 40) 348.7(651.2854) 5.6616(10.6349) 439.7(617.1514) 9.5253(14.0514) 287.9(355.6457) 4.7303(5.8901)
    (30, 60) 83.8(62.7117) 1.3889(1.0594) 197.5(149.0156) 4.0838(3.1896) 109.3(72.8190) 2.1971(1.5108)
    (40, 80) 89.3(116.0233) 2.8242(3.4539) 385.6(404.1607) 15.9517(16.6636) 267.0(311.3747) 10.6983(12.9213)
    (50,100) 215.1(205.3667) 10.4319(10.0666) 418.3(258.3912) 19.7742(13.2598) 280.8(200.1099) 13.2487(9.9695)
    (60,120) 64.0(104.2929) 3.7668(6.0989) 536.4(451.0098) 32.1704(27.5837) 386.8(327.6687) 23.1508(20.6228)
    (70,140) 127.5(136.0230) 9.7495(10.2623) 239.6(159.1353) 16.9048(11.5614) 143.4(101.6545) 10.1235(7.9464)
    (80,160) 128.0(247.0591) 12.2207(23.8867) 255.10(288.4623) 23.7336(27.8770) 168.7(192.1182) 15.9113(19.6485)
    (90,180) 140.2(191.3007) 17.0936(23.5297) 392.10(326.3545) 46.8629(40.9758) 256.1(248.8777) 30.1672(29.1070)
    (100,200) 65.1(60.5532) 5.5949(5.6379) 214.8(257.7537) 18.8967(23.1908) 133.2(188.4955) 11.3892(16.4153)
    7 (10, 20) 134.3(171.9087) 1.9250(2.4624) 434.7(817.5430) 7.6743(14.2808) 268.8(489.3136) 4.7123(8.6148)
    (20, 40) 379.6(645.6239) 6.1832(9.991) 1029.2 (2443.5000) 19.9616(46.747) 492.3(1137.0000) 8.843(19.9798)
    (30, 60) 359.5(780.0254) 6.8852(14.7182) 294.5(595.5877) 6.9729(14.0042) 209.1(425.2718) 4.4931(8.9504)
    (40, 80) 72.0(152.2281) 1.5577(3.3768) 416.8(624.1418) 10.5708(15.7193) 240.5(412.5630) 5.9042(10.0912)
    (50,100) 39.8(61.3560) 0.9973(1.5611) 281.7(481.0314) 8.1550(14.1022) 192.1(366.9682) 5.2928(10.2618)
    (60,120) 164.5(227.7056) 5.9599(8.0338) 672.5(906.5232) 26.0214(34.4929) 444.9(682.5063) 16.7892(24.8233)
    (70,140) 77.3(98.5231) 2.6876(3.4708) 276.2(303.7498) 10.102(11.0515) 139.6(142.7895) 5.13(5.3474)
    (80,160) 74.1(122.941) 5.2043(8.8393) 474.8(780.9027) 27.3175(45.6091) 289.6(552.5047) 16.8792(32.7485)
    (90,180) 70.0(140.4179) 5.2749(10.985) 429.5(705.4637) 30.0896(48.8796) 318.2(562.72) 22.3532(39.7114)
    (100,200) 70.5(96.0742) 6.8665(9.7942) 694.6(1073.2000) 68.3707(108.5662) 384.0(623.8142) 34.5013(56.7705)

    Table 6.  Numerical comparisons among the algorithms in [26,28] and our algorithm on large-scale Problem 11.
    (p,m,n) Our algorithm Algorithm in [26] Algorithm in [28]
    Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time Avg(Std).Iter Avg(Std).Time
    (2, 10, 1000) 11.3(5.6223) 1.8230(1.0482) 28.1(29.4192) 4.9508(5.5044) 33.2(47.4042) 4.9860(6.9212)
    (2, 10, 2000) 13.1(8.8707) 7.6748(5.8547) 38.1(35.5006) 24.5339(24.0695) 45(45.1597) 24.0173(23.0733)
    (2, 10, 3000) 10.3(4.9406) 13.2076(7.6065) 21.1(14.1736) 29.3624(21.9892) 66.9(60.9843) 71.8731(62.9537)
    (2, 10, 4000) 8.8(8.0100) 18.9312(20.8423) 35.9(29.7740) 90.5241(79.8744) 59.1(58.9024) 117.6200(111.2727)
    (3, 10, 1000) 77.1(61.2576) 15.0725(12.4141) 216.1(156.9888) 41.7395(30.3254) 243.6(199.4228) 40.3785(32.9021)
    (3, 10, 2000) 161.8(394.4629) 114.8995(283.4199) 250.1(365.6067) 174.4885(259.0985) 349.7(438.3820) 202.5456(251.7850)
    (4, 10, 1000) 80.5(84.4562) 14.3938(15.3816) 887.9(706.9598) 163.7237(130.5653) 1047.7(944.1833) 168.5456(152.1507)
    (4, 10, 2000) 290.2(348.2453) 169.9271(208.815) 739.2(749.6237) 423.9934(419.0576) 387.2(331.2992) 208.3637(176.7977)

    Figure 1.  Comparison of average number of iterations in Problem 10.
    Figure 2.  Comparison of average CPU time in Problem 10.
    Figure 3.  Comparison of average number of iterations in Problem 11 (medium-scale).
    Figure 4.  Comparison of average CPU time in Problem 11 (medium-scale).
    Figure 5.  Comparison of average number of iterations in Problem 11 (large-scale).
    Figure 6.  Comparison of average CPU time in Problem 11 (large-scale).

    For convenience, the symbols in the table headers of Tables 1–6 are specified as follows: Opt.val: the global optimum of the tested problem; Iter: the number of iterations of the algorithm; Time: the CPU time in seconds; Avg.Iter: the average number of iterations over the 10 randomly generated problems; Std.Iter: the standard deviation of the number of iterations; Avg.Time: the average CPU time over the 10 randomly generated problems; Std.Time: the standard deviation of the CPU time; p: the number of linear functions; m: the number of constraints; n: the dimensionality of the decision variables.

    As can be seen from the numerical results in Table 1, our algorithm effectively calculates the global optimal solutions and optimal values for low-dimensional Problems 1–8. In comparison to the algorithms proposed in [10,26,28], our algorithm demonstrates shorter computation time and fewer iterations. Most deterministic problems require only one iteration, with a maximum of nine iterations, indicating the feasibility of our algorithm.

    Upon observing Table 2, it is evident that our algorithm exhibits a lower average number of iterations and shorter average CPU time than the algorithm proposed in [10] for the medium-scale Problem 9. The primary reason for this disparity is that our algorithm solves the problem in the p-dimensional outer space, whereas the algorithm in [10] tackles it in the n-dimensional space. In comparison with the algorithm presented in [23], the iteration count of [23] is generally lower than ours; however, in terms of average CPU time, our algorithm is the more efficient.

    From Table 3, it is easy to see that the proposed algorithm is more efficient in solving Problem 10. First, compared with the algorithm in [26], our algorithm consumes less time and requires fewer iterations. Second, in comparison with the algorithm in [23], our algorithm exhibits a lower number of iterations when p is fixed at 7. Although the algorithm in [23] may need fewer iterations than ours for some parameter groups, our average CPU time is still lower than that of [23]. For instance, when p=4, our algorithm records more iterations than [23] as m and n vary, yet our CPU time is shorter than theirs.

    To visualize the effectiveness of the proposed algorithm, Figures 1 and 2 depict the trends of the average number of iterations and average CPU time when (m,n) is fixed at (10, 20) or (100, 200) and p varies from 4 to 7 in Problem 10. In Figure 1, when (m,n)=(10,20) (the solid lines), the green solid line is always above the blue solid line, indicating that the number of iterations of the algorithm in [26] is higher than ours, while the red solid line is above the blue solid line only after p=6, meaning that the iterations of the algorithm in [23] exceed ours once p>6. When (m,n)=(100,200) (the dashed lines), both the red and the green dashed lines lie above the blue dashed line, indicating that the number of iterations of our algorithm is lower than that of the other two algorithms. In the vertical direction, the blue dashed line is always higher than the blue solid line, but the vertical distance between them is shorter than the corresponding distances for the other two algorithms; as (m,n) increases, the number of iterations of our algorithm thus grows only slightly. Based on Figure 2, we observe that when (m,n)=(10,20) and p=4,5,6, the three solid lines approximately coincide, while for p=7 our algorithm clearly takes less time. When (m,n)=(100,200), the distance between the red dashed line and the blue dashed line widens as p increases. The time taken by our algorithm increases with (m,n), but not significantly.

    Table 4 gives the numerical results for solving the large-scale Problem 10. It can be observed that our algorithm is more time-saving than the algorithm in [26]. In contrast to the algorithm in [23], the first two parameter sets yield better results for [23] than for ours. However, for (p,m,n)=(3,10,1000), the number of iterations of our algorithm is comparable to that of [23] while our time consumption is lower. When (p,m,n)=(3,10,2000), although our algorithm requires fewer iterations than that of [23], its CPU time is slightly longer. When (p,m,n)=(4,10,1000) and (4,10,2000), our algorithm demonstrates clear advantages: the algorithm in [23] requires about twice as many iterations and three times as much CPU time as ours.

    Table 5 shows the results of the medium-scale test for Problem 11. Compared with the algorithms in [26,28], our algorithm exhibits a distinct advantage. Figures 3 and 4 show the average number of iterations and the average CPU time of the algorithms in [26,28] and our algorithm when (m,n) is fixed at (10, 20) or (100, 200) on Problem 11. From Figure 3, it can be observed that when (m,n)=(10,20) the average number of iterations of the algorithm in [26] is higher than ours, falling below ours only at p=5. When (m,n)=(100,200), the average number of iterations of our algorithm is lower than those of the algorithms in [26,28], and our algorithm significantly outperforms them. From Figure 4, we find that for both (m,n)=(10,20) and (100,200), the average CPU time of the proposed algorithm is less than that of the other two algorithms, and the trend of the time change is not obvious as p increases.

    Table 6 shows the numerical results for the large-scale Problem 11, in which the number of linear functions p is much smaller than n. Fixing (p,m)=(2,10), the number of iterations of our algorithm decreases as the number of decision variables n increases, while the iteration counts of the algorithms in [26,28] first decrease and then increase with n, as reflected in Figure 5. Moreover, the algorithms in [26,28] consume more CPU time than ours. Furthermore, CPU time increases as the number of decision variables or linear functions grows; Figure 6 provides a clearer picture of this conclusion.

    We propose a new outer space branch-and-bound algorithm for solving LMP. First, the original problem is reduced to an equivalent problem by introducing p auxiliary variables. Then, the equivalent problem is further simplified by leveraging the properties of exponential and logarithmic functions, and the focus switches to resolving the second equivalent problem. Afterwards, the nonlinear constraints are linearly relaxed, and the objective function of the equivalent problem is linearly approximated to obtain the linear relaxation programming problem. Consequently, the proposed rectangular compression technique is embedded into the branch-and-bound framework, leading to the outer space branch-and-bound algorithm. Furthermore, we analyze the computational complexity to estimate the maximum number of iterations of the algorithm in the worst case. Finally, to verify the feasibility and efficiency of the algorithm, some deterministic and random problems are tested. The experimental results show that the algorithm exhibits good computational performance, particularly for large-scale random LMP in which the number of linear functions p is significantly smaller than the number of decision variables n.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the National Natural Science Foundation of China (11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (NXYLXK2017B09), and the Basic Discipline Research Projects Supported by Nanjing Securities (NJZQJCXK202201).

    The authors declare no conflicts of interest.


