Research article

An efficient spectral minimization of the Dai-Yuan method with application to image reconstruction

  • Received: 16 August 2023 Revised: 24 September 2023 Accepted: 13 October 2023 Published: 17 November 2023
  • MSC : 65K05, 90C06, 90C30, 90C47, 90C90

  • In this paper, a spectral Dai-Yuan conjugate gradient (CG) method is proposed for large-scale unconstrained optimization based on the generalized conjugacy condition, in which the spectral parameter is motivated by the attractive quadratic convergence properties of the Newton method. Utilizing the strong Wolfe line search to yield the step length, the search direction of the proposed spectral method is sufficiently descending and the method converges globally. Numerical results on standard unconstrained optimization test functions show the advantage of the method over some modified Dai-Yuan CG schemes in the literature. In addition, the method also produces reliable results when applied to solve an image reconstruction model.

    Citation: Nasiru Salihu, Poom Kumam, Ibrahim Mohammed Sulaiman, Thidaporn Seangwattana. An efficient spectral minimization of the Dai-Yuan method with application to image reconstruction[J]. AIMS Mathematics, 2023, 8(12): 30940-30962. doi: 10.3934/math.20231583




    Unconstrained optimization deals with finding the minimum (or maximum) of a continuously differentiable function f:\mathbb{R}^n \to \mathbb{R}. The general form of an unconstrained optimization problem is as follows:

    \min f(x), \quad x \in \mathbb{R}^n, (1.1)

    where the gradient is g(x) = \nabla f(x). Solving problem (1.1) generally requires an effective numerical method, possibly aided by a symbolic algebra system, that can perform the necessary computations, plot the numerical results and manipulate the mathematical expressions in analytical form. One of the most widely used methods for solving (1.1) is the nonlinear conjugate gradient (CG) method because of its robustness and ability to deal with large-scale problems [1,2]. The simplicity and efficiency of the CG algorithm have further motivated its application to numerous real-life problems, including the feedforward training of neural networks [3], robotic motion control [4], signal recovery [5], regression analysis [1,6], image deblurring [7,8] and portfolio selection [9], among others.

    Starting with an initial guess x_0 \in \mathbb{R}^n, the CG method generates a sequence of iterates \{x_k\} via the following formula:

    x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, (1.2)

    where \alpha_k represents the step size that is computed along the search direction d_k [10]. Usually, a line search procedure, either exact or inexact, is required to obtain the step size \alpha_k. While the exact line search yields the exact minimizer along the direction for a given problem, an inexact scheme computes \alpha_k such that it satisfies the Wolfe conditions at each iteration. The most commonly used Wolfe conditions include the standard Wolfe conditions, which require \alpha_k to satisfy

    f(x_k + \alpha_k d_k) \leq f(x_k) + \delta \alpha_k g_k^T d_k, (1.3)
    g(x_k + \alpha_k d_k)^T d_k \geq \sigma g_k^T d_k, (1.4)

    and the strong Wolfe conditions, which require that \alpha_k satisfies (1.3) and

    |g(x_k + \alpha_k d_k)^T d_k| \leq -\sigma g_k^T d_k, (1.5)

    where 0 < \delta < \frac{1}{2} and \delta < \sigma < 1.
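    For illustration only (this is not the authors' code), the following minimal Python sketch checks whether a trial step length satisfies conditions (1.3)-(1.5) for a given objective, gradient, iterate and search direction; all function and parameter names are assumptions.

```python
import numpy as np

def wolfe_checks(f, grad, x, d, alpha, delta=1e-3, sigma=0.9):
    """Check whether a trial step length alpha satisfies (1.3), (1.4) and (1.5).

    f, grad : callables returning the objective value and its gradient
    x, d    : current iterate and search direction (d should be a descent direction)
    """
    g0_d = grad(x) @ d                       # g_k^T d_k, negative for a descent direction
    g1_d = grad(x + alpha * d) @ d           # g(x_k + alpha_k d_k)^T d_k
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * g0_d   # (1.3)
    curvature = g1_d >= sigma * g0_d                            # (1.4)
    strong_curvature = abs(g1_d) <= -sigma * g0_d               # (1.5)
    return {"(1.3)": armijo, "(1.4)": curvature, "(1.5)": strong_curvature}

# Usage on a simple quadratic with the steepest descent direction
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x0 = np.array([1.0, -2.0])
print(wolfe_checks(f, grad, x0, -grad(x0), alpha=0.5))
```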

    Another important component of the CG method is the search direction d_k. This component plays an important role in the performance and convergence analysis of any CG method, and it distinguishes the different classes of CG algorithms, including the spectral and three-term CG methods. The search direction d_k for the classical CG method is computed as follows:

    d_0 = -g_0, \quad d_{k+1} = -g_{k+1} + \beta_k d_k, \quad k \geq 0, (1.6)

    where the coefficient \beta_k characterizes the different CG formulas. The selection of the CG parameter and the search direction is always crucial in the study of unconstrained optimization because these two components are responsible for the numerical performance and theoretical analysis of any CG method [11].

    For any optimization method to fulfill the general optimization criteria and be able to use the line search procedure, it is required to possess the following descent property:

    d_k^T g_k < 0 \quad \forall k. (1.7)

    Furthermore, a CG method is said to be sufficiently descending if there exists some constant c>0 such that the following condition holds:

    d_k^T g_k \leq -c\|g_k\|^2 \quad \forall k. (1.8)

    This condition (1.8) plays an important role in the convergence analysis of any nonlinear CG algorithm and is always preferred over (1.7). Some of the earliest and best-known CG coefficients that possess this property include those developed by Fletcher and Reeves (FR) [12], Dai and Yuan (DY) [13] and Fletcher (conjugate descent (CD)) [14], with formulas as follows:

    \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}, \quad \beta_k^{CD} = -\frac{\|g_{k+1}\|^2}{d_k^T g_k}, (1.9)

    where y_k = g_{k+1} - g_k and \|\cdot\| denotes the Euclidean (\ell_2) norm. This class of CG methods is characterized by simplicity and low memory requirements under different line search procedures. However, their numerical performance in practical computations is often affected by the jamming phenomenon. For instance, if any of the above CG algorithms generates a tiny step from x_k to x_{k+1} together with a poor search direction, and a restart along the negative gradient direction is not performed, then the subsequent step sizes and directions are also likely to perform poorly [15].
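    As a small illustration (not taken from the paper), the classical coefficients in (1.9) can be computed directly from the current gradients and search direction; the Python function below is a minimal sketch whose names are assumptions.

```python
import numpy as np

def classical_betas(g_k, g_k1, d_k):
    """Classical CG coefficients from (1.9): Fletcher-Reeves, Dai-Yuan and Conjugate Descent."""
    y_k = g_k1 - g_k                              # y_k = g_{k+1} - g_k
    beta_fr = (g_k1 @ g_k1) / (g_k @ g_k)         # FR
    beta_dy = (g_k1 @ g_k1) / (d_k @ y_k)         # DY
    beta_cd = -(g_k1 @ g_k1) / (d_k @ g_k)        # CD
    return beta_fr, beta_dy, beta_cd

# Usage with arbitrary vectors standing in for g_k, g_{k+1} and d_k
g_k = np.array([1.0, 2.0]); g_k1 = np.array([0.5, -1.0]); d_k = -g_k
print(classical_betas(g_k, g_k1, d_k))
```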

    Although several studies have shown that this category of CG algorithms possesses nice convergence properties (see [11,16,17]), its practical computational performance can be unsatisfactory, as reported above. To address this issue, several studies have constructed new CG formulas by either combining the methods in (1.9) with other efficient formulas or introducing new terms into the methods in (1.9) to improve their computational efficiency and general structure [18].

    In this study, we are interested in modifying the classical DY CG formula for solving optimization and image restoration problems. The proposed method introduces a new spectral parameter \theta_k to scale the search direction d_k in (1.6) such that it satisfies the Newton search direction and the well-known D–L [19] conjugacy condition in Section 2. Section 3 demonstrates that the proposed method satisfies the descent condition and further proves its global convergence under suitable conditions. The preliminary computational results on a set of optimization functions are analyzed in Section 4, while Section 5 reports results related to a real-world application problem. The last section summarizes the study and presents a general conclusion.

    Spectral CG methods have been proposed to improve the general structure of CG methods. They are based on the works of Raydan [20], Barzilai and Borwein [21] and Birgin and Martínez [22]. For some recent results on spectral CG methods, see [8,24,25,26,27,28,29,30,31]. Similarly, the standard conjugacy condition given by

    y_k^T d_{k+1} = 0 (2.1)

    is crucial in the convergence analysis of these CG methods. The CG methods proposed with the structure (2.1) largely depend on exact line search criteria for the choice of step size \alpha_k. However, this requirement is computationally expensive for large-scale problems. Therefore, denoting s_k = x_{k+1} - x_k, the generalized form of (2.1), called the D–L conjugacy condition introduced in [19],

    y_k^T d_{k+1} = -t g_{k+1}^T s_k, \quad t > 0, (2.2)

    is usually preferred in the design of efficient CG algorithms. The parameter t, called the D–L parameter, is capable of ensuring better convergence; therefore, it must be chosen carefully.

    To address some weaknesses associated with the DY method, several modifications have been investigated recently. For instance, utilizing the strong Wolfe line search, Jiang and Jian [32] proposed an improved DY [13] CG parameter with the following structure:

    \beta_k^{IDY} = r_k \frac{\|g_{k+1}\|^2}{d_k^T y_k}, (2.3)

    where r_k = \frac{|g_{k+1}^T d_k|}{-g_k^T d_k}. Motivated by this idea, Jian et al. [33] introduced a spectral parameter that yields a descent direction with

    \theta_k^{JYJLL} = 1 + \frac{|g_{k+1}^T d_k|}{-g_k^T d_k}. (2.4)

    Since \theta_k^{JYJLL} \geq 1, it is obvious that d_k is a descent direction. Following the idea in [34], they proposed the corresponding conjugate parameter as follows:

    \beta_k^{JYJLL} = \frac{\|g_{k+1}\|^2 - \frac{(g_{k+1}^T d_k)^2}{\|d_k\|^2}}{\max\{\|g_k\|^2, d_k^T y_k\}}. (2.5)

    However, the search direction associated with the method in (2.5) only satisfies the descent condition (1.7), and its global convergence is established by treating the two cases of \max\{\|g_k\|^2, d_k^T y_k\} separately under the standard Wolfe criteria. Following this formulation, two other modified DY CG methods were proposed in [35] with the following forms:

    \beta_k^{(1)} = \frac{\|g_{k+1}\|^2 - \frac{(g_{k+1}^T g_k)^2}{\|g_k\|^2}}{d_k^T y_k}, (2.6)
    \beta_k^{(2)} = \frac{\|g_{k+1}\|^2 - \frac{\|g_{k+1}\|}{\|g_k\|} g_{k+1}^T g_k}{\mu\, d_k^T y_k}. (2.7)

    Inspired by these modifications, Zhu et al. [36] suggested another DY modification, namely the DDY1 scheme, which has the form

    \beta_k^{DDY1} = \begin{cases} \dfrac{\|g_{k+1}\|^2 - \mu_1 \frac{(g_{k+1}^T d_k)^2 \|g_{k+1}\|\|g_k\|}{\|d_k\|^2\, g_{k+1}^T g_k}}{d_k^T y_k}, & g_{k+1}^T g_k \geq 0, \\ \dfrac{\|g_{k+1}\|^2 + \mu_1 \frac{(g_{k+1}^T d_k)^2 \|g_{k+1}\|\|g_k\|}{\|d_k\|^2\, g_{k+1}^T g_k}}{d_k^T y_k}, & g_{k+1}^T g_k < 0. \end{cases} (2.8)

    Nonetheless, this method has theoretical characteristics similar to the scheme described by (2.5). Thus, in order to improve the general structure and establish the sufficient descent condition (1.8) for the DY method, we scale the search direction d_k in (1.6) with a spectral parameter \theta_k such that it satisfies the well-known D–L conjugacy condition (2.2), as well as the Newton search direction, in the following form:

    d_{k+1} = -\theta_k g_{k+1} + \beta_k d_k, \quad k \geq 0. (2.9)

    Pre-multiplying (2.9) by y_k^T, we obtain

    y_k^T d_{k+1} = -\theta_k y_k^T g_{k+1} + \beta_k^{DY} y_k^T d_k.

    Using the above relation with (2.2) gives

    -t s_k^T g_{k+1} = -\theta_k y_k^T g_{k+1} + \beta_k^{DY} y_k^T d_k.

    Rearranging implies that

    \theta_k^{SDL} = \frac{t s_k^T g_{k+1}}{y_k^T g_{k+1}} + \frac{\beta_k^{DY} y_k^T d_k}{y_k^T g_{k+1}}. (2.10)
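    As a quick numerical sanity check (an illustration, not part of the paper), one can verify that a direction of the form (2.9), with the spectral parameter from (2.10) and \beta_k^{DY} from (1.9), reproduces the D–L condition (2.2) for generic vectors; all variable names below are assumptions.

```python
import numpy as np

# Illustration only: a direction of the form (2.9) with theta from (2.10)
# satisfies the D-L conjugacy condition (2.2) for generic data.
rng = np.random.default_rng(1)
g1, d, y, s = rng.standard_normal((4, 5))      # stand-ins for g_{k+1}, d_k, y_k, s_k
t = 1.2
beta_dy = (g1 @ g1) / (d @ y)                                             # DY coefficient, (1.9)
theta_sdl = (t * (s @ g1)) / (y @ g1) + (beta_dy * (y @ d)) / (y @ g1)    # (2.10)
d_next = -theta_sdl * g1 + beta_dy * d                                    # (2.9)
print(np.isclose(y @ d_next, -t * (s @ g1)))                              # True, i.e. (2.2) holds
```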

    Similarly, if the current point x_{k+1} is close enough to a local minimizer, so that the objective function behaves like a quadratic function in some neighborhood of the minimizer, then the optimal search direction to follow is the Newton direction:

    d_{k+1} = -\nabla^2 f(x_{k+1})^{-1} g_{k+1}, (2.11)

    where \nabla^2 f(x_{k+1}) is the Hessian matrix of f. Thus, the Newton method requires the Hessian matrix, i.e., second-derivative information, to update (2.11), which provides a nice convergence rate. Motivated by its quadratic convergence property, we assume that \nabla^2 f(x_{k+1}) exists at every iteration and satisfies a suitable secant equation; for instance,

    \nabla^2 f(x_{k+1}) s_k = y_k. (2.12)

    Now, equating (2.9) with (2.11) gives

    -\nabla^2 f(x_{k+1})^{-1} g_{k+1} = -\theta_k g_{k+1} + \beta_k^{DY} d_k.

    Pre-multiplying by s_k^T \nabla^2 f(x_{k+1}) and using the secant equation (2.12), we get

    -\nabla^2 f(x_{k+1})^{-1} g_{k+1} = -\theta_k g_{k+1} + \beta_k^{DY} d_k \;\Longrightarrow\; -s_k^T g_{k+1} = -\theta_k s_k^T \nabla^2 f(x_{k+1}) g_{k+1} + \beta_k^{DY} s_k^T \nabla^2 f(x_{k+1}) d_k \;\Longrightarrow\; -s_k^T g_{k+1} = -\theta_k y_k^T g_{k+1} + \beta_k^{DY} y_k^T d_k.

    After some simplification, we obtain

    \theta_k^{SNM} = \frac{s_k^T g_{k+1}}{y_k^T g_{k+1}} + \frac{\beta_k^{DY} y_k^T d_k}{y_k^T g_{k+1}}. (2.13)

    Remark 2.1. Observe that if t = 1, then \theta_k^{SDL} = \theta_k^{SNM}, which implies that the spectral search direction \{d_{k+1}\} not only satisfies the generalized D–L conjugacy condition but also takes advantage of the nice convergence property of the Newton direction. However, for the sufficient descent condition to hold, we always select t > 1 in this paper.

    To achieve sufficient descent of the search direction described by (2.9), we suggest the following modification of (2.10):

    \theta_k^{SNM1} = \frac{t s_k^T g_{k+1}}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}} + \frac{\beta_k^{DY} y_k^T d_k}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}}. (2.14)

    Thus, if |y_k^T g_{k+1}| > d_k^T y_k, then (2.14) reduces to (2.10). Hence, for d_k^T y_k > |y_k^T g_{k+1}|, we obtain another spectral parameter:

    \theta_k^{SNM2} = \frac{t s_k^T g_{k+1}}{d_k^T y_k} + \frac{\beta_k^{DY} y_k^T d_k}{d_k^T y_k}. (2.15)

    Combining (2.10) and (2.15) implies that the spectral parameter \theta_k can be computed as follows:

    \theta_k^{NSDY} = \begin{cases} \theta_k^{SDL}, & \text{if } |y_k^T g_{k+1}| > d_k^T y_k, \\ \theta_k^{SNM2}, & \text{otherwise}. \end{cases} (2.16)

    Now, we describe the new spectral DY (NSDY) algorithm as follows:

    Algorithm 1: NSDY
    Step 1: Input x_0 \in \mathbb{R}^n and the parameters 0 < \delta < \sigma < 1. Set d_0 = -g_0, \alpha_0 = 1 and k = 0.
    Step 2: Verify convergence: if \|g_k\| \leq \epsilon, then stop. Otherwise, proceed to the next step.
    Step 3: Select \alpha_k > 0 such that (1.3) and (1.5) are satisfied.
    Step 4: Compute \theta_k^{NSDY} using (2.16).
    Step 5: Compute \beta_k^{DY} from (1.9) and d_{k+1} using (2.9).
    Step 6: Update the next iterate via (1.2), set k = k + 1 and return to Step 2.
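    The following Python sketch illustrates one possible implementation of Algorithm 1 under stated assumptions: SciPy's generic line_search routine stands in for the strong Wolfe procedure of Step 3, the fallback step and the test problem are arbitrary choices, and no numerical safeguards of the authors' actual code are claimed. The parameter values t = 1.2, \delta = 10^{-3} and \sigma = 0.9 follow the settings reported in the experiments.

```python
import numpy as np
from scipy.optimize import line_search

def nsdy(f, grad, x0, t=1.2, delta=1e-3, sigma=0.9, eps=1e-6, max_iter=5000):
    """A minimal sketch of the NSDY iteration (Algorithm 1)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                               # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                     # Step 2: convergence test
            break
        # Step 3: Wolfe-type step length via SciPy (stand-in for the strong Wolfe search)
        alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
        if alpha is None:
            alpha = 1e-4                                 # crude fallback if the search fails
        s = alpha * d                                    # s_k = alpha_k d_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                                    # y_k = g_{k+1} - g_k
        beta_dy = (g_new @ g_new) / (d @ y)              # Step 5: DY coefficient, (1.9)
        # Step 4: spectral parameter theta_k^{NSDY} from (2.16)
        if abs(y @ g_new) > d @ y:
            theta = (t * (s @ g_new)) / (y @ g_new) + (beta_dy * (y @ d)) / (y @ g_new)  # (2.10)
        else:
            theta = (t * (s @ g_new)) / (d @ y) + (beta_dy * (y @ d)) / (d @ y)          # (2.15)
        d = -theta * g_new + beta_dy * d                 # (2.9)
        x, g = x_new, g_new
    return x

# Usage on a simple convex quadratic (an assumption, not one of the paper's test problems)
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(nsdy(f, grad, np.array([1.0, 1.0, 1.0])))
```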

    Theorem 2.2. Let \theta_k be given by (2.16) with 0 < \alpha_k \leq \sigma < 1. Suppose that the NSDY algorithm generates the sequences \{g_k\} and \{d_k\}, where \alpha_k is determined by the strong Wolfe rules (1.3)–(1.5); then, there exists a constant \rho > 0 for which the condition

    d_{k+1}^T g_{k+1} \leq -\rho \|g_{k+1}\|^2, \quad \forall k \geq 0, (2.17)

    holds.

    The criterion (2.17) is called the sufficient descent condition, and it ensures that the direction described by (2.9) is indeed a suitable descent direction toward the minimizer.

    Proof. The proof proceeds by mathematical induction. Initially, for k = 0 it follows easily that g_0^T d_0 = -\|g_0\|^2 \leq -\rho\|g_0\|^2. Suppose that (2.17) holds for index k, that is, d_k^T g_k \leq -\rho\|g_k\|^2. We now show that it holds for k + 1. According to the strong Wolfe criterion (1.5), it holds that

    |g_{k+1}^T d_k| \leq -\sigma g_k^T d_k,

    i.e.,

    d_k^T y_k = d_k^T g_{k+1} - d_k^T g_k \geq -(1-\sigma)\, d_k^T g_k > 0. (2.18)

    Then

    \frac{|g_{k+1}^T d_k|}{|d_k^T y_k|} \leq \frac{\sigma}{1-\sigma}. (2.19)

    Pre-multiplying (2.9) by g_{k+1}^T, we have

    g_{k+1}^T d_{k+1} = -\theta_k \|g_{k+1}\|^2 + \beta_k^{DY} g_{k+1}^T d_k. (2.20)

    Using (2.14) with s_k = \alpha_k d_k gives

    \theta_k^{SNM1} = \frac{t s_k^T g_{k+1}}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}} + \frac{\beta_k^{DY} y_k^T d_k}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}} \geq \frac{t s_k^T g_{k+1}}{|d_k^T y_k|} + \frac{\beta_k^{DY} y_k^T d_k}{|d_k^T y_k|} \geq -\frac{t \alpha_k |d_k^T g_{k+1}|}{|d_k^T y_k|} + \frac{\beta_k^{DY} y_k^T d_k}{|d_k^T y_k|}. (2.21)

    Obtaining \beta_k^{DY} from (1.9) and applying it with (2.21) and (2.20), we get

    g_{k+1}^T d_{k+1} \leq \left(\frac{t\alpha_k |d_k^T g_{k+1}|}{|d_k^T y_k|} - \frac{\beta_k^{DY} y_k^T d_k}{|d_k^T y_k|}\right)\|g_{k+1}\|^2 + \beta_k^{DY} g_{k+1}^T d_k = \left(\frac{t\alpha_k |g_{k+1}^T d_k|}{|d_k^T y_k|} - \frac{\|g_{k+1}\|^2}{|d_k^T y_k|}\right)\|g_{k+1}\|^2 + \frac{\|g_{k+1}\|^2}{d_k^T y_k}\, g_{k+1}^T d_k \leq \left(\frac{t\alpha_k |g_{k+1}^T d_k|}{|d_k^T y_k|} - \frac{\|g_{k+1}\|^2}{|d_k^T y_k|}\right)\|g_{k+1}\|^2 + \frac{|g_{k+1}^T d_k|}{|d_k^T y_k|}\|g_{k+1}\|^2 \leq \left(\frac{\sigma t\alpha_k}{1-\sigma} - \frac{\|g_{k+1}\|^2}{|d_k^T y_k|}\right)\|g_{k+1}\|^2 + \frac{\sigma}{1-\sigma}\|g_{k+1}\|^2, (2.22)

    where the second-to-last and last inequalities follow from (2.19). Now, from (2.18), we get

    \frac{1}{d_k^T y_k} \leq \frac{1}{-(1-\sigma)\, d_k^T g_k}.

    Combining this with (2.17), we get

    \frac{1}{d_k^T y_k} \leq \frac{1}{(1-\sigma)\rho\|g_k\|^2}. (2.23)

    The last inequality follows from the induction hypothesis. Finally, using (2.22) and (2.23), we conclude that

    g_{k+1}^T d_{k+1} \leq -\left(\frac{\sigma t \alpha_k}{1-\sigma} + \frac{\|g_{k+1}\|^2}{(1-\sigma)\rho\|g_k\|^2} - \frac{\sigma}{1-\sigma}\right)\|g_{k+1}\|^2 \leq -\left(\frac{\sigma t \alpha_k}{1-\sigma} - \frac{\sigma}{1-\sigma}\right)\|g_{k+1}\|^2.

    Since \frac{\|g_{k+1}\|^2}{(1-\sigma)\rho\|g_k\|^2} > 0, we have

    g_{k+1}^T d_{k+1} \leq -\left(\frac{\sigma t \alpha_k}{1-\sigma} - \frac{\sigma}{1-\sigma}\right)\|g_{k+1}\|^2.

    Denoting \rho = \frac{\sigma t \alpha_k}{1-\sigma} - \frac{\sigma}{1-\sigma}, with t \geq 1.2, \alpha_k \in (0, 0.9] and \sigma \in (0, 0.9], the required result is achieved.

    The convergence analysis requires the following assumptions.

    Assumption 3.1.

    1. Given an initial point x_0, the function f(x) is bounded below on the level set \eta = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}.

    2. Denote by \Gamma some neighborhood of \eta; the function f is smooth and its gradient is Lipschitz continuous, that is,

    \|g(x) - g(y)\| \leq L\|x - y\|, \quad \forall x, y \in \Gamma, \; L > 0. (3.1)

    Note that these assumptions imply that

    \|g(x)\| \leq \gamma, \quad \forall x \in \eta, \; \gamma > 0, (3.2)
    \|x - y\| \leq b, \quad \forall x, y \in \eta, \; b > 0. (3.3)

    The following lemma was taken from [17] and is very crucial in our analysis.

    Lemma 3.2. Suppose that \{x_k\} and \{d_k\} are sequences generated by the NSDY algorithm, where the spectral search direction d_k is a descent direction and \alpha_k satisfies the strong Wolfe conditions; then,

    \sum_{k \geq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. (3.4)

    Theorem 3.3. If Assumption 3.1 holds and the sequence of iterates \{x_k\} is produced by the NSDY algorithm, then

    \liminf_{k \to \infty} \|g_k\| = 0. (3.5)

    Proof. If (3.5) does not hold, then there exists some constant r>0 so that

    \|g_k\| > r, \quad \forall k \geq 0. (3.6)

    Claim: The search direction defined by (2.9) is bounded, i.e., there exists a constant P>0 such that

    \|d_{k+1}\| \leq P, \quad \forall k \geq 0. (3.7)

    To prove this claim by induction, we first show that |\beta_k^{DY}| and |\theta_k| are bounded. From (2.18), we have

    \frac{1}{d_k^T y_k} \leq \frac{1}{-(1-\sigma)\, d_k^T g_k}.

    Combining this with (2.17), we get

    \frac{1}{d_k^T y_k} \leq \frac{1}{(1-\sigma)\rho\|g_k\|^2}. (3.8)

    Therefore, obtaining \beta_k^{DY} from (1.9) and applying it with (3.2), (3.6) and (3.8), we have

    |\beta_k^{DY}| \leq \frac{\|g_{k+1}\|^2}{|d_k^T y_k|} \leq \frac{\gamma^2}{\rho(1-\sigma)r^2} = E. (3.9)

    Next, from (2.14), (3.2), (3.8) and (3.9), we obtain

    |\theta_k| = \left|\frac{t s_k^T g_{k+1}}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}} + \frac{\beta_k^{DY} y_k^T d_k}{\max\{|y_k^T g_{k+1}|, d_k^T y_k\}}\right| \leq \left|\frac{t s_k^T g_{k+1}}{d_k^T y_k} + \beta_k^{DY}\right| \leq \frac{t\|s_k\|\|g_{k+1}\|}{|d_k^T y_k|} + E \leq \frac{t b \gamma}{\rho(1-\sigma)r^2} + E = M. (3.10)

    Now, for k = 0, we get d_1 = -\theta_0 g_1 + \beta_0 d_0 from (2.9), which implies that d_1 = -\theta_0 g_1 - \beta_0 g_0, since d_0 = -g_0; this yields

    \|d_1\| \leq |\theta_0|\|g_1\| + |\beta_0|\|g_0\| \leq M\gamma + E\gamma \leq P,

    that is, the claim (3.7) holds for k = 0. Next, we assume that the claim (3.7) is true for k, that is, \|d_k\| \leq P. To show that it is true for k + 1, consider the search direction described by (2.9):

    d_{k+1} = -\theta_k g_{k+1} + \beta_k d_k.

    Thus, using (3.2), (3.9) and (3.10), we obtain

    \|d_{k+1}\| \leq |\theta_k|\|g_{k+1}\| + |\beta_k|\|d_k\| \leq M\gamma + EP \leq P;

    therefore, the claim also holds for k+1. Now since (3.7) holds for all values of k, then we have

    \frac{1}{\|d_k\|} \geq \frac{1}{P}, \quad P > 0. (3.11)

    From the above inequality, it follows that

    \sum_{k=0}^{\infty} \frac{1}{\|d_k\|^2} = +\infty. (3.12)

    Accordingly, from (2.17), (3.4) and (3.6), it follows that

    \rho^2 r^4 \sum_{k \geq 0} \frac{1}{\|d_k\|^2} \leq \sum_{k \geq 0} \frac{\rho^2 \|g_k\|^4}{\|d_k\|^2} \leq \sum_{k \geq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. (3.13)

    Therefore, this implies that

    \sum_{k=0}^{\infty} \frac{1}{\|d_k\|^2} < +\infty. (3.14)

    It is obvious that, (3.12) and (3.14) cannot hold concurrently. Thus, (3.5) must hold.

    In this part, we present a numerical comparison of the NSDY method versus the classical DY method and three other modifications of the DY method. The evaluations are based on a number of test functions taken from [37,38]. For the computation, we consider dimensions ranging from 2 to 100,000, as presented in Tables 1 and 2. To analyze the results further, the following specifications are considered for the algorithms:

    Table 1.  List of test problems part 1/2.
    NO Problem Dim NO Problem Dim
    1 EXTENDED PENALTY 100 43 DIAGONAL 1 100
    2 EXTENDED PENALTY 10000 44 HAGER 500
    3 EXTENDED PENALTY 20000 45 HAGER 300
    4 EXTENDED MARATOS 2 46 HAGER 5000
    5 DIAGONAL 5 10 47 HAGER 1000
    6 DIAGONAL 5 50000 48 ZIRILLI 2
    7 DIAGONAL 5 100000 49 RAYDAN 1 100
    8 TRECANNI 2 50 RAYDAN 1 10
    9 TRECANNI 2 51 RAYDAN 1 10
    10 QUADRATIC PENALTY 1 4 52 RAYDAN 1 50
    11 QUADRATIC PENALTY 1 100 53 RAYDAN 2 10000
    12 QUADRATIC PENALTY 1 1000 54 RAYDAN 2 50000
    13 QUADRATIC PENALTY 2 500 55 RAYDAN 2 100000
    14 QUADRATIC PENALTY 2 100 56 FLETCHCER 10
    15 QUADRATIC PENALTY 2 10 57 FLETCHCER 1000
    16 QUADRATIC FUNCTION 1 100 58 FLETCHCER 50000
    17 QUADRATIC FUNCTION 1 10 59 DIAGONAL 3 2
    18 QUADRATIC FUNCTION 1 10 60 DIAGONAL 3 10
    19 QUADRATIC FUNCTION 2 50 61 EXTENDED DENSCHN B 100
    20 QUADRATIC FUNCTION 2 1000 62 EXTENDED DENSCHN B 5000
    21 QUADRATIC FUNCTION 2 5000 63 EXTENDED DENSCHN B 10000
    22 POWER 2 64 DIAGONAL 6 10000
    23 POWER 2 65 DIAGONAL 6 5000
    24 ZETTL 2 66 DIAGONAL 6 10000
    25 DIAGONAL 2 1000 67 DIAGONAL 6 50000
    26 DIAGONAL 2 10000 68 DIAGONAL 4 1000
    27 DIAGONAL 2 50000 69 DIAGONAL 4 10000
    28 TEST 3 70 DIAGONAL 4 100000
    29 TEST 3 71 DIAGONAL 7 10
    30 SUM OF SQUARES 100 72 DIAGONAL 7 100
    31 SUM OF SQUARES 2000 73 DIAGONAL 7 100
    32 SUM OF SQUARES 5000 74 DIAGONAL 8 100
    33 SHALLOW 1000 75 DIAGONAL 8 500
    34 QUARTIC 100 76 DIAGONAL 9 10
    35 QUARTIC 1000 77 DIAGONAL 9 100
    36 QUARTIC 5000 78 DENSCHN A 3000
    37 QUARTIC 10000 79 DENSCHN A 15000
    38 MATYAS 2 80 DENSCHN C 1000
    39 MATYAS 2 81 DENSCHN C 10000
    40 DIAGONAL 1 10 82 EXTENDED BLOCK DIAGONAL 1 10
    41 DIAGONAL 1 100 83 EXTENDED BLOCK DIAGONAL 1 100
    42 DIAGONAL 1 10 84 EXTENDED BLOCK DIAGONAL 1 1000

    Table 2.  List of test problems part 2/2.
    NO Problem Dim
    85 HIMMELBLAU 1000
    86 HIMMELBLAU 10000
    87 HIMMELBLAU 50000
    88 HIMMELBLAU 10000
    89 DQDRTIC 1000
    90 DQDRTIC 10000
    91 DQDRTIC 100
    92 DQDRTIC 10
    93 QUARTICM 1000
    94 QUARTICM 10000
    95 LINEAR PERTURBED 5000
    96 LINEAR PERTURBED 10000
    97 LINEAR PERTURBED 20000
    98 TWH 2
    99 ENGVAL1 2
    100 ENGVAL1 2
    101 ENGVAL8 4
    102 ENGVAL8 2
    103 DENSCHN F 1000
    104 DENSCHN F 10000
    105 DENSCHN F 50000
    106 ARWHEAD 10
    107 ARWHEAD 100
    108 ARWHEAD 500
    109 SIX HUMP 2
    110 PRICE4 2
    111 PRICE4 2
    112 ZIRILLI 2
    113 ZIRILLI 2
    114 EXTENDED HIMMELBLAU 500
    115 EXTENDED HIMMELBLAU 1000
    116 EXTENDED HIMMELBLAU 2000
    117 ROTATED ELLIPSE 2
    118 ROTATED ELLIPSE 2
    119 EL-ATTAR-VIDYASAGAR-DUTTA 2
    120 EL-ATTAR-VIDYASAGAR-DUTTA 2
    121 EXTENDED HIEBERT 2
    122 EXTENDED TRIDIAGONAL 1 100
    123 EXTENDED TRIDIAGONAL 1 500
    124 THUMP 2
    125 THUMP 2


    ● JYJLL algorithm of Jian et al. [33] that uses (2.4) and (2.5) for the search direction described by (2.9) and δ=0.01 and σ=0.1 in (1.3) and (1.4) respectively.

    ● IDY algorithm of Jiang and Jian [32] that uses (2.3) for the search direction described by (1.6) and δ=0.01 and σ=0.1 in (1.3) and (1.4) respectively.

    ● DDY1 algorithm of Zhu et al. [36] that uses (2.8) for the search direction described by (1.6) with μ1=0.4 and δ=0.01 and σ=0.1 in (1.3) and (1.4) respectively.

    ● DY algorithm of Dai and Yuan [13] that uses \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k} for the search direction described by (1.6) and \delta = 0.01 and \sigma = 0.1 in (1.3) and (1.4) respectively.

    The MATLAB R2022a codes for the numerical experiment were run on a Dell Core i7 laptop with 16 GB of RAM and a 2.90 GHz CPU. We set \delta = 10^{-3} and \sigma = 0.9 in (1.3) and (1.5) for the NSDY method, and used \|g_k\| \leq 10^{-6} as the stopping condition for all schemes. We also use the symbol to indicate a situation in which the stopping condition is not met. The results from the computational experiments, based on the number of iterations (NI), number of function evaluations (FE) and CPU time (ET), are presented at the following link https://acrobat.adobe.com/link/review?uri = urn:aaid:scds:US:ed485b92-05e2-40f3-a1f2-6e159858c515.

    We employ the performance profiling technique to interpret and discuss the performance of the methods examined. Let P be the collection of n_p test problems and S be the set of n_s solvers used in the comparison. The performance profile constitutes a performance metric for a problem p \in P and a solver s \in S, where the metric denotes either NI, FE or ET relative to the other solvers in S on the set of problems P. Dolan and Moré [39] described the performance ratio used to compare and evaluate the performance:

    r_{p, s} = \frac{f_{p, s}}{\min \left\{ f_{p, s} : s \in S \right\}}.

    Thus, the best method is indicated by the top curve in the performance profile plot. The experiments can therefore be interpreted graphically by using Figures 1–3 based on numerical performance.
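    As an illustration of how such profiles can be produced (a minimal sketch, not the authors' code), the Dolan–Moré ratios r_{p,s} and the corresponding profile curves can be computed from a matrix of NI, FE or ET values as follows; the solver names and data in the usage example are made up.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, solver_names, tau_max=10.0):
    """Dolan-More performance profile from a (problems x solvers) cost matrix.

    costs : NI, FE or ET values; use np.inf where a solver failed on a problem.
    Plots, for each solver, the fraction of problems solved within a factor tau of the best.
    """
    costs = np.asarray(costs, dtype=float)
    ratios = costs / costs.min(axis=1, keepdims=True)      # r_{p,s}
    taus = np.linspace(1.0, tau_max, 200)
    for j, name in enumerate(solver_names):
        rho = [(ratios[:, j] <= tau).mean() for tau in taus]
        plt.plot(taus, rho, label=name)
    plt.xlabel(r"$\tau$"); plt.ylabel(r"$\rho_s(\tau)$"); plt.legend()
    plt.show()

# Usage with made-up iteration counts for three hypothetical solvers
demo = np.array([[10, 12, 30], [25, 20, np.inf], [7, 9, 8], [40, 55, 41]])
performance_profile(demo, ["Solver A", "Solver B", "Solver C"])
```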

    Figure 1.  NI performance profile for the methods.
    Figure 2.  FE performance profile for the methods.
    Figure 3.  ET performance profile for the methods.

    The interpretation of Figure 1 for values of \tau chosen within the interval 0 < \tau < 0.5 shows that the NSDY method solved the test problems and won in 98\%/38\% of them to be the best, followed by the JYJLL method with 95\%/45\%, whereas the DY, IDY and DDY1 methods solved and won in 96\%/25\%, 90\%/37\% and 55\%/20\% of the given problems, respectively. Accordingly, if we increase \tau to the interval \tau \geq 1, the NSDY method is the best with 68\% accuracy, whereas the JYJLL, IDY, DY and DDY1 methods have 66\%, 60\%, 55\% and 40\% accuracy, respectively. Similarly, the comparison among the five schemes based on Figure 2 shows that the NSDY, DY, JYJLL, IDY and DDY1 methods solved and won in 98\%/12\%, 96\%/28\%, 94\%/54\%, 90\%/30\% and 57\%/14\% of the problems, respectively, for \tau chosen within 0 < \tau < 0.5. However, increasing \tau to the interval \tau \geq 6 reveals that the NSDY method wins by solving 97\% of the test problems, compared to the JYJLL, IDY, DY and DDY1 methods with 90\%, 88\%, 96\% and 52\% of problems solved to the best, respectively. Finally, Figure 3 also shows that, for \tau chosen within 0 < \tau < 0.5, the NSDY, DY, JYJLL, IDY and DDY1 methods solved and won in 98\%/33\%, 96\%/08\%, 94\%/38\%, 90\%/22\% and 58\%/05\% of the problems, respectively. Alternatively, taking \tau in the interval \tau \geq 6 reveals that the NSDY method wins by solving 97\% of the test problems, compared to the JYJLL, IDY, DY and DDY1 methods with 93\%, 88\%, 92\% and 52\% of problems solved to the best, respectively. Therefore, the interpretations of Figures 1–3 indicate that the NSDY method is preferable to the other CG methods.

    The CG method is among the most efficient minimization algorithms used for the fast optimization of both linear and nonlinear systems because of its rapid solution of large-scale problems. Recently, the CG iterative scheme has been widely considered for solving real-world application problems because it requires fewer iterations and less computational resources to optimize a given problem. Some of the most relevant problems solved by the CG method include the feedforward training of neural networks [3], the motion control of robotic manipulators [8], regression analysis [1,6] and the restoration of corrupted images [8,9].

    Image restoration belongs to the family of inverse problems concerned with recovering high-quality images from images that have been corrupted during the data acquisition process. Choosing an appropriate algorithm that is capable of restoring these corrupted images is crucial because knowledge of the degradation system has to be considered in order to achieve better-quality images. In this study, we consider restoring images of a building (512 × 512) and a camera (512 × 512). These images were corrupted by salt-and-pepper impulse noise during the data acquisition or storage process. For the purpose of this study, the following metrics have been employed to measure the quality of the restored images and evaluate the performance of the algorithms. These metrics include the following:

    ● peak signal-to-noise ratio (PSNR),

    ● relative error (RelErr),

    ● CPU time.

    The PSNR has been widely used to measure the perceptual quality of restored images because it calculates the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. This metric is crucial when evaluating the quality of both corrupted and restored images. It is important to note that a method with a higher PSNR will produce images with better quality [40]. The PSNR can be obtained as follows:

    \begin{align} PSNR & = 10 \cdot \log_{10} \left( \frac{MAX_{I}^{2}}{MSE} \right) \\ & = 20 \cdot \log_{10} \left( \frac{MAX_{I}}{\sqrt{MSE}} \right) \\ & = 20 \cdot \log_{10} \left( MAX_{I} \right) - 10 \cdot \log_{10} \left( MSE \right) \end{align} (5.1)

    where MSE measures the average squared pixel difference between the two complete images and MAX_I represents the maximum possible pixel value of the images. Also, a higher MSE value indicates a greater difference between the original and corrupted images. The MSE value can be obtained via:

    \begin{equation} MSE = \frac{1}{N} \sum\limits_{i} \sum\limits_{j} \left( E_{i, j} - o_{i, j} \right)^2, \end{equation} (5.2)

    where N denotes the image size, E is the edge image and o represents the original image.
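    For illustration (not the authors' implementation), the PSNR of a corrupted or restored image can be computed directly from (5.1) and (5.2) as follows; the synthetic 8-bit test image and the 30% noise density in the usage example are assumptions.

```python
import numpy as np

def psnr(restored, original, max_val=255.0):
    """PSNR in dB from (5.1)-(5.2); the two images are assumed to have the same shape."""
    mse = np.mean((restored.astype(float) - original.astype(float)) ** 2)
    if mse == 0:
        return np.inf                        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Usage on a synthetic 8-bit image corrupted by salt-and-pepper noise
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))
noisy = original.copy()
mask = rng.random(original.shape) < 0.3      # 30% noise density
noisy[mask] = rng.choice([0, 255], size=mask.sum())
print(f"PSNR of the noisy image: {psnr(noisy, original):.2f} dB")
```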

    Consider the following image restoration problem:

    \mathcal{H}(u) = \sum\limits_{(i, j)\in G}\left\{\sum\limits_{(m, n)\in T_{i, j}\setminus G}\phi_{\alpha}(u_{i, j}-\xi_{m, n})+\frac{1}{2}\sum\limits_{(m, n)\in T_{i, j}\cap G}\phi_{\alpha}(u_{i, j}-u_{m, n})\right\},

    where the edge-preserving potential function is \phi_{\alpha}(t) = \sqrt{t^2+\alpha} for some constant \alpha, whose value is set to 1.

    The above problem is transformed into the following optimization problem [41]

    \min \mathcal{H}(u),

    where u is the M \times N pixel image to be restored. Also, G denotes the following index set of noise candidates:

    \begin{equation} G = \{(i, j)\in Q| \bar{\xi} _{ij}\ne \xi _{ij}, \; \xi_{ij} = s_{\min}\; \text{ or }\;s_{\max}\}, \end{equation} (5.3)

    where (i, j) \in Q = \{1, 2, \ldots, M\}\times \{1, 2, \ldots, N\}, and the neighborhood of (i, j) is defined as

    \begin{equation} T_{i, j} = \{(i, j-1), (i, j+1), (i-1, j), (i+1, j)\}. \end{equation} (5.4)

    In (5.3), s_{\max} and s_{\min} are the maximum and minimum values of a noisy pixel. Also, \xi is the observed noise-corrupted image, and \bar{\xi} denotes the output of its adaptive median filter.
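    The following Python sketch (an illustration under the stated assumptions, not the authors' code) evaluates \mathcal{H}(u) over the noise-candidate set G, treating only the pixels in G as free variables; all names are assumptions, out-of-range neighbors are simply skipped, and the resulting function could then be minimized with any unconstrained CG routine such as the NSDY sketch given earlier.

```python
import numpy as np

def phi(t, alpha=1.0):
    """Edge-preserving potential phi_alpha(t) = sqrt(t^2 + alpha)."""
    return np.sqrt(t * t + alpha)

def restoration_objective(u_vals, xi, noise_mask, alpha=1.0):
    """Evaluate H(u) over the noise-candidate set G (a minimal sketch).

    u_vals     : current values of the noisy pixels, ordered as given by noise_mask
    xi         : observed noisy image (2D array)
    noise_mask : boolean 2D array marking the set G of noise candidates
    """
    u = xi.astype(float)
    u[noise_mask] = u_vals                       # only pixels in G are free variables
    M, N = xi.shape
    total = 0.0
    for i, j in np.argwhere(noise_mask):
        for m, n in ((i, j - 1), (i, j + 1), (i - 1, j), (i + 1, j)):   # T_{i,j}
            if 0 <= m < M and 0 <= n < N:
                if noise_mask[m, n]:
                    total += 0.5 * phi(u[i, j] - u[m, n], alpha)        # neighbor also in G
                else:
                    total += phi(u[i, j] - xi[m, n], alpha)             # noise-free neighbor
    return total
```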

    Next, we present the computational performance of the proposed algorithm in Tables 3, 4 and 5. The performance was compared with that of other similar algorithms with the same characteristics in terms of the PSNR, RelErr and CPU time to demonstrate the robustness of the algorithm. All computations were carried out under the strong Wolfe line search with the noise densities set to 30\%, 50\% and 80\%, respectively.

    Table 3.  Image restoration outputs for NSDY, DY, DDY1, JYJLL, and IDY in terms of CPU time (CPUT).
    METHOD NSDY DY DDY1 JYJLL IDY
    IMAGE NOISE CPUT CPUT CPUT CPUT CPUT
    CAMERA 30% 48.2371 46.2840 48.3230 46.6328 74.4926
    50% 76.4981 112.2052 78.4817 76.7701 109.4473
    80% 170.6165 261.6915 184.9280 174.2533 141.6278
    BUILDING 30% 60.47630 61.8992 64.5338 46.4718 98.3261
    50% 143.6863 147.1558 104.1381 82.7440 144.3797
    80% 174.7825 187.3404 159.8783 191.3842 187.2095

    Table 4.  Image restoration outputs for NSDY, DY, DDY1, JYJLL, and IDY in terms of RelErr.
    METHOD NSDY DY DDY1 JYJLL IDY
    IMAGE NOISE RelErr RelErr RelErr RelErr RelErr
    CAMERA 30% 1.2046 1.0732 1.2665 1.1387 1.1129
    50% 1.6940 1.9179 1.6558 1.5858 1.6827
    80% 3.6860 3.5995 3.2609 3.0341 3.0700
    BUILDING 30% 1.4926 1.4439 1.4831 1.3775 1.5324
    50% 2.5823 2.6177 2.5245 2.5926 2.6678
    80% 4.9713 5.3007 5.4765 4.9022 5.0085

    Table 5.  Image restoration outputs for NSDY, DY, DDY1, JYJLL, and IDY in terms of PSNR.
    METHOD NSDY DY DDY1 JYJLL IDY
    IMAGE NOISE PSNR PSNR PSNR PSNR PSNR
    CAMERA 30% 30.6398 30.9961 30.5249 30.8717 30.8786
    50% 27.6515 27.2421 27.5891 27.7729 27.3638
    80% 23.7462 23.2416 23.3517 23.6676 23.9367
    BUILDING 30% 29.8875 29.8283 29.7804 29.9679 29.6387
    50% 26.4309 26.4154 26.5512 26.3220 26.1593
    80% 22.2909 21.9993 22.0771 22.2824 22.5207


    From the results presented in the tables above, it is obvious that all of the methods are very competitive. Starting with the CPU time for the camera image, it can be observed that, at the 30\% noise level, the CPU time of the proposed method is lower than that of the DDY1 and IDY algorithms but higher than that of the DY and JYJLL methods. However, at the 50\% noise level, the proposed algorithm outperformed all of the other methods, achieving the shortest CPU time. Also, at the 80\% noise level, our method was superior to all other algorithms except the IDY algorithm, which produced the shortest CPU time. Similarly, the performance of the new method in restoring the corrupted building image also shows that it is very competitive. Furthermore, the overall performance analysis of the methods (see Tables 4 and 5) in terms of the PSNR and RelErr shows that the proposed method performed better because it produced higher PSNR and RelErr values at most of the noise levels considered. Based on the previous discussion, the higher the PSNR and RelErr values, the better the quality of the output images. For a graphical representation of the results, we refer the reader to Figures 4 and 5. These results were obtained for the 30\%, 50\% and 80\% noise levels, respectively.

    Figure 4.  Camera image corrupted by (a) 30%, (b) 50% and (c) 80% salt-and-pepper noise; the restored images using NSDY: (d, e, f), DY: (g, h, i), DDY1: (j, k, l), JYJLL: (m, n, o), IDY: (p, q, r).
    Figure 5.  Building image corrupted by (a) 30%, (b) 50% and (c) 80% salt-and-pepper noise; the restored images using NSDY: (d, e, f), DY: (g, h, i), DDY1: (j, k, l), JYJLL: (m, n, o), IDY: (p, q, r).

    Based on Tables 3, 4 and 5 and Figures 4 and 5, we can conclude that the proposed method is more efficient and robust because it restored most of the corrupted images with higher accuracy than the other existing methods.

    In this study, considering the D–L conjugacy condition as well as the Newton search direction, a spectral DY CG method for large-scale unconstrained optimization and image restoration problems is suggested. The proposed method introduces a new spectral parameter \theta_k to scale the search direction d_k such that it satisfies the sufficient descent property and converges globally under the strong Wolfe criteria. The preliminary computational results on a set of optimization functions, analyzed in comparison with those obtained for some DY modifications in the literature, indicate that the proposed method is efficient and reliable.

    The authors declare that they have not used artificial intelligence tools in the creation of this article.

    The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT, and the Petchra Pra Jom Klao PhD Scholarship of King Mongkut's University of Technology Thonburi (KMUTT), contract No. 52/2564. Moreover, the authors also acknowledge the financial support provided by the Mid-Career Research Grant (N41A640089).

    The authors declare no competing interests.



    [1] I. M. Sulaiman, M. Malik, A. M. Awwal, P. Kumam, M. Mamat, S. Al-Ahmad, On three-term conjugate gradient method for optimization problems with applications on covid-19 model and robotic motion control, Adv. Contin. Discrete Models, (2022), 1–22.
    [2] N. Salihu, P. Kumam, A. M. Awwal, I. Arzuka, T. Seangwattana, A Structured Fletcher-Revees Spectral Conjugate Gradient Method for Unconstrained Optimization with Application in Robotic Model, In Operations Research Forum, 4 (2023), 81.
    [3] K. Kamilu, M. Sulaiman, A. Muhammad, A. Mohamad, M. Mamat, Performance evaluation of a novel conjugate gradient method for training feed forward neural network, Math. Model. Comp., 10 (2023), 326–337.
    [4] M. M. Yahaya, P. Kumam, A. M. Awwal, P. Chaipunya, S. Aji, S. Salisu, A new generalized quasi-newton algorithm based on structured diagonal Hessian approximation for solving nonlinear least-squares problems with application to 3dof planar robot arm manipulator, IEEE Access, 10 (2022), 10816–10826. https://doi.org/10.1109/ACCESS.2022.3144875 doi: 10.1109/ACCESS.2022.3144875
    [5] A. S. Halilu, A. Majumder, M. Y. Waziri, K. Ahmed, Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach, Math. Comput. Simul., 187 (2021), 520–539.
    [6] I. M. Sulaiman, M. Mamat, A new conjugate gradient method with descent properties and its application to regression analysis, JNAIAM. J. Numer. Anal. Ind. Appl. Math., 14 (2020), 25–39.
    [7] G. Yuan, J. Lu, Z. Wang, The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems, Appl. Numer. Math., 152 (2020), 1–11.
    [8] N. Salihu, P. Kumam, A. M. Awwal, I. M. Sulaiman, T. Seangwattana, The global convergence of spectral RMIL conjugate gradient method for unconstrained optimization with applications to robotic model and image recovery, Plos one, 18 (3), e0281250.
    [9] M. Malik, I. M. Sulaiman, A. B. Abubakar, G. Ardaneswari, Sukono, A new family of hybrid three-term conjugate gradient method for unconstrained optimization with application to image restoration and portfolio selection, AIMS Math., 8 (2023), 1–28.
    [10] N. Andrei, A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues, Numer. Algorithms, 77 (4), 1273–1282.
    [11] W. W. Hager, H. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35–58.
    [12] R. Fletcher, C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), 149–154. https://doi.org/10.1093/comjnl/7.2.149 doi: 10.1093/comjnl/7.2.149
    [13] Y. H. Dai, Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (1999), 177–182. https://doi.org/10.1137/S1052623497318992 doi: 10.1137/S1052623497318992
    [14] R. Fletcher, Practical methods of optimization, A Wiley-Interscience Publication. John Wiley & Sons, Ltd., Chichester, second edition, 1987.
    [15] X. Du, P. Zhang, W. Ma, Some modified conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math., 305 (2016), 92–114.
    [16] M. J. D. Powell, Restart procedures for the conjugate gradient method, Math. Program., 12 (1977), 241–254. https://doi.org/10.1023/A:1007963324520 doi: 10.1023/A:1007963324520
    [17] G. Zoutendijk, Nonlinear programming, computational methods, In: J. Abadie Ed., Integer and Nonlinear Programming, North-Holland, Amsterdam, 37–86, 1970.
    [18] N. Andrei, An adaptive scaled BFGS method for unconstrained optimization, Numer. Algorithms, 77 (2018), 413–432. https://doi.org/10.1007/s11075-017-0321-1 doi: 10.1007/s11075-017-0321-1
    [19] Y. H. Dai, L. Z. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87–101.
    [20] M. Raydan, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26–33. https://doi.org/10.1137/S1052623494266365 doi: 10.1137/S1052623494266365
    [21] J. Barzilai, J. M. Borwein, Two-point step size gradient methods. IMA J. Numer. Anal., 8 (1988), 141–148. https://doi.org/10.1093/imanum/8.1.141
    [22] E. G. Birgin, J. M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim., 43 (2001), 117–128.
    [23] N. Salihu, M. R. Odekunle, A. M. Saleh, S. Salihu. A Dai-Liao hybrid Hestenes-Stiefel and Fletcher-Revees methods for unconstrained optimization, Int. J. Indu. Optim., 2 (2021), 33–50.
    [24] N. Salihu, M. Odekunle, M. Waziri, A. Halilu, A new hybrid conjugate gradient method based on secant equation for solving large scale unconstrained optimization problems, Iran. J. Optim., 12 (2020), 33–44. https://doi.org/10.11606/issn.1984-5057.v12i2p33-44 doi: 10.11606/issn.1984-5057.v12i2p33-44
    [25] S. Nasiru, R. O. Mathew, Y. W. Mohammed, S. H. Abubakar, S. Suraj, A Dai-Liao hybrid conjugate gradient method for unconstrained optimization, Int. J. Indu. Optim., 2 (2021), 69–84.
    [26] T. Barz, S. Körkel, G. Wozny, Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design, Compu. Chem. Engi., 77 (2015), 24–42.
    [27] A. M. Awwal, I. M. Sulaiman, M. Maulana, M. Mustafa, K. Poom, S. Kanokwan, A spectral RMIL+ conjugate gradient method for unconstrained optimization with applications in portfolio selection and motion control, IEEE Access, 9 (2021), 75398–75414. https://doi.org/10.1109/ACCESS.2021.3081570 doi: 10.1109/ACCESS.2021.3081570
    [28] H. Shao, H. Guo, X. Wu, P. Liu, Two families of self-adjusting spectral hybrid DL conjugate gradient methods and applications in image denoising, Appl. Math. Model., 118 (2023), 393–411.
    [29] J. Jian, P. Liu, X. Jiang, C. Zhang, Two classes of spectral conjugate gradient methods for unconstrained optimizations, J. Appl. Math. Comput., 68 (2022), 4435–4456. https://doi.org/10.1007/s12190-022-01713-2 doi: 10.1007/s12190-022-01713-2
    [30] J. Jian, P. Liu, X. Jiang, B. He, Two improved nonlinear conjugate gradient methods with the strong Wolfe line search, Bull. Iranian Math. Soc., 48 (2022), 2297–2319. https://doi.org/10.1007/s41980-021-00647-y doi: 10.1007/s41980-021-00647-y
    [31] I. Arzuka, M. R. Abu Bakar, W. J. Leong, A scaled three-term conjugate gradient method for unconstrained optimization, J. Ineq. Appl., (2016), 1–16. https://doi.org/10.15600/2238-1244/sr.v16n42p1-10
    [32] X. Jiang, J. Jian, Improved Fletcher-Reeves and Dai-Yuan conjugate gradient methods with the strong Wolfe line search, J. Comput. Appl. Math., 34 (2019), 525–534.
    [33] J. Jian, L. Yang, X. Jiang, P. Liu, M. Liu, A spectral conjugate gradient method with descent property, Mathematics, 8 (2020), 280.
    [34] J. Jian, L. Han, X. Jiang, A hybrid conjugate gradient method with descent property for unconstrained optimization, Appl. Math. Model., 39 (2015), 1281–1290. https://doi.org/10.1016/j.apm.2014.08.008 doi: 10.1016/j.apm.2014.08.008
    [35] X. Zhou, L. Lu, The global convergence of modified DY conjugate gradient methods under the wolfe line search, J. Chongqing Normal Univ.(Nat. Sci. Ed.), 33 (2016), 6–10.
    [36] Z. Zhu, D. Zhang, S. Wang, Two modified DY conjugate gradient methods for unconstrained optimization problems, Appl. Math. Comput., 373 (2020), 125004. https://doi.org/10.1016/j.amc.2019.125004 doi: 10.1016/j.amc.2019.125004
    [37] N. Andrei, Nonlinear conjugate gradient methods for unconstrained optimization, volume 158 of Springer Optimization and Its Applications, Springer, Cham, 2020.
    [38] J. Momin, Y. X. She, A literature survey of benchmark functions for global optimization problems, Int. J. Mathe. Model. Nume. Optim., 4 (2013), 150–194.
    [39] E. D. Dolan, J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. https://doi.org/10.1007/s101070100263 doi: 10.1007/s101070100263
    [40] M. Nadipally, Chapter 2-Optimization of methods for image-texture segmentation using ant colony optimization, volume 1, In: Intelligent Data Analysis for Biomedical Applications, Academic Press, Elsevier, 2019.
    [41] G. Yu, J. Huang, Y. Zhou, A descent spectral conjugate gradient method for impulse noise removal, Appl. Math. Lett., 23 (2010), 555–560. https://doi.org/10.1016/S0268-005X(08)00209-9 doi: 10.1016/S0268-005X(08)00209-9
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)