
Solving systems of nonlinear equations is an important part of optimization theory and algorithms and is essential to applications in machine learning, artificial intelligence, economic planning, and other important fields. The theoretical study of algorithms for nonlinear systems of equations is an important research topic in computational mathematics, operations research and optimal control, and numerical algebra. As early as the 1970s, the monographs of Ortega [1] and Rheinboldt [2] systematically studied the theory and numerical solution of nonlinear systems of equations.

An early and very famous method is Newton's method [3]. It has the advantage that the algorithm converges quadratically when the initial point is chosen close to the solution; however, Newton's method does not necessarily converge when the initial point is far from the solution, and it requires the computation and storage of Jacobian matrices. Hence, a wide range of researchers have proposed quasi-Newton methods [4,5]. Such algorithms use approximate Jacobian matrices and inherit the fast convergence of Newton's method. In addition, methods for solving systems of nonlinear equations include the Gauss-Newton method, the Levenberg-Marquardt algorithm, and their various modified forms [6,7,8]. The above methods require computing and storing Jacobian matrices or approximate Jacobian matrices at each iteration step. The nonlinear conjugate gradient method belongs to the classical first-order optimization methods and features a simple structure, small storage, and low computational complexity; it has thus been widely studied by the optimization community [9,10,11,12,13]. Derivative-free algorithms are among the popular algorithms for solving large-scale nonlinear systems of equations; they combine the hyperplane projection technique [14] with the structure of first-order optimization methods and achieve an R-linear convergence rate under appropriate conditions. These algorithms have a simple structure, low storage requirements, and low computational complexity, and they do not require derivative information; thus, they are favored by a wide range of researchers.

To ensure the descent of the search direction, adjusting the structure of the search direction has become another important way to study the nonlinear conjugate gradient method. Yuan et al. [15] proposed a further improved weak Wolfe-Powell line search and proved the method's global convergence for general functions under appropriate conditions. [16] proposed the adaptive scaled damped BFGS method (SDBFGS) for non-convex objective functions whose gradients are not Lipschitz continuous. This approach is attractive because the algorithm has strong self-correcting properties for large eigenvalues. In recent years, influenced by [17], first-order optimization methods exemplified by conjugate gradient (CG) techniques, which are straightforward and require little memory, have been widely adopted to solve large-scale unconstrained optimization problems [9,10,11,12,13]. These extensions, together with freshly developed techniques, are variants of renowned conjugate gradient algorithms, another key numerical tool for unconstrained optimization [18,19,20,21].

The three-term conjugate gradient method [22] is considered to have attractive numerical performance and good theoretical properties. Yuan [23] presented an adaptive three-term Polak-Ribière-Polyak (PRP) method for non-convex functions whose gradients are not Lipschitz continuous. An efficient conjugate gradient algorithm is expected to deliver both rapid convergence and high precision. [24] proposed a hybrid inertial spectral conjugate gradient projection method for solving constrained nonlinear monotone equations, which is superior in solving large-scale equations and recovering blurred images contaminated by Gaussian noise. [25] described two families of self-tuning spectral hybrid DL conjugate gradient methods, whose search directions are improved by integrating negative spectral gradients into a final search direction with a convex combination structure. [26] proposed a biased stochastic conjugate gradient (BSCG) algorithm with adaptive step size that combines the stochastic recursive gradient method (SARAH) and the improved Barzilai-Borwein (BB) technique within a typical stochastic gradient algorithm. [27] applied an improved three-term conjugate gradient method and a line-search technique to machine learning, obtaining the same convergence rate as the stochastic conjugate gradient algorithm (SCGA) under weaker assumptions.

The idea of this paper is to combine the family of weak conjugate gradient methods proposed in [28] with the new parametric formulation of the HS conjugate gradient algorithm proposed in [29]. An efficient HS conjugate gradient method (EHSCG) is proposed and applied to image restoration problems and machine learning. The main features of the algorithm are given below:

● The new algorithms possess the sufficient descent and trust-region properties without any additional conditions.

● They converge globally under suitable conditions.

● The new algorithms can be applied to image restoration problems, nonlinear monotone equations, and machine learning tasks.

In Section 2, we present the procedure for solving nonlinear equation models and prove its related properties. In Section 3, the global convergence of the algorithm for non-convex functions is proved under standard conditions using the weak Wolfe-Powell line-search condition. In Section 4, we demonstrate the application of the algorithms to nonlinear monotone equations, image restoration, and machine learning tasks.

    Consider the following nonlinear model:

    $ g(x)=0, $ (2.1)

where $ g:\Re^n\rightarrow \Re^n $ is a continuously differentiable monotone function and $ x\in \Re^n $. The monotonicity of $ g(x) $ implies that the following inequality holds:

$ (g(m_1)-g(m_2))^T(m_1-m_2)\geq0,\quad \forall m_1, m_2\in\Re^n. $ (2.2)

Scholars have developed a number of effective methods for this model.

To solve this problem, we typically use the iteration $ x_{k+1} = x_k+\alpha_kd_k $, where $ x_k $ denotes the current iteration point, $ x_{k+1} $ the next point, $ \alpha_k $ the step length, and $ d_k $ the search direction, which is framed as below:

$ d_k = -g_k+\beta_{k-1}d_{k-1}. $ (2.3)

The work in [14] covered projection techniques for solving large-scale nonlinear equation systems and noted that the projection technology is strictly coupled to the direction and step size. In particular, hyperplane and projection technologies were leveraged to obtain a formulation for the next iteration point:

$ x_{k+1} = \begin{cases} w_k, & g(w_k) = 0,\\ x_k-\frac{g(w_k)^T(x_k-w_k)}{\|g(w_k)\|^2}g(w_k), & \text{otherwise}, \end{cases} $ (2.4)

    where $ w_k = x_k+\alpha_kd_k $.

Furthermore, a line-search scheme was presented in [30] to determine the step length $ \alpha_k $ in the iteration sequence as follows:

$ -g_{k+1}^Td_k\geq\varpi\alpha_k\|g_{k+1}\|\|d_k\|^2, $ (2.5)

where $ g_{k+1} = g(x_k+\alpha_kd_k) $ and the step length $ \alpha_k $ is the largest element of $ \left\{\tilde{q}, \tilde{q}\dot{t}, \tilde{q}\dot{t}^2, \ldots\right\} $ satisfying (2.5), with $ \tilde{q} > 0 $, $ \dot{t}\in(0, 1) $, and $ \varpi > 0 $.

In [28], the following modified search direction was proposed:

$ d_k^{\star} = -g_k+r_k^{\star}d_k = -(1+r_k^{\star})g_k+r_k^{\star}\beta_{k-1}d_{k-1}, $ (2.6)

where $ d_k $ is defined in (2.3), and $ r_{k}^{\star} $ is designated as follows:

    $ r^{\star}_{k} = \vartheta\frac{\|g_{k}\|}{\|d_{k}\|}. $

The authors showed that it is equivalent to the traditional HS conjugate gradient method under the conjugacy condition. They executed numerous tests that illustrated the superiority of the formulation in tackling large-scale optimization problems.

Yuan et al. [29] updated the parametric formulation with the following HS conjugate gradient direction:

$ \hat{d}_k = \begin{cases} -\tau_1g_k+\frac{g_k^Ty_{k-1}d_{k-1}-d_{k-1}^Tg_ky_{k-1}}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|, \tau_3|d_{k-1}^Ty_{k-1}|\}}, & k\geq2,\\ -\tau_1g_k, & k = 1, \end{cases} $ (2.7)

where $ y_{k-1} = g_{k}-g_{k-1} $ and $ \tau_{1, 2, 3}\geq0 $. The authors proved that their method fulfills a sufficient descent condition and converges globally.

Inspired by (2.6) and (2.7), and considering the excellent theoretical and numerical performance of the two algorithms, we obtain the direction below:

$ \begin{aligned} d_k & = -g_k+\vartheta\frac{\|g_k\|}{\|\hat{d}_k\|}\hat{d}_k\\ & = -g_k+\vartheta\frac{\|g_k\|}{\|\hat{d}_k\|}\left(-\tau_1g_k+\frac{g_k^Ty_{k-1}d_{k-1}-d_{k-1}^Tg_ky_{k-1}}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|, \tau_3|d_{k-1}^Ty_{k-1}|\}}\right)\\ & = -\left(1+\nu_1\frac{\|g_k\|}{\hslash_k}\right)g_k+\frac{\|g_k\|\left(g_k^Ty_{k-1}d_{k-1}-d_{k-1}^Tg_ky_{k-1}\right)}{\hslash_k\max\{\nu_2\|d_{k-1}\|\|y_{k-1}\|, \nu_3|d_{k-1}^Ty_{k-1}|\}}, \end{aligned} $

    where $ \hslash_k = \sqrt{\tau_1^2\|g_{k}\|^2+\frac{(|g_{k}^Ty_{k-1}|\|d_{k-1}\|-|d_{k-1}^Tg_{k}|\|y_{k-1}\|)^2}{\max\{\tau_2\|d_{k-1}\|\|y_{k-1}\|, \tau_3|d_{k-1}^Ty_{k-1}|\}^2}} $.

Based on the above derivation, we finally acquire the improved $ d_{k+1} $ formulation of this paper:

$ d_{k+1} = \begin{cases} -\left(1+\nu_1\frac{\|g_{k+1}\|}{\hslash_{k+1}}\right)g_{k+1}+\frac{\|g_{k+1}\|\left(g_{k+1}^Ty_kd_k-d_k^Tg_{k+1}y_k\right)}{\hslash_{k+1}\max\{\nu_2\|d_k\|\|y_k\|, \nu_3|d_k^Ty_k|\}}, & k\geq1,\\ -\left(1+\nu_1\frac{\|g_{k+1}\|}{\hslash_{k+1}}\right)g_{k+1}, & k = 0, \end{cases} $ (2.8)

where $ \hslash_{k+1} = \sqrt{\tau_1^2\|g_{k+1}\|^2+\frac{(|g_{k+1}^Ty_k|\|d_k\|-|d_k^Tg_{k+1}|\|y_k\|)^2}{\max\{\tau_2\|d_k\|\|y_k\|, \tau_3|d_k^Ty_k|\}^2}} $, $ y_k = g_{k+1}-g_k $, $ \nu_1 = \vartheta\tau_1 $, $ \nu_2 = \frac{\tau_2}{\vartheta} $, $ \nu_3 = \frac{\tau_3}{\vartheta} $, and $ \vartheta > 0 $, $ \nu_i > 0 $, and $ \tau_i > 0, i = 1, 2, 3 $. Based on (2.8) combined with the line search (2.5), the sufficient descent property and the trust-region property of the algorithm are established, and its global convergence can be certified. Meanwhile, numerical results also prove the effectiveness and feasibility of the algorithm.
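To make the construction concrete, the following sketch (in Python with NumPy; the function name, the parameter defaults, and the convention used for the first iteration are our own illustrative choices, with $ \nu_1, \nu_2, \nu_3 $ taken from the numerical section) computes the direction (2.8) from $ g_{k+1} $, $ d_k $, and $ y_k $:

```python
import numpy as np

def ehscg_direction(g_new, d_prev=None, y_prev=None,
                    nu1=1.8, nu2=3000.0, nu3=1000.0,
                    tau1=1.0, tau2=1.0, tau3=1.0):
    """Sketch of the search direction d_{k+1} in Eq. (2.8)."""
    gnorm = np.linalg.norm(g_new)
    if d_prev is None or y_prev is None:
        # k = 0 branch: taking h = tau1*||g|| (an illustrative convention)
        # gives a scaled steepest-descent step.
        return -(1.0 + nu1 / tau1) * g_new
    dnorm = np.linalg.norm(d_prev)
    ynorm = np.linalg.norm(y_prev)
    gty = float(g_new @ y_prev)
    dtg = float(d_prev @ g_new)
    dty = float(d_prev @ y_prev)
    # h_{k+1} as defined after Eq. (2.8); the tiny floor avoids division by zero.
    denom = max(tau2 * dnorm * ynorm, tau3 * abs(dty), 1e-16)
    h = np.sqrt(tau1**2 * gnorm**2
                + (abs(gty) * dnorm - abs(dtg) * ynorm)**2 / denom**2)
    # HS-type three-term correction; it is orthogonal to g_new by construction.
    correction = gty * d_prev - dtg * y_prev
    scale = max(nu2 * dnorm * ynorm, nu3 * abs(dty), 1e-16)
    return -(1.0 + nu1 * gnorm / h) * g_new + gnorm * correction / (h * scale)
```

Note that $ g_{k+1}^T(g_{k+1}^Ty_k\,d_k-d_k^Tg_{k+1}\,y_k) = 0 $, so the correction term does not affect $ g_{k+1}^Td_{k+1} $; this observation is the key to the descent property in Theorem 2.1 below.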

Assumption 2.1. (i) The solution set of the problem is nonempty. (ii) The mapping $ g(x) $ is Lipschitz continuous; that is, there exists $ L > 0 $ such that

$ \|g(m_1)-g(m_2)\|\leq L\|m_1-m_2\|,\quad \forall m_1, m_2\in\Re^n. $ (2.9)

Algorithm 1 Efficient HS conjugate gradient method (EHSCG)
    1: Choose an initial point $ x_0 \in \Re^n $; constants $ \varpi $, $ \vartheta, \; \tilde{q} > 0 $; $ \dot{t}, \; \varepsilon\in (0, 1) $, $ \nu_{1, 2, 3} > 0 $. Let $ k = 1 $.
    2: If $ \|g_k\|\leq\varepsilon $, stop; otherwise, compute $ d_k $ based on (2.8).
    3: Select the step size $ \alpha_k $ on the basis of (2.5).
    4: Set the trial point $ w_k=x_k+\alpha_kd_k $.
    5: If $ \|g(w_k)\|\leq\varepsilon $, stop and set $ x_{k+1}=w_k $. Otherwise, construct the next iteration point via $ x_{k+1}=x_k-\frac{g(w_k)^T(x_k-w_k)}{\|g(w_k)\|^2}g(w_k) $.
    6: Let $ k=k+1 $ and go to Step 2.
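A minimal end-to-end sketch of Algorithm 1 follows (Python/NumPy; it reuses `ehscg_direction` from the sketch above, and the tolerance safeguards are our own additions). The inner loop implements the line search (2.5) by backtracking over $ \tilde{q}\dot{t}^a $, and the update implements the projection step (2.4):

```python
import numpy as np

def ehscg_solve(g, x0, eps=1e-5, varpi=0.01, q=1.0, t=0.5, max_iter=10000):
    """Sketch of Algorithm 1: derivative-free projection method for g(x) = 0."""
    x = np.asarray(x0, dtype=float)
    g_prev, d_prev = None, None
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx) <= eps:                 # Step 2
            return x
        y_prev = None if g_prev is None else gx - g_prev
        d = ehscg_direction(gx, d_prev, y_prev)       # direction (2.8)
        alpha, dn2 = q, float(d @ d)                  # Step 3: line search (2.5)
        while True:
            w = x + alpha * d                         # Step 4: trial point
            gw = g(w)
            if -float(gw @ d) >= varpi * alpha * np.linalg.norm(gw) * dn2:
                break
            alpha *= t
            if alpha < 1e-12:                         # safeguard (not in the paper)
                break
        if np.linalg.norm(gw) <= eps:                 # Step 5
            return w
        # Projection (2.4) onto the hyperplane through w with normal g(w).
        x = x - float(gw @ (x - w)) / float(gw @ gw) * gw
        g_prev, d_prev = gx, d
    return x
```

The projection step exploits monotonicity: the hyperplane $ \{z: g(w_k)^T(z-w_k) = 0\} $ separates $ x_k $ from the solution set, so projecting onto it moves the iterate closer to every solution.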

This assumption implies that there exists a constant $ \omega > 0 $ such that

$ \|g_k\|\leq\omega. $ (2.10)

Theorem 2.1. If $ d_{k+1} $ satisfies Eq (2.8), then

(1) the sufficient descent property holds:

$ g_{k+1}^Td_{k+1}\leq-(1-\vartheta)\|g_{k+1}\|^2, $ (2.11)

and (2) the trust-region property holds:

$ \|d_{k+1}\|\leq(1+\vartheta)\|g_{k+1}\|. $ (2.12)

Proof. A very similar proof is given in Yuan's work [28] and is not repeated here.

Lemma 2.2. Suppose that Assumption 2.1 holds, the mapping $ g $ is monotone, the iterative sequence $ \{x_k\} $ is generated by Algorithm 1, and the point $ x^{\star} $ satisfies $ g(x^{\star}) = 0 $. Furthermore, if the sequence $ \{x_k\} $ is infinite, then

$ \sum\limits_{k = 0}^{\infty}\|x_k-x_{k-1}\|^2<\infty. $ (2.13)

Proof. A detailed proof is available in [14].

Theorem 2.3. If Assumption 2.1 holds, then the line search (2.5) in Algorithm 1 is well defined; that is, a step length $ \alpha_k $ is obtained in a finite number of trials at each iteration.

Proof. This conclusion is proved by contradiction. Denote the index set $ \varphi = N\cup{\{0\}} $, take any $ k \in \varphi $, and consider the iteration at $ x_k $. Suppose that no step size satisfying the line search exists; that is, every trial step $ \alpha^{\star} = \tilde{q}\dot{t}^a $ satisfies

$ -\mathring{g}^Td_k<\varpi\alpha^{\star}\|\mathring{g}\|\|d_k\|^2, $ (2.14)

where $ \mathring{g} = g(x_k+\alpha^{\star}d_k) $. By (2.9) and (2.14), we have

$ \begin{aligned} \|g_k\|^2 & = \frac{1}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left(-g_k^Td_k\right) = \frac{1}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left((\mathring{g}-g_k)^Td_k-\mathring{g}^Td_k\right)\\ & \leq\frac{1}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left(\|\mathring{g}-g_k\|\|d_k\|+\varpi\alpha^{\star}\|\mathring{g}\|\|d_k\|^2\right)\\ & \leq\frac{1}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left(L\|x_k+\alpha^{\star}d_k-x_k\|\|d_k\|+\varpi\alpha^{\star}\|\mathring{g}\|\|d_k\|^2\right)\\ & \leq\frac{1}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left(L\alpha^{\star}\|d_k\|^2+\varpi\alpha^{\star}\|\mathring{g}\|\|d_k\|^2\right) = \frac{\alpha^{\star}}{1+\vartheta\frac{\|g_k\|}{\hslash_k}}\left(L+\varpi\|\mathring{g}\|\right)\|d_k\|^2. \end{aligned} $

By (2.9) and (2.10), then

$ \|\mathring{g}\|\leq\|\mathring{g}-g_k\|+\|g_k\|\leq L\|x_k+\alpha^{\star}d_k-x_k\|+\omega = L\alpha^{\star}\|d_k\|+\omega\leq L\alpha^{\star}(1+\vartheta)\|g_k\|+\omega\leq\left(L\tilde{q}(1+\vartheta)+1\right)\omega. $

Further, we obtain that

$ \alpha^{\star}\geq\frac{\left(1+\vartheta\frac{\|g_k\|}{\hslash_k}\right)\|g_k\|^2}{\left(L+\varpi\|g(x_k+\alpha^{\star}d_k)\|\right)\|d_k\|^2}\geq\frac{\left(1+\vartheta\frac{\|g_k\|}{\hslash_k}\right)\|g_k\|^2}{\left(L+\varpi\left(L\tilde{q}(1+\vartheta)+1\right)\omega\right)\|d_k\|^2}\geq\frac{2+\vartheta}{\left(L+\varpi\left(L\tilde{q}(1+\vartheta)+1\right)\omega\right)(1+\vartheta)^3}>0. $

The above inequality shows that every trial step $ \alpha^{\star} $ is bounded below by a positive constant, which contradicts the fact that $ \alpha^{\star} = \tilde{q}\dot{t}^a\rightarrow0 $ as $ a\rightarrow\infty $. The proof is complete.

Theorem 2.4. If Assumption 2.1 holds and Algorithm 1 yields $ \{\alpha_k\} $, $ \{d_k\} $, $ \{x_k\} $, and $ \{g_k\} $, then $ \liminf_{k\to\infty}\|g_k\| = 0 $.

Proof. This theorem is also proved by contradiction. We denote the index set $ \varphi = N\cup{\{0\}} $ and let $ k\in\varphi $. Contrary to the assertion, assume that there exist $ \varrho>0 $ and an index $ n_0 > 0 $ such that

$ \|g_k\|\geq\varrho\quad\text{for all } k\geq n_0, $ (2.15)

where $ \varrho $ is a constant and $ n_0 $ is an index. According to (2.12) and (2.10), we have

$ \|d_{k+1}\|\leq(1+\vartheta)\|g_{k+1}\|\leq(1+\vartheta)\omega. $

Combining Eq (2.11) with (2.12) and (2.15), we have

$ \|g_k\|\|d_k\|\geq-g_k^Td_k = \left(1+\vartheta\frac{\|g_k\|}{\hslash_k}\right)\|g_k\|^2, $
$ \|d_k\|\geq\left(1+\frac{\vartheta\varrho}{\|d_k\|}\right)\varrho, $
$ \|d_k\|^2-\varrho\|d_k\|\geq\vartheta\varrho^2, $
$ \left(\|d_k\|-\frac{\varrho}{2}\right)^2\geq\left(\vartheta+\frac{1}{4}\right)\varrho^2, $
$ \|d_k\|\geq\sqrt{\vartheta+\frac{1}{4}}\,\varrho+\frac{\varrho}{2}, $
$ \sqrt{\vartheta+\frac{1}{4}}\,\varrho+\frac{\varrho}{2}\leq\|d_k\|\leq(1+\vartheta)\omega. $

It can be concluded that the direction sequence $ \{d_k\} $ is bounded; we may thus suppose that $ \lim_{k\to\infty}d_k = d^{\star} $ and, by (2.13) together with Theorem 2.3, that the iteration points satisfy $ \lim_{k\to\infty}x_k = x^{\star} $. Since the step size $ \alpha_{k} $ is bounded, we have

$ \sum\limits_{k = 0}^{\infty}\|x_{k+1}-x_k\|^2 = \sum\limits_{k = 0}^{\infty}\|\alpha_kd_k\|^2<\infty, $ (2.16)
$ \alpha_k\|d_k\|\rightarrow0, $
$ \alpha_k\rightarrow0. $

Since $ \alpha_k\rightarrow0 $, for all sufficiently large $ k $ the trial step $ \alpha_k/\dot{t} $ fails the line search (2.5); that is, we obtain the following inequality:

$ -g\left(x_k+\frac{\alpha_k}{\dot{t}}d_k\right)^Td_k<\varpi\frac{\alpha_k}{\dot{t}}\left\|g\left(x_k+\frac{\alpha_k}{\dot{t}}d_k\right)\right\|\|d_k\|^2. $

Taking limits on both sides of the given inequality, one has

$ -(g^{\star})^Td^{\star}\leq0. $

On the other hand, letting $ k $ tend to infinity in $ g_{k+1}^Td_{k+1} = -(1+\vartheta\frac{\|g_{k+1}\|}{\hslash_{k+1}})\|g_{k+1}\|^2 $ yields

$ -(g^{\star})^Td^{\star}\geq\|g^{\star}\|^2\geq0. $

Combining the two relations above implies

$ g^{\star} = 0, $

but this contradicts (2.15). Thus, the conclusion of the theorem holds.

We now apply (2.8) to large-scale unconstrained optimization problems. Consider the problem below:

$ \min\limits_{x\in\Re^n}f(x), $ (3.1)

    where $ f:\Re^n\rightarrow \Re $ is a continuously differentiable function. Set $ \nabla f(x) = \digamma(x) $.

    Motivated by the description in Section 2, the following formula for $ d_{k} $ is proposed:

$ d_k = \begin{cases} -\left(1+\nu_1\frac{\|\digamma_k\|}{\hslash_k}\right)\digamma_k+\frac{\|\digamma_k\|\left(\digamma_k^T\hat{y}_{k-1}d_{k-1}-d_{k-1}^T\digamma_k\hat{y}_{k-1}\right)}{\hslash_k\max\{\nu_2\|d_{k-1}\|\|\hat{y}_{k-1}\|, \nu_3|d_{k-1}^T\hat{y}_{k-1}|\}}, & k\geq2,\\ -\left(1+\nu_1\frac{\|\digamma_k\|}{\hslash_k}\right)\digamma_k, & k = 1, \end{cases} $ (3.2)

where $ \hslash_{k} = \sqrt{\tau_1^2\|\digamma_{k}\|^2+\frac{(|\digamma^T_{k}\hat{y}_{k-1}|\|d_{k-1}\|-|d_{k-1}^T\digamma_{k}|\|\hat{y}_{k-1}\|)^2}{\max\{\tau_2\|d_{k-1}\|\|\hat{y}_{k-1}\|, \tau_3|d_{k-1}^T\hat{y}_{k-1}|\}^2}} $, $ \hat{y}_k = \digamma_{k+1}-\digamma_k $, $ \nu_1 = \vartheta\tau_1 $, $ \nu_2 = \frac{\tau_2}{\vartheta} $, $ \nu_3 = \frac{\tau_3}{\vartheta} $, $ \vartheta > 0 $, and $ \tau_i > 0, i = 1, 2, 3 $. Analogous to the algorithm in Section 2, $ d_{k} $ has the following properties:

$ \digamma_k^Td_k\leq-(1-\vartheta)\|\digamma_k\|^2, $ (3.3)

and

$ \|d_k\|\leq(\vartheta+1)\|\digamma_k\|, $ (3.4)

where $ \vartheta > 0 $ is a constant. The proofs of the above properties were given in Section 2 and are not repeated in this section.

This section presents the algorithm and its global convergence analysis.

Algorithm 2
    1: Choose an initial point $ x_0 \in \Re^n $; constants $ Eps \in (0, 1) $, $ 0 < \zeta < \frac{1}{2} $, $ \zeta < \xi < 1 $; $ \vartheta, \; \nu_{1, 2, 3} > 0 $. Let $ k=1 $.
    2: If $ \|\digamma_k\|\leq Eps $, stop; otherwise, compute $ d_k $ based on (3.2).
    3: Select the step size $ \alpha_k $ satisfying
$ f_{k+1}\leq f_k+\zeta\alpha_k\digamma^T_kd_k, $ (3.5)
    and
$ \digamma^T_{k+1}d_k\geq \xi\digamma^T_kd_k. $ (3.6)
    4: Set the new point $ x_{k+1}=x_k+\alpha_kd_k $.
    5: Let $ k:=k+1 $, and go to Step 2.
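For concreteness, the following sketch (Python/NumPy; the bisection bracketing scheme, parameter values, and all names are our own illustrative choices, since the paper does not prescribe an implementation) realizes a step satisfying the weak Wolfe-Powell conditions (3.5) and (3.6):

```python
import numpy as np

def weak_wolfe_step(f, grad, x, d, zeta=0.4, xi=0.9, alpha0=1.0, max_iter=50):
    """Bisection search for a step length alpha satisfying
    f(x + a d) <= f(x) + zeta * a * g^T d   (3.5, sufficient decrease) and
    grad(x + a d)^T d >= xi * g^T d          (3.6, curvature)."""
    fx = f(x)
    gtd = float(grad(x) @ d)          # negative for a descent direction
    lo, hi, alpha = 0.0, np.inf, alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + zeta * alpha * gtd:
            hi = alpha                # (3.5) fails: step too long, shrink
        elif float(grad(x + alpha * d) @ d) < xi * gtd:
            lo = alpha                # (3.6) fails: step too short, grow
        else:
            return alpha
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * alpha
    return alpha

# Algorithm 2 then updates x_{k+1} = x_k + alpha_k * d_k with d_k from (3.2).
```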

Assumption 3.1. (i) The level set $ \Theta = \{x|f(x)\leq f(x_0)\} $ is bounded.

(ii) $ f(x)\in C^2 $ is bounded below, and its gradient is Lipschitz continuous, which implies that there exists a constant $ L>0 $ such that

$ \|\digamma(m_1)-\digamma(m_2)\|\leq L\|m_1-m_2\|,\quad \forall m_1, m_2\in\Re^n. $ (3.7)

Theorem 3.1. If Assumption 3.1 holds and Algorithm 2 generates $ \{x_k\} $, $ \{\alpha_k\} $, $ \{d_k\} $, and $ \{\digamma_k\} $, then

$ \lim\limits_{k\rightarrow\infty}\|\digamma_k\| = 0. $ (3.8)

Proof. We establish this by contradiction. Suppose that the theorem does not hold; then there exists a constant $ \varepsilon^* > 0 $ such that, for all $ k\geq k^* > 0 $,

$ \|\digamma_k\|\geq\varepsilon^*. $

By the sufficient descent property (3.3) and the line search (3.5), we then have

$ f_{k+1}\leq f_k+\zeta\alpha_k\digamma_k^Td_k\leq f_k-\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2. $

Consequently, summing over $ k $ from 0 to $ \infty $ yields

$ \sum\limits_{k = 0}^{\infty}\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2\leq\sum\limits_{k = 0}^{\infty}(f_k-f_{k+1}) = f_0-f_{\infty}<\infty, $ (3.9)

where the finiteness on the right side results from Assumption 3.1(ii), since $ f $ is bounded below. Significantly, Eq (3.9) implies that

$ \lim\limits_{k\rightarrow\infty}\zeta(1-\vartheta)\alpha_k\|\digamma_k\|^2 = 0. $

Through the line search (3.6) together with Assumption 3.1, we have

$ (\digamma_k-\digamma_{k-1})^Td_{k-1}\geq(\xi-1)\digamma_{k-1}^Td_{k-1}\geq(1-\xi)(1-\vartheta)\|\digamma_{k-1}\|^2, $
$ (\digamma_k-\digamma_{k-1})^Td_{k-1}\leq\|\digamma_k-\digamma_{k-1}\|\|d_{k-1}\|\leq L\alpha_{k-1}\|d_{k-1}\|^2\leq L(1+\vartheta)^2\alpha_{k-1}\|\digamma_{k-1}\|^2. $

Combining the two inequalities above, we obtain

$ \alpha_{k-1}\geq\frac{(1-\xi)(1-\vartheta)\|\digamma_{k-1}\|^2}{L(1+\vartheta)^2\|\digamma_{k-1}\|^2} = \frac{(1-\xi)(1-\vartheta)}{L(1+\vartheta)^2}>0, $

where the bounds on the two sides follow from (3.3) and (3.4). Hence the step sizes are bounded away from zero, so (3.9) forces $ \lim_{k\rightarrow\infty}\|\digamma_k\| = 0 $, which contradicts the assumption and establishes (3.8). The proof is complete.

All methods are coded in MATLAB R2021b and run on a PC with a Windows 10 operating system and an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz.

We compare Algorithm 1 with the modified HS method [29], the three-term PRP method [31], and the traditional PRP method [21,32]. The termination condition for each of the eleven test problems is

$ \|g_k\|\leq10^{-5}. $

The parameters in Algorithm 1 are chosen as $ \varpi = 0.01 $, $ \vartheta = 0.8 $, $ \tilde{q} = 1 $, $ \dot{t} = 0.5 $, $ \nu_1 = 1.8 $, $ \nu_2 = 3000 $, and $ \nu_3 = 1000 $. To keep this paper self-contained, we list the problems from [33] below. Each problem, together with its initial point $ x_0 $, is stated in terms of

$ g(x) = (g_1(x), g_2(x), \ldots, g_s(x))^T. $

Problem 4.1. Exponential function 1:

$ g_1(x) = e^{x_1-1}-1, \qquad g_t(x) = t\left(e^{x_t-1}-x_t\right), \quad t = 2, 3, \ldots, s. $

Initial point $ x_0 = (\frac{s}{s-1}, \frac{s}{s-1}, \ldots, \frac{s}{s-1})^T $.

Problem 4.2. Singular function:

$ g_1(x) = \frac{1}{3}x_1^3+\frac{1}{2}x_2^2, \qquad g_t(x) = -\frac{1}{2}x_t^2+\frac{t}{3}x_t^3+\frac{1}{2}x_{t+1}^2, \quad t = 2, 3, \ldots, s-1, \qquad g_s(x) = -\frac{1}{2}x_s^2+\frac{s}{3}x_s^3. $

Initial point $ x_0 = (1, 1, \ldots, 1)^T $.

Problem 4.3.

$ g_t(x) = \ln(x_t+1)-\frac{x_t}{s}, \quad t = 1, 2, \ldots, s. $

Initial point $ x_0 = (1, 1, \ldots, 1)^T $.

Problem 4.4. Trigexp function:

$ g_1(x) = 3x_1^3+2x_2-5+\sin(x_1-x_2)\sin(x_1+x_2), $
$ g_t(x) = -x_{t-1}e^{x_{t-1}-x_t}+x_t(4+3x_t^2)+2x_{t+1}+\sin(x_t-x_{t+1})\sin(x_t+x_{t+1})-8, \quad t = 2, 3, \ldots, s-1, $
$ g_s(x) = -x_{s-1}e^{x_{s-1}-x_s}+4x_s-3. $

Initial point $ x_0 = (0, 0, \ldots, 0)^T $.

Problem 4.5.

$ g_t(x) = e^{x_t}-1, \quad t = 1, 2, \ldots, s. $

Initial point $ x_0 = (\frac{1}{s}, \frac{2}{s}, \ldots, \frac{s}{s})^T $.

Problem 4.6. Penalty I function:

$ g_t(x) = \sqrt{10^{-5}}\,(x_t-1), \quad t = 1, 2, \ldots, s-1, $
$ g_s(x) = \frac{1}{4s}\left(x_1^2+x_2^2+\cdots+x_s^2\right)-\frac{1}{4}. $

Initial point $ x_0 = (\frac{1}{3}, \frac{1}{3}, \ldots, \frac{1}{3})^T $.

Problem 4.7. Variable dimensioned function:

$ g_t(x) = x_t-1, \quad t = 1, 2, \ldots, s-2, $
$ g_{s-1}(x) = (x_1-1)+2(x_2-1)+3(x_3-1)+\cdots+(s-2)(x_{s-2}-1), $
$ g_s(x) = \left((x_1-1)+2(x_2-1)+3(x_3-1)+\cdots+(s-2)(x_{s-2}-1)\right)^2. $

Initial point $ x_0 = (1-\frac{1}{s}, 1-\frac{2}{s}, \ldots, 1-\frac{s}{s})^T $.

Problem 4.8. Tridiagonal system:

$ g_1(x) = 4(x_1-x_2^2), $
$ g_t(x) = 8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2), \quad t = 2, 3, \ldots, s-1, $
$ g_s(x) = 8x_s(x_s^2-x_{s-1})-2(1-x_s). $

Initial point $ x_0 = (12, 12, \ldots, 12)^T $.

Problem 4.9. Five-diagonal system:

$ g_1(x) = 4(x_1-x_2^2)+x_2-x_3^2, $
$ g_2(x) = 8x_2(x_2^2-x_1)-2(1-x_2)+4(x_2-x_3^2)+x_3-x_4^2, $
$ g_t(x) = 8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2)+x_{t-1}^2-x_{t-2}+x_{t+1}-x_{t+2}^2, \quad t = 3, 4, \ldots, s-2, $
$ g_{s-1}(x) = 8x_{s-1}(x_{s-1}^2-x_{s-2})-2(1-x_{s-1})+4(x_{s-1}-x_s^2)+x_{s-2}^2-x_{s-3}, $
$ g_s(x) = 8x_s(x_s^2-x_{s-1})-2(1-x_s)+x_{s-1}^2-x_{s-2}. $

Initial point $ x_0 = (-2, -2, \ldots, -2)^T $.

Problem 4.10. Seven-diagonal system:

$ g_1(x) = 4(x_1-x_2^2)+x_2-x_3^2+x_3-x_4^2, $
$ g_2(x) = 8x_2(x_2^2-x_1)-2(1-x_2)+4(x_2-x_3^2)+x_1^2+x_3-x_4^2+x_4-x_5^2, $
$ g_3(x) = 8x_3(x_3^2-x_2)-2(1-x_3)+4(x_3-x_4^2)+x_2^2-x_1+x_4-x_5^2+x_1^2+x_5-x_6^2, $
$ g_t(x) = 8x_t(x_t^2-x_{t-1})-2(1-x_t)+4(x_t-x_{t+1}^2)+x_{t-1}^2-x_{t-2}+x_{t+1}-x_{t+2}^2+x_{t-2}^2+x_{t+2}-x_{t-3}-x_{t+3}^2, \quad t = 4, 5, \ldots, s-3, $
$ g_{s-1}(x) = 8x_{s-1}(x_{s-1}^2-x_{s-2})-2(1-x_{s-1})+4(x_{s-1}-x_s^2)+x_{s-2}^2-x_{s-3}+x_s+x_{s-3}^2-x_{s-4}, $
$ g_s(x) = 8x_s(x_s^2-x_{s-1})-2(1-x_s)+x_{s-1}^2-x_{s-2}+x_{s-2}^2-x_{s-3}. $

Initial point $ x_0 = (-3, -3, \ldots, -3)^T $.

Problem 4.11. Troesch problem:

$ g_1(x) = 2x_1+\varrho h^2\sinh(\varrho x_1)-x_2, $
$ g_t(x) = 2x_t+\varrho h^2\sinh(\varrho x_t)-x_{t-1}-x_{t+1}, \quad t = 2, 3, \ldots, s-1, $
$ g_s(x) = 2x_s+\varrho h^2\sinh(\varrho x_s)-x_{s-1}, $

where $ \varrho = 10 $ and $ h = \frac{1}{s+1} $. Initial point $ x_0 = (0, 0, \ldots, 0)^T $.
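As an illustration of how such residual maps are coded, the sketch below (Python/NumPy; the vectorization and names are ours) implements Problems 4.3 and 4.5 and could be passed to a solver such as the `ehscg_solve` sketch given earlier:

```python
import numpy as np

def g_problem45(x):
    """Problem 4.5: g_t(x) = exp(x_t) - 1, t = 1, ..., s."""
    return np.exp(x) - 1.0

def g_problem43(x):
    """Problem 4.3: g_t(x) = ln(x_t + 1) - x_t / s."""
    s = x.size
    return np.log(x + 1.0) - x / s

s = 1000
x0_45 = np.arange(1, s + 1) / s      # x0 = (1/s, 2/s, ..., s/s)^T
x0_43 = np.ones(s)                   # x0 = (1, 1, ..., 1)^T
# e.g., x_star = ehscg_solve(g_problem45, x0_45, eps=1e-5)
```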

For each problem, four dimensions of 1000, 5000, 10,000, and 50,000 were considered. In order to provide a graphical and well-rounded comparison of these methods, this paper utilizes the performance profiles of [34], which convey rich information on both efficiency and robustness. For each method we plot the proportion $ P $ of test problems on which the method is within a factor $ \tau $ of the best result. The performance curves are depicted in Figures 1-3. The numerical results are tabulated in Table 1, where "Dim" denotes the dimension and $ \|g_k\| $ denotes the final residual norm.

    Figure 1.  Performance profile based on NI (number of iterations).
    Figure 2.  Performance profile based on NF (number of function evaluations).
    Figure 3.  Performance profile based on CPU time (CPU seconds consumed).
    Table 1.  Numerical results of the considered methods.
Algorithm 1 Modified HS Three-term PRP PRP
NO Dim NI/NG/CPU/$ \|g_k\| $ NI/NG/CPU/$ \|g_k\| $ NI/NG/CPU/$ \|g_k\| $ NI/NG/CPU/$ \|g_k\| $
    1 1000 293/451/0.140625/9.941213e-06 395/518/0.421875/9.960606e-06 176/177/0.078125/9.999271e-06 187/188/0.062500/9.923051e-06
    1 5000 165/257/0.921875/9.904049e-06 199/270/1.359375/9.989538e-06 105/106/0.609375/9.978384e-06 109/110/0.421875/9.912454e-06
    1 10,000 126/198/1.781250/9.824845e-06 155/211/2.000000/9.923827e-06 85/86/1.125000/9.994038e-06 86/87/1.156250/9.942374e-06
    1 50,000 55/92/1.875000/9.742663e-06 82/113/2.515625/9.859854e-06 47/48/1.000000/9.970994e-06 50/51/1.812500/9.812461e-06
    2 1000 10251/12764/6.437500/9.988723e-06 13248/13441/7.421875/9.984307e-06 24134/29845/14.515625/9.996621e-06 23798/29299/14.640625/9.996136e-06
2 5000 14093/17753/114.421875/9.991092e-06 14934/15561/110.281250/9.991112e-06 24335/42282/243.843750/9.994119e-06 23669/41392/236.500000/9.999623e-06
    2 10,000 52760/199219/1710.968750/9.997897e-06 15418/16372/218.046875/9.996137e-06 20300/49021/466.765625/9.996696e-06 24989/53363/558.859375/9.990332e-06
    2 50,000 14291/40231/991.046875/9.992350e-06 12791/15588/472.906250/9.997958e-06 22215/103750/2252.015625/9.997605e-06 22660/103815/2278.734375/9.998269e-06
    3 1000 8/16/0.015625/2.973142e-06 29/110/0.031250/9.327860e-06 32/2250/0.375000/1.017432e-08 41/2195/0.343750/3.078878e-06
    3 5000 8/16/0.046875/6.327004e-06 58/300/0.734375/8.380619e-06 68/6691/16.625000/2.462019e-07 76/6599/16.578125/2.149830e-06
    3 10,000 8/16/0.125000/8.892321e-06 80/460/2.968750/8.076240e-06 96/10516/62.625000/1.147079e-09 106/10419/62.609375/1.607070e-06
3 50,000 10/22/0.640625/3.745708e-06 166/1152/15.500000/6.854968e-06 211/28900/385.796875/5.328693e-08 222/28783/388.671875/2.820361e-06
4 1000 21/126/0.046875/6.619060e-06 76/484/0.125000/9.029869e-06 4107/375958/66.140625/5.376997e-06 263/25956/4.500000/9.461107e-06
4 5000 21/126/0.328125/6.579547e-06 101/752/1.843750/6.951950e-06 2534/236972/613.250000/9.063035e-06 286/32183/82.093750/9.754050e-06
    4 10,000 21/126/0.875000/6.570692e-06 121/983/6.312500/8.222816e-06 2505/237950/1454.921875/6.132250e-06 319/38772/230.062500/9.991339e-06
    4 50,000 21/127/1.796875/8.942224e-06 212/2078/27.140625/8.051826e-06 3857/378376/5362.843750/7.595834e-06 441/67359/953.593750/9.805361e-06
5 1000 10/23/0.015625/1.345156e-06 20/71/0.015625/7.757674e-06 69/1450/0.250000/9.446592e-06 27/1188/0.218750/6.659137e-06
    5 5000 10/23/0.140625/2.987337e-06 37/176/0.437500/6.260531e-06 81/3869/9.406250/8.030132e-06 48/3627/8.281250/1.916037e-06
    5 10,000 10/23/0.343750/4.221010e-06 48/252/1.828125/6.741102e-06 91/5980/36.359375/8.876877e-06 64/5747/32.640625/2.819940e-06
    5 50,000 11/26/0.421875/1.273572e-06 100/668/8.031250/7.429674e-06 139/16176/210.640625/9.126806e-06 131/15985/206.812500/9.985455e-06
    6 1000 5713/5714/1.671875/9.994632e-06 6856/6857/2.031250/9.996719e-06 10285/10286/2.921875/9.999930e-06 10295/10296/2.781250/9.999854e-06
    6 5000 17431/17432/91.203125/9.999935e-06 20918/20919/96.125000/9.999824e-06 31379/31380/151.609375/9.999607e-06 31389/31390/129.140625/9.999784e-06
    6 10,000 27627/27628/324.546875/9.999054e-06 33153/33154/369.937500/9.999230e-06 49731/49732/579.281250/9.999510e-06 49741/49742/613.953125/9.999691e-06
    6 50,000 72200/72201/1871.421875/9.999723e-06 86640/86641/1789.953125/9.999971e-06 129963/129964/3045.625000/9.999862e-06 129974/129975/3245.187500/9.999835e-06
    7 1000 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00
    7 5000 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00 1/2/0.000000/0.000000e+00
    7 10,000 1/2/0.000000/0.000000e+00 1/2/0.062500/0.000000e+00 1/2/0.031250/0.000000e+00 1/2/0.000000/0.000000e+00
    7 50,000 1/2/0.031250/0.000000e+00 1/2/0.015625/0.000000e+00 1/2/0.031250/0.000000e+00 1/2/0.046875/0.000000e+00
    8 1000 7517/52847/9.609375/9.705240e-06 6256/51420/8.187500/9.885174e-06 3867/637967/91.906250/8.900084e-06 3993/663519/92.890625/9.960198e-06
    8 5000 7379/51894/130.171875/9.832652e-06 6535/56778/130.609375/9.978493e-06 4801/844439/1665.171875/8.187114e-06 4368/814256/1623.625000/9.935312e-06
    8 10,000 8284/58431/381.203125/9.808342e-06 6762/61024/383.843750/9.976333e-06 4595/896457/5102.390625/8.054508e-06 4581/921975/5151.968750/9.842915e-06
8 50,000 7311/51836/708.437500/9.997653e-06 7554/78279/862.500000/9.988038e-06 10463/1986698/21108.781250/9.966481e-06 5525/1401847/15546.796875/9.902653e-06
9 1000 1315/8569/1.500000/9.980071e-06 1724/11004/2.187500/9.995043e-06 514/65030/9.437500/9.645362e-06 332/43592/6.500000/9.394169e-06
    9 5000 1327/8684/21.875000/9.904454e-06 1465/9883/24.593750/9.959200e-06 574/84298/172.421875/9.690180e-06 421/66124/138.812500/9.520384e-06
    9 10,000 1335/8771/57.531250/9.914197e-06 1537/10704/71.359375/9.886892e-06 2870/356611/2051.125000/9.956931e-06 493/85160/479.187500/9.995629e-06
    9 50,000 1426/9690/137.171875/9.912140e-06 1792/14088/163.593750/9.996782e-06 988/194924/2254.156250/9.358423e-06 884/182395/2254.734375/9.294612e-06
    10 1000 5788/46645/7.625000/9.518479e-06 345/3005/0.671875/8.406790e-06 422/70159/11.015625/4.510865e-06 425/70205/11.125000/9.836083e-06
    10 5000 5669/45772/118.531250/9.521868e-06 608/5639/14.140625/4.101871e-06 597/110926/237.421875/6.498110e-06 500/97170/209.906250/8.589743e-06
    10 10,000 5661/45800/298.265625/9.734609e-06 686/6678/41.671875/8.466836e-06 671/135838/784.437500/8.765321e-06 596/124310/694.171875/9.913071e-06
    10 50,000 5472/44114/616.703125/9.804676e-06 892/10433/118.531250/6.674902e-06 1030/255267/3083.531250/8.165186e-06 1065/260264/3294.484375/9.877143e-06
    11 1000 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00
    11 5000 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.062500/0.000000e+00 0/1/0.000000/0.000000e+00
    11 10,000 0/1/0.000000/0.000000e+00 0/1/0.062500/0.000000e+00 0/1/0.000000/0.000000e+00 0/1/0.000000/0.000000e+00
    11 50,000 0/1/0.015625/0.000000e+00 0/1/0.015625/0.000000e+00 0/1/0.031250/0.000000e+00 0/1/0.031250/0.000000e+00


As can be seen from Figure 1, Algorithm 1 is the most efficient in terms of the number of iterations, solving $ 50\% $ of the problems with the fewest iterations. From Figure 2, it can be seen that the most efficient method in terms of the number of function evaluations is also Algorithm 1, which solves $ 75\% $ of the problems with the fewest function evaluations. Figure 3 shows that Algorithm 1 is likewise the most efficient algorithm in terms of CPU time.

It can be seen that Algorithm 1 performed very well in dealing with large-scale monotone equations. In conclusion, the numerical investigations indicate that the suggested algorithm constitutes an effective tool for solving systems of convex constrained monotone equations.

Numerical results of the proposed Algorithm 2, the modified HS method [29], the three-term PRP method [31], and the conventional PRP method [21,32] are reported next. The fifty problems for the experiment were taken from [35], and the initial point for each problem is given in Table 2. All algorithms use the weak Wolfe-Powell line-search technique. The termination conditions for each test problem are

$ \frac{|f_k-f_{k+1}|}{|f_k|}\leq10^{-5}, $

or

$ \|\digamma_k\|\leq10^{-6}. $

The parameters in Algorithm 2 are chosen as $ \zeta = 0.5 $, $ \xi = 0.95 $, $ \nu_1 = 2 $, $ \nu_2 = 30,000 $, and $ \nu_3 = 1000 $. For each problem, we considered three dimensions of 30,000, 12,000, and 300,000. The corresponding performance curves are shown in Figures 4-6.

    Table 2.  The test problem.
    NO Problem $ x_0 $ NO Problem $ x_0 $
    1 Extended Freudenstein $ [0.5, -2, ..., 0.5,-2] $ 26 Extended Tridiagonal-2 Function $ [1, 1, ...,1] $
    2 Extended Trigonometric Function $ [0.2, 0.2, ...,0.2] $ 27 ARWHEAD (CUTE) $ [1, 1, ...,1] $
    3 Extended Beale Function U63 (MatrixRom) $ [1, 0.8, ..., 1, 0.8] $ 28 NONDIA (Shanno-78) (CUTE) $ [-1, -1, ...,-1] $
    4 Extended Penalty Function $ [1, 2, 3, ...,n] $ 29 EG2 (CUTE) $ [1, 1, ...,1] $
    5 Raydan 1 Function $ [1, 1, ...,1] $ 30 DIXMAANB (CUTE) $ [2, 2, ...,2] $
    6 Raydan 2 Function $ [1, 1, ...,1] $ 31 DIXMAANC (CUTE) $ [2, 2, ...,2] $
    7 Diagonal 1 Function $ [1/n, 1/n, ...,1/n] $ 32 DIXMAANE (CUTE) $ [2, 2, ...,2] $
    8 Diagonal 2 Function $ [1/1, 1/2, ...,1/n] $ 33 Broyden Tridiagonal $ [-1, -1, ...,-1] $
    9 Diagonal 3 Function $ [1, 1, ...,1] $ 34 EDENSCH Function (CUTE) $ [0, 0, ...,0] $
    10 Hager Function $ [1, 1, ...,1] $ 35 VARDIM Function (CUTE) $ [1-1/n, 1-2/n, ...,1-n/n] $
    11 Generalized Tridiagonal-1 Function $ [2, 2, ...,2] $ 36 LIARWHD (CUTE) $ [4, 4, ...,4] $
    12 Extended Three Exponential Terms $ [0.1, 0.1, ...,0.1] $ 37 DIAGONAL 6 $ [1, 1, ...,1] $
    13 Generalized Tridiagonal-2 $ [-1, -1, ...,-1, -1] $ 38 DIXMAANF (CUTE) $ [2, 2, ...,2] $
    14 Diagonal 4 Function $ [1, 1, ...,1, 1] $ 39 DIXMAANG (CUTE) $ [2, 2, ...,2] $
    15 Diagonal 5 Function (MatrixRom) $ [1.1, 1.1, ...,1.1] $ 40 DIXMAANH (CUTE) $ [2, 2, ...,2] $
    16 Extended Himmelblau Function $ [1, 1, ...,1] $ 41 DIXMAANI (CUTE) $ [2, 2, ...,2] $
    17 Generalized PSC1 Function $ [3, 0.1, ...,3, 0.1] $ 42 DIXMAANJ (CUTE) $ [2, 2, ...,2] $
    18 Extended PSC1 Function $ [3, 0.1, ...,3, 0.1] $ 43 DIXMAANK (CUTE) $ [2, 2, ...,2] $
    19 Extended Block Diagonal BD1 Function $ [0.1, 0.1, ...,0.1] $ 44 DIXMAANL (CUTE) $ [2, 2, ...,2] $
    20 Extended Cliff $ [0, -1, ...,0, -1] $ 45 DIXMAAND (CUTE) $ [2, 2, ...,2] $
    21 Extended Wood Function $ [-3, -1, -3, -1, ...] $ 46 ENGVAL1 (CUTE) $ [2, 2, ...,2] $
    22 Extended Quadratic Penalty QP1 Function $ [1, 1, ...,1] $ 47 FLETCHCR (CUTE) $ [0, 0, ...,0] $
    23 Extended Quadratic Penalty QP2 Function $ [1, 1, ......,1] $ 48 COSINE (CUTE) $ [1, 1, ...,1] $
    24 A Quadratic Function QF2 $ [0.5, 0.5, ...,0.5] $ 49 Extended DENSCHNB (CUTE) $ [1, 1, ...,1] $
    25 Extended EP1 Function $ [1.5, 1.5, ...,1.5] $ 50 Extended DENSCHNF (CUTE) $ [2, 0, 2, 0, ...,2, 0] $

    Figure 4.  Performance profile based on NI (number of iterations).
    Figure 5.  Performance profile based on NFG (the sum of the total iterations and gradient iterations).
    Figure 6.  Performance profile based on CPU time (CPU seconds consumed).

We next apply the algorithms to image restoration problems. It is well known that image restoration techniques play a pivotal role in bioengineering, medicine, and other fields of science and engineering. A common image degradation model is defined as follows:

$ \min\limits_{x}\breve{v}(x) = \frac{1}{2}\|\upsilon-\varXi x\|_2^2+\iota\|x\|_1, $ (4.1)

where $ \upsilon\in R^{m_1} $ is the observation, $ \varXi $ is a linear operator of order $ m_1\times m_2 $, and $ \iota $ is a constant greater than zero.

The regularized model (4.1) has attracted much attention, and several scholars have proposed various iterative methods to deal with it [36,37,38].

Figueiredo et al. [39] developed a gradient projection method by reformulating (4.1) as a constrained quadratic program. The reformulation is as follows: split the vector $ x $ into two parts, i.e., $ x = s-t $, where $ s_i = (x_i)_+ $, $ t_i = (-x_i)_+ $, and $ (\bullet)_+ = \max\{0, \bullet\} $. Then (4.1) is transformed into

$ \min\limits_{\kappa\geq0}\frac{1}{2}\kappa^T\Phi\kappa+\gamma^T\kappa, $ (4.2)
$ \kappa = [s, t]^T,\quad \gamma = \iota e_{2m_2}+\begin{bmatrix}-\varXi^T\upsilon\\ \varXi^T\upsilon\end{bmatrix},\quad \Phi = \begin{bmatrix}\varXi^T\varXi&-\varXi^T\varXi\\ -\varXi^T\varXi&\varXi^T\varXi\end{bmatrix}. $

Further, Xiao et al. [40] showed that (4.2) is equivalent to the following system of nonlinear equations:

$ H(\kappa) = \min\{\kappa, \Phi\kappa+\gamma\} = 0. $ (4.3)

    Pang [41] proved that $ H $ satisfies (2.9), while Xiao [40] proved that it satisfies (2.2) too.
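The following sketch (Python/NumPy; the small random $ \varXi $ and $ \upsilon $ are toy placeholders for a real blurring operator and observed image) builds the quantities of (4.2) and evaluates the residual map $ H $ of (4.3), which can then be handed to the projection solver:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, iota = 20, 50, 0.01              # toy sizes; iota is the l1 weight in (4.1)
Xi = rng.standard_normal((m1, m2))       # stand-in for the linear operator
upsilon = rng.standard_normal(m1)        # stand-in for the observation

XtX = Xi.T @ Xi
Xty = Xi.T @ upsilon
Phi = np.block([[XtX, -XtX], [-XtX, XtX]])                    # Phi in (4.2)
gamma = iota * np.ones(2 * m2) + np.concatenate([-Xty, Xty])  # gamma in (4.2)

def H(kappa):
    """Residual map of (4.3): H(kappa) = min(kappa, Phi @ kappa + gamma)."""
    return np.minimum(kappa, Phi @ kappa + gamma)

# Recover x from kappa = [s, t]^T via x = s - t, e.g.:
# kappa_star = ehscg_solve(H, np.zeros(2 * m2)); x = kappa_star[:m2] - kappa_star[m2:]
```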

The stopping criteria of the codes are

$ \frac{|\breve{v}(u_{k+1})-\breve{v}(u_k)|}{|\breve{v}(u_k)|}\leq10^{-3}, $

or

$ \frac{\|u_{k+1}-u_k\|}{\|u_k\|}\leq10^{-3}. $

In the experiments, "Man", "Baboon", "colorcheckertestimage", and "car" are the tested images. Algorithm 1, the modified HS [29], the three-term PRP [31], and NPRP2 [42] are compared on images corrupted with $ 30\% $ and $ 70\% $ noise. The parameters in Algorithm 1 are chosen as $ \varpi = 0.01 $, $ \vartheta = 0.8 $, $ \tilde{q} = 1 $, $ \dot{t} = 0.5 $, $ \nu_1 = 2.2 $, $ \nu_2 = 3000 $, and $ \nu_3 = 1200 $. The time taken by each algorithm is summarized in Tables 3 and 4. From Tables 3 and 4, it can be seen that the CPU time for both grayscale and color images processed with Algorithm 1 is less than that of the other algorithms, which shows that the algorithm proposed in this paper is more advantageous in dealing with grayscale and color image restoration problems.

    Table 3.  CPU time results for different algorithms for gray images.
Image Noise Algorithm 1 Modified HS Three-term PRP NPRP2
    Man (1024 $ \times $ 1024) 0.3 6.6875 24.28125 16.375 33.303140
    0.7 13.09375 32.3125 42.3125 95.854068
    Baboon (512 $ \times $ 512) 0.3 1.6875 6.140625 3.90625 7.276346
    0.7 2.765625 9.28125 12.14062 22.251381

    Table 4.  CPU time results for different algorithms for color images.
Image Noise Algorithm 1 Modified HS Three-term PRP NPRP2
    colorcheckertestimage (1541 $ \times $ 1024) 0.3 9.077376 12.136426 17.377901 16.661125
    0.7 14.084278 37.827917 37.333502 30.093593
    car (3504 $ \times $ 2336) 0.3 41.74019 62.762366 96.119989 91.296092
    0.7 55.1054 160.855 219.161032 163.100240


Figures 7 and 8 show that Algorithm 1 performs well in tackling gray image restoration. To visualize the outcomes on color images, Figures 9 and 10 are given; the results indicate that the algorithm of this paper is competitive in dealing with color image restoration in terms of CPU time. In image restoration experiments, the PSNR and SSIM values are usually used to estimate the quality of the processed image. The higher the PSNR value, the less distorted the image is. The higher the SSIM value, the better the image structure is preserved; when two images are identical, the SSIM value is 1. As can be seen from Figures 9 and 10, the SSIM value obtained by Algorithm 1 is relatively small compared to the traditional algorithms, and the image restoration quality is not as good; this aspect will be investigated in the future.

    Figure 7.  From front to back for each row: the images with $ 30\% $ image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 8.  From front to back for each row: the images with $ 70\% $ image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 9.  From front to back for each row: the images with $ 30\% $ image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.
    Figure 10.  From front to back for each row: the images with $ 70\% $ image noise added, the images processed by Algorithm 1, the modified HS method, the three-term PRP method, and the NPRP2 method.

We now embed the improved HS method into a stochastic large subspace algorithm [43] by replacing its search direction with (2.8), thus building improved HS algorithms based on the stochastic large subspace and variance-reduced SCGN frameworks (called mSCGN and mSDSCGN, respectively). Next, we test them against the stochastic gradient descent algorithm (SGD) [44] and the stochastic variance reduced gradient method (SVRG) [45] on the following two learning models:

    (ⅰ) Nonconvex SVM model with a sigmoid loss function:

$ \min\limits_{x\in\Re^d}\frac{1}{t}\sum\limits_{s = 0}^{t}f_s(x)+\lambda\|x\|^2, $ (4.4)

where $ f_s(x) = 1-\tanh(\omega_s\langle x, \varpi_s\rangle) $, and $ \varpi_s \in \Re^d $ and $ \omega_s\in\{-1, 1\} $ signify the feature vectors and the corresponding labels, respectively. Support vector machine models have been implemented in many areas of pattern recognition, currently focusing on information categorization, encompassing text and images.

    (ⅱ) Nonconvex regularized ERM model with a nonconvex sigmoid loss function:

$ \min\limits_{x\in\Re^d}\frac{1}{t}\sum\limits_{s = 0}^{t}f_s(x)+\frac{\lambda}{2}\|x\|^2, $ (4.5)

where $ f_s(x) = 1/\{1+\exp(\epsilon_s\varepsilon_s^Tx)\} $, $ \epsilon_s \in \{-1, 1\} $ is the target value of the $ s $-th sample, $ \varepsilon_s \in \Re^d $ is the feature vector of the $ s $-th sample, and $ \lambda > 0 $ is the regularization parameter. Binary classification models are instrumental in practical problems. While a non-convex ERM model concentrates on minimizing the classification error, the sigmoid-type loss function has also been shown to be usually superior to alternative loss functions. Accordingly, nonconvex ERM models with sigmoid-type loss functions are valuable for mathematical modeling.
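As a sketch of how these two objectives are evaluated (Python/NumPy; the variable names are ours, with the feature vectors stored row-wise in `A` and the labels in `b`):

```python
import numpy as np

def svm_sigmoid_loss(x, A, b, lam=1e-5):
    """Model (4.4): mean of 1 - tanh(b_s <x, a_s>) plus lam * ||x||^2."""
    margins = b * (A @ x)                # b_s <x, a_s> for every sample s
    return np.mean(1.0 - np.tanh(margins)) + lam * (x @ x)

def erm_sigmoid_loss(x, A, b, lam=1e-5):
    """Model (4.5): mean of 1 / (1 + exp(b_s a_s^T x)) plus (lam/2) * ||x||^2."""
    margins = b * (A @ x)
    return np.mean(1.0 / (1.0 + np.exp(margins))) + 0.5 * lam * (x @ x)
```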

The parameters in mSCGN and mSDSCGN are chosen as $ \nu_1 = 3.8 $, $ \nu_2 = 300 $, and $ \nu_3 = 100 $. The datasets come from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. All algorithms were executed on three large datasets, with the specific details of the datasets shown in Table 5. In all experiments, the regularization parameter was chosen to be $ \lambda = 10^{-5} $ and the same batch size $ m $ was used in each iteration. The gradient $ \nabla f(x) $ and the Hessian matrix $ \nabla^2 f(x) $ are estimated by:

$ \nabla f(x)_s = \frac{f(x+\sigma e_s)-f(x-\sigma e_s)}{2\sigma}, $
$ \nabla^2 f(x)_{s, r} = \frac{f(x+\sigma e_s+\sigma e_r)-f(x+\sigma e_s)-f(x+\sigma e_r)+f(x)}{\sigma^2}, $

where $ \sigma = 10^{-4} $ and $ e_s $ is the $ s $-th unit vector. Let the regularization factor $ \lambda $ of the learning model be $ 10^{-4} $; the convergence results of the algorithms are shown in Figures 11 and 12. In this experiment, the maximum number of inner iterations is 30. In principle, the lower the curve, the better the convergence of the corresponding algorithm.
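Returning to the gradient and Hessian estimates above, a direct transcription of the two difference formulas reads as follows (Python/NumPy; the loops are kept explicit to mirror the formulas, and the objective is a placeholder):

```python
import numpy as np

def fd_gradient(f, x, sigma=1e-4):
    """Central-difference estimate of (grad f(x))_s, one coordinate at a time."""
    d = x.size
    grad = np.zeros(d)
    for s in range(d):
        e_s = np.zeros(d); e_s[s] = 1.0
        grad[s] = (f(x + sigma * e_s) - f(x - sigma * e_s)) / (2.0 * sigma)
    return grad

def fd_hessian(f, x, sigma=1e-4):
    """Forward-difference estimate of (hess f(x))_{s,r} per the formula above."""
    d = x.size
    hess = np.zeros((d, d))
    fx = f(x)
    for s in range(d):
        e_s = np.zeros(d); e_s[s] = 1.0
        for r in range(d):
            e_r = np.zeros(d); e_r[r] = 1.0
            hess[s, r] = (f(x + sigma * (e_s + e_r)) - f(x + sigma * e_s)
                          - f(x + sigma * e_r) + fx) / sigma**2
    return hess
```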

    Table 5.  Details of data sets.
    Data set Training samples (t) Dimensions (d)
    Adult 32,562 123
    IJCNN 49,990 22
    Covtype 581,012 54

Figure 11.  Numerical performance of SGD and mSCGN for solving models 1 and 2 on three datasets.
Figure 12.  Numerical performance of SVRG and mSDSCGN for solving models 1 and 2 on three datasets.

The behavior of the different algorithms is revealed by plotting the curve of function values over the iterations. Figure 11 illustrates the behavior of mSCGN and SGD when solving (4.4) and (4.5) with alternative decreasing step sizes. As verified by Figure 11, both algorithms managed to solve the models: the function values decline rapidly at first and then gradually level off. Notice that when the mSCGN algorithm and the SGD algorithm share the same step size on the datasets, the former decreases the function value faster. This reveals that the algorithm designed herein performs well owing to its sufficient descent and trust-region properties. Observe that for mSCGN, $ \alpha_k = 10k^{-0.9} $ converges more quickly; our algorithm exhibits superiority at appropriate step sizes, and the experimental results are better at larger decreasing step sizes.

Figure 12 illustrates the behavior of mSDSCGN and SVRG for solving (4.4) and (4.5) at different constant step sizes. Analyzing Figure 12, it is obvious that even when the SVRG algorithm uses its best step size, it does not perform as well as mSDSCGN on the three datasets. We observed that, of those step sizes, 0.04 is optimally suited to mSDSCGN. Overall, our algorithm shows more promise and efficiency than the others. Based on the analysis of the results, one can conclude that our suggested method is useful for machine learning.

We proposed a modified HS conjugate gradient method with the following characteristics: 1) it fulfills the sufficient descent and trust-region properties; 2) its global convergence for non-convex functions can be easily established; 3) experiments on image restoration, machine learning, and nonlinear monotone equations were conducted, and the outcomes indicate that the algorithms have promising numerical performance in tackling these problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the Guangxi Science and Technology Base and Talent Project (Grant No. AD22080047), the National Natural Science Foundation of Guangxi Province (Grant No. 2023GXNFSBA026063), the major talent project of Guangxi (GXR-6BG242404), the Bagui Scholars Program of Guangxi, and the 2024 Graduate Innovative Programs Establishment (Grant No. ZX01030031124006). We are sincerely grateful to the editors and reviewers for their valuable comments, which have further enhanced this paper.

    The authors declare there are no conflicts of interest.
