Research article

Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems

  • This paper presents and examines a newly improved iterative technique for solving the equilibrium problem of a pseudomonotone operator and the fixed point problem of a nonexpansive mapping within a real Hilbert space framework. The technique relies on two modified mildly inertial methods and the subgradient extragradient approach. In addition, it can be viewed as an advancement over the previously known inertial subgradient extragradient approach. Based on common assumptions, the algorithm's weak convergence is established. Finally, in order to confirm the efficiency and benefit of the proposed algorithm, we present a few numerical experiments.

    Citation: Francis Akutsah, Akindele Adebayo Mebawondu, Austine Efut Ofem, Reny George, Hossam A. Nabwey, Ojen Kumar Narain. Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems[J]. AIMS Mathematics, 2024, 9(7): 17276-17290. doi: 10.3934/math.2024839




In [22], Muu and Oettli considered the equilibrium problem (EP) as a generalization of several problems in nonlinear analysis; these problems include convex minimization, variational inequalities, Nash-equilibrium problems, fixed point problems, and saddle point problems [8,22]. The EP has applications in many mathematical models from various fields of applied science and engineering, such as economics, physics, image restoration, finance, ecology, network elasticity, transportation, and optimization. Let C be a nonempty, convex, and closed subset of a real Hilbert space H and g:C\times C\to \mathbb{R} be a bifunction such that g(x, x) = 0 for all x\in C . The EP, as defined by Muu and Oettli [22], is formulated as follows: Find x^*\in C such that

g(x^*, w)\geq 0, \quad \forall w\in C. (1.1)

The solution set of the EP (1.1) is denoted by EP(g) . Since many real-life problems are modeled with a pseudomonotone bifunction, several authors have considered solving EP (1.1) with a pseudomonotone bifunction g ; see, for example, [15,16,17] and the references therein. On the other hand, the theory of fixed points is an important concept in nonlinear analysis. Many problems in applied sciences and engineering can be formulated as fixed point problems. A point x\in C is called a fixed point of a self mapping T:C\to C if Tx = x . In this article, the set of all fixed points of T is denoted by F(T) = \{x\in C: Tx = x\} . Finding common solutions of the EP and fixed point problems (FPP) is important because the constraints of some mathematical models can be expressed as an EP and an FPP. Such models arise in several practical problems, for instance network resource allocation, signal processing, and image restoration [17].
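For a concrete instance (our own illustration, not taken from the paper), consider C = [-1, 1]^2\subset\mathbb{R}^2 and g(x, w) = \langle Ax, w-x\rangle with A positive definite; then x^* = 0 solves (1.1). The following sketch checks this at sampled points of C :

```python
import numpy as np

# Toy EP instance (illustration only): C = [-1, 1]^2 and
# g(x, w) = <A x, w - x> with A positive definite, so x* = 0 solves (1.1).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
g = lambda x, w: (A @ x) @ (w - x)

x_star = np.zeros(2)
rng = np.random.default_rng(0)
ws = rng.uniform(-1.0, 1.0, size=(1000, 2))       # sample points of C
assert all(g(x_star, w) >= 0 for w in ws)          # x* satisfies (1.1)
assert g(np.array([1.0, 0.0]), np.array([-1.0, 0.0])) < 0  # (1,0) does not
```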

In [32], Tada and Takahashi introduced the following hybrid method for approximating a common solution of the monotone EP and the FPP of nonexpansive mappings in real Hilbert spaces:

\begin{cases} x_0\in C_0 = Q_0 = C,\\ z_m\in C\ \text{such that}\ g(z_m, w)+\frac{1}{\lambda_m}\langle w-z_m, z_m-x_m\rangle\geq 0,\ \forall w\in C,\\ w_m = \alpha_m x_m+(1-\alpha_m)Tz_m,\\ C_m = \{u\in C:\|w_m-u\|\leq\|x_m-u\|\},\\ Q_m = \{u\in C:\langle x_0-x_m, u-x_m\rangle\leq 0\},\\ x_{m+1} = P_{C_m\cap Q_m}x_0. \end{cases} (1.2)

It is worth noting that method (1.2) requires solving a strongly monotone generalized EP for the point z_m : Find z_m\in C such that

g(z_m, w)+\frac{1}{\lambda_m}\langle w-z_m, z_m-x_m\rangle\geq 0, \quad \forall w\in C. (1.3)

It is difficult to approximate the solutions of EP (1.1) when the bifunction assumption is weakened from monotone to pseudomonotone [17]. In 2008, Quoc et al. [28] considered a new method known as the extragradient method (EM). Their results extend and generalize the results of Korpelevich [18] and Antipin [2] to EPs involving a pseudomonotone bifunction. In 2013, Anh [1] considered an iterative method for finding the common solution of an EP with a pseudomonotone bifunction and the FPP of nonexpansive mappings. The major drawback of the methods in [28] and [1] is that one needs to solve two strongly convex optimization problems over the feasible set C in each iteration. To improve the extragradient method, Hieu [15] followed the results of Censor et al. [9,10] and introduced a Halpern-type subgradient extragradient method, which replaces the feasible set by a half-space in the second minimization problem. However, the Halpern-type method depends on the Lipschitz-type constants of the bifunction, and these constants are difficult to determine. Recently, Yang and Liu [33] introduced a modified Halpern-type method which does not require prior knowledge of the Lipschitz-type constants; their method has a non-increasing step-size. In real Hilbert spaces, the authors proved a strong convergence theorem for approximating the common solution of a pseudomonotone EP and the FPP of nonexpansive mappings.

To speed up the process of solving the smooth convex minimization problem, Polyak originally presented and examined the idea of inertial extrapolation in [27] in 1964. Since then, researchers have employed this technique to accelerate the convergence of many iterative processes, and the inertial extrapolation approach has been refined, extended, and generalized by numerous authors; see [3,4,5,6,7] and the references therein. Relaxation techniques have also proven effective for improving the rate of convergence in this field of study; see [23,24,25]. It is well known that when inertial and relaxation techniques are combined, the resulting methods converge faster than when either approach is used alone; see [19,20,26]. Very recently, Ceng et al. [11] introduced and studied a mildly inertial algorithm with a linesearch process for finding a common solution of the variational inequality problem and the common fixed-point problem of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings by using a subgradient approach. For more details on the mildly inertial concept, see [11,31] and the references therein.
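The effect of inertial extrapolation is easy to observe on a toy problem. The sketch below (our own illustration, with the classical tuned step sizes for eigenvalues in [m, L] ) compares plain gradient descent with Polyak's heavy-ball method on an ill-conditioned quadratic:

```python
import numpy as np

# Plain gradient descent vs. Polyak's heavy-ball (inertial) method on the
# ill-conditioned quadratic f(x) = 0.5 x^T Q x (illustration only; the
# step sizes are the classical tuned choices for eigenvalues in [m, L]).
Q = np.diag([1.0, 100.0])
m, L = 1.0, 100.0
grad = lambda v: Q @ v

def run(inertial, tol=1e-6, max_iter=5000):
    x_prev = x = np.array([1.0, 1.0])
    if inertial:
        alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
        beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2
    else:
        alpha, beta = 1.0 / L, 0.0         # no inertia: plain gradient step
    for k in range(max_iter):
        if np.linalg.norm(x) < tol:
            return k
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return max_iter

it_gd, it_hb = run(False), run(True)
assert it_hb < it_gd          # inertia sharply reduces the iteration count
```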

In view of the above results, the following natural question arises: Is it feasible to introduce a new step-size rule in conjunction with a new double inertial extrapolation to solve a pseudomonotone equilibrium problem and a fixed point problem in the framework of a Hilbert space?

Motivated by the above results, in this article we propose a new self-adaptive inertial subgradient extragradient method which does not rely on prior knowledge of the Lipschitz-type constants of the bifunction. The suggested method is used to approximate the common solution of a pseudomonotone EP and the FPP of nonexpansive mappings in a real Hilbert space. Our method includes two modified mildly inertial terms which improve its speed of convergence. Under some mild conditions on the control parameters, we prove the weak convergence of the suggested method. Furthermore, we present some numerical examples to demonstrate the computational advantage of our method over some well-known methods in the literature.

The remaining part of this article is arranged as follows: In Section 2, we give some useful results and definitions for this study. In Section 3, we present the suggested method and the conditions needed for our main result. In Section 4, we establish the weak convergence results of the suggested method. In Section 5, we present numerical experiments to show the efficiency of our method, and in Section 6, we give the conclusion of our study.

In this section, we begin by recalling some known and useful results which are needed in the sequel. Let H be a real Hilbert space. We denote strong and weak convergence by " \to " and " \rightharpoonup ", respectively. For any x, y, z\in H and \alpha, \beta, \gamma\in[0, 1] , it is well-known that

\|x-y\|^2 = \|x\|^2-2\langle x, y\rangle+\|y\|^2. (2.1)
\|x+y\|^2 = \|x\|^2+2\langle x, y\rangle+\|y\|^2. (2.2)
\|x+y\|^2\leq \|x\|^2+2\langle y, x+y\rangle. (2.3)
\|\alpha x+(1-\alpha)y\|^2 = \alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2. (2.4)
\|\alpha x+\beta y+\gamma z\|^2 = \alpha\|x\|^2+\beta\|y\|^2+\gamma\|z\|^2-\alpha\beta\|x-y\|^2-\alpha\gamma\|x-z\|^2-\beta\gamma\|y-z\|^2, \ \text{where}\ \alpha+\beta+\gamma = 1. (2.5)
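These identities can be verified numerically. The following sketch (our own illustration, with random vectors of \mathbb{R}^5 standing in for H ) checks (2.1)–(2.5):

```python
import numpy as np

# Numerical verification of identities (2.1)-(2.5) (illustration only,
# with random vectors of R^5 standing in for H).
rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 5))
a, b = 0.3, 0.5
c = 1.0 - a - b                      # (2.5) requires alpha + beta + gamma = 1
n2 = lambda v: float(np.dot(v, v))   # squared norm

assert np.isclose(n2(x - y), n2(x) - 2 * (x @ y) + n2(y))               # (2.1)
assert np.isclose(n2(x + y), n2(x) + 2 * (x @ y) + n2(y))               # (2.2)
assert n2(x + y) <= n2(x) + 2 * (y @ (x + y)) + 1e-12                   # (2.3)
assert np.isclose(n2(a * x + (1 - a) * y),
                  a * n2(x) + (1 - a) * n2(y) - a * (1 - a) * n2(x - y))  # (2.4)
assert np.isclose(n2(a * x + b * y + c * z),
                  a * n2(x) + b * n2(y) + c * n2(z)
                  - a * b * n2(x - y) - a * c * n2(x - z) - b * c * n2(y - z))  # (2.5)
```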

Definition 2.1. [8,12,14] Let g:C\times C\to \mathbb{R} be a mapping. Then g is said to be

    (a) Strongly monotone on C if there exists a constant τ>0 such that

g(x, y)+g(y, x)\leq -\tau\|x-y\|^2, (2.6)

for all x, y\in C ;

    (b) Monotone on C if

g(x, y)+g(y, x)\leq 0, (2.7)

for all x, y\in C ;

    (c) Strongly pseudomonotone on C if there exists a constant γ>0 such that

g(x, y)\geq 0 \Rightarrow g(y, x)\leq -\gamma\|x-y\|^2, \ \forall x, y\in C;

    (d) Pseudomonotone on C if

g(x, y)\geq 0 \Rightarrow g(y, x)\leq 0, \ \forall x, y\in C;

(e) Satisfying a Lipschitz-like condition if there exist two positive constants L_1, L_2 such that

g(x, y)+g(y, z)\geq g(x, z)-L_1\|x-y\|^2-L_2\|y-z\|^2, \ \forall x, y, z\in C. (2.8)
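To make Definition 2.1 concrete, the following sketch (our own illustration) samples the bifunction g(x, y) = (3-\|x\|)\langle x, y-x\rangle used later in Example 5.1: on the unit ball the pseudomonotonicity implication (d) holds at every sampled pair, while a pair of larger-norm points shows that monotonicity (b) can fail:

```python
import numpy as np

# Sampled check of Definition 2.1 (illustration only) for the bifunction
# g(x, y) = (3 - ||x||) <x, y - x> used later in Example 5.1.
g = lambda x, y: (3 - np.linalg.norm(x)) * np.dot(x, y - x)

rng = np.random.default_rng(2)
for _ in range(2000):
    # sample x, y in the unit ball C
    x, y = rng.standard_normal((2, 3))
    x /= max(1.0, np.linalg.norm(x))
    y /= max(1.0, np.linalg.norm(y))
    if g(x, y) >= 0:                 # (d): g(x,y) >= 0  =>  g(y,x) <= 0
        assert g(y, x) <= 1e-12
# (b) fails on larger sets: here g(x,y) + g(y,x) > 0, so g is not monotone.
x, y = np.array([2.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
assert g(x, y) + g(y, x) > 0
```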

Let C be a nonempty, closed, and convex subset of H . For any u\in H , there exists a unique point P_Cu\in C such that

\|u-P_Cu\|\leq \|u-y\|, \ \forall y\in C.

The operator P_C is called the metric projection of H onto C . It is well-known that P_C is a nonexpansive mapping and that P_C satisfies

\langle x-y, P_Cx-P_Cy\rangle\geq \|P_Cx-P_Cy\|^2, (2.9)

for all x, y\in H . Furthermore, P_C is characterized by the following property:

\|x-y\|^2\geq \|x-P_Cx\|^2+\|y-P_Cx\|^2

    and

\langle x-P_Cx, y-P_Cx\rangle\leq 0, (2.10)

for all x\in H and y\in C . A subset C of H is called proximal if for each x\in H , there exists y\in C such that

\|x-y\| = d(x, C).
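For simple sets, the metric projection has a closed form. The sketch below (our own illustration) implements the projections onto a ball and onto a half-space, the two projections used by the algorithm in Section 3, and checks the characterization (2.10) at sampled points:

```python
import numpy as np

# Closed-form metric projections onto a ball and a half-space
# (illustration only), with a numerical check of characterization (2.10).
def proj_ball(x, r=1.0):                  # C = {x : ||x|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(x, a, b):              # C = {x : <a, x> <= b}
    s = a @ x - b
    return x if s <= 0 else x - (s / (a @ a)) * a

rng = np.random.default_rng(3)
a, b = np.array([1.0, 2.0, 2.0]), 1.0
for _ in range(200):
    x = 3 * rng.standard_normal(3)
    # (2.10) for the unit ball: <x - P_C x, y - P_C x> <= 0 for all y in C
    p, y = proj_ball(x), proj_ball(3 * rng.standard_normal(3))
    assert (x - p) @ (y - p) <= 1e-9
    # (2.10) for the half-space
    p, y = proj_halfspace(x, a, b), proj_halfspace(3 * rng.standard_normal(3), a, b)
    assert (x - p) @ (y - p) <= 1e-9
```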

    The Hausdorff metric on H is as follows:

    H(A,B):=max

    for all subsets A and B of H.

    The normal cone N_C to C at a point x\in C is defined by N_C(x) = \{z\in H:\langle z, x-y\rangle \geq 0, \; \forall y\in C\}.

    Lemma 2.1. [13] Let \{\delta_n\} and \{\omega_n\} be sequences of positive real numbers, such that

    \delta_{n+1}\leq (1+\omega_n) \delta_{n} + \omega_n\delta_{n-1}.

    Then the following holds

    \delta_{n+1}\leq M\cdot \Pi_{i = 1}^{n}(1+2\omega_i),

    where M = \max\{\delta_1, \delta_2\}. Moreover, if \sum_{n = 1}^{\infty}\omega_n < \infty, then \{\delta_{n}\} is bounded.
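Lemma 2.1 can be sanity-checked numerically. The sketch below (our own illustration) iterates the recursion with equality, the worst case, for the summable weights \omega_n = 1/n^2 and confirms the product bound:

```python
import numpy as np

# Empirical check of Lemma 2.1 (illustration only): iterate the recursion
# with equality (the worst case) for the summable weights w_n = 1/n^2 and
# compare against the product bound M * prod_{i<=n} (1 + 2 w_i).
w = lambda n: 1.0 / n ** 2
d = [None, 1.0, 2.0]                   # d[1] = delta_1, d[2] = delta_2
M = max(d[1], d[2])
for n in range(2, 200):
    d.append((1 + w(n)) * d[n] + w(n) * d[n - 1])   # delta_{n+1}
for n in range(1, 199):
    bound = M * np.prod([1 + 2 * w(i) for i in range(1, n + 1)])
    assert d[n + 1] <= bound + 1e-9    # delta_{n+1} <= M * prod(1 + 2 w_i)
assert max(d[1:]) < 10 * M             # summable weights => bounded sequence
```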

    Lemma 2.2. [21] Let \{x_m\} be a sequence in H such that the following conditions hold:

(1) \lim_{m\to \infty}\|x_m -x\| exists for any x\in H;

(2) every weak cluster point of \{x_m\} lies in H .

    Then \{x_m\} converges weakly to some point in H.

    Lemma 2.3. [30] Let \{\delta_m\}, \{\beta_m\} and \{\omega_m\} be sequences of positive real numbers, such that

    \delta_{m+1}\leq (1+\omega_m) \delta_{m} + \beta_m.

If \sum_{m = 1}^{\infty}\omega_m < \infty and \sum_{m = 1}^{\infty}\beta_m < \infty , then \lim_{m\to\infty}\delta_m exists.

Lemma 2.4. [12] Let C be a convex subset of a real Hilbert space H and \phi:C \to \mathbb{R} be a subdifferentiable function on C. Then x^* is a solution to the convex problem: \mathit{\text{minimize}}\{\phi(x): x\in C\} if and only if 0\in \partial\phi(x^*) + N_C(x^*), where \partial\phi(x^*) denotes the subdifferential of \phi and N_C(x^*) is the normal cone of C at x^*.

Assumption 3.1. Condition A. Suppose that C is a nonempty, closed convex subset of a real Hilbert space H. Let g:C\times C\to \mathbb{R} satisfy the following conditions:

    (1) g is pseudomonotone on C, g(x, x) = 0 for all x\in H and satisfies the Lipschitz-type condition (2.8) on H with positive constants c_1, c_2;

    (2) g(\cdot, x) is sequentially weakly upper semi-continuous on C for each fixed x\in C;

(3) g(x, \cdot) is convex, lower semi-continuous and subdifferentiable on C for every fixed x\in C;

(4) \{T_m\} is a sequence of nonexpansive mappings;

    (5) S:C\to C is a nonexpansive mapping;

(6) The solution set \Omega = EP(g)\cap F(S)\neq \emptyset.

    Next, we present the proposed modified mildly inertial subgradient extragradient algorithm (Algorithm 3.1) and prove its weak convergence results.

    Algorithm 3.1. Modified mildly inertial subgradient extragradient
Step 0. Let x_0, x_1\in H, \tau_1 > 0, \lambda\in (1, \infty), \mu \in (0, \frac{2}{\lambda}), \{\gamma_m\}, \{\theta_m\}\subset(0, \infty) with \sum_{m = 1}^{\infty}\gamma_m < \infty and \sum_{m = 1}^{\infty}\theta_m < \infty, and \{\beta_m\}\subset(0, 1). For all m\in \mathbb{N}, given \{x_m\}, compute the sequence \{x_{m+1}\} as follows:
    Step 1. Compute
                                                                     w_m = x_m - \gamma_m(T_mx_{m-1}-T_mx_{m}),
                                                                     z_m = w_m - \theta_m(T_mx_{m-1}- T_mw_m),
                                                                 y_m = \mathit{\text{argmin}}_{u\in C}\{\tau_mg(z_m,u)+\frac{1}{2}\|u-z_m\|^2\},
If y_m = z_m, then stop: z_m is a solution. Otherwise, go to Step 2.
    Step 2. Select \psi_m \in \partial_2 g(z_m, \cdot)(y_m) and \mu_m\in N_C(y_m) such that
                                                                     \mu_m = z_m-\tau_m\psi_m-y_m,
    and construct the half-space
                                                                     C_m = \{w\in H:\langle z_m-\tau_m\psi_m-y_m,w-y_m\rangle\leq 0\},
                                                                 u_{m} = \mathit{\text{argmin}}_{u\in C_m}\{\tau_mg(y_m,u)+\frac{1}{2}\|u-z_m\|^2\},
                                                                             x_{m+1} = (1-\beta_m)u_m + \beta_mSu_m,
\begin{align} \tau_{m+1} = \begin{cases} \min\big\{\tau_{m}, \frac{\mu[\|y_m-z_m\|^2 +\|u_{m} -y_m\|^2]}{4\lambda (g(z_m , u_{m})-g(z_m,y_m)-g(y_m, u_{m}))}\big\}, &\mathit{\mbox{if}}\; g(z_m , u_{m})-g(z_m,y_m)-g(y_m, u_{m})>0,\\ \tau_{m}, &\mathit{\mbox{otherwise}}. \end{cases} \end{align}\;\;\; \left( {3.1} \right)

Remark 3.1. The above iterative method is quite different from the usual double inertial methods in the literature. The role of T_m is justified in our numerical experiments.
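For intuition, a finite-dimensional sketch of Algorithm 3.1 follows (our own illustration, anticipating the data of Example 5.1: C the unit ball, g(x, y) = (3-\|x\|)\langle x, y-x\rangle, T_mx = \frac{x}{5m}, Sx = \frac{x}{4} ; the starting step-size \tau_1 = 0.5 and the dimension are our choices, as the paper leaves them free). Because g(x, \cdot) is affine, both argmin subproblems reduce to projections onto the ball and onto the half-space C_m :

```python
import numpy as np

# Finite-dimensional sketch of Algorithm 3.1 (illustration only) on the
# data of Example 5.1: C the unit ball of R^3, g(x,y) = (3-||x||)<x, y-x>,
# T_m x = x/(5m), Sx = x/4, lambda = 2.4, mu = 0.7,
# gamma_m = theta_m = 1/(10m^2+1), beta_m = 1/(5m); tau_1 = 0.5 is our
# choice.  Since g(x, .) is affine, both argmins reduce to projections.
norm = np.linalg.norm

def proj_ball(v):                          # P_C, C = {x : ||x|| <= 1}
    n = norm(v)
    return v if n <= 1 else v / n

def proj_halfspace(v, a, b):               # projection onto {w : <a, w> <= b}
    s = a @ v - b
    return v if s <= 0 or a @ a == 0 else v - (s / (a @ a)) * a

def g(x, y):
    return (3 - norm(x)) * np.dot(x, y - x)

lam, mu, tau = 2.4, 0.7, 0.5
x_prev, x = np.array([1.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])
for m in range(1, 500):
    gamma = theta = 1.0 / (10 * m ** 2 + 1)
    beta = 1.0 / (5 * m)
    T = lambda v: v / (5 * m)              # T_m
    w = x - gamma * (T(x_prev) - T(x))     # first mildly inertial step
    z = w - theta * (T(x_prev) - T(w))     # second mildly inertial step
    psi = (3 - norm(z)) * z                # gradient of the affine g(z, .)
    y = proj_ball(z - tau * psi)           # y_m = argmin over C
    a = z - tau * psi - y                  # normal vector of half-space C_m
    u = proj_halfspace(z - tau * (3 - norm(y)) * y, a, a @ y)  # u_m over C_m
    x_prev, x = x, (1 - beta) * u + beta * (u / 4)             # S u = u/4
    den = g(z, u) - g(z, y) - g(y, u)
    if den > 0:                            # self-adaptive step-size rule (3.1)
        tau = min(tau, mu * (norm(y - z) ** 2 + norm(u - y) ** 2) / (4 * lam * den))
    if norm(x - x_prev) < 1e-8:
        break

assert norm(x) < 1e-4                      # iterates approach p = 0 in Omega
```

In this setting EP(g) = F(S) = \{0\} , so \Omega = \{0\} , and the iterates indeed approach the origin while the step-size \tau_m stabilizes at a positive value.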

Lemma 4.1. Let \{\tau_{m}\} be the sequence generated by Algorithm 3.1. Then

\lim\limits_{m\to \infty}\tau_{m}\geq \min\bigg\{\frac{\mu}{4\lambda\max\{c_1, c_2\}}, \tau_1\bigg\}.

    Proof. Using (3.1) in Algorithm 3.1 and (2.8), we have

\begin{align} \frac{\mu[\|y_m-z_m\|^2 +\|u_{m} -y_m\|^2]}{4\lambda (g(z_m , u_{m})-g(z_m,y_m)-g(y_m, u_{m}))}&\geq \frac{\mu[\|y_m-z_m\|^2 +\|u_{m} -y_m\|^2]}{4\lambda [c_1\|z_m -y_m\|^2 + c_2\|u_{m}-y_m\|^2] }\geq \frac{\mu}{4\lambda \max\{c_1, c_2\}}. \end{align} (4.1)

Thus, the sequence \{\tau_{m}\} is nonincreasing and bounded below by \min\big\{\frac{\mu}{4\lambda\max\{c_1, c_2\}}, \tau_1\big\} . It follows that its limit exists and

\lim\limits_{m\to \infty}\tau_{m}\geq \min\bigg\{\frac{\mu}{4\lambda \max\{c_1, c_2\}}, \tau_1\bigg\}.

Theorem 4.1. Let \{x_m\} be the sequence generated by Algorithm 3.1 such that Assumption 3.1 holds. Then \{x_m\} converges weakly to a point p\in \Omega.

    Proof. Let p\in \Omega. Using Algorithm 3.1, Lemma 2.4 and the definition of u_{m}, we have

\begin{align} 0\in \partial_2\bigg(\tau_m g(y_m, \cdot) + \frac{1}{2}\|z_m - \cdot\|^2\bigg) (u_{m}) + N_{C_m}(u_{m}). \end{align} (4.2)

Then there exist v_m \in \partial_2g(y_m, u_{m}) and \chi\in N_{C_m}(u_{m}) such that

\begin{align} \chi = z_m -\tau_mv_m -u_{m}, \end{align} (4.3)

Since \chi \in N_{C_m}(u_{m}) , it follows that

    \begin{align} \tau_m \langle v_m, x-u_{m} \rangle \geq \langle z_m- u_{m}, x-u_{m}\rangle\; \forall x\in C_m, \end{align} (4.4)

and since p\in \Omega\subset C\subset C_m , taking x = p in (4.4) gives

\begin{align} \tau_m \langle v_m, p-u_{m} \rangle \geq \langle z_m- u_{m}, p-u_{m}\rangle. \end{align} (4.5)

    In addition, since v_m\in \partial_2g(y_m, u_{m}), we obtain

    \begin{align} \tau_m(g(y_m, p)-g(y_m, u_{m}))\geq \tau_m\langle v_m, p-u_{m}\rangle. \end{align} (4.6)

    Combining (4.5) and (4.6), we have

    \begin{align} \tau_m(g(y_m, p)-g(y_m, u_{m}))\geq \langle z_m- u_{m}, p-u_{m}\rangle. \end{align} (4.7)

Since p\in \Omega , we have g(p, y_m)\geq 0 ; thus, by the pseudomonotonicity of g , we obtain g(y_m, p)\leq 0 . Hence, from (4.7) we get

    \begin{align} -2\tau_mg(y_m, u_{m})\geq 2\langle z_m- u_{m}, p-u_{m}\rangle. \end{align} (4.8)

    Using the fact that u_{m}\in C_{m}, we obtain

    \langle z_m- \tau_{m}\psi_m-y_m, u_{m}-y_m\rangle \leq 0,

    so,

    \tau_{m}\langle \psi_m, u_{m}-y_m\rangle \geq \langle z_m-y_m, u_{m}-y_m\rangle.

Since \psi_m\in \partial_2g(z_m, y_m) , the definition of the subdifferential gives

g(z_m, y)-g(z_m, y_m)\geq \langle \psi_m, y- y_m\rangle, \ \forall y\in H,

    it then follows that

    \begin{align} 2\tau_{m}(g(z_m, u_{m})- g(z_m, y_m))\geq 2\langle z_m-y_m, u_{m}-y_m\rangle. \end{align} (4.9)

    Using (4.8) and (4.9), we have

    \begin{align} &2\tau_{m}(g(z_m, u_{m})- g(z_m, y_m)-g(y_m, u_{m}))\\\geq&2\langle z_m-y_m, u_{m}-y_m\rangle + 2\langle z_m- u_{m}, p-u_{m}\rangle\\ \geq&\| z_m-y_m\|^2 + \|u_{m}-y_m\|^2 + \| u_{m}-p\|^2 -\| p-z_{m}\|^2, \end{align} (4.10)

    which implies that

    \begin{align} \| p-u_{m}\|^2 &\leq \| z_{m}-p\|^2- \| z_m-y_m\|^2 - \|u_{m}-y_m\|^2 +2\tau_{m}(g(z_m, u_{m})- g(z_m, y_m)-g(y_m, u_{m})). \end{align} (4.11)

    Thus, using (3.1), we have

    \begin{align} \| u_{m}-p\|^2 &\leq \| z_{m}-p\|^2- \| z_m-y_m\|^2 - \|u_{m}-y_m\|^2 +\frac{\mu\tau_{m}}{2\lambda\tau_{m+1}} [\|z_m-y_m\|^2+\|u_{m}-y_m\|^2]. \end{align} (4.12)

    Clearly, \lim_{m\to \infty}\frac{\mu\tau_{m}}{2\lambda\tau_{m+1}} = \frac{\mu}{2\lambda} < 1. Thus, we have

    \begin{align} \| u_{m}-p\|^2\leq \|z_m -p\|^2 -(1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2]. \end{align} (4.13)

    In addition, using Algorithm 3.1, we have

\begin{align} \|z_m-p\|& = \|(w_m-p) - \theta_m(T_mx_{m-1}- T_mw_m)\|\\ &\leq \|w_m-p\| + \theta_m\|T_mx_{m-1}- T_mw_m\|\\ &\leq \|w_m-p\| + \theta_m\|x_{m-1}- w_m\|\\ &\leq \| x_m - \gamma_m(T_mx_{m-1}-T_mx_{m})-p\| + \theta_m\|x_{m-1}- ( x_m - \gamma_m(T_mx_{m-1}-T_mx_{m}))\|\\ &\leq \|x_m-p\| + \gamma_m\|x_{m-1}-x_{m}\| + \theta_m\|x_{m-1}- x_m\| + \theta_m\gamma_m\|x_{m-1}-x_{m}\|\\ & = \|x_m-p\| + (\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\|, \end{align} (4.14)

where the second and fifth inequalities use the nonexpansiveness of T_m .

    Using Algorithm 3.1, (4.13), and the definition of S, we have

    \begin{align} \|x_{m+1}-p\|^2 & = \|(1-\beta_m)u_m + \beta_mSu_m-p\|^2\\ & = (1-\beta_m)\|u_m -p\|^2 + \beta_m\|Su_m -p\|^2 -\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\ &\leq \|u_m -p\|^2 -\beta_m(1-\beta_m)\|Su_m-u_m\|^2\\ & \leq \|z_m -p\|^2 -(1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2] - \beta_m (1-\beta_m)\|Su_m -u_m\|^2\\ & \leq \|z_m -p\|^2. \end{align} (4.15)

    From (4.14) and (4.15), we have

    \begin{align} \|x_{m+1}-p\|&\leq \|z_m -p\|\\ &\leq \|x_m-p\| + (\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\|\\ &\leq \|x_m-p\| + \omega_m[\|x_{m-1}-p\|+\|x_m-p\|]\\ & = (1+\omega_m) \|x_m-p\| + \omega_m\|x_{m-1}-p\|, \end{align} (4.16)

    where \omega_m = \gamma_m+\theta_m+\theta_m\gamma_m. Using Lemma 2.1, we have that \{x_m\} is bounded, consequently the sequences \{w_m\}, \{z_m\} and \{u_m\} are bounded. It then follows that \sum_{m = 1}^{\infty}\omega_m\|x_{m-1}-p\| < \infty. Using (4.16) and Lemma 2.3, we have that \lim_{m\to\infty}\|x_m-p\| exists. As such from (4.16), we obtain that

    \begin{align} \lim\limits_{m\to \infty}(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\| = 0. \end{align} (4.17)

    Thus, we obtain

    \begin{align} \lim\limits_{m\to \infty}\|x_{m-1}-x_{m}\| = 0. \end{align} (4.18)

    From (4.14), we have that

    \begin{align} &\|z_m-p\|^2\\ \leq &\|x_m-p\|^2 + 2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\|\|x_m-p\| + (\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_{m}\|^2. \end{align} (4.19)

    Thus, using (4.15) and (4.19), we have

    \begin{align} \|x_{m+1}-p\|^2 & \leq \|z_m -p\|^2 -(1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2] - \beta_m (1-\beta_m)\|Su_m -u_m\|^2\\ &\leq \|x_m-p\|^2 + 2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\|\|x_m-p\| + (\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_{m}\|^2\\ &-(1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2] - \beta_m (1-\beta_m)\|Su_m -u_m\|^2, \end{align} (4.20)

    which implies that

    \begin{align} &(1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2] + \beta_m (1-\beta_m)\|Su_m -u_m\|^2 \\ \leq &\|x_m-p\|^2- \|x_{m+1}-p\|^2 + 2(\gamma_m+\theta_m+\theta_m\gamma_m)\|x_{m-1}-x_{m}\|\|x_m-p\| + (\gamma_m+\theta_m+\theta_m\gamma_m)^2\|x_{m-1}-x_{m}\|^2. \end{align} (4.21)

    Using (4.18), we have

    \begin{align*} \lim\limits_{m\to \infty}[ (1- \frac{\mu}{2\lambda})[\|z_m-y_m\|^2 + \|u_{m}-y_m\|^2] + \beta_m (1-\beta_m)\|Su_m -u_m\|^2] = 0, \end{align*}

    which implies that

    \begin{align} \lim\limits_{m\to \infty} \|z_m-y_m\| = 0\; \; ,\lim\limits_{m\to \infty}\|u_{m}-y_m\| = 0\; \; \text{and}\; \; \lim\limits_{m\to \infty}\|Su_m -u_m\| = 0. \end{align} (4.22)

    Furthermore, from Algorithm 3.1, we have

\begin{align} \|z_m-x_m\|&\leq \|w_m-x_m\| + \theta_m\|x_{m-1}- w_m\|\\ & = \gamma_m\|T_mx_{m-1}-T_mx_{m}\| + \theta_m\|x_{m-1}- x_m + \gamma_m(T_mx_{m-1}-T_mx_{m})\|\\ &\leq \gamma_m\|x_{m-1}-x_{m}\| +\theta_m\|x_{m-1}- x_m\| +\theta_m\gamma_m\|x_{m-1}-x_{m}\|. \end{align} (4.23)

    Using (4.18), we have

    \begin{align} \lim\limits_{m\to \infty}\|z_m-x_m\| = 0 = \lim\limits_{m\to \infty}\|w_m-x_m\|. \end{align} (4.24)

    Using (4.22) and (4.24), we have

    \begin{align} \lim\limits_{m\to \infty}\|x_m-u_m\| \leq \lim\limits_{m\to \infty}\|x_m-z_m\| + \lim\limits_{m\to \infty}\|z_m-y_m\| + \lim\limits_{m\to \infty}\|y_m-u_m\| = 0. \end{align} (4.25)

We now establish that the sequence \{x_m\} converges weakly to some x^*\in \Omega . Since \{x_m\} is bounded, it has a weakly convergent subsequence; suppose \{x_{m_{k}}\} converges weakly to x^* . By (4.25), the corresponding subsequence \{u_{m_k}\} of \{u_m\} also converges weakly to x^* , and using (4.22) and the demiclosedness principle, we have x^*\in F(S) . Arguing as for (4.9), using the subdifferential of g , we have

    \begin{align} \tau_{m_k}(g(z_{m_k},x) -g(z_{m_k}, y_{m_k})) \geq \langle z_{m_k} -y_{m_k}, x-y_{m_k}\rangle\; \forall\; x\in C, \end{align} (4.26)

Taking the limit as k\to \infty and using Assumption 3.1(1) and (3), together with \lim_{k\to \infty}\tau_{m_k} = \tau > 0 , we have g(x^*, x)\geq 0 for all x\in C . Hence, x^*\in \Omega . Using Lemma 2.2, we conclude that \{x_m\} converges weakly to x^*\in \Omega .

In this section, some numerical examples are given to authenticate our main result. We compare the numerical behavior of our Algorithm 3.1 (shortly, Alg. 3.1) with Algorithm 3.1 in [33] (shortly, YL Alg. 3.1) and Algorithm 4.1 in [15] (shortly, H Alg. 4.1). All numerical simulations are performed in MATLAB R2020b on a desktop PC with an Intel® Core™ i7-3540M CPU @ 3.00GHz \times 4 and 4.00GB memory.

Example 5.1. Let H = \ell_2(\mathbb{R}) = \{x = (x_1, x_2, ..., x_k, ...): x_k\in \mathbb{R}\ \text{and}\ \sum_{k = 1}^{\infty}|x_k|^2 < \infty\} with inner product \langle\cdot, \cdot\rangle:\ell_2\times \ell_2\to \mathbb{R} and norm \|\cdot\|:\ell_2\to \mathbb{R} defined by \langle x, y\rangle = \sum_{k = 1}^{\infty}x_ky_k and \|x\| = (\sum_{k = 1}^{\infty}|x_k|^2)^\frac{1}{2} , where x = \{x_k\}^\infty_{k = 1} , y = \{y_k\}^\infty_{k = 1} . Now, let C = \{x\in H:\|x\|\leq1 \} . The bifunction g:C\times C\to \mathbb{R} is defined by g(x, y) = (3-\|x\|)\langle x, y-x\rangle, \forall x, y\in C . As in [29], one can easily verify that g is pseudomonotone but not monotone, and that g fulfills the Lipschitz-type condition with constants c_1 = c_2 = \frac{5}{2} . Let T_m:\ell_2\to \ell_2 be defined by T_mx = \frac{x}{5m} for all m\in \mathbb{N} , x\in C , and define S:\ell_2\to \ell_2 by Sx = \frac{x}{4} , where x = (x_1, x_2, ..., x_k, ...) , x_k\in \mathbb{R} . Now, we consider the following parameters for the various methods:

    \bullet For Alg. 3.1: \lambda = 2.4 , \mu = 0.7 , \gamma_m = \theta_m = \frac{1}{10m^2+1} , \beta_m = \frac{1}{5m} ,

\bullet For YL Alg. 3.1: \alpha_m = \frac{1}{1000(m+2)} , \beta_m = 0.8 , \lambda_0 = \frac{1}{4} , \mu = 0.7 .

    \bullet For H Alg. 4.1: \lambda = 2.4 , \alpha_m = \frac{1}{1000(m+2)} and \beta_m = 0.8 .

    Next, we consider the following initial values:

Case A: x_0 = (0, 1, 0, ..., 0, ...) , x_1 = (1, 1, 1, 0, ..., 0, ...) ,

Case B: x_0 = (1, 1, 0, ..., 0, ...) , x_1 = (0, 1, 1, 0, ..., 0, ...) ,

Case C: x_0 = (1, 0, 1, 0, ..., 0, ...) , x_1 = (1, 0, 1, 0, ..., 0, ...) ,

Case D: x_0 = (1, 0, ..., 0, ...) , x_1 = (1, 1, 0, ..., 0, ...) .

For the numerical computation, we used the stopping criterion TOL_m = \|x_{m+1}-x_{m}\| < 10^{-4} and obtained the following Table 1 and Figure 1:

    Table 1.  Results of the numerical simulations for various cases.
                 Alg. 3.1           Alg. 3.1 (without T_m )    YL Alg. 3.1        H Alg. 4.1
                 Iter   CPU (s)     Iter   CPU (s)             Iter   CPU (s)     Iter   CPU (s)
    Case A        21    0.0069       49    0.0125               61    0.0219       82    0.0236
    Case B        35    0.0090       45    0.0098               78    0.0221       82    0.0236
    Case C        40    0.0101       50    0.0105               80    0.0225       85    0.0237
    Case D        41    0.0101       55    0.0112               85    0.0237       85    0.0237

    Figure 1.  Example 1, Case A (top left); Case B (top right); Case C (bottom left); Case D (bottom right).

Example 5.2. Let H = L_2([0, 1]) with inner product \langle x, y\rangle = \int_{0}^{1}x(t)y(t)dt and norm \|x\| = (\int_{0}^{1}x^2(t)dt)^{\frac{1}{2}} for all x, y\in L_2([0, 1]) . We define the set C by C = \{x\in H:\int_{0}^{1}(s^2+1)x(s)ds\leq 1\} and the function g:C\times C\to \mathbb{R} by g(x, y) = \langle Bx, y-x\rangle , where Bx(s) = \max \{0, x(s)\} , s\in [0, 1] , for all x\in H . Now, let the mapping T_m:C\to C be defined by T_mx = \frac{x}{2m} for each m\in \mathbb{N} , and define S:C\to C by Sx = \frac{x}{3} for each x\in C . Now, we consider the following initial values:

    Case I: x_0 = s^3+1 , x_1 = \sin (3s) ,

    Case II: x_0 = \frac{\sin(4s)}{30} , x_1 = \frac{\cos (3s)}{3} ,

    Case III: x_0 = \frac{\exp (3s)}{80} , x_1 = \frac{\exp (4s)}{80} ,

    Case IV: x_0 = s^3+s^2 , x_1 = s^2+1 .

    For the numerical computation, we use the same parameters as in Example 1. We consider the stopping criterion Tol_m = \|x_{m+1}-x_{m}\| < 10^{-5} and the following Table 2 and Figure 2 are obtained.

    Table 2.  Results of the numerical simulations for various cases.
                 Alg. 3.1           Alg. 3.1 (without T_m )    YL Alg. 3.1        H Alg. 4.1
                 Iter   CPU (s)     Iter   CPU (s)             Iter   CPU (s)     Iter   CPU (s)
    Case I        25    0.0070       30    0.0092               50    0.0105       80    0.0226
    Case II       38    0.0098       49    0.0102               79    0.0220       84    0.0224
    Case III      40    0.0120       50    0.0105               83    0.0225       84    0.0225
    Case IV       41    0.0121       56    0.0123               85    0.0224       90    0.0230

    Figure 2.  Example 2, Case I (top left); Case II (top right); Case III (bottom left); Case IV (bottom right).

Remark 5.1. From Tables 1 and 2 and Figures 1 and 2, it is evident that our new method outperforms the compared methods in both iteration count and CPU time.

In this work, we have introduced a modified subgradient extragradient iterative algorithm for approximating the common solution of an EP with a pseudomonotone bifunction and the FPP of nonexpansive mappings in a real Hilbert space. The proposed method employs a self-adaptive step-size and does not rely on prior knowledge of the Lipschitz-type constants of the pseudomonotone bifunction. Under some mild conditions, we proved the weak convergence of the new method. We have shown that the suggested method, which includes two modified mildly inertial steps, outperforms several well-known methods in the existing literature.

    Francis Akutsah: Conceptualization, Writing-original draft; Akindele Adebayo Mebawondu: Conceptualization, Software, Writing-original draft; Austine Efut Ofem: Methodology, Software, Writing-original draft; Reny George: Validation, Funding acquisition, Formal analysis; Hossam A. Nabwey: Validation, Formal analysis; Ojen Kumar Narain: Supervision. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this work through the project number (PSAU/2024/01/78917).

The authors declare no conflict of interest.



    [1] P. Anh, A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems, Bull. Malays. Math. Sci. Soc., 36 (2013), 107–116.
    [2] A. Antipin, On a method for convex programs using a symmetrical modification of the Lagrange function, Ekon. Mat. Metody, 12 (1976), 1164–1173.
[3] F. Akutsah, A. Mebawondu, H. Abass, O. Narain, A self adaptive method for solving a class of bilevel variational inequalities with split variational inequality and composed fixed point problem constraints in Hilbert spaces, Numer. Algebra Control, 13 (2023), 117–138. http://dx.doi.org/10.3934/naco.2021046 doi: 10.3934/naco.2021046
    [4] F. Akutsah, A. Mebawondu, G. Ugwunnadi, O. Narain, Inertial extrapolation method with regularization for solving monotone bilevel variation inequalities and fixed point problems in real Hilbert space, J. Nonlinear Funct. Anal., 2022 (2022), 5. http://dx.doi.org/10.23952/jnfa.2022.5 doi: 10.23952/jnfa.2022.5
    [5] F. Akutsah, A. Mebawondu, G. Ugwunnadi, P. Pillay, O. Narain, Inertial extrapolation method with regularization for solving a new class of bilevel problem in real Hilbert spaces, SeMA, 80 (2023), 503–524. http://dx.doi.org/10.1007/s40324-022-00293-2 doi: 10.1007/s40324-022-00293-2
    [6] F. Alvares, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set Valued Anal., 9 (2001), 3–11. http://dx.doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [7] P. Anh, Strong convergence theorems for nonexpansive mappings Ky Fan inequalities, J. Optim. Theory Appl., 154 (2012), 303–320. http://dx.doi.org/10.1007/s10957-012-0005-x doi: 10.1007/s10957-012-0005-x
    [8] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Mathematics Student, 63 (1994), 123–145.
    [9] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 148 (2011), 318–335. http://dx.doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [10] Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for variational inequality problems in Euclidean space, Optimization, 61 (2012), 119–1132. http://dx.doi.org/10.1080/02331934.2010.539689 doi: 10.1080/02331934.2010.539689
    [11] L. Ceng, X. Qin, Y. Shehu, J. Yao, Mildly inertial subgradient extragradient method for variational inequalities involving an asymptotically nonexpansive and finitely many nonexpansive mappings, Mathematics, 7 (2019), 881. http://dx.doi.org/10.3390/math7100881 doi: 10.3390/math7100881
    [12] P. Daniele, F. Giannessi, A. Maugeri, Equilibrium problems and variational model, New York: Springer, 2003. http://dx.doi.org/10.1007/978-1-4613-0239-1
    [13] A. Hanjing, S. Suantai, A fast image restoration algorithm based on a fixed point and optimization method, Mathematics, 8 (2020), 378. http://dx.doi.org/10.3390/math8030378 doi: 10.3390/math8030378
    [14] Z. He, The split equilibrium problem and its convergence algorithms, J. Inequal. Appl., 2012 (2012), 162. http://dx.doi.org/10.1186/1029-242X-2012-162 doi: 10.1186/1029-242X-2012-162
    [15] D. Hieu, Halpern subgradient extragradient method extended to equilibrium problems, RACSAM, 111 (2017), 823–840. http://dx.doi.org/10.1007/s13398-016-0328-9 doi: 10.1007/s13398-016-0328-9
    [16] D. Hieu, Hybrid projection methods for equilibrium problems with non-Lipschitz type bifunctions, Math. Method. Appl. Sci., 40 (2017), 4065–4079. http://dx.doi.org/10.1002/mma.4286 doi: 10.1002/mma.4286
    [17] L. Jolaoso, M. Aphane, A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems, Fixed Point Theory Appl., 2020 (2020), 9. http://dx.doi.org/10.1186/s13663-020-00676-y doi: 10.1186/s13663-020-00676-y
    [18] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matekon, 12 (1976), 747–756.
    [19] M. Lukumon, A. Mebawondu, A. Ofem, C. Agbonkhese, F. Akutsah, O. Narain, An efficient iterative method for solving quasimonotone bilevel split variational inequality problem, Adv. Fixed Point Theory, 13 (2023), 26. http://dx.doi.org/10.28919/afpt/8269 doi: 10.28919/afpt/8269
    [20] A. Mebawondu, A. Ofem, F. Akutsah, C. Agbonkhese, F. Kasali, O. Narain, A new double inertial subgradient extragradient algorithm for solving split pseudomonotone equilibrium problems and fixed point problems, Ann. Univ. Ferrara, in press. http://dx.doi.org/10.1007/s11565-024-00496-7
    [21] A. Moudafi, E. Al-Shemas, Simultaneous iterative methods for split equality problem, Trans. Math. Program. Appl., 1 (2013), 1–11.
    [22] L. Muu, W. Oettli, Convergence of an adaptive penalty scheme for finding constrained equilibria, Nonlinear Anal., 18 (1992), 1159–1166. http://dx.doi.org/10.1016/0362-546X(92)90159-C doi: 10.1016/0362-546X(92)90159-C
    [23] A. Ofem, A. Mebawondu, G. Ugwunnadi, H. Isik, O. Narain, A modified subgradient extragradient algorithm-type for solving quasimonotone variational inequality problems with applications, J. Inequal. Appl., 2023 (2023), 73. http://dx.doi.org/10.1186/s13660-023-02981-7 doi: 10.1186/s13660-023-02981-7
    [24] A. Ofem, A. Mebawondu, G. Ugwunnadi, P. Cholamjiak, O. Narain, Relaxed Tseng splitting method with double inertial steps for solving monotone inclusions and fixed point problems, Numer. Algor., in press. http://dx.doi.org/10.1007/s11075-023-01674-y
    [25] A. Ofem, A. Mebawondu, C. Agbonkhese, G. Ugwunnadi, O. Narain, Alternated inertial relaxed tseng method for solving fixed point and quasi-monotone variational inequality problems, Nonlinear Functional Analysis and Applications, 29 (2024), 131–164. http://dx.doi.org/10.22771/nfaa.2024.29.01.10 doi: 10.22771/nfaa.2024.29.01.10
    [26] A. Ofem, J. Abuchu, G. Ugwunnadi, H. Nabwey, A. Adamu, O. Narain, Double inertial steps extragadient-type methods for solving optimal control and image restoration problems, AIMS Mathematics, 9 (2024), 12870–12905. http://dx.doi.org/10.3934/math.2024629 doi: 10.3934/math.2024629
    [27] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comp. Math. Math. Phys., 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [28] D. Quoc Tran, M. Le Dung, V. Nguyen, Extragradient algorithms extended to equilibrium problems, Optimization, 57 (2008), 749–776. http://dx.doi.org/10.1080/02331930601122876 doi: 10.1080/02331930601122876
    [29] H. Rehman, P. Kumam, P. Cho, Y. Suleiman, W. Kumam, Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Method. Softw., 36 (2021), 82–113. http://dx.doi.org/10.1080/10556788.2020.1734805 doi: 10.1080/10556788.2020.1734805
    [30] K. Tan, H. Xu, Approximating fixed points of nonexpansive mappings by the ishikawa iteration process, J. Math. Anal. Appl., 178 (1993), 301–308. http://dx.doi.org/10.1006/jmaa.1993.1309 doi: 10.1006/jmaa.1993.1309
    [31] D. Thong, D. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. http://dx.doi.org/10.1007/s11075-018-0527-x doi: 10.1007/s11075-018-0527-x
    [32] A. Tada, W. Takahashi, Strong convergence theorem for an equilibrium problem and a nonexpansive mapping, In: Nonlinear analysis and convex analysis, Yokohama: Yokohama Publishers, 2006,609–617.
    [33] J. Yang, H. Liu, The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space, Optim. Lett., 14 (2020), 1803–1816. http://dx.doi.org/10.1007/s11590-019-01474-1 doi: 10.1007/s11590-019-01474-1
    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)