Research article

Recurrence after treatment of arteriovenous malformations of the head and neck

  • These two authors contributed equally.
  • Objective 

    Arteriovenous malformations (AVMs) are aggressive diseases with a high tendency to recur. AVM treatment is complex, especially in the anatomically difficult head and neck region. This study analyzed correlations between extracranial head and neck AVM presentations and the frequency of recurrence.

    Methods 

    We retrospectively assessed AVM recurrence among 55 patients with head and neck AVMs treated with embolization and resection between January 2008 and December 2015. Recurrence was defined as any evidence of AVM expansion following embolization and resection. Patient variables, including sex, age, AVM size, AVM location, stage, and treatment modalities, were examined for correlations with the recurrence of head and neck AVMs. Statistical analysis was performed using SPSS 20.0.

    Results 

    A total of 55 patients with at least 6 months of follow-up following AVM treatment with embolization and surgical resection were enrolled in this study. During follow-up, 14 of 55 patients experienced recurrence (the long-term recurrence rate was 25.5%). Sex, stage, AVM size, and treatment modality were identified as independent predictors of recurrence. Recurrence was less likely following the treatment of lower-stage or smaller lesions and did not correlate with age or location.

    Conclusions 

    AVMs of the head and neck are among the most challenging conditions to manage due to a high risk of recurrence. Early and total AVM resection is the best method for preventing recurrence.

    Citation: Do-Thi Ngoc Linh, Lam Khanh, Le Thanh Dung, Nguyen Hong Ha, Tran Thiet Son, Nguyen Minh Duc. Recurrence after treatment of arteriovenous malformations of the head and neck[J]. AIMS Medical Science, 2022, 9(1): 9-17. doi: 10.3934/medsci.2022003




Mathematical modelling provides a systematic formalism for understanding the corresponding real-world problem, and adequate mathematical tools for analyzing the translated problem are at our disposal. Fixed point theory (FPT), an important branch of nonlinear functional analysis, is prominent for modelling a variety of real-world problems. It is worth mentioning that a real-world phenomenon can often be translated into a well-known existential as well as computational fixed point problem (FPP).

The equilibrium problem (EP) theory provides another systematic formalism for modelling real-world problems, with possible applications in optimization theory, variational inequality theory and game theory [7,10,13,17,18,19,21,25,28,31,32]. In 1994, Blum and Oettli [13] proposed the (monotone) EP in Hilbert spaces. Since then, various classical iterative algorithms have been employed to compute the optimal solution of the (monotone) EP and the FPP. It is remarked that the convergence characteristic and the speed of convergence are the principal attributes of an iterative algorithm. The classical iterative algorithms from FPT or EP theory share a common shortcoming: convergence holds only with respect to the weak topology. In order to enforce strong convergence, one has to impose stronger assumptions on the domain and/or the constraints. Moreover, the strong convergence of an iterative algorithm is often more desirable than weak convergence in an infinite-dimensional framework.

The efficiency of an iterative algorithm can be improved by employing the inertial extrapolation technique [29]. This technique has been successfully combined with different classical iterative algorithms; see, e.g., [2,3,4,5,6,8,9,14,15,16,23,27]. On the other hand, a parallel architecture of the algorithm helps to reduce the computational cost.

In 2006, Tada and Takahashi [33] suggested a hybrid framework for the analysis of the monotone EP and the FPP in Hilbert spaces. However, the iterative algorithm proposed in [33] fails in the case of the pseudomonotone EP. In order to address this issue, Anh [1] suggested a hybrid extragradient method, based on the seminal work of Korpelevich [24], for the pseudomonotone EP together with the FPP. Inspired by the work of Anh [1], Hieu et al. [21] suggested a parallel hybrid extragradient framework for the pseudomonotone EP together with the FPP associated with nonexpansive operators.

Inspired and motivated by this ongoing research, it is natural to study the pseudomonotone EP together with the FPP associated with the class of $\eta$-demimetric operators. We therefore suggest some variants of the classical Mann iterative algorithm [26] and the Halpern iterative algorithm [20] in Hilbert spaces. We formulate these variants with the inertial extrapolation technique and a parallel hybrid architecture for speedy strong convergence results in Hilbert spaces.

The rest of the paper is organized as follows. We present some relevant preliminary concepts and useful results regarding the pseudomonotone EP and the FPP in Section 2. Section 3 comprises strong convergence results for the proposed variants of the parallel hybrid extragradient algorithm as well as the Halpern iterative algorithm under a suitable set of conditions. In Section 4, we provide detailed numerical results demonstrating the main results of Section 3 as well as the viability of the proposed variants with respect to various real-world applications.

Throughout this section, the triplet $(H,\langle\cdot,\cdot\rangle,\|\cdot\|)$ denotes a real Hilbert space together with its inner product and the induced norm. The symbols $\rightharpoonup$ and $\to$ denote weak and strong convergence, respectively. Recall that a Hilbert space satisfies Opial's condition, i.e., for a sequence $(p_k)\subset H$ with $p_k\rightharpoonup\nu$, the inequality $\liminf_{k\to\infty}\|p_k-\nu\|<\liminf_{k\to\infty}\|p_k-\mu\|$ holds for all $\mu\in H$ with $\nu\neq\mu$. Moreover, $H$ satisfies the Kadec-Klee property, i.e., if $p_k\rightharpoonup\nu$ and $\|p_k\|\to\|\nu\|$ as $k\to\infty$, then $\|p_k-\nu\|\to 0$ as $k\to\infty$.

For a nonempty closed and convex subset $K\subset H$, the metric projection operator $\Pi_K^H:H\to K$ is defined as $\Pi_K^H(\mu)=\operatorname{argmin}_{\nu\in K}\|\mu-\nu\|$. If $T:H\to H$ is an operator, then $\mathrm{Fix}(T)=\{\nu\in H\mid\nu=T\nu\}$ represents the set of fixed points of the operator $T$. Recall that the operator $T$ is called $\eta$-demimetric (see [35]), where $\eta\in(-\infty,1)$, if $\mathrm{Fix}(T)\neq\emptyset$ and

$$\langle\mu-\nu,\,\mu-T\mu\rangle\geq\tfrac{1}{2}(1-\eta)\|\mu-T\mu\|^2,\quad\forall\mu\in H\text{ and }\nu\in\mathrm{Fix}(T).$$

The above definition is equivalently represented as

$$\|T\mu-\nu\|^2\leq\|\mu-\nu\|^2+\eta\|\mu-T\mu\|^2,\quad\forall\mu\in H\text{ and }\nu\in\mathrm{Fix}(T).$$
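To make the two notions above concrete, the following small Python sketch numerically checks them in finite dimensions. The ball radius, the operator $T(p)=-3p$, and the constant $\eta=1/2$ are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def proj_ball(mu, radius=1.0):
    # Metric projection onto the closed ball K = {nu : ||nu|| <= radius}:
    # the unique nearest point of K to mu.
    n = np.linalg.norm(mu)
    return mu if n <= radius else (radius / n) * mu

# T(p) = -3p has Fix(T) = {0} and is eta-demimetric with eta = 1/2:
# ||T(mu) - 0||^2 <= ||mu - 0||^2 + eta * ||mu - T(mu)||^2 for every mu.
T = lambda p: -3.0 * p
eta = 0.5
for mu in [0.3, -1.7, 4.0]:
    lhs = T(mu) ** 2
    rhs = mu ** 2 + eta * (mu - T(mu)) ** 2
    assert lhs <= rhs + 1e-12  # 9*mu^2 <= mu^2 + 8*mu^2, with equality
```

For this particular $T$ the demimetric inequality holds with equality, so $\eta=1/2$ is the smallest admissible constant for it.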

Recall also that a bifunction $g:K\times K\to\mathbb{R}\cup\{+\infty\}$ is called (i) monotone if $g(\mu,\nu)+g(\nu,\mu)\leq 0$ for all $\mu,\nu\in K$; and (ii) strongly pseudomonotone if $g(\mu,\nu)\geq 0$ implies $g(\nu,\mu)\leq-\alpha\|\mu-\nu\|^2$ for all $\mu,\nu\in K$, where $\alpha>0$. It is worth mentioning that the monotonicity of a bifunction implies its pseudomonotonicity, but the converse is not true. Recall that the EP associated with the bifunction $g$ is to find $\mu^*\in K$ such that $g(\mu^*,\nu)\geq 0$ for all $\nu\in K$. The set of solutions of the equilibrium problem is denoted by $EP(g)$.

Assumption 2.1. [12,13] Let $g:K\times K\to\mathbb{R}\cup\{+\infty\}$ be a bifunction satisfying the following assumptions:

(A1) $g$ is pseudomonotone, i.e., $g(\mu,\nu)\geq 0$ implies $g(\nu,\mu)\leq 0$ for all $\mu,\nu\in K$;

(A2) $g$ is Lipschitz-type continuous, i.e., there exist two nonnegative constants $d_1,d_2$ such that

$$g(\mu,\nu)+g(\nu,\xi)\geq g(\mu,\xi)-d_1\|\mu-\nu\|^2-d_2\|\nu-\xi\|^2,\quad\text{for all }\mu,\nu,\xi\in K;$$

(A3) $g$ is weakly continuous on $K\times K$, i.e., if $\mu,\nu\in K$ and $(p_k)$, $(q_k)$ are two sequences in $K$ such that $p_k\rightharpoonup\mu$ and $q_k\rightharpoonup\nu$, respectively, then $g(p_k,q_k)\to g(\mu,\nu)$;

(A4) For each fixed $\mu\in K$, $g(\mu,\cdot)$ is convex and subdifferentiable on $K$.

In view of Assumption 2.1, the solution set $EP(g)$ associated with the bifunction $g$ is weakly closed and convex.

Let $g_i:K\times K\to\mathbb{R}\cup\{+\infty\}$, $i\in\{1,2,\ldots,M\}$, be a finite family of bifunctions satisfying Assumption 2.1. Then we can compute common Lipschitz coefficients $(d_1,d_2)$ for the family of bifunctions $g_i$ by employing the condition (A2) as

$$g_i(\mu,\xi)-g_i(\mu,\nu)-g_i(\nu,\xi)\leq d_{1,i}\|\mu-\nu\|^2+d_{2,i}\|\nu-\xi\|^2\leq d_1\|\mu-\nu\|^2+d_2\|\nu-\xi\|^2,$$

where $d_1=\max_{1\leq i\leq M}\{d_{1,i}\}$ and $d_2=\max_{1\leq i\leq M}\{d_{2,i}\}$. Therefore, $g_i(\mu,\nu)+g_i(\nu,\xi)\geq g_i(\mu,\xi)-d_1\|\mu-\nu\|^2-d_2\|\nu-\xi\|^2$. In addition, we assume $T_j:H\to H$, $j\in\{1,2,\ldots,N\}$, to be a finite family of $\eta$-demimetric operators such that $\Gamma:=\left(\bigcap_{i=1}^M EP(g_i)\right)\cap\left(\bigcap_{j=1}^N\mathrm{Fix}(T_j)\right)\neq\emptyset$. Then we are interested in the following problem:

$$\text{find } \hat{p}\in\Gamma. \tag{2.1}$$

Lemma 2.2. [11] Let $\mu,\nu\in H$ and $\beta\in\mathbb{R}$. Then

(1) $\|\mu+\nu\|^2\leq\|\mu\|^2+2\langle\nu,\,\mu+\nu\rangle$;

(2) $\|\mu-\nu\|^2=\|\mu\|^2-\|\nu\|^2-2\langle\mu-\nu,\,\nu\rangle$;

(3) $\|\beta\mu+(1-\beta)\nu\|^2=\beta\|\mu\|^2+(1-\beta)\|\nu\|^2-\beta(1-\beta)\|\mu-\nu\|^2$.

Lemma 2.3. [35] Let $T:K\to H$ be an $\eta$-demimetric operator, with $\eta\in(-\infty,1)$, defined on a nonempty closed and convex subset $K$ of a Hilbert space $H$. Then $\mathrm{Fix}(T)$ is closed and convex.

Lemma 2.4. [36] Let $T:K\to H$ be an $\eta$-demimetric operator, with $\eta\in(-\infty,1)$, defined on a nonempty closed and convex subset $K$ of a Hilbert space $H$. Then the operator $L=(1-\gamma)\mathrm{Id}+\gamma T$ is quasi-nonexpansive provided that $\mathrm{Fix}(T)\neq\emptyset$ and $0<\gamma<1-\eta$.

Lemma 2.5. [11] Let $T:K\to K$ be a nonexpansive operator defined on a nonempty closed convex subset $K$ of a real Hilbert space $H$ and let $(p_k)$ be a sequence in $K$. If $p_k\rightharpoonup x$ and $(\mathrm{Id}-T)p_k\to 0$, then $x\in\mathrm{Fix}(T)$.

Lemma 2.6. [37] Let $h:K\to\mathbb{R}$ be a convex and subdifferentiable function on a nonempty closed and convex subset $K$ of a real Hilbert space $H$. Then $p$ solves $\min\{h(q):q\in K\}$ if and only if $0\in\partial h(p)+N_K(p)$, where $\partial h(\cdot)$ denotes the subdifferential of $h$ and $N_K(p)$ is the normal cone of $K$ at $p$.

    Our main iterative algorithm of this section has the following architecture:

Theorem 3.1. Let the following conditions:

(C1) $\sum_{k=1}^{\infty}\xi_k\|p_k-p_{k-1}\|<\infty$;

(C2) $0<a\leq\gamma_k\leq\min\{1-\eta_1,\ldots,1-\eta_N\}$,

hold. Then Algorithm 1 solves the problem (2.1).

Algorithm 1 Parallel Hybrid Inertial Extragradient Algorithm (Alg.1)
Initialization: Choose $p_0,p_1\in H$ arbitrarily, $K\subset H$ and $C_1=H$. Set $k\geq 1$, $\{\alpha_1,\ldots,\alpha_N\}\subset(0,1)$ such that $\sum_{j=1}^N\alpha_j=1$, $0<\mu<\min\left(\frac{1}{2d_1},\frac{1}{2d_2}\right)$, $\xi_k\in[0,1)$ and $\gamma_k\in(0,\infty)$.
Iterative Steps: Given $p_k\in H$, calculate $e_k$, $\bar{v}_k$ and $w_k$ as follows:
  Step 1. Compute
    $e_k=p_k+\xi_k(p_k-p_{k-1})$;
    $u_{i,k}=\operatorname{argmin}\{\mu g_i(e_k,\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}$, $i=1,2,\ldots,M$;
    $v_{i,k}=\operatorname{argmin}\{\mu g_i(u_{i,k},\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}$, $i=1,2,\ldots,M$;
    $i_k=\operatorname{argmax}\{\|v_{i,k}-p_k\|:i=1,2,\ldots,M\}$, $\bar{v}_k=v_{i_k,k}$;
    $w_k=\sum_{j=1}^N\alpha_j\big((1-\gamma_k)\mathrm{Id}+\gamma_k T_j\big)\bar{v}_k$.
  If $w_k=\bar{v}_k=e_k=p_k$, then terminate: $p_k$ solves the problem (2.1). Else
  Step 2. Compute
    $C_{k+1}=\{z\in C_k:\|w_k-z\|^2\leq\|p_k-z\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle\}$;
    $p_{k+1}=\Pi_{C_{k+1}}^H p_1$, $k\geq 1$.
  Set $k:=k+1$ and return to Step 1.


The following result is crucial for the strong convergence analysis of Algorithm 1.

Lemma 3.2. [1,30] Suppose that $\nu\in EP(g_i)$, and $p_k$, $e_k$, $u_{i,k}$, $v_{i,k}$, $i\in\{1,2,\ldots,M\}$, are defined in Step 1 of Algorithm 1. Then we have

$$\|v_{i,k}-\nu\|^2\leq\|e_k-\nu\|^2-(1-2\mu d_1)\|u_{i,k}-e_k\|^2-(1-2\mu d_2)\|u_{i,k}-v_{i,k}\|^2.$$

    Proof of Theorem 3.1.

Step 1. Algorithm 1 is well-defined.

Observe the following representation of the set $C_{k+1}$:

$$C_{k+1}=\left\{z\in C_k:\langle p_k-w_k,\,z\rangle\leq\tfrac{1}{2}\left(\|p_k\|^2-\|w_k\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle\right)\right\}.$$

This infers that $C_{k+1}$ is closed and convex for all $k\geq 1$. It is well known that $EP(g_i)$ and $\mathrm{Fix}(T_j)$ are closed and convex (from Assumption 2.1 and Lemma 2.3, respectively). Hence, $\Gamma$ is nonempty, closed and convex. For any $p^*\in\Gamma$, it follows from Algorithm 1 that

$$\|e_k-p^*\|^2=\|p_k-p^*+\xi_k(p_k-p_{k-1})\|^2\leq\|p_k-p^*\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-p^*,\,p_k-p_{k-1}\rangle. \tag{3.1}$$

From (3.1) and recalling Lemma 2.4, we obtain

$$\|w_k-p^*\|=\Big\|\sum_{j=1}^N\alpha_j\big((1-\gamma_k)\mathrm{Id}+\gamma_k T_j\big)\bar{v}_k-p^*\Big\|\leq\sum_{j=1}^N\alpha_j\big\|\big((1-\gamma_k)\mathrm{Id}+\gamma_k T_j\big)\bar{v}_k-p^*\big\|\leq\sum_{j=1}^N\alpha_j\|\bar{v}_k-p^*\|=\|\bar{v}_k-p^*\|.$$

Now recalling Lemma 3.2, the above estimate implies that

$$\|w_k-p^*\|^2\leq\|\bar{v}_k-p^*\|^2\leq\|p_k-p^*\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-p^*,\,p_k-p_{k-1}\rangle. \tag{3.2}$$

The estimate (3.2) infers that $\Gamma\subset C_{k+1}$. It is now clear from these facts that Algorithm 1 is well-defined.

Step 2. The limit $\lim_{k\to\infty}\|p_k-p_1\|$ exists.

From $p_{k+1}=\Pi_{C_{k+1}}^H p_1$, we have $\langle p_{k+1}-p_1,\,p_{k+1}-\nu\rangle\leq 0$ for each $\nu\in C_{k+1}$. In particular, we have $\langle p_{k+1}-p_1,\,p_{k+1}-p^*\rangle\leq 0$ for each $p^*\in\Gamma$. This proves that the sequence $(\|p_k-p_1\|)$ is bounded. Moreover, from $p_k=\Pi_{C_k}^H p_1$ and $p_{k+1}=\Pi_{C_{k+1}}^H p_1\in C_{k+1}\subset C_k$, we have

$$\|p_k-p_1\|\leq\|p_{k+1}-p_1\|.$$

This infers that $(\|p_k-p_1\|)$ is nondecreasing, and hence

$$\lim_{k\to\infty}\|p_k-p_1\|\ \text{exists.} \tag{3.3}$$

Step 3. $\tilde{p}\in\Gamma$.

Compute

$$\begin{aligned}\|p_{k+1}-p_k\|^2&=\|p_{k+1}-p_1+p_1-p_k\|^2\\&=\|p_{k+1}-p_1\|^2+\|p_k-p_1\|^2-2\langle p_k-p_1,\,p_{k+1}-p_1\rangle\\&=\|p_{k+1}-p_1\|^2+\|p_k-p_1\|^2-2\langle p_k-p_1,\,p_{k+1}-p_k+p_k-p_1\rangle\\&=\|p_{k+1}-p_1\|^2-\|p_k-p_1\|^2-2\langle p_k-p_1,\,p_{k+1}-p_k\rangle\\&\leq\|p_{k+1}-p_1\|^2-\|p_k-p_1\|^2.\end{aligned}$$

Utilizing (3.3), the above estimate infers that

$$\lim_{k\to\infty}\|p_{k+1}-p_k\|=0. \tag{3.4}$$

Recalling the definition of $(e_k)$ and the condition (C1), we have

$$\lim_{k\to\infty}\|e_k-p_k\|=\lim_{k\to\infty}\xi_k\|p_k-p_{k-1}\|=0. \tag{3.5}$$

Recalling (3.4) and (3.5), the relation

$$\|e_k-p_{k+1}\|\leq\|e_k-p_k\|+\|p_k-p_{k+1}\|$$

infers that

$$\lim_{k\to\infty}\|e_k-p_{k+1}\|=0. \tag{3.6}$$

Note that $p_{k+1}\in C_{k+1}$; therefore, the relation

$$\|w_k-p_{k+1}\|^2\leq\|p_k-p_{k+1}\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-p_{k+1},\,p_k-p_{k-1}\rangle$$

infers, on employing (3.4) and the condition (C1), that

$$\lim_{k\to\infty}\|w_k-p_{k+1}\|=0. \tag{3.7}$$

Again, recalling (3.4) and (3.7), the relation

$$\|w_k-p_k\|\leq\|w_k-p_{k+1}\|+\|p_{k+1}-p_k\|$$

infers that

$$\lim_{k\to\infty}\|w_k-p_k\|=0. \tag{3.8}$$

In view of the condition (C2), observe the following variant of (3.2):

$$(1-2\mu d_1)\|u_{i_k,k}-e_k\|^2+(1-2\mu d_2)\|u_{i_k,k}-v_{i_k,k}\|^2\leq\big(\|p_k-p^*\|+\|w_k-p^*\|\big)\|p_k-w_k\|+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\|p_k-p^*\|\,\|p_k-p_{k-1}\|.$$

Recalling (3.8) and the condition (C1), we get

$$(1-2\mu d_1)\lim_{k\to\infty}\|u_{i_k,k}-e_k\|^2+(1-2\mu d_2)\lim_{k\to\infty}\|u_{i_k,k}-v_{i_k,k}\|^2=0. \tag{3.9}$$

The above estimate (3.9) implies that

$$\lim_{k\to\infty}\|u_{i_k,k}-e_k\|^2=\lim_{k\to\infty}\|u_{i_k,k}-v_{i_k,k}\|^2=0. \tag{3.10}$$

Reasoning as above, recalling (3.5), (3.8) and (3.10), we have

$$\|\bar{v}_k-e_k\|\leq\|\bar{v}_k-u_{i_k,k}\|+\|u_{i_k,k}-e_k\|\to 0;$$

$$\|\bar{v}_k-p_k\|\leq\|\bar{v}_k-e_k\|+\|e_k-p_k\|\to 0;$$

$$\|w_k-e_k\|\leq\|w_k-p_k\|+\|p_k-e_k\|\to 0;$$

$$\|w_k-\bar{v}_k\|\leq\|w_k-e_k\|+\|e_k-\bar{v}_k\|\to 0.$$

In view of the estimate $\lim_{k\to\infty}\|w_k-\bar{v}_k\|=0$, we have

$$\lim_{k\to\infty}\|T_j\bar{v}_k-\bar{v}_k\|=0,\quad j\in\{1,2,\ldots,N\}. \tag{3.11}$$

Next, we show that $\tilde{p}\in\bigcap_{i=1}^M EP(g_i)$.

Observe that

$$u_{i,k}=\operatorname{argmin}\left\{\mu g_i(e_k,\nu)+\tfrac{1}{2}\|e_k-\nu\|^2:\nu\in K\right\}.$$

Recalling Lemma 2.6, we get

$$0\in\partial_2\left\{\mu g_i(e_k,\nu)+\tfrac{1}{2}\|e_k-\nu\|^2\right\}(u_{i,k})+N_K(u_{i,k}).$$

This implies the existence of $\tilde{x}\in\partial_2 g_i(e_k,u_{i,k})$ and $\bar{x}\in N_K(u_{i,k})$ such that

$$\mu\tilde{x}+u_{i,k}-e_k+\bar{x}=0. \tag{3.12}$$

Since $\bar{x}\in N_K(u_{i,k})$, we have $\langle\bar{x},\,\nu-u_{i,k}\rangle\leq 0$ for all $\nu\in K$. Therefore, recalling (3.12), we have

$$\mu\langle\tilde{x},\,\nu-u_{i,k}\rangle\geq\langle u_{i,k}-e_k,\,u_{i,k}-\nu\rangle,\quad\forall\nu\in K. \tag{3.13}$$

Since $\tilde{x}\in\partial_2 g_i(e_k,u_{i,k})$,

$$g_i(e_k,\nu)-g_i(e_k,u_{i,k})\geq\langle\tilde{x},\,\nu-u_{i,k}\rangle,\quad\forall\nu\in K. \tag{3.14}$$

Therefore, recalling (3.13) and (3.14), we obtain

$$\mu\big(g_i(e_k,\nu)-g_i(e_k,u_{i,k})\big)\geq\langle u_{i,k}-e_k,\,u_{i,k}-\nu\rangle,\quad\forall\nu\in K. \tag{3.15}$$

Since $(p_k)$ is bounded, there exists a subsequence $(p_{k_t})$ of $(p_k)$ such that $p_{k_t}\rightharpoonup\tilde{p}\in H$ as $t\to\infty$. This also infers that $w_{k_t}\rightharpoonup\tilde{p}$, $\bar{v}_{k_t}\rightharpoonup\tilde{p}$ and $e_{k_t}\rightharpoonup\tilde{p}$ as $t\to\infty$. Since $e_{k_t}\rightharpoonup\tilde{p}$ and $\|e_k-u_{i,k}\|\to 0$ as $k\to\infty$, this implies $u_{i,k_t}\rightharpoonup\tilde{p}$. Recalling the assumption (A3) and (3.15), we deduce that $g_i(\tilde{p},\nu)\geq 0$ for all $\nu\in K$ and $i\in\{1,2,\ldots,M\}$. Therefore, $\tilde{p}\in\bigcap_{i=1}^M EP(g_i)$. Moreover, recalling that $\bar{v}_{k_t}\rightharpoonup\tilde{p}$ as $t\to\infty$ together with (3.11), we have $\tilde{p}\in\bigcap_{j=1}^N\mathrm{Fix}(T_j)$. Hence $\tilde{p}\in\Gamma$.

Step 4. $p_k\to p^*=\Pi_\Gamma^H p_1$.

Since $p^*=\Pi_\Gamma^H p_1$ and $\tilde{p}\in\Gamma$, and since $p_{k+1}=\Pi_{C_{k+1}}^H p_1$ with $p^*\in\Gamma\subset C_{k+1}$, we have

$$\|p_{k+1}-p_1\|\leq\|p^*-p_1\|.$$

By recalling the weak lower semicontinuity of the norm, we have

$$\|p_1-p^*\|\leq\|p_1-\tilde{p}\|\leq\liminf_{t\to\infty}\|p_1-p_{k_t}\|\leq\limsup_{t\to\infty}\|p_1-p_{k_t}\|\leq\|p_1-p^*\|.$$

Recalling the uniqueness of the metric projection yields $\tilde{p}=p^*=\Pi_\Gamma^H p_1$. Also, $\lim_{t\to\infty}\|p_{k_t}-p_1\|=\|p^*-p_1\|=\|\tilde{p}-p_1\|$. Moreover, recalling the Kadec-Klee property of $H$ together with the facts that $p_{k_t}-p_1\rightharpoonup\tilde{p}-p_1$ and $\|p_{k_t}-p_1\|\to\|\tilde{p}-p_1\|$, we have $p_{k_t}\to\tilde{p}$. This completes the proof.

Corollary 3.3. Let $K\subset H$ be a nonempty closed and convex subset of a real Hilbert space $H$. For all $i\in\{1,2,\ldots,M\}$, let $g_i:K\times K\to\mathbb{R}\cup\{+\infty\}$ be a finite family of bifunctions satisfying Assumption 2.1. Assume that $\Gamma:=\bigcap_{i=1}^M EP(g_i)\neq\emptyset$, and consider the sequence $(p_k)$ generated by

$$\begin{cases}e_k=p_k+\xi_k(p_k-p_{k-1});\\ u_{i,k}=\operatorname{argmin}\{\mu g_i(e_k,\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}, & i=1,2,\ldots,M;\\ v_{i,k}=\operatorname{argmin}\{\mu g_i(u_{i,k},\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}, & i=1,2,\ldots,M;\\ i_k=\operatorname{argmax}\{\|v_{i,k}-p_k\|:i=1,2,\ldots,M\},\ \bar{v}_k=v_{i_k,k};\\ C_{k+1}=\{z\in C_k:\|\bar{v}_k-z\|^2\leq\|p_k-z\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle\};\\ p_{k+1}=\Pi_{C_{k+1}}^H p_1, & k\geq 1.\end{cases} \tag{3.16}$$

Assume that the condition (C1) holds. Then the sequence $(p_k)$ generated by (3.16) converges strongly to a point in $\Gamma$.

We now propose another variant of the hybrid iterative algorithm, embedded with the Halpern iterative algorithm [20].

Remark 3.4. Note for Algorithm 2 that the claim that $p_k$ is a common solution of the EP and the FPP provided that $p_{k+1}=p_k$ is, in general, not true. Therefore, a stopping criterion $k>k_{\max}$ is intrinsically implemented for some chosen sufficiently large number $k_{\max}$.

Algorithm 2 Parallel Hybrid Inertial Halpern-Extragradient Algorithm (Alg.2)
Initialization: Choose $q,p_0,p_1\in H$ arbitrarily, $K\subset H$ and $C_1=H$. Set $k\geq 1$, $\{\alpha_1,\ldots,\alpha_N\}\subset(0,1)$ and $\beta_k\in(0,1)$ such that $\sum_{j=1}^N\alpha_j=1$, $0<\mu<\min\left(\frac{1}{2d_1},\frac{1}{2d_2}\right)$, $\xi_k\in[0,1)$ and $\gamma_k\in(0,\infty)$.
Iterative Steps: Given $p_k\in H$, calculate $e_k$, $\bar{v}_k$, $w_k$ and $\bar{t}_k$ as follows:
  Step 1. Compute
    $e_k=p_k+\xi_k(p_k-p_{k-1})$;
    $u_{i,k}=\operatorname{argmin}\{\mu g_i(e_k,\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}$, $i=1,2,\ldots,M$;
    $v_{i,k}=\operatorname{argmin}\{\mu g_i(u_{i,k},\nu)+\frac{1}{2}\|e_k-\nu\|^2:\nu\in K\}$, $i=1,2,\ldots,M$;
    $i_k=\operatorname{argmax}\{\|v_{i,k}-p_k\|:i=1,2,\ldots,M\}$, $\bar{v}_k=v_{i_k,k}$;
    $w_k=\sum_{j=1}^N\alpha_j\big((1-\gamma_k)\mathrm{Id}+\gamma_k T_j\big)\bar{v}_k$;
    $t_{l,k}=\beta_k q+(1-\beta_k)w_k$;
    $l_k=\operatorname{argmax}\{\|t_{j,k}-p_k\|:j=1,2,\ldots,P\}$, $\bar{t}_k=t_{l_k,k}$.
  If $\bar{t}_k=w_k=\bar{v}_k=e_k=p_k$, then terminate: $p_k$ solves the problem (2.1). Else
  Step 2. Compute
    $C_{k+1}=\{z\in C_k:\|\bar{t}_k-z\|^2\leq\beta_k\|q-z\|^2+(1-\beta_k)(\|p_k-z\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle)\}$;
    $p_{k+1}=\Pi_{C_{k+1}}^H p_1$, $k\geq 1$.
  Set $k:=k+1$ and go back to Step 1.

     | Show Table
    DownLoad: CSV

Theorem 3.5. Let $\Gamma\neq\emptyset$ and let the following conditions:

(C1) $\sum_{k=1}^{\infty}\xi_k\|p_k-p_{k-1}\|<\infty$;

(C2) $0<a\leq\gamma_k\leq\min\{1-\eta_1,\ldots,1-\eta_N\}$ and $\lim_{k\to\infty}\beta_k=0$,

hold. Then Algorithm 2 solves the problem (2.1).

Proof. Observe that the set $C_{k+1}$ can be expressed in the following form:

$$C_{k+1}=\{z\in C_k:\|\bar{t}_k-z\|^2\leq\beta_k\|q-z\|^2+(1-\beta_k)(\|p_k-z\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle)\}.$$

Recalling the proof of Theorem 3.1, we deduce that the sets $\Gamma$ and $C_{k+1}$ are closed and convex and satisfy $\Gamma\subset C_{k+1}$ for all $k\geq 0$. Further, $(p_k)$ is bounded and

$$\lim_{k\to\infty}\|p_{k+1}-p_k\|=0. \tag{3.17}$$

Since $p_{k+1}=\Pi_{C_{k+1}}^H p_1\in C_{k+1}$, we have

$$\|\bar{t}_k-p_{k+1}\|^2\leq\beta_k\|q-p_{k+1}\|^2+(1-\beta_k)(\|p_k-p_{k+1}\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-p_{k+1},\,p_k-p_{k-1}\rangle).$$

Recalling the estimate (3.17) and the conditions (C1) and (C2), we obtain

$$\lim_{k\to\infty}\|\bar{t}_k-p_{k+1}\|=0.$$

Reasoning as above, we get

$$\lim_{k\to\infty}\|\bar{t}_k-p_k\|=0.$$

    The rest of the proof of Theorem 3.5 follows from the proof of Theorem 3.1 and is therefore omitted.

The following remark elaborates how the condition (C1) can be enforced in a computer-assisted iterative algorithm.

Remark 3.6. We remark here that the condition (C1) can easily be enforced in a computer-assisted iterative algorithm, since the value of $\|p_k-p_{k-1}\|$ is known before choosing $\xi_k$ such that $0\leq\xi_k\leq\hat{\xi}_k$ with

$$\hat{\xi}_k=\begin{cases}\min\left\{\dfrac{\sigma_k}{\|p_k-p_{k-1}\|},\,\xi\right\} & \text{if } p_k\neq p_{k-1};\\[2mm] \xi & \text{otherwise.}\end{cases}$$

Here $(\sigma_k)$ denotes a sequence of positive numbers such that $\sum_{k=1}^{\infty}\sigma_k<\infty$, and $\xi\in[0,1)$.
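The rule in Remark 3.6 can be coded directly. The helper below is our illustrative one-dimensional sketch (the name `inertial_parameter` and the choice $\sigma_k=1/k^2$ are assumptions, not taken from the paper); it returns an admissible $\xi_k$ with $\xi_k\|p_k-p_{k-1}\|\leq\sigma_k$, so the series in (C1) is dominated by $\sum_k\sigma_k<\infty$.

```python
def inertial_parameter(k, p_k, p_km1, xi=0.5):
    """Return xi_k <= xi_hat_k so that condition (C1) holds automatically:
    xi_k * |p_k - p_{k-1}| <= sigma_k and sum_k sigma_k < infinity."""
    sigma_k = 1.0 / (k * k)        # any summable sequence of positives works
    diff = abs(p_k - p_km1)
    if diff == 0.0:
        return xi                  # iterates coincide; any xi_k in [0, xi] is admissible
    return min(sigma_k / diff, xi)
```

In a vector-valued implementation, `abs` would simply be replaced by the norm of the difference of the iterates.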

As a direct application of Theorem 3.1, we obtain the following variant of the problem (2.1), namely the generalized split variational inequality problem associated with a finite family of single-valued monotone and hemicontinuous operators $A_i:K\to H$, $i\in\{1,2,\ldots,M\}$, defined on a nonempty closed convex subset $K$ of a real Hilbert space $H$. The set $VI(K,A)$ represents all solutions of the following variational inequality problem: find $\mu\in K$ such that $\langle A\mu,\,\nu-\mu\rangle\geq 0$ for all $\nu\in K$.

Theorem 3.7. Assume that $\Gamma=\bigcap_{i=1}^M VI(K,A_i)\cap\bigcap_{j=1}^N\mathrm{Fix}(T_j)\neq\emptyset$ and that the conditions (C1) and (C2) hold. Then the sequence $(p_k)$ generated by

$$\begin{cases}e_k=p_k+\xi_k(p_k-p_{k-1});\\ u_{i,k}=\Pi_K(e_k-\mu A_i(e_k)), & i=1,2,\ldots,M;\\ v_{i,k}=\Pi_K(e_k-\mu A_i(u_{i,k})), & i=1,2,\ldots,M;\\ i_k=\operatorname{argmax}\{\|v_{i,k}-p_k\|:i=1,2,\ldots,M\},\ \bar{v}_k=v_{i_k,k};\\ w_k=\sum_{j=1}^N\alpha_j\big((1-\gamma_k)\mathrm{Id}+\gamma_k T_j\big)\bar{v}_k;\\ C_{k+1}=\{z\in C_k:\|w_k-z\|^2\leq\|p_k-z\|^2+\xi_k^2\|p_k-p_{k-1}\|^2+2\xi_k\langle p_k-z,\,p_k-p_{k-1}\rangle\};\\ p_{k+1}=\Pi_{C_{k+1}}^H p_1, & k\geq 1,\end{cases} \tag{3.18}$$

solves the problem (2.1).

Proof. Observe that if we set $g_i(\mu,\nu)=\langle A_i(\mu),\,\nu-\mu\rangle$ for all $\mu,\nu\in K$, then each $A_i$ being $L$-Lipschitz continuous infers that $g_i$ is Lipschitz-type continuous with $d_1=d_2=\frac{L}{2}$. Moreover, the pseudomonotonicity of $A_i$ ensures the pseudomonotonicity of $g_i$. Recalling the assumptions (A3)–(A4) and Algorithm 1, note that

$$u_{i,k}=\operatorname{argmin}\left\{\mu\langle A_i(e_k),\,\nu-e_k\rangle+\tfrac{1}{2}\|e_k-\nu\|^2:\nu\in K\right\};\quad v_{i,k}=\operatorname{argmin}\left\{\mu\langle A_i(u_{i,k}),\,\nu-u_{i,k}\rangle+\tfrac{1}{2}\|e_k-\nu\|^2:\nu\in K\right\}$$

can be transformed into

$$u_{i,k}=\operatorname{argmin}\left\{\tfrac{1}{2}\|\nu-(e_k-\mu A_i(e_k))\|^2:\nu\in K\right\}=\Pi_K(e_k-\mu A_i(e_k));\quad v_{i,k}=\operatorname{argmin}\left\{\tfrac{1}{2}\|\nu-(e_k-\mu A_i(u_{i,k}))\|^2:\nu\in K\right\}=\Pi_K(e_k-\mu A_i(u_{i,k})).$$

Hence, recalling $g_i(\mu,\nu)=\langle A_i(\mu),\,\nu-\mu\rangle$ for all $\mu,\nu\in K$ and all $i\in\{1,2,\ldots,M\}$ in Theorem 3.1, we have the desired result.
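The two projection steps in (3.18) are exactly Korpelevich's extragradient iteration. The sketch below isolates just those two steps for a single monotone operator on a box in $\mathbb{R}^n$; it drops the inertial term, the parallel families $A_i$, $T_j$ and the hybrid projection onto $C_{k+1}$, so it is a minimal illustration under our own choices (operator $A(p)=p$, step size $\mu=0.3$), not the full algorithm.

```python
import numpy as np

def proj_box(p, lo=-1.0, hi=1.0):
    # Metric projection onto the box K = [lo, hi]^n
    return np.clip(p, lo, hi)

def extragradient_vi(A, p, mu=0.3, iters=200):
    # u_k   = Pi_K(p_k - mu * A(p_k))   (forward step)
    # p_k+1 = Pi_K(p_k - mu * A(u_k))   (extragradient step)
    for _ in range(iters):
        u = proj_box(p - mu * A(p))
        p = proj_box(p - mu * A(u))
    return p

A = lambda p: p                              # monotone; here VI(K, A) = {0}
p_star = extragradient_vi(A, np.array([0.9, -0.7]))
```

For this linear monotone operator the iterates contract geometrically towards the unique solution of the variational inequality.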

This section demonstrates the viability of the proposed algorithm via suitable numerical experiments.

Example 4.1. Let $H=\mathbb{R}$ be the set of all real numbers with the inner product $\langle p,q\rangle=pq$ for all $p,q\in\mathbb{R}$ and the induced usual norm $|\cdot|$. For each $i\in\{1,2,\ldots,M\}$, let the family of pseudomonotone bifunctions $g_i:K\times K\to\mathbb{R}$ on $K=[0,1]\subset H$ be defined by $g_i(p,q)=S_i(p)(q-p)$, where

$$S_i(p)=\begin{cases}0, & 0\leq p\leq\lambda_i;\\ \sin(p-\lambda_i)+\exp(p-\lambda_i)-1, & \lambda_i\leq p\leq 1,\end{cases}$$

and $0<\lambda_1<\lambda_2<\cdots<\lambda_M<1$. Note that $g_i(p,q)\geq 0$ for all $q\in[0,1]$ if and only if $0\leq p\leq\lambda_i$, so $EP(g_i)=[0,\lambda_i]$. Consequently, $\bigcap_{i=1}^M EP(g_i)=[0,\lambda_1]$. For each $j\in\{1,2,\ldots,N\}$, let the family of operators $T_j:\mathbb{R}\to\mathbb{R}$ be defined by

$$T_j(p)=\begin{cases}\dfrac{-3p}{j}, & p\in[0,\infty);\\[1mm] p, & p\in(-\infty,0).\end{cases}$$

Clearly, $T_j$ defines a finite family of $\eta$-demimetric operators with $\bigcap_{j=1}^N\mathrm{Fix}(T_j)=\{0\}$. Hence $\Gamma=\left(\bigcap_{i=1}^M EP(g_i)\right)\cap\left(\bigcap_{j=1}^N\mathrm{Fix}(T_j)\right)=\{0\}$. In order to compute the numerical values of Algorithm 1, we choose $\xi=0.5$, $\alpha_k=\frac{1}{100k+1}$, $\mu=\frac{1}{7}$, $\lambda_i=\frac{i}{M+1}$, $M=2\times 10^5$ and $N=3\times 10^5$, and set

$$\xi_k=\begin{cases}\min\left\{\dfrac{1}{k^2\|p_k-p_{k-1}\|},\,0.5\right\} & \text{if } p_k\neq p_{k-1};\\[2mm] 0.5 & \text{otherwise.}\end{cases}$$

Observe that the expression

$$u_{i,k}=\operatorname{argmin}\left\{\mu S_i(e_k)(\nu-e_k)+\tfrac{1}{2}(\nu-e_k)^2:\nu\in[0,1]\right\}$$

in Algorithm 1 is equivalent to the relation $u_{i,k}=e_k-\mu S_i(e_k)$ for all $i\in\{1,2,\ldots,M\}$. Similarly, $v_{i,k}=e_k-\mu S_i(u_{i,k})$ for all $i\in\{1,2,\ldots,M\}$. Hence, we can compute the intermediate approximation $\bar{v}_k$, which is farthest from $e_k$ among the $v_{i,k}$, $i\in\{1,2,\ldots,M\}$. Generally, at the $k$th step, if $E_k=\|p_k-p_{k-1}\|=0$, then $p_k\in\Gamma$ and $p_k$ is the required solution of the problem. The terminating criterion is set as $E_k<10^{-6}$. The values of Algorithm 1 and its variant are listed in the following table (see Table 1):

Table 1. Numerical values of Algorithm 1 (number of iterations and CPU time in seconds).

No. | Iter. (Alg.1, ξk = 0) | Iter. (Alg.1, ξk ≠ 0) | CPU time (Alg.1, ξk = 0) | CPU time (Alg.1, ξk ≠ 0)
Choice 1: p0 = (5), p1 = (2) | 87 | 75 | 0.088153 | 0.073646
Choice 2: p0 = (4.3), p1 = (1.7) | 88 | 79 | 0.072250 | 0.068662
Choice 3: p0 = (7), p1 = (3) | 99 | 92 | 0.062979 | 0.051163
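The explicit updates $u_{i,k}=e_k-\mu S_i(e_k)$ and $v_{i,k}=e_k-\mu S_i(u_{i,k})$ of Example 4.1 can be sketched in a few lines of Python. The snippet below is a deliberately simplified, non-inertial and non-parallel illustration with a single sample $\lambda_i$ (our choice $\lambda=0.25$): it iterates only the double extragradient step and omits the operators $T_j$, the max-selection over $i$ and the hybrid projection, so it is not the full Algorithm 1.

```python
import math

LAM = 0.25          # a sample lambda_i (illustrative choice)
MU = 1.0 / 7.0      # the step size mu used in Example 4.1

def S(p, lam=LAM):
    # The piecewise map S_i from Example 4.1 on K = [0, 1]
    return 0.0 if p <= lam else math.sin(p - lam) + math.exp(p - lam) - 1.0

p = 0.9             # starting point in [0, 1]
for _ in range(300):
    u = p - MU * S(p)      # u_k = e_k - mu * S(e_k)
    p = p - MU * S(u)      # v_k = e_k - mu * S(u_k), taken as the next iterate
# p decreases monotonically towards the solution set EP(g) = [0, lam]
```

Since $S\geq 0$ and $\mu S(u_k)$ never exceeds the current gap to $\lambda$, the iterates stay inside $[0,1]$ and the projection onto $K$ is never active here.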


The values of the non-inertial and non-parallel variant of Algorithm 1 are listed in the following table (see Table 2):

Table 2. Numerical values of the non-inertial, non-parallel variant of Algorithm 1.

Choice | No. of Iter. | CPU time (s)
Choice 1: p0 = (5), p1 = (2) | 111 | 0.091439
Choice 2: p0 = (4.3), p1 = (1.7) | 106 | 0.089872
Choice 3: p0 = (7), p1 = (3) | 104 | 0.081547


The error plots of $E_k$ for Algorithm 1 and its variants, for each choice in Tables 1 and 2, are illustrated in Figure 1.

    Figure 1.  Comparison between Algorithm 1 and its variants in view of Example 4.1.

Example 4.2. Let $H=\mathbb{R}^n$ with the norm $\|p\|=\left(\sum_{i=1}^n|p_i|^2\right)^{1/2}$ and the inner product $\langle p,q\rangle=\sum_{i=1}^n p_iq_i$ for all $p=(p_1,p_2,\ldots,p_n)\in\mathbb{R}^n$ and $q=(q_1,q_2,\ldots,q_n)\in\mathbb{R}^n$. The set $K$ is given by $K=\{p\in\mathbb{R}_+^n:|p_k|\leq 1,\ k=1,2,\ldots,n\}$. Consider the following problem:

$$\text{find } p^*\in\Gamma:=\bigcap_{i=1}^M EP(g_i)\cap\bigcap_{j=1}^N\mathrm{Fix}(T_j),$$

where $g_i:K\times K\to\mathbb{R}$ is defined by

$$g_i(p,q)=\sum_{k=1}^n S_{i,k}(q_k^2-p_k^2),\quad i\in\{1,2,\ldots,M\},$$

where $S_{i,k}\in(0,1)$ is randomly generated for all $i\in\{1,\ldots,M\}$ and $k\in\{1,\ldots,n\}$. For each $j\in\{1,2,\ldots,N\}$, let the family of operators $T_j:H\to H$ be defined by

$$T_j(p)=\begin{cases}\dfrac{-4p}{j}, & p\in[0,\infty)^n;\\[1mm] p, & \text{otherwise},\end{cases}$$

for all $p\in H$. It is easy to observe that $\Gamma=\bigcap_{i=1}^M EP(g_i)\cap\bigcap_{j=1}^N\mathrm{Fix}(T_j)=\{0\}$. The values of Algorithm 1 and its non-inertial variant are listed in the following table (see Table 3):

Table 3. Numerical values of Algorithm 1 (number of iterations and CPU time in seconds).

No. | Iter. (Alg.1, ξk = 0) | Iter. (Alg.1, ξk ≠ 0) | CPU time (Alg.1, ξk = 0) | CPU time (Alg.1, ξk ≠ 0)
Choice 1: p0 = (5), p1 = (2), n = 5 | 46 | 35 | 0.061975 | 0.054920
Choice 2: p0 = (1), p1 = (1.5), n = 10 | 38 | 27 | 0.056624 | 0.040587
Choice 3: p0 = (8), p1 = (3), n = 30 | 50 | 37 | 0.055844 | 0.041246


The values of the non-inertial and non-parallel variant of Algorithm 1 are listed in the following table (see Table 4):

Table 4. Numerical values of the non-inertial, non-parallel variant of Algorithm 1.

Choice | No. of Iter. | CPU time (s)
Choice 1: p0 = (5), p1 = (2), n = 5 | 81 | 0.072992
Choice 2: p0 = (1), p1 = (1.5), n = 10 | 75 | 0.065654
Choice 3: p0 = (8), p1 = (3), n = 30 | 79 | 0.068238


The error plots of $E_k$ (down to the tolerance $10^{-6}$) for Algorithm 1 and its variants, for each choice in Tables 3 and 4, are illustrated in Figure 2.

    Figure 2.  Comparison between Algorithm 1 and its variants in view of Example 4.2.

Example 4.3. Let $H=L^2([0,1])$ with the induced norm $\|p\|=\left(\int_0^1|p(s)|^2\,ds\right)^{1/2}$ and the inner product $\langle p,q\rangle=\int_0^1 p(s)q(s)\,ds$ for all $p,q\in L^2([0,1])$ and $s\in[0,1]$. The feasible set $K$ is given by $K=\{p\in L^2([0,1]):\|p\|\leq 1\}$. Consider the following problem:

$$\text{find } \bar{p}\in\Gamma:=\bigcap_{i=1}^M EP(g_i)\cap\bigcap_{j=1}^N\mathrm{Fix}(T_j),$$

where $g_i(p,q)$ is defined as $\langle S_ip,\,q-p\rangle$ with the operator $S_i:L^2([0,1])\to L^2([0,1])$ given by

$$S_i(p(s))=\max\left\{0,\frac{p(s)}{i}\right\},\quad i\in\{1,2,\ldots,M\},\ s\in[0,1].$$

Each $g_i$ is monotone and hence pseudomonotone on $K$. For each $j\in\{1,2,\ldots,N\}$, let the family of operators $T_j:H\to H$ be defined by

$$T_j(p)=\Pi_K(p)=\begin{cases}\dfrac{p}{\|p\|}, & \|p\|>1;\\[1mm] p, & \|p\|\leq 1.\end{cases}$$

Then $T_j$ is a finite family of $\eta$-demimetric operators. It is easy to observe that $\Gamma=\bigcap_{i=1}^M EP(g_i)\cap\bigcap_{j=1}^N\mathrm{Fix}(T_j)=\{0\}$. Choose $M=50$ and $N=100$. The values of Algorithm 1 and its non-inertial variant have been computed for different choices of $p_0$ and $p_1$ in the following table (see Table 5):

Table 5. Numerical values of Algorithm 1 (number of iterations and CPU time in seconds).

No. | Iter. (Alg.1, ξk = 0) | Iter. (Alg.1, ξk ≠ 0) | CPU time (Alg.1, ξk = 0) | CPU time (Alg.1, ξk ≠ 0)
Choice 1: p0 = exp(3s)·sin(s), p1 = 3s² − s | 10 | 5 | 1.698210 | 0.981216
Choice 2: p0 = 1/(1+s), p1 = s²/10 | 14 | 6 | 2.884623 | 1.717623
Choice 3: p0 = cos(3s)/7, p1 = s | 16 | 5 | 2.014687 | 1.354564


The values of the non-inertial and non-parallel variant of Algorithm 1 have been computed for different choices of $p_0$ and $p_1$ in the following table (see Table 6):

Table 6. Numerical values of the non-inertial, non-parallel variant of Algorithm 1.

Choice | No. of Iter. | CPU time (s)
Choice 1: p0 = exp(3s)·sin(s), p1 = 3s² − s | 23 | 2.651760
Choice 2: p0 = 1/(1+s), p1 = s²/10 | 27 | 3.102587
Choice 3: p0 = cos(3s)/7, p1 = s | 26 | 2.903349


The error plots of $E_k$ (with tolerance $10^{-4}$) for Algorithm 1 and its variants, for each choice in Tables 5 and 6, are illustrated in Figure 3.

    Figure 3.  Comparison between Algorithm 1 and its variants in view of Example 4.3.

We can see from Tables 1–6 and Figures 1–3 that Algorithm 1 outperforms its variants with respect to the reduction in error, the time consumption, and the number of iterations required for convergence towards the common solution.

In this paper, we have constructed some variants of the classical extragradient algorithm embedded with inertial extrapolation and hybrid projection techniques. We have shown that the algorithm converges strongly to the common solution of the problem (2.1). A useful instance of the main result (Theorem 3.1), as well as appropriate examples demonstrating the viability of the algorithm, have also been incorporated. It is worth mentioning that the problem (2.1) is a natural mathematical model for various real-world problems. As a consequence, our theoretical framework constitutes an important topic for future research.

    The authors declare that they have no competing interests.

    The authors wish to thank the anonymous referees for their comments and suggestions.

The author Yasir Arfat acknowledges the support via the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut's University of Technology Thonburi, Thailand (Grant No. 16/2562).

The authors Y. Arfat, P. Kumam, W. Kumam and K. Sitthithakerngkiet acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research was funded by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-65-KNOW-28.


    Acknowledgments



    This work was not supported by any funding.

    Authors contributions



    Study concept and design: DTNL, LK, TTS, and NMD; acquisition of data: DTNL, LK, TTS, and NMD; analysis and interpretation of data: DTNL, LK, TTS, and NMD; drafting of the manuscript: DTNL and NMD; critical revision of the manuscript: DTNL and NMD; statistical analysis: DTNL and TTS; study supervision: LK and TTS; and manuscript approval: DTNL, LK, LTD, NHH, TTS, and NMD.

    Conflict of interest



All the authors declare that there are no biomedical financial interests or potential conflicts of interest related to this manuscript.

    [1] Uller W, Alomari AI, Richter GT (2014) Arteriovenous malformation. Semin Pediatr Surg 23: 203-207. https://doi.org/10.1053/j.sempedsurg.2014.07.005
[2] Pekkola J, Lappalainen K, Vuola P, et al. (2013) Head and neck arteriovenous malformations: Results of ethanol sclerotherapy. AJNR Am J Neuroradiol 34: 198-204. https://doi.org/10.3174/ajnr.A3180
    [3] Mulliken JB, Fishman SJ, Burrows PE (2000) Vascular anomalies. Curr Probl Surg 37: 517-584. https://doi.org/10.1016/S0011-3840(00)80013-1
    [4] Mulliken JB, Burrows PE, Fishman SJ (2013) Mulliken and Young's vascular anomalies: Hemangiomas and malformations. Oxford University Press. https://doi.org/10.1093/med/9780195145052.001.0001
    [5] Kohout MP, Hansen M, Pribaz JJ, et al. (1998) Arteriovenous malformations of the head and neck: natural history and management. Plast Reconstr Surg 102: 643-654. https://doi.org/10.1097/00006534-199809010-00006
    [6] Liu AS, Mulliken JB, Zurakowski D, et al. (2010) Extracranial arteriovenous malformations: natural progression and recurrence after treatment. Plast Reconstr Surg 125: 1185-1194. https://doi.org/10.1097/PRS.0b013e3181d18070
    [7] Wu JK, Bisdorff A, Gelbert F, et al. (2005) Auricular arteriovenous malformation: evaluation, management, and outcome. Plast Reconstr Surg 115: 985-995. https://doi.org/10.1097/01.PRS.0000154207.87313
    [8] Lee BB, Do YS, Yakes W, et al. (2004) Management of arteriovenous malformations: a multidisciplinary approach. J Vasc Surg 39: 590-600. https://doi.org/10.1016/j.jvs.2003.10.048
    [9] Richter GT, Suen JY (2010) Clinical course of arteriovenous malformations of the head and neck: a case series. Otolaryngol Head Neck Surg 142: 184-190. https://doi.org/10.1016/j.otohns.2009.10.023
    [10] Rosenberg TL, Suen JY, Richter GT (2018) Arteriovenous malformations of the head and neck. Otolaryngol Clin North Am 51: 185-195. https://doi.org/10.1016/j.otc.2017.09.005
    [11] Jeong HS, Baek CH, Son YI, et al. (2006) Treatment for extracranial arteriovenous malformation of the head and neck. Acta Otolaryngol 126: 295-300. https://doi.org/10.1080/00016480500388950
    [12] Kim JY, Kim DI, Do YS, et al. (2006) Surgical treatment for congenital arteriovenous malformation: 10 years' experience. Eur J Vasc Endovasc Surg 32: 101-106. https://doi.org/10.1016/j.ejvs.2006.01.004
    [13] Fernández-Alvarez V, Suárez C, De Bree R, et al. (2020) Management of extracranial arteriovenous malformations of the head and neck. Auris Nasus Larynx 47: 181-190. https://doi.org/10.1016/j.anl.2019.11.008
    [14] Richter GT, Suen JY (2011) Pediatric extracranial arteriovenous malformations. Curr Opin Otolaryngol Head Neck Surg 19: 455-461. https://doi.org/10.1097/MOO.0b013e32834cd57c
    [15] Koshima I, Nanba Y, Tsutsui T, et al. (2003) Free perforator flap for the treatment of defects after resection of huge arteriovenous malformations in the head and neck regions. Ann Plast Surg 51: 194-199. https://doi.org/10.1097/01.SAP.0000044706.58478.73
    [16] Fowell C, Jones R, Nishikawa H, et al. (2016) Arteriovenous malformations of the head and neck: current concepts in management. Br J Oral Maxillofac Surg 54: 482-487. https://doi.org/10.1016/j.bjoms.2016.01.034
    [17] Greene AK, Orbach DB (2011) Management of arteriovenous malformations. Clin Plast Surg 38: 95-106. https://doi.org/10.1016/j.cps.2010.08.005
    [18] Lu L, Bischoff J, Mulliken JB, et al. (2011) Progression of arteriovenous malformation: possible role of vasculogenesis. Plast Reconstr Surg 128: 260e-269e. https://doi.org/10.1097/PRS.0b013e3182268afd
    [19] Colletti G, Dalmonte P, Moneghini L, et al. (2015) Adjuvant role of anti-angiogenic drugs in the management of head and neck arteriovenous malformations. Med Hypotheses 85: 298-302. https://doi.org/10.1016/j.mehy.2015.05.016
  • This article has been cited by:

    1. Yasir Arfat, Poom Kumam, Supak Phiangsungnoen, Muhammad Aqeel Ahmad Khan, Hafiz Fukhar-ud-din, An inertially constructed projection based hybrid algorithm for fixed point and split null point problems, 2023, 8, 2473-6988, 6590, 10.3934/math.2023333
    2. Yasir Arfat, Poom Kumam, Muhammad Aqeel Ahmad Khan, Thidaporn Seangwattana, Zaffar Iqbal, Some variants of the hybrid extragradient algorithm in Hilbert spaces, 2024, 2024, 1029-242X, 10.1186/s13660-023-03052-7
    3. Yasir Arfat, Supak Phiangsungnoen, Poom Kumam, Muhammad Aqeel Ahmad Khan, Jamshad Ahmad, Some variant of Tseng splitting method with accelerated Visco-Cesaro means for monotone inclusion problems, 2023, 8, 2473-6988, 24590, 10.3934/math.20231254
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
