Research article

Heart disease detection using inertial Mann relaxed CQ algorithms for split feasibility problems

  • This study investigates the weak convergence of the sequences generated by the inertial relaxed CQ algorithm with Mann's iteration for solving the split feasibility problem in real Hilbert spaces. Moreover, we present the advantage of our algorithm, which admits a wider range of parameters than recent methods. Finally, we apply our algorithm to a classification problem, using the heart disease dataset collected from the UCI machine learning repository as a training set. The results show that our algorithm performs better than many machine learning methods, as well as the extreme learning machine combined with the fast iterative shrinkage-thresholding algorithm (FISTA) and the inertial relaxed CQ algorithm (IRCQA), in terms of accuracy, precision, recall, and F1-score.

    Citation: Suthep Suantai, Pronpat Peeyada, Andreea Fulga, Watcharaporn Cholamjiak. Heart disease detection using inertial Mann relaxed CQ algorithms for split feasibility problems[J]. AIMS Mathematics, 2023, 8(8): 18898-18918. doi: 10.3934/math.2023962


    In this paper, we study the split feasibility problem (SFP), defined on two nonempty closed and convex subsets $C$ and $Q$ of real Hilbert spaces $H_1$ and $H_2$, respectively, where $A: H_1 \to H_2$ is a bounded linear operator. The SFP is to

    find $\mu \in C$ such that $A\mu \in Q$, (1.1)

    provided such a $\mu$ exists. The solution set of the SFP (1.1) is denoted by $\Omega := \{\mu \in C : A\mu \in Q\}$.

    In 1994, Censor and Elfving [8] first introduced an algorithm for solving the SFP (1.1); computing each of its iterations requires the inverse operator $A^{-1}$. Since then, many mathematicians (see [3,9,10,14,34,37]) have applied the SFP (1.1) to real-world problems such as signal and image processing, automatic control systems, machine learning, and many more.

    Byrne [7] was the first to propose the popular CQ algorithm for solving the SFP (1.1), which generates a sequence $\{\mu_n\}$ by the recursive procedure

    $\mu_{n+1} = P_C(\mu_n - \lambda A^T(I - P_Q)A\mu_n), \quad n \ge 1,$ (1.2)

    where $\lambda$ belongs to the open interval $(0, \frac{2}{\|A\|^2})$ and $P_C$, $P_Q$ are the metric projections onto $C$ and $Q$, respectively. Another famous algorithm for convex minimization problems is the gradient projection algorithm (GPA), generated as follows:

    $\mu_{n+1} = P_C(\mu_n - \lambda_n \nabla f(\mu_n)), \quad n \ge 1,$ (1.3)

    where $f: H_1 \to (-\infty, +\infty]$ is a lower semicontinuous convex function and the stepsize $\lambda_n$ at iteration $n$ is chosen in the interval $(0, \frac{2}{L})$, with $L$ the Lipschitz constant of $\nabla f$. It is well known that the GPA (1.3) reduces to a method for the SFP (1.1) by setting $f(\mu) := \frac{1}{2}\|(I - P_Q)A\mu\|^2$, for which $\nabla f(\mu) = A^T(I - P_Q)A\mu$. The Lipschitz condition thus requires the stepsize $\lambda_n$ of algorithms (1.2) and (1.3) to satisfy $\lambda_n \in (0, \frac{2}{\|A\|^2})$. This means that to compute the CQ algorithm, the matrix norm of $A$ needs to be found, which is generally not easy in practice.
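To make the reduction concrete, the following sketch runs the projected gradient iteration (1.3) with $f(\mu) = \frac{1}{2}\|(I-P_Q)A\mu\|^2$ on a small toy instance; the operator $A$ and the sets $C$, $Q$ below are illustrative choices, not taken from the paper.

```python
import numpy as np

def solve_sfp_gpa(A, proj_C, proj_Q, mu0, n_iter=500):
    """Gradient projection sketch for the SFP: find mu in C with A mu in Q.

    Minimizes f(mu) = 0.5 * ||(I - P_Q) A mu||^2 with the constant step
    lam = 1 / ||A||^2, which lies in the admissible range (0, 2/||A||^2).
    """
    lam = 1.0 / np.linalg.norm(A, 2) ** 2          # ||A||^2 = max eigenvalue of A^T A
    mu = np.asarray(mu0, dtype=float)
    for _ in range(n_iter):
        grad = A.T @ (A @ mu - proj_Q(A @ mu))     # grad f(mu) = A^T (I - P_Q) A mu
        mu = proj_C(mu - lam * grad)
    return mu

# Toy instance: C = nonnegative orthant of R^2, Q = closed unit ball of R^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
mu = solve_sfp_gpa(A, proj_C, proj_Q, np.array([3.0, 3.0]))
```

After the run, `mu` is (approximately) a point of $C$ whose image $A\mu$ lies in $Q$, i.e., an approximate solution of (1.1).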

    Later on, Byrne [7] presented a different stepsize $\{\lambda_n\}$ that avoids computing matrix norms. Yang [41] was also interested in a stepsize $\{\lambda_n\}$ with no connection to matrix norms, considering the GPA (1.3) for the variational inequality problem. Since then, many different stepsizes $\{\lambda_n\}$ have been presented by many mathematicians; see [22,35,36,41].

    Another such stepsize was presented in 2018 by Pham et al. [2]; it is generated as follows:

    $\lambda_n = \dfrac{\beta_n}{\eta_n}, \quad n \ge 1,$ (1.4)

    where

    $\eta_n = \max\{1, \|\nabla f_n(\mu_n)\|\}, \qquad \lim_{n\to\infty}\beta_n = 0, \qquad \sum_{n=1}^{\infty}\beta_n = \infty.$

    The algorithm (1.2) with the stepsize (1.4) was used to solve the SFP (1.1). For recent results on the SFP with the stepsize (1.4), see [13,19,23,38,43].

    Finding ways to make algorithms converge faster is another topic many authors are interested in studying. The inertial technique is one way of solving the smooth convex minimization problem; it was first proposed by Polyak [27] in 1964. Polyak's algorithm, called the heavy ball method, is a modification of the two-step iterative method: the next iterate is defined by making use of the previous two iterates. Later on, the heavy ball method was improved by Nesterov [25] to speed up the rate of convergence. It is notable that the inertial term dramatically improves an algorithm's performance and has nice convergence properties (see [10]). Since then, the heavy ball method has been widely used to solve a wide variety of problems in the optimization field, as seen in [12,24,30,33].

    In 2020, Sahu et al. [28] proposed an inertial relaxed CQ algorithm for solving the SFP (1.1) in a real Hilbert space by combining the inertial technique of Alvarez and Attouch [1] with the Byrne algorithm (1.2). This algorithm generates a sequence $\{\mu_n\}$ as follows:

    $\begin{cases} \nu_n = \mu_n + \sigma_n(\mu_n - \mu_{n-1}),\\ \mu_{n+1} = P_{C_n}(\nu_n - \lambda A^T(I - P_{Q_n})A\nu_n), \end{cases} \quad n \ge 1,$ (1.5)

    where the stepsize parameter $\lambda$ still lies in an interval involving the norm of the operator $A$, and the extrapolation factor satisfies $\sigma_n \in [0, \bar{\sigma}_n]$ with $\sigma \in [0, 1)$ such that

    $\bar{\sigma}_n = \min\left\{\sigma,\ \dfrac{1}{\max\{n^2\|\mu_n - \mu_{n-1}\|^2,\ n^2\|\mu_n - \mu_{n-1}\|\}}\right\}, \quad n \ge 1.$ (1.6)

    The weak convergence of the sequence $\{\mu_n\}$ generated by (1.5) was proved under conditions on the extrapolation factor (1.6) and the stepsize parameter $\lambda$.

    The development of inertial techniques has received significant attention. Subsequently, Beck and Teboulle [5] introduced the well-known fast iterative shrinkage-thresholding algorithm (FISTA). The algorithm chooses $t_1 = 1$, $\lambda > 0$ and computes

    $\begin{cases} \nu_n = P_{C_n}(\mu_n - \lambda A^T(I - P_Q)A\mu_n),\\ t_{n+1} = \dfrac{1 + \sqrt{1 + 4t_n^2}}{2},\\ \sigma_n = \dfrac{t_n - 1}{t_{n+1}},\\ \mu_{n+1} = \nu_n + \sigma_n(\nu_n - \nu_{n-1}). \end{cases}$ (1.7)

    FISTA has received a lot of attention because of its excellent computational results, and many mathematicians have implemented it in many problem applications (see [21] and the references therein). However, this inertial technique is restrictive in the computation of the sequence $\{\sigma_n\}$.

    To overcome the limited choice of the parameter $\sigma_n$ in Beck and Teboulle [5], Gibali et al. [17] modified the inertial relaxed CQ algorithm (IRCQA) in a real Hilbert space. This algorithm is generated as follows:

    $\begin{cases} \nu_n = \mu_n + \sigma_n(\mu_n - \mu_{n-1}),\\ \mu_{n+1} = P_{C_n}(\nu_n - \lambda_n A^T(I - P_{Q_n})A\nu_n), \end{cases} \quad n \ge 1.$ (1.8)

    They proved that if $\lambda_n = \dfrac{\tau_n f_n(\mu_n)}{\eta_n^2}$, where $\eta_n = \max\{1, \|\nabla f_n(\mu_n)\|\}$, and $\sigma_n \in [0, \bar{\sigma}_n]$, where

    $\bar{\sigma}_n = \begin{cases} \min\left\{\sigma,\ \dfrac{\epsilon_n}{\|\mu_n - \mu_{n-1}\|^2}\right\}, & \text{if } \mu_n \ne \mu_{n-1},\\ \sigma, & \text{otherwise}, \end{cases}$ (1.9)

    such that $\sum_{n=0}^{\infty}\sigma_n\|\mu_n - \mu_{n-1}\|^2 < \infty$, then the sequence $\{\mu_n\}$ generated by (1.8) converges weakly to an element of the solution set of the SFP (1.1). The advantage of the IRCQA (1.8) is that the extrapolation factor $\{\sigma_n\}$ can be chosen in many ways under the control condition (1.9), and the stepsize $\{\lambda_n\}$ is built without the matrix norm.

    In this paper, we propose inertial Mann relaxed CQ algorithms to solve split feasibility problems in Hilbert spaces. Our work is inspired by the iterative methods developed by Dang et al. [10] and Gibali et al. [17]. We apply our main result to solve a data classification problem in machine learning and then compare the performance of our algorithm with FISTA and IRCQA.

    Let $H_1$ and $H_2$ be real Hilbert spaces. The strong (respectively, weak) convergence of a sequence $\{\mu_n\}$ to $\mu$ is denoted by $\mu_n \to \mu$ (respectively, $\mu_n \rightharpoonup \mu$). Given a bounded linear operator $A: H_1 \to H_2$, $A^T$ denotes the adjoint of $A$. For any sequence $\{\mu_n\} \subset H_1$, $\omega_\omega(\mu_n)$ denotes the weak limit set of $\{\mu_n\}$, that is,

    $\omega_\omega(\mu_n) := \{\mu \in H_1 : \mu_{n_j} \rightharpoonup \mu \text{ for some subsequence } \{n_j\} \text{ of } \{n\}\}.$

    Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H_1$. For each $\mu \in H_1$, there exists a unique $x \in C$ such that

    $\|\mu - x\| \le \|\mu - \nu\|, \quad \forall \nu \in C.$

    This $x$ is called the metric projection of $\mu$ onto $C$ and is denoted by $P_C\mu$.
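When $C$ has simple structure, $P_C$ is available in closed form. A minimal sketch for a closed ball (an illustrative example, not part of the paper's setting), together with a numerical check of the nearest-point property:

```python
import numpy as np

def proj_ball(mu, center, r):
    """Metric projection P_C(mu) onto the closed ball C = B(center, r)."""
    d = mu - center
    nrm = np.linalg.norm(d)
    return mu.copy() if nrm <= r else center + (r / nrm) * d

mu = np.array([3.0, 4.0])
p = proj_ball(mu, np.zeros(2), 1.0)   # nearest point of the unit ball to mu

# Defining property: p is no farther from mu than any other point of C.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=2)
    v = v / np.linalg.norm(v) * rng.uniform(0.0, 1.0)   # random point of the unit ball
    assert np.linalg.norm(mu - p) <= np.linalg.norm(mu - v) + 1e-12
```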

    Lemma 2.1. [35] Let $f: H_1 \to \mathbb{R}$ be the function defined by

    $f(\mu) := \frac{1}{2}\|A\mu - P_Q A\mu\|^2, \quad \mu \in H_1.$

    Then the following assertions hold:

    (i) $f$ is convex and differentiable;

    (ii) $f$ is weakly lower semicontinuous on $H_1$;

    (iii) $\nabla f(\mu) = A^T(I - P_Q)A\mu$ for all $\mu \in H_1$;

    (iv) $\nabla f$ is $\frac{1}{\|A\|^2}$-inverse strongly monotone, that is,

    $\langle \nabla f(\mu) - \nabla f(\nu),\, \mu - \nu\rangle \ge \frac{1}{\|A\|^2}\|\nabla f(\mu) - \nabla f(\nu)\|^2, \quad \forall \mu, \nu \in H_1.$

    Lemma 2.2. [1] Let $\{\kappa_n\}$, $\{\delta_n\}$ and $\{\alpha_n\}$ be sequences in $[0, +\infty)$ such that $\kappa_{n+1} \le \kappa_n + \alpha_n(\kappa_n - \kappa_{n-1}) + \delta_n$ for all $n \ge 1$, $\sum_{n=1}^{\infty}\delta_n < +\infty$, and there exists a real number $\alpha$ with $0 \le \alpha_n \le \alpha < 1$ for all $n \ge 1$. Then the following hold:

    (i) $\sum_{n\ge1}[\kappa_n - \kappa_{n-1}]_+ < +\infty$, where $[t]_+ = \max\{t, 0\}$;

    (ii) there exists $\kappa^* \in [0, +\infty)$ such that $\lim_{n\to+\infty}\kappa_n = \kappa^*$.

    Lemma 2.3. [40] Consider the SFP (1.1) with the function $f$ as in Lemma 2.1, and let $\lambda > 0$ and $\mu^* \in H_1$. The point $\mu^*$ solves the SFP (1.1) if and only if it solves the fixed point equation:

    $\mu^* = P_C(\mu^* - \lambda \nabla f(\mu^*)) = P_C(\mu^* - \lambda A^T(I - P_Q)A\mu^*).$ (2.1)

    Lemma 2.4. [26] Let $\{\mu_n\}$ be a sequence in a real Hilbert space $H_1$ for which there exists a nonempty closed and convex subset $\Omega$ of $H_1$ satisfying:

    $\lim_{n\to\infty}\|\mu_n - \mu\|$ exists for every $\mu \in \Omega$, and every weak cluster point of $\{\mu_n\}$ belongs to $\Omega$.

    Then there exists $\mu^* \in \Omega$ such that $\mu_n \rightharpoonup \mu^*$.

    Lemma 2.5. [32] Let $X$ be a Banach space satisfying Opial's condition and let $\{\mu_n\}$ be a sequence in $X$. Let $u, v \in X$ be such that

    $\lim_{n\to\infty}\|\mu_n - u\|$ and $\lim_{n\to\infty}\|\mu_n - v\|$ exist.

    If $\{\mu_{n_k}\}$ and $\{\mu_{m_k}\}$ are subsequences of $\{\mu_n\}$ which converge weakly to $u$ and $v$, respectively, then $u = v$.

    In this section, we introduce an inertial Mann relaxed CQ algorithm for solving the SFP (1.1). Let $C$ and $Q$ be nonempty closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, such that

    $C = \{\mu \in H_1 : c(\mu) \le 0\}, \qquad Q = \{\nu \in H_2 : q(\nu) \le 0\},$ (3.1)

    where $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ are lower semicontinuous convex functions. We also assume that the subdifferentials $\partial c$ and $\partial q$ are bounded operators (i.e., bounded on bounded sets). For a sequence $\{\nu_n\}$ in $H_1$, we define the half-spaces $C_n$ and $Q_n$ as follows:

    $C_n = \{\mu \in H_1 : c(\nu_n) \le \langle u_n,\, \nu_n - \mu\rangle\},$ (3.2)

    where $u_n \in \partial c(\nu_n)$, and

    $Q_n = \{\nu \in H_2 : q(A\nu_n) \le \langle v_n,\, A\nu_n - \nu\rangle\},$ (3.3)

    where $v_n \in \partial q(A\nu_n)$ and $A: H_1 \to H_2$ is a bounded linear operator. We see that $C \subseteq C_n$ and $Q \subseteq Q_n$ for each $n \ge 1$. Define

    $f_n(\mu) := \frac{1}{2}\|(I - P_{Q_n})A\mu\|^2, \quad \mu \in H_1 \text{ and } n \ge 1.$ (3.4)

    Hence, we have

    $\nabla f_n(\mu) = A^T(I - P_{Q_n})A\mu.$
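The practical appeal of the half-space relaxation (3.2) is that projecting onto a half-space is explicit, unlike projecting onto a general level set of $c$. A small sketch (the function $c$ and the test points below are illustrative; the subgradient construction follows the definition of $C_n$):

```python
import numpy as np

def proj_halfspace(x, u, b):
    """Projection onto the half-space {mu : <u, mu> <= b}, u != 0."""
    viol = u @ x - b
    return x.copy() if viol <= 0 else x - (viol / (u @ u)) * u

def proj_Cn(x, nu_n, c_val, u):
    """Projection onto C_n = {mu : c(nu_n) <= <u, nu_n - mu>},
    rewritten as <u, mu> <= <u, nu_n> - c(nu_n), where u is a
    subgradient of c at nu_n and c_val = c(nu_n)."""
    return proj_halfspace(x, u, u @ nu_n - c_val)

# Illustration with c(mu) = ||mu||_1 - gamma (the l1-ball constraint of Section 4):
nu_n = np.array([2.0, -1.0])
gamma = 1.0
c_val = np.sum(np.abs(nu_n)) - gamma   # c(nu_n) = 2
u = np.sign(nu_n)                      # a subgradient of ||.||_1 at nu_n
x = np.array([3.0, 0.0])
z = proj_Cn(x, nu_n, c_val, u)
```

Since $C \subseteq C_n$, this explicit projection can replace the (possibly intractable) projection onto $C$ in each iteration.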

    Our algorithm is defined as follows:

    Algorithm 3.1: Inertial Mann relaxed CQ algorithm
    Initialization: Take $\mu_0, \mu_1 \in C$ and set $n = 1$.
    Iterative Steps: Generate $\{\mu_n\}$ by computing the following steps:
    Step 1. Compute
    $\nu_n = \mu_n + \sigma_n(\mu_n - \mu_{n-1}),$ (3.5)
    where $\sigma_n \in [0, \sigma)$ for each $n \ge 1$, for some $\sigma \in [0, 1)$.
    Step 2. Compute
    $z_n = P_{C_n}(\nu_n - \lambda_n \nabla f_n(\nu_n)),$
    where $\lambda_n \in (0, \frac{2}{\|A\|^2})$.
    Step 3. Compute
    $\mu_{n+1} = (1 - \alpha_n)\nu_n + \alpha_n z_n,$ (3.6)
    where $\alpha_n \in (0, 1)$.
    Update $n$ to $n+1$ and go to Step 1.
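The three steps above can be sketched as follows on a toy SFP instance; exact projections onto $C$ and $Q$ stand in for the half-space relaxations $C_n$, $Q_n$, and the parameter choices are illustrative ones satisfying $\sigma_n \in [0, \sigma)$, $\lambda_n \in (0, 2/\|A\|^2)$ and $\alpha_n \in (0, 1)$:

```python
import numpy as np

def inertial_mann_cq(A, proj_C, proj_Q, mu0, mu1, n_iter=1000):
    """Sketch of Algorithm 3.1 on a toy instance."""
    lam = 1.0 / np.linalg.norm(A, 2) ** 2                      # lambda_n in (0, 2/||A||^2)
    mu_prev, mu = np.asarray(mu0, float), np.asarray(mu1, float)
    for n in range(1, n_iter + 1):
        # sigma_n is damped by 1/n^2 so that a summability condition like (3.7) holds.
        sigma_n = min(0.5, 1.0 / (n * n * (np.linalg.norm(mu - mu_prev) + 1.0)))
        alpha_n = 0.5                                          # alpha_n in (0, 1)
        nu = mu + sigma_n * (mu - mu_prev)                     # Step 1: inertial extrapolation
        grad = A.T @ (A @ nu - proj_Q(A @ nu))                 # grad f_n(nu)
        z = proj_C(nu - lam * grad)                            # Step 2: relaxed CQ step
        mu_prev, mu = mu, (1.0 - alpha_n) * nu + alpha_n * z   # Step 3: Mann step
    return mu

# Toy instance: C = nonnegative orthant of R^2, Q = closed unit ball of R^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
mu = inertial_mann_cq(A, proj_C, proj_Q, np.array([3.0, 3.0]), np.array([3.0, 3.0]))
```

The returned iterate is approximately feasible: it lies (up to a small tolerance) in $C$ and its image under $A$ lies in $Q$.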


    Assume that the following conditions hold:

    $\sum_{n=1}^{\infty}\sigma_n\max\{\|\mu_n - \mu_{n-1}\|^2,\ \|\mu_n - \mu_{n-1}\|\} < \infty;$ (3.7)
    $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \frac{2}{\|A\|^2};$ (3.8)
    $0 < \liminf_{n\to\infty}\alpha_n \le \limsup_{n\to\infty}\alpha_n < 1.$ (3.9)

    Lemma 3.1. Let $\{\mu_n\}$ be the sequence generated by Algorithm 3.1. Assume that conditions (3.7)–(3.9) hold. Then we have the following conclusions:

    (i) $\langle \nabla f_n(\nu_n),\, \nu_n - \mu\rangle \ge 2f_n(\nu_n)$ for all $\mu \in \Omega$ and $n \in \mathbb{N}$.

    (ii) $\|\mu_{n+1} - \mu\|^2 \le \|\nu_n - \mu\|^2 - 4\lambda_n\alpha_n\left(1 - \frac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n)$ for all $\mu \in \Omega$.

    (iii) If $\lim_{n\to\infty}\|\mu_n - \mu\|$ exists and $\sum_{n=1}^{\infty}[\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2]_+ < \infty$ for all $\mu \in \Omega$, then we have:

    (a) $\{\mu_n\}$, $\{\nu_n\}$ and $\{\nabla f_n(\nu_n)\}$ are bounded;

    (b) $\|\mu_{n+1} - \mu_n\| \to 0$.

    Proof. (i) Let $\mu \in \Omega$ and let $A^T$ be the adjoint operator of $A$. Since $C \subseteq C_n$ and $Q \subseteq Q_n$, we have $\mu = P_C(\mu) = P_{C_n}(\mu)$ and $(I - P_Q)(A\mu) = (I - P_{Q_n})(A\mu) = 0$. Since $I - P_{Q_n}$ is firmly nonexpansive, for each $n \in \mathbb{N}$ we have

    $\begin{aligned} 2f_n(\nu_n) &= \|(I - P_{Q_n})A\nu_n\|^2 = \|(I - P_{Q_n})A\nu_n - (I - P_{Q_n})A\mu\|^2\\ &\le \langle (I - P_{Q_n})A\nu_n - (I - P_{Q_n})A\mu,\, A\nu_n - A\mu\rangle = \langle (I - P_{Q_n})A\nu_n,\, A\nu_n - A\mu\rangle\\ &= \langle A^T(I - P_{Q_n})A\nu_n,\, \nu_n - \mu\rangle = \langle \nabla f_n(\nu_n),\, \nu_n - \mu\rangle. \end{aligned}$

    (ii) Let $\mu \in \Omega$. Set $t_n = \nu_n - \lambda_n \nabla f_n(\nu_n)$; then we have

    $\begin{aligned} \|\mu_{n+1} - \mu\|^2 &= \|(1-\alpha_n)\nu_n + \alpha_n P_{C_n}((I - \lambda_n\nabla f_n)\nu_n) - \mu\|^2\\ &\le (1-\alpha_n)\|\nu_n - \mu\|^2 + \alpha_n\|P_{C_n}(t_n) - \mu\|^2\\ &\le (1-\alpha_n)\|\nu_n - \mu\|^2 + \alpha_n\left(\|t_n - \mu\|^2 - \|t_n - P_{C_n}(t_n)\|^2\right)\\ &= \|\nu_n - \mu\|^2 - \alpha_n\left(\|\nu_n - \mu\|^2 - \|\nu_n - \lambda_n\nabla f_n(\nu_n) - \mu\|^2 + \|\nu_n - \lambda_n\nabla f_n(\nu_n) - \mu_{n+1}\|^2\right)\\ &= \|\nu_n - \mu\|^2 - \alpha_n\left(\|\nu_n - \mu_{n+1}\|^2 + 2\lambda_n\langle\nabla f_n(\nu_n),\, \nu_n - \mu\rangle - 2\lambda_n\langle\nabla f_n(\nu_n),\, \nu_n - \mu_{n+1}\rangle\right). \end{aligned}$

    From part (i), we get

    $\begin{aligned} \|\mu_{n+1} - \mu\|^2 &\le \|\nu_n - \mu\|^2 - \alpha_n\left(\|\nu_n - \mu_{n+1}\|^2 - 2\lambda_n\|\nabla f_n(\nu_n)\|\,\|\nu_n - \mu_{n+1}\| + 4\lambda_n f_n(\nu_n)\right)\\ &\le \|\nu_n - \mu\|^2 - \alpha_n\left(-(\lambda_n\|\nabla f_n(\nu_n)\|)^2 - \|\nu_n - \mu_{n+1}\|^2 + \|\nu_n - \mu_{n+1}\|^2 + 4\lambda_n f_n(\nu_n)\right)\\ &= \|\nu_n - \mu\|^2 + \lambda_n^2\alpha_n\|\nabla f_n(\nu_n)\|^2 - 4\alpha_n\lambda_n f_n(\nu_n)\\ &\le \|\nu_n - \mu\|^2 + 2\lambda_n^2\alpha_n\|A\|^2 f_n(\nu_n) - 4\alpha_n\lambda_n f_n(\nu_n)\\ &= \|\nu_n - \mu\|^2 - 4\lambda_n\alpha_n\left(1 - \tfrac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n). \end{aligned} \quad (3.10)$

    (iii) Let $\mu \in \Omega$. Suppose that $\lim_{n\to\infty}\|\mu_n - \mu\|$ exists, (3.7) holds, and $\sum_{n=1}^{\infty}[\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2]_+ < \infty$. We have

    $\|\mu_{n+1} - \nu_n\|^2 + \|\mu_{n+1} - \mu\|^2 = \|\nu_n - \mu\|^2 + 2\langle\mu_{n+1} - \nu_n,\, \mu_{n+1} - \mu\rangle.$ (3.11)

    On the other hand, for each $n \in \mathbb{N}$,

    $\begin{aligned} \|\nu_n - \mu\|^2 &= (1+\sigma_n)\|\mu_n - \mu\|^2 - \sigma_n\|\mu_{n-1} - \mu\|^2 + \sigma_n(1+\sigma_n)\|\mu_n - \mu_{n-1}\|^2\\ &\le (1+\sigma_n)\|\mu_n - \mu\|^2 - \sigma_n\|\mu_{n-1} - \mu\|^2 + 2\sigma_n\|\mu_n - \mu_{n-1}\|^2. \end{aligned} \quad (3.12)$

    From (3.11) and (3.12), we have

    $\|\mu_{n+1} - \nu_n\|^2 + \|\mu_{n+1} - \mu\|^2 \le \|\mu_n - \mu\|^2 + \sigma_n\left(\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2\right) + 2\sigma_n\|\mu_n - \mu_{n-1}\|^2 + 2\langle\mu_{n+1} - \nu_n,\, \mu_{n+1} - \mu\rangle.$ (3.13)

    Since $\lim_{n\to\infty}\|\mu_n - \mu\|$ exists, $\{\mu_n\}$ is bounded, and it follows from (3.12) that $\{\nu_n\}$ is also bounded. Since $\nabla f_n$ is $\|A\|^2$-Lipschitz and $\nabla f_n(\mu) = 0$, we have

    $\|\nabla f_n(\nu_n)\| = \|\nabla f_n(\nu_n) - \nabla f_n(\mu)\| \le \|A\|^2\|\nu_n - \mu\|.$

    Hence $\{\nabla f_n(\nu_n)\}$ is also bounded.

    Since $\lambda_n \in (0, \frac{2}{\|A\|^2})$, we have

    $\begin{aligned} \|\mu_{n+1} - \mu\|^2 &\le (1-\alpha_n)\|\nu_n - \mu\|^2 + \alpha_n\|z_n - \mu\|^2 - (1-\alpha_n)\alpha_n\|\nu_n - z_n\|^2\\ &\le \|\nu_n - \mu\|^2 - (1-\alpha_n)\alpha_n\|\nu_n - z_n\|^2\\ &= \|\mu_n - \mu + \sigma_n(\mu_n - \mu_{n-1})\|^2 - (1-\alpha_n)\alpha_n\|\nu_n - z_n\|^2\\ &\le \|\mu_n - \mu\|^2 + 2\sigma_n\langle\mu_n - \mu_{n-1},\, \nu_n - \mu\rangle - (1-\alpha_n)\alpha_n\|\nu_n - z_n\|^2. \end{aligned}$

    This implies that

    $(1-\alpha_n)\alpha_n\|\nu_n - z_n\|^2 \le \|\mu_n - \mu\|^2 - \|\mu_{n+1} - \mu\|^2 + 2\sigma_n\langle\mu_n - \mu_{n-1},\, \nu_n - \mu\rangle.$

    It follows from the existence of $\lim_{n\to\infty}\|\mu_n - \mu\|$, condition (3.9) and (3.7) that

    $\lim_{n\to\infty}\|\nu_n - z_n\| = 0.$ (3.14)

    We next show that $\|\mu_{n+1} - \mu_n\| \to 0$. From (3.14), we have

    $\begin{aligned} \langle\mu_{n+1} - \nu_n,\, \mu_{n+1} - \mu\rangle &= \langle(1-\alpha_n)\nu_n + \alpha_n z_n - \nu_n,\, (1-\alpha_n)\nu_n + \alpha_n z_n - \mu\rangle\\ &= \alpha_n\langle z_n - \nu_n,\, (1-\alpha_n)(\nu_n - \mu) + \alpha_n(z_n - \mu)\rangle\\ &= \alpha_n\langle z_n - \nu_n,\, (1-\alpha_n)(\nu_n - \mu)\rangle + \alpha_n\langle z_n - \nu_n,\, \alpha_n(z_n - \mu)\rangle\\ &= \alpha_n(1-\alpha_n)\langle z_n - \nu_n,\, \nu_n - \mu\rangle + \alpha_n^2\langle z_n - \nu_n,\, z_n - \mu\rangle \to 0 \text{ as } n \to \infty. \end{aligned} \quad (3.15)$

    By $\sum_{n=1}^{\infty}[\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2]_+ < \infty$ and $\sum_{n=1}^{\infty}\sigma_n\|\mu_n - \mu_{n-1}\|^2 < \infty$, it follows from (3.13) and (3.15) that

    $\lim_{n\to\infty}\|\mu_{n+1} - \nu_n\|^2 = 0.$ (3.16)

    On the other hand, from (3.7), we have

    $\|\nu_n - \mu_n\| = \sigma_n\|\mu_n - \mu_{n-1}\| \to 0.$ (3.17)

    From (3.16) and (3.17), we get

    $\|\mu_{n+1} - \mu_n\| \le \|\mu_{n+1} - \nu_n\| + \|\nu_n - \mu_n\| \to 0 \text{ as } n \to \infty.$

    This completes the proof.

    Let $\{\mu_n\}$ be the sequence defined by Algorithm 3.1. We next prove that there exists a subsequence $\{\mu_{n_j}\}$ of $\{\mu_n\}$ which converges weakly to a solution of the SFP (1.1).

    Theorem 3.2. Let $H_1$ and $H_2$ be two real Hilbert spaces, and let $C$ and $Q$ be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A: H_1 \to H_2$ be a bounded linear operator. Assume that the solution set $\Omega$ of the SFP (1.1) is nonempty, the condition (3.7) holds, and $\{\lambda_n\}$, $\{\alpha_n\}$ satisfy the conditions (3.8) and (3.9). Let $\{\mu_n\}$ be a sequence generated by Algorithm 3.1. Then we have the following:

    (i) $\{\mu_n\}$, $\{\nu_n\}$ and $\{\nabla f_n(\nu_n)\}$ are bounded;

    (ii) there exists a subsequence $\{\mu_{n_j}\}$ of $\{\mu_n\}$ converging weakly to a point $\mu^* \in \Omega$;

    (iii) the sequence $\{\mu_n\}$ converges weakly to a point $\mu^* \in \Omega$.

    Proof. Let $\mu \in \Omega$. From Lemma 3.1 (ii) and condition (3.8), there exists $m \in \mathbb{N}$ such that

    $\|\mu_{n+1} - \mu\|^2 + 4\lambda_n\alpha_n\left(1 - \tfrac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n) \le \|\nu_n - \mu\|^2, \quad n \ge m.$ (3.18)

    From (3.12) and (3.18), we have

    $\|\mu_{n+1} - \mu\|^2 + \sigma_n\|\mu_{n-1} - \mu\|^2 + 4\lambda_n\alpha_n\left(1 - \tfrac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n) \le (1+\sigma_n)\|\mu_n - \mu\|^2 + 2\sigma_n\|\mu_n - \mu_{n-1}\|^2, \quad n \ge m.$ (3.19)

    Since $4\lambda_n\alpha_n\left(1 - \tfrac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n) \ge 0$, from (3.19) we have

    $\|\mu_{n+1} - \mu\|^2 + \sigma_n\|\mu_{n-1} - \mu\|^2 \le (1+\sigma_n)\|\mu_n - \mu\|^2 + 2\sigma_n\|\mu_n - \mu_{n-1}\|^2,$ (3.20)

    which implies that, for each $n \ge m$,

    $\|\mu_{n+1} - \mu\|^2 - \|\mu_n - \mu\|^2 \le \sigma_n\left(\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2\right) + 2\sigma_n\|\mu_n - \mu_{n-1}\|^2.$ (3.21)

    From (3.10), we have

    $4\lambda_n\alpha_n\left(1 - \tfrac{1}{2}\lambda_n\|A\|^2\right)f_n(\nu_n) \le \|\mu_n - \mu\|^2 - \|\mu_{n+1} - \mu\|^2 + 2\sigma_n\langle\mu_n - \mu_{n-1},\, \nu_n - \mu\rangle.$ (3.22)

    Applying Lemma 2.2 to (3.21) with $\kappa_n = \|\mu_n - \mu\|^2$, $\delta_n = 2\sigma_n\|\mu_n - \mu_{n-1}\|^2$, and $\sigma_n$ in the role of $\alpha_n$, we obtain that $\lim_{n\to\infty}\|\mu_n - \mu\|$ exists and $\sum_{n\ge m}[\|\mu_n - \mu\|^2 - \|\mu_{n-1} - \mu\|^2]_+ < \infty$. This leads, by Lemma 3.1 (iii), to the boundedness of $\{\mu_n\}$, $\{\nu_n\}$ and $\{\nabla f_n(\nu_n)\}$. It then follows from (3.22) and conditions (3.7)–(3.9) that

    $\lim_{n\to\infty}f_n(\nu_n) = 0.$ (3.23)

    Since $\{\nu_n\}$ is bounded, there exists a subsequence $\{\nu_{n_j}\}$ of $\{\nu_n\}$ which converges weakly to some $\mu^*$. Since $P_{Q_{n_j}}A\nu_{n_j} \in Q_{n_j}$, we have

    $q(A\nu_{n_j}) \le \langle v_{n_j},\, A\nu_{n_j} - P_{Q_{n_j}}A\nu_{n_j}\rangle,$ (3.24)

    where $v_{n_j} \in \partial q(A\nu_{n_j})$. Since $\partial q$ is bounded, $\{v_{n_j}\}$ is also bounded. From (3.24) and (3.23), we have

    $q(A\nu_{n_j}) \le \|v_{n_j}\|\,\|A\nu_{n_j} - P_{Q_{n_j}}A\nu_{n_j}\| \to 0 \text{ as } j \to \infty.$

    It follows from the weak lower semicontinuity of $q$ that

    $q(A\mu^*) \le 0,$

    which means that $A\mu^* \in Q$. By Lemma 3.1 (iii), we have

    $\lim_{n\to\infty}\|\mu_n - \mu_{n+1}\| = 0.$

    Note that $z_{n_j} \in C_{n_j}$. By the definition of $C_{n_j}$, we get

    $c(\nu_{n_j}) \le \langle u_{n_j},\, \nu_{n_j} - z_{n_j}\rangle,$

    where $u_{n_j} \in \partial c(\nu_{n_j})$. Since $\partial c$ is bounded, we see that $\{u_{n_j}\}$ is bounded. From (3.14), we have

    $c(\nu_{n_j}) \le \|u_{n_j}\|\,\|\nu_{n_j} - z_{n_j}\| \to 0 \text{ as } j \to \infty.$

    Similarly, we obtain that $c(\mu^*) \le 0$, i.e., $\mu^* \in C$. Moreover, since $\|\nu_n - \mu_n\| \to 0$ by (3.17), we also have $\mu_{n_j} \rightharpoonup \mu^*$. Therefore, $\mu_{n_j} \rightharpoonup \mu^* \in \Omega$.

    Since $\{\mu_n\}$ is bounded and $H_1$ is reflexive, $\omega_\omega(\mu_n)$ is nonempty. Let $p \in \omega_\omega(\mu_n)$ be an arbitrary element; then there exists a subsequence $\{\mu_{n_k}\}$ of $\{\mu_n\}$ such that $\mu_{n_k} \rightharpoonup p$. Let $q^* \in \omega_\omega(\mu_n)$ and $\{\mu_{n_i}\} \subset \{\mu_n\}$ be such that $\mu_{n_i} \rightharpoonup q^*$. From (ii), we have $p, q^* \in \Omega$. By Lemma 2.5, $p = q^*$. Applying Lemma 2.4 and Lemma 3.1 (iii), there exists $\mu^* \in \Omega$ such that $\mu_n \rightharpoonup \mu^*$.

    For the convergence of Algorithm 3.1, the parameter $\{\lambda_n\}$ still needs to satisfy the Lipschitz-type condition $\lambda_n \in (0, \frac{2}{\|A\|^2})$; within that range, however, Algorithm 3.1 is flexible in the choice of $\{\lambda_n\}$. For example, applying the stepsizes (1.6) and (1.9) of Dang et al. [10] and Gibali et al. [17], respectively, we present new stepsize update rules in the following Algorithms 3.3 and 3.4:

    Algorithm 3.3. Initialization: Take $\lambda_1 \in (0, \frac{2}{\|A\|^2})$, $\{\alpha_n\} \subset (0,1)$, $\rho_1, \rho_2 \in (0,2)$ and $N \in \mathbb{N}$. Select arbitrary points $\mu_0, \mu_1 \in C$ and $\sigma_n \in [0, \sigma)$ for some $\sigma \in [0,1)$. Set $n = 1$.
    Iterative Steps: Generate $\{\mu_n\}$ by computing the following steps:
    Step 1. Compute
    $\nu_n = \mu_n + \sigma_n(\mu_n - \mu_{n-1}).$ (3.25)
    Step 2. Compute
    $z_n = P_{C_n}(\nu_n - \lambda_n \nabla f_n(\nu_n)).$
    Step 3. Compute
    $\mu_{n+1} = (1 - \alpha_n)\nu_n + \alpha_n z_n,$ (3.26)
    $\lambda_{n+1} = \begin{cases} \min\left\{\lambda_n,\ \dfrac{\rho_1\|\nu_n - z_n\|}{\|\Xi(\nu_n)\|},\ \dfrac{\rho_2\|z_n - \mu_{n+1}\|}{\|\Xi(\mu_{n+1})\|}\right\}, & \text{if } \Xi(\nu_n) \ne 0,\ \Xi(\mu_{n+1}) \ne 0,\ n \le N,\\ \dfrac{2}{n\|A\|^2}, & n > N,\\ \lambda_n, & \text{otherwise}, \end{cases}$
    where $\Xi(x) = \nabla f_n(z_n) - \nabla f_n(x)$.
    Update $n$ to $n+1$ and go to Step 1.

    Algorithm 3.4. Initialization: Take $\lambda_1 \in (0, \frac{2}{\|A\|^2})$, $\{\alpha_n\} \subset (0,1)$, $\epsilon > 0$ and $N \in \mathbb{N}$. Select arbitrary points $\mu_0, \mu_1 \in C$ and $\sigma_n \in [0, \sigma)$ for some $\sigma \in [0,1)$. Set $n = 1$.
    Iterative Steps: Generate $\{\mu_n\}$ by computing the following steps:
    Step 1. Compute
    $\nu_n = \mu_n + \sigma_n(\mu_n - \mu_{n-1}).$ (3.27)
    Step 2. Compute
    $z_n = P_{C_n}(\nu_n - \lambda_n \nabla f_n(\nu_n)).$
    Step 3. Compute
    $\mu_{n+1} = (1 - \alpha_n)\nu_n + \alpha_n z_n,$ (3.28)
    $\lambda_{n+1} = \begin{cases} \min\{\lambda_n,\ \Theta(\nu_n),\ \Theta(z_n),\ \Theta(\mu_{n+1})\}, & \text{if } \nabla f_n(\nu_n) \ne 0,\ \nabla f_n(z_n) \ne 0,\ \nabla f_n(\mu_{n+1}) \ne 0,\ n \le N,\\ \dfrac{2}{n\|A\|^2}, & n > N,\\ \lambda_n, & \text{otherwise}, \end{cases}$
    where $\Theta(x) = \dfrac{\epsilon}{\|\nabla f_n(x)\|}$.
    Update $n$ to $n+1$ and go to Step 1.


    Remark 3.1. From Algorithms 3.3 and 3.4, it is easy to see that the stepsize $\lambda_n$ is a nonincreasing sequence in $(0, \frac{2}{\|A\|^2})$ and satisfies the condition (3.8).

    Currently, cardiovascular disease is the leading cause of death worldwide. The World Health Organization (WHO) reported 17.9 million deaths from cardiovascular diseases in 2019, an estimated 32% of all global deaths that year [39]. In Thailand [11], cardiovascular disease is the number one cause of death and is increasing in all age groups. Therefore, monitoring the heart condition at regular intervals and detecting problems at an earlier stage are needed to control life-threatening situations due to heart failure. To predict heart disease, the UCI Machine Learning Heart Disease dataset, available on the Internet at [15], was used to evaluate the proposed model. The dataset comprises 76 characteristics and 303 records; however, only 14 attributes from the dataset were used for training and testing: Age, Sex, Cp, Trestbps, Chol, Fbs, Restecg, Thalach, Exang, Oldpeak, Slope, Ca, Thal, and Num (the target variable). The dataset consists of 138 normal instances versus 165 abnormal instances. Table 1 shows an overview of the dataset.

    Table 1.  Overview of the UCI Machine Learning Heart Disease dataset.
    Attribute | Description | $\bar{x}$ | S.D. | Max | Min | C.V.
    Age | Age of patient in years | 54.37 | 9.07 | 77 | 29 | 16.68
    Sex | Male and female | 0.68 | 0.47 | 1 | 0 | 68.10
    Cp | Chest pain type | 0.97 | 1.03 | 3 | 0 | 106.55
    Trestbps | Resting blood pressure (in mm Hg on admission to the hospital) | 131.62 | 17.51 | 200 | 94 | 13.30
    Chol | Serum cholesterol shows the amount of triglycerides present | 246.26 | 51.75 | 564 | 126 | 21.01
    Fbs | Fasting blood sugar larger than 120 mg/dl | 0.15 | 0.36 | 1 | 0 | 239.44
    Restecg | Resting electrocardiographic results | 0.53 | 0.52 | 2 | 0 | 99.42
    Thalach | Maximum heart rate achieved | 149.65 | 22.87 | 202 | 71 | 15.28
    Exang | Exercise-induced angina (1 = yes) | 0.33 | 0.47 | 1 | 0 | 143.55
    Oldpeak | ST depression induced by exercise relative to rest | 1.04 | 1.16 | 6.2 | 0 | 111.50
    Slope | The slope of the peak exercise ST segment | 1.40 | 0.62 | 2 | 0 | 43.96
    Ca | Number of major vessels colored by fluoroscopy | 0.73 | 1.02 | 4 | 0 | 139.97
    Thal | No explanation provided, but probably thalassemia | 2.31 | 0.61 | 3 | 0 | 26.42
    Target | No disease, disease | - | - | - | - | -


    In 2021, Bharti et al. [6] presented a comparison of different machine learning algorithms on the UCI Machine Learning Heart Disease dataset, with feature selection and normalization, for getting better accuracy. In this section, we apply our Algorithms 3.1, 3.3 and 3.4 to optimize the weight parameter in the training data for machine learning, using 5-fold cross-validation [20] in an extreme learning machine (ELM). Very recently, Sarnmeta et al. [29] also considered the UCI Machine Learning Heart Disease dataset using an accelerated forward-backward algorithm with a linesearch technique for convex minimization problems in ELM with 10-fold cross-validation. Table 2 compares the efficiency of our algorithm in the extreme learning machine on the original dataset with the existing machine learning methods presented in Bharti et al. [6] and the ELM algorithm in Sarnmeta et al. [29].

    Table 2.  Highest accuracy of different machine learning methods using the UCI Machine Learning Heart Disease dataset.
    Machine learning method | Accuracy (%)
    Logistic regression | 83.30
    K neighbors | 84.80
    Support vector machine | 83.20
    Random forest | 80.30
    Decision tree | 82.30
    Artificial neural network [4] | 82.50
    Learning vector quantization neural network algorithm [31] | 85.55
    ELM (Sarnmeta et al. [29]) | 83.87
    ELM (our algorithm) | 87.69


    For our machine learning classification process, let $U := \{(\mu_s, r_s) : \mu_s \in \mathbb{R}^n, r_s \in \mathbb{R}^m, s = 1, 2, \ldots, N\}$ be a training set of $N$ distinct samples, where $\mu_s$ is an input training datum and $r_s$ is a target datum. The output function of an ELM for single-hidden-layer feedforward neural networks (SLFNs) [16,42] with $M$ hidden nodes and activation function $V$ is

    $O_s = \sum_{i=1}^{M} w_i V(\langle c_i, \mu_s\rangle + e_i),$

    where $c_i$ and $e_i$ are the weight parameter and the bias, respectively. To find the optimal output weight $w_i$ at the $i$-th hidden node, the hidden-layer output matrix $\mathbf{A}$ is generated as follows:

    $\mathbf{A} = \begin{bmatrix} V(\langle c_1, \mu_1\rangle + e_1) & \cdots & V(\langle c_M, \mu_1\rangle + e_M)\\ \vdots & \ddots & \vdots\\ V(\langle c_1, \mu_N\rangle + e_1) & \cdots & V(\langle c_M, \mu_N\rangle + e_M) \end{bmatrix}.$

    Solving the ELM means finding an optimal output weight $w = [w_1^T, \ldots, w_M^T]^T$ such that $\mathbf{A}w = T$, where $T = [r_1^T, \ldots, r_N^T]^T$ is the training target data. A least squares problem is used to find the solution of the linear system $\mathbf{A}w = T$, since the Moore–Penrose generalized inverse of $\mathbf{A}$ may not be easy to compute, or may not exist. To reduce overfitting of the model in training, we consider the constrained least squares problem on a closed convex subset $C$ of $H_1$:

    $\min_{\omega \in C} \frac{1}{2}\|\mathbf{A}\omega - T\|_2^2,$ (4.1)

    where $C = \{x \in H_1 : \|x\|_1 \le \gamma\}$ and $\gamma$ is a regularization parameter. To apply our inertial Mann relaxed CQ algorithm to solve the problem (4.1), we define $f(\mu) := \frac{1}{2}\|(I - P_Q)\mathbf{A}\mu\|^2$, $\mu \in H_1$, with $Q = \{T\}$, and let $c(\mu) = \|\mu\|_1 - \gamma$ and $q(\mu) = \frac{1}{2}\|\mu - T\|^2$.
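A minimal sketch of how the hidden-layer matrix $\mathbf{A}$ and the pieces $c$, $q$ of problem (4.1) can be assembled; the random weights, toy data, and dimensions below are illustrative, not the paper's experimental setting:

```python
import numpy as np

def elm_hidden_matrix(X, c, e):
    """Hidden-layer output matrix of an ELM: entry (s, i) is V(<c_i, mu_s> + e_i),
    with sigmoid activation V."""
    return 1.0 / (1.0 + np.exp(-(X @ c.T + e)))

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))                    # N = 8 toy samples, 3 features
T = rng.integers(0, 2, size=8).astype(float)   # toy binary targets
c = rng.normal(size=(5, 3))                    # M = 5 hidden nodes: random input weights
e = rng.normal(size=5)                         # random biases
A = elm_hidden_matrix(X, c, e)

gamma = 10.0
c_fun = lambda w: np.sum(np.abs(w)) - gamma    # C = {w : ||w||_1 <= gamma}
q_fun = lambda y: 0.5 * np.sum((y - T) ** 2)   # Q = {T} as the zero level set of q
```

With these pieces, problem (4.1) is exactly an SFP of the form (1.1), so any of the algorithms above can be applied to the output weight $w$.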

    The following four evaluation metrics, accuracy, precision, recall, and F1-score [18], are considered for comparing the performance of the classification algorithms:

    $\text{Accuracy} = \dfrac{TP + TN}{TP + FP + TN + FN} \times 100\%,$ (4.2)
    $\text{Precision} = \dfrac{TP}{TP + FP} \times 100\%,$ (4.3)
    $\text{Recall} = \dfrac{TP}{TP + FN} \times 100\%,$ (4.4)
    $\text{F1-score} = \dfrac{2 \times (\text{Precision} \times \text{Recall})}{\text{Precision} + \text{Recall}},$ (4.5)

    where TP := true positive, FN := false negative, TN := true negative and FP := false positive.
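The metrics (4.2)–(4.5) can be computed from a confusion matrix as sketched below (the toy labels are illustrative):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall (all in %) and F1-score for binary labels
    (1 = disease, 0 = no disease)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# Here TP = 2, TN = 1, FP = 1, FN = 1, so accuracy = 60% and precision = recall.
```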

    The binary cross-entropy loss function is the mean of the cross-entropy between two probability distributions: the target distribution and the probability distribution estimated by the model. It is computed as the following average:

    $\text{Loss} = -\frac{1}{K}\sum_{i=1}^{K}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right],$

    where $\hat{y}_i$ is the $i$-th scalar value in the model output, $y_i$ is the corresponding target value, and $K$ is the number of scalar values in the model output.
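A direct implementation of this loss; the small clipping constant guarding $\log(0)$ is an implementation detail, not from the text:

```python
import numpy as np

def bce_loss(y, y_hat, eps=1e-12):
    """Mean binary cross-entropy between targets y and predictions y_hat."""
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat)))

loss = bce_loss([1.0, 0.0], [0.9, 0.1])   # both predictions confident and correct
```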

    We start the computation by setting the activation function as sigmoid, hidden nodes $M = 100$, regularization parameter $\lambda = 1 \times 10^{-5}$ and $\alpha_n = \frac{1}{n+1}$ for Algorithms 3.1, 3.3 and 3.4, with $\lambda_n = \frac{0.9}{2\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithm 3.1, $\lambda_1 = \frac{0.9}{2\max(\mathrm{eigenvalue}(A^TA))}$ and $\rho_1 = \rho_2 = 1.99$ for Algorithm 3.3, and $\lambda_1 = \frac{0.9}{2\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithm 3.4. The stopping criterion is 100 iterations. We compare the performance of the algorithms with different parameters $\bar{\sigma}_n$, as seen in Table 3, where

    $\sigma_n = \begin{cases} \dfrac{\bar{\sigma}_n}{n^2\max\{\|\mu_n - \mu_{n-1}\|^2,\ \|\mu_n - \mu_{n-1}\|\}}, & \text{if } n > N \text{ and } \mu_n \ne \mu_{n-1},\\ \bar{\sigma}_n, & \text{otherwise}, \end{cases}$
    Table 3.  Numerical results of $\bar{\sigma}_n$.
    Algorithm | $\bar{\sigma}_n$ | Training Time | Training Loss | Test Loss
    Algorithm 3.1 | 0.3 | 0.0371 | 0.252224 | 0.230180
    Algorithm 3.1 | 0.5 | 0.0239 | 0.251785 | 0.229676
    Algorithm 3.1 | $\frac{1}{n}$ | 0.0321 | 0.252403 | 0.230384
    Algorithm 3.1 | $\frac{1}{\|\mu_n - \mu_{n-1}\|^2 + n^2}$ | 0.0333 | 0.252805 | 0.230993
    Algorithm 3.1 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | 0.0322 | 0.250660 | 0.228933
    Algorithm 3.3 | 0.3 | 0.1511 | 0.252224 | 0.230180
    Algorithm 3.3 | 0.5 | 0.1681 | 0.251785 | 0.229676
    Algorithm 3.3 | $\frac{1}{n}$ | 0.1804 | 0.252403 | 0.230384
    Algorithm 3.3 | $\frac{1}{\|\mu_n - \mu_{n-1}\|^2 + n^2}$ | 0.1750 | 0.252805 | 0.230993
    Algorithm 3.3 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | 0.1773 | 0.250660 | 0.228933
    Algorithm 3.4 | 0.3 | 0.1398 | 0.252224 | 0.230180
    Algorithm 3.4 | 0.5 | 0.1342 | 0.251785 | 0.229676
    Algorithm 3.4 | $\frac{1}{n}$ | 0.1314 | 0.252403 | 0.230384
    Algorithm 3.4 | $\frac{1}{\|\mu_n - \mu_{n-1}\|^2 + n^2}$ | 0.1123 | 0.252805 | 0.230993
    Algorithm 3.4 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | 0.1450 | 0.250660 | 0.228933


    where $N$ is the number of iterations at which we stop. We can see that the parameter $\sigma_n$ satisfies the required condition in Algorithms 3.1, 3.3 and 3.4 for each choice of $\bar{\sigma}_n$ in Table 3. We can also see that $\bar{\sigma}_n = \frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ markedly improves the performance of Algorithms 3.1, 3.3 and 3.4, so we choose it as the default inertial parameter in the later calculations.

    Setting $\bar{\sigma}_n = \frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ and $\alpha_n = \frac{1}{n+1}$ for Algorithms 3.1, 3.3 and 3.4, with $\rho_1 = \rho_2 = 1.99$ for Algorithm 3.3, and stopping after 100 iterations, we obtain the results for different parameters $h$, where $\lambda_n = \frac{h}{2\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithm 3.1, and for different parameters $\lambda_1$ for Algorithms 3.3 and 3.4, as seen in Table 4.

    Table 4.  Numerical results of $\lambda_n$ of Algorithm 3.1 and $\lambda_1$ of Algorithms 3.3 and 3.4, respectively.
    Algorithm | $h$, $\lambda_1$ | Training Time | Training Loss | Test Loss
    Algorithm 3.1 | 0.7 | 0.0380 | 0.250782 | 0.228759
    Algorithm 3.1 | 0.9 | 0.0347 | 0.250660 | 0.228933
    Algorithm 3.1 | 1 | 0.0331 | 0.250174 | 0.228310
    Algorithm 3.1 | 1.9 | 0.0256 | 0.247012 | 0.224474
    Algorithm 3.1 | 1.9999 | 0.0338 | 0.246779 | 0.224221
    Algorithm 3.3 | 0.7 | 0.1440 | 0.250782 | 0.228759
    Algorithm 3.3 | 0.9 | 0.1581 | 0.250660 | 0.228933
    Algorithm 3.3 | 1 | 0.1533 | 0.250174 | 0.228310
    Algorithm 3.3 | 1.9 | 0.1735 | 0.247012 | 0.224474
    Algorithm 3.3 | 1.9999 | 0.1574 | 0.246795 | 0.224238
    Algorithm 3.4 | 0.7 | 0.1317 | 0.250782 | 0.228759
    Algorithm 3.4 | 0.9 | 0.1367 | 0.250660 | 0.228933
    Algorithm 3.4 | 1 | 0.1313 | 0.250174 | 0.228310
    Algorithm 3.4 | 1.9 | 0.1280 | 0.247012 | 0.224474
    Algorithm 3.4 | 1.9999 | 0.1353 | 0.246779 | 0.224221


    We can see that $h = \lambda_1 = 1.9999$ markedly improves the performance of Algorithms 3.1, 3.3 and 3.4. We therefore choose it as the default stepsize in the later calculations.

    Setting the inertial parameter $\bar{\sigma}_n = \frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ and $\lambda_n = \frac{1.9999}{2\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithm 3.1, and $\bar{\sigma}_n = \frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ and $\lambda_1 = \frac{1.9999}{2\max(\mathrm{eigenvalue}(A^TA))}$ for Algorithms 3.3 and 3.4, with $\rho_1 = \rho_2 = 1.99$ for Algorithm 3.3, the comparison of all algorithms with different parameters $\alpha_n$ is presented in Table 5.

    Table 5.  Numerical results of $\alpha_n$.
    Algorithm | $\alpha_n$ | Training Time | Training Loss | Test Loss
    Algorithm 3.1 | 0.3 | 0.0364 | 0.242989 | 0.220795
    Algorithm 3.1 | 0.5 | 0.0376 | 0.240726 | 0.219035
    Algorithm 3.1 | $\frac{1}{n}$ | 0.0372 | 0.244716 | 0.221871
    Algorithm 3.1 | $\frac{1}{n+1}$ | 0.0343 | 0.246779 | 0.224221
    Algorithm 3.1 | $\frac{1}{100n+1}$ | 0.0366 | 0.271235 | 0.259327
    Algorithm 3.3 | 0.3 | 0.1605 | 0.243010 | 0.220813
    Algorithm 3.3 | 0.5 | 0.1621 | 0.240745 | 0.219048
    Algorithm 3.3 | $\frac{1}{n}$ | 0.1654 | 0.244733 | 0.221890
    Algorithm 3.3 | $\frac{1}{n+1}$ | 0.1820 | 0.246795 | 0.224238
    Algorithm 3.3 | $\frac{1}{100n+1}$ | 0.1762 | 0.271299 | 0.259421
    Algorithm 3.4 | 0.3 | 0.1396 | 0.242989 | 0.220795
    Algorithm 3.4 | 0.5 | 0.1281 | 0.240726 | 0.219035
    Algorithm 3.4 | $\frac{1}{n}$ | 0.1367 | 0.244716 | 0.221871
    Algorithm 3.4 | $\frac{1}{n+1}$ | 0.1444 | 0.246779 | 0.224221
    Algorithm 3.4 | $\frac{1}{100n+1}$ | 0.1264 | 0.271235 | 0.259327


    We can see that $\alpha_n = 0.5$ markedly improves the performance of Algorithms 3.1, 3.3 and 3.4; therefore, we choose it as the default parameter $\alpha_n$ in the later calculations. We next compare the performance of FISTA, IRCQA, and our algorithms; all parameters are chosen as seen in Table 6.

    Table 6.  Chosen parameters of each algorithm.
    Algorithm | $\bar{\sigma}_n$ | $\lambda_n$ | $\lambda_1$ | $\alpha_n$ | $\rho_1$, $\rho_2$ | $\tau_n$
    FISTA | - | $\frac{0.2}{2\|A\|^2}$ | - | - | - | -
    IRCQA | $\frac{1}{\|\mu_n - \mu_{n-1}\|^2 + n + 2}$ | - | - | - | - | $\frac{1}{n+1}$
    Algorithm 3.1 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | $\frac{1.9999}{2\max(\mathrm{eigenvalue}(A^TA))}$ | - | 0.5 | - | -
    Algorithm 3.3 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | - | $\frac{1.9999}{2\max(\mathrm{eigenvalue}(A^TA))}$ | 0.5 | 1.99 | -
    Algorithm 3.4 | $\frac{2^{13}}{\|\mu_n - \mu_{n-1}\|^3 + n^3 + 2^{13}}$ | - | $\frac{1.9999}{2\max(\mathrm{eigenvalue}(A^TA))}$ | 0.5 | - | -


    For comparison, we set sigmoid as the activation function, the number of hidden nodes $M = 100$, and the regularization parameter $\lambda = 1 \times 10^{-5}$.

    Table 7 shows that our algorithms match the highest precision, recall, F1-score, and accuracy, while Algorithms 3.1 and 3.4 require the fewest iterations. This means they have the highest probability of correctly classifying heart disease among the algorithms examined. We next present the training and validation loss, together with the training accuracy, to show that our algorithm fits the training dataset well.

    Table 7.  The performance of each algorithm.
    Algorithm | Iteration No. | Training Time | Precision | Recall | F1-score | Accuracy
    FISTA | 72 | 0.0336 | 100.00 | 87.50 | 93.33 | 87.69
    IRCQA | 85 | 0.0758 | 100.00 | 87.50 | 93.33 | 87.69
    Algorithm 3.1 | 67 | 0.0386 | 100.00 | 87.50 | 93.33 | 87.69
    Algorithm 3.3 | 68 | 0.0975 | 100.00 | 87.50 | 93.33 | 87.69
    Algorithm 3.4 | 67 | 0.0934 | 100.00 | 87.50 | 93.33 | 87.69

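    The scores in Table 7 follow the standard binary-classification definitions. A small self-contained sketch (the labels below are toy values, not the actual test split; note how zero false positives with one in eight positives missed yields precision 100, recall 87.5, and F1-score 93.33, the pattern seen in the table):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, F1-score and accuracy (all in %) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = 100 * tp / (tp + fp) if tp + fp else 0.0
    recall = 100 * tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = 100 * (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy

# Toy example: 8 positives (7 found, 1 missed) and 2 negatives (none misclassified).
y_true = [1] * 8 + [0] * 2
y_pred = [1] * 7 + [0] + [0] * 2
precision, recall, f1, accuracy = binary_metrics(y_true, y_pred)
```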

    From Figures 1–3, we can see that both the training loss and the validation loss decrease, with the validation loss lying below the training loss. Conversely, the training and validation accuracy both increase, with the validation accuracy lying above the training accuracy.

    Figure 1.  Accuracy and Loss plots of the iteration of Algorithm 3.1.
    Figure 2.  Accuracy and Loss plots of the iteration of Algorithm 3.3.
    Figure 3.  Accuracy and Loss plots of the iteration of Algorithm 3.4.

    This paper considered solving split feasibility problems using inertial Mann relaxed CQ algorithms. Under suitable conditions on the parameters, we proved the weak convergence of the proposed algorithms. Moreover, we presented different stepsize modifications to obtain an efficient algorithm. We demonstrated the efficiency of our algorithms by comparing them with several machine learning methods, as well as with extreme learning machines trained by the FISTA and IRCQA algorithms, on data classification using the UCI Machine Learning Heart Disease dataset. The results show that our algorithms outperform the other algorithms under consideration.
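    The relaxed CQ iteration underlying these algorithms can be sketched as follows. This is a minimal illustration under assumed toy sets: C is the nonnegative orthant and Q the singleton {b}, with a fixed inertial weight θ. The paper's actual algorithms use projections onto half-spaces relaxing the level sets, an adaptive inertial parameter σ̄n, and Mann averaging, none of which are reproduced here.

```python
import numpy as np

def inertial_cq(A, proj_C, proj_Q, x0, lam, theta=0.3, n_iter=200):
    """Inertial CQ sketch for the SFP "find x in C with Ax in Q":
       y_n     = x_n + theta * (x_n - x_{n-1})
       x_{n+1} = P_C( y_n - lam * A^T (A y_n - P_Q(A y_n)) )."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + theta * (x - x_prev)                 # inertial extrapolation
        grad = A.T @ (A @ y - proj_Q(A @ y))          # gradient of 0.5*||(I - P_Q)Ay||^2
        x_prev, x = x, proj_C(y - lam * grad)         # projected gradient step
    return x

# Toy problem: C = nonnegative orthant, Q = {b}; the SFP solution is x* = (1, 1).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
proj_C = lambda v: np.maximum(v, 0.0)
proj_Q = lambda v: b

# Stepsize strictly below 2 / ||A||^2 = 2 / max(eigenvalue(A^T A)).
lam = 1.9999 / (2 * np.linalg.eigvalsh(A.T @ A).max())
x_star = inertial_cq(A, proj_C, proj_Q, np.zeros(2), lam)
```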

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the NSRF via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183] and Chiang Mai University. This research was also supported by the National Research Council of Thailand (N42A650334) and Thailand Science Research and Innovation, the University of Phayao (Grant No. FF66-UoE).

    The dataset used in this research is publicly available at the UCI machine learning repository on https://archive.ics.uci.edu/ml/datasets/Heart+Disease.

    The authors declare no conflicts of interest.



    [1] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. https://doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [2] P. K. Anh, N. T. Vinh, V. T. Dung, A new self-adaptive CQ algorithm with an application to the LASSO problem, J. Fixed Point Theory Appl., 20 (2018), 142. https://doi.org/10.1007/s11784-018-0620-8 doi: 10.1007/s11784-018-0620-8
    [3] Q. H. Ansari, A. Rehan, Split feasibility and fixed point problems, In: Nonlinear analysis, New Delhi: Birkhäuser, 2014,281–322. https://doi.org/10.1007/978-81-322-1883-8_9
    [4] K. Aravinthan, M. Vanitha, A comparative study on prediction of heart disease using cluster and rank based approach, International Journal of Advanced Research in Computer and Communication Engineering, 5 (2016), 421–424.
    [5] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183–202. https://doi.org/10.1137/080716542 doi: 10.1137/080716542
    [6] R. Bharti, A. Khamparia, M. Shabaz, G. Dhiman, S. Pande, P. Singh, Prediction of heart disease using a combination of machine learning and deep learning, Comput. Intell. Neurosci., 2021 (2021), 8387680. https://doi.org/10.1155/2021/8387680 doi: 10.1155/2021/8387680
    [7] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problem, Inverse probl., 18 (2002), 441. https://doi.org/10.1088/0266-5611/18/2/310 doi: 10.1088/0266-5611/18/2/310
    [8] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algor., 8 (1994), 221–239. https://doi.org/10.1007/BF02142692 doi: 10.1007/BF02142692
    [9] Y. T. Chow, Y. Deng, Y. He, H. Liu, X. Wang, Surface-localized transmission eigenstates, super-resolution imaging, and pseudo surface plasmon modes, SIAM J. Imaging Sci., 14 (2021), 946–975. https://doi.org/10.1137/20M1388498 doi: 10.1137/20M1388498
    [10] Y. Dang, J. Sun, H. Xu, Inertial accelerated algorithms for solving a split feasibility problem, J. Ind. Manag. Optim., 13 (2017), 1383–1394. https://doi.org/10.3934/jimo.2016078 doi: 10.3934/jimo.2016078
    [11] Department of Disease Control, The Department of Disease Control joins the campaign for World Heart Day, 2021. Available from: https://ddc.moph.go.th/brc/news.php?news=20876&deptcode=brc.
    [12] Q. L. Dong, J. Z. Huang, X. H. Li, Y. J. Cho, T. M. Rassias, MiKM: multi-step inertial Krasnosel'skiǐ–Mann algorithm and its applications, J. Glob. Optim., 73 (2019), 801–824. https://doi.org/10.1007/s10898-018-0727-x doi: 10.1007/s10898-018-0727-x
    [13] Q. L. Dong, X. H. Li, D. Kitkuan, Y. J. Cho, P. Kumam, Some algorithms for classes of split feasibility problems involving paramonotone equilibria and convex optimization, J. Inequal. Appl., 2019 (2019), 77. https://doi.org/10.1186/s13660-019-2030-x doi: 10.1186/s13660-019-2030-x
    [14] Q. L. Dong, Y. C. Tang, Y. J. Cho, T. M. Rassias, "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem, J. Glob. Optim., 71 (2018), 341–360. https://doi.org/10.1007/s10898-018-0628-z doi: 10.1007/s10898-018-0628-z
    [15] D. Dua, C. Graff, UCI Machine Learning Repository, Irvine, CA: University of California, School of Information and Computer Science, 2019. Available from: http://archive.ics.uci.edu/ml.
    [16] Y. Gao, H. Liu, X. Wang, K. Zhang, On an artificial neural network for inverse scattering problems, J. Comput. Phys., 448 (2022), 110771. https://doi.org/10.1016/j.jcp.2021.110771 doi: 10.1016/j.jcp.2021.110771
    [17] A. Gibali, D. V. Thong, Tseng type methods for solving inclusion problems and its applications, Calcolo, 55 (2018), 49. https://doi.org/10.1007/s10092-018-0292-1 doi: 10.1007/s10092-018-0292-1
    [18] J. Han, M. Kamber, J. Pei, Data mining: concepts and techniques, Waltham, MA: Morgan Kaufman Publishers, 2012.
    [19] W. Jirakitpuwapat, P. Kumam, Y. J. Cho, K. Sitthithakerngkiet, A general algorithm for the split common fixed point problem with its applications to signal processing, Mathematics, 7 (2019), 226. https://doi.org/10.3390/math7030226 doi: 10.3390/math7030226
    [20] V. A. Kumari, R. Chitra, Classification of diabetes disease using support vector machine, International Journal of Engineering Research and Applications, 3 (2013), 1797–1801.
    [21] J. Liang, T. Luo, C. B. Schonlieb, Improving "fast iterative Shrinkage-Thresholding algorithm": faster, smarter, and greedier, SIAM J. Sci. Comput., 44 (2022), A1069–A1091. https://doi.org/10.1137/21M1395685 doi: 10.1137/21M1395685
    [22] V. Martín-Márquez, F. Wang, H. K. Xu, Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Probl., 28 (2012), 085004. https://doi.org/10.1088/0266-5611/28/8/085004 doi: 10.1088/0266-5611/28/8/085004
    [23] Z. Ma, L. Wang, Y. J. Cho, Some results for split equality equilibrium problems in Banach spaces, Symmetry, 11 (2019), 194. https://doi.org/10.3390/sym11020194 doi: 10.3390/sym11020194
    [24] P. E. Maingé, Inertial iterative process for fixed points of certain quasi-nonexpansive mappings, Set-Valued Anal., 15 (2007), 67–79. https://doi.org/10.1007/s11228-006-0027-3 doi: 10.1007/s11228-006-0027-3
    [25] Y. E. Nesterov, A method of solving a convex programming problem with convergence rate O(1/k²), Dokl. Akad. Nauk SSSR, 269 (1983), 543–547.
    [26] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597.
    [27] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [28] D. R. Sahu, Y. J. Cho, Q. L. Dong, M. R. Kashyap, X. H. Li, Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces, Numer. Algor., 87 (2021), 1075–1095. https://doi.org/10.1007/s11075-020-00999-2 doi: 10.1007/s11075-020-00999-2
    [29] P. Sarnmeta, W. Inthakon, D. Chumpungam, S. Suantai, On convergence and complexity analysis of an accelerated forward–backward algorithm with linesearch technique for convex minimization problems and applications to data prediction and classification, J. Inequal. Appl., 2021 (2021), 141. https://doi.org/10.1186/s13660-021-02675-y doi: 10.1186/s13660-021-02675-y
    [30] Y. Shehu, A. Gibali, New inertial relaxed method for solving split feasibilities, Optim. Lett., 15 (2021), 2109–2126. https://doi.org/10.1007/s11590-020-01603-1 doi: 10.1007/s11590-020-01603-1
    [31] J. S. Sonawane, D. R. Patil, Prediction of heart disease using learning vector quantization algorithm, In: 2014 Conference on IT in Business, Industry and Government (CSIBIG), Indore, India, 2014, 1–5. https://doi.org/10.1109/CSIBIG.2014.7056973
    [32] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 311 (2005), 506–517. https://doi.org/10.1016/j.jmaa.2005.03.002 doi: 10.1016/j.jmaa.2005.03.002
    [33] S. Suantai, N. Pholasa, P. Cholamjiak, Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems, RACSAM, 113 (2019), 1081–1099. https://doi.org/10.1007/s13398-018-0535-7 doi: 10.1007/s13398-018-0535-7
    [34] S. Suantai, S. Kesornprom, P. Cholamjiak, A new hybrid CQ algorithm for the split feasibility problem in Hilbert spaces and its applications to compressed sensing, Mathematics, 7 (2019), 789. https://doi.org/10.3390/math7090789 doi: 10.3390/math7090789
    [35] N. T. Vinh, P. Cholamjiak, S. Suantai, A new CQ algorithm for solving split feasibility problems in Hilbert spaces, Bull. Malays. Math. Sci. Soc., 42 (2019), 2517–2534. https://doi.org/10.1007/s40840-018-0614-0 doi: 10.1007/s40840-018-0614-0
    [36] F. Wang, Polyak's gradient method for split feasibility problem constrained by level sets, Numer. Algor., 77 (2018), 925–938. https://doi.org/10.1007/s11075-017-0347-4 doi: 10.1007/s11075-017-0347-4
    [37] X. Wang, Y. Guo, D. Zhang, H. Liu, Fourier method for recovering acoustic sources from multi-frequency far-field data, Inverse Probl., 33 (2017), 035001. https://doi.org/10.1088/1361-6420/aa573c doi: 10.1088/1361-6420/aa573c
    [38] U. Witthayarat, Y. J. Cho, P. Cholamjiak, On solving proximal split feasibility problems and applications, Ann. Funct. Anal., 9 (2018), 111–122. https://doi.org/10.1215/20088752-2017-0028 doi: 10.1215/20088752-2017-0028
    [39] World Health Organization, Cardiovascular diseases (CVDs), World Health Organization (WHO), 2021. Available from: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds).
    [40] H. K. Xu, Iterative methods for solving the split feasibility in infinite-dimensional Hilbert spaces, Inverse Probl., 26 (2010), 105018. https://doi.org/10.1088/0266-5611/26/10/105018 doi: 10.1088/0266-5611/26/10/105018
    [41] Q. Yang, On variable-step relaxed projection algorithm for variational inequalities, J. Math. Anal. Appl., 302 (2005), 166–179. https://doi.org/10.1016/j.jmaa.2004.07.048 doi: 10.1016/j.jmaa.2004.07.048
    [42] W. Yin, W. Yang, H. Liu, A neural network scheme for recovering scattering obstacles with limited phaseless far-field data, J. Comput. Phys., 417 (2020), 109594. https://doi.org/10.1016/j.jcp.2020.109594 doi: 10.1016/j.jcp.2020.109594
    [43] J. Zhao, Y. Liang, Y. Liu, Y. J. Cho, Split equilibrium, variational inequality and fixed point problems for multi-valued mappings in Hilbert spaces, Appl. Comput. Math., 17 (2018), 271–283.
  • This article has been cited by:

    1. Pennipat Nabheerong, Warissara Kiththiworaphongkich, Watcharaporn Cholamjiak, Arjun Singh, Breast Cancer Screening Using a Modified Inertial Projective Algorithms for Split Feasibility Problems, 2023, 2023, 2090-3189, 1, 10.1155/2023/2060375
    2. Abdulwahab Ahmad, Poom Kumam, Yeol Je Cho, Kanokwan Sitthithakerngkiet, Halpern-type relaxed algorithms with alternated and multi-step inertia for split feasibility problems with applications in classification problems, 2025, 8, 2651-2939, 50, 10.33205/cma.1563173
    3. Abdulwahab Ahmad, Poom Kumam, Mahmoud Muhammad Yahaya, Kanokwan Sitthithakerngkiet, Alternated and Multi-Step Inertial Algorithm with Three-Term Conjugate Gradient-Like Direction for Split Feasibilities with Applications in Classification Problems and Elastic Net, 2025, 44, 2238-3603, 10.1007/s40314-025-03276-x
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
