
Significance of heat transfer for second-grade fuzzy hybrid nanofluid flow over a stretching/shrinking Riga wedge

  • This investigation presents the effect of the fuzzy nanoparticle volume fraction on heat transfer of a second-grade hybrid Al2O3 + Cu/EO nanofluid over a stretching/shrinking Riga wedge under the contributions of a heat source, stagnation point, and nonlinear thermal radiation. This inquiry also includes flow simulations using a modified Hartmann number, boundary wall slip, and a heat convective boundary condition. Engine oil is used as the host fluid, and two distinct nanomaterials (Cu and Al2O3) are used as nanoparticles. The associated nonlinear governing PDEs are reduced to ODEs using suitable transformations. After that, 'bvp4c', a MATLAB technique, is used to compute the solution of the said problem. For validation, the current findings are consistent with those previously published. The temperature of the hybrid nanofluid rises significantly more quickly than that of the second-grade fluid for larger values of the wedge angle parameter and the volume fraction of nanomaterials. For improvements to the wedge angle and Hartmann parameters, the skin friction factor improves. Also, for the comparison of nanofluids and hybrid nanofluids through the membership function (MF), the nanoparticle volume fraction is taken as a triangular fuzzy number (TFN) in this work. The membership function and σ-cut control the TFN, which ranges from 0 to 1. According to the fuzzy analysis, the hybrid nanofluid gives a higher heat transfer rate than the nanofluids. Heat transfer and boundary layer flow at wedges have recently received much attention due to several metallurgical and engineering applications such as continuous casting, metal extrusion, wire drawing, plastic and hot rolling, crystal growing, and fibreglass and paper manufacturing.

    Citation: Imran Siddique, Yasir Khan, Muhammad Nadeem, Jan Awrejcewicz, Muhammad Bilal. Significance of heat transfer for second-grade fuzzy hybrid nanofluid flow over a stretching/shrinking Riga wedge[J]. AIMS Mathematics, 2023, 8(1): 295-316. doi: 10.3934/math.2023014




    Convex bilevel optimization problems play an important role in many real-world applications such as image-signal processing, data science, data prediction, data classification, and artificial intelligence. For some interesting applications, we refer to the recent papers [1,2]. More precisely, we recall the concept of the convex bilevel optimization problem as follows. Let $\psi$ and $\phi$ be two proper convex and lower semi-continuous functions from a real Hilbert space $H$ into $\mathbb{R}\cup\{+\infty\}$, where $\phi$ is a smooth function. In this work, we consider the following convex bilevel optimization problem:

    $\min_{z\in S} h(z),$ (1.1)

    where $h$ is a strongly convex differentiable function from $H$ into $\mathbb{R}$ with parameter $s$, and $S$ is the solution set of the problem:

    $\min_{z\in H}\{\phi(z)+\psi(z)\}.$ (1.2)

    Problems (1.1) and (1.2) are known as the outer-level and inner-level problems, respectively. It is well known that if $z^{\ast}$ satisfies the variational inequality

    $\langle \nabla h(z^{\ast}), z - z^{\ast} \rangle \geq 0, \quad \forall z \in S,$

    then $z^{\ast}$ is a solution of the outer-level problem (1.1); for more details, see [3]. Generally, the solution of problem (1.2) exists under the assumption that $\nabla\phi$ is Lipschitz continuous with parameter $L_\phi$, that is, there exists $L_\phi > 0$ such that $\|\nabla\phi(w) - \nabla\phi(v)\| \leq L_\phi \|w - v\|$ for all $w, v \in H$.

    The proximity operator, $\mathrm{prox}_{\mu\psi}(z) = J^{\partial\psi}_{\mu}(z) = (I + \mu\,\partial\psi)^{-1}(z)$, where $I$ is the identity mapping and $\partial\psi$ is the subdifferential of $\psi$, is crucial in solving problem (1.2). It is known that a point $z$ in $S$ is a fixed point of the operator $\mathrm{prox}_{\mu\psi}(I - \mu\nabla\phi)$. The following classical forward-backward splitting algorithm:

    $x_{k+1} = \mathrm{prox}_{\mu_k\psi}(x_k - \mu_k\nabla\phi(x_k))$ (1.3)

    was proposed for solving problem (1.2). After that, Sabach and Shtern [4] introduced the bilevel gradient sequential averaging method (BiG-SAM), as seen in Algorithm 2. They also proved that the sequence $\{x_k\}$ generated by BiG-SAM converges strongly to the optimal point $z^{\ast}$ of the convex bilevel optimization problem (1.1) and (1.2). Later, to speed up the rate of convergence of BiG-SAM, Shehu et al. [5] employed an inertial technique proposed by Polyak [6], as defined by Algorithm 3 (iBiG-SAM). Moreover, they proved a strong convergence theorem of Algorithm 3 under some weaker assumptions on $\{\lambda_k\}$ given in [7], namely $\lim_{k\to\infty}\lambda_k = 0$ and $\sum_{k=1}^{\infty}\lambda_k = +\infty$. The convergence rate of iBiG-SAM was subsequently improved by adapting the inertial technique, giving the alternated inertial bilevel gradient sequential averaging method [8] (aiBiG-SAM), as seen in Algorithm 4. It was shown by some examples in [8] that the convergence behavior of aiBiG-SAM is better than that of BiG-SAM and iBiG-SAM. Recently, Jolaoso et al. [9] proposed a double inertial technique to accelerate the convergence rate of the strongly convergent 2-step inertial PPA algorithm for finding a zero of the sum of two maximal monotone operators. Yao et al. [10] also introduced a method for solving such a problem, called the weakly convergent FRB algorithm with momentum. This problem is exactly the inner-level problem in this work.
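    To make (1.3) concrete, the following sketch (ours, not from the paper) applies the forward-backward iteration to a small LASSO-type instance, under the assumption that $\psi = \alpha\|\cdot\|_1$, whose proximal operator is componentwise soft-thresholding, with a fixed step $\mu \leq 1/L_\phi$:

```python
import numpy as np

def soft_threshold(x, tau):
    # prox of tau*||.||_1: componentwise shrinkage toward zero
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, b, alpha, mu, iters=2000):
    # Iterate x_{k+1} = prox_{mu*psi}(x_k - mu*grad_phi(x_k)) as in (1.3),
    # for phi(x) = ||Ax - b||_2^2 and psi(x) = alpha*||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)                 # forward (gradient) step
        x = soft_threshold(x - mu * grad, mu * alpha)  # backward (prox) step
    return x
```

    For instance, with $A = I$ the iteration converges to the componentwise shrinkage of $b$, the known closed-form LASSO solution in that special case.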

    It is worth noting that all methods mentioned above require a Lipschitz continuity assumption on $\nabla\phi$. However, finding a Lipschitz constant of $\nabla\phi$ is sometimes too difficult. To solve the inner-level problem without computing the Lipschitz constant of the gradient $\nabla\phi$, Cruz and Nghia [11] presented a linesearch technique (Linesearch 1) for finding a suitable step size for the forward-backward splitting algorithm. This notion requires weaker assumptions on the gradient of $\phi$, as seen in the following criteria:

    A1. $\phi, \psi : H \to (-\infty, +\infty]$ are proper lower semicontinuous convex functions with $\mathrm{dom}\,\psi \subseteq \mathrm{dom}\,\phi$;

    A2. $\phi$ is differentiable on an open set containing $\mathrm{dom}\,\psi$, and $\nabla\phi$ is uniformly continuous on any bounded subset of $\mathrm{dom}\,\psi$ and maps any bounded subset of $\mathrm{dom}\,\psi$ to a bounded set in $H$.

    It is observed that assumption A2 is weaker than the Lipschitz continuity assumption on $\nabla\phi$. Under assumptions A1 and A2, they proved that the sequence $\{x_k\}$ generated by (1.3), where $\mu_k$ is derived from Linesearch 1 (see more detail in the appendix), converges weakly to the optimal solution of the inner-level problem (1.2). Inspired by [11], several algorithms with the linesearch technique were proposed in order to solve problem (1.2); see [12,13,14,15,16,17], for examples. Recently, Hanjing et al. [17] introduced a new linesearch technique (Linesearch 2) and a new algorithm (Algorithm 5), called the forward-backward iterative method with the inertial technical term and linesearch technique, to solve the inner-level problem (1.2). For more details on Linesearch 2 and Algorithm 5, see the appendix. They proved that the sequence $\{x_k\}$ generated by Algorithm 5 converges weakly to a solution of problem (1.2) under some control conditions.

    Note that Algorithm 5 was employed to find a solution of the inner-level problem (1.2) and it provided only weak convergence, while strong convergence is more desirable. Inspired by all of the mentioned works, we aim to develop Algorithm 5 for solving the convex bilevel problems (1.1) and (1.2) by employing Linesearch 2 together with the viscosity approximation method. The strong convergence theorem of our developed algorithm is established under some suitable conditions and assumptions. Furthermore, we apply our proposed algorithm to solve image restoration and data classification problems, including a comparison of its performance with other algorithms.

    In this section, we provide some important definitions, propositions, and lemmas which will be used in the next section. Let $H$ be a real Hilbert space and $X$ a nonempty closed convex subset of $H$. Then, for each $w \in H$, there exists a unique element $P_X w$ in $X$ satisfying

    $\|w - P_X w\| \leq \|w - z\|, \quad \forall z \in X.$

    The mapping PX is known as the metric projection of H onto X. Moreover,

    $\langle w - P_X w, z - P_X w \rangle \leq 0$ (2.1)

    holds for all $w \in H$ and $z \in X$. A mapping $f: X \to H$ is called Lipschitz continuous if there exists $L_f > 0$ such that

    $\|f(v) - f(w)\| \leq L_f \|v - w\|, \quad \forall v, w \in X.$

    If $L_f \in [0,1)$, then $f$ is called a contraction. Moreover, $f$ is nonexpansive if $L_f = 1$. The domain of a function $f: H \to [-\infty, +\infty]$ is denoted by $\mathrm{dom}\,f$, where $\mathrm{dom}\,f := \{v \in H : f(v) < +\infty\}$. Let $\{x_k\}$ be a sequence in $H$; we adopt the following notations:

    1) $x_k \rightharpoonup w$ denotes that the sequence $\{x_k\}$ converges weakly to $w \in H$;

    2) $x_k \to w$ denotes that $\{x_k\}$ converges strongly to $w \in H$.

    For each $v, w \in H$, the following conditions hold:

    1) $\|v \pm w\|^2 = \|v\|^2 \pm 2\langle v, w \rangle + \|w\|^2$;

    2) $\|v + w\|^2 \leq \|v\|^2 + 2\langle w, v + w \rangle$;

    3) $\|tv + (1-t)w\|^2 = t\|v\|^2 + (1-t)\|w\|^2 - t(1-t)\|v - w\|^2, \quad \forall t \in \mathbb{R}$.

    Let $\psi: H \to (-\infty, +\infty]$ be a proper function. The subdifferential $\partial\psi$ of $\psi$ is defined by

    $\partial\psi(u) := \{v \in H : \langle v, w - u \rangle + \psi(u) \leq \psi(w), \ \forall w \in H\}, \quad \forall u \in H.$

    If $\partial\psi(u) \neq \emptyset$, then $\psi$ is subdifferentiable at $u$, and the elements of $\partial\psi(u)$ are the subgradients of $\psi$ at $u$. The proximal operator $\mathrm{prox}_\psi: H \to \mathrm{dom}\,\psi$ with $\mathrm{prox}_\psi(x) := (I + \partial\psi)^{-1}(x)$ is single-valued with full domain. Moreover, we have from [18] that for each $x \in H$ and $\mu > 0$,

    $\frac{x - \mathrm{prox}_{\mu\psi}(x)}{\mu} \in \partial\psi(\mathrm{prox}_{\mu\psi}(x)).$ (2.2)
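    For $\psi = \|\cdot\|_1$ the proximal operator has the closed form of soft-thresholding, and the inclusion (2.2) can be checked componentwise against the well-known subdifferential of the $\ell_1$ norm. A small numerical illustration (ours, not the paper's):

```python
import numpy as np

def prox_l1(x, mu):
    # closed form of prox_{mu*||.||_1}(x): soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

x = np.array([2.0, -0.3, 0.7, -5.0])
mu = 0.5
p = prox_l1(x, mu)
g = (x - p) / mu   # by (2.2), g must lie in the subdifferential of ||.||_1 at p

for gi, pi in zip(g, p):
    if pi != 0:
        assert np.isclose(gi, np.sign(pi))   # g_i = sign(p_i) where p_i != 0
    else:
        assert -1.0 <= gi <= 1.0             # g_i in [-1, 1] where p_i = 0
```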

    Let us next revisit some important properties for this work.

    Lemma 1 ([19]). Let $\partial\psi$ be the subdifferential of $\psi$. Then, the following hold:

    1) $\partial\psi$ is maximal monotone;

    2) $\mathrm{Gph}(\partial\psi) := \{(v, w) \in H \times H : w \in \partial\psi(v)\}$ is demiclosed, i.e., if $\{(v_k, w_k)\}$ is a sequence in $\mathrm{Gph}(\partial\psi)$ such that $v_k \rightharpoonup v$ and $w_k \to w$, then $(v, w) \in \mathrm{Gph}(\partial\psi)$.

    Using the same idea as in [4, Proposition 3], the following result can be proven.

    Proposition 2. Suppose $h: H \to \mathbb{R}$ is strongly convex with parameter $s > 0$ and continuously differentiable such that $\nabla h$ is Lipschitz continuous with constant $L_h$. Then, the mapping $I - t\nabla h$ is a $c$-contraction for all $0 < t \leq \frac{2}{L_h + s}$, where $c = \sqrt{1 - \frac{2stL_h}{s + L_h}}$ and $I$ is the identity operator.

    Proof: For any x,yH, we obtain

    $\|(x - t\nabla h(x)) - (y - t\nabla h(y))\|^2 = \|x - y\|^2 - 2t\langle \nabla h(x) - \nabla h(y), x - y \rangle + t^2\|\nabla h(x) - \nabla h(y)\|^2.$ (2.3)

    Using the same proof as in the case $H = \mathbb{R}^n$ of [20, Theorem 2.1.12], we get

    $\langle \nabla h(x) - \nabla h(y), x - y \rangle \geq \frac{sL_h}{s + L_h}\|x - y\|^2 + \frac{1}{s + L_h}\|\nabla h(x) - \nabla h(y)\|^2.$ (2.4)

    From (2.3) and (2.4), we get

    $\|(x - t\nabla h(x)) - (y - t\nabla h(y))\|^2 \leq \left(1 - \frac{2stL_h}{s + L_h}\right)\|x - y\|^2 + t\left(t - \frac{2}{s + L_h}\right)\|\nabla h(x) - \nabla h(y)\|^2 \leq \left(1 - \frac{2stL_h}{s + L_h}\right)\|x - y\|^2,$

    since $t \leq \frac{2}{L_h + s}$. Hence $\|(x - t\nabla h(x)) - (y - t\nabla h(y))\| \leq \sqrt{1 - \frac{2stL_h}{s + L_h}}\,\|x - y\|$.
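    Proposition 2 can be sanity-checked numerically on a simple quadratic; the following sketch (ours, for illustration only) verifies the contraction factor $c$ for $h(x) = \frac{1}{2}x^T Q x$ with $Q = \mathrm{diag}(s, L_h)$, which is strongly convex with parameter $s$ and has an $L_h$-Lipschitz gradient:

```python
import numpy as np

# h(x) = 0.5 x^T Q x with Q = diag(s, L): strongly convex with parameter s,
# and grad h(x) = Qx is Lipschitz with constant L.
s, L = 1.0, 4.0
Q = np.diag([s, L])
t = 2.0 / (L + s)                                 # largest admissible step
c = np.sqrt(1.0 - 2.0 * s * t * L / (s + L))      # contraction factor from Prop. 2

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = np.linalg.norm((x - t * Q @ x) - (y - t * Q @ y))
    assert lhs <= c * np.linalg.norm(x - y) + 1e-12
```

    For this choice of $t$ the bound is tight: $c$ equals the operator norm of $I - tQ$.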

    Lemma 3 ([21]). Let $\{a_k\}$ be a sequence of nonnegative real numbers satisfying

    $a_{k+1} \leq (1 - b_k)a_k + b_k s_k, \quad \forall k \in \mathbb{N},$

    where $\{b_k\}$ is a sequence in $(0,1)$ such that $\sum_{k=1}^{\infty} b_k = +\infty$ and $\{s_k\}$ is a sequence satisfying $\limsup_{k\to\infty} s_k \leq 0$. Then, $\lim_{k\to\infty} a_k = 0$.

    Lemma 4 ([22]). Let $\{u_k\}$ be a sequence of real numbers such that there exists a subsequence $\{u_{m_j}\}$ of $\{u_k\}$ with $u_{m_j} < u_{m_j+1}$ for all $j \in \mathbb{N}$. Then there exists a nondecreasing sequence $\{k_\ell\} \subset \mathbb{N}$ such that $\lim_{\ell\to\infty} k_\ell = \infty$ and, for all sufficiently large $\ell \in \mathbb{N}$, the following holds:

    $u_{k_\ell} \leq u_{k_\ell + 1} \quad \text{and} \quad u_\ell \leq u_{k_\ell + 1}.$

    We begin this section by introducing a new accelerated algorithm (Algorithm 1), using a linesearch technique together with some modifications of Algorithm 5, for solving the bilevel convex minimization problems (1.1) and (1.2). Throughout this section, we let $\Omega$ be the set of all solutions of the convex bilevel problems (1.1) and (1.2), and we assume that $h: H \to \mathbb{R}$ is a strongly convex differentiable function with parameter $s$ such that $\nabla h$ is $L_h$-Lipschitz continuous and $t \in (0, \frac{2}{L_h + s}]$. Suppose $f: \mathrm{dom}\,\psi \to \mathrm{dom}\,\psi$ is a $c$-contraction for some $c \in (0,1)$. Let $\{\gamma_k\}$ and $\{\xi_k\}$ be positive real sequences, and let $\{\lambda_k\}$ be a sequence in $(0,1)$. We propose the following Algorithm 1:

     

    Algorithm 1 Accelerated viscosity forward-backward algorithm with Linesearch 2.
    1: We are given $x_1 = y_0 \in \mathrm{dom}\,\psi$, $\sigma > 0$, $\theta \in (0,1)$, $\rho \in (0, \frac{1}{2}]$, and $\delta \in (0, \frac{\rho}{4})$.
    2: For each $k \geq 1$, define $\mu_k :=$ Linesearch 2$(u_k, \sigma, \theta, \delta)$ and evaluate
    $u_k = \lambda_k f(x_k) + (1 - \lambda_k)x_k,$
    $v_k = \mathrm{prox}_{\mu_k\psi}(u_k - \mu_k\nabla\phi(u_k)),$
    $y_k = \mathrm{prox}_{\mu_k\psi}(v_k - \mu_k\nabla\phi(v_k)).$
    3: Select $\eta_k \in (0, \bar\eta_k]$ such that

    $\bar\eta_k = \begin{cases} \min\left\{\gamma_k, \frac{\xi_k}{\|y_k - y_{k-1}\|}\right\} & \text{if } y_k \neq y_{k-1}, \\ \gamma_k & \text{otherwise}. \end{cases}$ (3.1)

    Compute
    $x_{k+1} = P_{\mathrm{dom}\,\psi}(y_k + \eta_k(y_k - y_{k-1})).$

    Remark 1. Our proposed algorithm uses a linesearch technique for finding the step size of the proximal gradient method in order to relax the Lipschitz continuity assumption on the gradient of $\phi$. Note that this linesearch technique employs two proximal evaluations, which is appropriate for algorithms consisting of two proximal evaluations at each iteration; see [12,13,14,15,16,17]. It is observed that those algorithms have better convergence behavior than the others.
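    A minimal Python sketch of Algorithm 1 follows (illustrative only, not the authors' MATLAB code). Linesearch 2 is defined in the paper's appendix and is not reproduced here, so the backtracking rule below merely enforces the inequality used in (3.8); the choices of $f$, $\lambda_k$, and $\gamma_k$ follow the later Table 1, while $\xi_k$ is scaled down for a toy instance, and $\mathrm{dom}\,\psi = \mathbb{R}^n$ so the projection is the identity:

```python
import numpy as np

def soft(x, t):
    # prox of t*||.||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def algorithm1(grad_phi, prox_psi, f, x1, sigma=0.9, theta=0.1,
               rho=0.5, delta=0.124, iters=300):
    # Sketch of Algorithm 1 with dom(psi) = R^n (projection = identity).
    x = np.asarray(x1, float).copy()
    y_prev = x.copy()
    for k in range(1, iters + 1):
        lam = 1.0 / (50 * k)                     # lambda_k
        u = lam * f(x) + (1.0 - lam) * x
        mu = sigma                               # backtracking linesearch:
        while True:                              # enforce the bound in (3.8)
            v = prox_psi(u - mu * grad_phi(u), mu)
            y = prox_psi(v - mu * grad_phi(v), mu)
            lhs = mu * ((1 - rho) * np.linalg.norm(grad_phi(y) - grad_phi(v))
                        + rho * np.linalg.norm(grad_phi(v) - grad_phi(u)))
            rhs = delta * (np.linalg.norm(y - v) + np.linalg.norm(v - u))
            if lhs <= rhs + 1e-12:
                break
            mu *= theta                          # mu <- theta * mu
        xi, gamma = 1e-3 / k**2, 0.5             # xi_k, gamma_k (illustrative)
        dy = np.linalg.norm(y - y_prev)
        eta = gamma if dy == 0 else min(gamma, xi / dy)   # eta_k as in (3.1)
        x, y_prev = y + eta * (y - y_prev), y    # inertial extrapolation step
    return x
```

    As a usage example, for the inner problem $\phi(x) = \|x - b\|_2^2$, $\psi = 0.1\|\cdot\|_1$ and the viscosity map $f = I - t\nabla h$ with $h = \frac{1}{2}\|\cdot\|^2$ and $t = 0.01$ (so $f(x) = 0.99x$), the iterates approach the shrinkage of $b$.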

    To prove the convergence results of Algorithm 1, we first establish the following results.

    Lemma 5. Let $\{x_k\}$ be a sequence generated by Algorithm 1, and let $v \in \mathrm{dom}\,\psi$. Then, the following inequality holds:

    $\|u_k - v\|^2 - \|y_k - v\|^2 \geq 2\mu_k\big((\phi+\psi)(y_k) + (\phi+\psi)(v_k) - 2(\phi+\psi)(v)\big) + \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big).$

    Proof: Let $v \in \mathrm{dom}\,\psi$. It follows from (2.2) that

    $\frac{u_k - v_k}{\mu_k} - \nabla\phi(u_k) = \frac{(u_k - \mu_k\nabla\phi(u_k)) - \mathrm{prox}_{\mu_k\psi}(u_k - \mu_k\nabla\phi(u_k))}{\mu_k} \in \partial\psi(v_k),$

    and

    $\frac{v_k - y_k}{\mu_k} - \nabla\phi(v_k) = \frac{(v_k - \mu_k\nabla\phi(v_k)) - \mathrm{prox}_{\mu_k\psi}(v_k - \mu_k\nabla\phi(v_k))}{\mu_k} \in \partial\psi(y_k).$

    Then, by the definition of $\partial\psi$, we get

    $\psi(v) - \psi(v_k) \geq \left\langle \frac{u_k - v_k}{\mu_k} - \nabla\phi(u_k), v - v_k \right\rangle = \frac{1}{\mu_k}\langle u_k - v_k, v - v_k \rangle + \langle \nabla\phi(u_k), v_k - v \rangle,$ (3.2)

    and

    $\psi(v) - \psi(y_k) \geq \left\langle \frac{v_k - y_k}{\mu_k} - \nabla\phi(v_k), v - y_k \right\rangle = \frac{1}{\mu_k}\langle v_k - y_k, v - y_k \rangle + \langle \nabla\phi(v_k), y_k - v \rangle.$ (3.3)

    By the convexity of $\phi$, we have for every $x \in \mathrm{dom}\,\phi$ and $y \in \mathrm{dom}\,\psi$,

    $\phi(x) - \phi(y) \geq \langle \nabla\phi(y), x - y \rangle,$ (3.4)

    which implies

    $\phi(v) - \phi(u_k) \geq \langle \nabla\phi(u_k), v - u_k \rangle,$ (3.5)

    and

    $\phi(v) - \phi(v_k) \geq \langle \nabla\phi(v_k), v - v_k \rangle.$ (3.6)

    From (3.2), (3.3), (3.5), and (3.6), we have

    $2(\phi+\psi)(v) - (\phi+\psi)(v_k) - \phi(u_k) - \psi(y_k)$
    $\geq \frac{1}{\mu_k}\langle u_k - v_k, v - v_k \rangle + \langle \nabla\phi(u_k), v_k - v \rangle + \langle \nabla\phi(u_k), v - u_k \rangle + \langle \nabla\phi(v_k), v - v_k \rangle + \frac{1}{\mu_k}\langle v_k - y_k, v - y_k \rangle + \langle \nabla\phi(v_k), y_k - v \rangle$
    $= \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) + \langle \nabla\phi(u_k), v_k - u_k \rangle + \langle \nabla\phi(v_k), y_k - v_k \rangle$
    $= \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) + \langle \nabla\phi(u_k) - \nabla\phi(v_k), v_k - u_k \rangle + \langle \nabla\phi(v_k), v_k - u_k \rangle + \langle \nabla\phi(v_k) - \nabla\phi(y_k), y_k - v_k \rangle + \langle \nabla\phi(y_k), y_k - v_k \rangle$
    $\geq \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) - \|\nabla\phi(u_k) - \nabla\phi(v_k)\|\|v_k - u_k\| + \langle \nabla\phi(v_k), v_k - u_k \rangle - \|\nabla\phi(v_k) - \nabla\phi(y_k)\|\|y_k - v_k\| + \langle \nabla\phi(y_k), y_k - v_k \rangle.$

    This together with (3.4) gives

    $2(\phi+\psi)(v) - (\phi+\psi)(v_k) - \phi(u_k) - \psi(y_k)$
    $\geq \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) - \|\nabla\phi(u_k) - \nabla\phi(v_k)\|\|v_k - u_k\| - \|\nabla\phi(v_k) - \nabla\phi(y_k)\|\|y_k - v_k\| + \phi(v_k) - \phi(u_k) + \phi(y_k) - \phi(v_k)$
    $= \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) - \|\nabla\phi(u_k) - \nabla\phi(v_k)\|\|v_k - u_k\| - \|\nabla\phi(v_k) - \nabla\phi(y_k)\|\|y_k - v_k\| - \phi(u_k) + \phi(y_k)$
    $\geq \frac{1}{\mu_k}\big(\langle u_k - v_k, v - v_k \rangle + \langle v_k - y_k, v - y_k \rangle\big) - \phi(u_k) + \phi(y_k) - \big(\|y_k - v_k\| + \|v_k - u_k\|\big)\big(\|\nabla\phi(u_k) - \nabla\phi(v_k)\| + \|\nabla\phi(v_k) - \nabla\phi(y_k)\|\big).$ (3.7)

    By the definition of Linesearch 2, we get

    $\mu_k\big((1-\rho)\|\nabla\phi(y_k) - \nabla\phi(v_k)\| + \rho\|\nabla\phi(v_k) - \nabla\phi(u_k)\|\big) \leq \delta\big(\|y_k - v_k\| + \|v_k - u_k\|\big).$ (3.8)

    From (3.7) and (3.8), we have

    $\frac{1}{\mu_k}\big(\langle u_k - v_k, v_k - v \rangle + \langle v_k - y_k, y_k - v \rangle\big)$
    $\geq (\phi+\psi)(v_k) + \phi(u_k) + \psi(y_k) - 2(\phi+\psi)(v) - \phi(u_k) + \phi(y_k) - \big(\|y_k - v_k\| + \|v_k - u_k\|\big)\big(\|\nabla\phi(u_k) - \nabla\phi(v_k)\| + \|\nabla\phi(v_k) - \nabla\phi(y_k)\|\big)$
    $\geq (\phi+\psi)(v_k) + (\phi+\psi)(y_k) - 2(\phi+\psi)(v) - \frac{1}{\rho}\big(\|y_k - v_k\| + \|v_k - u_k\|\big)\big(\rho\|\nabla\phi(v_k) - \nabla\phi(u_k)\| + (1-\rho)\|\nabla\phi(y_k) - \nabla\phi(v_k)\|\big)$
    $\geq (\phi+\psi)(v_k) + (\phi+\psi)(y_k) - 2(\phi+\psi)(v) - \frac{\delta}{\rho\mu_k}\big(\|y_k - v_k\| + \|v_k - u_k\|\big)^2$
    $\geq (\phi+\psi)(v_k) + (\phi+\psi)(y_k) - 2(\phi+\psi)(v) - \frac{2\delta}{\rho\mu_k}\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big),$ (3.9)

    where the second inequality uses $\rho \leq 1-\rho$ and the last uses $(a+b)^2 \leq 2(a^2+b^2)$.

    Moreover, we know that

    $\langle u_k - v_k, v_k - v \rangle = \frac{1}{2}\big(\|u_k - v\|^2 - \|u_k - v_k\|^2 - \|v_k - v\|^2\big),$ (3.10)

    and

    $\langle v_k - y_k, y_k - v \rangle = \frac{1}{2}\big(\|v_k - v\|^2 - \|v_k - y_k\|^2 - \|y_k - v\|^2\big).$ (3.11)

    By replacing (3.10) and (3.11) in (3.9), we obtain

    $\|u_k - v\|^2 - \|y_k - v\|^2 \geq 2\mu_k\big((\phi+\psi)(y_k) + (\phi+\psi)(v_k) - 2(\phi+\psi)(v)\big) - \frac{4\delta}{\rho}\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big) + \|u_k - v_k\|^2 + \|v_k - y_k\|^2$
    $= 2\mu_k\big((\phi+\psi)(y_k) + (\phi+\psi)(v_k) - 2(\phi+\psi)(v)\big) + \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big).$ (3.12)

    Lemma 6. Let $\{x_k\}$ be a sequence generated by Algorithm 1, and suppose $S \neq \emptyset$ and $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k} = 0$. Then $\{x_k\}$ is bounded. Furthermore, $\{f(x_k)\}$, $\{u_k\}$, $\{y_k\}$, and $\{v_k\}$ are also bounded.

    Proof: Let $v \in S$. By Lemma 5, we have

    $\|u_k - v\|^2 - \|y_k - v\|^2 \geq 2\mu_k\big((\phi+\psi)(y_k) + (\phi+\psi)(v_k) - 2(\phi+\psi)(v)\big) + \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big) \geq \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big) \geq 0,$ (3.13)

    which implies

    $\|u_k - v\| \geq \|y_k - v\|.$ (3.14)

    By the definition of $u_k$ and since $f$ is a contraction with constant $c$, we get

    $\|u_k - v\| = \|\lambda_k f(x_k) + (1 - \lambda_k)x_k - v\|$ (3.15)
    $\leq \lambda_k\|f(x_k) - f(v)\| + \lambda_k\|f(v) - v\| + (1 - \lambda_k)\|x_k - v\|$
    $\leq c\lambda_k\|x_k - v\| + \lambda_k\|f(v) - v\| + (1 - \lambda_k)\|x_k - v\|$
    $= (1 - \lambda_k(1 - c))\|x_k - v\| + \lambda_k\|f(v) - v\|.$ (3.16)

    This together with (3.14) gives

    $\|x_{k+1} - v\| = \|P_{\mathrm{dom}\,\psi}(y_k + \eta_k(y_k - y_{k-1})) - P_{\mathrm{dom}\,\psi}(v)\| \leq \|(y_k - v) + \eta_k(y_k - y_{k-1})\|$ (3.17)
    $\leq \|y_k - v\| + \eta_k\|y_k - y_{k-1}\|$ (3.18)
    $\leq \|u_k - v\| + \eta_k\|y_k - y_{k-1}\|$ (3.19)
    $\leq (1 - \lambda_k(1 - c))\|x_k - v\| + \lambda_k\|f(v) - v\| + \eta_k\|y_k - y_{k-1}\|$
    $= (1 - \lambda_k(1 - c))\|x_k - v\| + \lambda_k(1 - c)\left(\frac{\|f(v) - v\|}{1 - c} + \frac{\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\|\right)$
    $\leq \max\left\{\|x_k - v\|, \frac{\|f(v) - v\|}{1 - c} + \frac{\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\|\right\}.$ (3.20)

    From (3.1), we have

    $\frac{\eta_k}{\lambda_k}\|y_k - y_{k-1}\| \leq \frac{\xi_k}{\|y_k - y_{k-1}\|}\cdot\frac{\|y_k - y_{k-1}\|}{\lambda_k} = \frac{\xi_k}{\lambda_k}.$

    Using $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k} = 0$, we obtain $\lim_{k\to\infty}\frac{\eta_k}{\lambda_k}\|y_k - y_{k-1}\| = 0$. Therefore, there exists $N > 0$ such that $\frac{\eta_k}{\lambda_k}\|y_k - y_{k-1}\| \leq N$ for all $k \in \mathbb{N}$. The above inequality implies

    $\|x_{k+1} - v\| \leq \max\left\{\|x_k - v\|, \frac{\|f(v) - v\|}{1 - c} + \frac{N}{1 - c}\right\}.$

    By induction, we have $\|x_{k+1} - v\| \leq \max\left\{\|x_1 - v\|, \frac{\|f(v) - v\|}{1 - c} + \frac{N}{1 - c}\right\}$, and so $\{x_k\}$ is bounded. It follows that $\{f(x_k)\}$ is bounded. Combining this with the definition of $u_k$, we obtain that $\{u_k\}$ is bounded. It then follows from (3.14) that $\{y_k\}$ and $\{v_k\}$ are also bounded.

    Theorem 7. Let $\{x_k\}$ be a sequence generated by Algorithm 1, and suppose $S \neq \emptyset$. Suppose $\phi$ and $\psi$ satisfy A1 and A2 and the following conditions hold:

    1) $\{\lambda_k\}$ is a positive sequence in $(0,1)$;

    2) $\mu_k \geq \mu$ for some $\mu \in \mathbb{R}^+$;

    3) $\lim_{k\to\infty}\lambda_k = 0$ and $\sum_{k=1}^{\infty}\lambda_k = +\infty$;

    4) $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k} = 0$.

    Then, $x_k \to v \in S$ such that $v = P_S f(v)$. Moreover, if $f := I - t\nabla h$, then $x_k \to v \in \Omega$.

    Proof: Let $v \in S$ be such that $v = P_S f(v)$. By (3.17), Algorithm 1, and the fact that $f$ is a contraction with constant $c$, we have

    $\|x_{k+1} - v\|^2 \leq \|(y_k - v) + \eta_k(y_k - y_{k-1})\|^2$
    $\leq \|y_k - v\|^2 + 2\eta_k\|y_k - v\|\|y_k - y_{k-1}\| + \eta_k^2\|y_k - y_{k-1}\|^2$
    $\leq \|u_k - v\|^2 + 2\eta_k\|y_k - v\|\|y_k - y_{k-1}\| + \eta_k^2\|y_k - y_{k-1}\|^2$
    $= \|\lambda_k(f(x_k) - f(v)) + (1 - \lambda_k)(x_k - v) + \lambda_k(f(v) - v)\|^2 + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $\leq \|\lambda_k(f(x_k) - f(v)) + (1 - \lambda_k)(x_k - v)\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $= \lambda_k\|f(x_k) - f(v)\|^2 + (1 - \lambda_k)\|x_k - v\|^2 - \lambda_k(1 - \lambda_k)\|(f(x_k) - f(v)) - (x_k - v)\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $\leq \lambda_k\|f(x_k) - f(v)\|^2 + (1 - \lambda_k)\|x_k - v\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $\leq c^2\lambda_k\|x_k - v\|^2 + (1 - \lambda_k)\|x_k - v\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $= (1 - \lambda_k(1 - c^2))\|x_k - v\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big)$
    $\leq (1 - \lambda_k(1 - c))\|x_k - v\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + \eta_k\|y_k - y_{k-1}\|\big(2\|y_k - v\| + \eta_k\|y_k - y_{k-1}\|\big).$ (3.21)

    Since $\lim_{k\to\infty}\eta_k\|y_k - y_{k-1}\| = \lim_{k\to\infty}\lambda_k\cdot\frac{\eta_k}{\lambda_k}\|y_k - y_{k-1}\| = 0$, there exists $N_1 > 0$ such that $\eta_k\|y_k - y_{k-1}\| \leq N_1$ for all $k \in \mathbb{N}$. From Lemma 6, we have $\|y_k - v\| \leq N_2$ for some $N_2 > 0$. Choose $\bar{N} = \max\{N_1, N_2\}$. By (3.21), we get

    $\|x_{k+1} - v\|^2 \leq (1 - \lambda_k(1 - c))\|x_k - v\|^2 + 2\lambda_k\langle f(v) - v, u_k - v \rangle + 3\bar{N}\eta_k\|y_k - y_{k-1}\|$
    $= (1 - \lambda_k(1 - c))\|x_k - v\|^2 + \lambda_k(1 - c)\left(\frac{2}{1 - c}\langle f(v) - v, u_k - v \rangle + \frac{3\bar{N}\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\|\right).$ (3.22)

    In order to verify the convergence of $\{x_k\}$, we divide the proof into the following two cases.

    Case 1. Suppose there exists $M \in \mathbb{N}$ such that $\|x_{k+1} - v\| \leq \|x_k - v\|$ for all $k \geq M$. This implies that $\lim_{k\to\infty}\|x_k - v\|$ exists. In (3.22), we set $a_k = \|x_k - v\|^2$, $b_k = \lambda_k(1 - c)$, and $s_k = \frac{2}{1 - c}\langle f(v) - v, u_k - v \rangle + \frac{3\bar{N}\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\|$. It follows from $\sum_{k=1}^{\infty}\lambda_k = +\infty$ that $\sum_{k=1}^{\infty} b_k = (1 - c)\sum_{k=1}^{\infty}\lambda_k = +\infty$. In addition,

    $\frac{3\bar{N}\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\| \leq \frac{3\bar{N}}{1 - c}\cdot\frac{\xi_k}{\|y_k - y_{k-1}\|}\cdot\frac{\|y_k - y_{k-1}\|}{\lambda_k} = \frac{3\bar{N}}{1 - c}\cdot\frac{\xi_k}{\lambda_k}.$

    Then, by $\lim_{k\to\infty}\frac{\xi_k}{\lambda_k} = 0$, we get $\lim_{k\to\infty}\frac{3\bar{N}\eta_k}{\lambda_k(1 - c)}\|y_k - y_{k-1}\| = 0$.

    To employ Lemma 3, we need to guarantee that $\limsup_{k\to\infty} s_k \leq 0$. Since $\{u_k\}$ is bounded, there exists a subsequence $\{u_{k_j}\}$ of $\{u_k\}$ such that $u_{k_j} \rightharpoonup w$ for some $w \in H$, and

    $\limsup_{k\to\infty}\langle f(v) - v, u_k - v \rangle = \lim_{j\to\infty}\langle f(v) - v, u_{k_j} - v \rangle = \langle f(v) - v, w - v \rangle.$

    Next, we show that $w \in S$. We have from (3.19) and (3.20) that

    $\lim_{k\to\infty}\|x_k - v\| = \lim_{k\to\infty}\|u_k - v\|.$ (3.23)

    Combining this with (3.18) and (3.20), we obtain

    $\lim_{k\to\infty}\|x_k - v\| = \lim_{k\to\infty}\|y_k - v\|.$ (3.24)

    From (3.13), we have

    $\|u_k - v\|^2 - \|y_k - v\|^2 \geq \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big) \geq \left(1 - \frac{4\delta}{\rho}\right)\|y_k - v_k\|^2 \geq 0,$

    and

    $\|u_k - v\|^2 - \|y_k - v\|^2 \geq \left(1 - \frac{4\delta}{\rho}\right)\big(\|y_k - v_k\|^2 + \|v_k - u_k\|^2\big) \geq \left(1 - \frac{4\delta}{\rho}\right)\|v_k - u_k\|^2 \geq 0.$

    From (3.23) and (3.24), we obtain

    $\lim_{k\to\infty}\|y_k - v_k\| = 0,$ (3.25)

    and

    $\lim_{k\to\infty}\|v_k - u_k\| = 0.$ (3.26)

    Moreover, we know that

    $\frac{u_{k_j} - v_{k_j}}{\mu_{k_j}} - \nabla\phi(u_{k_j}) + \nabla\phi(v_{k_j}) \in \partial\psi(v_{k_j}) + \nabla\phi(v_{k_j}) = \partial(\phi + \psi)(v_{k_j}).$

    The uniform continuity of $\nabla\phi$ on bounded subsets of $\mathrm{dom}\,\psi$ (assumption A2) and (3.26) yield

    $\lim_{k\to\infty}\|\nabla\phi(v_k) - \nabla\phi(u_k)\| = 0.$ (3.27)

    It implies, by condition 2), that

    $\left\|\frac{u_{k_j} - v_{k_j}}{\mu_{k_j}} - \nabla\phi(u_{k_j}) + \nabla\phi(v_{k_j})\right\| \leq \frac{1}{\mu_{k_j}}\|u_{k_j} - v_{k_j}\| + \|\nabla\phi(v_{k_j}) - \nabla\phi(u_{k_j})\| \leq \frac{1}{\mu}\|u_{k_j} - v_{k_j}\| + \|\nabla\phi(v_{k_j}) - \nabla\phi(u_{k_j})\|.$

    This together with (3.26) and (3.27) yields

    $\frac{u_{k_j} - v_{k_j}}{\mu_{k_j}} - \nabla\phi(u_{k_j}) + \nabla\phi(v_{k_j}) \to 0 \quad \text{as } j \to \infty.$

    Since $\|v_k - u_k\| \to 0$ and $u_{k_j} \rightharpoonup w$, we also have $v_{k_j} \rightharpoonup w$. By the demiclosedness of $\mathrm{Gph}(\partial(\phi + \psi))$, $0 \in \partial(\phi + \psi)(w)$, and so $w \in S$. It follows from (2.1) that

    $\limsup_{k\to\infty}\langle f(v) - v, u_k - v \rangle = \langle f(v) - v, w - v \rangle = \langle f(v) - P_S f(v), w - P_S f(v) \rangle \leq 0,$

    which implies that $\limsup_{k\to\infty} s_k \leq 0$. By Lemma 3, we obtain

    $\lim_{k\to\infty}\|x_k - v\|^2 = 0.$

    Case 2. Suppose that there exists a subsequence $\{x_{m_j}\}$ of $\{x_k\}$ such that

    $\|x_{m_j} - v\| < \|x_{m_j + 1} - v\|$

    for all $j \in \mathbb{N}$. By Lemma 4, there is a nondecreasing sequence $\{k_\ell\} \subset \mathbb{N}$ such that $\lim_{\ell\to\infty} k_\ell = \infty$ and, for all sufficiently large $\ell \in \mathbb{N}$, the following holds:

    $\|x_{k_\ell} - v\| \leq \|x_{k_\ell + 1} - v\| \quad \text{and} \quad \|x_\ell - v\| \leq \|x_{k_\ell + 1} - v\|.$ (3.28)

    We have from (3.25) and (3.26) that

    $\lim_{\ell\to\infty}\|y_{k_\ell} - v_{k_\ell}\| = 0 \quad \text{and} \quad \lim_{\ell\to\infty}\|v_{k_\ell} - u_{k_\ell}\| = 0.$ (3.29)

    Since $\{u_{k_\ell}\}$ is bounded, there exists a weakly convergent subsequence $\{u_{k_i}\}$ of $\{u_{k_\ell}\}$ such that $u_{k_i} \rightharpoonup w$ for some $w \in H$, and

    $\limsup_{\ell\to\infty}\langle f(v) - v, u_{k_\ell} - v \rangle = \lim_{i\to\infty}\langle f(v) - v, u_{k_i} - v \rangle = \langle f(v) - v, w - v \rangle.$

    The uniform continuity of $\nabla\phi$ on bounded sets and (3.29) imply

    $\lim_{i\to\infty}\|\nabla\phi(v_{k_i}) - \nabla\phi(u_{k_i})\| = 0.$ (3.30)

    Moreover, we know that

    $\frac{u_{k_i} - v_{k_i}}{\mu_{k_i}} - \nabla\phi(u_{k_i}) + \nabla\phi(v_{k_i}) \in \partial\psi(v_{k_i}) + \nabla\phi(v_{k_i}) = \partial(\phi + \psi)(v_{k_i}).$

    It implies, by condition 2), that

    $\left\|\frac{u_{k_i} - v_{k_i}}{\mu_{k_i}} - \nabla\phi(u_{k_i}) + \nabla\phi(v_{k_i})\right\| \leq \frac{1}{\mu_{k_i}}\|u_{k_i} - v_{k_i}\| + \|\nabla\phi(v_{k_i}) - \nabla\phi(u_{k_i})\| \leq \frac{1}{\mu}\|u_{k_i} - v_{k_i}\| + \|\nabla\phi(v_{k_i}) - \nabla\phi(u_{k_i})\|.$

    Using (3.29) and (3.30), we get

    $\frac{u_{k_i} - v_{k_i}}{\mu_{k_i}} - \nabla\phi(u_{k_i}) + \nabla\phi(v_{k_i}) \to 0 \quad \text{as } i \to \infty.$

    By the demiclosedness of $\mathrm{Gph}(\partial(\phi + \psi))$, we obtain $0 \in \partial(\phi + \psi)(w)$ and thus $w \in S$. This implies that

    $\limsup_{\ell\to\infty}\langle f(v) - v, u_{k_\ell} - v \rangle = \langle f(v) - v, w - v \rangle \leq 0.$

    We derive from (3.22) and $\|x_{k_\ell} - v\| \leq \|x_{k_\ell + 1} - v\|$ that

    $\|x_{k_\ell + 1} - v\|^2 \leq (1 - \lambda_{k_\ell}(1 - c))\|x_{k_\ell} - v\|^2 + \lambda_{k_\ell}(1 - c)\left(\frac{2}{1 - c}\langle f(v) - v, u_{k_\ell} - v \rangle + \frac{3\bar{N}\eta_{k_\ell}}{\lambda_{k_\ell}(1 - c)}\|y_{k_\ell} - y_{k_\ell - 1}\|\right)$
    $\leq (1 - \lambda_{k_\ell}(1 - c))\|x_{k_\ell + 1} - v\|^2 + \lambda_{k_\ell}(1 - c)\left(\frac{2}{1 - c}\langle f(v) - v, u_{k_\ell} - v \rangle + \frac{3\bar{N}\eta_{k_\ell}}{\lambda_{k_\ell}(1 - c)}\|y_{k_\ell} - y_{k_\ell - 1}\|\right),$

    which implies

    $\lambda_{k_\ell}(1 - c)\|x_{k_\ell + 1} - v\|^2 \leq \lambda_{k_\ell}(1 - c)\left(\frac{2}{1 - c}\langle f(v) - v, u_{k_\ell} - v \rangle + \frac{3\bar{N}\eta_{k_\ell}}{\lambda_{k_\ell}(1 - c)}\|y_{k_\ell} - y_{k_\ell - 1}\|\right).$

    Consequently,

    $\|x_{k_\ell + 1} - v\|^2 \leq \frac{2}{1 - c}\langle f(v) - v, u_{k_\ell} - v \rangle + \frac{3\bar{N}\eta_{k_\ell}}{\lambda_{k_\ell}(1 - c)}\|y_{k_\ell} - y_{k_\ell - 1}\|.$

    From the above inequality and $\|x_\ell - v\| \leq \|x_{k_\ell + 1} - v\|$, we obtain

    $0 \leq \limsup_{\ell\to\infty}\|x_\ell - v\|^2 \leq \limsup_{\ell\to\infty}\|x_{k_\ell + 1} - v\|^2 \leq 0.$

    Therefore, we can conclude that $x_k \to v$. Finally, we show that $v$ is the solution of problem (1.1). Since $f := I - t\nabla h$, it follows that $f(v) = v - t\nabla h(v)$, which implies

    $0 \leq \langle P_S f(v) - f(v), x - P_S f(v) \rangle = \langle v - f(v), x - v \rangle = \langle v - (v - t\nabla h(v)), x - v \rangle = t\langle \nabla h(v), x - v \rangle,$

    for all $x \in S$. This together with $t > 0$ gives us that $0 \leq \langle \nabla h(v), x - v \rangle$ for all $x \in S$. Hence $v$ is the solution of the outer-level problem (1.1).

    In this section, we present an experiment on image restoration and data classification problems by using our algorithm, and compare the performance of the proposed algorithm with BiG-SAM, iBiG-SAM, and aiBiG-SAM. We apply MATLAB 9.6 (R2019a) to perform all numerical experiments throughout this work. It runs on a MacBook Air 13.3-inch, 2020, with an Apple M1 chip processor and 8-core GPU, configured with 8 GB of RAM.

    In this section, we apply the proposed algorithm to solve true RGB image restoration problems, and compare its performance with BiG-SAM, iBiG-SAM, and aiBiG-SAM. Let $A$ be a blurring operator and $x$ an original image. If $b$ represents the observed image, then the linear image restoration problem is defined by

    $b = Ax + w,$ (4.1)

    where $x \in \mathbb{R}^{n \times 1}$ and $w$ denotes an additive noise. In the traditional way, we apply the least absolute shrinkage and selection operator (LASSO) [23] method to approximate the original image $x$. It is given by

    $\min_x \{\|Ax - b\|_2^2 + \alpha\|x\|_1\},$ (4.2)

    where $\alpha$ denotes a positive regularization parameter, $\|x\|_1 = \sum_{k=1}^{n}|x_k|$, and $\|x\|_2 = \sqrt{\sum_{k=1}^{n}|x_k|^2}$. We see that (4.2) is the inner-level problem (1.2) with $\phi(x) = \|Ax - b\|_2^2$ and $\psi(x) = \alpha\|x\|_1$. When a true RGB image is transformed into a matrix in the LASSO model, the sizes of $A$ and $x$, as well as their entries, affect the computational cost of $Ax$ and $\|x\|_1$. To reduce this cost, we adopt the 2-D fast Fourier transform to convert the true RGB images into matrices instead. If $W$ represents the 2-D fast Fourier transform and $B$ denotes the blurring matrix such that the blurring operator is $A = BW$, then problem (4.2) is transformed into the following problem:

    $\min_x \{\|Ax - b\|_2^2 + \alpha\|Wx\|_1\},$ (4.3)

    where $b \in \mathbb{R}^{m \times n}$ is the observed image of size $m \times n$, and $\alpha > 0$ is a regularization parameter. Therefore, our proposed algorithm can be applied to solve the image restoration problem (4.1) by setting the inner-level problem as $\phi(x) = \|Ax - b\|_2^2$ and $\psi(x) = \alpha\|Wx\|_1$, and choosing the outer-level problem as $h(x) = \frac{1}{2}\|x\|^2$. Next, we select all of the parameters satisfying the convergence theorem of each algorithm, as seen in Table 1.

    Table 1.  Chosen parameters of each algorithm.

    Algorithm | $t$ | $\mu$ | $\alpha$ | $\lambda_k$ | $\gamma_k$ | $\xi_k$ | $\delta$ | $\theta$ | $\sigma$ | $\rho$
    BiG-SAM | 0.01 | $\frac{k}{(k+1)L_\phi}$ | - | $\frac{1}{k+2}$ | - | - | - | - | - | -
    iBiG-SAM | 0.01 | $\frac{k}{(k+1)L_\phi}$ | 3 | $\frac{1}{50k}$ | - | $\frac{10^{50}}{k^2}$ | - | - | - | -
    aiBiG-SAM | 0.01 | $\frac{k}{(k+1)L_\phi}$ | 3 | $\frac{1}{k+2}$ | - | $\frac{\lambda_k}{k^{0.01}}$ | - | - | - | -
    Algorithm 1 | 0.01 | - | - | $\frac{1}{50k}$ | $\frac{t_k - 1}{t_{k+1}}$ | $\frac{10^{50}}{k^2}$ | 0.124 | 0.1 | 0.9 | 0.5


    Also, the Lipschitz constant $L_\phi$ of $\nabla\phi$ for BiG-SAM, iBiG-SAM, and aiBiG-SAM is calculated as the maximum eigenvalue of the matrix $A^T A$. The efficiency of a restored image is measured by the peak signal-to-noise ratio (PSNR) in decibels (dB), which is given by

    $\mathrm{PSNR}(x_k) = 10\log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right),$

    where $\mathrm{MSE} = \frac{1}{K}\|x_k - v\|_2^2$, $K$ denotes the number of image samples, and $v$ indicates the original image. We select the regularization parameter $\alpha = 0.00001$ and consider the original image (Wat Lok Moli) of size $256 \times 256$ pixels from [24]. We employ a Gaussian blur of size $9 \times 9$ with standard deviation $\sigma = 4$ to construct the blurred and noisy images. The original and blurred images are shown in Figure 2. The results of deblurring the image of Wat Lok Moli over 500 iterations are demonstrated in Table 2.

    Table 2.  The values of PSNR at $x_1, x_{10}, x_{50}, x_{100}$, and $x_{500}$.
    The peak signal-to-noise ratio (PSNR)
    Iteration No. BiG-SAM iBiG-SAM aiBiG-SAM Algorithm 1
    1 20.4661 20.5577 20.4661 20.6308
    10 21.2325 21.7491 21.2327 22.9166
    50 22.5011 25.0760 22.5015 26.4285
    100 23.3503 26.5096 23.3508 27.7760
    500 25.3727 30.8838 25.6802 31.4100
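    The PSNR values reported in Table 2 follow directly from the formula above; a minimal sketch (ours, assuming pixel intensities in $[0, 255]$ and $K$ equal to the number of pixels):

```python
import numpy as np

def psnr(xk, v):
    # PSNR in dB between a restored image xk and the original image v;
    # mse = (1/K) * ||xk - v||_2^2 with K the number of pixels.
    mse = np.mean((np.asarray(xk, float) - np.asarray(v, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

    For example, a restoration that differs from the original by one intensity level at every pixel gives $10\log_{10}(255^2) = 20\log_{10}(255) \approx 48.13$ dB.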


    As seen in Table 2, our proposed algorithm (Algorithm 1) gives higher PSNR values than the others, which means that our algorithm has the best image restoration performance among the compared methods. The graph of PSNR for the deblurred images over 500 iterations is shown in Figure 1.

    Figure 1.  The graph of PSNR for Wat Lok Moli.
    Figure 2.  Results for image restoration at the 500th iteration.

    All restored images of Wat Lok Moli for each algorithm at the 500th iteration are shown in Figure 2.

    Machine learning is crucial because it allows computers to learn from data and make decisions or predictions. There are three types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Our work uses supervised learning, based on the extreme learning machine (ELM) [25] and a single-hidden-layer feedforward neural network (SLFN) model. Reinforcement learning, by contrast, is typically used for decision-making problems in which an agent learns to perform actions in an environment to maximize cumulative rewards (see [26,27] for more information); it is not commonly used directly for data classification, which is more traditionally tackled with supervised learning techniques.

    In this work, we aim to use the studied algorithm to solve a binary data classification problem. We focus on classifying the patient datasets of heart disease [28] and breast cancer [29] into classes. The details of the studied datasets are given in Table 3.

    Table 3.  Details of datasets.
    Datasets Samples Attributes Classes
    Heart disease 303 13 2
    Breast cancer 699 11 2

     | Show Table
    DownLoad: CSV

    Here, we accessed the above datasets on June 12, 2022 from https://archive.ics.uci.edu. We first start with the necessary notions for data classification problems and recall the concept of ELM. Suppose $p_k \in \mathbb{R}^n$ is an input datum and $q_k \in \mathbb{R}^m$ is the target. The training set of $N$ samples is given by $S := \{(p_k, q_k) : p_k \in \mathbb{R}^n, q_k \in \mathbb{R}^m, k = 1, 2, \ldots, N\}$. The output of the $i$-th hidden node for a single hidden layer of ELM is

    $h_i(p) = \mathcal{G}(\langle w_i, p \rangle + r_i),$ (4.4)

    where $\mathcal{G}$ is an activation function, $r_i$ is a bias, and $w_i$ is the weight vector connecting the $i$-th hidden node and the input nodes. If $M$ denotes the number of hidden nodes, then ELM for SLFNs gives the output function

    $o_j = \sum_{i=1}^{M} m_i h_i(p_j), \quad j = 1, 2, \ldots, N,$

    where $m_i$ is the weight connecting the $i$-th hidden node and the output node. Thus, the hidden layer output matrix $\mathbf{A}$ is given by

    $\mathbf{A} = \begin{bmatrix} h_1(p_1) & h_2(p_1) & \cdots & h_M(p_1) \\ \vdots & \vdots & \ddots & \vdots \\ h_1(p_N) & h_2(p_N) & \cdots & h_M(p_N) \end{bmatrix}.$

    The main purpose of ELM is to find a weight vector $m = [m_1^T, \ldots, m_M^T]^T$ such that

    $\mathbf{A}m = \mathbf{Q},$ (4.5)

    where $\mathbf{Q} = [q_1^T, \ldots, q_N^T]^T$ is the training target. We observe from (4.5) that $m = \mathbf{A}^{\dagger}\mathbf{Q}$ whenever the Moore–Penrose generalized inverse $\mathbf{A}^{\dagger}$ of $\mathbf{A}$ exists. If $\mathbf{A}^{\dagger}$ does not exist, it may be difficult to find a weight $m$ satisfying (4.5). In order to overcome this situation, we utilize the convex minimization problem (4.2) to find $m$:

    $\min_m \|\mathbf{A}m - \mathbf{Q}\|_2^2 + \beta\|m\|_1,$ (4.6)

    where $\beta$ is the regularization parameter and $\|(m_1, m_2, \ldots, m_p)\|_1 = \sum_{i=1}^{p}|m_i|$. By (4.2), $\phi(m) := \|\mathbf{A}m - \mathbf{Q}\|_2^2$ and $\psi(m) := \beta\|m\|_1$ are the inner-level functions of problem (1.2). To employ the proposed algorithm, BiG-SAM, iBiG-SAM, and aiBiG-SAM for data classification, we choose the outer-level function $h(m) = \frac{1}{2}\|m\|^2$ for problem (1.1). With the datasets from Table 3, we select the sigmoid activation function $\mathcal{G}$ and set the number of hidden nodes $M = 30$. Choose $t_0 = 1$ and $t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$ for all $k \geq 0$. All parameters of each algorithm are chosen as in Table 4.
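    Under the stated assumptions (sigmoid activation $\mathcal{G}$, hidden matrix (4.4), and LASSO formulation (4.6)), the ELM training step can be sketched as follows. This is an illustration, not the authors' MATLAB code: for brevity it solves (4.6) with a plain forward-backward iteration using the step $1/L_\phi$, $L_\phi = 2\|\mathbf{A}\|_2^2$, rather than Algorithm 1:

```python
import numpy as np

def hidden_matrix(P, W, r):
    # A[j, i] = G(<w_i, p_j> + r_i) with sigmoid activation G, cf. (4.4);
    # P is N x n (samples), W is M x n (hidden weights), r is the bias vector.
    return 1.0 / (1.0 + np.exp(-(P @ W.T + r)))

def soft(x, t):
    # prox of t*||.||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elm_weights(A, Q, beta=1e-3, iters=2000):
    # Solve min_m ||A m - Q||_2^2 + beta*||m||_1, cf. (4.6), by plain
    # forward-backward iteration with step mu = 1/L_phi, L_phi = 2*||A||_2^2.
    mu = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    m = np.zeros(A.shape[1])
    for _ in range(iters):
        m = soft(m - mu * 2.0 * A.T @ (A @ m - Q), mu * beta)
    return m
```

    In the special case $\mathbf{A} = I$, the returned weight is the componentwise shrinkage of $\mathbf{Q}$, which matches the closed-form LASSO solution.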

    Table 4.  Chosen parameters of each algorithm.

    Algorithm | $t$ | $\mu$ | $\alpha$ | $\lambda_k$ | $\gamma_k$ | $\xi_k$ | $\delta$ | $\theta$ | $\sigma$ | $\rho$
    BiG-SAM | 0.01 | $\frac{1}{L_{\phi}}$ | - | $\frac{1}{k+2}$ | - | - | - | - | - | -
    iBiG-SAM | 0.01 | $\frac{1}{L_{\phi}}$ | 3 | $\frac{1}{50k}$ | - | $\frac{10^{50}}{k^2}$ | - | - | - | -
    aiBiG-SAM | 0.01 | $\frac{1}{L_{\phi}}$ | 3 | $\frac{1}{k+2}$ | - | $\frac{\lambda_k}{k^{0.01}}$ | - | - | - | -
    Algorithm 1 | 0.01 | - | - | $\frac{1}{50k}$ | $\frac{t_k - 1}{t_{k+1}}$ | $\frac{10^{50}}{k^2}$ | 0.124 | 0.1 | 0.9 | 0.5

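As an illustration, the ELM construction (4.4) and a plain forward-backward (ISTA-type) solver for the inner-level problem (4.6) can be sketched as follows. This is a minimal sketch under stated assumptions: a sigmoid activation and the constant step size \mu = 1/L_\phi with L_\phi = 2\|\textbf{A}\|^2; it is not the accelerated Algorithm 1 itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_matrix(P, W, r):
    """Hidden-layer output matrix A with A[j, i] = G(<w_i, p_j> + r_i)."""
    return sigmoid(P @ W.T + r)

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, Q, beta, n_iter=500):
    """Plain forward-backward iterations for min_m ||A m - Q||_2^2 + beta ||m||_1."""
    L_phi = 2.0 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad phi
    mu = 1.0 / L_phi
    m = np.zeros((A.shape[1], Q.shape[1]))
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ m - Q)           # forward (gradient) step on phi
        m = soft_threshold(m - mu * grad, mu * beta)  # backward (proximal) step on psi
    return m
```

The soft-thresholding operator is the proximal mapping of \beta\|\cdot\|_1, so each iteration is exactly one forward step on \phi followed by one backward step on \psi.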

    Also, for BiG-SAM, iBiG-SAM, and aiBiG-SAM, the Lipschitz constant L_\phi of \nabla \phi can be computed as 2 \|\textbf{A} \|^2 . In order to measure the prediction accuracy, we use the following formula:

    \mbox{Accuracy (Acc)} = \frac{TP+TN}{TP+TN+FP+FN}\times 100,

    where TP is the number of cases correctly identified as patients, TN is the number of cases correctly identified as healthy, FN is the number of cases incorrectly identified as healthy, and FP is the number of cases incorrectly identified as patients. In what follows, Acc Train refers to the accuracy of training on the dataset, while Acc Test indicates the accuracy of testing on the dataset. We present the iteration numbers and training times on the learning model for each algorithm in Table 5.
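The accuracy formula translates directly into code; a small helper, with hypothetical confusion counts for illustration:

```python
def accuracy(tp, tn, fp, fn):
    """Acc = (TP + TN) / (TP + TN + FP + FN) * 100."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: 50 true positives, 40 true negatives,
# 5 false positives, 5 false negatives.
print(accuracy(50, 40, 5, 5))  # 90.0
```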

    Table 5.  The iteration number and training time of each algorithm with the highest accuracy on each dataset.

    | Dataset | Algorithm | Iteration no. | Training time | Acc train | Acc test |
    |---|---|---|---|---|---|
    | Heart Disease | BiG-SAM | 1421 | 0.0207 | 85.24 | 79.57 |
    | | iBiG-SAM | 410 | 0.0069 | 87.14 | 82.80 |
    | | aiBiG-SAM | 1421 | 0.0321 | 85.24 | 79.57 |
    | | Algorithm 1 | 243 | 0.0871 | 87.14 | 82.80 |
    | Breast Cancer | BiG-SAM | 587 | 0.0185 | 95.71 | 99.04 |
    | | iBiG-SAM | 114 | 0.0041 | 96.12 | 99.04 |
    | | aiBiG-SAM | 587 | 0.0191 | 95.71 | 99.04 |
    | | Algorithm 1 | 48 | 0.0428 | 96.12 | 99.04 |


    As seen in Table 5, the training time of Algorithm 1 is not significantly different from that of the other algorithms. However, it needs to compute the parameter \mu_{k} arising from the linesearch technique, a step the other algorithms do not have. Note that, under the linesearch technique, our algorithm has better convergence behavior than the others in terms of the number of iterations. This means that the proposed algorithm provides the best optimal weight compared with the others. To evaluate the performance of each algorithm, we perform 10-fold cross-validation, which splits the data into training sets and testing sets, as seen in Table 6.

    Table 6.  The number of samples in each fold for all datasets.

    | Fold | Train (Heart disease) | Test (Heart disease) | Train (Breast cancer) | Test (Breast cancer) |
    |---|---|---|---|---|
    | Fold 1 | 273 | 30 | 630 | 69 |
    | Fold 2 | 272 | 31 | 629 | 70 |
    | Fold 3 | 272 | 31 | 629 | 70 |
    | Fold 4 | 272 | 31 | 629 | 70 |
    | Fold 5 | 273 | 30 | 629 | 70 |
    | Fold 6 | 273 | 30 | 629 | 70 |
    | Fold 7 | 273 | 30 | 629 | 70 |
    | Fold 8 | 273 | 30 | 629 | 70 |
    | Fold 9 | 273 | 30 | 629 | 70 |
    | Fold 10 | 273 | 30 | 629 | 70 |

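A 10-fold split like the one in Table 6 can be reproduced with a simple index partition. This sketch distributes the remainder samples over the first folds, so the order in which the larger folds appear may differ from Table 6, while the per-fold sizes agree.

```python
def kfold_split(n_samples, k=10):
    """Partition indices 0..n_samples-1 into k (train, test) folds.
    Each index appears in exactly one test fold."""
    base, extra = divmod(n_samples, k)
    idx = list(range(n_samples))
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)   # spread the remainder
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        folds.append((train, test))
        start += size
    return folds

# Heart Disease has 273 + 30 = 303 samples; Breast Cancer has 629 + 70 = 699.
folds = kfold_split(303)
```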

    In addition, to measure the probability of making a correct positive-class classification, we use the precision, defined by

    \begin{equation*} \mbox{Precision (Pre)} = \frac{TP}{TP+FP}. \end{equation*}

    Also, the sensitivity of the model toward identifying the positive class is estimated by

    \begin{equation*} \mbox{Recall (Rec)} = \frac{TP}{TP+FN}. \end{equation*}

    The appraising tool is the average accuracy, which is given by

    \begin{equation*} \mbox{Average Acc} = \frac{100\%}{N}\sum\limits_{i = 1}^{N}\frac{u_i}{v_i}, \end{equation*}

    where N is the number of sets considered during the cross validation ( N = 10 ), u_i is the number of correctly predicted data at fold i , and v_i is the number of all data at fold i .

    Let Err _{M} be the sum of errors in all 10 training sets, Err _{K} be the sum of errors in all 10 testing sets, M be the sum of all data in 10 training sets, and K be the sum of all data in 10 testing sets. Then,

    {\bf{Error}}_{\%} = \frac{{\bf{error}}_{M\%} + {\bf{error}}_{K \%}}{2},

    where {\bf{error}}_{M\%} = \frac{{\bf{Err}}_{M}}{M} \times 100\% and {\bf{error}}_{K \%} = \frac{{\bf{Err}}_{K}}{K} \times 100\% .
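Putting the evaluation metrics together, a minimal sketch (the counts passed in below are hypothetical):

```python
def precision(tp, fp):
    """Pre = TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Rec = TP / (TP + FN)."""
    return tp / (tp + fn)

def error_percent(err_train, n_train, err_test, n_test):
    """Error_% = (error_M% + error_K%) / 2, where the two terms are the
    total training and testing error rates over all 10 folds."""
    return (100.0 * err_train / n_train + 100.0 * err_test / n_test) / 2.0
```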

    We show the performance of each algorithm for patient prediction of heart disease and breast cancer at the 300th iteration in Tables 7 and 8.

    Table 7.  Experiment results in each fold for heart disease at the 300th iteration.

    | Heart disease | Metric | BiG-SAM Train | BiG-SAM Test | iBiG-SAM Train | iBiG-SAM Test | aiBiG-SAM Train | aiBiG-SAM Test | Algorithm 1 Train | Algorithm 1 Test |
    |---|---|---|---|---|---|---|---|---|---|
    | Fold 1 | Pre | 0.79 | 0.88 | 0.82 | 0.94 | 0.79 | 0.88 | 0.83 | 0.87 |
    | | Rec | 0.86 | 0.88 | 0.91 | 0.94 | 0.86 | 0.88 | 0.93 | 0.76 |
    | | Acc | 79.85 | 86.67 | 84.25 | 93.33 | 79.85 | 86.67 | 85.71 | 80.00 |
    | Fold 2 | Pre | 0.79 | 0.78 | 0.82 | 0.82 | 0.79 | 0.78 | 0.84 | 0.83 |
    | | Rec | 0.86 | 0.88 | 0.93 | 0.88 | 0.86 | 0.88 | 0.91 | 0.94 |
    | | Acc | 80.15 | 80.65 | 84.56 | 83.87 | 80.15 | 80.65 | 86.03 | 87.10 |
    | Fold 3 | Pre | 0.79 | 0.78 | 0.82 | 0.78 | 0.79 | 0.78 | 0.84 | 0.81 |
    | | Rec | 0.88 | 0.88 | 0.93 | 0.88 | 0.88 | 0.88 | 0.92 | 0.81 |
    | | Acc | 80.88 | 80.65 | 85.29 | 80.65 | 80.88 | 80.65 | 86.03 | 80.65 |
    | Fold 4 | Pre | 0.81 | 0.74 | 0.82 | 0.79 | 0.81 | 0.74 | 0.84 | 0.74 |
    | | Rec | 0.87 | 0.88 | 0.91 | 0.94 | 0.87 | 0.88 | 0.92 | 0.88 |
    | | Acc | 81.62 | 77.42 | 84.56 | 83.87 | 81.62 | 77.42 | 86.40 | 77.42 |
    | Fold 5 | Pre | 0.79 | 0.77 | 0.82 | 0.81 | 0.79 | 0.77 | 0.84 | 0.81 |
    | | Rec | 0.85 | 1.00 | 0.91 | 1.00 | 0.85 | 1.00 | 0.91 | 1.00 |
    | | Acc | 79.85 | 83.33 | 84.62 | 86.67 | 79.85 | 83.33 | 85.35 | 86.67 |
    | Fold 6 | Pre | 0.79 | 0.82 | 0.82 | 0.80 | 0.79 | 0.82 | 0.86 | 0.74 |
    | | Rec | 0.87 | 0.82 | 0.92 | 0.94 | 0.87 | 0.82 | 0.92 | 0.82 |
    | | Acc | 80.59 | 80.00 | 84.98 | 83.33 | 80.59 | 80.00 | 87.18 | 73.33 |
    | Fold 7 | Pre | 0.78 | 0.84 | 0.82 | 0.76 | 0.78 | 0.84 | 0.83 | 0.94 |
    | | Rec | 0.86 | 0.94 | 0.91 | 0.94 | 0.86 | 0.94 | 0.91 | 0.94 |
    | | Acc | 79.49 | 86.67 | 84.62 | 80.00 | 79.49 | 86.67 | 84.62 | 93.33 |
    | Fold 8 | Pre | 0.81 | 0.71 | 0.83 | 0.76 | 0.81 | 0.71 | 0.83 | 0.79 |
    | | Rec | 0.87 | 0.71 | 0.93 | 0.76 | 0.87 | 0.71 | 0.91 | 0.88 |
    | | Acc | 82.05 | 66.67 | 85.71 | 73.33 | 82.05 | 66.67 | 85.35 | 80.00 |
    | Fold 9 | Pre | 0.81 | 0.70 | 0.83 | 0.75 | 0.81 | 0.70 | 0.83 | 0.83 |
    | | Rec | 0.86 | 0.82 | 0.91 | 0.88 | 0.86 | 0.82 | 0.91 | 0.88 |
    | | Acc | 81.32 | 70.00 | 84.98 | 76.67 | 81.32 | 70.00 | 85.35 | 83.33 |
    | Fold 10 | Pre | 0.80 | 0.83 | 0.82 | 0.83 | 0.80 | 0.83 | 0.82 | 0.84 |
    | | Rec | 0.86 | 0.88 | 0.92 | 0.88 | 0.86 | 0.88 | 0.92 | 0.94 |
    | | Acc | 80.59 | 83.33 | 84.98 | 83.33 | 80.59 | 83.33 | 84.98 | 86.67 |
    | Average | Pre | 0.80 | 0.79 | 0.82 | 0.81 | 0.80 | 0.79 | 0.84 | 0.82 |
    | | Rec | 0.87 | 0.87 | 0.92 | 0.90 | 0.87 | 0.87 | 0.91 | 0.89 |
    | | Acc | 80.64 | 79.54 | 84.86 | 82.51 | 80.64 | 79.54 | 85.70 | 82.85 |

    Error _{\%} (over both training and testing sets): BiG-SAM 19.91, iBiG-SAM 16.32, aiBiG-SAM 19.91, Algorithm 1 15.73.

    Table 8.  Experiment results in each fold for breast cancer at the 300th iteration.

    | Breast cancer | Metric | BiG-SAM Train | BiG-SAM Test | iBiG-SAM Train | iBiG-SAM Test | aiBiG-SAM Train | aiBiG-SAM Test | Algorithm 1 Train | Algorithm 1 Test |
    |---|---|---|---|---|---|---|---|---|---|
    | Fold 1 | Pre | 0.97 | 0.97 | 0.99 | 0.97 | 0.97 | 0.97 | 0.99 | 1.00 |
    | | Rec | 0.98 | 0.89 | 0.98 | 0.89 | 0.98 | 0.89 | 0.98 | 0.89 |
    | | Acc | 96.35 | 91.30 | 97.62 | 91.30 | 96.35 | 91.30 | 97.62 | 92.75 |
    | Fold 2 | Pre | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 |
    | | Rec | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
    | | Acc | 96.50 | 98.57 | 96.50 | 98.57 | 96.50 | 98.57 | 96.66 | 98.57 |
    | Fold 3 | Pre | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 |
    | | Rec | 0.97 | 0.98 | 0.97 | 0.98 | 0.97 | 0.98 | 0.97 | 0.98 |
    | | Acc | 96.34 | 98.57 | 96.18 | 98.57 | 96.34 | 98.57 | 96.34 | 98.57 |
    | Fold 4 | Pre | 0.97 | 0.94 | 0.96 | 0.96 | 0.97 | 0.94 | 0.97 | 0.96 |
    | | Rec | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 |
    | | Acc | 96.03 | 95.71 | 95.87 | 97.14 | 96.03 | 95.71 | 96.50 | 97.14 |
    | Fold 5 | Pre | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 | 0.98 |
    | | Rec | 0.98 | 1.00 | 0.97 | 1.00 | 0.98 | 1.00 | 0.97 | 1.00 |
    | | Acc | 96.82 | 98.57 | 96.66 | 98.57 | 96.82 | 98.57 | 97.14 | 98.57 |
    | Fold 6 | Pre | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
    | | Rec | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.97 | 0.98 |
    | | Acc | 96.03 | 97.14 | 96.82 | 97.14 | 96.03 | 97.14 | 96.66 | 97.14 |
    | Fold 7 | Pre | 0.96 | 0.98 | 0.98 | 1.00 | 0.96 | 0.98 | 0.98 | 1.00 |
    | | Rec | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 |
    | | Acc | 96.03 | 97.14 | 96.66 | 98.57 | 96.03 | 97.14 | 96.82 | 98.57 |
    | Fold 8 | Pre | 0.98 | 0.98 | 0.99 | 1.00 | 0.98 | 0.98 | 0.99 | 1.00 |
    | | Rec | 0.97 | 0.96 | 0.97 | 0.96 | 0.97 | 0.96 | 0.97 | 0.96 |
    | | Acc | 96.82 | 95.71 | 97.30 | 97.14 | 96.82 | 95.71 | 97.62 | 97.14 |
    | Fold 9 | Pre | 0.98 | 0.94 | 0.98 | 0.98 | 0.98 | 0.94 | 0.98 | 0.98 |
    | | Rec | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 | 0.97 | 1.00 |
    | | Acc | 96.50 | 95.71 | 96.50 | 98.57 | 96.50 | 95.71 | 96.98 | 98.57 |
    | Fold 10 | Pre | 0.96 | 0.98 | 0.97 | 0.98 | 0.96 | 0.98 | 0.98 | 0.98 |
    | | Rec | 0.97 | 1.00 | 0.98 | 0.98 | 0.97 | 1.00 | 0.97 | 0.98 |
    | | Acc | 95.87 | 98.55 | 96.34 | 97.10 | 95.87 | 98.55 | 96.82 | 95.71 |
    | Average | Pre | 0.97 | 0.97 | 0.97 | 0.98 | 0.97 | 0.97 | 0.98 | 0.99 |
    | | Rec | 0.97 | 0.98 | 0.97 | 0.97 | 0.97 | 0.98 | 0.97 | 0.97 |
    | | Acc | 96.33 | 96.70 | 96.65 | 97.27 | 96.33 | 96.70 | 96.92 | 97.41 |

    Error _{\%} (over both training and testing sets): BiG-SAM 3.55, iBiG-SAM 3.11, aiBiG-SAM 3.55, Algorithm 1 2.90.


    According to Tables 7 and 8, Algorithm 1 gives the best average accuracy on the training and testing datasets compared with BiG-SAM, iBiG-SAM, and aiBiG-SAM. We also see that our algorithm provides higher recall and precision for the diagnosis of heart disease and breast cancer. Furthermore, the proposed algorithm has the lowest percent error in prediction.

    Recently, various algorithms have been proposed for solving the convex bilevel optimization problems (1.1) and (1.2). These methods require Lipschitz continuity of the gradient of the objective function of problem (1.2). To relax this criterion, the linesearch technique is applied. In this work, we proposed a novel accelerated algorithm employing both linesearch and inertial techniques for solving convex bilevel optimization problems (1.1) and (1.2). The convergence theorem of the proposed algorithm was analyzed under suitable conditions. Furthermore, we applied our algorithm to solve image restoration and data classification problems. According to our experiments, the proposed algorithm is more efficient at image restoration and data classification than the others.

    It is worth mentioning that, in real-world applications, if we appropriately choose the objective function of the outer-level problem (1.1), our algorithm can provide more benefit and accuracy for the specific objective of data classification. Note that we use \frac{1}{2}\|x\|^2 as the outer-level objective function, so our solution is the minimum-norm solution. In order to improve prediction accuracy, future work calls for new mathematical models and deep learning algorithms. Very recently, deep extreme learning machines have emerged as appropriate models for improving prediction accuracy; see [30,31]. However, deep extreme learning algorithms are also challenging to study and discuss. Moreover, we would like to employ our method for the prediction of noncommunicable diseases using patient data from the Sriphat Medical Center, Faculty of Medicine, Chiang Mai University.

    Adisak Hanjing: formal analysis, investigation, resources, methodology, writing-review & editing, validation, data curation, and funding acquisition; Panadda Thongpaen: formal analysis, investigation, writing-original draft, software, visualization, data curation; Suthep Suantai: conceptualization, supervision, project administration, methodology, validation, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was partially supported by Chiang Mai University and the Fundamental Fund 2024 (FF030/2567), Chiang Mai University. The first author was supported by the Science Research and Innovation Fund, agreement no. FF67/P1-012. The authors would also like to thank the Rajamangala University of Technology Isan for partial financial support.

    All authors declare no conflicts of interest in this paper.

    In this section, we present the specific details of the algorithms related to our work. These algorithms were proposed for solving convex bilevel optimization problems, as follows:

     

    Algorithm 2 BiG-SAM: Bilevel gradient sequential averaging method [4].
    1: Initialization Step: Select the sequence \{ \lambda_k \} \subset (0, 1] corresponding to criteria assumed in [7], and take arbitrary x_1 \in \mathbb{R}^n . Consider the step sizes \mu \in \left(0, \frac{1}{L_{\phi}} \right] and the parameter t \in \left(0, \frac{2}{L_{h} + s} \right] .
    2: Iterative Step: For all k \geq 1 , set y_k = \mbox{prox}_{\mu \psi} \left(I - \mu \nabla \phi \right) (x_k) and define
    \begin{align*} u_k & = (I- t \nabla h) (x_k) \\ x_{k+1} & = \lambda_k u_k +(1- \lambda_k) y_k. \end{align*}
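For concreteness, the BiG-SAM iteration can be sketched in a few lines of code. The toy inner problem at the bottom (phi(x) = ||x - a||^2 with psi = 0 and h(x) = ||x||^2/2) is an assumption for illustration only; any gradients and proximal mappings with the stated properties can be plugged in.

```python
import numpy as np

def big_sam(grad_phi, prox_psi, grad_h, x1, mu, t, n_iter=200):
    """BiG-SAM (Algorithm 2): y_k = prox_{mu psi}(x_k - mu grad_phi(x_k)),
    u_k = x_k - t grad_h(x_k), x_{k+1} = lam_k u_k + (1 - lam_k) y_k,
    with lam_k = 1/(k+2) as in Table 4."""
    x = np.asarray(x1, dtype=float)
    for k in range(1, n_iter + 1):
        y = prox_psi(x - mu * grad_phi(x), mu)   # proximal step on the inner problem
        u = x - t * grad_h(x)                    # gradient step on the outer problem
        lam = 1.0 / (k + 2)
        x = lam * u + (1.0 - lam) * y            # sequential averaging
    return x

# Toy problem: the unique inner minimizer is x = a, and BiG-SAM drives x_k there.
a = np.array([1.0, -2.0])
x = big_sam(grad_phi=lambda x: 2.0 * (x - a),    # phi(x) = ||x - a||^2
            prox_psi=lambda v, mu: v,            # psi = 0
            grad_h=lambda x: x,                  # h(x) = ||x||^2 / 2
            x1=np.zeros(2), mu=0.5, t=0.5)
```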

     

    Algorithm 3 iBiG-SAM: Inertial with bilevel gradient sequential averaging method.
    1: Initialization Step: Select the sequence \{ \lambda_k \} \subset (0, 1) , and take arbitrary x_1, x_0 \in \mathbb{R}^n . Consider the step sizes \mu \in \left(0, \frac{2}{L_{\phi}} \right) , the parameter t \in \left(0, \frac{2}{L_h + s} \right] , and \alpha \geq 3 .
    2: Iterative Step: For all k \geq 1 , set z_k := x_k + \eta_{k} (x_k -x_{k-1}) , where \eta_{k} \in [0, \bar{\eta_k } ] and \bar{\eta_k } is given by
    \begin{equation*} \bar{\eta_k } = \begin{cases} \min \left\{ \frac{k}{k+\alpha-1}, \frac{\xi_k }{ \| x_k - x_{k-1} \|} \right\} & \mbox{if} \ \ x_k \neq x_{k - 1}, \\ \frac{k}{k+\alpha-1} & \mbox{otherwise}, \end{cases} \end{equation*}
    and define
    \begin{align*} y_k & = \mbox{prox}_{ \mu \psi} \left(I - \mu \nabla \phi \right)(z_k), \\ u_k & = (I- t \nabla h) (z_k) \\ x_{k+1} & = \lambda_k u_k +(1-\lambda_k) y_k. \end{align*}
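The inertial bound \bar{\eta_k} used by iBiG-SAM and aiBiG-SAM can be written as a small helper; a sketch assuming the Euclidean norm:

```python
import numpy as np

def eta_bar(k, x_k, x_prev, xi_k, alpha=3.0):
    """Upper bound for the inertial parameter eta_k:
    min{k/(k+alpha-1), xi_k/||x_k - x_{k-1}||} when the iterates differ,
    and k/(k+alpha-1) otherwise."""
    base = k / (k + alpha - 1.0)
    diff = np.linalg.norm(np.asarray(x_k) - np.asarray(x_prev))
    if diff == 0.0:
        return base
    return min(base, xi_k / diff)
```

The bound keeps the inertial term summable: when consecutive iterates are far apart, the extrapolation weight is throttled by xi_k / ||x_k - x_{k-1}||.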

     

    Algorithm 4 aiBiG-SAM: The alternated inertial bilevel gradient sequential averaging method.
    1: Initialization Step: Select the sequence \{ \lambda_k \} \subset (0, 1) corresponding to criteria assumed in [5], and take arbitrary x_1, x_0 \in H . Consider the step sizes \mu \in \left(0, \frac{2}{L_{\phi}} \right) , the parameter t \in \left(0, \frac{2}{L_{h} + s} \right] , and \alpha \geq 3.
    2: Iterative Step: For k \geq 1, if k is odd, evaluate
    z_k := x_k + \eta_{k} (x_k -x_{k-1}),
    where 0 \leq \vert \eta_{k} \vert \leq \bar{\eta_k } , and \bar{\eta_k } is given by
    \bar{\eta_k } := \begin{cases} \min \left\{ \frac{k}{k+\alpha-1}, \frac{\xi_k }{ \| x_k - x_{k-1} \|} \right\} & \mbox{if} \ \ x_k \neq x_{k - 1}, \\ \frac{k}{k+\alpha-1} & \mbox{if} \ \ x_{k} = x_{k-1}. \end{cases}
    When k is even, set z_k := x_k . After that, define
    \begin{align*} y_k & = \mbox{prox}_{ \mu \psi} \left(I - \mu \nabla \phi \right)(z_k), \\ u_k & = (I- t \nabla h) (z_k) \\ x_{k+1} & = \lambda_k u_k +(1-\lambda_k) y_k. \end{align*}

    Next, the details of the linesearch technique related to this work are provided as follows:

     

    Algorithm 5 Linesearch 1 ( x, \sigma, \theta, \delta ).
    1: Initialization Step: Take arbitrary point x \in \mbox{dom} \ \psi , and set L(x, \mu) = \mbox{prox}_{\mu \psi}(x- \mu \nabla \phi (x)) .
    2: Choose \theta \in (0, 1) and \delta \in \left(0, \frac{1}{2}\right).
    3: Computation Step: Select \sigma > 0 and define the first value \mu = \sigma.
    4: while
    \begin{align*} \mu \| \nabla \phi (L(x, \mu)) - \nabla \phi (x) \| & > \delta \|L(x, \mu) -x \| \end{align*}
    do
    5: \mu = \theta \mu,
    6: L(x, \mu)= L(x, \theta \mu) .
    7: end while
    8: Output \mu.
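Linesearch 1 is a plain backtracking loop; a sketch (the quadratic test at the bottom of the usage note is a hypothetical example, not taken from the paper):

```python
import numpy as np

def linesearch1(x, sigma, theta, delta, grad_phi, prox_psi):
    """Linesearch 1: starting from mu = sigma, shrink mu by the factor theta
    until mu * ||grad_phi(L(x,mu)) - grad_phi(x)|| <= delta * ||L(x,mu) - x||,
    where L(x, mu) = prox_{mu psi}(x - mu * grad_phi(x))."""
    mu = sigma
    L = prox_psi(x - mu * grad_phi(x), mu)
    while mu * np.linalg.norm(grad_phi(L) - grad_phi(x)) > delta * np.linalg.norm(L - x):
        mu *= theta                                  # backtrack
        L = prox_psi(x - mu * grad_phi(x), mu)       # recompute L(x, theta*mu)
    return mu
```

For phi(x) = ||x||^2 (so grad_phi(x) = 2x and psi = 0), the exit condition reduces to 2*mu <= delta; with sigma = 1, theta = 0.5, and delta = 0.124 (the values in Table 4), the loop stops at mu = 0.03125.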

     

    Algorithm 6 Linesearch 2 ( x, \sigma, \theta, \delta ).
    1: Initialization Step: Take arbitrary point x \in \mbox{dom}\psi , and set L(x, \mu) = \mbox{prox}_{\mu \psi}(x- \mu \nabla \phi (x)) and S(x, \mu) = \mbox{prox}_{\mu \psi}(L(x, \mu)- \mu \nabla \phi (L(x, \mu))).
    2: Choose \theta \in (0, 1), \rho \in \left(0, \frac{1}{2}\right] , and \delta \in \left(0, \frac{\rho}{8}\right).
    3: Computation Step: Select \sigma > 0 and define the first value \mu = \sigma.
    4: while
    \begin{align*} \mu \Big((1-\rho) \| \nabla \phi (S(x, \mu)) & - \nabla \phi (L(x, \mu)) \| + \rho \| \nabla \phi (L(x, \mu)) - \nabla \phi (x) \| \Big) \\ & > \delta \Big(\|S(x, \mu)- L(x, \mu)\|+ \|L(x, \mu) -x \| \Big) \end{align*}
    do
    5: \mu = \theta \mu,
    6: L(x, \mu)= L(x, \theta \mu), \; S(x, \mu)= S(x, \theta \mu) .
    7: end while
    8: Output \mu.

     

    Algorithm 7 FBIL: The forward-backward iterative method with the inertial technical term and linesearch technique.
    1: Initialization Step: Take arbitrary points x_1= y_0 \in \mbox{dom}\psi .
    2: For k \geq 1 , calculate \mu_{k}:= Linesearch 1 ( x_k, \sigma, \theta, \delta ), and define
    \begin{align*} z_k & = \mbox{prox}_{\mu_k \psi}(x_k- \mu_k \nabla \phi (x_k)), \\ y_k & = \mbox{prox}_{\mu_k \psi}(z_k - \mu_k \nabla \phi (z_k)), \\ x_{k+1} & =\mbox{P}_{\mbox{dom} \psi}\left(y_k + \eta_k (y_k - y_{k -1}) \right), \end{align*}
    where P _{\mbox{dom} \psi} is a metric projection mapping and \eta_k \geq 0 .
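FBIL can be sketched as below. The constant stepsize stands in for the Linesearch 1 output, and the projection P_{dom psi} is the identity since dom psi = R^n in this toy setting; both are assumptions for illustration.

```python
import numpy as np

def fbil(x1, grad_phi, prox_psi, stepsize, eta, n_iter=100):
    """FBIL (Algorithm 7): two forward-backward steps per iteration,
    followed by an inertial extrapolation; the projection onto dom(psi)
    is taken to be the identity here (dom psi = R^n)."""
    x = y_prev = np.asarray(x1, dtype=float)
    for k in range(1, n_iter + 1):
        mu = stepsize(x)                       # stands in for Linesearch 1
        z = prox_psi(x - mu * grad_phi(x), mu) # first forward-backward step
        y = prox_psi(z - mu * grad_phi(z), mu) # second forward-backward step
        x = y + eta(k) * (y - y_prev)          # inertial step
        y_prev = y
    return x

# Toy run: phi(x) = ||x - a||^2 with psi = 0 converges to a.
a = np.array([3.0, -1.0])
x = fbil(np.zeros(2),
         grad_phi=lambda x: 2.0 * (x - a),
         prox_psi=lambda v, mu: v,
         stepsize=lambda x: 0.25,
         eta=lambda k: (k - 1) / (k + 2))      # a Nesterov-type inertial choice
```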



    [1] G. Rasool, T. Zhang, A. Shafiq, Second grade nanofluidic fow past a convectively heated vertical Riga plate, Phys. Scr., 12 (2019), 125212. https://doi.org/10.1088/1402-4896/ab3990 doi: 10.1088/1402-4896/ab3990
    [2] T. Abbas, M. Ayub, M. M. Bhatti, M. M. Rashidi, M. E. Ali, Entropy generation on nanofuid fow through a horizontal Riga-plate, Entropy, 18 (2016), 223. https://doi.org/10.3390/e18060223 doi: 10.3390/e18060223
    [3] A. B. Tsinober, A. G. Shtern, Possibility of increasing the flow stability in a boundary layer using crossed electric and magnetic fields, Magnetohydrodynamics, 3 (1967), 103–105.
    [4] S. Abdal, I. Siddique, A. S. Alshomrani, F. Jarad, I. S. U. Din, S. Afzal, Significance of chemical reaction with activation energy for Riga wedge flow of tangent hyperbolic nanofluid in existence of heat source, Case Stud. Therm. Eng., 28 (2021), 101542. https://doi.org/10.1016/j.csite.2021.101542 doi: 10.1016/j.csite.2021.101542
    [5] K. Gangadhar, M. A. Kumari, A. J. Chamkha, EMHD flow of radiative second-grade nanofluid over a Riga plate due to convective heating: Revised Buongiorno's nanofluid model, Arab. J. Sci. Eng., 2021, 1–11. https://doi.org/10.1007/s13369-021-06092-7 doi: 10.1007/s13369-021-06092-7
    [6] M. I. Khan, F. Alzahrani, Dynamics of viscoelastic fluid conveying nanoparticles over a wedge when bioconvection and melting process are significant, Int. Commun. Heat Mass, 128 (2021), 105604. https://doi.org/10.1016/j.icheatmasstransfer.2021.105604 doi: 10.1016/j.icheatmasstransfer.2021.105604
    [7] D. Vieru, I. Siddique, M. Kamran, C. Fetecau, Energetic balance for the flow of a second-grade fluid due to a plate subject to shear stress, Comput. Math. Appl., 4 (2008), 1128–1137. https://doi.org/10.1016/j.camwa.2008.02.013 doi: 10.1016/j.camwa.2008.02.013
    [8] A. Mahmood, C. Fetecau, I. Siddique, Exact solutions for some unsteady flows of generalized second grade fluids in cylindrical domains, J. Prim. Res. Math., 4 (2008), 171–180. Available from: http://www.sms.edu.pk/jprm/media/pdf/jprm/volume_04/jprm10_4.pdf
    [9] M. Ramzan, M. Bilal, Time-dependent MHD nano-second grade fluid flow induced by a permeable vertical sheet with mixed convection and thermal radiation, PLoS One, 10 (2015). https://doi.org/10.1371/journal.pone.0124929 doi: 10.1371/journal.pone.0124929
    [10] M. Ramzan, M. Bilal, U. Farooq, J. D. Chung, Mixed convective radiative flow of second grade nanofluid with convective boundary conditions: an optimal solution, Res. Phys., 6 (2016), 796–804. https://doi.org/10.1016/j.rinp.2016.10.011 doi: 10.1016/j.rinp.2016.10.011
    [11] S. K. Rawat, H. Upreti, M. Kumar, Comparative study of mixed convective MHD Cu-water nanofluid flow over a cone and wedge using modified Buongiorno's model in presence of thermal radiation and chemical reaction via Cattaneo-Christov double diffusion model, J. Appl. Comput. Mech., 2020. Available from: https://jacm.scu.ac.ir/article_15395_9ede39b5e33cc127967e197282afed32.
    [12] S. Rajput, A. K. Verma, K. Bhattacharyya, A. J. Chamkha, Unsteady nonlinear mixed convective flow of nanofluid over a wedge: Buongiorno model, Waves Random Complex, 2021, 1–15. https://doi.org/10.1080/17455030.2021.1987586 doi: 10.1080/17455030.2021.1987586
    [13] A. Mishra, M. Kumar, Numerical analysis of MHD nanofluid flow over a wedge, including effects of viscous dissipation and heat generation/absorption, using Buongiorno model, Heat Transfer, 8 (2021), 8453–8474. https://doi.org/10.1002/htj.22284 doi: 10.1002/htj.22284
    [14] R. Garia, S. K. Rawat, M. Kumar, M. Yaseen, Hybrid nanofluid flow over two different geometries with Cattaneo-Christov heat flux model and heat generation: A model with correlation coefficient and probable error, Chinese J. Phys., 74 (2021), 421–439. https://doi.org/10.1016/j.cjph.2021.10.030 doi: 10.1016/j.cjph.2021.10.030
    [15] I. Siddique, M. Nadeem, J. Awrejcewicz, W. Pawłowski, Soret and Dufour effects on unsteady MHD second-grade nanofluid flow across an exponentially stretching surface, Sci Rep., 12 (2022), 11811. https://www.nature.com/articles/s41598-022-16173-8
    [16] S. U. Choi, J. A. Eastman, Enhancing thermal conductivity of fluids with nanoparticles (No. ANL/MSD/CP-84938; CONF-951135-29), Argonne National Lab., IL (United States), 1995. Available from: https://ecotert.com/pdf/196525_From_unt-edu.pdf
    [17] S. Suresh, K. P. Venkitaraj, P. Selvakumar, Effect of Al2O3-Cu/water hybrid nanofluid in heat transfer, Exp. Therm. Fluid Sci., 38 (2012), 54–60. https://doi.org/10.1016/j.expthermflusci.2011.11.007 doi: 10.1016/j.expthermflusci.2011.11.007
    [18] L. S. Sundar, A. C. Sousa, M. K. Singh, Heat transfer enhancement of low volume concentration of carbon nanotube-Fe3O4/water hybrid nanofluids in a tube with twisted tape inserts under turbulent flow, J. Therm. Sci. Eng. Appl., 7 (2015), 021015. https://doi.org/10.1115/1.4029622 doi: 10.1115/1.4029622
    [19] S. Nadeem, N. Abbas, A. U. Khan, Characteristics of three dimensional stagnation point flow of Hybrid nanofluid past a circular cylinder, Results Phys., 8 (2018), 829–835. https://doi.org/10.1016/j.rinp.2018.01.024 doi: 10.1016/j.rinp.2018.01.024
    [20] S. Nadeem, N. Abbas, On both MHD and slip effect in micropolar hybrid nanofluid past a circular cylinder under stagnation point region, Can. J. Phys., 97 (2018), 392–399. https://doi.org/10.1139/cjp-2018-017 doi: 10.1139/cjp-2018-017
    [21] S. Yan, D. Toghraie, L. A. Abdulkareem, A. Alizadeh, P. Barnoon, M. Afrand, The rheological behavior of MWCNTs-ZnO/water-ethylene glycol hybrid non-Newtonian nanofluid by using of an experimental investigation, J. Mater. Res. Technol., 9 (2020), 8401–8406. https://doi.org/10.1016/j.jmrt.2020.05.018 doi: 10.1016/j.jmrt.2020.05.018
    [22] A. U. Rehman, R. Mehmood, S. Nadeem, N. S. Akbar, S. S. Motsa, Effects of single and multi-walled carbon nano tubes on water and engine oil based rotating fluids with internal heating, Adv. Powder Technol., 28 (2017), 1991–2002. https://doi.org/10.1016/j.apt.2017.03.017 doi: 10.1016/j.apt.2017.03.017
    [23] N. A. L. Aladdin, N. Bachok, I. Pop, Cu-Al2O3/water hybrid nanofluid flow over a permeable moving surface in presence of hydromagnetic and suction effects, Alex. Eng. J., 59 (2020), 657–666. https://doi.org/10.1016/j.aej.2020.01.028 doi: 10.1016/j.aej.2020.01.028
    [24] N. S. Anuar, N. Bachok, N. M. Arifin, H. Rosali, Analysis of Al2O3-Cu nanofluid flow behaviour over a permeable moving wedge with convective surface boundary conditions, J. King Saud Univ. Sci., 33 (2021), 101370. https://doi.org/10.1016/j.jksus.2021.101370 doi: 10.1016/j.jksus.2021.101370
    [25] M. Nadeem, I. Siddique, J. Awrejcewicz, M. Bilal, Numerical analysis of a second-grade fuzzy hybrid nanofluid flow and heat transfer over a permeable stretching/shrinking sheet, Sci. Rep.-UK, 12 (2022), 1–17. https://www.nature.com/articles/s41598-022-05393-7
    [26] N. Joshi, A. K. Pandey, H. Upreti, M. Kumar, Mixed convection flow of magnetic hybrid nanofluid over a bidirectional porous surface with internal heat generation and a higher‐order chemical reaction, Heat Transfer, 50 (2021), 3661–3682. https://doi.org/10.1002/htj.22046 doi: 10.1002/htj.22046
    [27] N. Joshi, H. Upreti, A. K. Pandey, M. Kumar, Heat and mass transfer assessment of magnetic hybrid nanofluid flow via bidirectional porous surface with volumetric heat generation, Int. J. Appl. Comput. Math., 7 (2021), 1–17. Available from: https://link.springer.com/article/10.1007/s40819-021-00999-3
    [28] T. Watanabe, Thermal boundary layers over a wedge with uniform suction or injection in forced flow, Acta Mech., 83 (1990), 119–126. Available from: https://link.springer.com/article/10.1007/BF01172973
    [29] N. A. Yacob, A. Ishak, I. Pop, Falkner-Skan problem for a static or moving wedge in nanofluids, Int. J. Therm. Sci., 50 (2011), 133–139. https://doi.org/10.1016/j.ijthermalsci.2010.10.008 doi: 10.1016/j.ijthermalsci.2010.10.008
    [30] H. Upreti, A. K. Pandey, M. Kumar, Assessment of entropy generation and heat transfer in three-dimensional hybrid nanofluids flow due to convective surface and base fluids, J. Porous Media, 24 (2021). https://doi.org/10.1615/JPorMedia.2021036038 doi: 10.1615/JPorMedia.2021036038
    [31] A. Mishra, M. Kumar, Velocity and thermal slip effects on MHD nanofluid flow past a stretching cylinder with viscous dissipation and Joule heating, SN Appl. Sci., 2 (2020), 1–13. Available from: https://link.springer.com/article/10.1007/s42452-020-3156-7.
    [32] A. Mishra, H. Upreti, A comparative study of Ag-MgO/water and Fe3O4-CoFe2O4/EG-water hybrid nanofluid flow over a curved surface with chemical reaction using Buongiorno model, Partial Differ. Eq. Appl. Math., 5 (2022), 100322, 2666–8181. https://doi.org/10.1016/j.padiff.2022.100322 doi: 10.1016/j.padiff.2022.100322
    [33] M. Yaseen, S. K. Rawat, M. Kumar, Cattaneo-Christov heat flux model in Darcy-Forchheimer radiative flow of MoS2-SiO2/kerosene oil between two parallel rotating disks, J. Therm. Anal. Calorim., 2022, 1–23. Available from: https://link.springer.com/article/10.1007/s10973-022-11248-0.
    [34] S. K. Rawat, M. Kumar, Cattaneo-Christov heat flux model in flow of copper water nanofluid through a stretching/shrinking sheet on stagnation point in presence of heat generation/absorption and activation energy, Int. J. Appl. Comput. Math., 6 (2020), 1–26. Available from: https://link.springer.com/article/10.1007/s40819-020-00865-8.
    [35] A. Gailitis, O. Lielausis, On possibility to reduce the hydrodynamics resistance of a plate in an electrolyte, Appl. Magn. Rep. Phys. Inst. Riga, 12 (1961), 143–146. Available from: https://scirp.org/reference/ReferencesPapers.aspx?ReferenceID=1927365.
    [36] H. T. Basha, R. Sivaraj, I. L. Animasaun, Stability analysis on Ag-MgO/water hybrid nanofluid flow over an extending/contracting Riga wedge and stagnation point, CTS, 12 (2020), 6. https://doi.org/10.1615/ComputThermalScien.2020034373 doi: 10.1615/ComputThermalScien.2020034373
    [37] G. Rasool, A. Wakif, Numerical spectral examination of EMHD mixed convection flow of second-grade nanofluid towards a vertical Riga plate used an advanced version of the revised Buongiorno's nanofluid model, J. Therm. Anal. Calorim., 143 (2021), 2379–2393. Available from: https://link.springer.com/article/10.1007/s10973-020-09865-8.
    [38] G. K. Ramesh, G. S. Roopa, B. J. Gireesha, S. A. Shehzad, F. M. Abbasi, An electro-magneto-hydrodynamic flow Maxwell nanoliquid past a Riga plate: A numerical study, J. Brazilian Soc. Mech. Sci. Eng., 39 (2017), 4547–4554. Available from: https://link.springer.com/article/10.1007/s40430-017-0900-z.
    [39] A. Shafiq, I. Zari, I. Khan, T. S. Khan, A. H. Seikh, E. S. M. Sherif, Marangoni driven boundary layer flow of carbon nanotubes toward a Riga plate, Front. Phys., 7 (2020), 1–11. https://doi.org/10.3389/fphy.2019.00215 doi: 10.3389/fphy.2019.00215
    [40] N. Ahmed, Adnan, U. Khan, S. T. Mohyud-Din, Influence of thermal radiation and viscous dissipation on squeezed flow of water between Riga plates saturated with carbon nanotubes, Colloid. Surface. A., 522 (2017), 389–398. https://doi.org/10.1016/j.colsurfa.2017.02.083 doi: 10.1016/j.colsurfa.2017.02.083
    [41] M. Ayub, T. Abbas, M. M. Bhatti, Inspiration of slip effects on EMHD nanofluid flow through a horizontal Riga plate, Eur. Phys. J. Plus, 131 (2016), 1–9. https://doi.org/10.1140/epjp/i2016-16193-4 doi: 10.1140/epjp/i2016-16193-4
    [42] A. Zaib, R. U. Haq, A. J. Chamkha, M. M. Rashidi, Impact of partial slip on mixed convective flow towards a Riga plate comprising micropolar TiO2-kerosene/water nanoparticles, Int. J. Numer. Meth. H., 29 (2018), 1647–1662. https://doi.org/10.1108/HFF-06-2018-0258 doi: 10.1108/HFF-06-2018-0258
    [43] T. Abbas, M. Ayub, M. M. Bhatti, M. M. Rashidi, M. E. S. Ali, Entropy generation on nanofluid flow through a horizontal Riga plate, Entropy, 18 (2016), 223. https://doi.org/10.3390/e18060223 doi: 10.3390/e18060223
    [44] M. M. Bhatti, T. Abbas, M. M. Rashidi, Effects of thermal radiation and electro magneto hydrodynamics on viscous nanofluid through a Riga plate, Multidiscip. Model. Ma., 12 (2016), 605–618. https://doi.org/10.1108/MMMS-07-2016-0029 doi: 10.1108/MMMS-07-2016-0029
    [45] E. Magyari, A. Pantokratoras, Aiding and opposing mixed convection flows over the Riga-plate, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 3158–3167. https://doi.org/10.1016/j.cnsns.2010.12.003 doi: 10.1016/j.cnsns.2010.12.003
    [46] J. Pang, K. S. Choi, Turbulent drag reduction by Lorentz force oscillation, Phys. Fluids, 16 (2004). https://doi.org/10.1063/1.1689711 doi: 10.1063/1.1689711
    [47] Y. Liu, Y. Jian, W. Tan, Entropy generation of electromagnetohydrodynamic (EMHD) flow in a curved rectangular microchannel, Int. J. Heat Mass Transf., 127 (2018), 901–913. https://doi.org/10.1016/j.ijheatmasstransfer.2018.06.147 doi: 10.1016/j.ijheatmasstransfer.2018.06.147
    [48] N. A. Zainal, R. Nazar, K. Naganthran, I. Pop, Unsteady EMHD stagnation point flow over a stretching/shrinking sheet in a hybrid Al2O3-Cu/H2O nanofluid, Int. Commun. Heat Mass, 123 (2021), 105205. https://doi.org/10.1016/j.icheatmasstransfer.2021.105205 doi: 10.1016/j.icheatmasstransfer.2021.105205
    [49] M. Bilal, Micropolar flow of EMHD nanofluid with nonlinear thermal radiation and slip effects, Alex. Eng. J., 59 (2020), 965–976. https://doi.org/10.1016/j.aej.2020.03.023 doi: 10.1016/j.aej.2020.03.023
    [50] N. Kakar, A. Khalid, A. S. Al-Johani, N. Alshammari, I. Khan, Melting heat transfer of a magnetized water-based hybrid nanofluid flow past over a stretching/shrinking wedge, Case Stud. Therm. Eng., 30 (2022), 101674. https://doi.org/10.1016/j.csite.2021.101674 doi: 10.1016/j.csite.2021.101674
    [51] T. Abbas, T. Hayat, M. Ayub, M. M. Bhatti, A. Alsaedi, Electromagnetohydrodynamic nanofluid flow past a porous Riga plate containing gyrotactic microorganism, Neural Comput. Appl., 31 (2019), 1905–1913. Available from: https://link.springer.com/article/10.1007/s00521-017-3165-7.
    [52] N. Joshi, H. Upreti, A. K. Pandey, MHD Darcy-Forchheimer Cu-Ag/H2O-C2H6O2 hybrid nanofluid flow via a porous stretching sheet with suction/blowing and viscous dissipation, Int. J. Comput. Meth. Eng. Sci. Mech., 2022, 1–9. https://doi.org/10.1080/15502287.2022.2030426 doi: 10.1080/15502287.2022.2030426
    [53] A. Mishra, H. Upreti, A comparative study of Ag-MgO/water and Fe3O4-CoFe2O4/EG-water hybrid nanofluid flow over a curved surface with chemical reaction using Buongiorno model, Partial Differ. Eq. Appl. Math., 5 (2022), 100322. https://doi.org/10.1016/j.padiff.2022.100322 doi: 10.1016/j.padiff.2022.100322
    [54] S. Abdal, U. Habib, I. Siddique, A. Akgül, B. Ali, Attribution of multi-slips and bioconvection for micropolar nanofluids transpiration through porous medium over an extending sheet with PST and PHF conditions, Int. J. Appl. Comput. Math., 7 (2021), 1–21. Available from: https://link.springer.com/article/10.1007/s40819-021-01137-9.
    [55] S. Abdal, I. Siddique, D. Alrowaili, Q. Al-Mdallal, S. Hussain, Exploring the magnetohydrodynamic stretched flow of Williamson Maxwell nanofluid through porous matrix over a permeated sheet with bioconvection and activation energy, Sci. Rep., 12 (2022), 1–12. Available from: https://www.nature.com/articles/s41598-021-04581-1.
    [56] L. A. Zadeh, Fuzzy sets, Inform. Control, 8 (1965), 338–353. https://doi.org/10.1142/9789814261302_0021
    [57] S. Chang, L. Zadeh, On fuzzy mapping and control, IEEE T. Syst. Man Cy., 2 (1972), 30–34. https://doi.org/10.1109/TSMC.1972.5408553
    [58] D. Dubois, H. Prade, Towards fuzzy differential calculus: part 3, differentiation, Fuzzy Set. Syst., 8 (1982), 225–233. https://doi.org/10.1016/S0165-0114(82)80001-8
    [59] O. Kaleva, Fuzzy differential equations, Fuzzy Set. Syst., 24 (1987), 301–317. https://doi.org/10.1016/0165-0114(87)90029-7
    [60] O. Kaleva, The Cauchy problem for fuzzy differential equations, Fuzzy Set. Syst., 35 (1990), 389–396. https://doi.org/10.1016/0165-0114(90)90010-4
    [61] S. Seikkala, On the fuzzy initial value problem, Fuzzy Set. Syst., 24 (1987), 319–330. https://doi.org/10.1016/0165-0114(87)90030-3
    [62] G. Borah, P. Dutta, G. C. Hazarika, Numerical study on second-grade fluid flow problems using analysis of fractional derivatives under fuzzy environment, in Soft Computing Techniques and Applications, Adv. Intell. Syst. Comput., 1248 (2021). https://doi.org/10.1007/978-981-15-7394-1_4
    [63] A. Barhoi, G. C. Hazarika, P. Dutta, Numerical solution of MHD viscous flow over a shrinking sheet with second order slip under fuzzy environment, Adv. Math. Sci. J., 9 (2020), 10621–10631. https://doi.org/10.37418/amsj.9.12.47
    [64] U. Biswal, S. Chakraverty, B. K. Ojha, Natural convection of nanofluid flow between two vertical flat plates with imprecise parameter, Coupled Syst. Mech., 9 (2020), 219–235. https://doi.org/10.12989/csm.2020.9.3.219
    [65] M. Nadeem, A. Elmoasry, I. Siddique, F. Jarad, R. M. Zulqarnain, J. Alebraheem, N. S. Elazab, Study of triangular fuzzy hybrid nanofluids on the natural convection flow and heat transfer between two vertical plates, Comput. Intell. Neurosc., 2021 (2021). https://doi.org/10.1155/2021/3678335
    [66] I. Siddique, R. M. Zulqarnain, M. Nadeem, F. Jarad, Numerical simulation of MHD Couette flow of a fuzzy nanofluid through an inclined channel with thermal radiation effect, Comput. Intell. Neurosc., 2021 (2021), 1–16. https://doi.org/10.1155/2021/6608684
    [67] S. Chakraverty, S. Tapaswini, D. Behera, Fuzzy differential equations and applications for engineers and scientists, CRC Press, Boca Raton, 2016. https://doi.org/10.1201/9781315372853
    [68] M. Nadeem, I. Siddique, R. Ali, N. Alshammari, R. N. Jamil, N. Hamadneh, M. Andualem, Study of third-grade fluid under the fuzzy environment with Couette and Poiseuille flows, Math. Probl. Eng., 2022 (2022). https://doi.org/10.1155/2022/2458253
    [69] I. Siddique, R. M. Zulqarnain, M. Nadeem, F. Jarad, Numerical simulation of MHD Couette flow of a fuzzy nanofluid through an inclined channel with thermal radiation effect, Comput. Intell. Neurosc., 2021 (2021). https://doi.org/10.1155/2021/6608684
    [70] M. Nadeem, I. Siddique, F. Jarad, R. N. Jamil, Numerical study of MHD third-grade fluid flow through an inclined channel with ohmic heating under fuzzy environment, Math. Probl. Eng., 2021 (2021). https://doi.org/10.1155/2021/9137479
    [71] M. Bilal, H. Tariq, Y. Urva, I. Siddique, S. Shah, T. Sajid, et al., A novel nonlinear diffusion model of magneto-micropolar fluid comprising Joule heating and velocity slip effects, Waves Random Complex Media, 2022, 1–17. https://doi.org/10.1080/17455030.2022.2079761
    [72] I. Siddique, R. N. Jamil, M. Nadeem, H. A. El-Wahed Khalifa, F. Alotaibi, I. Khan, et al., Fuzzy analysis for thin-film flow of a third-grade fluid down an inclined plane, Math. Probl. Eng., 2022 (2022), 3495228. https://doi.org/10.1155/2022/3495228
    [73] I. Siddique, M. Nadeem, I. Khan, R. N. Jamil, M. A. Shamseldin, A. Akgül, Analysis of fuzzified boundary value problems for MHD Couette and Poiseuille flow, Sci. Rep., 12 (2022), 1–28. Available from: https://www.nature.com/articles/s41598-022-12110-x.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
