Research article

MFCEN: A lightweight multi-scale feature cooperative enhancement network for single-image super-resolution

  • In recent years, significant progress has been made in single-image super-resolution thanks to advances in deep convolutional neural networks (CNNs) and transformer-based architectures, the two techniques that lead current super-resolution research. However, performance improvements often come at the cost of a substantial increase in the number of parameters, which limits the practical applications of super-resolution methods. Existing lightweight super-resolution methods focus primarily on single-scale feature extraction, which misses multi-scale features and results in incomplete feature acquisition and poor image reconstruction. In response to these challenges, this paper proposes a lightweight multi-scale feature cooperative enhancement network (MFCEN). The network consists of three parts: shallow feature extraction, deep feature extraction, and image reconstruction. In the deep feature extraction part, a novel integrated multi-level feature module is introduced that leverages the strong local perceptual capabilities of CNNs and the superior global information processing of transformers: depthwise separable convolutions extract local information, while a block-scale and global feature extraction module is built on vision transformers (ViTs). While extracting features at these three scales, a satisfiability attention mechanism with a feed-forward network that can control the information flow is used to keep the network lightweight. Compared with existing CNN-transformer hybrid super-resolution networks, MFCEN significantly reduces the number of parameters while maintaining performance, an improvement particularly evident at a scale factor of 3. Experiments demonstrate that the proposed model surpasses the reconstruction performance of the 498K-parameter SPAN model with a mere 488K parameters, and extensive experiments on commonly used image super-resolution datasets further validate the effectiveness of the network.

    Citation: Jiange Liu, Yu Chen, Xin Dai, Li Cao, Qingwu Li. MFCEN: A lightweight multi-scale feature cooperative enhancement network for single-image super-resolution[J]. Electronic Research Archive, 2024, 32(10): 5783-5803. doi: 10.3934/era.2024267




    In the study of line congruences, focal surfaces are well recognized. A line congruence is a family of lines carrying one surface to another, and a focal surface is one of the sheets associated with such a congruence. Line congruences were introduced into the field of visualization by Pottmann et al. [1]. They can be used to visualize the strain and intensity distribution on a plane, the temperature, rainfall, and ozone over the earth's surface, and so forth. Focal surfaces are also used to evaluate the quality of a surface prior to further processing; for more information see, for instance, [2,3,4,5]. Numerous studies have been carried out on focal surfaces and curves; see, for example, [6,7,8,9]. Sasai [10] described the modified orthogonal frame of a space curve in Euclidean 3-space as a helpful tool for examining analytic curves with singular points, where the Frenet frame is ineffective. The modified orthogonal frame has recently been the subject of various investigations [11,12,13,14,15,16].

    The envelope of a moving sphere with variable radius is called a canal surface, and such surfaces are frequently used in solid and surface modeling. A canal surface is the envelope of a one-parameter family of spheres centered at a spine curve c(s) with radius r(s); the surface is swept out as the spheres defined by c(s) and r(s) move along the spine curve. These surfaces have a wide range of uses, including shape reconstruction, robot motion planning, the construction of blending surfaces, and the convenient visualization of long, thin objects such as pipes, ropes, poles, and intestines. When r(s) is constant, the canal surface is called a tubular surface. Tubular surfaces have been studied with respect to different frames; for more details, see [17,18,19,20,21,22,23]. The paper is organized as follows: The fundamental notions and the modified orthogonal frame are presented in Section 2. In Section 3, we construct tubular surfaces with the modified orthogonal frame and establish several results about these surfaces. Section 4 provides the focal surfaces of tubular surfaces with respect to the modified orthogonal frame in E³. Finally, an example that confirms our findings is presented in Section 5.

    Let α=α(s) be a space curve parameterized by arc length s in E³, and let t, n, b be the unit tangent, principal normal, and binormal vectors at each point of α(s), respectively. Then the Serret–Frenet equations are:

    $$\begin{bmatrix} t'(s) \\ n'(s) \\ b'(s) \end{bmatrix}=\begin{bmatrix} 0 & \kappa & 0 \\ -\kappa & 0 & \tau \\ 0 & -\tau & 0 \end{bmatrix}\begin{bmatrix} t(s) \\ n(s) \\ b(s) \end{bmatrix}, \tag{2.1}$$

    where κ and τ are, respectively, the curvature and torsion functions of α.

    Since the Serret–Frenet frame is inadequate for studying analytic space curves whose curvatures have discrete zero points (the principal normal and binormal vectors may be discontinuous at the zeros of the curvature), Sasai presented an orthogonal frame and obtained a formula corresponding to the Frenet–Serret equations [10].

    Let α : I → E³ be an analytic curve. We suppose that the curvature κ(s) of α is not identically zero. We define an orthogonal frame {T,N,B} by

    $$T=\frac{d\alpha}{ds},\qquad N=\frac{dT}{ds},\qquad B=T\times N. \tag{2.2}$$

    The relations between the frames {T,N,B} and {t,n,b} at nonzero points of κ are expressed as follows:

    $$T=t,\qquad N=\kappa n,\qquad B=\kappa b, \tag{2.3}$$

    and we have

    $$\langle T,T\rangle=1,\qquad \langle N,N\rangle=\langle B,B\rangle=\kappa^2,\qquad \langle T,N\rangle=\langle T,B\rangle=\langle N,B\rangle=0. \tag{2.4}$$

    From Eqs (2.1) and (2.3), a straightforward calculation leads to

    $$\begin{bmatrix} T'(s) \\ N'(s) \\ B'(s) \end{bmatrix}=\begin{bmatrix} 0 & 1 & 0 \\ -\kappa^2 & \dfrac{\kappa'}{\kappa} & \tau \\ 0 & -\tau & \dfrac{\kappa'}{\kappa} \end{bmatrix}\begin{bmatrix} T(s) \\ N(s) \\ B(s) \end{bmatrix}, \tag{2.5}$$

    and

    $$\tau(s)=\frac{\det(\alpha',\alpha'',\alpha''')}{\kappa^2}$$

    is the torsion of α. Therefore, the frame denoted by Eq (2.5) is called the modified orthogonal frame.
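    As a quick symbolic sanity check of Eq (2.5), the following SymPy sketch represents each frame vector by its coefficient triple with respect to the orthonormal Frenet frame (t, n, b) and differentiates using Eq (2.1); the derivative of N = κn then matches the second row of Eq (2.5). The helper name d_ds and all variable names are illustrative choices, not part of the original derivation.

```python
import sympy as sp

s = sp.symbols('s')
kappa = sp.Function('kappa')(s)
tau = sp.Function('tau')(s)

# A frame vector is stored as its coefficient triple w.r.t. (t, n, b).
# Differentiation uses the Serret-Frenet equations (2.1):
#   t' = kappa*n,  n' = -kappa*t + tau*b,  b' = -tau*n.
def d_ds(v):
    ct, cn, cb = v
    return (sp.diff(ct, s) - kappa*cn,
            sp.diff(cn, s) + kappa*ct - tau*cb,
            sp.diff(cb, s) + tau*cn)

T = (1, 0, 0)        # T = t,        cf. Eq (2.3)
N = (0, kappa, 0)    # N = kappa*n
B = (0, 0, kappa)    # B = kappa*b

# Second row of Eq (2.5): N' = -kappa^2*T + (kappa'/kappa)*N + tau*B
expected = tuple(-kappa**2*Ti + (sp.diff(kappa, s)/kappa)*Ni + tau*Bi
                 for Ti, Ni, Bi in zip(T, N, B))
print([sp.simplify(x - y) for x, y in zip(d_ds(N), expected)])   # [0, 0, 0]
```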

    Let Υ(s,θ) be a surface in E³ and U(s,θ) be the unit normal vector field on Υ(s,θ) defined by $U=\dfrac{\Upsilon_s\times\Upsilon_\theta}{\left\|\Upsilon_s\times\Upsilon_\theta\right\|}$, where $\Upsilon_s=\dfrac{\partial \Upsilon}{\partial s}$ and $\Upsilon_\theta=\dfrac{\partial \Upsilon}{\partial \theta}$ are the tangent vectors of Υ(s,θ). The metric (first fundamental form) I of Υ(s,θ) is defined by

    $$I=g_{11}\,ds^2+2g_{12}\,ds\,d\theta+g_{22}\,d\theta^2,$$

    where $g_{11}=\langle\Upsilon_s,\Upsilon_s\rangle$, $g_{12}=\langle\Upsilon_s,\Upsilon_\theta\rangle$, and $g_{22}=\langle\Upsilon_\theta,\Upsilon_\theta\rangle$.

    Also, we can define the second fundamental form of Υ(s,θ) as

    $$II=h_{11}\,ds^2+2h_{12}\,ds\,d\theta+h_{22}\,d\theta^2,$$

    where $h_{11}=\langle\Upsilon_{ss},U\rangle$, $h_{12}=\langle\Upsilon_{s\theta},U\rangle$, $h_{22}=\langle\Upsilon_{\theta\theta},U\rangle$, and U is the unit normal vector of the surface. The Gaussian curvature K and the mean curvature H are, respectively, expressed as:

    $$K=\frac{h_{11}h_{22}-h_{12}^2}{g_{11}g_{22}-g_{12}^2},\qquad H=\frac{h_{11}g_{22}-2g_{12}h_{12}+g_{11}h_{22}}{2(g_{11}g_{22}-g_{12}^2)}. \tag{2.6}$$

    In this section, we construct a tubular surface with the modified orthogonal frame and establish some important properties of this surface in E³. With respect to the modified orthogonal frame, the tubular surface has the parametrization:

    $$\Upsilon(s,\theta)=c(s)+\frac{r}{\kappa(s)}\big(\cos\theta\,N(s)+\sin\theta\,B(s)\big), \tag{3.1}$$

    where c(s) is the center curve, r is a constant, and κ≠0. The partial derivatives of Υ(s,θ) are given by

    $$\begin{aligned}
    \Upsilon_s&=(1-r\kappa\cos\theta)\,T+\frac{r\tau}{\kappa}\big({-\sin\theta}\,N+\cos\theta\,B\big),\\
    \Upsilon_\theta&=\frac{r}{\kappa}\big({-\sin\theta}\,N+\cos\theta\,B\big),\\
    \Upsilon_{ss}&=\big({-r\kappa'\cos\theta}+r\kappa\tau\sin\theta\big)T+\Big(1-r\kappa\cos\theta-\frac{r\tau'}{\kappa}\sin\theta-\frac{r\tau^2}{\kappa}\cos\theta\Big)N\\
    &\quad+\Big(\frac{r\tau'}{\kappa}\cos\theta-\frac{r\tau^2}{\kappa}\sin\theta\Big)B,\\
    \Upsilon_{s\theta}&=r\kappa\sin\theta\,T-\frac{r\tau}{\kappa}\big(\cos\theta\,N+\sin\theta\,B\big),\\
    \Upsilon_{\theta\theta}&=-\frac{r}{\kappa}\big(\cos\theta\,N+\sin\theta\,B\big). \end{aligned} \tag{3.2}$$

    Therefore, we obtain

    $$g_{11}=(1-r\kappa\cos\theta)^2+r^2\tau^2,\qquad g_{12}=r^2\tau,\qquad g_{22}=r^2,$$
    $$g=g_{11}g_{22}-g_{12}^2=r^2(1-r\kappa\cos\theta)^2\neq 0. \tag{3.3}$$

    The unit normal vector field U is given by

    $$U(s,\theta)=-\frac{1}{\kappa}\big(\cos\theta\,N+\sin\theta\,B\big), \tag{3.4}$$

    and we have

    $$h_{11}=-\kappa\cos\theta(1-r\kappa\cos\theta)+r\tau^2,\qquad h_{12}=r\tau,\qquad h_{22}=r. \tag{3.5}$$

    From Eq (2.6), we find

    $$K=\frac{-\kappa\cos\theta}{r(1-r\kappa\cos\theta)},\qquad H=\frac{1-2r\kappa\cos\theta}{2r(1-r\kappa\cos\theta)}. \tag{3.6}$$

    From Eqs (3.1) and (3.6), we note that if the Gaussian curvature K vanishes, then the tubular surface is generated by a moving sphere with radius r=1 [18].
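    As an independent cross-check, Eqs (3.3)–(3.6) can be reproduced symbolically. The SymPy sketch below works in coordinates with respect to the orthonormal Frenet frame (t, n, b), in which Υ − c = r(cosθ n + sinθ b); the helper d_ds and all variable names are illustrative.

```python
import sympy as sp

s, theta, r = sp.symbols('s theta r', positive=True)
kappa = sp.Function('kappa', positive=True)(s)
tau = sp.Function('tau')(s)

# Coefficient triples w.r.t. the orthonormal Frenet frame (t, n, b);
# differentiation in s uses the Serret-Frenet equations (2.1).
def d_ds(v):
    return sp.Matrix([sp.diff(v[0], s) - kappa*v[1],
                      sp.diff(v[1], s) + kappa*v[0] - tau*v[2],
                      sp.diff(v[2], s) + tau*v[1]])

# Y(s,theta) - c(s) = (r/kappa)(cos(theta) N + sin(theta) B) = r(cos(theta) n + sin(theta) b)
P  = sp.Matrix([0, r*sp.cos(theta), r*sp.sin(theta)])
Ys = sp.Matrix([1, 0, 0]) + d_ds(P)                   # c'(s) = t
Yt = sp.diff(P, theta)
U  = sp.Matrix([0, -sp.cos(theta), -sp.sin(theta)])   # unit normal, cf. Eq (3.4)

g11, g12, g22 = Ys.dot(Ys), Ys.dot(Yt), Yt.dot(Yt)
h11, h12, h22 = d_ds(Ys).dot(U), sp.diff(Ys, theta).dot(U), sp.diff(Yt, theta).dot(U)

K = sp.simplify((h11*h22 - h12**2) / (g11*g22 - g12**2))
H = sp.simplify((h11*g22 - 2*g12*h12 + g11*h22) / (2*(g11*g22 - g12**2)))
print(K)   # -kappa*cos(theta)/(r*(1 - r*kappa*cos(theta))), cf. Eq (3.6)
print(H)   # (1 - 2*r*kappa*cos(theta))/(2*r*(1 - r*kappa*cos(theta)))
```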

    Also, the curvatures of the tubular surface Υ(s,θ) satisfy the relation:

    $$H=\frac{1}{2}\Big(Kr+\frac{1}{r}\Big). \tag{3.7}$$
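    Indeed, substituting Eq (3.6) into the right-hand side gives

    $$\frac{1}{2}\Big(Kr+\frac{1}{r}\Big)=\frac{1}{2}\Big(\frac{-\kappa\cos\theta}{1-r\kappa\cos\theta}+\frac{1}{r}\Big)=\frac{-r\kappa\cos\theta+1-r\kappa\cos\theta}{2r(1-r\kappa\cos\theta)}=\frac{1-2r\kappa\cos\theta}{2r(1-r\kappa\cos\theta)}=H.$$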

    The shape operator of Υ(s,θ) is given by

    $$S=\frac{1}{g}\begin{bmatrix} h_{11}g_{22}-h_{12}g_{12} & h_{12}g_{22}-h_{22}g_{12}\\ -h_{11}g_{12}+h_{12}g_{11} & -h_{12}g_{12}+h_{22}g_{11}\end{bmatrix}=\frac{1}{r^2(1-r\kappa\cos\theta)^2}\begin{bmatrix} -r^2\kappa\cos\theta(1-r\kappa\cos\theta) & 0\\ r\tau(1-r\kappa\cos\theta) & r(1-r\kappa\cos\theta)^2 \end{bmatrix}. \tag{3.8}$$

    It follows that the principal curvatures of Υ(s,θ) are obtained as:

    $$k_1=\frac{1}{r},\qquad k_2=\frac{-\kappa\cos\theta}{1-r\kappa\cos\theta}=Kr. \tag{3.9}$$

    Proposition 3.1. Let Υ(s,θ) be a tubular surface in E3, then Υ(s,θ) is not a flat surface.

    Proof. The proof can be obtained by straightforward calculations from Eq (3.6).

    Proposition 3.2. Let Υ(s,θ) be a tubular surface in E3, then Υ(s,θ) is minimal if and only if

    $$r=\frac{1}{2\kappa\cos\theta}.$$

    Proof. The result is obtained directly from Eq (3.6).

    Theorem 3.1. Let Υ(s,θ) be a tubular surface in E3 with a modified orthogonal frame, then

    (i) the s-curves of Υ(s,θ) are asymptotic curves if and only if

    $$r=\frac{\kappa\cos\theta}{\kappa^2\cos^2\theta+\tau^2},$$

    (ii) the θ-curves of Υ(s,θ) cannot be asymptotic curves.

    Proof. From the definition of asymptotic curves, we obtain

    $$\langle\Upsilon_{ss},U\rangle=0,\qquad \langle\Upsilon_{\theta\theta},U\rangle=0.$$

    ⅰ. From Eq (3.5), we can get

    $$h_{11}=-\kappa\cos\theta(1-r\kappa\cos\theta)+r\tau^2=0\ \Longrightarrow\ r=\frac{\kappa\cos\theta}{\kappa^2\cos^2\theta+\tau^2}.$$

    ⅱ. Since Υ(s,θ) is regular, r≠0 and hence h22=r≠0. Thus, the θ-curves of Υ(s,θ) cannot be asymptotic curves.

    Theorem 3.2. Let Υ(s,θ) be a tubular surface in E3 with a modified orthogonal frame, then

    i. s-curves of Υ(s,θ) are geodesic if and only if,

    $$r\kappa^2\cos^2\theta-2\kappa\cos\theta+r\tau^2=c,$$

    where c is a constant.

    ii. θ-curves of  Υ(s,θ) are geodesic.

    Proof. From the definition of geodesic curves, we require Υss×U=0 and Υθθ×U=0.

    ⅰ. According to Eqs (3.2) and (3.4), we obtain

    $$\begin{aligned}\Upsilon_{ss}\times U&=\big(\kappa\sin\theta(r\kappa\cos\theta-1)+r\tau'\big)T+\frac{r}{\kappa}\sin\theta\big(\tau\kappa\sin\theta-\kappa'\cos\theta\big)N\\&\quad-\frac{r}{\kappa}\cos\theta\big(\tau\kappa\sin\theta-\kappa'\cos\theta\big)B.\end{aligned}$$

    Since T, N, and B are linearly independent, Υss×U=0 if and only if

    $$\kappa\sin\theta(r\kappa\cos\theta-1)+r\tau'=0,\qquad \frac{r}{\kappa}\sin\theta(\tau\kappa\sin\theta-\kappa'\cos\theta)=0,\qquad \frac{r}{\kappa}\cos\theta(\tau\kappa\sin\theta-\kappa'\cos\theta)=0.$$

    Combining the last two equations and performing the necessary operations, we get

    $$r\kappa\kappa'\cos^2\theta-\kappa'\cos\theta+r\tau\tau'=0,$$
    and integrating with respect to s,
    $$r\kappa^2\cos^2\theta-2\kappa\cos\theta+r\tau^2=c,$$

    where c is constant.

    ⅱ. Also, from Eqs (3.2) and (3.4), Υθθ and U are parallel, so Υθθ×U=0. Thus, the θ-parameter curves are geodesic curves.

    Definition 3.1. The pair (X,Y), X≠Y, of the curvatures K, H of a tubular surface Υ(s,θ) is said to be a (X,Y)-Weingarten surface if Φ(X,Y)=0, where the Jacobi function is defined by Φ(X,Y)=X_sY_θ−X_θY_s [24].

    Definition 3.2. The pair (X,Y),XY of the curvatures K,H of the tubular surface Υ(s,θ) is said to be a (X,Y)-linear Weingarten surface if Υ(s,θ) satisfies the following relation:

    aX+bY=c,

    where (a,b,c)∈ℝ³ and (a,b,c)≠(0,0,0) [25].

    Now, we compute the partial derivatives of the curvatures of Υ(s,θ):

    $$K_s=\frac{-\kappa'\cos\theta}{r(1-r\kappa\cos\theta)^2},\qquad K_\theta=\frac{\kappa\sin\theta}{r(1-r\kappa\cos\theta)^2},$$
    $$H_s=\frac{r}{2}\left(\frac{-\kappa'\cos\theta}{r(1-r\kappa\cos\theta)^2}\right),\qquad H_\theta=\frac{r}{2}\left(\frac{\kappa\sin\theta}{r(1-r\kappa\cos\theta)^2}\right). \tag{3.10}$$

    Proposition 3.3. Let Υ(s,θ) be a tubular surface in E3 with a modified orthogonal frame, then Υ(s,θ) is a Weingarten surface.

    Proof. A (K,H)-Weingarten surface Υ(s,θ) satisfies the Jacobi equation:

    $$H_sK_\theta-H_\theta K_s=0,\quad\text{that is,}\quad H_sK_\theta=H_\theta K_s.$$

    Thus, the conclusion follows from Eq (3.10).
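    The Jacobi condition can also be checked directly from Eq (3.6); the following short SymPy sketch (variable names illustrative) confirms it:

```python
import sympy as sp

s, theta, r = sp.symbols('s theta r', positive=True)
kappa = sp.Function('kappa', positive=True)(s)

# Curvatures of the tubular surface, Eq (3.6)
K = -kappa*sp.cos(theta) / (r*(1 - r*kappa*sp.cos(theta)))
H = (1 - 2*r*kappa*sp.cos(theta)) / (2*r*(1 - r*kappa*sp.cos(theta)))

# Jacobi function Phi(K, H) = K_s*H_theta - K_theta*H_s
phi = sp.diff(K, s)*sp.diff(H, theta) - sp.diff(K, theta)*sp.diff(H, s)
print(sp.simplify(phi))   # 0, so Y(s,theta) is a Weingarten surface
```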

    Proposition 3.4. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame. If Υ(s,θ) is a (K,H)-linear Weingarten surface, then for c=1 the relations a=−r² and b=2r hold.

    Proof. The (K,H)-linear Weingarten surface satisfies:

    aK+bH=1,

    where a,b∈ℝ and (a,b)≠(0,0). From Eq (3.7), we get

    $$2r-b=\left(\frac{-\kappa\cos\theta}{1-r\kappa\cos\theta}\right)(br+2a),$$

    and

    $$2\kappa\cos\theta(-r^2+rb+a)+2r-b=0,$$

    so we find,

    $$2r-b=0,\qquad 2\kappa\cos\theta(-r^2+rb+a)=0;$$

    since b=2r, we therefore obtain a=−r².
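    The coefficient comparison can be reproduced symbolically. In the sketch below, x stands for κcosθ (an illustrative substitution), and solving the two coefficient equations returns a=−r² and b=2r:

```python
import sympy as sp

a, b, r, x = sp.symbols('a b r x')    # x stands for kappa*cos(theta)

K = -x / (r*(1 - r*x))
H = (1 - 2*r*x) / (2*r*(1 - r*x))

# Clear denominators in a*K + b*H - 1 = 0 and compare coefficients in x.
expr = sp.expand(sp.cancel((a*K + b*H - 1) * 2*r*(1 - r*x)))
poly = sp.Poly(expr, x)
sol = sp.solve([poly.coeff_monomial(x), poly.coeff_monomial(1)], [a, b])
print(sol)   # {a: -r**2, b: 2*r}
```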

    Let Υ(s,θ) be a surface in E³, and consider the line congruence parameterized by

    $$C(s,\theta,z)=\Upsilon(s,\theta)+zE(s,\theta), \tag{4.1}$$

    where E(s,θ) is a unit vector field and z is a marked distance. For each fixed (s,θ), Eq (4.1) describes a line of the congruence, called a generatrix. Additionally, there exist two special points (which may be real or imaginary) on the generatrix of C. These points, called focal points, are the osculating points with the generatrix. Hence, focal surfaces are defined as the geometric locus of the focal points. If E(s,θ)=U(s,θ), then C=C_u is a normal congruence. The parametric equation of the focal surfaces of C_u is given as

    $$\Upsilon_i(s,\theta)=\Upsilon(s,\theta)+\frac{1}{k_i}U(s,\theta);\qquad i=1,2, \tag{4.2}$$

    where k1 and k2 are the principal curvature functions of the surface Υ(s,θ) [1].

    In this section, we obtain the focal surfaces of a tubular surface with the modified orthogonal frame in E³, and we investigate for these focal surfaces the properties obtained for the tubular surface. Since k1=1/r, the corresponding focal surface degenerates to the center curve c(s). Hence, we obtain the focal surface Υ*(s,θ) of Υ(s,θ) from the principal curvature k2=−κcosθ/(1−rκcosθ) as follows:

    $$\Upsilon^*(s,\theta)=c(s)+\frac{1}{\kappa\cos\theta}\Big(\frac{\cos\theta}{\kappa}N+\frac{\sin\theta}{\kappa}B\Big)=c(s)+\frac{1}{\kappa^2}\big(N+\tan\theta\,B\big), \tag{4.3}$$

    where κ≠0 and cosθ≠0.

    The partial derivatives of Υ*(s,θ) are

    $$\begin{aligned}
    \Upsilon^*_s&=-\frac{1}{\kappa^2}\Big(\frac{\kappa'}{\kappa}+\tau\tan\theta\Big)N+\frac{1}{\kappa^2}\Big(\tau-\frac{\kappa'}{\kappa}\tan\theta\Big)B,\\
    \Upsilon^*_\theta&=\frac{\sec^2\theta}{\kappa^2}\,B,\\
    \Upsilon^*_{ss}&=\Big(\frac{\kappa'}{\kappa}+\tau\tan\theta\Big)T+\Big(\frac{2\kappa'^2}{\kappa^4}+\frac{2\kappa'\tau\tan\theta}{\kappa^3}-\frac{\kappa''}{\kappa^3}-\frac{\tau'\tan\theta}{\kappa^2}-\frac{\tau^2}{\kappa^2}\Big)N\\
    &\quad+\Big({-\frac{2\kappa'\tau}{\kappa^3}}+\frac{2\kappa'^2\tan\theta}{\kappa^4}+\frac{\tau'}{\kappa^2}-\frac{\tau^2\tan\theta}{\kappa^2}-\frac{\kappa''\tan\theta}{\kappa^3}\Big)B,\\
    \Upsilon^*_{\theta\theta}&=\frac{2\sec^2\theta\tan\theta}{\kappa^2}\,B. \end{aligned} \tag{4.4}$$

    From Eq (4.4), we get

    $$g_{11}=\frac{\sec^2\theta}{\kappa^2}\Big(\tau^2+\frac{\kappa'^2}{\kappa^2}\Big),\qquad g_{12}=\frac{\sec^2\theta}{\kappa^2}\Big(\tau-\frac{\kappa'}{\kappa}\tan\theta\Big),\qquad g_{22}=\frac{\sec^4\theta}{\kappa^2},$$
    $$g=g_{11}g_{22}-g_{12}^2=\frac{\sec^4\theta}{\kappa^4}\Big(\tau\tan\theta+\frac{\kappa'}{\kappa}\Big)^2\neq 0, \tag{4.5}$$

    and

    $$\Upsilon^*_s\times\Upsilon^*_\theta=-\frac{\sec^2\theta}{\kappa^2}\Big(\frac{\kappa'}{\kappa}+\tau\tan\theta\Big)T,\qquad \left\|\Upsilon^*_s\times\Upsilon^*_\theta\right\|=\frac{\sec^2\theta}{\kappa^2}\left|\frac{\kappa'}{\kappa}+\tau\tan\theta\right|,\qquad U^*(s,\theta)=-T. \tag{4.6}$$

    From Eqs (4.4) and (4.6), we obtain

    $$h_{11}=-\Big(\frac{\kappa'}{\kappa}+\tau\tan\theta\Big),\qquad h_{12}=0,\qquad h_{22}=0. \tag{4.7}$$

    Also, from Eqs (4.5) and (4.7), we get

    $$K^*=0,\qquad H^*=-\frac{\kappa^3}{2(\kappa\tau\tan\theta+\kappa')}. \tag{4.8}$$

    Further, the shape operator of the focal surface Υ*(s,θ) is expressed as

    $$S^*=\frac{1}{\gamma}\begin{bmatrix} -\dfrac{\sec^4\theta}{\kappa^2}\Big(\dfrac{\kappa'}{\kappa}+\tau\tan\theta\Big) & 0\\[2mm] \dfrac{\sec^2\theta}{\kappa^2}\Big(\tau-\dfrac{\kappa'}{\kappa}\tan\theta\Big)\Big(\dfrac{\kappa'}{\kappa}+\tau\tan\theta\Big) & 0 \end{bmatrix}, \tag{4.9}$$

    where $\gamma=\dfrac{\sec^4\theta}{\kappa^4}\Big(\tau\tan\theta+\dfrac{\kappa'}{\kappa}\Big)^2$.
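    As with the tubular surface, Eqs (4.5)–(4.8) can be recomputed symbolically. The sketch below repeats the frame-coefficient helper from the Section 3 sketch (so it is self-contained) and uses Υ* − c = (1/κ)(n + tanθ b) together with U* = −T; all names are illustrative.

```python
import sympy as sp

s, theta = sp.symbols('s theta')
kappa = sp.Function('kappa', positive=True)(s)
tau = sp.Function('tau')(s)

def d_ds(v):   # derivative of a coefficient triple w.r.t. (t, n, b), via Eq (2.1)
    return sp.Matrix([sp.diff(v[0], s) - kappa*v[1],
                      sp.diff(v[1], s) + kappa*v[0] - tau*v[2],
                      sp.diff(v[2], s) + tau*v[1]])

# Focal surface (4.3): Y* - c = (1/kappa^2)(N + tan(theta) B) = (1/kappa)(n + tan(theta) b)
P  = sp.Matrix([0, 1/kappa, sp.tan(theta)/kappa])
Fs = sp.Matrix([1, 0, 0]) + d_ds(P)
Ft = sp.diff(P, theta)
Us = sp.Matrix([-1, 0, 0])                      # U* = -T, cf. Eq (4.6)

g11, g12, g22 = Fs.dot(Fs), Fs.dot(Ft), Ft.dot(Ft)
h11, h12, h22 = d_ds(Fs).dot(Us), sp.diff(Fs, theta).dot(Us), sp.diff(Ft, theta).dot(Us)

print(sp.simplify((h11*h22 - h12**2) / (g11*g22 - g12**2)))      # 0 = K*, cf. Eq (4.8)
print(sp.simplify((h11*g22 - 2*g12*h12 + g11*h22)
                  / (2*(g11*g22 - g12**2))))                     # H*, cf. Eq (4.8)
```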

    Proposition 4.1. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame obtained with the parametrization given by Eq (3.1) and let Υ*(s,θ) be its focal surface with the parametrization given by Eq (4.3), then the focal surface Υ*(s,θ) is a flat surface.

    Proof. The proof can be obtained by using Eq (4.8) and straightforward calculations.

    Proposition 4.2. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame obtained with the parametrization given by Eq (3.1) and let Υ*(s,θ) be its focal surface with the parametrization given by Eq (4.3), then the focal surface Υ*(s,θ) is not a minimal surface.

    Proof. From Eq (4.8), we get

    $$H^*=-\frac{\kappa^3}{2(\kappa\tau\tan\theta+\kappa')}\neq 0,$$

    because κ≠0. Hence, Υ*(s,θ) is not minimal.

    Theorem 4.1. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame obtained with the parametrization given by Eq (3.1) and let Υ*(s,θ) be its focal surface with the parametrization given by Eq (4.3), then

    (i) the s-curves of Υ*(s,θ) cannot be asymptotic curves;

    (ii) the θ-curves of Υ*(s,θ) are asymptotic curves.

    Proof. (ⅰ) From Eq (4.7), we get

    $$h_{11}=\langle\Upsilon^*_{ss},U^*\rangle=-\Big(\frac{\kappa'}{\kappa}+\tau\tan\theta\Big)\neq 0.$$

    Since Υ*(s,θ) is regular, $\tau\tan\theta+\kappa'/\kappa\neq 0$ by Eq (4.5), and hence h11≠0. Thus, the s-parameter curves of the focal surface Υ*(s,θ) cannot be asymptotic curves.

    (ⅱ) Also, from Eq (4.7), we get

    $$h_{22}=\langle\Upsilon^*_{\theta\theta},U^*\rangle=0.$$

    Since h22=0, the θ-curves of Υ*(s,θ) are asymptotic curves.

    Theorem 4.2. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame obtained with the parametrization given by Eq (3.1) and let Υ*(s,θ) be its focal surface with the parametrization given by Eq (4.3), then

    (i) the s-parameter curves of Υ*(s,θ) are geodesic curves if and only if

    $$\frac{2\kappa'}{\kappa}=\frac{\tau'}{\tau};$$

    (ii) the θ-parameter curves of Υ*(s,θ) are not geodesic curves.

    Proof. (ⅰ) From Eqs (4.4) and (4.6), we have

    $$\begin{aligned}\Upsilon^*_{ss}\times U^*&=\Big(\frac{2\kappa'\tau}{\kappa^3}-\frac{\tau'}{\kappa^2}-\frac{2\kappa'^2\tan\theta}{\kappa^4}+\frac{\kappa''\tan\theta}{\kappa^3}+\frac{\tau^2\tan\theta}{\kappa^2}\Big)N\\&\quad+\Big(\frac{2\kappa'^2}{\kappa^4}+\frac{2\kappa'\tau\tan\theta}{\kappa^3}-\frac{\tau'\tan\theta}{\kappa^2}-\frac{\kappa''}{\kappa^3}-\frac{\tau^2}{\kappa^2}\Big)B.\end{aligned}$$

    Therefore, $\Upsilon^*_{ss}\times U^*=0$ if and only if

    $$\frac{2\kappa'}{\kappa}=\frac{\tau'}{\tau}.$$

    (ⅱ) From Eqs (4.4) and (4.6), we get

    $$\Upsilon^*_{\theta\theta}\times U^*=-\frac{2\sec^2\theta\tan\theta}{\kappa^2}\,N.$$

    Since $\Upsilon^*_{\theta\theta}\times U^*\neq 0$, the θ-parameter curves of Υ*(s,θ) are not geodesic curves.

    Hence, the proof is completed.

    The partial derivatives of the curvatures of Υ*(s,θ) are

    $$K^*_s=0,\qquad K^*_\theta=0,$$
    $$H^*_s=-\frac{3\kappa^2\kappa'}{2(\kappa\tau\tan\theta+\kappa')}+\frac{\kappa^3(\kappa''+\kappa'\tau\tan\theta+\kappa\tau'\tan\theta)}{2(\kappa\tau\tan\theta+\kappa')^2},\qquad H^*_\theta=\frac{\kappa^4\tau\sec^2\theta}{2(\kappa\tau\tan\theta+\kappa')^2}. \tag{4.10}$$

    Thus, we get the result:

    Proposition 4.3. Let Υ(s,θ) be a tubular surface in E³ with a modified orthogonal frame obtained with the parametrization given by Eq (3.1) and let Υ*(s,θ) be its focal surface with the parametrization given by Eq (4.3), then the focal surface Υ*(s,θ) is a Weingarten surface but not a linear Weingarten surface.

    Proof. The conclusion follows from Eqs (4.8) and (4.10): since K*=0, the Jacobi equation holds identically, so Υ*(s,θ) is a Weingarten surface; on the other hand, H* is nonconstant, so no nontrivial linear relation aK*+bH*=c can hold.

    Definition 4.1. [8] A surface Υ(s,θ) in E³ with principal curvatures k1≠k2 has a generalized focal surface $\tilde{\Upsilon}(s,\theta)$ given by

    $$\tilde{\Upsilon}(s,\theta)=\Upsilon(s,\theta)+f(k_1,k_2)\,U(s,\theta),$$

    where f(k1,k2) is a function of its principal curvatures.

    If

    $$f(k_1,k_2)=\frac{k_1^2+k_2^2}{k_1+k_2},$$

    then ˜Υ(s,θ) is expressed as

    $$\tilde{\Upsilon}(s,\theta)=\Upsilon(s,\theta)+\left(\frac{k_1^2+k_2^2}{k_1+k_2}\right)U(s,\theta).$$

    Therefore, by employing this definition and using Eq (3.9), we can obtain a generalized focal surface ˜Υ(s,θ) of Υ(s,θ) as

    $$\tilde{\Upsilon}(s,\theta)=c(s)+\frac{1}{\kappa}\left(r-\frac{K^2r^4+1}{Kr^3+r}\right)\big(\cos\theta\,N+\sin\theta\,B\big).$$
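    Indeed, from Eq (3.9), $k_1=\frac{1}{r}$ and $k_2=Kr$, so

    $$f(k_1,k_2)=\frac{\frac{1}{r^2}+K^2r^2}{\frac{1}{r}+Kr}=\frac{K^2r^4+1}{Kr^3+r},$$

    and by Eqs (3.1) and (3.4), $\tilde{\Upsilon}=\Upsilon+fU=c+\frac{1}{\kappa}\,(r-f)\big(\cos\theta\,N+\sin\theta\,B\big)$.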

    Let us demonstrate the above considerations in a computational example. Assume the center curve α is given by

    $$\alpha(s)=\left(\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \sin\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{2s}{3}\right),$$

    then, its modified orthogonal frame is calculated as follows:

    $$\begin{aligned}
    T&=\left(-\frac{\sqrt{5}}{3}\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{\sqrt{5}}{3}\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{2}{3}\right),\\
    N&=\left(-\frac{5}{9}\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ -\frac{5}{9}\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ 0\right),\\
    B&=\left(\frac{10}{27}\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ -\frac{10}{27}\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{5\sqrt{5}}{27}\right).
    \end{aligned}$$

    When the radius function r(s)=1, we obtain the tubular surface:

    $$\Upsilon(s,\theta)=\left(\cos\Big(\frac{\sqrt{5}}{3}s\Big)-\cos\theta\cos\Big(\frac{\sqrt{5}}{3}s\Big)+\frac{2}{3}\sin\theta\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ \sin\Big(\frac{\sqrt{5}}{3}s\Big)-\cos\theta\sin\Big(\frac{\sqrt{5}}{3}s\Big)-\frac{2}{3}\sin\theta\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{2s}{3}+\frac{\sqrt{5}}{3}\sin\theta\right).$$

    In addition, we get the focal surface Υ*(s,θ) of Υ(s,θ) as

    $$\Upsilon^*(s,\theta)=\left(\cos\Big(\frac{\sqrt{5}}{3}s\Big)-\frac{9}{5}\cos\Big(\frac{\sqrt{5}}{3}s\Big)+\frac{6}{5}\tan\theta\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ \sin\Big(\frac{\sqrt{5}}{3}s\Big)-\frac{9}{5}\sin\Big(\frac{\sqrt{5}}{3}s\Big)-\frac{6}{5}\tan\theta\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{2s}{3}+\frac{3\sqrt{5}}{5}\tan\theta\right).$$

    The center curve α(s) and the corresponding tubular surface Υ(s,θ) are shown in Figures 1a and 1b, respectively.

    Figure 1.  (a) The center curve α(s); (b) the tubular surface Υ(s,θ).
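    For readers who wish to reproduce Figure 1b, a minimal NumPy/Matplotlib sketch of the example tubular surface is given below; the plotting ranges and grid resolution are assumptions chosen for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

a = np.sqrt(5)/3                     # frequency sqrt(5)/3 from the example curve
s, th = np.meshgrid(np.linspace(0, 6*np.pi, 400),
                    np.linspace(0, 2*np.pi, 120))

# Tubular surface of the example with r(s) = 1
x = np.cos(a*s) - np.cos(th)*np.cos(a*s) + (2/3)*np.sin(th)*np.sin(a*s)
y = np.sin(a*s) - np.cos(th)*np.sin(a*s) - (2/3)*np.sin(th)*np.cos(a*s)
z = 2*s/3 + (np.sqrt(5)/3)*np.sin(th)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(x, y, z, cmap='viridis', linewidth=0)
ax.set_title('Tubular surface, r = 1')
plt.show()
```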

    Also, the generalized focal surface of Υ(s,θ) is obtained as follows:

    $$\tilde{\Upsilon}(s,\theta)=\left(\cos\Big(\frac{\sqrt{5}}{3}s\Big)-\rho\cos\theta\cos\Big(\frac{\sqrt{5}}{3}s\Big)+\frac{2}{3}\rho\sin\theta\sin\Big(\frac{\sqrt{5}}{3}s\Big),\ \sin\Big(\frac{\sqrt{5}}{3}s\Big)-\rho\cos\theta\sin\Big(\frac{\sqrt{5}}{3}s\Big)-\frac{2}{3}\rho\sin\theta\cos\Big(\frac{\sqrt{5}}{3}s\Big),\ \frac{2s}{3}+\frac{\sqrt{5}}{3}\rho\sin\theta\right),$$
    where
    $$\rho=1-\frac{K^2+1}{K+1},\qquad K=\frac{-\frac{5}{9}\cos\theta}{1-\frac{5}{9}\cos\theta}.$$

    The focal surface Υ*(s,θ) is displayed in Figure 2a, and the generalized focal surface ˜Υ(s,θ) is displayed in Figure 2b.

    Figure 2.  (a) The focal surface Υ*(s,θ) with r(s)=1; (b) the generalized focal surface ˜Υ(s,θ).

    Finally, in light of the aforementioned calculations, we can summarize the above results as follows:

    ● While the tubular surface Υ(s,θ) is not flat, the focal surface Υ*(s,θ) is flat.

    ● Υ(s,θ) is minimal only under the condition r=1/(2κcosθ) of Proposition 3.2, whereas the focal surface Υ*(s,θ) is never minimal.

    ● The s-parameter curves of Υ(s,θ) are asymptotic under the condition of Theorem 3.1, while the s-parameter curves of Υ*(s,θ) are never asymptotic.

    ● The θ-parameter curves of the tubular surface Υ(s,θ) are not asymptotic, but the θ-parameter curves of the focal surface Υ*(s,θ) are asymptotic.

    ● The θ-parameter curves of Υ(s,θ) are geodesics, and its s-parameter curves are geodesics under the condition of Theorem 3.2; for the focal surface, the s-parameter curves of Υ*(s,θ) are geodesics if and only if 2κ′/κ=τ′/τ, whereas the θ-parameter curves are never geodesics.

    ● Both Υ(s,θ) and the focal surface Υ*(s,θ) are Weingarten surfaces.

    ● Υ(s,θ) can be a linear Weingarten surface, while the focal surface Υ*(s,θ) cannot.

    In this paper, tubular surfaces and their focal surfaces have been studied in E³. Some characteristics of tubular surfaces, such as minimality, flatness, and the Weingarten and linear Weingarten conditions, have been presented. Afterward, the focal surfaces of tubular surfaces have been obtained with the modified orthogonal frame. Similar properties have been investigated for the focal surfaces; that is, it has been shown that the focal surfaces are flat and Weingarten, but they are neither minimal nor linear Weingarten surfaces. In addition, asymptotic and geodesic curves of the tubular and focal surfaces have been investigated. Finally, to confirm our results, a computational example has been given and plotted.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to thank the referees for their valuable remarks and suggestions, which helped improve this paper. Additionally, the first author would like to thank the Islamic University of Madinah.

    The authors declare that there are no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).