Research article

Enhancing autonomous vehicle safety in rain: a data centric approach for clear vision


  • Autonomous vehicles (AV) face significant challenges in navigating adverse weather, particularly rain, due to the visual impairment of camera-based systems. In this study, we leveraged contemporary deep learning techniques to mitigate these challenges, aiming to develop a vision model that processes live vehicle camera feeds to eliminate rain-induced visual hindrances, yielding visuals closely resembling clear, rain-free scenes. Using the Car Learning to Act (CARLA) simulation environment, we generated a comprehensive dataset of clear and rainy images for model training and testing. In our model, we employed a classic encoder-decoder architecture with skip connections and concatenation operations. It was trained using novel batching schemes designed to effectively distinguish high-frequency rain patterns from low-frequency scene features across successive image frames. To evaluate the model's performance, we integrated it with a steering module that processes front-view images as input. The results demonstrated notable improvements in steering accuracy, underscoring the model's potential to enhance navigation safety and reliability in rainy weather conditions.

    Citation: Mark A. Seferian, Jidong J. Yang. Enhancing autonomous vehicle safety in rain: a data centric approach for clear vision[J]. Applied Computing and Intelligence, 2024, 4(2): 282-299. doi: 10.3934/aci.2024017
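
    The abstract above describes a vision model built on an encoder-decoder backbone with skip connections and concatenation operations. The sketch below is purely illustrative and is not the authors' implementation: PyTorch is assumed, and the depth, channel widths, and output activation are invented for the example.

```python
# Minimal illustrative encoder-decoder with skip connections (concatenation),
# in the spirit of the architecture summarized in the abstract.
import torch
import torch.nn as nn

class DerainSketch(nn.Module):
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(base * 4, base * 2, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(base, in_ch, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                                       # full resolution
        e2 = self.enc2(e1)                                      # 1/2 resolution
        e3 = self.enc3(e2)                                      # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))    # skip + concatenation
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))    # skip + concatenation
        return torch.sigmoid(self.out(d1))                      # estimated rain-free frame

# x = torch.rand(1, 3, 256, 256); y = DerainSketch()(x)         # y.shape == x.shape
```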




    The Hermite-Hadamard inequality, one of the basic inequalities of inequality theory, provides estimates of the mean value of a convex function and has many applications in statistics and optimization theory.

    Assume that $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$ is a convex mapping defined on an interval $I$ of $\mathbb{R}$ and $a,b\in I$ with $a<b$. The following statement

    $$f\left(\frac{a+b}{2}\right)\leq\frac{1}{b-a}\int_{a}^{b}f(x)\,dx\leq\frac{f(a)+f(b)}{2}$$

    holds and is known as the Hermite-Hadamard inequality. Both inequalities hold in the reversed direction if $f$ is concave.
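
    For instance (an illustrative instance, not from the original text), for the convex function $f(x)=x^{2}$ on $[1,3]$ the three quantities are

    $$f(2)=4\leq\frac{1}{3-1}\int_{1}^{3}x^{2}\,dx=\frac{13}{3}\leq\frac{f(1)+f(3)}{2}=5.$$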

    The concept of convex function, which is used in many classical and analytical inequalities, especially the Hermite-Hadamard inequality, has attracted the attention of many researchers (see [4,7,8,9]), and its range of applications has expanded with the construction of new classes of convex functions. The introduction of this useful class of functions for functions of two variables gave a new direction to convex analysis. In this sense, in [6], Dragomir introduced an extension of the concept of convex function, which is used in many inequalities in theory and has applications in different fields of the applied sciences and convex programming.

    Definition 1.1. Let us consider the bidimensional interval $\Delta=[a,b]\times[c,d]$ in $\mathbb{R}^{2}$ with $a<b$, $c<d$. A function $f:\Delta\rightarrow\mathbb{R}$ will be called convex on the co-ordinates if the partial mappings $f_{y}:[a,b]\rightarrow\mathbb{R}$, $f_{y}(u)=f(u,y)$, and $f_{x}:[c,d]\rightarrow\mathbb{R}$, $f_{x}(v)=f(x,v)$, are convex where defined for all $y\in[c,d]$ and $x\in[a,b]$. Recall that the mapping $f:\Delta\rightarrow\mathbb{R}$ is convex on $\Delta$ if the following inequality holds:

    $$f(\lambda x+(1-\lambda)z,\lambda y+(1-\lambda)w)\leq\lambda f(x,y)+(1-\lambda)f(z,w)$$

    for all $(x,y),(z,w)\in\Delta$ and $\lambda\in[0,1]$.

    Transferring the concept of convexity to the co-ordinates inspired a co-ordinate version of the Hermite-Hadamard inequality, and Dragomir proved this inequality as follows.

    Theorem 1.1. (See [6]) Suppose that $f:\Delta=[a,b]\times[c,d]\rightarrow\mathbb{R}$ is convex on the co-ordinates on $\Delta$. Then one has the inequalities:

    $$\begin{aligned}
    f\left(\frac{a+b}{2},\frac{c+d}{2}\right)&\leq\frac{1}{2}\left[\frac{1}{b-a}\int_{a}^{b}f\left(x,\frac{c+d}{2}\right)dx+\frac{1}{d-c}\int_{c}^{d}f\left(\frac{a+b}{2},y\right)dy\right]\\
    &\leq\frac{1}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d}f(x,y)\,dy\,dx\\
    &\leq\frac{1}{4}\left[\frac{1}{b-a}\int_{a}^{b}f(x,c)\,dx+\frac{1}{b-a}\int_{a}^{b}f(x,d)\,dx+\frac{1}{d-c}\int_{c}^{d}f(a,y)\,dy+\frac{1}{d-c}\int_{c}^{d}f(b,y)\,dy\right]\\
    &\leq\frac{f(a,c)+f(a,d)+f(b,c)+f(b,d)}{4}.
    \end{aligned}\tag{1.1}$$

    The above inequalities are sharp.
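
    As a quick numerical sanity check of the chain (1.1) (illustrative only, not part of the original text), the following Python snippet, assuming SciPy is available, verifies the ordering for the co-ordinated convex function $f(x,y)=x^{2}+y^{2}$ on $[0,1]\times[0,2]$:

```python
# Illustrative check of the coordinated Hermite-Hadamard chain (1.1); the
# function f(x, y) = x**2 + y**2 is convex on the co-ordinates.
from scipy.integrate import quad, dblquad

f = lambda x, y: x**2 + y**2
a, b, c, d = 0.0, 1.0, 0.0, 2.0
t1 = f((a + b) / 2, (c + d) / 2)
t2 = 0.5 * (quad(lambda x: f(x, (c + d) / 2), a, b)[0] / (b - a)
            + quad(lambda y: f((a + b) / 2, y), c, d)[0] / (d - c))
t3 = dblquad(lambda y, x: f(x, y), a, b, c, d)[0] / ((b - a) * (d - c))
t4 = 0.25 * (quad(lambda x: f(x, c), a, b)[0] / (b - a)
             + quad(lambda x: f(x, d), a, b)[0] / (b - a)
             + quad(lambda y: f(a, y), c, d)[0] / (d - c)
             + quad(lambda y: f(b, y), c, d)[0] / (d - c))
t5 = (f(a, c) + f(a, d) + f(b, c) + f(b, d)) / 4
assert t1 <= t2 <= t3 <= t4 <= t5   # 1.25 <= 1.458... <= 1.666... <= 2.083... <= 2.5
```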

    For further information about convexity and inequalities established on the co-ordinates, see the papers [1,2,5,10,11,12,13,14,15].

    One of the trending problems of recent times is to present different types of convex functions and to derive new inequalities for these function classes. We now continue by recalling the concept of the $n$-polynomial convex function.

    Definition 1.2. (See [16]) Let $n\in\mathbb{N}$. A non-negative function $f:I\subseteq\mathbb{R}\rightarrow\mathbb{R}$ is called an $n$-polynomial convex function if for every $x,y\in I$ and $t\in[0,1]$,

    $$f(tx+(1-t)y)\leq\frac{1}{n}\sum_{s=1}^{n}\left(1-(1-t)^{s}\right)f(x)+\frac{1}{n}\sum_{s=1}^{n}\left(1-t^{s}\right)f(y).$$

    We will denote by $POLC(I)$ the class of all $n$-polynomial convex functions on the interval $I$.
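
    For experimentation (an illustrative check, not from the source), the defining inequality can be tested numerically on a grid of $t$ values; the snippet below, assuming NumPy, does so for $f(x)=e^{x}$, which is non-negative and convex and therefore $n$-polynomial convex (the polynomial weights dominate $t$ and $1-t$).

```python
# Grid check of Definition 1.2 for a non-negative convex function.
import numpy as np

def is_npoly_convex_on_grid(f, x, y, n, ts):
    for t in ts:
        w1 = np.mean([1 - (1 - t)**s for s in range(1, n + 1)])  # weight of f(x)
        w2 = np.mean([1 - t**s for s in range(1, n + 1)])        # weight of f(y)
        if f(t * x + (1 - t) * y) > w1 * f(x) + w2 * f(y) + 1e-12:
            return False
    return True

print(is_npoly_convex_on_grid(np.exp, 0.0, 2.0, 4, np.linspace(0, 1, 101)))  # True
```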

    In the same paper, the authors proved some new Hadamard-type inequalities; we recall the following one:

    Theorem 1.2. (See [16]) Let $f:[a,b]\rightarrow\mathbb{R}$ be an $n$-polynomial convex function. If $a<b$ and $f\in L[a,b]$, then the following Hermite-Hadamard type inequalities hold:

    $$\frac{1}{2}\left(\frac{n}{n+2^{-n}-1}\right)f\left(\frac{a+b}{2}\right)\leq\frac{1}{b-a}\int_{a}^{b}f(x)\,dx\leq\frac{f(a)+f(b)}{n}\sum_{s=1}^{n}\frac{s}{s+1}.\tag{1.2}$$
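
    As an illustrative numerical instance (not taken from the source), let $f(x)=x^{2}$, $[a,b]=[1,3]$ and $n=2$ in (1.2); then $\frac{n}{n+2^{-n}-1}=\frac{8}{5}$ and

    $$\frac{1}{2}\cdot\frac{8}{5}\,f(2)=\frac{16}{5}\leq\frac{1}{2}\int_{1}^{3}x^{2}\,dx=\frac{13}{3}\leq\frac{f(1)+f(3)}{2}\left(\frac{1}{2}+\frac{2}{3}\right)=\frac{35}{6}.$$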

    Since some classes of convex functions can be described in terms of means, averages have an important place in convex function theory. In [3], Awan et al. gave the harmonic version of $n$-polynomial convexity, which is defined via the arithmetic mean, as follows. They also proved several new integral inequalities of Hadamard type.

    Definition 1.3. (See [3]) Let $n\in\mathbb{N}$ and $H\subseteq(0,\infty)$ be an interval. Then a non-negative real-valued function $f:H\rightarrow[0,\infty)$ is said to be an $n$-polynomial harmonically convex function if

    $$f\left(\frac{xy}{tx+(1-t)y}\right)\leq\frac{1}{n}\sum_{s=1}^{n}\left(1-(1-t)^{s}\right)f(y)+\frac{1}{n}\sum_{s=1}^{n}\left(1-t^{s}\right)f(x)$$

    for all $x,y\in H$ and $t\in[0,1]$.

    Theorem 1.3. (See [3]) Let $f:[a,b]\subseteq(0,\infty)\rightarrow[0,\infty)$ be an $n$-polynomial harmonically convex function. Then one has

    $$\frac{1}{2}\left(\frac{n}{n+2^{-n}-1}\right)f\left(\frac{2ab}{a+b}\right)\leq\frac{ab}{b-a}\int_{a}^{b}\frac{f(x)}{x^{2}}\,dx\leq\frac{f(a)+f(b)}{n}\sum_{s=1}^{n}\frac{s}{s+1}\tag{1.3}$$

    if $f\in L[a,b]$.
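
    As an illustrative numerical instance (not taken from the source), let $f(x)=x^{2}$ on $[a,b]=[1,2]$ with $n=2$; the function is non-negative and harmonically convex, hence $n$-polynomial harmonically convex, and (1.3) reads

    $$\frac{1}{2}\cdot\frac{8}{5}\,f\!\left(\frac{4}{3}\right)=\frac{64}{45}\leq\frac{1\cdot2}{2-1}\int_{1}^{2}\frac{x^{2}}{x^{2}}\,dx=2\leq\frac{f(1)+f(2)}{2}\left(\frac{1}{2}+\frac{2}{3}\right)=\frac{35}{12}.$$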

    The main motivation of this study is to give a new modification, $(m,n)$-harmonically polynomial convex functions on the co-ordinates, and to obtain Hadamard-type inequalities via double integrals and by using the Hölder inequality, along with a few properties of this new class of functions.

    In this section, we will give a new class of convexity, called $(m,n)$-harmonically polynomial convex functions on the co-ordinates, as follows.

    Definition 2.1. Let $m,n\in\mathbb{N}$ and let $\Delta=[a,b]\times[c,d]$ be a bidimensional interval. Then a non-negative real-valued function $f:\Delta\rightarrow\mathbb{R}$ is said to be an $(m,n)$-harmonically polynomial convex function on the co-ordinates on $\Delta$ if the following inequality holds:

    $$\begin{aligned}
    f\left(\frac{xz}{tz+(1-t)x},\frac{yw}{sw+(1-s)y}\right)&\leq\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)f(x,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)f(x,w)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)f(z,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)f(z,w)
    \end{aligned}$$

    where $(x,y),(x,w),(z,y),(z,w)\in\Delta$ and $t,s\in[0,1]$.
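
    As a quick numerical illustration of Definition 2.1 (not taken from the source), the snippet below, assuming NumPy, checks the inequality on a $(t,s)$ grid for $f(x,y)=xy$ on $[1,2]\times[1,2]$; this function is non-negative and harmonically convex in each variable, so it satisfies the definition.

```python
# Grid check of Definition 2.1 with (n, m) = (3, 2) for f(x, y) = x*y.
import numpy as np

def w_plus(t, n):   # (1/n) * sum_{i=1}^{n} (1 - (1 - t)**i)
    return np.mean([1 - (1 - t)**i for i in range(1, n + 1)])

def w_minus(t, n):  # (1/n) * sum_{i=1}^{n} (1 - t**i)
    return np.mean([1 - t**i for i in range(1, n + 1)])

f = lambda x, y: x * y
n, m = 3, 2
x, y, z, w = 1.0, 1.5, 2.0, 1.2                     # two points (x, y), (z, w)
for t in np.linspace(0, 1, 21):
    for s in np.linspace(0, 1, 21):
        lhs = f(x * z / (t * z + (1 - t) * x), y * w / (s * w + (1 - s) * y))
        rhs = (w_plus(t, n) * w_plus(s, m) * f(x, y)
               + w_plus(t, n) * w_minus(s, m) * f(x, w)
               + w_minus(t, n) * w_plus(s, m) * f(z, y)
               + w_minus(t, n) * w_minus(s, m) * f(z, w))
        assert lhs <= rhs + 1e-12
print("Definition 2.1 holds on the sampled grid")
```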

    Remark 2.1. If one chooses $m=n=1$, it is easy to see that the definition of $(m,n)$-harmonically polynomial convex functions reduces to the class of harmonically convex functions.

    Remark 2.2. The $(2,2)$-harmonically polynomial convex functions satisfy the following inequality:

    $$\begin{aligned}
    f\left(\frac{xz}{tz+(1-t)x},\frac{yw}{sw+(1-s)y}\right)&\leq\frac{3t-t^{2}}{2}\cdot\frac{3s-s^{2}}{2}\,f(x,y)+\frac{3t-t^{2}}{2}\cdot\frac{2-s-s^{2}}{2}\,f(x,w)\\
    &\quad+\frac{2-t-t^{2}}{2}\cdot\frac{3s-s^{2}}{2}\,f(z,y)+\frac{2-t-t^{2}}{2}\cdot\frac{2-s-s^{2}}{2}\,f(z,w)
    \end{aligned}$$

    where $(x,y),(x,w),(z,y),(z,w)\in\Delta$ and $t,s\in[0,1]$.
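
    Indeed, for $m=n=2$ the averaged weights of Definition 2.1 reduce to the coefficients above, since

    $$\frac{1}{2}\left[\left(1-(1-t)\right)+\left(1-(1-t)^{2}\right)\right]=\frac{3t-t^{2}}{2},\qquad\frac{1}{2}\left[(1-t)+(1-t^{2})\right]=\frac{2-t-t^{2}}{2},$$

    and similarly with $s$ in place of $t$.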

    Theorem 2.1. Assume that $b>a>0$, $d>c>0$, and let $f_{\alpha}:[a,b]\times[c,d]\rightarrow[0,\infty)$ be a family of $(m,n)$-harmonically polynomial convex functions on $\Delta$ and $f(u,v)=\sup_{\alpha}f_{\alpha}(u,v)$. Then $f$ is an $(m,n)$-harmonically polynomial convex function on the co-ordinates if $K=\{(x,y)\in[a,b]\times[c,d]:f(x,y)<\infty\}$ is an interval.

    Proof. For $t,s\in[0,1]$ and $(x,y),(x,w),(z,y),(z,w)\in\Delta$, we can write

    $$\begin{aligned}
    f\left(\frac{xz}{tz+(1-t)x},\frac{yw}{sw+(1-s)y}\right)&=\sup_{\alpha}f_{\alpha}\left(\frac{xz}{tz+(1-t)x},\frac{yw}{sw+(1-s)y}\right)\\
    &\leq\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)\sup_{\alpha}f_{\alpha}(x,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)\sup_{\alpha}f_{\alpha}(x,w)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)\sup_{\alpha}f_{\alpha}(z,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)\sup_{\alpha}f_{\alpha}(z,w)\\
    &=\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)f(x,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)f(x,w)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)f(z,y)\\
    &\quad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)f(z,w)
    \end{aligned}$$

    which completes the proof.

    Lemma 2.1. Every $(m,n)$-harmonically polynomial convex function on $\Delta$ is an $(m,n)$-harmonically polynomial convex function on the co-ordinates.

    Proof. Suppose that $f:\Delta\rightarrow\mathbb{R}$ is an $(m,n)$-harmonically polynomial convex function on $\Delta$, and consider the partial mapping $f_{x}:[c,d]\rightarrow\mathbb{R}$, $f_{x}(v)=f(x,v)$. We can write

    $$\begin{aligned}
    f_{x}\left(\frac{vw}{tw+(1-t)v}\right)&=f\left(x,\frac{vw}{tw+(1-t)v}\right)=f\left(\frac{x^{2}}{tx+(1-t)x},\frac{vw}{tw+(1-t)v}\right)\\
    &\leq\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)f(x,v)+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)f(x,w)\\
    &=\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)f_{x}(v)+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)f_{x}(w)
    \end{aligned}$$

    for all $t\in[0,1]$ and $v,w\in[c,d]$. This shows the $(m,n)$-harmonically polynomial convexity of $f_{x}$. By a similar argument, one can see the $(m,n)$-harmonically polynomial convexity of $f_{y}$.

    Remark 2.3. Every $(m,n)$-harmonically polynomial convex function on the co-ordinates may not be an $(m,n)$-harmonically polynomial convex function on $\Delta$.

    A simple verification of the remark can be seen in the following example.

    Example 2.1. Let us consider $f:[1,3]\times[2,3]\rightarrow[0,\infty)$ given by $f(x,y)=(x-1)(y-2)$. It is clear that $f$ is harmonically polynomial convex on the co-ordinates but is not harmonically polynomial convex on $[1,3]\times[2,3]$, because if we choose $(1,3),(2,3)\in[1,3]\times[2,3]$ and $t\in[0,1]$, we have

    $$f\left(\frac{2}{2t+(1-t)},\frac{9}{3t+3(1-t)}\right)=f\left(\frac{2}{1+t},3\right)=\frac{1-t}{1+t}$$
    and
    $$\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)f(1,3)+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)f(2,3)=\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)f(2,3).$$

    Then, it is easy to see that

    $$f\left(\frac{2}{2t+(1-t)},\frac{9}{3t+3(1-t)}\right)>\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)f(1,3)+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)f(2,3).$$

    This shows that f is not harmonically polynomial convex on [1,3]×[2,3].

    Now, we establish the associated Hadamard inequality for $(m,n)$-harmonically polynomial convex functions on the co-ordinates.

    Theorem 2.2. Suppose that $f:\Delta\rightarrow\mathbb{R}$ is $(m,n)$-harmonically polynomial convex on the co-ordinates on $\Delta$. Then the following inequalities hold:

    $$\begin{aligned}
    &\frac{1}{4}\left(\frac{m}{m+2^{-m}-1}\right)\left(\frac{n}{n+2^{-n}-1}\right)f\left(\frac{2ab}{a+b},\frac{2cd}{c+d}\right)\\
    &\leq\frac{1}{4}\left[\left(\frac{m}{m+2^{-m}-1}\right)\frac{ab}{b-a}\int_{a}^{b}\frac{f\left(x,\frac{2cd}{c+d}\right)}{x^{2}}\,dx+\left(\frac{n}{n+2^{-n}-1}\right)\frac{cd}{d-c}\int_{c}^{d}\frac{f\left(\frac{2ab}{a+b},y\right)}{y^{2}}\,dy\right]\\
    &\leq\frac{abcd}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d}\frac{f(x,y)}{x^{2}y^{2}}\,dy\,dx\\
    &\leq\frac{1}{2}\left[\frac{1}{n}\left(\frac{cd}{d-c}\int_{c}^{d}\frac{f(a,y)}{y^{2}}\,dy+\frac{cd}{d-c}\int_{c}^{d}\frac{f(b,y)}{y^{2}}\,dy\right)\sum_{s=1}^{n}\frac{s}{s+1}+\frac{1}{m}\left(\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,c)}{x^{2}}\,dx+\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,d)}{x^{2}}\,dx\right)\sum_{t=1}^{m}\frac{t}{t+1}\right]\\
    &\leq\left(\frac{f(a,c)+f(a,d)+f(b,c)+f(b,d)}{nm}\right)\left(\sum_{s=1}^{n}\frac{s}{s+1}\right)\left(\sum_{t=1}^{m}\frac{t}{t+1}\right).
    \end{aligned}\tag{2.1}$$

    Proof. Since $f$ is an $(m,n)$-harmonically polynomial convex function on the co-ordinates, the partial mappings $h_{x}:[c,d]\rightarrow\mathbb{R}$, $h_{x}(y)=f(x,y)$, and $h_{y}:[a,b]\rightarrow\mathbb{R}$, $h_{y}(x)=f(x,y)$, are $m$-polynomial and $n$-polynomial harmonically convex functions, respectively. Therefore, by using the inequality (1.3) for the partial mappings, we can write

    $$\frac{1}{2}\left(\frac{m}{m+2^{-m}-1}\right)h_{x}\left(\frac{2cd}{c+d}\right)\leq\frac{cd}{d-c}\int_{c}^{d}\frac{h_{x}(y)}{y^{2}}\,dy\leq\frac{h_{x}(c)+h_{x}(d)}{m}\sum_{s=1}^{m}\frac{s}{s+1}\tag{2.2}$$

    namely

    $$\frac{1}{2}\left(\frac{m}{m+2^{-m}-1}\right)f\left(x,\frac{2cd}{c+d}\right)\leq\frac{cd}{d-c}\int_{c}^{d}\frac{f(x,y)}{y^{2}}\,dy\leq\frac{f(x,c)+f(x,d)}{m}\sum_{s=1}^{m}\frac{s}{s+1}.\tag{2.3}$$

    Dividing both sides of (2.3) by $\frac{(b-a)x^{2}}{ab}$ and integrating the resulting inequality over $[a,b]$, we have

    $$\frac{ab}{2(b-a)}\left(\frac{m}{m+2^{-m}-1}\right)\int_{a}^{b}\frac{f\left(x,\frac{2cd}{c+d}\right)}{x^{2}}\,dx\leq\frac{abcd}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d}\frac{f(x,y)}{x^{2}y^{2}}\,dy\,dx\leq\frac{\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,c)}{x^{2}}\,dx+\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,d)}{x^{2}}\,dx}{m}\sum_{s=1}^{m}\frac{s}{s+1}.\tag{2.4}$$

    By a similar argument, now applying the inequality (1.3) to the mapping $h_{y}$, which is $n$-polynomial harmonically convex, and then dividing both sides by $\frac{(d-c)y^{2}}{cd}$ and integrating over $[c,d]$, we get

    $$\frac{cd}{2(d-c)}\left(\frac{n}{n+2^{-n}-1}\right)\int_{c}^{d}\frac{f\left(\frac{2ab}{a+b},y\right)}{y^{2}}\,dy\leq\frac{abcd}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d}\frac{f(x,y)}{x^{2}y^{2}}\,dy\,dx\leq\frac{\frac{cd}{d-c}\int_{c}^{d}\frac{f(a,y)}{y^{2}}\,dy+\frac{cd}{d-c}\int_{c}^{d}\frac{f(b,y)}{y^{2}}\,dy}{n}\sum_{t=1}^{n}\frac{t}{t+1}.\tag{2.5}$$

    By summing the inequalities (2.4) and (2.5) side by side and dividing the resulting inequality by 2, we obtain the second and third inequalities of (2.1). By the inequality (1.3), we also have:

    $$\frac{1}{2}\left(\frac{m}{m+2^{-m}-1}\right)f\left(\frac{2ab}{a+b},\frac{2cd}{c+d}\right)\leq\frac{cd}{d-c}\int_{c}^{d}\frac{f\left(\frac{2ab}{a+b},y\right)}{y^{2}}\,dy$$

    and

    $$\frac{1}{2}\left(\frac{n}{n+2^{-n}-1}\right)f\left(\frac{2ab}{a+b},\frac{2cd}{c+d}\right)\leq\frac{ab}{b-a}\int_{a}^{b}\frac{f\left(x,\frac{2cd}{c+d}\right)}{x^{2}}\,dx$$

    which give the first inequality of (2.1) upon multiplying by $\frac{1}{2}\left(\frac{n}{n+2^{-n}-1}\right)$ and $\frac{1}{2}\left(\frac{m}{m+2^{-m}-1}\right)$, respectively, adding side by side, and dividing by 2. Finally, by using the inequality (1.3), we obtain

    $$\frac{cd}{d-c}\int_{c}^{d}\frac{f(a,y)}{y^{2}}\,dy\leq\frac{f(a,c)+f(a,d)}{m}\sum_{s=1}^{m}\frac{s}{s+1},$$
    $$\frac{cd}{d-c}\int_{c}^{d}\frac{f(b,y)}{y^{2}}\,dy\leq\frac{f(b,c)+f(b,d)}{m}\sum_{s=1}^{m}\frac{s}{s+1},$$
    $$\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,c)}{x^{2}}\,dx\leq\frac{f(a,c)+f(b,c)}{n}\sum_{t=1}^{n}\frac{t}{t+1},$$

    and

    $$\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,d)}{x^{2}}\,dx\leq\frac{f(a,d)+f(b,d)}{n}\sum_{t=1}^{n}\frac{t}{t+1}$$

    which give, by addition, the last inequality of (2.1).
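
    As a numerical illustration of (2.1) (illustrative only, not from the paper), take $f(x,y)=xy$ on $[1,2]\times[1,2]$ with $m=n=2$; the partial mappings are non-negative and harmonically convex, so $f$ is $(m,n)$-harmonically polynomial convex on the co-ordinates, and the five quantities in (2.1) can be compared with SciPy:

```python
# Numerical check of the chain (2.1) for f(x, y) = x*y, m = n = 2.
from scipy.integrate import quad, dblquad

f = lambda x, y: x * y
a, b, c, d, n, m = 1.0, 2.0, 1.0, 2.0, 2, 2
Cn, Cm = n / (n + 2**(-n) - 1), m / (m + 2**(-m) - 1)
Sn = sum(s / (s + 1) for s in range(1, n + 1))
Sm = sum(s / (s + 1) for s in range(1, m + 1))
ha, hc = 2 * a * b / (a + b), 2 * c * d / (c + d)      # harmonic midpoints

t1 = 0.25 * Cm * Cn * f(ha, hc)
t2 = 0.25 * (Cm * a * b / (b - a) * quad(lambda x: f(x, hc) / x**2, a, b)[0]
             + Cn * c * d / (d - c) * quad(lambda y: f(ha, y) / y**2, c, d)[0])
t3 = (a * b * c * d / ((b - a) * (d - c))
      * dblquad(lambda y, x: f(x, y) / (x**2 * y**2), a, b, c, d)[0])
t4 = 0.5 * (Sn / n * c * d / (d - c) * (quad(lambda y: f(a, y) / y**2, c, d)[0]
                                        + quad(lambda y: f(b, y) / y**2, c, d)[0])
            + Sm / m * a * b / (b - a) * (quad(lambda x: f(x, c) / x**2, a, b)[0]
                                          + quad(lambda x: f(x, d) / x**2, a, b)[0]))
t5 = (f(a, c) + f(a, d) + f(b, c) + f(b, d)) / (n * m) * Sn * Sm
assert t1 <= t2 <= t3 <= t4 <= t5
print(t1, t2, t3, t4, t5)
```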

    In order to prove our main findings, we need the following identity.

    Lemma 2.2. Assume that $f:\Delta=[a,b]\times[c,d]\subset(0,\infty)\times(0,\infty)\rightarrow\mathbb{R}$ is a partially differentiable mapping on $\Delta$ and $\frac{\partial^{2}f}{\partial t\,\partial s}\in L(\Delta)$. Then one has the following equality:

    $$\begin{aligned}
    \Phi(f)&=\frac{f(a,c)+f(b,c)+f(a,d)+f(b,d)}{4}+\frac{abcd}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d}\frac{f(x,y)}{x^{2}y^{2}}\,dy\,dx\\
    &\quad-\frac{1}{2}\left[\frac{cd}{d-c}\int_{c}^{d}\frac{f(a,y)}{y^{2}}\,dy+\frac{cd}{d-c}\int_{c}^{d}\frac{f(b,y)}{y^{2}}\,dy+\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,c)}{x^{2}}\,dx+\frac{ab}{b-a}\int_{a}^{b}\frac{f(x,d)}{x^{2}}\,dx\right]\\
    &=\frac{abcd(b-a)(d-c)}{4}\int_{0}^{1}\int_{0}^{1}\frac{(1-2t)(1-2s)}{(A_{t}B_{s})^{2}}\frac{\partial^{2}f}{\partial t\,\partial s}\left(\frac{ab}{A_{t}},\frac{cd}{B_{s}}\right)ds\,dt
    \end{aligned}$$

    where $A_{t}=tb+(1-t)a$ and $B_{s}=sd+(1-s)c$.
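
    The identity can be verified numerically for a concrete smooth function (an illustrative check, not part of the proof); the snippet below, assuming SciPy, compares both sides for $f(x,y)=x^{2}y^{3}$ on $[1,2]\times[1,3]$:

```python
# Numerical check of the equality in Lemma 2.2 for f(x, y) = x**2 * y**3.
from scipy.integrate import quad, dblquad

a, b, c, d = 1.0, 2.0, 1.0, 3.0
f = lambda x, y: x**2 * y**3
fxy = lambda x, y: 6 * x * y**2                     # mixed partial d^2 f / dx dy

lhs = ((f(a, c) + f(b, c) + f(a, d) + f(b, d)) / 4
       + a * b * c * d / ((b - a) * (d - c))
       * dblquad(lambda y, x: f(x, y) / (x**2 * y**2), a, b, c, d)[0]
       - 0.5 * (c * d / (d - c) * (quad(lambda y: f(a, y) / y**2, c, d)[0]
                                   + quad(lambda y: f(b, y) / y**2, c, d)[0])
                + a * b / (b - a) * (quad(lambda x: f(x, c) / x**2, a, b)[0]
                                     + quad(lambda x: f(x, d) / x**2, a, b)[0])))

A = lambda t: t * b + (1 - t) * a
B = lambda s: s * d + (1 - s) * c
rhs = (a * b * c * d * (b - a) * (d - c) / 4
       * dblquad(lambda s, t: (1 - 2 * t) * (1 - 2 * s) / (A(t) * B(s))**2
                 * fxy(a * b / A(t), c * d / B(s)), 0, 1, 0, 1)[0])
assert abs(lhs - rhs) < 1e-6, (lhs, rhs)            # both sides equal 4.0 here
```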

    Theorem 2.3. Let $f:\Delta=[a,b]\times[c,d]\subset(0,\infty)\times(0,\infty)\rightarrow\mathbb{R}$ be a partially differentiable mapping on $\Delta$ with $\frac{\partial^{2}f}{\partial t\,\partial s}\in L(\Delta)$. If $\left|\frac{\partial^{2}f}{\partial t\,\partial s}\right|^{q}$ is an $(m,n)$-harmonically polynomial convex function on $\Delta$, then one has the following inequality:

    $$|\Phi(f)|\leq\frac{bd(b-a)(d-c)}{4ac(p+1)^{\frac{2}{p}}}\left[c_{1}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,c)\right|^{q}+c_{2}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,d)\right|^{q}+c_{3}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,c)\right|^{q}+c_{4}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,d)\right|^{q}\right]^{\frac{1}{q}}\tag{2.6}$$

    where

    $$c_{1}=\frac{1}{n}\sum_{i=1}^{n}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{m}\sum_{j=1}^{m}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,1;j+2;1-\frac{c}{d}\right)\right],$$
    $$c_{2}=\frac{1}{n}\sum_{i=1}^{n}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{m}\sum_{j=1}^{m}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,j+1;j+2;1-\frac{c}{d}\right)\right],$$
    $$c_{3}=\frac{1}{n}\sum_{i=1}^{n}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,i+1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{m}\sum_{j=1}^{m}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,1;j+2;1-\frac{c}{d}\right)\right],$$
    $$c_{4}=\frac{1}{n}\sum_{i=1}^{n}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,i+1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{m}\sum_{j=1}^{m}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,j+1;j+2;1-\frac{c}{d}\right)\right],$$

    and $A_{t}=tb+(1-t)a$, $B_{s}=sd+(1-s)c$ for fixed $t,s\in[0,1]$, $p,q>1$ and $p^{-1}+q^{-1}=1$.

    Proof. By using the identity that is given in Lemma 2.2, we can write

    $$|\Phi(f)|\leq\frac{abcd(b-a)(d-c)}{4}\int_{0}^{1}\int_{0}^{1}\frac{|1-2t|\,|1-2s|}{(A_{t}B_{s})^{2}}\left|\frac{\partial^{2}f}{\partial t\,\partial s}\left(\frac{ab}{A_{t}},\frac{cd}{B_{s}}\right)\right|ds\,dt.$$

    By using the well-known Hölder inequality for double integrals and by taking into account the definition of $(m,n)$-harmonically polynomial convex functions, we get

    $$\begin{aligned}
    |\Phi(f)|&\leq\frac{abcd(b-a)(d-c)}{4}\left(\int_{0}^{1}\int_{0}^{1}|1-2t|^{p}|1-2s|^{p}\,dt\,ds\right)^{\frac{1}{p}}\left(\int_{0}^{1}\int_{0}^{1}(A_{t}B_{s})^{-2q}\left|\frac{\partial^{2}f}{\partial t\,\partial s}\left(\frac{ab}{A_{t}},\frac{cd}{B_{s}}\right)\right|^{q}dt\,ds\right)^{\frac{1}{q}}\\
    &\leq\frac{abcd(b-a)(d-c)}{4}\left(\int_{0}^{1}\int_{0}^{1}|1-2t|^{p}|1-2s|^{p}\,dt\,ds\right)^{\frac{1}{p}}\bigg(\int_{0}^{1}\int_{0}^{1}(A_{t}B_{s})^{-2q}\bigg(\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,c)\right|^{q}\\
    &\qquad+\frac{1}{n}\sum_{i=1}^{n}\left(1-(1-t)^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,d)\right|^{q}+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-(1-s)^{j}\right)\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,c)\right|^{q}\\
    &\qquad+\frac{1}{n}\sum_{i=1}^{n}\left(1-t^{i}\right)\frac{1}{m}\sum_{j=1}^{m}\left(1-s^{j}\right)\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,d)\right|^{q}\bigg)dt\,ds\bigg)^{\frac{1}{q}}.
    \end{aligned}$$

    By computing the above integrals, we easily obtain the following:

    $$\int_{0}^{1}\int_{0}^{1}\frac{\left(1-(1-t)^{i}\right)\left(1-(1-s)^{j}\right)}{(A_{t}B_{s})^{2q}}\,dt\,ds=\frac{1}{(ab)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{(cd)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,1;j+2;1-\frac{c}{d}\right)\right],$$
    $$\int_{0}^{1}\int_{0}^{1}\frac{\left(1-(1-t)^{i}\right)\left(1-s^{j}\right)}{(A_{t}B_{s})^{2q}}\,dt\,ds=\frac{1}{(ab)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{(cd)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,j+1;j+2;1-\frac{c}{d}\right)\right],$$
    $$\int_{0}^{1}\int_{0}^{1}\frac{\left(1-t^{i}\right)\left(1-(1-s)^{j}\right)}{(A_{t}B_{s})^{2q}}\,dt\,ds=\frac{1}{(ab)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,i+1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{(cd)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,1;j+2;1-\frac{c}{d}\right)\right],$$

    and

    $$\int_{0}^{1}\int_{0}^{1}\frac{\left(1-t^{i}\right)\left(1-s^{j}\right)}{(A_{t}B_{s})^{2q}}\,dt\,ds=\frac{1}{(ab)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{i+1}\,{}_{2}F_{1}\!\left(2q,i+1;i+2;1-\frac{a}{b}\right)\right]\times\frac{1}{(cd)^{2q}}\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{j+1}\,{}_{2}F_{1}\!\left(2q,j+1;j+2;1-\frac{c}{d}\right)\right]$$

    where ${}_{2}F_{1}$ is the Gauss hypergeometric function defined by

    $${}_{2}F_{1}(a,b;c;z)=\frac{1}{B(b,c-b)}\int_{0}^{1}t^{b-1}(1-t)^{c-b-1}(1-zt)^{-a}\,dt,$$

    for $c>b>0$, $|z|<1$, and the Beta function is defined as $B(x,y)=\int_{0}^{1}t^{x-1}(1-t)^{y-1}\,dt$, $x,y>0$. This completes the proof.
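
    For concrete numerical work, the coefficients $c_{1},\ldots,c_{4}$ can be evaluated with SciPy's Gauss hypergeometric function; the snippet below is only an illustrative translation of the formulas above into code (the parameter values are arbitrary, and the helper names are invented for the sketch):

```python
# Illustrative evaluation of c1..c4 from Theorem 2.3 via scipy.special.hyp2f1.
from scipy.special import hyp2f1

a, b, c, d, q, n, m = 1.0, 2.0, 1.0, 3.0, 2.0, 2, 2
za, zc = 1 - a / b, 1 - c / d                        # arguments 1 - a/b and 1 - c/d

def bracket(z, k, shifted):
    # 2F1(2q, 1; 2; z) - (1/(k+1)) * 2F1(2q, k+1 if shifted else 1; k+2; z)
    second = k + 1 if shifted else 1
    return hyp2f1(2 * q, 1, 2, z) - hyp2f1(2 * q, second, k + 2, z) / (k + 1)

def coeff(shift_x, shift_y):
    sx = sum(bracket(za, i, shift_x) for i in range(1, n + 1)) / n
    sy = sum(bracket(zc, j, shift_y) for j in range(1, m + 1)) / m
    return sx * sy

c1, c2, c3, c4 = coeff(False, False), coeff(False, True), coeff(True, False), coeff(True, True)
print(c1, c2, c3, c4)
```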

    Corollary 2.1. If we set m=n=1 in (2.6), we have the following new inequality.

    $$|\Phi(f)|\leq\frac{bd(b-a)(d-c)}{4ac(p+1)^{\frac{2}{p}}}\left[c_{11}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,c)\right|^{q}+c_{22}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(a,d)\right|^{q}+c_{33}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,c)\right|^{q}+c_{44}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(b,d)\right|^{q}\right]^{\frac{1}{q}}$$

    where

    $$c_{11}=\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,1;3;1-\frac{a}{b}\right)\right]\times\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,1;3;1-\frac{c}{d}\right)\right],$$
    $$c_{22}=\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,1;3;1-\frac{a}{b}\right)\right]\times\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,2;3;1-\frac{c}{d}\right)\right],$$
    $$c_{33}=\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,2;3;1-\frac{a}{b}\right)\right]\times\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,1;3;1-\frac{c}{d}\right)\right],$$
    $$c_{44}=\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{a}{b}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,2;3;1-\frac{a}{b}\right)\right]\times\left[{}_{2}F_{1}\!\left(2q,1;2;1-\frac{c}{d}\right)-\frac{1}{2}\,{}_{2}F_{1}\!\left(2q,2;3;1-\frac{c}{d}\right)\right].$$

    Corollary 2.2. Suppose that all the conditions of Theorem 2.3 hold. If, in addition, $\left|\frac{\partial^{2}f}{\partial t\,\partial s}\right|$ is bounded, i.e.,

    $$\left\|\frac{\partial^{2}f}{\partial t\,\partial s}\right\|_{\infty}=\sup_{(t,s)\in(a,b)\times(c,d)}\left|\frac{\partial^{2}f}{\partial t\,\partial s}(t,s)\right|<\infty,$$

    we get

    $$|\Phi(f)|\leq\frac{bd(b-a)(d-c)}{4ac(p+1)^{\frac{2}{p}}}\left\|\frac{\partial^{2}f}{\partial t\,\partial s}\right\|_{\infty}\left[c_{1}+c_{2}+c_{3}+c_{4}\right]^{\frac{1}{q}}$$

    where $c_{1},c_{2},c_{3},c_{4}$ are as in Theorem 2.3.


    N. Mlaiki and T. Abdeljawad would like to thank Prince Sultan University for funding this work through research group Nonlinear Analysis Methods in Applied Mathematics (NAMAM) group number RG-DES-2017-01-17.

    S. Butt would like to thank H. E. C. Pakistan (project 7906) for their support.

    The authors declare that there are no conflicts of interest in this paper.



    [1] P. Vincent, H. Larochelle, Y. Bengio, P. A. Manzagol, Extracting and composing robust features with denoising autoencoders, Proceedings of the 25th International Conference on Machine Learning, 2008, 1096–1103. https://doi.org/10.1145/1390156.1390294
    [2] A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv: 1511.06434. https://arXiv.org/abs/1511.06434
    [3] O. Ronneberger, P. Fischer, T. Brox, U-net: convolutional networks for biomedical image segmentation, In: Medical image computing and computer-assisted intervention–MICCAI 2015, Cham: Springer, 2015,234–241. https://doi.org/10.1007/978-3-319-24574-4_28
    [4] T. Karras, T. Aila, S. Laine, J. Lehtinen, Progressive growing of gans for improved quality, stability, and variation, arXiv: 1710.10196. https://doi.org/10.48550/arXiv.1710.10196
    [5] A. Brock, J. Donahue, K. Simonyan, Large scale GAN training for high fidelity natural image synthesis, arXiv: 1809.11096. https://doi.org/10.48550/arXiv.1809.11096
    [6] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, et al., Stackgan: text to photo-realistic image synthesis with stacked generative adversarial networks, Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, 5907–5915.
    [7] T. Karras, S. Laine, T. Aila, A style-based generator architecture for generative adversarial networks, Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 4396–4405. https://doi.org/10.1109/CVPR.2019.00453
    [8] W. Xu, C. Long, R. Wang, G. Wang, Drb-gan: a dynamic resblock generative adversarial network for artistic style transfer, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, 6383–6392.
    [9] H. Zhen, Y. Shi, J. Yang, J. Vehni, Co-supervised learning paradigm with conditional generative adversarial networks for sample-efficient classification, Appl. Comput. Intell., 3 (2023), 13–26. https://doi.org/10.3934/aci.2023002 doi: 10.3934/aci.2023002
    [10] S. Motamed, P. Rogalla, F. Khalvati, Data augmentation using generative adversarial networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images, Informatics in Medicine Unlocked, 27 (2021), 100779. https://doi.org/10.1016/j.imu.2021.100779 doi: 10.1016/j.imu.2021.100779
    [11] A. Jadli, M. Hain, A. Chergui, A. Jaize, DCGAN-based data augmentation for document classification, Proceedings of IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), 2020, 1–5. https://doi.org/10.1109/icecocs50124.2020.9314379
    [12] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, et al., Zero-shot text-to-image generation, Proceedings of the 38th International Conference on Machine Learning, 2021, 8821–8831.
    [13] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, B. Ommer, High-resolution image synthesis with latent diffusion models, Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, 10684–10695. https://doi.org/10.1109/CVPR52688.2022.01042
    [14] B. Xia, Y. Zhang, S. Wang, Y. Wang, X. Wu, Y. Tian, et al., Diffir: efficient diffusion model for image restoration, Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), 2023, 13049–13059. https://doi.org/10.1109/ICCV51070.2023.01204
    [15] D. Ren, W. Zuo, Q. Hu, P. Zhu, D. Meng, Progressive image deraining networks: a better and simpler baseline, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 3937–3946. https://doi.org/10.1109/CVPR.2019.00406
    [16] M. Bojarski, D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, et al., End to end learning for self-driving cars, arXiv: 1604.07316. https://doi.org/10.48550/arXiv.1604.07316
    [17] H. Zhang, V. M. Patel, Density-aware single image de-raining using a multi-stream dense network, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, 695–704. https://doi.org/10.1109/CVPR.2018.00079
    [18] W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, S. Yan, Deep joint rain detection and removal from a single image, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 1357–1366. https://doi.org/10.1109/CVPR.2017.183
    [19] X. Fu, J. Huang, X. Ding, Y. Liao, J. Paisley, Clearing the skies: a deep network architecture for single-image rain removal, IEEE Trans. Image Process., 26 (2017), 2944–2956. https://doi.org/10.1109/tip.2017.2691802 doi: 10.1109/tip.2017.2691802
    [20] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, et al., Backpropagation applied to handwritten zip code recognition, Neural Comput., 1 (1989), 541–551. https://doi.org/10.1162/neco.1989.1.4.541 doi: 10.1162/neco.1989.1.4.541
    [21] S. Zamir, A. Arora, S. Khan, M. Hayat, F. Khan, M. Yang, Restormer: efficient transformer for high-resolution image restoration, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, 5728–5739. https://doi.org/10.1109/cvpr52688.2022.00564
    [22] Y. Zhang, D. Li, X. Shi, D. He, K. Song, X. Wang, et al., Kbnet: kernel basis network for image restoration, arXiv: 2303.02881. https://doi.org/10.48550/arXiv.2303.02881
    [23] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, et al., Generative adversarial networks, Commun. ACM, 63 (2020), 139–144. https://doi.org/10.1145/3422622 doi: 10.1145/3422622
    [24] Y. Wei, Z. Zhang, Y. Wang, M. Xu, Y. Yang, S. Yan, et al., Deraincyclegan: rain attentive cyclegan for single image deraining and rainmaking, IEEE Trans. Image Process., 30 (2021), 4788–4801. https://doi.org/10.1109/TIP.2021.3074804 doi: 10.1109/TIP.2021.3074804
    [25] H. Zhang, V. Sindagi, V. M. Patel, Image de-raining using a conditional generative adversarial network, IEEE Trans. Circ. Syst. Vid., 30 (2020), 3943–3956. https://doi.org/10.1109/tcsvt.2019.2920407 doi: 10.1109/tcsvt.2019.2920407
    [26] C. Wang, C. Xu, C. Wang, D. Tao, Perceptual adversarial networks for image-to-image transformation, IEEE Trans. Image Process., 27 (2018), 4066–4079. https://doi.org/10.1109/TIP.2018.2836316 doi: 10.1109/TIP.2018.2836316
    [27] P. Xiang, L. Wang, F. Wu, J. Cheng, M. Zhou, Single-image de-raining with feature-supervised generative adversarial network, IEEE Signal Proc. Let., 26 (2019), 650–654. https://doi.org/10.1109/LSP.2019.2903874 doi: 10.1109/LSP.2019.2903874
    [28] Y. Ren, M. Nie, S. Li, C. Li, Single image de-raining via improved generative adversarial nets, Sensors, 20 (2020), 1591. https://doi.org/10.3390/s20061591 doi: 10.3390/s20061591
    [29] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, V. Koltun, CARLA: an open urban driving simulator, arXiv: 1711.03938. https://doi.org/10.48550/arXiv.1711.03938
    [30] S. Ioffe, C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, Proceedings of the 32nd International Conference on International Conference on Machine Learning, 2015, 448–456.
    [31] J. Springenberg, A. Dosovitskiy, T. Brox, M. A. Riedmiller, Striving for simplicity: the all convolutional net, arXiv: 1412.6806. https://doi.org/10.48550/arXiv.1412.6806
    [32] D. Kingma, J. Ba, Adam: a method for stochastic optimization, arXiv: 1412.6980. https://doi.org/10.48550/arXiv.1412.6980
    [33] J. Ho, A. Jain, P. Abbeel, Denoising diffusion probabilistic models, Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020, 6840–6851.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
