
Attention-guided cross-modal multiple feature aggregation network for RGB-D salient object detection


  • The goal of RGB-D salient object detection is to aggregate the information of the two modalities of RGB and depth to accurately detect and segment salient objects. Existing RGB-D SOD models can extract the multilevel features of a single modality well and can also integrate cross-modal features, but they can rarely handle both at the same time. To tap into and make the most of the correlations of intra- and inter-modality information, in this paper we proposed an attention-guided cross-modal multi-feature aggregation network for RGB-D SOD. Our motivation was that both cross-modal feature fusion and multilevel feature fusion are crucial for the RGB-D SOD task. The main innovation of this work lies in two points: one is the cross-modal pyramid feature interaction (CPFI) module that integrates multilevel features from both RGB and depth modalities in a bottom-up manner, and the other is the cross-modal feature decoder (CMFD) that aggregates the fused features to generate the final saliency map. Extensive experiments on six benchmark datasets showed that the proposed attention-guided cross-modal multiple feature aggregation network (ACFPA-Net) achieved competitive performance over 15 state-of-the-art (SOTA) RGB-D SOD methods, both qualitatively and quantitatively.

    Citation: Bojian Chen, Wenbin Wu, Zhezhou Li, Tengfei Han, Zhuolei Chen, Weihao Zhang. Attention-guided cross-modal multiple feature aggregation network for RGB-D salient object detection, Electronic Research Archive, 2024, 32(1): 643-669. doi: 10.3934/era.2024031




    Recently, using Solodov and Svaiter's projection technique [1], several conjugate gradient methods for solving large-scale unconstrained optimization problems have been extended to solve nonlinear equations with convex constraints (see [2,3,4,5,6,7,8,9] and the references therein). Due to their simplicity, low storage requirements and wide applicability, such methods have been of interest to various research communities [10,11,12,13,14]. As is well known, the Fletcher-Reeves (FR) [15], Conjugate Descent (CD) [16] and Dai-Yuan (DY) [17] conjugate gradient methods have strong convergence properties but, due to jamming, do not perform well in practice. By contrast, the Hestenes-Stiefel (HS) [18], Polak-Ribière-Polyak (PRP) [19,20] and Liu-Storey (LS) [21] conjugate gradient methods do not necessarily converge, but they often work better than FR, CD and DY. In [22], in order to combine the numerical efficiency of the LS method with the strong convergence of the FR method, Djordjević proposed a hybrid LS-FR conjugate gradient method for solving the unconstrained optimization problem. In her work, the conjugate gradient parameter was computed as a convex combination of the LS and FR conjugate gradient parameters, and the hybridization parameter of the convex combination was obtained in such a way that the direction of the proposed method satisfies the condition of the Newton direction while, at the same time, satisfying the famous Dai-Liao conjugacy condition.

    In an attempt to extend the LS-FR method of Djordjević to solve monotone nonlinear equations with convex constraints, Ibrahim et al. [23] proposed a derivative-free hybrid LS-FR conjugate gradient method whose conjugate gradient parameter is computed as a convex combination of derivative-free LS and FR conjugate gradient parameters. The hybridization parameter of the convex combination in their work was obtained so as to satisfy the famous conjugacy condition. Numerical results show that the method is efficient for solving nonlinear monotone equations with convex constraints. It is noteworthy that several conditions had to be imposed on the hybridization parameter used in [23] in order for it to take values within the interval (0,1).

    Our motivation is the following question: can we extend the LS-FR method proposed by Djordjević to construct an efficient hybrid gradient-free projection algorithm in which no condition is imposed on the hybridization parameter, and the hybridization parameter always takes values in the interval (0,1]? In this paper, we give a positive answer to this question. The remainder of the paper is organized as follows. In Section 2, we describe the algorithm and some of its properties. In Section 3, we analyze the global convergence of the method. A numerical example and an application are presented in Sections 4 and 5, respectively.

    Consider the following unconstrained optimization problem

    $$\min_{z \in \mathbb{R}^n} g(z), \tag{2.1}$$

    where $g: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient at $z_k$ is denoted by $f(z_k) := \nabla g(z_k)$. Given any starting point $z_0 \in \mathbb{R}^n$, the algorithm in [22] generates a sequence of approximations $\{z_k\}$ to a minimizer $z^*$ of $g$, in which

    $$z_{k+1} = z_k + t_k j_k, \quad k \ge 0, \tag{2.2}$$

    where $t_k > 0$ is the steplength, which is computed by a certain line search, and $j_k$ is the search direction defined by

    $$j_k = \begin{cases} -f(z_k) + \beta_k j_{k-1}, & \text{if } k > 0,\\ -f(z_k), & \text{if } k = 0, \end{cases} \tag{2.3}$$

    with $\beta_k$ defined by

    $$\beta_k = (1-\theta_k)\,\frac{f(z_k)^T y_{k-1}}{-f(z_{k-1})^T j_{k-1}} + \theta_k\,\frac{\|f(z_k)\|^2}{\|f(z_{k-1})\|^2}, \qquad y_{k-1} = f(z_k) - f(z_{k-1}), \tag{2.4}$$

    where $\theta_k$ is a hybridization parameter chosen so that the direction satisfies the famous Dai-Liao conjugacy condition, that is, for $t > 0$,

    $$j_k^T y_{k-1} = -t\, s_{k-1}^T f(z_k),$$

    where $s_{k-1} = z_k - z_{k-1}$.

    Motivated by (2.3) and (2.4), we propose a gradient-free projection algorithm for solving the following nonlinear equation with convex constraints:

    $$\rho(z) = 0, \quad z \in \Omega, \tag{2.5}$$

    where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set and $\rho: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous mapping. Our proposed gradient-free projection iterative method first generates a trial point, say $c_k$, using the relation

    $$c_k = z_k + t_k j_k, \quad t_k > 0, \tag{2.6}$$

    where the search direction $j_k$ is computed by

    $$j_k = \begin{cases} -\rho(z_k), & \text{if } k = 0,\\ -\pi_k \rho(z_k) + \beta_k w_{k-1}, & \text{if } k > 0, \end{cases} \tag{2.7}$$

    where $\beta_k$ is computed by

    $$\beta_k := (1-\theta_k)\,\frac{\rho(z_k)^T y_{k-1}}{-\rho(z_{k-1})^T j_{k-1}} + \theta_k\,\frac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}, \qquad \theta_k := \frac{\|y_{k-1}\|^2}{y_{k-1}^T \bar{w}_{k-1}},$$
    $$\bar{w}_{k-1} := w_{k-1} + \left(\max\left\{0, \frac{-w_{k-1}^T y_{k-1}}{\|y_{k-1}\|^2}\right\} + 1\right) y_{k-1}, \qquad y_{k-1} := \rho(z_k) - \rho(z_{k-1}), \qquad w_{k-1} := c_{k-1} - z_{k-1},$$

    and $\pi_k$ is chosen to guarantee the descent condition, that is, for $\alpha > 0$,

    $$j_k^T \rho(z_k) \le -\alpha\|\rho(z_k)\|^2. \tag{2.8}$$

    For $k = 0$, (2.8) obviously holds, since $j_0^T \rho(z_0) = -\|\rho(z_0)\|^2$. For $k \ge 1$, we have

    $$\rho(z_k)^T j_k = -\left(\pi_k - \beta_k\,\frac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}\right)\|\rho(z_k)\|^2. \tag{2.9}$$

    To satisfy (2.8), we only need

    $$\pi_k \ge l + \beta_k\,\frac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}, \quad l > 0. \tag{2.10}$$

    In this paper, we choose $\pi_k$ as

    $$\pi_k = l + \beta_k\,\frac{\rho(z_k)^T w_{k-1}}{\|\rho(z_k)\|^2}. \tag{2.11}$$

    Indeed, substituting (2.11) into (2.9) gives $\rho(z_k)^T j_k = -l\|\rho(z_k)\|^2$, so (2.8) holds with $\alpha = \min\{1, l\}$.

    It is important to note that $\theta_k$ has the following property:

    $$y_{k-1}^T \bar{w}_{k-1} \ge \max\{y_{k-1}^T w_{k-1}, \|y_{k-1}\|^2\} \ge \|y_{k-1}\|^2 > 0.$$

    Thus,

    $$\theta_k = \frac{\|y_{k-1}\|^2}{y_{k-1}^T \bar{w}_{k-1}} \in (0,1], \quad \forall k.$$
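    As a quick sanity check, this property can be verified numerically. The following MATLAB snippet is illustrative only, using random stand-ins for $y_{k-1}$ and $w_{k-1}$; it confirms that $y_{k-1}^T \bar{w}_{k-1} \ge \|y_{k-1}\|^2$ and hence $\theta_k \in (0,1]$:

        % Numerical check of the theta_k property (illustrative stand-in data).
        n = 5; rng(1);
        y = randn(n,1); w = randn(n,1);               % stand-ins for y_{k-1}, w_{k-1}
        wbar = w + (max(0, -(w'*y)/(y'*y)) + 1)*y;    % modified vector wbar_{k-1}
        theta = (y'*y)/(y'*wbar);                     % theta_k
        assert(y'*wbar >= y'*y - 1e-12);              % y'*wbar >= ||y||^2
        assert(theta > 0 && theta <= 1 + 1e-12);      % theta_k in (0,1]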

    The definition of $\bar{w}_{k-1}$ is based on the ideas of Li and Fukushima [24,25]. The definition of $\theta_k$ was originally proposed by Birgin and Martínez [26], and a similar idea can be found in [27,28] and other optimization literature. The proposed algorithm is described immediately after recalling the definition of the projection operator.

    Definition 2.1. Let $\Omega \subseteq \mathbb{R}^n$ be a nonempty closed convex set. Then for any $x \in \mathbb{R}^n$, its projection onto $\Omega$, denoted by $P_\Omega[x]$, is defined by

    $$P_\Omega[x] := \arg\min\{\|x - y\| : y \in \Omega\}.$$

    The projection operator $P_\Omega$ has a well-known property, namely that for any $x, y \in \mathbb{R}^n$ the following nonexpansiveness property holds:

    $$\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{2.12}$$
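    For the simple constraint sets used in the experiments later on, the projection has a closed form. A small MATLAB illustration (a sketch, assuming $\Omega = \mathbb{R}^n_+$ or a box; the handle names are ours), with (2.12) checked numerically on random points:

        % Closed-form projections for two simple convex sets.
        proj_pos = @(x) max(x, 0);                   % P_Omega for Omega = R^n_+
        proj_box = @(x, lo, hi) min(max(x, lo), hi); % P_Omega for a box [lo, hi]^n
        % Nonexpansiveness (2.12), checked on random points:
        x = randn(4,1); y = randn(4,1);
        assert(norm(proj_pos(x) - proj_pos(y)) <= norm(x - y) + 1e-12);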

    Algorithm 1:
    Input. Choose an initial point $z_0 \in \Omega$ and initialize the parameters $\tau \in (0,1)$, $\eta \in (0,2)$, $\mathrm{Tol} > 0$, $\kappa > 0$ and $l > 0$. Set $k = 0$.
    Step 0. Compute $\rho(z_k)$. If $\|\rho(z_k)\| \le \mathrm{Tol}$, stop. Otherwise, compute $j_k$ by (2.7).
    Step 1. Determine the steplength $t_k = \max\{\tau^m \mid m = 0, 1, 2, \ldots\}$ such that
    $$-\rho(z_k + \tau^m j_k)^T j_k \ge \kappa \tau^m \|j_k\|^2. \tag{2.13}$$
    Step 2. Compute the trial point $c_k = z_k + t_k j_k$.
    Step 3. If $c_k \in \Omega$ and $\|\rho(c_k)\| \le \mathrm{Tol}$, stop. Otherwise, compute
    $$z_{k+1} = P_\Omega[z_k - \eta \mu_k \rho(c_k)], \tag{2.14}$$
    where
    $$\mu_k = \frac{\rho(c_k)^T (z_k - c_k)}{\|\rho(c_k)\|^2}.$$
    Step 4. Set $k := k + 1$ and go to Step 0.
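    To make the steps above concrete, here is a minimal MATLAB sketch of Algorithm 1. It assumes $\Omega = \mathbb{R}^n_+$-like sets handled via user-supplied projection and membership handles; the function names, the restart when $y_{k-1} \approx 0$, and the lower safeguard on the backtracking loop are our illustrative additions, not part of the algorithm as stated:

        % Minimal sketch of Algorithm 1. rho: handle for rho(z); proj: handle
        % for P_Omega; inOmega: handle testing membership in Omega.
        function [z, res, nit] = algo1_sketch(rho, proj, inOmega, z0, opts)
        % opts fields: tau in (0,1), kappa > 0, eta in (0,2), l > 0, tol, maxit
        z = proj(z0); r = rho(z); nit = 0;
        j = -r;                                        % j_0 = -rho(z_0), cf. (2.7)
        while norm(r) > opts.tol && nit < opts.maxit
            % Step 1: backtracking line search (2.13)
            t = 1;
            while -(rho(z + t*j)' * j) < opts.kappa * t * (j' * j)
                t = opts.tau * t;
                if t < 1e-12, break; end               % safeguard (ours)
            end
            c = z + t*j; rc = rho(c);                  % Step 2: trial point (2.6)
            if inOmega(c) && norm(rc) <= opts.tol      % Step 3: stopping test
                z = c; r = rc; break;
            end
            mu = (rc' * (z - c)) / (rc' * rc);
            zn = proj(z - opts.eta * mu * rc);         % projection step (2.14)
            rn = rho(zn);
            % Step 0 for the next iterate: direction (2.7) with (2.11)
            w = c - z; y = rn - r;                     % w_{k-1}, y_{k-1}
            if y'*y < 1e-16
                j = -rn;                               % restart safeguard (ours)
            else
                wbar = w + (max(0, -(w'*y)/(y'*y)) + 1)*y;
                th   = (y'*y) / (y'*wbar);             % theta_k
                beta = (1-th)*(rn'*y)/(-(r'*j)) + th*(rn'*rn)/(r'*r);
                ppi  = opts.l + beta*(rn'*w)/(rn'*rn); % pi_k from (2.11)
                j = -ppi*rn + beta*w;
            end
            z = zn; r = rn; nit = nit + 1;
        end
        res = norm(r);
        end

    Here $t$ is found by backtracking from 1, which matches $t_k = \max\{\tau^m\}$ in Step 1.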

    In what follows, we assume that ρ satisfies the following assumptions.

    Assumption 1. The solution set of problem (2.5), denoted by $\bar{\Omega}$, is nonempty.

    Assumption 2. The mapping $\rho$ is Lipschitz continuous on $\mathbb{R}^n$; that is, there exists $L > 0$ such that

    $$\|\rho(x) - \rho(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$$

    Assumption 3. The mapping $\rho$ is monotone on $\mathbb{R}^n$; that is, for any $x, y \in \mathbb{R}^n$, it holds that

    $$(\rho(x) - \rho(y))^T (x - y) \ge 0. \tag{3.1}$$

    Lemma 3.1. Suppose that Assumption 1 holds. Then there exists a steplength $t_k$ satisfying the line search (2.13) for every $k \ge 0$.

    Proof. Assume that there exists $k_0 \ge 0$ such that (2.13) fails to hold for every $i \ge 0$; that is,

    $$-\rho(z_{k_0} + \tau^i j_{k_0})^T j_{k_0} < \kappa \tau^i \|j_{k_0}\|^2, \quad \forall i \ge 0.$$

    Applying the continuity of $\rho$ and letting $i \to \infty$ yields

    $$-\rho(z_{k_0})^T j_{k_0} \le 0,$$

    which contradicts (2.8). Hence the lemma is proved.

    Lemma 3.2. Suppose Assumptions 1-3 are satisfied and the sequences $\{z_k\}, \{c_k\}, \{t_k\}, \{j_k\}$ are generated by Algorithm 1. Then

    $$t_k \ge \min\left\{1, \frac{\alpha\tau}{L+\kappa}\,\frac{\|\rho(z_k)\|^2}{\|j_k\|^2}\right\}.$$

    Proof. Note from (2.13) that if $t_k \ne 1$, then $\bar{t}_k = \tau^{-1}t_k$ does not satisfy (2.13); that is,

    $$-\rho(z_k + \tau^{-1}t_k j_k)^T j_k < \kappa\tau^{-1}t_k\|j_k\|^2. \tag{3.2}$$

    Combining the above inequality with the descent condition (2.8) and the Lipschitz continuity of $\rho$ (Assumption 2), we have

    $$\alpha\|\rho(z_k)\|^2 \le -\rho(z_k)^T j_k = \left(\rho(z_k + \tau^{-1}t_k j_k) - \rho(z_k)\right)^T j_k - \rho(z_k + \tau^{-1}t_k j_k)^T j_k \le \tau^{-1}t_k L\|j_k\|^2 + \tau^{-1}t_k\kappa\|j_k\|^2 = \tau^{-1}t_k(L+\kappa)\|j_k\|^2. \tag{3.3}$$

    Thus, from (3.3),

    $$t_k \ge \min\left\{1, \frac{\alpha\tau}{L+\kappa}\,\frac{\|\rho(z_k)\|^2}{\|j_k\|^2}\right\}. \tag{3.4}$$

    This proves Lemma 3.2.

    Lemma 3.3. Suppose that Assumptions 1-3 hold and let $\{z_k\}$ and $\{c_k\}$ be the sequences generated by Algorithm 1. Then $\rho(c_k)$ is an ascent direction of the function $\frac{1}{2}\|z - \bar{z}\|^2$ at the point $z_k$, where $\bar{z} \in \bar{\Omega}$.

    Proof. At $z_k$, the function $\frac{1}{2}\|x - \bar{z}\|^2$ has gradient $z_k - \bar{z}$. By the monotonicity property (3.1), $\rho(\bar{z}) = 0$ and the line search (2.13), it can be seen that

    $$\rho(c_k)^T(z_k - \bar{z}) = \rho(c_k)^T(z_k - c_k + c_k - \bar{z}) = \rho(c_k)^T(c_k - \bar{z}) + \rho(c_k)^T(z_k - c_k) \ge \rho(c_k)^T(z_k - c_k) \ge \kappa t_k^2\|j_k\|^2 = \kappa\|z_k - c_k\|^2 > 0. \tag{3.5}$$

    The inequality (3.5) shows that $\rho(c_k)$ is indeed an ascent direction of $\frac{1}{2}\|z - \bar{z}\|^2$ at the iteration point $z_k$.

    Lemma 3.4. Let Assumptions 1-3 hold and the sequence $\{z_k\}$ be generated by Algorithm 1. Suppose that $\bar{z}$ is a solution of problem (2.5), so that $\rho(\bar{z}) = 0$. Then there exists a constant $\delta > 0$ such that

    $$\|\rho(z_k)\| \le \delta, \quad \forall k. \tag{3.6}$$

    Proof. Using the well-known property (2.12) of $P_\Omega$ (with $P_\Omega[\bar{z}] = \bar{z}$) together with (3.5), we can deduce that for any $\bar{z} \in \bar{\Omega}$,

    $$\begin{aligned} \|z_{k+1} - \bar{z}\|^2 &= \big\|P_\Omega[z_k - \eta\mu_k\rho(c_k)] - \bar{z}\big\|^2 \le \|z_k - \eta\mu_k\rho(c_k) - \bar{z}\|^2\\ &= \|z_k - \bar{z}\|^2 - 2\eta\mu_k\rho(c_k)^T(z_k - \bar{z}) + \eta^2\mu_k^2\|\rho(c_k)\|^2\\ &\le \|z_k - \bar{z}\|^2 - 2\eta\mu_k\rho(c_k)^T(z_k - c_k) + \eta^2\mu_k^2\|\rho(c_k)\|^2\\ &= \|z_k - \bar{z}\|^2 - \eta(2-\eta)\left(\frac{\rho(c_k)^T(z_k - c_k)}{\|\rho(c_k)\|}\right)^2 \qquad (3.7)\\ &\le \|z_k - \bar{z}\|^2. \qquad (3.8) \end{aligned}$$

    From inequality (3.8) we see that $\{\|z_k - \bar{z}\|\}$ is a decreasing sequence and hence $\{z_k\}$ is bounded; that is, there exists $a_0 > 0$ such that

    $$\|z_k\| \le a_0. \tag{3.9}$$

    Furthermore, we obtain

    $$\|z_{k+1} - \bar{z}\| \le \|z_k - \bar{z}\| \le \|z_{k-1} - \bar{z}\| \le \cdots \le \|z_0 - \bar{z}\|. \tag{3.10}$$

    Using the Lipschitz continuity of $\rho$, we have

    $$\|\rho(z_k)\| = \|\rho(z_k) - \rho(\bar{z})\| \le L\|z_k - \bar{z}\| \le L\|z_0 - \bar{z}\|. \tag{3.11}$$

    Setting $\delta = L\|z_0 - \bar{z}\|$ proves Lemma 3.4.

    Lemma 3.5. Suppose Assumptions 1-3 hold and the sequences $\{z_k\}$ and $\{c_k\}$ are generated by Algorithm 1. Then:

    (a) $\{c_k\}$ is bounded;

    (b) $\lim_{k\to\infty}\|z_k - c_k\| = 0$;

    (c) $\lim_{k\to\infty}\|z_k - z_{k+1}\| = 0$.

    Proof. (a) From (3.10), we know that the sequence $\{z_k\}$ is bounded. By (3.5), we have

    $$\rho(c_k)^T(z_k - c_k) \ge \kappa\|z_k - c_k\|^2. \tag{3.12}$$

    By the monotonicity (3.1), the Cauchy-Schwarz inequality and (3.6), we have

    $$\rho(c_k)^T(z_k - c_k) = (\rho(c_k) - \rho(z_k))^T(z_k - c_k) + \rho(z_k)^T(z_k - c_k) \le \rho(z_k)^T(z_k - c_k) \le \|\rho(z_k)\|\,\|z_k - c_k\| \le \delta\|z_k - c_k\|.$$

    Combined with (3.12), it is easy to deduce that

    $$\|z_k - c_k\| \le \frac{\delta}{\kappa}.$$

    Then we obtain

    $$\|c_k\| \le \frac{\delta}{\kappa} + \|z_k\|.$$

    Thus $\{c_k\}$ is bounded, due to the boundedness of $\{z_k\}$.

    (b) From inequality (3.7), we get

    $$\|z_{k+1} - \bar{z}\|^2 \le \|z_k - \bar{z}\|^2 - \eta(2-\eta)\frac{[\rho(c_k)^T(z_k - c_k)]^2}{\|\rho(c_k)\|^2} \le \|z_k - \bar{z}\|^2 - \eta(2-\eta)\kappa^2\frac{\|z_k - c_k\|^4}{\|\rho(c_k)\|^2},$$

    which means

    $$\eta(2-\eta)\kappa^2\frac{\|z_k - c_k\|^4}{\|\rho(c_k)\|^2} \le \|z_k - \bar{z}\|^2 - \|z_{k+1} - \bar{z}\|^2.$$

    Since the mapping $\rho$ is continuous and $\{c_k\}$ is bounded, we know that $\{\rho(c_k)\}$ is bounded. Therefore there exists $\delta_1 > 0$ such that $\|\rho(c_k)\| \le \delta_1$, and moreover

    $$\eta(2-\eta)\kappa^2\sum_{k=0}^{\infty}\|z_k - c_k\|^4 \le \delta_1^2\sum_{k=0}^{\infty}\left(\|z_k - \bar{z}\|^2 - \|z_{k+1} - \bar{z}\|^2\right) \le \delta_1^2\|z_0 - \bar{z}\|^2 < +\infty.$$

    Hence,

    $$\lim_{k\to\infty} t_k\|j_k\| = \lim_{k\to\infty}\|z_k - c_k\| = 0. \tag{3.13}$$

    (c) Using the nonexpansiveness (2.12) of the projection operator, $z_k = P_\Omega[z_k]$ and the Cauchy-Schwarz inequality, we have

    $$\|z_k - z_{k+1}\| = \big\|P_\Omega[z_k] - P_\Omega[z_k - \eta\mu_k\rho(c_k)]\big\| \le \eta\mu_k\|\rho(c_k)\| \le \eta\|z_k - c_k\|,$$

    which, together with (b), proves (c).

    The global convergence result for Algorithm 1 is established via the following theorem.

    Theorem 3.6. Suppose Assumptions 1-3 are satisfied and the sequence $\{z_k\}$ is generated by Algorithm 1. Then

    $$\liminf_{k\to\infty}\|\rho(z_k)\| = 0. \tag{3.14}$$

    Proof. Suppose (3.14) does not hold, meaning there exists a constant $\varepsilon_0 > 0$ such that

    $$\|\rho(z_k)\| \ge \varepsilon_0, \quad \forall k \ge 0. \tag{3.15}$$

    By (2.8) and the Cauchy-Schwarz inequality, we know

    $$\|\rho(z_k)\|\,\|j_k\| \ge -\rho(z_k)^T j_k \ge \alpha\|\rho(z_k)\|^2,$$

    which implies

    $$\|j_k\| \ge \alpha\|\rho(z_k)\| \ge \alpha\varepsilon_0, \quad \forall k \ge 0. \tag{3.16}$$

    By (2.7), (2.11), the Cauchy-Schwarz inequality, (2.8), Assumption 2 and the bound $\|z_k - z_{k-1}\| \le \eta t_{k-1}\|j_{k-1}\|$ from the proof of Lemma 3.5(c), we have

    $$\begin{aligned} \|j_k\| &= \|-\pi_k\rho(z_k) + \beta_k w_{k-1}\| \le l\|\rho(z_k)\| + |\beta_k|\frac{|\rho(z_k)^T w_{k-1}|}{\|\rho(z_k)\|} + |\beta_k|\,\|w_{k-1}\|\\ &\le l\|\rho(z_k)\| + 2|\beta_k|\,\|w_{k-1}\| \le l\|\rho(z_k)\| + 2\left(\frac{\|\rho(z_k)\|\,\|y_{k-1}\|}{|\rho(z_{k-1})^T j_{k-1}|} + \frac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}\right)\|w_{k-1}\|\\ &\le l\|\rho(z_k)\| + 2\left(\frac{L\eta\|\rho(z_k)\|\,t_{k-1}\|j_{k-1}\|}{\alpha\|\rho(z_{k-1})\|^2} + \frac{\|\rho(z_k)\|^2}{\|\rho(z_{k-1})\|^2}\right)t_{k-1}\|j_{k-1}\|\\ &\le l\delta + \frac{2L\eta\delta}{\alpha\varepsilon_0^2}\big(t_{k-1}\|j_{k-1}\|\big)^2 + \frac{2\delta^2}{\varepsilon_0^2}\,t_{k-1}\|j_{k-1}\| \end{aligned}$$

    for all $k \in \mathbb{N}$, where we used $\|w_{k-1}\| = \|c_{k-1} - z_{k-1}\| = t_{k-1}\|j_{k-1}\|$, $\|y_{k-1}\| \le L\|z_k - z_{k-1}\|$, $|\rho(z_{k-1})^T j_{k-1}| \ge \alpha\|\rho(z_{k-1})\|^2$, (3.6) and (3.15). Since (3.13) holds, for every $\varepsilon_1 > 0$ there exists $k_0$ such that $t_{k-1}\|j_{k-1}\| < \varepsilon_1$ for every $k > k_0$. Choosing $\varepsilon_1 = \varepsilon_0$ and $\ell_0 = \max\{\|j_0\|, \|j_1\|, \ldots, \|j_{k_0}\|, \ell_{01}\}$, where $\ell_{01} = \delta\left(l + \frac{2L\eta}{\alpha} + \frac{2\delta}{\varepsilon_0}\right)$, it holds that

    $$\|j_k\| \le \ell_0 \tag{3.17}$$

    for every $k \in \mathbb{N}$. Combining (3.4), (3.15), (3.16) and (3.17), we know that for all sufficiently large $k$,

    $$t_k\|j_k\| \ge \min\left\{1, \frac{\alpha\tau}{L+\kappa}\frac{\|\rho(z_k)\|^2}{\|j_k\|^2}\right\}\|j_k\| = \min\left\{\|j_k\|, \frac{\alpha\tau}{L+\kappa}\frac{\|\rho(z_k)\|^2}{\|j_k\|}\right\} \ge \min\left\{\alpha\varepsilon_0, \frac{\alpha\tau\varepsilon_0^2}{(L+\kappa)\ell_0}\right\} > 0.$$

    The last inequality yields a contradiction with (3.13), that is, with part (b) of Lemma 3.5. Consequently, (3.14) holds. The proof is completed.

    The Dolan and Moré performance profile [29] is used in this section to evaluate the efficiency of the proposed algorithm on a set of test problems with varying dimensions and initial points. A comparison is made with an algorithm of the same class proposed in [30]. All codes were written in the MATLAB environment and run on an HP laptop (Intel Core i3 CPU at 2.5 GHz, 8 GB RAM) with the Windows 10 operating system.

    Algo.1: The new method (Algorithm 1).

    Algo.2: MFRM method proposed in [30].

    The parameters for Algo.1 are chosen as $\tau = 0.9$, $\kappa = 10^{-4}$, $\eta = 1.2$, while the parameters for Algo.2 are set as reported in [30]. All iterative procedures are terminated whenever $\|\rho(z_k)\| < 10^{-6}$. The experiment is carried out on nine different problems with dimensions $n = 1000, 5000, 10{,}000, 50{,}000, 100{,}000$, using seven different initial points: $z_1 = (0.1, \ldots, 0.1)^T$, $z_2 = (0.2, \ldots, 0.2)^T$, $z_3 = (0.5, \ldots, 0.5)^T$, $z_4 = (1.2, \ldots, 1.2)^T$, $z_5 = (1.5, \ldots, 1.5)^T$, $z_6 = (2, \ldots, 2)^T$ and $z_7 = \mathrm{rand}(n,1)$. The test problems considered are listed below, where the mapping is $\rho(z) = (\rho_1(z), \rho_2(z), \ldots, \rho_n(z))^T$. In the result tables (Tables 2-10), "dim" is the dimension $n$, "inp" the initial point, "nit" the number of iterations, "nfv" the number of function evaluations, "tim" the CPU time in seconds, and "norm" the final residual $\|\rho(z_k)\|$; NaN indicates that the method failed on that instance.

    Problem 1 [31] Exponential Function.

    $$\rho_1(z) = e^{z_1} - 1, \qquad \rho_i(z) = e^{z_i} + z_i - 1 \quad \text{for } i = 2, 3, \ldots, n, \qquad \text{and } \Omega = \mathbb{R}^n_+.$$

    Problem 2 [31] Modified Logarithmic Function.

    $$\rho_i(z) = \ln(z_i + 1) - \frac{z_i}{n} \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \left\{z \in \mathbb{R}^n : \sum_{i=1}^{n} z_i \le n,\; z_i > -1,\; i = 1, 2, \ldots, n\right\}.$$

    Problem 3 [32]

    $$\rho_i(z) = \min\big(\min(|z_i|, z_i^2), \max(|z_i|, z_i^3)\big) \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \mathbb{R}^n_+.$$

    Problem 4 [31] Strictly Convex Function I.

    $$\rho_i(z) = e^{z_i} - 1 \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \mathbb{R}^n_+.$$

    Problem 5 [31] Strictly Convex Function II.

    $$\rho_i(z) = \frac{i}{n}\,e^{z_i} - 1 \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \mathbb{R}^n_+.$$

    Problem 6 [33] Tridiagonal Exponential Function.

    $$\rho_1(z) = z_1 - e^{\cos(h(z_1 + z_2))}, \qquad \rho_i(z) = z_i - e^{\cos(h(z_{i-1} + z_i + z_{i+1}))} \quad \text{for } i = 2, \ldots, n-1, \qquad \rho_n(z) = z_n - e^{\cos(h(z_{n-1} + z_n))}, \qquad h = \frac{1}{n+1}.$$

    Problem 7 [34] Nonsmooth Function.

    $$\rho_i(z) = z_i - \sin|z_i - 1| \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \left\{z \in \mathbb{R}^n : \sum_{i=1}^{n} z_i \le n,\; z_i \ge -1,\; i = 1, 2, \ldots, n\right\}.$$

    Problem 8 [31] The Trig exp function.

    $$\begin{aligned} \rho_1(z) &= 3z_1^3 + 2z_2 - 5 + \sin(z_1 - z_2)\sin(z_1 + z_2),\\ \rho_i(z) &= 3z_i^3 + 2z_{i+1} - 5 + \sin(z_i - z_{i+1})\sin(z_i + z_{i+1}) + 4z_i - z_{i-1}e^{z_{i-1} - z_i} - 3 \quad \text{for } i = 2, 3, \ldots, n-1,\\ \rho_n(z) &= -z_{n-1}e^{z_{n-1} - z_n} + 4z_n - 3, \qquad \text{and } \Omega = \mathbb{R}^n_+. \end{aligned}$$

    Problem 9 [35]

    $$t = \sum_{i=1}^{n} z_i^2, \qquad c = 10^{-5}, \qquad \rho_i(z) = 2c(z_i - 1) + 4(t - 0.25)z_i \quad \text{for } i = 1, 2, \ldots, n, \qquad \text{and } \Omega = \mathbb{R}^n_+.$$
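    As an illustration of how such mappings are coded, the sketch below defines Problems 1 and 4 as MATLAB handles and runs the Algorithm 1 sketch given earlier on Problem 4 with the stated parameters; the value of $l$ and all function names are our illustrative choices (only $\tau$, $\kappa$ and $\eta$ are specified above):

        % Problems 1 and 4 as MATLAB handles (both with Omega = R^n_+).
        rho_p1 = @(z) [exp(z(1)) - 1; exp(z(2:end)) + z(2:end) - 1]; % Problem 1
        rho_p4 = @(z) exp(z) - 1;                                    % Problem 4
        proj    = @(z) max(z, 0);                                    % P_Omega
        inOmega = @(z) all(z >= 0);                                  % membership test
        opts = struct('tau',0.9, 'kappa',1e-4, 'eta',1.2, 'l',1, ...
                      'tol',1e-6, 'maxit',10000);                    % l = 1 is our choice
        z0 = 0.1 * ones(1000, 1);                                    % initial point z1
        [z, res, nit] = algo1_sketch(rho_p4, proj, inOmega, z0, opts);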

    Figures 1-3 present the results of the comparison of the two methods. Figure 1 shows the profiles when the performance measure is the total number of iterations; Algo.1 obtains the most wins, with a probability of around 78%, and Algo.2 is in second place. Figure 2 shows the performance of the considered methods relative to the total number of function evaluations; this measure also shows that Algo.1 performs better than Algo.2. In Figure 3 the performance measure is the CPU running time, and this figure likewise indicates that Algo.1 outperforms Algo.2. From the presented figures, it is clear that Algo.1 is the more efficient at solving the considered test problems. Detailed results of the numerical experiments on the test problems are reported in Tables 2-10 in the appendix.

    Figure 1.  Performance profiles for the number of iterations.
    Figure 2.  Performance profiles for the number of function evaluations.
    Figure 3.  Performance profiles for the CPU time.
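    For reference, profiles such as those in Figures 1-3 can be generated from a matrix of per-problem costs. The following MATLAB sketch of the Dolan and Moré profile [29] assumes a p-by-2 cost matrix T, one column per solver, with NaN marking failures; the data layout and function name are our assumptions:

        % Sketch of a Dolan-More performance profile [29] for two solvers.
        % T is p-by-2: T(p,s) = cost of solver s on problem p (NaN = failure).
        function perf_profile_sketch(T)
        best = min(T, [], 2);               % per-problem best cost (min skips NaN)
        R = T ./ best;                      % performance ratios r_{p,s}
        R(isnan(R)) = Inf;                  % failed runs never fall within any tau
        taus = sort(unique(R(isfinite(R))));
        for s = 1:size(T, 2)
            frac = arrayfun(@(t) mean(R(:,s) <= t), taus);  % rho_s(tau)
            stairs(taus, frac); hold on
        end
        set(gca, 'XScale', 'log'); xlabel('\tau'); ylabel('\rho_s(\tau)');
        legend('Algo.1', 'Algo.2'); hold off
        end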

    Image restoration is the process of recovering a distorted or damaged image to its original form, and an algorithm that can perform this task with high restoration efficiency is of practical importance. We consider the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as metrics of restoration efficiency; larger SNR, PSNR and SSIM values reflect better quality of the restored images and indicate that the restored images are closer to the originals. Consider the following disturbed or incomplete observation

    $$b = \rho z + \omega, \tag{5.1}$$

    where $z \in \mathbb{R}^n$ is the unknown vector, $b \in \mathbb{R}^k$ is the observed data, $\rho \in \mathbb{R}^{k \times n}$ ($k \ll n$) is a linear operator and $\omega \in \mathbb{R}^k$ is an error term. Our goal in this section is to recover the unknown vector $z$. A well-known approach for obtaining $z$ is to solve the following $\ell_1$-regularization problem:

    $$\min_{z \in \mathbb{R}^n}\left\{\sigma\|z\|_1 + \frac{1}{2}\|\rho z - b\|_2^2\right\}, \tag{5.2}$$

    where the regularization parameter $\sigma$ is positive and $\|\cdot\|_1$, $\|\cdot\|_2$ denote the $\ell_1$- and $\ell_2$-norms, respectively. See [36,37,38,39,40] for various algorithms for solving (5.2). For a comprehensive procedure on how to use our proposed algorithm to solve (5.2), see [41,42]; a sketch of the underlying reformulation is given below.
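    As background on how (5.2) fits the framework of (2.5): a standard splitting used in the compressed-sensing literature (see [41,42] for the full procedure) writes $z = u - v$ with $u, v \ge 0$, turning (5.2) into a bound-constrained quadratic program whose optimality system is a nonsmooth equation on $\Omega = \mathbb{R}^{2n}_+$. A MATLAB sketch of this construction, with all names ours:

        % Sketch of the l1 reformulation of (5.2) (cf. [41,42]). With z = u - v
        % (u, v >= 0) and w = [u; v], (5.2) becomes min_{w >= 0} 0.5 w'Hw + d'w,
        % whose optimality conditions w >= 0, Hw + d >= 0, w'(Hw + d) = 0 are
        % equivalent to the componentwise equation min(w, Hw + d) = 0.
        function resid = l1_residual(A, b, sigma)
        AtA = A' * A; Atb = A' * b; n = size(A, 2);
        H = [AtA, -AtA; -AtA, AtA];                 % positive semidefinite
        d = sigma * ones(2*n, 1) + [-Atb; Atb];
        resid = @(w) min(w, H*w + d);               % componentwise minimum
        end

    Algorithm 1 is then applied to this residual with $P_\Omega(w) = \max(w, 0)$, and the restored signal is recovered as $z = w(1{:}n) - w(n{+}1{:}2n)$.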

    To assess the efficiency of Algo.1 in restoring images degraded by a Gaussian blur kernel with standard deviation 0.1, we compare its performance with that of the modified Fletcher-Reeves conjugate gradient method proposed in [30], referred to as Algo.2. Four test images of different sizes, labelled A, B, C and D, are considered in this experiment. The algorithms are implemented based on the following settings:

    ● All codes were written and implemented in the MATLAB environment.

    ● The same starting point and stopping condition (with $\mathrm{Tol} = 10^{-5}$) are used for all the algorithms.

    ● The parameters for Algo.1 are chosen as $\eta = 1$, $\tau = 0.55$, $\kappa = 10^{-4}$. The parameters for Algo.2 are chosen as reported in the application section of [30].

    ● The linear operator $\rho$ in the experiment is chosen as the Gaussian matrix generated by the command rand(k,n) in MATLAB.

    ● The signal-to-noise ratio (SNR) is defined as

    $$\mathrm{SNR} := 20 \times \log_{10}\left(\frac{\|z\|}{\|\tilde{z} - z\|}\right),$$

    where $\tilde{z}$ is the recovered vector (a MATLAB one-liner for this metric is sketched below). The definitions of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) can be found in [43] and [44], respectively.
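    As referenced above, the metric computations are short in MATLAB (psnr and ssim are Image Processing Toolbox functions; the handle name is ours):

        % SNR per the definition above; ztilde is the restored vector/image.
        snr_db = @(z, ztilde) 20 * log10(norm(z(:)) / norm(ztilde(:) - z(:)));
        % PSNR and SSIM (see [43,44]) are available in the Image Processing
        % Toolbox as psnr(ztilde, z) and ssim(ztilde, z).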

    Table 1.  The numerical results obtained by the Algo.1 and Algo.2 methods in restoring the blurred and noisy images.

                          Algo.1                    Algo.2
    Test Image    SNR     PSNR    SSIM      SNR     PSNR    SSIM
    A             16.74   19.03   0.765     16.66   18.95   0.760
    B             16.65   21.98   0.911     16.59   21.93   0.910
    C             20.93   22.76   0.913     20.87   22.70   0.912
    D             18.80   21.71   0.931     18.68   21.58   0.929

    Figure 4 has four columns, labelled ORI, BNI, RA1 and RA2: the column labelled ORI shows the original images, BNI shows the blurred and noisy images, RA1 shows the images restored by Algo.1 and RA2 the images restored by Algo.2. Table 1 provides the SNR, PSNR and SSIM values for Algo.1 and Algo.2. It can be seen that Algo.1 attains the highest SNR, PSNR and SSIM on all the images used in the experiment, indicating that Algo.1 is more effective than Algo.2 in restoring blurred and noisy images.

    Figure 4.  From the left: The original, blurred and noisy images, restored images by Algo.1 and 2.

    "The authors acknowledge the support provided by the Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart research Innovation Cluster (CLASSIC), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Doctoral Scholarship, Academic for Ph.D. Program at KMUTT (Grant No.16/2561)."

    The authors declare that they have no conflict of interest.

    Table 2.  Numerical result for Problem 1.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 3 11 0.020026 0 32 128 0.15285 5.77E-07
    z2 2 7 0.022233 0 23 92 0.046757 1.03E-07
    z3 3 11 0.028924 0.00E+00 43 172 0.085067 3.24E-07
    z4 2 7 0.01272 0.00E+00 28 112 0.042162 8.50E-07
    z5 2 7 0.012594 0 38 152 0.082875 7.44E-07
    z6 2 7 0.006795 0.00E+00 34 136 0.061034 4.36E-07
    z7 29 116 0.098046 3.71E-08 62 248 0.1162 3.84E-07
    5000 z1 2 7 0.1681 0 16 64 0.097823 4.98E-07
    z2 2 7 0.07673 0 27 108 0.16998 5.89E-08
    z3 2 7 0.019262 0.00E+00 34 136 0.43167 8.96E-07
    z4 2 7 0.041163 0.00E+00 43 172 0.24401 4.77E-07
    z5 2 7 0.035437 0.00E+00 36 144 0.28409 4.72E-07
    z6 2 7 0.031377 0 25 100 0.1577 8.50E-07
    z7 68 272 1.5225 2.22E-08 NaN NaN NaN NaN
    10000 z1 2 7 0.067548 0 7 28 0.080462 7.04E-07
    z2 2 7 0.02502 0 24 96 0.9236 2.84E-07
    z3 2 7 0.037267 0.00E+00 21 84 0.82591 6.94E-07
    z4 2 7 0.027143 0 38 152 0.97602 5.16E-07
    z5 2 7 0.070627 0 28 112 0.46927 8.68E-07
    z6 2 7 0.052355 0 25 100 0.28425 8.52E-07
    z7 107 428 9.536 3.42E-08 NaN NaN NaN NaN
    50000 z1 2 7 0.35904 0 7 28 0.29707 2.32E-07
    z2 2 7 0.24819 0 15 60 1.212 2.10E-07
    z3 2 7 0.21212 0.00E+00 7 28 0.33872 7.76E-07
    z4 2 7 0.265 0.00E+00 24 96 1.2315 7.36E-07
    z5 2 7 0.22679 0.00E+00 21 84 0.98662 9.19E-07
    z6 2 7 0.46048 0.00E+00 8 32 0.44742 4.62E-07
    z7 353 1412 85.8011 1.12E-11 NaN NaN NaN NaN
    100000 z1 2 7 0.26127 0 7 28 0.66487 2.45E-07
    z2 2 7 0.42916 0 14 56 1.9555 4.72E-07
    z3 2 7 0.29924 0.00E+00 7 28 0.65463 8.36E-07
    z4 2 7 0.47753 0 28 112 4.5812 5.94E-07
    z5 2 7 0.28228 0.00E+00 17 68 2.0596 5.23E-07
    z6 2 7 0.45284 0.00E+00 8 32 1.4187 3.26E-07
    z7 NaN NaN NaN NaN NaN NaN NaN NaN

    Table 3.  Numerical result for Problem 2.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 7 22 0.047242 1.58E-09 4 12 0.075993 5.17E-07
    z2 7 22 0.011612 2.12E-09 5 15 0.01685 6.04E-09
    z3 6 19 0.008748 7.52E-09 5 15 0.009081 4.37E-07
    z4 8 25 0.008643 1.95E-09 6 18 0.009114 1.52E-07
    z5 6 19 0.010119 8.43E-09 7 21 0.013185 1.10E-09
    z6 9 28 0.009592 1.04E-09 7 21 0.014685 1.74E-08
    z7 44 169 0.043234 9.47E-07 69 261 0.22456 6.30E-07
    5000 z1 6 20 0.062266 2.97E-07 4 12 0.012773 1.75E-07
    z2 6 20 0.031005 4.05E-07 5 15 0.019072 6.27E-10
    z3 6 19 0.022469 9.12E-10 5 15 0.03412 1.42E-07
    z4 7 23 0.048441 3.74E-07 6 18 0.040398 3.94E-08
    z5 6 19 0.032782 1.42E-09 6 18 0.030696 4.05E-07
    z6 7 22 0.038421 7.12E-09 7 21 0.02232 2.36E-09
    z7 45 169 0.32315 1.74E-07 75 290 0.68505 9.20E-07
    10000 z1 5 16 0.065175 9.23E-09 4 12 0.05281 1.21E-07
    z2 6 21 0.072794 3.06E-07 5 15 0.055137 2.79E-10
    z3 6 19 0.036537 4.32E-10 5 15 0.038347 9.73E-08
    z4 7 24 0.054625 2.82E-07 6 18 0.057504 2.56E-08
    z5 6 20 0.09281 7.38E-10 6 18 0.053546 2.93E-07
    z6 7 22 0.098951 4.21E-09 7 21 0.05207 1.24E-09
    z7 34 133 0.35652 8.45E-07 75 286 1.1715 8.81E-07
    50000 z1 7 26 1.0892 1.84E-07 4 12 0.072347 6.32E-08
    z2 9 34 0.57121 3.87E-07 5 16 0.17135 6.75E-11
    z3 6 21 0.17777 5.88E-07 5 15 0.30908 4.87E-08
    z4 10 37 0.79714 3.60E-07 6 18 0.30538 1.11E-08
    z5 7 25 0.14544 1.16E-07 6 18 0.17986 1.84E-07
    z6 8 28 0.24313 7.93E-07 7 21 0.11731 4.01E-10
    z7 36 141 1.1389 1.07E-07 87 326 3.3093 3.83E-07
    100000 z1 7 26 0.35609 2.56E-07 4 12 0.23409 5.40E-08
    z2 9 34 0.43666 5.47E-07 5 16 0.3152 4.27E-11
    z3 6 21 0.31721 7.65E-07 5 15 0.28597 4.05E-08
    z4 10 37 0.53074 5.09E-07 6 18 0.23003 8.15E-09
    z5 7 25 0.27827 1.55E-07 6 18 0.45582 1.80E-07
    z6 9 32 0.5333 1.09E-07 7 22 0.2709 2.71E-10
    z7 31 121 1.7511 5.10E-07 81 306 6.1345 9.16E-07

    Table 4.  Numerical result for Problem 3.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 2 6 0.007199 0 2 6 0.026849 0
    z2 2 6 0.00552 0 2 6 0.003173 0
    z3 2 6 0.006377 0 2 6 0.006714 0
    z4 3 11 0.017561 0.00E+00 2 6 0.005403 0
    z5 3 11 0.007556 0.00E+00 2 6 0.009761 0
    z6 3 11 0.008376 0 2 6 0.003285 0
    z7 16 49 0.043387 2.91E-07 2 6 0.005238 0
    5000 z1 2 6 0.024798 0 2 6 0.037672 0
    z2 2 6 0.017882 0 2 6 0.016857 0
    z3 2 6 0.014761 0 2 6 0.016971 0
    z4 3 11 0.021926 0.00E+00 2 6 0.024599 0
    z5 3 11 0.019501 0.00E+00 2 6 0.12878 0
    z6 3 11 0.099645 0 2 6 0.016172 0
    z7 21 65 0.26663 8.91E-07 2 6 0.068901 0
    10000 z1 2 6 0.053329 0 2 6 0.039629 0
    z2 2 6 0.036889 0 2 6 0.029941 0
    z3 2 6 0.02419 0 2 6 0.022097 0
    z4 3 11 0.046062 0.00E+00 2 6 0.015668 0
    z5 3 11 0.17699 0.00E+00 2 6 0.1442 0
    z6 3 11 0.056058 0 2 6 0.080865 0
    z7 19 58 0.42057 1.22E-07 2 6 0.052839 0
    50000 z1 2 6 0.11901 0 2 6 0.27419 0
    z2 2 6 0.10804 0 2 6 0.228 0
    z3 2 6 0.15799 0 2 6 0.083129 0
    z4 3 11 0.27797 0.00E+00 2 6 0.09131 0
    z5 3 11 0.21594 0.00E+00 2 6 0.047357 0
    z6 3 11 0.16137 0 2 6 0.049002 0
    z7 21 64 1.156 3.21E-07 2 6 0.12806 0
    100000 z1 2 6 0.21976 0 2 6 0.15418 0
    z2 2 6 0.19397 0 2 6 0.44568 0
    z3 2 6 0.17969 0 2 6 0.79033 0
    z4 3 11 0.30701 0.00E+00 2 6 0.20222 0
    z5 3 11 0.72994 0.00E+00 2 6 0.20959 0
    z6 3 11 0.36806 0 2 6 0.26684 0
    z7 22 67 1.8809 2.86E-07 2 6 0.23472 0

    Table 5.  Numerical result for Problem 4.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 2 7 0.007686 0 8 31 0.025113 1.65E-07
    z2 2 7 0.004973 0 7 28 0.007628 2.32E-07
    z3 2 7 0.004693 0.00E+00 8 32 0.009827 7.42E-07
    z4 2 7 0.005652 0.00E+00 9 35 0.012267 1.62E-07
    z5 2 7 0.007206 0.00E+00 7 28 0.012782 3.92E-07
    z6 2 7 0.005871 0.00E+00 8 32 0.016455 3.68E-07
    z7 22 87 0.030189 0.00E+00 71 284 0.045157 1.91E-07
    5000 z1 2 7 0.01789 0 8 31 0.035804 3.68E-07
    z2 2 7 0.083644 0 7 28 0.056219 5.20E-07
    z3 2 7 0.019787 0.00E+00 9 36 0.028182 1.66E-07
    z4 2 7 0.02077 0 9 35 0.028652 3.61E-07
    z5 2 7 0.023139 0 7 28 0.09901 8.76E-07
    z6 2 7 0.045152 0 8 32 0.046074 8.22E-07
    z7 77 308 0.88375 2.85E-07 51 204 0.12808 9.55E-07
    10000 z1 2 7 0.025792 0 8 32 0.043945 5.20E-07
    z2 2 7 0.020051 0 7 27 0.050306 7.35E-07
    z3 2 7 0.025936 0.00E+00 9 36 0.039643 2.35E-07
    z4 2 7 0.03822 0 9 35 0.041378 5.11E-07
    z5 2 7 0.03849 0 8 32 0.13231 1.24E-07
    z6 2 7 0.031354 0.00E+00 NaN NaN NaN NaN
    z7 101 404 3.4918 4.06E-09 NaN NaN NaN NaN
    50000 z1 2 7 0.091176 0 9 34 0.23565 0
    z2 2 7 0.090561 0 NaN NaN NaN NaN
    z3 2 7 0.13857 0.00E+00 9 35 0.12604 5.25E-07
    z4 2 7 0.10731 0.00E+00 10 38 0.426 0
    z5 2 7 0.14284 0.00E+00 8 31 0.47179 2.77E-07
    z6 2 7 0.29418 0.00E+00 9 35 0.21126 2.60E-07
    z7 110 439 8.6871 0 44 176 1.2526 3.55E-07
    100000 z1 2 7 0.20371 0 9 36 0.2659 1.65E-07
    z2 2 7 0.26727 0 8 30 0.48604 0
    z3 2 7 0.1588 0.00E+00 9 35 0.35032 7.42E-07
    z4 2 7 0.20624 0.00E+00 10 39 0.34301 1.62E-07
    z5 2 7 0.19404 0.00E+00 NaN NaN NaN NaN
    z6 2 7 0.21718 0.00E+00 9 35 0.31142 3.68E-07
    z7 111 444 18.0039 6.11E-08 NaN NaN NaN NaN

    Table 6.  Numerical result for Problem 5.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 34 127 0.026467 1.62E-07 71 263 0.16103 3.21E-07
    z2 36 140 0.026972 7.43E-07 62 235 0.052281 1.13E-07
    z3 52 205 0.16096 2.58E-07 50 194 0.088863 3.72E-07
    z4 96 378 0.55079 4.35E-07 NaN NaN NaN NaN
    z5 123 492 0.48012 3.96E-07 NaN NaN NaN NaN
    z6 196 784 1.0967 6.21E-07 NaN NaN NaN NaN
    z7 115 459 0.34961 2.89E-07 NaN NaN NaN NaN
    5000 z1 59 232 0.68163 2.32E-07 63 231 0.30231 3.90E-07
    z2 50 188 0.29441 6.42E-07 72 282 0.18091 7.31E-07
    z3 179 709 2.2218 2.91E-07 60 232 0.14861 1.47E-07
    z4 171 684 2.9204 2.99E-07 NaN NaN NaN NaN
    z5 297 1187 5.9983 3.31E-07 NaN NaN NaN NaN
    z6 420 1680 9.6236 1.67E-07 NaN NaN NaN NaN
    z7 187 744 3.4767 8.43E-07 NaN NaN NaN NaN
    10000 z1 77 300 1.3784 1.39E-07 75 283 0.27114 2.35E-07
    z2 74 283 1.5399 1.34E-07 55 208 0.20873 3.12E-07
    z3 214 843 5.0625 9.68E-07 67 259 0.65684 2.52E-07
    z4 253 1012 8.4598 5.48E-07 NaN NaN NaN NaN
    z5 383 1531 15.2491 1.45E-07 NaN NaN NaN NaN
    z6 575 2300 24.956 4.27E-07 NaN NaN NaN NaN
    z7 323 1291 9.6152 2.90E-07 NaN NaN NaN NaN
    50000 z1 135 534 12.3192 9.85E-07 65 253 1.9331 1.74E-07
    z2 342 1357 46.7469 1.53E-07 94 369 4.3154 4.77E-07
    z3 326 1294 39.8986 4.97E-07 NaN NaN NaN NaN
    z4 504 2016 82.9841 3.45E-07 NaN NaN NaN NaN
    z5 NaN NaN NaN NaN NaN NaN NaN NaN
    z6 NaN NaN NaN NaN NaN NaN NaN NaN
    z7 602 2403 97.0953 6.65E-07 NaN NaN NaN NaN
    100000 z1 164 645 25.8558 1.87E-07 NaN NaN NaN NaN
    z2 NaN NaN NaN NaN NaN NaN NaN NaN
    z3 400 1590 126.0758 7.38E-07 NaN NaN NaN NaN
    z4 636 2544 240.5206 3.57E-07 NaN NaN NaN NaN
    z5 NaN NaN NaN NaN NaN NaN NaN NaN
    z6 NaN NaN NaN NaN NaN NaN NaN NaN
    z7 NaN NaN NaN NaN NaN NaN NaN NaN

    Table 7.  Numerical result for Problem 6.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 9 36 0.17935 8.25E-07 9 36 0.0153 8.24E-07
    z2 9 36 0.03051 7.93E-07 9 36 0.048509 7.93E-07
    z3 9 36 0.027967 6.99E-07 9 36 0.017521 6.98E-07
    z4 9 36 0.015472 4.79E-07 9 36 0.014811 4.78E-07
    z5 9 36 0.007122 3.84E-07 9 36 0.016431 3.83E-07
    z6 9 36 0.010164 2.27E-07 9 36 0.009737 2.26E-07
    z7 9 36 0.020191 7.23E-07 9 36 0.017515 7.06E-07
    5000 z1 10 40 0.048118 1.85E-07 10 40 0.082844 1.85E-07
    z2 10 40 0.097072 1.78E-07 10 40 0.050343 1.78E-07
    z3 10 40 0.032297 1.57E-07 10 40 0.10792 1.57E-07
    z4 10 40 0.043942 1.07E-07 10 40 0.076199 1.07E-07
    z5 9 36 0.043841 8.61E-07 9 36 0.037916 8.61E-07
    z6 9 36 0.033263 5.08E-07 9 36 0.069375 5.08E-07
    z7 10 40 0.037194 1.58E-07 10 40 0.047672 1.58E-07
    10000 z1 10 40 0.082406 2.62E-07 10 40 0.076648 2.62E-07
    z2 10 40 0.068947 2.52E-07 10 40 0.15678 2.52E-07
    z3 10 40 0.058721 2.22E-07 10 40 0.13597 2.22E-07
    z4 10 40 0.078257 1.52E-07 10 40 0.08399 1.52E-07
    z5 10 40 0.062069 1.22E-07 10 40 0.07822 1.22E-07
    z6 9 36 0.053275 7.18E-07 9 36 0.1205 7.18E-07
    z7 10 40 0.057688 2.24E-07 10 40 0.080168 2.23E-07
    50000 z1 10 40 0.22352 5.85E-07 10 39 0.38243 5.85E-07
    z2 10 40 0.27436 5.63E-07 10 39 0.41361 5.63E-07
    z3 10 40 0.23122 4.96E-07 10 39 0.30721 4.96E-07
    z4 10 40 0.21192 3.40E-07 10 39 0.43086 3.40E-07
    z5 10 40 0.23892 2.72E-07 10 38 0.29829 1.26E-15
    z6 10 40 0.29017 1.61E-07 10 38 0.51415 6.28E-16
    z7 10 40 0.25616 5.01E-07 10 39 0.29756 5.00E-07
    100000 z1 10 40 0.82944 8.28E-07 10 39 1.1183 8.28E-07
    z2 10 40 0.47168 7.96E-07 10 38 0.6117 6.28E-16
    z3 10 40 0.49749 7.01E-07 10 38 0.81145 6.28E-16
    z4 10 40 0.52125 4.80E-07 10 38 0.79886 0
    z5 10 40 0.69499 3.85E-07 10 38 0.60219 0
    z6 10 40 0.47656 2.27E-07 10 38 0.72864 0
    z7 10 40 0.49578 7.07E-07 10 39 0.80741 7.07E-07

    Table 8.  Numerical result for Problem 7.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 5 20 0.046544 3.24E-07 5 20 0.008647 3.24E-07
    z2 5 20 0.009418 1.43E-07 5 20 0.013849 1.43E-07
    z3 5 20 0.038932 1.68E-08 4 16 0.015388 5.81E-08
    z4 6 24 0.01166 9.16E-09 6 24 0.010967 3.39E-08
    z5 6 24 0.010929 1.23E-08 6 24 0.0107 4.99E-08
    z6 6 23 0.01937 1.04E-07 6 23 0.014025 6.55E-08
    z7 26 104 0.032336 8.36E-09 36 144 0.12555 5.34E-08
    5000 z1 5 20 0.029586 7.25E-07 5 20 0.041107 7.25E-07
    z2 5 20 0.027892 3.20E-07 5 20 0.038182 3.20E-07
    z3 5 20 0.035333 3.75E-08 4 16 0.12123 1.30E-07
    z4 6 24 0.032225 2.05E-08 6 24 0.05211 7.58E-08
    z5 6 24 0.026546 2.75E-08 6 24 0.038675 1.12E-07
    z6 6 23 0.032529 2.32E-07 6 23 0.081384 1.46E-07
    z7 34 136 0.24122 3.59E-08 41 164 0.29917 1.14E-07
    10000 z1 6 24 0.043879 5.12E-09 6 24 0.079589 5.12E-09
    z2 5 20 0.036531 4.52E-07 5 20 0.1417 4.52E-07
    z3 5 20 0.050902 5.31E-08 4 16 0.045901 1.84E-07
    z4 6 24 0.054078 2.90E-08 6 24 0.059807 1.07E-07
    z5 6 24 0.052048 3.89E-08 6 24 0.099213 1.58E-07
    z6 6 23 0.04894 3.28E-07 6 23 0.054029 2.07E-07
    z7 41 164 0.30793 3.45E-08 45 180 0.92444 3.64E-07
    50000 z1 6 24 0.14 1.15E-08 6 24 0.26027 1.15E-08
    z2 6 24 0.14343 5.06E-09 6 24 0.60276 5.06E-09
    z3 5 20 0.15201 1.19E-07 4 16 0.17247 4.11E-07
    z4 6 24 0.34794 6.48E-08 6 24 0.22999 2.40E-07
    z5 6 24 0.15433 8.70E-08 6 24 0.38048 3.53E-07
    z6 6 23 0.1425 7.35E-07 6 23 0.2271 4.63E-07
    z7 29 116 1.295 9.41E-09 44 176 2.2436 7.06E-07
    100000 z1 6 24 0.39791 1.62E-08 6 24 0.94834 1.62E-08
    z2 6 24 0.47548 7.15E-09 6 24 0.43453 7.15E-09
    z3 5 20 0.48174 1.68E-07 4 16 0.29517 5.81E-07
    z4 6 24 0.26721 9.16E-08 6 24 0.55119 3.39E-07
    z5 6 24 0.28512 1.23E-07 6 24 0.61073 4.99E-07
    z6 7 27 0.5385 5.19E-09 6 23 0.42035 6.55E-07
    z7 29 116 1.6021 1.19E-08 41 164 3.9953 9.23E-07

    Table 9.  Numerical result for Problem 8.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 66 264 0.934 3.48E-07 NaN NaN NaN NaN
    z2 101 404 0.99428 4.28E-07 41 164 0.75214 4.29E-07
    z3 40 160 0.40127 3.33E-07 NaN NaN NaN NaN
    z4 39 156 0.5071 5.07E-07 39 156 0.71207 3.83E-07
    z5 36 144 0.61923 4.69E-07 35 140 1.4123 4.07E-07
    z6 4 14 0.071864 NaN 4 14 0.059454 NaN
    z7 23 89 0.48051 NaN NaN NaN NaN NaN
    5000 z1 52 208 2.7649 2.91E-07 NaN NaN NaN NaN
    z2 44 176 2.1027 3.54E-07 NaN NaN NaN NaN
    z3 42 168 2.1325 2.95E-07 NaN NaN NaN NaN
    z4 37 148 2.0738 3.41E-07 NaN NaN NaN NaN
    z5 16 60 0.64982 NaN NaN NaN NaN NaN
    z6 20 76 0.98188 NaN NaN NaN NaN NaN
    z7 301 1202 18.7543 4.37E-07 NaN NaN NaN NaN
    10000 z1 77 303 9.6495 3.64E-07 NaN NaN NaN NaN
    z2 71 284 8.0859 3.74E-07 NaN NaN NaN NaN
    z3 62 248 7.1755 3.27E-07 NaN NaN NaN NaN
    z4 48 192 4.1575 4.42E-07 NaN NaN NaN NaN
    z5 15 55 0.93456 NaN NaN NaN NaN NaN
    z6 123 490 12.4072 3.88E-07 NaN NaN NaN NaN
    z7 307 1226 35.579 3.46E-07 NaN NaN NaN NaN
    50000 z1 24 89 8.5017 NaN NaN NaN NaN NaN
    z2 89 355 45.0395 4.34E-07 NaN NaN NaN NaN
    z3 65 260 28.4752 3.57E-07 NaN NaN NaN NaN
    z4 431 1718 135.7493 3.88E-07 NaN NaN NaN NaN
    z5 6 21 2.1067 NaN NaN NaN NaN NaN
    z6 6 21 1.8349 NaN NaN NaN NaN NaN
    z7 7 24 1.8872 NaN NaN NaN NaN NaN
    100000 z1 34 130 31.5135 NaN NaN NaN NaN NaN
    z2 5 17 1.9076 NaN NaN NaN NaN NaN
    z3 87 332 64.5816 3.00E-07 NaN NaN NaN NaN
    z4 76 303 68.3533 4.49E-07 NaN NaN NaN NaN
    z5 5 17 2.2305 NaN NaN NaN NaN NaN
    z6 5 17 2.5293 NaN NaN NaN NaN NaN
    z7 6 21 3.1078 NaN NaN NaN NaN NaN

    Table 10.  Numerical result for Problem 9.
    Algo.1 Algo.2
    dim inp nit nfv tim norm nit nfv tim norm
    1000 z1 10 34 0.032445 1.06E-07 10 34 0.005932 1.06E-07
    z2 10 34 0.008239 1.06E-07 10 34 0.010335 1.06E-07
    z3 10 34 0.0073 1.06E-07 10 34 0.008407 1.06E-07
    z4 10 34 0.008495 1.06E-07 10 34 0.010614 1.06E-07
    z5 10 34 0.00772 1.06E-07 10 34 0.00775 1.06E-07
    z6 10 34 0.011383 1.06E-07 10 35 0.009311 1.06E-07
    z7 67 213 0.024778 9.71E-07 10 34 0.008864 1.06E-07
    5000 z1 7 25 0.022534 6.89E-08 7 25 0.027033 6.89E-08
    z2 7 25 0.032305 6.89E-08 7 25 0.02838 6.89E-08
    z3 7 25 0.026468 6.89E-08 7 25 0.068469 6.89E-08
    z4 7 25 0.034453 6.89E-08 7 26 0.037886 6.89E-08
    z5 7 25 0.021703 6.89E-08 7 26 0.037186 6.89E-08
    z6 7 25 0.027352 6.89E-08 7 26 0.077955 6.89E-08
    z7 20 66 0.061189 9.72E-07 7 25 0.037992 6.89E-08
    10000 z1 6 22 0.07498 8.13E-08 6 22 0.054682 8.13E-08
    z2 6 22 0.047478 8.13E-08 6 22 0.21797 8.13E-08
    z3 6 22 0.052347 8.13E-08 6 22 0.081579 8.13E-08
    z4 6 22 0.047644 8.13E-08 6 23 0.085064 8.13E-08
    z5 6 22 0.068304 8.13E-08 6 23 0.19028 8.13E-08
    z6 6 22 0.042771 8.13E-08 6 23 0.15365 8.13E-08
    z7 12 41 0.074071 9.08E-07 6 22 0.056989 8.13E-08
    50000 z1 5 19 0.22112 1.41E-07 5 19 0.60662 1.41E-07
    z2 5 19 0.21638 1.41E-07 5 19 0.33244 1.41E-07
    z3 5 19 0.22186 1.41E-07 5 20 0.8389 1.41E-07
    z4 5 19 0.37207 1.41E-07 5 20 0.63139 1.41E-07
    z5 5 19 0.36107 1.41E-07 5 20 1.046 1.41E-07
    z6 5 19 0.27063 1.41E-07 5 20 1.4673 1.41E-07
    z7 59 235 2.7862 4.11E-07 5 19 0.57234 1.41E-07
    100000 z1 6 23 0.93893 2.10E-07 6 23 1.3525 2.10E-07
    z2 6 23 0.60445 2.10E-07 6 24 1.5313 2.10E-07
    z3 6 23 0.71683 2.10E-07 6 24 1.6022 2.10E-07
    z4 6 23 0.57114 2.10E-07 6 24 1.7882 2.10E-07
    z5 6 23 0.57099 2.10E-07 6 24 1.878 2.10E-07
    z6 6 23 0.69104 2.10E-07 6 24 1.9634 2.10E-07
    z7 34 135 4.3899 4.52E-07 6 23 1.4688 2.10E-07



    [85] J. Zhang, D. P. Fan, Y. Dai, S. Anwar, F. S. Saleh, T. Zhang, et al., Uc-net: Uncertainty inspired rgb-d saliency detection via conditional variational autoencoders, iIn 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020), 8579–8588. https://doi.org/10.1109/CVPR42600.2020.00861
    [86] A. Luo, X. Li, F. Yang, Z. Jiao, H. Cheng, S. Lyu, Cascade graph neural networks for RGB-D salient object detection, in Computer Vision—ECCV 2020: 16th European Conference, (2020), 346–364. https://doi.org/10.1007/978-3-030-58610-2_21
    [87] B. Jiang, Z. Zhou, X. Wang, J. Tang, B. Luo, CmSalGAN: RGB-D salient object detection with cross-view generative adversarial networks, IEEE Trans. Multimedia, 23 (2021), 1343–1353. https://doi.org/10.1109/TMM.2020.2997184 doi: 10.1109/TMM.2020.2997184
    [88] T. Zhou, H. Fu, G. Chen, Y. Zhou, D. P. Fan, L. Shao, Specificity-preserving RGB-D saliency detection, in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), (2021), 4661–4671. https://doi.org/10.1109/ICCV48922.2021.00464
    [89] T. Zhou, Y. Zhou, C. Gong, J. Yang, Y. Zhang, Feature aggregation and propagation network for camouflaged object detection, IEEE Trans. Image Process., 31 (2022), 7036–7047. https://doi.org/10.1109/TIP.2022.3217695 doi: 10.1109/TIP.2022.3217695
    [90] M. Song, W. Song, G. Yang, C. Chen, Improving RGB-D salient object detection via modality-aware decoder, IEEE Trans. Image Process., 31 (2022), 6124–6138. https://doi.org/10.1109/TIP.2022.3205747 doi: 10.1109/TIP.2022.3205747
    [91] Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, et al., Ce-net: Context encoder network for 2d medical image segmentation, IEEE Trans. Med. Imaging, 38 (2019), 2281–2292. https://doi.org/10.1109/TMI.2019.2903562 doi: 10.1109/TMI.2019.2903562
    [92] S. Woo, J. Park, J. Y. Lee, I. S. Kweon, Cbam: Convolutional block attention module, in Proceedings of the European Conference on Computer Vision (ECCV), (2018), 3–19. https://doi.org/10.1007/978-3-030-01234-2_1
    [93] W. Gao, G. Liao, S. Ma, G. Li, Y. Liang, W. Lin, Unified information fusion network for multi-modal RGB-D and RGB-T salient object detection, IEEE Trans. Circuits Syst. Video Technol., 32 (2022), 2091–2106. https://doi.org/10.1109/TCSVT.2021.3082939 doi: 10.1109/TCSVT.2021.3082939
    [94] K. He, X. Zhang, S. Ren, J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, IEEE Trans. Pattern Anal. Mach. Intell., 37 (2015), 1904–1916. https://doi.org/10.1007/978-3-319-10578-9_23 doi: 10.1007/978-3-319-10578-9_23
    [95] I. Loshchilov, F. Hutter, Decoupled weight decay regularization, in 7th International Conference on Learning Representations, 2019.
    [96] J. X. Zhao, Y. Cao, D. P. Fan, M. M. Cheng, X. Y. Li, L. Zhang, Contrast prior and fluid pyramid integration for RGB-D salient object detection, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2019), 3922–3931.
    [97] N. Li, J. Ye, Y. Ji, H. Ling, J. Yu, Saliency detection on light field, in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), 2806–2813. https://doi.org/10.1109/CVPR.2014.359
    [98] D. P. Fan, Z. Lin, Z. Zhang, M. Zhu, M. M. Cheng, Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks, IEEE Trans. Neural Networks Learn. Syst., 32 (2021), 2075–2089. https://doi.org/10.1109/TNNLS.2020.2996406 doi: 10.1109/TNNLS.2020.2996406
    [99] W. Ji, J. Li, M. Zhang, Y. Piao, H. Lu, Accurate RGB-D salient object detection via collaborative learning, in Computer Vision—ECCV 2020: 16th European Conference, (2020), 52–69. https://doi.org/10.1007/978-3-030-58523-5_4
    [100] W. Zhang, G. P. Ji, Z. Wang, K. Fu, Q. Zhao, Depth quality-inspired feature manipulation for efficient RGB-D salient object detection, in Proceedings of the 29th ACM International Conference on Multimedia, 2021. https://doi.org/10.1145/3474085.3475240
    [101] W. Zhang, Y. Jiang, K. Fu, Q. Zhao, BTS-Net: Bi-directional transfer-and-selection network for RGB-D salient object detection, in 2021 IEEE International Conference on Multimedia and Expo (ICME), (2021), 1–6. https://doi.org/10.1109/ICME51207.2021.9428263
    [102] M. Zhang, S. Yao, B. Hu, Y. Piao, W. Ji, C2DFNet: Criss-cross dynamic filter network for rgb-d salient object detection, IEEE Trans. Multimedia, 2022 (2022), 1–13.
    [103] X. Cheng, X. Zheng, J. Pei, H. Tang, Z. Lyu, C. Chen, Depth-induced gap-reducing network for RGB-D salient object detection: An interaction, guidance and refinement approach, IEEE Trans. Multimedia, 2022 (2022).
    [104] Y. Pang, X. Zhao, L. Zhang, H. Lu, Caver: Cross-modal view-mixed transformer for bi-modal salient object detection, IEEE Trans. Image Process., 32 (2023), 892–904. https://doi.org/10.1109/TIP.2023.3234702 doi: 10.1109/TIP.2023.3234702
    [105] D. P. Fan, C. Gong, Y. Cao, B. Ren, M. M. Cheng, A. Borji, Enhanced-alignment measure for binary foreground map evaluation, in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, (2018), 698–704. https://doi.org/10.24963/ijcai.2018/97
    [106] D. P. Fan, M. M. Cheng, Y. Liu, T. Li, A. Borji, Structure-measure: A new way to evaluate foreground maps, in 2017 IEEE International Conference on Computer Vision (ICCV), (2017), 4558–4567. https://doi.org/10.1109/ICCV.2017.487
    [107] G. Chen, F. Shao, X. Chai, H. Chen, Q. Jiang, X. Meng, Y. S. Ho, Modality-induced transfer-fusion network for RGB-D and RGB-T salient object detection, IEEE Trans. Circuits Syst. Video Technol., 33 (2023), 1787–1801. https://doi.org/10.1109/TCSVT.2022.3215979 doi: 10.1109/TCSVT.2022.3215979
    [108] Z. Liu, Y. Wang, Z. Tu, Y. Xiao, B. Tang, Tritransnet, in Proceedings of the 29th ACM International Conference on Multimedia, 2021. https://doi.org/10.1145/3474085.3475601
    [109] R. Cong, Q. Lin, C. Zhang, C. Li, X. Cao, Q. Huang, Y. Zhao, CIR-Net: Cross-modality interaction and refinement for RGB-D salient object detection, IEEE Trans. Image Process., 31 (2022), 6800–6815. https://doi.org/10.1109/TIP.2022.3216198 doi: 10.1109/TIP.2022.3216198
    [110] Z. Liu, Y. Tan, Q. He, Y. Xiao, Swinnet: Swin transformer drives edge-aware RGB-D and RGB-T salient object detection, IEEE Trans. Circuits Syst. Video Technol., 32 (2022), 4486–4497. https://doi.org/10.1109/TCSVT.2021.3127149 doi: 10.1109/TCSVT.2021.3127149
    [111] R. Cong, H. Liu, C. Zhang, W. Zhang, F. Zheng, R. Song, S. Kwong, Point-aware interaction and cnn-induced refinement network for RGB-D salient object detection, in Proceedings of the 31st ACM International Conference on Multimedia, 2023. https://doi.org/10.1145/3581783.3611982
© 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).