Research article

A novel attention model for salient structure detection in seismic volumes


  • Received: 10 November 2021; Accepted: 16 November 2021; Published: 22 November 2021
  • A new approach to seismic interpretation is proposed to leverage visual perception and human visual system modeling. Specifically, a saliency detection algorithm based on a novel attention model is proposed for identifying subsurface structures within seismic data volumes. The algorithm employs 3D-FFT and a multi-dimensional spectral projection, which decomposes local spectra into three distinct components, each depicting variations along different dimensions of the data. Subsequently, a novel directional center-surround attention model is proposed to incorporate directional comparisons around each voxel for saliency detection within each projected dimension. Next, the resulting saliency maps along each dimension are combined adaptively to yield a consolidated saliency map, which highlights various structures characterized by subtle variations and relative motion with respect to their neighboring sections. A priori information about the seismic data can be either embedded into the proposed attention model in the directional comparisons, or incorporated into the algorithm by specifying a template when combining saliency maps adaptively. Experimental results on two real seismic datasets from the North Sea, Netherlands and Great South Basin, New Zealand demonstrate the effectiveness of the proposed algorithm for detecting salient seismic structures of different natures and appearances in one shot, which differs significantly from traditional seismic interpretation algorithms. The results further demonstrate that the proposed method outperforms comparable state-of-the-art saliency detection algorithms for natural images and videos, which are inadequate for seismic imaging data.

    Citation: Muhammad Amir Shafiq, Zhiling Long, Haibin Di, Ghassan AlRegib. A novel attention model for salient structure detection in seismic volumes[J]. Applied Computing and Intelligence, 2021, 1(1): 31-45. doi: 10.3934/aci.2021002
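The pipeline described in the abstract can be sketched in a few dozen lines. The sketch below is illustrative only and is not the authors' implementation: the window size, the non-overlapping window grid, the mean-based spectral projection, and the variance-weighted fusion are all assumptions made for the sake of a compact, runnable example.

```python
import numpy as np

def saliency_sketch(volume, win=8):
    """Illustrative sketch of the described pipeline: windowed 3D-FFT,
    projection of each local spectrum onto the three data dimensions,
    directional center-surround contrast between neighbouring windows,
    and an adaptive (here: variance-weighted) fusion of the three maps.
    Window size and fusion weights are assumptions, not the paper's values."""
    nx, ny, nz = (s // win for s in volume.shape)
    # 1) Local 3D-FFT magnitude spectrum per window, projected onto each axis
    feats = np.zeros((nx, ny, nz, 3, win))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                blk = volume[i*win:(i+1)*win, j*win:(j+1)*win, k*win:(k+1)*win]
                spec = np.abs(np.fft.fftn(blk))
                feats[i, j, k, 0] = spec.mean(axis=(1, 2))  # variation along dim 0
                feats[i, j, k, 1] = spec.mean(axis=(0, 2))  # variation along dim 1
                feats[i, j, k, 2] = spec.mean(axis=(0, 1))  # variation along dim 2
    # 2) Directional center-surround: contrast each window with its immediate
    #    neighbours along each axis, within the matching projected dimension
    sal = np.zeros((nx, ny, nz, 3))
    offsets = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for d, (di, dj, dk) in enumerate(offsets):
        for i in range(nx):
            for j in range(ny):
                for k in range(nz):
                    center = feats[i, j, k, d]
                    diffs = []
                    for s in (-1, 1):
                        ii, jj, kk = i + s*di, j + s*dj, k + s*dk
                        if 0 <= ii < nx and 0 <= jj < ny and 0 <= kk < nz:
                            diffs.append(np.linalg.norm(center - feats[ii, jj, kk, d]))
                    sal[i, j, k, d] = np.mean(diffs) if diffs else 0.0
    # 3) Adaptive combination: weight each directional map by its variance,
    #    so dimensions with stronger contrast contribute more
    w = sal.reshape(-1, 3).var(axis=0) + 1e-12
    return (sal * (w / w.sum())).sum(axis=-1)
```

In this toy version, windows containing structures that differ from their neighbours along any dimension receive high saliency, which mimics the "relative motion with respect to neighboring sections" behaviour described in the abstract; the actual algorithm operates per voxel and supports embedding a priori templates, which this sketch omits.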


  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
