Research article

Local bit-plane neighbour dissimilarity pattern in non-subsampled shearlet transform domain for bio-medical image retrieval


  • Received: 10 September 2021 Accepted: 01 December 2021 Published: 13 December 2021
  • This paper introduces a novel descriptor, the non-subsampled shearlet transform (NSST) based local bit-plane neighbour dissimilarity pattern (NSST-LBNDP), for biomedical image retrieval based on NSST, bit-plane slicing and local pattern based features. In NSST-LBNDP, the input image is first decomposed by NSST, followed by the introduction of non-linearity on the NSST coefficients through the computation of local energy features. The local energy features are next normalized into 8-bit values. The multiscale NSST provides translational invariance and has flexible directional sensitivity to capture more anisotropic information of an image. The normalised NSST subband features are next decomposed into bit-plane slices in order to capture very fine to coarse subband details. Each bit-plane slice of every subband is then encoded by exploiting the dissimilarity relationship between each neighbouring pixel and its adjacent neighbours. Experiments on two computed tomography (CT) and one magnetic resonance imaging (MRI) image datasets confirm the superior results of NSST-LBNDP when compared with many recent well known relevant descriptors, both in terms of average retrieval precision (ARP) and average retrieval recall (ARR).

    Citation: Hilly Gohain Baruah, Vijay Kumar Nath, Deepika Hazarika, Rakcinpha Hatibaruah. Local bit-plane neighbour dissimilarity pattern in non-subsampled shearlet transform domain for bio-medical image retrieval[J]. Mathematical Biosciences and Engineering, 2022, 19(2): 1609-1632. doi: 10.3934/mbe.2022075




    In the last two decades, biomedical imaging has undergone huge development as it plays a vital role in the proper diagnosis of different diseases. With advances in technology and medical services, clear visual representations of different interior human body parts are captured in large quantities through different medical imaging modalities such as X-ray, ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). As these images are highly informative and carry different kinds of diagnostic information, it is necessary to store them for future reference. For proper utilization of this huge collection of medical images, it is important to have efficient search and biomedical image retrieval techniques. For this purpose, content based medical image retrieval (CBMIR) has emerged as a significant tool. A CBMIR system consists of two main modules: feature extraction and similarity measurement. In the feature extraction module, the important features which represent the image well are extracted, whereas in the similarity measurement module the distances between the features of the query image and the database images are calculated and the database images are ranked according to the shortest distance. The efficiency of an image retrieval system is determined by features that can effectively distinguish between different image classes/groups.

    In the literature, various approaches to biomedical image retrieval exist that extract different types of features either in the spatial domain or the transform domain. Among these techniques, the local binary pattern (LBP) [1] and its different variants have been seen to provide good performance. LBP extracts local textural information from images. Sorensen et al. [2] utilized LBP as a texture descriptor for the classification of emphysema quantification in CT images. As LBP fails to work well under different lighting conditions and is not robust to noise, different variants of LBP have been presented by various researchers. In reference [3], a local ternary co-occurrence pattern (LTCoP) is introduced for CT and MRI image retrieval, encoding the co-occurrence of local ternary edges. Murala et al. proposed another biomedical image retrieval technique with the local mesh pattern (LMeP), where the relationship among the neighboring pixels w.r.t. a reference pixel is encoded. The spherical symmetric 3D local ternary pattern (SS-3D-LTP) [4] considers the relationship of a centre pixel with its neighborhood in five directions in a three dimensional plane constructed from the input image using multi-scale Gaussian filtered images. In reference [5], the local diagonal extrema pattern (LDEP) is introduced for CT image retrieval. The LDEP encodes the relationship of local diagonal neighbours w.r.t. a centre/reference pixel. First order local diagonal derivatives are used to measure the local diagonal extrema. To create a relationship between the center and its neighbors, the intensities of the local diagonal extrema and the center pixel are compared. This descriptor requires a lower computational cost and speeds up the retrieval task, since only diagonal neighbors are considered.
In centre symmetric local binary co-occurrence pattern (CSLBCoP) [6], the CSLBP [7] map of the input image is first extracted in order to capture the local information, then the co-occurrence of pattern pairs in CSLBP map is calculated using GLCM in different directions as well as distances. The performance of CSLBCoP is validated over texture, face and bio-medical image databases.

    Although LBP and its variants have shown encouraging results, they are not very good at capturing very fine details present in images. The local bit-plane based image retrieval techniques [8,9,10] solve this problem by decomposing the input image into a number of bit-planes, where the most significant to least significant bit-planes carry the coarsest to very fine image details. The appropriate encoding of these bit-planes enables an excellent capture of coarse to very fine image details. Dubey et al. proposed a local bit-plane dissimilarity pattern (LBDISP) [8] where the dissimilarity information between the centre pixel and its neighbors at each bit-plane is first calculated. Afterwards, the relationship between the centre and the dissimilarity map is encoded. In the local bit-plane decoded pattern (LBDP) [9], the bit-plane transformed values for a given centre pixel are first computed for all the bit-planes by applying weights to the neighbouring bits. The pattern map is then created by exploiting the relationship between the bit-plane transformed values and the intensity of the centre pixel. Inspired by LBDP and LBDISP, Hatibaruah et al. proposed two bit-plane based approaches for biomedical image retrieval [10,11]. In reference [11], a local bit-plane adjacent neighborhood dissimilarity pattern (LBPANDP) is proposed for the retrieval of CT images, where the dissimilarity between neighbors and their adjacent neighbors is first locally encoded. The extracted neighbor dissimilarity information in each bit-plane is then encoded into a single value ranging between 0–255. The pattern is then formed by calculating the relationship between the dissimilarity information of all considered bit-planes and the intensity of the centre pixel.
In the case of LBPANDP, the dissimilarity information is computed only between the neighboring pixels in each bit-plane, whereas in the local bit-plane based dissimilarities and adder pattern (LBPDAP) [10], both the "centre-neighbour" and "neighbour-neighbour" dissimilarity maps are calculated.

    Transform domain feature extraction techniques have gained significant attention from various researchers [12,13,14,15,16]. Wavelet analysis supplies a multi-scale and directional description of an input image through subbands, providing an adequate characterisation which is compatible with the human visual system [12]. It has been demonstrated that the computation of features from such directional subbands can improve the representation of important information which cannot be effectively captured in the spatial domain due to the complexity of image textures. In reference [14], Dong and Ma introduced a local energy histogram (LEH) to characterize the wavelet coefficient distribution in each low-pass and high-pass subband, using its local energy feature for texture classification. In reference [17], the DWT and singular value decomposition (SVD) are combined for texture classification; the pdf of the singular values of the image DWT coefficients is approximated using an exponential distribution. Two improved LBP operators using the DWT are introduced for texture classification in reference [18]. These descriptors calculate statistical parameters of the transformed image to construct the binary pattern. The local wavelet pattern (LWP) [19] establishes a relationship among neighboring pixels using local wavelet decomposition before considering the relationship with the centre pixel. To fit the range of centre values to the range of local wavelet decomposed values, a centre pixel transformation scheme is implemented. Biomedical images often consist of line or curve discontinuities which cannot be well approximated by the wavelet transform. Therefore, various multi-scale geometric analysis tools such as the curvelet [13], contourlet [20] and shearlet [21] have been introduced in the literature. The contourlet transform, in comparison to the curvelet, has fewer directional features.
In reference [13], a fast discrete curvelet transform based CT image retrieval technique is proposed, where the feature vector is constructed by employing the directional energies of the curvelet coefficients. In reference [22], a local binary curvelet co-occurrence pattern is introduced for CBIR. In this method, the curvelet coefficients of an input image are first computed and then the LBP is applied on the curvelet coefficients. The co-occurrence of pattern pairs using the GLCM is finally computed to construct the features.

    The shearlet transform is quite similar to the contourlet but without any restriction on the number of directions. In the discrete setting, shearlets are computed by combining the Laplacian pyramid (LP) with directional filters [23]. However, they lack shift invariance, which can be overcome by the non-subsampled shearlet transform (NSST), realized by combining a non-subsampled Laplacian pyramid (NSLP) with various shearing filters [24]. The NSST has shift invariance and flexible directional sensitivity. The optimality of the shearlet transform was reported in reference [25]. In reference [26], He et al. proposed a shearlet transform based descriptor for texture classification. In this method, the shearlet transform is first applied on the input image, followed by the computation of shearlet domain energy. Next, these local energy features are quantized and encoded to construct a rotation invariant representation. In reference [15], Dong et al. introduced a shearlet transform based descriptor for texture classification and retrieval applications where the relations of adjacent shearlet subbands are modeled using linear regression. In reference [16], a complex shearlet domain descriptor is proposed for histopathological image classification which uses not only the magnitude coefficients but also the relative phase coefficients to construct the feature vector; four texture based features are utilized to form the feature space. The histograms of shearlet coefficients (HSC) were proposed by Schwartz et al. in reference [27], where the histograms formed using edge outputs in various directions at each decomposition level are estimated over an image region. The authors demonstrated the superiority of their features over HOG for face recognition as well as texture classification applications.

    It is observed that LBP and most of the shearlet based descriptors cannot capture very fine image details well. In this connection, the local bit-plane based image descriptors (spatial domain) have shown encouraging results in extracting very fine details; however, unlike shearlet domain approaches, they are unable to capture anisotropic information at different scales. In view of these limitations of spatial local bit-plane based descriptors and shearlet domain descriptors, we combine the NSST and the concept of bit-plane decomposition to introduce a novel scheme called the NSST local bit-plane neighbor dissimilarity pattern (NSST-LBNDP), which encodes not only very fine image details but also directional information at multiple scales. In LBPDAP, with respect to (w.r.t.) a centre/reference pixel, the dissimilarity information is calculated between each neighbour and only its one immediate adjacent neighbour (at R = 1 w.r.t. the reference) in the clockwise direction. In LBPANDP, the dissimilarity information is calculated between each neighbour and a few selected adjacent neighbours, but all of them are present at R = 1 (w.r.t. the reference) only. Both LBPDAP and LBPANDP use only the four most significant of the 8 bit-planes, ignoring the 4 least significant bit-planes in order to reduce the feature dimension, which limits their capability of capturing very fine image details. It is observed that the dissimilarity relation between each given neighbour and all its 8-connected adjacent neighbours (at R = 1 and R = 2 w.r.t. the reference), which provides complete neighbour-neighbour dissimilarity information for a given neighbour, has not yet been exploited in the literature.
In this paper, unlike LBPDAP and LBPANDP, we consider all 8 bit-planes for encoding, and with respect to a given neighbour we consider all its 8-connected adjacent neighbours (at R = 1 and R = 2 w.r.t. the reference) in order to compute the full neighbour-neighbour dissimilarity information.

    The main contributions of NSST-LBNDP are:

    1) The existing spatial domain bit-plane based descriptors ignore the capture of anisotropic details at diverse scales, whereas the existing shearlet based descriptors fail to encode very fine image details. Therefore, an attempt has been made to solve this issue by combining NSST, bit-plane slicing and local pattern based encoding schemes.

    2) The NSST subband coefficients effectively represent the anisotropic details of an image at different scales. The NSST based local energy features in each subband, when further decomposed into a number of bit-plane slices, can capture the coarsest to very fine subband image details.

    3) Each bit-plane is encoded by utilizing the powerful dissimilarity relationship between each neighbouring bit and all its adjacent neighbors (w.r.t each reference).

    The following is a breakdown of the paper's structure. In Section 2, the basic theory of NSST is discussed. The proposed NSST-LBNDP for CBMIR is discussed in detail in Section 3. Section 4 presents the experimental results and analysis on the results obtained. The paper is concluded in Section 5.

    The block diagram of the proposed descriptor in an image retrieval framework is shown in Figure 1.

    Figure 1.  The overall block diagram of the NSST-LBNDP descriptor in an image retrieval framework.

    The calculation procedure of the proposed NSST-LBNDP descriptor is described in detail in this section. The whole process is split into a few crucial steps: (ⅰ) NSST decomposition, (ⅱ) incorporation of non-linearity, (ⅲ) bit-plane decomposition, (ⅳ) encoding of the NSST detail and approximation subbands using NSST-LBNDP and (ⅴ) formation of the feature vector.

    The wavelet transform is considered one of the major and most effective tools in multi-scale image processing. However, it is unable to effectively represent an image with higher dimensional singularities. To overcome this limitation, several multi-scale geometric analysis transforms such as the curvelet [28], contourlet [20], shearlet [21] and ridgelet [29] have been proposed. Recently, an affine system based shearlet transform (ST) [21] was proposed that can sparsely represent an image, provide an optimal approximation and offer flexible directional selectivity. The original version of ST was introduced to overcome the inability of the well known wavelet transform to represent images with higher dimensional singularities. However, ST has limitations of its own: it is shift-variant in nature, which might result in pseudo-Gibbs phenomena around the singularities present in images. Since the presence of up-sampling and down-sampling results in shift variance, removing these two blocks yields the shift-invariant version of ST, the non-subsampled ST [23]. In NSST, all the subbands are of the same size as the original image due to the absence of decimation between decomposition levels [30].

    The two dimensional affine system is given as

    $B_{as}=\left\{\psi_{i,j,K}(x)=|\det A|^{i/2}\,\psi\!\left(s^{j}A^{i}x-K\right):\ i,j\in\mathbb{Z},\ K\in\mathbb{Z}^{2}\right\}$ (2.1)

    where $\psi\in L^{2}(\mathbb{R}^{2})$, and $i$, $j$ and $K$ represent the scale, direction and shift parameters respectively, so that $B_{as}$ forms the collection of basis functions. $A$ and $s$ are the $2\times 2$ anisotropy and shear matrices respectively, with $|\det s|=1$, given as $A=\begin{bmatrix}4 & 0\\ 0 & 2\end{bmatrix}$ and $s=\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}$. In NSST, non-subsampled Laplacian pyramid filters are used to obtain shift invariance. Multi-scale factorization and multi-directional factorization are the two phases of the NSST discretization process [31].

    The different NSST subbands obtained for an image from the NEMA-CT dataset are shown in Figure 2.

    Figure 2.  Example of NSST decomposition of a CT image from Nema-CT dataset (a) Original image, (b) Approximation subband, (c)–(f) Detail subbands from scale 1, (g)–(j)Detail subbands from scale 2.

    Like wavelet coefficients, the image NSST coefficients in their raw form are also inadequate as texture cues. The NSST coefficients help divide the texture details into various frequency bands. The incorporation of non-linearity is highly essential in order to discriminate texture regions with similar average brightness and second-order statistics.

    In the proposed NSST-LBNDP descriptor, non-linearity followed by smoothing is applied to the image NSST coefficients before feature extraction for texture retrieval, so as to reduce the sensitivity of the NSST coefficients to local variations. Various forms of non-linearity, such as squaring and the rectified sigmoid, have been used in the literature [17]. The smoothing operation is carried out with either Gaussian or rectangular low-pass filters. Usually, smoothing is done after the incorporation of non-linearity. In the NSST-LBNDP descriptor too, like [17], the incorporation of non-linearity and the smoothing are merged into one operation, which is also useful from a computational point of view. A 3×3 local neighbourhood is considered, where we square the magnitudes of the subband NSST coefficients and determine their mean. This can be viewed as the calculation of local energy over a 3×3 local neighbourhood at each reference $(i,j)$:

    $E_{L}(i,j)=\dfrac{1}{9}\displaystyle\sum_{p=0}^{2}\sum_{q=0}^{2}\left|c(i-1+p,\,j-1+q)\right|^{2}$ (2.2)

    where $c(i,j)$ is the image NSST coefficient at reference position $(i,j)$ in a subband. These energy values need to be normalized to ensure that they lie within a fixed dynamic range. In the proposed technique, the values obtained from Eq (2.2) are normalized to lie in the range 0–1 and subsequently scaled to an 8-bit range so that they can be further employed in the bit-plane decomposition process.
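As an illustration, the local energy computation of Eq (2.2) together with the normalisation step can be sketched in NumPy. The replicate border handling and the function name `local_energy_8bit` are our assumptions, since the treatment of subband borders is not specified here.

```python
import numpy as np

def local_energy_8bit(coeffs):
    """Mean of squared magnitudes over each 3x3 neighbourhood (Eq 2.2),
    normalised to [0, 1] and scaled into an 8-bit range.
    `coeffs` is one NSST subband as a 2-D array (hypothetical input)."""
    sq = np.abs(np.asarray(coeffs, dtype=float)) ** 2
    # Pad by 1 (edge replication) so every pixel has a full 3x3 window.
    p = np.pad(sq, 1, mode='edge')
    # Sum the nine shifted copies and divide by 9 -> local mean energy.
    energy = sum(p[i:i + sq.shape[0], j:j + sq.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    # Normalise to [0,1], then map into the 8-bit range for bit-plane slicing.
    energy = (energy - energy.min()) / (energy.max() - energy.min() + 1e-12)
    return np.round(energy * 255).astype(np.uint8)
```

The 8-bit output of this step is what subsequently undergoes bit-plane decomposition.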

    In this step, the input image $I$ (of dimension $m\times n$) is decomposed into eight binary bit-planes using

    $I(x,y)=\displaystyle\sum_{b=0}^{B-1}BB^{b}(x,y)\times 2^{b}$ (2.3)

    where $x\in[1,m]$, $y\in[1,n]$ and $b\in[0,7]$. $BB^{b}(x,y)$ represents the binary bit of $I(x,y)$ in the $b$th bit-plane and $B$ denotes the bit depth.

    Through bit-plane decomposition, the input image is split into several bit-planes which capture fine to coarse texture details. Each bit-plane provides unique characteristics of the image.
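A minimal NumPy sketch of the bit-plane slicing of Eq (2.3), assuming the input values have already been scaled to 8 bits as described above:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into its eight binary bit-planes (Eq 2.3).
    Plane b holds bit b of every pixel: b=7 is the most significant
    (coarse detail), b=0 the least significant (fine detail)."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> b) & 1 for b in range(8)]
```

Summing `2**b * plane_b` over all planes recovers the original image, which is a convenient sanity check for the decomposition.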

    Figure 3 depicts an example of bit-plane calculation for a sample image.

    Figure 3.  The eight bit-planes obtained for a sample image.

    The different bit-planes obtained for a sample image from the TCIA-CT database are shown visually in Figure 4. It is observed that the lower to higher bit-planes carry very fine to highly coarse image texture details.

    Figure 4.  Visual example of bit-plane decomposition for a CT image from TCIA-CT dataset.

    Each bit-plane obtained from the NSST detail and approximation subbands is encoded by exploiting the dissimilarity relationship between each neighbouring bit (with respect to a reference) and its adjacent neighbours.

    Considering each bit in the bit-plane as the center/reference, the local circular neighbourhoods in each bit-plane are transformed into encoded bit-planes $EB^{b}$, $b\in[0,7]$. To calculate the $EB^{b}$ values, the dissimilarity relationship between each neighbouring value (with respect to a reference) and its adjacent neighbours is considered.

    In each $k$th subband ($k\in[1,N_{s}]$), the dissimilarity bits between each neighbour $BB_{a}^{b}(u,v)$ ($a\in[1,8]$, $b\in[0,7]$) (w.r.t. the centre/reference bit $BB^{b}(u,v)$) and its 8 adjacent neighbours $BB_{a,t}^{b}(u,v)$ ($t\in[1,8]$) are calculated using

    $D_{a,t}^{b,k}(u,v)=\eta\!\left(BB_{a}^{b,k}(u,v),\,BB_{a,t}^{b,k}(u,v)\right);\quad b\in[0,7],\ a\in[1,8],\ t\in[1,8],\ k\in[1,N_{s}]$ (2.4)

    where $\eta\!\left(BB_{a}^{b,k}(u,v),\,BB_{a,t}^{b,k}(u,v)\right)$ is the dissimilarity function calculated between $BB_{a}^{b,k}(u,v)$ and $BB_{a,t}^{b,k}(u,v)$, given as

    $\eta(x,y)=\begin{cases}1, & \text{if } x\neq y\\ 0, & \text{otherwise}\end{cases}$ (2.5)

    In each $k$th subband, for each neighbour position $a\in[1,8]$ (w.r.t. the reference/centre position $BB^{b}(u,v)$), the 8 dissimilarity bits calculated using Eq (2.4) are combined using the following summing operation:

    $\zeta_{a}^{b,k}(u,v)=\displaystyle\sum_{t=1}^{8}D_{a,t}^{b,k}(u,v),\quad a\in[1,8]$ (2.6)

    For each bit-plane ($b\in[0,7]$), the $\zeta_{a}^{b,k}(u,v)$ value obtained for each neighbour ($BB_{a}^{b,k}(u,v)$, $a\in[1,8]$) around the centre/reference ($BB^{b}(u,v)$) is thresholded using a threshold ($Th$) and multiplied with the corresponding weight to produce an encoded bit-plane value:

    $EB^{b,k}(u,v)=\displaystyle\sum_{a=1}^{8}2^{\,a-1}\,\gamma\!\left(\zeta_{a}^{b,k}(u,v),\,Th\right)$ (2.7)

    where

    $\gamma(x,y)=\begin{cases}1, & \text{if } x\geq y\\ 0, & \text{otherwise}\end{cases}$ (2.8)

    For each $k$th subband, the encoded bit-plane value $EB^{b,k}(u,v)$ (for each reference or center $BB^{b,k}(u,v)$) ranges between [0,255]. An example computation of $EB^{b,k}(u,v)$ for the lowest bit-plane ($b=0$) of the $k$th NSST subband of a sample image is shown in Figure 5. The value of the threshold ($Th$) is set to 4 through empirical study. For each reference $(u,v)$ in the $k$th subband, the encoded bit-plane values of all 8 bit-planes $EB^{b,k}(u,v)$ ($b\in[0,7]$) are finally compared to the respective reference local energy feature value (as shown in Figure 5), and the corresponding binary outputs are generated, which are next multiplied with the corresponding weights to produce the final NSST-LBNDP value:

    $NSST\text{-}LBNDP^{k}(u,v)=\displaystyle\sum_{b=0}^{7}2^{b}\,\gamma\!\left(EB^{b,k}(u,v),\,E_{f}^{k}(u,v)\right)$ (2.9)
    Figure 5.  An example computation of bit-plane encoding.

    where $E_{f}^{k}(u,v)$ is the local energy feature value at position $(u,v)$ in the $k$th subband.
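To make the encoding steps of Eqs (2.4)–(2.9) concrete, the following sketch computes the pattern map for one subband from its 8-bit local energy map. The clockwise neighbour ordering, the replicate border padding and the helper name `nsst_lbndp_map` are our assumptions; the exact ordering and border handling are not fixed above.

```python
import numpy as np

# Offsets of the 8 neighbours of any pixel (clockwise from top-left);
# this ordering is an assumption.
OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def nsst_lbndp_map(energy_8bit, th=4):
    """Sketch of Eqs (2.4)-(2.9): encode every bit-plane of the 8-bit local
    energy map, then compare the encoded values with the energy itself."""
    E = np.asarray(energy_8bit, dtype=np.uint8)
    m, n = E.shape
    pattern = np.zeros((m, n), dtype=np.uint16)
    for b in range(8):                              # each bit-plane b
        plane = (E >> b) & 1
        P = np.pad(plane, 2, mode='edge')           # room for R=1 and R=2
        EB = np.zeros((m, n), dtype=np.uint16)
        for a, (da, dn) in enumerate(OFFS):         # each neighbour a=1..8
            nb = P[2 + da:2 + da + m, 2 + dn:2 + dn + n]
            zeta = np.zeros((m, n), dtype=np.uint8)
            for dt, dv in OFFS:                     # its 8 adjacent neighbours
                adj = P[2 + da + dt:2 + da + dt + m,
                        2 + dn + dv:2 + dn + dv + n]
                zeta += (nb != adj)                 # dissimilarity bit, Eq (2.5)
            EB += (zeta >= th).astype(np.uint16) << a   # Eqs (2.6)-(2.8)
        pattern |= (EB >= E).astype(np.uint16) << b     # Eq (2.9)
    return pattern.astype(np.uint8)                     # values in [0, 255]
```

This direct loop is unoptimized but mirrors the equations one for one, which makes it easy to check against Figure 5.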

    Likewise, for each subband, one NSST-LBNDP feature map is generated whose values range between [0,255]. If we decompose the input image using a 2-level NSST with 2, 2 directions, we obtain 1 approximation and 4 + 4 = 8 detail subbands. If we encode each subband using the proposed NSST-LBNDP technique, we obtain a feature vector of 9×256 = 2304 features. Therefore, to limit the feature vector dimension, we quantize each NSST-LBNDP feature map into the range [0,15] using

    $NSST\text{-}LBNDP_{q}^{k}(u,v)=\operatorname{round}\!\left(\dfrac{15}{255}\,NSST\text{-}LBNDP^{k}(u,v)\right),\quad k\in[1,N_{s}]$ (2.10)

    After quantizing the feature maps into the [0,15] range, the feature vector dimension for all 9 subbands becomes 9×16 = 144.

    Each quantized feature map obtained from each image NSST subband is split into four equal patches. We compute the histogram of each patch and finally concatenate them to construct the final feature vector, given by

    $FV=\left[H_{NSST\text{-}LBNDP_{q}^{k},P_{1}},\,H_{NSST\text{-}LBNDP_{q}^{k},P_{2}},\,H_{NSST\text{-}LBNDP_{q}^{k},P_{3}},\,H_{NSST\text{-}LBNDP_{q}^{k},P_{4}}\right],\quad k\in[1,N_{s}]$ (2.11)

    where $H_{NSST\text{-}LBNDP_{q}^{k},P_{1}}$, $H_{NSST\text{-}LBNDP_{q}^{k},P_{2}}$, $H_{NSST\text{-}LBNDP_{q}^{k},P_{3}}$ and $H_{NSST\text{-}LBNDP_{q}^{k},P_{4}}$ denote the histograms of the first, second, third and fourth patches respectively, and $N_{s}$ denotes the total number of subbands.

    For example, with a 2-level NSST with 2, 2 directions, we obtain 1 approximation and 4 + 4 = 8 detail subbands. With the proposed method, each subband yields one quantized feature map. If we divide each quantized feature map into four equal patches and form the feature vector as given in Eq (2.11), we obtain a total feature dimension of 4×9×16 = 576 features.
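The quantisation and patch-histogram steps of Eqs (2.10) and (2.11) can be sketched as follows. Splitting each map into four quadrants is an assumption, since how the four equal patches are formed is not stated here; the function name is ours.

```python
import numpy as np

def feature_vector(pattern_maps):
    """Quantise each subband's NSST-LBNDP map to [0, 15] (Eq 2.10), split it
    into four equal patches and concatenate the 16-bin patch histograms
    (Eq 2.11). With 9 subbands this yields 9 x 4 x 16 = 576 features."""
    fv = []
    for pm in pattern_maps:
        q = np.round(15.0 / 255.0 * np.asarray(pm, dtype=float)).astype(int)
        h, w = q.shape
        # Four equal quadrants (assumes even dimensions, as with 512x512 images).
        patches = [q[:h // 2, :w // 2], q[:h // 2, w // 2:],
                   q[h // 2:, :w // 2], q[h // 2:, w // 2:]]
        for p in patches:
            fv.append(np.bincount(p.ravel(), minlength=16))  # 16-bin histogram
    return np.concatenate(fv).astype(float)
```

With nine 512×512 maps this produces the 576-dimensional vector described above.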

    The algorithm for the proposed feature extraction technique is as follows:

    Input-Image; Output-Feature vector

    1) Apply the NSST on input image.

    2) In each NSST subband, incorporate non-linearity by computing the local energy using Eq (2.2), then normalize it to the range [0,1] and multiply by 255 to bring it into an 8-bit range.

    3) The 8-bit values obtained from Step 2 for each subband are subjected to bit-plane slicing.

    4) In each bit-plane slice (of the $k$th subband), each bit-plane value $BB^{b,k}(u,v)$ is encoded into an $EB^{b,k}(u,v)$ value that ranges between [0,255] using Eqs (2.4)–(2.8).

    5) For each subband, the encoded bit-plane values $EB^{b,k}(u,v)$ for each of the 8 bit-planes, at reference position $(u,v)$, are compared to the corresponding local energy feature value at position $(u,v)$ (obtained from Step 2) using Eq (2.9).

    6) The binary comparison outputs generated using Eq (2.9) are multiplied by the corresponding weights to construct the $NSST\text{-}LBNDP^{k}(u,v)$ pattern map, whose values range between [0,255].

    7) The output of Step 6 is quantized into the range [0,15] to form the $NSST\text{-}LBNDP_{q}^{k}(u,v)$ pattern map.

    8) The $NSST\text{-}LBNDP_{q}^{k}$ pattern maps are converted into the final feature vector using Eq (2.11).

    The features of the input query image and each image in the database are matched using the "d1" similarity measure:

    $D(d_{i},q)=\displaystyle\sum_{j=1}^{L_{f}}\left|\dfrac{f_{i}(j)-f_{q}(j)}{1+f_{i}(j)+f_{q}(j)}\right|$ (2.12)

    where $D(d_{i},q)$ represents the distance between $d_{i}$ and $q$, with $d_{i}$ the $i$th image in the database and $q$ the input query image. $L_{f}$ is the length of the feature vector. $f_{i}$ and $f_{q}$ are the $i$th feature vector in the feature database and the query feature vector respectively. Among the different distance measures experimented with, the "d1" distance demonstrated the best performance and hence is used in this framework.
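A direct implementation of the "d1" measure of Eq (2.12) might look as follows (a sketch; the function name is ours). A smaller value means a closer match.

```python
import numpy as np

def d1_distance(f_db, f_q):
    """The d1 similarity measure of Eq (2.12): sum of normalised absolute
    differences between a database feature vector and the query vector."""
    f_db = np.asarray(f_db, dtype=float)
    f_q = np.asarray(f_q, dtype=float)
    return float(np.sum(np.abs((f_db - f_q) / (1.0 + f_db + f_q))))
```

The `1 + f_i(j) + f_q(j)` denominator keeps large histogram bins from dominating the distance, which is why d1 is often preferred over plain L1 for histogram features.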

    The feature vector matching algorithm is given below:

    Input: Query image; Output: Most similar retrieved images

    1) Calculate the feature vector of each image in the database using NSST-LBNDP.

    2) Calculate the feature vector of query image using NSST-LBNDP.

    3) Calculate the similarity between each database image feature vector and the feature vector of query image using "d1" distance measure.

    4) Sort the similarity values obtained from Step 3.

    5) The top entries of the sorted result are the most similar images retrieved from the database.

    In this section, the retrieval performance of the proposed NSST-LBNDP descriptor is evaluated and compared with other existing techniques. The experiments are conducted on two publicly available CT image databases, NEMA-CT and TCIA-CT, and one MRI image dataset, YORK-MRI. The databases used in the experiments are discussed first, followed by a discussion of the performance evaluation parameters. The results of the experiments, as well as an interpretation of the results, are discussed later.

    The digital imaging and communications in medicine (DICOM) standard was created by the National Electrical Manufacturers Association (NEMA) [32]. It was designed to view and distribute medical images of different modalities such as ultrasound, MRI and CT scans. NEMA-CT consists of CT scans of different parts of the human body. In our case, the dataset is composed of a total of 315 images of dimension 512×512. The nine image classes contain 36, 18, 36, 37, 41, 30, 23, 70 and 24 images per class respectively. Figure 6 presents one sample image from each image class.

    Figure 6.  One example image from each class of NEMA-CT dataset.

    This dataset is constructed with a total of 696 images of dimension 512×512 from "The Cancer Imaging Archive", which is a large collection of cancer imaging data [33]. All the images are in DICOM format. It consists of 16 image classes with 59, 25, 32, 40, 31, 42, 43, 25, 20, 36, 44, 73, 57, 69, 63 and 37 images per class respectively. Figure 7 presents one example image from each image class.

    Figure 7.  One example image from each class of TCIA dataset.

    The MRI images considered in the experiments are obtained from [34]. A total of 420 images are considered, grouped into four classes with 125, 100, 85 and 110 images respectively [34,35]. An example image from each class is shown in Figure 8.

    Figure 8.  One example image from each class of YORK dataset.

    In the experiments, each image in the database is considered once as a query, and the performance evaluation parameters are calculated from the corresponding retrieval results. The average retrieval precision (ARP) and average retrieval recall (ARR) are used to evaluate the retrieval performance. ARP and ARR (in percentage) are given as [5,11]:

    ARP = \frac{1}{|TD|} \sum_{k=1}^{|TD|} P(I_k) \times 100 (3.1)
    ARR = \frac{1}{|TD|} \sum_{k=1}^{|TD|} R(I_k) \times 100 (3.2)

    where |TD| is the total number of images present in the database, and

    P(I_k) = \frac{\text{Relevant images retrieved}}{\text{Total number of images retrieved}} (3.3)
    R(I_k) = \frac{\text{Relevant images retrieved}}{\text{Total number of relevant images present in database}} (3.4)

    A retrieved image is considered relevant when it belongs to the same class as the query image.
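    Equations (3.1)–(3.4) translate directly into code; a minimal sketch (the function names are illustrative):

```python
import numpy as np

def precision_recall(retrieved_labels, query_label, n_relevant_in_db):
    """Precision and recall for one query, per Eqs (3.3)-(3.4):
    a retrieved image is relevant when its class matches the query's."""
    relevant_retrieved = sum(1 for lbl in retrieved_labels if lbl == query_label)
    precision = relevant_retrieved / len(retrieved_labels)
    recall = relevant_retrieved / n_relevant_in_db
    return precision, recall

def arp_arr(precisions, recalls):
    """Average P(I_k) and R(I_k) over all |TD| queries and express as
    percentages, per Eqs (3.1)-(3.2)."""
    return 100.0 * np.mean(precisions), 100.0 * np.mean(recalls)
```

    In the experiments, `precision_recall` is evaluated once per database image used as query, and the two lists of per-query values are then averaged.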

    To validate the performance of the proposed NSST-LBNDP, it is compared with several well known techniques, including some very recent relevant ones: LDEP [5], LWP [19], LBDP [9], LBDISP [8], LBPDAP [11], LBPANDP [10], CSLBCoP [6] and the local contourlet tetra pattern, i.e., Cont.-TrP [36]. The NSST-LBNDP is also compared to its own spatial-domain implementation, referred to as S-LBNDP. In S-LBNDP, the input spatial image is directly decomposed into 8 bit-planes, which are then encoded with the same procedure as in NSST-LBNDP.

    The experimental results obtained on the NEMA-CT dataset are presented in Table 1 and Figure 9(a), (b). Table 1 presents the retrieval results in terms of %ARP and %ARR for the top 30 matches on the NEMA-CT database. The proposed descriptor clearly outperforms the other techniques, i.e., LDEP, LWP, LBDP, LBDISP, LBPDAP, LBPANDP, CSLBCoP, Cont.-TrP and S-LBNDP, and shows a good percentage improvement over the existing bit-plane based techniques. Figure 9(a), (b) show the plots of %ARP and %ARR obtained for the top 10, 15, 20, ..., 30 matches; the plots clearly show the performance improvement of the proposed descriptor over the other approaches. Although the NSST-LBNDP descriptor has a lower feature dimension than CSLBCoP and Cont.-TrP, it consistently outperforms both by a good margin.

    The performance difference between S-LBNDP and NSST-LBNDP in Table 1 clearly demonstrates the advantage of the translation-invariant NSST, which not only extracts multi-scale information but also possesses high directional sensitivity to capture more anisotropic details of an image. In contrast, the spatial-domain implementation S-LBNDP lacks multi-scale information and many directional details, which seriously limits its discriminative power.
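    Both S-LBNDP and the proposed descriptor rest on decomposing 8-bit values into binary bit-plane slices; a minimal illustration of that slicing step:

```python
import numpy as np

def bit_plane_slices(img_u8):
    """Decompose an 8-bit image into its 8 binary bit-plane slices.
    slices[0] is the least significant plane (finest detail),
    slices[7] the most significant (coarsest structure)."""
    img_u8 = np.asarray(img_u8, dtype=np.uint8)
    return [(img_u8 >> b) & 1 for b in range(8)]
```

    Summing `slices[b] << b` over all eight planes reconstructs the original 8-bit values, which is why the slices jointly capture very fine to coarse details.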

    Table 1.  Performance comparison of proposed NSST-LBNDP descriptor with other techniques in terms of % ARP and % ARR for NEMA-CT dataset (Top 30 match).
    Method LDEP [5] LWP [19] LBDP [9] LBDISP [8] LBPDAP [11] LBPANDP [10] CSLBCOP [6] Cont.-TrP [36] S-LBNDP NSST-LBNDP
    ARP 75.61 76.14 80.59 80.45 82.19 82.08 80.87 77.25 81.43 84.19
    ARR 61.50 62.80 65.17 66.48 67.40 67.25 65.80 61.52 66.69 69.32

    Figure 9.  The retrieval performance comparison in terms of ARP and ARR for datasets: (a), (b) NEMA-CT; (c), (d) TCIA-CT; (e), (f) YORK-MRI.

    In terms of %ARP, for top 30 matches, the NSST-LBNDP is found to perform better than LDEP, LWP, LBDP, LBDISP, LBPDAP, LBPANDP, CSLBCoP, Cont.-TrP and S-LBNDP by 11.34%, 10.57%, 4.46%, 4.64%, 2.43%, 2.57%, 4.10%, 8.98% and 3.39% respectively.

    Table 2 depicts the performance of the NSST-LBNDP descriptor in terms of %ARP and %ARR for the top 30 matches on the TCIA-CT dataset. On TCIA-CT, the proposed approach outperforms the other techniques in terms of both %ARP and %ARR. Figure 9(c), (d) show the plots of %ARP and %ARR obtained for the top 10, 15, 20, ..., 100 matches. These plots clearly show that the proposed descriptor provides outstanding results in comparison with the other techniques, and the experimental results also signify that the NSST-LBNDP performs consistently better than the relevant bit-plane based methods in terms of both %ARP and %ARR. For TCIA-CT, the %ARP of the NSST-LBNDP for the top 30 matches is found to be better than that of LDEP, LWP, LBDP, LBDISP, LBPDAP, LBPANDP, CSLBCoP, Cont.-TrP and S-LBNDP by 27.93%, 24.74%, 14.71%, 4.78%, 2.50%, 3.75%, 24.05%, 5.63% and 2.71% respectively. The performance difference between S-LBNDP and NSST-LBNDP in Table 2 also speaks in favour of NSST-LBNDP.

    Table 2.  Performance comparison of NSST-LBNDP descriptor with other techniques in terms of % ARP and % ARR for TCIA-CT dataset (Top 30 match).
    Method LDEP [5] LWP [19] LBDP [9] LBDISP [8] LBPDAP [11] LBPANDP [10] CSLBCOP [6] Cont.-TrP [36] S-LBNDP NSST-LBNDP
    ARP 73.48 75.36 81.95 89.72 91.71 90.61 75.78 89.00 87.72 94.01
    ARR 52.47 54.05 57.05 60.87 62.37 61.68 53.82 61.70 60.27 63.42


    Table 3 tabulates the performance results for the YORK-MRI dataset in terms of %ARP and %ARR, and Figure 9(e), (f) present the %ARP and %ARR plots for the top 10, 15, 20, ..., 100 matches. From Table 3 and Figure 9(e), (f), it can be clearly seen that the proposed NSST-LBNDP performs better than the other techniques considered for comparison. As with NEMA-CT and TCIA-CT, on YORK-MRI the NSST-LBNDP consistently outperforms the existing bit-plane based descriptors. For the top 100 matches, the %ARP of the NSST-LBNDP is observed to be 10.92%, 41.34%, 2.37%, 12.49%, 21.12%, 26.96%, 10.63%, 12.87% and 14.00% better than that of LDEP, LWP, LBDP, LBDISP, LBPDAP, LBPANDP, CSLBCoP, Cont.-TrP and S-LBNDP respectively. On YORK-MRI too, the NSST-LBNDP outperforms the S-LBNDP implementation by an encouraging margin.

    Table 3.  Performance comparison of NSST-LBNDP with other techniques in terms of % ARP and % ARR for YORK-MRI dataset (Top 100 match).
    Method LDEP [5] LWP [19] LBDP [9] LBDISP [8] LBPDAP [11] LBPANDP [10] CSLBCOP [6] Cont.-TrP [36] S-LBNDP NSST-LBNDP
    ARP 84.42 66.25 91.47 83.24 77.31 73.75 84.64 82.96 90.21 93.64
    ARR 79.15 61.44 86.44 78.37 73.72 70.20 80.04 77.55 85.37 88.41


    Table 4 demonstrates the performance of NSST-LBNDP for various scales and directions of the NSST on the YORK-MRI dataset. The NSST-LBNDP performs better with a 2-level NSST decomposition than with a 1-level one, while a 3-level decomposition does not improve the results much further; however, increasing the number of decomposition levels and directions increases the feature dimension. Therefore, in order to achieve a practical balance between feature dimension and retrieval performance, we set the number of decomposition levels to 2, with [2, 2] directions (from finest to coarsest), in the NSST-LBNDP framework.
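    The subband counts and feature dimensions (FD) in Table 4 follow a simple pattern: each decomposition level with direction parameter d contributes 2^d directional subbands, plus one shared lowpass subband, and the FD column equals 64 times the total subband count. The 64-values-per-subband factor is inferred here from the ratios in the table rather than stated in this section:

```python
def nsst_feature_dim(directions, per_subband=64):
    """Subband count and feature dimension for an NSST decomposition.
    `directions` lists the direction parameter d for each level, so the
    level contributes 2**d directional subbands; one lowpass subband is
    shared. The per-subband length of 64 is inferred from Table 4."""
    subbands = 1 + sum(2 ** d for d in directions)
    return subbands, subbands * per_subband
```

    For the chosen configuration of 2 levels with [2, 2] directions, this gives 1 + 4 + 4 = 9 subbands and a feature dimension of 576, matching Table 4.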

    Table 4.  Performance analysis of NSST-LBNDP for YORK-MRI dataset (top match 100) in terms of ARP and ARR for different levels of NSST decomposition and directions.
    Level Direction Total no. of subbands NSST-LBNDP FD
    ARP ARR
    1 2 1+4=5 89.22 84.10 320
    3 1+8=9 92.47 87.19 576
    4 1+16=17 92.07 86.83 1088
    5 1+32=33 91.74 86.52 2112
    2 2 2 1+4+4=9 93.64 88.41 576
    3 3 1+8+8=17 93.69 88.52 1088
    4 4 1+16+16=33 93.19 88.04 2112
    5 5 1+32+32=65 92.88 87.88 4160
    3 2 2 2 1+4+4+4=13 93.77 88.68 832
    3 3 3 1+8+8+8=25 93.40 88.40 1600
    4 4 4 1+16+16+16=49 93.23 88.27 3136
    5 5 5 1+32+32+32=97 93.06 87.13 6208


    Table 5 presents a comparison of the feature dimension of NSST-LBNDP with that of the other descriptors. Although NSST-LBNDP has a relatively higher feature dimension than LDEP, LWP, LBDP, LBDISP, LBPDAP, LBPANDP and S-LBNDP, and a much lower dimension than CSLBCoP and Cont.-TrP, it shows a consistent performance improvement by an encouraging margin over all the other methods in terms of both %ARP and %ARR.

    Table 5.  Feature dimension comparison of the proposed descriptor with other techniques.
    Method LDEP [5] LWP [19] LBDP [9] LBDISP [8] LBPDAP [11] LBPANDP [10] CSLBCOP [6] Cont.-TrP [36] S-LBNDP NSST-LBNDP
    Dimension 24 256 256 256 192 64 1024 1475 64 576


    Table 6 shows the total retrieval time (in seconds) comparison of NSST-LBNDP with the other descriptors. The total retrieval time for a given dataset is the total time needed to match the query image with every image in the dataset, and it depends mainly on the feature dimension. All the simulations are performed on a machine with an Intel Core i5-7200U CPU at 2.5 GHz and 8 GB RAM using the MATLAB computing platform. The total retrieval time of NSST-LBNDP is lower than only those of CSLBCoP and Cont.-TrP and higher than those of all the other descriptors; however, the retrieval performance of NSST-LBNDP is consistently higher than that of all the other methods on all three datasets.

    Table 6.  Total retrieval time (in seconds) comparison of the NSST-LBNDP with other techniques.
    Method LDEP [5] LWP [19] LBDP [9] LBDISP [8] LBPDAP [11] LBPANDP [10] CSLBCOP [6] Cont.-TrP [36] S-LBNDP NSST-LBNDP
    NEMA-CT 0.10 0.24 0.26 0.28 0.21 0.18 0.78 1.61 0.16 0.44
    TCIA-CT 0.46 1.21 1.01 1.27 1.07 0.56 3.50 8.49 0.57 2.19
    YORK-MRI 0.37 0.63 0.62 0.53 0.45 0.35 1.26 3.11 0.35 0.99


    The capture of multiscale, translation-invariant and directional information through the NSST subbands provides a powerful representation of an input image. The bit-plane decomposition of each NSST subband's local energy information further ensures the capture of very fine to coarse subband details. The effective encoding of these bit-plane slices therefore accounts for the improvement over the existing descriptors and yields the best handcrafted-descriptor results in the literature.

    Figure 10 shows the visual retrieval results of the different descriptors for one query image taken from the NEMA-CT database. Out of all the techniques, the NSST-LBNDP descriptor correctly retrieves the most relevant images. This visual result validates the retrieval superiority of the proposed descriptor over the existing descriptors.

    Figure 10.  Visual retrieval results for the LDEP, LWP, LBDISP, LBDP, LBPDAP, LBPANDP, CSLBCoP, Cont.-TrP, S-LBNDP and NSST-LBNDP descriptors, for the top 15 image matches on the NEMA-CT database for a given query image. (The image inside the black box is the query image; the images inside green boxes were correctly retrieved, while the images inside red boxes were retrieved incorrectly.)

    In Figure 11, the feature vectors of all the existing bit-plane based descriptors, i.e., LBDP, LBPDAP, LBDISP and LBPANDP, and of NSST-LBNDP are compared using interclass and intraclass image categories. Images I1 and I2 belong to the same class, whereas images I2 and I3 belong to two different classes. Figure 11(d), (e) respectively show the probability distribution (p.d.) w.r.t. zero mean of the feature vector difference for images I1-I2 (intraclass) and I2-I3 (interclass) for the different descriptors. A high deviation w.r.t. zero mean indicates low resemblance between the features, whereas a high amplitude w.r.t. zero mean indicates higher resemblance between the feature vectors. Figure 11(d), (e) clearly demonstrate the superior discriminative power of the NSST-LBNDP features in matching intraclass images and in distinguishing interclass images, respectively.

    Figure 11.  Examples displaying the discriminating nature of the LBDP, LBPDAP, LBDISP, LBPANDP and NSST-LBNDP feature vectors in distinguishing interclass and intraclass images from the TCIA-CT database. Images I1 and I2 belong to the same class, whereas images I2 and I3 belong to two different classes. (d) and (e) respectively show the probability distribution (p.d.) w.r.t. zero mean of the feature vector difference for images I1-I2 (intraclass) and I2-I3 (interclass) for the different descriptors.

    In this paper, we present NSST-LBNDP to characterise texture in biomedical images using the NSST, bit-plane slicing and local pattern based feature extraction. Based on an effective blend of the NSST's shift-invariant, multiscale and directional image representation with bit-plane slicing, the proposed bit-plane encoding strategy extracts very fine to coarse texture details at different scales and directions. Experimental results demonstrate the effectiveness of the technique on two CT image datasets and one MRI image dataset. The NSST-LBNDP approach outperforms many well known descriptors, including several very recent ones.

    This work was supported by Digital India Corporation (formerly Media Lab Asia), Ministry of Electronics and Information Technology, Government of India, through Visvesvaraya Ph.D scheme.

    The authors declare that there is no known conflict of interest regarding the publication of this work.



    [1] T. Ojala, P. Matti, H. David, A comparative study of texture measures with classification based on featured distributions, Pattern Recognit., 29 (1996), 51–59. doi: 10.1016/0031-3203(95)00067-4
    [2] S. Lauge, B. S. Saher, D. B. Marleen, Quantitative analysis of pulmonary emphysema using local binary patterns, IEEE Trans. Med. Imaging, 29 (2010), 559–569. doi: 10.1109/TMI.2009.2038575
    [3] S. Murala, Q. J. Wu, Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval, Neurocomputing, 119 (2013), 399–412. doi: 10.1016/j.neucom.2013.03.018
    [4] S. Murala, Q. J. Wu, Spherical symmetric 3D local ternary patterns for natural, texture and biomedical image indexing and retrieval, Neurocomputing, 149 (2015), 1502–1514. doi: 10.1016/j.neucom.2013.03.018
    [5] S. R. Dubey, S. K. Singh, R. K. Singh, Local diagonal extrema pattern: a new and efficient feature descriptor for CT image retrieval, IEEE Signal Process. Lett., 22 (2015), 1215–1219. doi: 10.1109/LSP.2015.2392623
    [6] M. Verma, R. P. Balasubramanian, Center symmetric local binary co-occurrence pattern for texture, face and bio-medical image retrieval, J. Vis. Commun. Image Represent., 32 (2015), 224–236. doi: 10.1016/j.jvcir.2015.08.015
    [7] M. H. Marko, P. Matti, S. Cordelia, Description of interest regions with center-symmetric local binary patterns, in Computer Vision, Graphics and Image Processing, Springer, (2006), 58–69. doi: 10.1007/11949619-6
    [8] S. R. Dubey, S. K. Singh, R. K. Singh, Novel local bit-plane dissimilarity pattern for computed tomography image retrieval, Electron. Lett., 52 (2016), 1290–1292. doi: 10.1049/el.2016.1206
    [9] S. R. Dubey, S. K. Singh, R. K. Singh, Local bit-plane decoded pattern: a novel feature descriptor for biomedical image retrieval, IEEE J. Biomed. Health Inform., 20 (2015), 1139–1147. doi: 10.1109/JBHI.2015.2437396
    [10] R. Hatibaruah, V. K. Nath, D. Hazarika, Computed tomography image retrieval via combination of two local bit plane-based dissimilarities using an adder, Int. J. Wavelets Multiresolut. Inf. Process., 19 (2020), 1–18. doi: 10.1142/S0219691320500587
    [11] R. Hatibaruah, V. K. Nath, D. Hazarika, Local bit plane adjacent neighborhood dissimilarity pattern for medical CT image retrieval, Procedia Comput. Sci., 165 (2019), 83–89. doi: 10.1016/j.procs.2020.01.073
    [12] P. M. Hong, C. S. Tong, S. K. Choy, H. Zhang, A fast and effective model for wavelet subband histograms and its application in texture image retrieval, IEEE Trans. Image Process., 15 (2006), 3078–3088. doi: 10.1109/TIP.2006.877509
    [13] A. A. Shinde, A. D. Rahulkar, C. Y. Patil, Fast discrete curvelet transform-based anisotropic feature extraction for biomedical image indexing and retrieval, Int. J. Multimed. Inf. Retr., 6 (2017), 281–288. doi: 10.1007/s13735-017-0132-0
    [14] Y. Dong, J. Ma, Wavelet-based image texture classification using local energy histograms, IEEE Signal Process. Lett., 18 (2011), 247–250. doi: 10.1109/LSP.2011.2111369
    [15] Y. Dong, D. Tao, X. Li, J. Ma, J. Pu, Texture classification and retrieval using shearlets and linear regression, IEEE Trans. Cybern., 45 (2015), 358–369. doi: 10.1109/TCYB.2014.2326059
    [16] A. Sadiq, L. Jochen, Texture features in the shearlet domain for histopathological image classification, BMC Med. Inform. Decis. Mak., 20 (2020), 1–19. doi: 10.1186/s12911-020-01327-3
    [17] S. Selvan, S. Ramakrishnan, SVD-based modeling for image texture classification using wavelet transformation, IEEE Trans. Image Process., 16 (2007), 2688–2696. doi: 10.1109/TIP.2007.908082
    [18] S. Ramakrishnan, S. Nithya, Two improved extension of local binary pattern descriptors using wavelet transform for texture classification, IET Image Process., 12 (2018), 2002–2010. doi: 10.1049/iet-ipr.2018.5410
    [19] S. R. Dubey, S. K. Singh, R. K. Singh, Local wavelet pattern: a new feature descriptor for image retrieval in medical CT databases, IEEE Trans. Image Process., 24 (2015), 5892–5903. doi: 10.1109/TIP.2015.2493446
    [20] M. Do, M. Vetterli, The contourlet transform: an efficient directional multiresolution image representation, IEEE Trans. Image Process., 14 (2005), 2091–2106. doi: 10.1109/TIP.2005.859376
    [21] D. Labate, W. Q. Lim, G. Kutyniok, G. Weiss, Sparse multidimensional representation using shearlets, in Wavelets XI, (2005), 254–262. doi: 10.1117/12.613494
    [22] P. Srivastava, A. Khare, Content-based image retrieval using local binary curvelet co-occurrence pattern: a multiresolution technique, Comput. J., 61 (2018), 369–385. doi: 10.1093/comjnl/bxx086
    [23] G. Easley, D. Labate, W. Q. Lim, Sparse directional image representations using the discrete shearlet transform, Appl. Comput. Harmon. Anal., 25 (2008), 25–46. doi: 10.1016/j.acha.2007.09.003
    [24] G. R. Easley, D. Labate, Critically sampled wavelets with composite dilations, IEEE Trans. Image Process., 21 (2012), 550–561. doi: 10.1109/TIP.2011.2164415
    [25] K. Guo, D. Labate, Optimally sparse multidimensional representation using shearlets, SIAM J. Math. Anal., 39 (2007), 298–318. doi: 10.1137/060649781
    [26] J. He, H. Ji, X. Yang, Rotation invariant texture descriptor using local shearlet-based energy histograms, IEEE Signal Process. Lett., 20 (2013), 905–908. doi: 10.1109/LSP.2013.2267730
    [27] W. R. Schwartz, R. D. da Silva, L. S. Davis, H. Pedrini, A novel feature descriptor based on the shearlet transform, in IEEE Int. Conf. Image Process., (2011), 1033–1036. doi: 10.1109/ICIP.2011.6115600
    [28] E. J. Candès, D. L. Donoho, Curvelets: a surprisingly effective nonadaptive representation for objects with edges, Stanford Univ. Dept. of Statistics, (2000).
    [29] E. J. Candès, D. L. Donoho, Ridgelets: a key to higher-dimensional intermittency?, Philos. Trans. A Math. Phys. Eng. Sci., 357 (1999), 2495–2509. doi: 10.1098/rsta.1999.0444
    [30] J. Saeed, G. Sedigheh, Using two coefficients modeling of nonsubsampled shearlet transform for despeckling, J. Appl. Remote Sens., 10 (2016), 015002. doi: 10.1117/1.JRS.10.015002
    [31] W. Kong, Technique for image fusion based on NSST domain INMF, Optik, 125 (2014), 2716–2722. doi: 10.1016/j.ijleo.2013.11.025
    [32] NEMA-CT image database, 2016. Available from: http://medical.nema.org/medical/Dicom/Multiframe.
    [33] The Cancer Imaging Archive, 2021. Available from: https://www.cancerimagingarchive.net.
    [34] Cardiac MRI dataset, York University, 2021. Available from: http://www.cs.yorku.ca/mridataset.
    [35] A. Andreopoulos, J. K. Tsotsos, Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI, Med. Image Anal., 12 (2008), 335–357. doi: 10.1016/j.media.2007.12.003
    [36] T. G. S. Kumar, V. Nagarajan, Local contourlet tetra pattern for image retrieval, Signal Image Video Process., 12 (2018), 591–598. doi: 10.1007/s11760-017-1197-1
  • This article has been cited by:

    1. Bin Feng, Chengbo Ai, Haofei Zhang, Fusion of Infrared and Visible Light Images Based on Improved Adaptive Dual-Channel Pulse Coupled Neural Network, 2024, 13, 2079-9292, 2337, 10.3390/electronics13122337