
Sparse-view X-ray CT based on a box-constrained nonlinear weighted anisotropic TV regularization

  • Sparse-view computed tomography (CT) is an important way to reduce the negative effects of radiation exposure in medical imaging by skipping some X-ray projections. However, because it violates the Nyquist/Shannon sampling criterion, severe streaking artifacts appear in the reconstructed CT images and could mislead diagnosis. Noting the ill-posed nature of the corresponding inverse problem in sparse-view CT, minimizing an energy functional composed of an image fidelity term together with properly chosen regularization terms is widely used to reconstruct a medically meaningful attenuation image. In this paper, we propose a regularization, called the box-constrained nonlinear weighted anisotropic total variation (box-constrained NWATV), and minimize the regularization term accompanying the least squares fitting using an alternating direction method of multipliers (ADMM) type method. The proposed method is validated on the Shepp-Logan phantom model, the actual walnut X-ray projections provided by the Finnish Inverse Problems Society, and human lung images. The experimental results show that the reconstruction speed of the proposed method is significantly accelerated compared to the existing L1/L2 regularization method. Precisely, the central processing unit (CPU) time is reduced by more than 8 times.

    Citation: Huiying Li, Yizhuang Song. Sparse-view X-ray CT based on a box-constrained nonlinear weighted anisotropic TV regularization[J]. Mathematical Biosciences and Engineering, 2024, 21(4): 5047-5067. doi: 10.3934/mbe.2024223




    In clinical applications, X-ray computed tomography (CT) aims to visualize the internal structure of the human body by reconstructing the tissues' attenuation coefficient $\mu$ for X-rays. Depending on the X-ray source, there are parallel beam, fan beam, and cone beam CTs [1,2,3,4]. In this paper, for ease of explanation, we focus on image reconstruction in parallel beam CT, even though the proposed method can also be used in fan beam and cone beam CTs. In a parallel beam CT, parallel X-ray beams in different directions are transmitted through the patient, who lies between the X-ray sources and the detectors (see Figure 1). The corresponding attenuated X-ray intensities are measured by the detectors. The inverse problem of a parallel beam CT is to reconstruct the attenuation coefficient from the measured attenuated X-ray intensities. Given perfectly measured data, the reconstruction methods include filtered back-projection, the algebraic reconstruction technique (ART) [1], and so on. Owing to its speed, accuracy, and excellent performance on bones and lungs as well as in nondestructive testing, X-ray CT is widely used in medical imaging to aid doctors in diagnosing diseases [1].

    Figure 1.  Schematic diagram of a parallel X-ray beam CT system.

    Nevertheless, exposing patients to radiation increases the risk of many diseases such as leukemia, cancer, etc. [5]. Low dose CT is an effective way to reduce such a risk. There are generally three approaches to low dose CT. The first is to reduce the tube voltage/current, the second is limited angle CT reconstruction, and the third is sparse-view CT reconstruction. For the first approach, to obtain a meaningful CT image, we need an efficient denoising algorithm [6]. For the second, visible singularities, invisible singularities, and artifacts exist [1]. For the third, since it violates the Nyquist/Shannon sampling criterion, strong streaking artifacts occur [7]. In this paper, we focus on removing the streaking artifacts in sparse-view CT reconstruction. To this end, we need to develop efficient ways to deal with the ill-posedness caused by the projection downsampling [1].

    To deal with the ill-posedness, data-driven and model-driven methods exist. Data-driven methods include the use of convolutional neural networks (CNNs) [8], U-Net [9], and GoogLeNet [10]. However, as pointed out in [11], there should be more evidence before such methods are used in clinical applications. For sparse-view CT, it is also very difficult to provide enough labeled data, since we cannot produce data with full projections without sufficient doses. Hence, the model-driven method is still quite necessary.

    For the model-driven approach, note that the algebraic method [12] has the flexibility of incorporating a priori information about $\mu$, so it is widely used in sparse-view CT. To be precise, the reconstruction of $\mu$ is recast as the minimization of an energy functional constructed from a (weighted) least squares fitting term and a regularization term.

    Noting the piecewise constant structure of medical images, the gradient of $\mu$ can be considered sparse. The study performed in [13] proposed the total variation (TV) regularization method for sparse-view CT. However, it is well known that TV introduces new blocky/staircasing artifacts [14]. The study performed in [15] proposed the anisotropic TV regularization method, which could produce distortions along the axes. To handle such problems, many variations of TV regularization have been proposed in the last two decades. The study performed in [16] proposed an edge-preserving TV regularizer which used $e^{-|\nabla\mu|^2/\sigma^2}$ as the edge detector, where $\sigma$ was a prescribed parameter that represented the amount of smoothing. Later, [17] proposed a discretized version similar to [16]. Similar methods can be found in [18,19]. Nevertheless, the ability to remove the streaking artifacts and preserve edges can be further improved, because the amount of regularization near the edges and away from them does not differ much, since $e^{-|\nabla\mu|^2/\sigma^2}\in[0,1]$. The study performed in [20] proposed the total generalized variation (TGV) regularizer, which used the second order derivative of the unknown $\mu$. While this method can avoid the blocky artifacts, it assumes a piecewise linear structure of the image, whereas the piecewise constant model is more commonly used for medical images. The study performed in [21] proposed a directional TV regularizer in which a directional derivative was considered, rather than just using $\nabla\mu$. However, different weights could be employed in different directions to improve the performance of this method. The study performed in [22] proposed the $L^p$ ($0<p<1$) regularization method. However, the reconstructed images heavily rely on the parameter $p$.

    L1/L2 is a recently proposed regularization technique [23]. This method is based on updating the regularization parameter in each iteration. However, the updated parameter is not region-dependent; that is, in each iteration, the minimization problem is isotropic. The study performed in [24] proposed a nonlinear weighted anisotropic TV (NWATV) regularization method and used it in electrical impedance tomography, which is a low resolution imaging modality. In this paper, a box-constrained NWATV method is used in sparse-view CT, which produces a significantly improved reconstruction compared with directly using the NWATV and box-constrained L1/L2 methods. Precisely, across the internal edges, where $|\nabla\mu|$ is large, we set the regularization to be small to preserve the edges, while in the smooth regions, we set a normal regularization to make the ill-posed problem better posed. We found a significantly improved convergence behavior of the iteration process with the box constraint (constraining the reconstructed values to lie in a proper interval). We validated the proposed algorithm using the Shepp-Logan phantom, the walnut X-ray data provided by the Finnish Inverse Problems Society (http://fips.fi/dataset.php), and the clinical lung image provided by The Cancer Imaging Archive (TCIA: https://www.cancerimagingarchive.net/collection/lungct-diagnosis/).

    The rest of the paper is organized as follows. In Section 2, we provide a brief introduction to parallel beam CT. In Section 3, we introduce the proposed box-constrained nonlinear weighted anisotropic TV regularization and provide an iterative reconstruction algorithm. In Section 4, we validate the performance of the proposed regularization method using the Shepp-Logan phantom, the actual walnut CT experimental data, and the clinical lung image. In Section 5, we discuss the rules for choosing the regularization parameters. In Section 6, we conclude the paper and provide some future research topics.

    In parallel beam CT, we restrict our explanations to the two-dimensional setting because the parallel beam always lies in a plane, and each time, the X-rays can only pass through one slice of the object. Let $\Omega\subset\mathbb{R}^2$ represent a bounded region containing the imaging object. Denote by $\mu$ the attenuation coefficient on $\Omega$, which is generally a piecewise constant function in medical imaging. In parallel beam CT, an incident X-ray beam along the straight line $L_{\theta,s}:=\{x\in\mathbb{R}^2 : \Theta\cdot x=s\}$ passes through the object, which lies between the X-ray sources and the detectors (see Figure 1). Here, $s\in\mathbb{R}$ denotes the signed distance of $L_{\theta,s}$ to the origin $O(0,0)$ and $\Theta=(\cos\theta,\sin\theta)$, with $\theta\in[0,\pi)$ denoting the angle between a straight line $l$ and the $x$-axis, where $l$ is perpendicular to $L_{\theta,s}$. We assume that the incident X-ray intensity is $I_0$. For fixed $\theta$ and $s$, the attenuated X-ray intensity $I(\theta,s)$ can be measured by the detector. The relation between the measured $I(\theta,s)$ and the unknown $\mu$ is described by the Lambert-Beer law [1,2]:

    $$I(\theta,s)=I_0\exp\{-\mathcal{R}_\theta[\mu](s)\},$$

    where $\mathcal{R}_\theta[\mu](s)$ is the Radon transform [25] of $\mu$, which is defined as follows:

    $$\mathcal{R}_\theta[\mu](s)=\int_{L_{\theta,s}}\mu(x)\,dx$$

    with dx denoting the length element.

    In medical imaging, we assume that a parallel beam contains $J$ X-rays, and hence $J$ detectors are employed to detect the corresponding attenuated X-rays. We assume that the $J$ X-rays are equidistantly distributed. To be precise, we assume that the signed distances of the X-rays to the origin lie in $[\underline{s},\bar{s}]$, and the signed distance of the $j$-th ($j\in\{1,2,\ldots,J\}$) X-ray to the origin is $s_j=\underline{s}+(j-1)(\bar{s}-\underline{s})/(J-1)$. Moreover, we utilize $K$ angles $\theta_k=(k-1)\pi/K$ for $1\le k\le K$ and $k\in\mathbb{Z}^+$. Then, through the detectors, $M:=JK$ projection data $I(\theta_k,s_j)$ ($1\le j\le J$ and $1\le k\le K$) can be measured.
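
    To make the indexing concrete, the following minimal sketch builds the detector offsets $s_j$, the view angles $\theta_k$, and the flattened measurement index. The values J = 362 and K = 31 are taken from the Shepp-Logan experiment in Section 4, while the signed-distance range is an illustrative assumption; the code uses 0-based indices whereas the text is 1-based.

```python
import numpy as np

J, K = 362, 31                       # detectors per view / number of views (Section 4)
s_lo, s_hi = -1.0, 1.0               # assumed signed-distance range [s_, s_bar]
s = s_lo + np.arange(J) * (s_hi - s_lo) / (J - 1)   # s_j, equidistant offsets
theta = np.arange(K) * np.pi / K                     # theta_k = (k-1)*pi/K
# flattened measurement index m = J*(k-1) + j; here 0-based, so m = J*k + j
m = np.array([J * k + j for k in range(K) for j in range(J)])
assert m.size == J * K               # M := J*K projection values in total
```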

    For ease of explanation, we denote $y_m=\ln\frac{I_0}{I(\theta_k,s_j)}$ for $m=J(k-1)+j$. Then, we have the following:

    $$y_m=\int_{L_{\theta_k,s_j}}\mu(x)\,dx \qquad (2.1)$$

    for $1\le m\le M$. Suppose the field of view (FOV) is the square $\mathrm{FOV}=[a,b]\times[a,b]$, which satisfies $\Omega\subset \mathrm{FOV}$. We discretize the FOV into $N\times N$ pixels $P_{qt}$ ($1\le q,t\le N$). Then, from Eq (2.1), the reconstruction of $\mu$ can be recast as solving the following linear system:

    $$y=Au. \qquad (2.2)$$

    Here, $A=(a_{mp})$ is an $M\times n$ matrix with $n=N^2$, $y=(y_m)$ is an $M\times 1$ vector, and $u=(u_p)$ is an $n\times 1$ vector. To be precise, $a_{mp}$ is the length of the projection line that lies in the pixel $P_{qt}$, i.e., $a_{mp}=|L_{\theta_k,s_j}\cap P_{qt}|$ for $p=N(t-1)+q$, $u_p=\mu(q,t)$, and $y_m$ is defined in Eq (2.1).
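
    For illustration, the sketch below assembles such a projection matrix for a toy geometry. It approximates the intersection lengths $|L_{\theta_k,s_j}\cap P_{qt}|$ by sampling points along each ray (the experiments in this paper use the AIR Tools II package for this step, see Section 4), so the sampling rule, FOV, and flattening convention here are simplifying assumptions rather than the exact construction used in the experiments.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix

def parallel_beam_matrix(N, angles, s_vals, fov=(-1.0, 1.0), step=0.25):
    """Toy parallel-beam projection matrix.

    Row m corresponds to the ray L_{theta,s} = {x : Theta . x = s}; entry a_{mp}
    approximates the length of that ray inside pixel p.  The length is accumulated
    by sampling points along the ray (spacing step * pixel size), a simplification
    of the exact intersection-length computation.
    """
    a, b = fov
    h = (b - a) / N                                  # pixel size
    M = len(angles) * len(s_vals)
    A = lil_matrix((M, N * N))
    for k, theta in enumerate(angles):
        Theta = np.array([np.cos(theta), np.sin(theta)])   # unit normal of the ray
        direc = np.array([-np.sin(theta), np.cos(theta)])  # direction along the ray
        t = np.arange(-np.sqrt(2) * (b - a), np.sqrt(2) * (b - a), step * h)
        for j, s in enumerate(s_vals):
            m = k * len(s_vals) + j                        # flattened measurement index
            pts = s * Theta[None, :] + t[:, None] * direc[None, :]
            ix = np.floor((pts[:, 0] - a) / h).astype(int)  # pixel column
            iy = np.floor((pts[:, 1] - a) / h).astype(int)  # pixel row
            ok = (ix >= 0) & (ix < N) & (iy >= 0) & (iy < N)
            # slow but transparent accumulation: each sample adds its spacing
            for p in np.ravel_multi_index((iy[ok], ix[ok]), (N, N)):
                A[m, p] += step * h
    return csr_matrix(A)
```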

    Note that for sparse-view CT, we generally have $M\ll n$; hence, to solve Eq (2.2), we reformulate it as the following least squares problem:

    $$u=\arg\min_{u}\|Au-y\|_2^2, \qquad (3.1)$$

    where $\|\cdot\|_2$ denotes the standard Euclidean norm in $\mathbb{R}^M$. Since $A^TA$ is ill-conditioned, where $(\cdot)^T$ represents the transpose, we approximate Eq (3.1) by the following well-conditioned problem:

    $$u=\arg\min_{u}\left\{\frac{1}{2}\|Au-y\|_2^2+\lambda\,\mathrm{Reg}(u)\right\}. \qquad (3.2)$$

    The first term on the right-hand side of Eq (3.2) is the data fidelity term, the second term $\mathrm{Reg}(u)$ is the regularization term, and $\lambda$ is the regularization parameter that balances the fidelity and regularization terms. Precisely, instead of seeking the solution in the space $\ell^2$, where there may be infinitely many solutions, we seek the solution in a subspace of it, which is characterized by $\mathrm{Reg}(u)$.

    Since the edge of the internal structure is a key feature in medical imaging, the choice of the term Reg(u) should obey the following rule:

    ● Near the local edges of $\mu$ or $u$, where $|\nabla\mu|$ is large, do as little regularization as possible to preserve the edges.

    ● Perform a normal regularization where $\mu$ is smooth (i.e., where $|\nabla\mu|$ is small) to make Eq (3.2) well-posed.

    ● The range of the reconstructed u coincides with the range of its true value.

    Combining the above three considerations, we define Reg(u) as follows:

    $$\mathrm{Reg}(u)=\|p\odot Du\|_1+\frac{\gamma}{\lambda}\Pi_{[c_1,c_2]}(u), \qquad (3.3)$$

    where $\gamma$ indicates whether the box constraint is used, that is, $\gamma=1$ if the box constraint is used, while $\gamma=0$ if no box constraint is employed. Here, $p=(\omega(D_xu);\omega(D_yu))\in\mathbb{R}^{2n}$ and $Du=(D_xu;D_yu)\in\mathbb{R}^{2n}$, where $D_x, D_y\in\mathbb{R}^{n\times n}$ are the first-order difference operators along the $x$ and $y$ directions, respectively, $\odot$ denotes the componentwise product, and $\omega(\cdot)=\frac{1}{\sqrt{|\cdot|^2+\beta}}$ with $\beta>0$ a small positive number that prevents the denominator from being zero; $\Pi_{[c_1,c_2]}(u)$ is an indicator function, which equals $0$ if $u[i]\in[c_1,c_2]$ for all $i\in\{1,2,\ldots,n\}$, and equals $+\infty$ otherwise. Note that $\Pi_{[c_1,c_2]}$ is capable of enforcing $u$ into the range of the actual attenuation coefficient.
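
    To make the notation concrete, the sketch below builds sparse first-order difference operators $D_x$, $D_y$ for a row-major flattened $N\times N$ image and evaluates the nonlinear weights $p$. The weight formula follows the reconstruction $\omega(t)=1/\sqrt{t^2+\beta}$ given above, and the zero-row boundary handling and the default value of $\beta$ are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def difference_operators(N):
    """Sparse first-order difference operators D_x, D_y for a row-major
    flattened N-by-N image (the last difference in each line is set to zero)."""
    main = -np.ones(N)
    main[-1] = 0.0
    upper = np.ones(N - 1)
    d1 = sp.diags([main, upper], [0, 1], format="csr")   # 1-D forward difference
    I = sp.identity(N, format="csr")
    Dx = sp.kron(I, d1, format="csr")    # differences along x (within each image row)
    Dy = sp.kron(d1, I, format="csr")    # differences along y (across image rows)
    return Dx, Dy

def nwatv_weights(u, Dx, Dy, beta=1e-4):
    """p = (omega(D_x u); omega(D_y u)) with omega(t) = 1/sqrt(t^2 + beta)."""
    gx, gy = Dx @ u, Dy @ u
    return np.concatenate([1.0 / np.sqrt(gx ** 2 + beta),
                           1.0 / np.sqrt(gy ** 2 + beta)])
```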

    The augmented Lagrangian functional of Eq (3.2), together with Eq (3.3), can be expressed as follows:

    $$\mathcal{L}(u,d,p,v;b,e)=\frac{1}{2}\|Au-y\|_2^2+\lambda\|p\odot d\|_1+\langle b,Du-d\rangle+\frac{\rho}{2}\|Du-d\|_2^2+\Pi_{[c_1,c_2]}(v)+\langle e,u-v\rangle+\frac{\alpha}{2}\|u-v\|_2^2, \qquad (3.4)$$

    where d,v are the auxiliary variables, b,e are the Lagrangian multipliers, and ρ,α are the scalar penalty parameters.

    To minimize Eq (3.4), we use the alternating direction method of multipliers (ADMM) [26]. To be precise, for an initial guess $(d^{(0)},p^{(0)},v^{(0)},b^{(0)},e^{(0)})$, $u$ is iteratively updated via the following scheme:

    $$u^{(k+1)}=\arg\min_{u}\mathcal{L}(u,d^{(k)},p^{(k)},v^{(k)};b^{(k)},e^{(k)}); \qquad (3.5a)$$
    $$d^{(k+1)}=\arg\min_{d}\mathcal{L}(u^{(k+1)},d,p^{(k)},v^{(k)};b^{(k)},e^{(k)}); \qquad (3.5b)$$
    $$p^{(k+1)}=(\omega(D_xu^{(k+1)});\omega(D_yu^{(k+1)})); \qquad (3.5c)$$
    $$b^{(k+1)}=b^{(k)}+\rho(Du^{(k+1)}-d^{(k+1)}); \qquad (3.5d)$$
    $$v^{(k+1)}=\min\{\max\{u^{(k+1)}+\tfrac{1}{\alpha}e^{(k)},\,c_1\},\,c_2\}; \qquad (3.5e)$$
    $$e^{(k+1)}=e^{(k)}+\alpha(u^{(k+1)}-v^{(k+1)}). \qquad (3.5f)$$

    For Eq (3.5a) and Eq (3.5b), the minimizers have the following closed form:

    $$u^{(k+1)}=\left[A^TA+\rho D^TD+\alpha I\right]^{-1}\left[A^Ty+\rho D^Td^{(k)}-D^Tb^{(k)}-e^{(k)}+\alpha v^{(k)}\right], \qquad (3.6)$$

    and

    $$d^{(k+1)}[i]=h_{\frac{\lambda|p^{(k)}[i]|}{\rho}}\left(Du^{(k+1)}[i]+\frac{1}{\rho}b^{(k)}[i]\right). \qquad (3.7)$$

    Here, $I$ is an $n\times n$ identity matrix; $b^{(k)}[i]$, $p^{(k)}[i]$, $u^{(k+1)}[i]$, and $d^{(k+1)}[i]$ are the $i$-th elements of $b^{(k)}$, $p^{(k)}$, $u^{(k+1)}$, and $d^{(k+1)}$, respectively, and $h_g(\cdot)$ represents the soft threshold formula, which is defined as follows [27]:

    $$h_g(t)=\begin{cases}(|t|-g)\,\mathrm{sgn}(t), & \text{if } |t|>g,\\ 0, & \text{otherwise},\end{cases}$$

    where sgn is the sign function.
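
    As a quick illustration, a minimal NumPy version of $h_g$ is given below; it accepts a vector threshold, which is what the update (3.7) needs since the threshold $\lambda|p^{(k)}[i]|/\rho$ varies per component.

```python
import numpy as np

def soft_threshold(t, g):
    """Componentwise soft threshold h_g(t) = sign(t) * max(|t| - g, 0);
    g may be a vector, as required by the update (3.7)."""
    return np.sign(t) * np.maximum(np.abs(t) - g, 0.0)
```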

    We end this section by summarizing the aforementioned process as the reconstruction algorithm in the form of the pseudocode shown in Algorithm 1.

    Algorithm 1: The box-constrained NWATV method
    Require: Projection matrix $A$, observed data $y$, and a bound $[c_1,c_2]$ for the original image.
        Parameters: $\rho,\lambda,\beta,\alpha\in\mathbb{R}^+$, a tolerance $\bar{\epsilon}$, and the maximum iteration number $k_{\max}\in\mathbb{Z}^+$.
    Ensure: the reconstructed image $u$
    1: Initialize: $d^{(0)}=0$, $p^{(0)}=(1/\sqrt{\beta})\mathbf{1}$, $b^{(0)}=0$, $v^{(0)}=0$, $e^{(0)}=0$, $u^{(0)}=0$.
    2: for $k=0:k_{\max}-1$ do
    3:     Update $u^{(k+1)}$ using (3.6).
    4:     Update $d^{(k+1)}$ using (3.7).
    5:     Update $p^{(k+1)}$ using (3.5c).
    6:     Update $b^{(k+1)}$ using (3.5d).
    7:     Update $v^{(k+1)}$ using (3.5e).
    8:     Update $e^{(k+1)}$ using (3.5f).
    9:     if $\|u^{(k+1)}-u^{(k)}\|_2<\bar{\epsilon}$ then
    10:         break
    11:     end if
    12: end for
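
    The following Python sketch mirrors Algorithm 1 end to end. It reuses the difference_operators, nwatv_weights, and soft_threshold helpers sketched earlier, solves (3.6) with a sparse direct solver for simplicity (the experiments in Section 4 use GMRES instead), and takes the Shepp-Logan parameter values from Section 4 as defaults; the variable names and default tolerance are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def nwatv_box_admm(A, y, N, c1, c2, lam=0.002, rho=20.0, alpha=5.0,
                   beta=1e-4, tol=1e-6, k_max=300):
    """Box-constrained NWATV reconstruction following Algorithm 1 (a sketch).

    A is a scipy.sparse projection matrix (M x N^2) and y the log-transformed data.
    """
    n = N * N
    Dx, Dy = difference_operators(N)
    D = sp.vstack([Dx, Dy], format="csr")
    u = np.zeros(n)
    d = np.zeros(2 * n)
    b = np.zeros(2 * n)
    v = np.zeros(n)
    e = np.zeros(n)
    p = np.full(2 * n, 1.0 / np.sqrt(beta))        # p^(0) = (1/sqrt(beta)) * 1
    # The matrix in (3.6) does not change over the iterations, so build it once.
    H = (A.T @ A + rho * (D.T @ D) + alpha * sp.identity(n)).tocsc()
    Aty = A.T @ y
    for k in range(k_max):
        u_old = u
        rhs = Aty + rho * (D.T @ d) - D.T @ b - e + alpha * v       # right side of (3.6)
        u = spsolve(H, rhs)                                         # (3.5a)
        d = soft_threshold(D @ u + b / rho, lam * np.abs(p) / rho)  # (3.5b) via (3.7)
        p = nwatv_weights(u, Dx, Dy, beta)                          # (3.5c)
        b = b + rho * (D @ u - d)                                   # (3.5d)
        v = np.clip(u + e / alpha, c1, c2)                          # (3.5e) box projection
        e = e + alpha * (u - v)                                     # (3.5f)
        if np.linalg.norm(u - u_old) < tol:                         # stopping rule
            break
    return u.reshape(N, N)
```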

    In this section, to validate the advantages of the proposed regularization, we perform experiments using the Shepp-Logan numerical model, the actual walnut CT data provided by the Finnish Inverse Problems Society (http://fips.fi/dataset.php), and the clinical CT image provided by The Cancer Imaging Archive (https://www.cancerimagingarchive.net/collection/lungct-diagnosis/). Additionally, the walnut CT data is available on ZENODO (https://zenodo.org/record/1254206). The experimental models are shown in Figure 2.

    Figure 2.  Experimental models we used. (a) is the Shepp-Logan phantom model used in the numerical experiment, (b) shows the walnut model in the actual CT experiment, obtained from the Finnish Inverse Problems Society (FIPS), and (c) is a clinical lung image provided by The Cancer Imaging Archive (TCIA).

    To show the advantages of the proposed regularization, we compare the reconstructions with those of the recently proposed gradient-based L1/L2 method [23] with a box constraint and the nonlinear weighted anisotropic TV regularization [24] with a box constraint. To compare the performance of the reconstructions, we compute the relative errors (including the $L^2$ relative error $RE^{(k)}$ and the $H^1$ relative error $\widetilde{RE}^{(k)}$) and the mean square error $MSE^{(k)}$ at the $k$-th step as follows:

    $$RE^{(k)}=\frac{\|u^{(k)}-u_0\|_2}{\|u_0\|_2},$$
    $$\widetilde{RE}^{(k)}=\sqrt{\frac{\|u^{(k)}-u_0\|_2^2+\|D(u^{(k)}-u_0)\|_2^2}{\|u_0\|_2^2+\|Du_0\|_2^2}},$$

    and

    $$MSE^{(k)}=\frac{\|u^{(k)}-u_0\|_2^2}{n}.$$

    Here, $u^{(k)}$ represents the reconstruction at the $k$-th step. In the Shepp-Logan phantom, $u_0$ represents the ground truth image; in the walnut experiment, it represents the CT image reconstructed using full projections and the filtered back-projection (FBP) method, since we do not know the ground truth image in the actual experiment; in the lung experiment, it represents the actual image from TCIA. Figure 2(a)–(c) illustrate $u_0$ for the Shepp-Logan phantom, the walnut, and the lung experiments, respectively.
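
    A minimal implementation of these three error measures is sketched below; the square root in the $H^1$-type error follows the reconstruction of the formula above, and $D$ is the stacked difference operator $(D_x;D_y)$ from the earlier sketch.

```python
import numpy as np

def relative_errors(u_k, u0, D):
    """RE (L2), the H1-type relative error with the gradient term, and MSE."""
    diff = u_k - u0
    re = np.linalg.norm(diff) / np.linalg.norm(u0)
    num = np.linalg.norm(diff) ** 2 + np.linalg.norm(D @ diff) ** 2
    den = np.linalg.norm(u0) ** 2 + np.linalg.norm(D @ u0) ** 2
    re_h1 = np.sqrt(num / den)
    mse = np.linalg.norm(diff) ** 2 / u0.size
    return re, re_h1, mse
```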

    Besides, we also compare the peak signal-to-noise ratio $PSNR^{(k)}$ and the structural similarity index $SSIM^{(k)}$ [28] at the $k$-th step, which are defined as follows:

    $$PSNR^{(k)}=10\log_{10}\frac{\max\left(u^{(k)}\odot u^{(k)}\right)}{MSE^{(k)}},$$

    and

    $$SSIM^{(k)}=\frac{(2\mu_k\mu_0+C_1)(2\sigma_{k0}+C_2)}{(\mu_k^2+\mu_0^2+C_1)(\sigma_k^2+\sigma_0^2+C_2)},$$

    where $\odot$ represents the componentwise multiplication, $\mu_k$ and $\sigma_k$ represent the local mean and the local standard deviation of $u^{(k)}$, respectively, and $\mu_0$ and $\sigma_0$ represent the local mean and the local standard deviation of $u_0$, respectively; moreover, $\sigma_{k0}$ denotes the cross-covariance of $u^{(k)}$ and $u_0$, and $C_1=10^{-4}$, $C_2=9\times10^{-4}$ are set to the default values of the Matlab built-in function "ssim".
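
    The two indices can be computed as in the sketch below; the SSIM call uses scikit-image's structural_similarity in place of Matlab's built-in "ssim", with data_range=1.0 so that its constants reproduce $C_1=10^{-4}$ and $C_2=9\times10^{-4}$ for images scaled to $[0,1]$.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(u_k, mse):
    """PSNR as defined above: 10 * log10( max(u_k * u_k) / MSE )."""
    return 10.0 * np.log10(np.max(u_k * u_k) / mse)

def ssim(u_k_img, u0_img):
    """SSIM between two 2-D images; data_range=1.0 gives C1=1e-4, C2=9e-4
    (the Matlab defaults) for images scaled to [0, 1]."""
    return structural_similarity(u_k_img, u0_img, data_range=1.0)
```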

    In the numerical experiment, we set the size of the reconstructed images to 256 × 256 pixels and the number of detectors to $J=362$. In the reconstruction of the actual walnut experiment, the size of the reconstructed images is set to 164 × 164, and the number of detectors is also 164. For the lung image, we use the first image of patient R_172 in the dataset. The original size is 512 × 512, though we evenly subsample it to obtain a ground truth image of 128 × 128. The number of detectors is set to 181. Using the Radon transform, we obtain the projection data corresponding to the lung image. To solve Eq (3.6), we use the generalized minimal residual algorithm (GMRES) [29] to accelerate the computation.
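
    A matrix-free way to apply GMRES to (3.6) with SciPy is sketched below; wrapping $A^TA+\rho D^TD+\alpha I$ in a LinearOperator avoids forming the product matrices explicitly. The warm start and the reliance on SciPy's default tolerances are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_u_update(A, D, rhs, rho, alpha, u0=None):
    """Solve [A^T A + rho D^T D + alpha I] u = rhs in Eq (3.6) with GMRES,
    applying the operator matrix-free instead of forming A^T A explicitly."""
    n = rhs.shape[0]

    def matvec(x):
        return A.T @ (A @ x) + rho * (D.T @ (D @ x)) + alpha * x

    H = LinearOperator((n, n), matvec=matvec)
    u, info = gmres(H, rhs, x0=u0)     # warm start with the previous iterate
    if info != 0:
        print("GMRES stopped before reaching the default tolerance, info =", info)
    return u
```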

    The reconstructions are carried out using Matlab 2018a (The MathWorks, Inc., Natick, MA, USA) on a workstation with a 1.60 GHz Intel(R) Core(TM) i5-8250U CPU, 8.00 GB memory, and the Windows 10 operating system. Additionally, we use the MATLAB package AIR Tools Ⅱ to simulate the parallel beam CT scanning [30].

    Under box constraints, we compare the results of the L1/L2 method and the nonlinear weighted anisotropic TV regularization. In all experiments, we set the maximum numbers of external and internal iterations in the box-constrained L1/L2 method to 300 and 5, respectively. For fair comparisons, the ranges of the other parameters are set according to [23] to minimize the $L^2$ relative error (RE) and to obtain the best performance. The number of iterations in the NWATV (without box constraint) and the box-constrained NWATV regularizations is set to 300. We set the candidate sets of parameters to $\lambda\in\{0.002,0.004,0.006,0.008,0.01\}$, $\rho\in\{20,40,60,200,400,600\}$, and $\alpha\in\{5,20,40,60\}$. The selection of parameters has been carefully optimized to achieve a balance between minimizing the RE and optimizing the visual effects. Henceforth, in the figures and tables, we use L1/L2-box and NWATV-box as abbreviations of the box-constrained L1/L2 and the box-constrained NWATV methods, respectively.

    First, we consider the effect of the box constraint on the NWATV regularization. We consider a parallel beam CT reconstruction with 31 angles uniformly taken from 0º to 150º, and the noise level is 0.5%. The box constraint is [0, 1]. Therefore, the sampling size is 362 × 31. We do our best to choose the parameters such that RE attains its minimum value. Precisely, we choose ρ = 20, λ = 0.002, α = 5 in the box-constrained NWATV, and ρ = 20, λ = 0.004 in NWATV. The reconstructed images are shown in Figure 3, and the corresponding relative errors (RE and $\widetilde{RE}$), mean square errors (MSE), peak signal-to-noise ratios (PSNR), SSIM values, and CPU times are shown in Table 1.

    Figure 3.  Effect of box constraint in NWATV reconstruction. The grayscale window is [0, 1]. Left: Reconstruction result under box constraint. Right: Reconstruction result without box constraint.
    Table 1.  Numerical results of the Shepp-Logan model using the NWATV reconstruction with and without box constraint.
    RE ~RE MSE PSNR SSIM CPU time (s)
    NWATV-box 0.042 0.077 1.0826×10⁻⁴ 39.670 0.987 264.279
    NWATV 0.046 0.079 1.3087×10⁻⁴ 40.976 0.947 507.000


    The results show that, with a shorter reconstruction time, the box-constrained NWATV regularization performs better than the version without the box constraint. In Figure 4, we illustrate the evolution of $RE^{(k)}$ and $SSIM^{(k)}$ with the iteration step $k$. The figure clearly shows that the box constraint improves the convergence behavior of the NWATV method.

    Figure 4.  The effect of box constraint on the convergence of the iteration scheme of NWATV. The left is the RE and the right is the SSIM.

    Next, we show that the proposed regularization can reconstruct a satisfactory image using different numbers of sampling angles and is robust against Gaussian random noise. We evenly take 90, 60, and 30 angles from 0º to 179º for comparison. For each case, we add Gaussian random noise with levels 0.5%, 1%, 1.5%, and 2%. The box constraint is set to [0, 1].

    We list the specific parameters selected in Table 2. The corresponding reconstruction results are shown in Figures 5–7. We show the corresponding numerical results in Table 3.

    Table 2.  Choices of parameters ρ, λ and α.
    sampling size noise level (%) ρ λ α
    362 × 90 0.5 20 0.004 60
    1 200 0.01 5
    1.5 400 0.01 5
    2 600 0.01 40
    362 × 60 0.5 20 0.004 60
    1 200 0.01 5
    1.5 400 0.01 5
    2 600 0.01 5
    362 × 30 0.5 60 0.002 60
    1 200 0.002 5
    1.5 400 0.002 20
    2 600 0.002 20

    Table 3.  Performance of the box-constrained L1/L2 and the box-constrained NWATV regularization for Shepp-Logan phantom for different sampling sizes and noise levels.
    sampling size noise level (%) RE ~RE MSE PSNR SSIM CPU time (s)
    L1/L2-box 362 × 90 0.5 0.014 0.025 1.2103×10⁻⁵ 49.171 0.995 5284.530
    1 0.029 0.051 4.9298×10⁻⁵ 43.072 0.991 8825.227
    1.5 0.035 0.064 7.5924×10⁻⁵ 41.196 0.987 5238.058
    2 0.054 0.099 1.7666×10⁻⁴ 37.530 0.982 5544.499
    362 × 60 0.5 0.016 0.029 1.4968×10⁻⁵ 48.248 0.995 1383.701
    1 0.041 0.073 1.0374×10⁻⁴ 39.841 0.987 5755.229
    1.5 0.072 0.136 3.1424×10⁻⁴ 35.031 0.983 5202.258
    2 0.090 0.169 4.8614×10⁻⁴ 33.143 0.980 7313.060
    362 × 30 0.5 0.022 0.040 2.8187×10⁻⁵ 45.500 0.993 2419.169
    1 0.049 0.085 1.4510×10⁻⁴ 38.384 0.968 2537.593
    1.5 0.090 0.156 4.9035×10⁻⁵ 33.100 0.924 4705.849
    2 0.144 0.247 0.0013 29.013 0.861 2627.596
    NWATV-box 362 × 90 0.5 0.018 0.034 1.8941×10⁻⁵ 47.229 0.996 239.414
    1 0.035 0.066 7.5532×10⁻⁵ 41.222 0.991 172.661
    1.5 0.052 0.099 1.6508×10⁻⁴ 37.829 0.988 150.948
    2 0.073 0.138 3.2454×10⁻⁴ 34.909 0.982 134.970
    362 × 60 0.5 0.024 0.045 3.4026×10⁻⁵ 44.728 0.994 161.004
    1 0.042 0.077 1.0837×10⁻⁴ 39.651 0.988 148.216
    1.5 0.068 0.129 2.7866×10⁻⁴ 35.575 0.984 130.353
    2 0.088 0.166 4.6514×10⁻⁴ 33.344 0.979 140.322
    362 × 30 0.5 0.039 0.071 9.1074×10⁻⁵ 40.415 0.989 91.777
    1 0.073 0.136 3.1929×10⁻⁴ 34.982 0.979 136.941
    1.5 0.105 0.196 6.6875×10⁻⁴ 31.783 0.968 125.862
    2 0.134 0.248 0.0011 29.658 0.956 137.372

    Figure 5.  Reconstruction results using the box-constrained NWATV and the box-constrained L1/L2 methods with the sampling size 362 × 90. The top row shows the results of the box-constrained L1/L2, while the bottom row shows the results of the box-constrained NWATV. From left to right are the reconstruction results with noise levels of 0.5%, 1%, 1.5%, and 2%, respectively.
    Figure 6.  Reconstruction results using the box-constrained NWATV and the box-constrained L1/L2 methods with the sampling size 362 × 60. Each panel has a similar meaning to that in Figure 5.
    Figure 7.  Reconstruction results using the box-constrained NWATV and the box-constrained L1/L2 methods with the sampling size 362 × 30. Each panel has a similar meaning to that in Figure 5.

    As we can see from the reconstructions, the box-constrained NWATV achieves numerical results similar to those of the box-constrained L1/L2 method at each sampling size. However, the CPU time is reduced by at least 8 times. Taking visual effects into account, in the case of few angles (362 × 30), as the noise level increases, the box-constrained NWATV has more advantages than the box-constrained L1/L2 in noise removal and detail recovery.

    In this section, we first validate the performance of the proposed regularization method using the walnut data from the experimental datasets [31], which include the complete projection matrix $\bar{A}\in\mathbb{R}^{19680\times26896}$ and the projection data $\bar{y}\in\mathbb{R}^{164\times120}$ (3º per projection). We evenly subsample 50 angles from 0º to 149º by employing the first 50 projections. Then, the corresponding projection matrix $A$ is subsampled accordingly so that $A\in\mathbb{R}^{8200\times26896}$. We use a high-resolution filtered back-projection reconstruction [31], and then evenly subsample the obtained image to obtain the reference image $u_0$, which is shown in Figure 2(b). From the reference image, we estimate the box constraint to be [0, 0.6]. Considering the magnitudes of $A^TA$ and $D^TD$, the parameters we take here are as follows: $\rho=0.4$, $\lambda=0.0001$, and $\alpha=5$. For the parameters in the box-constrained L1/L2 method, we balance noise removal and detail recovery, choosing the set of parameters with the best visual effects. The results are shown in Figure 8, and the corresponding numerical results are shown in Table 4. The arrows in Figure 8 show that the box-constrained NWATV method performs better than the box-constrained L1/L2 method in preserving edges and details.

    Table 4.  Comparisons of the performances of two box-constrained methods using the walnut data.
    RE ~RE MSE PSNR SSIM CPU time (s)
    L1/L2-box 0.174 0.352 0.0013 24.532 0.804 1568.026
    NWATV-box 0.158 0.317 0.0010 25.388 0.798 50.191

    Figure 8.  CT reconstruction of walnut. The image size is 164 × 164 and the grayscale window is [0, 0.6]. Left: Reconstruction of L1/L2-box. Right: Reconstruction of NWATV-box.

    Finally, we discuss the application of the two methods to clinical data [32]. We uniformly subsample the 512 × 512 image to obtain a 128 × 128 image, which is shown in Figure 2(c), and we represent it as the column vector $u_0\in\mathbb{R}^{128^2}$. We consider parallel beam scanning and set the sampling sizes to 181 × 60 and 181 × 30 to obtain the projection matrix $A$. The projection is $y=Au_0$. Since there is already noise in the lung image, we do not add additional noise. The box constraint is taken as [0, 0.06]. In the experiment with the box-constrained NWATV, we set the parameters as $\lambda=0.001$, $\rho=3\times10^{9}$, and $\alpha=0.01$ for both the 30- and 60-view projections. As for the box-constrained L1/L2 method, we balance noise removal and detail recovery, choosing a set of parameters that gives the reconstructed image the best visual effect. The results are shown in Figure 9, and the corresponding numerical results are shown in Table 5.

    Table 5.  Comparisons of the performances of two box-constrained methods using the human lung image.
    sampling size model RE ~RE MSE PSNR SSIM CPU time (s)
    181 × 60 L1/L2-box 0.087 0.177 1.1176×10⁻⁶ 35.447 0.999 4727.193
    NWATV-box 0.048 0.107 3.4485×10⁻⁷ 40.635 0.9995 182.108
    181 × 30 L1/L2-box 0.120 0.219 2.1427×10⁻⁶ 32.589 0.997 2592.474
    NWATV-box 0.097 0.195 1.4111×10⁻⁶ 34.419 0.998 146.728

    Figure 9.  Reconstruction results of the human lung image. The first row depicts the reconstructed images using the box-constrained L1/L2 (left) and box-constrained NWATV (right) methods for the sampling size 181 × 60. The second row illustrates the corresponding results for the sampling size 181 × 30.

    For further comparison, we also illustrate the profiles of the reconstructed walnut and human lung images along the dashed lines shown in Figure 10. From the profiles, it is clear that the proposed model preserves edges and details better.

    Figure 10.  The profiles of the reconstructed walnut and human lung images along the dashed lines in the left figures. In the right figures, the red line illustrates the values of $u_0$ along the dashed line, the blue line shows the values of the reconstructions using the box-constrained L1/L2 method, and the green line depicts the reconstructions using the box-constrained NWATV method along the dashed lines. The sampling size of the human lung imaging is set to 181 × 30.

    In conclusion, the aforementioned experiments show that the box-constrained NWATV method produces results comparable to, and visually better than, those of the recently developed box-constrained L1/L2 method, while the CPU time is significantly decreased.

    Sparse-view CT and limited angle CT are two important ways to reduce the risk of radiation exposure in medical CT scanning. The recent development of artificial intelligence promotes the medical applications of low dose CT [33]. However, recent research has reported the instabilities of such methods [11,34]. Moreover, to gather high quality training data, the conventional regularization-based reconstruction methods are still quite necessary. This is because, to obtain the labeled data without using an elegant regularization, data obtained from full projections would have to be employed, which exposes the patient to a high risk of radiation.

    Another possible model-based approach is the sinogram inpainting method [35]. However, it can cause other artifacts. Hence, a proper regularization in the image reconstruction process is a mild way to produce high performance CT images for diagnoses and to gather training data for AI methodology. The works [36,37] provide other ways of combining model-based and data-driven methods for CT image reconstruction.

    In regularization-based reconstructions, the choice of the regularization parameter is generally very difficult and often criticized. However, from the point of view of dimensionality reduction, we can produce a high performance CT image of a much higher dimension given only a small number of parameters. On the one hand, for the choices of the parameters, there are mathematically the discrepancy principle methods and the statistical methods [38]. On the other hand, there are some empirical rules for the parameters, which are listed as follows.

    ● As in [24], the performance of the reconstruction (i.e., the relative error) depends only on $\lambda/\rho$ rather than on the individual values of $\lambda$ and $\rho$.

    ● The values of $\rho$ and $\alpha$ should be determined by the order of magnitude of $A^TA$ and the number of pixels, in such a way that $A^TA+\rho D^TD+\alpha I$ does not change its order of magnitude.

    ● The optimal $\lambda/\rho$ value ranges from $10^{-5}$ to $10^{-4}$, since the relative error is minimized with such a value.

    ● The optimal range of $\lambda$ is approximately $10^{-3}$ to $10^{-2}$.

    In Figure 11, we depict the evolution of RE with $\lambda/\rho$ and $\alpha$ for different $\rho$. From the figures, we can see that for each $\rho$, RE is invariant under changes of $\alpha$ when $\lambda/\rho$ is fixed.

    Figure 11.  Asymptotic behavior of RE with $\lambda/\rho$ and $\alpha$. The range of $\lambda$ is $\{0.002,0.004,0.006,0.008,0.01,0.02,0.04,0.06,0.08,0.1\}$.
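
    In practice, these rules narrow the search, and one simple way to automate the remaining tuning is a grid search over the candidate sets of Section 4 that minimizes RE, as in the minimal sketch below. It assumes the nwatv_box_admm routine sketched after Algorithm 1 and a known reference image; both are illustrative assumptions rather than the exact selection procedure used in the paper.

```python
import itertools
import numpy as np

def grid_search(A, y, N, u_true, c1, c2,
                lams=(0.002, 0.004, 0.006, 0.008, 0.01),
                rhos=(20, 40, 60, 200, 400, 600),
                alphas=(5, 20, 40, 60)):
    """Pick the (lambda, rho, alpha) triple minimizing the L2 relative error RE."""
    best_params, best_re = None, np.inf
    for lam, rho, alpha in itertools.product(lams, rhos, alphas):
        u = nwatv_box_admm(A, y, N, c1, c2, lam=lam, rho=rho, alpha=alpha)
        re = np.linalg.norm(u.ravel() - u_true) / np.linalg.norm(u_true)
        if re < best_re:
            best_params, best_re = (lam, rho, alpha), re
    return best_params, best_re
```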

    In this section, we explain the computational load of the proposed box-constrained NWATV method and compare it with that of the box-constrained L1/L2 method [23]. To guarantee the convergence of the box-constrained L1/L2 method, both inner and outer loops have to be used, while in the proposed NWATV method, a single loop already produces good convergence. Note that in each loop, the most expensive computation is the solve in (3.6). As in [23], in this paper, the maximum number of inner loops is set to 5, which means that for the same number of outer loops, the computational load is at least 5 times that of the proposed method.

    In this paper, we proposed a box-constrained nonlinear weighted anisotropic TV regularization method and used it in sparse-view CT. Using the Shepp-Logan phantom and the actual walnut models, we validated that the proposed regularization can reconstruct a more accurate CT image than the most recently developed L1/L2 regularization method. To be precise, the reconstruction time is reduced by more than 8 times while maintaining similar relative errors and structural similarity indices. The proposed method shows advantages especially when the number of sampling angles is less than 60 and the noise level is more than 1%. Additionally, the numerical simulations display a good convergence performance of the proposed iterative scheme. Moreover, since the pixel values of digital images are mostly limited to a certain range, it is reasonable to add box constraints in image processing [1,23]. Note that errors/noises in an iteration-based numerical scheme may accumulate with the iterations, and the box constraint plays a role in suppressing this accumulation to some extent. Hence, the box constraint helps the iteration converge to the critical point of the functional $\mathcal{L}$, as shown in Figure 4. We note that in the box-constrained L1/L2 method, the authors observed a similar role of the box constraint [23].

    Future work should include the mathematical theory of the convergence of the proposed iterative scheme. Moreover, in this paper, we listed some rules for the selection of the parameters $\lambda$, $\rho$, and $\alpha$ through manual tuning. In the future, we could develop an automatic way to select the optimal parameters by minimizing a discrepancy function over an admissible set using a combinatorial optimization method. If the noise level $\delta$ is known, the discrepancy function could be defined as $F(\lambda,\rho,\alpha)=\big|\|Au_{\lambda,\rho,\alpha}-y\|-\tau\delta\big|$ for a given $\tau>1$ to avoid under-regularization. On the other hand, if the noise level is not known, the Hanke-Raus function $F(\lambda,\rho,\alpha)=\|Au_{\lambda,\rho,\alpha}-y\|/\lambda$ could be used [39]. To minimize the above functions, we first need to select a good initial guess $\lambda_0,\rho_0,\alpha_0$, and the optimal parameters could then be obtained by minimizing the discrepancy functional using an alternating direction iteration scheme. Furthermore, the proposed method can be further used in the areas of metal artifact reduction (MAR) in CT reconstruction [40], beam-hardening artifact reduction [41], limited angle artifact reduction [1], and so on.
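
    A minimal sketch of the two parameter-selection criteria mentioned above is given below. The exact forms follow the formulas as reconstructed here (the norms and signs were garbled in extraction, so they should be treated as assumptions), with u denoting a reconstruction obtained for a given parameter triple.

```python
import numpy as np

def discrepancy(A, u, y, tau, delta):
    """| ||A u - y|| - tau * delta | : small when the residual matches the known noise level."""
    return abs(np.linalg.norm(A @ u - y) - tau * delta)

def hanke_raus(A, u, y, lam):
    """Noise-level-free criterion ||A u - y|| / lambda, following the form sketched above."""
    return np.linalg.norm(A @ u - y) / lam
```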

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the Shandong Provincial Outstanding Youth Fund (Grant No. ZR2018JL002) and the National Natural Science Foundation of China (Grant No. 12271312).

    The authors declare there is no conflict of interest.



    [1] P. C. Hansen, J. S. Jørgensen, W. R. B. Lionheart, Computed Tomography: Algorithms, Insight, and Just Enough Theory, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2021. https://doi.org/10.1137/1.9781611976670
    [2] F. Natterer, The Mathematics of Computerized Tomography, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2001. https://doi.org/10.1137/1.9780898719284
    [3] C. M. Hyun, T. Bayaraa, S. M. Lee, H. Jung, J. K. Seo, Deep learning for dental cone-beam computed tomography, in Deep Learning and Medical Applications (eds. J.K. Seo), Springer Nature Singapore, (2023), 101–175. https://doi.org/10.1007/978-981-99-1839-3_3
    [4] A. C. Kak, M. Slaney, Principles of Computerized Tomographic Imaging, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2001. https://doi.org/10.1137/1.9780898719277
    [5] E. J. Hall, D. J. Brenner, Cancer risks from diagnostic radiology, Brit. J. Radiol., 81 (2008), 362–378. https://doi.org/10.1259/bjr/01948454 doi: 10.1259/bjr/01948454
    [6] M. Chen, Y. F. Pu, Y. C. Bai, Low-dose CT image denoising using residual convolutional network with fractional TV loss, Neurocomputing, 452 (2021), 510–520. https://doi.org/10.1016/j.neucom.2020.10.004 doi: 10.1016/j.neucom.2020.10.004
    [7] Z. Hu, D. Liang, D. Xia, H. Zheng, Compressive sampling in computed tomography: method and application, Nucl. Instrum. Meth. A., 748 (2014), 26–32. https://doi.org/10.1016/j.nima.2014.02.026 doi: 10.1016/j.nima.2014.02.026
    [8] K. H. Jin, M. T. McCann, E. Froustey, M. Unser, Deep convolutional neural network for inverse problems in imaging, IEEE. T. Image. Process., 26 (2017), 4509–4522. https://doi.org/10.1109/TIP.2017.2713099 doi: 10.1109/TIP.2017.2713099
    [9] Y. Han, J. C. Ye, Framing U-Net via deep convolutional framelets: Application to sparse-view CT, IEEE Trans. Med. Imag., 37 (2018), 1418–1429. https://doi.org/10.1109/TMI.2018.2823768 doi: 10.1109/TMI.2018.2823768
    [10] S. Xie, X. Zheng, Y. Chen, L. Xie, J. Liu, Y. Zhang, et al., Artifact removal using improved GoogLeNet for sparse-view CT reconstruction, Sci. Rep., 8 (2018), 1–9. https://doi.org/10.1038/s41598-018-25153-w doi: 10.1038/s41598-018-25153-w
    [11] E. Y. Sidky, I. Lorente, J. G. Brankov, X. Pan, Do CNNs solve the CT inverse problem?, IEEE Trans. Biomed. Eng., 68 (2021), 1799–1810. https://doi.org/10.1109/TBME.2020.3020741 doi: 10.1109/TBME.2020.3020741
    [12] A. Faridani, Introduction to the mathematics of computed tomography, in Inside Out: Inverse Problems and Applications, Cambridge Univ. Press, (2003), 1–46. https://doi.org/10.4171/PRIMS/47.1.1
    [13] S. J. LaRoque, E. Y. Sidky, X. Pan, Accurate image reconstruction from few-view and limited-angle data in diffraction tomography, J. Opt. Soc. Am. A, 25 (2008), 1772–1782. https://doi.org/10.1364/JOSAA.25.001772 doi: 10.1364/JOSAA.25.001772
    [14] M. Burger, S. Osher, A guide to the TV zoo, in Level Set and PDE Based Reconstruction Methods in Imaging (eds. M.Burger and S.Osher), Springer International Publishing, (2013), 1–70. https://doi.org/10.1007/978-3-319-01712-9_1
    [15] X. Jin, L. Li, Z. Chen, L. Zhang, Y. Xing, Anisotropic total variation for limited-angle CT reconstruction, in IEEE Nuclear Science Symposuim & Medical Imaging Conference, (2010), 2232–2238. https://doi.org/10.1109/NSSMIC.2010.5874180
    [16] Z. Tian, X. Jia, K. Yuan, T. Pan, S. B. Jiang, Low-dose CT reconstruction via edge-preserving total variation regularization, Phys. Med. Biol., 56 (2011), 5949–5967. https://doi.org/10.1088/0031-9155/56/18/011 doi: 10.1088/0031-9155/56/18/011
    [17] Y. Liu, J. Ma, Y. Fan, Z. Liang, Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction, Phys. Med. Biol., 57 (2012), 7923–7956. https://doi.org/10.1088/0031-9155/57/23/7923 doi: 10.1088/0031-9155/57/23/7923
    [18] Y. Xi, P. Zhou, H. Yu, T. Zhang, L. Zhang, Z. Qiao, et al., Adaptive-weighted high order TV algorithm for sparse-view CT reconstruction, Med. Phys., 50 (2023), 5568–5584. https://doi.org/10.1002/mp.16371 doi: 10.1002/mp.16371
    [19] Y. Wang, Z. Qi, A new adaptive-weighted total variation sparse-view computed tomography image reconstruction with local improved gradient information, J. X-Ray Sci. Technol., 26 (2018), 957–975. https://doi.org/10.3233/XST-180412 doi: 10.3233/XST-180412
    [20] S. Niu, Y. Gao, Z. Bian, J. Huang, W. Chen, G. Yu, et al., Sparse-view X-ray CT reconstruction via total generalized variation regularization, Phys. Med. Biol., 59 (2014), 2997–3017. https://doi.org/10.1088/0031-9155/59/12/2997 doi: 10.1088/0031-9155/59/12/2997
    [21] Z. Qu, X. Zhao, J. Pan, P. Chen, Sparse-view CT reconstruction based on gradient directional total variation, Meas. Sci. Technol., 30 (2019), 1–11. https://doi.org/10.1088/1361-6501/ab09c6 doi: 10.1088/1361-6501/ab09c6
    [22] L. Zhang, H. Zhao, W. Ma, J. Jiang, L. Zhang, J. Li, et al., Resolution and noise performance of sparse view X-ray CT reconstruction via Lp-norm regularization, Phys. Medica., 52 (2018), 72–80. https://doi.org/10.1016/j.ejmp.2018.04.396 doi: 10.1016/j.ejmp.2018.04.396
    [23] C. Wang, M. Tao, J. Nagy, Y. Lou, Limited-angle CT reconstruction via the L1/L2 minimization, SIAM J. Imaging. Sci., 14 (2021), 749–777. https://doi.org/10.1137/20M1341490 doi: 10.1137/20M1341490
    [24] Y. Song, Y. Wang, D. Liu, A nonlinear weighted anisotropic total variation regularization for electrical impedance tomography, IEEE Trans. Instrum. Meas., 71 (2022), 1–13. https://doi.org/10.1109/TIM.2022.3220288 doi: 10.1109/TIM.2022.3220288
    [25] J. Radon, On the determination of functions from their integral values along certain manifolds, IEEE Trans. Med. Imag., 5 (1986), 170–176. https://doi.org/10.1109/TMI.1986.4307775 doi: 10.1109/TMI.1986.4307775
    [26] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends. Mach. Learn., 3 (2011), 1–122. https://doi.org/10.1561/2200000016 doi: 10.1561/2200000016
    [27] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pur. Appl. Math., 57 (2004), 1413–1457. https://doi.org/10.1002/cpa.20042 doi: 10.1002/cpa.20042
    [28] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE. T. Image. Process., 13 (2004), 600–612. https://doi.org/10.1109/TIP.2003.819861 doi: 10.1109/TIP.2003.819861
    [29] Y. Saad, M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), 856–869. https://doi.org/10.1137/0907058 doi: 10.1137/0907058
    [30] P. C. Hansen, J. S. Jørgensen, AIR Tools Ⅱ: Algebraic iterative reconstruction methods, improved implementation, Numer. Algor., 79 (2018), 107–137. https://doi.org/10.1007/s11075-017-0430-x doi: 10.1007/s11075-017-0430-x
    [31] K. Hämäläinen, L. Harhanen, A. Kallonen, A. Kujanpää, E. Niemi, S. Siltanen, Tomographic X-ray data of a walnut, preprint, arXiv: 1502.04064. https://doi.org/10.48550/arXiv.1502.04064
    [32] O. Grove, A. E. Berglund, M. B. Schabath, H. J. W. L. Aerts, A. Dekker, H. Wang, et al., Quantitative computed tomographic descriptors associate tumor shape complexity and intratumor heterogeneity with prognosis in lung adenocarcinoma, PLoS ONE, 10 (2015), 1–14. https://doi.org/10.1371/journal.pone.0118261 doi: 10.1371/journal.pone.0118261
    [33] G. Wang, J. C. Ye, B. D. Man, Deep learning for tomographic image reconstruction, Nat. Mach. Intell., 2 (2020), 737–748. https://doi.org/10.1038/s42256-020-00273-z doi: 10.1038/s42256-020-00273-z
    [34] V. Antun, F. Renna, C. Poon, B. Adcock, A. C. Hansen, On instabilities of deep learning in image reconstruction and the potential costs of AI, P. Natl. A. Sci., 117 (2020), 30088–30095. https://doi.org/10.1073/pnas.1907377117 doi: 10.1073/pnas.1907377117
    [35] S. Li, Q. Cao, Y. Chen, Y. Hu, L. Luo, C. Toumoulin, Dictionary learning based sinogram inpainting for CT sparse reconstruction, Optik, 125 (2014), 2862–2867. https://doi.org/10.1016/j.ijleo.2014.01.003 doi: 10.1016/j.ijleo.2014.01.003
    [36] E. Kobler, A. Effland, K. Kunisch, T. Pock, Total deep variation: A stable regularization method for inverse problems, IEEE. T. Pattern. Anal., 44 (2022), 9163–9180. https://doi.org/10.1109/TPAMI.2021.3124086 doi: 10.1109/TPAMI.2021.3124086
    [37] J. Xu, F. Noo, Convex optimization algorithms in medical image reconstruction—in the age of AI, Phys. Med. Biol., 67 (2022), 1–77. https://doi.org/10.1088/1361-6560/ac3842 doi: 10.1088/1361-6560/ac3842
    [38] J. Kaipio, E. Somersalo, Statistical and Computational Inverse Problems, Springer, New York, 2005. https://doi.org/10.1007/b138659
    [39] L. Xu, L. Li, W. Wang, Y. Gao, CT image reconstruction algorithms based on the Hanke Raus parameter choice rule, Inverse Probl. Sci. Eng., 28 (2020), 87–103. https://doi.org/10.1080/17415977.2019.1628739 doi: 10.1080/17415977.2019.1628739
    [40] H. Park, J. K. Choi, J. K. Seo, Characterization of metal artifacts in X-ray computed tomography, Commun. Pure Appl. Math., 70 (2017), 2191–2217. https://doi.org/10.1002/cpa.21680 doi: 10.1002/cpa.21680
    [41] S. M. Lee, J. K. Seo, Y. E. Chung, J. Baek, H. S. Park, Technical Note: A model-based sinogram correction for beam hardening artifact reduction in CT, Med. Phys., 44 (2017), e147–e152. https://doi.org/10.1002/mp.12218 doi: 10.1002/mp.12218
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)