
Watertight 2-manifold 3D bone surface model reconstruction from CT images based on visual hyper-spherical mapping

  • This paper proposes a general algorithm to reconstruct a watertight 2-manifold 3D bone surface model from CT images based on visual hyper-spherical mapping. The reconstruction algorithm includes three main steps: two-step thresholding, initial watertight surface reconstruction, and shape optimization. Firstly, volume sampling points of the target bone within a given narrower threshold range are extracted by thresholding combined with 3D morphology operations. Secondly, visible points near the bone's outer surface are extracted from the corresponding volume sampling points by a hyper-spherical projection mapping method. Thirdly, an implicit surface reconstruction algorithm is applied to the extracted visible surface points to obtain an initial watertight 3D bone surface model, which is used as the deformation model in the subsequent accurate bone surface model generation stage. Finally, the initial surface model is deformed under given constraints according to the segmentation data with a wider threshold range, in order to achieve an accurate watertight 3D bone surface model. Experimental and comparison results show that the proposed algorithm can reconstruct watertight 3D bone surface models from CT images, and that local details of the bone surface are restored accurately for the cases used in this paper.

    Citation: Tianran Yuan, Hongsheng Zhang, Hao Liu, Juan Du, Huiming Yu, Yimin Wang, Yabin Xu. Watertight 2-manifold 3D bone surface model reconstruction from CT images based on visual hyper-spherical mapping[J]. Mathematical Biosciences and Engineering, 2021, 18(2): 1280-1313. doi: 10.3934/mbe.2021068



    Bone medical images generated from Computed Tomography (CT) scanning of patients are widely used as a powerful tool for the diagnosis and analysis of bone mineral density, bone injury, bone tumors, etc. [1,2,3]. Since CT imaging data consist of multi-layer 2D images and cannot provide enough 3D spatial geometry and topology information, watertight 2-manifold 3D bone surface model reconstruction from 2D CT images is crucial for follow-up surgical planning, finite element modeling and analysis, and so on [4,5,6]. Because exposure to radiation at high doses may harm the human body, low doses and thick slices (greater than 0.625 mm) are often used in clinical CT scans. Consequently, CT artifacts [7,8] introduced during scanning inevitably result in problems such as noise, blurry tissue boundaries, and locally non-uniform distribution of HU (Hounsfield Unit) values in the CT images, as shown in Figure 1. The HU values corresponding to the femoral head and femoral condyle regions show an obvious non-uniform distribution (Figure 1(A), (D) and (E)), and 3D surface models reconstructed from such CT images therefore have precision problems, such as redundant and abnormal shapes, holes, and stair-step shapes (Figure 1(H) and (I)). 3D bone surface model reconstruction from CT images includes two key steps: image segmentation [9] (Figure 1(B)) and surface model generation [10] (Figure 1(F) and (G)). CT image segmentation methods can be classified into three typical kinds [11,12,13]: manual, semi-automatic [11], and automatic [12,13]. Considering the complexity of CT images, many segmentation algorithms are designed for certain kinds of bones or organs [14,15], while algorithms for the full-body skeleton still have limitations [16] and are not yet widely applied. 3D bone surface model reconstruction from point cloud data [10] is accomplished by interpolating triangular meshes [17,18] or by implicit function fitting methods [19,20,21,22,23], and the model precision mainly depends on the segmentation results of the CT images. On the one hand, bone surface reconstruction is extremely sensitive to the resolution of the CT images and to the parameters selected for segmentation. On the other hand, there are problems such as non-obvious bone boundaries and overlapping threshold ranges between a bone and its nearby cartilage or between adjacent bones. These factors make CT image segmentation and 3D bone surface model reconstruction difficult, and thus attract continued research interest.

    Figure 1.  Threshold segmentation and bone surface model reconstruction: A, CT image slice containing femur and hipbone; B, segmentation result with threshold range of [200,2500] HU; C, segmentation result with threshold range of [300,2500] HU; D, enlarged view of the femoral condyle circled in B; E, enlarged view of the femoral condyle circled in C; F, reconstructed bone surface model corresponding to segmentation result in B; G, reconstructed bone surface model corresponding to segmentation result in C; H, enlarged view of the femoral condyle surface circled in F; I, enlarged view of the femoral condyle surface circled in G.

    Researchers have never stopped studying medical image segmentation algorithms [11,12,13] since the invention of the CT scan. The commonly used methods are top-down [24,25,26,27], such as region-growing [24], active contour model [25], statistical shape [26], and graph cuts methods [27]. Top-down methods perform well on the segmentation of certain organs, but they are not suitable for accurately segmenting the bones of the full-body skeleton, due to the complicated distribution of bone HU values in CT images, especially in the presence of fractures or bone tumors.

    Recently, some researchers have introduced bottom-up methods based on statistical machine learning for medical image segmentation [13,16,28,29,30,31,32,33], which are mainly classified into two categories: clustering-based methods [28] and classification-based methods [29,30,31]. Clustering-based methods, such as Mean-shift and K-means, assign pixels with similar properties to the same cluster, which is corrected iteratively until the clustering result converges. Clustering-based methods are unsupervised learning algorithms that need no predefined training samples, and thus the segmentation result is usually coarse. In contrast, classification-based methods, for example CNN [29,30] and SVM [31], are supervised learning algorithms that require predefined training samples and produce good segmentation results. Combining the advantages of the two categories to segment medical images has been one of the hot topics in recent years [16,32,33]. Irregular shapes such as bone tumors and complicated HU value distributions introduce great inaccuracy into bottom-up methods such as CNN and SVM, and there are few general-purpose mature algorithms that can automatically segment all types of bones from CT images. This means that a proper CT image segmentation method should be chosen when segmenting different types of bones or different regions of the same bone.

    After segmentation of the CT images, the bone's voxel or pixel data and its corresponding volume sampling points can be obtained. The next step is to extract the bone's boundary information or surface points from the volume sampling points, which are then used to reconstruct the bone surface model. Since healthy bone surfaces are continuous and smooth, implicit surface reconstruction algorithms such as tangent planes [19], RBF [20], MLS [21], MPU [22], and Poisson equation [23] methods are usually used to reconstruct the watertight 2-manifold bone surface model. Implicit surface reconstruction algorithms can effectively reconstruct surface models from point cloud data with complicated topologies. Nevertheless, due to noise, stair-step effects, data loss, etc., accurate bone surface models cannot be achieved by directly applying surface reconstruction to the extracted surface points. Therefore, studies on methods for accurate bone surface model reconstruction are meaningful.

    We draw inspiration from methods including thresholding [34], point cloud processing [23,35], shape modeling [36,37] and computer vision [38]. This paper proposes a method to reconstruct an accurate watertight 3D bone surface model from CT images. Volume sampling points are used in both the surface point extraction and the accurate surface generation stages of the proposed method; they are named the initial volume sampling points (narrower threshold range) and the target volume sampling points (wider threshold range), respectively. An overview of the algorithm is as follows:

    Firstly, resample the CT images according to the required reconstruction precision; segment the bone's ROI (Region of Interest) by thresholding combined with 3D morphology operations; and extract the initial and target volume sampling points. Secondly, calculate the visible surface points from the initial volume sampling points by the visual hyper-spherical mapping method proposed in this paper. Thirdly, generate the watertight 2-manifold surface model from the visible surface points by implicit surface reconstruction. Finally, take the subset of the target volume sampling points that are near the actual bone surface as the constraint point set, and deform the initial surface under given constraints to generate the accurate bone surface model.

    The remainder of this paper is organized as follows: section 2 describes the details of the proposed 3D bone surface reconstruction method, including volume sampling point extraction, visible surface point calculation, initial surface reconstruction, and accurate surface generation; section 3 presents experiments; section 4 discusses the results; section 5 draws conclusions.

    Let $XYZ\text{-}O$ denote the 3D Cartesian coordinate system of the CT images, with origin at the center of the first image slice. The $Z$ axis is parallel to the scanning direction, and the $X$ and $Y$ axes are parallel to the $U$ and $V$ axes of the image's 2D pixel coordinate system $UV\text{-}O$, respectively. The Cartesian coordinates $(x, y, z)$ of a sample point $p_{m,n,l} \in \mathbb{R}^{1\times 3}$ are denoted as $\{x_m\}_{0}^{N_x-1}$, $\{y_n\}_{0}^{N_y-1}$ and $\{z_l\}_{0}^{N_z-1}$, where $x_m = m\Delta x$, $y_n = n\Delta y$, $z_l = l\Delta z$. $\Delta x$ and $\Delta y$ are the spatial distances between two adjacent pixels along the $X$ and $Y$ axes respectively, and $\Delta z$ is the slice thickness. Generally, $\Delta x$ and $\Delta y$ are equal to each other, while $\Delta z$ varies greatly for different CT scans and clinical requirements. $N_x$, $N_y$, and $N_z$ denote the numbers of sampling points along the $X$, $Y$, and $Z$ axes, respectively. Thus, the number of volume sampling points is $N = N_x \times N_y \times N_z$. All sample points form a point set $P = \{p_{m,n,l}\}$, $P \in \mathbb{R}^{N\times 3}$. The voxel corresponding to $p_{m,n,l}$ is denoted as $w_{m,n,l} \in \mathbb{R}^{1\times 3}$, and the voxel set corresponding to $P$ is $W = \{w_{m,n,l}\}$, $W \in \mathbb{R}^{N\times 3}$. The HU value of $w_{m,n,l}$ is represented by $H_{m,n,l} = F(x_m, y_n, z_l)$.

    If $\Delta x = \Delta y \neq \Delta z$, the extracted voxel data exhibit anisotropy, which causes the surface of the generated 3D bone model to look unnatural. In order to obtain isotropic voxel data ($\Delta x = \Delta y = \Delta z = d$), numerical interpolation and resampling of the CT images are needed. Commonly used methods [39] include Nearest-Neighbor, linear, quadratic, B-spline, cubic, Lagrange, and Gaussian interpolation. Because B-spline interpolation offers good robustness, high precision and efficiency, and is easy to implement, this paper adopts it to obtain evenly spaced resampling points with better resolution. Figure 2 shows the 3D bone surface models reconstructed before and after the CT images were resampled. High-resolution resampling with a small thickness along all three coordinate directions can effectively reduce the stair-step effects both in the CT images and in the reconstruction results, and enhances the clarity of the CT images.
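    As an illustration of this resampling step, the sketch below (not the authors' code) uses SciPy's spline-based zoom to bring an HU volume to isotropic spacing; the variable names and the 0.3 mm default are assumptions based on the values discussed in this paper.

```python
# Minimal sketch: isotropic resampling of a CT volume with cubic B-spline
# interpolation via SciPy; not the authors' implementation.
import numpy as np
from scipy import ndimage

def resample_isotropic(volume_hu: np.ndarray,
                       spacing: tuple,            # (dz, dy, dx) of the source scan, in mm
                       target_spacing: float = 0.3) -> np.ndarray:
    """Resample an HU volume to isotropic voxels of size target_spacing (mm)."""
    zoom_factors = [s / target_spacing for s in spacing]
    # order=3 selects cubic B-spline interpolation
    return ndimage.zoom(volume_hu, zoom=zoom_factors, order=3)

# Example: a scan with 0.7 mm slices resampled to 0.3 mm isotropic voxels
# iso = resample_isotropic(volume_hu, spacing=(0.7, 0.68, 0.68), target_spacing=0.3)
```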

    Figure 2.  Comparison of the bone surface models generated by Poisson surface reconstruction before (left side) and after (right side) resampling: A, reconstructed bone surface model corresponding to the threshold range of [200,2500] HU and the sampling thickness of 0.7 mm; B, reconstructed surface model corresponding to the threshold range of [200,2500] HU and the resampling thickness of 0.3 mm; C, enlarged view of region circled in A; D, CT image corresponding to region circled in A; E, enlarged view of region circled in B; F, CT resample image corresponding to region circled in B.

    After interpolation and resampling, the next step is to segment the bone's ROI and extract its corresponding voxel data or volume sampling points from the CT images. The two-step thresholding method is adopted in this paper because of its high efficiency and robustness. Let $W_H = \{w_{m,n,l} \in W \mid H_{m,n,l} \in [\theta_{min}, \theta_{max}]\}$, where $\theta_{min}$ and $\theta_{max}$ are the lower and upper limits of the threshold range. $\theta_{max}$ for bone tissue is generally fixed, so the quality of the segmentation result mainly depends on $\theta_{min}$, and the segmentation result expands outward or shrinks inward as the threshold range changes.

    As $\theta_{min}$ increases gradually, the connection regions between adjacent bones in the binary images shrink, but the holes and regions with data loss in the corresponding binary images become larger. When $\theta_{min}$ is large enough to segment the bone separately from adjacent bones while the segmentation still retains the main surface profile of the bone as much as possible, e.g., $\theta_{min} = H_0$, the initial volume sampling points $P^{H_0}$ corresponding to the voxel data $W^{H_0}$ are obtained.

    As the threshold range becomes wider with the decrease of $\theta_{min}$, the holes and the regions with data loss in the segmentation results are gradually filled and recovered, but noise data from cartilage and adjacent bones are also introduced, which causes the regions of adjacent bones to merge together. When $\theta_{min}$ is small enough that the segmentation result fully contains the region of the target bone while introducing as few noise points as possible, e.g., $\theta_{min} = H_T$, the target volume sampling points $P^{H_T}$ corresponding to the voxel data $W^{H_T}$ are obtained.

    The segmentation results can be presented visually while $\theta_{min}$ is adjusted dynamically, so $H_0$ and $H_T$ are easy to determine interactively. The narrower threshold range $[H_0, \theta_{max}]$ is used to extract the volume sampling points for reconstructing the initial surface model, and the wider threshold range $[H_T, \theta_{max}]$ is used in the iterative deformation stage to further restore the local shape of the bone surface, especially near joint positions. Figure 3 shows the binary image segmentation results corresponding to different threshold ranges.
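    A minimal sketch of the two-step thresholding is given below, assuming the resampled volume is a NumPy array in (z, y, x) order with isotropic spacing $d$; the values 300 and 150 HU follow the femur example used later in this paper, and all names are illustrative.

```python
# Sketch of the two-step thresholding; not the authors' code.
import numpy as np

def threshold_mask(volume_hu: np.ndarray, theta_min: float, theta_max: float = 2500.0) -> np.ndarray:
    """Binary mask of voxels whose HU value lies in [theta_min, theta_max]."""
    return (volume_hu >= theta_min) & (volume_hu <= theta_max)

def mask_to_points(mask: np.ndarray, d: float) -> np.ndarray:
    """Map a voxel mask (z, y, x order) to volume sampling points (x, y, z) in mm."""
    idx = np.argwhere(mask)          # rows of (l, n, m) indices
    return idx[:, ::-1] * d          # isotropic spacing d, reordered to (x, y, z)

# Narrower range -> initial points, wider range -> target points, e.g.
# P_H0 = mask_to_points(threshold_mask(iso, 300.0), d=0.3)
# P_HT = mask_to_points(threshold_mask(iso, 150.0), d=0.3)
```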

    Figure 3.  Segmentation results corresponding to regions containing femoral head and femoral condyle with threshold ranges given in the figure respectively: A, CT image containing femoral condyle; B, CT image containing femoral head; C, binary images segmented from A; D, binary images segmented from B.

    In order to reduce the negative influences of noise, holes and data loss on volume sampling point extraction caused by directly using the thresholding method, this paper applies morphology operations to the segmentation result to fill holes, smooth boundaries and break small connections between adjacent bones.

    Let $W_k^H$ denote the voxel set of the bone with index $k$ ($k \in [0, 206]$, 206 being the total number of bones in the Asian skeleton), and let $P_k^H$ be its corresponding volume sampling point set. Let $\mathrm{Erode}_m(W)$, $\mathrm{Dilate}_m(W)$, $\mathrm{Open}_m(W)$, and $\mathrm{Close}_m(W)$ represent the erosion, dilation, opening and closing morphology operations on voxel set $W$ with $m$ ($m \ge 1$) iterations, respectively. The following describes the process of volume sampling point extraction.

    Firstly, a hybrid filter composed of a Gaussian filter and a ρ-percentile gradient filter is applied to the resampled CT images in order to enhance the boundaries of bone tissue and fill potential holes.

    Secondly, the initial voxel set $W_k^{H_0} = \{w_{m,n,l} \in W \mid H_{m,n,l} \in [H_0, \theta_{max}]\}$ is obtained, where $H_0$ is chosen so as to retain as many bone tissue regions as possible while minimizing the connection regions. If $W_k^{H_0}$ still includes voxel data from adjacent bones, the erosion operation is executed iteratively on $W_k^{H_0}$ until the sub-region $\mathrm{Erode}_m(W_k^{H_0})$, which is disconnected from adjacent bones, is obtained, $\mathrm{Erode}_m(W_k^{H_0}) \subset W_k^{H_0}$. However, because the erosion operation also erodes correct regions while removing the redundant connection regions between adjacent bones, the dilation operation should be applied to the sub-region $\mathrm{Erode}_m(W_k^{H_0})$ in order to get the temporary region $W_{DE}$ containing the eroded correct regions, $W_{DE} = \mathrm{Dilate}_m(\mathrm{Erode}_m(W_k^{H_0}))$. Then, the bone's voxel data are obtained by the operation $W_k^{H_0} = W_k^{H_0} \cap W_{DE}$. The voxel set $W_k^{H_i}$ corresponding to the threshold value $\theta_{min} = H_i$ is obtained in the same way, where $H_i \in [0, H_0]$ and $H_i < H_{i-1}$.

    Finally, opening and closing operations are applied to $W_k^{H_i}$ in order to further improve the segmentation result: $W_k^{H_i} = \mathrm{Open}_l(\mathrm{Close}_n(W_k^{H_i}))$. The voxel set $W_k^{H_i}$ is then mapped to the corresponding volume sampling points $P_k^{H_i}$, and the initial and target volume sampling points $P_k^{H_0}$ and $P_k^{H_T}$ are obtained with $\theta_{min} = H_0$ and $\theta_{min} = H_T$, respectively.
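    The morphology clean-up described above can be sketched with SciPy's binary morphology; the iteration counts and the seed-based selection of the target bone's connected component below are illustrative assumptions rather than the authors' exact procedure.

```python
# Sketch of the erosion/dilation/closing/opening clean-up; not the authors' code.
import numpy as np
from scipy import ndimage

def extract_bone_voxels(mask: np.ndarray, m: int = 2, seed: tuple = None) -> np.ndarray:
    """Disconnect the target bone from its neighbours and clean the binary mask."""
    # Erode m times so that (ideally) the target bone forms its own connected component
    eroded = ndimage.binary_erosion(mask, iterations=m)
    if seed is not None:
        labels, _ = ndimage.label(eroded)
        eroded = labels == labels[seed]                  # keep the component containing the seed voxel
    # W_k = W_k ∩ Dilate_m(Erode_m(W_k)): recover the correct regions lost to erosion
    restored = ndimage.binary_dilation(eroded, iterations=m) & mask
    # Closing then opening to fill small holes and smooth boundaries
    closed = ndimage.binary_closing(restored, iterations=1)
    return ndimage.binary_opening(closed, iterations=1)
```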

    The volume sampling point set $P_k^H$ contains not only the surface data, but also the inner structure data of the bone, as well as noise data, so it is difficult to reconstruct an accurate watertight 2-manifold bone surface model from $P_k^H$ directly through geometric or non-geometric reconstruction algorithms. Further operations on $P_k^H$ are needed in order to extract the visible surface points $S(P_k^H)$ used to construct the bone surface model, $S(P_k^H) \subset P_k^H$. Based on the principle of retinal spherical imaging, this paper proposes a visible surface point extraction method for volume sampling points.

    When humans observe objects, objects at a shorter distance to the eyes block those at a longer distance, and the visible regions are inverted and projected onto the spherical retina. According to the visual imaging principle of human eyes, the eye's imaging system can be simplified to that of the pinhole camera model, with the imaging plane replaced by the spherical imaging surface of the retina. This means that, for object points on the same ray emitted from the eye, the one closest to the eye is visible and is projected onto the retinal spherical imaging surface. The projections of the other, invisible vertices fall inside the retinal spherical imaging surface.

    When applying this mechanism to a point cloud observed from different locations, we can develop an algorithm that maps visible surface points to the inside of a super-thin spherical shell of thickness $\delta \ll R$, while mapping invisible data points to the inside of the shell's inner surface, where $R$ is the outer radius of the shell.

    The imaging screen of the human retina is similar to a concave spherical surface, and the image formed through the eye's vision system is concave. The spherical imaging surface of the retina has better adaptability and a wider field of view than a planar imaging plane. Combining the eye's vision system with the pinhole imaging model of a camera, we propose the hyper-spherical projection mapping method to extract visible surface points, which is illustrated in Figure 4 and described as follows.

    Figure 4.  Demonstration of the hyper-spherical projection mapping system: A, a sphere point cloud model with visible region marked; B, the projection model of A after spherical imaging; C, flipped and magnified hyper-spherical projection mapping model corresponding to B; D, the vase point cloud model; E, the projection model of D after spherical imaging; F, flipped and magnified hyper-spherical projection mapping with variable hyper-spherical radius corresponding to E.

    Firstly, the imaging plane of the pinhole imaging system is replaced by a spherical imaging surface corresponding to the concave retina in Figure 4(B) and (E).

    Secondly, visible points of the object are inverted and projected close to the spherical imaging surface, while invisible points inside the object are inverted and projected to the interior of the spherical imaging surface, as in Figure 4(B) and (E).

    Thirdly, for the convenience of computation and analysis, the object and its corresponding flipped and magnified hyper-spherical projection mapping model are placed on the same side of the projection center (observation center) $O$, as in Figure 4(A) and (C) and Figure 4(D) and (F). Thus, the hyper-spherical projection mapping system is constructed.

    A demonstration of the hyper-spherical projection mapping system is shown in Figure 4. As the radius $R$ of the hyper-sphere increases in Figure 4(E), more visible surface points are projected close to the hyper-spherical surface, while invisible points remain off the hyper-spherical surface at a relatively large distance. The invisible points can be filtered out automatically in the following convex hull reconstruction stage.

    Based on the pinhole imaging principle and the hyper-spherical projection mapping system described above, the mapping between a point $p_i \in P$ and its projection $p_i' \in P'$ onto the hyper-spherical surface is linear, namely $P' = \{p_i' = f_O(R, p_i) \mid p_i \in P\}$, where $R$ is the radius of the hyper-sphere and $O$ is the observation position. The mapping has the property that visible points are projected close to the hyper-spherical surface. Therefore, the mapping function can be written as

    $p_i' = f_O(R, p_i) = X + 2(R - \|X\|)\dfrac{X}{\|X\|}, \quad X = p_i - O$ (1)

    where $p_i'$ is the mapped point of $p_i$.

    Let $\hat{P} = P' \cup \{O\}$. Then, the convex hull of $\hat{P}$ is calculated using the Qhull algorithm [40], as in Figure 5(A) and (B). A point is visible if its mapped point is a vertex of the convex hull; thus all vertices of the convex hull except $O$ correspond to the visible surface points $S(P)$ of $P$ extracted from the given view direction, as in Figure 5(B) and (C). The convex hull varies as the hyper-sphere's radius $R$ changes, and more visible points appear as $R$ increases (Figure 4). When $R \in [10^{3.5}, 10^{3.8}]$ [35], the surface points can be extracted sufficiently.
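    The spherical flipping of Eq (1) followed by convex hull filtering can be sketched as follows, using SciPy's ConvexHull (a Qhull wrapper). Keeping $R$ as a fixed power of ten in the range quoted above is taken from the text; everything else (function name, default exponent) is an illustrative assumption.

```python
# Sketch of visible-point extraction by hyper-spherical projection (Eq (1))
# plus convex hull filtering; not the authors' implementation.
import numpy as np
from scipy.spatial import ConvexHull

def visible_points(P: np.ndarray, O: np.ndarray, gamma: float = 3.6) -> np.ndarray:
    """Indices of points in P that are visible from the observation centre O."""
    X = P - O                                         # X = p_i - O
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    R = 10.0 ** gamma                                 # R in [10^3.5, 10^3.8] per the text
    P_flip = X + 2.0 * (R - norms) * X / norms        # Eq (1): spherical flipping
    # Convex hull of the flipped points plus the observation centre (origin of X)
    hull = ConvexHull(np.vstack([P_flip, np.zeros((1, 3))]))
    idx = hull.vertices
    return idx[idx < len(P)]                          # drop the appended centre O
```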

    Figure 5.  Demonstration of visible surface patch reconstruction and extraction from a point cloud model: A, hyper-spherical projection mapping of the source point cloud; B, mesh surface generated by convex hull reconstruction; C, visible surface patch extraction from the source point cloud in A; D, the vase point cloud model; E, visible surface patch extracted from the left view direction; F, visible surface patch extracted from the right view direction; G, invisible points outside the left and right view field; H, surface patches extracted from both the left and right view directions.

    When we observe a point cloud with a simple shape, taking a vase as an example as shown in Figure 5, we can get a full view of the object from the left and right view directions, and most of the vase model's surface data can be extracted. For objects with complex shapes such as bones, six or more view directions are necessary (Figure 6). After the visible points from the different view directions are extracted and merged, the surface points $S(P)$ are obtained.

    Figure 6.  Visible surface points extraction of a femoral volume sampling point cloud: A, surface model reconstructed from the volume sampling points in B by Poisson surface reconstruction; B, volume sampling data of the left femur extracted from CT images in Figure 1(A); C, Surface patches extracted from eight different view directions; D, combination of visible surface patches in C; E, visible surface points obtained after Boolean union operation on vertices of surface patches in C.

    Figure 6(C) shows the sub-surface patches corresponding to the visible points extracted from different view directions of the thighbone's volume sampling data. As can be seen from Figure 6(D), most of the surface points can be extracted from eight view directions. The optimal view direction with the most visible points is calculated by PCA.
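    Merging the visible points from several view directions can be sketched by reusing visible_points() above; the six axis-aligned directions and the offset distance are assumptions, and additional views can be added for complex bones as discussed in the text.

```python
# Sketch: union of visible surface points from multiple view directions.
import numpy as np

def multi_view_surface_points(P: np.ndarray, offset: float = 500.0) -> np.ndarray:
    """Union of visible points seen from six axis-aligned observation centres."""
    centre = P.mean(axis=0)
    directions = np.array([[1, 0, 0], [-1, 0, 0],
                           [0, 1, 0], [0, -1, 0],
                           [0, 0, 1], [0, 0, -1]], dtype=float)
    visible = set()
    for d in directions:
        O = centre + offset * d                       # observation centre for this view
        visible.update(visible_points(P, O).tolist())
    return P[sorted(visible)]
```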

    Let $M$ represent a watertight 2-manifold surface mesh model. $V = (v_0, \ldots, v_n)$ is the vertex set of $M$, $V \in \mathbb{R}^{n\times 3}$, $v_i \in \mathbb{R}^{1\times 3}$.

    The initial bone surface model $M_0$ reconstructed from the surface points $S(P_k^{H_0})$ should be closed and watertight. Because $S(P_k^{H_0})$ extracted by the method proposed in section 2.2 generally suffers from problems such as local data loss, holes and noise, geometric reconstruction algorithms are not suitable for this kind of data. In this paper, an implicit surface reconstruction method is adopted to generate $M_0$, which can fill holes, restore local shapes with data loss, and suppress the noise existing in $S(P_k^{H_0})$.

    The implicit surface reconstruction method uses a spatial indicator function to describe the inner, outer, and boundary information of the model, and it can reconstruct a surface from a complicated point cloud using a simple function. Denote by $\chi_M$ the spatial indicator function of the surface model $M$: $\chi_M > 0$ indicates points inside the model $M$, $\chi_M = 0$ indicates points on the boundary or surface of $M$, and $\chi_M < 0$ indicates points outside $M$. In other words, implicit surface reconstruction amounts to solving for the indicator function $\chi_M$.

    The implicit surface reconstruction algorithm based on the Poisson equation combines the advantages of both global and local fitting methods, and the reconstructed surface shows more feature details and less noise. Let the vector field $\vec{U}$ denote the oriented sampling points with computed normals, and let $\nabla\chi_M$ represent the gradient of $\chi_M$. There is a corresponding integral relationship between $\vec{U}$ and $\nabla\chi_M$. Thus, the problem of surface reconstruction from the point cloud $P$ reduces to computing the indicator function $\chi_M$ whose gradient $\nabla\chi_M$ best approximates $\vec{U}$.

    $E(\chi) = \min_{\chi} \|\nabla\chi_M - \vec{U}\|$ (2)

    After applying the divergence operator to Eq (2), this variational problem becomes a standard Poisson problem:

    $\Delta\chi_M = \nabla\cdot\nabla\chi_M = \nabla\cdot\vec{U}$ (3)

    The solution of Eq (3) finally reduces to a well-conditioned sparse linear system. The goal of this section is to reconstruct a watertight and smooth initial surface model $M_0$, and therefore an octree of depth 7 or smaller is chosen in order to restore the contour shape efficiently in regions with missing data while approximating the visible surface points as closely as possible.
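    The paper does not name a particular implementation; as a stand-in, the sketch below uses Open3D's screened-Poisson reconstruction with octree depth 7, after estimating and consistently orienting normals. The library choice and the normal-estimation parameters are assumptions, not the authors' pipeline.

```python
# Sketch: initial watertight surface by Poisson reconstruction (Open3D stand-in).
import numpy as np
import open3d as o3d

def initial_surface(points: np.ndarray, depth: int = 7) -> o3d.geometry.TriangleMesh:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Poisson reconstruction needs oriented normals
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh
```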

    Figure 7(A) and the enlarged view in Figure 7(C) show that there are large holes in the femoral head and condyle regions. Figure 7(B) shows that a watertight 2-manifold bone surface model is generated by the Poisson surface reconstruction algorithm, with holes filled and noise smoothed, but details such as the linea aspera are missing.

    Figure 7.  The initial surface model generated by implicit surface reconstruction method: A, the visible surface model corresponding to the volume sampling points in Figure 6(B); B, Poisson surface reconstruction result of the visible surface points corresponding to model in A; C, enlarged view in the condyle region shows that there are large holes in the visible surface model; D, enlarged view in the condyle region shows that holes are filled and noises are smoothed.

    Although the initial surface model $M_0$ reconstructed in section 2.3 is watertight and smooth, it still has large shape deviations from the actual surface model in certain regions, and it is not accurate enough to be used for medical analysis.

    Denote a point set on the actual bone surface $M_T$ as $B_P$, $B_P \subset P_k^H$. In order to reconstruct an accurate surface model $M$, $M_0$ needs to be deformed so as to best approximate or interpolate $B_P$ by a non-rigid deformation that follows the shape characteristics of the bone, and the position deviation between $M_0$ and $M$ should be within a reasonable range. Generally, extra constraints are imposed when solving for $M$ in order to satisfy the spatial relationship between adjacent bones and to reduce the influence of the noise existing in $P_k^H$. The constraints usually cause the solution process to be non-linear and non-convex, which makes it difficult to achieve a globally optimal result. To solve this kind of problem, a local-global iterative surface deformation algorithm is used to make $M$ best approximate $B_P$:

    $M_i = f(M_{i-1}, B_P)$ (4)

    where $i$ is the number of iterations and $M_i$ is the corresponding deformation model.

    As described in section 2.1, when the thresholding method is adopted to extract the bone volume sampling points from the CT images, the volume sampling points gradually cover the entire bone tissue space as the threshold range expands, and $P_k^{H_T}$ is used as the reference point set to generate the accurate surface model. However, due to the characteristics of CT image data, it is inevitable that the sampling points $P_k^{H_T}$ are mixed with data from the bone's own cartilage, inner structures and adjacent bones. As a result, it is difficult to extract a surface point set $B_P$ of which every point is on the actual bone surface, and the actually extracted surface point set $B_P'$ also includes extra noise $B_N$, $B_P' = B_P \cup B_N$, where $B_N$ is the noise point data. In this paper, we use the subset of $P_k^{H_T}$ near the initial surface as the deformation constraint point set, $B_P' = S(P_k^{H_0}) \cup (P_k^{H_T} - P_k^{H_0})$.

    Therefore, the accurate surface reconstruction transforms into solving for $M$ through iterative deformation operations starting from $M_0$, so that $M$ best approximates the subset $B_P$ of $B_P'$ while the negative influence of the noise $B_N$ is suppressed.

    A constraint energy function is designed to implement the iterative deformation $M_i = f(M_{i-1}, B_P)$. The constraints include a proximity matching constraint, which matches the vertices of the deformation surface model to the constraint point set $B_P$; a shape deformation constraint, which deforms and optimizes the deformation surface model while preserving its local feature details; and a shape sharpness constraint, which penalizes relatively large displacements from the previous iteration step. The constraint function is expressed as follows

    $E(V) = E_{Proximity}(V) + \alpha E_{deformation}(V) + \beta E_{sharpness}(V)$ (5)

    where $E(V)$ is the total constraint energy, $\alpha$ and $\beta$ are coefficients, and $E_{Proximity}$, $E_{deformation}$ and $E_{sharpness}$ are the constraint energies corresponding to proximity matching, shape deformation, and shape sharpness, respectively, which are described as follows.

    The proximity matching of vertex set V with the constraint point set BP is obtained by minimizing the following constraint energy function

    $E_{Proximity}(V) = \sum_{i=1}^{n} \omega_i\, d(v_i)^2$ (6)

    where $\omega_i$ is the constraint weight of $v_i$ with $\omega_i > 0$, and $d(v_i)$ is the distance between $v_i$ and its projection $\psi(v_i)$ on the bone surface implied by $B_P$. Formally, $\psi(v_i)$ is taken as the point that moves $v_i$ in the minimal way in order to achieve proximity matching.

    $\psi(v_i) = \operatorname{argmin}_{q \in B_P} \|v_i - q\|^2$ (7)

    Then Eq (6) becomes

    $E_{Proximity}(V) = \sum_{i=1}^{n} \omega_i \|v_i - \psi(v_i)\|^2 = (V - \psi(V))^T W (V - \psi(V))$ (8)

    where $V = (v_1, \ldots, v_n)$, $\psi(V) = (\psi(v_1), \ldots, \psi(v_n))$, and $W$ is a diagonal matrix. When $\omega_i = 1$, Eq (8) becomes

    $E_{Proximity}(V) = \|V - \psi(V)\|^2 = (V - \psi(V))^T(V - \psi(V))$ (9)

    By minimizing Eq (9), the proximity projection of $V$ can be solved, which matches the constraint points $B_P$ as closely as possible without deviating too far from $V$.
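    A minimal sketch of the projection $\psi$ of Eq (7) is given below: each vertex is matched to its nearest constraint point with a k-d tree, and the radius-based split into control and free-deform vertices anticipates the weighting scheme used later. The function and parameter names are illustrative.

```python
# Sketch of the proximity projection psi(v_i) of Eq (7) using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def proximity_projection(V: np.ndarray, BP: np.ndarray, search_radius: float = np.inf):
    """Return psi(V) and 0/1 weights (0 where no constraint point lies within the radius)."""
    tree = cKDTree(BP)
    dist, idx = tree.query(V)                         # nearest constraint point per vertex
    psi = BP[idx]
    weights = (dist <= search_radius).astype(float)   # control (1) vs. free-deform (0) vertices
    return psi, weights
```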

    The purpose of the shape deformation is to make the shape of $M$ match that of the actual bone surface as accurately as possible. The choice of deformation strategy significantly affects the final result. Because Laplacian surface editing can retain most of the original shape features and avoids introducing new overlaps when the surface is deformed, this paper uses Laplacian surface editing to design the deformation energy function.

    Denote the one-ring neighbors of $v_i$ as $N_i = \{v_{ij} \mid v_{ij} \in V, j \le k_i\}$, where $k_i$ is the number of vertices in $N_i$. The coordinate vector from the weighted average of $v_i$'s one-ring neighbors to $v_i$ can be expressed as

    $\delta_i = v_i - \dfrac{1}{\sum_{j=1}^{k_i} w_{ij}} \sum_{j=1}^{k_i} w_{ij} v_{ij}$ (10)

    where $w_{ij}$ is the weight corresponding to $v_{ij}$, the $j$th one-ring neighbor of $v_i$. $\delta_i$ is also called the differential coordinate of the absolute coordinate $v_i$. Writing Eq (10) in vector form, with all weights taken as one for convenience:

    $\Delta = V - D^{-1}(AV) = (I - D^{-1}A)V = LV$ (11)

    where $\Delta = (\delta_1, \ldots, \delta_n)$, $\delta_i = v_i - (1/k_i)\sum_j v_{ij}$, $A \in \{0,1\}^{n\times n}$ is the adjacency matrix of $V$ on the surface $M$, and $D = \mathrm{diag}(k_1, \ldots, k_n)$ is a diagonal matrix. $L = (I - D^{-1}A)$ is called the Laplacian operator, which is translation-invariant, and the rank of $L$ is $\mathrm{rank}(L) = n - 1$ for a 2-manifold mesh model. For meshes with a fixed adjacency matrix, $V$ can be obtained by solving a linear system as long as one vertex is determined.
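    The uniform-weight Laplacian $L = I - D^{-1}A$ of Eq (11) can be assembled as a sparse matrix directly from the mesh's edge list, as sketched below; extracting the edge list from the triangle faces is assumed to happen elsewhere.

```python
# Sketch: sparse uniform Laplacian L = I - D^{-1} A from an undirected edge list.
import numpy as np
import scipy.sparse as sp

def uniform_laplacian(n_vertices: int, edges: np.ndarray) -> sp.csr_matrix:
    """edges: (m, 2) integer array of undirected vertex index pairs."""
    i = np.concatenate([edges[:, 0], edges[:, 1]])
    j = np.concatenate([edges[:, 1], edges[:, 0]])
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices)).tocsr()
    degree = np.asarray(A.sum(axis=1)).ravel()        # k_i, the one-ring size of vertex i
    L = sp.identity(n_vertices) - sp.diags(1.0 / degree) @ A
    return L.tocsr()
```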

    When deforming the surface mesh $M$ to obtain a new surface mesh $M'$ while keeping its adjacency matrix unchanged, the deformation constraint is described by the displacement vector $c_i$ of vertex $v_i$ from $M$ to $M'$, $c_i = v_i' - v_i$, with a deformation weight $\omega_i \in [0, 1]$. If $\omega_i = 0$, the vertex $v_i$ is called a free-deform vertex, and its displacement depends entirely on the adjacency matrix and the vertices whose weights are not zero. The vertices with non-zero weights control the deformation and are named control vertices. As a result, the weights $\omega_i$ and displacements $c_i$ determine the shape of the new surface. When modeling with the Laplacian operator, the first step is to determine the absolute coordinates of the control vertices, i.e., $v_i' = v_i + c_i$, $i \in \{m, \ldots, n\}$, where $m < n$, and then to calculate the new coordinates of the free-deform vertices. Solving for the free-deform vertices transforms into solving the following deformation energy function

    $E_{deformation}(V') = \sum_{i=1}^{n} \|\delta_i' - \delta_i\|^2 + \sum_{i=1}^{n} \omega_i^2 \|v_i' - (v_i + c_i)\|^2$ (12)

    By minimizing the differential coordinate deviations together with the compliance with the positional constraints, the quadratic minimization problem becomes solving the sparse linear system below

    $L^T L V + \Omega^2(C + V) = (L^T L + \Omega^2)V'$ (13)

    where $\Omega = \mathrm{diag}(\omega_1, \ldots, \omega_n)$ is a diagonal matrix composed of the weights, and $V$ is initialized with the vertex set $V_0$ of $M_0$.
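    One Laplacian-editing solve of Eq (13) can be sketched as below: the three coordinate columns are solved independently with a sparse direct solver, with $\Omega$ holding the per-vertex weights $\omega_i$ and $C$ the displacement constraints $c_i$. This is a sketch under those assumptions, not the authors' solver.

```python
# Sketch: solving Eq (13), (L^T L + Omega^2) V' = L^T L V + Omega^2 (V + C).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_edit(L: sp.csr_matrix, V: np.ndarray, C: np.ndarray, omega: np.ndarray) -> np.ndarray:
    Omega2 = sp.diags(omega ** 2)
    lhs = (L.T @ L + Omega2).tocsc()                  # left-hand side of Eq (13)
    rhs = L.T @ (L @ V) + Omega2 @ (V + C)            # right-hand side of Eq (13)
    return np.column_stack([spsolve(lhs, rhs[:, k]) for k in range(3)])
```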

    Because an excessively large displacement will introduce unnatural and sharp shape deformation, the shape sharpness constraint energy is used to control the amount of each vertex's displacement relative to the previous step. The displacement can be penalized by the following energy constraint function

    $E_{sharpness}(V) = \|V - V_{pre}\|^2$ (14)

    where $V_{pre}$ represents the position state in the last iteration.

    When the partial derivative of $E(V)$ satisfies $\partial E(V)/\partial V = 0$, the total constraint energy $E(V)$ in Eq (5) attains its extremum, and the following linear system is obtained

    $\left(I + \alpha(L^T L + \Omega^2) + \beta I\right)V' = \psi(V) + \alpha(L^T L + \Omega^2)V + \alpha\Omega^2 C + \beta V_{pre}$ (15)

    A direct sparse solver is used to solve Eq (15). In the earlier iteration stages, some of the displacements are relatively large, which increases the unreliability of the corresponding projections on the surface implied by the constraint point set. In order to suppress the negative effects caused by $E_{Proximity}(V)$, the initial values of $\alpha$ and $\beta$ are set larger to increase the influence of $E_{deformation}(V)$ and $E_{sharpness}(V)$, e.g., $\alpha = 1$, $\beta = 1$.

    As the iterations proceed, $V$ gradually approaches the target position, and the proximity match becomes more reliable. When the solutions of two consecutive iterations are very close and yield little progress, the values of $\alpha$ and $\beta$ are decreased to reduce the influence of $E_{deformation}(V)$ and $E_{sharpness}(V)$ on the solution and indirectly increase the weight of $E_{Proximity}(V)$, in order to make the deformation result approximate the local feature details of the target surface more precisely.
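    Putting the pieces together, the local-global iteration described above can be sketched as follows. It reuses proximity_projection() from the earlier sketch, builds the deformation vector of Eq (16), and solves the linear system of Eq (15) at every step; the convergence threshold and the halving schedule for $\alpha$ and $\beta$ are assumptions for illustration.

```python
# Sketch of the iterative deformation (Eq (15) with the alpha/beta schedule).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def iterative_deformation(V0, L, BP, n_iter=8, alpha=1.0, beta=1.0, radius=3.0):
    n = len(V0)
    V, V_pre = V0.copy(), V0.copy()
    for j in range(1, n_iter + 1):
        psi, omega = proximity_projection(V, BP, search_radius=radius)
        C = (psi - V) / (n_iter - j + 1)              # Eq (16): spread motion over remaining steps
        Omega2 = sp.diags(omega ** 2)
        K = L.T @ L + Omega2
        lhs = ((1.0 + beta) * sp.identity(n) + alpha * K).tocsc()
        rhs = psi + alpha * (K @ V) + alpha * (Omega2 @ C) + beta * V_pre   # Eq (15)
        V_new = np.column_stack([spsolve(lhs, rhs[:, k]) for k in range(3)])
        if np.abs(V_new - V).max() < 1e-3:            # little progress: relax the smoothness terms
            alpha, beta = 0.5 * alpha, 0.5 * beta
        V_pre, V = V, V_new
    return V
```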

    Figure 8 shows an example of deforming an initial surface model (Figure 8(A)) to the target model (Figure 8(B)). As can be seen, local feature details are still kept in regions where the displacement deviations between the initial surface model and the reference model are remarkable, and the proximity matching constraint plays a leading role in the later deforming stage. Figure 8(E) and (H) show that the final deformation result approximates the target model accurately.

    Figure 8.  Example of the iterative deformation algorithm between two surface models: A, the initial surface model; B, the target surface model; C, the deformation result after 1 iteration; D, the deformation result after 3 iterations; E, the deformation result after 6 iterations; F, cross section comparison between B and C; G, cross section comparison between B and D; H, cross section comparison between B and E; I, the comparison result between E and B.

    Because of the narrow interosseous space between the hip and the caudal vertebrae, and the complicated HU value distribution in regions such as the acetabular fossa, femoral head, acetabular notch and trochanteric fossa, accurate segmentation and reconstruction of the femur and hip are very difficult and representative. Therefore, we take both healthy and diseased CT images containing the femur and hip as examples to analyze the proposed algorithm.

    The slice thickness of CT images is usually between 0.625 mm and 10 mm, and it greatly influences CT image segmentation and bone surface reconstruction. A large thickness not only tends to seriously blur interosseous regions, making it hard to segment adjacent bones separately from each other, but also results in an obvious stair-step shape on the reconstructed surface (Figure 2(A)). Therefore, in order to avoid these negative effects, it is necessary to resample the CT images with a smaller thickness, which achieves image enhancement, improves image clarity especially in regions at joint locations, and alleviates the stair-step effect (Figure 2(B)). Experiments show that when the resampling thickness is taken as 0.2-0.3 mm for long bones and 0.1-0.2 mm for vertebrae and short bones, better segmentation and reconstruction results are achieved.

    The HU threshold range used for segmenting the bone's ROI from the CT images is determined according to the biomedical characteristics of the bone to be segmented and the application scenarios of the segmentation results. The upper limit of the threshold range usually remains unchanged. For the lower limit, a larger value is chosen to extract the initial volume sampling points used to reconstruct the initial surface model, and a smaller value for the target volume sampling points used as the reference point set for generating the accurate surface model. Taking the femur in Figure 10 as an example, the threshold ranges [300, 2500] and [150, 2500] HU are chosen for surface point extraction and surface deformation, respectively.

    Figure 9.  Iterative directional deformation of the sphere surface with radius of 100 mm for different point cloud constraints: A, iterative deformation results (A1, A2, A3) after 1, 2, and 3 iterations respectively from A0; B, iterative deformation results (B1, B2, B3) after 1, 2, and 3 iterations respectively from B0; C, the comparison result between A3 and its corresponding standard fitting sphere; D, the comparison result between B3 and its corresponding standard fitting sphere.
    Figure 10.  Surface reconstruction and analysis of a femur from CT images: A, CT slice containing the femur; B, the femoral binary image segmentation result of A with threshold range of [300,2500]; C, C0 is the initial volume sampling points extracted from binary images corresponding to B, C1 is the initial surface model reconstructed from sampling points in C0, C2 is the comparison result between C1 and the ground truth; D, binary image segmentation result of A with threshold range of [150,2500]; E, the target volume sampling points extracted from binary images corresponding to D, and the initial surface model is in place; F, deformation result without considering the shape deformation constraint energy in the last iteration step; G, the generated accurate surface model; H, the comparison result between G and the ground truth; I, enlarged view showing that the linea aspera is restored clearly; J, cross section model reconstructed from the target volume sampling points in E by Poisson surface reconstruction; K, cross section contours comparison between the initial surface model in C1 and the surface model in J, section curves are marked with blue and red respectively; L, cross section contours comparison between the accurate surface model in G and the Poisson reconstructed surface model in J, section curves are marked with green and red respectively; M, cross section contours comparison between the accurate surface model in G and the initial surface model in C, section curves are marked with green and blue respectively.

    After the resampling operation, for most cases, the adjacent bones can be effectively disconnected by applying one or two consecutive erosion operations to the interosseous regions. For some special cases, such as when the threshold range of the interosseous region overlaps too much with that of the bone's ROI, or when the interosseous region is too small and narrow, we adjust the threshold range to that of the corresponding cartilage in order to identify and segment the region containing cartilage, and some interaction may be required. After subtracting the cartilage region from the CT images, the bone's ROI can be segmented correctly.

    Since the mapped points and the source points are matched one-to-one, the inverse mapping of the convex hull reconstructed from the mapped points corresponds to the local surface patch of the visible region on the source points. After the projection operation, convex hull reconstruction and inverse mapping are applied sequentially to the source points from multiple view directions, and the visible surface points can be extracted efficiently. Generally, the object can be fully observed from six view directions: top, bottom, front, back, left and right. For models with complex shapes, as in Figure 6, additional view directions are added until the whole picture of the model is observed. The surface mesh patches reconstructed from the visible surface points usually contain noise, especially at the boundaries, which can be removed according to the smoothness and connection characteristics of the surface model.

    When the visible surface points extracted from multiple view directions of the bone's initial volume sampling points are fitted by an implicit function (Figure 7), the depth of the octree used in Poisson surface reconstruction determines the level of detail of the reconstructed model. The larger the depth, the more geometric details are reconstructed; conversely, the smaller the depth, the more geometric details are smoothed out. A smaller depth makes the Poisson surface reconstruction procedure more effective at removing noise and filling holes, and a smooth, closed and continuous surface model is thus achieved.

    The initial watertight surface model $M_0$ is reconstructed from the visible surface points, so the geometric size of $M_0$ is usually smaller than that of the actual bone surface model. The subsequent deformation modeling behavior is therefore similar to expanding $M_0$ outward (Figure 11(K)).

    Figure 11.  Surface reconstruction and analysis of a hip bone from CT images: A, CT slice containing the hip bone; B, the hip bone's binary image segmented from A with threshold range of [350,2500] HU; C, the initial volume sampling points of the hip bone extracted from binary images corresponding to B, and the initial surface model reconstructed from the sampling points; D, binary image segmentation result of A with threshold range of [150,2500] HU; E, the target volume sampling points extracted from binary images corresponding to D; F, the sampling points in E and the initial surface model to be deformed are in place; G, the generated accurate surface model; H, cross section model reconstructed from the target volume sampling data in E by Poisson surface reconstruction; I, cross section contours comparison between the initial surface model in C and the surface model in H, section curves are marked with blue and red respectively; J, cross section contours comparison between the accurate surface model in G and the surface model in H, section curves are marked with green and red respectively; K, cross section contours comparison between the accurate surface model in G and the initial template surface model in C, section curves are marked with green and blue respectively.

    The deformation result mainly depends on the sampling points distributed near the bone's actual surface; sampling points inside the surface contribute little to the deformation result. The constraint points $B_P$ used in the iterative deformation stage are obtained through the set operation $B_P = S(P_k^{H_0}) \cup (P_k^{H_T} - P_k^{H_0})$. The reference points $P_k^{H_T}$, which have lower accuracy requirements, can also be obtained by using the binary-mask-to-3D-mesh reconstruction algorithm in ITK.

    As shown in Figure 8, when the tooth model in Figure 8(A) is deformed from the initial to the final state, the movement direction of each vertex of the deformation model is determined by its position relative to the target model: moving inward (blue region), moving outward (red region), or fixed (on the surface of the target model).

    A directional search is performed to obtain the matching pairs $(v_i, \psi(v_i))$ within a given search radius $R$. Vertices in the matching pairs are used as control points, and those beyond the search radius are free-deform points. The directional search center $v_c$ corresponding to $v_i$ is determined by offsetting $v_i$ a distance $\delta$ along the normal direction, $v_c = v_i + n\delta$, with $\delta \in [0.5d, d]$ when moving outward and $\delta \in [-d, -0.5d]$ otherwise. The search radius $R$ is usually taken in the range $R \in [d, 3d]$, which ensures that the deformation amplitude is controlled within a reasonable range; $d$ is the average edge length of the deformation surface model.

    During the iterative deformation process, the weights are assigned as $\omega_i = 1$ and $\omega_i = 0$ for vertices in the control region and free-deform region, respectively. After each iteration, $\omega_i$ is updated to 1 for vertices with little or no movement. When the deformation result reaches a stable stage, all weights are updated to 1. Let $N$ denote the total number of iterative steps; good results are achieved when $N$ is taken between 6 and 10. The deformation vector $c_i$ is defined as follows:

    $c_i = \dfrac{\psi(v_i) - v_i}{N - j + 1}$ (16)

    where j is the current iteration step.
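    The directional search itself can be sketched as below: the query centre is offset along each vertex normal before the nearest-neighbour lookup, and vertices whose match falls within the radius $R$ become control vertices. The specific values of $\delta$ and $R$ are picked from the ranges given above and are otherwise assumptions.

```python
# Sketch of the directional search for matching pairs (v_i, psi(v_i)).
import numpy as np
from scipy.spatial import cKDTree

def directional_matches(V: np.ndarray, normals: np.ndarray, BP: np.ndarray,
                        d: float, outward: bool = True):
    delta = 0.75 * d if outward else -0.75 * d        # delta in [0.5d, d] outward, [-d, -0.5d] inward
    R = 2.0 * d                                       # search radius in [d, 3d]
    centres = V + delta * normals                     # v_c = v_i + n * delta
    dist, idx = cKDTree(BP).query(centres)
    control = dist <= R                               # True where the vertex becomes a control vertex
    return BP[idx], control
```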

    As shown in Figure 9, the spherical point cloud is obtained as the union of sampling points of spherical surfaces with radii of 100, 120, 140 and 160 mm. The spherical point cloud is copied and then translated by 300 mm and 190 mm along the X axis to obtain the locally overlapped point clouds shown in Figure 9(A) and Figure 9(B), respectively. The points in the overlapped region are distributed irregularly. The spheres with a radius of 100 mm in Figure 9(A0) and (B0) are iteratively deformed outwards by the proposed method. After one, two and three iterations of the directional deformation operation, the corresponding deformation results are obtained in Figure 9(A1)-(A3) and Figure 9(B1)-(B3). The radii of the spheres in Figure 9(A1)-(A3) are approximately equal to 120, 140 and 160 mm respectively, and the same holds for Figure 9(B1)-(B3).

    Figure 8 and Figure 9 show that the deformation algorithm used in this paper has the characteristic of directional deformation, and that the deformation amplitude, direction and result are controllable.

    As shown in Figure 8(I), the shape deviation between the deformation result and the target model, with a maximum of 0.024 mm, is smaller than the average edge distance of 0.0256 mm of the target model.

    Figure 9 shows that when the sphere surface is deformed and approaches the surface data of the point cloud, the shape of the sphere surface is well preserved under the action of the constraint energy function. The proposed reconstruction algorithm can control the deformation scale within a reasonable range and does not produce extra noise or excessive deformation. The maximum shape deviation of 9.214 mm in Figure 9(D) between the deformation result and its corresponding standard fitting sphere surface is smaller than the local average sampling density of 11.305 mm in the point cloud's irregular region.

    Because the proposed algorithm is iterative and the deformation process is controllable, the iteration steps, direction, scale and area of the deformation can be specified or changed during the reconstruction process as needed. As can be seen from Figures 8-11, the deformation algorithm proposed in this paper can detect the accurate surface shape from the constraint point set, and the mechanism of the reconstruction algorithm can be explained theoretically and experimentally from the constraint energy function.

    Figure 10 shows the surface reconstruction procedure for a femur, and Figure 11 for a hip bone. The CT images are resampled with a thickness of 0.3 mm. The segmentation threshold ranges used to extract the initial volume sampling points and the target volume sampling points are [300, 2500] and [150, 2500] HU, respectively. The initial surface models in Figure 10(C) and Figure 11(C), and the cross section models in Figure 10(J) and Figure 11(H), are all generated by the Poisson surface reconstruction algorithm. The octree depth used to reconstruct the surface models in Figure 10(J) and Figure 11(H) is 10. The surface of the model reconstructed from the target volume sampling points indicates where the actual bone surface is located in Figure 10(L) and Figure 11(J).

    The enlarged view in Figure 10(F) shows that a stair-step surface model is obtained when the shape deformation constraint energy is not considered ($\alpha = 0$) in the last iteration step. As can be seen from the stair-step shapes at the femoral head, which are consistent with the 3D slice contours of the femoral head directly extracted from the CT images in all three directions, the proposed proximity matching algorithm can correctly detect the bone surface data in the volume data. The accurate femur surface model in Figure 10(G) is obtained when the iterative deformation reaches a stable state. The enlarged view in Figure 10(G) shows that the linea aspera of the femur is restored clearly.

    From the cross section contours in Figure 10(J) and Figure 11(H), it can be seen that there are local shape losses, abnormal shapes and redundant shapes in the model reconstructed directly from the corresponding volume sampling points by Poisson surface reconstruction.

    The cross section contours in Figure 10(K) and Figure 11(I) show that the initial bone surface model has large shape differences compared with the actual shape contained in the corresponding Poisson reconstructed surface model, especially in the region circled with a blue-bordered rectangle.

    Figure 10(L) and Figure 11(J) show the generated accurate femur and hip bone surface models, respectively. As can be seen from the enlarged views in Figure 10(L) and Figure 11(J), the cross-section contour curve of the accurate bone surface model correctly approximates the actual contour curve contained in the Poisson reconstructed surface model while effectively avoiding the influence of noise data. Local features of the femur such as the trochanteric fossa and linea aspera are also restored correctly in Figure 10(G) and (L).

    The cross section contour comparisons in Figure 10(L), (M) and Figure 11(J), (K) show that the proposed surface reconstruction algorithm can effectively reduce the influence of noise data, restore local shapes with data loss, and reconstruct feature details accurately. The only prerequisite is to roughly segment the bone's ROI from the CT images in order to generate a watertight 2-manifold initial bone surface model.

    In order to get an accurate bone ROI disconnected from its adjacent bones, the resampling thickness should be smaller than the smallest interosseous space or the thickness of the cartilage between adjacent bones. Because the interosseous spaces of long bones and hip bones are relatively obvious, a larger resampling thickness can be chosen. The interosseous spaces between adjacent short bones or vertebrae are much smaller than those of long bones or hip bones, so a smaller resampling thickness is needed to effectively segment the bone's ROI from the CT images.

    However, because there are no obvious segmentation boundaries between adjacent sub-bones of the skull, the segmentation of sub-bone ROIs is very difficult. The segmentation and reconstruction of the skull's sub-bones is a goal of future research.

    The proposed surface reconstruction algorithm mainly consists of four parts: volume sampling point extraction, visible surface point extraction, initial surface model reconstruction, and accurate surface generation.

    Both the CT image resampling and the segmentation by thresholding have linear $O(N)$ time complexity. The time complexities of the hyper-spherical projection and convex hull reconstruction used in the surface point extraction algorithm are $O(N)$ and $O(N\log N)$, respectively.

    The Poisson surface reconstruction algorithm has O(NlogN) time complexity, where N is the number of points and H is the depth of the octree. The depth H used to reconstruct the initial surface model is chosen as 7 in this paper, which greatly reduces the computational time taken for the surface reconstruction.
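    As an illustration of this step, the hedged sketch below uses Open3D's Poisson reconstruction (not the authors' code) with octree depth 7 to build a coarse initial watertight surface from the extracted visible points; the file names and normal-estimation parameters are assumptions.

```python
# Sketch: initial watertight surface via Poisson reconstruction at octree depth 7.
import open3d as o3d

pcd = o3d.io.read_point_cloud("visible_surface_points.ply")   # assumed input file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=7)                                              # shallow octree: coarse but fast
o3d.io.write_triangle_mesh("initial_surface.ply", mesh)
```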

    For the accurate surface generation algorithm, the most time-consuming parts are the search for proximity matching pairs and the solution of the sparse linear system of the deformation algorithm, which uses an O(N) time Cholesky factorization. The O(Nlogk) time KNN (k-nearest neighbors) algorithm [41] is used to search for matching pairs in this paper.
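    A minimal sketch of such a matching-pair search is shown below; it uses SciPy's cKDTree rather than the exact algorithm of [41], and the function name and radius handling are hypothetical.

```python
# Sketch: nearest volume sampling point for every vertex of the deforming surface,
# kept only if it lies within a search radius tied to the CT slice thickness.
import numpy as np
from scipy.spatial import cKDTree

def match_pairs(surface_vertices, volume_points, search_radius):
    tree = cKDTree(volume_points)                   # O(N log N) build
    dist, idx = tree.query(surface_vertices, k=1)   # nearest neighbour per vertex
    mask = dist <= search_radius                    # discard pairs outside the radius
    return np.flatnonzero(mask), idx[mask]          # (vertex index, constraint point index)
```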

    Therefore, the overall time complexity of the proposed reconstruction algorithm is a combination of O(NlogN) and O(N) terms. The number of iterations required by the deformation algorithm mainly depends on the density and search radius of the constraint point set; stable results are generally achieved after 5-10 iterations. The reconstruction results show that the proposed method is accurate and efficient.

    Compared with other approaches, 3D bone surface model reconstruction from CT images based on thresholding and implicit surface fitting is stable and robust, and it is the mainstream approach used by 3D medical image processing software such as the widely used Materialise Mimics.

    Figure 12(A) shows a patient's local CT image with a femoral tumor, which includes a healthy right hip bone, a healthy right femur, a diseased left hip bone, and a left femur with a bone tumor and a fracture. The slice thickness, slice increment and pixel size of the CT image in Figure 12(A) are 1 mm, 0.4 mm and 0.39 mm respectively. The diseased region and the size and shape of a bone tumor are usually uncertain and random. A bone tumor leads to an abnormal distribution of HU values in the diseased region, which blurs the segmentation boundary between the tumor and the surrounding soft tissue (see the enlarged view in Figure 12(A)). The irregular shape and complicated HU value distribution of the bone tumor introduce large inaccuracies into algorithms based on differential geometric properties, which are calculated from HU values. Therefore, the CT images indicated in Figure 12(A) are used to compare and analyze the reconstruction results of the proposed algorithm and the Materialise Mimics software.

    Figure 12.  CT image with bone tumor and its corresponding contour curves with different thresholds by Materialise Mimics software (Version 20.0). A, local CT slice of a patient with femoral tumor and fracture; B, segmentation result with threshold range of [150,2500] HU; C, segmentation result with threshold range of [226,2500] HU; D, segmentation result with threshold range of [330,2500] HU; E, cross section contour curves of the 3D bone model reconstructed from segmentation data in B; F, cross section contour curves of the 3D bone model reconstructed from segmentation data in C; G, cross section contour curves of the 3D bone model reconstructed from segmentation data in D; H, enlarged view corresponding to the left femoral head region in A.

    The statistical threshold ranges of compact bone, spongy bone and muscle tissue are [580,2500], [145,2500] and [-25,140] HU respectively. Because the lower limit of spongy bone and the upper limit of muscle tissue are very close, it is hard to classify pixels or voxels with HU values between [130,160]. Therefore, many holes and missing segmentation boundaries appear when segmenting CT images by thresholding, especially in regions containing more spongy bone tissue. Figure 12(B)-(D) are the segmentation results corresponding to the threshold ranges of [150,2500], [226,2500] and [330,2500] respectively. Figure 12(E)-(G) show the cross section contour curves of the 3D bone models reconstructed directly by Materialise Mimics (Version 20.0) after thresholding segmentation. As can be seen from Figure 12, although the contour curves of the reconstructed models are faithful to the segmentation data, the reconstructed bone models contain many abnormal shapes, holes and internal redundant structures, so they cannot be used directly for subsequent artificial prosthesis design, internal honeycomb structure design or FEM (Finite Element Method) analysis.
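    For illustration only, plain HU thresholding of a CT volume with the ranges discussed above could be sketched as follows; this stands in for neither Mimics nor the authors' two-step thresholding, and the function and variable names are assumptions.

```python
# Sketch: binary bone mask from a HU volume by simple thresholding plus a small
# morphological opening; the holes left in spongy-bone regions are exactly what
# the subsequent deformation step must recover.
import numpy as np
from scipy import ndimage

def threshold_bone(ct_hu, low=150, high=2500):
    """Binary bone mask; `low`/`high` mirror the HU ranges discussed above."""
    mask = (ct_hu >= low) & (ct_hu <= high)
    mask = ndimage.binary_opening(mask, iterations=1)   # suppress isolated noise voxels
    return mask

# e.g. masks for the three ranges compared in Figure 12(B)-(D):
# masks = [threshold_bone(ct_hu, lo, 2500) for lo in (150, 226, 330)]
```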

    In Materialise Mimics (Version 20.0), a watertight 3D surface model is reconstructed from CT images by combining "Thresholding" (Figure 13(B)), "Calculate part" (Figure 13(C)) and "Wrap" (Figure 13(D)-(F)); the purpose of this watertight surface reconstruction procedure is similar to that of this paper.

    Figure 13.  Watertight 3D bone surface model reconstruction procedure from CT images by Materialise Mimics software (Version 20.0): A, local CT slice with femoral tumor and fracture; B, segmentation result of A with threshold range of [226,2500] HU; C, the reconstructed 3D bone models corresponding to the segmentation result in B; D, wrapped result of C with parameters "smallest detail = 1 mm" and "Gap closing distance = 2 mm", where the intersection plane shows the location of the CT slice in A; E, wrapped result with parameters "smallest detail = 1 mm" and "Gap closing distance = 8 mm"; F, wrapped result with parameters "smallest detail = 1 mm" and "Gap closing distance = 15 mm"; G, cross section contour curves corresponding to the models in D; H, cross section contour curves corresponding to the models in E; I, cross section contour curves corresponding to the models in F; J, enlarged views of G, H and I at the same region of the CT slice, showing the comparison between the red cross section contour curves and the blue manually selected actual contour curves.

    Figure 13 shows the reconstruction procedure of the watertight 3D bone surface model by the Materialise Mimics software (Version 20.0). Figure 13(B) shows the segmentation result with threshold range of [226,2500], in which the colored region corresponding to each bone has been separated from its adjacent bones manually. Figure 13(C) shows the 3D bone models reconstructed directly from the segmentation result in Figure 13(B) by Mimics. Figure 13(D)-(F) correspond to the wrapped results of Figure 13(C) with parameters "smallest detail = 1 mm" and "Gap closing distance = 2, 8, 15 mm" respectively. Figure 13(G)-(I) are the cross section contour curves corresponding to Figure 13(D)-(F), and the marked intersection plane is the plane of the CT slice in Figure 13(A).

    Enlarged views of Figure 13(G)-(I) at the same region of the CT image compare the red cross section contour curves with the blue manually selected actual contour curves. As can be seen from Figure 13(J), as the "Gap closing distance" increases, the contour curves evolve from multiple rings into a single closed contour curve, with holes filled and gaps closed; a watertight 3D bone surface model is achieved when the contour curve corresponding to each CT slice is a single closed curve, as in Figure 13(F) and (I).

    Figures 14-17 show the watertight surface wrapping procedure of the right femur, the right hip bone, the left femur, and the left hip bone respectively. The red curves are the cross section contour curves, and the blue curves are the manually selected actual contour curves. As can be seen from the enlarged views in Figures 14-17, while the contour curves evolve as the "Gap closing distance" increases, the holes, internal redundant structures and gaps gradually disappear, but local surface shapes in regions such as the trochanteric fossa (enlarged views in Figures 14 and 16) and the acetabular notch (enlarged views in Figures 15 and 17) are also over-deformed, with increasing deviation from the correct position. When "Gap closing distance" = 15 mm, the wrapped result for each bone is watertight.

    Figure 14.  Watertight surface model wrapping procedure of the yellow femur by Materialise Mimics software (Version 20.0) and comparison with the actual contour curve: A, the bone models are the same as those in Figure 13(C), and the right yellow femur is the model to be analyzed; B, wrapped result of the yellow femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 2 mm"; C, wrapped result of the yellow femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 6 mm"; D, wrapped result of the yellow femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 10 mm"; E, wrapped result of the yellow femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 15 mm"; F, cross section contour curves corresponding to B; G, cross section contour curves corresponding to C; H, cross section contour curves corresponding to D; I, cross section contour curves corresponding to E; J, the manually selected actual contour curve in blue; K, contour curves comparison between F and J; L, contour curves comparison between G and J; M, contour curves comparison between H and J; N, contour curves comparison between I and J; O, local CT slice corresponding to the intersection plane in A.
    Figure 15.  Watertight surface model wrapping procedure of the cyan hip bone by Materialise Mimics software (Version 20.0) and comparison with the actual contour curve: A, the bone models are the same as those in Figure 13(C), and the cyan hip bone is the model to be analyzed; B, wrapped result of the cyan hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 2 mm"; C, wrapped result of the cyan hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 6 mm"; D, wrapped result of the cyan hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 10 mm"; E, wrapped result of the cyan hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 15 mm"; F, cross section contour curves corresponding to B; G, cross section contour curves corresponding to C; H, cross section contour curves corresponding to D; I, cross section contour curves corresponding to E; J, the manually selected actual contour curve in green; K, contour curves comparison between F and J; L, contour curves comparison between G and J; M, contour curves comparison between H and J; N, contour curves comparison between I and J; O, local CT slice corresponding to the intersection plane in A.
    Figure 16.  Watertight surface model wrapping procedure of the green femur with tumor by Materialise Mimics software (Version 20.0) and comparison with the actual contour curve: A, the bone models are the same as those in Figure 13(C), and the green femur is the model to be analyzed; B, wrapped result of the green femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 2 mm"; C, wrapped result of the green femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 6 mm"; D, wrapped result of the green femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 10 mm"; E, wrapped result of the green femur in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 15 mm"; F, cross section contour curves corresponding to B; G, cross section contour curves corresponding to C; H, cross section contour curves corresponding to D; I, cross section contour curves corresponding to E; J, the manually selected actual contour curve in blue; K, contour curves comparison between F and J; L, contour curves comparison between G and J; M, contour curves comparison between H and J; N, contour curves comparison between I and J; O, local CT slice corresponding to the intersection plane in A.
    Figure 17.  Watertight surface model wrapping procedure of the blue diseased hip bone by Materialise Mimics software (Version 20.0) and comparison with the actual contour curve: A, the bone models are the same as those in Figure 13(C), and the blue hip bone is the model to be analyzed; B, wrapped result of the blue hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 2 mm"; C, wrapped result of the blue hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 6 mm"; D, wrapped result of the blue hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 10 mm"; E, wrapped result of the blue hip bone in A with parameters "smallest detail = 1 mm" and "Gap closing distance = 15 mm"; F, cross section contour curves corresponding to B; G, cross section contour curves corresponding to C; H, cross section contour curves corresponding to D; I, cross section contour curves corresponding to E; J, the manually selected actual contour curve in blue; K, contour curves comparison between F and J; L, contour curves comparison between G and J; M, contour curves comparison between H and J; N, contour curves comparison between I and J; O, local CT slice corresponding to the intersection plane in A.
    Figure 18.  Watertight 3D surface model reconstruction from CT images by the method proposed in this paper: A, local CT slice with femoral tumor and fracture; B, binary image segmentation result of A with threshold range of [330,2500] HU; C, the extracted visible surface patches; D, the reconstructed initial watertight surface model; E, binary image segmentation result of A with threshold range of [150,2500] HU; F, the generated accurate surface model; G, enlarged view corresponding to the femoral head region in D; H, enlarged view corresponding to the femoral head region in F.

    Because the wrap algorithm relies on the geometric information of the bone model being wrapped rather than on the actual CT segmentation boundaries, it introduces new shape errors of the same order of magnitude as the "Gap closing distance" while closing the shape. For the bone models in Figures 14-17, Table 1 shows that the maximum shape error of the wrapped watertight surface models typically lies between 5 mm and 15 mm, which causes assembly accuracy problems between the remaining healthy bones after tumor resection and the corresponding artificial prosthesis.

    Table 1.  Maximum deviation comparison between the proposed method and Materialise Mimics (Version 20.0).

    | Bone name | Smallest detail (mm) | Gap closing distance (mm) | Watertight | Main defect | Maximum deviation, Mimics (mm) | Maximum deviation, proposed method (Figure 19) (mm) |
    |---|---|---|---|---|---|---|
    | The right femur (Figure 14) | 1 | 8 | No | Hole | --- | 0.95 |
    | | 1 | 10 | Yes | Big pit | > 5.5 | |
    | | 1 | 15 | Yes | Over deformed | > 6.2 | |
    | The right hip bone (Figure 15) | 1 | 8 | Yes | Big pit | > 10.0 | 0.61 |
    | | 1 | 10 | Yes | Big pit | > 10.0 | |
    | | 1 | 15 | Yes | Over deformed | > 6.5 | |
    | The left femur (Figure 16) | 1 | 8 | Yes | Big pit | > 15.0 | 1.1 |
    | | 1 | 10 | Yes | Over deformed | > 5.5 | |
    | | 1 | 15 | Yes | Over deformed | > 15.0 | |
    | The left hip bone (Figure 17) | 1 | 8 | No | Hole | --- | 1.2 |
    | | 1 | 10 | Yes | Big pit | > 12.0 | |
    | | 1 | 15 | Yes | Over deformed | > 5.5 | |


    Figure 18 shows the watertight 3D bone surface model reconstruction by the method proposed in this paper. As can be seen from Figure 18(G) and (H), watertight and smooth bone surface models are obtained while local details are well preserved.

    Figure 19 shows, in Figure 19(I)-(L), the red cross section contour curves of each bone in Figure 18(F), and, in Figure 19(M)-(P), the corresponding manually selected actual contour curves. Figure 19(A)-(D) indicate the location of the intersection plane in Figure 19(U) for each bone. The enlarged views in Figure 19(Q)-(T) compare the cross section contour curves with the actual contour curves.

    Figure 19.  Local CT slices, contour curves and comparisons for each 3D bone model in Figure 18(F): A, the green femur after being intersected by its corresponding CT slice plane in U; B, the yellow hip bone after being intersected; C, the light brown hip bone after being intersected; D, the green and cyan bone tumors after being intersected; E, local CT slice at the location indicated in A; F, local CT slice at the location indicated in B; G, local CT slice at the location indicated in C; H, local CT slice at the location indicated in D; I, cross section contour curve of A; J, cross section contour curve of B; K, cross section contour curves of C; L, cross section contour curve of D; M, the manually selected actual contour curve of the CT slice in A; N, the manually selected actual contour curve of the CT slice in B; O, the manually selected actual contour curve of the CT slice in C; P, the manually selected actual contour curves of the CT slice in D; Q, local enlarged view of the comparison between contour curves in I and M; R, local enlarged view of the comparison between contour curves in J and N; S, local enlarged view of the comparison between contour curves in K and O; T, local enlarged view of the comparison between contour curves in L and P; U, CT slice plane corresponding to each bone.

    Figure 20 shows the quantitative analysis of the deviation between the cross section contour curves and the manually selected actual contour curves. Figure 20(A) and Figure 20(B) show that the maximum deviation for both the healthy right femur and the healthy right hip bone is less than 1 mm, and that the deviation of 95% of the sampling points is less than 0.5 mm (1.28 pixel sizes). Figure 20(C) shows that, for the left diseased hip bone and the left femoral tumor, about 1-2% of the sampling points have a deviation larger than 1 mm, while the deviation of 85% of the sampling points is less than 0.5 mm. Figure 20(B) also compares the deviation curves of the diseased left and the healthy right hip bone; because of the irregular changes in local shape and HU values of the diseased bone, its deviation curve changes much more sharply than that of the healthy hip bone.

    Figure 20.  Quantitative analysis of the deviation between the cross section contour curve and the manually selected actual contour curve for each bone in Figure 19.

    For the cases used in this paper, the statistical results show that the deviation of 98% of the sampling points for all bones is less than 1 mm (2.5 pixels), and the reconstruction accuracy is consistent with the CT slice thickness (1.0 mm). The maximum deviation is only 1.3 mm, which is far smaller than that of the reconstruction results obtained by Mimics in Table 1.
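    For reference, deviation statistics of this kind can be summarized with a short sketch like the one below, assuming the sampled contour points of the reconstructed model and the manually selected reference points are available as arrays; the function name and the particular summaries reported are illustrative, not the authors' evaluation code.

```python
# Sketch: nearest distance from reconstructed-contour samples to the manually
# selected reference contour, summarized as maximum, 95th percentile, and the
# fraction of points within 0.5 mm.
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(recon_pts, reference_pts):
    dist, _ = cKDTree(reference_pts).query(recon_pts)   # point-to-curve distances (mm)
    return {
        "max_mm": float(dist.max()),
        "p95_mm": float(np.percentile(dist, 95)),
        "frac_below_0.5mm": float(np.mean(dist <= 0.5)),
    }
```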

    In fact, the volume sampling data corresponding to CT images form a regular point set with equal spacing between adjacent points, so the search radius of the deformation algorithm is easy to determine from the CT slice thickness. As a result, the deformation amplitude of each iteration can be kept within the CT slice thickness.

    The prerequisite of the bone surface reconstruction algorithm proposed in this paper is to first segment the bone's ROI from the CT images in order to obtain the initial volume sampling points, which are used to reconstruct the initial watertight surface model. For bones without obvious segmentation boundaries in CT images, such as the sub-bones of the skull or the sternum, it is difficult to segment each sub-bone's ROI directly by thresholding, and a more efficient and accurate segmentation method needs to be designed.

    The proposed algorithm is designed for surface model reconstruction, so the reconstructed model does not include the bone's inner honeycomb structure. To reconstruct a bone model with detailed inner structure, the algorithm would need to be redesigned based on a volumetric mesh model.

    To address the problems that occur in 3D bone surface models reconstructed from CT images, such as noise, holes, stair-step shapes, and abnormal and redundant shapes, this paper proposes a watertight 2-manifold 3D bone surface reconstruction method with three main steps: two-step thresholding, initial watertight surface reconstruction and shape optimization. The experimental results of the cases used in this paper show that the proposed algorithm achieves good reconstruction results for bones with obvious interosseous spaces. For special cases, some manual interaction is still needed to achieve the desired deformation results.

    Funding: This work was supported by the National Natural Science Foundation of China [grant number 51705183].

    The authors declare there is no conflict of interest.



    [1] K. Engelke, C. Libanati, T. Fuerst, P. Zysset, H. K. Genant, Advanced CT based in vivo methods for the assessment of bone density, structure, and strength, Curr. Osteoporos. Rep., 11 (2013), 246-255.
    [2] H. K. Genant, K. Engelke, S. Prevrhal, Advanced CT bone imaging in osteoporosis, Rheumatology, 47 (2008), 9-16.
    [3] N. Goyal, M. Kalra, A. Soni, P. Baweja, N. P. Ghonghey, Multi-modality imaging approach to bone tumors - state-of-the-art, J. Clin. Orthop. Trauma, 10 (2019), 687-701.
    [4] G. X. Pei, Y. B. Yan, Current status and progress of digital orthopaedics in China, J. Orthop. Trans., 2 (2017), 107-117.
    [5] S. Girod, M. Teschner, U. Schrell, B. Kevekordes, B. Girod, Computer-aided 3-d simulation and prediction of craniofacial surgery: A new approach, J. Cranio-MaxilloFac. Surg., 29 (2001), 156-158.
    [6] S. K. Parashar, J. K. Sharma, A review on application of finite element modelling in bone biomechanics, Perspect. Sci., 8 (2016), 696-698.
    [7] F. E. Boas, D. Fleischmann, CT artifacts: Causes and reduction techniques, Imaging Med., 4 (2012), 229-240.
    [8] G. Wang, M. W. Vannier, Stair-step artifacts in three-dimensional helical CT: An experimental study, Radiology, 191 (1994), 79-83.
    [9] E. A. Zanaty, S. Ghoniemy, Medical image segmentation techniques: An overview, Int. J. Inf. Med. Data Process., 1 (2016), 16-37.
    [10] S. P. Lim, H. Haron, Surface reconstruction techniques: A review, Artif. Intell. Rev., 42 (2014), 59-78.
    [11] F. Zhao, X. Xie, An overview of interactive medical image segmentation, Ann. BMVA, 7 (2013), 1-22.
    [12] N. Sharma, L. M. Aggarwal, Automated medical image segmentation techniques, J. Med. Phys./Assoc. Med. Phys. India, 35 (2010), 3.
    [13] M. I. Razzak, S. Naz, A. Zaib, Deep learning for medical image processing: Overview, challenges and the future, Classif. BioApps, 26 (2018), 323-350.
    [14] D. F. Malan, C. P. Botha, E. R. Valstar, Voxel classification and graph cuts for automated segmentation of pathological periprosthetic hip anatomy, Int. J. Comput. Assist. Radiol. Surg., 8 (2013), 63-74.
    [15] J. Ma, L. Lu, Y. Zhan, X. Zhou, M. Salganicoff, A. Krishnan, Hierarchical segmentation and identification of thoracic vertebra using learning-based edge detection and coarse-to-fine deformable model, Comput. Vision Image Understanding, 117 (2013), 1072-1083.
    [16] L. Lu, D. Wu, N. Lay, D. Liu, I. Nogues, R. M. Summers, Accurate 3D bone segmentation in challenging CT images: Bottom-up parsing and contextualized optimization, IEEE WACV 2016, (2016), 1-10.
    [17] N. Amenta, S. Choi, R. K. Kolluri, The power crust, unions of balls and the medial axis transform, Comput. Geometry, 19 (2001), 127-153.
    [18] F. Bernardini, C. L. Bajaj, Sampling and reconstructing manifolds using alpha-shapes, Dep. Comput. Sci. Tech. Rep., (1998), 1350.
    [19] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, W. Stuetzle, Surface reconstruction from unorganized points, ACM SIGGRAPH Comput. Graphics, 26 (1992), 71-78.
    [20] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, et al., Reconstruction and representation of 3D objects with radial basis functions, ACM SIGGRAPH, (2001), 67-76.
    [21] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, C. T. Silva, Computing and rendering point set surfaces, IEEE Trans. Vis. Comput. Graph., 9 (2003), 3-15.
    [22] Y. Ohtake, A. Belyaev, M. Alexa, G. Turk, H. P. Seidel, Multi-level partition of unity implicits, ACM Trans. Graphics, 22 (2003), 463-470.
    [23] M. Kazhdan, M. Bolitho, H. Hoppe, Poisson surface reconstruction, Symp. Geom. Process., 7 (2006), 61-70.
    [24] Z. Pan, J. Lu, A Bayes-based region-growing algorithm for medical image segmentation, Comput. Sci. Eng., 9 (2007), 32-38.
    [25] K. Zhang, L. Zhang, K. M. Lam, D. Zhang, A level set approach to image segmentation with intensity inhomogeneity, IEEE Tech. Cybern., 46 (2015), 546-557.
    [26] T. Heimann, H. P. Meinzer, Statistical shape models for 3D medical image segmentation: A review, Med. Image Anal., 13 (2009), 543-563.
    [27] Y. Boykov, G. Funka-Lea, Graph cuts and efficient ND image segmentation, Int. J. Comput. Vis., 70 (2006), 109-131.
    [28] G. N. Abras, V. L. Ballarín, A weighted k-means algorithm applied to brain tissue classification, J. Compu. Sci. Technol., 5 (2005), 121-126.
    [29] A. Gacsádi, P. Szolgay, Variational computing based segmentation methods for medical imaging by using CNN, IEEE CNNA, (2010), 1-6.
    [30] M. Vania, D. Mureja, D. Lee, Automatic spine segmentation from CT images using convolutional neural network via redundant generation of class labels, J. Comput. Des. Eng., 6 (2019), 224-232.
    [31] K. Zhang, J. Deng, W. Lu, Segmenting human knee cartilage automatically from multi-contrast MR images using support vector machines and discriminative random fields, IEEE ICIP, (2011), 721-724.
    [32] J. T. Barron, M. D. Biggin, P. Arbelaez, D. W. Knowles, S. V. Keranen, J. Malik, Volumetric semantic segmentation using pyramid context features, IEEE ICCV, (2013), 3448-3455.
    [33] A. Farag, L. Lu, E. Turkbey, J. Liu, R. M. Summers, A bottom-up approach for automatic pancreas segmentation in abdominal CT scans, Int. MICCAI Workshop Comput. Clin. Challenges Abdom. Imaging, (2014), 103-113
    [34] J. Zhang, C. H. Yan, C. K. Chui, S. H. Ong, Fast segmentation of bone in CT images using 3D adaptive thresholding, Comput. Biol. Med., 40 (2010), 231-236.
    [35] S. Katz, A. Tal, R. Basri, Direct visibility of point sets, ACM SIGGRAPH 2007, (2007), 24-40.
    [36] O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl, H. P. Seidel, Laplacian surface editing, ACM SGP 2004, (2004), 175-184.
    [37] S. Bouaziz, M. Deuss, Y. Schwartzburg, T. Weise, M. Pauly, Shape-up: Shaping discrete geometry with projections, Comput. Graphics Forum, 31 (2012), 1657-1667.
    [38] B. Cyganek, J. P. Siebert, An Introduction to 3D Computer Vision Techniques and Algorithms, 2011.
    [39] J. Kwon, J. W. Yi, S. M. Song, Adaptive cubic interpolation of CT slices for maximum intensity projections, Med. Imaging Int. Soc. Opt. Photonics 2004, 5367 (2004), 837-844.
    [40] C. B. Barber, D. P. Dobkin, H. Huhdanpaa, Qhull: Quickhull algorithm for computing the convex hull, ASCL, (2013), 1300-1304.
    [41] J. Sankaranarayanan, H. Samet, A. Varshney, A fast k-neighborhood algorithm for large point-clouds, IEEE SPBG 2006, (2006), 75-84.
    © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).