Review

Bone remodeling and biological effects of mechanical stimulus

  • Received: 05 November 2019 Accepted: 17 January 2020 Published: 19 January 2020
  • This review describes the physiology of normal bone tissue as an introduction to the subsequent discussion on bone remodeling and biomechanical stimulus. As a complex architecture with a heterogeneous and anisotropic hierarchy, skeletal bone has been anatomically analysed at different levels of organization, extending from the nanoscale to the whole bone. With the interpretation of basic bone histomorphology, the main constituents of bone are summarized, including various organic proteins in the bone matrix and inorganic minerals as the reinforcement. The cell populations that actively participate in bone remodeling—osteoclasts, osteoblasts and osteocytes—are also discussed, since they are the main operators in bone resorption and formation. A variety of factors affect bone remodeling, such as hormones, cytokines, mechanical stimulus and electromagnetic stimulus. As a particularly potent stimulus for bone cells, mechanical forces play a crucial role in enhancing bone strength and preventing bone loss with aging. By combining all these aspects, the information lays the groundwork for systematically understanding the link between bone physiology and the orchestrated process of mechanically mediated bone homeostasis.

    Citation: Chao Hu, Qing-Hua Qin. Bone remodeling and biological effects of mechanical stimulus[J]. AIMS Bioengineering, 2020, 7(1): 12-28. doi: 10.3934/bioeng.2020002



    1. Introduction

    In data clustering or cluster analysis, the goal is to divide a set of objects into homogeneous groups called clusters [10,18,20,26,12,1]. For high-dimensional data, clusters are usually formed in subspaces of the original data space and different clusters may relate to different subspaces. To recover clusters embedded in subspaces, subspace clustering algorithms have been developed, see for example [2,15,19,17,9,21,16,22,3,25,7,11,13]. Subspace clustering algorithms can be classified into two categories: hard subspace clustering algorithms and soft subspace clustering algorithms.

    In hard subspace clustering algorithms, the subspaces in which clusters are embedded are determined exactly. In other words, each attribute of the data is either associated with a cluster or not associated with that cluster. For example, the subspace clustering algorithms developed in [2] and [15] are hard subspace clustering algorithms. In soft subspace clustering algorithms, the subspaces of clusters are not determined exactly. Instead, each attribute is associated with a cluster with some probability. If an attribute is important to the formation of a cluster, then the attribute is associated with the cluster with high probability. Examples of soft subspace clustering algorithms include [19], [9], [21], [16], and [13].

    In soft subspace clustering algorithms, the attribute weights associated with clusters are determined automatically. In general, the weight of an attribute for a cluster is inversely proportional to the dispersion of the attribute in the cluster. If the values of an attribute in a cluster are relatively compact, then the attribute is assigned a relatively high weight. In the FSC algorithm [16], for example, the attribute weights are calculated as

    $$ w_{lj} = \frac{1}{\sum_{h=1}^{d}\left(\frac{V_{lj}+\epsilon}{V_{lh}+\epsilon}\right)^{\frac{1}{\alpha-1}}}, \qquad l=1,2,\ldots,k,\ j=1,2,\ldots,d, \tag{1} $$

    where $\epsilon$ is a small positive number used to prevent division by zero, $\alpha>1$ is a parameter used to control the smoothness of the attribute weights, and

    $$ V_{lj} = \sum_{x\in C_l}\left(x_j - z_{lj}\right)^2. \tag{2} $$

    Here $k$ is the number of clusters, $d$ is the number of attributes, and $z_l$ is the center of the $l$th cluster $C_l$. In the EWKM algorithm [21], the attribute weights are calculated as

    $$ w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)}, \qquad l=1,2,\ldots,k,\ j=1,2,\ldots,d, \tag{3} $$

    where $\gamma>0$ is a parameter used to control the smoothness of the attribute weights.
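    To make the two weighting schemes concrete, the following is a minimal sketch (not code from the paper) of Equations (1) and (3), taking as input a matrix V of per-cluster attribute dispersions with one row per cluster and one column per attribute.

```python
# Minimal sketch of Eq. (1) (FSC) and Eq. (3) (EWKM); V has shape (k, d),
# where V[l, j] is the dispersion of attribute j in cluster l.
import numpy as np

def fsc_weights(V, alpha=2.0, eps=1e-8):
    """Eq. (1): w_lj = 1 / sum_h ((V_lj + eps) / (V_lh + eps))^(1/(alpha-1))."""
    ratio = (V[:, :, None] + eps) / (V[:, None, :] + eps)      # ratio[l, j, h]
    return 1.0 / (ratio ** (1.0 / (alpha - 1.0))).sum(axis=2)

def ewkm_weights(V, gamma=1.0):
    """Eq. (3): w_lj = exp(-V_lj / gamma) / sum_s exp(-V_ls / gamma)."""
    E = np.exp(-(V - V.min(axis=1, keepdims=True)) / gamma)    # shift: same weights, safer exp
    return E / E.sum(axis=1, keepdims=True)
```

    Each row of both weight matrices sums to one, so the two schemes differ only in how strongly they penalize large dispersions.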

    One drawback of the FSC algorithm is that a positive value of $\epsilon$ is required in order to prevent division by zero when an attribute has identical values in a cluster. Using the entropy weighting, the EWKM algorithm does not suffer from the problem of division by zero. However, the attribute weights calculated in the EWKM algorithm are sensitive to the parameter $\gamma$ when the range of the attribute dispersions (i.e., the $V_{lj}$) in a cluster is large. For example, suppose that a dataset has two attributes, whose dispersions in a cluster are 10 and 30, respectively. If we use a small value of $\gamma$ such as $\gamma=1$, the attribute weights will be

    $$ w_1 = \frac{e^{-10}}{e^{-10}+e^{-30}} = \frac{1}{1+e^{-20}} \approx 1, \qquad w_2 = \frac{e^{-30}}{e^{-10}+e^{-30}} = \frac{1}{1+e^{20}} \approx 0. $$

    If we use $\gamma=10$, the attribute weights will be

    $$ w_1 = \frac{e^{-1}}{e^{-1}+e^{-3}} = \frac{1}{1+e^{-2}} \approx 0.88, \qquad w_2 = \frac{e^{-3}}{e^{-1}+e^{-3}} = \frac{1}{1+e^{2}} \approx 0.12. $$

    From the above example we see that choosing an appropriate value for the parameter $\gamma$ is a difficult task when the attribute dispersions in a cluster have a large range. Feature group weighting has been introduced to address this issue [7,14].
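    The worked example above is easy to reproduce numerically; the following small check (not part of the paper) evaluates the entropy weights of Equation (3) for the dispersions 10 and 30 at the two values of $\gamma$.

```python
import numpy as np

def entropy_weights(V, gamma):
    E = np.exp(-(V - V.min()) / gamma)     # shifting by the minimum does not change the result
    return E / E.sum()

V = np.array([10.0, 30.0])
print(entropy_weights(V, gamma=1.0))       # ~[1.0, 0.0]: one attribute takes all the weight
print(entropy_weights(V, gamma=10.0))      # ~[0.88, 0.12]: much smoother weights
```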

    In this paper, we address the issue from a different perspective. Unlike the feature group weighting approach, the approach we employ in this paper uses a log transformation of the distances so that the attribute weights are not dominated by a single attribute with the smallest dispersion. In particular, we present a soft subspace clustering algorithm called the LEKM algorithm (log-transformed entropy weighting k-means) to address the aforementioned problem. The LEKM algorithm extends the EWKM algorithm by using log-transformed distances in its objective function. The resulting attribute dispersions in a cluster are more compact than those from the EWKM algorithm. Because the differences among the attribute dispersions are small, the LEKM algorithm is less sensitive to the parameter than other soft subspace clustering algorithms are.

    The remaining part of this paper is structured as follows. In Section 2, we give a brief review of the LAC algorithm [9] and the EWKM algorithm [21]. In Section 3, we present the LEKM algorithm in detail. In Section 4, we present numerical experiments to demonstrate the performance of the LEKM algorithm. Section 5 concludes the paper with some remarks.


    2. Related work

    In this section, we introduce the EWKM algorithm [21] and the LAC algorithm [9], which are soft subspace clustering algorithms using the entropy weighting.


    2.1. The EWKM algorithm

    Let $x_1, x_2, \ldots, x_n$ be $n$ data points, each of which is described by $d$ attributes. Let $k$ be the desired number of clusters. Then the objective function of the EWKM algorithm is defined as follows [21]:

    $$ F(U,W,Z) = \sum_{l=1}^{k}\left[\sum_{i=1}^{n}\sum_{j=1}^{d} u_{il}\, w_{lj}\left(x_{ij}-z_{lj}\right)^2 + \gamma\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \tag{4} $$

    where $\gamma>0$ is a parameter, $U=(u_{il})_{n\times k}$ is an $n\times k$ partition matrix, and $W=(w_{lj})_{k\times d}$ is a $k\times d$ weight matrix. In addition, the partition matrix $U$ and the weight matrix $W$ satisfy the following conditions:

    $$ \sum_{l=1}^{k} u_{il} = 1, \qquad i=1,2,\ldots,n, \tag{5a} $$
    $$ u_{il} \in \{0,1\}, \qquad i=1,2,\ldots,n,\ l=1,2,\ldots,k, \tag{5b} $$
    $$ \sum_{j=1}^{d} w_{lj} = 1, \qquad l=1,2,\ldots,k, \tag{5c} $$

    and

    $$ w_{lj} > 0, \qquad l=1,2,\ldots,k,\ j=1,2,\ldots,d. \tag{5d} $$

    Like the k-means algorithm [23,4], the EWKM algorithm tries to minimize the objective function using an iterative process. At the beginning, the EWKM algorithm initializes the cluster centers by selecting k points from the dataset randomly and initializes the attribute weights with equal values. Then the EWKM algorithm keeps updating U, W, and Z one at a time by fixing the other two. Given W and Z, the partition matrix U is updated as

    $$ u_{il} = \begin{cases} 1, & \text{if } \sum_{j=1}^{d} w_{lj}\left(x_{ij}-z_{lj}\right)^2 \le \sum_{j=1}^{d} w_{sj}\left(x_{ij}-z_{sj}\right)^2 \text{ for } 1\le s\le k,\\ 0, & \text{otherwise}, \end{cases} $$

    for $i=1,2,\ldots,n$ and $l=1,2,\ldots,k$. Given $U$ and $Z$, the weight matrix $W$ is updated as

    $$ w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where

    $$ V_{lj} = \sum_{i=1}^{n} u_{il}\left(x_{ij}-z_{lj}\right)^2. $$

    Given $U$ and $W$, the cluster centers are updated as

    $$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\, x_{ij}}{\sum_{i=1}^{n} u_{il}} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$. The runtime complexity of one iteration of the EWKM algorithm is $O(nkd)$.
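    For readers who prefer code to formulas, the following is a minimal sketch of one EWKM iteration under the notation above; it is an illustration, not the authors' Java implementation, and since the handling of empty clusters is not specified above, the sketch simply keeps their previous centers.

```python
import numpy as np

def ewkm_iteration(X, Z, W, gamma):
    """One pass of the U, W, Z updates; X is (n, d), Z is (k, d), W is (k, d)."""
    n, d = X.shape
    k = Z.shape[0]
    diff2 = (X[:, None, :] - Z[None, :, :]) ** 2                 # (n, k, d) squared distances
    # Update U: assign each point to the cluster with the smallest weighted distance.
    dist = np.einsum('lj,ilj->il', W, diff2)                     # (n, k)
    U = np.zeros((n, k))
    U[np.arange(n), dist.argmin(axis=1)] = 1.0
    # Update W: entropy weighting of the within-cluster dispersions V_lj.
    V = np.einsum('il,ilj->lj', U, diff2)                        # (k, d)
    E = np.exp(-(V - V.min(axis=1, keepdims=True)) / gamma)      # shift only for stability
    W = E / E.sum(axis=1, keepdims=True)
    # Update Z: per-cluster means; empty clusters keep their previous centers.
    counts = U.sum(axis=0)
    Z = np.where(counts[:, None] > 0, (U.T @ X) / np.maximum(counts, 1)[:, None], Z)
    return U, W, Z
```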

    The parameter $\gamma$ in the EWKM algorithm is used to control the smoothness of the attribute weights. If $\gamma$ approaches infinity, then all attributes have the same weight. In that case, the EWKM algorithm becomes the standard k-means algorithm. Since the attribute weights are based on exponential normalization, the weights are sensitive to the parameter $\gamma$ when the attribute dispersions (i.e., the $V_{lj}$) have a wide range.


    2.2. The LAC algorithm

    The LAC algorithm (Locally Adaptive Clustering) [9] and the EWKM algorithm are similar soft subspace clustering algorithms in that both algorithms discover subspace clusters via exponential weighting of attributes. However, the LAC algorithm differs from the EWKM algorithm in the definition of objective function. Clusters found by the LAC algorithm are referred to as weighted clusters. The objective function of the LAC algorithm is defined as

    $$ E(C,Z,W) = \sum_{l=1}^{k}\sum_{j=1}^{d}\left(w_{lj}\,\frac{1}{|C_l|}\sum_{x\in C_l}\left(x_j-z_{lj}\right)^2 + h\, w_{lj}\log w_{lj}\right), \tag{6} $$

    where $k$ is the number of clusters, $d$ is the number of attributes, $Z=\{z_1,z_2,\ldots,z_k\}$ is a set of cluster centers, $W=(w_{lj})_{k\times d}$ is a weight matrix, $C=\{C_1,C_2,\ldots,C_k\}$ is a set of clusters, and $h>0$ is a parameter. The weight matrix also satisfies the conditions given in Equations (5c) and (5d).

    Like the k-means algorithm and the EWKM algorithm, the LAC algorithm also employs an iterative process to optimize the objective function. Similar to the EWKM algorithm, the LAC algorithm initializes the cluster centers by selecting k points from the dataset randomly and initializes the attribute weights with equal values. Given the set of cluster centers Z and the set of weight vectors W, the clusters are determined as follows:

    $$ S_l = \left\{x : \sum_{j=1}^{d} w_{lj}\left(x_j - z_{lj}\right)^2 < \sum_{j=1}^{d} w_{sj}\left(x_j - z_{sj}\right)^2,\ \forall\, s\neq l\right\} \tag{7} $$

    for $l=1,2,\ldots,k$. Given the set of cluster centers $Z$ and the set of clusters $\{S_1,S_2,\ldots,S_k\}$, the set of weight vectors is determined as follows:

    $$ w_{lj} = \frac{\exp\left(-V_{lj}/h\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/h\right)} \tag{8} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where

    $$ V_{lj} = \frac{1}{|S_l|}\sum_{x\in S_l}\left(x_j - z_{lj}\right)^2. $$

    Given the set of clusters $\{S_1,S_2,\ldots,S_k\}$, the cluster centers are updated as follows:

    $$ z_{lj} = \frac{1}{|S_l|}\sum_{x\in S_l} x_j \tag{9} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$. The runtime complexity of one iteration of the LAC algorithm is $O(nkd)$.

    Comparing Equation (6) with Equation (4), we see that the distances in the objective function of the LAC algorithm are normalized by the sizes of the corresponding clusters. As a result, the dispersions (i.e., the $V_{lj}$) calculated in the LAC algorithm are smaller than those calculated in the EWKM algorithm. However, the dispersions calculated in the LAC algorithm can still have a wide range for small-sample high-dimensional data such as gene expression data [8].
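    For comparison with the EWKM sketch above, the following is a hedged sketch of the LAC weight update of Equation (8), with the dispersions $V_{lj}$ normalized by the cluster sizes; the handling of empty clusters is again an assumption of the sketch, not part of the paper.

```python
import numpy as np

def lac_weights(X, labels, Z, h):
    """X: (n, d) data, labels: cluster index of each point, Z: (k, d) centers."""
    k, d = Z.shape
    V = np.zeros((k, d))
    for l in range(k):
        members = X[labels == l]
        if len(members) > 0:
            V[l] = ((members - Z[l]) ** 2).mean(axis=0)   # (1/|S_l|) * sum of squares
        # empty clusters keep zero dispersion in this sketch
    E = np.exp(-(V - V.min(axis=1, keepdims=True)) / h)    # shift only for stability
    return E / E.sum(axis=1, keepdims=True)
```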


    3. The LEKM algorithm

    In this section, we present the LEKM algorithm. The LEKM algorithm is similar to the EWKM algorithm [21] and the LAC algorithm [9] in that the entropy weighting is used to determine the attribute weights.

    Let $X=\{x_1,x_2,\ldots,x_n\}$ be a dataset containing $n$ points, each of which is described by $d$ numerical features or attributes. Let $Z=\{z_1,z_2,\ldots,z_k\}$ be a set of cluster centers, where $k$ is the number of clusters. Then the objective function of the LEKM algorithm is defined as

    $$ \begin{aligned} P(U,W,Z) &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\sum_{j=1}^{d} w_{lj}\ln\!\left[1+\left(x_{ij}-z_{lj}\right)^2\right] + \lambda\sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\sum_{j=1}^{d} w_{lj}\ln w_{lj}\\ &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\left[\sum_{j=1}^{d} w_{lj}\ln\!\left[1+\left(x_{ij}-z_{lj}\right)^2\right] + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \end{aligned} \tag{10} $$

    where $U=(u_{il})_{n\times k}$ is an $n\times k$ binary matrix satisfying Equations (5a) and (5b), $W=(w_{lj})_{k\times d}$ is a $k\times d$ weight matrix satisfying Equations (5c) and (5d), and $\lambda>0$ is a parameter. In the above equation, $x_{ij}$ and $z_{lj}$ denote the values of $x_i$ and $z_l$ in the $j$th attribute, respectively. The matrix $U$ is the partition matrix in the following sense: if $u_{il}=1$, then the point $x_i$ belongs to the $l$th cluster. The matrix $W$ is the weight matrix containing the attribute weights: if $w_{lj}$ is relatively large, then the $j$th attribute is important for the formation of the $l$th cluster.

    Similar to the EWKM algorithm, the LEKM algorithm tries to minimize the objective function given in Equation (10) iteratively by finding the optimal values of $U$, $W$, and $Z$ according to the following theorems.

    Theorem 3.1. Let W and Z be fixed. Then the partition matrix U that minimizes the objective function P(U,W,Z) is given by

    $$ u_{il} = \begin{cases} 1, & \text{if } D(x_i,z_l)\le D(x_i,z_s)\ \text{for all } s=1,2,\ldots,k;\\ 0, & \text{otherwise}, \end{cases} \tag{11} $$

    for $i=1,2,\ldots,n$ and $l=1,2,\ldots,k$, where

    $$ D(x_i,z_s) = \sum_{j=1}^{d} w_{sj}\ln\!\left[1+\left(x_{ij}-z_{sj}\right)^2\right] + \lambda\sum_{j=1}^{d} w_{sj}\ln w_{sj}. $$

    Proof. Since $W$ and $Z$ are fixed and the rows of the partition matrix $U$ are independent of each other, the objective function is minimized if for each $i=1,2,\ldots,n$, the following function

    $$ f(u_{i1},u_{i2},\ldots,u_{ik}) = \sum_{l=1}^{k} u_{il}\, D(x_i,z_l) \tag{12} $$

    is minimized. Note that $u_{il}\in\{0,1\}$ and

    $$ \sum_{l=1}^{k} u_{il} = 1. $$

    The function defined in Equation (12) is minimized if Equation (11) holds. This completes the proof.

    Theorem 3.2. Let U and Z be fixed. Then the weight matrix W that minimizes the objective function P(U,W,Z) is given by

    $$ w_{lj} = \frac{\exp\left(-V_{lj}/\lambda\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\lambda\right)} \tag{13} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where

    $$ V_{lj} = \frac{\sum_{i=1}^{n} u_{il}\ln\!\left[1+\left(x_{ij}-z_{lj}\right)^2\right]}{\sum_{i=1}^{n} u_{il}}. $$

    Proof. The weight matrix $W$ that minimizes the objective function $P(U,W,Z)$ subject to

    $$ \sum_{j=1}^{d} w_{lj} = 1, \qquad l=1,2,\ldots,k, $$

    is the matrix $W$ that minimizes the following function

    $$ \begin{aligned} f(W) &= P(U,W,Z) + \sum_{l=1}^{k}\beta_l\left(\sum_{j=1}^{d} w_{lj}-1\right)\\ &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\left[\sum_{j=1}^{d} w_{lj}\ln\!\left[1+\left(x_{ij}-z_{lj}\right)^2\right] + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right] + \sum_{l=1}^{k}\beta_l\left(\sum_{j=1}^{d} w_{lj}-1\right). \end{aligned} \tag{14} $$

    The weight matrix $W$ that minimizes Equation (14) satisfies the following equations:

    $$ \frac{\partial f(W)}{\partial w_{lj}} = \sum_{i=1}^{n} u_{il}\left(\ln\!\left[1+\left(x_{ij}-z_{lj}\right)^2\right] + \lambda\ln w_{lj} + \lambda\right) + \beta_l = 0 $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, and

    $$ \frac{\partial f(W)}{\partial \beta_l} = \sum_{j=1}^{d} w_{lj} - 1 = 0 $$

    for $l=1,2,\ldots,k$. Solving the above equations leads to Equation (13).

    From Equation (13) we see that the attribute weights of the $l$th cluster are the exponential normalizations of $V_{l1}, V_{l2}, \ldots, V_{ld}$. Since $V_{lj}$ is an average of log-transformed distances, the range of the magnitudes of $V_{l1}, V_{l2}, \ldots, V_{ld}$ is small. Hence the weights are less sensitive to the parameter $\lambda$.
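    The following is a minimal sketch of this weight update (Theorem 3.2), again under the paper's notation rather than the reference implementation; the log transform is what keeps each row of $V$ within a narrow range.

```python
import numpy as np

def lekm_weights(X, U, Z, lam):
    """X: (n, d) data, U: (n, k) binary partition matrix, Z: (k, d) centers."""
    logd = np.log1p((X[:, None, :] - Z[None, :, :]) ** 2)        # ln(1 + squared distance)
    V = np.einsum('il,ilj->lj', U, logd)                          # per-cluster sums
    V /= np.maximum(U.sum(axis=0), 1)[:, None]                    # divide by cluster sizes
    E = np.exp(-(V - V.min(axis=1, keepdims=True)) / lam)         # shift only for stability
    return E / E.sum(axis=1, keepdims=True)
```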

    Theorem 3.3. Let U and W be fixed. Then the set of cluster centers Z that minimizes the objective function P(U,W,Z) satisfies the following nonlinear equations

    $$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z_{lj}\right)^2\right]^{-1} x_{ij}}{\sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z_{lj}\right)^2\right]^{-1}} \tag{15} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$.

    Proof. If the set of cluster centers $Z$ minimizes the objective function $P(U,W,Z)$, then for all $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, the derivative of $P(U,W,Z)$ with respect to $z_{lj}$ is equal to zero. In other words, we have

    $$ \frac{\partial P}{\partial z_{lj}} = -\,w_{lj}\sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z_{lj}\right)^2\right]^{-1}\left[2\left(x_{ij}-z_{lj}\right)\right] = 0. $$

    Since $w_{lj}>0$, we have

    $$ \sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z_{lj}\right)^2\right]^{-1}\left[2\left(x_{ij}-z_{lj}\right)\right] = 0, $$

    from which Equation (15) follows.

    In the standard k-means algorithm, the EWKM algorithm, and the LAC algorithm, the center of a cluster is calculated as the average of the points in the cluster. In the LEKM algorithm, however, the center of a cluster is governed by a nonlinear equation in such a way that the center is a weighted average of the points in the cluster. In addition, if a point is far away from its center, then the point is given a low weight in the center calculation. As a result, the LEKM algorithm is less sensitive to outliers than the EWKM algorithm and the LAC algorithm. Since the LEKM algorithm is an iterative algorithm, we can in practice update the cluster centers as follows:
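    A small illustration of this robustness (not taken from the paper) is to iterate the fixed-point update for a one-dimensional cluster with an outlier and compare the result with the plain mean:

```python
# For a 1-D cluster with an outlier, iterating the fixed-point update of
# Eq. (16) pulls the center toward the bulk of the points, whereas the plain
# mean used by k-means/EWKM/LAC is dragged toward the outlier.
import numpy as np

x = np.array([0.9, 1.0, 1.1, 1.0, 9.0])     # hypothetical cluster with one outlier
z = x.mean()                                 # plain mean used by the other algorithms
for _ in range(50):                          # fixed-point iteration of Eq. (16)
    a = 1.0 / (1.0 + (x - z) ** 2)           # weights shrink for far-away points
    z = (a * x).sum() / a.sum()
print(x.mean(), z)                           # mean ~2.6 vs. weighted center ~1.0
```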

    $$ z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z'_{lj}\right)^2\right]^{-1} x_{ij}}{\sum_{i=1}^{n} u_{il}\left[1+\left(x_{ij}-z'_{lj}\right)^2\right]^{-1}} \tag{16} $$

    for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where $Z'=\{z'_1,z'_2,\ldots,z'_k\}$ is the set of cluster centers from the previous iteration. When the algorithm converges, the cluster centers in the current iteration are the same as those from the previous iteration, and Equation (16) is the same as Equation (15).

    To find the optimal values of $U$, $W$, and $Z$ that minimize the objective function given in Equation (10), the LEKM algorithm proceeds iteratively by updating one of $U$, $W$, and $Z$ at a time with the other two fixed. The pseudo-code of the LEKM algorithm is shown in Algorithm 1. The computational complexity of one iteration of the LEKM algorithm is $O(nkd)$. Although the runtime complexity of the LEKM algorithm is the same as those of the EWKM algorithm and the LAC algorithm, we expect the LEKM algorithm to be slower than the EWKM algorithm and the LAC algorithm because more operations are involved in the LEKM algorithm.
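    Since Algorithm 1 is not reproduced here, the following compact sketch shows how the three updates can be combined into the iterative scheme described above; the random-sampling initialization and the stopping rule based on a tolerance and an iteration cap are assumptions carried over from the EWKM/LAC descriptions and from Table 1 below, not a transcription of Algorithm 1.

```python
import numpy as np

def lekm(X, k, lam=1.0, delta=1e-6, n_max=100, seed=0):
    """Alternate the updates of Theorems 3.1-3.3 until the objective stabilizes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = X[rng.choice(n, size=k, replace=False)].copy()            # random initial centers
    W = np.full((k, d), 1.0 / d)                                   # equal initial weights
    prev_obj = np.inf
    for _ in range(n_max):
        logd = np.log1p((X[:, None, :] - Z[None, :, :]) ** 2)      # (n, k, d)
        # Theorem 3.1: assign each point to the cluster with the smallest D(x_i, z_l).
        D = np.einsum('lj,ilj->il', W, logd) + lam * (W * np.log(W)).sum(axis=1)
        U = np.zeros((n, k))
        U[np.arange(n), D.argmin(axis=1)] = 1.0
        sizes = U.sum(axis=0)
        # Theorem 3.2 / Eq. (13): entropy weights of the averaged log-transformed dispersions.
        V = np.einsum('il,ilj->lj', U, logd) / np.maximum(sizes, 1)[:, None]
        E = np.exp(-(V - V.min(axis=1, keepdims=True)) / lam)
        W = np.maximum(E / E.sum(axis=1, keepdims=True), np.finfo(float).tiny)  # avoid log(0)
        # Objective (10) with the current centers, evaluated before they are refreshed.
        obj = (U[:, :, None] * W[None, :, :] * logd).sum() + \
              lam * (sizes[:, None] * W * np.log(W)).sum()
        # Theorem 3.3 / Eq. (16): weighted-average centers using the previous centers.
        a = 1.0 / (1.0 + (X[:, None, :] - Z[None, :, :]) ** 2)     # (n, k, d) point weights
        num = np.einsum('il,ilj,ij->lj', U, a, X)
        den = np.einsum('il,ilj->lj', U, a)
        Z = np.where(den > 0, num / np.maximum(den, np.finfo(float).tiny), Z)
        if abs(prev_obj - obj) < delta:
            break
        prev_obj = obj
    return U.argmax(axis=1), W, Z
```

    With the default values in Table 1, this corresponds to calling lekm(X, k) with lam=1, delta=1e-6 and n_max=100.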

    The LEKM algorithm requires four parameters: $k$, $\lambda$, $\delta$, and $N_{\max}$. The parameter $k$ is the desired number of clusters. The parameter $\lambda$ controls the smoothness of the attribute weights: the larger the value of $\lambda$, the more uniform the attribute weights. The last two parameters, the convergence tolerance $\delta$ and the maximum number of iterations $N_{\max}$, are used to terminate the algorithm. Table 1 gives the default values of these parameters.

    Table 1. Default parameter values of the LEKM algorithm.
    Parameter   Default Value
    $\lambda$   1
    $\delta$    $10^{-6}$
    $N_{\max}$  100

    4. Numerical experiments

    In this section, we present numerical experiments based on both synthetic data and real data to demonstrate the performance of the LEKM algorithm. We also compare the LEKM algorithm with the EWKM algorithm and the LAC algorithm in terms of accuracy and runtime. We implemented all three algorithms in Java and used the same convergence criterion as shown in Algorithm 1.

    In our experiments, we use the corrected Rand index [8,13] to measure the accuracy of clustering results. The corrected Rand index is calculated from two partitions of the same dataset and its value ranges from -1 to 1, with 1 indicating perfect agreement between the two partitions and 0 indicating agreement by chance. In general, the higher the corrected Rand index, the better the clustering result.
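    The corrected Rand index is commonly identified with the Hubert-Arabie adjusted Rand index, which scikit-learn exposes as adjusted_rand_score; assuming that identification, results of this kind can be checked outside the authors' Java implementation as follows:

```python
from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 0, 1, 1, 1]      # hypothetical ground-truth partition
found_labels = [0, 0, 1, 1, 1, 1]     # hypothetical clustering result
print(adjusted_rand_score(true_labels, found_labels))   # 1 = perfect agreement, ~0 = chance
```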

    Since all three algorithms are k-means-type algorithms, they are sensitive to the initial cluster centers [6,13]. To compare the performance of these three algorithms on the first synthetic dataset, we run each algorithm 100 times and calculate the average accuracy and runtime. In each run, we use a different seed to select random initial cluster centers. To compare the three algorithms in a consistent way, we used the same 100 seeds for all three algorithms. To test the impact of the parameters (i.e., $\gamma$ in EWKM, $h$ in LAC, and $\lambda$ in LEKM), we use five different values for the parameter: 1, 2, 4, 8, and 16.
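    A sketch of this evaluation protocol, with a hypothetical run_algorithm standing in for any of the three clustering routines, might look as follows:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def evaluate(run_algorithm, X, true_labels, param, seeds=range(100)):
    """Average corrected Rand index and its standard deviation over seeded runs."""
    scores = [adjusted_rand_score(true_labels, run_algorithm(X, param, seed=s))
              for s in seeds]
    return float(np.mean(scores)), float(np.std(scores))
```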


    4.1. Experiments on synthetic data

    To test the performance of the LEKM algorithm, we generated two synthetic datasets. The first synthetic dataset is a 2-dimensional dataset with two clusters and is shown in Figure 1. From the figure we see that the cluster at the top is compact, but the cluster at the bottom contains several points that are far away from the cluster center. We can consider this dataset to be a dataset containing noise.

    Figure 1. A 2-dimensional dataset with two clusters.

    Table 2 shows the average corrected Rand index of 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the LEKM algorithm produced more accurate results than the LAC algorithm and the EWKM algorithm. The EWKM algorithm produced the least accurate results. Since the dispersion of an attribute in a cluster is normalized by the size of the cluster in the LAC and LEKM algorithms, these two algorithms are less sensitive to the parameter.

    Table 2. The average accuracy of 100 runs of the three algorithms on the first synthetic dataset. The numbers in parentheses are the corresponding standard deviations over the 100 runs. The parameter refers to $\gamma$, $h$, and $\lambda$ in EWKM, LAC, and LEKM, respectively.
    Parameter EWKM LAC LEKM
    1 0.0351 (0.0582) 0.0024 (0.0158) 0.9154 (0.2704)
    2 0.0378 (0.0556) 0.9054 (0.2322) 0.9063 (0.2827)
    4 0.012 (0.031) 0.8019 (0.2422) 0.9067 (0.2815)
    8 -0.0135 (0.0125) 0.7604 (0.2406) 0.9072 (0.2799)
    16 -0.013 (0.0134) 0.7527 (0.2501) 0.9072 (0.2799)

    Table 3 shows the confusion matrices produced by the best run of the three algorithms on the first synthetic dataset. We ran the EWKM algorithm, the LAC algorithm, and the LEKM algorithm 100 times on the first synthetic dataset with parameter 2 (i.e., $\gamma=2$ in EWKM, $h=2$ in LAC, and $\lambda=2$ in LEKM) and chose the best run to be the run with the lowest objective function value. From Table 3 we see that the LEKM algorithm was able to recover the two clusters of the first synthetic dataset correctly. The LAC algorithm clustered one point incorrectly. The EWKM algorithm is sensitive to noise and clustered many points incorrectly.

    Table 3. The confusion matrices of the first synthetic dataset corresponding to the runs with the lowest objective function values. The parameter used in these runs is 2. The labels "1" and "2" in the first row indicate the given clusters. The labels "C1" and "C2" in the first column indicate the found clusters. (a) EWKM. (b) LAC. (c) LEKM.
        (a) EWKM        (b) LAC         (c) LEKM
             1    2          1    2          1    2
        C2  35   25     C2  59    0     C2  60    0
        C1  25   15     C1   1   40     C1   0   40

    Table 4 shows the attribute weights of the two clusters produced by the best runs of the three algorithms. As we can see from the table, the attribute weights produced by the EWKM algorithm are dominated by one attribute. The attribute weights of one cluster produced by the LAC algorithm are also affected by the noise in the cluster. The attribute weights of the clusters produced by the LEKM algorithm seem reasonable, as the two clusters are formed in the full space and approximately the same attribute weights are expected.

    Table 4. The attribute weights of the two clusters corresponding to the runs with the lowest objective function values. The parameter used in these runs is 2. The labels "C1" and "C2" in the first column indicate the found clusters. (a) EWKM. (b) LAC. (c) LEKM.
        (a) EWKM                   (b) LAC                    (c) LEKM
             Weight 1  Weight 2         Weight 1  Weight 2         Weight 1  Weight 2
        C1   1         3.01E-36    C1   0.8931    0.1069      C1   0.5448    0.4552
        C2   1         2.85E-51    C2   0.5057    0.4943      C2   0.5055    0.4945

    Table 5 shows the average runtime of the 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the EWKM algorithm converged the fastest. The LAC algorithm and the LEKM algorithm converged in about the same time.

    Table 5. The average runtime of the three algorithms on the first synthetic dataset. The numbers in parentheses are the corresponding standard deviations over the 100 runs. The numbers are in seconds.
    Parameter EWKM LAC LEKM
    1 0.0005 (0.0005) 0.0021 (0.0032) 0.0016 (0.0009)
    2 0.0002 (0.0004) 0.0018 (0.0026) 0.0013 (0.0006)
    4 0.0002 (0.0004) 0.0017 (0.0025) 0.0014 (0.0011)
    8 0.0003 (0.0004) 0.0018 (0.0026) 0.0016 (0.0017)
    16 0.0002 (0.0004) 0.0018 (0.0025) 0.0016 (0.002)

    The second synthetic dataset is a 100-dimensional dataset with four clusters. Table 6 shows the sizes and dimensions of the four clusters. This dataset was also used to test the SAP algorithm developed in [13]. Table 7 summarizes the clustering results of the three algorithms. From the table we see that the LEKM algorithm produced the most accurate results when the parameter is small. When the parameter is large, the attribute weights calculated by the LEKM algorithm become approximately the same. Since the clusters are embedded in subspaces, assigning approximately the same weight to attributes prevents the LEKM algorithm from recovering these clusters.

    Table 6. A 100-dimensional dataset with 4 subspace clusters.
    Cluster Cluster Size Subspace Dimensions
    A 500 10, 15, 70
    B 300 20, 30, 80, 85
    C 500 30, 40, 70, 90, 95
    D 700 40, 45, 50, 55, 60, 80
    Table 7. The average accuracy of 100 runs of the three algorithms on the second synthetic dataset.
    Parameter EWKM LAC LEKM
    1 0.557 (0.1851) 0.5534 (0.1857) 0.9123 (0.147)
    2 0.557 (0.1851) 0.5572 (0.1883) 0.928 (0.1361)
    4 0.557 (0.1851) 0.5658 (0.1902) 0.6128 (0.1626)
    8 0.557 (0.1851) 0.574 (0.2028) 0.3197 (0.1247)
    16 0.5573 (0.1854) 0.6631 (0.2532) 0.2293 (0.0914)

    Table 8 shows the confusion matrices produced by the runs of the three algorithms with the lowest objective function values. From the table we see that only three points were clustered incorrectly by the LEKM algorithm, whereas many points were clustered incorrectly by the EWKM algorithm and the LAC algorithm. Figures 2, 3, and 4 plot the attribute weights of the four clusters corresponding to the confusion matrices given in Table 8. From Figures 2 and 3 we can see that the attribute weights were dominated by a single attribute. Figure 4 shows that the LEKM algorithm was able to recover all the subspace dimensions correctly.

    Table 8. Confusion matrices of the second synthetic dataset produced by the runs with the lowest objective function values. In these runs, the parameter was set to 2. (a) EWKM. (b) LAC. (c) LEKM.
    Figure 2. Attribute weights of the four clusters produced by the EWKM algorithm.
    Figure 3. Attribute weights of the four clusters produced by the LAC algorithm.
    Figure 4. Attribute weights of the four clusters produced by the LEKM algorithm.

    Table 9 shows the average runtime of 100 runs of the three algorithms on the second synthetic dataset. From the table we see that the LEKM algorithm is slower than the other two algorithms. Since the center calculation of the LEKM algorithm is more complicated than that of the EWKM algorithm and the LAC algorithm, it is expected that the LEKM algorithm is slower than the other two algorithms.

    Table 9. The average runtime of 100 runs of the three algorithms on the second synthetic dataset.
    Parameter EWKM LAC LEKM
    1 0.7849 (0.4221) 1.1788 (0.763) 10.4702 (0.1906)
    2 0.7687 (0.4141) 0.8862 (0.4952) 10.3953 (0.1704)
    4 0.7619 (0.4101) 0.8412 (0.4721) 10.5236 (0.2023)
    8 0.7567 (0.4074) 0.8767 (0.4816) 10.5059 (0.2014)
    16 0.7578 (0.4112) 0.8136 (0.5069) 10.4122 (0.189)

    In summary, the test results on the synthetic datasets show that the LEKM algorithm is able to recover clusters from noisy data and to recover clusters embedded in subspaces. The test results also show that the LEKM algorithm is less sensitive to noise and to the parameter value than the EWKM algorithm and the LAC algorithm. However, the LEKM algorithm is in general slower than the other two algorithms due to its more complex center calculation.


    4.2. Experiments on real data

    To test the algorithms on real data, we obtained two cancer gene expression datasets from [8]1. The first dataset contains gene expression data of human liver cancers and the second dataset contains gene expression data of breast tumors and colon tumors. Table 10 shows the information of the two real datasets. The two datasets have known labels, which tell the type of sample of each data point. The two datasets were also used to test the SAP algorithm in [13].

    Table 10. Two real gene expression datasets.
    Dataset Samples Dimensions Cluster sizes
    Chen-2002 179 85 104, 76
    Chowdary-2006 104 182 62, 42

    The datasets are available at http://bioinformatics.rutgers.edu/Static/Supplements/CompCancer/datasets.htm

    Table 11 and Table 12 summarize the average accuracy and the average runtime of 100 runs of the three algorithms on the Chen-2002 dataset, respectively. From the average corrected Rand index shown in Table 11 we see that the LEKM algorithm produced more accurate results than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.

    Table 11. The average accuracy of 100 runs of the three algorithms on the Chen-2002 dataset.
    Parameter EWKM LAC LEKM
    1 0.025 (0.0395) 0.0042 (0.0617) 0.2599 (0.2973)
    2 0.0203 (0.0343) 0.0888 (0.1903) 0.2563 (0.2868)
    4 0.0135 (0.0279) 0.041 (0.1454) 0.2743 (0.2972)
    8 0.0141 (0.0449) 0.0484 (0.1761) 0.2856 (0.2993)
    16 0.0002 (0.0416) 0.0445 (0.1726) 0.2789 (0.2984)
    Table 12. The average runtime of 100 runs of the three algorithms on the Chen-2002 dataset.
    Parameter EWKM LAC LEKM
    1 0.0111 (0.0031) 0.0162 (0.0083) 0.102 (0.0297)
    2 0.0123 (0.0033) 0.0124 (0.006) 0.1035 (0.0286)
    4 0.0143 (0.006) 0.0151 (0.0105) 0.1046 (0.0316)
    8 0.0122 (0.0043) 0.0137 (0.0089) 0.1068 (0.0337)
    16 0.0144 (0.007) 0.014 (0.0091) 0.105 (0.0323)

    The average accuracy and runtime of 100 runs of the three algorithms on the Chowdary-2006 dataset are shown in Table 13 and Table 14, respectively. From Table 13 we see that the LEKM algorithm again produced more accurate clustering results than the other two algorithms did. When the parameter was set to 1, the LAC algorithm produced better results than the EWKM algorithm did. For the other cases, however, the EWKM algorithm produced better results than the LAC algorithm did. The LAC algorithm and the EWKM algorithm are much faster than the LEKM algorithm, as shown in Table 14.

    Table 13. The average accuracy of 100 runs of the three algorithms on the Chowdary-2006 dataset.
    Parameter EWKM LAC LEKM
    1 0.3952 (0.3943) 0.5197 (0.2883) 0.5826 (0.3199)
    2 0.3819 (0.3825) 0.19 (0.2568) 0.5757 (0.3261)
    4 0.3839 (0.3677) 0.0772 (0.1016) 0.5823 (0.3221)
    8 0.4188 (0.3584) 0.0595 (0.0224) 0.5756 (0.3383)
    16 0.4994 (0.3927) 0.0625 (0.0184) 0.582 (0.3363)
    Table 14. The average runtime of 100 runs of the three algorithms on the Chowdary-2006 dataset.
    Parameter EWKM LAC LEKM
    1 0.0115 (0.0048) 0.0109 (0.0042) 0.1369 (0.0756)
    2 0.011 (0.0046) 0.0156 (0.0093) 0.1446 (0.0723)
    4 0.0103 (0.0042) 0.0147 (0.0076) 0.1514 (0.0805)
    8 0.0107 (0.005) 0.0141 (0.0063) 0.1524 (0.0769)
    16 0.0113 (0.0047) 0.0138 (0.0068) 0.1542 (0.0854)

    In summary, the test results on real datasets show that the LEKM algorithm produced more accurate clustering results on average than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.


    5. Concluding remarks

    The EWKM algorithm [21] and the LAC algorithm [9] are two soft subspace clustering algorithms that are similar to each other. In both algorithms, the attribute weights of a cluster are calculated as exponential normalizations of the negative attribute dispersions in the cluster scaled by a parameter. Setting the parameter is a challenge when the attribute dispersions in a cluster have a large range. In this paper, we proposed the LEKM (log-transformed entropy weighting k-means) algorithm, which uses log-transformed distances in the objective function so that the attribute dispersions in a cluster are smaller than those in the EWKM algorithm and the LAC algorithm. The proposed LEKM algorithm has the following two properties: first, the LEKM algorithm allows users to choose a value for the parameter easily because the attribute dispersions in a cluster have a small range; second, the LEKM algorithm is less sensitive to noise because data points far away from their corresponding cluster centers are given small weights in the cluster center calculation.

    We tested the performance of the LEKM algorithm and compared it with the EWKM algorithm and the LAC algorithm. The test results on both synthetic and real datasets show that the LEKM algorithm is able to outperform the EWKM algorithm and the LAC algorithm in terms of accuracy. However, one limitation of the LEKM algorithm is that it is slower than the other two algorithms because updating the cluster centers in each iteration of the LEKM algorithm is more complicated than in the other two algorithms.

    Another limitation of the LEKM algorithm is that it is sensitive to initial cluster centers. This limitation is common to most of the k-means-type algorithms, which include the EWKM algorithm and the LAC algorithm. Other efficient cluster center initialization methods [24,5,6] can be used to improve the performance of the k-means-type algorithms including the LEKM algorithm.


    Acknowledgments

    The authors would like to thank the referees for their insightful comments, which greatly improved the quality of the paper.



    The authors would also like to acknowledge the financial support from the Australian Research Council (Grant No. DP160102491).

    Conflict of interest



    The authors declare no conflict of interest.

  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
