Formulating mathematical models that estimate tumor growth under therapy is vital for improving patient-specific treatment plans. In this context, we present our recent work on simulating non-small cell lung cancer (NSCLC) in a simple, deterministic setting for two different patients receiving an immunotherapeutic treatment. At its core, our model consists of a Cahn-Hilliard-based phase-field model describing the evolution of proliferative and necrotic tumor cells. These are coupled to a simplified nutrient model that drives the growth of the proliferative cells and their decay into necrotic cells. The applied immunotherapy decreases the proliferative cell concentration. Here, we model the immunotherapeutic agent concentration in the entire lung over time by an ordinary differential equation (ODE). Finally, reaction terms provide a coupling between all these equations. By assuming spherical, symmetric tumor growth and constant nutrient inflow, we simplify this full 3D cancer simulation model to a reduced 1D model. We can then resort to patient data gathered from computed tomography (CT) scans over several years to calibrate our model. Our model covers the case in which the immunotherapy is successful and limits the tumor size, as well as the case predicting a sudden relapse, leading to exponential tumor growth. Finally, we move from the reduced model back to the full 3D cancer simulation in the lung tissue. Thereby, we demonstrate the predictive benefits that a more detailed patient-specific simulation including spatial information as a possible generalization within our framework could yield in the future.
Citation: Andreas Wagner, Pirmin Schlicke, Marvin Fritz, Christina Kuttler, J. Tinsley Oden, Christian Schumann, Barbara Wohlmuth. A phase-field model for non-small cell lung cancer under the effects of immunotherapy[J]. Mathematical Biosciences and Engineering, 2023, 20(10): 18670-18694. doi: 10.3934/mbe.2023828
In data clustering or cluster analysis, the goal is to divide a set of objects into homogeneous groups called clusters [10,18,20,26,12,1]. For high-dimensional data, clusters are usually formed in subspaces of the original data space and different clusters may relate to different subspaces. To recover clusters embedded in subspaces, subspace clustering algorithms have been developed, see for example [2,15,19,17,9,21,16,22,3,25,7,11,13]. Subspace clustering algorithms can be classified into two categories: hard subspace clustering algorithms and soft subspace clustering algorithms.
In hard subspace clustering algorithms, the subspaces in which clusters embed are determined exactly. In other words, each attribute of the data is either associated with a cluster or not. For example, the subspace clustering algorithms developed in [2] and [15] are hard subspace clustering algorithms. In soft subspace clustering algorithms, the subspaces of clusters are not determined exactly. Instead, each attribute is associated with a cluster with some probability. If an attribute is important to the formation of a cluster, then the attribute is associated with the cluster with high probability. Examples of soft subspace clustering algorithms include [19], [9], [21], [16], and [13].
In soft subspace clustering algorithms, the attribute weights associated with clusters are determined automatically. In general, the weight of an attribute for a cluster is inversely proportional to the dispersion of the attribute in the cluster. If the values of an attribute in a cluster are relatively compact, then the attribute is assigned a relatively high weight. In the FSC algorithm [16], for example, the attribute weights are calculated as
$$w_{lj} = \frac{1}{\sum_{h=1}^{d}\left(\dfrac{V_{lj}+\epsilon}{V_{lh}+\epsilon}\right)^{\frac{1}{\alpha-1}}}, \qquad l=1,2,\ldots,k,\; j=1,2,\ldots,d, \tag{1}$$
where
$$V_{lj} = \sum_{\mathbf{x}\in C_l}(x_j - z_{lj})^2. \tag{2}$$
Here, $C_l$ denotes the $l$th cluster, $x_j$ denotes the $j$th component of a point $\mathbf{x}$, $z_{lj}$ denotes the $j$th component of the center of cluster $C_l$, $\alpha>1$ is a fuzzification parameter, and $\epsilon$ is a small positive constant. In the EWKM algorithm [21], the attribute weights are calculated as
$$w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)}, \qquad l=1,2,\ldots,k,\; j=1,2,\ldots,d, \tag{3}$$
where $\gamma$ is a positive parameter and $V_{lj}$ is the dispersion given by Equation (2).
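For illustration, the FSC weight formula (1) can be evaluated directly. The following is a minimal sketch (the function name and the dispersion values are ours, not taken from [16]):

```python
def fsc_weights(V, alpha=2.0, eps=1e-6):
    """Attribute weights of one cluster under the FSC scheme (Equation (1)):
    w_j = 1 / sum_h ((V_j + eps) / (V_h + eps))**(1/(alpha - 1))."""
    expo = 1.0 / (alpha - 1.0)
    return [
        1.0 / sum(((vj + eps) / (vh + eps)) ** expo for vh in V)
        for vj in V
    ]

# with dispersions 10 and 30 and alpha = 2, the weights are 0.75 and 0.25
w = fsc_weights([10.0, 30.0])
print(w, sum(w))  # the weights always sum to one
```

Note that the FSC weights depend on the *ratios* of the dispersions, which already makes them less extreme than an exponential normalization of the raw dispersions.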
One drawback of the FSC algorithm is that a positive value of the parameter $\epsilon$ must be specified. The entropy weighting scheme in Equation (3), on the other hand, is sensitive to the choice of $\gamma$: when the dispersions of the attributes have a large range, the attribute with the smallest dispersion can receive almost all of the weight. For example, suppose that a cluster has two attributes with dispersions $V_1=10$ and $V_2=30$. If we use $\gamma=1$, then the attribute weights are
$$w_1 = \frac{e^{-10}}{e^{-10}+e^{-30}} = \frac{1}{1+e^{-20}} \approx 1, \qquad w_2 = \frac{e^{-30}}{e^{-10}+e^{-30}} = \frac{1}{1+e^{20}} \approx 0.$$
If we use $\gamma=10$, then the attribute weights become
$$w_1 = \frac{e^{-1}}{e^{-1}+e^{-3}} = \frac{1}{1+e^{-2}} \approx 0.88, \qquad w_2 = \frac{e^{-3}}{e^{-1}+e^{-3}} = \frac{1}{1+e^{2}} \approx 0.12.$$
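The effect of $\gamma$ in this example is easy to check numerically. A small sketch (`entropy_weights` is our name for the exponential normalization in Equation (3)):

```python
import math

def entropy_weights(V, gamma):
    # w_j = exp(-V_j / gamma) / sum_s exp(-V_s / gamma)
    e = [math.exp(-v / gamma) for v in V]
    s = sum(e)
    return [x / s for x in e]

V = [10.0, 30.0]
print([round(w, 2) for w in entropy_weights(V, gamma=1.0)])   # -> [1.0, 0.0]
print([round(w, 2) for w in entropy_weights(V, gamma=10.0)])  # -> [0.88, 0.12]
```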
From the above example we see that choosing an appropriate value for the parameter $\gamma$ is not an easy task when the dispersions of the attributes have a large range.
In this paper, we address the issue from a different perspective. Unlike the group feature weighting approach, the approach we employ in this paper involves using the log transformation to transform the distances so that the attribute weights are not dominated by a single attribute with the smallest dispersion. In particular, we present a soft subspace clustering algorithm called the LEKM algorithm (log-transformed entropy weighting $k$-means).
The remaining part of this paper is structured as follows. In Section 2, we give a brief review of the LAC algorithm [9] and the EWKM algorithm [21]. In Section 3, we present the LEKM algorithm in detail. In Section 4, we present numerical experiments to demonstrate the performance of the LEKM algorithm. Section 5 concludes the paper with some remarks.
In this section, we introduce the EWKM algorithm [21] and the LAC algorithm [9], which are soft subspace clustering algorithms using the entropy weighting.
Let $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a dataset containing $n$ points, each of which is described by $d$ attributes, and let $k$ be the number of clusters. The objective function of the EWKM algorithm is defined as
$$F(U,W,Z) = \sum_{l=1}^{k}\left[\sum_{i=1}^{n}\sum_{j=1}^{d} u_{il}\, w_{lj}\,(x_{ij}-z_{lj})^2 + \gamma \sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \tag{4}$$
where $U=(u_{il})_{n\times k}$ is the membership matrix, $W=(w_{lj})_{k\times d}$ is the attribute weight matrix, $Z=\{\mathbf{z}_1,\mathbf{z}_2,\ldots,\mathbf{z}_k\}$ is the set of cluster centers, and $\gamma$ is a positive parameter. The objective function is minimized subject to the constraints
$$\sum_{l=1}^{k} u_{il} = 1, \qquad i=1,2,\ldots,n, \tag{5a}$$
$$u_{il}\in\{0,1\}, \qquad i=1,2,\ldots,n,\; l=1,2,\ldots,k, \tag{5b}$$
$$\sum_{j=1}^{d} w_{lj} = 1, \qquad l=1,2,\ldots,k, \tag{5c}$$
and
$$w_{lj} > 0, \qquad l=1,2,\ldots,k,\; j=1,2,\ldots,d. \tag{5d}$$
Like the $k$-means algorithm, the EWKM algorithm minimizes the objective function iteratively. Given the weight matrix $W$ and the set of cluster centers $Z$, the membership matrix $U$ is updated as
$$u_{il} = \begin{cases} 1, & \text{if } \sum_{j=1}^{d} w_{lj}(x_{ij}-z_{lj})^2 \le \sum_{j=1}^{d} w_{sj}(x_{ij}-z_{sj})^2 \text{ for } 1\le s\le k,\\[4pt] 0, & \text{otherwise}, \end{cases}$$
for $i=1,2,\ldots,n$ and $l=1,2,\ldots,k$. Given the membership matrix $U$ and the set of cluster centers $Z$, the weight matrix $W$ is updated as
$$w_{lj} = \frac{\exp\left(-V_{lj}/\gamma\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\gamma\right)}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where
$$V_{lj} = \sum_{i=1}^{n} u_{il}\,(x_{ij}-z_{lj})^2.$$
Given the membership matrix $U$ and the weight matrix $W$, the cluster centers are updated as
$$z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\, x_{ij}}{\sum_{i=1}^{n} u_{il}}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$.
The parameter $\gamma$ controls the distribution of the attribute weights: the larger the value of $\gamma$, the more uniform the attribute weights.
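The three update steps of the EWKM algorithm can be sketched as follows. This is our simplified reconstruction from the formulas above (not the authors' Java implementation), using a toy dataset for illustration:

```python
import math

def ewkm_step(X, W, Z, gamma):
    """One EWKM iteration in the order of the text: update the memberships U
    given (W, Z), then the weights W given (U, Z), then the centers Z given (U, W)."""
    n, d, k = len(X), len(X[0]), len(Z)
    # memberships: each point joins the cluster with the smallest weighted distance
    U = []
    for i in range(n):
        dists = [sum(W[l][j] * (X[i][j] - Z[l][j]) ** 2 for j in range(d))
                 for l in range(k)]
        best = dists.index(min(dists))
        U.append([1 if l == best else 0 for l in range(k)])
    # weights: exponential normalization of the dispersions V_lj
    for l in range(k):
        V = [sum(U[i][l] * (X[i][j] - Z[l][j]) ** 2 for i in range(n))
             for j in range(d)]
        e = [math.exp(-v / gamma) for v in V]
        W[l] = [x / sum(e) for x in e]
    # centers: plain means of the member points
    for l in range(k):
        size = sum(U[i][l] for i in range(n))
        if size > 0:
            Z[l] = [sum(U[i][l] * X[i][j] for i in range(n)) / size
                    for j in range(d)]
    return U, W, Z

# toy usage: two well-separated pairs of points
X = [[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]]
U, W, Z = ewkm_step(X, W=[[0.5, 0.5], [0.5, 0.5]],
                    Z=[[0.0, 0.5], [10.0, 0.5]], gamma=1.0)
```

In this toy run, each cluster's first attribute has zero dispersion and therefore receives the larger weight.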
The LAC algorithm (Locally Adaptive Clustering) [9] and the EWKM algorithm are similar soft subspace clustering algorithms in that both discover subspace clusters via exponential weighting of attributes. However, the LAC algorithm differs from the EWKM algorithm in the definition of the objective function. Clusters found by the LAC algorithm are referred to as weighted clusters. The objective function of the LAC algorithm is defined as
$$E(C,Z,W) = \sum_{l=1}^{k}\sum_{j=1}^{d}\left(w_{lj}\,\frac{1}{|C_l|}\sum_{\mathbf{x}\in C_l}(x_j - z_{lj})^2 + h\, w_{lj}\log w_{lj}\right), \tag{6}$$
where $C=\{C_1,C_2,\ldots,C_k\}$ denotes the set of clusters, $|C_l|$ denotes the number of points in cluster $C_l$, and $h$ is a positive parameter.
Like the EWKM algorithm, the LAC algorithm minimizes its objective function iteratively. Given the weight matrix $W$ and the set of cluster centers $Z$, the clusters are updated as
$$S_l = \left\{\mathbf{x} : \sum_{j=1}^{d} w_{lj}(x_j - z_{lj})^2 < \sum_{j=1}^{d} w_{sj}(x_j - z_{sj})^2,\ \forall\, s\ne l\right\} \tag{7}$$
for $l=1,2,\ldots,k$. Given the set of clusters, the weight matrix is updated as
$$w_{lj} = \frac{\exp\left(-V_{lj}/h\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/h\right)} \tag{8}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where
$$V_{lj} = \frac{1}{|S_l|}\sum_{\mathbf{x}\in S_l}(x_j - z_{lj})^2.$$
Given the set of clusters, the cluster centers are updated as
$$z_{lj} = \frac{1}{|S_l|}\sum_{\mathbf{x}\in S_l} x_j \tag{9}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$.
Comparing Equation (6) with Equation (4), we see that the distances in the objective function of the LAC algorithm are normalized by the sizes of the corresponding clusters. As a result, the dispersions (i.e., the quantities $V_{lj}$) used by the LAC algorithm do not grow with the cluster sizes, which makes the LAC algorithm less sensitive to its parameter than the EWKM algorithm.
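The effect of this size normalization can be checked numerically. In the sketch below (assumed toy values, not data from the paper), the same per-point squared deviations produce nearly degenerate weights when summed over a cluster of 100 points (EWKM-style) but balanced weights when averaged (LAC-style):

```python
import math

def exp_norm(V, scale):
    # exponential normalization used by both Equation (3) and Equation (8)
    e = [math.exp(-v / scale) for v in V]
    return [x / sum(e) for x in e]

n = 100                      # cluster size
per_point = [0.1, 0.3]       # mean squared deviation per attribute
ewkm_V = [n * v for v in per_point]  # summed dispersions: [10, 30]
lac_V = per_point                    # size-normalized dispersions
print(exp_norm(ewkm_V, 1.0))  # first attribute gets essentially all the weight
print(exp_norm(lac_V, 1.0))   # weights stay balanced, roughly 0.55 / 0.45
```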
In this section, we present the LEKM algorithm. The LEKM algorithm is similar to the EWKM algorithm [21] and the LAC algorithm [9] in that the entropy weighting is used to determine the attribute weights.
Let $X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ be a dataset containing $n$ points, each of which is described by $d$ attributes, and let $k$ be the number of clusters. The objective function of the LEKM algorithm is defined as
$$\begin{aligned} P(U,W,Z) &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\sum_{j=1}^{d} w_{lj}\ln\left[1+(x_{ij}-z_{lj})^2\right] + \lambda\sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\sum_{j=1}^{d} w_{lj}\ln w_{lj} \\ &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\left[\sum_{j=1}^{d} w_{lj}\ln\left[1+(x_{ij}-z_{lj})^2\right] + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right], \end{aligned} \tag{10}$$
where the membership matrix $U$, the weight matrix $W$, and the set of cluster centers $Z$ are subject to the same constraints (5a)–(5d) as in the EWKM algorithm, and $\lambda$ is a positive parameter. The log transformation reduces the influence of points that are far away from the cluster centers.
Similar to the EWKM algorithm, the LEKM algorithm tries to minimize the objective function given in Equation (10) iteratively by finding the optimal value of one of $U$, $W$ and $Z$ while keeping the other two fixed.
Theorem 3.1. Let the weight matrix $W$ and the set of cluster centers $Z$ be fixed. Then the objective function $P(U,W,Z)$ is minimized if
$$u_{il} = \begin{cases} 1, & \text{if } D(\mathbf{x}_i,\mathbf{z}_l) \le D(\mathbf{x}_i,\mathbf{z}_s) \text{ for all } s=1,2,\ldots,k;\\[4pt] 0, & \text{otherwise}, \end{cases} \tag{11}$$
for $i=1,2,\ldots,n$ and $l=1,2,\ldots,k$, where
$$D(\mathbf{x}_i,\mathbf{z}_s) = \sum_{j=1}^{d} w_{sj}\ln\left[1+(x_{ij}-z_{sj})^2\right] + \lambda\sum_{j=1}^{d} w_{sj}\ln w_{sj}.$$
Proof. Since the memberships of different points are independent of each other, the objective function $P$ is minimized if, for each $i=1,2,\ldots,n$, the function
$$f(u_{i1},u_{i2},\ldots,u_{ik}) = \sum_{l=1}^{k} u_{il}\, D(\mathbf{x}_i,\mathbf{z}_l) \tag{12}$$
is minimized. Note that $u_{il}\in\{0,1\}$ and
$$\sum_{l=1}^{k} u_{il} = 1.$$
The function defined in Equation (12) is minimized if Equation (11) holds. This completes the proof.
Theorem 3.2. Let the membership matrix $U$ and the set of cluster centers $Z$ be fixed. Then the objective function $P(U,W,Z)$ is minimized if
$$w_{lj} = \frac{\exp\left(-V_{lj}/\lambda\right)}{\sum_{s=1}^{d}\exp\left(-V_{ls}/\lambda\right)} \tag{13}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where
$$V_{lj} = \frac{\sum_{i=1}^{n} u_{il}\ln\left[1+(x_{ij}-z_{lj})^2\right]}{\sum_{i=1}^{n} u_{il}}.$$
Proof. The weight matrix $W$ that minimizes the objective function $P$ subject to the constraints
$$\sum_{j=1}^{d} w_{lj} = 1, \qquad l=1,2,\ldots,k,$$
is the matrix $W$ that minimizes the Lagrangian with multipliers $\beta_1,\beta_2,\ldots,\beta_k$:
$$\begin{aligned} f(W) &= P(U,W,Z) + \sum_{l=1}^{k}\beta_l\left(\sum_{j=1}^{d} w_{lj} - 1\right)\\ &= \sum_{l=1}^{k}\sum_{i=1}^{n} u_{il}\left[\sum_{j=1}^{d} w_{lj}\ln\left[1+(x_{ij}-z_{lj})^2\right] + \lambda\sum_{j=1}^{d} w_{lj}\ln w_{lj}\right] + \sum_{l=1}^{k}\beta_l\left(\sum_{j=1}^{d} w_{lj} - 1\right). \end{aligned} \tag{14}$$
The weight matrix $W$ that minimizes $f(W)$ satisfies
$$\frac{\partial f(W)}{\partial w_{lj}} = \sum_{i=1}^{n} u_{il}\left(\ln\left[1+(x_{ij}-z_{lj})^2\right] + \lambda\ln w_{lj} + \lambda\right) + \beta_l = 0$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, and
$$\frac{\partial f(W)}{\partial \beta_l} = \sum_{j=1}^{d} w_{lj} - 1 = 0$$
for $l=1,2,\ldots,k$. Solving the above system of equations for the $w_{lj}$ leads to Equation (13). This completes the proof.
From Equation (13) we see that the attribute weights of the LEKM algorithm are computed from dispersions $V_{lj}$ that are averaged over the cluster members and compressed by the log transformation. As a result, a single attribute with a very small dispersion cannot dominate the weights of a cluster.
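The robustness provided by the log transformation can be illustrated with a small sketch (assumed toy values, not data from the paper): one attribute contains an outlying squared distance of 100, which dominates the raw entropy weights but not the log-transformed ones of Equation (13):

```python
import math

def exp_norm(V, lam):
    # exponential normalization as in Equation (13)
    e = [math.exp(-v / lam) for v in V]
    return [x / sum(e) for x in e]

# squared distances of three member points for two attributes
sq = {"a": [0.1, 0.1, 100.0],  # attribute "a" has one outlier
      "b": [0.1, 0.1, 0.1]}
raw = [sum(sq[a]) / 3 for a in ("a", "b")]                      # plain averages
logt = [sum(math.log(1 + s) for s in sq[a]) / 3 for a in ("a", "b")]
print(exp_norm(raw, 1.0))   # outlier attribute gets essentially zero weight
print(exp_norm(logt, 1.0))  # both attributes keep a substantial weight
```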
Theorem 3.3. Let the membership matrix $U$ and the weight matrix $W$ be fixed. Then the objective function $P(U,W,Z)$ is minimized if
$$z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z_{lj})^2\right]^{-1} x_{ij}}{\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z_{lj})^2\right]^{-1}} \tag{15}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$.
Proof. If the set of cluster centers $Z$ minimizes the objective function $P$, then for each $l$ and $j$ we have
$$\frac{\partial P}{\partial z_{lj}} = w_{lj}\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z_{lj})^2\right]^{-1}\left[-2(x_{ij}-z_{lj})\right] = 0.$$
Since $w_{lj}>0$, we have
$$\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z_{lj})^2\right]^{-1}\left[-2(x_{ij}-z_{lj})\right] = 0,$$
from which Equation (15) follows.
In the standard $k$-means algorithm, the cluster centers can be computed explicitly as the means of the member points. In the LEKM algorithm, however, Equation (15) defines $z_{lj}$ only implicitly, because $z_{lj}$ appears on both sides. We therefore approximate the updated centers by plugging the centers $z^{*}_{lj}$ from the previous iteration into the right-hand side:
$$z_{lj} = \frac{\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z^{*}_{lj})^2\right]^{-1} x_{ij}}{\sum_{i=1}^{n} u_{il}\left[1+(x_{ij}-z^{*}_{lj})^2\right]^{-1}} \tag{16}$$
for $l=1,2,\ldots,k$ and $j=1,2,\ldots,d$, where $z^{*}_{lj}$ denotes the cluster center from the previous iteration.
To find the optimal values of $U$, $W$ and $Z$, the LEKM algorithm repeats the three updates alternately until the value of the objective function no longer changes. The pseudocode of the LEKM algorithm is given in Algorithm 1.
*Algorithm 1. The pseudocode of the LEKM algorithm.*
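A compact sketch of the LEKM iteration follows. It is our reconstruction from Equations (11), (13) and (16); the initialization, stopping rule and default values are simplified assumptions, not the authors' implementation:

```python
import math, random

def lekm(X, k, lam=1.0, max_iter=100, tol=1e-6, seed=0, init=None):
    """Sketch of the LEKM iteration based on Equations (11), (13) and (16)."""
    n, d = len(X), len(X[0])
    rng = random.Random(seed)
    Z = [list(z) for z in (init if init is not None else rng.sample(X, k))]
    W = [[1.0 / d] * d for _ in range(k)]  # uniform initial weights
    prev = float("inf")
    for _ in range(max_iter):
        # Equation (11): assign each point to the cluster minimizing D(x_i, z_l)
        U = []
        for x in X:
            dist = [sum(W[l][j] * math.log(1 + (x[j] - Z[l][j]) ** 2)
                        for j in range(d))
                    + lam * sum(W[l][j] * math.log(W[l][j]) for j in range(d))
                    for l in range(k)]
            best = dist.index(min(dist))
            U.append([1 if l == best else 0 for l in range(k)])
        # Equation (13): entropy weights from averaged log-transformed distances
        for l in range(k):
            size = max(1, sum(U[i][l] for i in range(n)))
            V = [sum(U[i][l] * math.log(1 + (X[i][j] - Z[l][j]) ** 2)
                     for i in range(n)) / size for j in range(d)]
            e = [math.exp(-v / lam) for v in V]
            W[l] = [x / sum(e) for x in e]
        # Equation (16): new centers computed from the previous centers z*
        for l in range(k):
            num = [sum(U[i][l] * X[i][j] / (1 + (X[i][j] - Z[l][j]) ** 2)
                       for i in range(n)) for j in range(d)]
            den = [sum(U[i][l] / (1 + (X[i][j] - Z[l][j]) ** 2)
                       for i in range(n)) for j in range(d)]
            Z[l] = [num[j] / den[j] if den[j] > 0 else Z[l][j]
                    for j in range(d)]
        # stop when the objective function (10) no longer decreases
        obj = sum(U[i][l]
                  * (sum(W[l][j] * math.log(1 + (X[i][j] - Z[l][j]) ** 2)
                         for j in range(d))
                     + lam * sum(W[l][j] * math.log(W[l][j]) for j in range(d)))
                  for i in range(n) for l in range(k))
        if abs(prev - obj) < tol:
            break
        prev = obj
    return U, W, Z
```

With fixed initial centers placed near two well-separated groups, the sketch recovers the groups; with random initialization the result depends on the starting centers, as discussed in the conclusions.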
The LEKM algorithm requires four parameters: the number of clusters $k$, the parameter $\lambda$, the maximum number of iterations, and the convergence tolerance. The default values are shown in Table 1.
Parameter | Default Value
$\lambda$ | 1
Maximum number of iterations | 100
In this section, we present numerical experiments based on both synthetic data and real data to demonstrate the performance of the LEKM algorithm. We also compare the LEKM algorithm with the EWKM algorithm and the LAC algorithm in terms of accuracy and runtime. We implemented all three algorithms in Java and used the same convergence criterion as shown in Algorithm 1.
In our experiments, we use the corrected Rand index [8,13] to measure the accuracy of clustering results. The corrected Rand index is calculated from two partitions of the same dataset and its value ranges from -1 to 1, with 1 indicating perfect agreement between the two partitions and 0 indicating agreement by chance. In general, the higher the corrected Rand index, the better the clustering result.
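The corrected Rand index can be computed from the contingency table of the two partitions. The following is a self-contained sketch of the standard formula (our code, not the implementation used in the experiments):

```python
from math import comb
from collections import Counter

def corrected_rand(labels_a, labels_b):
    """Corrected Rand index of two partitions of the same dataset."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))  # contingency table entries n_ij
    a = Counter(labels_a)                     # row sums
    b = Counter(labels_b)                     # column sums
    index = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate partitions
        return 1.0
    return (index - expected) / (max_index - expected)

print(corrected_rand([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0 (same partition, relabeled)
```

Relabeling the clusters does not change the index, which is why it is suitable for comparing clustering results with true labels.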
Since all three algorithms are $k$-means-type algorithms, their clustering results depend on the initial cluster centers. In our experiments, we therefore ran each algorithm 100 times with randomly selected initial cluster centers and report averages over the 100 runs.
To test the performance of the LEKM algorithm, we generated two synthetic datasets. The first synthetic dataset is a 2-dimensional dataset with two clusters and is shown in Figure 1. From the figure we see that the top cluster is compact, while the bottom cluster contains several points that are far away from the cluster center. We can therefore consider this dataset a dataset containing noise.
Table 2 shows the average corrected Rand index of 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the LEKM algorithm produced more accurate results than the LAC algorithm and the EWKM algorithm, and the EWKM algorithm produced the least accurate results. Since the dispersion of an attribute in a cluster is normalized by the size of the cluster in the LAC and LEKM algorithms, these two algorithms are less sensitive to the parameter than the EWKM algorithm.
Parameter | EWKM | LAC | LEKM |
1 | 0.0351 (0.0582) | 0.0024 (0.0158) | 0.9154 (0.2704) |
2 | 0.0378 (0.0556) | 0.9054 (0.2322) | 0.9063 (0.2827) |
4 | 0.012 (0.031) | 0.8019 (0.2422) | 0.9067 (0.2815) |
8 | -0.0135 (0.0125) | 0.7604 (0.2406) | 0.9072 (0.2799) |
16 | -0.013 (0.0134) | 0.7527 (0.2501) | 0.9072 (0.2799) |
Table 3 shows the confusion matrices produced by the best run of the three algorithms on the first synthetic dataset. We ran the EWKM algorithm, the LAC algorithm, and the LEKM algorithm 100 times on the first synthetic dataset with the parameter set to 2 (i.e., $\gamma=2$, $h=2$ and $\lambda=2$, respectively) and selected the runs with the lowest objective function values.
1 | 2 | 1 | 2 | 1 | 2 | |||||
C2 | 35 | 25 | C2 | 59 | 0 | C2 | 60 | 0 | ||
C1 | 25 | 15 | C1 | 1 | 40 | C1 | 0 | 40 | ||
(a) | (b) | (c) |
Table 4 shows the attribute weights of the two clusters produced by the best runs of the three algorithms. As we can see from the table, the attribute weights produced by the EWKM algorithm are dominated by one attribute. The attribute weights of one cluster produced by the LAC algorithm are also affected by the noise in the cluster. The attribute weights of the clusters produced by the LEKM algorithm seem reasonable: since the two clusters are formed in the full space, approximately equal attribute weights are expected.
Weight | Weight | Weight | ||||||||
C1 | 1 | 3.01E-36 | C1 | 0.8931 | 0.1069 | C1 | 0.5448 | 0.4552 | ||
C2 | 1 | 2.85E-51 | C2 | 0.5057 | 0.4943 | C2 | 0.5055 | 0.4945 | ||
(a) | (b) | (c) |
Table 5 shows the average runtime of the 100 runs of the three algorithms on the first synthetic dataset. From the table we see that the EWKM algorithm converged the fastest. The LAC algorithm and the LEKM algorithm converged in about the same time.
Parameter | EWKM | LAC | LEKM |
1 | 0.0005 (0.0005) | 0.0021 (0.0032) | 0.0016 (0.0009) |
2 | 0.0002 (0.0004) | 0.0018 (0.0026) | 0.0013 (0.0006) |
4 | 0.0002 (0.0004) | 0.0017 (0.0025) | 0.0014 (0.0011) |
8 | 0.0003 (0.0004) | 0.0018 (0.0026) | 0.0016 (0.0017) |
16 | 0.0002 (0.0004) | 0.0018 (0.0025) | 0.0016 (0.002) |
The second synthetic dataset is a 100-dimensional dataset with four clusters. Table 6 shows the sizes and dimensions of the four clusters. This dataset was also used to test the SAP algorithm developed in [13]. Table 7 summarizes the clustering results of the three algorithms. From the table we see that the LEKM algorithm produced the most accurate results when the parameter is small. When the parameter is large, the attribute weights calculated by the LEKM algorithm become approximately the same. Since the clusters are embedded in subspaces, assigning approximately the same weight to attributes prevents the LEKM algorithm from recovering these clusters.
Cluster | Cluster Size | Subspace Dimensions |
A | 500 | 10, 15, 70 |
B | 300 | 20, 30, 80, 85 |
C | 500 | 30, 40, 70, 90, 95 |
D | 700 | 40, 45, 50, 55, 60, 80 |
Parameter | EWKM | LAC | LEKM |
1 | 0.557 (0.1851) | 0.5534 (0.1857) | 0.9123 (0.147) |
2 | 0.557 (0.1851) | 0.5572 (0.1883) | 0.928 (0.1361) |
4 | 0.557 (0.1851) | 0.5658 (0.1902) | 0.6128 (0.1626) |
8 | 0.557 (0.1851) | 0.574 (0.2028) | 0.3197 (0.1247) |
16 | 0.5573 (0.1854) | 0.6631 (0.2532) | 0.2293 (0.0914) |
Table 8 shows the confusion matrices produced by the runs of the three algorithms with the lowest objective function value. From the table we see that only three points were clustered incorrectly by the LEKM algorithm, while many points were clustered incorrectly by the EWKM algorithm and the LAC algorithm. Figures 2, 3 and 4 plot the attribute weights of the four clusters corresponding to the confusion matrices given in Table 8. From Figures 2 and 3 we can see that the attribute weights produced by the EWKM algorithm and the LAC algorithm were dominated by a single attribute. Figure 4 shows that the LEKM algorithm was able to recover all the subspace dimensions correctly.
Table 9 shows the average runtime of 100 runs of the three algorithms on the second synthetic dataset. From the table we see that the LEKM algorithm is slower than the other two algorithms. Since the center calculation of the LEKM algorithm is more complicated than that of the EWKM algorithm and the LAC algorithm, it is expected that the LEKM algorithm is slower than the other two algorithms.
Parameter | EWKM | LAC | LEKM |
1 | 0.7849 (0.4221) | 1.1788 (0.763) | 10.4702 (0.1906) |
2 | 0.7687 (0.4141) | 0.8862 (0.4952) | 10.3953 (0.1704) |
4 | 0.7619 (0.4101) | 0.8412 (0.4721) | 10.5236 (0.2023) |
8 | 0.7567 (0.4074) | 0.8767 (0.4816) | 10.5059 (0.2014) |
16 | 0.7578 (0.4112) | 0.8136 (0.5069) | 10.4122 (0.189) |
In summary, the test results on synthetic datasets have shown that the LEKM algorithm is able to recover clusters from noisy data and to recover clusters embedded in subspaces. The test results also show that the LEKM algorithm is less sensitive to noise and parameter values than the EWKM algorithm and the LAC algorithm. However, the LEKM algorithm is in general slower than the other two algorithms due to its more complex center calculation.
To test the algorithms on real data, we obtained two cancer gene expression datasets from [8]1. The first dataset contains gene expression data of human liver cancers and the second dataset contains gene expression data of breast tumors and colon tumors. Table 10 summarizes the two real datasets, which have known labels indicating the type of each sample. The two datasets were also used to test the SAP algorithm in [13].
Dataset | Samples | Dimensions | Cluster sizes |
Chen-2002 | 179 | 85 | 104, 76 |
Chowdary-2006 | 104 | 182 | 62, 42 |
The datasets are available at http://bioinformatics.rutgers.edu/Static/Supplements/CompCancer/datasets.htm
Table 11 and Table 12 summarize the average accuracy and the average runtime of 100 runs of the three algorithms on the Chen-2002 dataset, respectively. From the average corrected Rand index shown in Table 11 we see that the LEKM algorithm produced more accurate results than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.
Parameter | EWKM | LAC | LEKM |
1 | 0.025 (0.0395) | 0.0042 (0.0617) | 0.2599 (0.2973) |
2 | 0.0203 (0.0343) | 0.0888 (0.1903) | 0.2563 (0.2868) |
4 | 0.0135 (0.0279) | 0.041 (0.1454) | 0.2743 (0.2972) |
8 | 0.0141 (0.0449) | 0.0484 (0.1761) | 0.2856 (0.2993) |
16 | 0.0002 (0.0416) | 0.0445 (0.1726) | 0.2789 (0.2984) |
Parameter | EWKM | LAC | LEKM |
1 | 0.0111 (0.0031) | 0.0162 (0.0083) | 0.102 (0.0297) |
2 | 0.0123 (0.0033) | 0.0124 (0.006) | 0.1035 (0.0286) |
4 | 0.0143 (0.006) | 0.0151 (0.0105) | 0.1046 (0.0316) |
8 | 0.0122 (0.0043) | 0.0137 (0.0089) | 0.1068 (0.0337) |
16 | 0.0144 (0.007) | 0.014 (0.0091) | 0.105 (0.0323) |
The average accuracy and runtime of 100 runs of the three algorithms on the Chowdary-2006 dataset are shown in Table 13 and Table 14, respectively. From Table 13 we see that the LEKM algorithm again produced more accurate clustering results than the other two algorithms did. When the parameter was set to 1, the LAC algorithm produced better results than the EWKM algorithm did; in the other cases, however, the EWKM algorithm produced better results than the LAC algorithm did. As shown in Table 14, the LAC algorithm and the EWKM algorithm are much faster than the LEKM algorithm.
Parameter | EWKM | LAC | LEKM |
1 | 0.3952 (0.3943) | 0.5197 (0.2883) | 0.5826 (0.3199) |
2 | 0.3819 (0.3825) | 0.19 (0.2568) | 0.5757 (0.3261) |
4 | 0.3839 (0.3677) | 0.0772 (0.1016) | 0.5823 (0.3221) |
8 | 0.4188 (0.3584) | 0.0595 (0.0224) | 0.5756 (0.3383) |
16 | 0.4994 (0.3927) | 0.0625 (0.0184) | 0.582 (0.3363) |
Parameter | EWKM | LAC | LEKM |
1 | 0.0115 (0.0048) | 0.0109 (0.0042) | 0.1369 (0.0756) |
2 | 0.011 (0.0046) | 0.0156 (0.0093) | 0.1446 (0.0723) |
4 | 0.0103 (0.0042) | 0.0147 (0.0076) | 0.1514 (0.0805) |
8 | 0.0107 (0.005) | 0.0141 (0.0063) | 0.1524 (0.0769) |
16 | 0.0113 (0.0047) | 0.0138 (0.0068) | 0.1542 (0.0854) |
In summary, the test results on real datasets show that the LEKM algorithm produced more accurate clustering results on average than the EWKM algorithm and the LAC algorithm did. However, the LEKM algorithm was slower than the other two algorithms.
The EWKM algorithm [21] and the LAC algorithm [9] are two soft subspace clustering algorithms that are similar to each other. In both algorithms, the attribute weights of a cluster are calculated as exponential normalizations of the negative attribute dispersions in the cluster scaled by a parameter. Setting the parameter is a challenge when the attribute dispersions in a cluster have a large range. In this paper, we proposed the LEKM (log-transformed entropy weighting $k$-means) algorithm, which applies a log transformation to the distances so that the attribute weights are not dominated by a single attribute with the smallest dispersion.
We tested the performance of the LEKM algorithm and compared it with the EWKM algorithm and the LAC algorithm. The test results on both synthetic datasets and real datasets have shown that the LEKM algorithm is able to outperform the EWKM algorithm and the LAC algorithm in terms of accuracy. However, one limitation of the LEKM algorithm is that it is slower than the other two algorithms, because updating the cluster centers in each iteration of the LEKM algorithm is more complicated than in the other two algorithms.
Another limitation of the LEKM algorithm is that it is sensitive to initial cluster centers. This limitation is common to most $k$-means-type algorithms.
The authors would like to thank the referees for their insightful comments, which greatly improved the quality of the paper.
[1] |
R. C. Rockne, J. G. Scott, Introduction to mathematical oncology, JCO Clin. Cancer Inform., 3 (2019), 1–4. https://doi.org/10.1200/CCI.19.00010 doi: 10.1200/CCI.19.00010
![]() |
[2] | R. A. Weinberg, The Biology of Cancer, W.W. Norton & Company, (2006). https://doi.org/10.1201/9780203852569 |
[3] |
T. A. Graham, A. Sottoriva, Measuring cancer evolution from the genome, J. Pathol., 241 (2017), 183–191. https://doi.org/10.1002/path.4821 doi: 10.1002/path.4821
![]() |
[4] |
D. Hanahan, R. A. Weinberg, Hallmarks of cancer: The next generation, Cell, 144 (2011), 646–674. https://doi.org/10.1016/j.cell.2011.02.013 doi: 10.1016/j.cell.2011.02.013
![]() |
[5] |
R. D. Schreiber, L. J. Old, M. J. Smyth, Cancer immunoediting: Integrating immunity's roles in cancer suppression and promotion, Science, 331 (2011), 1565–1570. https://doi.org/10.1126/science.1203486 doi: 10.1126/science.1203486
![]() |
[6] |
Y. Zhang, Z. Zhang, The history and advances in cancer immunotherapy: Understanding the characteristics of tumor-infiltrating immune cells and their therapeutic implications, Cell. Mol. Immunol., 17 (2020), 807–821. https://doi.org/10.1038/s41423-020-0488-6 doi: 10.1038/s41423-020-0488-6
![]() |
[7] |
A. Rounds, J. Kolesar, Nivolumab for second-line treatment of metastatic squamous non-small-cell lung cancer, Am. J. Health-Syst. Pharm., 72 (2015), 1851–1855. https://doi.org/10.2146/ajhp150235 doi: 10.2146/ajhp150235
![]() |
[8] |
G. M. Keating, Nivolumab: A review in advanced nonsquamous non-small cell lung cancer, Drugs, 76 (2016), 969–978. https://doi.org/10.1007/s40265-016-0589-9 doi: 10.1007/s40265-016-0589-9
![]() |
[9] |
Y. Iwai, J. Hamanishi, K. Chamoto, T. Honjo, Cancer immunotherapies targeting the PD-1 signaling pathway, J. Biomed. Sci., 24 (2017), 26. https://doi.org/10.1186/s12929-017-0329-9 doi: 10.1186/s12929-017-0329-9
![]() |
[10] |
N. Ghaffari Laleh, C. M. L. Loeffler, J. Grajek, K. Staňková, A. T. Pearson, H. S. Muti, et al., Classical mathematical models for prediction of response to chemotherapy and immunotherapy, PLOS Comput. Biol., 18 (2022), 1–18. https://doi.org/10.1371/journal.pcbi.1009822 doi: 10.1371/journal.pcbi.1009822
![]() |
[11] |
I. Ezhov, K. Scibilia, K. Franitza, F. Steinbauer, S. Shit, L. Zimmer, et al., Learn-Morph-Infer: A new way of solving the inverse problem for brain tumor modeling, Med. Image Anal., 83 (2023), 102672. https://doi.org/10.1016/j.media.2022.102672 doi: 10.1016/j.media.2022.102672
![]() |
[12] |
A. K. Laird, Dynamics of tumour growth: Comparison of growth rates and extrapolation of growth curve to one cell, Br. J. Cancer, 19 (1965), 278–291. https://doi.org/10.1038/bjc.1965.32 doi: 10.1038/bjc.1965.32
![]() |
[13] | L. Norton, A Gompertzian model of human breast cancer growth, Cancer Res., 48 (1988), 7067–7071. |
[14] |
S. Benzekry, C. Lamont, A. Beheshti, A. Tracz, J. M. L. Ebos, L. Hlatky, et al., Classical mathematical models for description and prediction of experimental tumor growth, PLoS Comput. Biol., 10 (2014), e1003800. https://doi.org/10.1371/journal.pcbi.1003800 doi: 10.1371/journal.pcbi.1003800
![]() |
[15] |
M. Bilous, C. Serdjebi, A. Boyer, P. Tomasini, C. Pouypoudat, D. Barbolosi, et al., Quantitative mathematical modeling of clinical brain metastasis dynamics in non-small cell lung cancer, Sci. Rep., 9 (2019), 13018. https://doi.org/10.1038/s41598-019-49407-3 doi: 10.1038/s41598-019-49407-3
![]() |
[16] |
P. Schlicke, C. Kuttler, C. Schumann, How mathematical modeling could contribute to the quantification of metastatic tumor burden under therapy: Insights in immunotherapeutic treatment of non-small cell lung cancer, Theor. Biol. Med. Model., 18 (2021), 1–15. https://doi.org/10.1186/s12976-021-00142-1 doi: 10.1186/s12976-021-00142-1
![]() |
[17] |
S. Benzekry, C. Sentis, C. Coze, L. Tessonnier, N. André, Development and validation of a prediction model of overall survival in high-risk neuroblastoma using mechanistic modeling of metastasis, JCO Clin. Cancer Inf., 5 (2021), 81–90. https://doi.org/10.1200/CCI.20.00092 doi: 10.1200/CCI.20.00092
![]() |
[18] | S. Benzekry, P. Schlicke, P. Tomasini, E. Simon, Mechanistic modeling of brain metastases in NSCLC provides computational markers for personalized prediction of outcome, medRxiv preprint, 2023. https://doi.org/10.1101/2023.01.10.23284189 |
[19] |
F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, A. Jemal, Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries, CA. Cancer J. Clin., 68 (2018), 394–424. https://doi.org/10.3322/caac.21492 doi: 10.3322/caac.21492
![]() |
[20] | C. Zappa, S. A. Mousa, Non-small cell lung cancer: Current treatment and future advances, Transl. Lung Cancer Res., 5 (2016). https://doi.org/10.21037/tlcr.2016.06.07 |
[21] |
W. D. Travis, E. Brambilla, A. G. Nicholson, Y. Yatabe, J. H. Austin, M. B. Beasley, et al., The 2015 world health organization classification of lung tumors: Impact of genetic, clinical and radiologic advances since the 2004 classification, J. Thorac. Oncol., 10 (2015), 1243–1260. https://doi.org/10.1097/JTO.0000000000000630 doi: 10.1097/JTO.0000000000000630
![]() |
[22] | J. S. Lowengrub, H. B. Frieboes, F. Jin, Y. L. Chuang, X. Li, P. Macklin, et al., Nonlinear modelling of cancer: Bridging the gap between cells and tumours, Nonlinearity, 23 (2009). https://doi.org/10.1088/0951-7715/23/1/r01 |
[23] | V. Cristini, J. Lowengrub, Multiscale Modeling of Cancer: An Integrated Experimental and Mathematical Modeling Approach, Cambridge University Press, (2010). https://doi.org/10.1017/cbo9780511781452 |
[24] |
O. Clatz, M. Sermesant, P. Y. Bondiau, H. Delingette, S. K. Warfield, G. Malandain, et al., Realistic simulation of the 3-D growth of brain tumors in MR images coupling diffusion with biomechanical deformation, IEEE Trans. Med. Imaging, 24 (2005), 1334–1346. https://doi.org/10.1109/tmi.2005.857217 doi: 10.1109/tmi.2005.857217
![]() |
[25] |
S. Subramanian, A. Gholami, G. Biros, Simulation of glioblastoma growth using a 3D multispecies tumor model with mass effect, J. Math. Biol., 79 (2019), 941–967. https://doi.org/10.1007/s00285-019-01383-y doi: 10.1007/s00285-019-01383-y
![]() |
[26] |
H. J. Bowers, E. E. Fannin, A. Thomas, J. A. Weis, Characterization of multicellular breast tumor spheroids using image data-driven biophysical mathematical modeling, Sci. Rep., 10 (2020), 1–12. https://doi.org/10.1038/s41598-020-68324-4 doi: 10.1038/s41598-020-68324-4
![]() |
[27] |
P. Friedl, D. Gilmour, Collective cell migration in morphogenesis, regeneration and cancer, Nat. Rev. Mol. Cell Biol., 10 (2009), 445–457. https://doi.org/10.1038/nrm2720 doi: 10.1038/nrm2720
![]() |
[28] |
H. Garcke, K. F. Lam, E. Sitka, V. Styles, A Cahn–Hilliard–Darcy model for tumour growth with chemotaxis and active transport, Math. Models Method Appl. Sci., 26 (2016), 1095–1148. https://doi.org/10.1142/s0218202516500263 doi: 10.1142/s0218202516500263
![]() |
[29] |
S. Frigeri, M. Grasselli, E. Rocca, On a diffuse interface model of tumour growth, Eur. J. Appl. Math., 26 (2015), 215–243. https://doi.org/10.1017/s0956792514000436 doi: 10.1017/s0956792514000436
![]() |
[30] |
H. G. Lee, Y. Kim, J. Kim, Mathematical model and its fast numerical method for the tumor growth, Math. Biosci. Eng., 12 (2015), 1173–1187. https://doi.org/10.3934/mbe.2015.12.1173 doi: 10.3934/mbe.2015.12.1173
![]() |
[31] |
M. Ebenbeck, H. Garcke, Analysis of a Cahn–Hilliard–Brinkman model for tumour growth with chemotaxis, J. Differ. Equations, 266 (2019), 5998–6036. https://doi.org/10.1016/j.jde.2018.10.045 doi: 10.1016/j.jde.2018.10.045
![]() |
[32] |
M. Ebenbeck, H. Garcke, On a Cahn–Hilliard–Brinkman model for tumor growth and its singular limits, SIAM J. Math. Anal., 51 (2019), 1868–1912. https://doi.org/10.1137/18m1228104 doi: 10.1137/18m1228104
![]() |
[33] |
M. Fritz, E. Lima, J. T. Oden, B. Wohlmuth, On the unsteady Darcy–Forchheimer–Brinkman equation in local and nonlocal tumor growth models, Math. Models Method Appl. Sci., 29 (2019), 1691–1731. https://doi.org/10.1142/S0218202519500325 doi: 10.1142/S0218202519500325
![]() |
[34] |
K. F. Lam, H. Wu, Thermodynamically consistent Navier–Stokes–Cahn–Hilliard models with mass transfer and chemotaxis, Eur. J. Appl. Math., 29 (2018), 595–644. https://doi.org/10.1017/s0956792517000298 doi: 10.1017/s0956792517000298
![]() |
[35] | G. Lorenzo, A. M. Jarrett, C. T. Meyer, V. Quaranta, D. R. Tyson, T. E. Yankeelov, Identifying mechanisms driving the early response of triple negative breast cancer patients to neoadjuvant chemotherapy using a mechanistic model integrating in vitro and in vivo imaging data, arXiv preprint, (2022), arXiv: 2212.04270. https://doi.org/10.48550/arXiv.2212.04270 |
[36] | H. Garcke, K. F. Lam, R. Nürnberg, E. Sitka, A multiphase Cahn–Hilliard–Darcy model for tumour growth with necrosis, Math. Models Methods Appl. Sci., 28 (2018), 525–577. https://doi.org/10.1142/s0218202518500148
[37] | J. T. Oden, A. Hawkins, S. Prudhomme, General diffuse-interface theories and an approach to predictive tumor growth modeling, Math. Models Methods Appl. Sci., 20 (2010), 477–517. https://doi.org/10.1142/s0218202510004313
[38] | S. M. Wise, J. S. Lowengrub, H. B. Frieboes, V. Cristini, Three-dimensional multispecies nonlinear tumor growth – I: Model and numerical method, J. Theor. Biol., 253 (2008), 524–543. https://doi.org/10.1016/j.jtbi.2008.03.027
[39] | V. Cristini, X. Li, J. S. Lowengrub, S. M. Wise, Nonlinear simulations of solid tumor growth using a mixture model: Invasion and branching, J. Math. Biol., 58 (2009), 723–763. https://doi.org/10.1007/s00285-008-0215-x
[40] | H. B. Frieboes, F. Jin, Y. L. Chuang, S. M. Wise, J. S. Lowengrub, V. Cristini, Three-dimensional multispecies nonlinear tumor growth – II: Tumor invasion and angiogenesis, J. Theor. Biol., 264 (2010), 1254–1278. https://doi.org/10.1016/j.jtbi.2010.02.036
[41] | E. Lima, J. T. Oden, R. Almeida, A hybrid ten-species phase-field model of tumor growth, Math. Models Methods Appl. Sci., 24 (2014), 2569–2599. https://doi.org/10.1142/s0218202514500304
[42] | A. Hawkins-Daarud, K. G. van der Zee, J. T. Oden, Numerical simulation of a thermodynamically consistent four-species tumor growth model, Int. J. Numer. Meth. Biol., 28 (2012), 3–24. https://doi.org/10.1002/cnm.1467
[43] | M. Fritz, P. K. Jha, T. Köppl, J. T. Oden, A. Wagner, B. Wohlmuth, Modeling and simulation of vascular tumors embedded in evolving capillary networks, Comput. Methods Appl. Mech. Eng., 384 (2021), 113975. https://doi.org/10.1016/j.cma.2021.113975
[44] | M. Fritz, P. K. Jha, T. Köppl, J. T. Oden, B. Wohlmuth, Analysis of a new multispecies tumor growth model coupling 3D phase-fields with a 1D vascular network, Nonlinear Anal. Real World Appl., 61 (2021), 103331. https://doi.org/10.1016/j.nonrwa.2021.103331
[45] | G. Lorenzo, M. A. Scott, K. Tew, T. J. Hughes, Y. J. Zhang, L. Liu, et al., Tissue-scale, personalized modeling and simulation of prostate cancer growth, Proc. Natl. Acad. Sci., 113 (2016), E7663–E7671. https://doi.org/10.1073/pnas.1615791113
[46] | G. Song, T. Tian, X. Zhang, A mathematical model of cell-mediated immune response to tumor, Math. Biosci. Eng., 18 (2021), 373–385. https://doi.org/10.3934/mbe.2021020
[47] | D. Kirschner, A. Tsygvintsev, On the global dynamics of a model for tumor immunotherapy, Math. Biosci. Eng., 6 (2009), 573–583. https://doi.org/10.3934/mbe.2009.6.573
[48] | K. R. Fister, J. H. Donnelly, Immunotherapy: An optimal control theory approach, Math. Biosci. Eng., 2 (2005), 499–510. https://doi.org/10.3934/mbe.2005.2.499
[49] | A. Soboleva, A. Kaznatcheev, R. Cavill, K. Schneider, K. Stankova, Polymorphic gompertzian model of cancer validated with in vitro and in vivo data, bioRxiv preprint, 2023. https://doi.org/10.1101/2023.04.19.537467 |
[50] | G. G. Powathil, D. J. Adamson, M. A. Chaplain, Towards predicting the response of a solid tumour to chemotherapy and radiotherapy treatments: Clinical insights from a computational model, PLoS Comput. Biol., 9 (2013), 1–14. https://doi.org/10.1371/journal.pcbi.1003120
[51] | C. Wu, D. A. Hormuth, G. Lorenzo, A. M. Jarrett, F. Pineda, F. M. Howard, et al., Towards patient-specific optimization of neoadjuvant treatment protocols for breast cancer based on image-guided fluid dynamics, IEEE Trans. Biomed. Eng., 69 (2022), 3334–3344. https://doi.org/10.1109/tbme.2022.3168402
[52] | A. M. Jarrett, D. A. Hormuth, S. L. Barnes, X. Feng, W. Huang, T. E. Yankeelov, Incorporating drug delivery into an imaging-driven, mechanics-coupled reaction diffusion model for predicting the response of breast cancer to neoadjuvant chemotherapy: theory and preliminary clinical results, Phys. Med. Biol., 63 (2018), 105015. https://doi.org/10.1088/1361-6560/aac040
[53] | R. C. Rockne, A. D. Trister, J. Jacobs, A. J. Hawkins-Daarud, M. L. Neal, K. Hendrickson, et al., A patient-specific computational model of hypoxia-modulated radiation resistance in glioblastoma using 18F-FMISO-PET, J. R. Soc. Interface, 12 (2015). https://doi.org/10.1098/rsif.2014.1174 |
[54] | P. Colli, H. Gomez, G. Lorenzo, G. Marinoschi, A. Reali, E. Rocca, Optimal control of cytotoxic and antiangiogenic therapies on prostate cancer growth, Math. Models Methods Appl. Sci., 31 (2021), 1419–1468. https://doi.org/10.1142/s0218202521500299
[55] | M. Fritz, C. Kuttler, M. L. Rajendran, L. Scarabosio, B. Wohlmuth, On a subdiffusive tumour growth model with fractional time derivative, IMA J. Appl. Math., 86 (2021), 688–729. https://doi.org/10.1093/imamat/hxab009
[56] | S. A. Quezada, K. S. Peggs, Exploiting CTLA-4, PD-1 and PD-L1 to reactivate the host immune response against cancer, Br. J. Cancer, 108 (2013), 1560–1565. https://doi.org/10.1038/bjc.2013.117
[57] | A. Ribas, Tumor immunotherapy directed at PD-1, N. Engl. J. Med., 366 (2012), 2517–2519. https://doi.org/10.1056/NEJMe1205943
[58] | D. M. Pardoll, The blockade of immune checkpoints in cancer immunotherapy, Nat. Rev. Cancer, 12 (2012), 252–264. https://doi.org/10.1038/nrc3239
[59] | E. N. Rozali, S. V. Hato, B. W. Robinson, R. A. Lake, W. J. Lesterhuis, Programmed death ligand 2 in cancer-induced immune suppression, Clin. Dev. Immunol., 2012 (2012), 1–8. https://doi.org/10.1155/2012/656340
[60] | S. P. Patel, R. Kurzrock, PD-L1 expression as a predictive biomarker in cancer immunotherapy, Mol. Cancer Ther., 14 (2015), 847–856. https://doi.org/10.1158/1535-7163.MCT-14-0983
[61] | Y. Viossat, R. Noble, A theoretical analysis of tumour containment, Nat. Ecol. Evol., 5 (2021), 826–835. https://doi.org/10.1038/s41559-021-01428-w
[62] | T. Hillen, K. J. Painter, M. Winkler, Convergence of a cancer invasion model to a logistic chemotaxis model, Math. Models Methods Appl. Sci., 23 (2013), 165–198.
[63] | B. Gompertz, On the nature of the function expressive of the law of human mortality, and on the new mode of determining the value of life contingencies, Philos. Trans. R. Soc., 115 (1825), 513–585. https://doi.org/10.1098/rstl.1825.0026
[64] | K. Erbertseder, J. Reichold, B. Flemisch, P. Jenny, R. Helmig, A coupled discrete/continuum model for describing cancer-therapeutic transport in the lung, PLOS ONE, 7 (2012), 1–17. https://doi.org/10.1371/journal.pone.0031966
[65] | H. Garcke, K. F. Lam, Well-posedness of a Cahn–Hilliard system modelling tumour growth with chemotaxis and active transport, Eur. J. Appl. Math., 28 (2017), 284–316. https://doi.org/10.1017/S0956792516000292
[66] | H. Garcke, K. F. Lam, Analysis of a Cahn–Hilliard system with non-zero Dirichlet conditions modeling tumor growth with chemotaxis, Discrete Contin. Dyn. Syst. Ser. A, 37 (2017), 4277–4308. https://doi.org/10.3934/dcds.2017183
[67] | P. Colli, H. Gomez, G. Lorenzo, G. Marinoschi, A. Reali, E. Rocca, Optimal control of cytotoxic and antiangiogenic therapies on prostate cancer growth, Math. Models Methods Appl. Sci., 31 (2021), 1419–1468. https://doi.org/10.1142/s0218202521500299
[68] | D. J. Eyre, Unconditionally gradient stable time marching the Cahn–Hilliard equation, MRS Online Proc. Lib., 529 (1998), 39–46. https://doi.org/10.1557/proc-529-39
[69] | S. C. Brenner, A. E. Diegel, L. Y. Sung, A robust solver for a mixed finite element method for the Cahn–Hilliard equation, J. Sci. Comput., 77 (2018), 1234–1249. https://doi.org/10.1007/s10915-018-0753-3
[70] | S. Balay, S. Abhyankar, M. F. Adams, S. Benson, J. Brown, P. Brune, et al., PETSc/TAO Users Manual, Technical Report ANL-21/39 - Revision 3.18, Argonne National Laboratory, 2022.
[71] | A. Logg, K. A. Mardal, G. Wells, Automated Solution of Differential Equations by the Finite Element Method: The FEniCS Book, Springer Science & Business Media, (2012). https://doi.org/10.1007/978-3-642-23099-8 |
[72] | R. Kikinis, S. D. Pieper, K. G. Vosburgh, 3D Slicer: A platform for subject-specific image analysis, visualization, and clinical support, in Intraoperative Imaging and Image-guided Therapy, Springer, (2013), 277–289. https://doi.org/10.1007/978-1-4614-7657-3_19 |
[73] | Blender, Accessed: 2022-12-02. Available from: https://www.blender.org/. |
[74] | C. Geuzaine, J. F. Remacle, Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities, Int. J. Numer. Meth. Eng., 79 (2009), 1309–1331. https://doi.org/10.1002/nme.2579
[75] | The Vascular Modeling Toolkit website, Accessed: 2022-12-02. Available from: www.vmtk.org.
[76] | L. Antiga, M. Piccinelli, L. Botti, B. Ene-Iordache, A. Remuzzi, D. A. Steinman, An image-based modeling framework for patient-specific computational hemodynamics, Med. Biol. Eng. Comput., 46 (2008), 1097–1112. https://doi.org/10.1007/s11517-008-0420-1
[77] | E. Eisenhauer, P. Therasse, J. Bogaerts, L. Schwartz, D. Sargent, R. Ford, et al., New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1), Eur. J. Cancer, 45 (2009), 228–247. https://doi.org/10.1016/j.ejca.2008.10.026
[78] | L. Hanin, J. Rose, Suppression of metastasis by primary tumor and acceleration of metastasis following primary tumor resection: A natural law, Bull. Math. Biol., 80 (2018), 519–539. https://doi.org/10.1007/s11538-017-0388-9