Review

Modeling free tumor growth: Discrete, continuum, and hybrid approaches to interpreting cancer development


  • Tumor growth dynamics serve as a critical aspect of understanding cancer progression and treatment response to mitigate one of the most pressing challenges in healthcare. The in silico approach to understanding tumor behavior computationally provides an efficient, cost-effective alternative to wet-lab examinations and is adaptable to different environmental conditions, time scales, and unique patient parameters. As a result, this paper explored the modeling of free tumor growth in cancer, surveying contemporary literature on continuum, discrete, and hybrid approaches. Factors like predictive power and high-resolution simulation were weighed against drawbacks like simulation load and parameter feasibility in these models. Understanding tumor behavior in different scenarios and contexts is the first step in advancing cancer research and improving clinical outcomes.

    Citation: Dashmi Singh, Dana Paquin. Modeling free tumor growth: Discrete, continuum, and hybrid approaches to interpreting cancer development[J]. Mathematical Biosciences and Engineering, 2024, 21(7): 6659-6693. doi: 10.3934/mbe.2024292




    Thoracic diseases are diverse and imply complex relationships. For example, extensive clinical experience [1,2] has demonstrated that pulmonary atelectasis and effusion often lead to infiltrate development, and pulmonary edema often leads to cardiac hypertrophy. This strong correlation between pathologies, known as label co-occurrence, is a common phenomenon in clinical diagnosis and is not coincidental [3], as shown in Figure 1. Radiologists need to look at the lesion area at the time of diagnosis while integrating the pathologic relationships to arrive at the most likely diagnosis. Therefore, diagnosing a massive number of Chest X-ray (CXR) images is a time-consuming and laborious reasoning task for radiologists. This has inspired researchers to utilize deep learning techniques to automatically analyze CXR images and reduce the workloads of radiologists. Multiple abnormalities may be present simultaneously in a single CXR image, making the clinical chest radiograph examination a classic multi-label classification problem. Multi-label classification means that a sample can belong to multiple categories (or labels) and that different categories are related. Relationships between pathology labels are expressed differently in different data modalities. As Figure 1 shows, pathology regions appearing simultaneously in the image reflect label relationships as features. In the word embedding of pathology labels, the label relationship is implicit in the semantic information of each label. In recent years, several advanced deep learning methods have been developed to solve this task [4,5,6,7,8,9]. According to our survey, the existing methods are divided into two classes: 1) label-independent learning methods and 2) label-correlation learning methods.

    Figure 1.  Illustration of pathology relationships and alignment problems in different data modalities. Left: the pathology correlations within each modality. Right: we align the representation of each pathology across modalities. The arrows in the figure indicate directed relationships: "Pathology A → Pathology B" means that when Pathology A appears, Pathology B is likely to have occurred as well, but the converse does not necessarily hold.

    The label-independent learning method transforms the multi-label CXR recognition task into multiple independent, nonintersecting binary recognition tasks. The primary process is to train a separate binary classifier for each label on the sample to be tested. Early on, some researchers [2,10,11,12] applied convolutional neural networks and their variants to this task with some success, designing elaborate network structures to improve recognition accuracy. Despite their efforts and breakthroughs in this field, several limitations remain. Because this label-independent learning method treats each label as an independent learning object, training results are susceptible to missing sample labels and sample mislabeling. Additionally, this class of methods uses only the sample image as the carrier of the learning object, and the image alone is a limited, single-modal representation of labeling relationships. These methods do not consider interlabel correlations and ignore the representation of labeling relationships in other data modalities.

    Clinical experience has shown that some abnormalities in CXR images may be strongly correlated. The literature [3] suggests that this is not a coincidence but rather a labeling relationship that can be called co-occurrence. The literature [1] found that edema in the lungs tends to trigger cardiomegaly, and the literature [2] indicates that lung infiltrates are often associated with pulmonary atelectasis and effusion. This labeling relationship motivates the application of deep learning techniques to the CXR recognition task. In addition, such interdependence can be used to infer missing or noisy labels from co-occurrence relationships, which improves the robustness of the model and its recognition performance.

    Existing label-correlation learning methods are mainly categorized into two types: image-based unimodal learning methods and methods that additionally consider textual modal data while learning from images. The most common technique in image-based unimodal learning methods is attention guidance. These attention-guided methods [13,14,15] focus on the most discriminative lesion-area features in each sample CXR image. They capture the interdependence between labels and lesion regions implicitly, i.e., by designing attention models with different mechanisms to establish the correlation between lesion regions and the whole image. However, these methods only establish label correlations locally on the imaging modality, ignoring the global label co-occurrence relationship. Methods that also consider textual modal data when learning from images can be categorized as Recurrent Neural Network (RNN)-based and Graph Convolutional Network (GCN)-based. RNN-based methods [1,16,17] rely on state variables to encode label-related information and use the RNN as a decoder to predict anomaly sequences in sample images. However, this approach often requires complex computations. In addition, some researchers [18,19] extract valuable textual embedding information from radiology reports to assist classification. In contrast, GCN-based methods [6,20,21,22] represent label-correlation information, such as label co-occurrence, as undirected graph data. These methods treat each label as a graph node and use the semantic word embeddings of labels as node features. However, while the above methods learn label relations in additional modalities, they ignore the alignment between the label-relation representations of different modalities, as shown on the right side of Figure 1. Moreover, these graph-based methods of modeling pathological relationships ignore directed information, i.e., it is difficult to represent all pathological relationships accurately in an undirected graph.

    In this paper, we propose a multi-label CXR classification model called MRChexNet that integrally learns pathology information in different modalities and models interpathology correlations more comprehensively. It consists of a representation learning module (RLM), a multi-modal bridge module (MBM), and a pathology graph learning module (PGL). In the RLM, we obtain image-level pathology-specific representations for the lesion regions in every image. In the MBM, we fully bridge the pathology representations in different modalities: the image-level pathology-specific representations from the RLM are aligned with the rich semantic information in the pathology word embeddings. In the PGL, we first model the undirected pathology correlation matrix containing all pathology relations in a data-driven manner. Second, by considering the directed information between nodes, we construct an in-degree matrix and an out-degree matrix as directed graphs, taking the in-degree and out-degree of each node as the respective objects of study. Finally, we design a graph learning block in the PGL that integrates the study of pathological information in multiple modalities: the front end of the block is a graph convolution block with a two-branch symmetric structure for learning the two directed graphs containing labeling relations in different directions, and the back end stacks graph attention layers, in which all labeling relations are comprehensively learned on the undirected pathology correlation matrix. The framework is then optimized end-to-end with a multi-label loss function.

    In summary, our contributions are fourfold:

    1) A new RLM is proposed to obtain image-level pathology-specific representation and global image representation for image lesion regions.

    2) A novel MBM is proposed that aligns pathology information in different modal representations.

    3) In the proposed PGL, more accurate pathological relationships are modeled as directed graphs by considering directed information between nodes on the graph. An effective graph learning block is designed to learn the pathology information of different modalities comprehensively.

    4) We evaluated the effectiveness of MRChexNet on two large-scale CXR datasets (ChestX-ray14 [2] and CheXpert [23]), obtaining average AUC scores of 0.8503 and 0.8649, respectively, over 14 pathologies. Our method achieves state-of-the-art performance in terms of classification accuracy and generalizability.

    This section presents a summary of the relevant literature in two aspects. First, previous works on the automatic analysis of CXR images are introduced. Second, several representative works related to cross-modal fusion are presented.

    To improve efficiency and reduce the workloads of radiologists, researchers are beginning to apply the latest advances in deep learning to chest X-ray analysis. In the early days of deep learning techniques applied to CXR recognition, researchers divided the CXR multi-label recognition task into multiple independent disjoint binary labeling problems. An independent binary classifier is trained for each anomaly present in the image. Wang et al. [2] used classical convolutional neural networks and transfer learning to predict CXR images. Rajpurkar et al. [10] improved the network architecture based on DenseNet-121 [11] and proposed CheXNet for anomaly classification in CXR images, which achieved good performance in detecting pneumonia. Li et al. [24] performed thoracic disease identification and localization with additional location annotation supervision. Shen et al. [12] designed a novel network training mechanism for efficiently training CNN-based automatic chest disease detection models. To dynamically capture more discriminative features for thoracic disease classification, Chen et al. [25] used a dual asymmetric architecture based on ResNet and DenseNet. However, as mentioned above, these methods do not account for the correlation between the labels.

    When diagnosing, the radiologist needs to view the lesion area while integrating pathological relationships to make the most likely diagnosis. This necessity inspired researchers to start considering label dependencies. For example, Wang et al. [16] used RNN to model label relevance sequentially. Yao et al. [1] considered multi-label classification as a sequence prediction task with a fixed length. They employed long short-term memory (LSTM) [26] and presented initial results indicating that utilizing label dependency can enhance classification performance. Ypsilantis et al. [17] used an RNN-based bidirectional attention model that focuses on information-rich regions of an image and samples the entire CXR image sequentially. Moreover, some approaches have attempted to use different attentional mechanisms to correlate labels with attended areas. The work of Zhu et al. [13] and Wang et al. [14] both use an attention mechanism that only addresses a limited number of local correlations between regions on an image. Guan et al. [15] used CNNs to learn high-level image features and designed attention-learning modules to provide additional attention guidance for chest disease recognition. It is worth mentioning that as the graph data structure has become a hot research topic, some approaches use graphs to model labeling relationships. Subsequently, Chen et al. [22] introduced a workable framework in which every label represents a node, the term vector of each label acts as a node feature, and GCN is implemented to comprehend the connection among labels in an undirected graph. Li et al. [27] developed the A-GCN, which captures label dependencies by creating an adaptive label structure and has demonstrated exemplary performance. Lee et al. [20] described label relationships using a knowledge graph, which enhances image representation accuracy. Chen et al. [6] employed an undirected graph to represent the relationships between pathologies. They designed CheXGCN by using the word vectors of labels as node features of the graph, and the experiments showed promising results.

    To fuse cross-modal features, researchers often use concatenation or element-wise summation of features from different modalities. Fukui et al. [28] proposed taking the outer product of two vectors from different modalities to fuse multi-modal features with bilinear models. However, this method yields high-dimensional fusion vectors. Hu et al. [29] used data within 24 hours of admission to build simpler machine-learning models for early acute kidney injury (AKI) risk stratification and obtained good results. Xu et al. [30] encouraged data of both the attribute and imaging modalities to be discriminative to improve attribute-image person reidentification. To reduce the high-dimensional computation, Kim et al. [31] designed a method that achieves performance comparable to the work of Fukui et al. by taking the Hadamard product of two feature vectors, but with slow convergence. It is worth mentioning that Zhou et al. [32] introduced a new method with stable performance and accelerated model convergence for fusing image features and text embeddings. Chen et al. [22] used ResNet to learn image features and GCN to learn the semantic information in label word embeddings, and finally fused the two with a simple dot product. Similarly, Wang et al. [33] designed a sum-pooling method to fuse the vectors of the two modalities after learning image features and the semantic information of label word embeddings; it not only reduces the dimensionality of the vectors but also increases the convergence rate of the model.

    This section proposes a multi-label CXR recognition framework, MRChexNet, consisting of three main modules: the representation learning module (RLM), multi-modal bridge module (MBM), and pathology graph learning module (PGL). We first introduce the general framework of our model in Figure 2 and then detail the workflow of each of these three modules. Finally, we describe the datasets, implementation details, and evaluation metrics.

    Figure 2.  The overall framework of our proposed MRChexNet.

    Theoretically, any CNN-based model can be used to learn image features. In our experiments, following [1,6,25], we use DenseNet-169 [11] as the backbone for fair comparisons. Thus, for an input image I with a 224 \times 224 resolution, we obtain 1664 \times 7 \times 7 feature maps from the "Dense Block_4" layer of DenseNet-169. As shown in Figure 2, we perform global average pooling to obtain the image-level global feature x = f_{GAP}(f_{backbone}(I)), where f_{GAP}(\cdot) represents the global average pooling (GAP) [34] operation. We then apply a multi-layer perceptron (MLP) to x to obtain an initial diagnostic score for the image, Y_{MLP}. Specifically, the MLP here consists of a fully connected (FC) layer followed by a sigmoid activation function.

    \begin{equation} Y_{MLP} = f_{MLP}(x; \theta_{MLP}), \end{equation} (3.1)

    where f_{MLP}(\cdot) represents the MLP layer and \theta_{MLP} \in \mathbb{R}^{C \times D} is its parameter. We use the parameter \theta_{MLP} as a diagnoser for each disease and filter a set of disease-specific features from the global feature x. Each diagnoser \theta^{C}_{MLP} \in \mathbb{R}^{D} extracts information related to disease C and predicts the likelihood that disease C appears in the image. Then, the pathology-related feature F_{pr} is disentangled by Eq (3.2).

    \begin{equation} F_{pr} = f_{repeat}(x) \odot \theta_{MLP}. \end{equation} (3.2)

    The operation f_{repeat}(\cdot) indicates that x \in \mathbb{R}^{D} is copied C times to form [x, \dots, x]^{T} \in \mathbb{R}^{C \times D}, with \odot denoting the Hadamard product. Adjusting the global feature x in this way allows the adjusted feature to capture more relevant information for each disease.
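    The RLM described above can be summarized in a short sketch. The following PyTorch code is a minimal, illustrative implementation of Eqs (3.1) and (3.2) under the stated setup (DenseNet-169 backbone, C = 14 diseases, D = 1664); the class and variable names are ours, not the authors' released code.

```python
# Illustrative sketch of the RLM (Eqs (3.1)-(3.2)); names are ours, not the authors' code.
import torch
import torch.nn as nn
import torchvision.models as models

class RepresentationLearningModule(nn.Module):
    def __init__(self, num_diseases: int = 14, feat_dim: int = 1664):
        super().__init__()
        backbone = models.densenet169(weights="DEFAULT")   # assumes torchvision >= 0.13
        self.features = backbone.features                  # 1664 x 7 x 7 maps for a 224 x 224 input
        self.gap = nn.AdaptiveAvgPool2d(1)                  # f_GAP: global average pooling
        # One FC "diagnoser" per disease: theta_MLP in R^{C x D}
        self.diagnoser = nn.Linear(feat_dim, num_diseases, bias=False)

    def forward(self, image: torch.Tensor):
        x = self.gap(self.features(image)).flatten(1)       # global feature x, shape (B, D)
        y_mlp = torch.sigmoid(self.diagnoser(x))            # Eq (3.1): initial diagnostic scores
        # Eq (3.2): copy x C times and take the Hadamard product with theta_MLP
        x_rep = x.unsqueeze(1).expand(-1, self.diagnoser.out_features, -1)    # (B, C, D)
        f_pr = x_rep * self.diagnoser.weight.unsqueeze(0)                     # pathology-related features F_pr
        return y_mlp, f_pr
```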

    In this section, we design the MBM to efficiently align each disease's image features with its semantic word embeddings. As Figure 3 shows, the MBM is divided into two phases: alignment + fusion and squeeze. The input of the MBM module consists of two parts: modal_1 \in \mathbb{R}^{D_1}, which represents the image features, and modal_2 \in \mathbb{R}^{D_2}, which is the word embedding. First, we use two FC layers to convert modal_1 into M_1 \in \mathbb{R}^{D_3} and modal_2 into M_2 \in \mathbb{R}^{D_3}, respectively:

    \begin{equation} \begin{cases} M_1 = FC_1(modal_1) \in \mathbb{R}^{D_3} \\ M_2 = FC_2(modal_2) \in \mathbb{R}^{D_3} \end{cases}. \end{equation} (3.3)
    Figure 3.  Architecture of multi-modal bridge module.

    We add a separate dropout layer for M_2 to prevent redundant semantic information from causing overfitting. After obtaining the two inputs M_1 and M_2 of the same dimension, the initial bilinear pooling [35] is defined as follows:

    \begin{equation} F = M_1^{T} S_i M_2, \end{equation} (3.4)

    where F \in \mathbb{R}^{o} is the output fusion feature of the MBM module and S_i \in \mathbb{R}^{D_3 \times D_3} is the bilinear mapping matrix with bias terms included. S = [S_1, \dots, S_o] \in \mathbb{R}^{D_3 \times D_3 \times o} can be decomposed into two low-rank matrices u_i = [u_1, \dots, u_G] \in \mathbb{R}^{D_3 \times G} and v_i = [v_1, \dots, v_G] \in \mathbb{R}^{D_3 \times G}. Therefore, Eq (3.4) can be rewritten as follows:

    \begin{equation} F_i = \mathbb{1}^{T}\left(u_i^{T} M_1 \circ v_i^{T} M_2\right), \end{equation} (3.5)

    where G is the factor (latent dimension) of the two low-rank matrices and \mathbb{1} \in \mathbb{R}^{G} is an all-one vector. To obtain the final F, two three-dimensional tensors u_i \in \mathbb{R}^{D_3 \times G \times o} and v_i \in \mathbb{R}^{D_3 \times G \times o} need to be learned. Without loss of generality for Eq (3.5), the two learnable tensors u, v are reshaped into two-dimensional matrices, namely, u_i \rightarrow \tilde{u} \in \mathbb{R}^{D_3 \times G o} and v_i \rightarrow \tilde{v} \in \mathbb{R}^{D_3 \times G o}; then Eq (3.5) simplifies to:

    \begin{equation} F = f_{GroupSum}\left(\tilde{u}^{T} M_1 \circ \tilde{v}^{T} M_2, G\right), \end{equation} (3.6)

    where the function f_{GroupSum}(vector, G) maps every g elements of the vector into one group and outputs all G groups obtained after the complete mapping as the latent dimensions, so that F \in \mathbb{R}^{G}. Furthermore, a dropout layer is added after the element-wise multiplication layer to avoid overfitting. Because of the element-wise multiplication, the magnitude of the output neurons can change drastically, and the model can converge to an unsatisfactory local minimum. Therefore, a normalization layer (F \leftarrow F/\|F\|) and a power normalization layer (F \leftarrow \mathrm{sign}(F)|F|^{0.5}) are appended. Finally, F is copied C times through the operation f_{Repeat}(\cdot), giving F \in \mathbb{R}^{C \times G} as the final MBM output. These are the details of the MBM process.
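    As an illustration of the fusion path of Eqs (3.3)-(3.6), the following sketch folds the low-rank projections \tilde{u}, \tilde{v} into the two FC layers (consistent with D_3 = G \cdot g used later) and applies GroupSum, power normalization and L2 normalization; it is a hedged reading of the MBM, not the authors' exact implementation.

```python
# Hedged sketch of the MBM fusion path (Eqs (3.3)-(3.6)) with D3 = 14336, G = 1024, g = 14.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalBridge(nn.Module):
    def __init__(self, d1=1664, d2=300, d3=14336, groups=1024, num_diseases=14,
                 p_drop1=0.3, p_drop2=0.1):
        super().__init__()
        assert d3 % groups == 0                     # g = d3 // groups elements per group
        self.fc1 = nn.Linear(d1, d3)                # Eq (3.3): project image features (modal_1)
        self.fc2 = nn.Linear(d2, d3)                # Eq (3.3): project word embeddings (modal_2)
        self.drop_sem = nn.Dropout(p_drop1)         # dropout on the semantic branch
        self.drop_fuse = nn.Dropout(p_drop2)        # dropout after element-wise multiplication
        self.groups = groups
        self.num_diseases = num_diseases

    def forward(self, modal1, modal2):
        m1 = self.fc1(modal1)                       # (B, D3)
        m2 = self.drop_sem(self.fc2(modal2))        # (B, D3)
        fused = self.drop_fuse(m1 * m2)             # low-rank bilinear pooling via element-wise product
        # GroupSum (Eq (3.6)): sum every g consecutive elements, yielding a G-dimensional vector
        fused = fused.view(fused.size(0), self.groups, -1).sum(dim=-1)
        fused = torch.sign(fused) * torch.sqrt(fused.abs() + 1e-12)    # power normalization
        fused = F.normalize(fused, dim=-1)                             # L2 normalization
        return fused.unsqueeze(1).expand(-1, self.num_diseases, -1)    # repeat C times -> (B, C, G)
```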

    Our PGL module is built on top of graph learning. In traditional graph learning techniques, the node-level output is the predicted score of each node. In contrast, the final output of our graph learning block serves as the classifier for the corresponding label in our task. We use the fused features of the MBM output as the node features for graph learning. Furthermore, the graph structure (i.e., the correlation matrix) is typically predefined in other tasks, but it is not provided in the multi-label CXR image recognition task, so we need to construct the correlation matrix ourselves. Therefore, we devise a new method for constructing the correlation matrix by considering the directed information of graph nodes.

    First, we capture the pathological dependencies based on the label statistics of the entire dataset and construct the pathology correlation matrix A_{pc}. Specifically, we count the number of occurrences T_i of the i-th pathological label L_i and the number of simultaneous occurrences of L_i and L_j (T_{ij} = T_{ji}). The label dependency can then be expressed by a conditional probability as follows:

    \begin{equation} P_{ij} = P(L_i \mid L_j) = \frac{T_{ij}}{T_j}, \quad i \in [1, C], \end{equation} (3.7)

    where P_{ij} denotes the probability that L_i occurs given that L_j occurs. Note that since conditional probabilities between two objects are asymmetric, P_{ij} \neq P_{ji}. The element A^{pc}_{ij} at each position of this matrix is equal to P_{ij}. Then, by considering directed information on the graph structure, we split it into an in-degree matrix A^{in}_{pc} and an out-degree matrix A^{out}_{pc}, defined as follows:

    \begin{equation} A^{in}_{pc} = \sum_{k} \frac{A^{pc}_{ki} A^{pc}_{kj}}{\sum_{v} A^{pc}_{kv}}, \quad i, j \in C, \; k, v \in C, \end{equation} (3.8)
    \begin{equation} A^{out}_{pc} = \sum_{k} \frac{A^{pc}_{ik} A^{pc}_{jk}}{\sum_{v} A^{pc}_{vk}}, \quad i, j \in C, \; k, v \in C. \end{equation} (3.9)
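    The matrices above can be assembled directly from the training labels. The sketch below builds A_{pc} from Eq (3.7); the degree-normalized split into in- and out-matrices is our hedged reading of Eqs (3.8) and (3.9), and the authors' exact normalization may differ.

```python
# Sketch: pathology correlation matrix from label statistics (Eq (3.7)) and a hedged
# degree-normalized split into "in" and "out" matrices in the spirit of Eqs (3.8)-(3.9).
import numpy as np

def build_correlation_matrices(labels: np.ndarray):
    """labels: binary matrix of shape (num_images, C), one row per training image."""
    counts = labels.sum(axis=0)                        # T_i: occurrences of each label
    cooccur = labels.T @ labels                        # T_ij: joint occurrences (symmetric)
    a_pc = cooccur / np.maximum(counts[None, :], 1)    # P_ij = T_ij / T_j (column-conditioned)
    # Hedged directed split: normalize by column sums ("in") and by row sums ("out").
    a_in = a_pc / np.maximum(a_pc.sum(axis=0, keepdims=True), 1e-12)
    a_out = a_pc / np.maximum(a_pc.sum(axis=1, keepdims=True), 1e-12)
    return a_pc, a_in, a_out
```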

    Then, in our PGL, the dual-branch learning of the graph learning block is specifically defined as:

    \begin{equation} Z^{in} = f^{in}_{gc}\left(A^{in}_{pc} F \theta^{in}_{gc}\right), \end{equation} (3.10)
    \begin{equation} Z^{out} = f^{out}_{gc}\left(A^{out}_{pc} F \theta^{out}_{gc}\right), \end{equation} (3.11)

    where Z^{in} and Z^{out} are the outputs of the in-degree branch and the out-degree branch, respectively, f_{gc}(\cdot) denotes the graph convolution operation, and \theta_{gc} denotes the corresponding trainable transformation matrix.
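    A minimal sketch of this dual-branch graph convolution (Eqs (3.10) and (3.11)) is given below; it is written per image for clarity, with the MBM fusion features as node features and separate trainable transforms for the two branches.

```python
# Minimal sketch of the dual-branch graph convolution of Eqs (3.10)-(3.11), written per image.
import torch
import torch.nn as nn

class DualBranchGraphConv(nn.Module):
    def __init__(self, in_dim=1024, out_dim=1024):
        super().__init__()
        self.theta_in = nn.Linear(in_dim, out_dim, bias=False)    # trainable transform of the in-degree branch
        self.theta_out = nn.Linear(in_dim, out_dim, bias=False)   # trainable transform of the out-degree branch
        self.bn_in, self.bn_out = nn.BatchNorm1d(out_dim), nn.BatchNorm1d(out_dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, node_feats, a_in, a_out):
        # node_feats: (C, in_dim) fused MBM features; a_in / a_out: (C, C) directed matrices
        z_in = a_in @ self.theta_in(node_feats)      # Eq (3.10)
        z_out = a_out @ self.theta_out(node_feats)   # Eq (3.11)
        # Z_all = f(Z_in) + f(Z_out), with f = batch normalization + LeakyReLU as in the text
        return self.act(self.bn_in(z_in)) + self.act(self.bn_out(z_out))
```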

    To further learn the correlations between different pathological features, we use a graph attention network (GAT) [36] to consider Z^{in} and Z^{out} jointly. We do this by using Z^{all} = f(Z^{in}) + f(Z^{out}) as the input feature to the graph attention layers, where f(\cdot) denotes a batch normalization layer followed by the nonlinear activation LeakyReLU. The graph attention layer transforms the implicit features of the input nodes and aggregates neighborhood information into each node to strengthen the correlation between the information of the central node and that of its neighbors. The input Z^{all} to the graph attention layer is the set of node features \{Z^{all}_1, Z^{all}_2, \dots, Z^{all}_n\}, Z^{all}_i \in \mathbb{R}^{d}, where d is the number of feature dimensions of each node. The attention weight coefficients e_{i,j} are computed between node i and each neighbor j \in NB_i through a learnable linear transformation matrix W applied to all nodes, as shown in Eq (3.12).

    \begin{equation} e_{i,j} = a\left[W X_i \, \| \, W X_j\right], \end{equation} (3.12)

    where \| is the concatenation operation, W \in \mathbb{R}^{\acute{d} \times d} and a \in \mathbb{R}^{\acute{d} \times d} are learnable parameters, and \acute{d} denotes the dimensionality of the output features. The graph attention layer allows each node to attend to every other node. e_{i,j} is passed through the LeakyReLU nonlinear activation function and normalized over the neighborhood by a softmax, which can be expressed as:

    \begin{equation} \alpha_{i, j} = {Softmax}_j\left(e_{i, j}\right) = \frac{\exp \left({LeakyReLU}\left(e_{i, j}\right)\right)}{\sum_{k \in NB_i} \exp \left({LeakyReLU}\left(e_{i, k}\right)\right)}. \end{equation} (3.13)

    To stabilize the learning process of the graph attention in the PGL module, we extended the multiheaded self-attention mechanism within it as follows:

    \begin{equation} Y_{PGL} = \|_{k = 1}^K {ReLU}\left(\alpha^{(k)} \boldsymbol{Z^{all}} \boldsymbol{W^k}\right), \end{equation} (3.14)

    where Y_{PGL}\in\mathbb{R}^{K\acute{D}} denotes the output features incorporating the pathology-correlated features, K denotes the number of attention heads, and \alpha^{(k)} denotes the normalized k -th attention weight coefficient matrix. W^{k} denotes the transformable weight matrix under the corresponding k -th attention head. Finally, the output features are averaged and passed to the next node.

    \begin{equation} Y_{PGL} = ReLU\left(\frac{1}{K} \sum\limits_{k = 1}^K \alpha^{(k)} \boldsymbol{Z^{all}} \boldsymbol{W^k}\right). \end{equation} (3.15)

    We show through empirical studies that PGL can detect potentially strong correlations between pathological features. It improves the model's ability to learn implicit relationships between pathologies.
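    For concreteness, the following sketch implements one multi-head graph attention layer over the C pathology nodes in the spirit of Eqs (3.12)-(3.14), assuming a fully connected label graph; it is an illustrative dense implementation rather than the authors' code.

```python
# Hedged dense implementation of one multi-head graph attention layer (Eqs (3.12)-(3.14)),
# assuming a fully connected graph over the C pathology nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGATLayer(nn.Module):
    def __init__(self, in_dim=1024, out_dim=512, heads=2):
        super().__init__()
        self.heads, self.out_dim = heads, out_dim
        self.W = nn.Linear(in_dim, heads * out_dim, bias=False)    # shared linear transform W
        self.a = nn.Parameter(torch.empty(heads, 2 * out_dim))     # attention vector a (one per head)
        nn.init.xavier_uniform_(self.a)

    def forward(self, z_all):                                      # z_all: (C, in_dim)
        C = z_all.size(0)
        h = self.W(z_all).view(C, self.heads, self.out_dim)        # (C, K, d')
        # e_ij = a [W z_i || W z_j]  (Eq (3.12)), split into a source and a destination part
        src = torch.einsum("ikd,kd->ik", h, self.a[:, :self.out_dim])   # (C, K)
        dst = torch.einsum("jkd,kd->jk", h, self.a[:, self.out_dim:])   # (C, K)
        e = F.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0), 0.2)      # (C, C, K)
        alpha = torch.softmax(e, dim=1)                                 # Eq (3.13): normalize over neighbors
        out = torch.einsum("ijk,jkd->ikd", alpha, h)                    # attention-weighted aggregation
        return F.relu(out.reshape(C, -1))                               # Eq (3.14): concatenate the K heads
```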

    After obtaining Y_{MLP} and Y_{PGL} , we set the final output of our model as Y_{Out} = Y_{MLP} + Y_{PGL} and then feed it into the loss function to calculate the loss. Finally, we update the entire network end-to-end using the MultiLabelSoftMargin loss (called multi-label loss) function [37]. The training loss function is described as:

    \begin{equation} \begin{aligned} \mathcal{L}\left(Y_{Out}, L\right) = &-\frac{1}{C} \sum\limits_{j = 1}^{C} L_{j} \log \left(\left(1+\exp \left(-Y_{out_j}\right)\right)^{-1}\right) \\&+\left(1-L_{j}\right) \log \left(\frac{\exp \left(-Y_{out_j}\right)}{\left(1+\exp \left(-Y_{out_j}\right)\right)}\right), \end{aligned} \end{equation} (3.16)

    where Y_{Out} and L denote the predicted pathology and the true pathology of the sample image, respectively. Y_{out_j} and L_{j} denote the j -th element in its predicted pathology and the j -th element in the actual pathology.
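    In PyTorch, Eq (3.16) corresponds to the built-in MultiLabelSoftMarginLoss, so the loss computation can be sketched as follows (Y_{MLP} and Y_{PGL} are the two module outputs; shapes are assumed to be (batch, C)).

```python
# Sketch of the training objective: Eq (3.16) matches PyTorch's MultiLabelSoftMarginLoss,
# applied to Y_out = Y_MLP + Y_PGL.
import torch.nn as nn

criterion = nn.MultiLabelSoftMarginLoss()

def compute_loss(y_mlp, y_pgl, targets):
    """y_mlp, y_pgl: module outputs of shape (B, C); targets: binary labels of shape (B, C)."""
    y_out = y_mlp + y_pgl          # final prediction Y_out used by the model
    return criterion(y_out, targets)
```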

    In this section, we report and discuss the results on two benchmark multi-label CXR recognition datasets. Ablation experiments were also conducted to explore the effects of different parameters and components on MRChexNet. Finally, a visual analysis was performed.

    ChestX-Ray14 is a large CXR dataset. It contains 78,466 training images, 11,220 validation images, and 22,434 test images. Each patient image carries approximately 1.6 pathology labels on average, drawn from 14 semantic categories. Each image is labeled with one or more pathologies, as illustrated in Figure 4. We strictly follow the official splitting standards of ChestX-Ray14 provided by Wang et al. [2] to conduct our experiments so that our results are directly comparable with most published baselines. We use the training and validation sets to train our model and then evaluate the performance on the test set.

    Figure 4.  Example images and the corresponding labels in the ChestX-Ray14 and CheXpert datasets. Each image is labeled with one or more pathologies. In CheXpert, the uncertain pathology is marked in red.

    CheXpert is a popular dataset for recognizing, detecting and segmenting common chest and lung diseases. There are 224,616 images in the database, including 12 pathology labels and two nonpathology labels (no finding and support devices). Each image is assigned one or more disease symptoms, and the disease results are labeled as positive, negative or uncertain, as illustrated in Figure 4; if no positive disease is found in the image, it is labeled as 'no finding'. Uncertain labels in the images can be treated as positive (CheXpert_1s) or negative (CheXpert_0s). On average, each image has 2.9 pathology labels for CheXpert_1s and 2.3 for CheXpert_0s. Since the data for the official test set have not been published, we redivided the dataset into a training set, a validation set, and a test set at a ratio of 7:1:2.
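    For reference, the two uncertainty policies can be applied with a small preprocessing step such as the sketch below; it assumes the standard CheXpert CSV convention of -1 for uncertain labels and blanks for unmentioned ones.

```python
# Hedged sketch of the two CheXpert uncertainty policies: U_Ones maps uncertain labels to
# positive (CheXpert_1s), U_Zeros maps them to negative (CheXpert_0s). Assumes the standard
# CheXpert CSV convention (-1 = uncertain, blank = not mentioned).
import pandas as pd

def apply_uncertainty_policy(df: pd.DataFrame, label_cols, policy: str = "ones") -> pd.DataFrame:
    out = df.copy()
    fill = 1.0 if policy == "ones" else 0.0                            # CheXpert_1s vs. CheXpert_0s
    out[label_cols] = out[label_cols].replace(-1.0, fill).fillna(0.0)  # blanks treated as negative
    return out
```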

    As described earlier, the proposed PGL module involves the global modeling of all pathologies on the basis of cooccurrence pairs, the results of which are the identification of potential pathologies present in each image. As shown in Figure 5, many pathology pairs with cooccurrence relationships were obtained by counting the occurrences of all pathologies in both datasets separately. For example, lung disease is frequently associated with pleural effusion, and atelectasis is frequently associated with infiltration. This phenomenon serves as a basis for constructing pathology correlation matrix A_{pc} and provides initial evidence of the feasibility of the proposed PGL module.

    Figure 5.  Graph representations of the pathology correlation extracted from the ChestX-Ray14, CheXpert_1s and CheXpert_0s datasets.

    All experiments were run on an Intel 8268 CPU and an NVIDIA Tesla V100 32 GB GPU, and the model was implemented in the PyTorch framework. First, we resized all images to 256 \times 256 and normalized them with the mean and standard deviation of the ImageNet dataset. Then, random cropping to 224 \times 224, random horizontal flipping, and random rotation were applied, as some images may have been flipped or rotated within the dataset. The output feature dimension D_{1} of the backbone was 1664. In the PGL module, we designed a graph learning block consisting of 1-1 symmetrically structured GCN layers stacked with 2(2) graph attention layers (the number in parentheses is the number of attention heads within the layer). The numbers of GCN output channels were 1024 and 1024, respectively. We used a 2-layer GAT model, with the first layer using K = 2 attention heads, each head computing 512 features (1024 features in total), followed by an exponential linear unit (ELU) [46] nonlinearity; the second layer did the same, averaging these features, followed by a logistic sigmoid activation. In addition, we used LeakyReLU with a negative slope of 0.2 as the nonlinear activation function in the PGL module. The input pathology label word embedding was a 300-dimensional vector generated by the GloVe model pretrained on the Wikipedia dataset. When a pathology label consisted of multiple words, we used the average vector of all its words as the pathology label word embedding. In the MBM, we set D_{3} = 14,336 to bridge the vectors of the two modalities. Furthermore, we set G = 1024 with g = 14 to complete the GroupSum method. The ratios of dropout1 and dropout2 were 0.3 and 0.1, respectively. The whole network was updated by AdamW with a momentum of (0.9, 0.999) and a weight decay of 1e-4. The initial learning rate of the whole model was 0.001, which was decreased by a factor of 10 every 10 epochs.
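    The label word embeddings described above can be prepared with a short helper like the following; the GloVe file name is illustrative, and multi-word labels are averaged as stated in the text.

```python
# Sketch of building the 300-d pathology label embeddings: GloVe vectors are averaged when a
# label consists of several words. The file name below is illustrative.
import numpy as np

def load_glove(path="glove.300d.txt"):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def label_embedding(label: str, glove: dict) -> np.ndarray:
    words = label.lower().replace("_", " ").split()
    vecs = [glove[w] for w in words if w in glove]       # average over the words of the label
    return np.mean(vecs, axis=0) if vecs else np.zeros(300, dtype=np.float32)
```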

    In our experiments, we used the AUC value [38] (the area under the receiver operating characteristic (ROC) curve [38]) for each pathology and the mean AUC value across all pathologies to measure the performance of MRChexNet. There was no data overlap between the training and testing subsets. The true label of each image is denoted L = \left[L_1, L_2, \dots, L_C \right]. In both CXR datasets the number of labels is C = 14, and each element L_C indicates the presence or absence of the C-th pathology, i.e., 1 indicates presence and 0 indicates absence. For each image, a label was predicted as positive if its confidence was greater than 0.5.
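    The evaluation protocol can be reproduced with scikit-learn as in the sketch below, computing one AUC per pathology and averaging over the C = 14 labels.

```python
# Sketch of the evaluation protocol: per-pathology AUC and the mean AUC over all C = 14 labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_auc(y_true: np.ndarray, y_score: np.ndarray):
    """y_true: (N, C) binary ground truth; y_score: (N, C) predicted confidences."""
    per_label = [roc_auc_score(y_true[:, c], y_score[:, c]) for c in range(y_true.shape[1])]
    return per_label, float(np.mean(per_label))
```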

    In this section, we conduct experiments on ChestX-Ray14 and CheXpert to compare the performance of MRChexNet with existing methods.

    Results from ChestX-Ray14 and discussion: We compared MRChexNet with a variety of existing methods, including U-DCNN [2], LSTM-Net [1], CheXNet [10], DNet [39], AGCL [19], DR-DNN [12], CRAL [15], DualCheXN [25] and CheXGCN [6]. We present the results of the comparison on ChestX-Ray14 in Table 1, including the evaluation metrics for all 14 pathology labels of the dataset. MRChexNet outperformed all candidate methods on most pathology-labeled metrics. Figure 6 illustrates the ROC curves of our model over the 14 pathologies on ChestX-Ray14. Specifically, MRChexNet outperformed these previous methods in mean AUC score, especially U-DCNN (0.745) and LSTM-Net (0.798), with improvements of 10.5% and 3.7%, respectively. Moreover, it outperformed DualCheXNet (0.823) and improved the AUC score for detecting consolidation (0.819 vs. 0.746) and pneumonia (0.783 vs. 0.727) by more than 6.0%. Notably, the mean AUC score of MRChexNet improved by 2.4% over CheXGCN (0.826). The AUC scores of several pathologies were obviously improved by MRChexNet, e.g., cardiomegaly (0.923 vs. 0.893), consolidation (0.819 vs. 0.751), edema (0.904 vs. 0.850) and atelectasis (0.824 vs. 0.786). It must be mentioned that our proposed model performed somewhat poorly on the nodule and fibrosis labels. Note that the pathogenesis of these diseases is systemic, and we generated the word embeddings of their pathological labels using only their noun meanings, without adding additional semantics to explain their sites of pathogenesis. This issue led to the unsatisfactory performance of MRChexNet on these pathologies. Overall, the proposed MRChexNet improved the multi-label recognition performance on ChestX-Ray14 and outperformed existing methods.

    Table 1.  AUC comparisons of MRChexNet with existing methods on ChestX-Ray14.
    Method    atel  card  effu  infi  mass  nodu  pne1  pne2  cons  edem  emph  fibr  pt  hern    Mean AUC    (per-pathology AUC on ChestX-Ray14)
    U-DCNN [2] 0.700 0.810 0.759 0.661 0.693 0.669 0.658 0.799 0.703 0.805 0.833 0.786 0.684 0.872 0.745
    LSTM-Net [1] 0.772 0.904 0.859 0.695 0.792 0.717 0.713 0.841 0.788 0.882 0.829 0.767 0.765 0.914 0.798
    DR-DNN [12] 0.766 0.801 0.797 0.751 0.760 0.741 0.778 0.800 0.787 0.820 0.773 0.765 0.759 0.748 0.775
    AGCL [19] 0.756 0.887 0.819 0.689 0.814 0.755 0.729 0.850 0.728 0.848 0.906 0.818 0.765 0.875 0.803
    CheXNet [10] 0.769 0.885 0.825 0.694 0.824 0.759 0.715 0.852 0.745 0.842 0.906 0.821 0.766 0.901 0.807
    DNet [39] 0.767 0.883 0.828 0.709 0.821 0.758 0.731 0.846 0.745 0.835 0.895 0.818 0.761 0.896 0.807
    CRAL [15] 0.781 0.880 0.829 0.702 0.834 0.773 0.729 0.857 0.754 0.850 0.908 0.830 0.778 0.917 0.816
    DualCheXN [25] 0.784 0.888 0.831 0.705 0.838 0.796 0.727 0.876 0.746 0.852 0.942 0.837 0.796 0.912 0.823
    CheXGCN [6] 0.786 0.893 0.832 0.699 0.840 0.800 0.739 0.876 0.751 0.850 0.944 0.834 0.795 0.929 0.826
    MRChexNet (Ours) 0.824 0.923 0.894 0.719 0.857 0.779 0.783 0.888 0.819 0.904 0.920 0.835 0.808 0.946 0.850
    Note: The 14 pathologies in Chest X-Ray14 are atelectasis (atel), cardiomegaly (card), effusion (effu), infiltration (infi), mass, nodule (nodu), pneumonia (pne1), pneumothorax (pne2), consolidation (cons), edema (edem), emphysema (emph), fibrosis (fibr), pleural thickening (pt) and hernia (hern).

    Figure 6.  ROC curves of MRChexNet on the ChestXRay14 and CheXpert, respectively. The corresponding AUC scores are given in Tables 1-3.

    Results from CheXpert and discussion: To the best of our knowledge, the official test set of CheXpert has not yet been made publicly available, so the dataset can only be re-split by the user, and fewer state-of-the-art methods are available for comparison. For this reason, we further compared our model with the uncertainty labeling treatments mentioned in the original dataset paper (U_Ones and U_Zeros). As shown in Table 2, MRChexNet_1s obtained higher mean AUC scores on the 14 pathological labels of CheXpert_1s, 1.5% higher than the U_Ones technique of the original paper. Additionally, compared to the vanilla DenseNet-169, the improvement is 3.8%. As shown in Table 3, MRChexNet_0s obtained higher mean AUC scores on the 14 pathological labels of CheXpert_0s, 2.1% higher than the U_Zeros technique of the original paper, and the mean AUC score of MRChexNet is 3.1% higher than that of vanilla DenseNet-169. These results indicate that our two proposed modules reinforce each other. Overall, the AUC score of MRChexNet_1s was better than that of MRChexNet_0s by 0.3%, especially for lung lesion by 3.5% (0.788 \rightarrow 0.823), atelectasis by 2.5% (0.707 \rightarrow 0.732) and fracture by 2.7% (0.793 \rightarrow 0.820). This is because the true value of these uncertain labels is likely to be positive for these pathologies, so treating them as positive is beneficial; the converse holds for labels whose uncertain mentions are mostly negative. Figure 6 illustrates the ROC curves of MRChexNet on ChestX-ray14, CheXpert_1s and CheXpert_0s for the 14 pathologies.

    Table 2.  AUC comparisons of MRChexNet with previous baseline on CheXpert_1s.
    Method    nofi  enla  card  opac  lesi  edem  cons  pne1  atel  pne2  pleu1  pleu2  frac  supp    Mean AUC    (per-pathology AUC on CheXpert_1s)
    ML-GCN [22] 0.879 0.630 0.841 0.723 0.773 0.856 0.692 0.740 0.713 0.829 0.873 0.802 0.762 0.868 0.784
    U_Ones [23] 0.890 0.659 0.856 0.735 0.778 0.847 0.701 0.756 0.722 0.855 0.871 0.798 0.789 0.878 0.795
    DenseNet-169 [11] 0.916 0.717 0.895 0.770 0.783 0.882 0.710 0.774 0.728 0.871 0.916 0.817 0.805 0.909 0.821
    MRChexNet_1s (Ours) 0.976 0.738 0.900 0.887 0.940 0.884 0.701 0.719 0.759 0.925 0.924 0.852 0.958 0.944 0.865
    Note: The 14 pathologies in CheXpert are no Finding (nofi), enlarged cardiomediastinum (enla), cardiomegaly (card), lung opacity (opac), lung lesion (lesi), edema (edem), consolidation (cons), pneumonia (pne1), atelectasis (atel), pneumothorax (pne2), pleural effusion (pleu1), pleural other (pleu2), fracture (frac) and support devices (supp).

    Table 3.  AUC comparisons of MRChexNet with the previous baseline on CheXpert_0s.
    Method    nofi  enla  card  opac  lesi  edem  cons  pne1  atel  pne2  pleu1  pleu2  frac  supp    Mean AUC    (per-pathology AUC on CheXpert_0s)
    ML-GCN [22] 0.864 0.673 0.831 0.681 0.802 0.770 0.713 0.758 0.654 0.845 0.841 0.764 0.754 0.838 0.771
    U_Zeros [23] 0.885 0.678 0.865 0.730 0.760 0.853 0.735 0.740 0.700 0.872 0.880 0.775 0.743 0.877 0.792
    DenseNet-169 [11] 0.912 0.715 0.884 0.738 0.780 0.861 0.753 0.770 0.711 0.860 0.904 0.830 0.758 0.878 0.811
    MRChexNet_0s (Ours) 0.914 0.808 0.894 0.748 0.913 0.827 0.801 0.868 0.744 0.928 0.876 0.909 0.915 0.859 0.858


    MRChexNet with its different components on ChestX-Ray14: We experimented with the performance of the components of MRChexNet; the results are shown in Table 4. In baseline + PGL, we use a simple summation of elements instead of the MBM to fuse the visual feature vectors of pathology and the semantic word vectors of pathology. The obtained simple fusion vectors are used as the node features of the graph learning block. Compared to the baseline DenseNet-169, the mean AUC score of baseline + PGL was significantly higher by 3.6% (0.782 \rightarrow 0.818), especially in atelectasis (0.775 \rightarrow 0.820), cardiomegaly (0.879 \rightarrow 0.920), effusion (0.826 \rightarrow 0.888) and nodule (0.689 \rightarrow 0.769), exceeding the vanilla DenseNet-169 by an average of 5.7% on those pathology labels. The experimental results showed that the proposed PGL module is crucial in mining the global cooccurrence between pathologies. Note that in the baseline + MBM model, the fixed input modal_2 to the MBM module is the set of 14 pathology-label word vectors with initial semantic information. The visual features of each pathology are aligned with its semantic word vector, and the resulting cross-modal fusion vectors are passed through one FC layer to obtain the output. Compared to the DenseNet-169 baseline, the mean AUC score of baseline + MBM was significantly higher by 2.7% (0.782 \rightarrow 0.809), especially in atelectasis (0.775 \rightarrow 0.800), effusion (0.826 \rightarrow 0.860), pneumothorax (0.823 \rightarrow 0.859), and mass (0.766 \rightarrow 0.856), exceeding the vanilla DenseNet-169 by an average of 4.6% on those pathology labels. With the addition of the MBM and PGL modules, MRChexNet significantly improved the mean AUC score by 6.8%. In particular, the AUC score improvement was significant for atelectasis (0.775 \rightarrow 0.824), pneumothorax (0.823 \rightarrow 0.888), and emphysema (0.838 \rightarrow 0.920). This phenomenon indicates that the MBM and PGL modules in our framework can reinforce and complement each other to make MRChexNet perform at its best.

    Table 4.  Comparison of AUC of MRChexNet with its different components on ChestX-Ray14.
    Method    atel  card  effu  infi  mass  nodu  pneu1  pneu2  cons  edem  emph  fibr  pt  hern    Mean AUC    (per-pathology AUC on ChestX-Ray14)
    Baseline : DenseNet-169 [11] 0.775 0.879 0.826 0.685 0.766 0.689 0.725 0.823 0.788 0.841 0.838 0.767 0.742 0.811 0.782
    Baseline + MBM 0.800 0.892 0.860 0.707 0.856 0.760 0.741 0.859 0.810 0.870 0.883 0.711 0.781 0.796 0.809
    Baseline + PGL 0.820 0.920 0.888 0.710 0.784 0.769 0.756 0.873 0.808 0.896 0.874 0.744 0.799 0.804 0.818
    MRChexNet (Ours) 0.824 0.923 0.894 0.719 0.857 0.779 0.783 0.888 0.819 0.904 0.920 0.835 0.808 0.946 0.850


    Testing time for different components in MRChexNet: We measured the inference time of each component of MRChexNet; the results are shown in Table 5. Inference time is reported in seconds and defined as the time needed to infer one image. We first timed one image with the Baseline and took that time as the base. We then timed one image with Baseline + MBM and with Baseline + PGL, and subtracted the base inference time of the baseline to obtain the additional inference time of each module (a timing sketch is given after Table 5). According to the results, MBM and PGL increase the inference time of the model by 12.1 \times \; 10^{-6} and 20.3 \times \; 10^{-6} s, respectively, and the full MRChexNet takes 33.7 \times \; 10^{-6} s per image. It is worth mentioning that the interaction of the two modules achieves satisfactory recognition performance, which is an acceptable cost compared to the manual reasoning time of a radiologist.

    Table 5.  Comparison of the test time of MRChexNet with its different components.
    Method Test time (1 image)
    Baseline : DenseNet - 169 2.5 \times \; 10^{-6} s
    (Baseline + MBM) - Baseline 12.1 \times \; 10^{-6} s
    (Baseline + PGL) - Baseline 20.3 \times \; 10^{-6} s
    MRChexNet (Ours) 33.7 \times \; 10^{-6} s

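    The per-image timing reported in Table 5 can be measured along the lines of the following sketch; explicit CUDA synchronization and warm-up runs are assumed so that the interval reflects the full forward pass.

```python
# Hedged sketch of per-image inference timing on GPU (warm-up plus explicit synchronization).
import time
import torch

@torch.no_grad()
def time_single_image(model, image, device="cuda", warmup=10, runs=100):
    model.eval().to(device)
    image = image.to(device)
    for _ in range(warmup):                  # warm-up iterations exclude one-off initialization costs
        model(image)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(image)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs    # average seconds per image
```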

    MRChexNet under different types of word embeddings: By default, we use GloVe [40] as the token representation input to the multi-modal bridge module (MBM). In this section, we evaluate the performance of MRChexNet under other popular word representations. Specifically, we investigate several different word embedding methods, including GloVe [40], FastText [41], and simple one-hot word embedding. Figure 7 shows the results using different word embeddings on ChestX-Ray14 and CheXpert. As shown, thoracic disease recognition accuracy is not significantly affected when different word embeddings are used as inputs to the MBM. Furthermore, the observations (especially the results with one-hot embeddings) demonstrate that the accuracy improvement achieved by our approach does not come entirely from the semantics carried by the word embeddings. Nevertheless, using more powerful word embeddings led to better performance. One possible reason is that word embeddings learned from a large text corpus maintain some semantic topology; that is, semantically related concept embeddings are close in the embedding space. Our model can exploit these implicit dependencies and further benefit thoracic disease recognition.

    Figure 7.  Effects of different pathology word embedding approaches. It is clear that different pathology word embeddings have little effect on accuracy. This shows that our improvements are not necessarily due to the semantic meanings derived from the pathology word embeddings but rather to our MRChexNet.

    Groups G and elements g in GroupSum: In this section, we evaluate the performance of the MBM in MRChexNet by using a different number of groups G and the number of elements g within a group. With the GroupSum in the MBM, each D_{3} -dimensional vector will be converted into a G -dimensional vector. We have a set of G - g \in \{(2048, 7), (1024, 14), (512, 28), (256, 56), (128,112)\} to generate a low-dimensional bridging vector. As shown in Figure 8, MRChexNet obtains better performance on ChestX-Ray14 when G = 1024 and g = 14 are chosen, while the change in the mean AUC is very slight on CheXpert. We believe that the original semantic information between the pathology word embeddings can be better expressed by G = 1024 and g = 14. Other values of G - g bring similar results, which do not affect the model too much.

    Figure 8.  The change of mean AUC using different values of G - g .

    Different numbers of GCN layers and GAT layers of the graph learning block in PGL: Since the front end of the graph learning block we designed is a GCN with a dual-branch symmetric structure, the main discussion concerns the number of GCN layers on each branch. We set the graph attention layers at the end of the graph learning block. To maintain the symmetry of the graph learning block structure, we kept the number of GAT layers equal to the number of attention heads within each layer. We show the performance results for different numbers of GCN layers of our model in Table 6. For the 1-1-layer GCN, in each branch, the output dimension of the layer is 1024. For the 2-2-layer GCN, in each branch, the output dimensions of the sequential layers are 1024 and 1024. For the 3-3-layer GCN, in each branch, the output dimensions of the sequential layers are 1024, 1024 and 1024. We aligned the number of graph attention layers with the number of attention heads. Specifically, for the 1-layer GAT model, the layer uses K = 1 attention head, which computes 1024 features (1024 features in total). For the 2-layer GAT model, the first layer uses K = 2 attention heads, each head computing 512 features (1024 features in total), and the second layer does the same. As shown in the table, the pathology recognition performance on both datasets decreased when the number of GCN layers and the number of GAT layers increased. The performance degradation is due to the accumulation of information transfer between nodes when more GCN and GAT layers are used, leading to oversmoothing.

    Table 6.  The different number of GCN layers and GAT layers of the graph learning block in PGL.
    Dual-branch GCN layers    GAT layers (heads)    ChestX-Ray14    CheXpert-0s    CheXpert-1s    (values are mean AUC)
    1-1 1(1) 0.8417 0.8493 0.8366
    2(2) 0.8503 0.8649 0.8575
    2-2 1(1) 0.8342 0.8402 0.8309
    2(2) 0.8251 0.8323 0.8187
    3-3 1(1) 0.8187 0.8238 0.8194
    2(2) 0.8063 0.8109 0.8057


    In Figure 9, we visualize the original images and the corresponding label-specific activation maps obtained by our proposed MRChexNet. It is clear that MRChexNet can capture the discriminative semantic regions of the images for the different chest diseases. Figure 10 illustrates a visual representation of multi-label CXR recognition. The top-eight predicted scores for each test subject are given and sorted top-down by the magnitude of the predicted score values. As shown in Figure 10, compared with the vanilla DenseNet-169 model, the proposed MRChexNet enhances the performance of multi-label CXR recognition. Our MRChexNet can effectively improve associated pathology confidence scores and suppress nonassociated pathology scores with fully considered and modeled global label relationships. For example, in column 1, row 2, MRChexNet fully considers the pathological relationship between effusion and atelectasis. In the presence of effusion, the corresponding confidence score for atelectasis was (0.5210 \rightarrow 0.9319); compared to vanilla DenseNet-169 performance, the confidence score improved by approximately 0.4109. For the weakly correlated labels, effusion ranked first in column 2, row 3 regarding the DenseNet-169 score. While MRChexNet fully considers the global interlabel relationships, its confidence score does not reach the top 8. To some extent, this demonstrates the ability of our model to suppress the confidence scores of nonrelevant pathologies.

    Figure 9.  Visualization results of pathology correlation activation maps on ChestX-Ray14 dataset. The three columns on the right are three samples with different diseases and their corresponding activation maps.
    Figure 10.  Visualization results of our model scoring the highest pathology on the images to be tested in the ChestX-Ray14 dataset. We present the top-eight predicted pathology labels and the corresponding probability scores. The ground truth labels are highlighted in red.

    For multi-label CXR recognition algorithms in clinical environments, it is vital to consider the correspondence between pathology labels in different modalities and to capture the correlations between related pathologies, that is, to align pathology-relationship representations across modalities and to learn the relationship information of pathologies within each modality. In this paper, we propose a multi-modal bridge and relational learning method named MRChexNet to align pathological representations in different modalities and learn the pathology relationship information within each modality. Specifically, our model first extracts pathology-specific feature representations in the imaging modality through a practical RLM. Then, an efficient MBM is designed to align pathological word embeddings with image-level pathology-specific feature representations. Finally, a novel PGL is designed to comprehensively learn the correlations of pathologies within each modality. Extensive experimental results on ChestX-Ray14 and CheXpert show that the proposed MBM and PGL effectively enhance each other, thus significantly improving the model's multi-label CXR recognition performance with satisfactory results. In the future, we will introduce a relation weight parameter in the pathology relation modeling to learn more accurate pathology relations and further improve multi-label CXR recognition performance.

    In the future, we will extend the applicability of the proposed method to other imaging modalities, such as optical coherence tomography (OCT). Among them, OCT is a noninvasive optical imaging modality that provides histopathology images with microscopic resolution [42,43,44,45]. Our next research direction is extending the proposed method for OCT-based pathology image analysis. In addition, exploring the interpretability and readability of models has been a hot research topic in making deep learning techniques applicable to clinical diagnosis. Our next research direction is also how to make our model more friendly and credible for clinicians' understanding.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported by the National Nature Science Foundation of China (No. 61872225), Introduction and Cultivation Program for Young Creative Talents in Colleges and Universities of Shandong Province (No. 2019-173), the Natural Science Foundation of Shandong Province (No. ZR2020KF013, No. ZR2020ZD44, No. ZR2019ZD04, No. ZR2020QF043, No. ZR2023QF094) and the Special fund of Qilu Health and Health Leading Talents Training Project.

    The authors declare there is no conflict of interest.



    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).