Precise segmentation of liver tumors from computed tomography (CT) scans is a prerequisite step in various clinical applications. Multi-phase CT imaging enhances tumor characterization, thereby assisting radiologists in accurate identification. However, existing automatic liver tumor segmentation models did not fully exploit multi-phase information and lacked the capability to capture global information. In this study, we developed a pioneering multi-phase feature interaction Transformer network (MI-TransSeg) for accurate liver tumor segmentation and a subsequent microvascular invasion (MVI) assessment in contrast-enhanced CT images. In the proposed network, an efficient multi-phase features interaction module was introduced to enable bi-directional feature interaction among multiple phases, thus maximally exploiting the available multi-phase information. To enhance the model's capability to extract global information, a hierarchical transformer-based encoder and decoder architecture was designed. Importantly, we devised a multi-resolution scales feature aggregation strategy (MSFA) to optimize the parameters and performance of the proposed model. Subsequent to segmentation, the liver tumor masks generated by MI-TransSeg were applied to extract radiomic features for the clinical applications of the MVI assessment. With Institutional Review Board (IRB) approval, a clinical multi-phase contrast-enhanced CT abdominal dataset was collected that included 164 patients with liver tumors. The experimental results demonstrated that the proposed MI-TransSeg was superior to various state-of-the-art methods. Additionally, we found that the tumor mask predicted by our method showed promising potential in the assessment of microvascular invasion. In conclusion, MI-TransSeg presents an innovative paradigm for the segmentation of complex liver tumors, thus underscoring the significance of multi-phase CT data exploitation. The proposed MI-TransSeg network has the potential to assist radiologists in diagnosing liver tumors and assessing microvascular invasion.
Citation: Wencong Zhang, Yuxi Tao, Zhanyao Huang, Yue Li, Yingjia Chen, Tengfei Song, Xiangyuan Ma, Yaqin Zhang. Multi-phase features interaction transformer network for liver tumor segmentation and microvascular invasion assessment in contrast-enhanced CT[J]. Mathematical Biosciences and Engineering, 2024, 21(4): 5735-5761. doi: 10.3934/mbe.2024253
The role of synapse elimination, observed in humans [1,2,3] and in animals [4,5,6,7,8,9,10], has been generally perceived as the removal, or "pruning", of redundant or weak synapses for the improvement of neural circuit performance. Although structural circuit modification has been suggested in general terms as a means for long-term memory [11], the specific function and mechanism of synapse elimination have remained essentially unclear. Reports that focal blockade of neurotransmission is more effective in synapse elimination than a whole junction blockade [11], and that synapse elimination precedes axon dismantling [12], have been challenged by claims that synapse elimination is a consequence of whole axon removal [13]. Dynamic firing effects of neural interaction under synapse elimination have been experimentally observed, noting that "active synaptic sites can destabilize inactive synapses in their vicinity" [11], although such effects may involve synapse silencing and reactivation [14] rather than synapse elimination. While early studies have associated synapse elimination with early development [12,15] and childhood [16], others have extended it to puberty [17], and, depending on brain regions, to age 12 for frontal and parietal lobes, to age 16 for the temporal lobe, and to age 20 for the occipital lobe [18]. Yet, grey matter [19] and cognition [20] studies, and persistent evidence of molecular processes involved in synaptic elimination throughout life [21], have suggested its relevance all the way to senescence.
Here, we study the mechanism and the implications of synapse elimination on neural circuit formation, modification and function in a model-based context. Noting that, despite certain age-related differences in the time constants associated with membrane and synaptic plasticity, the firing rate model produces essentially the same dynamic modes [22], we suggest that the model corresponding to maturity and aging represents a viable platform for analyzing the effects of synapse elimination throughout life. We further suggest that synapse weakening and eventual elimination are a consequence of asynchrony in the firing of interacting neurons and circuits. We proceed to show that while whole axon elimination removes asynchronous firing altogether, synapse elimination facilitates interference-free asynchrony between individual neurons and between neural circuit firing, maintaining internal circuit synchrony. This allows for cortical segregation into neurons and circuits having different characteristic firing rate modes corresponding to different cortical functions.
Both the mathematical analysis and the simulation of synapse elimination effects will require some specification of the models involved in order to demonstrate the functional implications. For a single isolated neuron, the firing rate model, evolving from the integrate-and-fire [23] and the conductance-based membrane current [24] paradigms, through cortical averaging [25], neuronal decoding [26] and spiking rate [27] models, is captured, in essence, by the discrete iteration map [28,22].
$$\upsilon(k) = \alpha\,\upsilon(k-1) + \beta\, f\big(\omega(k)\,\upsilon(k-1) + u\big) \tag{1}$$
where υ(k) is the firing rate, α = exp(−1/τm) and β = 1 − α, with τm the membrane time constant, ω(k) the self-feedback synaptic weight, u the external membrane activation and
$$f(x) = \begin{cases} x & \text{if } x \ge 0 \\ 0 & \text{if } x < 0 \end{cases} \tag{2}$$
is the conductance-based rectification kernel [29,30,31].
The Bienenstock-Cooper-Munro plasticity rule [32], enhanced by stabilizing modifications [33,34], is a widely recognized, biologically plausible, mathematical representation of the Hebbian learning paradigm [35], taking the discrete form
$$\omega(k) = \varepsilon\,\omega(k-1) + \gamma\,\big(\upsilon(k-1) - \theta(k-1)\big)\,\upsilon^{2}(k-1) \tag{3}$$
where
$$\theta(k) = \delta \sum_{i=0}^{N} \exp(-i/\tau_{\theta})\,\upsilon^{2}(k-i) \tag{4}$$
with ε = exp(−1/τω), γ = 1 − ε and δ = 1/τθ, where τω and τθ are the synaptic and the threshold time constants, respectively.
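As a concrete reference, the following is a minimal Python sketch (assuming NumPy) of the iteration of Eqs. 1–4; the function name, argument names and default run length are illustrative choices, while the initial conditions υ(0) = 1 and ω(0) = 0 follow the convergence procedure described later in this section.

```python
import numpy as np

def simulate_neuron(u, tau_m, tau_w, tau_theta, N=100, steps=2000, v0=1.0):
    """Sketch of the single-neuron firing-rate model with plasticity, Eqs. 1-4."""
    alpha = np.exp(-1.0 / tau_m)
    beta = 1.0 - alpha
    eps = np.exp(-1.0 / tau_w)
    gamma = 1.0 - eps
    delta = 1.0 / tau_theta
    decay = np.exp(-np.arange(N + 1) / tau_theta)    # exp(-i/tau_theta) kernel of Eq. 4

    v = np.zeros(steps)
    w = np.zeros(steps)           # omega(0) = 0, as in the convergence procedure below
    v[0] = v0                     # upsilon(0) = 1
    for k in range(1, steps):
        past = v[max(0, k - 1 - N):k][::-1]           # v(k-1), v(k-2), ..., v(k-1-N)
        theta = delta * np.sum(decay[:len(past)] * past**2)              # Eq. 4 at k-1
        w[k] = eps * w[k - 1] + gamma * (v[k - 1] - theta) * v[k - 1]**2  # Eq. 3
        v[k] = alpha * v[k - 1] + beta * max(w[k] * v[k - 1] + u, 0.0)    # Eqs. 1-2
    return v, w
```

For example, simulate_neuron(u=1, tau_m=2, tau_w=5, tau_theta=1) corresponds to the fixed-point case (c) listed further below.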
Different ranges of the time constants, τm, τω and τθ, hypothesized to correspond to different developmental stages (smaller values corresponding to early development), have been shown to produce somewhat different maps of firing rate, possessing, however, essentially similar global attractors [22]. In the critical period, immediately following birth and extending into early childhood, the plasticity time constants τω and τθ are assumed to have near-zero values, while the membrane time constant, τm, is assumed to be higher, producing, by Eq. 1, the map depicted in Figure 1a. The inhibition onset and termination thresholds are h and g, respectively, where the slopes of the map change sign. The dynamic behavior induced by the map depends on the slope at the point p, where the map intersects the diagonal υ(k)=υ(k−1).
For our present purposes, we continue with the case corresponding to synaptic maturity and rigidity (Figure 1b), so as to stress the relevance of circuit segregation beyond early development. For this case, ω(k) converges to a fixed value ω [34], yielding the model [22]
$$\upsilon(k) = \begin{cases} f_1(\upsilon(k-1)) = \lambda_1\,\upsilon(k-1) & \text{for } \beta\omega\,\upsilon(k-1) + \beta u \le 0 \\[2pt] f_2(\upsilon(k-1)) = \lambda_2\,\upsilon(k-1) + \beta u & \text{for } \beta\omega\,\upsilon(k-1) + \beta u > 0 \end{cases} \tag{5}$$
where
$$\lambda_1 = \alpha \tag{6}$$
and
$$\lambda_2 = \alpha + \beta\omega \tag{7}$$
and the inhibition onset and offset points become
$$h = \beta u \tag{8}$$
and
$$g = \frac{\beta u}{\lambda_1 - \lambda_2} \tag{9}$$
while the fixed point representing the intersection of f2 with the diagonal υ(k)=υ(k−1) is
$$p = \frac{\beta u}{1 - \lambda_2} \tag{10}$$
The parameters
$$c_1 = 2\lambda_1\lambda_2 + 1 + \sqrt{1 + 4\lambda_1^{2}} \tag{11}$$
and
$$c_2 = \lambda_1\lambda_2 + 1 \tag{12}$$
define transition points from one global attractor type of the map to another [28].
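Since Eqs. 6–12 are direct functions of u, τm and the converged weight ω, they can be collected in a small helper; this is a sketch, with an illustrative name and return layout.

```python
import numpy as np

def map_parameters(u, tau_m, omega):
    """Parameters of the converged piecewise-linear map, Eqs. 6-12."""
    alpha = np.exp(-1.0 / tau_m)
    beta = 1.0 - alpha
    lam1 = alpha                                     # Eq. 6
    lam2 = alpha + beta * omega                      # Eq. 7
    return {
        "lambda1": lam1,
        "lambda2": lam2,
        "h": beta * u,                                                  # onset, Eq. 8
        "g": beta * u / (lam1 - lam2) if lam1 != lam2 else np.inf,      # offset, Eq. 9
        "p": beta * u / (1.0 - lam2) if lam2 != 1.0 else np.inf,        # fixed point, Eq. 10
        "c1": 2 * lam1 * lam2 + 1 + np.sqrt(1 + 4 * lam1**2),           # Eq. 11
        "c2": lam1 * lam2 + 1,                                          # Eq. 12
    }
```

For instance, map_parameters(10, 2, -13.572) gives λ2 ≈ −4.73, c1 ≈ −3.17 and c2 ≈ −1.87, matching case (a) quoted below.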
Scalar global attractors are graphically described by cobweb diagrams [36,37,38,39], which are initiated at some value, υ(0), on the map, then connected horizontally to the diagonal υ(k)=υ(k−1), then connected vertically to the map, and so on. The cobweb diagrams depicted by the corresponding subplots of Figure 2 represent different global attractor types, which satisfy the following characteristic conditions:
(a) Chaotic attractor. For u>0, λ2<−1, c1≤0, and c2<0, yielding pυ≤qυ (where q is the point obtained from the bend point g by a 4-step cobweb sequence, and where pυ and qυ are the vertical coordinates of the corresponding points on the map in Figure 2a), the attractor is represented in Figure 2a by the interval ab on the diagonal υ(k)=υ(k−1), with a and b created by a cobweb sequence initiating at the bend point g, which defines the boundaries of the attractor, as shown in the figure. An orbit of period three, a1→a2→a3→a1, rendering Li-Yorke chaos [40], is defined by a1 = f³(a1), a1 ≠ f(a1), where f is the map of Eq. 5 and f³(x) = f(f(f(x))). Starting with a1 > g, we obtain a2 = λ1a1, a3 = λ1²a1, a1 = λ2λ1²a1 + βu, yielding a1 = βu/(1 − λ2λ1²).
(b) Largely-oscillatory attractor. For u>0, λ2<−1, c1>0, and c2<0, yielding qυ<pυ, the attractor, represented in Figure 2b by the two intervals ab and cd, separated by the repelling interval bc, is largely oscillatory. Within the attractor domain, defined by a cobweb initiating at the bend-point g, trajectories alternate between the two intervals ab and cd. As implied by the cobweb diagram, depending on the circuit parameters, this alternation may, but need not necessarily, repeat precisely the same points, which may then represent oscillatory or cyclically multiplexed dynamics (the two intervals ab and cd then reduce into two or four points, respectively [22,28]).
(c) Fixed-point (constant) attractor. For u>0 and −1<λ2≤1, we have a fixed-point attractor at p. For −1<λ2≤0, the fixed point will be approached by alternate convergence (an increasing υ(k) step followed by a decreasing υ(k+1) step and vice versa). For 0<λ2≤λ1, convergence will be monotone and bimodal (according to f1 far from p and according to f2 near p), as illustrated by Figure 2c; for λ1<λ2≤1, it will be monotone and unimodal (according to f2).
(d) Silent attractor. For ω=0, the attractor is at the origin, as depicted in Figure 2d.
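The conditions (a)–(d) above, together with the cobweb construction of the preceding paragraph, translate directly into code; the sketch below uses illustrative names, and parameter combinations not covered by the stated conditions are simply left unclassified.

```python
import numpy as np

def classify_attractor(u, lam2, c1, c2, omega):
    """Attractor type of the converged map, per the conditions (a)-(d) above."""
    if omega == 0:
        return "silent"                                            # case (d)
    if u > 0 and -1 < lam2 <= 1:
        return "fixed-point"                                       # case (c)
    if u > 0 and lam2 < -1 and c2 < 0:
        return "chaotic" if c1 <= 0 else "largely-oscillatory"     # cases (a), (b)
    return "unclassified"

def map_orbit(v0, u, tau_m, omega, steps=200):
    """Iterates Eq. 5; consecutive points trace the cobweb diagram."""
    alpha = np.exp(-1.0 / tau_m)
    beta = 1.0 - alpha
    lam1, lam2 = alpha, alpha + beta * omega
    orbit = [v0]
    for _ in range(steps):
        v = orbit[-1]
        orbit.append(lam1 * v if omega * v + u <= 0 else lam2 * v + beta * u)  # f1 / f2
    return orbit
```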
Specifically, the cobweb diagrams depicted in Figure 2 correspond to the following parameter values:
(a) u=10, τm=2, τω=10000, τθ=0.1, yielding
λ1=0.6065, λ2=−4.7336, c1=−3.1701, c2=−1.8711, ω=−13.5720
(b) u=10, τm=2, τω=10000, τθ=1, yielding
λ1=0.6065, λ2=−1.8160, c1=0.3692, c2=−0.1015, ω=−6.1569
(c) u=1, τm=2, τω=5, τθ=1, yielding λ1=0.6065, λ2=0.5308, ω=−0.1925
(d) u=−1, τm=2, τω=5, τθ=1, yielding λ1=0.6065, λ2=0.6065, ω=0
For each of the cases, the steady-state value of the synaptic weight, ω, was calculated by driving Eqs. 3 and 4, with ω(0)=0 and υ(0)=1, to convergence (practically, this was achieved for N=100), and the corresponding values of λ1,λ2,c1 and c2 were calculated by Eqs. 6, 7, 11 and 12 respectively. As can be verified, the conditions stated above for the attractor types (a) chaotic, (b) largely-oscillatory, (c) fixed-point and (d) silent, are satisfied, respectively, by the parameter values obtained.
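The procedure just described can be scripted by combining the simulate_neuron and map_parameters sketches above; the run length is an illustrative choice, and since τω = 10000 makes the weight drift very slowly, the printed figures should only be expected to approach the quoted values.

```python
cases = {                     # (u, tau_m, tau_w, tau_theta) of subplots (a)-(d)
    "a": (10, 2, 10000, 0.1),
    "b": (10, 2, 10000, 1),
    "c": (1, 2, 5, 1),
    "d": (-1, 2, 5, 1),
}
for label, (u, tau_m, tau_w, tau_theta) in cases.items():
    v, w = simulate_neuron(u, tau_m, tau_w, tau_theta, N=100, steps=60000)
    pars = map_parameters(u, tau_m, w[-1])            # w[-1]: converged synaptic weight
    print(label, round(w[-1], 4), round(pars["lambda2"], 4),
          round(pars["c1"], 4), round(pars["c2"], 4))
```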
For a circuit of n neurons, firing rate and plasticity would be governed by the equations
$$\upsilon_i(k) = \alpha_i\,\upsilon_i(k-1) + \beta_i f\big(\omega_i^{T}(k)\,\upsilon(k-1) + u_i\big) \tag{13}$$
$$\omega_i(k) = \varepsilon_i\,\omega_i(k-1) + \gamma_i\big[\upsilon_i(k-1) - \theta_i(k-1)\big]\,\upsilon^{2}(k-1) \tag{14}$$
$$\theta_i(k) = \delta_i \sum_{j=0}^{N} \exp(-j/\tau_{\theta_i})\,\upsilon_i^{2}(k-j) \tag{15}$$
where i = 1, 2, ..., n, υ(k) is the vector of neuronal firing rates, ωi(k) is the vector of input synaptic weights (including self-feedback) corresponding to the i-th neuron, and υ² is the vector whose components are the squares of the components of υ. We have used N = 100, yielding convergence of the weights ωi,j, i, j = 1, 2, …, n, to constant values.
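A vectorized sketch of Eqs. 13–15 follows, restricted for brevity to identical time constants across all neurons; the function name and the masking convention used to mark eliminated synapses are illustrative choices.

```python
import numpy as np

def simulate_circuit(W_mask, u, tau_m, tau_w, tau_theta, N=100, steps=3000):
    """Sketch of the n-neuron firing-rate and plasticity model, Eqs. 13-15.
    W_mask[i, j] = 1 keeps the synapse from neuron j onto neuron i; 0 marks
    an eliminated synapse (its weight is held at zero)."""
    n = W_mask.shape[0]
    alpha = np.exp(-1.0 / tau_m); beta = 1.0 - alpha
    eps = np.exp(-1.0 / tau_w);   gamma = 1.0 - eps
    delta = 1.0 / tau_theta
    decay = np.exp(-np.arange(N + 1) / tau_theta)

    V = np.zeros((steps, n)); V[0] = 1.0          # upsilon(0) = 1 for all neurons
    W = np.zeros((n, n))                          # W[i, j]: synapse from j onto i
    for k in range(1, steps):
        window = V[max(0, k - 1 - N):k][::-1]                     # v(k-1), v(k-2), ...
        theta = delta * (decay[:len(window), None] * window**2).sum(axis=0)   # Eq. 15
        W = eps * W + gamma * np.outer(V[k - 1] - theta, V[k - 1]**2)          # Eq. 14
        W *= W_mask                               # eliminated synapses stay at zero
        V[k] = alpha * V[k - 1] + beta * np.maximum(W @ V[k - 1] + u, 0.0)     # Eq. 13
    return V, W
```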
Consider first two neurons, one having the parameters of case (c) of the previous section, hence, a fixed-point global attractor of firing rate, and the other having the parameters of case (b) of the previous section, hence, a largely oscillatory global attractor of firing rate. Figure 3a shows the firing rate sequences of the two neurons, fully connected into a circuit. Eliminating the axon of the first neuron (Figure 3b) reveals the characteristic firing mode of the second neuron alone. Eliminating the axon of the second neuron (Figure 3c) reveals the characteristic firing mode of the first neuron alone. It can be clearly seen by comparison that full connectivity, resulting in the firing sequences displayed by Figure 3a, causes mutual interference between the two neurons. Inter-neuron asynchrony is assumed to imply, by the Hebbian paradigm, weakening of the corresponding synapses [35].
Eventual elimination of the receiving synapses of both neurons, represented by zero values of the corresponding synaptic weights, allows each of the neurons to display its own characteristic firing-rate mode without interference, as depicted in Figure 3d.
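The difference between axon elimination (Figures 3b,c) and elimination of the receiving synapses (Figure 3d) can be expressed as connectivity masks for the simulate_circuit sketch above, where entry [i, j] keeps or removes the synapse from neuron j onto neuron i. For brevity, the example gives both neurons the same time constants, so it illustrates the connectivity effect rather than reproducing Figure 3 exactly; whether a removed axon also eliminates the neuron's self-feedback is left here as a modeling choice.

```python
import numpy as np

full       = np.ones((2, 2))             # Figure 3a: fully connected pair
no_axon_1  = np.array([[1., 1.],         # Figure 3b: axon of neuron 1 removed, so its
                       [0., 1.]])        #   outgoing synapse onto neuron 2 is gone
no_axon_2  = np.array([[1., 0.],         # Figure 3c: axon of neuron 2 removed
                       [1., 1.]])
segregated = np.eye(2)                   # Figure 3d: both cross synapses eliminated,
                                         #   self-feedback retained

u = np.array([1.0, 10.0])                # different activations for the two neurons
for mask in (full, no_axon_1, no_axon_2, segregated):
    V, _ = simulate_circuit(mask, u=u, tau_m=2, tau_w=5, tau_theta=1)
    print(V[-3:].round(3))               # tail of the firing-rate trajectories
```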
Next, consider a fully connected circuit of n identical neurons receiving identical activation. The neurons will fire in synchrony. However, for n > 1, the neuronal firing mode will not be the same as that of an individual isolated neuron with the same properties. This can be seen by noting that, for each of the synchronous circuit neurons, Eq. 13 will yield the firing rate model
$$\upsilon(k) = \alpha\,\upsilon(k-1) + \beta f\big(n\,\omega(k)\,\upsilon(k-1) + u\big) \tag{16}$$
which is different from Eq. 1. As N→∞, Eqs. 14 and 15 will take ω(k) to its constant limit value ωf [34], yielding
$$\upsilon(k) = \alpha\,\upsilon(k-1) + \beta f\big(n\,\omega\,\upsilon(k-1) + u\big) \tag{17}$$
which is the firing rate model of an isolated individual neuron with feedback synaptic weight nω. This synchrony equivalence principle allows us to extend our results obtained for the neuronal firing rate modes and for asynchronous neuron segregation to synchronous circuits. Clearly, Eq. 17 will produce a different mode of firing rate dynamics for every circuit size n. This is illustrated by Figure 4, where the four subfigures depict attractors of fully connected circuits having identical neurons, but different circuit sizes. It can be seen that different circuit sizes result in different firing rate modes. Specifically, the circuits represented in Figure 4 obey the model of Eqs. 13–15, with N=100 (taking ω(k) to its constant limit ω). For circuits of fully connected identical neurons, having the parameter values u=4, τm=2, τω=300, τθ=0.1 and circuit size values 10, 5, 2 and 1, we obtain the modal parameter and condition values specified below:
(a) n=10, yielding ω=−0.7870, λ1=0.6065, λ2=−2.4901, c1=−0.4485, c2=−0.5103
(b) n=5, yielding ω=−1.3128, λ1=0.6065, λ2=−1.9762, c1=0.1748, c2=−0.1987
(c) n=2, yielding ω=−2.5628, λ1=0.6065, λ2=−1.4103, c1=0.8614, c2=0.1446
(d) n=1, yielding ω=−3.8993, λ1=0.6065, λ2=−0.9277, c1=1.4467, c2=0.4373
An examination of the conditions for the global attractor types specified in Section 2 shows that the above cases represent (a) chaotic, (b) largely-oscillatory, (c) oscillatory and (d) fixed-point global attractors, as confirmed by Figure 4. As, except for the circuit size (i.e., the number of neurons), the neuronal parameters u, τm, τω, τθ are identical for all neurons, the decreasing circuit size in cases (a–d) above may be viewed as representing the segregation of a synchronous circuit into smaller synchronous sub-circuits.
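These figures can be checked against the synchrony-equivalence relation: by Eq. 17, a synchronous circuit of n identical neurons behaves as a single neuron with feedback weight nω, so λ2 = α + βnω, with c1 and c2 following from Eqs. 11 and 12. The short sketch below reproduces the quoted values to within the rounding of ω.

```python
import numpy as np

alpha = np.exp(-1.0 / 2.0)       # tau_m = 2, so lambda1 = alpha ~= 0.6065
beta = 1.0 - alpha
for n, omega in [(10, -0.7870), (5, -1.3128), (2, -2.5628), (1, -3.8993)]:
    lam2 = alpha + beta * n * omega                          # Eq. 7 with weight n*omega
    c1 = 2 * alpha * lam2 + 1 + np.sqrt(1 + 4 * alpha**2)    # Eq. 11
    c2 = alpha * lam2 + 1                                    # Eq. 12
    print(n, round(lam2, 4), round(c1, 4), round(c2, 4))
```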
Finally, in order to illustrate the effects of synchronous circuit segregation into synchronous sub-circuits, consider a circuit of three identical neurons with the parameters u=4, τm=2, τω=300, τθ=0.1 (identical to those employed in Figure 4), starting with full connectivity and undergoing modification and segregation by synapse elimination (manifested by setting the corresponding synaptic weight to zero). The changes in circuit connectivity are illustrated in Figure 5, while the resulting changes in the neuronal firing modes, simulated by Eqs. 13–15, are displayed in Figure 6.
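Reusing the simulate_circuit sketch from above, the unambiguously specified connectivity stages of Figure 5 can be written as masks; the exact synapses removed in the partially connected stage (b) are given only graphically in Figure 5, so that stage is omitted here.

```python
import numpy as np

stage_a = np.ones((3, 3))                 # (a) fully connected three-neuron circuit
stage_c = np.array([[1., 1., 0.],         # (c) neurons 1-2 remain a synchronous
                    [1., 1., 0.],         #     sub-circuit, neuron 3 is isolated
                    [0., 0., 1.]])
stage_d = np.eye(3)                       # (d) three isolated neurons, self-feedback kept

for mask in (stage_a, stage_c, stage_d):
    V, _ = simulate_circuit(mask, u=4.0, tau_m=2.0, tau_w=300.0, tau_theta=0.1)
    print(V[-4:].round(3))                # tail of the firing-rate trajectories
```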
It can be seen in Figure 6 that the fully connected circuit (a) fires synchronously in a largely oscillatory multiplexed mode, whereas the transition to partial connectivity in the modified circuit (b) produces a change of the neuronal firing rate modes, which, while being similarly oscillatory, are unequal in amplitude. The circuit segregation into a two-neuron circuit and one individual neuron (c) results in synchronous oscillation and constant (fixed point) firing modes, respectively, while the segregation into three isolated neurons (d) results in each producing a constant firing rate. The firing modes are in agreement with those predicted by Figure 4 (the transition from a fully connected two-neuron circuit in case 4 (c) to a fully connected three-neuron circuit in case 5 (a) has resulted in the mode changing from oscillatory to largely oscillatory, multiplexing two oscillatory modes).
We have shown that, given the internal neuronal properties, the circuit connectivity structure defines the circuit firing mode as well. It is therefore justified to view circuit connectivity not only as a means of information representation, but also as a manifestation of the function to be performed. The permanence of synapse elimination, as opposed to synapse silencing, makes it particularly relevant to long-term memory and to life-long functional proficiency. Synapse elimination results in circuit modification. It can segregate a synchronous circuit into smaller synchronous sub-circuits, isolated against mutual asynchronous interference. Conversely, the weakening of synapses, eventually resulting in their elimination, may be caused by asynchronous interference between neurons and synapses, as suggested by the Hebbian paradigm. Circuits of identical neurons, but of different sizes, fire in different firing rate modes. While we have focused on the firing rate and plasticity dynamics corresponding to maturity, the essential persistence of the map and the corresponding global attractor code of firing rate through different developmental stages suggests the life-long relevance of synapse elimination to circuit formation, modification and function.
This study was supported by the Technion's Roy Matas/Winnipeg Chair in Biomedical Engineering.
The author (Y. Baram) declares no conflict of interest.