
Pituitary adenoma is a common neuroendocrine neoplasm, and most of its MR images are characterized by blurred edges, high noise and similarity to the surrounding normal tissue. Therefore, it is extremely difficult to accurately locate and outline pituitary adenoma lesions. To address these limitations, we design a novel deep learning framework for pituitary adenoma MRI image segmentation. Under the framework of U-Net, a new cross-layer connection is introduced to capture richer multi-scale features and contextual information. At the same time, a full-scale skip structure makes reasonable use of the information obtained by the different layers. In addition, an improved inception-dense block is designed to replace the classical convolution layer, which enlarges the effective receptive field and increases the depth of our network. Finally, a novel loss function based on binary cross-entropy and Jaccard losses is utilized to alleviate the problems of small sample size and unbalanced data. The sample data were collected from 30 patients in Quzhou People's Hospital, with a total of 500 lesion images. Experimental results show that although the patient sample is small, the proposed method performs better on pituitary adenoma images than existing algorithms, with Dice, Intersection over Union (IoU), Matthews correlation coefficient (Mcc) and precision reaching 88.87%, 80.67%, 88.91% and 97.63%, respectively.
Citation: Xiaoliang Jiang, Junjian Xiao, Qile Zhang, Lihui Wang, Jinyun Jiang, Kun Lan. Improved U-Net based on cross-layer connection for pituitary adenoma MRI image segmentation[J]. Mathematical Biosciences and Engineering, 2023, 20(1): 34-51. doi: 10.3934/mbe.2023003
Transparent photoelastic materials represent an attractive alternative for the experimental study of the stress and strain distributions induced in solids by loads. When subjected to loads, these materials exhibit the double refraction phenomenon, or birefringence, changing the polarization state of the light transmitted through the solid, which can be used to analyze the stress distribution [1]. The effect of double refraction, first described by Bartholinus [2,3,4] and related to the stress state by Brewster in the early 19th century [5], developed throughout the 20th century into a non-destructive set of techniques and methods that associate the study of material stresses with optics: photoelasticity [6,7,8]. Since the pioneering work of Coker and Filon [9,10,11], photoelasticity has become a fundamental basis for determining stress and strain distributions in photoelastic materials. Thus, great interest was generated in several fields, such as Engineering and Odontology [12,13,14], which validated and contributed to developing the theoretical finite element method [15,16].
Despite these advances, most studies are qualitative or only indirectly quantitative [13,14] because of the difficulties in obtaining direct optical information. Improvements in qualitative data and in quantitative analysis methods are needed so that non-destructive, fast, and reliable optical methods become a reference for determining stress distributions in materials. Although holographic techniques have advanced significantly in recent years, comparatively little work has been devoted to analyzing stress distributions in photoelasticity using holography. The dynamics of holography allow more precise results, as they are based on optical properties such as intensity, phase, and refractive index, which are provided directly or almost directly, thus offering a promising perspective for a more quantitative treatment of problems involving elasticity mechanics [17].
We present an alternative approach to determine the stress distribution profile through a non-destructive procedure based on digital holography (DH) [18,19], which allows us to obtain quantitative intensity and phase information from the light transmitted through a photoelastic material. DH produces remarkably accurate results when combined with appropriate statistical processing of the optical data, facilitating the quantitative treatment of the specific problem as outlined in the proposed methods [20]. An off-axis holographic setup was used to obtain two cross-holograms with two orthogonally polarized reference waves and a birefringent system with photoelastic samples under static loads [21,22,23,24,25]. After digital reconstruction with DH, the recorded data yield the phase differences used to calculate the distributions of elastic stresses. The method was validated against the finite element method and RGB (red, green, and blue) photoelasticity [14,16,26,27].
Four standard rectangular blocks, composed of mixtures of epoxy resin solutions, were prepared according to the traditional procedures [28] of the photoelastic technique, constituting the samples used in this work. The preparation involved two stages: making the silicone molds, by curing a liquid solution with catalyst, and making the photoelastic samples, by curing a liquid solution of epoxy resin and hardener in the silicone molds. Details are presented in [17]. For the determination of the mechanical and holographic parameters, two samples with different thicknesses were made, one more flexible and one less flexible. Two other samples with different thicknesses, one more and one less flexible, were made to determine the stress distributions; details of the procedures are also presented in [17]. The more and less flexible samples were intended to help verify the order of magnitude of the stress-optical coefficient (C) and to provide a wider range of comparison with the photoelastic methodology.
The utilized holographic technique is shown in Figure 1.
A laser light source (1) was used to generate three independent waves: one object wave (OW), with its polarization direction at 45° with respect to two orthogonal reference waves, one polarized at 0° (W0°) and the other at 90° (W90°). Two distinct holograms were produced from the resulting interference patterns among OW, W0°, and W90°, propagating at different angles to the digital camera, as shown in Figure 1: θ(0°) between OW and W0°, and θ(90°) between OW and W90°, with the angles limited by the Nyquist theorem [29,30,31]. Two sets of holograms were recorded from each sample, for compression and decompression in the birefringent system. Compression occurs by the progressive addition of load on the samples, and decompression by the progressive removal of these loads.
The photoelastic images were obtained by blocking both reference waves, removing the wave splitter (WS) near the digital camera, and exchanging the polarizer of the object wave, P45°, for two polarizers with orthogonal polarizations, one before and one after the photoelastic sample. Figure 2 presents the experimental configuration used in photoelasticity.
The optical information obtained through photoelasticity is related to the difference between the stresses in the components longitudinal ($\sigma_{\parallel}$) and transverse ($\sigma_{\perp}$) to the applied load, as given by the stress-optic law (Eq 1) [6,17,28]:
$$ n_{\parallel} - n_{\perp} = C\,(\sigma_{\parallel} - \sigma_{\perp}) \qquad (1) $$
where $C$ is the stress-optical coefficient, and $(n_{\parallel} - n_{\perp})$ is the difference between the refractive indexes in the components longitudinal ($n_{\parallel}$) and transverse ($n_{\perp}$) to the applied load. For a material with thickness $e$, the refractive index difference is also associated with the phase difference $\Delta\phi$, so Eq 1 can be rewritten as Eq 2:
$$ (\sigma_{\parallel} - \sigma_{\perp}) = \frac{f_{\sigma}\,N}{e} \qquad (2) $$
where $N = \Delta\phi/2\pi$ is defined as the relative retardation, $f_{\sigma} = \lambda/C$ is the fringe value, which indicates the degree of rigidity of the material, and $\lambda$ is the wavelength of the light source. Using the stress-strain ($\sigma$–$\varepsilon$) matrix of the material in the plane stress state [26], the difference between the stresses in the orthogonal components, defined by Eq 3, is [17]:
$$ \begin{bmatrix} \sigma_{\perp} \\ \sigma_{\parallel} \end{bmatrix} = \frac{E}{1-\nu^{2}} \begin{bmatrix} 1 & \nu \\ \nu & 1 \end{bmatrix} \begin{bmatrix} \varepsilon_{\perp} \\ \varepsilon_{\parallel} \end{bmatrix} \;\Rightarrow\; (\sigma_{\parallel} - \sigma_{\perp}) = \frac{E}{1+\nu}\,(\varepsilon_{\parallel} - \varepsilon_{\perp}) \qquad (3) $$
where $E$ is the mechanical elasticity modulus and $\nu$ is Poisson's coefficient. Therefore, comparing Eqs 2 and 3, with $N = (\varepsilon_{\parallel} - \varepsilon_{\perp})$, the material fringe value can be determined from its intrinsic properties through Eq 4 [17]:
$$ f_{\sigma} = \frac{e\,E}{1+\nu} = \frac{\lambda}{C(\lambda)} \qquad (4) $$
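As a numerical illustration of Eqs 1–4, the short Python sketch below converts a measured phase difference into a principal stress difference and the corresponding stress-optical coefficient. The function name and the sample values are ours, chosen only to be of the same order as the −F sample parameters reported in the Results; it is a minimal sketch, not part of the original analysis.

```python
import numpy as np

def stress_difference_photoelastic(delta_phi, e, E, nu, wavelength=632.8e-9):
    """Principal stress difference from a measured phase difference (Eqs 1-4).

    delta_phi : optical phase difference [rad]; e : thickness [m];
    E : mechanical elasticity modulus [Pa]; nu : Poisson's coefficient.
    Returns (sigma_par - sigma_perp) [Pa] and the stress-optical coefficient C [m^2/N].
    """
    N = delta_phi / (2.0 * np.pi)        # relative retardation (Eq 2)
    f_sigma = e * E / (1.0 + nu)          # material fringe value (Eq 4)
    C = wavelength / f_sigma              # stress-optical coefficient (Eq 4)
    sigma_diff = f_sigma * N / e          # stress-optic law in the form of Eq 2
    return sigma_diff, C

# Illustrative values of the same order as the -F sample (E ~ 33.5 MPa, nu ~ 0.38, e ~ 1.03 cm)
ds, C = stress_difference_photoelastic(delta_phi=0.3, e=1.03e-2, E=33.5e6, nu=0.38)
print(f"sigma_par - sigma_perp = {ds:.2e} Pa, C = {C:.2e} m^2/N")
```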
The holographic method was empirically inferred and correlated with the stress-optic law. The stress difference $(\sigma_{\parallel} - \sigma_{\perp})$ occurs in the plane normal to the direction of light propagation [26]. To limit the boundary conditions and obtain the desired equation, the shear stresses associated with angular displacements were not considered. In analogy with photoelasticity, the stress-strain matrix for the holographic parameters is given by Eq 5 [17]:
$$ \begin{bmatrix} \sigma_{\perp} \\ \sigma_{\parallel} \end{bmatrix} = \frac{\mathbb{E}}{a\,(1-\nu^{2})} \begin{bmatrix} 1 & \nu \\ \nu & 1 \end{bmatrix} \begin{bmatrix} a\,\varepsilon_{H\perp} \\ a\,\varepsilon_{H\parallel} \end{bmatrix} \;\Rightarrow\; (\sigma_{\parallel} - \sigma_{\perp})_{holographic} = \frac{\mathbb{E}}{1+\nu}\,\aleph \qquad (5) $$
considering, by Eq 6, that
$$ \mathbb{E} = a\,E \qquad (6) $$
where $\mathbb{E}$ is defined as the holographic elasticity modulus, and $a$ is a dimensionless constant that relates the holographic elasticity to the mechanical elasticity $E$. $\varepsilon_{H} = \frac{1}{a}\varepsilon$ is defined as the relative holographic deformation, and $\aleph = \frac{(\phi_{\parallel} - \phi_{\perp})_{holographic}}{2\pi} = (\varepsilon_{H\parallel} - \varepsilon_{H\perp})$ as the relative holographic retardation. Thus, the holographic dispersion can be written as Eq 7 [17]:
$$ H(\lambda) \equiv \frac{\lambda}{f_{H}} = \frac{a\,(1+\nu)}{e\,\mathbb{E}}\,\lambda \qquad (7) $$
where, by Eq 8,

$$ f_{H} = \frac{e\,\mathbb{E}}{a\,(1+\nu)} \qquad (8) $$

and $f_{H}$ and $f_{\sigma}$ are the fringe values obtained in holography and photoelasticity, respectively. The photoelastic fringe value is related to the wavelength of the light ($\lambda$) and the photoelastic stress-optical coefficient ($C$). Then, analogously to what occurs in photoelasticity, one obtains the holographic dispersion term $H(\lambda)$, an intrinsic property of the material whose value depends on the light wavelength and which results from the relation between the component differences of the refractive indexes, $(n_{\parallel} - n_{\perp})_{holographic}$, and the plane stresses, $(\sigma_{\parallel} - \sigma_{\perp})_{holographic}$. For a given wavelength, the stress-holographic law is given by Eq 9 [17]:
$$ (n_{\parallel} - n_{\perp})_{holographic} = H\,(\sigma_{\parallel} - \sigma_{\perp})_{holographic} \qquad (9) $$
We have experimentally confirmed these equations in [21,22].
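To make the analogy concrete, the sketch below evaluates the holographic dispersion of Eq 7 and applies the stress-holographic law of Eq 9. The function names are ours, and the numerical values are merely of the order of the −F sample parameters reported later; this is an illustration, not the original calculation.

```python
import numpy as np

def holographic_dispersion(a, nu, e, E_holo, wavelength=632.8e-9):
    """Holographic dispersion H(lambda) of Eq 7 (SI units: m, Pa; returns m^2/N)."""
    return a * (1.0 + nu) * wavelength / (e * E_holo)

def refractive_index_difference(stress_diff, H):
    """Stress-holographic law of Eq 9: (n_par - n_perp) = H * (sigma_par - sigma_perp)."""
    return H * stress_diff

# Illustrative values of the order of the -F sample parameters
H = holographic_dispersion(a=4.75e-2, nu=0.372, e=1.03e-2, E_holo=1.589e6)
dn = refractive_index_difference(stress_diff=0.5e6, H=H)   # 0.5 MPa stress difference
print(f"H = {H:.2e} m^2/N, n_par - n_perp = {dn:.2e}")
```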
The off-axis configuration separates the diffraction orders during the digital reconstruction, which was performed with the Fresnel transform method (FTM) [17,32,33]. The image field ($\psi_{mn}$) and phase ($\phi_{mn}$) were calculated using Eqs 10 and 11, and the corresponding maps were reconstructed.
$$ \psi_{mn} = \frac{e^{ikz}}{i\,z\lambda}\; \exp\!\left[\frac{ikz\lambda^{2}}{2}\left(\frac{n^{2}}{N^{2}\Delta\xi^{2}} + \frac{m^{2}}{M^{2}\Delta\eta^{2}}\right)\right] F\!\left[\Delta k_{\xi}, \Delta k_{\eta}\right] \qquad (10) $$
$$ \phi_{mn} = \arctan\!\left(\frac{\mathrm{Im}[\psi_{mn}]}{\mathrm{Re}[\psi_{mn}]}\right) \qquad (11) $$
where $F[\Delta k_{\xi}, \Delta k_{\eta}] = \mathcal{F}\!\left\{ I_{H}(\xi,\eta)\,\psi_{R}(\xi,\eta)\, e^{\frac{i\pi}{z\lambda}\left[(n\Delta\xi)^{2} + (m\Delta\eta)^{2}\right]} \right\}$ is the Fourier transform of the discretized field, $\Delta k_{\xi} = -\frac{k\lambda}{N\Delta\xi}$, $\Delta k_{\eta} = -\frac{k\lambda}{M\Delta\eta}$, $\Delta\xi = \frac{z\lambda}{N\Delta h}$, $\Delta\eta = \frac{z\lambda}{M\Delta v}$, and $\Delta h$ and $\Delta v$ are the horizontal and vertical pixel dimensions, respectively. All the phase maps were demodulated with the Volkov method [34].
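For readers who wish to reproduce this step numerically, the following is a minimal NumPy sketch of a discrete Fresnel-transform reconstruction with wrapped-phase extraction. It follows the standard off-axis FTM formulation; the function name, the chirp conventions and the sign choices are our assumptions rather than the authors' code, and the phase demodulation (Volkov method) is not included.

```python
import numpy as np

def fresnel_reconstruct(hologram, reference, z, wavelength, dh, dv):
    """Discrete Fresnel-transform reconstruction of an off-axis hologram (cf. Eqs 10-11).

    hologram  : 2-D recorded intensity I_H(xi, eta)
    reference : numerical reference wave psi_R sampled on the same grid
    z         : reconstruction distance [m]; dh, dv : camera pixel pitches [m]
    Returns the complex image field psi and its wrapped phase map phi.
    """
    M, N = hologram.shape
    k = 2.0 * np.pi / wavelength
    m = np.arange(M) - M // 2
    n = np.arange(N) - N // 2
    mm, nn = np.meshgrid(m, n, indexing="ij")

    # Quadratic chirp applied to the hologram in the camera plane
    chirp = np.exp(1j * np.pi / (z * wavelength) * ((nn * dh) ** 2 + (mm * dv) ** 2))
    spectrum = np.fft.fftshift(np.fft.fft2(hologram * reference * chirp))

    # Image-plane pixel sizes and quadratic phase prefactor (cf. Eq 10)
    d_xi, d_eta = z * wavelength / (N * dh), z * wavelength / (M * dv)
    prefactor = (np.exp(1j * k * z) / (1j * wavelength * z)
                 * np.exp(1j * np.pi * z * wavelength
                          * (nn ** 2 / (N * d_xi) ** 2 + mm ** 2 / (M * d_eta) ** 2)))
    psi = prefactor * spectrum
    phi = np.arctan2(psi.imag, psi.real)   # wrapped phase (Eq 11)
    return psi, phi
```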
The calibration of the setup followed the work of Colomb et al. [24] and was carried out using a quarter-wave plate as a sample. Two phase maps reconstructed by the FTM, one for each polarization, were subtracted to obtain maps of the phase difference as a function of the orientation angle of the quarter-wave plate. The general expression for the phase difference ($\Delta\phi$) as a function of the orientation angle $\delta$ of the quarter-wave plate is given by Eq 12:
$$ \Delta\phi = \arctan\!\left[\frac{\sin(2\delta)}{\cos^{2}(2\delta)}\right] \qquad (12) $$
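The theoretical calibration curve of Eq 12 can be generated directly and compared with the measured phase differences (cf. Figure 3); the lines below are a small illustrative sketch, with an arbitrary angular sampling.

```python
import numpy as np

# Theoretical calibration curve of Eq 12: phase difference versus the
# quarter-wave plate orientation angle delta.
delta = np.radians(np.arange(0.0, 180.0, 5.0))
# arctan2 is used so the points where cos^2(2*delta) = 0 (45 deg, 135 deg) are handled cleanly.
delta_phi = np.arctan2(np.sin(2.0 * delta), np.cos(2.0 * delta) ** 2)
```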
The demodulated phase maps for compression and decompression were obtained as follows. The selected area of the hologram (the rectangle on the hologram) was processed with the FTM to obtain the frequency spectrum [17]. From the chosen area of the frequency spectrum (the rectangle on the frequency spectrum), the modulated phase map was obtained, and the demodulated phase map was then obtained with the Volkov method [34]. The mean phase was determined from the phases of the pixels in the area selected from the demodulated phase map. To reduce the noise in each phase map, a region of the map with no object was chosen, and the mean phase value of this region was subtracted from the whole phase map [17].
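The offset-removal step can be written compactly; the helper below is a sketch with assumed argument names.

```python
import numpy as np

def remove_phase_offset(phase_map, empty_rows, empty_cols):
    """Subtract the mean phase of an object-free region from a demodulated phase map.

    empty_rows / empty_cols are slices selecting a region with no object,
    mirroring the noise-reduction step described in the text.
    """
    offset = phase_map[empty_rows, empty_cols].mean()
    return phase_map - offset
```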
From the vertical phase maps obtained for each applied load $\sigma_{i}$, the vertical holographic deformations $\varepsilon_{Hv-i}$ were calculated for both processes, compression and decompression. In the same way, the horizontal holographic deformations $\varepsilon_{Hh-i}$ were calculated from the phase values of the horizontal phase maps. The mean value $\langle\mathbb{E}\rangle$ was calculated from the individual values of $\mathbb{E}$ by fitting the linear function of Eq 13:
$$ \sigma_{i} = \langle\mathbb{E}\rangle\,\varepsilon_{Hv-i} \qquad (13) $$
The mean value of Poisson's coefficient, $\langle\nu\rangle$, was calculated from the individual values of $\nu$ by fitting the linear function of Eq 14:
$$ \varepsilon_{Hh-i} = \langle\nu\rangle\,\varepsilon_{Hv-i} \qquad (14) $$
In both cases, the least squares method [34] was used.
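A minimal least-squares sketch for Eqs 13 and 14 is shown below. It fits a proportional (no-intercept) line, matching the form in which the equations are written; the synthetic data are for illustration only and are not the measured values.

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope of y = s*x (no intercept) and its standard error.

    Used for Eq 13 (sigma_i = <E>*eps_Hv_i, slope = holographic modulus) and
    Eq 14 (eps_Hh_i = <nu>*eps_Hv_i, slope = Poisson's coefficient).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.sum(x * y) / np.sum(x * x)
    residuals = y - slope * x
    dof = max(len(x) - 1, 1)
    slope_err = np.sqrt(np.sum(residuals ** 2) / dof / np.sum(x * x))
    return slope, slope_err

# Synthetic example: stresses of 0.3-1.5 MPa against vertical holographic deformations
eps_Hv = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
sigma = 1.5e6 * eps_Hv + np.random.normal(0.0, 2.0e4, eps_Hv.size)
E_holo, E_err = fit_through_origin(eps_Hv, sigma)
print(f"<E> = {E_holo:.3e} +/- {E_err:.1e} Pa")
```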
For each horizontal line $j$, the phase differences $(\phi_{v} - \phi_{h})_{j}$ were obtained and, from them, the relative retardation $\aleph_{j} = (\phi_{v} - \phi_{h})_{j}/2\pi$ was calculated between the dark fringes. These results, together with the holographic parameters $\langle\mathbb{E}\rangle$ and $\langle\nu\rangle$, allowed us to find the stress differences $(\sigma_{v} - \sigma_{h})_{j\text{-}holographic}$ using Eq 5. Plotting $(\sigma_{v} - \sigma_{h})_{j\text{-}holographic}$ as a function of the pixel number produced the distributions of stress differences in the selected region.
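The line-by-line conversion from the two phase maps to stress-difference profiles can be sketched as follows; the array names and calling convention are assumptions made for illustration.

```python
import numpy as np

def stress_profiles(phi_v, phi_h, E_holo, nu_holo, rows):
    """Stress-difference profiles (sigma_v - sigma_h) along selected horizontal lines.

    phi_v, phi_h : demodulated phase maps for the two orthogonal polarizations.
    For each line j, the relative retardation aleph_j = (phi_v - phi_h)_j / (2*pi)
    is converted to a stress difference with Eq 5.
    """
    profiles = []
    for j in rows:
        aleph_j = (phi_v[j, :] - phi_h[j, :]) / (2.0 * np.pi)
        profiles.append(E_holo * aleph_j / (1.0 + nu_holo))
    return np.array(profiles)   # one profile [Pa] per selected line, versus pixel index
```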
The dimensionless constant $a$ of Eq 6 gives the relation between the holographic and mechanical elasticity. According to Eq 4, using the photoelastic images, the mechanical parameters $\langle E\rangle$ and $\langle\nu\rangle$, and the thicknesses $e$, it is possible to calculate the photoelastic fringe values $f_{\sigma}$ and the photoelastic dispersions $C(\lambda)$. Using Eq 8, the holographic parameters $\mathbb{E}$ and $\nu$, the thickness $e$, and the constant $a$, the holographic fringe values $f_{H}$ can be calculated from the phase maps. Using Eq 7 and the wavelength $\lambda$, the holographic dispersion $H(\lambda)$ can be calculated.
The mechanical parameters were measured with samples measuring 4.15 × 2.25 × 1.03 cm, and the stress distribution was tested in samples measuring 4.15 × 2.25 × 4.90 cm, both in holography and in photoelasticity.
Each technique was applied to only one block from each pair of standard blocks, which were prepared with identical flexibilities and labeled −F (low flexibility) and +F (high flexibility). Stresses were applied to the top surface of the sample blocks via a loading device, as illustrated in Figure 1.
The modified Mach-Zehnder interferometer was built with a He-Ne laser (632.8 nm CW, 20 mW, model 1135P, Uniphase). The photoelastic images and holograms were captured with a Thorlabs digital camera, model DCC1240C-HQ (color, CMOS sensor, 1280 × 1024 pixels, square pixel of 5.3 μm).
The calibration process of the holographic system generated an experimental distribution that, when compared with the theoretical curve in Figure 3, demonstrated the reliability of the system [24].
The continuous line represents the theoretical curve, Eq 12, and the circles the experimental distribution. The distribution of the points around the curve indicates that the polarization of the waves is correctly adjusted.
Two sets of holograms were recorded for each sample, for compression and decompression. The stresses, applied to the upper central face of the sample, ranged from 0.3 to 1.5 MPa. In holography, the mean phases were obtained from the statistics of ten selected areas in each demodulated phase map. The final mean was calculated using the values of compression and decompression.
The RGB photoelasticity method, in transmission mode, was combined with the finite element method to determine the distributions of the stress differences in the same regions selected for the holographic method. These results were used to evaluate the proposed method by comparing holography and photoelasticity through the stress-optic and stress-holographic laws, given by Eqs 1 and 9, respectively.
All functions were fitted using the least squares method [27]. A third-degree polynomial was fitted to the points, but in all cases the results showed that the relationships were linear.
Figure 4 presents the experimental values of $\sigma$ versus $\varepsilon_{v}$ for the +F sample under compression and decompression. This graphic is used to determine the mechanical modulus of elasticity.
Figure 5 presents the experimental values of $\sigma$ versus $\varepsilon_{Hv}$ for the +F sample under compression and decompression, used to obtain the holographic modulus of elasticity.
Similar graphs were made for the −F samples. Table 1 summarizes the holographic and mechanical moduli of elasticity. Within each method, the values are similar for the samples of different flexibilities, although the moduli obtained by the two methods (holographic and mechanical) differ. Considering the uncertainties, the values of $\langle a\rangle = \langle\mathbb{E}/E\rangle$ are practically equal for all samples of both flexibilities.
−F sample (modulus of elasticity) | +F sample (modulus of elasticity) | |
Holography (MPa) | 1.589 ± 0.032 | 1.320 ± 0.024 |
Mechanical elasticity (10 MPa) | 3.35 ± 0.16 | 2.86 ± 0.11 |
$\langle a\rangle = \langle\mathbb{E}/E\rangle$ (10⁻²) | 4.75 ± 0.26 | 4.62 ± 0.20
It was verified in all graphics, using polynomials up to the third degree, that the best-fit function was a first-degree polynomial (a linear relation), in agreement with Hooke's law of elasticity for the elastic regime.
Figure 6 presents the values of $\varepsilon_{h}$ (transverse) versus $\varepsilon_{v}$ (longitudinal), used to calculate the mechanical Poisson's coefficient under the compression and decompression processes, using the +F samples.
Figure 7 presents the values of $\varepsilon_{Hh}$ (transverse) versus $\varepsilon_{Hv}$ (longitudinal), used to determine the holographic Poisson's coefficient under the compression and decompression processes, using the +F samples made to determine the stress distribution.
Similar graphs were made for the −F samples. Table 2 summarizes the holographic and mechanical Poisson's coefficients. Considering the uncertainties, the Poisson's coefficients $\nu$ are practically equal for samples with the same flexibility.
−F sample [Poisson's coefficient (10⁻¹)] | +F sample [Poisson's coefficient (10⁻¹)] |
Holography | 3.723 ± 0.022 | 3.735 ± 0.025 |
Mechanical elasticity | 3.822 ± 0.095 | 3.90 ± 0.11 |
With these parameters, it was possible to determine the mean stress-optical coefficients in photoelasticity and holography, whose values are presented in Table 3.
−F sample [stress-optical coefficient (10⁻¹² m²/N)] | +F sample [stress-optical coefficient (10⁻¹² m²/N)] |
Holography | 3.921 ± 0.08 | 4.442 ± 0.09
Mechanical elasticity | 3.95 ± 0.21 | 4.50 ± 0.22
The difference between the values of the samples of different flexibilities is due to the dependence of the stress-optical coefficient on the sample thickness $e$. However, for the same flexibility, the values agree both in holography and in photoelasticity. An important observation is that in holography the precision of the results is better, since the values are of the order of 10⁻⁶ N/m².
The graphics in Figure 8 show the distribution of the stress difference over the vertical lines between two dark fringes obtained using the holographic method, RGB photoelasticity, and the analytical method. The applied load, in mass, was 600 g for all samples.
The curves in the graphics exhibit similar behavior, indicating that the theoretical model of the proposed method is correct. The +F samples are less rigid and suffer larger vertical deformations, reducing the modulus of elasticity under the same load. This behavior is observed graphically as smaller and more scattered peaks relative to the −F sample distribution.
In photoelasticity, the intensity of the fringe-pattern images depends on the characteristics of the experimental configuration: the transmittance of the materials, the level of anisotropic behavior, the spectral distribution of the light source (white light), the light-to-electrical signal conversion factor, and the temporary birefringence effect. In digital holography, these effects are less relevant because the subtractions performed in the numerical reconstruction process eliminate much of this noise, and because the optical retardations are obtained directly from the phase-difference maps.
We presented an alternative method to obtain the stress differences in photoelastic materials, as verified by the experiments. The similar results for the modulus of elasticity obtained through the different methods (photoelastic, mechanical, and holographic) allowed the assumption of an analogy between Hooke's law and holography. The photoelastic and holographic dispersions are equal, allowing the establishment of the stress-holographic law in analogy with the stress-optic law. The experimental stress distributions in holography presented the same behavior as the analytical and photoelastic distributions. The results of the stress distributions in holography were more accurate than those of photoelasticity when compared with the theoretical results. Thus, the proposed method is efficient and independent in its procedures, since it uses only parameters extracted directly from the phase maps.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors are grateful for the financial support from Fundação de Amparo à Pesquisa do Estado de São Paulo (2012/18162-4, 2019/23700-4, 2022/15276-0); the physical support from Faculdade de Tecnologia de Pompeia (Fatec Pompeia), Faculdade de Tecnologia de Itaquera (Fatec Itaquera), Instituto de Física da Universidade de São Paulo (IFUSP) and Instituto de Pesquisas Energéticas e Nucleares de São Paulo (IPEN-SP).
The lead author (Sidney Leal da Silva) contributed theory, data, and analysis from his doctoral work [17]. The other co-authors helped with the preparation and first revisions of the article.
The authors declare no conflict of interest.
[1] J. Feng, H. Gao, Q. Zhang, Y. Zhou, C. Li, S. Zhao, et al., Metabolic profiling reveals distinct metabolic alterations in different subtypes of pituitary adenomas and confers therapeutic targets, J. Transl. Med., 17 (2019), 1–13. https://doi.org/10.1186/s12967-019-2042-9
[2] X. M. Liu, Q. Yuan, Y. Z. Gao, K. L. He, S. Wang, X. Tang, et al., Weakly supervised segmentation of COVID-19 infection with scribble annotation on CT images, Pattern Recognit., 122 (2022), 108341. https://doi.org/10.1016/j.patcog.2021.108341
[3] B. J. Kar, M. V. Cohen, S. P. McQuiston, C. M. Malozzi, A deep-learning semantic segmentation approach to fully automated MRI-based left-ventricular deformation analysis in cardiotoxicity, Magn. Reson. Imaging, 78 (2021), 127–139. https://doi.org/10.1016/j.mri.2021.01.005
[4] N. Mu, H. Y. Wang, Y. Zhang, J. F. Jiang, J. S. Tang, Progressive global perception and local polishing network for lung infection segmentation of COVID-19 CT images, Pattern Recognit., 120 (2021), 108168. https://doi.org/10.1016/j.patcog.2021.108168
[5] X. M. Liu, Z. S. Guo, J. Cao, J. S. Tang, MDC-net: A new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information, Comput. Biol. Med., 135 (2021), 104543. https://doi.org/10.1016/j.compbiomed.2021.104543
[6] H. M. Rai, K. Chatterjee, S. Dashkevich, Automatic and accurate abnormality detection from brain MR images using a novel hybrid UnetResNext-50 deep CNN model, Biomed. Signal Process. Control, 66 (2021), 102477. https://doi.org/10.1016/j.bspc.2021.102477
[7] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing and Computer-assisted Intervention, Springer, Cham, (2015), 234–241. https://doi.org/10.1007/978-3-319-24574-4_28
[8] S. Xie, R. Girshick, P. Dollár, Z. Tu, K. He, Aggregated residual transformations for deep neural networks, in IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, (2017), 5987–5995.
[9] H. C. Lu, S. W. Tian, L. Yu, L. Liu, J. L. Cheng, W. D. Wu, et al., DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation, Comput. Methods Programs Biomed., 214 (2022), 106566. https://doi.org/10.1016/j.cmpb.2021.106566
[10] M. U. Rehman, S. Cho, J. Kim, K. T. Chong, BrainSeg-Net: Brain tumor MR image segmentation via enhanced encoder-decoder network, Diagnostics, 11 (2021), 169. https://doi.org/10.3390/diagnostics11020169
[11] P. Tang, C. Zu, M. Hong, R. Yan, X. C. Peng, J. H. Xiao, et al., DA-DSUnet: Dual attention-based dense SU-Net for automatic head-and-neck tumor segmentation in MRI images, Neurocomputing, 435 (2021), 103–113. https://doi.org/10.1016/j.neucom.2020.12.085
[12] U. Latif, A. R. Shahid, B. Raza, S. Ziauddin, M. A. Khan, An end-to-end brain tumor segmentation system using multi-inception-UNet, Int. J. Imaging Syst. Technol., 31 (2021), 1803–1816. https://doi.org/10.1002/ima.22585
[13] X. F. Du, J. S. Wang, W. Z. Sun, Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction, Med. Phys., 48 (2021), 3827–3841. https://doi.org/10.1002/mp.14944
[14] Z. Y. Wang, Y. J. Peng, D. P. Li, Y. F. Guo, B. Zhang, MMNet: A multi-scale deep learning network for the left ventricular segmentation of cardiac MRI images, Appl. Intell., 52 (2022), 5225–5240. https://doi.org/10.1007/s10489-021-02720-9
[15] M. Yang, H. W. Wang, K. Hu, G. Yin, Z. Q. Wei, IA-Net: An inception-attention-module-based network for classifying underwater images from others, IEEE J. Oceanic Eng., 47 (2022), 704–717. https://doi.org/10.1109/JOE.2021.3126090
[16] J. S. Zhou, Y. W. Lu, S. Y. Tao, X. Cheng, C. X. Huang, E-Res U-Net: An improved U-Net model for segmentation of muscle images, Expert Syst. Appl., 185 (2021), 115625. https://doi.org/10.1016/j.eswa.2021.115625
[17] S. Y. Chen, Y. N. Zou, P. X. Liu, IBA-U-Net: Attentive BConvLSTM U-Net with redesigned inception for medical image segmentation, Comput. Biol. Med., 135 (2021), 104551. https://doi.org/10.1016/j.compbiomed.2021.104551
[18] F. Hoorali, H. Khosravi, B. Moradi, IRUNet for medical image segmentation, Expert Syst. Appl., 191 (2022), 116399. https://doi.org/10.1016/j.eswa.2021.116399
[19] Z. Zhang, C. D. Wu, S. Coleman, D. Kerr, Dense-inception U-Net for medical image segmentation, Comput. Methods Programs Biomed., 192 (2020), 105395. https://doi.org/10.1016/j.cmpb.2020.105395
[20] S. A. Bala, S. Kant, Dense dilated inception network for medical image segmentation, Int. J. Adv. Comput. Sci. Appl., 11 (2020), 785–793. https://doi.org/10.14569/IJACSA.2020.0111195
[21] L. Wang, J. Gu, Y. Chen, Y. Liang, W. Zhang, J. Pu, et al., Automated segmentation of the optic disc from fundus images using an asymmetric deep learning network, Pattern Recognit., 112 (2021), 107810. https://doi.org/10.1016/j.patcog.2020.107810
[22] Z. Zheng, Y. Wan, Y. Zhang, S. Xiang, D. Peng, B. Zhang, CLNet: Cross-layer convolutional neural network for change detection in optical remote sensing imagery, ISPRS J. Photogramm. Remote Sens., 175 (2021), 247–267. https://doi.org/10.1016/j.isprsjprs.2021.03.005
[23] H. S. Zhao, J. P. Shi, X. J. Qi, X. G. Wang, J. Y. Jia, Pyramid scene parsing network, in IEEE Conference on Computer Vision and Pattern Recognition, (2017), 6230–6239.
[24] S. Ran, J. Ding, B. Liu, X. Ge, G. Ma, Multi-U-Net: Residual module under multisensory field and attention mechanism based optimized U-Net for VHR image semantic segmentation, Sensors, 21 (2021), 1794. https://doi.org/10.3390/s21051794
[25] R. M. Rad, P. Saeedi, J. Au, J. Havelock, Trophectoderm segmentation in human embryo images via inceptioned U-Net, Med. Image Anal., 62 (2020), 101612. https://doi.org/10.1016/j.media.2019.101612
[26] N. S. Punn, S. Agarwal, Multi-modality encoded fusion with 3d inception u-net and decoder model for brain tumor segmentation, Multimed. Tools Appl., 80 (2020), 30305–30320. https://doi.org/10.1007/s11042-020-09271-0
[27] Z. W. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, J. M. Liang, UNet++: A nested U-Net architecture for medical image segmentation, in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, (2018), 3–11. https://doi.org/10.1007/978-3-030-00889-5_1
[28] B. Zuo, F. F. Lee, Q. Chen, An efficient U-shaped network combined with edge attention module and context pyramid fusion for skin lesion segmentation, Med. Biol. Eng. Comput., 60 (2022), 1987–2000. https://doi.org/10.1007/s11517-022-02581-5
[29] D. P. Li, Y. J. Peng, Y. F. Guo, J. D. Sun, MFAUNet: Multiscale feature attentive U-Net for cardiac MRI structural segmentation, IET Image Proc., 16 (2022), 1227–1242. https://doi.org/10.1049/ipr2.12406
[30] V. S. Bochkov, L. Y. Kataeva, wUUNet: Advanced fully convolutional neural network for multiclass fire segmentation, Symmetry, 13 (2021), 98. https://doi.org/10.3390/sym13010098
[31] D. John, C. Zhang, An attention-based U-Net for detecting deforestation within satellite sensor imagery, Int. J. Appl. Earth Obs. Geoinf., 107 (2022), 102685. https://doi.org/10.1016/j.jag.2022.102685
[32] Y. Y. Yang, C. Feng, R. F. Wang, Automatic segmentation model combining U-Net and level set method for medical images, Expert Syst. Appl., 153 (2020), 113419. https://doi.org/10.1016/j.eswa.2020.113419
[33] I. Ahmed, M. Ahmad, G. Jeon, A real-time efficient object segmentation system based on U-Net using aerial drone images, J. Real-Time Image Process., 18 (2021), 1745–1758. https://doi.org/10.1007/s11554-021-01166-z
[34] M. Jiang, F. Zhai, J. Kong, A novel deep learning model DDU-net using edge features to enhance brain tumor segmentation on MR images, Artif. Intell. Med., 121 (2021), 102180. https://doi.org/10.1016/j.artmed.2021.102180
[35] D. Li, A. Cong, S. Guo, Sewer damage detection from imbalanced CCTV inspection data using deep convolutional neural networks with hierarchical classification, Autom. Constr., 101 (2019), 199–208. https://doi.org/10.1016/j.autcon.2019.01.017
[36] M. M. Ji, Z. B. Wu, Automatic detection and severity analysis of grape black measles disease based on deep learning and fuzzy logic, Comput. Electron. Agric., 193 (2022), 106718. https://doi.org/10.1016/j.compag.2022.106718
[37] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, et al., Attention U-Net: Learning where to look for the pancreas, preprint, arXiv:1804.03999.
[38] G. Huang, Z. Liu, V. Laurens, K. Q. Weinberger, Densely connected convolutional networks, in IEEE Conference on Computer Vision and Pattern Recognition, (2017), 2261–2269. https://doi.org/10.1109/CVPR.2017.243
[39] A. Paszke, A. Chaurasia, S. Kim, E. Culurciello, ENet: A deep neural network architecture for real-time semantic segmentation, preprint, arXiv:1606.02147.
[40] K. Sun, B. Xiao, D. Liu, J. D. Wang, Deep high-resolution representation learning for human pose estimation, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2019), 5693–5703. https://doi.org/10.1109/CVPR.2019.00584
[41] H. H. Zhao, X. J. Qi, X. Y. Shen, J. P. Shi, J. Y. Jia, ICNet for real-time semantic segmentation on high-resolution images, in Proceedings of the European Conference on Computer Vision, (2018), 405–420.
[42] M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, V. K. Asari, Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation, preprint, arXiv:1802.06955.
[43] V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 39 (2017), 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
[44] J. Bullock, C. Cuesta-Lázaro, A. Quera-Bofarull, XNet: A convolutional neural network (CNN) implementation for medical X-Ray image segmentation suitable for small datasets, in Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, (2019), 453–463. https://doi.org/10.1117/12.2512451
[45] H. Huang, L. Lin, R. Tong, H. Hu, J. Wu, UNet 3+: A full-scale connected UNet for medical image segmentation, in IEEE International Conference on Acoustics, Speech and Signal Processing, (2020), 1055–1059. https://doi.org/10.1109/ICASSP40776.2020.9053405
[46] P. Tschandl, C. Rosendahl, H. Kittler, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, Sci. Data, 5 (2018), 180161. https://doi.org/10.1038/sdata.2018.161