Deep learning models in medical imaging face dual challenges: domain shift, where models perform poorly when deployed in settings different from their training environment, and class imbalance, where certain disease conditions are naturally underrepresented. We present imbalance-aware domain adaptation (IADA), a novel framework that tackles both challenges simultaneously through three key components: (1) adaptive feature learning with class-specific attention mechanisms, (2) balanced domain alignment with dynamic weighting, and (3) adaptive threshold optimization. Our theoretical analysis establishes convergence guarantees and complexity bounds. In extensive experiments on embryo development assessment across four imaging modalities, IADA demonstrates significant improvements over existing methods, achieving up to 25.19% higher accuracy while maintaining balanced performance across classes. In challenging scenarios with low-quality imaging systems, IADA shows robust generalization, with AUC improvements of up to 12.56%. These results demonstrate IADA's potential for building reliable and equitable medical imaging systems across diverse clinical settings. The code is publicly available at https://github.com/yinghemedical/imbalance-aware_domain_adaptation.
Citation: Lei Li, Xinglin Zhang, Jun Liang, Mengqian Huang, Tao Chen. Addressing domain shift via imbalance-aware domain adaptation in embryo development assessment. Mathematical Biosciences and Engineering, 2026, 23(5): 1375-1401. doi: 10.3934/mbe.2026051
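The three components named in the abstract can be illustrated with a minimal numerical sketch. The function names, the inverse-frequency weighting, the mean-feature alignment term, and the confidence-based thresholds below are all illustrative assumptions, not the authors' actual implementation; they only show one plausible shape for class-weighted domain alignment and per-class adaptive thresholds.

```python
# Minimal sketch of the three IADA ingredients described in the abstract.
# All names and formulas here are illustrative assumptions.
import numpy as np

def class_weights(counts):
    """Inverse-frequency class weights so minority classes count more,
    normalized to have mean 1."""
    counts = np.asarray(counts, dtype=float)
    w = counts.sum() / (len(counts) * counts)
    return w / w.mean()

def balanced_domain_alignment(src_feats, tgt_feats, src_labels, weights):
    """Class-weighted mean-feature discrepancy between source and target
    domains (a simple stand-in for a dynamically weighted alignment loss)."""
    per_sample_w = weights[src_labels]
    src_mean = (src_feats * per_sample_w[:, None]).sum(0) / per_sample_w.sum()
    tgt_mean = tgt_feats.mean(0)
    return float(np.square(src_mean - tgt_mean).sum())

def adaptive_thresholds(probs, labels, n_classes):
    """Per-class decision thresholds set from each class's own mean
    predicted confidence (one simple adaptive-threshold heuristic)."""
    return np.array([probs[labels == c, c].mean() for c in range(n_classes)])
```

For example, `class_weights([90, 10])` returns `[0.2, 1.8]`, so the minority class contributes nine times more per sample to both the classification and alignment terms; combining the weighted classification loss with the alignment penalty and thresholding predictions per class mirrors the abstract's three-part design.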