
Common macular and vascular diseases include age-related macular degeneration (ARMD), diabetic macular edema (DME), branch retinal vein occlusion (BRVO), central retinal vein occlusion (CRVO), and central serous chorioretinopathy (CSCR); together they are among the leading causes of visual impairment and blindness worldwide [1,2,3]. According to the World Health Organization (WHO), diabetes, the underlying cause of DME and a condition that primarily affects working-age adults, affected 425 million people worldwide in 2017 and is expected to affect 629 million by 2045 [4]. The WHO also estimates that 196 million people had ARMD in 2020; this number is expected to rise to 288 million by 2040 [5]. The prevalence of ARMD among the elderly is 40% at age 70, rising to 70% at age 80. Rogers et al. [6] found that, in 2008, BRVO and CRVO affected 13.9 million and 2.5 million people aged 30 years and older worldwide, respectively. CSCR is more prevalent in men than in women [7]. A large population is already afflicted by these diseases, and projections suggest that this number will grow. However, these diseases are treatable at an early stage, and vision loss can be recovered through early detection and treatment [8,9,10].
Optical coherence tomography (OCT) is a noninvasive imaging modality that provides high-resolution, cross-sectional images. OCT retinal imaging enables visualization of the thickness, structure, and detail of the various retinal layers; when disease develops, it also reveals abnormal features and damaged retinal structures [11]. Retinal OCT images are therefore widely used in clinical practice to monitor patients prior to treatment and to diagnose various diseases.
For many years, ophthalmologists have analyzed the detailed information inside the retina in retinal OCT images for retinal care, treatment, and diagnosis in clinical settings. Clinicians perform these tasks manually and must wait for each step to finish, so manual analysis becomes time-consuming when there are numerous OCT images; even an experienced clinician's analysis may not always be accurate [12]. Automated techniques based on deep learning (DL) or machine learning with artificial intelligence have been proposed to overcome these limitations.
Recently, computer algorithms based on artificial intelligence, DL, and machine learning have been proposed for the automatic diagnosis of various retinal diseases and applied in clinical health care. Han et al. [13] modified three well-known convolutional neural network (CNN) models to distinguish normal retinas from three subtypes of neovascular age-related macular degeneration (nAMD). The classification layers of the original CNN models were replaced with four fully connected layers and three dropout layers, with a leaky rectified linear unit (Leaky ReLU) as the activation function. The modified models were trained with transfer learning and tested on 920 OCT images; the VGG-16 model achieved an accuracy of 87.4%. Sotoudeh-Paima et al. [14] classified OCT images as normal, AMD, or choroidal neovascularization (CNV) using a multiscale CNN, which achieved a classification accuracy of 93.40% on a public dataset. Elaziz et al. [15] developed a four-class classification method for retinal diseases in OCT images based on an ensemble DL model and machine learning: features extracted from two models, MobileNet and DenseNet, were concatenated into a full feature vector for each input image; feature selection then removed irrelevant features, and the remaining features were fed to a machine learning classifier. A total of 968 OCT images were used to evaluate classification performance, and an accuracy of 94.31% was achieved. Liu et al. [16] used a DL model to extract attention features from OCT images and used them as guiding features for the CNV, DME, drusen, and normal classes; the classification performance, assessed on public datasets, reached an average accuracy of 95.10%. Minagi et al. [17] applied transfer learning with universal adversarial perturbations (UAPs), a form of adversarial retraining used to improve the classification performance of DL models, to limited datasets. Three types of medical images, including OCT images, were used, with the DL models pretrained on the ImageNet dataset, and the UAP algorithm generated adversarial training examples from the data provided. In total, 11,200 OCT images were used for training and evaluation, and a classification accuracy of 95.3% was achieved for four classes: CNV, DME, drusen, and normal. Tayal et al. [18] presented a four-class ocular disease classification based on three CNN models, enhancing the OCT images before feeding them to the models. Evaluated on 6,678 publicly available OCT images, the method achieved an accuracy of 96.50% with a nine-layer CNN, which outperformed the five- and seven-layer CNNs that were also tested.
According to the literature, retinal OCT classification has been developed using DL and DL-based methods such as transfer learning, smoothing generative adversarial networks, adversarial retraining, and multiscale CNNs. These methods improve model performance by fine-tuning knowledge from previous tasks for the OCT problem, enlarging the training dataset, refining how data are fed to the training model, and varying the training input image sizes. However, the reported classification methods achieve accuracies below 97.00%, indicating room for further improvement. Moreover, these studies classify retinal diseases into fewer than five classes. This study aims to improve classification accuracy and to detect five classes of retinal diseases, more than in the previous studies highlighted above.
In this study, we propose an automatic method based on a hybrid of deep learning and ensemble machine learning for screening five retinal diseases from OCT images, with the goal of improving OCT image classification performance. The proposed method improves classification accuracy over standalone classifiers. In addition, it can be trained on a smaller dataset from our hospital that has been strictly labeled by experts. Moreover, the method can be deployed on a web server for open access, returning an evaluation within seconds.
All OCT images were collected from Soonchunhyang University Bucheon Hospital and normalized after approval by the hospital's Institutional Review Board (IRB). The images were captured using a DRI-OCT device (Topcon Medical Systems, Inc., Oakland, NJ, USA). The scan range was 3–12 mm in the horizontal and vertical directions, with a lateral resolution of 20 μm and an in-depth resolution of 8 μm, at a shooting speed of 100,000 A-scans per second. The OCT images were collected in two rounds: the first comprised 2,000 images captured between April and September 2021, and the second comprised 998 images captured over approximately five months from September 2021 to January 2022. In total, 2,998 OCT images were collected and labeled by ophthalmologists into five retinal diseases (ARMD: 740, BRVO: 450, CRVO: 299, CSCR: 749, DME: 760) as the ground truth.
This study was approved by the Institutional Review Board (IRB) from Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea (IRB approval number: 2021-05-001). All methods were performed in accordance with relevant guidelines and regulations. Informed consent was obtained from all subjects.
Image processing is a technique for performing various operations on the original images to convert them into a format suitable for DL models or to extract useful features. In deep learning-based image classification, image processing is an essential initial step before feeding an image to a CNN model. A CNN model requires a fixed input size, and higher-resolution images demand longer computing times. To shorten computation time and match the input size required by the CNN models, all OCT images were downsized to 300 pixels in height and 500 pixels in width. The OCT image dataset was then split into an 80% training set, used to train the deep learning models, and a 20% testing set, used to assess performance, as sketched below.
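A minimal sketch of this step, assuming a hypothetical directory layout (root/<class>/*.png) and a hypothetical load_dataset() helper, using PIL for resizing and scikit-learn for the stratified 80/20 split:

```python
import glob
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

IMG_HEIGHT, IMG_WIDTH = 300, 500  # target size used in this study

def load_dataset(root="oct_images"):
    """Load OCT images, resize them to 300x500 pixels, and read class
    labels from the directory names (hypothetical layout)."""
    images, labels = [], []
    for path in glob.glob(f"{root}/*/*.png"):
        # PIL's resize takes (width, height)
        img = Image.open(path).convert("RGB").resize((IMG_WIDTH, IMG_HEIGHT))
        images.append(np.asarray(img, dtype=np.uint8))
        labels.append(path.split("/")[-2])
    return np.stack(images), np.array(labels)

X, y = load_dataset()
# 80% training / 20% testing, stratified so class ratios are preserved
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```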
The size of the dataset has a significant impact on DL performance, so a larger dataset may enable better performance. In the medical field, however, most datasets are limited in size. Data augmentation overcomes this limitation by performing different operations on the available data to create new samples, thereby enlarging the dataset; it is also used to enhance performance [19], generalize the model [20], and avoid overfitting [21]. We used augmentation operations from the Python library imgaug, including vertical flipping, rotation, scaling, brightness, saturation, contrast, enhance-and-contrast, and equalization. The OCT images were rotated at angles of 170°, 175°, 185°, and 190°, angles chosen to preserve the rectangular shape without losing information from the original images; scaled by a random factor between 0.01 and 0.12; brightened at levels from 1 to 3; saturated at levels from 1 to 5, increasing by one at each level; contrast-adjusted with random contrast values from 0.2 to 3; enhanced-and-contrasted at levels from 1 to 1.5; and equalized at levels from 0.9 to 1.4. At the end of this process, one OCT image serves as the basis for generating 29 augmented images, so the training set comprised a total of 69,455 OCT images, including the original samples. The acquired OCT and augmented images are shown in Figure 1. Data augmentation was applied only to the training set used to train the proposed method; a sketch of the pipeline follows. After augmentation, the OCT images were partitioned by 10-fold cross-validation into folds for training the model (training data) and for testing the model after every epoch (validation data).
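A sketch of such a pipeline with imgaug; the operators mirror the transformations listed above, but the parameter ranges are approximations of those described rather than the original settings:

```python
import imgaug.augmenters as iaa

# One augmenter per operation described in the text (ranges approximate)
augmenters = [
    iaa.Flipud(1.0),                          # vertical flip
    iaa.Affine(rotate=[170, 175, 185, 190]),  # rotation at the four angles
    iaa.Affine(scale=(1.01, 1.12)),           # mild random rescaling
    iaa.MultiplyBrightness((1.0, 3.0)),       # brightness levels
    iaa.MultiplySaturation((1.0, 5.0)),       # saturation levels
    iaa.LinearContrast((0.2, 3.0)),           # random contrast
    iaa.pillike.EnhanceContrast((1.0, 1.5)),  # enhance-and-contrast
    iaa.pillike.Equalize(),                   # histogram equalization
]

# Each source image yields several augmented copies (29 per image
# in this study, across the operations and their parameter levels).
augmented_batches = [aug(images=X_train) for aug in augmenters]
```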
Figure 2 shows the architecture of the proposed method, which comprises three major blocks: feature extraction, classification, and performance boosting. First, transfer learning based on CNN models extracts convolutional features from the OCT images. Second, various machine learning algorithms classify the OCT images based on the features extracted by the CNN model. Finally, the ensemble algorithm fuses the per-class distribution probabilities and predicts the retinal disease class from the fused probabilities. Each block of the proposed architecture is described in detail in the following subsections.
Transfer learning is a technique that transfers knowledge from a previously learned, related task to improve learning on a new task. Training a CNN model from scratch is computationally expensive and time-consuming, and an extensive dataset is required to achieve good performance; transfer learning was developed to overcome these drawbacks of DL [22]. To retrain a model on a new task from prior knowledge, the pretrained weights are fine-tuned: a small number of top layers are trained while the remaining layers are frozen. In this study, the transfer learning CNN (TL-CNN) models EfficientNetB0 [23], InceptionResNetV2 [24], InceptionV3 [25], ResNet50 [26], VGG16 [27], and VGG19 [28] are selected and modified. The modified models are named with the prefix TL, indicating transfer learning, followed by the original CNN name: TL-EfficientNetB0, TL-InceptionResNetV2, TL-InceptionV3, TL-ResNet50, TL-VGG16, and TL-VGG19. The original CNN models were created for generic image classification and were trained and tested on a large dataset (ImageNet) to categorize 1000 types of images. To use a CNN model with transfer learning for retinal OCT images, its classification layers must be modified to match the target classes of the specific problem, here the categorization of OCT images. The new classification head stacks a GlobalAveragePooling2D layer, one normalization layer, and two Dense layers: the first Dense layer has 1,024 units with the ReLU activation function, and the final Dense layer outputs a five-dimensional vector. The updated model is then retrained to fine-tune the pretrained feature representation in the base model and make it more relevant to OCT image classification. The output is a five-dimensional vector of class distribution probabilities produced by the Softmax activation function. As mentioned previously, a CNN model based on transfer learning is used to extract convolutional features from the OCT images; these one-dimensional features are taken at the GlobalAveragePooling2D layer of the classification head, as sketched below. Different models provide different features and feature counts depending on their structure and convolution filters.
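A sketch of one such TL-CNN (TL-EfficientNetB0) in TensorFlow/Keras with the modified head described above; treating the "normalization layer" as BatchNormalization and naming the pooling layer "gap" are assumptions:

```python
import tensorflow as tf

# Pretrained ImageNet backbone without its original classification top
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(300, 500, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(name="gap"),  # 1-D features
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # five retinal diseases
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

The other backbones (InceptionResNetV2, InceptionV3, ResNet50, VGG16, VGG19) are swapped in the same way, which is why the extracted feature dimension differs per model.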
The six TL-CNN models extracted features independently at their GlobalAveragePooling2D layers (TL-EfficientNetB0: 1,280 features; TL-InceptionResnetV2: 1,536; TL-InceptionV3: 2,048; TL-ResNet50: 2,048; TL-VGG16: 512; TL-VGG19: 512). The extracted features of each TL-CNN model were then used as input to six popular machine learning classifiers: support vector machine (SVM) [29], k-nearest neighbors (k-NN) [30], decision tree (DT) [31], random forest (RF) [32], Naïve Bayes [33], and XGBoost [34]. These classifiers use different techniques for learning and for distinguishing the classes of the data; a sketch of this stage follows.
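The sketch below reads features from the GlobalAveragePooling2D layer of a trained TL-CNN (named "gap" in the model sketch above) and fits the six classifiers; the integer-encoded labels y_train_encoded and y_test_encoded are assumptions:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

# Submodel that stops at the pooling layer -> 1-D feature vectors
extractor = tf.keras.Model(model.input, model.get_layer("gap").output)
F_train = extractor.predict(X_train)  # e.g., 1,280-dim for EfficientNetB0
F_test = extractor.predict(X_test)

classifiers = {
    "SVM": SVC(probability=True),  # probability=True enables soft voting
    "k-NN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "Naive Bayes": GaussianNB(),
    "XGBoost": XGBClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(F_train, y_train_encoded)  # labels as integers 0..4 (assumed)
    print(name, clf.score(F_test, y_test_encoded))
```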
Individual machine learning classifiers yield different identification accuracies, because each classifier has its own ability to learn classes from the given features. Therefore, an ensemble method is used to aggregate the distribution probabilities of two classifiers; based on our experiments, the proposed method selects the two best-predicting classifiers (k-NN and XGBoost) for aggregation. The ensemble uses soft voting, which performs better than the individual models [35]. Whereas hard voting predicts the final class label as the label most frequently predicted by the classifiers, soft voting predicts the class label by averaging the class probabilities p. Algorithm 1 presents the proposed pipeline, which includes image processing, data splitting, data augmentation, feature extraction, classification, and the ensemble of classifiers:
$$ y_{FC} = \arg\max_{i} \sum_{k=1}^{m} w_k \, p_{ik} \qquad (1) $$
Algorithm 1: Proposed OCT image classification
1:  procedure OCT_IMAGE_PROCESSING
2:      return preprocessed-images
3:  procedure SPLIT_DATA(OCT-data)
4:      train-data, test-data, train-labels, test-labels = split(OCT-images, labels)
5:  procedure DATA_AUGMENTATION(train-data)
6:      augmented-images = augmentation(vertical flip, rotation, scale, brightness,
7:          saturation, contrast, enhance and contrast, equalization)
8:      return augmented-images
9:  procedure 10-FOLD_CROSS_VALIDATION(augmented-images, labels)
10:     Fold1, Fold2, ..., Fold10 = train_test_split(augmented-images, labels)
11:     return Fold1-10
12: procedure FEATURE_EXTRACTION(Fold1-10, test-data, test-labels)
13:     TL-CNN models = modify the convolutional neural network (CNN) models
14:     pretrain the TL-CNN models; train the small top layers; freeze the final layers
15:     extracted-features = TL-CNN model at GlobalAveragePooling2D layers
16:     return extracted-features saved in csv format
17: procedure CLASSIFICATION(extracted-features, labels)
18:     classifiers = [SVM, k-NN, DT, RF, Naïve Bayes, XGBoost]
19:     for clsf in range(0, 6):
20:         predicted-labels = classifiers[clsf].fit(extracted-features)
21:         training-accuracy = accuracy(predicted-labels, labels)
22:         save_train_weight
23:     voting = "soft"
24:     ML1 = k-NN(train-data, train-labels, test-data)
25:     ML2 = XGBoost(train-data, train-labels, test-data)
26: procedure ENSEMBLE_CLASSIFIERS(train-data, train-labels, test-data)
27:     ensemble-classifiers = concatenate(ML1, ML2)
28:     ensemble-classifiers.fit(train-data, train-labels)
29:     predictions = ensemble-classifiers.predict(test-data)
30:     save_training_weights, results_visualization
where $w_k$ is the weight of machine learning classifier $k$ (either k-NN or XGBoost), which automatically learns disease features from OCT images and then identifies the type of disease from the input data; $i$ is the class label of the retinal diseases, with $i \in \{0: \text{ARMD}, 1: \text{BRVO}, 2: \text{CRVO}, 3: \text{CSCR}, 4: \text{DME}\}$; and $p_{ik}$ is the probability assigned by classifier $k$ to class $i$.
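The snippet below sketches Eq. (1) directly: the class-probability vectors of the two selected classifiers are combined with weights $w_k$ (taken here as equal, an assumption), and the argmax over classes gives the final label. scikit-learn's VotingClassifier with voting="soft" implements the same rule.

```python
import numpy as np

# Class-probability matrices of shape (n_samples, 5) from the two classifiers
p_knn = classifiers["k-NN"].predict_proba(F_test)
p_xgb = classifiers["XGBoost"].predict_proba(F_test)

weights = np.array([0.5, 0.5])  # w_k for k-NN and XGBoost (equal weights assumed)
p_ensemble = weights[0] * p_knn + weights[1] * p_xgb  # weighted sum over k
y_pred = p_ensemble.argmax(axis=1)  # argmax over classes i, as in Eq. (1)
```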
The proposed OCT image classification method was developed using Python 3.7, TensorFlow 2.6.0, and scikit-learn, and was run on a personal computer with the Windows 10 operating system, an Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz, 192 GB RAM, and an NVIDIA TITAN RTX GPU.
The proposed OCT image classification method was trained using the augmented OCT images and evaluated on a test set. Training proceeded in two stages: first, the six TL-CNN models were trained to perform feature extraction from the OCT images; second, the machine learning classifiers were trained on the extracted features.
The six TL-CNN models were trained separately on the combination of the training set and its augmented images. The combined data were split by a 10-fold cross-validation algorithm to separate images for training, to validate the model during training, and to prevent overfitting. Each TL-CNN model was trained with a fixed batch size of 64, 100 epochs, and the Adam optimizer with a learning rate of 0.0001, selected based on the standard learning rate provided by the TensorFlow library. With 100 epochs, each model passes over the same data 100 times, and performance improves as the weights are updated from the loss computed at each pass. The weights of each TL-CNN model were saved to a separate file after training and were used to extract features from the training and testing sets. The machine learning models were then trained on the convolutional features extracted by the TL-CNN models to estimate class probabilities; the six machine learning models were trained separately, and their weights were saved after training. Finally, the ensemble method based on soft voting averaged the class probabilities of the two selected classifiers to obtain the final class prediction. A sketch of the training schedule follows.
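A minimal sketch, assuming the augmented training images and labels are held in arrays X_aug and y_aug (hypothetical names); in practice the model would be re-initialized before each fold:

```python
from sklearn.model_selection import StratifiedKFold

kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for fold, (tr_idx, va_idx) in enumerate(kfold.split(X_aug, y_aug)):
    # Batch size 64, 100 epochs, Adam at 1e-4 (set in model.compile above)
    model.fit(X_aug[tr_idx], y_aug[tr_idx],
              validation_data=(X_aug[va_idx], y_aug[va_idx]),
              batch_size=64, epochs=100)
    # Save per-fold weights for later feature extraction
    model.save_weights(f"tl_cnn_fold{fold}.h5")
```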
The results of the proposed OCT image classification are divided into three parts: the classification results, the deployment of the classifier as a web service, and a comparison with similar studies in terms of classification accuracy.
A test set of 601 OCT images was used to evaluate the performance of the proposed method after training. The test set underwent the same preprocessing as the training set, but without data augmentation. The six TL-CNN models were individually trained to extract features from the OCT images and store them in pickle format, and the six machine learning classifiers were used to discriminate the OCT image classes from the TL-CNN features. Four statistical metrics were computed to measure classification ability across the classes: sensitivity, specificity, precision, and accuracy. The relationship between sensitivity and specificity for the various categories is shown through receiver operating characteristic (ROC) curves, and the confusion matrix, which indicates correct and incorrect class predictions, was also analyzed. Table 2 lists the test results using TL-EfficientNetB0 as the extractor with seven classifiers, including the ensemble; the ensemble classifier performed best, with a sensitivity, specificity, precision, and accuracy of 96.17%, 98.92%, 95.89%, and 95.85%, respectively. The second-highest performance was achieved by the k-NN classifier, with a sensitivity, specificity, precision, and accuracy of 87.37%, 96.95%, 88.82%, and 88.89%, respectively. The results of the other machine learning classifiers varied without a consistent pattern.
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-EfficientNetB0 | SVM | 86.79% | 96.78% | 88.64% | 88.39% |
| | k-NN | 87.37% | 96.95% | 88.82% | 88.89% |
| | DT | 85.80% | 96.41% | 84.95% | 86.90% |
| | RF | 66.20% | 92.60% | 81.08% | 75.95% |
| | Naive Bayes | 86.11% | 96.40% | 86.58% | 86.90% |
| | XGBoost | 85.86% | 96.45% | 86.38% | 87.23% |
| | Ensemble | 96.17% | 98.92% | 95.89% | 95.85% |
Table 3 shows the classification results when using TL-InceptionResnetV2 as the extractor with the seven classifiers; the ensemble classifier performed best, with a sensitivity, specificity, precision, and accuracy of 97.42%, 99.40%, 97.49%, and 97.68%, respectively. The second-highest performance was achieved by the k-NN classifier, with a sensitivity, specificity, precision, and accuracy of 87.37%, 96.48%, 88.19%, and 87.56%, respectively; with this extractor, the performance of XGBoost was similar to that of k-NN. Table 4 lists the evaluation results for the TL-InceptionV3 extractor with the seven classifiers; the ensemble classifier again outperformed the others, with a sensitivity, specificity, precision, and accuracy of 91.34%, 97.59%, 91.03%, and 91.04%, respectively, followed by XGBoost with 84.42%, 95.10%, 82.88%, and 82.91%. Table 5 lists the classification results with TL-ResNet50 as the feature extractor; the ensemble classifier performed best, obtaining a sensitivity, specificity, precision, and accuracy of 96.46%, 99.14%, 96.76%, and 96.68%, respectively, followed by XGBoost with 87.63%, 96.59%, 88.27%, and 87.73%. The performances of SVM and k-NN were comparable to each other and better than those of the remaining three classifiers in these experiments. Table 6 lists the test results with TL-VGG16 as the feature extractor; the ensemble classifier exhibited the best performance, with a sensitivity, specificity, precision, and accuracy of 92.07%, 98.00%, 92.60%, and 92.54%, respectively, and XGBoost was second with 80.48%, 94.91%, 81.44%, and 82.26%; SVM and k-NN performed similarly. Table 7 lists the classification results with TL-VGG19 for feature extraction; the ensemble classifier outperformed the six individual classifiers, with a sensitivity, specificity, precision, and accuracy of 93.86%, 93.40%, 93.44%, and 93.86%, respectively, followed by XGBoost and SVM in second and third place.
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-InceptionResnetV2 | SVM | 86.27% | 96.13% | 86.86% | 86.40% |
| | k-NN | 87.37% | 96.48% | 88.19% | 87.56% |
| | DT | 83.77% | 95.54% | 83.37% | 84.41% |
| | RF | 72.93% | 93.79% | 80.67% | 79.27% |
| | Naive Bayes | 79.66% | 93.41% | 78.25% | 77.78% |
| | XGBoost | 87.29% | 96.47% | 88.05% | 87.56% |
| | Ensemble | 97.42% | 99.40% | 97.49% | 97.68% |
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-InceptionV3 | SVM | 82.42% | 94.50% | 80.85% | 81.09% |
| | k-NN | 83.05% | 94.61% | 80.94% | 81.43% |
| | DT | 81.54% | 94.18% | 79.65% | 80.09% |
| | RF | 79.50% | 94.01% | 80.52% | 79.77% |
| | Naive Bayes | 65.72% | 86.52% | 2.58% | 61.33% |
| | XGBoost | 84.42% | 95.10% | 82.88% | 82.91% |
| | Ensemble | 91.34% | 97.59% | 91.03% | 91.04% |
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-ResNet50 | SVM | 86.25% | 96.02% | 85.12% | 85.95% |
| | k-NN | 85.75% | 96.00% | 86.26% | 85.74% |
| | DT | 82.04% | 94.94% | 82.34% | 82.59% |
| | RF | 52.90% | 88.63% | 71.77% | 65.67% |
| | Naive Bayes | 67.71% | 89.82% | 72.49% | 64.68% |
| | XGBoost | 87.63% | 96.59% | 88.27% | 87.73% |
| | Ensemble | 96.46% | 99.14% | 96.76% | 96.68% |
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-VGG16 | SVM | 76.39% | 93.49% | 76.96% | 78.28% |
| | k-NN | 74.55% | 92.33% | 75.70% | 74.96% |
| | DT | 57.42% | 84.86% | 55.72% | 58.37% |
| | RF | 50.55% | 86.58% | 38.39% | 63.18% |
| | Naive Bayes | 59.53% | 85.08% | 58.84% | 59.20% |
| | XGBoost | 80.48% | 94.91% | 81.44% | 82.26% |
| | Ensemble | 92.07% | 98.00% | 92.60% | 92.54% |
| TL-CNN model | Machine learning | Sensitivity | Specificity | Precision | Accuracy |
|---|---|---|---|---|---|
| TL-VGG19 | SVM | 79.90% | 93.74% | 78.77% | 78.82% |
| | k-NN | 69.16% | 90.99% | 70.56% | 71.64% |
| | DT | 53.23% | 82.94% | 53.41% | 54.73% |
| | RF | 48.29% | 85.62% | 37.71% | 60.36% |
| | Naive Bayes | 56.41% | 82.96% | 54.70% | 54.89% |
| | XGBoost | 82.44% | 95.30% | 81.90% | 83.58% |
| | Ensemble | 93.86% | 93.40% | 93.44% | 93.86% |
Comparing the six TL-CNN models, TL-InceptionResNetV2 achieved better performance than the other five models used in this study, with a sensitivity, specificity, precision, and accuracy of 97.42%, 99.40%, 97.49%, and 97.68%, respectively. The ensemble algorithm outperformed the individual classifiers for every TL-CNN model. Individually, k-NN and XGBoost performed better than the other classifiers, and their ensemble in turn outperformed each of them alone.
Figure 3 shows the ROC curves of the best-performing configuration of the proposed method, TL-InceptionResnetV2 with the ensemble classifier (k-NN and XGBoost). The per-class AUC values for ARMD, BRVO, CRVO, CSCR, and DME are 0.99, 0.96, 0.99, 0.99, and 0.98, respectively; the relationship between sensitivity and specificity across the five classes is the key quantity here. The confusion matrix was computed with the scikit-learn library in Python; the size of the test set matters for demonstrating the robustness of the classification, and the matrix shows the numbers of correct and incorrect predictions for all classes. Figure 4 shows the confusion matrix of the best-performing configuration: 148 of 149 ARMD images were correctly predicted; 85 of 91 BRVO images were correct (3 misclassified as ARMD and 3 as DME); 59 of 60 CRVO images were correct (1 misclassified as BRVO); 148 of 150 CSCR images were correct (2 misclassified as ARMD); and 149 of 153 DME images were correct (1 misclassified as ARMD, 1 as BRVO, and 2 as CRVO).
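As a sketch of this evaluation, reusing the names from the earlier snippets, the confusion matrix and one-vs-rest per-class AUCs can be computed as follows:

```python
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.preprocessing import label_binarize

# Rows: true classes, columns: predicted classes
cm = confusion_matrix(y_test_encoded, y_pred)
print(cm)

# One-vs-rest AUC per class from the ensemble probabilities
CLASSES = ["ARMD", "BRVO", "CRVO", "CSCR", "DME"]
y_bin = label_binarize(y_test_encoded, classes=[0, 1, 2, 3, 4])
for i, name in enumerate(CLASSES):
    print(name, roc_auc_score(y_bin[:, i], p_ensemble[:, i]))
```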
To make the proposed method accessible from outside over an Internet connection, we deployed the OCT image classifier on a web server using the Flask framework. The web server receives one image at a time and passes it to the proposed classification method to predict the retinal disease. The input is a three-channel OCT image with a resolution of 300 pixels in height and 500 pixels in width. When an OCT image is submitted through the web service user interface (UI), it is transferred to a computer server running the DL classification model. First, the server performs the same image processing used for the training and test sets; second, the preprocessed image is fed to the trained classification weights for prediction; finally, the prediction results are returned to the web service through the Flask framework. The results consist of the input image, the distribution probabilities over the five classes, the diagnosed retinal disease class, and the prediction time, i.e., the time from submitting an image to the web service until the prediction result is returned. Figure 5 shows the initial UI of the web server, and the prediction results obtained after inputting OCT images are shown in Figure 6.
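A minimal sketch of such an endpoint, in which the route name, the form field, and the predict_probabilities() helper (wrapping the feature extractor and ensemble above) are hypothetical:

```python
import io
import time
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)
CLASSES = ["ARMD", "BRVO", "CRVO", "CSCR", "DME"]

@app.route("/predict", methods=["POST"])
def predict():
    start = time.time()
    # Read the uploaded image and apply the same preprocessing as training
    raw = request.files["image"].read()
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize((500, 300))
    x = np.asarray(img, dtype=np.uint8)[None, ...]  # add batch dimension
    probs = predict_probabilities(x)  # hypothetical: extractor + ensemble
    return jsonify({
        "probabilities": dict(zip(CLASSES, map(float, probs))),
        "diagnosis": CLASSES[int(np.argmax(probs))],
        "prediction_time_s": round(time.time() - start, 3),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```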
The accuracy of the proposed OCT image classification method is compared with that of the recent studies reviewed in the literature section, as listed in Table 8. These studies focused on transfer learning, on developing new models, and on combining well-known CNN models with machine learning. All the listed studies used different OCT datasets or combinations of them, and the number and types of classes differed, with at most four classes; we classify retinal diseases into five classes using a dataset obtained from a hospital, and a larger number of classes can affect the performance of a classification method. Table 8 lists the presented methods and algorithms, including suggested models with transfer learning, multiscale DL models, and transfer learning with existing CNN models; as noted in the literature review, their reported accuracies are below 97%. Instead of relying on a single classifier, this study combines two machine learning classifiers with DL as the feature extractor. Our method achieves an accuracy of 97.68%, higher than that of the aforementioned studies, while also classifying more classes than the reviewed studies.
| Author | Year | Method | Disease type | Dataset size | Accuracy |
|---|---|---|---|---|---|
| Han et al. [13] | 2022 | Transfer learning with modified well-known CNN models | 4-class: PCV, RAP, nAMD, NORMAL | 4,749 | 87.4% |
| Sotoudeh-Paima et al. [14] | 2022 | Deep learning: multi-scale convolutional neural network | 3-class: AMD, CNV, NORMAL | 120,961 | 93.4% |
| Elaziz et al. [15] | 2022 | Ensemble deep learning model for feature extraction, feature selection, machine learning as classifier | 4-class: DME, CNV, DRUSEN, NORMAL | 84,484 | 94.32% |
| Liu et al. [16] | 2022 | Deep learning-based method with lesion segmentation model | 4-class: CNV, DME, DRUSEN, NORMAL | 86,134 | 95.10% |
| Minagi et al. [17] | 2022 | Transfer learning with DNN models | 4-class: CNV, DME, DRUSEN, NORMAL | 11,200 | 95.3% |
| Tayal et al. [18] | 2022 | Deep learning-based method | 4-class: DME, CNV, DRUSEN, NORMAL | 84,484 | 96.5% |
| Proposed method | − | Hybrid of deep learning and machine learning + ensemble machine learning classifiers | 5-class: ARMD, BRVO, CRVO, CSCR, DME | 2,998 | 97.68% |
Our study classifies retinal OCT images into disease classes that differ from those in the reviewed studies and are not available in public datasets. We hope that data for these retinal diseases will become publicly available in the future, so that we can evaluate the proposed OCT image classification system on a public dataset.
This study presented a hybrid ensemble OCT image classification method for diagnosing five classes of retinal diseases, employing an ensemble machine learning classifier as the classifier and a deep learning model as the feature extractor. We identified the deep learning model and ensemble classifiers most suitable for OCT image classification, and the proposed model outperformed the individual classifiers. With an accuracy of 97.68%, the best combination was TL-InceptionResnetV2 as the feature extractor with the aggregation of k-NN and XGBoost as the classifier. The classifier can be deployed as a web service for convenient access to retinal disease diagnosis over the Internet, with a prediction time of only a few seconds per image. This study contributes to the development of accurate multiclass OCT image classification. In the future, we aim to further improve classification performance, and if datasets with the same classes as ours are made public, we will assess the proposed method on them to broaden its applicability. In the medical field, such improved performance can automate OCT image classification, eliminate time-consuming manual tasks, and aid in the prevention of vision loss.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The data used to support this study have not been made publicly available because they are real clinical data from Soonchunhyang University Bucheon Hospital and patient privacy must be protected, as individuals could be identified from these data; however, the data are available from the corresponding author upon reasonable request.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A2C1010362) and the Soonchunhyang University Research Fund.
The authors declare no competing interests.
![]() |
[97] |
H. Gajewski, J. Griepentrog, A descent method for the free energy of multicomponent systems, Discrete Contin. Dyn. Syst., 15 (2006), 505–528. https://doi.org/10.3934/dcds.2006.15.505 doi: 10.3934/dcds.2006.15.505
![]() |
[98] |
C. G. Gal, A Cahn–Hilliard model in bounded domains with permeable walls, Math. Methods Appl. Sci., 29 (2006), 2009–2036. https://doi.org/10.1002/mma.757 doi: 10.1002/mma.757
![]() |
[99] | C. G. Gal, Exponential attractors for a Cahn–Hilliard model in bounded domains with permeable walls, Electron. J. Differ. Equ., (2006), No. 143, 23 pp. https://ejde.math.txstate.edu/Volumes/2006/143/gal.pdf |
[100] | C. G. Gal, Global well-posedness for the non-isothermal Cahn–Hilliard equation with dynamic boundary conditions, Adv. Differ. Equ., 12 (2007), 1241–1274. https://projecteuclid.org/journals/advances-in-differential-equations/volume-12/issue-11/Global-well-posedness-for-the-non-isothermal-Cahn-Hilliard-equation/ade/1355867414.full |
[101] |
C. G. Gal, Well-posedness and long time behavior of the non-isothermal viscous Cahn–Hilliard equation with dynamic boundary conditions, Dyn. Partial Differ. Equ., 5 (2008), 39–67. https://dx.doi.org/10.4310/DPDE.2008.v5.n1.a2 doi: 10.4310/DPDE.2008.v5.n1.a2
![]() |
[102] |
C. G. Gal, Robust exponential attractors for a conserved Cahn–Hillard model with singularly perturbed boundary conditions, Commun. Pure Appl. Anal., 7 (2008), 819–836. https://doi.org/10.3934/cpaa.2008.7.819 doi: 10.3934/cpaa.2008.7.819
![]() |
[103] |
C. G. Gal, The role of surface diffusion in dynamic boundary conditions: Where do we stand? Milan J. Math., 83 (2015), 237–278. https://doi.org/10.1007/s00032-015-0242-1 doi: 10.1007/s00032-015-0242-1
![]() |
[104] |
C. G. Gal, Nonlocal Cahn–Hilliard equations with fractional dynamic boundary conditions, Eur. J. Appl. Math., 28 (2017), 736–788. https://doi.org/10.1017/S0956792516000504 doi: 10.1017/S0956792516000504
![]() |
[105] |
C. G. Gal, M. Grasselli, On the asymptotic behavior of the Caginalp system with dynamic boundary conditions, Commun. Pure Appl. Anal., 8 (2009), 689–710. https://doi.org/10.3934/cpaa.2009.8.689 doi: 10.3934/cpaa.2009.8.689
![]() |
[106] |
C. G. Gal, M. Grasselli, A. Miranville, Cahn–Hilliard–Navier–Stokes systems with moving contact lines, Calc. Var. Partial Differ. Equ., 55 (2016), Art. 50, 47 pp. https://doi.org/10.1007/s00526-016-0992-9 doi: 10.1007/s00526-016-0992-9
![]() |
[107] |
C. Gal, M. Grasselli, H. Wu, Global weak solutions to a diffuse interface model for incompressible two-phase flows with moving contact lines and different densities, Arch. Rational Mech. Anal., 234 (2019), 1–56. https://doi.org/10.1007/s00205-019-01383-8 doi: 10.1007/s00205-019-01383-8
![]() |
[108] |
C. G. Gal, A. Miranville, Uniform global attractors for non-isothermal viscous and non-viscous Cahn–Hilliard equations with dynamic boundary conditions, Nonlinear Anal. Real World Appl., 10 (2009), 1738–1766. https://doi.org/10.1016/j.nonrwa.2008.02.013 doi: 10.1016/j.nonrwa.2008.02.013
![]() |
[109] |
C. G. Gal, A. Miranville, Robust exponential attractors and convergence to equilibria for non-isothermal Cahn–Hilliard equations with dynamic boundary conditions, Discrete Contin. Dyn. Syst. Ser. S, 2 (2009), 113–147. https://doi.org/10.3934/dcdss.2009.2.113 doi: 10.3934/dcdss.2009.2.113
![]() |
[110] |
C. G. Gal, H. Wu, Asymptotic behavior of a Cahn–Hilliard equation with Wentzell boundary conditions and mass conservation, Discrete Contin. Dyn. Syst., 22 (2008), 1041–1063. https://doi.org/10.3934/dcds.2008.22.1041 doi: 10.3934/dcds.2008.22.1041
![]() |
[111] |
H. Garcke, P. Knopf, Weak solutions of the Cahn–Hilliard system with dynamic boundary conditions: a gradient flow approach, SIAM J. Math. Anal., 52 (2020), 340–369. https://doi.org/10.1137/19M1258840 doi: 10.1137/19M1258840
![]() |
[112] |
H. Garcke, P. Knopf, S. Yayla, Long-time dynamics of the Cahn–Hilliard equation with kinetic rate dependent dynamic boundary conditions, Nonlinear Anal., 215 (2022), Paper No. 112619. https://doi.org/10.1016/j.na.2021.112619 doi: 10.1016/j.na.2021.112619
![]() |
[113] |
G. Gilardi, A. Miranville, G. Schimperna, On the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Commun. Pure Appl. Anal., 8 (2009), 881–912. 10.3934/cpaa.2009.8.881 doi: 10.3934/cpaa.2009.8.881
![]() |
[114] |
G. Gilardi, A. Miranville, G. Schimperna, Long time behavior of the Cahn–Hilliard equation with irregular potentials and dynamic boundary conditions, Chin. Ann. Math. Ser. B, 31 (2010), 679–712. https://doi.org/10.1007/s11401-010-0602-7 doi: 10.1007/s11401-010-0602-7
![]() |
[115] | G. R. Goldstein, Derivation and physical interpretation of general boundary conditions, Adv. Differ. Equ., 11 (2006), 457–480. https://projecteuclid.org/journals/advances-in-differential-equations/volume-11/issue-4/Derivation-and-physical-interpretation-of-general-boundary-conditions/ade/1355867704.full |
[116] |
G. R. Goldstein, A. Miranville, G. Schimperna, A Cahn–Hilliard model in a domain with non-permeable walls, Phys. D, 240 (2011), 754–766. https://doi.org/10.1016/j.physd.2010.12.007 doi: 10.1016/j.physd.2010.12.007
![]() |
[117] |
M. Grasselli, A. Miranville, G. Schimperna, The Caginalp phase-field system with coupled dynamic boundary conditions and singular potentials, Discrete Contin. Dyn. Syst., 28 (2010), 67–98. https://doi.org/10.3934/dcds.2010.28.67 doi: 10.3934/dcds.2010.28.67
![]() |
[118] |
M. Grinfeld, A. Novick-Cohen, Counting stationary solutions of the Cahn–Hilliard equation by transversality argument, Proc. Roy. Soc. Edinburgh Sect. A, 125 (1995), 351–370. https://doi.org/10.1017/S0308210500028079 doi: 10.1017/S0308210500028079
![]() |
[119] | A. Haraux, M. A. Jendoubi, Decay estimates to equilibrium for some evolution equations with an analytic nonlinearity, Asymptotic Anal., 26 (2001), 21–36. https://content.iospress.com/articles/asymptotic-analysis/asy437 |
[120] | S.-Z. Huang, Gradient Inequalities, with Applications to Asymptotic Behavior and Stability of Gradient-like Systems, Mathematical Surveys and Monographs, 126, AMS, 2006. http://dx.doi.org/10.1090/surv/126 |
[121] |
M. A. Jendoubi, A simple unified approach to some convergence theorem of L. Simon, J. Funct. Anal., 153 (1998), 187–202. https://doi.org/10.1006/jfan.1997.3174 doi: 10.1006/jfan.1997.3174
![]() |
[122] | N. Kajiwara, Global well-posedness for a Cahn–Hilliard equation on bounded domains with permeable and non-permeable walls in maximal regularity spaces, Adv. Math. Sci. Appl., 27 (2018), 277–298. https://mcm-www.jwu.ac.jp/aikit/AMSA/pdf/abstract/2018/014_2018_top.pdf |
[123] |
R. Kenzler, F. Eurich, P. Maass, B. Rinn, J. Schropp, E. Bohl, W. Dieterich, Phase separation in confined geometries: solving the Cahn–Hilliard equation with generic boundary conditions, Comput. Phys. Commun., 133 (2001), 139–157. https://doi.org/10.1016/S0010-4655(00)00159-4 doi: 10.1016/S0010-4655(00)00159-4
![]() |
[124] |
P. Knopf, K.-F. Lam, Convergence of a Robin boundary approximation for a Cahn–Hilliard system with dynamic boundary conditions, Nonlinearity, 33 (2020), 4191–4235. https://doi.org/10.1088/1361-6544/ab8351 doi: 10.1088/1361-6544/ab8351
![]() |
[125] |
P. Knopf, K.-F. Lam, C. Liu, S. Metzger, Phase-field dynamics with transfer of materials: the Cahn–Hilliard equation with reaction rate dependent dynamic boundary conditions, ESAIM Math. Model. Numer. Anal., 55 (2021), 229–282. https://doi.org/10.1051/m2an/2020090 doi: 10.1051/m2an/2020090
![]() |
[126] |
K.-F. Lam, H. Wu, Convergence to equilibrium for a bulk-surface Allen–Cahn system coupled through a nonlinear Robin boundary condition, Discrete Contin. Dyn. Syst., 40 (2020), 1847–1878. https://doi.org/10.3934/dcds.2020096 doi: 10.3934/dcds.2020096
![]() |
[127] |
S. O. Londen, H. Petzeltová, Regularity and separation from potential barriers for the Cahn–Hilliard equation with singular potential, J. Evol. Equ., 18 (2018), 1381–1393. https://doi.org/10.1007/s00028-018-0446-2 doi: 10.1007/s00028-018-0446-2
![]() |
[128] |
A. Miranville, H. Wu, Long-time behavior of the Cahn–Hilliard equation with dynamic boundary condition, J. Elliptic Parabol. Equ., 6 (2020), 283–309. https://doi.org/10.1007/s41808-020-00072-y doi: 10.1007/s41808-020-00072-y
![]() |
[129] |
A. Miranville, S. Zelik, Exponential attractors for the Cahn–Hilliard equation with dynamical boundary conditions, Math. Meth. Appl. Sci., 28 (2005), 709–735. https://doi.org/10.1002/mma.590 doi: 10.1002/mma.590
![]() |
[130] |
A. Miranville, S. Zelik, The Cahn–Hilliard equation with singular potentials and dynamic boundary conditions, Discrete Contin. Dyn. Syst., 28 (2010), 275–310. https://doi.org/10.3934/dcds.2010.28.275 doi: 10.3934/dcds.2010.28.275
![]() |
[131] |
P. Polačik, F. Simondon, Nonconvergent bounded solutions of semilinear heat equations on arbitrary domains, J. Differ. Equ., 186 (2002), 586–610. https://doi.org/10.1016/S0022-0396(02)00014-1 doi: 10.1016/S0022-0396(02)00014-1
![]() |
[132] |
J. Prüss, R. Racke, S.-M. Zheng, Maximal regularity and asymptotic behavior of solutions for the Cahn–Hilliard equation with dynamic boundary conditions, Ann. Mat. Pura Appl., 185 (2006), 627–648. https://doi.org/10.1007/s10231-005-0175-3 doi: 10.1007/s10231-005-0175-3
![]() |
[133] | J. Prüss, M. Wilke, Maximal Lp-regularity and long-time behaviour of the non-isothermal Cahn–Hilliard equation with dynamic boundary conditions, in Partial Differential Equations and Functional Analysis, Oper. Theory Adv. Appl., 168, Birkhäuser, Basel, (2006), 209–236. https://doi.org/10.1007/3-7643-7601-5_13 |
[134] | R. Racke, S.-M. Zheng, The Cahn–Hilliard equation with dynamical boundary conditions, Adv. Differ. Equ., 8 (2003), 83–110. https://projecteuclid.org/journals/advances-in-differential-equations/volume-8/issue-1/The-Cahn-Hilliard-equation-with-dynamic-boundary-conditions/ade/1355926869.full |
[135] |
P. Rybka, K. H. Hoffmann, Convergence of solutions to Cahn–Hilliard equation, Comm. Partial Differ. Equ., 24 (1999), 1055–1077. https://doi.org/10.1080/03605309908821458 doi: 10.1080/03605309908821458
![]() |
[136] |
G. Schimperna, Global attractors for Cahn–Hilliard equations with nonconstant mobility, Nonlinearity, 20 (2007), 2365–2387. https://doi.org/10.1088/0951-7715/20/10/006 doi: 10.1088/0951-7715/20/10/006
![]() |
[137] |
W.-X. Shen, S.-M. Zheng, On the coupled Cahn–Hilliard equations, Comm. Partial Differ. Equ., 18 (1993), 701–727. https://doi.org/10.1080/03605309308820946 doi: 10.1080/03605309308820946
![]() |
[138] |
W.-X. Shen, S.-M. Zheng, Maximal attractor for the coupled Cahn–Hilliard equations, Nonlinear Anal., 49 (2002), 21–34. https://doi.org/10.1016/S0362-546X(00)00246-7 doi: 10.1016/S0362-546X(00)00246-7
![]() |
[139] |
L. Simon, Asymptotics for a class of nonlinear evolution equation with applications to geometric problems, Ann. Math., 118 (1983), 525–571. https://doi.org/10.2307/2006981 doi: 10.2307/2006981
![]() |
[140] | R. Temam, Infinite-dimensional Dynamical Systems in Mechanics and Physics, Appl. Math. Sci., 68, Springer-Verlag, New York, 1988. https://doi.org/10.1007/978-1-4612-0645-3 |
[141] |
J.-C. Wei, M. Winter, Stationary solutions for the Cahn–Hilliard equation, Ann. Inst. H. Poincaré, 15 (1998), 459–492. https://doi.org/10.1016/S0294-1449(98)80031-0 doi: 10.1016/S0294-1449(98)80031-0
![]() |
[142] | H. Wu, Convergence to equilibrium for a Cahn–Hilliard model with the Wentzell boundary condition, Asymptot. Anal., 54 (2007), 71–92. https://content.iospress.com/articles/asymptotic-analysis/asy839 |
[143] |
H. Wu, M. Grasselli, S.-M. Zheng, Convergence to equilibrium for a parabolic-hyperbolic phase-field system with dynamical boundary condition, J. Math. Anal. Appl., 329 (2007), 948–976. https://doi.org/10.1016/j.jmaa.2006.07.011 doi: 10.1016/j.jmaa.2006.07.011
![]() |
[144] |
H. Wu, S.-M. Zheng, Convergence to equilibrium for the Cahn–Hilliard equation with dynamic boundary condition, J. Differ. Equ., 204 (2004), 511–531. https://doi.org/10.1016/j.jde.2004.05.004 doi: 10.1016/j.jde.2004.05.004
![]() |
[145] |
S.-M. Zheng, Asymptotic behavior of solution to the Cahn–Hillard equation, Appl. Anal., 23 (1986), 165–184. https://doi.org/10.1080/00036818608839639 doi: 10.1080/00036818608839639
![]() |
[146] | S.-M. Zheng, Nonlinear Evolution Equations, Pitman Monographs and Surveys in Pure and Applied Mathematics, 133, Chapman & Hall/CRC, Boca Raton, Florida, 2004. https://doi.org/10.1201/9780203492222 |