Review

FinTech in sustainable banking: An integrated systematic literature review and future research agenda with a TCCM framework

  • Received: 17 November 2023 Revised: 13 February 2024 Accepted: 04 March 2024 Published: 18 March 2024
  • JEL Codes: M20, Q5, Q55

  • Academic interest in understanding the role of financial technology (FinTech) in sustainable development has grown exponentially in recent years. Many studies have highlighted the context, yet no reviews have explored the integration of FinTech and sustainability through the lens of the banking aspect. Therefore, this study sheds light on the literature trends associated with FinTech and sustainable banking using an integrated bibliometric and systematic literature review (SLR). The bibliometric analysis explored publication trends, keyword analysis, top publisher, and author analysis. With the SLR approach, we pondered the theory-context-characteristics-methods (TCCM) framework with 44 articles published from 2002 to 2023. The findings presented a substantial nexus between FinTech and sustainable banking, showing an incremental interest among global scholars. We also provided a comprehensive finding regarding the dominant theories (i.e., technology acceptance model and autoregressive distributed lag model), specific contexts (i.e., industries and countries), characteristics (i.e., independent, dependent, moderating, and mediating variables), and methods (i.e., research approaches and tools). This review is the first to identify the less explored tie between FinTech and sustainable banking. The findings may help policymakers, banking service providers, and academicians understand the necessity of FinTech in sustainable banking. The future research agenda of this review will also facilitate future researchers to explore the research domain to find new insights.

    Citation: Md. Shahinur Rahman, Iqbal Hossain Moral, Md. Abdul Kaium, Gertrude Arpa Sarker, Israt Zahan, Gazi Md. Shakhawat Hossain, Md Abdul Mannan Khan. FinTech in sustainable banking: An integrated systematic literature review and future research agenda with a TCCM framework[J]. Green Finance, 2024, 6(1): 92-116. doi: 10.3934/GF.2024005




    Common macular and vascular diseases include age-related macular degeneration (ARMD), diabetic macular edema (DME), branch retinal vein occlusion (BRVO), central retinal vein occlusion (CRVO), and central serous chorioretinopathy (CSCR), which are among the leading causes of visual impairment and blindness worldwide [1,2,3]. According to the World Health Organization (WHO), DME, which primarily affects working-age adults, affected 425 million people worldwide in 2017 and is expected to affect 629 million people by 2045 [4]. The WHO also estimates that 196 million people had ARMD in 2020; this number is expected to rise to 288 million by 2040 [5]. The prevalence of ARMD in elderly people is 40% at the age of 70 years, rising to 70% at the age of 80 years. Rogers et al. [6] found that, in 2008, BRVO and CRVO affected 13.9 million and 2.5 million of the world's population aged 30 years and older, respectively. Men have a higher prevalence of CSCR than women [7]. A large population is afflicted by these diseases, and projections suggest that this number will escalate in the future. However, these diseases are treatable in their early stages, and patients can recover lost vision through early detection and treatment [8,9,10].

    Optical coherence tomography (OCT) is a noninvasive imaging modality that provides high-resolution information over a cross-sectional area. OCT retinal imaging enables visualization of the thickness, structure, and detail of the various layers of the retina. In addition, when the retina develops a disease, OCT enables visualization of abnormal features and damaged retinal structures [11]. Therefore, retinal OCT images are widely used in the medical field to monitor patients prior to treatment and to diagnose various diseases.

    For several years, ophthalmologists have analyzed the comprehensive information inside the retina for retinal care, treatment, and diagnosis using retinal OCT images in clinical settings. Clinicians perform these tasks manually, step by step; as a result, manual analysis is time-consuming when there are numerous OCT images. Even with great expertise, this analysis may not be accurate [12]. Automated techniques based on deep learning (DL) or machine learning have been proposed as a solution to overcome this limitation.

    Recently, computer algorithms based on artificial intelligence, DL, and machine learning have been proposed for the automatic diagnosis of various retinal diseases and have been applied in clinical health care. Han et al. [13] modified three well-known convolutional neural network (CNN) models to distinguish normal retinas from three subtypes of neovascular age-related macular degeneration (nAMD). The classification layers of the original CNN models were replaced by new layers: four fully connected layers and three dropout layers, with a leaky rectified linear unit (Leaky ReLU) as the activation function. The modified models were trained using the transfer learning technique and tested on 920 OCT images; the VGG-16 model achieved an accuracy of 87.4%. Sotoudeh-Paima et al. [14] classified OCT images as normal, AMD, or choroidal neovascularization (CNV) using a multiscale CNN, which achieved a classification accuracy of 93.40% on a public dataset. Elaziz et al. [15] developed a four-class classification method for assessing retinal diseases from OCT images based on an ensemble DL model and machine learning. First, features were extracted from two models, MobileNet and DenseNet, and concatenated as the full features of the input images. Then, feature selection was performed to remove irrelevant features and feed the useful features into machine learning classifiers. A total of 968 OCT images were used to evaluate classification performance, and an accuracy of 94.31% was achieved. Another study, by Liu et al. [16], used a DL model to extract attention features from OCT images and used them as guiding features for the CNV, DME, drusen, and normal classes. The classification performance was assessed on public datasets, and an average accuracy of 95.10% was achieved. Minagi et al. [17] used transfer learning with universal adversarial perturbations (UAPs) for classification with a limited dataset. Three types of medical images, including OCT images, were used to assess diseases, and DL models pretrained on the ImageNet dataset were employed. The UAP algorithm was used to generate a training set from the data provided to train the DL model. A total of 11,200 OCT images were used for training and assessing the model's performance, and a classification accuracy of 95.3% was achieved for the four classes: CNV, DME, drusen, and normal. Such adversarial retraining is an established way to improve the classification performance of DL models. Tayal et al. [18] presented a four-class ocular disease classification based on three CNN models using OCT images. Images were enhanced before being fed to the CNN models. To assess the performance of the presented method, 6,678 publicly available OCT images were evaluated, and an accuracy of 96.50% was achieved with a CNN model comprising nine layers, which outperformed the experimented CNN models with five and seven layers.

    According to the literature, retinal OCT classification has been developed using DL and DL-based methods such as transfer learning, smoothing generative adversarial networks, adversarial retraining, and multiscale CNNs. These methods improve model performance by fine-tuning knowledge from previous tasks for the OCT problem, increasing the dataset size for training, adapting how data are fed to the training model, and changing the training input image sizes. However, the reported classification accuracies remain below 97.00%, indicating potential for further improvement. Moreover, these studies classify retinal diseases into fewer than five classes. This study aims to improve classification accuracy and to detect five classes of retinal disease, more than in the previous studies highlighted in the literature.

    In this study, we propose an automatic method based on a hybrid of deep learning and ensemble machine learning for screening five different retinal diseases from OCT images, with the goal of improving OCT image classification performance. The proposed method improves classification accuracy, outperforming the standalone classifiers. In addition, it can be trained on a smaller dataset from our hospital that has been strictly labeled by experts. Moreover, the proposed method can be deployed on a web server for open access, returning test results within seconds.

    All OCT images were collected from Soonchunhyang University's Bucheon Hospital and were normalized after approval by the Bucheon Hospital's Institutional Review Board (IRB). OCT images were captured using DRI-OCT (Topcon Medical System, Inc., Oakland, NJ, USA). The scan range was 3–12 mm in the horizontal and vertical directions, with a lateral resolution of 20 μm and an in-depth resolution of 8 μm. The shooting speed was 100,000 A-scans per second. The OCT images were collected in two batches: the first comprised 2,000 images captured between April and September 2021, while the second consisted of 998 images captured over approximately five months, from September 2021 to January 2022. In total, 2,998 OCT images were collected; these were labeled by ophthalmologists for five retinal diseases (ARMD: 740, BRVO: 450, CRVO: 299, CSCR: 749, and DME: 760) as the ground truth.

    This study was approved by the Institutional Review Board (IRB) from Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea (IRB approval number: 2021-05-001). All methods were performed in accordance with relevant guidelines and regulations. Informed consent was obtained from all subjects.

    Image processing is a technique for performing various operations on the original images to convert them into a format suitable for DL models or to extract useful features. In image classification based on deep learning, image processing is an essential initial step before feeding an image to a CNN model. A CNN model requires a fixed input size, and higher-resolution images demand longer computing times. To shorten computation time and meet the input size required by the CNN models, all OCT images were downsized to 300 pixels in height and 500 pixels in width. The OCT image dataset was split into an 80% training set and a 20% testing set. The training set was used to train the deep learning model, and the testing set was used to assess performance.
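    As a sketch of this preprocessing step (a minimal stand-in: the array shapes, the `nearest_resize` helper, and the random labels are all hypothetical, not the actual DRI-OCT files or pipeline), the downsizing to 300×500 and the 80/20 split could look like:

```python
import numpy as np

# Hypothetical stand-ins for the hospital OCT dataset: a stack of
# grayscale images and their integer disease labels (0-4).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(50, 600, 1000), dtype=np.uint8)
labels = rng.integers(0, 5, size=50)

def nearest_resize(img, out_h=300, out_w=500):
    """Downsize one image to out_h x out_w by nearest-neighbor sampling."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

resized = np.stack([nearest_resize(im) for im in images])

# 80/20 train/test split via a shuffled index permutation.
perm = rng.permutation(len(resized))
cut = int(0.8 * len(resized))
train_idx, test_idx = perm[:cut], perm[cut:]
x_train, y_train = resized[train_idx], labels[train_idx]
x_test, y_test = resized[test_idx], labels[test_idx]
```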

    The size of the dataset has a significant impact on DL performance: a larger dataset generally enables better performance. However, in the medical field, most medical datasets are limited in size. Data augmentation is a technique developed to overcome this limitation by performing different operations on the available data to create new data, thereby enlarging the dataset. Data augmentation is also used to enhance performance [19], generalize the model [20], and avoid overfitting [21]. We used augmentation operations from the Python library imgaug, including vertical flip, rotation, scaling, brightness, saturation, contrast, contrast enhancement, and equalization. The OCT images were rotated at angles of 170, 175, 185, and 190 degrees, which suit the rectangular image shape without losing information from the original OCT images; scaled by a random factor between 0.01 and 0.12; brightened at levels from 1 to 3; saturated at levels from 1 to 5, increasing by one at each level; contrast-adjusted with random contrast values from 0.2 to 3; contrast-enhanced at levels from 1 to 1.5; and equalized at levels from 0.9 to 1.4. At the end of the data augmentation process, one OCT image serves as the basis for generating 29 augmented images; the training set therefore comprised a total of 69,455 OCT images, including the original samples. The acquired OCT and augmented images are shown in Figure 1. Data augmentation was applied only to the training set used to train the proposed method. After augmentation, the OCT images were partitioned with the 10-fold cross-validation technique into folds for training the model (training data) and for testing the model after every epoch (validation data).
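    The pipeline above uses imgaug; purely as an illustration of the idea, a numpy-only sketch of three of the listed operations follows (the `augment` function and its factor values are hypothetical, not the paper's exact imgaug settings):

```python
import numpy as np

def augment(img):
    """Generate augmented variants of one OCT image (uint8, H x W).
    A simplified stand-in for the imgaug pipeline described above:
    one vertical flip, three brightness levels, two contrast levels."""
    out = [np.flipud(img)]  # vertical flip
    for factor in (1.5, 2.0, 3.0):  # brightness scaling
        out.append(np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8))
    for c in (0.5, 2.0):  # contrast stretch about the image mean
        mean = img.mean()
        out.append(np.clip((img.astype(float) - mean) * c + mean, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(300, 500), dtype=np.uint8)
variants = augment(img)  # 6 augmented images from one original
```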

    Figure 1.  OCT images before and after performing data augmentation. (a) represents the original OCT image. (b), (c), and (d) illustrate brightness adjustments. (e) and (f) demonstrate contrast modifications. (g), (h), and (i) display contrast enhancement. (j) and (k) depict equalization. (l) represents a vertical flip. (m), (n), (o), (p), (q), and (r) indicate angle rotations. (s), (t), (u), (v), and (w) illustrate saturation changes. (x), (y), (z), (A), (B), and (C) represent scaling variations.

    Figure 2 shows the architecture of the proposed method, which comprises three major blocks: feature extraction, classification, and boosting performance. First, transfer learning based on CNN models extracts convolutional features from the OCT images. Second, various machine learning algorithms classify the OCT images based on the features extracted by the CNN model. Finally, the ensemble algorithm fuses the distribution probabilities of the same class and predicts the retinal disease class based on probability. Each block of the proposed architecture is described in detail in the following subsections.

    Figure 2.  System architecture overview of the proposed method. The proposed method accepts images with a resolution of 500 pixels in width and 300 pixels in height. CNN models extract features from OCT images, which are classified using machine learning algorithms. A voting classifier ensembles the output probabilities to predict the retinal disease.

    Transfer learning is a technique that transfers knowledge from a previously learned, related task to improve learning on a new task. Training a CNN model from scratch is computationally expensive and time-consuming; moreover, an extensive dataset is required to achieve good performance. Transfer learning was developed to overcome these drawbacks of DL [22]. To retrain a model on a new task based on prior knowledge, the pretrained base layers are frozen and only a small set of top layers is trained and fine-tuned. In this study, the transfer learning CNN (TL-CNN) models EfficientNetB0 [23], InceptionResNetV2 [24], InceptionV3 [25], ResNet50 [26], VGG16 [27], and VGG19 [28] were selected and updated. The modified model names start with TL, indicating transfer learning, and end with the original CNN names: TL-EfficientNetB0, TL-InceptionResNetV2, TL-InceptionV3, TL-ResNet50, TL-VGG16, and TL-VGG19. The original CNN models were created for generic image classification tasks: they were trained and tested on a large dataset (ImageNet) to categorize 1,000 different types of images. To use a CNN model with transfer learning for the specific problem of categorizing retinal OCT images, its classification layers must be modified to match the target classes. The new classification head stacks a GlobalAveragePooling2D layer, one normalization layer, and two dense layers; the first dense layer has 1,024 units with the ReLU activation function, and the final dense layer has five output units. Finally, the updated model is retrained to fine-tune the feature representation of the pretrained base model, making it more relevant for OCT image classification. The output is a five-element vector of class probabilities produced by the Softmax activation function.
    As mentioned previously, the TL-CNN models are also used to extract convolutional features from the OCT images. These features are taken at the GlobalAveragePooling2D layer of the classification head and are one-dimensional; different models yield different numbers of features depending on their structure and convolution filters.
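    To make this extraction step concrete: the GlobalAveragePooling2D layer collapses each channel of a convolutional feature map to its mean, yielding the one-dimensional vectors described above. A minimal numpy sketch (toy shapes, not a real TL-CNN):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse an (H, W, C) feature map into a C-dimensional vector,
    as GlobalAveragePooling2D does in the TL-CNN classification head."""
    return feature_maps.mean(axis=(0, 1))

# Toy 2x2 feature map with 3 channels.
fmap = np.arange(12, dtype=float).reshape(2, 2, 3)
vec = global_average_pool(fmap)  # -> array([4.5, 5.5, 6.5])
```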

    Six TL-CNN models independently extracted the features at their GlobalAveragePooling2D layers (TL-EfficientNetB0: 1,280 features, TL-InceptionResnetV2: 1,536 features, TL-InceptionV3: 2,048 features, TL-ResNet50: 2,048 features, TL-VGG16: 512 features, TL-VGG19: 512 features). Then, the extracted features of each TL-CNN model were used as input to six popular machine learning classifiers: support vector machine (SVM) [29], k-nearest neighbors (k-NN) [30], decision tree (DT) [31], random forest (RF) [32], Naïve Bayes [33], and XGBoost [34]. These classifiers use different techniques for learning and distinguishing the classes of the data.
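    This classification step can be sketched with scikit-learn; the synthetic blobs below stand in for the TL-CNN feature vectors (the feature dimension, sample count, and classifier settings are illustrative assumptions, not the paper's tuned configuration):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-ins for TL-CNN feature vectors (e.g., 1,280-dim from
# TL-EfficientNetB0); here 64-dim vectors for 5 well-separated classes.
features, labels = make_blobs(n_samples=200, n_features=64, centers=5,
                              cluster_std=0.5, random_state=0)

classifiers = {
    "SVM": SVC(probability=True, random_state=0),  # probability=True enables soft voting later
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for clf in classifiers.values():
    clf.fit(features, labels)

train_acc = (classifiers["k-NN"].predict(features) == labels).mean()
```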

    Individual machine learning classifiers yield different accuracies, because each has its own way of learning to separate classes from the given features. Therefore, an ensemble method is used to aggregate the class probability distributions of two classifiers. Based on experiments, the proposed method selects the two best-performing classifiers (k-NN and XGBoost) for aggregation. The ensemble uses soft voting, which performs better than the individual models [35]. Whereas hard voting predicts the class label most frequently predicted by the classifiers, soft voting predicts the class label by averaging the class probabilities p. Table 1 presents the proposed algorithm, which includes image processing, data splitting, data augmentation, feature extraction, classification, and an ensemble of classifiers:

    $y_{FC} = \arg\max_i \sum_{k=1}^{m} w_k \, p_{ik}$ (1)
    Table 1.  Algorithm of the proposed method.
    Algorithm 1: Proposed OCT images Classification
    1: procedure OCT IMAGES PROCESSING
    2:       return preprocessed-images

    3: procedure SPLIT-DATA (OCT-data)
    4: train-data, test-data, train-labels, test-labels = split (OCT-images, labels)

    5: procedure DATA AUGMENTATION (train-data)
    6: augmented images = augmentation (vertical flip, rotation, scale, brightness, saturation, contrast, enhance and
    7: contrast, and equalization)
    8:       return augmented-images

    9: procedure 10-FOLD_CROSS_VALIDATION (augmented images, labels)
    10: Fold1, Fold2, ……Fold10 = train_test_split(augmented images, labels)
    11:       return Fold1-10

    12: procedure FEATURE_EXTRACTION (Fold1-10, test-data, test-labels)
    13: TL-CNN models = modify the convolutional neural network (CNN) models
    14: pre-train the TL-CNN models, small top layers are trained, and the final layers are frozen.
    15: extracted features = TL-CNN model at GlobalAveragePooling2D layers
    16: return extracted features saved in csv format
    17: procedure CLASSIFICATION (extracted features, labels)
    18: classifiers = [svm, k-NN, DT, RF, Naïve-Bayes, and XGBoost]
    19:   for clsf in range (0, 6):
    20:       predicted-labels = classifiers[clsf]. fit (extracted-features)
    21:       training-accuracy = accuracy (predicted-labels, labels)
    22:       save_train_weight
    23: voting = "soft"
    24: ML1 = k-NN (train-data, train-labels, test-data)
    25: ML2 = XGBoost (train-data, train-labels, test-data)
    26: procedure ENSEMBLE_CLASSIFIERS (train-data, train-labels, test-data)
    27: ensemble-classifiers = concatenate(ML1, ML2)
    28: ensemble-classifiers.fit (train-data, train-labels)
    29: predictions = ensemble-classifiers.predict(test-data)
    30: save_training_weights, results_visualization


    where w_k is the weight of machine learning classifier k (either k-NN or XGBoost), which automatically learns from disease features in OCT images and then identifies the type of disease from the input data; i is the class label of the retinal diseases, where i ∈ {0: ARMD, 1: BRVO, 2: CRVO, 3: CSCR, 4: DME}; and p_ik is the probability assigned to class i by classifier k.
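    Equation (1) can be sketched directly in numpy (the probability vectors and the equal weights below are made-up toy values, not outputs of the trained classifiers):

```python
import numpy as np

def soft_vote(p_knn, p_xgb, w=(0.5, 0.5)):
    """Equation (1): y_FC = argmax_i sum_k w_k * p_ik, aggregating the
    class probabilities of the two selected classifiers (k-NN, XGBoost)."""
    combined = w[0] * np.asarray(p_knn) + w[1] * np.asarray(p_xgb)
    return int(np.argmax(combined)), combined

# Toy probabilities over the five classes {0: ARMD, 1: BRVO, 2: CRVO, 3: CSCR, 4: DME}.
p_knn = [0.10, 0.60, 0.10, 0.10, 0.10]
p_xgb = [0.05, 0.30, 0.50, 0.10, 0.05]
label, combined = soft_vote(p_knn, p_xgb)  # -> label 1 (BRVO)
```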

    The proposed OCT image classification method was developed using Python 3.7, TensorFlow 2.6.0, and scikit-learn. Experiments were run on a personal computer with the Windows 10 operating system, an Intel(R) Xeon(R) Silver 4114 @ 2.20 GHz CPU, 192 GB RAM, and an NVIDIA TITAN RTX GPU.

    The proposed OCT image classification method was trained using augmented OCT images and evaluated using a test set. Training proceeded in two stages: first, the six TL-CNN models were trained to extract features from the OCT images; second, the machine learning classifiers were trained on the extracted features.

    Six TL-CNN models were separately trained using a combination of the training set and its augmented images. The combined data were split using a 10-fold cross-validation algorithm to separate images for training, validate the model during training, and prevent overfitting. Each TL-CNN model was trained with a fixed batch size of 64, 100 epochs, and the Adam optimizer with a learning rate of 0.0001, the default provided by the TensorFlow library. With 100 epochs, each model passes over the same data 100 times, and performance improves as the weights are updated from the loss computed in each pass. The weights of each TL-CNN model were recorded in a separate file after training and were then used to extract features from the training and testing sets. Next, the machine learning models were trained on the convolutional features extracted by the TL-CNN models to obtain class probabilities. The six machine learning models were trained separately, and their weights were recorded after training completed. Finally, an ensemble method based on soft voting averaged the class probabilities of the two selected classifiers to obtain an effective final class prediction.
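    The 10-fold partitioning described above can be sketched with scikit-learn's KFold (the sample count here is a toy placeholder for the 69,455 augmented images):

```python
import numpy as np
from sklearn.model_selection import KFold

x = np.arange(100).reshape(100, 1)  # toy stand-in for the augmented training set
kf = KFold(n_splits=10, shuffle=True, random_state=0)
folds = list(kf.split(x))  # ten (train_idx, val_idx) pairs

# Each iteration trains on nine folds and validates on the held-out fold.
val_sizes = [len(val) for _, val in folds]
```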

    The results of the proposed OCT image classification are divided into three parts: classification results, deployment of the classification results to web services, and a comparison of the results with similar studies in terms of classification accuracy.

    A test set was used to evaluate the performance of the proposed method after training. The same preprocessing was applied to the test set as to the training set, but without data augmentation. The test set contained 601 OCT images, which were used to assess classification performance. The six individually trained TL-CNN models extracted features from the OCT images, and the extracted features were stored in pickle format. The six machine learning classifiers then discriminated the classes of the OCT images based on the features extracted by the TL-CNNs. Standard statistical measures were computed to quantify classification ability across the classes: sensitivity, specificity, precision, and accuracy. The relationship between sensitivity and specificity for the various categories was shown with receiver operating characteristic (ROC) curves, and the confusion matrix was analyzed to indicate correct and incorrect class predictions. Table 2 lists the test results using TL-EfficientNetB0 as an extractor with seven classifiers, including the ensemble classifier; the ensemble classifier outperformed the others, with a sensitivity, specificity, precision, and accuracy of 96.17, 98.92, 95.89 and 95.85%, respectively. The second highest performance was achieved by the k-NN classifier, with a sensitivity, specificity, precision, and accuracy of 87.37, 96.95, 88.82 and 88.89%, respectively. The results for the other machine learning classifiers varied without a clear pattern.
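    The four reported metrics follow directly from the confusion matrix; a small sketch of how they can be computed (the 3-class toy matrix is invented for illustration; the paper's matrices are 5-class):

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged sensitivity, specificity, precision, and overall
    accuracy from a confusion matrix (rows = true, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp  # missed cases per class
    fp = cm.sum(axis=0) - tp  # false alarms per class
    tn = cm.sum() - tp - fn - fp
    sensitivity = (tp / (tp + fn)).mean()
    specificity = (tn / (tn + fp)).mean()
    precision = (tp / (tp + fp)).mean()
    accuracy = tp.sum() / cm.sum()
    return sensitivity, specificity, precision, accuracy

cm = [[8, 1, 1],   # toy 3-class confusion matrix
      [0, 9, 1],
      [1, 0, 9]]
sens, spec, prec, acc = macro_metrics(cm)
```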

    Table 2.  Performance summary of the proposed classification through feature extraction using TL-EfficientNetB0, six classifiers, and ensemble voting classifiers. Various sensitivities, specificities, precisions, and accuracies are obtained using different classifiers. The proposed classification method with ensemble classifiers outperforms on all statistical measures.
    TL-CNN model Machine Learning Sensitivity Specificity Precision Accuracy
    TL-EfficientNetB0 SVM 86.79% 96.78% 88.64% 88.39%
    k-NN 87.37% 96.95% 88.82% 88.89%
    DT 85.80% 96.41% 84.95% 86.90%
    RF 66.20% 92.60% 81.08% 75.95%
    Naive Bayes 86.11% 96.40% 86.58% 86.90%
    XGBoost 85.86% 96.45% 86.38% 87.23%
    Ensemble 96.17% 98.92% 95.89% 95.85%


    Table 3 shows the classification results when using TL-InceptionResnetV2 as an extractor with the seven classifiers; the ensemble classifier outperformed the others, with a sensitivity, specificity, precision, and accuracy of 97.42, 99.40, 97.49 and 97.68%, respectively. The second highest performance was achieved by the k-NN classifier, with a sensitivity, specificity, precision, and accuracy of 87.37, 96.48, 88.19 and 87.56%, respectively. With the same extractor, the classification performance of XGBoost was similar to that of k-NN. Table 4 lists the evaluation results when using the TL-InceptionV3 extractor with the seven classifiers; the ensemble classifier again outperformed the other methods, with a sensitivity, specificity, precision, and accuracy of 91.34, 97.59, 91.03 and 91.04%, respectively. The second highest performance was achieved by XGBoost, with a sensitivity, specificity, precision, and accuracy of 84.42, 95.10, 82.88, and 82.91%, respectively. Table 5 lists the classification results when using TL-ResNet50 as a feature extractor with the seven classifiers; the ensemble classifier achieved the best sensitivity, specificity, precision, and accuracy of 96.46, 99.14, 96.76 and 96.68%, respectively. The second highest performance was achieved by XGBoost, with a sensitivity, specificity, precision, and accuracy of 87.63, 96.59, 88.27 and 87.73%, respectively. The performances of SVM and k-NN were comparable and better than those of the remaining three classifiers. Table 6 lists the test results with TL-VGG16 as a feature extractor and the seven classifiers; the ensemble classifier exhibited the best performance, with a sensitivity, specificity, precision, and classification accuracy of 92.07, 98.00, 92.60 and 92.54%, respectively. The XGBoost classifier had the second highest performance with TL-VGG16 as the feature extractor, obtaining a sensitivity, specificity, precision, and accuracy of 80.48, 94.91, 81.44 and 82.26%, respectively; SVM and k-NN performed similarly. Table 7 lists the classification results of the TL-VGG19 model for feature extraction with the various classifiers. The ensemble algorithm outperformed the other classifiers; its sensitivity, specificity, precision, and accuracy are 93.86, 93.40, 93.44 and 93.86%, respectively. The second- and third-highest performances were achieved by XGBoost and SVM, respectively.

    Table 3.  Performance summary of the proposed classification through feature extraction using TL-InceptionResnetV2, six individual classifiers, and an ensemble voting classifier. Different classifiers yield different sensitivities, specificities, precisions, and accuracies; the proposed method with the ensemble classifier outperforms the others on all metrics.
    TL-CNN model Machine Learning Sensitivity Specificity Precision Accuracy
    TL-InceptionResnetV2 SVM 86.27% 96.13% 86.86% 86.40%
    k-NN 87.37% 96.48% 88.19% 87.56%
    DT 83.77% 95.54% 83.37% 84.41%
    RF 72.93% 93.79% 80.67% 79.27%
    Naive Bayes 79.66% 93.41% 78.25% 77.78%
    XGBoost 87.29% 96.47% 88.05% 87.56%
    Ensemble 97.42% 99.40% 97.49% 97.68%

    Table 4.  Performance summary of the proposed classification through feature extraction using TL-InceptionV3, six individual classifiers, and an ensemble voting classifier. Different classifiers yield different sensitivities, specificities, precisions, and accuracies; the proposed method with the ensemble classifier outperforms the others on all metrics.
    TL-CNN models Machine Learning Sensitivity Specificity Precision Accuracy
    TL-InceptionV3 SVM 82.42% 94.50% 80.85% 81.09%
    k-NN 83.05% 94.61% 80.94% 81.43%
    DT 81.54% 94.18% 79.65% 80.09%
    RF 79.50% 94.01% 80.52% 79.77%
    Naive Bayes 65.72% 86.52% 2.58% 61.33%
    XGBoost 84.42% 95.10% 82.88% 82.91%
    Ensemble 91.34% 97.59% 91.03% 91.04%

    Table 5.  Performance summary of the proposed classification through feature extraction using TL-ResNet50, six individual classifiers, and an ensemble voting classifier. Different classifiers yield different sensitivities, specificities, precisions, and accuracies; the proposed method with the ensemble classifier outperforms the others on all metrics.
    TL-CNN model Machine Learning Sensitivity Specificity Precision Accuracy
    TL-ResNet50 SVM 86.25% 96.02% 85.12% 85.95%
    k-NN 85.75% 96.00% 86.26% 85.74%
    DT 82.04% 94.94% 82.34% 82.59%
    RF 52.90% 88.63% 71.77% 65.67%
    Naive Bayes 67.71% 89.82% 72.49% 64.68%
    XGBoost 87.63% 96.59% 88.27% 87.73%
    Ensemble 96.46% 99.14% 96.76% 96.68%

    Table 6.  Performance summary of the proposed classification through feature extraction using TL-VGG16, six individual classifiers, and an ensemble voting classifier. Different classifiers yield different sensitivities, specificities, precisions, and accuracies; the proposed method with the ensemble classifier outperforms the others on all metrics.
    TL-CNN model Machine Learning Sensitivity Specificity Precision Accuracy
    TL-VGG16 SVM 76.39% 93.49% 76.96% 78.28%
    k-NN 74.55% 92.33% 75.70% 74.96%
    DT 57.42% 84.86% 55.72% 58.37%
    RF 50.55% 86.58% 38.39% 63.18%
    Naive Bayes 59.53% 85.08% 58.84% 59.20%
    XGBoost 80.48% 94.91% 81.44% 82.26%
    Ensemble 92.07% 98.00% 92.60% 92.54%

    Table 7.  Performance summary of the proposed classification through feature extraction using TL-VGG19, six individual classifiers, and an ensemble voting classifier. Different classifiers yield different sensitivities, specificities, precisions, and accuracies; the proposed method with the ensemble classifier outperforms the others on all metrics.
    TL-CNN model Machine Learning Sensitivity Specificity Precision Accuracy
    TL-VGG19 SVM 79.90% 93.74% 78.77% 78.82%
    k-NN 69.16% 90.99% 70.56% 71.64%
    DT 53.23% 82.94% 53.41% 54.73%
    RF 48.29% 85.62% 37.71% 60.36%
    Naive Bayes 56.41% 82.96% 54.70% 54.89%
    XGBoost 82.44% 95.30% 81.90% 83.58%
    Ensemble 93.86% 93.40% 93.44% 93.86%


    Six TL-CNN models were compared, and TL-InceptionResNetV2 achieved a better performance than the other five models used in this study, with a sensitivity, specificity, precision, and accuracy of 97.42, 99.40, 97.49, and 97.68%, respectively. The ensemble algorithm outperformed the individual classifiers for every TL-CNN model. Among the individual classifiers, k-NN and XGBoost performed better than the remaining four; accordingly, the ensemble of k-NN and XGBoost also achieved better performance than either k-NN or XGBoost alone.
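The winning configuration pairs a frozen CNN feature extractor with a soft-voting ensemble of k-NN and XGBoost. A dependency-free sketch on synthetic feature vectors, with scikit-learn's GradientBoostingClassifier standing in for XGBoost (swap in xgboost.XGBClassifier to reproduce the setup described above):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for CNN feature vectors: 5 classes, 40 samples each.
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 40, 32, 5
X = np.vstack([rng.normal(loc=c, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        # Stand-in for xgboost.XGBClassifier, to keep the sketch dependency-free.
        ("gb", GradientBoostingClassifier(n_estimators=50)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X, y)
```

Soft voting averages the two classifiers' probability estimates, which is why the combination can beat either base classifier on its own.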

    Figure 3 shows the ROC curves of the proposed classification method in its best configuration, TL-InceptionResnetV2 with the ensemble classifier (k-NN and XGBoost). The AUCs for the ARMD, BRVO, CRVO, CSCR, and DME classes are 0.99, 0.96, 0.99, 0.99, and 0.98, respectively; the trade-off between sensitivity and specificity across the five classes is of primary interest. The confusion matrix, implemented with the Sklearn library in Python, shows the numbers of correct and incorrect predictions for all classes; the size of the test set is essential for demonstrating the robustness of the classification. Figure 4 shows the confusion matrix of the proposed method in its best configuration: 148 of 149 ARMD OCT images were predicted correctly; 85 of 91 BRVO images were predicted correctly (3 misclassified as ARMD and 3 as DME); 59 of 60 CRVO images were predicted correctly, with one misclassified as BRVO; 148 of 150 CSCR images were predicted correctly, with two misclassified as ARMD; and 149 of 153 DME images were predicted correctly (1 misclassified as ARMD, 1 as BRVO, and 2 as CRVO).
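Both diagnostics above can be computed with scikit-learn, as noted in the text. A sketch on synthetic predictions (the class scores here are random stand-ins for the ensemble's probability outputs):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["ARMD", "BRVO", "CRVO", "CSCR", "DME"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 5, size=300)

# Synthetic class scores that favor the true class, normalized to probabilities.
scores = rng.random((300, 5))
scores[np.arange(300), y_true] += 1.5
proba = scores / scores.sum(axis=1, keepdims=True)

# One-vs-rest ROC AUC per class (the per-class curves of Figure 3).
y_bin = label_binarize(y_true, classes=range(5))
auc = {c: roc_auc_score(y_bin[:, i], proba[:, i]) for i, c in enumerate(classes)}

# Confusion matrix (Figure 4): rows are true classes, columns are predictions.
cm = confusion_matrix(y_true, proba.argmax(axis=1))
```

The diagonal of `cm` gives the per-class correct counts quoted for Figure 4, and the off-diagonal cells give the misclassification pairs.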

    Figure 3.  ROC curves of the proposed classification method, which exhibits the best accuracy with the TL-InceptionResnetV2 model and ensemble classifier.
    Figure 4.  Confusion matrix of the proposed method in its best-performing configuration, TL-InceptionResNetV2 with the ensemble classifier.

    To make the proposed method accessible over an Internet connection, we deployed the OCT image classification model to a web server using the Flask framework. The web server receives one image at a time and passes it to the classification model to predict retinal diseases. The input is a three-channel OCT image with a resolution of 300 pixels in height and 500 pixels in width. When an OCT image is submitted through the web service user interface (UI), it is transferred to a server running the DL classification model. First, the server applies the same image preprocessing used for the training and test sets. Second, the preprocessed image is passed to the trained classification model for prediction. Finally, the predicted results are returned to the web service through the Flask framework. The prediction results consist of the input image, the probability distribution over the five classes, the diagnosed retinal disease class, and the prediction time, i.e., the time from submitting an image to receiving the prediction result. Figure 5 shows the initial UI of the web server, and Figure 6 shows the prediction results obtained after inputting an OCT image.
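The receive-preprocess-predict-respond flow described above can be sketched as a minimal Flask service. The route name, preprocessing, and the stub model below are our assumptions for illustration; the real service runs the TL-InceptionResnetV2 extractor with the k-NN + XGBoost ensemble:

```python
import time

import numpy as np
from flask import Flask, jsonify, request

CLASSES = ["ARMD", "BRVO", "CRVO", "CSCR", "DME"]
app = Flask(__name__)

def preprocess(raw_bytes, size=300 * 500 * 3):
    # Stand-in for the paper's preprocessing: decode the upload to a
    # fixed-length float vector (the real pipeline resizes to 300x500x3
    # and applies the same transforms as the train/test sets).
    buf = np.frombuffer(raw_bytes, dtype=np.uint8)
    return np.resize(buf, size).astype(np.float32) / 255.0

def predict_proba(x):
    # Stub model standing in for the trained extractor + ensemble:
    # a deterministic softmax over arbitrary per-class scores.
    logits = np.array([float(x[: 100 * (i + 1)].sum()) for i in range(5)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

@app.route("/predict", methods=["POST"])
def predict():
    start = time.time()
    proba = predict_proba(preprocess(request.get_data()))
    return jsonify(
        probabilities={c: float(p) for c, p in zip(CLASSES, proba)},
        predicted_class=CLASSES[int(proba.argmax())],
        prediction_time_s=time.time() - start,  # reported to the UI
    )
```

The JSON response mirrors the fields shown in Figure 6: per-class probabilities, the final predicted class, and the per-image prediction time.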

    Figure 5.  Initial user interface of the developed web service for OCT image classification. The "Select an Image" button allows the user to browse to the location of a stored image and upload it to the web service, and the "Predict" button sends the image to the deep learning server and receives the diagnosis class.
    Figure 6.  Prediction results from the developed web service for OCT image classification. The predicted OCT image, the probability distribution over the five retinal disease classes in percent, the final predicted class (highest probability), and the prediction time are shown.

    Table 8 compares the accuracy of the proposed OCT image classification method with that of the recent studies reviewed in the literature review section. These studies focused on transfer learning, the development of new models, and the combination of well-known CNN models with machine learning. All the listed studies used either different OCT databases or combinations of these datasets. Moreover, the number and types of classification classes differed, with at most four classes, whereas we classify retinal diseases into five classes using a dataset obtained from a hospital; a larger number of classes can affect the performance of a classification method. The methods listed in Table 8 include suggested models with transfer learning, a multiscale DL model, and transfer learning with existing CNN models, but all of the reviewed studies reported an accuracy below 97%. Instead of relying on a single classifier, this study combines two machine learning classifiers with a DL model as the feature extractor. Our method achieves an accuracy of 97.68%, which exceeds that of the aforementioned studies while classifying more classes than they did.

    Table 8.  Results comparison.
    Author Year Method Disease type Dataset size Accuracy
    Han et al. [13] 2022 Transfer learning with a modification of the well-known CNN models 4-class: PCV, RAP, nAMD, and NORMAL 4749 87.4%
    Sotoudeh-Paima et al. [14] 2022 Deep learning: multi-scale convolutional neural network 3-class: AMD, CNV, NORMAL 120,961 93.4%
    Elaziz et al. [15] 2022 Ensemble deep learning model for feature extraction, feature selection, and machine learning as the classifier 4-class: DME, CNV, DRUSEN, and NORMAL 84,484 94.32%
    Liu et al. [16] 2022 Deep learning-based method with a lesion segmentation model 4-class: CNV, DME, DRUSEN, and NORMAL 86,134 95.10%
    Minagi et al. [17] 2022 Transfer learning with DNN models 4-class: CNV, DME, DRUSEN, and NORMAL 11,200 95.3%
    Tayal et al. [18] 2022 Deep learning-based method 4-class: DME, CNV, DRUSEN, NORMAL 84,484 96.5%
    Proposed method Hybrid of deep learning and machine learning + ensemble machine learning classifiers. 5-class: ARMD, BRVO, CRVO, CSCR, DME 2,998 97.68%


    Our study classifies retinal OCT images into disease classes that differ from those of the reviewed studies and are not available in public datasets. We hope that OCT images of these retinal diseases will become publicly available in the future, at which point we will evaluate the proposed OCT image classification system on a public dataset.

    This study presented a hybrid ensemble OCT image classification method for diagnosing five classes of retinal diseases. The proposed method employs an ensemble machine learning classifier and a hybrid deep learning model as the feature extractor. We identified the deep learning model and ensemble classifiers most suitable for OCT image classification; the best combination, TL-InceptionResnetV2 with the aggregation of k-NN and XGBoost, outperformed every individual classifier and achieved an accuracy of 97.68%. The classification model can be deployed as a web service for convenient remote diagnosis of retinal disease over the Internet, and the per-image prediction time is short. This study contributes to the development of accurate multiclass OCT image classification. In the future, we aim to further improve the classification performance, and if datasets with the same classes as in our study are made public, we will assess the proposed method on them to broaden its applicability. In the medical field, such improved performance can be used to classify OCT images automatically, eliminating time-consuming manual tasks and aiding in the prevention of vision loss.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The data used to support this study have not been made openly available because they are real clinical data from Soonchunhyang Bucheon Hospital and patients' privacy must be protected, as individuals could be identified from these data. The data are, however, available from the corresponding author upon reasonable request.

    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A2C1010362) and the Soonchunhyang University Research Fund.

    The authors declare no competing interests.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)