
Role of MOH as a grassroots public health manager in preparedness and response for COVID-19 pandemic in Sri Lanka

  • Received: 08 June 2020 Accepted: 03 August 2020 Published: 05 August 2020
  • Pandemic transmission of COVID-19 warranted the activation of public health responses in all countries. The public health unit system of Sri Lanka (also known as the Medical Officer of Health unit system) is managed by a medical doctor with special training in public health, the Medical Officer of Health (MOH), supported by a team of grassroots field staff who know the community well and by a network of infrastructure. The aim of this study was to describe the managerial role of the MOH as a grassroots public health manager in the preparedness and response for the COVID-19 pandemic. The research team studied the key documents communicated to the MOH by the national authorities. The study revealed that national-level authorities used the MOH to implement COVID-19 control and prevention decisions through technical and managerial directives. The MOH unit has earned the trust of the community through its deep-rooted ground-level operations. Further, the MOH system possesses a deep understanding of, and extensive connections with, the community. Therefore, the implementation of rigid prevention and control measures was well facilitated within the assigned geographical public health unit area.

    Citation: Pamila Sadeeka Adikari, KGRV Pathirathna, WKWS Kumarawansa, PD Koggalage. Role of MOH as a grassroots public health manager in preparedness and response for COVID-19 pandemic in Sri Lanka[J]. AIMS Public Health, 2020, 7(3): 606-619. doi: 10.3934/publichealth.2020048



    According to the World Health Organization (WHO), an increasing number of glaucoma patients has been reported worldwide in recent years [1]. Glaucoma prevention and treatment have been a major focus of international directives, including the WHO's Vision 2020 campaign. The number of people with glaucoma is estimated to rise from 64 million to 76 million in 2020 and 111.8 million in 2040, with Africa and Asia affected more heavily than the rest of the world owing to a shortage of trained ophthalmologists for its diagnosis [2,3]. Hence, this disease is considered a major public health concern, and its early diagnosis is important for preventing blindness.

    Glaucoma is the second most common cause of irreversible vision loss, and its diagnosis is a challenging research field because real-world glaucoma images are acquired in the presence of several factors such as illumination changes, background interference and light variation [4]. With the rapid advancement in imaging technologies, several retinal imaging modalities such as Heidelberg Retina Tomography (HRT) and Optical Coherence Tomography (OCT) have been developed. These are used in developed and under-developed countries for image-based diagnosis of various ocular diseases, i.e., macular degeneration, diabetic retinopathy and glaucoma. However, they are costly technologies and are not affordable for smaller public health units. Therefore, the most popular and widely used imaging technique is fundoscopy [5].

    Fundoscopy enables ophthalmologists to examine the Optic Nerve Head (ONH) for glaucoma diagnosis; a typical image of the ONH is shown in Figure 1. Various parts of the ONH can be considered to classify between normal and glaucomatous eyes. Four main changes associated with glaucoma can be observed in the ONH: ONH cupping, neuroretinal rim thinning, changes in Nerve Fibre Layer (NFL) thickness and Parapapillary Atrophy (PPA). These changes are detected manually through analysis of ONH images to diagnose glaucoma [6]. However, identification of glaucomatous signs in the ONH requires specialist ophthalmologists with years of experience and practice. Therefore, the development of automatic glaucoma assessment algorithms based on fundus image analysis would be very helpful in reducing the overall workload of ophthalmologists and would also make the diagnosis more feasible and efficient, even in smaller health units.

    Figure 1.  The Optic Nerve Head (ONH), the most prominent area examined for initial glaucoma diagnosis.

    Recently, Deep CNNs have attracted a lot of interest and have shown great potential for computer vision tasks such as image classification and semantic segmentation [7]. These networks are more robust at discovering meaningful features in images that are usually ignored by conventional image processing techniques. Moreover, intermediate steps such as feature extraction and selection are embedded within the networks, so the networks can perform feature learning and classification simultaneously. Owing to their popularity, numerous algorithms have been developed during the past few years for the detection and classification of glaucomatous fundus images. However, training deep CNNs requires a large amount of annotated data [8,9]. Therefore, considerable research effort has been devoted to the training methodology of the networks. These methodologies can be divided into two categories, i.e., training a deep network from scratch (full training) and transfer learning with fine-tuning.

    Training from scratch requires a large amount of labeled data, which is extremely difficult to find in medical imaging; collecting images for disease identification is expensive in terms of both time and budget. Besides, the training of Deep CNNs is time-consuming and usually requires extensive memory and computational resources. Furthermore, designing and adjusting the hyper-parameters is a challenging task, particularly with respect to over-fitting and related issues. On the contrary, transfer learning with fine-tuning [10,11,12] is the easiest way to overcome such problems. Transfer learning is commonly defined as the ability of a network to exploit knowledge learned in one domain in another domain that shares some common characteristics. Therefore, it is a popular method in machine learning and data mining for regression, clustering and classification problems.

    In an ensemble framework, various classifiers are trained to solve the same problem. The classification capability of an individual optimized Deep CNN does not generalize across diverse databases, and the classification performance of a group of various Deep CNNs is generally better than that of a single architecture [13]. To make the most of the individual classification capabilities, a promising solution is to create an ensemble of several Deep CNN models.

    During the past few years, extensive research has been carried out on the development of automatic glaucoma assessment systems based on fundus image analysis through transfer learning [14,15,16]. Networks pre-trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which contains millions of labeled images from 1000 different classes, have been successfully tested on different medical image analysis tasks [17], and CNNs have been shown to outperform previous computer-aided screening systems for medical images. Two individual CNN architectures are used in [18] to segment the optic disc and optic cup and find the cup-to-disc ratio (CDR). In [19], the authors developed a CNN architecture to automate the detection of glaucoma. Another Deep CNN algorithm was developed in [20] to detect glaucoma through the extraction of different features combined with different classifiers, i.e., Random Forest, Support Vector Machine and Neural Network. Feature learning from retinal fundus images through a deep learning algorithm has also been proposed in [21] to detect glaucoma; the authors used a CNN model to learn features with linear and nonlinear activation functions. In the study [22], the authors implemented a Glaucoma-Deep system in which a deep-belief network was used to select the most discriminative features. In the work [23], two different types of CNNs, i.e., Overfeat and VGG-S, were used as feature extractors, and Contrast-Limited Adaptive Histogram Equalization (CLAHE) and vessel deletion were considered to investigate the performance of these networks. The authors proposed a joint segmentation of the optic disc and optic cup together with glaucoma prediction in [24]; CNN feature sharing across the different tasks ensured better learning and prevented over-fitting. The Inception-v3 architecture was presented in [25] to detect glaucomatous optic neuropathy, with local space average color subtraction applied in pre-processing to accommodate varying illumination. A framework on a dataset of fundus images collected from different hospitals was presented in [26] by incorporating both domain knowledge and features learned from a deep learning model. The assessment of deep learning algorithms with transfer learning has also been addressed in [27,28], using a greater number of images than previous methods with high accuracy, sensitivity and specificity. Recently, in [29], the authors implemented deep learning based segmentation to identify the glaucomatous optic disc. In [30], the authors combined a CNN and a Recurrent Neural Network (RNN) to extract the spatial features in a fundus image as well as the temporal features embedded in a fundus video, i.e., sequential images; a ResNet50 deep CNN model with 48 convolutional layers was explored. A deep learning approach based on a deep residual neural network (ResNet101) for automated glaucoma detection using fundus images is proposed in [31]. Chronic eye disease diagnosis using an ensemble-based classifier was presented in [32]; this study aims to achieve an early and accurate diagnosis of glaucoma based on an ensemble classifier integrating principal component analysis with a rotation forest tree. An ensemble learning based CNN was proposed in [33] to segment retinal images, where the output of the classifier is passed to an unsupervised graph cut algorithm followed by a convex hull transformation to obtain the final optic cup and disc segmentation.
A novel disc-aware ensemble network based on the application of different CNNs is presented in [34] for automatic glaucoma screening; the authors introduced a deep learning technique to gain additional image-relevant information and screen glaucoma from the fundus image directly. More recently, multi-class multi-label ophthalmological disease detection using transfer learning has been investigated in [35] with four pre-trained Deep CNN architectures: ResNet, InceptionV3, MobileNet, and VGG16 are implemented to detect eight types of ocular diseases. Besides, in [36], an artificial intelligence and telemedicine based screening tool was developed to identify glaucoma suspects from color fundus images; an ensemble of five pre-trained Deep CNN architectures (two Xception architectures, InceptionResNetV2, NasNet, and InceptionV3) is used to calculate the final cup-to-disc ratio (CDR) value for glaucoma detection.

    It is evident from the above literature that almost all previously proposed methods employ CNNs for the detection of glaucoma. However, limited work has been carried out on ensembles of Deep CNNs for glaucoma diagnosis.

    The research contributions of the presented work are summarized as follows:

    1) We propose a new two-staged scheme based on Deep CNN architectures for glaucoma classification using fundus images. This scheme comprises four deep architectures, i.e., AlexNet, InceptionV3, InceptionResNetV2 and NasNet-Large, selected through an extensive experimental search. We also assess their individual performance on both public and local hospital datasets. Our results clearly demonstrate the effectiveness of the newly proposed scheme based on Deep CNN architectures.

    2) We propose a new ensemble framework for better glaucoma classification. Four different pre-trained Deep CNN models are fused in parallel, and the output scores / probabilities are combined using five different voting techniques to find the best Deep CNN combination. We also propose two novel voting techniques that achieve much better results than the existing state-of-the-art for glaucoma classification.

    The rest of the paper is organized as follows: the proposed methodology is presented in Section 2. Experiments and results are given in Section 3. The discussion of the results is presented in Section 4, while the conclusion and future work are summarized in Section 5.

    The proposed automatic glaucoma diagnosis system using fundus images is shown in Figure 2. It comprises five steps: 1) data collection; 2) pre-processing; 3) Deep CNN feature learning; 4) ensembling (4x Deep CNNs); 5) classification/diagnosis. These steps are explained in the following subsections:

    Figure 2.  The proposed ensemble classifier based on Deep CNNs architecture for glaucoma diagnosis.

    Both public and private datasets are used to train, validate and test the different Deep CNNs. In this way, we increase the diversity of the images so that the results generalize. Each dataset is described as follows:

    ACRIMA is the most recently released public dataset for the classification of glaucoma through deep learning. It consists of 705 fundus images (309 normal and 396 glaucomatous). All images were annotated by two glaucoma experts with eight years of experience. It can only be used for classification tasks because optic disc and optic cup annotations are not provided [37].

    The ORIGA-light is another public dataset, annotated by experienced professionals from the Singapore Eye Research Institute. It contains 650 fundus images (482 normal and 168 glaucomatous) with a resolution of 3072×2048 pixels. This dataset is widely used as a benchmark for the diagnosis of glaucoma [38].

    The RIM-ONE is a very popular publicly available dataset for ONH segmentation and glaucoma detection. The database was created through the collaboration of three Spanish hospitals, i.e., Hospital Universitario de Canarias, Hospital Clínico San Carlos and Hospital Universitario Miguel Servet. It contains 455 fundus images, 261 normal and 194 glaucomatous [39].

    A further 124 fundus images were collected from a local private hospital, the Armed Forces Institute of Ophthalmology (AFIO), Military Hospital, Rawalpindi, Pakistan. Similarly, 55 images were acquired from another local private hospital, the Hayatabad Medical Complex (HMC), Peshawar, Pakistan. These images were annotated by two glaucoma experts with ten years of experience in the glaucoma department. The number of normal and glaucoma images in each database is listed in Table 1.

    Table 1.  Key statistics of fundus images used to train/test the Deep CNNs.
    Dataset Normal Glaucoma Total
    ACRIMA 309 396 705
    ORIGA-light 482 168 650
    RIM-ONE 261 194 455
    AFIO 85 39 124
    HMC 40 15 55
    Total 1177 812 1989


    Images from the different datasets are shown in Figure 3. The difference between normal and glaucoma images is also presented in Figure 4, where the first row shows normal images and the second row shows glaucoma images.

    Figure 3.  Examples of fundus images of different datasets.
    Figure 4.  Normal and glaucoma images, (a)–(c) are normal while (d)–(f) are glaucoma images.

    The data pre-processing steps include image patch extraction centered at the ONH and data augmentation for the training of the Deep CNNs.

    First of all, we convert all the images into a standard format that is used to train the Deep CNNs. Image patches, centered at the ONH, are extracted at the same size according to the requirements of the different Deep CNNs. Bicubic interpolation [40] is used for resizing, in which each output pixel value is a weighted average of the pixels in a 4-by-4 neighborhood. The fundus images are cropped by evaluating a bounding box of 1.5 times the optic disc radius. Meanwhile, illumination and contrast enhancement procedures are avoided to make the Deep CNN learning more dynamic. For the images from the local hospitals, i.e., AFIO and HMC, we localize the ONH at the center of the fundus images. Note that glaucoma mainly affects the ONH and its surrounding area, so cropping the images around the ONH turned out to be more effective than using the whole image for the training of Deep CNNs [41]. Moreover, the computational cost during network learning is also reduced.
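    A minimal sketch of this cropping and resizing step, assuming OpenCV is available and that the optic disc center and radius come from annotations or a separate disc localizer (the helper name and the example values are ours, not part of the original pipeline):

```python
import cv2

def crop_onh_patch(image, disc_center, disc_radius, out_size):
    """Crop a square patch around the ONH and resize it with bicubic interpolation."""
    cx, cy = disc_center                        # disc center in (column, row) order
    half = int(round(1.5 * disc_radius))        # bounding box of 1.5x the disc radius
    h, w = image.shape[:2]
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    patch = image[y0:y1, x0:x1]
    # Bicubic interpolation: each output pixel is a weighted average of a 4x4 neighborhood.
    return cv2.resize(patch, (out_size, out_size), interpolation=cv2.INTER_CUBIC)

# Example: a 299x299 patch for InceptionV3 / InceptionResNetV2, 331x331 for NasNet-Large.
# patch = crop_onh_patch(fundus_bgr, disc_center=(1536, 1024), disc_radius=180, out_size=299)
```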

    The data augmentation technique is also used during training of the Deep CNNs to increase the number of training images and minimize over-fitting. Fundus images are invariant to flipping, rotation and translation; hence, these three operations are used to increase the training data.
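    A corresponding sketch of the flip / rotation / translation augmentation, again assuming OpenCV; the 0–360 degree rotation range and the up-to-10-pixel shifts are the values reported in the experimental section, while the flip probability and border handling are assumptions:

```python
import random
import numpy as np
import cv2

def augment(patch):
    """Randomly flip, rotate and translate an ONH patch (fundus images are invariant to these)."""
    if random.random() < 0.5:                   # horizontal flip with probability 0.5 (assumed)
        patch = cv2.flip(patch, 1)
    h, w = patch.shape[:2]
    angle = random.uniform(0, 360)              # random rotation between 0 and 360 degrees
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    m[:, 2] += np.array([random.uniform(-10, 10), random.uniform(-10, 10)])  # <= 10 px shift
    return cv2.warpAffine(patch, m, (w, h), flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REFLECT)
```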

    Currently, Deep CNNs are applied to a wide variety of image segmentation and classification tasks. In this work, we implement ImageNet-trained Deep CNNs for the diagnosis of glaucoma from retinal fundus images. We explore four types of Deep CNNs, i.e., AlexNet, InceptionV3, InceptionResNetV2 and NasNet-Large, for the classification of normal and glaucomatous images. The basic architecture and feature extraction scheme of each network are illustrated in the following paragraphs.

    AlexNet, first proposed in [42], is an extremely powerful model that achieves high accuracies on challenging databases. Its architecture is very similar to LeNet [43] but much deeper, with more filters per layer and stacked convolutional layers. The model has 5 convolutional and 3 Fully Connected (FC) layers, together with max pooling, Rectified Linear Unit (ReLU), and dropout layers. The input to this model is an RGB image of size 256×256. The basic structure of this network is shown in Figure 5.

    Figure 5.  The basic architecture of AlexNet used for glaucoma classification.
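    For illustration, the following is a Keras sketch of the AlexNet-style layer layout described above (5 convolutional layers, 3 FC layers, max pooling, ReLU and dropout). The filter counts follow the original AlexNet paper rather than this text, the 227×227 working size is an assumption, and the work itself fine-tunes an ImageNet pre-trained AlexNet rather than training this definition from scratch:

```python
import tensorflow as tf
from tensorflow.keras import layers

def alexnet_glaucoma(input_size=227):
    """AlexNet-style network with a 2-class (normal/glaucoma) softmax head."""
    return tf.keras.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu",
                      input_shape=(input_size, input_size, 3)),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(2, activation="softmax"),   # replaces the 1000-class ImageNet head
    ])
```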

    InceptionV3 [44] is an extension of GoogLeNet [45] with good classification performance in several biomedical applications using transfer learning [46,47]. InceptionV3 originally contains 11 Inception modules and one FC layer. Each Inception module has 4 to 10 multiscale convolutional operations with 1×1, 3×3, or 5×5 filters. Max-pooling is used as the spatial pooling operation in the Inception modules, and the FC layer performs the final classification. This design reduces the computational complexity as well as the number of parameters to be trained. The network is trained on more than a million images from the ImageNet database and has an image input size of 299×299. The fundamental architecture of InceptionV3 is illustrated in Figure 6.

    Figure 6.  The InceptionV3 basic structure used for deep glaucoma feature learning.

    We also consider the state-of-the-art InceptionResNetV2 network [48] to extract deep spatial features. It is a combination of two recent ideas: residual connections [49] and the Inception structure [44]. The employed InceptionResNetV2 network includes the Stem, InceptionResNet blocks, Reduction layers, an average pooling layer and an FC layer. The Stem comprises the preliminary convolution operations executed before entering the Inception blocks. The InceptionResNet layers have residual connections with convolution operations, while the Reduction layers are responsible for changing the width and height of the feature maps. The spatial features are extracted by the convolutional layers, and the pooling layers decrease the dimensionality of each feature map while retaining the most significant features. Furthermore, the convolutional layers are followed by a batch normalization layer and ReLU, a nonlinearity that helps to decrease the training time. The network has an image input size of 299×299. Figure 7 illustrates the architecture of deep spatial feature extraction using the InceptionResNetV2 network.

    Figure 7.  The InceptionResNetV2 network implemented for Deep spatial feature extraction.

    Neural Architecture Search Network Large (NASNet-Large) is composed of two kinds of layers or cells, i.e., Normal and Reduction Cells. During the forward pass, the width and height of the feature map are halved by a Reduction Cell, while a Normal Cell keeps these two dimensions the same as the input feature map. The Normal Cells are stacked between Reduction Cells, as shown in Figure 8. Each Normal / Reduction Cell is composed of a number of blocks, and each block is built from a set of operations popular in Deep CNNs with various kernel sizes, e.g., convolutions, max pooling, average pooling, dilated convolution and depth-wise separable convolutions. Finding the best architecture for the Normal Cell and Reduction Cell with 5 blocks is described in [50]. NASNet-Large with N equal to 6 aims to achieve the maximum possible accuracy. With this arrangement, the network is able to learn rich feature representations for a wide range of images [51]. This network has an image input size of 331×331.

    Figure 8.  The structure of the NasNet-Large architecture considered for glaucoma diagnosis.

    Note that the inputs to the above-mentioned Deep CNN architectures are pre-processed color fundus images centered at the ONH. The region of interest is kept the same for all the models, while the last FC layer is replaced and followed by a softmax classifier so that only two classes, i.e., normal and glaucoma, are studied.
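    As a sketch of this last-layer replacement, the snippet below builds one of the backbones with a two-class softmax head in Keras. InceptionV3 from tf.keras.applications stands in for the pre-trained networks (InceptionResNetV2 and NASNetLarge are available the same way, while AlexNet would need a separate definition), and the number of frozen layers is an assumed starting point that the fine-tuning experiments described later would vary:

```python
import tensorflow as tf

def build_glaucoma_classifier(input_size=299, n_frozen=249):
    """ImageNet pre-trained backbone with a new 2-class (normal/glaucoma) softmax head."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3), pooling="avg")
    for layer in base.layers[:n_frozen]:
        layer.trainable = False                     # keep the early layers frozen initially
    out = tf.keras.layers.Dense(2, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)
```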

    In an ensemble framework, a set of diverse classifiers trained to solve the same problem is aggregated. All the above-mentioned architectures are combined into one classifier, as presented in Figure 9. The final decision is given by one of five voting techniques, i.e., Majority Voting (MV), Proportional Voting (PV), Averaging (AV), Accuracy based Weighted Voting (AWV), and Accuracy/Score based Weighted Averaging (ASWA).

    Figure 9.  The proposed ensemble classifier for automatic glaucoma classification.

    In MV [52], each model makes a prediction (vote) for each test instance, and the final output prediction is the class that receives the highest number of votes. Here, Vij is the vote for the jth class from the ith classifier, and N(Vj) is the total number of votes for the jth class. We predict the class label, O, as the class with the highest number of votes. Mathematically, this is written as

    $N(V_j)=\sum_{i=1}^{N} V_{ij}$ (2.1)
    $O=\mathrm{Max}\{N(V_1),N(V_2),\ldots,N(V_j)\}$ (2.2)

    where $N(V_1)$, $N(V_2)$, and $N(V_3)$ are the total number of votes for class 1, class 2, and class 3, respectively.

    In PV, the training accuracies of the classifiers are summed according to their predictions, and the class with the maximum sum is the final outcome. Aij is the accuracy assigned to the jth class by the ith classifier, and T(Aj) is the sum of training accuracies for the jth class. We can write this as

    $T(A_j)=\sum_{i=1}^{N} A_{ij}$ (2.3)
    $O=\mathrm{Max}\{T(A_1),T(A_2),\ldots,T(A_j)\}$ (2.4)

    where $T(A_1)$, $T(A_2)$, and $T(A_3)$ are the sum of training accuracies for class 1, class 2, and class 3, respectively.

    In the case of equal votes, AV is considered. The scores for each predicted class are summed and averaged. Sij is the score for the jth class according to the ith classifier, and Sj is the average score for the jth class. The output class, O, is the one with the highest value, such that

    $S_j=\frac{1}{N}\sum_{i=1}^{N} S_{ij}$ (2.5)
    $O=\mathrm{Max}\{S_1,S_2,\ldots,S_j\}$ (2.6)

    where $S_1$, $S_2$, and $S_3$ are the averaged scores for class 1, class 2, and class 3, respectively.

    In AWV, the accuracy of each classifier is modified according to the following relation,

    $\alpha_i=\dfrac{e^{-10(1-ACC_i)}}{\sum_{i=1}^{k} e^{-10(1-ACC_i)}}$ (2.7)

    where $\alpha_i$ is the normalized weight derived from the accuracy $ACC_i$ of the ith classifier. The final decision, O, is assigned to the class with the maximum value, given as

    $(AWV)_j=\sum_{i=1}^{N} \alpha_i \times W_{ij}$ (2.8)
    $O=\mathrm{Max}\{(AWV)_1,(AWV)_2,\ldots,(AWV)_j\}$ (2.9)

    where $W_{ij}$ is the weight of the jth class with reference to the ith classifier, and $(AWV)_1$, $(AWV)_2$, and $(AWV)_3$ are the accuracy based weighted votes for class 1, class 2, and class 3, respectively.

    In ASWA, the probabilities / Scores are used to calculate the weighted average based on accuracy. Mathematically, it is written as

    $(ASWA)_j=\sum_{i=1}^{N} \alpha_i \times S_{ij}$ (2.10)
    $O=\mathrm{Max}\{(ASWA)_1,\ldots,(ASWA)_j\}$ (2.11)

    where $\alpha_i$ is calculated from Eq (2.7), and $S_{ij}$ is the probability / score of the jth class with respect to the ith classifier. The final output decision is evaluated according to Eq (2.11). $(ASWA)_1$, $(ASWA)_2$, and $(ASWA)_3$ are the accuracy/score based weighted averages of class 1, class 2, and class 3, respectively.
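    A NumPy sketch of the five voting rules in Eqs (2.1)–(2.11) for a single test instance is given below. The variable names are ours; $W_{ij}$ in AWV is interpreted here as the one-hot vote of classifier i, and the sign of the exponent in Eq (2.7) is reconstructed so that more accurate classifiers receive larger weights:

```python
import numpy as np

def ensemble_vote(scores, accs, scheme="ASWA"):
    """Combine per-classifier softmax scores (shape N x C) into one class decision."""
    scores = np.asarray(scores, dtype=float)
    accs = np.asarray(accs, dtype=float)
    votes = np.eye(scores.shape[1])[scores.argmax(axis=1)]     # one-hot votes V_ij

    if scheme == "MV":                    # majority voting, Eqs (2.1)-(2.2)
        totals = votes.sum(axis=0)
    elif scheme == "PV":                  # proportional voting, Eqs (2.3)-(2.4)
        totals = (accs[:, None] * votes).sum(axis=0)
    elif scheme == "AV":                  # score averaging, Eqs (2.5)-(2.6)
        totals = scores.mean(axis=0)
    else:
        alpha = np.exp(-10.0 * (1.0 - accs))                   # Eq (2.7), normalized below
        alpha /= alpha.sum()
        if scheme == "AWV":               # accuracy-weighted votes, Eqs (2.8)-(2.9)
            totals = (alpha[:, None] * votes).sum(axis=0)
        else:                             # ASWA: accuracy-weighted scores, Eqs (2.10)-(2.11)
            totals = (alpha[:, None] * scores).sum(axis=0)
    return int(totals.argmax())

# Example with four classifiers and two classes (0 = normal, 1 = glaucoma):
# ensemble_vote([[0.2, 0.8], [0.6, 0.4], [0.3, 0.7], [0.1, 0.9]],
#               accs=[0.97, 0.97, 0.95, 0.99], scheme="AWV")
```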

    The classification task is performed by the softmax layer. During training, this layer updates the weights through back-propagation based on the loss function used in the training stage. Given the underlying binary classification problem, Cross Entropy (C.E) is used as the loss function, as shown in Eq (2.12).

    $C.E=-\sum_{i=1}^{C=2} t_i\log(S_i)=-t_1\log(S_1)-(1-t_1)\log(1-S_1)$ (2.12)

    where $t_i$ and $S_i$ are the ground truth label and the Deep CNN score for each class i in C.

    We have analyzed the fundus images in two stages. In stage one, the performance of each Deep CNN is evaluated for glaucoma diagnosis. In stage two, all four Deep CNNs are grouped together as a single classifier to further improve the accuracy of the system. Five voting schemes, including the two newly proposed techniques, are considered to calculate the final decision.

    First, all fundus images are divided into three groups: training, validation and testing images. Sixty percent of all images are used for training, thirty percent of the remaining images for validation, and the rest for testing. Table 2 shows the distribution of images in each group.

    Table 2.  Distribution of the fundus images for training, validation and testing sets.
    Class Training Validation Testing
    Normal 706 141 330
    Glaucoma 487 98 227


    The validation group is used to monitor the number of epochs during the training of the different deep architectures. After hyper-parameter selection and the cross-validation experiments, all the validation images are merged back into the training set; thus, the final training set has 847 normal and 585 glaucoma images. The testing sets are kept the same in all experiments. The distribution of test images across the different datasets is presented in Table 3.

    Table 3.  The distribution of test images in each datasets.
    Dataset Normal Glaucoma
    ACRIMA 87 110
    ORIGA-Light 135 47
    RIM-ONE 73 55
    AFIO 24 11
    HMC 11 4


    Commonly, the transfer learning strategy is applied in Deep CNNs to use the knowledge learned while classifying natural images to classify retinal images with glaucoma. Hence, transfer learning has been employed in this work: it involves replacing and retraining the softmax layer as well as fine-tuning the weights of the pre-trained network. We have carried out several experiments to achieve the optimal performance of each Deep CNN with different numbers of fine-tuned layers and numbers of epochs. Initially, we fine-tuned only the last weighted layers of the Deep CNNs while keeping the initial layers frozen; the number of fine-tuned layers was then increased until all the remaining layers in the Deep CNNs were updated. Secondly, the effect of the number of epochs was evaluated for the best performance of each model; the maximum performance of each Deep CNN was obtained at 30 epochs. Stochastic Gradient Descent (SGD) is used as the optimizer. The other hyper-parameters, namely the learning rate, the batch size and the momentum, are selected to obtain the best results in different sets of experiments. The objective of this optimal setting is to adjust the weights of each Deep CNN such that the training loss is minimized; by minimizing the loss, we obtain the parameters giving the best model performance. The batch size is the number of training examples used to estimate the error gradient for the learning algorithm. We use a small value because a smaller batch size makes it easier to fit one batch of training data in memory, offers a regularizing effect and lowers the generalization error. The learning rate decides how far to move the weights in the direction of the gradient to reach the minimum loss; an optimal value is required to reach a minimum loss quickly, because a very small learning rate takes tiny steps and hence a long time to reach the minimum, while a very high learning rate overshoots the minimum and the Deep CNN may not converge. Momentum is another parameter used to optimize the learning rate; it calculates a weighted average between the previous values of the weights and the current value. Thus, we set the batch size to 6, the learning rate to $1\times10^{-4}$ and the momentum to 0.9 for all the networks.
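    Under these settings, a minimal Keras training sketch could look as follows; `train_ds` and `val_ds` are assumed tf.data pipelines that yield batches of 6 augmented, ONH-centered patches with one-hot labels, and `build_glaucoma_classifier` refers to the earlier transfer-learning sketch:

```python
import tensorflow as tf

model = build_glaucoma_classifier()              # pre-trained backbone + 2-class head
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),  # reported settings
    loss="categorical_crossentropy",             # cross-entropy of Eq (2.12)
    metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=30, verbose=2)
```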

    We have also assessed the performance of the Deep CNNs using 10-fold cross-validation. Due to the limited training data, over-fitting is a well-known problem in Deep CNNs. To avoid over-fitting and increase the robustness of the architectures, we have adopted the dropout technique proposed in [53], which temporarily removes units along with all their incoming and outgoing connections in deep neural networks. Similarly, we have employed data augmentation during the training of all the networks: the fundus images are augmented with random rotations between 0 and 360 degrees and random translations of at most 10 pixels in both the x and y directions of the image. The input images are also resized according to the default input size of each Deep CNN. Hence, we have evaluated the performance of each Deep CNN on all datasets and carried out experiments to compare the above-mentioned four Deep CNNs. The training and testing procedures are displayed in Figures 10 and 11, respectively.

    Figure 10.  The flowchart of training process for Deep CNNs.
    Figure 11.  The flowchart of testing process for Deep CNNs.

    The number of epochs has been evaluated for each of the selected Deep CNNs during the training process. The validation accuracy/loss during the fine-tuning of all four Deep CNNs on the ACRIMA dataset is illustrated in Figure 12. It is observed that after 30 epochs the validation accuracy reaches its maximum values, i.e., 99.10, 99.28, 94.72 and 100% for AlexNet, InceptionV3, InceptionResNetV2 and NasNet-Large, respectively. The validation accuracy/loss results for the other datasets are evaluated similarly.

    Figure 12.  Fine-tuning process of Deep CNNs for the ACRIMA dataset: (a) AlexNet (b) InceptionV3 (c) InceptionResNetV2 (d) NasNet-Large.

    It is observed that NasNet-Large achieves its maximum accuracy, 100%, on the ACRIMA dataset, while its lowest value, 87.10%, is obtained on ORIGA-Light. On average, all Deep CNNs perform well on the ACRIMA dataset, and the lowest results are obtained on the ORIGA-Light dataset. This is because ACRIMA is a newly developed dataset for the classification of glaucoma images, while ORIGA-Light is designed for the segmentation of the optic cup and optic disc. The superiority of the NasNet-Large model has also been observed during training on all other datasets. The training accuracy based comparison among the 4x Deep CNNs for the ACRIMA dataset is illustrated in Figure 13; it shows that NasNet-Large outperforms all other networks during training, and the same holds for the other datasets.

    Figure 13.  The training accuracy based comparison among 4x Deep CNNs for ACRIMA dataset.

    Furthermore, the Deep CNN training time has been reduced through transfer learning, since the weights of the pre-trained networks are used as the starting point for training. Hence, we limited training to 30 epochs for all Deep CNNs and achieved the best validation results, as displayed in Table 4.

    Table 4.  The training accuracies of Deep CNNs for all the datasets.
    Deep CNNs ACRIMA ORIGA-Light RIM-ONE AFIO HMC
    AlexNet 99.10 75.50 86.70 90.90 98.50
    InceptionV3 99.28 78.50 86.40 84.50 90.50
    InceptionResNetV2 94.72 76.50 90.60 85.50 95.50
    NasNet-Large 100.00 87.10 94.70 97.70 98.60


    Commonly, a single evaluation metric is not sufficient to evaluate the performance of a given algorithm, due to the presence of imbalanced classes in the dataset or a large number of training labels [54]. Therefore, the performance of the Deep CNNs is reported in terms of five distinct metrics: Accuracy (ACC), Sensitivity (SEN), Specificity (SP), F1 score and Area Under the Curve (AUC), as proposed in previous studies [55]. These performance parameters are calculated using the following equations:

    $ACC=\dfrac{TP+TN}{TP+FP+TN+FN}$ (3.1)
    $SEN=\dfrac{TP}{TP+FN}$ (3.2)
    $SP=\dfrac{TN}{TN+FP}$ (3.3)
    $F1=2\cdot\dfrac{precision \cdot recall}{precision+recall}$ (3.4)

    where the precision and recall are expressed as

    $Precision=\dfrac{TP}{TP+FP}$ (3.5)
    $Recall=\dfrac{TP}{TP+FN}$ (3.6)

    In the above equations, the True Positive (TP) is defined as the number of glaucoma images classified as glaucoma and True Negative (TN) is the number of normal images classified as normal. False Positive (FP) is the number of normal images identified as glaucoma images and False Negative (FN) is the number of glaucoma images classified as normal.

    AUC is the area under the Receiver Operating Characteristic (ROC) curve, and it gives the probability that the model ranks a positive example more highly than a negative example. The ROC is a plot of two parameters, the True Positive Rate (TPR) and the False Positive Rate (FPR). TPR is a synonym for recall, while FPR is calculated as

    $FPR=\dfrac{FP}{FP+TN}$ (3.7)
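    The metrics of Eqs (3.1)–(3.7) can be computed directly from the confusion-matrix counts; the function below is a small sketch, and the commented example uses counts consistent with the NasNet-Large results on the ACRIMA test set (87 normal and 110 glaucoma images):

```python
def classification_metrics(tp, fp, tn, fn):
    """Evaluation metrics from confusion-matrix counts (Eqs (3.1)-(3.7))."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)                    # sensitivity = recall = TPR
    sp = tn / (tn + fp)                     # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * sen / (precision + sen)
    fpr = fp / (fp + tn)                    # false positive rate
    return {"ACC": acc, "SEN": sen, "SP": sp, "F1": f1, "FPR": fpr}

# Example: 109 of 110 glaucoma images and all 87 normal images classified correctly
# gives ACC ~0.995, SEN ~0.991, SP 1.0.
# classification_metrics(tp=109, fp=0, tn=87, fn=1)
```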

    The confusion matrices of each Deep CNN have been evaluated on the test images of each dataset. Figures 14 and 15 show the confusion matrices for the ACRIMA test images and the total test images, respectively. Similarly, for the other test images, the ACC, SEN and SP results are calculated according to Eqs (3.1), (3.2) and (3.3).

    Figure 14.  The test results of Deep CNNs for ACRIMA dataset (a) AlexNet (b) InceptionV3 (c) InceptionResNetV2 (d) NasNet-Large.
    Figure 15.  The test results of Deep CNNs for total test images (a) AlexNet (b) InceptionV3 (c) InceptionResNetV2 (d) NasNet-Large.

    It is observed that NasNet-Large achieves the best results on the ACRIMA dataset, i.e., ACC (99.5%), SP (100%) and SEN (99%). InceptionResNetV2 gives ACC (99%), SP (97.7%) and SEN (100%). For the ORIGA-Light test images, all the Deep CNNs show poor performance except NasNet-Large, with ACC (88%), SP (91%) and SEN (79%). AlexNet, InceptionV3, and InceptionResNetV2 achieve the worst results, with SEN in the range of 60–66%.

    All the Deep CNNs perform well on the RIM-ONE test images. NasNet-Large again gives the best results, with ACC (94.5%), SP (96%) and SEN (92.7%), while AlexNet shows the lowest results compared with the other networks, with ACC (87.5%), SP (90.4%) and SEN (83.6%).

    The performance metrics are also evaluated for both local datasets, i.e., AFIO and HMC. It is noticed that NasNet-Large provides the maximum results in terms of ACC, SP, and SEN for the AFIO test images, while InceptionResNetV2 has 88.6% ACC, 83.3% SP, and 100% SEN. Similarly, NasNet-Large gives the maximum results, i.e., 100% in all performance metrics, while AlexNet has 93.3% ACC, 100% SP, and 75.0% SEN on the HMC test images.

    For the total test set of images, NasNet-Large again performs best compared to the other networks, providing 99.3% ACC, 99.4% SP, and 99.1% SEN, while InceptionV3 shows the lowest results in terms of ACC, SP, and SEN. These results are displayed in Table 5.

    Table 5.  The test results of the Deep CNNs for all the test images selected from each dataset.
    Deep CNNs Datasets
    ACRIMA ORIGA-Light RIM-ONE
    ACC SP SEN ACC SP SEN ACC SP SEN
    AlexNet 99.5 98.9 100.0 75.8 81.5 60.0 87.5 90.4 83.6
    InceptionV3 98.5 97.7 99.1 78.6 83.0 66.0 92.2 94.5 89.1
    InceptionResNetV2 99.0 97.7 100.0 76.9 82.2 61.7 90.6 94.5 85.5
    NasNet-Large 99.5 100.0 99.1 87.9 91.1 78.7 94.5 95.9 92.7
    AFIO HMC TOTAL
    ACC SP SEN ACC SP SEN ACC SP SEN
    AlexNet 91.4 95.8 81.8 93.3 100.0 75.0 97.3 97.3 97.4
    InceptionV3 91.4 91.7 91.0 86.7 100.0 50.0 96.6 97.0 96.0
    InceptionResNetV2 88.6 83.3 100.0 93.3 91.0 100.0 97.1 97.3 96.9
    NasNet-Large 94.3 91.7 100.0 99.9 100.0 100.0 99.3 99.4 99.1


    It is noted that the classification accuracies have been improved by using an ensemble of all four Deep CNNs. For the ACRIMA dataset, NasNet-Large and AlexNet give 99.5% accuracy, while the ensemble with AWV provides 99.6% accuracy. In the case of ORIGA-Light, the accuracy increases from 87.9 to 88.3% with ASWA. Similarly, the results improve for the RIM-ONE, AFIO, and HMC datasets. For the total test set of images, the result increases from 99.3 to 99.5% with ASWA. The AV provides the lowest results because it is simple averaging, whereas AWV and ASWA provide better results because the weights are updated according to the accuracy and score of each Deep CNN. The results of the five voting schemes are displayed in Table 6. Figure 16 also provides an overview of the newly developed AWV and ASWA schemes and of what distinguishes them from MV, PV, and AV.

    Table 6.  The accuracies of Deep CNNs ensemble framework for different voting schemes.
    Dataset MV PV AV AWV ASWA
    ACRIMA 99.5 99.4 99.1 99.6 99.5
    ORIGA-Light 87.9 88.0 79.8 88.2 88.3
    RIM-ONE 94.5 94.5 91.2 95.1 95.2
    AFIO 94.3 94.3 91.4 96.1 96.2
    HMC 99.9 99.9 96.3 99.8 99.9
    Total 99.3 99.3 98.5 99.4 99.5

    Figure 16.  Illustration of five voting techniques with test instance of class 2.

    Besides, we have also considered combinations of two and three Deep CNNs to increase the persuasiveness of the proposed ensemble framework. The results of the different voting schemes with ensembles of 2x, 3x, and the proposed 4x Deep CNNs on the total test set of images are presented in Table 7. It is observed that AlexNet and NasNet-Large provide better results than the other networks in the 2x ensemble framework, while in the case of the 3x ensemble, the combination of AlexNet, InceptionResNetV2 and NasNet-Large shows superior results over the other combinations. However, our proposed 4x ensemble framework outperforms the 2x and 3x Deep CNN ensembles in all five voting schemes.

    Table 7.  The accuracies of various combinations of Deep CNNs for five voting schemes with total test set of images.
    Deep CNNs MV PV AV AWV ASWA
    2x Ensemble A+B 98.70 98.60 97.80 99.00 98.80
    A+C 98.80 98.50 97.60 99.00 98.90
    A+D 99.00 98.90 97.80 99.20 99.00
    B+C 98.80 99.00 96.90 98.90 98.80
    B+D 98.80 98.70 97.50 98.90 98.90
    3x Ensemble A+B+C 98.90 98.80 98.10 98.80 98.90
    A+B+D 98.80 99.00 98.10 98.80 99.00
    A+C+D 99.00 99.10 98.20 99.00 99.00
    B+C+D 98.90 99.10 98.10 98.80 98.90
    4x Ensemble A+B+C+D 99.30 99.30 98.50 99.40 99.50
    A = AlexNet, B = InceptionV3, C = InceptionResNetV2, & D = NasNet-Large


    The evaluation parameters of classification performance, i.e., sensitivity, specificity, accuracy, and AUC, of the ImageNet trained Deep CNNs are displayed in Table 8, where the performance comparison of the proposed work with [22,26,37] and [56,57,58,59,60,61] is presented. In [57,58,59,60], the authors used CNN based architectures for the classification of glaucoma using RIM-ONE and ORIGA-Light images. Similarly, in another study proposed by [61], the authors considered the newly developed ACRIMA dataset for glaucoma classification and achieved the highest AUC of 96.0%. In our study, NasNet-Large performed well and gives 99.1% sensitivity, 99.4% specificity, 99.3% accuracy and 97.8% AUC for the total test set of images. This is because the cropped images centered at the ONH prove to be more effective than the whole image, and the computational cost during network learning is reduced; this is also helpful in improving the identification of glaucomatous damage at early stages. Furthermore, a data augmentation technique is used during training of the Deep CNNs to increase the training images and minimize the over-fitting problem. The classification accuracy is further increased from 99.3 to 99.5% with the ensemble framework. The results show that AWV and ASWA provide better results compared with the other voting techniques because the weights are updated according to the accuracy and scores of each Deep CNN. These results indicate that the proposed study provides better performance than the previous state-of-the-art. To the best of our knowledge, there is no existing Deep CNN based ensemble framework for the diagnosis of glaucoma using fundus images.

    Table 8.  The comparison of proposed work with existing state-of-the-art.
    Systems Methods Dataset Sensitivity (%) Specificity (%) Accuracy (%)
    [22] CNN Public & Private 84.5 98.0 99.0
    [26] Multi-branch NN Private 92.0 90.0 91.0
    [37] Deep CNNs Public 93.4 85.8 96.0 (AUC)
    [56] CNN Private 98.0 98.0 98.0
    [57] CNN RIM-ONE 80.0 88.0 85.0
    [58] RCNN ORIGA-Light NR NR 87.4 (AUC)
    [59] GoogleNet HRF & RIM-ONE NR NR 87.6
    [60] AlexNet RIM-ONE 87.0 85.0 88.0
    [61] Deep Models Public & Private NR NR 85.0 (AUC) ACRIMA
    [62] InceptionV3 Private NR NR 84.5 & 93.0 (AUC)
    [63] Ensemble Classifier Private 86.0 90.0 88.0 & 94.0 (AUC)
    Our NasNet-Large Public & Private 99.1 99.4 99.3 & 97.8% (AUC)
    Our Ensemble Classifier Public & Private 99.1 99.7 99.5


    The experimental results presented in this study suggest the following key observations:

    ● From the above-mentioned results, the proposed automatic glaucoma diagnosis system is useful and effective. This is because Deep CNNs have the ability to learn glaucoma specific features automatically, whereas in traditional methods the feature extraction strategies are manual, which limits the success of the overall system. Moreover, illumination and textural variations in retinal fundus images are also problematic for glaucoma classification. Conversely, the presented work uses automatic extraction of glaucoma features and thus achieves superior classification accuracy across a wide range of publicly and locally available datasets. Hence, the results generalize to a diverse set of images.

    ● In the pre-processing step, the ONH region has been extracted and used as the input image to the Deep CNN models. This is due to the fact that most of the initial changes during the early stage of glaucoma occur in the ONH. Hence, the cropped images centered at the ONH prove to be more effective than the whole image, and the computational cost during network learning is reduced; this also helps improve the identification of glaucomatous damage at early stages. Furthermore, a data augmentation technique is used during training of the Deep CNNs to increase the number of training images and minimize the over-fitting problem.

    ● Transfer learning generally refers to a process where a model trained on one problem is used in some way on a second, related problem. In our study, the Deep CNN training time has been reduced through transfer learning: the weights of the pre-trained networks are used as the starting point for training. Hence, we limited the training to 30 epochs for all Deep CNNs and achieved the best validation results, as displayed in Table 4. Additionally, a graphical comparison among these models during the training process is also presented. It is observed that our proposed model, i.e., NasNet-Large, achieves the best results compared to the other Deep CNNs. These results are displayed in Figure 13.

    ● Generally, a single performance metric can lead to inappropriate classification results due to imbalanced classes in the dataset or a too small or too large number of training subjects. Existing methods on fundus images such as [37,58,61] report classification performance in terms of AUC only. In contrast, we have evaluated four distinct metrics: SEN, SP, ACC, and AUC. The results show steady performance in glaucomatous classification across the different metrics.

    ● We have also performed extensive experiments to evaluate the performance of the four Deep CNNs (AlexNet, InceptionV3, InceptionResNetV2 and NasNet-Large). Comparing the results of these architectures, NasNet-Large is significantly better than the other networks, and the results are consistent with the relative performance of these architectures on a wide range of public and private fundus images. AlexNet also provides good results compared with the other networks, although it shows lower performance on the ORIGA-Light and RIM-ONE datasets. From the results, it is observed that ORIGA-Light is the most challenging dataset for every type of network. On the contrary, all the networks provide good results on the ACRIMA dataset because it is a newly developed dataset for deep learning classification tasks.

    ● We have also presented an ensemble classifier to further improve the classification accuracy. The final results have been evaluated with five voting techniques. The two newly developed voting schemes, i.e., AWV and ASWA, provide better results than the others, as presented in Table 6. Moreover, we have also carried out ablation experiments to increase the persuasiveness of the experimental results and to validate the combination of four Deep CNNs. The experimental results are displayed in Table 7. It is clearly observed that the ensemble of four networks shows better results than ensembles of 2x and 3x networks. These results clearly demonstrate the effectiveness of the proposed ensemble framework for diagnosing glaucoma.

    ● The experimental results are also compared with deep learning based diagnostic systems developed by other researchers. In [57,58,59,60], the authors used CNN based architectures for the classification of glaucoma using RIM-ONE and ORIGA-Light images. Similarly, in another study proposed by [61], the authors considered the newly developed ACRIMA dataset for glaucoma classification and achieved the highest AUC of 96.0%. In [62], the authors implemented the InceptionV3 architecture for glaucoma detection and achieved a maximum of 84.5% accuracy with 93% AUC. More recently, an ensemble of AlexNet, ResNet-50, and ResNet-152 was investigated in [63], achieving the highest accuracy of 88% with 0.94 AUC. However, our proposed ensemble framework has achieved superior results compared with these previously proposed methods. The SEN, SP, and ACC results of this study are displayed in Table 5, and a detailed comparison with the existing state-of-the-art is presented in Table 8.

    In this work, we have proposed an ensemble framework based on pre-trained Deep CNNs for glaucoma classification using fundus images. First, four Deep CNNs, i.e., AlexNet, InceptionV3, InceptionResNetV2, and NasNet-Large, are tested on five different datasets: three publicly available, i.e., ACRIMA, ORIGA-Light, and RIM-ONE, and two others collected from local hospitals. Dropout and data augmentation techniques are also used to improve the performance of the Deep CNN models. NasNet-Large is the best option with transfer learning and fine-tuning, with AUC (97.8%), SEN (99.1%), SP (99.4%) and ACC (99.3%). Secondly, for even better results, we also propose an ensemble framework for automatic glaucoma classification. The AWV and ASWA based ensembling methods improve the accuracy on all datasets; on the total test set, the accuracy increases from 99.3% to 99.5%. Moreover, the proposed ensemble classifier has considerably better accuracy and robustness than the individual optimized Deep CNN models for automatic glaucoma diagnosis.

    As future work, new architectures with more data can be explored and assessed to confirm the presented line of work, and the performance of the Deep CNNs can be further enhanced to extract deep features for the classification tasks. In this way, we can train even more robust glaucoma classifiers.

    We would like to acknowledge the great effort of Dr. Yousaf Jamal Mahsood, Associate Professor of Glaucoma, Khyber Girls Medical College, HMC Peshawar, in collecting the fundus images, and of the AFIO department, Military Hospital, Rawalpindi, Pakistan.

    The authors declare no conflict of interest.




    [1] Guo YR, Cao QD, Hong ZS, et al. (2020) The origin, transmission and clinical therapies on coronavirus disease 2019 (COVID-19) outbreak-an update on the status. Mil Med Res 7: 11.
    [2] Kohona P (2020) Sri Lanka Has Been Successful in Countering COVID-19, The Small Indian Ocean Island Deserves Recognition, 2020.Available from: https://www.indepthnews.net/index.php/opinion/3518-sri-lanka-has-been-successful-in-countering-covid-19.
    [3] Worldometer (2020) Sri Lanka Population (2020)-Worldometer, 2020.Available from: https://www.worldometers.info/world-population/sri-lanka-population/.
    [4] World Health Organization (2015) Climate and Health Country Profile-2015, Sri Lanka.Available from: https://www.who.int/docs/default-source/searo/wsh-och-searo/srl-c-h-profile.pdf?sfvrsn=1b62c800_2.
    [5] Perera A, Perera HSR (2017) Primary Healthcare Systems (PRIMASYS)-case study from Sri Lanka, World Health Organization.Available from: https://www.who.int/alliance-hpsr/projects/alliancehpsr_srilankaprimasys.pdf?ua=1.
    [6] World Health Organization (2018) Technical Series on Primary Health Care.Available from: https://www.who.int/docs/default-source/primary-health-care-conference/public-health.
    [7] Ministry of Health (2005) General Circular No:01-09/2005; Working hours of staff in DDHS/MOH office.Available from: http://www.health.gov.lk/CMS/cmsmoh1/upload/english/01-09-2005-eng.pdf.
    [8] Ratnayake HE, Rowel D (2018) Prevalence of exclusive breastfeeding and barriers for its continuation up to six months in Kandy district, Sri Lanka. Int Breastfeed J 13: 1-8. doi: 10.1186/s13006-018-0180-y
    [9] The World Bank (2018) World Bank, Elevating Sri Lanka's Public Health to the Next Level, 2018.Available from: http://projects-beta.worldbank.org/en/results/2018/09/14/elevating-sri-lankas-public-health-next-level.
    [10] Sri Lanka (2017) Reorganising Primary Health Care in Sri Lanka, Preserving our progress, preparing our future.Available from: http://www.health.gov.lk/moh_final/english/public/elfinder/files/publications/2018/ReorgPrimaryHealthCare.pdf.
    [11] Center for Global Development Saving mothers' lives in Sri Lanka.Available from: https://www.cgdev.org/sites/default/files/archive/doc/millions/MS_case_6.pdf.
    [12] Hewa S (2011) Sri Lanka's Health Unit Program: A Model of “Selective” Primary Health Care. Hygiea Int Interdiscip J Hist Public Health 10: 7-33.
    [13] World Health Organization (2020) Critical preparedness, readiness and response actions for COVID-19-Interim Guidance.Available from: https://www.who.int/publications/i/item/critical-preparedness-readiness-and-response-actions-for-covid-19.
    [14] Office of the Provincial Director of Health Services Medical Officer of Health.Available from: http://healthdept.wp.gov.lk/web/?page_id=523.
    [15] Sri lanka (2010) Manual for the Sri Lanka Public Health Inspector.Available from: https://medicine.kln.ac.lk/depts/publichealth/Fixed_Learning/PHI%20manual/PHI%20New%20Documents%20Including%20full%20PHI%20Manual%20$%20Office%20Documents/PHI%20Manual%20Full/03%20Chapter%201%20-Duties%20&%20Responsibilities%20(1-78).pdf.
    [16] Rizan ILM (2018) Daily News, Dengue on the rise in Ampara, 2018.Available from: http://www.dailynews.lk/2018/02/02/local/141771/dengue-rise-ampara.
    [17] Sri Lanka (2017) Public health measures to be adopted in the event of floods.
    [18] Sri Lanka (2016) Policy repository of Ministry of Health 2016.Available from: http://www.health.gov.lk/moh_final/english/public/elfinder/files/publications/publishpolicy/PolicyRepository.pdf.
    [19] Director General of Health Services (2020) Delegation of authority to carry out functions under the quarantine and prevention of diseases ordinance.
    [20] Faculty of Medicine, University of Jaffna (2020) Community engagement; Department of Community and Family Medicine, 2020.Available from: http://www.med.jfn.ac.lk/index.php/community-medicine/service-functions/.
    [21] Amarasinghe SN, Dalpatadu KCS, Rannan-Eliya RP (2018) Sri Lanka Health Accounts: National Health Expenditure 1990–2016.Available from: http://www.ihp.lk/publications/docs/HES1805.pdf.
    [22] Sri Lanka (2020) Live updates on New Coronavirus (COVID-19) outbreak, 2020.Available from: https://www.hpb.health.gov.lk/en.
    [23] World Health Organization (2020) Surveillance strategies for COVID-19 human infection, 2020.Available from: https://www.who.int/publications-detail/surveillance-strategies-for-covid-19-human-infection.
    [24] Sri Lanka (2019) Annual Health Statistics 2017, Sri Lanka.Available from: http://www.health.gov.lk/moh_final/english/public/elfinder/files/publications/AHB/2017/AHS2017.pdf.
    [25] Sri Lanka (2020) Updated interim case definition and advice on initial management.Available from: http://epid.gov.lk/web/index.php?option=com_content&view=article&id=230&lang=en.
    [26] Sri Lanka (2020) COVID-19 (New Coronavirus) Outbreak in Sri Lanka, Interim Guidelines for Sri Lankan Primary Care Physicians.Available from: http://epid.gov.lk/web/index.php?option=com_content&view=article&id=230&lang=en.
    [27] Epidemiology Unit, Ministry of Health, Sri Lanka Brochure on Influenza.Available from: https://www.epid.gov.lk/web/images/pdf/Influenza/Message/influenza_disease.pdf.
    [28] Rao RU, Nagodavithana KC, Samarasekera SD, et al. (2014) A Comprehensive Assessment of Lymphatic Filariasis in Sri Lanka Six Years after Cessation of Mass Drug Administration. PLoS Negl Trop Dis 8: e3281. doi: 10.1371/journal.pntd.0003281
    [29] News First (2020) Sri Lanka News-Newsfirst, Sri Lanka to boost its PCR testing capacity to combat COVID-19, 2020.Available from: https://www.newsfirst.lk/2020/04/27/sri-lanka-to-boost-its-pcr-testing-capacity-to-combat-covid-19/.
    [30] Sri Lanka (2020) Guidance on resumption of immunization services during COVID-19 outbreak.Available from: http://epid.gov.lk/web/index.php?option=com_content&view=article&id=230&lang=en.
    [31] Chughtai AA, Seale H, Islam MS, et al. (2020) Policies on the use of respiratory protection for hospital health workers to protect from coronavirus disease (COVID-19). Int J Nurs Stud 105: 103567. doi: 10.1016/j.ijnurstu.2020.103567
    [32] Sri Lanka (2020) Maintenance of a register for healthcare workers exposed to COVID-19 at health care institutions.Available from: http://epid.gov.lk/web/index.php?option=com_content&view=article&id=230&lang=en.
    [33] Ozili PK, Arun T (2020) Spillover of COVID-19: Impact on the Global Economy. MPRA Paper 99317 Germany: University Library of Munich.
    [34] Directorate of Environmental health, Occupational health and Food safety, Ministry of Health and Indigenous Medical Services (2020) Guidelines on COVID-19 preparedness for workplaces.Available from: http://epid.gov.lk/web/index.php?option=com_content&view=article&id=230&lang=en.
    [35] Jayatilleke AC, Yoshikawa K, Yasuoka J, et al. (2015) Training Sri Lankan public health midwives on intimate partner violence: a pre- and post-intervention study. BMC Public Health 15: 331. doi: 10.1186/s12889-015-1674-9
    [36] Sri Lanka Medical Association (2017) SLMA guidelines and information on vaccines.Available from: https://slma.lk/wp-content/uploads/2017/12/GSK-SLMA-Guidelines-Information-on-Vaccines.pdf.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)