Research article

On q-analogue of meromorphic multivalent functions in lemniscate of Bernoulli domain

  • Utilizing concepts from q-calculus in the field of geometric function theory, we introduce a subclass of p-valent meromorphic functions related to the lemniscate of Bernoulli domain. The well-known Fekete–Szegö problem is evaluated for this class. Some geometric results related to subordination are also evaluated for this class in connection with Janowski functions.

    Citation: Bakhtiar Ahmad, Muhammad Ghaffar Khan, Basem Aref Frasin, Mohamed Kamal Aouf, Thabet Abdeljawad, Wali Khan Mashwani, Muhammad Arif. On q-analogue of meromorphic multivalent functions in lemniscate of Bernoulli domain[J]. AIMS Mathematics, 2021, 6(4): 3037-3052. doi: 10.3934/math.2021185



    1. Introduction

    Remote sensing is a useful and convenient tool for qualitative and quantitative determination of plant growth conditions. The technique can provide information on the actual status of crops through repetitive coverage, which is necessary for change detection studies at a global or regional scale, such as crop yield prediction and monitoring of crop status and conditions [1,2]. While crop conditions can be monitored using various remote sensing platforms, the two primary categories are satellite and aerial platforms. Satellites can observe a wide area of thousands of square kilometers at once, revisiting it in a regular and timely manner. These characteristics render satellites the most suitable of the current remote sensing platforms for monitoring crop growth over broad areas. Satellites have been used in agricultural remote sensing since the early 1970s [3]. Satellite systems with increasingly higher spatial resolution and more frequent revisit cycles have been developed to improve data quality. For optimum use of these data, atmospheric correction is required to retrieve rectified surface reflectance from a remotely sensed image by removing the effects of light scattering and absorption by aerosols, haze, and gases. Although atmospheric correction is an important image processing step in many remote sensing applications, it is complicated by the variability of atmospheric conditions in time and space. Because many applications demand accurate reflectances, the accuracy of atmospheric correction and the development of improved algorithms must be continually evaluated, and these areas of research remain very active [4].

    Atmospheric correction can be divided into two categories: (1) empirical methods; and (2) radiative transfer model-based methods. The empirical methods rely on the scene information, i.e., radiance at a certain location, and do not use any physical model as done in model-based methods. The most recent addition to empirical methods is the Quick Atmospheric Correction (QUAC) method [5]. The model-based methods are performed using radiative transfer models. In this procedure, field measurements are not required, and only basic information on the scene is required, such as site location and elevation, flight altitude, the sensor model, local visibility, and acquisition times. Several model-based methods dedicated to retrieving reflectance information from hyperspectral and multispectral data have been developed. These methods include ATmosphere REMoval program (ATREM), Atmospheric and Topographic Correction (ATCOR), and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [6,7,8]. These methods retrieve surface reflectance using a radiative transfer model. All of the model-based methods are quite similar in their basic principles and operation [9].

    While some features are distinguishable in a panchromatic or single-band image, most features are more clearly distinguishable in multispectral and hyperspectral images containing multiple wavebands. The reflectance properties of an object depend on its surface features (e.g., color and texture) and environmental conditions (e.g., geographic location and atmospheric components). The reflectance characteristics of the various features in an image with multiple spectral bands (i.e., a multispectral or hyperspectral image) are intermixed. Therefore, automated techniques are needed that can identify different surface characteristics and categorize all of the pixels in an image into homogeneous land cover types or themes. This process is termed classification, and the classified data may then be used to produce thematic maps [10]. Classification methods are divided into two categories: supervised and unsupervised classification. Supervised classification is the procedure most often used as a precursor to quantitative analysis of remote sensing image data. It depends upon suitable algorithms to classify and label the pixels in an image as representing particular ground cover types or classes. A variety of algorithms is available for supervised classification [11]. Among the most frequently used are the parallelepiped, minimum distance, and maximum likelihood classification methods.

    High resolution satellite images are more suitable for monitoring crop growth conditions in precision agriculture than low resolution satellite images. Image correction and classification methods are needed to obtain and/or determine accurate conditions of crop growth; however, most methods have been developed and evaluated for relatively low resolution imagery. The objectives of this study were to identify the canopy growth of paddy rice and to investigate practical image correction and classification methods. We specifically evaluated three atmospheric correction methods (QUAC, FLAASH, and ATCOR), as well as selected classification methods, in order to obtain an endmember category or class (i.e., paddy) from image data of an area of interest. The selected image correction methods were applied to a high resolution (6.5 m) RapidEye image to produce vegetation index (VI) maps for monitoring rice growth conditions.


    2. Materials and Method


    2.1. Study sites and satellite image acquisition

    In this study, RapidEye (BlackBridge, Berlin, Germany) satellite images were acquired so that the three atmospheric correction methods (i.e., QUAC, FLAASH, and ATCOR), and three supervised classification methods (i.e., parallelepiped, minimum distance, and maximum likelihood) could be performed and evaluated. RapidEye images were taken over experimental fields of paddy rice at Chonnam National University, Gwangju, and at TaeAn, Choongcheongnam-do, Korea (Figure 1). The CNU RapidEye images were acquired on day of year (DOY) 220 in 2013. The TaeAn RapidEye images were obtained on DOY 152, 174, 220, and 250 in 2010, and thus represent a time series. The CNU RapidEye images were used to evaluate the atmospheric correction methods and the TaeAn RapidEye images were used to evaluate the classification methods.

    Figure 1. Location map (center) and true-colored RapidEye images of two study sites: The Chonnam National University campus, Gwangju (A); and TaeAn, Choongcheongnam-do (B), Korea.

    The RapidEye constellation of five Earth observation satellites is designed to point at several look angles, and each of the five satellites travels on the same orbit, enabling daily acquisition of high-resolution images with five spectral bands. This allows users to get large-area coverage data with a frequent revisit interval [12,13]. RapidEye collects 4 million square kilometers of data per day with a 6.5 m ground resolution. The RapidEye system specifications are given in Table 1. RapidEye images are offered at two processing levels: (1) basic products (level 1B), which are geometrically uncorrected images; and (2) ortho products (level 3A), which are radiometrically, geometrically, and terrain corrected images [13]. Level 3A images were used in this study.

    Table 1. Specifications of the RapidEye satellite system.
    Mission characteristic              Information
    Number of satellites                5
    Spacecraft lifetime                 7 years
    Orbit altitude                      630 km, sun-synchronous orbit
    Sensor type                         Multi-spectral push broom imager
    Spectral bands (nm)                 Blue (440–510), Green (520–590), Red (630–685), Red edge (690–730), NIR (760–850)
    Ground sampling distance (nadir)    6.5 m
    Pixel size (ortho-rectified)        5 m
    Swath width                         77 km
    On-board data storage               Up to 1500 km of image data per orbit
    Revisit time                        Daily (off nadir); 5.5 days (at nadir)
    Image capture capacity              4 million km2 per day
    Dynamic range                       Up to 12 bit

    2.2. Unmanned aerial vehicle (UAV) image acquisition and corrections

    A UAV image obtained on DOY 220 in 2013 was used to evaluate the atmospheric correction methods for the RapidEye images. The UAV image was obtained using a multi-copter with 8 rotors, equipped with a miniature multiple camera array (Mini-MCA6, Tetracam Inc., USA). The Mini-MCA6 is a lightweight (700 g) multispectral remote sensing camera with six independent sensors that detect different spectral wavebands: Blue (410–490 nm), Green (510–590 nm), Red (610–690 nm), NIR1 (760–840 nm), NIR2 (810–850 nm), and NIR3 (870–890 nm). Each image has a resolution of 1280 × 1024 pixels and is stored as a 10-bit raw file in flash memory. Images taken by the Mini-MCA6 require pre-processing to change the file format and to merge the multispectral wavebands, stored by the separate sensors, into one image. This procedure was performed using the PixelWrench 2 software (PW2, Tetracam Inc., USA) supplied with the Mini-MCA system.

    Radiometric correction of UAV images was performed using empirical relationships between UAV image-based digital values and the corresponding ground-based reflectances. For this process, three calibration targets were constructed using aluminum plates (2.4 × 2.4 m each) painted black, grey, and white with non-reflective paints. The black, grey, and white plates showed average reflectances of 5, 23, and 93%, respectively (Figure 2). The ground-based reflectance was measured using a portable multispectral radiometer (MSR16R, CROPSCAN Inc., MN, USA) containing 16 wavebands in the range of 450 to 1750 nm. It has upward- and downward-facing sensors to measure incident and reflected radiation simultaneously. The radiometer, with a field of view (FOV) of 28°, measured the canopy reflectance of a 1 m diameter target area from a height of 2 m above the nadir position. The UAV-based image reflectance was estimated using linear regression equations determined from the relationships between UAV-based digital values and the corresponding ground-based reflectances (Table 2). Geometric correction was carried out using the ENVI program (ITT Inc., CO, USA), based on ground control points from the Google Earth (Google Inc., CA, USA) image map.

    Figure 2. Reflectances as a function of wavebands for three (black, grey, and white) calibration plates.
    Table 2. Pearson's correlation coefficients (r) and linear regression equations between CROPSCAN reflectances (y) and UAV-based digital numbers (x) for the crop season in 2013.
    Wavelength (nm)    r         Linear regression
    450                0.999*    y = 0.4939x + 0.1958
    550                0.999*    y = 0.4705x − 3.8267
    650                0.999**   y = 0.5432x − 0.4813
    800                0.999**   y = 0.7698x − 4.3484
    880                0.999**   y = 1.4032x + 3.4184
    * and ** represent significance at the 95 and 99% probability levels. Criteria for correlations (Cohen, 1988): 0.1–0.3: small; 0.3–0.5: medium; 0.5–1.0: large.
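    The band-wise correction in Table 2 is a standard empirical line fit. A minimal sketch of the procedure in Python with NumPy follows; the digital numbers below are invented placeholders (only the plate reflectances of 5, 23, and 93% come from the text), so this illustrates the method rather than reproducing the study's regressions.

```python
import numpy as np

def empirical_line_fit(dn, reflectance):
    """Least-squares fit of ground reflectance (%) against image
    digital numbers for one waveband: reflectance = a * DN + b."""
    a, b = np.polyfit(dn, reflectance, 1)
    return a, b

# Hypothetical digital numbers for one UAV band over the black,
# grey, and white calibration plates (invented values), paired with
# their average ground reflectances of 5, 23, and 93%.
dn = np.array([10.0, 47.0, 188.0])
ground_refl = np.array([5.0, 23.0, 93.0])

a, b = empirical_line_fit(dn, ground_refl)
corrected = a * dn + b   # digital numbers converted to reflectance (%)
```

    In practice one such regression is fitted per waveband, as in Table 2, and then applied to every pixel of that band.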

    2.3. Atmospheric correction methods

    QUAC and FLAASH were employed using the ENVI software (ITT Inc., CO, USA), and ATCOR was employed using the ERDAS IMAGINE software (Hexagon Geospatial, GA, USA). The specific parameters used for FLAASH and ATCOR are shown in Table 3. QUAC is applicable to multispectral and hyperspectral images, and is an in-scene approach that determines atmospheric correction parameters directly from the information contained within the scene, without additional metadata. Because QUAC does not involve radiative-transfer calculations, it is significantly faster than model-based methods. However, QUAC performs a more approximate atmospheric correction than other model-based methods. The use of QUAC has some restrictions, particularly its requirement for a certain minimum amount of land area in the scene. FLAASH also supports the analysis of hyperspectral and multispectral imaging sensors. FLAASH, interfaced with MODTRAN4, corrects images according to radiative transfer (RT) codes that calculate the radiance of the images from a few inputs, such as site location, elevation, flight altitude, sun angle, and atmospheric parameters [9,14,15]. ATCOR provides a fast atmospheric correction algorithm for images from medium and high spatial resolution satellite sensors. ERDAS IMAGINE offers several versions of ATCOR, such as ATCOR-2 (specifically designed for use over flat terrain), ATCOR-3 (developed for mountainous terrain), and the latest release, ATCOR-4 [16]. We used ATCOR-2 in this study. ERDAS IMAGINE 2010 (Version 10.0) for ATCOR offers several processing options: (a) a haze removal algorithm; (b) atmospheric correction with constant atmospheric conditions; and (c) the capability of viewing reference spectra of selected target areas. Haze or cloud removal and atmospheric water retrieval settings were kept at 'default', as recommended by the ATCOR user manual [17].

    Table 3. Input parameters for the FLAASH and ATCOR2 atmospheric correction methods.

    FLAASH
    Input parameter           Value
    Acquisition time (UTC)    3:25:01
    Latitude                  35.1734°
    Longitude                 126.8986°
    Visibility                40 km
    Ground elevation          0
    CO2 conc. (ppm)           414.9
    Atmospheric model         Midlat-summer
    Aerosol model             Rural
    Zenith angle              163.48°
    Azimuth angle             79.58°

    ATCOR2
    Input parameter           Value
    Acquisition time (UTC)    3:25:01
    Latitude                  35.1734°
    Longitude                 126.8986°
    Visibility                40 km
    Aerosol type              Rural, Midlat-summer
    Solar zenith              19.8°
    Solar azimuth             163.3°
    Satellite azimuth         100.42°

    FLAASH and ATCOR represent Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes and Atmospheric and Topographic Correction, respectively.

    The RapidEye image taken at CNU on DOY 220 in 2013 was used as a reference for evaluation of the atmospheric correction methods. Comparing ground-measured point data and satellite image pixel data is difficult because of the difference in spatial resolution. In order to make the comparison possible, the surface of interest must be large enough and completely homogeneous for a sufficient number of point measurements to be made on the corresponding surface of the satellite image [18]. In this context, the UAV image is assumed to meet the general requirement mentioned above. The UAV reflectances were compared with the RapidEye reflectances for the evaluation points, which were selected on soil, paddy, roof, and road asphalt.


    2.4. Supervised classification method

    Various supervised classification algorithms may be used to assign an unknown pixel to one of a number of classes [19]. The parallelepiped, minimum distance and maximum likelihood decision rules are among the most frequently used classification algorithms. These three supervised classification methods were applied to the TaeAn RapidEye image using the ENVI software. Supervised classification requires user-defined training classes in the image before performing the classification, and each class is used as a reference for the classifier. The analyst seeks to locate specific sites in the remotely sensed data that represent homogeneous examples of known land cover types. Training classes are groups of pixels in a region of interest (ROI). Five training classes of urban, soil, paddy, forest, and water were selected in this study.

    The parallelepiped classifier divides each axis of the multi-spectral feature space, forming an n-dimensional parallelepiped. Each pixel falling into a box is labeled as the corresponding class. Accuracy of the classification depends on the selection of the lowest and highest values in consideration of the population statistics of each class [20]. The minimum distance classifier is mathematically simple and computationally efficient. It assigns unknown image data to the class that minimizes the distance between the image data and the class mean in multi-spectral feature space. The distance is defined as an index of similarity, so that the minimum distance is identical to the maximum similarity. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may be unclassified if they do not meet the selected criteria [15,20]. Maximum likelihood classification assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. The maximum likelihood classifier quantitatively evaluates both the variance and covariance of the category spectral response patterns when classifying an unknown pixel, and assigns each pixel to the class with the maximum likelihood. It is one of the most popular classification methods in remote sensing [15,20].
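    The minimum distance rule described above can be sketched in a few lines of Python with NumPy. The class mean spectra and pixel values below are hypothetical illustrations, not training statistics from the study.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel (n_pixels, n_bands) to the training class
    whose mean spectrum is nearest in Euclidean distance."""
    # Pairwise distances, shape (n_pixels, n_classes)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Hypothetical mean reflectance spectra for three training classes
# in three bands (red, red edge, NIR):
class_means = np.array([
    [0.03, 0.04, 0.02],   # class 0, water: low everywhere
    [0.10, 0.15, 0.20],   # class 1, soil: NIR slightly above red
    [0.04, 0.08, 0.45],   # class 2, paddy: strong NIR response
])

pixels = np.array([
    [0.05, 0.07, 0.40],   # vegetated pixel, nearest to the paddy mean
    [0.02, 0.03, 0.03],   # dark pixel, nearest to the water mean
])
labels = minimum_distance_classify(pixels, class_means)
```

    A parallelepiped classifier would instead test each pixel against per-class band minima and maxima, and a maximum likelihood classifier would additionally use each class's covariance matrix.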


    2.5. Classification of the paddy rice field using NDVI

    To minimize errors for practical application of growth monitoring and yield estimation of rice, paddy fields were categorized using RapidEye imagery and NDVI values. Forest, waterbody, soil, and urban areas were removed, and only paddy fields were retained, using an NDVI threshold method proposed by Xiao et al. [21]. They assumed that the unique reflectance characteristics of paddy and other features can be used to categorize paddy rice fields. When a pixel is filled with water, its NDVI is consistently lower than 0.1. Pixels filled with rice tend to have high NDVI values ahead of harvest, while evergreen forest areas tend to have consistently high NDVI values, greater than 0.7. These NDVI thresholds of 0.1 and 0.7 were applied to identify the waterbody and forest areas from the NDVI values of the RapidEye images taken on DOY 152 to 250. Soil areas have similar reflectances in the near-infrared (NIR) and red bands, but the NIR reflectance is generally larger than the red; thus, a pixel covered by soil tends to have near-zero NDVI values of 0.1 to 0.2 [20]. Hence, the paddy rice fields were identified by the classification method using the NDVI thresholds of 0.1 and 0.7.
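    The threshold rules above can be sketched for a single pixel's seasonal NDVI time series. This is a simplified reading of the Xiao et al. criteria; the explicit soil test and the function names are our own illustrative choices, and the NDVI values are hypothetical.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def classify_pixel(series, water_thr=0.1, forest_thr=0.7):
    """Classify one pixel from its seasonal NDVI time series using
    the 0.1 and 0.7 thresholds: water stays below 0.1, evergreen
    forest stays above 0.7, bare soil stays near zero (0.1-0.2),
    and paddy dips low when flooded but rises before harvest."""
    s = np.asarray(series, dtype=float)
    if np.all(s < water_thr):
        return "water"
    if np.all(s > forest_thr):
        return "forest"
    if np.all((s >= water_thr) & (s <= 0.2)):
        return "soil"
    return "paddy"

# Hypothetical NDVI values on DOY 152, 174, 220, and 250:
label = classify_pixel([0.05, 0.30, 0.80, 0.60])  # flooded, then vigorous growth
```

    Applied pixel-by-pixel over the image stack, this yields a paddy mask of the kind evaluated against the digitized paddy cover map below.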

    The classified NDVI information for the TaeAn RapidEye image was used for evaluation of the classification methods described above. Accuracy of the classified results in terms of the paddy fields was determined by overlaying a vegetation index map used to monitor paddy rice growth. This was performed using a digitized paddy cover map from the Ministry of Agriculture, Food and Rural Affairs, Korea (Figure 3). The accuracy of the classified results was also analyzed by comparing with the digitized paddy cover map, and projecting an error distribution map.

    Figure 3. Digitized paddy cover map of TaeAn used as standard in this study.

    2.6. Data analysis

    The reflectances of the UAV image were used as standard values, and the RapidEye reflectances were compared with the corresponding UAV reflectances. Several statistical analyses were used to evaluate whether the results of the comparison were reliable. The data were analyzed with two-way analysis of variance (ANOVA) using PROC ANOVA, and with Pearson's correlation coefficients using PROC CORR (SAS version 9.4, SAS Institute Inc., NC, USA). In addition, two statistical equations were used to evaluate the performance of the atmospheric correction methods: (1) root mean square error (RMSE, Equation 1); and (2) model efficiency (ME, Equation 2) [22]:

    RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i - M_i\right)^2}, (1)

    ME = 1 - \frac{\sum_{i=1}^{n}\left(S_i - M_i\right)^2}{\sum_{i=1}^{n}\left(M_i - M_{avg}\right)^2}, (2)

    where S_i is the ith simulated value, M_i is the ith measured value, M_avg is the mean of the measured values, and n is the number of data pairs. ME is approximately equal to the coefficient of determination (R2) when the simulated versus measured values fall close to the 1:1 line. However, ME is generally lower than R2, and can be negative when predictions are strongly biased.
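    Equations 1 and 2 translate directly into code. A small self-contained sketch in Python with NumPy; the reflectance values are illustrative only, not data from the study.

```python
import numpy as np

def rmse(simulated, measured):
    """Root mean square error, Equation (1)."""
    s, m = np.asarray(simulated), np.asarray(measured)
    return np.sqrt(np.mean((s - m) ** 2))

def model_efficiency(simulated, measured):
    """Model efficiency, Equation (2): 1 for a perfect model; it can
    become negative when predictions are strongly biased."""
    s, m = np.asarray(simulated), np.asarray(measured)
    return 1.0 - np.sum((s - m) ** 2) / np.sum((m - m.mean()) ** 2)

# Illustrative reflectance pairs (simulated = corrected RapidEye,
# measured = UAV reference), invented for demonstration:
measured = np.array([0.05, 0.12, 0.30, 0.45])
simulated = np.array([0.06, 0.10, 0.28, 0.47])
err = rmse(simulated, measured)
eff = model_efficiency(simulated, measured)
```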

    To evaluate classification accuracy, four measures of the accuracy were tested in this study. The overall accuracy, kappa coefficient, producer accuracy, and user accuracy were computed for each error matrix. In thematic mapping from remotely sensed data, the term accuracy is used typically to express the degree of ‘correctness’ of a map or classification. The four metrics were calculated using the post classification error analysis of the ENVI program. The latter can calculate a confusion matrix (also referred to as the error matrix or a contingency table), including overall accuracy, producer accuracy, user accuracy, and Kappa coefficient using a ground truth image or ground truth region of interests. The confusion matrix is the most common form of expressing classification accuracy. In this matrix table, classification is given as rows and reference data (ground truth) are given as columns for each class type. The overall accuracy is calculated by summing the number of pixels classified correctly and dividing by the total number of pixels [23]. However, more specific measures are needed because the overall accuracy does not indicate how well individual classes were classified. The producer accuracy is the ratio between the number of correctly classified and the column total, and represents how well reference pixels of each ground cover type are classified. The user accuracy is the ratio between the number of correctly classified and the row total, and represents the probability that a pixel classified into a given category actually represents that category on the ground. The user accuracy and producer accuracy for any given class typically are not the same [23]. The kappa coefficient (K) was generated to describe the proportion of agreement between the classification result and the standard reference data after random agreements by chance are removed from consideration. 
The K value approaches 0 when there is no agreement, and approaches 1 with near-perfect agreement [24].
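    The four accuracy measures can be computed from a confusion matrix laid out as described above, with classification as rows and reference (ground truth) as columns. A sketch in Python with NumPy, using hypothetical two-class pixel counts rather than the study's matrices:

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, producer accuracy, user accuracy, and kappa
    from a confusion matrix (rows = classification, cols = reference)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    overall = np.trace(cm) / total
    producer = np.diag(cm) / cm.sum(axis=0)   # correct / column total
    user = np.diag(cm) / cm.sum(axis=1)       # correct / row total
    # Expected chance agreement from the row and column marginals
    chance = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total ** 2
    kappa = (overall - chance) / (1.0 - chance)
    return overall, producer, user, kappa

# Hypothetical paddy / non-paddy pixel counts:
cm = np.array([[80, 10],
               [20, 90]])
overall, producer, user, kappa = accuracy_metrics(cm)
```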


    3. Results and Discussion


    3.1. Evaluation of atmospheric correction methods

    The performance of each atmospheric correction method was evaluated by comparing the UAV reflectance and RapidEye reflectance. Among the three atmospheric correction methods, ATCOR produced the best agreement between UAV and RapidEye reflectances (Figure 4). Values of r, RMSE, and ME of the comparison for ATCOR were 0.869, 0.055, and 0.732, respectively (Table 4). Although ATCOR and FLAASH are both MODTRAN4 model-based methods, ATCOR produced comparatively better correction performance. In addition, the correction performance indices indicate that either FLAASH or ATCOR produces more reliable atmospheric correction than QUAC. The FLAASH and ATCOR methods have the option of retrieving the aerosol amount and estimating the scene average visibility. However, when processing data that lack the specific spectral channels required for aerosol retrieval, there are no measurements of aerosol optical depth that can be supplied as input parameters. While these MODTRAN4 model-based methods are theoretically more sophisticated than QUAC, each model's performance is affected by its ability to accurately characterize atmospheric aerosols [25]. Therefore, when aerosol retrieval cannot be performed and the basic assumptions of QUAC are satisfied (i.e., at least 10 diverse materials or dark pixels in a scene), QUAC may produce atmospheric correction results equivalent to those of the model-based methods.

    Figure 4. Comparison of reflectances from RapidEye and unmanned aerial vehicle (UAV) images for three atmospheric correction methods, ATCOR (A), QUAC (B), and FLAASH (C), and no correction (D). ATCOR, QUAC, and FLAASH represent Atmospheric and Topographic Correction, Quick Atmospheric Correction, and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes, respectively.
    Table 4. Error statistics (Pearson's correlation coefficient, r; root mean square error, RMSE; and model efficiency, ME) of the uncorrected and corrected RapidEye reflectances using the three correction methods, in comparison with the unmanned aerial vehicle (UAV) reflectances.
    Correction method    r         RMSE     ME
    No correction        0.804**   0.076    0.477
    QUAC                 0.867**   0.077    0.463
    FLAASH               0.862**   0.056    0.714
    ATCOR                0.869**   0.055    0.732
    QUAC, FLAASH, and ATCOR denote Quick Atmospheric Correction, Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes, and Atmospheric and Topographic Correction, respectively.
    ** represents significance at the 99% probability level. Criteria for correlations (Cohen, 1988): 0.1–0.3: small; 0.3–0.5: medium; 0.5–1.0: large.

    3.2. Evaluation of classification methods

    When the parallelepiped, maximum likelihood, and minimum distance supervised classification methods were applied to the TaeAn RapidEye imagery, the minimum distance method produced reasonably acceptable classification results (Figure 5). The overall accuracy for the minimum distance method varied from 90 to 96%, whilst the kappa coefficient varied from 0.5 to 0.7 (Table 5). However, the results show that the classified images cannot consistently distinguish between waterbodies and paddy fields early in the crop season, or between forest areas and paddy fields when the vegetation is growing vigorously. Paddy fields have unique features because rice plants are grown on flooded land. Therefore, the reflectances of paddy fields are affected by water just after irrigation and transplanting, whilst they resemble those of forest areas once the paddy rice canopy has closed. Furthermore, the most common source of error may occur during the process of defining the training classes. When pixels fall outside the specific class region, or within overlapping regions, errors may occur that result in misclassification. The automatic supervised classification method is mathematically simple and computationally efficient, but it is sensitive to the accuracy of the training classes [19].

    Figure 5. Time series of classified output images of TaeAn using the minimum distance method on day of year (DOY) 152, 174, 220, and 250 in 2010.
    Table 5. Time series of error matrices of the minimum distance classification method (values in %; classification as rows, reference as columns).

    DOY 152          Paddy    Non-paddy    Producer accuracy
    Paddy            82.91    4.14         82.91
    Non-paddy        16.70    95.85        95.85
    User accuracy    61.52    93.88        -
    Overall accuracy = 90.76%; Kappa coefficient = 0.52

    DOY 174          Paddy    Non-paddy    Producer accuracy
    Paddy            83.28    3.96         83.28
    Non-paddy        16.72    96.04        96.04
    User accuracy    61.63    98.39        -
    Overall accuracy = 94.87%; Kappa coefficient = 0.67

    DOY 220          Paddy    Non-paddy    Producer accuracy
    Paddy            99.92    0.08         99.92
    Non-paddy        0.08     100.00       100.00
    User accuracy    61.64    99.44        -
    Overall accuracy = 95.82%; Kappa coefficient = 0.77

    DOY 250          Paddy    Non-paddy    Producer accuracy
    Paddy            83.28    4.01         83.28
    Non-paddy        16.72    95.99        95.99
    User accuracy    61.64    97.12        -
    Overall accuracy = 93.72%; Kappa coefficient = 0.62

    3.3. Classification of the paddy rice field using NDVI threshold

    To identify paddy fields from RapidEye satellite images more precisely, an NDVI threshold method suggested by Xiao et al. [21] was applied in this study. This method classified the paddy fields with an overall accuracy of 95.82%, kappa coefficient of 0.77, producer accuracy of 99.92%, and user accuracy of 61.64% (Table 6). To determine the errors in the classified paddy map, an error distribution map was produced by comparing each pixel between the digitized paddy cover map and the classified paddy map (Figure 6). Soil pixels covered with vegetation were misclassified as paddy fields, while paddy field pixels covered with somewhat more water and less vegetation were misclassified as non-paddy features. The current classification results correspond closely to those reported by Jeong et al. [26], who also attempted detection of paddy fields using MODIS satellite images and observed over- and under-estimated pixels of paddy fields. The spatial resolutions of MODIS and RapidEye are 500 m and 5 m, respectively. Because a RapidEye image has much higher spatial resolution, water or soil in paddy fields can be distinguished more clearly in RapidEye images than in MODIS images.

    Table 6. Error matrix of NDVI threshold.
    Reference class    Paddy (%)   Non-paddy (%)   Producer accuracy (%)
    Paddy                99.92        0.08            99.92
    Non-paddy             0.08      100.00           100.00
    User accuracy        61.64       99.44            -
    Overall accuracy = 95.82%
    Kappa coefficient = 0.77
    Figure 6. Map of error distribution between classified paddy fields based on NDVI thresholds and digitized paddy cover maps.

    If the NDVI threshold values used in this study could be adjusted so that more of the underestimated pixels were correctly classified as paddy field pixels, the proportion of underestimated pixels would decrease. It is therefore important to determine a suitable NDVI threshold value. Xiao et al. [21] and Jeong et al. [26] used NDVI, the enhanced vegetation index (EVI), and the land surface water index (LSWI) to detect paddy fields during the inundated period. EVI is an improvement over NDVI that reduces atmospheric effects and variable soil or canopy background effects. LSWI is calculated from the near-infrared and shortwave-infrared bands; the latter is sensitive to the water content of vegetation and soil and can be used to estimate the water content of the surface [26]. While using two spectral indices to classify paddy fields is potentially advantageous over using one, the satellite images used in the current study contain insufficient waveband information to calculate two spectral indices. Therefore, only NDVI was used to detect paddy fields in this study. If additional vegetation indices sensitive to water or chlorophyll content were available, the classification accuracy could be enhanced.
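    For reference, the three indices discussed here follow standard definitions. The sketch below assumes band values supplied as surface reflectance and is illustrative only; as noted above, RapidEye provides no shortwave-infrared band, so LSWI cannot be derived from its imagery:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir):
    # LSWI = (NIR - SWIR) / (NIR + SWIR); requires a shortwave-infrared band
    return (nir - swir) / (nir + swir)
```

    The inputs may be scalars or NumPy arrays of per-pixel reflectance values.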


    4. Conclusion

    Three different atmospheric correction methods, QUAC, FLAASH, and ATCOR, were applied to RapidEye satellite images obtained over paddy fields at CNU, Gwangju, as well as at TaeAn, Chungcheongnam-do, Korea. The corrected RapidEye satellite images were then evaluated by comparison with UAV images and classified into representative land cover features using the minimum distance method. Of the three atmospheric correction methods, ATCOR gave results that corresponded best with those from the UAV images. We also found that the minimum distance classification method performed well and assigned all pixels to the corresponding reference endmember classes. However, this method could not classify the same pixels consistently across the different time-series images. Therefore, NDVI threshold values were used to classify paddy fields from the RapidEye images according to their NDVI time-series characteristics. As a result, the same pixels could be classified from each of the time-series images, although some under- and over-estimated pixels persisted. This issue could probably be addressed if suitable threshold values were determined and applied. We contend that the image correction and classification methods validated here are applicable to high-resolution satellite images for monitoring crop growth conditions in precision agriculture.
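    The minimum distance classification mentioned above can be sketched in a few lines. This is a generic NumPy illustration (the image and endmember arrays are hypothetical), not the implementation used for the RapidEye analysis:

```python
import numpy as np

def minimum_distance_classify(image, endmembers):
    """Assign each pixel to the nearest endmember by Euclidean distance.

    image:      (rows, cols, bands) reflectance array
    endmembers: (n_classes, bands) mean spectra of the reference classes
    Returns a (rows, cols) array of class indices.
    """
    pixels = image.reshape(-1, image.shape[-1])              # (N, bands)
    # Squared distance from every pixel to every endmember via broadcasting
    d2 = ((pixels[:, None, :] - endmembers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(image.shape[:-1])
```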


    Acknowledgements

    This study was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), which is funded by the Ministry of Education, Science, and Technology (NRF-2011-0009827 and NRF-2013R1A1A2005788).


    Conflict of interest

    The authors declare that there are no conflicts of interest regarding the publication of this paper.




  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
