Citation: Mijeong Kim, Seungtaek Jeong, Jong-min Yeom, Hyun-ok Kim, Jonghan Ko. Determination of rice canopy growth based on high resolution satellite images: a case study using RapidEye imagery in Korea[J]. AIMS Environmental Science, 2016, 3(4): 631-645. doi: 10.3934/environsci.2016.4.631
Remote sensing is a useful and convenient tool for qualitative and quantitative determination of plant growth conditions. The technique provides information on the actual status of crops through repetitive coverage, which is necessary for change detection studies at global or regional scales, such as crop yield prediction and monitoring of crop status and conditions [1,2]. While crop conditions can be monitored using various remote sensing platforms, the two primary categories are satellite and aerial platforms. Satellites can observe areas of thousands of square kilometers at once and revisit them regularly and in a timely manner; these characteristics make satellites the most suitable of the current remote sensing platforms for monitoring crop growth over broad areas. Satellites have been used in agricultural remote sensing since the early 1970s [3], and satellite systems with increasingly higher spatial resolution and more frequent revisit cycles have been developed to improve data quality. For optimum use of these data, atmospheric correction is required to retrieve surface reflectance from a remotely sensed image by removing the effects of light scattering and absorption by aerosols, haze, and gases. Although atmospheric correction is an important processing step in many remote sensing applications, it is difficult because atmospheric conditions vary in time and space. Because many applications require accurate reflectance, evaluating the accuracy of atmospheric correction and developing improved algorithms remain active areas of research [4].
Atmospheric correction can be divided into two categories: (1) empirical methods; and (2) radiative transfer model-based methods. Empirical methods rely on information within the scene, i.e., radiance at certain locations, and do not use a physical model as the model-based methods do. The most recent addition to the empirical methods is the Quick Atmospheric Correction (QUAC) method [5]. Model-based methods use radiative transfer models; field measurements are not required, and only basic information on the scene is needed, such as site location and elevation, flight altitude, the sensor model, local visibility, and acquisition time. Several model-based methods dedicated to retrieving reflectance from hyperspectral and multispectral data have been developed, including the ATmosphere REMoval program (ATREM), Atmospheric and Topographic Correction (ATCOR), and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [6,7,8]. These methods retrieve surface reflectance using a radiative transfer model and are quite similar in their basic principles and operation [9].
While some features are distinguishable in a panchromatic or single-band image, most features are more clearly distinguishable in multispectral and hyperspectral images containing multiple wavebands. The reflectance properties of an object depend on the surface features (e.g., color and texture) and environmental conditions (e.g., geographic location and atmospheric components). The reflectance characteristics of the various features in images with multiple spectral bands (i.e., multispectral and hyperspectral images) are intermixed. Therefore, automated techniques are needed that can identify different surface characteristics and categorize all of the pixels in an image into homogeneous land cover types or themes. This process is termed classification, and the classified data may then be used to produce thematic maps [10]. Classification methods are divided into two categories: supervised and unsupervised classification. Supervised classification is the procedure most often used as a precursor to quantitative analysis of remote sensing image data. It depends on suitable algorithms to classify and label the pixels in an image as representing particular ground cover types or classes. A variety of algorithms is available for supervised classification [11]; among the most frequently used are the parallelepiped, minimum distance, and maximum likelihood classifiers.
High resolution satellite images are more suitable for monitoring crop growth conditions in precision agriculture than low resolution satellite images. Image correction and classification methods are needed to determine crop growth conditions accurately; however, most methods have been developed and evaluated for relatively low resolution imagery. The objectives of this study were to identify the canopy growth of paddy rice and to investigate practical image correction and classification methods. We specifically evaluated three atmospheric correction methods (QUAC, FLAASH, and ATCOR), as well as selected classification methods, in order to obtain an endmember category or class (i.e., paddy) from image data of an area of interest. The selected image correction methods were applied to high resolution (6.5 m) RapidEye images to produce vegetation index (VI) maps for monitoring rice growth conditions.
In this study, RapidEye (BlackBridge, Berlin, Germany) satellite images were acquired so that three atmospheric correction methods (QUAC, FLAASH, and ATCOR) and three supervised classification methods (parallelepiped, minimum distance, and maximum likelihood) could be performed and evaluated. The RapidEye images were taken over experimental paddy rice fields at Chonnam National University (CNU), Gwangju, and at TaeAn, Chungcheongnam-do, Korea (Figure 1). The CNU RapidEye image was acquired on day of year (DOY) 220 in 2013. The TaeAn RapidEye images were obtained on DOY 152, 174, 220, and 250 in 2010, and thus represent a time series. The CNU image was used to evaluate the atmospheric correction methods, and the TaeAn images were used to evaluate the classification methods.
The RapidEye constellation of five Earth observation satellites is designed to point at several look angles, and each of the five satellites travels in the same orbit, enabling daily acquisition of high-resolution images in five spectral bands. This allows users to obtain large-area coverage with a frequent revisit interval [12,13]. RapidEye collects 4 million square kilometers of data per day at a 6.5 m ground resolution. The RapidEye system specifications are given in Table 1. RapidEye images are offered at two processing levels: (1) basic products (level 1B), which are geometrically uncorrected images; and (2) ortho products (level 3A), which are radiometrically, geometrically, and terrain-corrected images [13]. Level 3A images were used in this study.
Table 1. RapidEye system specifications.
Mission characteristic | Information |
Number of satellites | 5 |
Spacecraft lifetime | 7 years |
Orbit altitude | 630 km in sun-synchronous orbit |
Sensor type | Multi-spectral push broom imager |
Spectral bands (nm) | Blue (440–510 nm) |
Green (520–590 nm) | |
Red (630–685 nm) | |
Red edge (690–730 nm) | |
NIR (760–850 nm) | |
Ground sampling distance (nadir) | 6.5 m |
Pixel size (ortho-rectified) | 5 m |
Swath Width | 77 km |
On board data storage | Up to 1500 km of image data per orbit |
Revisit time | Daily (off nadir), 5.5 days (at nadir) |
Image capture capacity | 4 million km2 per day |
Dynamic range | Up to 12 bit |
A UAV image obtained on DOY 220 in 2013 was used to evaluate the atmospheric correction methods for the RapidEye images. The UAV image was obtained using a multi-copter with 8 rotors, equipped with a miniature multiple camera array (Mini-MCA6, Tetracam Inc., USA). The Mini-MCA6 is a lightweight (700 g) multispectral remote sensing camera with six independent sensors that detect different spectral wavebands: Blue (410–490 nm), Green (510–590 nm), Red (610–690 nm), NIR1 (760–840 nm), NIR2 (810–850 nm), and NIR3 (870–890 nm). Each image has a resolution of 1280 × 1024 pixels and is stored as a 10-bit raw file in flash memory. Images taken by the Mini-MCA6 require pre-processing to convert the file format and to merge the multispectral wavebands recorded by the separate sensors into a single image. This procedure was performed using the PixelWrench 2 software (PW2, Tetracam Inc., USA) supplied with the Mini-MCA system.
Radiometric correction of the UAV images was performed using empirical relationships between UAV image-based digital values and the corresponding ground-based reflectance. For this process, three calibration targets were constructed from aluminum plates (2.4 × 2.4 m each) painted black, grey, and white with non-reflective paints; the black, grey, and white plates have average reflectances of 5, 23, and 93%, respectively (Figure 2). The ground-based reflectance was measured using a portable multispectral radiometer (MSR16R, CROPSCAN Inc., MN, USA) with 16 wavebands in the range of 450 to 1750 nm. It has upward- and downward-facing sensors to measure incident and reflected radiation simultaneously. The radiometer, with a field of view (FOV) of 28°, measured the canopy reflectance of a 1 m diameter target area from a height of 2 m at the nadir position. The UAV-based image reflectance was estimated using linear regression equations determined from the relationships between the UAV-based digital values and the corresponding ground-based reflectance (Table 2). Geometric correction was carried out using the ENVI program (ITT Inc., CO, USA), based on ground control points from the Google Earth (Google Inc., CA, USA) image map.
Table 2. Linear regression relationships between UAV-based digital values and ground-based reflectance for each waveband.
Wavelength (nm) | r♩ | Linear regression |
450 | 0.999* | y = 0.4939x + 0.1958 |
550 | 0.999* | y = 0.4705x − 3.8267 |
650 | 0.999** | y = 0.5432x − 0.4813 |
800 | 0.999** | y = 0.7698x − 4.3484 |
880 | 0.999** | y = 1.4032x + 3.4184 |
♩ * and ** represent significance at the 95 and 99 % probability levels. Criteria for correlations (Cohen, 1988): 0.1-0.3: small; 0.3-0.5: medium; and 0.5-1.0: large. |
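A minimal sketch of this empirical-line conversion, using the per-band regression coefficients listed in Table 2, is given below. The band-to-coefficient mapping, array names, and example data are illustrative assumptions, not the exact processing chain used in the study.

```python
import numpy as np

# Slope and intercept from Table 2 (percent reflectance = slope * digital value + intercept),
# keyed by waveband centre (nm); this mapping is assumed for illustration only.
CALIBRATION = {
    450: (0.4939, 0.1958),
    550: (0.4705, -3.8267),
    650: (0.5432, -0.4813),
    800: (0.7698, -4.3484),
    880: (1.4032, 3.4184),
}

def dn_to_reflectance(dn_band, wavelength_nm):
    """Convert one band of UAV digital numbers to percent reflectance."""
    slope, intercept = CALIBRATION[wavelength_nm]
    return slope * np.asarray(dn_band, dtype=np.float64) + intercept

if __name__ == "__main__":
    # Placeholder 10-bit digital numbers for a 1024 x 1280 red (650 nm) band.
    red_dn = np.random.randint(0, 1024, size=(1024, 1280))
    print(dn_to_reflectance(red_dn, 650).mean())
```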
QUAC and FLAASH were employed using the ENVI software (ITT Inc., CO, USA), and ATCOR was employed using the ERDAS IMAGINE software (Hexagon Geospatial, GA, USA). The specific parameters used for FLAASH and ATCOR are shown in Table 3. QUAC is applicable to multispectral and hyperspectral images, and is an in-scene approach that determines atmospheric correction parameters directly from the information contained within the scene, without additional metadata. Because QUAC does not involve radiative transfer calculations, it is significantly faster than model-based methods; however, it performs a more approximate atmospheric correction. The use of QUAC has some restrictions, particularly its requirement for a certain minimum amount of land area in the scene. FLAASH also supports the analysis of hyperspectral and multispectral imaging sensors. FLAASH is interfaced with MODTRAN4 and corrects images using radiative transfer (RT) codes that calculate the radiance of the images from inputs such as site location, elevation, flight altitude, sun angle, and a few atmospheric parameters [9,14,15]. ATCOR provides a fast atmospheric correction algorithm for images from medium and high spatial resolution satellite sensors. ERDAS IMAGINE offers several versions of ATCOR, such as ATCOR-2 (specifically designed for use over flat terrain), ATCOR-3 (developed for mountainous terrain), and the latest release, ATCOR-4 [16]. We used ATCOR-2 in this study. ERDAS IMAGINE 2010 (Version 10.0) offers several processing options for ATCOR: (a) a haze removal algorithm; (b) atmospheric correction with constant atmospheric conditions; and (c) the capability of viewing reference spectra of selected target areas. Haze or cloud removal and atmospheric water retrieval settings were kept at their defaults, as recommended by the ATCOR user manual [17].
Table 3. Input parameters used for the FLAASH and ATCOR2 atmospheric corrections.
FLAASH | ATCOR2 | | |
Input parameter | Value | Input parameter | Value | |
Acquisition time (UTC) | 3:25:01 | Acquisition time (UTC) | 3:25:01 | |
Latitude | 35.1734° | Latitude | 35.1734° | |
Longitude | 126.8986° | Longitude | 126.8986° | |
Visibility | 40 km | Visibility | 40 km | |
Ground elevation | 0 | Aerosol type | Rural, Midlat-summer | |
CO2 conc. (ppm) | 414.9 | Solar zenith | 19.8° | |
Atmospheric model | Midlat-summer | Solar azimuth | 163.3° | |
Aerosol model | Rural | Satellite azimuth | 100.42° | |
Zenith angle | 163.48° | |||
Azimuth angle | 79.58° | |||
♩ FLAASH and ATCOR represent Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes and Atmospheric and Topographic Correction, respectively.
The RapidEye image taken at CNU on DOY 220 in 2013 was used as a reference for evaluating the atmospheric correction methods. Comparing ground-measured point data with satellite image pixel data is difficult because of the difference in spatial resolution. To make the comparison possible, the surface of interest must be large enough and completely homogeneous for a sufficient number of point measurements to be made on the corresponding surface of the satellite image [18]. In this context, the UAV image was assumed to meet this general requirement. The UAV reflectances were compared with the RapidEye reflectances at evaluation points selected on soil, paddy, roof, and road asphalt surfaces.
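As a simple illustration of this point-to-pixel comparison, the sketch below pairs one RapidEye pixel with the mean of co-registered UAV pixels covering the same footprint. The assumption of nested, co-registered grids and the helper point_pair are hypothetical; they indicate the general idea rather than the exact procedure applied to the evaluation points.

```python
import numpy as np

def point_pair(rapideye_band, uav_band, row_re, col_re, scale):
    """Return (RapidEye reflectance, mean UAV reflectance) for one RapidEye pixel.

    `scale` is the assumed integer ratio of RapidEye to UAV pixel size; the two
    rasters are assumed co-registered so UAV pixels nest inside RapidEye pixels.
    """
    window = uav_band[row_re * scale:(row_re + 1) * scale,
                      col_re * scale:(col_re + 1) * scale]
    return float(rapideye_band[row_re, col_re]), float(window.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    re_band = rng.random((20, 20))     # placeholder RapidEye band
    uav_band = rng.random((100, 100))  # placeholder UAV band on a 5x finer grid
    print(point_pair(re_band, uav_band, row_re=3, col_re=7, scale=5))
```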
Various supervised classification algorithms may be used to assign an unknown pixel to one of a number of classes [19]. The parallelepiped, minimum distance and maximum likelihood decision rules are among the most frequently used classification algorithms. These three supervised classification methods were applied to the TaeAn RapidEye image using the ENVI software. Supervised classification requires user-defined training classes in the image before performing the classification, and each class is used as a reference for the classifier. The analyst seeks to locate specific sites in the remotely sensed data that represent homogeneous examples of known land cover types. Training classes are groups of pixels in a region of interest (ROI). Five training classes of urban, soil, paddy, forest, and water were selected in this study.
The parallelepiped classifier divides each axis of the multispectral feature space, forming an n-dimensional parallelepiped; each pixel falling into a box is labeled as the corresponding class. The accuracy of the classification depends on the selection of the lowest and highest values in consideration of the population statistics of each class [20]. The minimum distance classifier is mathematically simple and computationally efficient. It classifies unknown image data into the class that minimizes the distance between the image data and the class in multispectral feature space. The distance is defined as an index of similarity, so that the minimum distance is identical to the maximum similarity. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may remain unclassified if they do not meet the selected criteria [15,20]. Maximum likelihood classification assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. The maximum likelihood classifier quantitatively evaluates both the variance and covariance of the category spectral response patterns when classifying an unknown pixel, and is one of the most popular classification methods in remote sensing; a pixel is assigned to the class with the maximum likelihood [15,20].
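The sketch below illustrates the minimum distance decision rule described above, using class mean spectra derived from training regions of interest. It is a simplified NumPy implementation for illustration, not the ENVI classifier used in this study, and the five class names are only examples.

```python
import numpy as np

def minimum_distance_classify(image, class_means, max_distance=None):
    """Assign every pixel to the class whose mean spectrum is nearest in feature space.

    image       : (rows, cols, bands) reflectance array
    class_means : (n_classes, bands) mean spectrum of each training class (ROI)
    max_distance: optional threshold; pixels farther than this from every mean get -1
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    # Euclidean distance of every pixel to every class mean: (n_pixels, n_classes).
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    if max_distance is not None:
        labels[dists.min(axis=1) > max_distance] = -1  # leave distant pixels unclassified
    return labels.reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((100, 100, 5))   # placeholder 5-band RapidEye-like cube
    means = rng.random((5, 5))          # placeholder means for urban, soil, paddy, forest, water
    classified = minimum_distance_classify(scene, means)
    print(np.bincount(classified.ravel()))
```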
To minimize errors in the practical application of growth monitoring and yield estimation of rice, paddy fields were categorized using RapidEye imagery and NDVI values. Forest, waterbody, soil, and urban areas were removed, and only paddy fields were retained using the NDVI threshold method proposed by Xiao et al. [21], which assumes that the unique reflectance characteristics of paddies and other features can be used to categorize paddy rice fields. When a pixel is filled by water, its NDVI is consistently lower than 0.1. Pixels filled with rice tend to have high NDVI values ahead of harvest, while evergreen forest areas tend to have consistently high NDVI values greater than 0.7. These NDVI thresholds of 0.1 and 0.7 were applied to identify waterbody and forest areas from the NDVI values of the RapidEye images taken on DOY 152 to 250. Soil areas have similar reflectances in the near-infrared (NIR) and red bands, although the NIR reflectance is generally larger than the red; thus, a pixel covered by soil tends to have low NDVI values of about 0.1 to 0.2 [20]. Hence, the paddy rice fields were identified by applying the classification method with the NDVI thresholds of 0.1 and 0.7.
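The following sketch illustrates the NDVI thresholding logic described above, assuming a co-registered stack of NDVI images from the four acquisition dates. It is a simplified reading of the threshold rules (persistently NDVI < 0.1 for water, persistently NDVI > 0.7 for evergreen forest), not the full procedure of Xiao et al. [21].

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); zero-reflectance pixels return 0."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    denom = np.where((nir + red) == 0.0, 1.0, nir + red)
    return (nir - red) / denom

def paddy_candidate_mask(ndvi_series, water_thresh=0.1, forest_thresh=0.7):
    """Keep pixels that are neither persistent water nor persistent dense vegetation.

    ndvi_series : (n_dates, rows, cols) NDVI stack, e.g. DOY 152, 174, 220, and 250.
    """
    always_water = (ndvi_series < water_thresh).all(axis=0)
    always_forest = (ndvi_series > forest_thresh).all(axis=0)
    return ~(always_water | always_forest)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    reds = rng.random((4, 50, 50)) * 0.3   # placeholder red reflectance for four dates
    nirs = rng.random((4, 50, 50)) * 0.6   # placeholder NIR reflectance for four dates
    stack = np.stack([ndvi(r, n) for r, n in zip(reds, nirs)])
    print(int(paddy_candidate_mask(stack).sum()), "candidate paddy pixels")
```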
The classified NDVI information for the TaeAn RapidEye image was used for evaluation of the classification methods described above. Accuracy of the classified results in terms of the paddy fields was determined by overlaying a vegetation index map used to monitor paddy rice growth. This was performed using a digitized paddy cover map from the Ministry of Agriculture, Food and Rural Affairs, Korea (Figure 3). The accuracy of the classified results was also analyzed by comparing with the digitized paddy cover map, and projecting an error distribution map.
The reflectances of the UAV image were used as standard values, and the RapidEye reflectances were compared with the corresponding UAV reflectances. Several statistical analyses were used to evaluate whether the results of the comparison were reliable. The data were analyzed with two-way analysis of variance (ANOVA) using PROC ANOVA, and with Pearson’s correlation coefficients using PROC CORR (SAS version 9.4, SAS Institute Inc., NC, USA). In addition, two statistical measures were used to evaluate the performance of the atmospheric correction methods: (1) root mean square error (RMSE, Equation 1); and (2) model efficiency (ME, Equation 2) [22]:
\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i-M_i\right)^2}, (1)
\mathrm{ME}=1-\frac{\sum_{i=1}^{n}\left(S_i-M_i\right)^2}{\sum_{i=1}^{n}\left(M_i-M_{avg}\right)^2}, (2)
where Si is the ith simulated value, Mi is the ith measured value, Mavg is the average of the measured values, and n is the number of data pairs. ME equals the coefficient of determination (R2) when the simulated values plotted against the measured values fall on the 1:1 line. However, ME is generally lower than R2 and can be negative when predictions are strongly biased.
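A small sketch of these two measures, together with the Pearson correlation used alongside them, is given below; it assumes paired one-dimensional arrays of simulated (RapidEye) and measured (UAV) reflectances, with placeholder values.

```python
import numpy as np

def rmse(simulated, measured):
    """Root mean square error, Equation (1)."""
    s, m = np.asarray(simulated, float), np.asarray(measured, float)
    return float(np.sqrt(np.mean((s - m) ** 2)))

def model_efficiency(simulated, measured):
    """Model efficiency, Equation (2): 1 is perfect agreement; values can be negative."""
    s, m = np.asarray(simulated, float), np.asarray(measured, float)
    return float(1.0 - np.sum((s - m) ** 2) / np.sum((m - m.mean()) ** 2))

if __name__ == "__main__":
    rapideye = np.array([0.05, 0.12, 0.30, 0.45, 0.22])  # placeholder simulated values
    uav      = np.array([0.06, 0.10, 0.32, 0.44, 0.20])  # placeholder measured values
    r = float(np.corrcoef(rapideye, uav)[0, 1])          # Pearson correlation coefficient
    print(r, rmse(rapideye, uav), model_efficiency(rapideye, uav))
```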
To evaluate classification accuracy, four accuracy measures were examined in this study: the overall accuracy, kappa coefficient, producer accuracy, and user accuracy were computed for each error matrix. In thematic mapping from remotely sensed data, the term accuracy typically expresses the degree of ‘correctness’ of a map or classification. The four metrics were calculated using the post-classification error analysis of the ENVI program, which computes a confusion matrix (also referred to as an error matrix or contingency table), including overall accuracy, producer accuracy, user accuracy, and kappa coefficient, from a ground truth image or ground truth regions of interest. The confusion matrix is the most common form of expressing classification accuracy. In this matrix, classification results are given as rows and reference data (ground truth) are given as columns for each class type. The overall accuracy is calculated by summing the number of correctly classified pixels and dividing by the total number of pixels [23]. However, more specific measures are needed because the overall accuracy does not indicate how well individual classes were classified. The producer accuracy is the ratio between the number of correctly classified pixels and the column total, and represents how well reference pixels of each ground cover type are classified. The user accuracy is the ratio between the number of correctly classified pixels and the row total, and represents the probability that a pixel classified into a given category actually represents that category on the ground. The user accuracy and producer accuracy for a given class are typically not the same [23]. The kappa coefficient (K) describes the proportion of agreement between the classification result and the reference data after random agreement by chance is removed from consideration. The K value approaches 0 when there is no agreement and approaches 1 with near-perfect agreement [24].
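The sketch below derives the four accuracy measures from a confusion matrix laid out as described above (classification results in rows, reference data in columns). The two-class example counts are placeholders, not values from this study.

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall, producer, and user accuracy plus kappa from a confusion matrix.

    confusion[i, j] = number of pixels of reference class j assigned to map class i,
    i.e. rows are the classification result and columns are the ground truth.
    """
    cm = np.asarray(confusion, dtype=np.float64)
    total = cm.sum()
    diag = np.diag(cm)

    overall = diag.sum() / total
    producer = diag / cm.sum(axis=0)   # correctly classified / column (reference) total
    user = diag / cm.sum(axis=1)       # correctly classified / row (map) total

    # Kappa: agreement remaining after chance agreement is removed.
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (overall - chance) / (1.0 - chance)
    return overall, producer, user, kappa

if __name__ == "__main__":
    cm = np.array([[820, 120],    # placeholder paddy / non-paddy pixel counts
                   [30, 4030]])
    print(accuracy_metrics(cm))
```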
The performance of each atmospheric correction method was evaluated by comparing the UAV reflectance and RapidEye reflectance. Among the three atmospheric correction methods, ATCOR produced the best agreement between UAV and RapidEye reflectances (Figure 4). Values of r, RMSE, and ME of the comparison for ATCOR were 0.869, 0.055, and 0.732, respectively (Table 4). Although ATCOR and FLAASH are both MODTRAN4 model-based methods, ATCOR produced slightly better correction performance. In addition, the correction performance indices indicate that either FLAASH or ATCOR produces more reliable atmospheric correction than QUAC. The FLAASH and ATCOR methods have the option of retrieving the aerosol amount and estimating the scene-average visibility. However, when the data lack the specific spectral channels required for aerosol retrieval, no measurements of aerosol optical depth can be supplied as input parameters. While these MODTRAN4 model-based methods are theoretically more sophisticated than QUAC, each model’s performance depends on its ability to accurately characterize atmospheric aerosols [25]. Therefore, when aerosol retrieval is not possible but the basic assumptions of QUAC are satisfied (i.e., at least 10 diverse materials or dark pixels in a scene), QUAC may produce atmospheric correction results equivalent to those of the model-based methods.
Table 4. Comparison between RapidEye and UAV reflectances for each atmospheric correction method.
Correction Methods♩ | r♪ | RMSE | ME |
No-correction | 0.804** | 0.076 | 0.477 |
QUAC | 0.867** | 0.077 | 0.463 |
FLAASH | 0.862** | 0.056 | 0.714 |
ATCOR | 0.869** | 0.055 | 0.732 |
♩ QUAC, FLAASH, and ATCOR denote QUick Atmospheric Correction, Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes, and Atmospheric and Topographic Correction, respectively. ♪ ** represents significance at the 99% probability level. Criteria for correlations (Cohen, 1988): 0.1-0.3: small; 0.3-0.5: medium; and 0.5-1.0: large.
When the parallelepiped, maximum likelihood, and minimum distance supervised classification methods were applied to the TaeAn RapidEye imagery, the minimum distance method produced reasonably acceptable classification results (Figure 5). The overall accuracy for the minimum distance method varied from 90 to 96%, whilst the kappa coefficient varied from 0.52 to 0.77 (Table 5). However, the results show that the classified images cannot consistently distinguish between waterbodies and paddy fields early in the crop season, nor between forest areas and paddy fields when the vegetation is growing vigorously. Paddy fields have unique features because rice plants are grown on flooded land. Therefore, the reflectances of paddy fields are affected by water just after irrigation and transplanting, whilst they are similar to those of forest areas once the paddy rice canopy has closed. Furthermore, the most common source of error may arise when defining the training classes: when pixels fall outside a specific class region or within overlapping regions, errors may occur that result in misclassification. The automatic supervised classification method is mathematically simple and computationally efficient, but its accuracy is sensitive to the quality of the training classes [19].
Table 5. Classification accuracy (%) of the minimum distance method applied to the TaeAn RapidEye images by day of year (DOY).
DOY | Reference class | Classification result | |
Paddy | Non-paddy | Producer accuracy | ||
152 | Paddy | 82.91 | 4.14 | 82.91 |
Non-paddy | 16.70 | 95.85 | 95.85 | |
User accuracy | 61.52 | 93.88 | - | |
Overall accuracy = 90.76% | ||||
Kappa coefficient = 0.52 | ||||
174 | Paddy | 83.28 | 3.96 | 83.28 |
Non-paddy | 16.72 | 96.04 | 96.04 | |
User accuracy | 61.63 | 98.39 | - | |
Overall accuracy = 94.87% | ||||
Kappa coefficient = 0.67 | ||||
220 | Paddy | 99.92 | 0.08 | 99.92 |
Non-paddy | 0.08 | 100.00 | 100.00 | |
User accuracy | 61.64 | 99.44 | - | |
Overall accuracy = 95.82% | ||||
Kappa coefficient = 0.77 | ||||
250 | Paddy | 83.28 | 4.01 | 83.28 |
Non-paddy | 16.72 | 95.99 | 95.99 | |
User accuracy | 61.64 | 97.12 | - | |
Overall accuracy = 93.72% | ||||
Kappa coefficient = 0.62 |
To identify paddy fields from RapidEye satellite images more precisely, the NDVI threshold method suggested by Xiao et al. [21] was applied in this study. This method classified the paddy fields with an overall accuracy of 95.82%, a kappa coefficient of 0.77, a producer accuracy of 99.92%, and a user accuracy of 61.64% (Table 6). To determine the errors in the classified paddy map, an error distribution map was produced by comparing each pixel of the classified paddy map with the digitized paddy cover map (Figure 6). Soil pixels covered with vegetation were misclassified as paddy fields, while paddy field pixels with more water and less vegetation were misclassified as non-paddy features. The current classification results correspond closely to those reported by Jeong et al. [26], who detected paddy fields using MODIS satellite images and also observed over- and under-estimated paddy field pixels. The spatial resolutions of MODIS and RapidEye are 500 m and 5 m, respectively; because RapidEye images have much higher spatial resolution, water or soil in paddy fields can be distinguished more clearly than with MODIS images.
Table 6. Classification accuracy (%) of the NDVI threshold method for paddy field detection.
Reference class | Classification result | |
Paddy | Non-paddy | Producer accuracy | |
Paddy | 99.92 | 0.08 | 99.92 |
Non-paddy | 0.08 | 100 | 100 |
User accuracy | 61.64 | 99.44 | - |
Overall accuracy = 95.82% | |||
Kappa coefficient = 0.77 |
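A minimal sketch of how such an error distribution map can be derived from two co-registered binary masks (the classified paddy map and the digitized paddy cover map) is shown below; the mask names and the 0/1/2 coding are illustrative assumptions.

```python
import numpy as np

def error_distribution(classified_paddy, reference_paddy):
    """Pixel-wise comparison of a classified paddy mask with a reference paddy cover map.

    Both inputs are boolean arrays of the same shape (True = paddy). Returns an
    integer map: 0 = agreement, 1 = overestimated (classified as paddy, reference
    non-paddy), 2 = underestimated (classified as non-paddy, reference paddy).
    """
    classified_paddy = np.asarray(classified_paddy, dtype=bool)
    reference_paddy = np.asarray(reference_paddy, dtype=bool)
    error_map = np.zeros(classified_paddy.shape, dtype=np.uint8)
    error_map[classified_paddy & ~reference_paddy] = 1   # commission (overestimation)
    error_map[~classified_paddy & reference_paddy] = 2   # omission (underestimation)
    return error_map

if __name__ == "__main__":
    classified = np.array([[True, False], [True, True]])
    reference = np.array([[True, True], [False, True]])
    print(error_distribution(classified, reference))
```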
If the NDVI threshold values used in this study could be adjusted so that more of the underestimated pixels were correctly classified as paddy field pixels, the proportion of underestimated pixels would decrease. Therefore, it is important to determine suitable NDVI threshold values. Xiao et al. [21] and Jeong et al. [26] used NDVI, the enhanced vegetation index (EVI), and the land surface water index (LSWI) for detection of paddy fields during the inundated period. EVI is an improvement over NDVI that reduces atmospheric effects and the influence of variable soil or canopy background. LSWI is calculated from the near-infrared and shortwave infrared bands; the shortwave infrared is sensitive to the water content of vegetation and soil, and can therefore be used to estimate surface water content [26]. While using two spectral indices for classifying paddy fields is potentially advantageous over using a single index, the RapidEye images used in the current study lack the shortwave infrared band needed to calculate LSWI. Therefore, only NDVI was used to detect paddy fields in this study. If additional vegetation indices sensitive to water or chlorophyll content were available, the classification accuracy would likely be enhanced.
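For reference, the commonly used formulations of these three indices are given below, with ρ denoting band reflectance; these are the standard definitions rather than coefficients taken from [21] or [26]:

\mathrm{NDVI}=\frac{\rho_{NIR}-\rho_{red}}{\rho_{NIR}+\rho_{red}}, \quad \mathrm{EVI}=2.5\,\frac{\rho_{NIR}-\rho_{red}}{\rho_{NIR}+6\rho_{red}-7.5\rho_{blue}+1}, \quad \mathrm{LSWI}=\frac{\rho_{NIR}-\rho_{SWIR}}{\rho_{NIR}+\rho_{SWIR}}

Because RapidEye provides no shortwave infrared band (Table 1), LSWI cannot be computed from these images, whereas NDVI requires only the red and NIR bands.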
Three atmospheric correction methods, QUAC, FLAASH, and ATCOR, were applied to RapidEye satellite images obtained over paddy fields at CNU, Gwangju, and at TaeAn, Chungcheongnam-do, Korea. The corrected RapidEye satellite images were evaluated by comparison with UAV images and classified into representative land cover features using the minimum distance method. Of the three atmospheric correction methods, ATCOR gave results that corresponded best with those from the UAV images. We also found that the minimum distance classification method performed well, classifying pixels into the corresponding reference endmember classes; however, it did not classify the same pixels consistently across the time-series images. Therefore, NDVI threshold values were used to classify paddy fields from the RapidEye images according to their NDVI time-series characteristics. As a result, the same pixels could be classified from each of the time-series images, although some under- and over-estimated pixels persisted. This issue could probably be addressed if more suitable threshold values were determined and applied. We contend that the image correction and classification methods validated here are applicable to high resolution satellite images for monitoring crop growth conditions in precision agriculture.
This study was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), which is funded by the Ministry of Education, Science, and Technology (NRF-2011-0009827 and NRF-2013R1A1A2005788).
The authors declare that there is no conflict of interest regarding the publication of this paper.
[1] | Haboudane D, Miller JR, Tremblay N, et al. (2002) Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens Environ 81: 416-426. |
[2] | Berni J, Zarco-Tejada PJ, Suárez L, et al. (2009) Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans Geosci Remote Sens 47: 722-738. |
[3] | Bauer ME, Cipra JE (1973) Identification of Agricultural Crops by Computer Processing of ERTS MSS Data. Symp. on Significant Results Obtained from the Earth Resources Technology Satellite-1. NASA SP-327. NASA Goddard Space Flight Center: 205-212. |
[4] | Bernstein LS, Jin X, Gregor B, et al. (2012) Quick atmospheric correction code: algorithm description and recent upgrades. Opt Eng 51: 111719. doi: 10.1117/1.OE.51.11.111719 |
[5] | Richter R (2005) Hyperspectral sensors for military applications. NASA technical report RTO-MP-SET-094, 2005. |
[6] | Gao BC, Heidebrecht KB, Goetz AFH (1993) Derivation of scaled surface reflectance from AVIRIS data. Remote Sens Environ 44: 145-163. doi: 10.1016/0034-4257(93)90013-N |
[7] | Richter R (1996) A spatially adaptive fast atmosphere correction algorithm. Int J Remote Sens 11: 159-166. |
[8] | Adler-Golden S, Berk A, Bernstein LS, et al. (1998) FLAASH, A MODTRAN4 atmospheric correction package for hyperspectral data retrievals and simulations. In Proc 7th Ann JPL Airborne Earth Science Workshop: 9-14. |
[9] | Tuominen J, Lipping T (2011) Detection of environmental change using hyperspectral remote sensing at Olkiluoto repository site. Working Report, Posiva Oy, Eurajoki, Finland, 31-34. |
[10] | Jones HG, Vaughan RA (2010) Remote sensing of vegetation: principles, techniques, and applications. Oxford university press. |
[11] | Richards JA, Jia X (2006) Remote sensing digital image analysis, 4th ed. Berlin et al. Springer 78: 193. |
[12] | Kim H, Yeom J (2014) Sensitivity of vegetation indices to spatial degradation of RapidEye imagery for paddy rice detection: a case study of South Korea. GISci Remote Sens 52: 1-17. doi: 10.1109/TGRS.2013.2290671 |
[13] | RapidEye AG (2011) Satellite imagery product specifications, Version 2.1. |
[14] | Bernstein LS, Jin X, Gregor B, et al. (2012) Quick atmospheric correction code: algorithm description and recent upgrades. Opt Eng 51: 111719. doi: 10.1117/1.OE.51.11.111719 |
[15] | ENVI (2009) Atmospheric Correction Module: QUAC and FLAASH User's Guide, Version 4. 7. ITT Visual Information Solutions, Boulder, CO. |
[16] | Black M, Fleming A, Riley T, et al. (2014) On the atmospheric correction of Antarctic airborne hyperspectral data. Remote Sens 6: 4498-4514. doi: 10.3390/rs6054498 |
[17] | ERDAS, Geosystems (2009) ATCOR for ERDAS IMAGINE 2010—Haze reduction, atmospheric and topographic correction—User manual ATCOR 2 and ATCOR 3. ERDAS Imagine 1–58. ERDAS–GeoSystems. |
[18] | Liang S (2005) Quantitative remote sensing of land surfaces. John Wiley & Sons. Inc., New York. |
[19] | Mujumdar PP, Kumar DN (2013) Floods in a changing climate: hydrologic modeling. International Hydrology Series, Cambridge University Press, Cambridge. |
[20] | Lillesand TM, Kiefer RW, Chipman JW (2004) Remote sensing and image interpretation (No. Ed. 5). John Wiley & Sons. Inc. New York. |
[21] | Xiao X, Boles S, Frolking S, et al. (2006) Mapping paddy rice agriculture in South and Southeast Asia using multi-temporal MODIS images. Remote Sens Environ 100: 95-113. doi: 10.1016/j.rse.2005.10.004 |
[22] | Nash JE, Sutcliffe JV (1970) River flow forecasting through conceptual models part I—A discussion of principles. J Hydrol 10: 282-290. doi: 10.1016/0022-1694(70)90255-6 |
[23] | ENVI (2004) ENVI user’s guide. Research system Inc. Available from: http://aviris.gl.fcen.uba.ar/Curso_SR/biblio_sr/ENVI_userguid.pdf. |
[24] | Thomas V, Treitz P, Jelinski D, et al. (2002) Image classification of a northern peatland complex using spectral and plant community data. Remote Sens Environ 84: 83-99. |
[25] | Moses WJ, Gitelson AA, Perk RL, et al. (2012) Estimation of chlorophyll-a concentration in turbid productive waters using airborne hyperspectral data. Water Res 46: 993-1004. doi: 10.1016/j.watres.2011.11.068 |
[26] | Jeong ST, Jang KC, Hong SY, et al. (2011) Detection of irrigation timing and the mapping of paddy cover in Korea using MODIS images data. Kor J Agric Forest Meteo 13: 69-78. |