Remote sensing is a useful and convenient tool for qualitative and quantitative determination of plant growth conditions. By providing repetitive coverage, the technique can supply information on the actual status of crops, which is necessary for change-detection studies at a global or regional scale, such as crop yield prediction and the monitoring of crop status and conditions [1,2]. While crop conditions can be monitored using various remote sensing platforms, the two primary categories are satellite and aerial platforms. Satellites can observe a wide area of thousands of square kilometers at once, revisiting it in a regular and timely manner. These characteristics render satellites the most suitable of the current remote sensing platforms for monitoring crop growth over broad areas. Satellites have been used in agricultural remote sensing since the early 1970s [3]. Satellite systems with increasingly higher spatial resolution and more frequent revisit cycles have been developed to improve data quality. For optimum use of these data, atmospheric correction is required to retrieve the rectified surface reflectance from a remotely sensed image by removing the effects of light scattering and absorption by aerosols, haze, and gases. Although atmospheric correction is an important processing step in many remote sensing applications, it remains difficult because atmospheric conditions are highly variable in time and space. Because many applications demand accurate reflectances, evaluating the accuracy of atmospheric correction and developing improved algorithms remain very active areas of research [4].
Atmospheric correction methods can be divided into two categories: (1) empirical methods; and (2) radiative transfer model-based methods. The empirical methods rely on information within the scene itself, i.e., radiance at certain locations, and do not use a physical model as the model-based methods do. The most recent addition to the empirical methods is the Quick Atmospheric Correction (QUAC) method [5]. The model-based methods are performed using radiative transfer models. In this procedure, field measurements are not needed; only basic information on the scene is required, such as site location and elevation, flight altitude, the sensor model, local visibility, and acquisition time. Several model-based methods dedicated to retrieving reflectance information from hyperspectral and multispectral data have been developed, including the ATmosphere REMoval program (ATREM), Atmospheric and Topographic Correction (ATCOR), and Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [6,7,8]. These methods retrieve surface reflectance using a radiative transfer model, and all of them are quite similar in their basic principles and operation [9].
While some features are distinguishable in a panchromatic or single-band image, most features are more clearly distinguishable in multispectral and hyperspectral images containing multiple wavebands. The reflectance properties of an object depend on its surface features (e.g., color and texture) and on environmental conditions (e.g., geographic location and atmospheric components). In images that contain multiple spectral bands (i.e., multispectral and hyperspectral images), the reflectance characteristics of the various features are intermixed. Therefore, automated techniques are needed that can identify different surface characteristics and categorize all of the pixels in an image into homogeneous land cover types or themes. This process is termed classification, and the classified data may then be used to produce thematic maps [10]. Classification methods fall into two categories: supervised and unsupervised classification. Supervised classification is the procedure most often used as a precursor to quantitative analysis of remote sensing image data. It depends on suitable algorithms to classify and label the pixels in an image as representing particular ground cover types or classes. A variety of algorithms is available for supervised classification [11]; among the most frequently used are the parallelepiped, minimum distance, and maximum likelihood methods.
High-resolution satellite images are more suitable for monitoring crop growth conditions in precision agriculture than low-resolution satellite images. Image correction and classification methods are needed to determine crop growth conditions accurately; however, most methods have been developed and evaluated for relatively low-resolution imagery. The objectives of this study were to identify the canopy growth of paddy rice and to investigate practical image correction and classification methods. We specifically evaluated three atmospheric correction methods, QUAC, FLAASH, and ATCOR, as well as selected classification methods, in order to extract an endmember category or class (i.e., paddy) from image data of an area of interest. The selected image correction methods were applied to a RapidEye high-resolution (6.5 m) image to produce vegetation index (VI) maps for monitoring rice growth conditions.
In this study, RapidEye (BlackBridge, Berlin, Germany) satellite images were acquired so that three atmospheric correction methods (i.e., QUAC, FLAASH, and ATCOR) and three supervised classification methods (i.e., parallelepiped, minimum distance, and maximum likelihood) could be applied and evaluated. RapidEye images were taken over experimental paddy rice fields at Chonnam National University (CNU), Gwangju, and at TaeAn, Choongcheongnam-do, Korea (Figure 1). The CNU RapidEye image was acquired on day of year (DOY) 220 in 2013. The TaeAn RapidEye images were obtained on DOY 152, 174, 220, and 250 in 2010, and thus represent a time series. The CNU image was used to evaluate the atmospheric correction methods and the TaeAn images were used to evaluate the classification methods.
The RapidEye constellation of five Earth observation satellites is designed to point at several look angles, and all five satellites travel on the same orbit, enabling daily acquisition of high-resolution images with five spectral bands. This allows users to obtain large-area coverage with a frequent revisit interval [12,13]. RapidEye collects 4 million square kilometers of data per day at a 6.5 m ground resolution. The RapidEye system specifications are given in Table 1. RapidEye images are offered at two processing levels: (1) basic products (level 1B), which are geometrically uncorrected images; and (2) ortho products (level 3A), which are radiometrically, geometrically, and terrain corrected [13]. Level 3A images were used in this study.
Mission characteristic | Information |
Number of satellites | 5 |
Spacecraft lifetime | 7 years |
Orbit altitude | 630 km in sun-synchronous orbit |
Sensor type | Multi-spectral push broom imager |
Spectral bands (nm) | Blue (440–510 nm) |
Green (520–590 nm) | |
Red (630–685 nm) | |
Red edge (690–730 nm) | |
NIR (760–850 nm) | |
Ground sampling distance (nadir) | 6.5 m |
Pixel size (ortho-rectified) | 5 m |
Swath Width | 77 km |
On board data storage | Up to 1500 km of image data per orbit |
Revisit time | Daily (off nadir), 5.5 days (at nadir) |
Image capture capacity | 4 million km2 per day |
Dynamic range | Up to 12 bit |
A UAV image obtained on DOY 220 in 2013 was used to evaluate the atmospheric correction methods for the RapidEye images. The UAV image was obtained using a multi-copter with 8 rotors equipped with a miniature multiple camera array (Mini-MCA6, Tetracam Inc., USA). The Mini-MCA6 is a lightweight (700 g) multispectral remote sensing camera with six independent sensors that detect different spectral wavebands: Blue (410–490 nm), Green (510–590 nm), Red (610–690 nm), NIR1 (760–840 nm), NIR2 (810–850 nm), and NIR3 (870–890 nm). Each image has a pixel resolution of 1280 × 1024 and is stored as a 10-bit raw file in flash memory. Images taken by the Mini-MCA6 require pre-processing to convert the file format and to merge the multispectral wavebands recorded by the separate sensors into one image. This procedure was performed using the PixelWrench 2 software (PW2, Tetracam Inc., USA) supplied with the Mini-MCA system.
Radiometric correction of the UAV images was performed using empirical relationships between UAV image-based digital values and corresponding ground-based reflectance. For this process, three calibration targets were constructed from aluminum plates (2.4 × 2.4 m each). The plates were painted black, grey, and white with non-reflective paints and showed average reflectances of 5, 23, and 93%, respectively (Figure 2). The ground-based reflectance was measured using a portable multispectral radiometer (MSR16R, CROPSCAN Inc., MN, USA) with 16 wavebands in the range of 450 to 1750 nm. It has upward- and downward-facing sensors to measure incident and reflected radiation simultaneously. The radiometer, with a field of view (FOV) of 28°, measured the canopy reflectance of a 1 m diameter target area from a height of 2 m above the nadir position. The UAV-based image reflectance was estimated using linear regression equations determined from the relationships between UAV-based digital values and the corresponding ground-based reflectance (Table 2); a code sketch of this empirical-line calibration is given after Table 2. Geometric correction was carried out using the ENVI program (ITT Inc., CO, USA), based on ground control points from the Google Earth (Google Inc., CA, USA) image map.
Wavelength (nm) | r♩ | Linear regression |
450 | 0.999* | y = 0.4939x + 0.1958 |
550 | 0.999* | y = 0.4705x − 3.8267 |
650 | 0.999** | y = 0.5432x − 0.4813 |
800 | 0.999** | y = 0.7698x − 4.3484 |
880 | 0.999** | y = 1.4032x + 3.4184 |
♩ * and ** represent significance at the 95 and 99 % probability levels. Criteria for correlations (Cohen, 1988): 0.1-0.3: small; 0.3-0.5: medium; and 0.5-1.0: large. |
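The empirical-line calibration described above amounts to fitting a straight line between target digital numbers and their known reflectances and applying it band by band. The following is a minimal Python sketch of that idea; the digital numbers and image size are illustrative assumptions, not values from the study, and only the target reflectances (5, 23, and 93%) come from the text.

```python
import numpy as np

# Illustrative (assumed) digital numbers for the black, grey, and white targets in one waveband.
dn_targets = np.array([40.0, 210.0, 860.0])
# Average calibration-target reflectances (%) reported in the text.
refl_targets = np.array([5.0, 23.0, 93.0])

# Fit reflectance = slope * DN + intercept, the same form as the equations in Table 2.
slope, intercept = np.polyfit(dn_targets, refl_targets, deg=1)
r = np.corrcoef(dn_targets, refl_targets)[0, 1]
print(f"reflectance = {slope:.4f} * DN + {intercept:.4f}  (r = {r:.3f})")

# Apply the calibration to an entire band (random stand-in for a 1280 x 1024 Mini-MCA6 frame).
band_dn = np.random.randint(0, 1024, size=(1024, 1280)).astype(float)
band_reflectance = slope * band_dn + intercept
```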
QUAC and FLAASH were applied using the ENVI software (ITT Inc., CO, USA), and ATCOR was applied using the ERDAS IMAGINE software (Hexagon Geospatial, GA, USA). The specific parameters used for FLAASH and ATCOR are shown in Table 3. QUAC is applicable to multispectral and hyperspectral images; it is an in-scene approach that determines atmospheric correction parameters directly from the information contained within the scene, without additional metadata. Because QUAC does not involve radiative-transfer calculations, it is significantly faster than the model-based methods; however, its atmospheric correction is more approximate. The use of QUAC has some restrictions, particularly its requirement for a certain minimum amount of land area in the scene. FLAASH also supports the analysis of hyperspectral and multispectral imaging sensors. FLAASH, interfaced with MODTRAN4, corrects images according to radiative transfer (RT) codes that calculate the radiance of the images from inputs such as site location, elevation, flight altitude, sun angle, and a few atmospheric parameters [9,14,15]. ATCOR provides a fast atmospheric correction algorithm for images from medium and high spatial resolution satellite sensors. ERDAS IMAGINE offers several versions of ATCOR, such as ATCOR-2 (specifically designed for use over flat terrain), ATCOR-3 (developed for mountainous terrain), and the latest release, ATCOR-4 [16]. We used ATCOR-2 in this study. ERDAS IMAGINE 2010 (Version 10.0) offers several ATCOR processing options: (a) a haze removal algorithm; (b) atmospheric correction with constant atmospheric conditions; and (c) the capability of viewing reference spectra of selected target areas. Haze or cloud removal and atmospheric water retrieval settings were kept at their defaults, as recommended by the ATCOR user manual [17].
FLAASH | ATCOR2 | |||
Input parameter | Value | Input parameter | Value | |
Acquisition time (UTC) | 3:25:01 | Acquisition time (UTC) | 3:25:01 | |
Latitude | 35.1734° | Latitude | 35.1734° | |
Longitude | 126.8986° | Longitude | 126.8986° | |
Visibility | 40 km | Visibility | 40 km | |
Ground elevation | 0 | Aerosol type | Rural, Midlat-summer | |
CO2 conc. (ppm) | 414.9 | Solar zenith | 19.8° | |
Atmospheric model | Midlat-summer | Solar azimuth | 163.3° | |
Aerosol model | Rural | Satellite azimuth | 100.42° | |
Zenith angle | 163.48° | |||
Azimuth angle | 79.58° | |||
♩ FLAASH and ATCOR represent Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes and Atmospheric and Topographic Correction, respectively.
The RapidEye image taken at CNU on DOY 220 in 2013 was used as a reference for evaluation of the atmospheric correction methods. Comparing ground-measured point data with satellite image pixel data is difficult because of the difference in spatial resolution. To make the comparison possible, the surface of interest must be large enough and completely homogeneous so that a sufficient number of point measurements can be made on the corresponding surface of the satellite image [18]. In this context, the UAV image is assumed to meet this general requirement. The UAV reflectances were compared with the RapidEye reflectances at evaluation points selected on soil, paddy, roof, and road asphalt.
Various supervised classification algorithms may be used to assign an unknown pixel to one of a number of classes [19]. The parallelepiped, minimum distance and maximum likelihood decision rules are among the most frequently used classification algorithms. These three supervised classification methods were applied to the TaeAn RapidEye image using the ENVI software. Supervised classification requires user-defined training classes in the image before performing the classification, and each class is used as a reference for the classifier. The analyst seeks to locate specific sites in the remotely sensed data that represent homogeneous examples of known land cover types. Training classes are groups of pixels in a region of interest (ROI). Five training classes of urban, soil, paddy, forest, and water were selected in this study.
The parallelepiped classifier divides each axis of the multi-spectral feature space, forming an n-dimensional parallelepiped; each pixel that falls into a box is labeled as the corresponding class. The accuracy of this classification depends on selecting the lowest and highest values in consideration of the population statistics of each class [20]. The minimum distance classifier is mathematically simple and computationally efficient. It assigns unknown image data to the class that minimizes the distance between the pixel and the class mean in multi-spectral feature space. Distance is used as an index of similarity, so that the minimum distance corresponds to the maximum similarity. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case pixels that do not meet the selected criteria remain unclassified [15,20]. Maximum likelihood classification assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. The maximum likelihood classifier quantitatively evaluates both the variance and covariance of the category spectral response patterns when classifying an unknown pixel. It is one of the most popular classification methods in remote sensing, in which a pixel is assigned to the class with the maximum likelihood [15,20].
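As a concrete illustration of the minimum distance rule described above, the following is a minimal Python sketch; it is not the ENVI implementation, and the class means, scene values, and optional distance threshold are illustrative assumptions.

```python
import numpy as np

def minimum_distance_classify(image, class_means, threshold=None):
    """Assign each pixel to the class whose mean spectrum is nearest (Euclidean distance).

    image       : (rows, cols, bands) reflectance array
    class_means : (n_classes, bands) mean spectrum of each training class
    threshold   : optional maximum distance; pixels farther than this stay unclassified (-1)
    """
    pixels = image.reshape(-1, image.shape[-1])                        # (n_pixels, bands)
    # Distance from every pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    if threshold is not None:
        labels[dists.min(axis=1) > threshold] = -1                     # unclassified
    return labels.reshape(image.shape[:2])

# Illustrative use with 5 bands and the five training classes named in the text.
classes = ["urban", "soil", "paddy", "forest", "water"]
rng = np.random.default_rng(0)
class_means = rng.random((len(classes), 5))   # would come from training-class statistics
scene = rng.random((100, 100, 5))             # stand-in for a RapidEye subset
label_map = minimum_distance_classify(scene, class_means)
```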
To minimize errors in the practical application of growth monitoring and yield estimation of rice, paddy fields were categorized using RapidEye imagery and NDVI values. Forest, waterbody, soil, and urban areas were removed, and only paddy fields were retained, using the NDVI threshold method proposed by Xiao et al. [21], which assumes that the distinct reflectance characteristics of paddy and other features can be used to identify paddy rice fields. When a pixel is covered by water, its NDVI is consistently lower than 0.1. Pixels containing rice tend to have high NDVI values ahead of harvest, whereas evergreen forest areas tend to have consistently high NDVI values, greater than 0.7. These NDVI thresholds of 0.1 and 0.7 were applied to identify the waterbody and forest areas in the NDVI values of the RapidEye images taken from DOY 152 to 250. Soil areas have similar reflectances in the near-infrared (NIR) and red bands, with the NIR reflectance generally slightly larger than the red; a pixel covered by soil therefore tends to have low NDVI values of about 0.1 to 0.2 [20]. Hence, the paddy rice fields were identified by applying the classification method with NDVI thresholds of 0.1 and 0.7.
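The threshold rule above can be sketched in a few lines of Python. This is only an interpretation of the rule under stated assumptions (a per-date stack of red and NIR reflectance arrays, and the 0.1/0.2/0.7 thresholds quoted in the text); it is not the exact procedure used in the study.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)

def classify_paddy(ndvi_series, water_thr=0.1, forest_thr=0.7):
    """Label paddy pixels from a time series of NDVI images (shape: dates x rows x cols).

    Rule of thumb described in the text:
      - water:  NDVI consistently below water_thr
      - forest: NDVI consistently above forest_thr
      - soil:   NDVI stays near 0.1-0.2 all season
      - paddy:  everything else (low NDVI after flooding, high NDVI before harvest)
    """
    water = np.all(ndvi_series < water_thr, axis=0)
    forest = np.all(ndvi_series > forest_thr, axis=0)
    soil = np.all(ndvi_series < 0.2, axis=0) & ~water
    return ~(water | forest | soil)

# Illustrative time series for DOY 152-250 (4 dates) on a 50 x 50 subset (random values).
rng = np.random.default_rng(1)
red = rng.random((4, 50, 50)) * 0.2
nir = rng.random((4, 50, 50)) * 0.5 + 0.1
paddy_mask = classify_paddy(ndvi(red, nir))
```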
The classified NDVI information for the TaeAn RapidEye images was used to evaluate the classification methods described above. The accuracy of the classified paddy fields was determined by overlaying the vegetation index map used to monitor paddy rice growth on a digitized paddy cover map from the Ministry of Agriculture, Food and Rural Affairs, Korea (Figure 3). The classified results were also compared pixel by pixel with the digitized paddy cover map to produce an error distribution map.
The reflectances of the UAV image were used as standard values, and the RapidEye reflectances were compared with the corresponding UAV reflectances. Several statistical analyses were used to evaluate whether the results of the comparison were reliable. The data were analyzed with two-way analysis of variance (ANOVA) using PROC ANOVA, and with Pearson's correlation coefficients using PROC CORR (SAS version 9.4, SAS Institute Inc., NC, USA). In addition, two statistics were used to evaluate the performance of the atmospheric correction methods: (1) root mean square error (RMSE, Equation 1); and (2) model efficiency (ME, Equation 2) [22]:
\[ \mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i-M_i\right)^2}, \tag{1} \]
\[ \mathrm{ME}=1-\frac{\sum_{i=1}^{n}\left(S_i-M_i\right)^2}{\sum_{i=1}^{n}\left(M_i-M_{avg}\right)^2}, \tag{2} \]
where $S_i$ is the ith simulated value, $M_i$ is the ith measured value, $M_{avg}$ is the average of the measured values, and $n$ is the number of data pairs. ME equals the coefficient of determination (R2) when the simulated versus measured values lie on a 1:1 line; in general, however, ME is lower than R2, and it can be negative when predictions are strongly biased.
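Equations 1 and 2 translate directly into short functions; the following sketch, with made-up reflectance pairs, is only meant to show the calculation.

```python
import numpy as np

def rmse(simulated, measured):
    """Root mean square error, Equation (1)."""
    simulated, measured = np.asarray(simulated), np.asarray(measured)
    return np.sqrt(np.mean((simulated - measured) ** 2))

def model_efficiency(simulated, measured):
    """Model efficiency (ME), Equation (2); 1 is perfect, values can be negative."""
    simulated, measured = np.asarray(simulated), np.asarray(measured)
    return 1.0 - np.sum((simulated - measured) ** 2) / np.sum((measured - measured.mean()) ** 2)

# Illustrative check with invented reflectance pairs (simulated = RapidEye, measured = UAV).
s = [0.05, 0.12, 0.25, 0.40]
m = [0.06, 0.10, 0.27, 0.38]
print(rmse(s, m), model_efficiency(s, m))
```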
To evaluate classification accuracy, four accuracy measures were computed for each error matrix: overall accuracy, kappa coefficient, producer accuracy, and user accuracy. In thematic mapping from remotely sensed data, the term accuracy is typically used to express the degree of 'correctness' of a map or classification. The four metrics were calculated with the post-classification error analysis of the ENVI program, which computes a confusion matrix (also referred to as an error matrix or contingency table), including overall accuracy, producer accuracy, user accuracy, and kappa coefficient, from a ground truth image or ground truth regions of interest. The confusion matrix is the most common form of expressing classification accuracy. In this matrix, the classification results are given as rows and the reference data (ground truth) as columns for each class type. The overall accuracy is calculated by summing the number of correctly classified pixels and dividing by the total number of pixels [23]. However, more specific measures are needed because the overall accuracy does not indicate how well individual classes were classified. The producer accuracy is the ratio between the number of correctly classified pixels and the column total, and represents how well reference pixels of each ground cover type are classified. The user accuracy is the ratio between the number of correctly classified pixels and the row total, and represents the probability that a pixel classified into a given category actually represents that category on the ground. The user accuracy and producer accuracy for any given class typically are not the same [23]. The kappa coefficient (K) describes the proportion of agreement between the classification result and the reference data after agreement expected by chance has been removed. K approaches 0 when there is no agreement beyond chance and approaches 1 with near-perfect agreement [24].
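The accuracy measures just described can be computed from a confusion matrix as in the following minimal sketch (not ENVI's routine); the 2 × 2 pixel counts are invented purely for illustration.

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall, producer, and user accuracies plus kappa from a confusion matrix.

    confusion[i, j] = number of pixels classified as class i whose reference class is j
    (classification as rows, reference/ground truth as columns, as in the text).
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    diag = np.diag(confusion)
    overall = diag.sum() / total
    producer = diag / confusion.sum(axis=0)        # per reference (column) class
    user = diag / confusion.sum(axis=1)            # per classified (row) class
    expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, producer, user, kappa

# Illustrative 2 x 2 matrix (paddy / non-paddy); pixel counts are made up.
cm = np.array([[820, 60],
               [110, 9010]])
print(accuracy_metrics(cm))
```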
The performance of each atmospheric correction method was evaluated by comparing the UAV reflectance and the RapidEye reflectance. Among the three atmospheric correction methods, ATCOR produced the best agreement between UAV and RapidEye reflectances (Figure 4); the r, RMSE, and ME values for ATCOR were 0.869, 0.055, and 0.732, respectively (Table 4). Although ATCOR and FLAASH are both MODTRAN4 model-based methods, ATCOR produced comparatively better correction performance. In addition, the correction performance indices indicate that either FLAASH or ATCOR produces more reliable atmospheric correction than QUAC. The FLAASH and ATCOR methods have the option of retrieving the aerosol amount and estimating the scene-average visibility. However, when processing data that lack the specific spectral channels required for aerosol retrieval, and no measurements of aerosol optical depth are available to supply as input parameters, this option cannot be used. While these MODTRAN4 model-based methods are theoretically more sophisticated than QUAC, each model's performance is affected by its ability to accurately characterize atmospheric aerosols [25]. Therefore, when aerosol retrieval cannot be performed but the basic assumptions of QUAC (i.e., at least 10 diverse materials or dark pixels in a scene) are satisfied, QUAC may produce atmospheric correction results equivalent to those of the model-based methods.
Correction Methods♩ | r♪ | RMSE | ME |
No-correction | 0.804** | 0.076 | 0.477 |
QUAC | 0.867** | 0.077 | 0.463 |
FLAASH | 0.862** | 0.056 | 0.714 |
ATCOR | 0.869** | 0.055 | 0.732 |
♩ QUAC, FLAASH, and ATCOR denote QUick Atmospheric Correction, Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes, and Atmospheric and Topographic Correction, respectively. ♪ ** represents significance at the 99 % probability level. Criteria for correlations (Cohen, 1988): 0.1-0.3: small; 0.3-0.5: medium; and 0.5-1.0: large.
When the parallelepiped, maximum likelihood, and minimum distance supervised classification methods were applied to the TaeAn RapidEye imagery, the minimum distance method produced reasonably acceptable classification results (Figure 5). The overall accuracy for the minimum distance method varied from 90 to 96%, while the kappa coefficient varied from 0.52 to 0.77 (Table 5). However, the results show that the classified images cannot consistently distinguish between waterbodies and paddy fields early in the crop season, nor between forest areas and paddy fields when the vegetation is growing vigorously. Paddy fields have unique features because rice plants are grown on flooded land; their reflectances are therefore dominated by water just after irrigation and transplanting, and resemble those of forest areas once the paddy rice canopy has closed. Furthermore, the most common source of error arises in defining the training classes: when pixels fall outside a specific class region or within overlapping regions, misclassification may result. The automatic supervised classification approach is mathematically simple and computationally efficient, but its accuracy is sensitive to the quality of the training classes [19].
DOY | Reference class | Classification result | ||
Paddy | Non-paddy | Producer accuracy | ||
152 | Paddy | 82.91 | 4.14 | 82.91 |
Non-paddy | 16.70 | 95.85 | 95.85 | |
User accuracy | 61.52 | 93.88 | - | |
Overall accuracy = 90.76% | ||||
Kappa coefficient = 0.52 | ||||
174 | Paddy | 83.28 | 3.96 | 83.28 |
Non-paddy | 16.72 | 96.04 | 96.04 | |
User accuracy | 61.63 | 98.39 | - | |
Overall accuracy = 94.87% | ||||
Kappa coefficient = 0.67 | ||||
220 | Paddy | 99.92 | 0.08 | 99.92 |
Non-paddy | 0.08 | 100.00 | 100.00 | |
User accuracy | 61.64 | 99.44 | - | |
Overall accuracy = 95.82% | ||||
Kappa coefficient = 0.77 | ||||
250 | Paddy | 83.28 | 4.01 | 83.28 |
Non-paddy | 16.72 | 95.99 | 95.99 | |
User accuracy | 61.64 | 97.12 | - | |
Overall accuracy = 93.72% | ||||
Kappa coefficient = 0.62 |
To identify paddy fields from the RapidEye satellite images more precisely, the NDVI threshold method suggested by Xiao et al. [21] was applied in this study. This method classified the paddy fields with an overall accuracy of 95.82%, a kappa coefficient of 0.77, a producer accuracy of 99.92%, and a user accuracy of 61.64% (Table 6). To locate the errors in the classified paddy map, an error distribution map was produced by comparing each pixel of the classified paddy map with the digitized paddy cover map (Figure 6). Soil pixels covered with vegetation were misclassified as paddy fields, while paddy field pixels covered with more water and less vegetation were misclassified as non-paddy features. The current classification results correspond closely to those reported by Jeong et al. [26], who also detected paddy fields using MODIS satellite images and observed both over- and under-estimated paddy field pixels. The spatial resolutions of MODIS and RapidEye are 500 m and 5 m, respectively; because a RapidEye image has much higher spatial resolution, water or soil in paddy fields can be distinguished more clearly with RapidEye than with MODIS images.
Reference class | Classification result | ||
Paddy | Non-paddy | Producer accuracy | |
Paddy | 99.92 | 0.08 | 99.92 |
Non-paddy | 0.08 | 100 | 100 |
User accuracy | 61.64 | 99.44 | - |
Overall accuracy = 95.82% | |||
Kappa coefficient = 0.77 |
If the NDVI threshold values used in this study could be adjusted to correctly classify more of the underestimated pixels as paddy field pixels, the proportion of underestimated pixels would be reduced. It is therefore important to determine a suitable NDVI threshold value. Xiao et al. [21] and Jeong et al. [26] used NDVI, the enhanced vegetation index (EVI), and the land surface water index (LSWI) to detect paddy fields during the inundated period. EVI is an improvement over NDVI that reduces atmospheric effects and variable soil or canopy background effects. LSWI is calculated from near-infrared and shortwave-infrared reflectance; the latter is sensitive to the water content of vegetation and soil and can be used to estimate surface water content [26]. While using two spectral indices for classifying paddy fields is potentially advantageous over using one, the satellite images used in the current study contain insufficient waveband information to calculate the second index. Therefore, only NDVI was used to detect paddy fields in this study. If various vegetation indices sensitive to water or chlorophyll content were available, the classification accuracy would likely be improved.
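For reference, the two additional indices mentioned above are commonly defined as in the sketch below. These are the widely used MODIS-style formulations rather than expressions taken from the cited papers, and the reflectance values are invented; note that LSWI needs a shortwave-infrared band, which RapidEye does not provide.

```python
def evi(blue, red, nir):
    """Enhanced vegetation index, commonly used MODIS-style form."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir):
    """Land surface water index; requires a shortwave-infrared band (absent on RapidEye)."""
    return (nir - swir) / (nir + swir)

# Illustrative surface reflectances (fractions, not values from the study).
blue, red, nir, swir = 0.04, 0.06, 0.45, 0.20
print(f"EVI = {evi(blue, red, nir):.3f}, LSWI = {lswi(nir, swir):.3f}")
```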
Three different atmospheric correction methods, QUAC, FLAASH, and ATCOR, were applied to RapidEye satellite images obtained over paddy fields at CNU, Gwangju, and at TaeAn, Chungcheongnam-do, Korea. The corrected RapidEye images were then evaluated by comparison with UAV images and classified into representative land cover features using the minimum distance method. Of the three atmospheric correction methods, ATCOR gave results that corresponded comparatively well with those from the UAV images. We also found that the minimum distance classification method performed well and classified all pixels into the corresponding reference endmember classes; however, it could not classify the same pixels consistently across the different time-series images. Therefore, NDVI threshold values were used to classify paddy fields from the RapidEye images according to the characteristics of the NDVI time series. As a result, the same pixels could be classified in each of the time-series images, although some under- and over-estimated pixels persisted. This issue could probably be addressed if suitable threshold values were determined and applied. We contend that the image correction and classification methods validated here are applicable to high-resolution satellite images for monitoring crop growth conditions in precision agriculture.
This study was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), which is funded by the Ministry of Education, Science, and Technology (NRF-2011-0009827 and NRF-2013R1A1A2005788).
The authors declare that there are no conflicts of interest regarding the publication of this paper.