    Abbreviations: ANN: Artificial neural network; CC: Canopy cover; CH: Canopy height; CHM: Canopy height model; CNN: Convolutional neural network; CV: Canopy volume; CT: Canopy temperature; DL: Deep learning; DNN: Deep neural network; ELM: Extreme learning machine; FCN: Fully connected network; GPR: Gaussian process regression; GPS: Global positioning system; HS: Hyperspectral; HALE: High altitude long endurance; LASSO: Least absolute shrinkage and selection operator; LiDAR: Light detection and ranging; LSTM: Long short-term memory; ML: Machine learning; MS: Multispectral; MALE: Medium-altitude long-endurance; MLR: Multiple linear regression; MTCI: MERIS Terrestrial Chlorophyll Index; NIR: Near-infrared; PA: Precision agriculture; PLSR: Partial least square regression; RBF: Radial basis function; RGB: Red-green-blue; RF: Random forest; SAR: Synthetic aperture radar; SVM: Support vector machine; UAV: Unmanned aerial vehicle; VI: Vegetation index; VTOL: Vertical takeoff and landing; VGG: Visual geometry group

    The development of advanced sensors, data acquisition platforms and the internet has created many opportunities and challenges for the advancement of agriculture [1]. Moreover, the tremendous growth in the use of emerging technologies in agriculture has generated a large amount of data, or "big data" [2]. The massive volume of agricultural data from smart devices such as field sensors, aerial vehicles, global positioning system (GPS) receivers, internet of things (IoT) devices, and cameras [3] can be accumulated and shared to support better decisions for smart farming activities such as crop planting [4], crop irrigation [5], soil management [6], disease detection [7], and pest identification [8]. To manage such farm activities effectively, it is essential to obtain field (or agricultural) information such as crop water stress, crop vigour, crop height and soil moisture. Traditionally, crop information was obtained by scouting the field regularly and scheduling farm activities accordingly, which is time-consuming and laborious. Alternatively, the use of sensors, cameras, moving vehicles and GPS in farm management [3] can provide a large amount of data to facilitate data-driven smart farming.

    Remote sensing has been a widely used and highly influential technology for smart farming and precision agriculture. The success of remote sensing in precision agriculture is strongly affected by factors such as the type of platform (ground-based, airborne or satellite), the sensors that capture a given region of the electromagnetic spectrum (visible, infrared or thermal), the temporal and spatial resolutions, and the energy source (active or passive) [9]. In the past, airborne or satellite-based remote sensing with sensors such as multispectral, hyperspectral, radio detection and ranging (RADAR) and light detection and ranging (LiDAR) was used to acquire such crop-related information at a regional or global level. For instance, Vallentin et al. [10] estimated the yield of cereal and canola in northeast Germany using satellite remote sensing data from six different optical sensors. The study showed that high-resolution images from satellites such as RapidEye and Sentinel-2 perform better than lower-resolution Landsat images. However, satellite-based remote sensing has three major limitations that often make it a poor choice for precision agriculture. First, satellites capture images at spatial resolutions of metres (e.g., Landsat has 30 m and Sentinel-2 has 10 m spatial resolution), which is usually inadequate for plant- or plot-level analysis. Second, the satellite revisit time is not flexible, so the necessary images are often not available at the required time. Third, environmental conditions such as clouds degrade image quality [11].

    In recent years, remote sensing with unmanned aerial vehicles (UAVs) has made prominent progress in precision agriculture, although UAVs have long been exploited for military, defense, survey and other applications [12]. A UAV is an unmanned system that carries camera sensors to capture data, with manual or automated flight management. It has become popular and widely investigated in precision agriculture because it addresses most of the limitations of satellite-based remote sensing [13]. First, it offers flexible revisit times, since the user can fly it at any time as long as the weather allows (thereby providing high temporal resolution imagery). Second, it can capture high spatial resolution imagery much closer to the plants, producing a bird's-eye view of the field (i.e., image resolutions of centimetres). Third, it is cost-effective and easy to operate and deploy. Finally, it can avoid clouds by flying at a lower altitude, resulting in high-quality images. However, the extraction of useful information from such spatial big data is the main challenge for researchers seeking to use drones and other remote sensing platforms effectively for precision agriculture [14]. Machine learning and deep learning methods for such spatial data analysis have shown some successes in this area [15].

    Many studies have investigated the potential of UAV imagery for precision agriculture applications such as yield estimation [16,17,18], weed detection [14], plant counting [19], and disease detection [20]. These existing works use various sensors mounted on UAV platforms to capture field imagery, mostly red-green-blue (RGB), multispectral, hyperspectral and thermal sensors. Among these, the majority of studies use consumer-grade RGB sensors because of their low cost, reliable built-in camera models and high-resolution images. For instance, the spectral information from RGB images acquired with a UAV was investigated for corn yield estimation by Geipel et al. [16]. A linear regression model was used to estimate corn yield from RGB images at three corn growth stages, and images from the end of the season were found to be highly correlated with corn yield. A highly accurate method for weed detection in bean and spinach farms with RGB images from a UAV was investigated by Bah et al. [14] using an unsupervised convolutional neural network. Maize plant counting with RGB images was performed by Gnadinger et al. [19] using the stretch contrast enhancement method to enhance the colour differences in images; the image-based counts showed a high correlation (R2 = 0.89) with ground truth. However, RGB sensors cannot capture the important information reflected in the infrared range outside the visible wavelengths. Here, multispectral sensors can complement such information and provide better crop health information. For instance, Yu et al. [18] developed a soybean yield and maturity estimation model with multispectral images and a machine learning approach (random forest) for large-scale soybean breeding trials. They achieved high correlations between aerial data and final soybean yield using canopy geometric features extracted from multispectral images. Similarly, a few existing studies reported the potential of multi-modal data fusion approaches combined with machine learning in precision agriculture. Hybrid multi-modal machine learning methods for estimating plant traits such as plant height, density, growth stage, disease and yield have been reported in the literature [21,22]. Furthermore, machine learning and deep learning methods have shown considerable success in crop disease and pest detection using UAV imagery, as reported in existing works [7,23,24].

    A detailed survey on the application of UAVs to various agricultural tasks such as drought stress detection, pesticide application, weed detection, nitrogen assessment, and biomass and yield estimation was presented in [25]. Maes et al. [25] analyzed and synthesized the scientific progress and the existing challenges in translating research results into practice. However, recent progress in artificial intelligence (AI) and machine learning (ML) based data analysis methods has great potential to tackle such challenges, and this progress needs to be synthesized and presented. Chlingaryan et al. [26] surveyed existing research on the application of machine learning algorithms to crop yield and nitrogen status estimation. They concluded that remote sensing and machine learning have great potential for precision agriculture, and that the fusion of multiple sensor data with a hybrid machine learning approach was more efficient than a single-modality approach on both tasks. However, their focus was on machine learning for remote sensing in general; they did not consider UAV-based remote sensing exclusively in their survey.

    A recent review [27] reported the latest uses of UAVs for various precision agriculture tasks such as weed mapping and management, yield estimation, disease detection, and irrigation management. Different types of UAVs along with their data acquisition methodologies were discussed. However, this survey did not consider the performance evaluation of machine learning and deep learning methods. Similarly, Velusamy et al. [28] surveyed different UAV types and sensors and their applications in precision agriculture, especially precise pest management. They also reported existing works on UAVs and other remote sensing technologies for early disease detection, crop monitoring and yield estimation. However, they did not discuss the data-driven methods used with UAV imagery for precision agriculture. In addition, Kamilaris et al. [29] surveyed deep learning approaches for precision agriculture with a focus on the machine learning pipeline, including data preparation, data augmentation, and model evaluation. They also compared the performance of deep learning methods with popular existing techniques for agricultural data analysis, such as image processing, and deep learning provided higher accuracy in most cases. However, they did not analyze the contribution of UAVs to smart agriculture using advanced data analysis techniques such as machine learning and deep learning. A point-wise comparison of this work with existing surveys is listed in Table 1.

    Table 1.  Comparison of this work with existing works.
    Question Maes et al. [25] Tsouros et al. [27] Velusamy et al. [28] Kamilaris et al. [29] This work
    Does the paper review type of UAV sensor used for precision agriculture? Y Y Y N Y
    Does the paper review the various applications of UAV imagery for precision agriculture? Y Y Y N Y
    Does the paper review types of UAV image features used for precision agriculture? Y Y N N Y
    Does the paper review the machine learning methods used for precision agriculture? N Y N Y Y
    Does the paper review deep learning methods used for precision agriculture? N N N Y Y
    Note: 'Y' and 'N' indicate whether or not the work considered the question.


    The main contributions of this work are as follows:

    (a) We summarize and synthesize the recent work on the various types of UAVs, sensors and their uses in precision agriculture using a proposed UAV data processing pipeline. All stages involved in the UAV data processing pipeline such as UAV image processing, feature extraction, model building, and evaluation are discussed in this work.

    (b) We assimilate and categorise the various image features derived from UAV-based remote sensing with machine learning algorithms.

    (c) We analyze and report the performance of machine learning and deep learning methods on UAV image datasets for specific agriculture applications such as yield estimation, disease detection and crop classifications.

    (d) We outline the existing challenges and opportunities in precision agriculture brought by drones. Also, we report the recent trends and future avenues of UAV-based remote sensing for precision agriculture with the aid of bibliometric analysis.

    The rest of the paper is organized as follows. Section 2 explains the step-by-step procedure used to find the research articles included in this survey. Section 3 presents the background of remote sensing and precision agriculture. Section 4 briefly presents the different types of UAVs or drones widely used in precision agriculture. Likewise, Section 5 presents the different sensors used in UAVs for agricultural data acquisition along with their pros and cons. Section 6 elaborates on the offline image preprocessing activities performed after the completion of a UAV flight mission. Section 7 details the various features extracted from UAV images for machine learning model building. Section 8 presents how machine learning and deep learning approaches are applied to precision agriculture with UAV imagery. Section 9 briefly discusses the application of UAVs in crop yield estimation, crop disease detection and crop classification. Furthermore, we discuss the research trends, future research perspectives and avenues, along with challenges and opportunities, in Section 10. Finally, we conclude the paper with future recommendations in Section 11.

    We followed a systematic procedure to identify the articles for review. We first designed a query string using the terms ("Unmanned aerial vehicle" OR "UAV") AND ("Machine Learning" OR "Deep Learning") AND ("Precision agriculture") and performed a database search on three popular databases (Web of Science, Scopus, and Google Scholar), limiting our search to the title, abstract, and keywords of each article, on Jan 10, 2022. With this query string we retrieved 204, 141, and 111 articles from Scopus, Web of Science, and Google Scholar, respectively. Second, we performed an initial screening of the articles from each source for duplicates and peer-review status, excluding preprints and non-peer-reviewed works. Then, we carefully read each article's title, abstract, keywords, and full text and excluded those articles that did not consider either UAV imagery or precision agriculture. After this screening, we were left with 110 articles. Finally, while analyzing and comparing the performance of machine learning and deep learning approaches in various agricultural applications (refer to Section 9), we reviewed an additional 12 articles that show the recent trends and applications of data-driven methods in precision agriculture. Hence, 122 articles were considered for the final review in this study. The detailed pipeline of our survey method is presented in Figure 1.

    Figure 1.  Stepwise procedure to retrieve the articles reviewed in this survey.

    In the following sections, we review the existing papers according to different aspects of the research they present: precision agriculture and remote sensing, UAVs and their types, sensors, UAV image processing and model building, along with their applications in yield prediction, disease detection and crop classification.

    The existing works [30,31,32,33] emphasized the importance of remote sensing techniques for precision agriculture. Here, we discuss the background of remote sensing and precision agriculture while coupling them with UAVs and their application.

    The continuous increase in population poses severe challenges to the agricultural production system in meeting global food demand. Protecting the natural ecosystem is equally important while providing quality food to everyone [34]. Advanced technologies are necessary to make more informed and better decisions to tackle such challenges; precision agriculture (PA), or smart agriculture, helps farmers improve crop yield and assists them in farm management. For instance, using a large amount of in-farm sensor data and analytical techniques, farmers can map effective fertilizer and irrigation applications, thereby saving time and cost [30]. PA can help improve crop productivity and thereby increase crop yield because farmers can provide optimized inputs such as water and agrochemicals, including fertilizer, pesticides and growth regulators, using crop information acquired with advanced sensing technology [31].

    Remote sensing has been a key source of information for precision agriculture. It is a non-destructive way of acquiring information about objects of interest by recording the reflected or emitted energy from targets. It consists of various stages in a pipeline: source of illumination, interaction with the target, recording of reflected energy by sensors, transmission, reception and processing, interpretation, and analysis of images [35]. Various remote sensing platforms such as field-based sensors (fixed and moving vehicles), and airborne sensors (satellite, aircraft) have been largely used in the past decades to effectively manage the farm activities such as pesticide application [8], yield estimation [16], disease detection [36] and irrigation management [37,38]. The frequently used remote sensing platforms for precision agriculture are depicted in Figure 2.

    Figure 2.  Commonly used remote sensing platform for precision agriculture.

    Initially, precision agriculture methods used field-based sensors and satellite and aerial imagery to assess plant status non-destructively [30]. Satellite-based precision agriculture has been employed to estimate agricultural parameters such as yield, plant biomass, and cropland cover at the global or regional level. However, a satellite has its own revisit time and sensor capability, which the farmer cannot control to obtain real-time farm data and images [32]. Aerial imagery, by contrast, is usually acquired with sensors mounted on manned aircraft that fly over a large field at a lower altitude, resulting in higher spatial resolution images than satellite imagery. Both satellite and manned aerial images are prone to cloud and other environmental effects that reduce image quality. Ground-based sensor systems can provide very high spatial resolution imagery; nevertheless, it is time-consuming to move these sensors from place to place to measure in-field variability [39], and they can only cover a limited field area compared to aerial imagery.

    More recently, unmanned aerial vehicles (UAVs), also known as drones, have been developed to fly for a certain time at a specific height. Drones have multiple applications such as aerial photography, shipping and delivery, disaster management, search and rescue, precision agriculture, and many more [33]. In precision agriculture, the latest sensors embedded in a UAV can collect crop field images at high resolution across multiple visible and non-visible light spectra. These images can be further analysed using image analysis methods to extract insightful information such as variability in crop stress, including biotic (disease and pests) and abiotic (water and nutrient deficiency, etc.) stress [23,40], water stress [41], and fertilizer deficiency [42]. In such a situation, the farmer can precisely apply fertilizer or pesticides to a specific plant or area rather than to the whole field. Since drones offer flexible flights and can acquire very high-resolution images of the crop field compared to satellite imagery, they have great potential to provide a bird's-eye view of the agricultural field. This has opened a new horizon of data-driven intelligent farming or smart farming [43], which can be complemented by the Internet of Things (IoT) and machine learning methods, as demonstrated for the prediction of apple disease by Akther et al. [44].

    A UAV (unmanned aerial vehicle) or drone is a type of remotely piloted aerial vehicle without a human operator on board. Initially, UAVs were developed for missions classified as dull, dirty, or dangerous. Their broad and diverse applications have led to the development of different types of UAVs [45], ranging from very small or nano-UAVs (e.g., 2 kg or less in weight) used for commercial applications to large UAVs (e.g., more than 150 kg) used for military surveillance [46]. Walking through the existing literature [14,20,47,48], the research works can be differentiated by the type of UAV used for data acquisition. Here, we briefly discuss the different types of UAVs, focusing mainly on their application in precision agriculture.

    UAVs can be classified according to three criteria: a) wings or rotors, b) size or weight and c) altitude or range. Existing works have classified UAVs on their own bases, and there is no standard classification; here, we report a classification based on their application to agriculture. A high-level taxonomy of UAV types is presented in Figure 3.

    Figure 3.  Classification of UAV based on size, wings, and range.

    Based on their wing and rotor design characteristics, UAVs can be grouped into five categories: a) fixed-wing, b) rotary-wing, c) flapping-wing, d) hybrid-wing and e) parafoil-wing [27]. A fixed-wing UAV resembles the design of an aeroplane, flies at high speed, covers a large area and carries a heavier payload. However, fixed-wing UAVs need a large space or runway for takeoff, which limits their application in small agricultural fields. A rotary-wing UAV, in contrast, resembles a helicopter design and can take off and land vertically. This kind of UAV is further distinguished as either single-rotor or multi-rotor. A single-rotor UAV has one main and one tail rotor, whereas a multi-rotor UAV comes with three or more rotors and is known as a tri-copter (3 rotors), quadcopter (4 rotors), hexacopter (6 rotors) or octocopter (8 rotors). Because of their vertical takeoff and landing (VTOL) capability, good camera control and ease of use, multi-rotor UAVs are the most widely used in precision agriculture [28]. The other UAVs in this category, flapping-wing, hybrid-wing and parafoil-wing, are rarely used in precision agriculture applications [27].

    Similarly, there are five types of UAVs based on their size and weight: micro (250 g or less), very small (250.1 g to 2 kg), small (2.01 kg to 25 kg), medium (25.01 kg to 150 kg) and large (more than 150 kg) (note that this is based on the Australian standard and the weight ranges may differ elsewhere)1. Among these, small and medium-size drones are widely used in precision agriculture (refer to Table 2).

    1 https://www.casa.gov.au/drones/drone-rules/drone-safety-rules/types-drones (accessed date 05/02/2022)

    Table 2.  Various sensors used in UAVs for precision agriculture applications along with important parameters*.
    Ref. Crop Type of UAV Sensor Height Application
    [14] Bean & Spinach Multi-rotor RGB 20 m Weed detection
    [16] Corn Multi-rotor RGB 50 m Yield estimation
    [47] Maize Multi-rotor (Quadcopter) RGB & NIR 100 m Vigour and yield estimation
    [17] Rice Multi-rotor (Octocopter) MS (6-band) 100 m Yield estimation
    [50] Rice & Wheat Multi-rotor MS (4-band) 30 m Yield estimation
    [36] Peanut Multi-rotor MS (4-band) 20 m Disease detection
    [51] Wheat Multi-rotor HS 30 m Disease detection
    [19] Maize Multi-rotor (Octocopter) RGB 50 m Plant counting
    [52] Coffee & Corn Multi-rotor SAR 120 m Growth estimation
    [53] Grapes Multi-rotor Thermal 70 m Water stress estimation
    [54] Maize Fixed wing MS (4-band) - Yield estimation
    [55] Rice Multi-rotor (Octocopter) MS - Yield estimation
    [48] Maize Fixed wing MS 150 m Yield and stress detection
    [56] Vine Multi-rotor MS 50 m Yield estimation
    [18] Soybean Multi-rotor (Octocopter) RGB & NIR 95 m Maturity estimation
    [57] Sugarcane Multi-rotor RGB 50 m Yield estimation
    [58] Sorghum Fixed wing MS (3 bands) 62 m Stress assessment
    [20] Barely Fixed wing MS - Disease detection
    [59] Corn & Barley Fixed wing MS & Thermal - Crop monitoring
    [60] Soybean Multi-rotor (octocopter) Thermal 125 m Water status assessment
    [61] Wheat & barley Multi-rotor HS - Biomass and nitrogen estimation
    [62] Corn Multi-rotor RGB 10 m Plant counting
    *Note that Height denotes the flight height of the UAV mission, and RGB, NIR, SAR, MS and HS denote the red-green-blue, near-infrared, synthetic aperture radar, multispectral and hyperspectral sensors respectively.


    The altitude and range that a drone can cover are also critical in precision agriculture because they determine the field size that the drone can monitor at a time. A low-altitude drone that flies below 600 m and has a short range of about 2 km is known as a "hand-held" UAV. Somewhat higher altitudes of up to 1500 m and ranges of less than 10 km are covered by a "close-range" UAV. The large drones specially designed to cover a wide range and fly at high altitudes are tactical (<5500 m altitude and 160 km range), medium-altitude long-endurance (MALE, <9100 m altitude and <200 km range), high-altitude long-endurance (HALE, >9100 m altitude) and hypersonic (>15200 m altitude and >200 km range) UAVs [49]. The flight height for drone missions is regulated by government agencies in many countries. For instance, drones are not allowed to fly higher than 120 m above ground level for recreational activity in Australia, according to the Civil Aviation Safety Authority (CASA)2.

    2 https://www.casa.gov.au/drones/drone-rules/drone-safety-rules (accessed date: 01/05/2022)

    Understandably, UAVs that are lightweight, multi-rotor in design, and of low to medium range have great potential for use in precision agriculture. They can acquire high-quality data with high throughput, which can be used to create crop models such as a canopy height model from a structure-from-motion (SfM) generated point cloud. They can also capture multi-angular data and operate multiple sensors at the same time, thereby capturing crop information at multiple scales, which makes drones useful for advanced data modelling methods such as multi-modal data fusion [21]. Though UAVs have many advantages over traditional remote sensing, a few limitations and technical issues may arise during their use. Technical issues such as engine power, payload capacity, takeoff and landing, short flight duration, maintaining aircraft stability at different altitudes, engine failure, and regulatory measures are critical. The regulatory criteria set by governments and other agencies determine their legal uses and other flight parameters such as flight height and safety measures, which may limit the experiments that a researcher would like to undertake [12]. In addition, UAV image processing and model building techniques such as machine learning and deep learning, which require highly technical skills, may present a steep initial learning curve for precision agriculture researchers and farmers.

    In remote sensing, the common information carrier is the electromagnetic (EM) spectrum [35]. It is a form of energy characterized by wavelength and frequency, divided into regions ranging from shorter to longer wavelengths, as depicted in Figure 4. Since different materials interact with specific wavelength ranges in this spectrum, only a few sections of the spectrum are practically useful for remote sensing applications. For instance, the visible spectrum (the only portion of the spectrum the human eye can perceive) is mostly used in photogrammetry applications [63]. The infrared and thermal portions of the spectrum are reflected or emitted in the form of heat and are used in agricultural applications. The ultraviolet portion of the spectrum is used in the analysis of some rocks and minerals [64].

    Figure 4.  The wavelength range of electromagnetic spectrum for various sensors.

    There are two types of sensors based on their light source. Passive sensors do not have their own source of light and rely on reflected external illumination to capture information, while active sensors have their own source of light and capture the portion of it reflected back from the target. Passive sensors include low-cost RGB sensors, multispectral sensors, hyperspectral sensors, and thermal sensors, while RADAR and LiDAR are examples of active sensors [12].

    RGB sensors capture the spectrum in the visible wavelength range. These are relatively low-cost, easy-to-use sensors. As they provide high-resolution, low-cost images, they have been widely exploited in existing works (refer to Table 2) to address various agricultural applications such as weed detection [14], yield estimation [16], and plant counting [19]. Since they only provide information in the visible wavelengths, which is not sufficient to acquire some crop health-related information such as pathophysiological change after maturity [18], RGB sensors are complemented with other sensors such as near-infrared (NIR) and multispectral sensors. For example, Yu et al. [18] proposed soybean maturity estimation with RGB and NIR images, where individual soybean plots were classified as mature or not mature by associating the spectral information of each plot with the binary variable (mature or not mature).

    Multispectral sensors capture more than one EM spectrum band: typically red, green, blue, and near-infrared (NIR). They are widely used in precision agriculture because the NIR band is strongly reflected by green vegetation, which is useful for differentiating vegetation signatures such as health status and chlorophyll content [65]. Crop information such as canopy cover, plant density and vegetation indices derived from UAV-based RGB and multispectral images was used by Garcia et al. [66] for corn yield estimation with an artificial neural network (ANN). They explored not only various multispectral vegetation indices but also included RGB-based crop information such as canopy cover and plant density to improve the performance of the neural network for corn yield estimation.

    Besides low-cost sensors such as RGB and multispectral, several studies employed hyperspectral sensors on UAVs for precision agriculture [61,67]. The number of spectral bands and the width of the energy spectrum measured by the sensor distinguish hyperspectral from multispectral sensors. Generally, multispectral sensors measure the energy spectrum in wider bands with a small number of channels (roughly 5 to 12), while hyperspectral sensors consist of hundreds or thousands of narrower bands. This allows hyperspectral sensors to capture fine-grained information (in both spatial and spectral ranges) about the crop in each narrow band, information which might be missed by the wider bands of multispectral sensors. Biomass and nitrogen estimation for wheat and barley crops was reported in [61] using UAV-based hyperspectral remote sensing. The authors carried out preprocessing such as laboratory calibration, spectral correction and pixel transformation, followed by feature extraction, which included NDVI and linear and non-linear mixing. These features were used for biomass and nitrogen estimation with the k-nearest neighbour method. An early wilt disease detection method for olive trees was reported by Calderon et al. [68] with UAV imagery. The study pointed out the effectiveness of multiple features such as crown temperature, hyperspectral indices, and structural indices (NDVI) in estimating physiological stress and damage caused by Verticillium wilt (VW).

    Compared to the RGB and multispectral sensors used in several studies (refer to Table 2), hyperspectral sensors appear to be used for fewer applications, probably because of their high cost and complex data processing requirements [69]. The high cost of high-resolution spectroscopy might not be justified for agricultural tasks such as crop counting and yield estimation, where multispectral or even RGB sensors are good enough. The complexity of data acquisition and processing involved in hyperspectral imaging requires special training, which is another limitation preventing its widespread use in agriculture. However, researchers have already begun to develop low-cost hyperspectral sensors and simpler data processing tools, which should bring the benefits of hyperspectral imaging to precision agriculture in the near future [70].

    Thermal sensors capture the radiation in the range of 1 µm to 14 µm emitted from the surface of an object and convert it into temperature. They are able to detect increases in leaf temperature over wider crop areas than local measurements, which makes them a better fit for monitoring crop water status and other stress due to excessive temperature. Thermal remote sensing has been used to monitor crop water status [53,71,72,73], crop vegetation [59] and so on. Most of the works on water stress estimation convert the radiant temperature measured by a thermal camera with linear regression and then calculate a crop water stress index (CWSI) [74], as sketched below. Matese et al. [53] investigated the estimation of water stress in grape vineyards with a CWSI derived from thermal UAV images acquired in the spectral range 7.5–13 µm. Similarly, sugar beet water stress monitoring was proposed in [72] by utilizing data from thermal sensors along with a low-cost infrared thermometer. Another work by Zhang et al. [73] compared maize water stress using RGB and thermal images at the farm scale; their study demonstrated that the combined use of high-resolution RGB and thermal images provides a more accurate canopy temperature (Tc) for maize. Soybean water stress under different irrigation conditions was analyzed by Crusiol et al. [60]. Naturally, the use of UAV-based thermal remote sensing is increasing these days because of greater automation in UAV flight management, cost-effectiveness, and the availability of data processing tools. However, more research and development is needed on the fusion of data collected from multiple sources such as thermal sensors, RGB sensors, and weather stations. Likewise, standard calibration of the raw images acquired with thermal sensors to reduce atmospheric and climatic effects is essential.
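    The CWSI calculation itself is simple once reference temperatures are available. The following minimal sketch, whose function and variable names (canopy_temp, t_wet, t_dry) are illustrative rather than taken from the cited studies, computes a CWSI map from a calibrated thermal image:

        import numpy as np

        def crop_water_stress_index(canopy_temp, t_wet, t_dry):
            """Simple CWSI map: 0 = well-watered canopy, 1 = fully stressed.

            canopy_temp : 2-D array of canopy temperatures (deg C) from a thermal sensor
            t_wet, t_dry : reference temperatures of a fully transpiring and a
                           non-transpiring canopy (deg C), e.g., from wet/dry references
            """
            cwsi = (canopy_temp - t_wet) / (t_dry - t_wet)
            return np.clip(cwsi, 0.0, 1.0)  # keep values in the physically meaningful range

        # Example with invented temperatures
        tc = np.array([[24.5, 27.0], [31.2, 29.8]])
        print(crop_water_stress_index(tc, t_wet=22.0, t_dry=34.0))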

    Apart from the passive sensors discussed above, a few works employ active sensors such as SAR (synthetic aperture radar) and LiDAR (light detection and ranging) for specific applications in precision agriculture. Crop growth deficit monitoring with differential synthetic aperture radar interferometry (DInSAR) operated in three bands (P, L and C) was developed by Ore et al. [52] for three crops: coffee, sugarcane, and corn. They used two interferometric C-band antennas to calculate the digital surface model (DSM), which was further used in the DInSAR calculation. This provided a height accuracy better than 5 cm at 1 m spatial resolution, showing the potential of DInSAR as a complementary tool for providing crop growth information for precision agriculture tasks such as yield estimation and plant density estimation. A study on UAV-based LiDAR data to analyze the growth of maize height was proposed in [75], where LiDAR data were employed to generate a canopy height model (CHM). The UAV-measured maize height was found to be highly correlated with ground truth, which confirms the effectiveness of UAV-based LiDAR for plant height estimation and crop lodging monitoring.

    The hierarchical taxonomy of the sensors used in UAVs is presented in Figure 5. Also, the use of various sensors in precision agriculture along with UAV parameters is reported in Table 2.

    Figure 5.  Sensors used in UAV-based remote sensing platform for precision agriculture.

    After the pre-flight preparation for UAV data acquisition, such as the selection of the drone, sensor, and flight mission (location, timing, equipment, etc.), a UAV flight returns a large amount of raw data. However, the raw data are not yet suitable for extracting information and reaching conclusions, because UAV platforms are rarely designed for on-the-fly data processing. Thus, necessary image correction activities such as atmospheric, radiometric, and geometric corrections are performed as post-flight UAV image processing. In addition, since a single image cannot cover the entire field of interest in most cases, it is necessary to capture several overlapping images, which are later stitched together to form a single orthomosaic. To perform such image stitching, a scale-invariant feature transform (SIFT) algorithm is generally used [76]. It consists of mainly three steps: image pre-processing, image registration (feature extraction, feature matching and transformation) and image fusion. After these steps, a single mosaic image is obtained per flight, which is further rectified for geo-location corrections using ground control points (GCPs) to obtain an ortho-rectified map.
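    To make the registration step concrete, the sketch below aligns two overlapping images with SIFT keypoints using OpenCV. It is a simplified pairwise example rather than the full mosaicking workflow of commercial packages, and the function name stitch_pair is illustrative:

        import cv2
        import numpy as np

        def stitch_pair(img1, img2):
            """Register img2 onto img1 using SIFT keypoints (core of mosaic building)."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)

            # Match descriptors and keep good matches via Lowe's ratio test
            matcher = cv2.BFMatcher()
            pairs = matcher.knnMatch(des1, des2, k=2)
            good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

            # Estimate the homography that maps img2 into img1's frame
            src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            # Warp img2 onto a canvas large enough to hold both images (simple fusion)
            h, w = img1.shape[:2]
            canvas = cv2.warpPerspective(img2, H, (w * 2, h))
            canvas[0:h, 0:w] = img1
            return canvas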

    Since the electromagnetic energy reflected from the earth's surface is affected by various atmospheric processes such as gaseous absorption and aerosol scattering and absorption, accurate surface reflectance can only be measured after correcting for them. The amount of such noise is negligible in low-altitude remote sensing with UAVs. However, radiometric calibration is essential to standardize the relationship between incoming radiation and sensor output taken at different times or locations [77]. UAV-based remote sensing uses empirical correction, colour balancing, and irradiance normalization. There are two commonly used approaches to radiometric calibration: a) ground measurements at the time of data acquisition and b) a radiometric calibration target, where the reflectance measurements of a white calibration panel are used for image calibration. Similarly, geometric correction of field images is required due to variation in sensor positions, platform rotation, terrain effects, lens distortion, etc. These pre-processing options are generally available in commercial data processing packages (Agisoft PhotoScan® and Pix4Dmapper®). For instance, Ji et al. [78] used structure-from-motion (SfM) based software (Pix4Dmapper) to generate final data products such as a digital surface model (DSM), digital terrain model (DTM) and orthomosaic (reflectance map). A summary of UAV data processing software along with applications in agriculture is listed in Table 3.

    Table 3.  Recent studies that use various software packages to generate UAV mosaic images for crop monitoring+.
    Ref. Crop Application Software Summary
    [11] Grape variability assessment Agisoft PhotoScan® • A multispectral orthomosaic was generated from more than 1000 aerial images; the flight was kept at 35 m above the ground, resulting in 5 cm GSD images.
    [79] Tea water stress assessment Pix4D mapper® • A thermal camera was used to collect thermal images with 12 GCPs, and the drone mission was kept at 60 m above the ground.
    [80] Maize Biomass estimation Agisoft PhotoScan® • Aerial images were captured with a 16 MP RGB camera having 80% forward and 60% side overlap.
    • Flights' heights were maintained at 65 m and 120 m above the ground surface.
    [55] Rice Yield estimation Agisoft PhotoScan® • Image mosaicking for both RGB and multispectral images was performed and saved into TIFF.
    • The reflectance correction was carried out with five calibration targets measured at 0.5 m height before each flight by a handheld spectrometer.
    [16] Corn Yield estimation Agisoft PhotoScan® • Image alignment, mosaicking and Geo-referencing were performed with SfM algorithms and crop height was derived by subtracting DTM from DEM.
    [78] Bean height and yield estimation Pix4D mapper® • Final data products such as DSM, DTM and Reflectance maps were generated after image stitching and calibration.
    [66] Corn Yield estimation Pix4D mapper® • UAV images acquired with 80% overlap and 80% side-lap were used to generate orthomosaic using structure-from-motion (SfM).
    +Note that DTM, DSM, GCP and GSD denote the digital terrain model, digital surface model, ground control point and ground sample distance, respectively.
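    As an illustration of the panel-based radiometric calibration described above, the following sketch converts raw digital numbers to reflectance with a single-point empirical line; the panel values and variable names are invented for the example:

        import numpy as np

        def empirical_line_calibration(band_dn, panel_dn, panel_reflectance):
            """Convert raw digital numbers (DN) to surface reflectance using a
            calibration panel of known reflectance (single-point empirical line
            through the origin).

            band_dn           : 2-D array of DNs for one spectral band
            panel_dn          : mean DN measured over the calibration panel
            panel_reflectance : the panel's known reflectance for this band (e.g., 0.5)
            """
            gain = panel_reflectance / panel_dn
            reflectance = band_dn * gain
            return np.clip(reflectance, 0.0, 1.0)

        # Illustrative values only
        nir_dn = np.array([[9000, 14000], [22000, 30000]], dtype=float)
        print(empirical_line_calibration(nir_dn, panel_dn=28000.0, panel_reflectance=0.5))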


    Many photogrammetry and UAV vendors provide cloud services for easy and low-cost UAV image processing. However, these services have specific limitations such as storage capacity, upload and download bandwidth, and a limited set of output data products. Therefore, most works prefer offline UAV image pre-processing on a local computer. Generally, the images acquired during a planned UAV mission are downloaded to a high-performance computer for further processing. While capturing images, a few important parameters need to be managed, such as flight height, image overlap percentage and the field of interest, because these parameters determine the output image quality; a rough sketch of how flight height relates to image resolution is given below. The onboard navigation system and flight planner can help manage the flight mission in auto-pilot mode [79].
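    The standard ground sample distance relation gives a rough guide to how flight height trades off against image resolution; the sensor values in this sketch are illustrative, not tied to any specific camera:

        def ground_sample_distance(flight_height_m, focal_length_mm, pixel_size_um, unit="cm"):
            """Approximate ground sample distance (GSD) of a nadir image:
            GSD = flight height * sensor pixel size / focal length.
            Parameter names and values here are illustrative."""
            gsd_m = flight_height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)
            return gsd_m * 100 if unit == "cm" else gsd_m

        # e.g., 50 m flight height, 8.8 mm focal length, 2.4 um pixels -> GSD in cm/pixel
        print(round(ground_sample_distance(50, 8.8, 2.4), 2))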

    Once data are collected by different sensing platforms such as airborne platforms (satellite, aircraft, or UAV) or field-based sensors (field spectroscopy or IoT devices), they need to be further processed to obtain useful information. Specific applications such as forestry, agriculture, or the environment need specific data analysis techniques. Therefore, feature extraction is an essential step in building crop models with data-driven methods such as machine learning and deep learning. The type of features also depends on the sensors used for image acquisition, as different sensors capture field images in different parts of the spectrum, which ultimately leads to a variety of crop features that can be extracted from such images. This section assimilates and examines the various crop features extracted from UAV imagery, mainly targeting their use for crop model building and training.

    Spectral features are extracted from the reflectance measured by sensors over various ranges of the EM spectrum. The light reflected by an object's surface depends on its material, such as soil, rock, water, or crops. For instance, water largely absorbs the EM spectrum in the near- and mid-infrared wavelength ranges, whereas soil is more reflective in the mid-infrared range and green vegetation is highly reflective in the near-infrared range. Therefore, the reflectance and absorption of electromagnetic radiation by earth objects are differentiable [81]. Utilizing such differences in how objects reflect the light spectrum, spectral indices are derived by algebraic manipulation of individual spectral bands. A spectral index that quantifies crop vegetation properties such as crop biomass, vigour and stress in a remote sensing image is commonly known as a vegetation index (VI). It results from pixel-level information calculated using various operations on different spectral bands of images. The vegetation indices used in precision agriculture are mainly derived from the a) visible light, b) near-infrared and c) mid-infrared spectrum (refer to Figure 5). Researchers have proposed various formulas to calculate vegetation indices to extract useful information, such as biomass and plant health estimates, for different applications and complex environment characteristics [7]. Therefore, it is necessary to include different band information in the VI calculation for different applications. However, the calculation of a VI is affected by the data acquisition platform, sensors, and other factors such as noise. Based on sensors, VIs can be classified as a) RGB-based VIs, b) multispectral VIs, and c) hyperspectral VIs [67].

    RGB-based VIs use the three visible bands (RGB) to construct vegetation indices and are mostly used for high-resolution image applications such as plant counting, plant density estimation, canopy coverage and so on. Multispectral VIs extend these further to include the near-infrared band in addition to the visible light spectrum. They are mostly used in plant health monitoring based on the red and near-infrared bands. The most widely used multispectral vegetation index is the Normalized Difference Vegetation Index (NDVI), calculated from the near-infrared (NIR) and red bands, because healthy vegetation reflects more NIR light and less visible light as chlorophyll absorbs red light and reflects NIR [50]. The higher the NDVI value, the healthier the vegetation [82]. Zerbato et al. [82] investigated the applicability of NDVI measured with terrestrial sensors to estimate peanut crop yield in a randomized block design. In this study, NDVI was found to be highly correlated with crop productivity, vegetation cover, and plant density. Similarly, NDVI generated from multispectral UAV images was found to be highly effective in predicting yield and detecting fertilizer application levels in rice and wheat field experiments [50].

    Several other vegetation indices, such as the Normalized Difference Red Edge (NDRE), Green Normalized Difference Vegetation Index (GNDVI), and Ratio Vegetation Index (RVI), are also based on near-infrared reflectance along with other reflectance bands such as red-edge, green, and red, respectively [55,66]. These indices are derived from the NDVI formula by replacing the red band with the red-edge band in NDRE and the green band in GNDVI; a sketch of these calculations is given below. A few vegetation indices based on parts of the light spectrum other than NIR, such as the Green Vegetation Index (GVI) and Red Edge Ratio (RER) index, have also been used in precision agriculture for specific applications [40]. As an example, Figure 6(b) shows the NDVI map of a peanut crop at a late growth stage, which reveals the spatial variability in the pixel distribution of healthy peanut plants (green pixels) vs stressed peanut plants (yellow pixels). NDVI has been used to estimate various crop parameters such as vegetation cover, plant density, crop biomass, crop yield, and fertilizer application. Note that multispectral images are calibrated with a surface reflectance panel, and their pixel values range between 0 and 1; digital (RGB) images do not go through such calibration steps, so normalization of pixel values must be carried out before computing vegetation indices.
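    The band arithmetic behind these indices is straightforward. The sketch below computes NDVI, NDRE and GNDVI from calibrated reflectance arrays; the small epsilon in the denominator is only there to avoid division by zero in masked or dark pixels:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
            return (nir - red) / (nir + red + 1e-10)

        def ndre(nir, red_edge):
            """Normalized Difference Red Edge: red band replaced by the red-edge band."""
            return (nir - red_edge) / (nir + red_edge + 1e-10)

        def gndvi(nir, green):
            """Green NDVI: red band replaced by the green band."""
            return (nir - green) / (nir + green + 1e-10)

        # Example with small calibrated reflectance arrays (values in [0, 1])
        nir = np.array([[0.55, 0.60], [0.20, 0.48]])
        red = np.array([[0.08, 0.06], [0.18, 0.10]])
        print(ndvi(nir, red))  # healthy vegetation tends toward values close to 1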

    Figure 6.  The peanut field images represented in (a) RGB and (b) NDVI with individual plots divided using shapefile overlayed on the respective images.

    Structural information about crops, such as crop height, density, volume and coverage, is highly correlated with crop traits such as biomass and yield [83]. Crop height can be measured using either LiDAR or photogrammetry techniques. Since LiDAR can obtain three-dimensional (3D) information about the target at multiple vertical layers, it can be used to derive crop height, as suggested by Zhou et al. [75]. The difference between the layer at the top of the crop (digital surface model) and the layer at the bottom of the crop (digital terrain model) is used to estimate plant height, and is commonly known as the canopy height model. Similarly, the photogrammetry technique discussed in [21] used an RGB-derived point cloud to build the digital terrain model (DTM) and digital surface model (DSM). The digital terrain model is a point cloud orthomosaic of the field taken before crop planting, whereas the digital surface model is the point cloud model after crop planting. By subtracting the DTM from the DSM, a plant height is derived for each pixel of the orthomosaic, as sketched below. However, since the canopy top might not be uniform everywhere, a mean height per plot is typically used for height estimation. The accuracy of these methods depends on UAV flight parameters such as altitude, speed and the location of flight lines, crop characteristics (crop consistency) and field terrain. A combination of height estimated from LiDAR data with vegetation indices was claimed to be more accurate in estimating pasture sward height and above-ground mass at a very fine spatial scale using a UAV [84].
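    A minimal sketch of this DSM/DTM differencing, assuming the two rasters are already co-registered and loaded as NumPy arrays, is shown below; function names are illustrative:

        import numpy as np

        def canopy_height_model(dsm, dtm):
            """Canopy height model (CHM) as the per-pixel difference between the
            digital surface model (after planting) and the digital terrain model
            (bare ground); negative values are treated as noise and set to zero."""
            chm = dsm - dtm
            return np.where(chm > 0, chm, 0.0)

        def plot_mean_height(chm, plot_mask):
            """Mean canopy height over one plot, given a boolean mask of plot pixels."""
            return float(chm[plot_mask].mean())

        # Tiny synthetic example (metres)
        dsm = np.array([[101.2, 101.5], [100.9, 101.8]])
        dtm = np.array([[100.0, 100.1], [100.0, 100.2]])
        chm = canopy_height_model(dsm, dtm)
        print(chm, plot_mean_height(chm, chm > 0))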

    Canopy coverage is derived as the ratio of vegetation pixels to total pixels per plot (field of interest). The vegetation area is discriminated from the non-vegetation area using either threshold criteria [16] or machine learning approaches [55]. The threshold-based approach finds an appropriate vegetation index (VI) value so that pixel values below/above that threshold are classified as non-crop vs crop pixels. For example, the excess green (ExG) index with a threshold value (r = 51) was used in [16] to classify crop vs non-crop pixels, where pixels with values below the threshold were taken as non-crop pixels and pixels above the threshold were classified as crop pixels; a minimal sketch of this approach follows this paragraph. Machine learning approaches instead segment pixels into background vs crop by training a learning model. For instance, a support vector machine (SVM) classifier was trained to classify RGB image pixels into either background or vegetation pixels using colour features; a total of 5000 pixels were used to train the SVM model, which then classified each pixel as background or vegetation, and canopy coverage was derived as the ratio of the number of vegetation pixels to the total pixels in a plot [55]. The canopy coverage is further used to calculate plant density, as discussed in [66], where the authors counted the number of plants present in the plant coverage area to derive the plant density (plants m−2).
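    A minimal sketch of the threshold-based approach on an 8-bit RGB plot image is given below; the threshold of 51 follows the value quoted from [16], but in practice it must be tuned per dataset:

        import numpy as np

        def canopy_cover_from_rgb(rgb_plot, threshold=51):
            """Canopy cover of one plot as the fraction of pixels whose excess-green
            index ExG = 2G - R - B exceeds a threshold (threshold is illustrative)."""
            r = rgb_plot[..., 0].astype(float)
            g = rgb_plot[..., 1].astype(float)
            b = rgb_plot[..., 2].astype(float)
            exg = 2 * g - r - b
            vegetation = exg > threshold
            return vegetation.sum() / vegetation.size

        # Example with a random 8-bit RGB plot image
        rng = np.random.default_rng(0)
        plot_img = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
        print(canopy_cover_from_rgb(plot_img))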

    Image texture describes the spatial arrangement of pixel intensities in an image. Various statistical approaches exist to measure image texture, such as edge detection and co-occurrence matrices. The grey level co-occurrence matrix (GLCM) is the most widely used texture feature in existing works. It was first proposed by Haralick et al. [85], who showed that the spatial distribution of pixels at a certain offset (d) and angle (θ) can be measured with such a co-occurrence matrix for a grey-scale image. They extracted 14 texture measures from the GLCM, such as mean (ME), variance (VA), homogeneity (HO), contrast (CO), dissimilarity (DI), entropy (EN), second moment (SE), correlation (CO) and so on, which are commonly known as Haralick features. They have been shown to be effective for image fusion, change detection and image classification tasks. For example, Guo et al. [86] combined GLCM texture features with NDVI to identify the tasseling date of summer maize. They considered four GLCM texture measures (contrast, correlation, energy, and homogeneity), of which the contrast feature performed better than the others. Similarly, Bah et al. [14] used six Haralick features (autocorrelation, contrast, correlation, dissimilarity, energy, and entropy) for weed detection. They reported the highest accuracy of 96.99% for weed detection in a spinach field using Haralick features along with other RGB image features such as colour, histogram of oriented gradients (HOG) and Gabor features. The above discussion shows that texture features can supplement other image features, such as spectral and structural features, for crop trait estimation; a small sketch of GLCM feature extraction is given below. An illustration of three Haralick features, namely dissimilarity, contrast and homogeneity, derived from RGB peanut images acquired with a drone is shown in Figure 7.
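    The sketch below extracts a few of these GLCM measures with scikit-image (version ≥ 0.19, where the functions are spelled graycomatrix/graycoprops); it is an illustration, not the exact feature set of the cited studies:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(gray_plot, distances=(1,), angles=(0, np.pi / 2)):
            """A few Haralick-style texture measures from a grey-level co-occurrence
            matrix of one 8-bit grey-scale plot image, averaged over offsets/angles."""
            glcm = graycomatrix(gray_plot, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            return {prop: float(graycoprops(glcm, prop).mean())
                    for prop in ("contrast", "dissimilarity", "homogeneity",
                                 "energy", "correlation")}

        # Example on a random 8-bit grey-scale patch
        rng = np.random.default_rng(0)
        patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        print(glcm_features(patch))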

    Figure 7.  Visualization of three texture features derived from (a) RGB image of peanut field acquired with UAV (b) dissimilarity (c) homogeneity (d) contrast.
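A small example of deriving GLCM (Haralick) texture measures is sketched below, assuming scikit-image is available (the functions are spelled greycomatrix/greycoprops in older releases); the grey-scale patch is synthetic and stands in for a UAV image tile.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older scikit-image

# Synthetic 8-bit grey-scale patch standing in for a UAV image tile.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# Co-occurrence matrix at offset d = 1 pixel and angles 0, 45, 90, 135 degrees
# (symmetric, normalised); properties are averaged over the four angles below.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "dissimilarity", "homogeneity", "correlation", "energy"):
    value = graycoprops(glcm, prop).mean()   # mean over the four angles
    print(f"{prop:>13s}: {value:.4f}")
```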

Once the preprocessing and feature extraction steps are completed, a crop model addressing the specific application needs to be developed. For instance, a linear regression model for yield estimation, with the vegetation index (VI) as the independent variable and yield as the dependent variable, was developed by Guan et al. [50]. The data analysis can be performed either with traditional statistical methods such as correlation and regression analysis or with learning models such as machine learning and deep learning [21]. Since learning models are data-driven, a suitable set of input (independent) and output (dependent) variables to train and test the model is important. Once the input and output variables are established, the next steps are the selection of a specific machine learning algorithm, hyper-parameter tuning and model evaluation [15]. In this section, we review the various machine learning and deep learning models that have been developed for precision agriculture applications such as yield estimation, disease detection and crop classification.
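A minimal sketch of such a regression, assuming per-plot mean NDVI values and ground-truth yields are already extracted, is shown below with scikit-learn; the numbers are synthetic placeholders rather than data from [50].

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-plot data: mean NDVI (input feature) and measured yield in t/ha (target).
ndvi = np.array([[0.55], [0.61], [0.68], [0.72], [0.80], [0.83]])
yield_t_ha = np.array([3.1, 3.6, 4.2, 4.5, 5.3, 5.6])

model = LinearRegression().fit(ndvi, yield_t_ha)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
print(f"predicted yield at NDVI 0.75: {model.predict([[0.75]])[0]:.2f} t/ha")
```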

While synthesizing the review for model evaluation, we classified the learning models into two categories based on the output variable: a) regression tasks for continuous output variables and b) classification tasks for discrete or categorical output variables. To evaluate regression models, the widely used evaluation metrics are the coefficient of determination (R2), root mean square error (RMSE) and mean absolute error (MAE). The coefficient of determination measures how much of the output variability the model can explain, while the other two metrics quantify how far the model output deviates from the actual output [87]. The classification models are evaluated on the basis of F-score and accuracy [88]. These metrics measure the model's prediction ability in comparison to the actual output.
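The following sketch shows how these metrics can be computed with scikit-learn for a regression task and a binary classification task; the prediction and ground-truth arrays are purely illustrative.

```python
import numpy as np
from sklearn.metrics import (r2_score, mean_squared_error, mean_absolute_error,
                             f1_score, accuracy_score)

# Regression example: predicted vs observed yield (t/ha).
y_true = np.array([3.1, 3.6, 4.2, 4.5, 5.3])
y_pred = np.array([3.0, 3.8, 4.0, 4.7, 5.1])
rmse = mean_squared_error(y_true, y_pred) ** 0.5
print(f"R2={r2_score(y_true, y_pred):.3f}, RMSE={rmse:.3f}, "
      f"MAE={mean_absolute_error(y_true, y_pred):.3f}")

# Classification example: healthy (0) vs diseased (1) plots or pixels.
c_true = np.array([0, 0, 1, 1, 1, 0, 1])
c_pred = np.array([0, 1, 1, 1, 0, 0, 1])
print(f"accuracy={accuracy_score(c_true, c_pred):.3f}, "
      f"F1={f1_score(c_true, c_pred):.3f}")
```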

Machine learning models have been investigated for various data modelling purposes such as image recognition [89], text classification [90] and stock market prediction [87] because of their ability to find patterns associating input and output data [54]. Since a precision agriculture system attempts to extract information from agricultural data to help farmers make better farm management decisions in time and space [91], researchers have analyzed and investigated machine learning methods to extract specific patterns from agricultural data. The typical pipeline of a machine learning approach for UAV imagery-based precision agriculture is shown in Figure 8. It comprises mainly three steps: image processing, feature extraction and model building. Because machine learning methods are data-driven and their performance always depends on the given input, image pre-processing is an essential step in this pipeline. The pre-processing of UAV images includes image stitching, image calibration, geo-referencing and orthomosaic generation (refer to Section 6). Once the reflectance map or orthophoto is generated for a particular flight mission (covering the agricultural field), the field of interest (FoI) is extracted, generally in three steps. First, the orthomosaic is uploaded to a geographic information system (GIS) such as QGIS [66] or ArcGIS [47] to extract the area of interest by specifying the coordinates of the field boundary. Then, the extracted map is cropped and rotated to align the crop plots. Finally, a shape file is built to separate the individual plots on the map; a code sketch of this plot extraction step is given after Figure 8. A sample plot division is shown in Figure 6, where the peanut field orthomosaic is divided into individual plots using a shape file represented as rectangles.

    Figure 8.  The machine learning-based crop models pipeline using UAV imagery.
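As an illustration of the plot extraction step, the sketch below clips an orthomosaic to individual plot polygons, assuming rasterio and geopandas are available; the file names ("field_orthomosaic.tif", "plot_boundaries.shp") are hypothetical placeholders.

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Hypothetical inputs: a georeferenced orthomosaic and a shapefile of plot polygons.
ortho_path = "field_orthomosaic.tif"
plots_path = "plot_boundaries.shp"

plots = gpd.read_file(plots_path)
with rasterio.open(ortho_path) as src:
    plots = plots.to_crs(src.crs)          # make sure both layers share one CRS
    for idx, plot in plots.iterrows():
        # Clip the orthomosaic to one plot polygon; crop=True trims to its bounding box.
        clipped, transform = mask(src, [plot.geometry], crop=True)
        print(f"plot {idx}: clipped raster shape {clipped.shape}")
```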

Neural network models, inspired by the human brain, have long been used for data modelling. They consist of a layered architecture with a number of nodes or neurons processing data in each layer. Shallow networks consist of a small number of layers (usually three or fewer), while deep neural networks go beyond this. Looking into existing works on deep learning for precision agriculture, most models address end-to-end image recognition tasks such as crop classification [92], weed detection [93] and crop segmentation [94]. They mostly use convolutional neural networks (CNNs). A CNN extracts image features with convolution operations in each layer, reducing the image size while passing selective features to the next layer. Hence, deep learning with convolutional neural networks allows automatic feature extraction, whereas features must be extracted manually in the machine learning pipeline (refer to Section 8.1). The general pipeline for a deep learning-based crop model using UAV imagery is shown in Figure 9, and a minimal code sketch follows the figure caption.

    Figure 9.  Deep learning-based crop model pipeline using UAV imagery.
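To make the idea of automatic feature extraction concrete, the following is a minimal PyTorch sketch of a small CNN that maps an image patch directly to a single yield value; the architecture and dimensions are illustrative and not taken from any of the cited models.

```python
import torch
import torch.nn as nn

# A minimal CNN that maps a 3-channel 64x64 image patch to a single yield value.
class TinyYieldCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),                      # predicted yield (e.g., t/ha)
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = TinyYieldCNN()
dummy_batch = torch.randn(4, 3, 64, 64)            # four synthetic RGB patches
print(model(dummy_batch).shape)                    # torch.Size([4, 1])
```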

Maximizing crop yield at minimum growing cost is a key goal of a smart agriculture system. Early identification of the biotic and abiotic stresses that hinder crop yield is beneficial: it allows farmers to act well in advance, applying appropriate control techniques so that the spread of diseases and pests is reduced and yield is increased. Hence, the estimation of yield and related parameters such as biomass, plant health, nitrogen status and soil conditions is important.

UAV-based remote sensing has been widely applied for yield estimation of various crops (refer to Tables 4 and 5). Since the performance of crop yield estimation models differs from one crop to another, it is quite challenging to compare them directly. Here, we synthesize the performance of crop yield estimation models from three perspectives: crop, input features and machine learning models. We adopt these criteria because the input image features used in yield estimation methods vary from crop to crop, which ultimately affects the output of the yield prediction model.

    Table 4.  Performance comparison of different machine learning models on crop yield estimation+.
Ref. Crop Feature Methods RMSE (t.ha−1) MAE (t.ha−1) R2 Remarks
    [16] Corn ExG, NGRDI, & PPRb LR - - 0.74 The combination of spectral and spatial indices provided the best results.
    [17] Rice MS & RGB VI MLR 0.926 - 0.76 Regression models such as Linear and logarithmic were implemented at the various growth stage.
[83] Tomato CC, CH, CV, ExG, & ETc ANN - - 0.70 A combination of plant attributes, VI and weather information provides the best yield estimation.
    [66] Corn CC, plant density, RGB & MS VI ANN 0.449 0.209 0.92 The ANN with WDRVI, plant density and canopy cover as input features provided the best yield estimation among other features.
[65] Maize MRBVI ANN, SVM, RF & ELM - - 0.57 The SVM provided the best yield estimation with MRBVI.
    [54] Maize VI F, SVM, LR, ANN, Ensemble 0.853 - 0.60 The ensemble method based on additive regression provided the best yield estimation.
    [55] Rice CH, CC, RGB & MS VIs RF 3.65% (*) - 0.85 RF was trained with a model transfer concept where the trained model from the 2017 yield data was transferred to the 2018 yield data.
    [56] Vine NDVI, CV & CT LR, RF, SVM, GPR - - 0.80 The GPR model provided the highest performance with the canopy thickness feature.
    [95] Wheat NDVI, EVI & MTCI SVR, and LASSO 0.374 - 0.90 LASSO regression was better in terms of training time while both regressions provide good performance on yield estimation.
    [21] Soybean VI, CH, CC, Thermal & Texture DNN, PLSR, RF, SVR 15.9% (*) 0.72 Among the two DNNs with data fusion at the input level and feature level, the later DNN produced the highest performance.
    [96] Cotton MS-based VI BP-NN - - 0.85 Multi-temporal VI with image segmentation significantly improves the cotton yield estimation.
+ Note that the correlation coefficient (R) was converted into R2 when the original work reported only the R value. Similarly, the RMSE and MAE are expressed in t.ha−1 to allow comparison on a common basis. The (*) represents the relative RMSE.

    Table 5.  Performance comparison of various deep learning models on crop yield estimation+.
    Ref. Crop Inputs Methods RMSE (t.ha−1) MAE (t.ha−1) R2 Remarks
    [97] Soybean MS images and wilt traits vector Mixed CNN 0.3910 - 0.78 Seven image features; the Red-edge band of the multispectral images, three VIs, a DEM and two texture features were used to train CNN along with a categorical wilt trait vector which resulted in the best yield estimation.
    [98] Rice RGB and MS images CNN 0.6580 - 0.58 The two-branch CNN (one branch with RGB and another with MS images) was trained and tested where RGB image has a significant contribution to yield prediction.
    [22] Wheat RGB images 3D-CNN, ConvLSTM and CNN+ LSTM 0.2895 0.2189 0.96 Three deep learning models were investigated for crop yield estimation where 3D-CNN outperforms all other methods.
[99] Barley RGB images VGG+ MLP - - 0.63 Three pre-trained models (AlexNet, VGG and VGG-19) were used as feature extractors and the extracted features were fed to machine learning regression models, where MLP outperformed all other ML models such as SVR, GP, RF, LR and KNN.
[100] Wheat and barley RGB and NDVI images CNN - 0.4843 - A CNN with six convolutional layers was implemented for each of the RGB and NDVI images, where the CNN trained with RGB images showed better yield prediction performance than the one trained on NDVI images.
+ Note that the RMSE and MAE are expressed in t.ha−1 to allow comparison on a common basis.


Yield estimation for corn was investigated in [16,66] with spectral and structural features using various regression models. A linear regression model [16] with spectral and spatial features (ExG, NGRDI and PPRb) achieved an R2 of 0.74. Furthermore, an artificial neural network (ANN) model proposed in [66], using both structural (canopy cover, canopy height and canopy volume) and spectral (RGB and multispectral) features, provided the lowest errors (0.449 t.ha−1 RMSE and 0.209 t.ha−1 MAE) for corn yield estimation. Similarly, another artificial neural network model for tomato yield estimation was reported in [83], which achieved its best result (R2 = 0.70) when combining plant attributes, vegetation indices and weather information as input features. Furthermore, machine learning methods such as support vector regression (SVR) and LASSO regression were compared for wheat yield estimation by Shafiee et al. [95], where LASSO regression was better in terms of training time while both regression models provided good performance (R2 = 0.90) on yield estimation.

Analyzing the features used in each crop yield estimation model, models with multi-modal features outperformed those based on a single feature. For instance, the maize yield estimation model of [65] with a single vegetation index (MRBVI) produced a coefficient of determination (R2) of 0.57 using support vector machine regression. In contrast, the yield estimation model of [55] with multiple features such as canopy cover, canopy height, and RGB and multispectral VIs using random forest regression produced an R2 of 0.85 with a low error (relative RMSE of 3.65%). Comparing the performance of machine learning algorithms on yield estimation, LASSO regression [95] achieved the highest coefficient of determination (R2 = 0.90) for wheat, followed by a back-propagation neural network (BP-NN) [96] with an R2 of 0.85 for cotton. The majority of the crop yield estimation models had a coefficient of determination (R2) in the range of 0.70 to 0.80.

In addition, deep learning models have recently progressed well for yield estimation using UAV imagery [22,97,98]. Convolutional neural networks [97,98] have been used most often for yield estimation within a deep learning framework, possibly because of the maturity and success of convolutional neural networks in remote sensing applications [15]. The majority of these works used RGB images [98,99], complemented with multispectral features in some studies [97]. In general, the performance of deep learning-based yield estimation methods is higher than that of machine learning-based methods. For instance, the 3D-CNN model for wheat yield estimation using RGB images provided the highest coefficient of determination (R2 = 0.96) and the lowest errors (0.2895 t.ha−1 RMSE and 0.2189 t.ha−1 MAE).

In summary, we notice that recent studies on yield prediction using UAV images have been effective at the plot level [21], while most previous remote sensing techniques using satellite images were implemented at the national or regional level. This is because of the high spatial, temporal and spectral resolution of images acquired from UAV platforms. Overall, the basic idea behind each yield prediction method is to model the crop features against ground truth using various regression strategies. Both UAV-based remote sensing and machine learning techniques have been exploited for crop yield prediction. Remote sensing methods mainly depend upon extracting vegetation indices from RGB, multispectral or hyperspectral images. For instance, yield maps for rice and wheat crops have been developed using NDVI from multispectral images [50]. Furthermore, RGB images combined with multispectral images have proved more accurate than single-feature methods [55] for grain yield prediction. Moreover, multi-modal data fusion-based machine learning methods, which combine features such as thermal, spectral, structural and texture information, have shown promising results in yield prediction [21]. However, data fusion is challenging with such models: the data received from multiple sensors have different spatial, spectral and temporal scales, which demands a specific data fusion procedure for each particular application. Similarly, deep learning-based methods have shown the highest performance on yield estimation for some crops such as wheat [22] and soybean [97]; nevertheless, deep learning methods are more of a black box and are prone to over-fitting, which prevents them from generalizing over time [98]. They also need a large number of high-resolution training images and have high computational costs in comparison to traditional machine learning methods, which further hinders their use in lightweight UAV-based remote sensing [22]. A comparative summary of machine learning- and deep learning-based yield prediction models for various crops, along with the UAV image features used, is reported in Tables 4 and 5, respectively.

Plants are subjected to various environmental stresses that decrease plant productivity. Stress is caused by either abiotic or biotic factors. Abiotic stress is due to drought, floods, extreme temperatures, etc., whereas biotic stress is caused by pathogens, pests and weeds [101]. Early detection and identification of such stresses are beneficial for farmers, as they gain time to manage the stresses and prevent possible epidemics and losses of crop productivity and quality [102]. There are mainly two approaches to crop disease estimation using UAV imagery: a) vegetation index-based approaches and b) machine learning (deep learning)-based approaches.

The vegetation index-based techniques mainly estimate disease or stress scores by processing pixel-level information in RGB, multispectral, hyperspectral [103] or thermal infrared images [104]. For example, RGB images acquired with a UAV in a time-series fashion were used to estimate late blight disease on potatoes [105]. The disease severity index derived from the image processing method was highly correlated (R2 = 0.73) with a manual assessment of disease using the area under the disease progress curve (AUDPC). Furthermore, Patrick et al. [106] used multispectral vegetation indices such as NDRE and NDVI to estimate wilt disease in peanuts using regression analysis. They applied a threshold value to each vegetation index to segment the image pixels into healthy and diseased classes; the number of pixels above the threshold, classified as healthy, was taken as the independent variable and regressed against the wilt disease score. They found the highest coefficient of determination (R2 = 0.82) with the NDRE index. Lightweight UAV-based remote sensing was also investigated for pest and disease detection on two crops, thrips in onion cultures and late blight in potatoes; the NDVI maps showed visually distinguishable disease-affected regions for both crops when index maps generated a week apart were compared [20]. Similarly, three UAV-derived multispectral vegetation indices, the normalized difference index (NDI), green index (GI) and green leaf index (GLI), were investigated for wheat foliar disease severity estimation in [24], where GI was found to be highly correlated with the disease infection coefficient.
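A minimal sketch of this VI-threshold workflow is given below: NDRE is computed per pixel, healthy pixels are counted per plot and regressed against disease scores. The band arrays, the threshold of 0.3 and the disease ratings are synthetic assumptions, not values from [106].

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """Normalized difference red edge index computed per pixel."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

def healthy_pixel_count(nir, red_edge, threshold=0.3):
    """Number of pixels whose NDRE exceeds the threshold (treated as healthy)."""
    return int((ndre(nir, red_edge) > threshold).sum())

# Hypothetical per-plot data: healthy-pixel counts regressed against wilt scores.
rng = np.random.default_rng(0)
counts = np.array([[healthy_pixel_count(rng.random((50, 50)), rng.random((50, 50)))]
                   for _ in range(8)])
wilt_scores = rng.uniform(1, 5, size=8)            # ground-truth disease ratings
reg = LinearRegression().fit(counts, wilt_scores)
print(f"R2 on the toy data: {reg.score(counts, wilt_scores):.2f}")
```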

Similarly, other vegetation indices (VIs) such as the Normalized Difference Vegetation Index (NDVI), Crop Water Stress Index (CWSI), Photochemical Reflectance Index (PRI), Green Normalized Difference Vegetation Index (GNDVI) and Water Deficit Index (WDI) [103,104,107,108] were used to assess water deficit stress or drought in plants. These indices are particularly important in semi-arid areas where the irrigation supply needs to be constantly monitored. The indices are calibrated against ground truth and used to estimate water stress with regression analysis [109]. The calibration with ground truth is a major source of error, especially when the ground truth data themselves carry large uncertainty.

The alternative approaches to vegetation indices for disease detection are based on data-driven machine learning and deep learning algorithms. Traditional machine learning techniques such as support vector machines [110], artificial neural networks [23] and random forests [69] have been used to classify plant stress with multispectral and hyperspectral imagery [51]. To train these methods, labelling of pixels, i.e., which pixels belong to diseased and which to healthy plants [111], is essential. A hyperspectral remote sensing technique was implemented by Abdulridha et al. [23] for detecting tomato diseases such as bacterial spot, target spot and yellow leaf curl under field conditions. They classified the diseased tomato plants using multiple vegetation indices and machine learning methods such as an artificial neural network with radial basis functions (RBF) and stepwise discriminant analysis; the re-normalized difference vegetation index and the modified triangular vegetation index were the best-performing indices for identifying the diseases. Wheat yellow rust detection with multispectral UAV imagery was proposed in [69], where a pixel-level random forest classifier was trained to classify image pixels into healthy, moderate and severe classes with an accuracy of 89.3%.
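The sketch below illustrates pixel-level disease classification with a random forest, assuming each labelled pixel is represented by a vector of band reflectances; the data are synthetic, so the reported accuracy is meaningless beyond demonstrating the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled multispectral pixels:
# five band reflectances per pixel, labels 0 = healthy, 1 = moderate, 2 = severe.
rng = np.random.default_rng(42)
X = rng.random((3000, 5))
y = rng.integers(0, 3, size=3000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"pixel classification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

# At inference time each pixel of a new image is classified the same way,
# and the predicted labels are reshaped back into a severity map.
```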

Deep learning methods such as CNNs have been used to detect disease at both object and pixel level in various kinds of images such as RGB, multispectral and hyperspectral. Kerkech et al. [7] implemented a CNN on RGB images at the block or patch level that classifies sliding windows of the image into four designated classes: ground, healthy, partially diseased and diseased. Each image patch was then post-processed to generate the disease map. They reported an accuracy of 95.8% when classifying the tiles into the four classes. Since expert-annotated data are of limited size, there is always a risk of overfitting the model, which needs further validation. Another study by Wu et al. [112] proposed a two-stage CNN for lesion detection on maize with high-resolution RGB UAV imagery captured by flying a drone 6 m above the ground. In the first stage, they trained a backbone CNN by randomly cropping sub-images of size 500 × 500 pixels. Next, a disease heat map was generated from the outputs of the trained CNN applied to patches produced by a sliding window over the original UAV images. A similar patch-based method using a deep convolutional neural network (DCNN) for yellow rust detection on wheat was implemented with very high-resolution hyperspectral imagery by Zhang et al. [51]; the DCNN outperformed a traditional classifier such as random forest (RF) by 7% in overall accuracy. Furthermore, early water deficit stress identification using cloud-based artificial intelligence (a CNN) with a multispectral dataset having three classes, high water stress, low water stress and no water stress, was investigated by Freeman et al. [113]. They trained the model on a small sample of 36 plants and 150 images; four-fold cross-validation resulted in an area under the curve (AUC) of 0.98.
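A simplified sliding-window inference loop of the kind described above is sketched below; classify_patch is a placeholder for a trained patch classifier (e.g., a CNN), and the window size, stride and class labels are illustrative assumptions.

```python
import numpy as np

PATCH, STRIDE = 64, 64          # window size and step in pixels

def classify_patch(patch: np.ndarray) -> int:
    """Placeholder for a trained patch classifier (e.g., a CNN).
    Returns a class id: 0 ground, 1 healthy, 2 partially diseased, 3 diseased."""
    return int(patch.mean() * 4) % 4   # dummy rule standing in for model inference

def disease_map(image: np.ndarray) -> np.ndarray:
    """Slide a window over the orthomosaic and store one class label per patch."""
    h, w = image.shape[:2]
    rows, cols = h // STRIDE, w // STRIDE
    labels = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            patch = image[i * STRIDE:i * STRIDE + PATCH, j * STRIDE:j * STRIDE + PATCH]
            labels[i, j] = classify_patch(patch)
    return labels

mosaic = np.random.rand(512, 512, 3)   # synthetic RGB orthomosaic
print(disease_map(mosaic).shape)       # (8, 8) patch-level class map
```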

    Performance comparison of crop disease detection methods based on machine learning and deep learning approaches is listed in Table 6.

    Table 6.  Performance comparison of crop disease detection using machine learning (ML) and deep learning (DL) with UAV imagery*.
    Ref. Crop Disease Inputs Methods Acc. (%) Remarks
[23] Tomato leaf spot HS images STDA and RBF 95.00 Two classification algorithms, STDA and radial basis function (RBF), were compared for tomato leaf spot disease classification, where STDA was more accurate (95%) than RBF.
    [69] Wheat yellow rust MS images RF 89.30 Multiple vegetation indices derived with MS images were investigated for better discrimination of diseased and healthy crop pixels where RVI, NDVI and OSAVI were the top three VI.
    [7] Grape vine diseases RGB images CNN 95.80 A CNN was trained with both colour space and vegetation index images as input features and the highest accuracy of 95.8% was achieved with the combination of ExG, ExR and ExGR vegetation indices
    [112] Maize Northern leaf blight RGB images CNN 95.10 A two-stage CNN for leaf blight detection on maize with high-resolution RGB UAV imagery captured by a flying UAV at 6m above the ground was trained with transfer learning with the ResNet-34 model pre-trained on ImageNet.
    [51] Wheat yellow rust HS images Deep CNN 85.00 The performance of DCNN which considers both spatial and spectral information to detect yellow rust was 7% higher than the traditional approaches such as random forest classifier which uses only spectral information.
[111] Wheat Yellow rust HS images SVM 92.90 The SVM with data normalization (SVM-SNV) achieved the highest accuracy in comparison to other approaches such as SVM-indices, SVM-Raw and SAM.
    *Note that "Acc." denotes the detection accuracy.


Crop classification is one of the fundamental steps in precision agriculture because it helps policy-makers and stakeholders retrieve information about the types of crops and their status, which, in turn, supports crop monitoring. Machine learning and deep learning methods have been extensively used for crop classification [114,115,116,117,118]. These approaches first train the models with manually annotated data and later test the models on new field images. The data annotation can be carried out at the patch (object) level or the pixel level. Accordingly, these approaches can be further categorised into patch-based approaches [119] and pixel-based approaches [114,118].

A patch-based convolutional neural network for corn classification was proposed in [119]. The multispectral UAV images were cropped into patches of size 28 × 28 pixels for the corn and non-corn classes, and these patches were used to train the deep learning model LeNet. Their results showed that the altitude of the UAV flight affects the classification accuracy of the CNN, as the best accuracy of 86.8% was obtained with the model trained on a dataset acquired at an altitude of 180 m. Another study identified corn against the background in RGB UAV images using U-Net [120]; a blob detector was then used to count the corn plants after segmenting the images into corn and background. Similarly, a crop and weed distribution estimation using a modified U-Net was designed by Fawakherji et al. [117]. They experimented with multiple combinations of inputs (RED+NIR+NDVI) and achieved an accuracy of 95% when classifying pixels into soil, weed and sugar beet. UAV image-based segmentation methods such as FCN-AlexNet and SegNet [114] distinguished rice lodging from other objects such as road, ridge and background with an overall F1 score of 83.56% and an accuracy of 94.43%.

Classification of multiple crops such as bananas, maize and legumes from drone-based RGB images was developed using deep learning methods [121]. The VGG-16 network pre-trained on ImageNet was used for feature extraction, known as transfer learning, and the extracted features were used for crop classification with a shallow feed-forward neural network, which achieved an overall F1-score of 86.00%. The effect of GLCM texture features on crop classification using UAV imagery was investigated by Kwak et al. [118]. They combined the texture features with spectral features to effectively classify multiple crops such as cabbage, potato and fallow. The texture feature derived with a larger kernel size improved the performance of the support vector machine (SVM) by 7.72%.

Binary classification of image pixels into crop and other pixels (background, other crops or weeds) has also been investigated. For instance, a performance comparison of three methods, SVM, FCN and SegNet, for sunflower lodging identification (lodging vs non-lodging) [115] showed the superiority of SegNet with the highest accuracy of 89.80%. The authors also studied the effect of image fusion on the segmentation model and demonstrated that image fusion increases segmentation accuracy by 5.4%. The performance comparison of various machine learning and deep learning models for crop classification is presented in Table 7.

    Table 7.  Performance comparison of various models (ML and DL) on crop classifications*.
    Ref. Crop Inputs Methods F1-Score Acc. (%) Remarks
    [114] Rice RGB images FCN-AlexNet and SegNet 79.00 94.43 Two semantic segmentation methods: FCN-AlexNet and SegNet, were simulated with RGB image and vegetation indices as input and FCN-AlexNet outperforms the SegNet during the rice lodging classification.
[115] Sunflower RGB and MS images SVM, SegNet, FCN - 89.80 Three segmentation methods, SVM, FCN and SegNet, for sunflower lodging identification were implemented with and without image fusion, where SegNet outperformed all other methods with the highest accuracy of 89.80%.
    [120] Corn RGB images U-Net - 99.40 The U-Net was used to distinguish the corn pixel from the background pixel which was later used to count the corn plant contained in the image.
[117] Sugar beet MS images U-Net - 95.00 The pixel-level classification of multispectral UAV images into weed, soil and sugar beet was performed using a modified U-Net, which showed the highest accuracy of 95.00% on combined input (RED+NIR+NDVI).
    [119] Corn RGB and MS Images LeNet - 86.80 A patch-based LeNet was implemented for corn vs non-corn classification which achieved an accuracy of 86.80% while using a dataset acquired at 180 m altitude
    [121] Multiple crops RGB images VGG +DNN 86.00 86.00 The pre-trained VGG-16 on ImageNet was used for feature extraction, known as transfer learning. The extracted features were used for crop classification with a shallow feed-forward neural network.
    [118] Multiple crops GLCM texture RF, SVM - 90.85 SVM and random forest were compared for crop classification using texture and spectral features where SVM imparts the best performance.
    *Note that "Acc." denotes the classification accuracy.


In this section, we first summarize the existing works under various themes based on the questions listed in Table 1. These themes include UAV platforms, sensors, image features and modelling methods. Second, we describe the recent research progress using a bibliometric analysis and list the issues and challenges of UAV-based remote sensing for PA. Finally, we suggest some future avenues for machine learning and deep learning methods in drone-based precision agriculture.

Over the past decades, the development of UAV platforms has progressed well, which has enhanced the capability of UAV-based remote sensing in various respects. We notice that lightweight rotary-wing and fixed-wing UAVs are the first choices among precision agriculture researchers. Among the rotary-wing UAVs, quadcopters and octocopters are the drones most used in yield estimation and disease detection (refer to Table 2). The increased flexibility, manoeuvrability and affordability of multi-rotor UAVs have attracted people to deploy them in precision agriculture applications. However, their limited endurance and speed may make them unsuitable for large-scale field mapping. Fixed-wing UAVs can cover longer distances but come with higher costs and less manoeuvrability.

The sensors attached to UAVs are the essential component of UAV-based data acquisition. Low-cost RGB sensors [19] and custom-designed multispectral sensors such as RGB with NIR [47] are extensively deployed in recent works, especially when machine learning and deep learning methods are used for crop model building. Since deep learning methods such as CNNs and DNNs require large numbers of high-resolution training images, which can mostly be acquired with RGB sensors, these sensors have been widely exploited in existing works to build artificial intelligence (AI) models for weed detection [14], yield estimation [16], plant counting [19], etc. RGB images are also the source of structural crop information such as canopy height, canopy cover and canopy density [55]. Besides RGB and multispectral sensors, hyperspectral sensors are mostly used to capture crop information at high spectral resolution, which helps to identify the finer spectral signatures of crops such as disease stress [111] and nitrogen status [61], rather than for applications such as yield estimation and crop classification.

Among the crop features extracted from UAV imagery, spectral features derived from RGB, multispectral and hyperspectral images are the most widely used in precision agriculture applications. This is because the spectral features capture the spectral signature of the crops, which helps distinguish them from other objects such as soil and weeds. However, other image features such as texture and structure provide complementary information when combined with spectral features, and such combinations have proved effective in several applications including yield estimation, disease detection and crop classification.

While synthesizing recent works from a model-building perspective, two kinds of models have been implemented to address precision agriculture applications: correlation and regression analysis-based (statistical) methods and data-driven (machine) learning methods. The former use either spectral information (vegetation indices) or structural information (crop height, volume, etc.) calculated from UAV images as independent variables and the crop trait of interest as the dependent variable. The data-driven learning models, in contrast, use the concept of training with sample data to build a crop model for either regression or classification tasks using various learning algorithms; here, the data-driven algorithms include both traditional machine learning and deep learning methods. Comparing the performance of these models for various crop trait estimation and prediction tasks, learning-based methods have produced more promising results than the statistical analysis methods. Nevertheless, machine learning methods are data-intensive, requiring a large amount of manually annotated data as well as more computation time to train the model, which is one of the recurring issues in this field.

To observe the recent research trend in the application of UAVs to precision agriculture, we present the word-cloud and word-dynamics analyses in Figures 10 and 11, where the authors' keywords were extracted from published research works in this field. The most popular keywords among researchers are machine learning, deep learning and remote sensing. This is expected because UAV data resemble big data, which demands large-scale data processing algorithms, so data-driven methods such as machine learning and deep learning become the first choice for researchers.

    Figure 10.  Word dynamics of main keywords used in precision agriculture research using UAV.
Figure 11.  Word-cloud representation of the top fifty authors' keywords using bibliometric analysis [122] of articles retrieved from the Scopus database. Note that the size of each word in the word-cloud is proportional to the frequency of the keyword. Also, this word-cloud might contain some bias based on the keywords used to select the articles.

Analyzing the keywords' growth over the last decade (2013 to 2022), the keyword "precision agriculture" remains popular throughout the period, as it is the main keyword reflecting research on smart agriculture. Interestingly, the second and third most dominant keywords are "machine learning" and "deep learning", which further supports the observation that machine learning and deep learning methods are being widely employed to analyze data and build crop models for various crop trait estimations. In summary, researchers are increasingly focused on these methods to leverage their large-scale data processing capability in precision agriculture with the aid of drones. However, we note that this word-cloud analysis may contain some bias arising from the keywords we used to select the articles.

UAVs or drones are an emerging technology recently introduced to precision agriculture. However, the pace of development around this technology is so rapid that it has already addressed some complex problems in agriculture, such as disease detection, crop classification and yield estimation, more accurately. There are still challenges in implementing a UAV-based precision agriculture pipeline regarding data acquisition (sensors and UAV platforms), data processing (UAV image processing and feature extraction) and prediction models (machine learning and deep learning). Here, we summarize the findings of this survey into three groups: strengths of existing well-developed works, current focus, and challenges.

The data acquisition platforms, namely sensors and UAV systems, have made significant progress, and it is now possible to collect high-resolution data with multiple sensors simultaneously. Data pre-processing has also become relatively easy, owing to the availability of high-performance computers, photogrammetry software tools (e.g., Pix4Dmapper) and cloud technology. Recently, more precision agriculture researchers have focused on the development of advanced data analysis techniques based on machine learning and deep learning algorithms. In summary, we detail the challenges and opportunities of UAVs in PA as follows:

(ⅰ) Compliance with drone operating regulations, the necessary training for drone operation, and the advanced knowledge required for data processing tools and software are the key challenges that need to be properly addressed for wider use of drones in precision agriculture.

(ⅱ) Existing feature extraction methods focus primarily on spectral information for crop trait estimation, but multi-modal feature fusion strategies appear more effective for various crop trait estimations. However, feature fusion strategies are not straightforward to implement; hence, more exploration is needed from this perspective.

(ⅲ) Advanced data analysis techniques such as machine learning and deep learning have shown very promising outcomes in some PA tasks such as crop classification and yield estimation. However, they require a large amount of manually labelled data for training, which is laborious and costly. Therefore, alternatives to manual annotation of training data, such as semi-supervised, weakly supervised or unsupervised techniques, need to be explored.

We have provided a detailed analysis and synthesis of the applications of machine learning methods to precision agriculture tasks using UAVs as data acquisition platforms. We note that the sensors and UAV platforms are cardinal in conducting drone-based remote sensing work: the overall pipeline of UAV-based remote sensing applications depends critically on the types of data acquired by the sensors and the reliability of the UAV platforms. We also unpack the various crop features extracted from UAV image data for crop trait estimation tasks, which reveals that spectral features are the most important for good model performance while other features are complementary. Hence, multi-modal feature fusion techniques have great potential to address precision agriculture challenges such as accurate crop trait estimation. This paper presented a comparative analysis of various machine learning and deep learning methods used to address precision agriculture tasks such as yield estimation, disease detection and crop classification. It is seen that deep learning-based methods outperform traditional crop estimation models for the majority of tasks.

    To sum up, we have reviewed the various UAV platforms, the associated sensors, and the data processing pipeline complemented with recent analytical methods such as machine learning and deep learning in this work. The recent trend of using UAVs in smart agriculture shows the increasing success of deep learning methods in accurately addressing precision agriculture tasks.

    The authors would like to acknowledge the Research Training Program (RTP) scholarship funded by the Australian Government and the support and resources provided by CQUniversity. There was no additional external funding received for this study.

    The authors declare no conflict of interest.


