The increasing presence of wind farms (WF) in electrical power networks has made it important to simulate correlated wind speeds. The use of wind speed series in combination with the wind turbine (WT) power curves is common for the resolution of more than one typical problem in electrical power network analysis.
In order to attain a solution for some of these problems it is necessary to deal with wind speed series satisfying features regarding the frequency distribution of wind speeds and the correlation between series at different sites. There is wide agreement on considering the Weibull distribution as the best continuous approximation for the frequency distribution of wind speed in a site [1], and Kavasseri presents a study of the phenomena associated with the correlation [2]. Spatial correlation is explicitly mentioned by Damousis [3], and autocorrelation has been studied by Brown et al. [4], and also by Song and Hsiao [5].
Other solutions to the problem stated above have been proposed in different papers, and a review of them has been presented by Feijóo et al. [6], especially for the case when no additional chronological constraints are imposed.
Correia and Ferreira de Jesus [7] use the process of obtaining a Weibull distribution as a combination of two Normal distributions while Segura et al. [8] obtain correlated Weibull distributions from Uniform distributions.
A method based on the conditional probability theorem was presented by Karaki et al. [9] and an approach based on the inverse discrete Fourier transform was presented by Young and Beaulieu [10].
Huang and Chalabi [11] present a method based on chronological series and Shamshad et al. [12] a different one based on Markovian models.
Villanueva et al. [13] propose a solution for an application to the economic dispatch problem in networks with penetration of wind farms. It consists of the simulation of wind speed series satisfying statistical constraints such as those mentioned above together with an additional one regarding autocorrelation, which added a chronological nuance to the proposed method. As a result, series of correlated wind speeds are obtained with a very high degree of accuracy regarding correlations, Weibull parameters and autocorrelation of the series, although it is not so clear why these features are retained through the proposed transformation, which is a transformation from Normal to Weibull distributions.
The main objective of this paper is to return to this subject and offer an approach which helps understand and discuss why those features are retained through such a transformation. Polynomial approximations of first and second degree will be used to achieve this.
In the rest of the paper, a Normal distribution with mean $\mu$ and standard deviation $\sigma$ will be denoted $N(\mu,\sigma)$. A $N(0,1)$ distribution can be transformed into a Weibull distribution with parameters $c$ and $k$ by means of:
$u=c\left(-\log\left(\dfrac{1-\operatorname{erf}\left(x/\sqrt{2}\right)}{2}\right)\right)^{1/k}$  (1)
where $x$ is a value of the $N(0,1)$ distribution and $\operatorname{erf}$ is the error function, defined as:

$\operatorname{erf}(x)=\dfrac{2}{\sqrt{\pi}}\displaystyle\int_0^x e^{-t^2}\,dt$  (2)
For such a transformation, the fact must be taken into account that the cumulative distribution function (CDF) of normally distributed data with mean value $\mu_x=0$ and standard deviation $\sigma_x=1$ is equated to the Weibull CDF, $F(u)=1-e^{-(u/c)^k}$, which leads to (1).
When such a transformation is performed, a representation of the normally distributed values against the Weibull distributed ones gives Figure 1 as a result, where the Weibull values cover an interval including [3, 25], i.e., the interval of wind speed values in which most WTs can run, which is why it is considered an important interval in this paper.
The graph represented in Figure 1 has been obtained for a Weibull distribution with parameters $c=7$ and $k=2$.
So far, in Figure 1 attention must be paid to the line with the legend Exact, obtained from (1).
A visual inspection of Figure 1 and the experience of having carried out many different simulations lead the authors to think that polynomial approximations to (1) could give accurate results.
If a least squares approximation is applied to the set of values of such a transformation, then the other curves of Figure 1 are obtained, i.e., those corresponding to the legends 1st order and 2nd order.
Both curves have been obtained by means of polynomial approximations of first and second order, respectively, such as:
$f(x)=\displaystyle\sum_{n=0}^{D} a_n x^n$  (3)
where $D\in\{1,2\}$ is the degree of the polynomial and $a_n$ are its coefficients.
This means that $u$ can be approximated as $u\approx f(x)$, i.e., the Weibull distributed values are obtained as polynomial functions of the Normally distributed ones.
The values of the constants obtained for these approximations are $a_0=5.7660$ and $a_1=4.1384$ for the first degree polynomial, and $a_0=5.9301$, $a_1=3.5227$ and $a_2=0.1539$ for the second degree one.
Summarizing, for the transformation proposed in (1) there are the following possible approximations.
$f_1(x)=5.7660+4.1384\,x$  (4)
$f_2(x)=5.9301+3.5227\,x+0.1539\,x^2$  (5)
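The transformation (1) and the two approximations (4) and (5) can be sketched in a few lines of code, assuming the same parameters $c=7$ and $k=2$ used in Figure 1:

```python
import math

C, K = 7.0, 2.0  # Weibull parameters used in the paper's example

def weibull_from_normal(x, c=C, k=K):
    """Exact transformation (1): maps an N(0,1) value onto a Weibull(c,k) value."""
    return c * (-math.log((1.0 - math.erf(x / math.sqrt(2.0))) / 2.0)) ** (1.0 / k)

def f1(x):
    """First degree least-squares approximation (4)."""
    return 5.7660 + 4.1384 * x

def f2(x):
    """Second degree least-squares approximation (5)."""
    return 5.9301 + 3.5227 * x + 0.1539 * x ** 2

if __name__ == "__main__":
    for x in (-0.5, 0.0, 1.0, 2.0):
        u = weibull_from_normal(x)
        print(f"x={x:5.2f}  exact={u:7.4f}  f1={f1(x):7.4f}  f2={f2(x):7.4f}")
```

At $x=0$ the exact transform gives $7\sqrt{\log 2}\approx 5.83$, between the constant terms of (4) and (5); away from the center the second degree polynomial tracks the exact curve more closely.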
For each approximation, a measure of error can be defined as:
$e_{f_k}=\sqrt{\dfrac{\sum_{i=1}^{M}\left(u_i-f_k(x_i)\right)^2}{M}}$  (6)
where $k\in\{1,2\}$ denotes the degree of the polynomial approximation and $M$ is the number of samples.
The error made in a set of 100,000 samples, according to (6), when obtaining the approximations of Figure 1 came to 0.0043 for the first degree polynomial and 0.0011 for the second degree one. In different simulations the values of these errors can be slightly different, but very close to the ones given here.
Although the differences in error according to (6) are not so large, they become more noticeable when dealing with absolute errors. If an absolute error is measured according to:
$e^{abs}_{f_k}=\max\left\{\left|u_i-f_k(x_i)\right|,\ i\in\{1,2,...,M\}\right\}$  (7)
then in different simulations, values around 1.27 have been found for the first degree approximation and around 0.34 for the second degree one.
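As a rough sketch (not the paper's Monte Carlo setup), the two error measures (6) and (7) can be evaluated on a uniform grid of $x$ values chosen, as an assumption, so that the exact transform spans roughly $u\in[3,25]$; the maximum absolute errors then come out in the neighbourhood of the values quoted above:

```python
import math

def weibull_from_normal(x, c=7.0, k=2.0):
    # Exact transformation (1)
    return c * (-math.log((1.0 - math.erf(x / math.sqrt(2.0))) / 2.0)) ** (1.0 / k)

f1 = lambda x: 5.7660 + 4.1384 * x                    # approximation (4)
f2 = lambda x: 5.9301 + 3.5227 * x + 0.1539 * x ** 2  # approximation (5)

# Grid of x values whose images span roughly u in [3, 25] (assumed range).
M = 2001
xs = [-0.96 + i * (4.5 + 0.96) / (M - 1) for i in range(M)]
us = [weibull_from_normal(x) for x in xs]

def rms_error(f):
    # Error measure (6)
    return math.sqrt(sum((u - f(x)) ** 2 for x, u in zip(xs, us)) / M)

def abs_error(f):
    # Error measure (7)
    return max(abs(u - f(x)) for x, u in zip(xs, us))
```

The numerical values depend on the sampling, but in this sketch the first degree polynomial shows a maximum absolute error slightly above 1 and the second degree one around 0.3, with the largest deviations at the low wind speed end.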
In Figure 2, the histogram obtained by means of a transformation based on (1) has been included with the notation Exact. In the same figure, 1st deg. pol. and 2nd deg. pol. denote the histograms corresponding to both proposed approximations. These histograms have been obtained with 100,000 samples.
An observation must be made here. A first order polynomial does not convert a Normal distribution into a Weibull one. It is a linear conversion, so the result has to be a Normal distribution, and this can be appreciated in Figure 2. However, a second order polynomial has the contribution of the second order term, which tends to make the function asymmetric, so its similarity with a Weibull distribution is stronger.
Finally, in Table 1 some of the moments of $u$, $f_1(x)$ and $f_2(x)$ are presented.
Variable | mean μ | std. dev. σ
u | 6.1975 | 3.2355 |
f1(x) | 5.7707 | 4.1289 |
f2(x) | 6.0787 | 3.5213 |
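The sample mean and standard deviation of $u$ in Table 1 can be checked against the closed-form Weibull moments, which involve the Gamma function; a minimal sketch for $c=7$, $k=2$:

```python
import math

def weibull_mean(c, k):
    # E[u] = c * Gamma(1 + 1/k)
    return c * math.gamma(1.0 + 1.0 / k)

def weibull_std(c, k):
    # Var[u] = c^2 * (Gamma(1 + 2/k) - Gamma(1 + 1/k)^2)
    g1 = math.gamma(1.0 + 1.0 / k)
    g2 = math.gamma(1.0 + 2.0 / k)
    return c * math.sqrt(g2 - g1 * g1)
```

For $k=2$ the mean reduces to $c\sqrt{\pi}/2\approx 6.20$, in good agreement with the sample value 6.1975 of Table 1.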
The next question to be answered is about the influence of the constants $c$ and $k$ on the values of the coefficients $a_i$ of these approximations.
In this section, the variation of the coefficients $a_i$ with the Weibull parameters $c$ and $k$ is analyzed.
In (1) it can be observed that the constant $c$ multiplies the rest of the expression, so its effect on the transformation is a pure scaling. It is not difficult to deduce that variations of $c$ produce proportional variations of the coefficients $a_i$.
For a fixed value of $k=2$, the coefficients depend linearly on $c$:
$a_0^c(c)=0.8464\,c$, $a_1^c(c)=0.4972\,c$, $a_2^c(c)=0.0243\,c$  (8)
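As a quick sanity check, evaluating the linear laws (8) at $c=7$ reproduces the coefficients of (5) to within a few hundredths:

```python
def coeffs_from_c(c):
    """Linear dependence (8) of the second degree coefficients on c, for k = 2."""
    return 0.8464 * c, 0.4972 * c, 0.0243 * c

a0, a1, a2 = coeffs_from_c(7.0)  # expected near 5.93, 3.52, 0.15
```

Doubling $c$ doubles all three coefficients, which is the pure scaling effect described above.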
As reflected in Figure 3, the simulation was made for a range of values of $c$.
The following comment should be added. As mentioned before, sometimes the Rayleigh distribution has been recommended as a good continuous approximation of the frequency distribution of wind speeds at a given site. In a Weibull distribution the mean and standard deviation are calculated by means of the Gamma function first described by Euler, $\Gamma(p)=\int_0^\infty e^{-x}x^{p-1}\,dx$; in particular, the mean is $\mu=c\,\Gamma(1+1/k)$, which for the Rayleigh case ($k=2$) reduces to $\mu=c\sqrt{\pi}/2$, i.e., $c=2\mu/\sqrt{\pi}$.
Taking the previous paragraph into account, the use of (8) can be of interest when using the Rayleigh distribution. In this case, $k=2$ and the coefficients can be written directly as functions of the mean wind speed $\mu$:
$a_0^c(\mu)=0.8464\cdot\frac{2}{\sqrt{\pi}}\,\mu=0.9551\,\mu$
$a_1^c(\mu)=0.4972\cdot\frac{2}{\sqrt{\pi}}\,\mu=0.5610\,\mu$
$a_2^c(\mu)=0.0243\cdot\frac{2}{\sqrt{\pi}}\,\mu=0.0274\,\mu$  (9)
The presence of $k$ in the exponent of (1) makes its influence on the coefficients more involved than that of $c$. As can be expected, changes in the value of $k$ modify the shape of the transformation curve, and the coefficients $a_i$ vary nonlinearly with $k$. A first order approximation does not seem to be able to fit these curves, for which a second order one is here recommended, and it reveals that $a_0$, $a_1$ and $a_2$ can be expressed as quadratic functions of $k$:
$a_0^k(k)=-0.0715\,k^2+0.4862\,k+0.9886$
$a_1^k(k)=0.0967\,k^2-0.8153\,k+2.2533$
$a_2^k(k)=0.1503\,k^2-0.9132\,k+1.3177$  (10)
Both effects of the variations of $c$ and $k$ can be considered simultaneously. The results of the variation of both $c$ and $k$ can be combined into a single expression for each coefficient.
This combined dependency can also be approximated by means of a polynomial transformation such as:
$a_i(c,k)=\displaystyle\sum_{m=0}^{1}\sum_{n=0}^{2} p_{mn}^{i}\,c^m k^n,\quad i\in\{0,1,2\}$  (11)
where the constants $p_{mn}^{i}$ must be obtained by fitting. Under the assumption of an interval of values of $c$ and $k$ of practical interest, the values given in Table 2 have been obtained.
p00 | p10 | p01 | p11 | p02 | |
a0 | -0.9743 | 0.6891 | 0.8938 | 0.0644 | -0.1788 |
a1 | 1.3170 | 0.8632 | -1.209 | -0.1660 | 0.2417 |
a2 | 2.0480 | 0.2492 | -1.8790 | -0.0808 | 0.3758 |
The coefficients $p_{12}^{i}$, corresponding to the term in $c\,k^2$, were found to be negligible and are not included in Table 2.
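A minimal sketch of how (11) is evaluated with the Table 2 constants follows; note that, with the $p_{12}^{i}$ terms not listed, the reconstruction at $c=7$, $k=2$ is close for $a_0$ and $a_1$ but rougher for the small coefficient $a_2$:

```python
# Coefficients p_mn^i from Table 2, ordered (p00, p10, p01, p11, p02).
P = {
    0: (-0.9743, 0.6891,  0.8938,  0.0644, -0.1788),
    1: ( 1.3170, 0.8632, -1.2090, -0.1660,  0.2417),
    2: ( 2.0480, 0.2492, -1.8790, -0.0808,  0.3758),
}

def a(i, c, k):
    """Combined dependence (11), truncated to the terms listed in Table 2."""
    p00, p10, p01, p11, p02 = P[i]
    return p00 + p10 * c + p01 * k + p11 * c * k + p02 * k * k
```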
Some consequences of the proposed approximations are given in this section. The conservation of autocorrelation when going from Normal distributions to Weibull distributions is not easy to deduce directly from (1), but some operations with the statistical values, based on the approximations, can help explain why these values are retained. At the end of the section there are also some considerations about negative values in the simulations.
It has been mentioned that (1) was presented by Villanueva et al. in [13] as an option for obtaining Weibull distributions with given mean, standard deviation and lag 1 autocorrelation. To achieve such an objective, an autoregressive process known as AR(1) was used for randomly simulating Normal distributions with mean 0 and standard deviation 1, and then a transformation based on the Cholesky decomposition of the covariance matrix was used to obtain the Weibull distributions.
Something that was observed when using this transformation process was the fact that lag 1 autocorrelation was apparently retained when converting data from the initial Normal distribution to the final Weibull one, i.e., the autocorrelation of each of the simulated Weibull series had a value very close to the value of the autocorrelation of the initial Normal series. In view of (1) it is not so evident why such statistical values are retained through the transformation.
The results obtained with the polynomial approximations presented in this paper can be used as an interesting approach to explain the fact given above.
In previous sections it has been shown that good approximations with polynomials of degrees 1 and 2 can be obtained for (1). The proposal is to look for relationships between the statistical values by means of these approximations.
From this section on, some new notation will be used: $x=[x_1,x_2,...,x_M]$, $x_i=[x_1,x_2,...,x_{M-1}]$ and $x_{i+1}=[x_2,x_3,...,x_M]$. For the approximation, the notation will be $f_1(x)=[f_1(x_1),f_1(x_2),...,f_1(x_M)]$, $f_1(x)_i=[f_1(x_1),f_1(x_2),...,f_1(x_{M-1})]$ and, in the same way, $f_1(x)_{i+1}=[f_1(x_2),f_1(x_3),...,f_1(x_M)]$.
It is a well known fact that if the mean value of a Normal distribution is $\mu_x=E[x]=\frac{\sum_{k=1}^{M}x_k}{M}=0$, then the mean of the linearly transformed series is $\mu_{f_1(x)}=E[a_0+a_1x]=a_0$. If $\sigma_x^2=E[(x-\mu_x)^2]=\frac{\sum_{k=1}^{M}(x_k-\mu_x)^2}{M}=1$, then the variance of the transformed series is $\sigma_{f_1(x)}^2=a_1^2\sigma_x^2=a_1^2$.
Lag 1 autocorrelation is obtained as the correlation between a given series and the same series shifted by one position. For calculating covariances between two series, for example, $x=[x_1,x_2,...,x_M]$ and $y=[y_1,y_2,...,y_M]$, a formulation has to be used where terms like $(x_i-\mu_x)(y_i-\mu_y)$ appear.
The covariance between a series of a Normal distribution and the shifted one will be denoted as $\sigma_{x_ix_{i+1}}$ and can be obtained as

$\sigma_{x_ix_{i+1}}=\dfrac{\sum_{i=1}^{M-1}(x_i-\mu_x)(x_{i+1}-\mu_x)}{M}=\dfrac{\sum_{i=1}^{M-1}x_ix_{i+1}}{M}$

by taking into account that $\mu_x=0$.
The correlation between both series can be defined as
$\rho_{x_ix_{i+1}}=\dfrac{\sigma_{x_ix_{i+1}}}{\sigma_{x_i}\sigma_{x_{i+1}}}=\dfrac{\sigma_{x_ix_{i+1}}}{\sigma^2}=\sigma_{x_ix_{i+1}}$

assuming that $\sigma=\sigma_{x_i}=\sigma_{x_{i+1}}=1$.
This correlation between the two series is the lag 1 autocorrelation of the series.
What happens with the values of the approximated Weibull distribution is the following:
$\sigma_{f_1(x)_if_1(x)_{i+1}}=\dfrac{1}{M}\displaystyle\sum_{i=1}^{M-1}\left(f_1(x_i)-\mu_{f_1(x)}\right)\left(f_1(x_{i+1})-\mu_{f_1(x)}\right)=a_1^2\,\sigma_{x_ix_{i+1}}$

taking into account that $f_1(x_i)-\mu_{f_1(x)}=a_0+a_1x_i-a_0=a_1x_i$.
ρf1(x)if1(x)i+1=σf1(x)if1(x)i+1σf1(x)iσf1(x)i+1
Previously it has been shown that σ2f1(x)=a21
ρf1(x)if1(x)i+1=a21σxixi+1a1σxia1σxi+1=σxixi+1σxiσxi+1=ρxixi+1
The conclusion is that substituting (1) by a first degree polynomial involves a transformation where lag 1 autocorrelation is retained.
In fact, if subindices $i+1$ are replaced by $i+n$, the same reasoning holds, so autocorrelations of any lag are retained.
A summary of all these moments can be read in Table 3.
Variable | μ | σ²
x | 0 | 1
$f_1(x)=a_0+a_1x$ | $a_0$ | $a_1^2$
The conclusions of the previous section are not surprising, because the proposed transformation by means of $f_1$ is linear, and linear transformations are known to retain correlations.
A better approximation to (1) consists of a second order polynomial, such as $f_2(x)=a_0+a_1x+a_2x^2$.
The mean value of the transformed distribution is $\mu_{f_2(x)}=E[a_0+a_1x+a_2x^2]=a_0+a_2$, since $E[x]=0$ and $E[x^2]=1$.
In the case of the variance, the calculation is as follows: $\sigma^2_{f_2(x)}=E[(a_0+a_1x+a_2x^2-\mu_{f_2(x)})^2]=E[(a_1x+a_2(x^2-1))^2]$, by taking into account the fact that $\mu_{f_2(x)}=a_0+a_2$.
The values of the moments for a Normal distribution can be seen in appendix A, and substituting them in this expression, the result is that $\sigma^2_{f_2(x)}=a_1^2+2a_2^2$.
Autocorrelation can be defined with the help of $\sigma_{x_ix_{i+1}}=\frac{\sum_{i=1}^{M-1}x_ix_{i+1}}{M}$ for the original series.
For the transformed one: $\sigma_{f_2(x)_if_2(x)_{i+1}}=\dfrac{1}{M}\displaystyle\sum_{i=1}^{M-1}\left(f_2(x_i)-\mu_{f_2(x)}\right)\left(f_2(x_{i+1})-\mu_{f_2(x)}\right)=\dfrac{1}{M}\displaystyle\sum_{i=1}^{M-1}\left(a_1x_i+a_2(x_i^2-1)\right)\left(a_1x_{i+1}+a_2(x_{i+1}^2-1)\right)$
For simplicity, the factor $1/M$ is omitted, and the product is expanded as:

$\displaystyle\sum_{i=1}^{M-1}\left(a_1^2x_ix_{i+1}+a_1a_2x_ix_{i+1}^2-a_1a_2x_i+a_1a_2x_i^2x_{i+1}-a_1a_2x_{i+1}+a_2^2x_i^2x_{i+1}^2-a_2^2x_i^2-a_2^2x_{i+1}^2+a_2^2\right)$
This expression can be simplified by taking into account the properties of the $N(0,1)$ distribution, with the help of the expressions given in appendix A; the final result is that $\sigma_{f_2(x)_if_2(x)_{i+1}}=a_1^2\sigma_{x_ix_{i+1}}+2a_2^2\sigma^2_{x_ix_{i+1}}$.
According to this:
$\rho_{f_2(x)_if_2(x)_{i+1}}=\dfrac{\sigma_{f_2(x)_if_2(x)_{i+1}}}{\sigma_{f_2(x)_i}\sigma_{f_2(x)_{i+1}}}=\rho_{x_ix_{i+1}}\dfrac{a_1^2+2a_2^2\,\rho_{x_ix_{i+1}}}{a_1^2+2a_2^2}$  (12)
where $\rho_{x_ix_{i+1}}=\sigma_{x_ix_{i+1}}$, as $\sigma_{x_i}=\sigma_{x_{i+1}}=1$.
It is interesting to point out the values of $\rho_{f_2(x)_if_2(x)_{i+1}}$ at the extreme values of $\rho_{x_ix_{i+1}}$:

$\rho_{f_2(x)_if_2(x)_{i+1}}=\begin{cases}1 & \text{if }\rho_{x_ix_{i+1}}=1\\[2pt] 0 & \text{if }\rho_{x_ix_{i+1}}=0\\[2pt] -\dfrac{a_1^2-2a_2^2}{a_1^2+2a_2^2} & \text{if }\rho_{x_ix_{i+1}}=-1\end{cases}$
Anyway, as generally $a_1\gg a_2$, the value of $\rho_{f_2(x)_if_2(x)_{i+1}}$ remains very close to $\rho_{x_ix_{i+1}}$ for any value of the latter.
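A small Monte Carlo check, assuming an AR(1) Normal series with target lag 1 autocorrelation 0.8 and the exact transform (1), illustrates how closely the autocorrelation is carried over:

```python
import math
import random

def weibull_from_normal(x, c=7.0, k=2.0):
    # Exact transformation (1)
    return c * (-math.log((1.0 - math.erf(x / math.sqrt(2.0))) / 2.0)) ** (1.0 / k)

def lag1_autocorr(s):
    """Sample lag 1 autocorrelation of a series."""
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + 1] - m) for i in range(len(s) - 1))
    den = sum((v - m) ** 2 for v in s)
    return num / den

random.seed(42)
phi = 0.8                      # target lag 1 autocorrelation of the Normal series
x = [random.gauss(0.0, 1.0)]
for _ in range(100000):
    # AR(1) with N(0,1) marginal: x_t = phi*x_{t-1} + sqrt(1-phi^2)*eps_t
    x.append(phi * x[-1] + math.sqrt(1.0 - phi * phi) * random.gauss(0.0, 1.0))
u = [weibull_from_normal(v) for v in x]
```

In runs of this kind the autocorrelation of the Weibull series falls within a few thousandths of the original, in line with (12) for $a_1\gg a_2$.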
All that has been explained in this section can be summarized as follows: assuming a Normal distribution and the Weibull distribution obtained from this Normal one by means of (1), different approximations to the Weibull distribution can be run on a polynomial basis.
A degree one polynomial is a linear approximation which retains variance and autocorrelation without dependency of the lag.
A degree two polynomial includes a certain degree of asymmetry, which makes the transformed distribution be further from the Normal one and closer to the Weibull one. In this case, the autocorrelation is not retained and its degree of approximation to the values of the original distribution depends on the value itself.
As the second degree approximation is closer to the exact transformation than the first degree one, this allows the conclusion to be made that the autocorrelation is not exactly retained through the exact transformation given by (1), although it is retained to a high degree.
Variable | μ | σ²
x | 0 | 1
$f_2(x)=a_0+a_1x+a_2x^2$ | $a_0+a_2$ | $a_1^2+2a_2^2$
Correlations between series are retained through linear transformations, and this can be argued in a similar way to the assertion made for the lag 1 autocorrelations.
If $x\sim N(0,1)$ and $y\sim N(0,1)$ are two correlated series, their linear approximations are $f_1(x)=a_0+a_1x$ and $f_1(y)=b_0+b_1y$. This means that $\sigma_{f_1(x)f_1(y)}=E[(f_1(x)-\mu_{f_1(x)})(f_1(y)-\mu_{f_1(y)})]=E[(a_0+a_1x-a_0)(b_0+b_1y-b_0)]=a_1b_1\sigma_{xy}$. As $\sigma_{f_1(x)}=a_1$ and $\sigma_{f_1(y)}=b_1$, the conclusion is that $\rho_{f_1(x)f_1(y)}=\rho_{xy}$, i.e., the correlation is retained.
However, in nonlinear transformations, i.e., in the case of second degree approximations, things operate in a different manner. If new approximations are taken, such as $f_2(x)=a_0+a_1x+a_2x^2$ and $g_2(y)=b_0+b_1y+b_2y^2$, the covariance between $f_2(x)$ and $g_2(y)$ can be calculated following the same steps as in the case of the autocorrelation.
By rearranging the previous expression and by taking into account appendix A, it can be written as:
$\sigma_{f_2(x)g_2(y)}=\sigma_{xy}\left(a_1b_1+2a_2b_2\,\sigma_{xy}\right)$
Now, as $\sigma_{f_2(x)}=\sqrt{a_1^2+2a_2^2}\,\sigma_x$ and $\sigma_{g_2(y)}=\sqrt{b_1^2+2b_2^2}\,\sigma_y$, with $\sigma_x=\sigma_y=1$, the correlation between the transformed series is:
$\rho_{f_2(x)g_2(y)}=\rho_{xy}\dfrac{a_1b_1+2a_2b_2\,\rho_{xy}}{\sqrt{a_1^2+2a_2^2}\sqrt{b_1^2+2b_2^2}}$  (13)
As can be deduced, things operate in a similar way to the case of lag 1 autocorrelation if both distributions coincide, i.e., if they have identical $c$ and $k$ parameters.
However, when both distributions differ, then the four parameters $a_1$, $a_2$, $b_1$ and $b_2$ take part in (13), and the degree to which the correlation is retained depends on their values.
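Equation (13) is easy to evaluate numerically. In the sketch below the coefficients $a_1$, $a_2$ are taken from (5), while $b_1$, $b_2$ are hypothetical values for a second, different Weibull distribution:

```python
import math

def rho_transformed(rho, a1, a2, b1, b2):
    """Correlation (13) between second degree transforms of two N(0,1) series."""
    return rho * (a1 * b1 + 2.0 * a2 * b2 * rho) / (
        math.sqrt(a1 ** 2 + 2.0 * a2 ** 2) * math.sqrt(b1 ** 2 + 2.0 * b2 ** 2))

a1, a2 = 3.5227, 0.1539  # coefficients from (5)
b1, b2 = 4.1000, 0.2200  # hypothetical coefficients of a second distribution
```

Even with different parameter sets, the transformed correlation stays within about one hundredth of the original because the cross term $2a_2b_2\rho$ is small compared with $a_1b_1$.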
Another consequence of the approximation is the appearance of negative values in the conversion, a problem that was detected by Feijóo et al. [14], in a work where correlated Weibull and Rayleigh distributed series of wind speeds were simulated, and then avoided with new methods by Feijóo and Sobolewski [15], with the use of nonparametric correlations, i.e., Spearman rank correlations.
For representing wind speed values, the Weibull distribution can be treated just as it has been so far, i.e., assuming that its minimum value is 0.
But more generally, the Weibull CDF with an origin $\gamma\neq 0$ can be written as $F(u)=1-e^{-\left(\frac{u-\gamma}{c}\right)^k}$ for $u\geq\gamma$.
According to this, it is natural that when data represent wind speeds, there is a tendency to reject negative values. However, other errors are accepted in all simulations and an interesting question that begs to be answered is how much of a problem it would be to accept negative values as wind speed values.
And the answer is that it does not necessarily involve making a significant error in the calculations.
A typical situation consists of combining wind speed data with WT power curves with the aim of calculating either the power generated or the total energy produced during a certain period of time.
In order to check the error made when substituting the exact formulation given by (1) by a polynomial approximation of degree 2, a power curve for a WT has been combined with data corresponding to a site with a Weibull distribution of parameters $c=7$ and $k=2$.
The power curve has been proposed by Carta et al. [16] and described in appendix B, with a cut-in wind speed $v_{CI}=4\ \mathrm{m\,s^{-1}}$.
As a conclusion, the acceptance of negative values as wind speeds for the calculation of power or energy values in a simulation is not so important, as they are filtered by the WT power curve, i.e., if the value of wind speed is negative, then the power generated by the WT will be 0. In many cases it will be just like when the value is positive but under 3 or 4 $\mathrm{m\,s^{-1}}$.
An approximation with a polynomial of degree one is much less satisfactory, as the errors made when calculating energy rise to values close to 13% for all the rated powers given.
In this paper two different polynomial approximations have been proposed for the transformation of sets of Normally distributed data to sets of Weibull distributed series, satisfying not only the parameters of Weibull distributions, but also their correlations and even autocorrelations.
The approximations have been used to provide an approach to a better understanding of why these features are retained when an exact transformation is carried out, with the following consequences:
1. The use of (1) for the Normal to Weibull transformation is very adequate and gives very good results. It has no disadvantages from a computational point of view. However, it is not easy to explain why certain statistical values, such as correlation and autocorrelation, are highly retained, a fact previously observed in simulations.
2. In order to explain such phenomena, polynomial approximations based on the least squares method were used for the $CDF_{Weibull}=f(CDF_{Normal})$ transformation given by (1).
3. A second degree approximation shows a high degree of accuracy; it shows that correlations and autocorrelations are not exactly retained, but it helps explain why they are retained to a high degree.
4. The appearance of negative values of the wind speed in simulations does not involve an important error. Although there are no negative values of wind speed, they can appear in the simulation, but even so, they are filtered by the power curves of the WTs when used for estimating power or energy captured from the wind.
The following are the expressions of the moments of the Normal distribution that have been necessary in the paper:

$E[x]=\mu_x=0$
$E[x^2]=\sigma_x^2=1$
$E[x^3]=\mu_x^3+3\mu_x\sigma_x^2=0$
$E[x^4]=\mu_x^4+6\mu_x^2\sigma_x^2+3\sigma_x^4=3$
$E[x_i]=E[x_{i+1}]=0$
$E[x_i^2]=E[x_{i+1}^2]=1$
$E[x_ix_{i+1}]=\sigma_{x_ix_{i+1}}$
$E[x_ix_{i+1}^2]=E[x_i^2x_{i+1}]=0$
$E[x_i^2x_{i+1}^2]=\sigma_{x_i}^2\sigma_{x_{i+1}}^2+2\sigma_{x_ix_{i+1}}^2=1+2\sigma_{x_ix_{i+1}}^2$
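These moments can be spot-checked by simulation; a minimal sketch for the single-variable moments:

```python
import random

random.seed(7)
N = 200000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

def moment(p):
    """Sample estimate of E[x^p] for x ~ N(0,1)."""
    return sum(v ** p for v in xs) / N
```

The odd moments vanish and the fourth moment approaches 3, as stated above.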
A WT power curve can be described by means of a function such as the following [17]:
$P=\begin{cases}0 & 0\leq v_w<v_{CI}\\ h(v_w)\,P_R & v_{CI}\leq v_w<v_R\\ P_R & v_R\leq v_w<v_{CO}\\ 0 & v_w\geq v_{CO}\end{cases}$

where $v_w$ is the wind speed, $v_{CI}$ the cut-in wind speed, $v_R$ the rated wind speed, $v_{CO}$ the cut-out wind speed and $P_R$ the rated power of the WT.

The function $h(v_w)$ is a second degree polynomial, $h(v_w)=A+Bv_w+Cv_w^2$, with constants:

$A=a\left(v_{CI}\,b-4\,v_{CI}v_R\,d\right)$
$B=a\left(4\,b\,d-3\,v_{CI}-v_R\right)$
$C=a\left(2-4\,d\right)$
$a=\dfrac{1}{(v_{CI}-v_R)^2}$, $b=v_{CI}+v_R$, $d=\left(\dfrac{v_{CI}+v_R}{2\,v_R}\right)^3$
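A sketch of this power curve model follows; only $v_{CI}=4\ \mathrm{m\,s^{-1}}$ is given in the text, so the rated speed, cut-out speed and rated power used here are illustrative assumptions. It also shows how negative simulated wind speeds are filtered to zero power:

```python
def power_curve(vw, v_ci=4.0, v_r=14.0, v_co=25.0, p_r=2000.0):
    """Appendix B power curve; v_r, v_co and p_r are illustrative values."""
    a = 1.0 / (v_ci - v_r) ** 2
    b = v_ci + v_r
    d = ((v_ci + v_r) / (2.0 * v_r)) ** 3
    A = a * (v_ci * b - 4.0 * v_ci * v_r * d)
    B = a * (4.0 * b * d - 3.0 * v_ci - v_r)
    C = a * (2.0 - 4.0 * d)
    if vw < v_ci:       # below cut-in (negative simulated speeds land here)
        return 0.0
    if vw < v_r:        # partial-load region
        return (A + B * vw + C * vw * vw) * p_r
    if vw < v_co:       # rated region
        return p_r
    return 0.0          # above cut-out

print(power_curve(-1.0), power_curve(8.0), power_curve(20.0))
```

With these constants the quadratic satisfies $h(v_{CI})=0$ and $h(v_R)=1$, so the curve joins the zero and rated regions continuously.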