Research article

Fractional integral inequalities for h-convex functions via Caputo-Fabrizio operator

  • Received: 19 December 2020 Accepted: 25 March 2021 Published: 12 April 2021
  • MSC : 26A51, 26A33, 26D15

  • The aim of this paper is to study h-convex functions and present some inequalities for the Caputo-Fabrizio fractional operator. Precisely speaking, we present a Hermite-Hadamard type inequality for h-convex functions involving the Caputo-Fabrizio fractional operator, together with some new inequalities for the class of h-convex functions. Moreover, we present some applications of our results to special means, which play a significant role in applied and pure mathematics; in particular, the accuracy of the results can be confirmed through special means.

    Citation: Lanxin Chen, Junxian Zhang, Muhammad Shoaib Saleem, Imran Ahmed, Shumaila Waheed, Lishuang Pan. Fractional integral inequalities for h-convex functions via Caputo-Fabrizio operator[J]. AIMS Mathematics, 2021, 6(6): 6377-6389. doi: 10.3934/math.2021374




    The spatial autoregressive (SAR) model and its derivatives have been widely used in many areas such as economics, political science, and public health. There is an extensive literature on spatial autoregressive models, e.g., Anselin [1], LeSage [19], Anselin and Bera [2], Lee and Yu [20], LeSage and Pace [21], Lee [22], and Dai, Li and Tian [9]. In particular, Lee [22] used the generalized method of moments to make inference about the spatial autoregressive model. Xu and Li [25] investigated the instrumental variable (IV) and maximum likelihood estimators for the spatial autoregressive model using a nonlinear transformation of the dependent variable. However, the spatial autoregressive model may not be flexible enough to capture the nonlinear impact of some covariates because of its parametric structure. To enrich model adaptability and flexibility, several semiparametric spatial autoregressive models have been proposed. For example, Su [31] studied a semiparametric SAR model that includes nonparametric covariates. Su and Jin [32] proposed a partially linear SAR model with both linear covariates and nonparametric explanatory variables. Sun et al. [33] studied a semiparametric spatial dynamic model with a profile likelihood approach. Wei and Sun [36] derived a semiparametric generalized method of moments estimator. Hoshino [35] proposed a semiparametric series generalized method of moments estimator and established the consistency and asymptotic normality of the proposed estimator.

    However, with the development of the economy and of science and technology, huge amounts of data can be easily collected and stored. In particular, some types of data are observed at high dimensions and frequencies and contain rich information; such data are usually called functional data. When data of this type are included in a model as covariates, it is common to use the functional linear model (FLM). There is a vast literature on estimation and prediction for the FLM (see, for example, Reiss et al. [26], Ramsay and Dalzell [27], Delaigle and Hall [11], Aneiros-Pérez and Vieu [3]). Many methods have been proposed to estimate the slope function, such as Cardot et al. [5], Hall and Horowitz [14], Crambes et al. [8], and Shin [28]. In particular, Hall and Horowitz [14] established minimax convergence rates of estimation. Cai and Hall [6] proposed a functional principal components method, and a reproducing kernel Hilbert space approach was used in Yuan and Cai [7].

    In many applications of spatial data, there are often covariates with nonlinear effects as well as functional explanatory variables. This motivates us to propose an interesting and novel functional semiparametric spatial autoregressive model. The model is relatively flexible because it uses the functional linear model to deal with the functional covariate and the semiparametric SAR model to allow spatial dependence and a nonlinear effect of a scalar covariate. Recently, some models have considered both functional covariates and spatial dependence. For instance, Pineda-Ríos [24] proposed a functional SAR model and used least squares and maximum likelihood methods to estimate the parameters; that model places the spatial effect on the error term instead of on the response variable. Huang et al. [12] considered a spatial functional linear model and developed an estimation method based on maximum likelihood and functional principal component analysis. Hu et al. [13] developed generalized methods of moments to estimate the parameters in a spatial functional linear model. In this paper, we propose a generalized method of moments estimator that is heteroskedasticity robust and has an explicit closed form.

    The rest of the paper is organized as follows. Section 2 introduces the proposed model and the estimation procedure. The asymptotic properties of the proposed estimators are established in Section 3. Section 4 conducts simulation studies to evaluate the empirical performance of the proposed estimators. Section 5 gives some discussion of the model. All technical proofs are provided in the appendix.

    Consider the following novel functional semiparametric spatial autoregressive model,

    \begin{eqnarray*} \mathit{\boldsymbol{Y}} = \rho \mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}+ \mathit{\boldsymbol{Z}}' \boldsymbol{\theta}+\int_{0}^{1}X(t)\beta(t)dt+g(U)+ \boldsymbol{\varepsilon}, \ \ \ \ \ \ (2.1) \end{eqnarray*}

    where \mathit{\boldsymbol{Y}} is the response, \rho is an unknown coefficient of the spatial neighboring effect, \mathit{\boldsymbol{W}} is the constant spatial weight matrix with a zero diagonal, \mathit{\boldsymbol{Z}} = (Z_1, ..., Z_p)' is a p -dimensional covariate and \boldsymbol{\theta} is its coefficient vector. \{X(t):t\in[0,1]\} is a zero-mean, second-order (i.e., E|X(t)|^2 < \infty for all t\in[0,1] ) stochastic process defined on (\Omega, \mathcal{B}, P) with sample paths in L^2[0,1] , the Hilbert space of square integrable functions with inner product \langle x, y\rangle = \int_{0}^{1}x(t)y(t)dt, \ x, y\in L^2[0,1] , and norm \|x\| = \langle x, x\rangle^{1/2} . The slope function \beta(t) is a square integrable function on [0,1] , U is a random variable, and g(\cdot) is an unknown function whose support is taken to be [0,1] without loss of generality. We assume E[g(U)] = 0 to ensure the identifiability of the nonparametric function. \varepsilon is a random error with zero mean and finite variance \sigma^2 , independent of \mathit{\boldsymbol{Z}}, U and X(t) .

    Remark 1. Model (2.1) is flexible enough to accommodate different models. It generalizes both the semiparametric spatial autoregressive model [32] and the functional partial linear model [28], which correspond to the cases \beta(t) = 0 and \rho = 0 , respectively. The model can be represented as \mathit{\boldsymbol{Y}} = (\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1}\int_{0}^{1}X(t)\beta(t)dt+(\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1} \mathit{\boldsymbol{Z}}' \boldsymbol{\theta}+(\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1}g(U)+(\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1} \boldsymbol{\varepsilon} . We assume \mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}} is invertible so that this representation is valid. Thus Y_i is also influenced by its neighbours' covariates X_j(t) for j \neq i . The parameter \rho indicates the basic impact of the neighbours: a greater absolute value of \rho means that the response variable is more strongly affected by its neighbours.
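To make the reduced form concrete, here is a small numerical sketch (our own illustration, not code from the paper): the weight matrix and the mean component are toy placeholders, and we only check that (I - \rho W)^{-1} exists and propagates neighbours' effects to every unit.

```python
import numpy as np

n, rho = 5, 0.5
# Toy row-normalized weight matrix with zero diagonal (our assumption).
W = np.array([[0.3 ** abs(i - j) * (i != j) for j in range(n)] for i in range(n)])
W = W / W.sum(axis=1, keepdims=True)
assert np.abs(np.linalg.eigvals(W)).max() <= 1 + 1e-10   # row-stochastic

A = np.eye(n) - rho * W          # invertible since |rho| < 1 (condition C1)
mean_part = np.ones(n)           # stand-in for Z'theta + <X, beta> + g(U)
Y = np.linalg.solve(A, mean_part)
S = np.linalg.inv(A)             # S = sum_k rho^k W^k
assert np.all(S > 0)             # every unit feels every neighbour's covariates
```

Because W here is row-stochastic, its spectral radius is 1, so |\rho| < 1 guarantees the invertibility required above.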

    In this section, we give a method to estimate the unknown parameters \rho and \boldsymbol{\theta} , the slope function \beta(\cdot) and the nonparametric function g(\cdot) . We use B-spline basis functions to approximate g(\cdot) and \beta(\cdot) . Let 0 = u_0 < u_1 < ... < u_{k_1+1} = 1 be a partition of the interval [0,1] . Using the u_i as knots, we have N_1 = k_1+l_1+1 normalized B-spline basis functions of order l_1+1 that form a basis for the spline space. Collect the basis functions in a vector \mathit{\boldsymbol{B}}_1(t) = (B_{11}(t), ..., B_{1N_1}(t))' ; the slope function \beta(\cdot) is then approximated by \mathit{\boldsymbol{B}}'_1(\cdot) \boldsymbol{\gamma} . Similarly, let \mathit{\boldsymbol{B}}_2(u) = (B_{21}(u), ..., B_{2N_2}(u))' be the normalized B-spline basis function vector determined by k_2 interior knots in [0,1] and order l_2+1 , used to approximate g(\cdot) , where N_2 = k_2+l_2+1 . It then follows that

    \beta(t)\approx \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}, \ \ \ g(u)\approx \mathit{\boldsymbol{B}}'_2(u) \boldsymbol{\zeta},

    where \boldsymbol{\gamma} = (\gamma_1, ..., \gamma_{N_1})' and \boldsymbol{\zeta} = (\zeta_1, ..., \zeta_{N_2})' .
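The spline setup above can be reproduced numerically. The sketch below is a minimal illustration (the cubic order and two interior knots are taken from the simulation section later, and scipy's `BSpline` is an assumed tool, not the authors' implementation); it evaluates all N_1 = k_1 + l_1 + 1 basis functions on an interior grid.

```python
import numpy as np
from scipy.interpolate import BSpline

l1, k1 = 3, 2                    # cubic splines (order l1+1 = 4), 2 interior knots
N1 = k1 + l1 + 1                 # number of basis functions
# Knot vector: boundaries repeated l1+1 times plus equispaced interior knots.
knots = np.r_[np.zeros(l1 + 1), np.linspace(0, 1, k1 + 2)[1:-1], np.ones(l1 + 1)]

def bspline_basis(x, t, deg):
    """Evaluate all len(t)-deg-1 B-spline basis functions at the points x."""
    nb = len(t) - deg - 1
    return np.column_stack([BSpline(t, np.eye(nb)[m], deg)(x) for m in range(nb)])

x = np.linspace(0.01, 0.99, 50)          # interior evaluation grid
B1 = bspline_basis(x, knots, l1)
assert B1.shape == (50, N1)
assert np.allclose(B1.sum(axis=1), 1.0)  # partition of unity on [0, 1]
```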

    Let {\bf{D}} = \langle X(t), \mathit{\boldsymbol{B}}_1(t)\rangle = (\int_{0}^{1}X(t)B_{11}(t)dt, ..., \int_{0}^{1}X(t)B_{1N_1}(t)dt)' and {\bf{D}}_i = \langle X_i(t), \mathit{\boldsymbol{B}}_1(t)\rangle . Then the model can be rewritten as

    \mathit{\boldsymbol{Y}}\approx\rho \mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}+ \mathit{\boldsymbol{Z}}' \boldsymbol{\theta}+ {\bf{D}}' \boldsymbol{\gamma}+ \mathit{\boldsymbol{B}}'_2(U) \boldsymbol{\zeta}+ \boldsymbol{\varepsilon}.

    Let \mathit{\boldsymbol{P}} = \boldsymbol{\Pi}(\boldsymbol{\Pi}'\boldsymbol{\Pi})^{-1}\boldsymbol{\Pi}' denote the projection matrix onto the space spanned by \boldsymbol{\Pi} , where \boldsymbol{\Pi} = ({\bf{D}}', \mathit{\boldsymbol{B}}'_2(U)) . Similar to Zhang and Shen [39], profiling out the functional approximation, we obtain

    (\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{Y}}\approx\rho(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}+(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{Z}}' \boldsymbol{\theta}+(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \boldsymbol{\varepsilon}.

    Let {\bf{Q}} = (\mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}, \mathit{\boldsymbol{Z}}) and \boldsymbol{\eta} = (\rho, \boldsymbol{\theta}')' . Applying the two-stage least squares procedure proposed by Kelejian and Prucha [17], we propose the following estimator

    \hat{ \boldsymbol{\eta}} = ({\bf{Q}}'(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{M}}(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) {\bf{Q}})^{-1} {\bf{Q}}'(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{M}}(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{Y}},

    where \mathit{\boldsymbol{M}} = \mathit{\boldsymbol{H}}(\mathit{\boldsymbol{H}}' \mathit{\boldsymbol{H}})^{-1} \mathit{\boldsymbol{H}}' and \mathit{\boldsymbol{H}} is a matrix of instrumental variables. Moreover,

    (\hat{ \boldsymbol{\gamma}}', \hat{ \boldsymbol{\zeta}}')' = (\boldsymbol{\Pi}'\boldsymbol{\Pi})^{-1}\boldsymbol{\Pi}'(\mathit{\boldsymbol{Y}}- {\bf{Q}}\hat{ \boldsymbol{\eta}}).

    Consequently, we use \hat{\beta}(t) = \mathit{\boldsymbol{B}}'_1(t)\hat{ \boldsymbol{\gamma}} and \hat{g}(u) = \mathit{\boldsymbol{B}}'_2(u)\hat{ \boldsymbol{\zeta}} as the estimators of \beta(t) and g(u) .
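The estimation steps above can be put together in a short end-to-end numerical sketch. This is our own illustration under simplifying assumptions: the design matrices D and B_2(U) are generic random stand-ins for the spline quantities, and the instruments H = (WZ, Z) are an ad hoc choice rather than the two-step construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, N1, N2 = 400, 2, 6, 6
rho, theta = 0.5, np.array([1.0, 1.0])

W = np.array([[0.3 ** abs(i - j) * (i != j) for j in range(n)] for i in range(n)])
W = W / W.sum(axis=1, keepdims=True)

Z = rng.standard_normal((n, p))
D = rng.standard_normal((n, N1))            # stand-in for <X_i, B_1>
B2U = rng.standard_normal((n, N2))          # stand-in for B_2(U_i)
gamma = 0.3 * rng.standard_normal(N1)
zeta = 0.3 * rng.standard_normal(N2)
eps = 0.5 * rng.standard_normal(n)
Y = np.linalg.solve(np.eye(n) - rho * W,
                    Z @ theta + D @ gamma + B2U @ zeta + eps)

Pi = np.hstack([D, B2U])
P = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T)   # projection onto span(Pi)
H = np.hstack([W @ Z, Z])                   # ad hoc instruments (assumption)
M = H @ np.linalg.solve(H.T @ H, H.T)
Q = np.hstack([(W @ Y)[:, None], Z])

IP = np.eye(n) - P                          # profile out the spline part
S = Q.T @ IP @ M @ IP                       # Q'(I-P)M(I-P)
eta_hat = np.linalg.solve(S @ Q, S @ Y)     # (rho_hat, theta_hat')'
assert abs(eta_hat[0] - rho) < 0.3          # rho recovered roughly
assert np.all(np.abs(eta_hat[1:] - theta) < 0.3)
```

Because the stand-in spline columns lie exactly in the span of \Pi, profiling removes them exactly here; in a real fit there would be an additional spline approximation error.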

    For statistical inference based on \hat{ \boldsymbol{\eta}} , consistent estimators of the asymptotic covariance matrices are needed. Define the following estimators

    \hat{\sigma}^2 = \frac{1}{n}\|\mathit{\boldsymbol{Y}}-\hat{\rho} \mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}- \mathit{\boldsymbol{Z}}'\hat{ \boldsymbol{\theta}}- {\bf{D}}'\hat{ \boldsymbol{\gamma}}- \mathit{\boldsymbol{B}}'_2(U)\hat{ \boldsymbol{\zeta}}\|^2,

    and

    \hat{\Sigma} = \frac{1}{n} {\bf{Q}}'(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{M}}(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) {\bf{Q}},

    where \|\cdot\| is the L^2 norm for a function or the Euclidean norm for a vector. In order to make statistical inference about \sigma^2 , we need the value \omega = E[(\varepsilon^2_1-\sigma^2)^2] . Therefore, we use the following estimator \hat{\omega} to estimate \omega

    \hat{\omega} = \frac{1}{n}\sum\limits_{i = 1}^{n}(\hat{\varepsilon}^2_i-\hat{\sigma}^2)^2.
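As a quick sanity check on \hat{\sigma}^2 and \hat{\omega} (our own sketch; it treats the fitted residuals as if they were the true i.i.d. errors): for Gaussian errors \varepsilon \sim N(0, \sigma^2) one has \omega = E[(\varepsilon^2 - \sigma^2)^2] = 2\sigma^4, so both estimators should recover these values on a large sample.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0
eps = rng.normal(0.0, np.sqrt(sigma2), 200_000)   # pretend these are residuals
sigma2_hat = np.mean(eps ** 2)
omega_hat = np.mean((eps ** 2 - sigma2_hat) ** 2)
assert abs(sigma2_hat - sigma2) < 0.03
assert abs(omega_hat - 2 * sigma2 ** 2) < 0.1     # omega = 2*sigma^4 for Gaussian
```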

    Similar to Zhang and Shen [39], we use an analogous idea for the construction of the instrumental variables. In the first step, the instrumental variables \tilde{\mathit{\boldsymbol{H}}} = (\mathit{\boldsymbol{W}}(\mathit{\boldsymbol{I}}-\tilde{\rho} \mathit{\boldsymbol{W}})^{-1}(\mathit{\boldsymbol{Z}}, {\bf{D}}'\tilde{ \boldsymbol{\gamma}}, \mathit{\boldsymbol{B}}'_2(U)\tilde{ \boldsymbol{\zeta}}), \mathit{\boldsymbol{Z}}, \boldsymbol{\Pi}) are obtained, where \tilde{\rho} , \tilde{ \boldsymbol{\gamma}} and \tilde{ \boldsymbol{\zeta}} are obtained by simply regressing \mathit{\boldsymbol{Y}} on the pseudo regressor variables (\mathit{\boldsymbol{W}} \mathit{\boldsymbol{Y}}, \boldsymbol{\Pi}) . In the second step, the instrumental variables \tilde{\mathit{\boldsymbol{H}}} are used to obtain the estimators \bar{ \boldsymbol{\eta}} , \bar{ \boldsymbol{\gamma}} and \bar{ \boldsymbol{\zeta}} , which are needed to construct the instrumental variables \mathit{\boldsymbol{H}} = (\mathit{\boldsymbol{W}}(\mathit{\boldsymbol{I}}-\bar{\rho} \mathit{\boldsymbol{W}})^{-1}(\mathit{\boldsymbol{Z}}'\bar{ \boldsymbol{\theta}}+ {\bf{D}}'\bar{ \boldsymbol{\gamma}}+ \mathit{\boldsymbol{B}}'_2(U)\bar{ \boldsymbol{\zeta}}), \mathit{\boldsymbol{Z}}) . Finally, we use the instrumental variables \mathit{\boldsymbol{H}} to obtain the final estimators \hat{\rho} and \hat{ \boldsymbol{\theta}} .

    In this section, we derive the asymptotic normality and rates of convergence of the estimators defined in the previous section. First, we introduce some notation. For convenience and simplicity, c denotes a generic positive constant, which may take different values at different places. Let \beta_0(\cdot) and g_0(\cdot) be the true values of the functions \beta(\cdot) and g(\cdot) respectively. K(t, s) = Cov(X(t), X(s)) denotes the covariance function of X(\cdot) . a_n\sim b_n means that a_n/b_n is bounded away from zero and infinity as n\rightarrow\infty . We make the following assumptions.

    C1 The matrix \mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}} is nonsingular with |\rho| < 1 .

    C2 The row and column sums of the matrices \mathit{\boldsymbol{W}} and (\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1} are bounded uniformly in absolute value for any |\rho| < 1 .

    C3 For the matrix \mathit{\boldsymbol{S}} = \mathit{\boldsymbol{W}}(\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1} , there exists a constant \lambda_c such that \lambda_c \mathit{\boldsymbol{I}}- \mathit{\boldsymbol{S}}' \mathit{\boldsymbol{S}} is positive semidefinite for all n .

    C4 The matrix \frac{1}{n}\tilde{{\bf{Q}}}'(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}}) \mathit{\boldsymbol{M}}(\mathit{\boldsymbol{I}}- \mathit{\boldsymbol{P}})\tilde{{\bf{Q}}}\rightarrow\Sigma in probability for some positive definite matrix \Sigma , where \tilde{{\bf{Q}}} = (\mathit{\boldsymbol{W}}(\mathit{\boldsymbol{I}}-\rho \mathit{\boldsymbol{W}})^{-1}(\mathit{\boldsymbol{Z}}' \boldsymbol{\theta}+\int_{0}^{1}X(t)\beta(t)dt+g(U)), \mathit{\boldsymbol{Z}}) .

    C5 For the matrix \tilde{{\bf{Q}}} , there exists a constant \lambda_c such that \lambda_c \mathit{\boldsymbol{I}}-\tilde{{\bf{Q}}}'\tilde{{\bf{Q}}} is positive semidefinite for all n .

    C6 X(t) has a finite fourth moment, that is, E\|X(t)\|^4\leq c .

    C7 K(t,s) is positive definite.

    C8 The nonparametric function g(\cdot) has bounded and continuous derivatives up to order r\ (\geq 2) and the slope function \beta(t)\in C^r[0,1] .

    C9 The density of U , f_U(u) , is bounded away from 0 and \infty on [0,1] . Furthermore, we assume that f_U(u) is continuously differentiable on [0,1] .

    C10 For the knot numbers k_j\ (j = 1, 2) , it is assumed that k_1\sim k_2\sim k .

    Assumptions C1–C3 are required in the setting of the spatial autoregressive model (see, for example, Lee [23], Kelejian and Prucha [18], Zhang and Shen [39]); they restrict the spatial weight matrix and the SAR parameter. Assumption C4 (see Du et al. [10]) is used to represent the asymptotic covariance matrix of \hat{ \boldsymbol{\eta}} . Moreover, assumption C4 implicitly requires that the generated regressors, after removing the projection of their functional part onto \boldsymbol{\Pi} , are not asymptotically multicollinear. Assumption C5 is required to ensure the identifiability of the parameter \boldsymbol{\eta} . Assumptions C6–C7 are commonly assumed in the functional linear model [14]: assumption C6 is a mild restriction used to prove the convergence of our estimator, and assumption C7 guarantees the identifiability of \beta(t) . Assumption C8 ensures that \beta(\cdot) and g(\cdot) are sufficiently smooth and can be approximated by basis functions in the spline space. Assumption C9 imposes a boundedness condition on the covariates; it is often assumed in asymptotic analysis of nonparametric regression problems (see, for example, [15,37]). Assumption C10 is required to achieve the optimal convergence rate of \hat{\beta}(\cdot) and \hat{g}(\cdot) .

    Let

    \begin{eqnarray*} \Delta_n & = & E({\bf{D}} {\bf{D}}')-E\{E({\bf{D}} \mathit{\boldsymbol{B}}'_2(u)|U)[E(\mathit{\boldsymbol{B}}_2(u) \mathit{\boldsymbol{B}}'_2(u)|U)]^{-1}E(\mathit{\boldsymbol{B}}_2(u) {\bf{D}}'|U)\},\\ \Omega_n & = & E(\mathit{\boldsymbol{B}}_2 \mathit{\boldsymbol{B}}'_2)-E\{E(\mathit{\boldsymbol{B}}_2(u) {\bf{D}}'|V)[E({\bf{D}} {\bf{D}}'|V)]^{-1}E({\bf{D}} \mathit{\boldsymbol{B}}'_2(u)|V)\}, \end{eqnarray*}

    where V = \langle X(t), \beta_0(t)\rangle . The following theorems state the asymptotic properties of the estimators of the parameters and nonparametric functions.

    Theorem 1. Suppose assumptions C1-C10 hold, then

    \sqrt{n}(\hat{ \boldsymbol{\eta}}- \boldsymbol{\eta})\stackrel{d}{\longrightarrow}N(0, \sigma^2\Sigma^{-1}).

    Theorem 2. Suppose assumptions C1-C10 hold and k\sim n^{\frac{1}{2r+1}} , then

    \|\hat{\beta}(\cdot)-\beta_0(\cdot)\|^2 = O_p(n^{-\frac{2r}{2r+1}}), \ \ \ \|\hat{g}(\cdot)-g_0(\cdot)\|^2 = O_p(n^{-\frac{2r}{2r+1}}).

    Remark 2. Theorem 2 gives the consistency of the function estimators. The slope function estimator \hat{\beta}(\cdot) and the nonparametric function estimator \hat{g}(\cdot) attain the optimal global convergence rate established by Stone [29].

    Theorem 3. Suppose assumptions C1-C10 hold and E(|\varepsilon_1|^{4+r}) < \infty for some r > 0 , then

    \hat{\sigma}^2\stackrel{p}{\longrightarrow}\sigma^2, \ \ \ \ \hat{\omega}\stackrel{p}{\longrightarrow}\omega, \ \ \ \ {\rm{and}} \ \ \ \ \hat{\Sigma}\stackrel{p}{\longrightarrow}\Sigma.

    Remark 3. From the proof of Theorem 3, if {\rm{trace}}(\mathit{\boldsymbol{S}})/n = o(1) , it can be shown that

    \sqrt{n}(\hat{\sigma}^2-\sigma^2)\stackrel{d}{\longrightarrow}N(0, \omega).

    Theorem 4. Suppose assumptions C1-C10 hold and n/k_1^{2r+1} = n/k_2^{2r+1} = o(1) . Then, for any fixed points t, u\in(0, 1) , as n\rightarrow\infty ,

    \sqrt{n/k_1}(\hat{\beta}(t)-\beta^*(t))\stackrel{d}{\longrightarrow}N(0, \Xi(t)),
    \sqrt{n/k_2}(\hat{g}(u)-g^*(u))\stackrel{d}{\longrightarrow}N(0, \Lambda(u)),

    where \beta^*(t) = \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0 , g^*(u) = \mathit{\boldsymbol{B}}'_2(u) \boldsymbol{\zeta}_0 , \Xi(t) = \lim_{n\rightarrow \infty}\frac{\sigma^2}{k_1} \mathit{\boldsymbol{B}}'_1(t)\Delta_n^{-1} \mathit{\boldsymbol{B}}_1(t) , \Lambda(u) = \lim_{n\rightarrow \infty}\frac{\sigma^2}{k_2} \mathit{\boldsymbol{B}}'_2(u)\Omega_n^{-1} \mathit{\boldsymbol{B}}_2(u) , and \boldsymbol{\gamma}_0 and \boldsymbol{\zeta}_0 are defined in Lemma 1 of the appendix.

    Remark 4. The above conclusions are similar to those of Yu et al. [38], who gave the asymptotic normality of spline estimators in the single-index partial functional linear regression model. Note that \hat{\beta}(t)-\beta_0(t) = (\beta^*(t)-\beta_0(t))+(\hat{\beta}(t)-\beta^*(t)) . We obtain that \beta^*(t)-\beta_0(t) = O(k_1^{-r}) by Lemma 1 in the appendix, and \hat{\beta}(t)-\beta^*(t) dominates \beta^*(t)-\beta_0(t) . Therefore we can use the asymptotic behavior of \hat{\beta}(t)-\beta^*(t) to describe the asymptotic behavior of \hat{\beta}(t)-\beta_0(t) .

    The variance \Xi(t) and \Lambda(u) are involved in basic function and knots. Different basis functions and knots can get different variance estimators. Moreover, the variance expression contains unknown quantities. Replacing them by consistent estimators can lead to approximation errors. What's more, there may exist heteroscedasticity in error term and then the estimator \hat{\sigma}^2 is not consistent. Consequently, we propose the following residual-based method to construct piecewise confidence interval in practice.

    It is crucial that the spatial structure be preserved when resampling data in models with spatial dependence [1]. Therefore, we employ a residual-based bootstrap procedure to derive the empirical pointwise standard errors of \hat{\beta}(t) and \hat{g}(\cdot) . The procedure can be described as follows:

    (1) Based on the data set \{ \mathit{\boldsymbol{Y}}, \mathit{\boldsymbol{Z}}, X(t), \mathit{\boldsymbol{U}}\} and the spatial matrix \mathit{\boldsymbol{W}} , fit the proposed model and obtain the residual vector \hat{ \boldsymbol{\varepsilon}}_{1} = (\hat{\varepsilon}_{11}, ..., \hat{\varepsilon}_{n1})' . Then derive the centralized residual vector \hat{ \boldsymbol{\varepsilon}} .

    (2) Draw a bootstrap sample \hat{ \boldsymbol{\varepsilon}}^{*} with replacement from the empirical distribution function of \hat{ \boldsymbol{\varepsilon}} and generate \mathit{\boldsymbol{Y}}^{*} = (\mathit{\boldsymbol{I}}-\hat{\rho} \mathit{\boldsymbol{W}})^{-1}(\mathit{\boldsymbol{Z}}'\hat{ \boldsymbol{\theta}}+ {\bf{D}}'\hat{ \boldsymbol{\gamma}}+ \mathit{\boldsymbol{B}}'_2(U)\hat{ \boldsymbol{\zeta}}+\hat{ \boldsymbol{\varepsilon}}^{*}).

    (3) Based on the new data set \{ \mathit{\boldsymbol{Y}}^*, \mathit{\boldsymbol{Z}}, X(t), \mathit{\boldsymbol{U}}\} and the spatial matrix \mathit{\boldsymbol{W}} , fit the proposed model again to derive the estimators \hat{\beta}^*(t) and \hat{g}^*(u) . Repeat this process many times and, for given t and u , calculate the empirical variances of \hat{\beta}^*(t) and \hat{g}^*(u) respectively. We use these empirical variances to construct the confidence intervals.
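Steps (1)-(3) can be sketched as the loop below. This is a structural illustration only (ours, not the authors' code): the weight matrix, the fitted mean part and \hat{\rho} are stubs, and refitting the model is replaced by a trivial statistic, since the point is how Y^* is regenerated through (I - \hat{\rho}W)^{-1} so that the spatial structure is preserved.

```python
import numpy as np

rng = np.random.default_rng(2)
n, B = 200, 100
rho_hat = 0.5                                       # stub for a fitted rho
W = np.eye(n, k=1) * 0.5 + np.eye(n, k=-1) * 0.5    # toy weight matrix (assumption)
mean_hat = np.ones(n)                               # stub for the fitted mean part
resid = rng.standard_normal(n)
resid -= resid.mean()                               # step (1): centred residuals

A_inv = np.linalg.inv(np.eye(n) - rho_hat * W)
boot_stats = []
for _ in range(B):
    eps_star = rng.choice(resid, size=n, replace=True)   # step (2): resample
    Y_star = A_inv @ (mean_hat + eps_star)               # step (2): regenerate Y*
    boot_stats.append(Y_star.mean())                     # step (3): refit (stubbed)
boot_se = np.std(boot_stats, ddof=1)
assert boot_se > 0
```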

    In this section, we use simulation examples to study the properties of the proposed estimators. The data are generated from the following model:

    \begin{eqnarray*} Y_i = \rho\sum\limits_{j = 1}^{n}w_{ij}Y_{j}+Z_{1i}\theta_1+Z_{2i}\theta_2+\int_{0}^{1}\beta(t)X_i(t)dt+g(U_i)+\varepsilon_i, \ \ i = 1,...,n, \end{eqnarray*}

    where \rho = 0.5 , \beta(t) = \sqrt{2}\sin(\pi t/2)+3\sqrt{2}\sin(3\pi t/2) and X(t) = \sum_{j = 1}^{50}\gamma_{j}\phi_{j}(t) , where the \gamma_{j} are independent normal with mean 0 and variance \lambda_{j} = ((j-0.5)\pi)^{-2} , and \phi_{j}(t) = \sqrt{2}\sin((j-0.5)\pi t) . Z_{i1} and Z_{i2} are independent and follow the standard normal distribution, \theta_1 = \theta_2 = 1 , U_i\sim U(0, 1) , g(u) = \sin(\frac{\pi(u-A)}{C-A}), A = \frac{\sqrt{3}}{2}-\frac{1.654}{\sqrt{12}}, C = \frac{\sqrt{3}}{2}+\frac{1.654}{\sqrt{12}} . The spatial weight matrix \mathit{\boldsymbol{W}} = (w_{ij})_{n\times n} is generated by the mechanism w_{ij} = 0.3^{|i-j|}I(i\neq j), 1\leq i, j \leq n , with w_{ii} = 0, i = 1, ..., n . A standardizing transformation is then applied so that \mathit{\boldsymbol{W}} has unit row sums. We consider the following three kinds of error term: (1) \varepsilon_i\sim N(0, \sigma^2) ; (2) \varepsilon_i\sim 0.75t(3) ; (3) \varepsilon_i\sim (1+0.5U_i)N(0, \sigma^2) , where \sigma^2 = 1 . In order to compare different magnitudes of \rho , we also set \rho = \{0.2, 0.7\} with error term N(0, \sigma^2) . Simulation results are based on 1000 replications.
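The simulation design above transcribes directly into code (our own sketch): X(t) is generated from its truncated Karhunen-Loève expansion and W from the 0.3^{|i-j|} mechanism with row standardization.

```python
import numpy as np

rng = np.random.default_rng(3)
n, J = 100, 50
t = np.linspace(0, 1, 1001)
freqs = (np.arange(1, J + 1) - 0.5) * np.pi
lam = freqs ** (-2.0)                           # eigenvalues ((j-0.5)pi)^{-2}
phi = np.sqrt(2) * np.sin(np.outer(t, freqs))   # eigenfunctions on the grid
scores = rng.standard_normal((n, J)) * np.sqrt(lam)
X = scores @ phi.T                              # n sample curves of X(t)

i, j = np.indices((n, n))
W = np.where(i != j, 0.3 ** np.abs(i - j), 0.0)
W = W / W.sum(axis=1, keepdims=True)            # unit row sums
assert np.allclose(W.sum(axis=1), 1.0)
assert np.allclose(np.diag(W), 0.0)

# The eigenfunctions are orthonormal in L2[0,1] (trapezoid-rule check).
h = t[1] - t[0]
F = phi[:, :3][:, :, None] * phi[:, :3][:, None, :]
G = h * (F[1:-1].sum(axis=0) + 0.5 * (F[0] + F[-1]))
assert np.allclose(G, np.eye(3), atol=1e-3)
```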

    To achieve good numerical performance, the orders l_1 and l_2 of the splines and the numbers of interior knots k_1 and k_2 must be chosen. To reduce the computational burden, we use cubic B-splines with four evenly distributed knots (i.e., k_1 = k_2 = 2 ) for the slope function \beta(\cdot) and the nonparametric function g(\cdot) respectively. These choices of k_1 and k_2 are small enough to avoid overfitting in typical problems with moderate sample sizes and large enough to flexibly approximate many smooth functions. We use the square root of the average squared errors (RASE) to assess the performance of the estimators \hat{\beta}(\cdot) and \hat{g}(\cdot) respectively:

    \begin{eqnarray*} {\rm{RASE}}_1 = \bigg\{\sum\limits_{i = 1}^{n_1}\frac{(\hat{\beta}(t_{i})-\beta(t_{i}))^2}{n_1}\bigg\}^{1/2}, \end{eqnarray*}
    \begin{eqnarray*} {\rm{RASE}}_2 = \bigg\{\sum\limits_{i = 1}^{n_2}\frac{(\hat{g}(u_{i})-g(u_{i}))^2}{n_2}\bigg\}^{1/2}, \end{eqnarray*}

    where \{t_i, i = 1, ..., n_1\} and \{u_i, i = 1, ..., n_2\} , with n_1 = n_2 = 200 , are grid points chosen equally spaced in the domains of \beta(\cdot) and g(\cdot) respectively.
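The RASE criterion is straightforward to compute; the helper below (with an illustrative target curve of our own) implements the displayed formula.

```python
import numpy as np

def rase(est, truth):
    """Square root of the average squared error over the grid points."""
    est, truth = np.asarray(est), np.asarray(truth)
    return np.sqrt(np.mean((est - truth) ** 2))

grid = np.linspace(0, 1, 200)            # n_1 = n_2 = 200 grid points
truth = np.sin(np.pi * grid)             # illustrative target curve (ours)
assert rase(truth, truth) == 0.0
assert np.isclose(rase(truth + 0.1, truth), 0.1)   # constant offset of 0.1
```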

    Tables 1–3 show the simulation results for the different kinds of error term. Table 4 presents different magnitudes of \rho with error term N(0, 1) . They report the bias (Bias), standard deviation (SD), standard error (SE) and coverage probability (CP) at the nominal level of 95 \% for each estimator, and the mean and standard deviation (SD) of RASE _j (j = 1, 2) for \hat{\beta}(\cdot) and \hat{g}(\cdot) . The simulation results can be summarized as follows:

    Table 1.  Simulation results for \rho = 0.5 with error term N(0, 1) .
    n Est Bias SD SE CP
    100 \hat{\rho} -0.0091 0.0789 0.0799 0.9430
    \hat{\theta}_1 -0.0065 0.1058 0.1030 0.9500
    \hat{\theta}_2 -0.0012 0.1052 0.1078 0.9430
    \hat{\sigma}^2 -0.1100 0.1379 0.1232 0.7900
    RASE _1 1.4361 0.7046
    RASE _2 0.1950 0.0683
    300 \hat{\rho} -0.0031 0.0441 0.0444 0.9520
    \hat{\theta}_1 -0.0011 0.0594 0.0594 0.9580
    \hat{\theta}_2 0.0027 0.0595 0.0595 0.9480
    \hat{\sigma}^2 -0.0291 0.0834 0.0785 0.8970
    RASE _1 0.7932 0.3728
    RASE _2 0.1108 0.0392
    500 \hat{\rho} -0.0019 0.0339 0.0332 0.9600
    \hat{\theta}_1 -0.0039 0.0456 0.0442 0.9610
    \hat{\theta}_2 -0.0004 0.0455 0.0461 0.9410
    \hat{\sigma}^2 -0.0212 0.0653 0.0616 0.9100
    RASE _1 0.6253 0.2847
    RASE _2 0.0838 0.0303
    Note: The table reports the bias (Bias), standard deviation (SD), standard error (SE) and coverage probability (CP) at the nominal 95\% level for each estimator, and the mean and standard deviation (SD) of RASE for \hat{\beta}(\cdot) and \hat{g}(\cdot) over 1000 repetitions.

    Table 2.  Simulation results for \rho = 0.5 with error term 0.75 t(3) .
    n Est Bias SD SE CP
    100 \hat{\rho} -0.0152 0.1010 0.1092 0.9620
    \hat{\theta}_1 -0.0059 0.1332 0.1297 0.9540
    \hat{\theta}_2 0.0028 0.1332 0.1376 0.9350
    RASE _1 1.8018 1.0485
    RASE _2 0.2440 0.1053
    300 \hat{\rho} -0.0093 0.0567 0.0600 0.9530
    \hat{\theta}_1 0.0062 0.0761 0.0805 0.9460
    \hat{\theta}_2 -0.0002 0.0759 0.0793 0.9400
    RASE _1 0.9992 0.5281
    RASE _2 0.1393 0.0591
    500 \hat{\rho} -0.0017 0.0429 0.0431 0.9500
    \hat{\theta}_1 -0.0008 0.0577 0.0571 0.9540
    \hat{\theta}_2 -0.0008 0.0578 0.0596 0.9410
    RASE _1 0.8034 0.4053
    RASE _2 0.1073 0.0427
    Note: The table reports the bias (Bias), standard deviation (SD), standard error (SE) and coverage probability (CP) at the nominal 95\% level for each estimator, and the mean and standard deviation (SD) of RASE for \hat{\beta}(\cdot) and \hat{g}(\cdot) over 1000 repetitions.

    Table 3.  Simulation results for \rho = 0.5 with error term (1+0.5U_i)N(0, 1) .
    n Est Bias SD SE CP
    100 \hat{\rho} -0.0143 0.0979 0.0984 0.9490
    \hat{\theta}_1 -0.0065 0.1331 0.1465 0.9500
    \hat{\theta}_2 0.0048 0.1328 0.1339 0.9540
    RASE _1 1.8170 0.8823
    RASE _2 0.2431 0.0922
    300 \hat{\rho} -0.0037 0.0556 0.0540 0.9530
    \hat{\theta}_1 0.0012 0.0750 0.0735 0.9500
    \hat{\theta}_2 -0.0032 0.0752 0.0755 0.9470
    RASE _1 1.0316 0.4720
    RASE _2 0.1411 0.0536
    500 \hat{\rho} -0.0017 0.0431 0.0430 0.9470
    \hat{\theta}_1 0.0026 0.0577 0.0555 0.9570
    \hat{\theta}_2 -0.0026 0.0577 0.0574 0.9440
    RASE _1 0.8035 0.3615
    RASE _2 0.1058 0.0385
    Note: The table reports the bias (Bias), standard deviation (SD), standard error (SE) and coverage probability (CP) at the nominal 95\% level for each estimator, and the mean and standard deviation (SD) of RASE for \hat{\beta}(\cdot) and \hat{g}(\cdot) over 1000 repetitions.

    Table 4.  Simulation results for different magnitudes of \rho with error term N(0, 1) .
    \rho n Est Bias SE SD CP
    0.2 100 \hat{\rho} -0.0114 0.1036 0.1046 0.9410
    \hat{\theta}_1 -0.0036 0.1046 0.1064 0.9390
    \hat{\theta}_2 -0.0019 0.1052 0.1009 0.9550
    \hat{\sigma}^2 -0.1141 0.1334 0.1228 0.7500
    RASE _1 1.4329 0.6579
    RASE _2 0.1922 0.0705
    300 \hat{\rho} 0.0013 0.0565 0.0532 0.9620
    \hat{\theta}_1 -0.0029 0.0585 0.0594 0.9500
    \hat{\theta}_2 -0.0011 0.0586 0.0586 0.9480
    \hat{\sigma}^2 -0.0389 0.0817 0.0779 0.8820
    RASE _1 0.8313 0.3826
    RASE _2 0.1115 0.0397
    500 \hat{\rho} -0.0012 0.0434 0.0435 0.9540
    \hat{\theta}_1 -0.0006 0.0452 0.0457 0.9460
    \hat{\theta}_2 -0.0021 0.0452 0.0439 0.9520
    \hat{\sigma}^2 -0.0210 0.0658 0.0617 0.9090
    RASE _1 0.6203 0.2789
    RASE _2 0.0863 0.0307
    0.7 100 \hat{\rho} -0.0059 0.0569 0.0553 0.9600
    \hat{\theta}_1 0.0028 0.1075 0.1101 0.9400
    \hat{\theta}_2 -0.0011 0.1068 0.1125 0.9340
    \hat{\sigma}^2 -0.0990 0.1396 0.1239 0.7740
    RASE _1 1.4460 0.6813
    RASE _2 0.1935 0.0679
    300 \hat{\rho} -0.0012 0.0317 0.0319 0.9500
    \hat{\theta}_1 0.0008 0.0597 0.0606 0.9480
    \hat{\theta}_2 0.0010 0.0597 0.0578 0.9520
    \hat{\sigma}^2 -0.0324 0.0840 0.0782 0.8860
    RASE _1 0.7987 0.3837
    RASE _2 0.1109 0.0415
    500 \hat{\rho} -0.0017 0.0242 0.2382 0.9560
    \hat{\theta}_1 -0.0035 0.0459 0.0443 0.9580
    \hat{\theta}_2 0.0006 0.0459 0.0488 0.9410
    \hat{\sigma}^2 -0.0199 0.0651 0.0617 0.9030
    RASE _1 0.6124 0.2717
    RASE _2 0.0853 0.0311
    Note: The table reports the bias (Bias), standard error (SE), standard deviation (SD) and coverage probability (CP) at the nominal 95 \% level for each estimator, and the mean and standard deviation (SD) of RASE for \hat{\beta}(\cdot) over 1000 repetitions.


    (1) The estimators \hat{\rho}, \hat{\theta}_1, \hat{\theta}_2, \hat{\sigma}^2 are approximately unbiased, and the estimated standard errors are close to the sample standard deviations under the normal error distribution. The empirical coverage probabilities approximate the nominal 95 \% level well.

    (2) Figure 1 gives an example of the estimated function curves \hat{\beta}(\cdot) and \hat{g}(\cdot) and their empirical 95% confidence intervals with sample size n = 300 for error term N(0, 1) . From the mean and standard deviation (SD) of RASE _j (j = 1, 2) , combined with Figure 1, we conclude that the proposed function estimators \hat{\beta}(\cdot) and \hat{g}(\cdot) perform well.

    Figure 1.  The true curves \beta(t) and g(u) (red solid line), the estimated curves \hat{\beta}(t) and \hat{g}(u) (green dotted line) and the pointwise 2.5 and 97.5 percentiles of the estimated functions (light green line) over 500 replications with sample size n = 300 . The left panel shows the estimator \hat{\beta}(t) and the right panel the estimator \hat{g}(u) , with error term N(0, 1) .

    (3) For error terms 0.75t(3) and (1+0.5U_i)N(0, 1) , the estimators \hat{\rho}, \hat{\theta}_1, \hat{\theta}_2 are approximately unbiased, and the estimated standard errors are close to the sample standard deviations. In addition, the mean and standard deviation of the RASE of the estimated functions \hat{\beta}(\cdot) and \hat{g}(\cdot) decrease with the sample size. This indicates that the parametric and nonparametric estimators perform well under non-normal error terms.

    (4) From Table 1 and Table 4, as the basic spatial effect \rho increases, the SE and SD of \hat{\rho} decrease. For the different magnitudes of \rho , the Bias and SD of the parametric estimators \hat{\theta}_1 and \hat{\theta}_2 , and the mean of the RASE of \hat{\beta}(\cdot) and \hat{g}(\cdot) , remain stable. This means that the magnitude of \rho does not affect the other parametric and nonparametric estimators.

    In this paper, an interesting and novel functional semiparametric spatial autoregressive model is proposed. The model incorporates functional covariates into the semiparametric spatial autoregressive model. The slope function and nonparametric function are approximated by B-spline basis functions, and a generalized method of moments estimator is proposed for the parameters. Under mild conditions, we establish the asymptotic properties of the proposed estimators.

    To apply our model in practice, first, the response variable should exhibit spatial dependence; second, there should be covariates with nonlinear effects together with functional variables. A problem of practical interest is to extend our model to account for functional covariates and a single-index function simultaneously. Moreover, testing for spatial dependence and for nonlinear effects of covariates is an important issue. These topics are left for future work.

    We would like to thank the referees for their helpful suggestions and comments, which led to the improvement of this article. Bai's work was supported by the National Natural Science Foundation of China (No. 11771268).

    No potential conflict of interest was reported by the authors.

    Lemma 1. Assume condition C8 holds for g_0(u) and \beta_0(t) . Then there exist \boldsymbol{\gamma}_0 and \boldsymbol{\zeta}_0 such that

    \begin{equation*} \sup\limits_{t\in(0,1)}\|\beta_0(t)- \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0\|\leq c_1k_1^{-r}, \ \ \ \sup\limits_{u\in(0,1)}\|g_0(u)- \mathit{\boldsymbol{B}}'_2(u) \boldsymbol{\zeta}_0\|\leq c_2k_2^{-r}, \end{equation*}

    where \boldsymbol{\gamma}_0 = (\gamma_{01}, ..., \gamma_{0N_1})' , \boldsymbol{\zeta}_0 = (\zeta_{01}, ..., \zeta_{0N_2})' and c_1 > 0 , c_2 > 0 depend only on l_1 and l_2 , respectively.

    Proof of Lemma 1. This follows from the approximation properties of splines ([4,16,34]).

    Proof of Theorem 1. The proof is similar to that of Theorem 1 in [10] and is omitted here.

    Proof of Theorem 2. Let \delta = n^{-\frac{r}{2r+1}} , \mathit{\boldsymbol{T}}_1 = \delta^{-1}(\boldsymbol{\gamma}- \boldsymbol{\gamma}_0) , \mathit{\boldsymbol{T}}_2 = \delta^{-1}(\boldsymbol{\zeta}- \boldsymbol{\zeta}_0) and \mathit{\boldsymbol{T}} = (\mathit{\boldsymbol{T}}'_1, \mathit{\boldsymbol{T}}'_2)' . We then prove that for any given \epsilon > 0 , there exists a sufficiently large constant L = L_{\epsilon} such that

    \begin{equation*} p \left\{\inf\limits_{\| \mathit{\boldsymbol{T}}\| = L}l( \boldsymbol{\phi}_0+\delta \mathit{\boldsymbol{T}}) > l( \boldsymbol{\phi}_0)\right\}\geq 1-\epsilon, \end{equation*}

    where \boldsymbol{\phi}_0 = (\boldsymbol{\gamma}'_0, \boldsymbol{\zeta}'_0)' and l(\boldsymbol{\gamma}, \boldsymbol{\zeta}) = \sum_{i = 1}^{n}(Y_i- {\bf{Q}}'_i\hat{ \boldsymbol{\eta}}- {\bf{D}}'_i \boldsymbol{\gamma}- \mathit{\boldsymbol{B}}'_2(u_i) \boldsymbol{\zeta})^2 . This implies with probability at least 1-\epsilon that there exists a local minimizer in the ball \{ \boldsymbol{\phi}_0+\delta \mathit{\boldsymbol{T}}:\| \mathit{\boldsymbol{T}}\|\leq L\} . By a Taylor expansion and simple calculation, it holds that

    \begin{equation*} \left. \begin{array}{rcl} \left\{l( \boldsymbol{\phi}_0+\delta \mathit{\boldsymbol{T}})-l( \boldsymbol{\phi}_0)\right\}&\geq& -2\delta\sum\limits_{i = 1}^{n}(\varepsilon_i+R_{1i}+R_{2i}+ {\bf{Q}}'_i( \boldsymbol{\eta}-\hat{ \boldsymbol{\eta}}))V_i\\ &&+\delta^2\sum\limits_{i = 1}^{n}V_i^{2}+o_p(1)\\ & = &A_1+A_2+o_p(1), \end{array} \right. \end{equation*}

    where R_{1i} = \langle X_i(t), \beta_0(t)- \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0\rangle , R_{2i} = g_0(u_i)- \mathit{\boldsymbol{B}}'_2(u_i) \boldsymbol{\zeta}_0 and V_i = {\bf{D}}'_i \mathit{\boldsymbol{T}}_1+ \mathit{\boldsymbol{B}}'_2(u_i) \mathit{\boldsymbol{T}}_2 . By assumption C6, Lemma 1, and Lemma 8 of Stone [30], we derive that \|R_{1i}\|\leq ck_1^{-r}, \|R_{2i}\| = O_p(k_2^{-r}) . Then, by a simple calculation, we obtain

    \begin{equation*} \sum\limits_{i = 1}^{n}R_{1i}V_i = \sum\limits_{i = 1}^{n}R_{1i}( {\bf{D}}'_i \mathit{\boldsymbol{T}}_1+ \mathit{\boldsymbol{B}}'_2(u_i) \mathit{\boldsymbol{T}}_2) = O_p(nk^{-r})\| \mathit{\boldsymbol{T}}\|. \end{equation*}

    Similarly, it holds that \sum_{i = 1}^{n}\varepsilon_iV_i = O_p(\sqrt{n})\| \mathit{\boldsymbol{T}}\|, \sum_{i = 1}^{n}R_{2i}V_i = O_p(nk^{-r})\| \mathit{\boldsymbol{T}}\| and \sum_{i = 1}^{n}V^2_i = O_p(n)\| \mathit{\boldsymbol{T}}\|^2 . Similar to the proof of Theorem 2 in Du et al. [10], we get (\boldsymbol{\eta}-\hat{ \boldsymbol{\eta}})' {\bf{Q}}' {\bf{Q}}(\boldsymbol{\eta}-\hat{ \boldsymbol{\eta}}) = O_p(1) . Then it holds that \sum_{i = 1}^{n} {\bf{Q}}_i(\boldsymbol{\eta}-\hat{ \boldsymbol{\eta}})V_i = O_p(\sqrt{n})\| \mathit{\boldsymbol{T}}\| . Consequently, A_1 = O_p(n\delta^2)\| \mathit{\boldsymbol{T}}\| and A_2 = O_p(n\delta^2)\| \mathit{\boldsymbol{T}}\|^2 . By choosing a sufficiently large L , A_2 dominates A_1 uniformly in \| \mathit{\boldsymbol{T}}\| = L . Thus, there exist local minimizers \hat{ \boldsymbol{\gamma}}, \hat{ \boldsymbol{\zeta}} such that \|\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0\| = O_p(\delta), \|\hat{ \boldsymbol{\zeta}}- \boldsymbol{\zeta}_0\| = O_p(\delta).
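Since l(\boldsymbol{\gamma}, \boldsymbol{\zeta}) is a least-squares criterion, the minimizer is an ordinary least-squares solution in the stacked design ( {\bf{D}}_i, \mathit{\boldsymbol{B}}_2(u_i)) , and the O_p(\delta) rate can be illustrated by simulation. The toy design below is entirely our own assumption (Gaussian columns standing in for the spline scores, arbitrary true coefficients, noise level 0.5); it only shows the error shrinking with n .

```python
import numpy as np

rng = np.random.default_rng(1)

def coef_error(n, p=4, q=6, sigma=0.5):
    """Fit Y_i = D_i'gamma0 + B2(u_i)'zeta0 + eps_i by least squares;
    return ||gamma_hat - gamma0||."""
    D = rng.normal(size=(n, p))     # stand-in for the scores <X_i, B_1j>
    B2 = rng.normal(size=(n, q))    # stand-in for the basis B_2(u_i)
    gamma0 = np.arange(1.0, p + 1.0)
    zeta0 = np.linspace(-1.0, 1.0, q)
    y = D @ gamma0 + B2 @ zeta0 + sigma * rng.normal(size=n)
    phi, *_ = np.linalg.lstsq(np.hstack([D, B2]), y, rcond=None)
    return float(np.linalg.norm(phi[:p] - gamma0))

# the estimation error shrinks as n grows, consistent with O_p(delta)
assert coef_error(5000) < coef_error(100)
```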

    Let R_{1k_1}(t) = \beta_0(t)- \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0 . Then we get

    \begin{equation*} \left. \begin{array}{rcl} \|\hat{\beta}(t)-\beta_0(t)\|^2& = &\int_{0}^{1}( \mathit{\boldsymbol{B}}'_1(t)\hat{ \boldsymbol{\gamma}}-\beta_0(t))^2dt\\ & = &\int_{0}^{1}( \mathit{\boldsymbol{B}}'_1(t)\hat{ \boldsymbol{\gamma}}- \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0+ R_{1k_1}(t))^2dt\\ &\leq& 2\int_{0}^{1}\{ \mathit{\boldsymbol{B}}'_1(t)(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)\}^2dt+2\int_{0}^{1} R^2_{1k_1}(t)dt\\ & = &2(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)'\int_{0}^{1} \mathit{\boldsymbol{B}}_1(t) \mathit{\boldsymbol{B}}'_1(t)dt(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)+2\int_{0}^{1} R^2_{1k_1}(t)dt. \end{array} \right. \end{equation*}

    Since \|\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0\| = O_p(\delta) and \|\int_{0}^{1} \mathit{\boldsymbol{B}}_1(t) \mathit{\boldsymbol{B}}'_1(t)dt\| = O(1) , we have

    (\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)'\int_{0}^{1} \mathit{\boldsymbol{B}}_1(t) \mathit{\boldsymbol{B}}'_1(t)dt(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0) = O_p(\delta^2).

    In addition, by Lemma 1, it holds that \int_{0}^{1} R^2_{1k_1}(t)dt = O_p(\delta^2) . Thus, we obtain \|\hat{\beta}(t)-\beta_0(t)\|^2 = O_p(\delta^2) . Similarly, we get \|\hat{g}(u)-g_0(u)\|^2 = O_p(\delta^2) .
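The bound \|\int_{0}^{1} \mathit{\boldsymbol{B}}_1(t) \mathit{\boldsymbol{B}}'_1(t)dt\| = O(1) used above reflects a standard property of B-splines: the basis functions are nonnegative and sum to one, so the Gram matrix has operator norm at most 1. A quick numerical check (the cubic degree, knot count, and grid size are our choices):

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
interior = np.linspace(0.0, 1.0, 6)[1:-1]
t = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]
nbasis = len(t) - degree - 1

# design matrix: column j is the basis function B_{1j} on a fine grid
x = np.linspace(0.0, 1.0, 4001)
B = np.column_stack([BSpline(t, np.eye(nbasis)[j], degree)(x)
                     for j in range(nbasis)])

# Riemann approximation of the Gram matrix  G = int_0^1 B_1(t) B_1(t)' dt
G = B.T @ B * (x[1] - x[0])

# partition of unity: sum_j B_{1j}(t) = 1, hence v'Gv <= 1 for any unit v
assert np.allclose(B.sum(axis=1), 1.0)
assert np.linalg.norm(G, 2) <= 1.0 + 1e-6
```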

    Proof of Theorem 3. The proof is similar to that of Theorem 3 in [10] and is omitted here.

    Proof of Theorem 4. By the definition of l(\boldsymbol{\gamma}, \boldsymbol{\zeta}) in the proof of Theorem 2, we have

    \begin{align} \left. \begin{array}{rcl} -\frac{1}{2n}\frac{\partial l(\hat{ \boldsymbol{\gamma}}, \hat{ \boldsymbol{\zeta}})}{\partial \boldsymbol{\gamma}}& = &\frac{1}{n}\sum\limits_{i = 1}^n\big[Y_i- {\bf{Q}}'_i\hat{ \boldsymbol{\eta}}- {\bf{D}}'_i \boldsymbol{\gamma}- \mathit{\boldsymbol{B}}'_2(u_i) \boldsymbol{\zeta}\big] {\bf{D}}'_i\\ & = &\frac{1}{n}\sum\limits_{i = 1}^n \big[\tilde{e}_i- {\bf{D}}'_i(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)- \mathit{\boldsymbol{B}}'_2(u_i)(\hat{ \boldsymbol{\zeta}}- \boldsymbol{\zeta}_0)\big] {\bf{D}}'_i+o_p(1) = 0, \end{array} \right. \end{align} (A.1)

    where \tilde{e}_i = \varepsilon_i+R_{1i}+R_{2i}, R_{1i} = \langle X_i(t), \beta_0(t)- \mathit{\boldsymbol{B}}'_1(t) \boldsymbol{\gamma}_0\rangle, R_{2i} = g_0(u_i)- \mathit{\boldsymbol{B}}'_2(u_i) \boldsymbol{\zeta}_0. The remainder is o_p(1) because \frac{1}{n}\sum_{i = 1}^{n} {\bf{Q}}_i(\hat{ \boldsymbol{\eta}}- \boldsymbol{\eta}) = o_p(1) by Theorem 1. In addition, we have

    \begin{align} \left. \begin{array}{rcl} -\frac{1}{2n}\frac{\partial l(\hat{ \boldsymbol{\gamma}}, \hat{ \boldsymbol{\zeta}})}{\partial \boldsymbol{\zeta}} = \frac{1}{n}\sum\limits_{i = 1}^n \big[\tilde{e}_i- {\bf{D}}'_i(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)- \mathit{\boldsymbol{B}}'_2(u_i)(\hat{ \boldsymbol{\zeta}}- \boldsymbol{\zeta}_0)\big] \mathit{\boldsymbol{B}}_2'(u_i)+o_p(1) = 0. \end{array} \right. \end{align} (A.2)

    It follows from (A.2) that

    \begin{align} \hat{ \boldsymbol{\zeta}}- \boldsymbol{\zeta}_0 = \Big[\frac{1}{n}\sum\limits_{i = 1}^{n} \mathit{\boldsymbol{B}}_2(u_i) \mathit{\boldsymbol{B}}'_2(u_i)\Big]^{-1}\Big\{\frac{1}{n}\sum\limits_{i = 1}^{n}\tilde{e}_i \mathit{\boldsymbol{B}}_2(u_i)- \frac{1}{n}\sum\limits_{i = 1}^{n} \mathit{\boldsymbol{B}}_2(u_i) {\bf{D}}'_i(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0)+o_p(1)\Big\}. \end{align} (A.3)

    Let

    \begin{align*} \bar{\Lambda}_n = \frac{1}{n}\sum\limits_{i = 1}^{n} {\bf{D}}_i {\bf{D}}'_i-\frac{1}{n}\sum\limits_{i = 1}^{n} {\bf{D}}_i \mathit{\boldsymbol{B}}'_2(u_i)[\frac{1}{n}\sum\limits_{i = 1}^{n} \mathit{\boldsymbol{B}}_2(u_i) \mathit{\boldsymbol{B}}'_2(u_i)]^{-1}\frac{1}{n}\sum\limits_{i = 1}^{n} \mathit{\boldsymbol{B}}_2(u_i) {\bf{D}}'_i. \end{align*}

    By substituting (A.3) into (A.1), we obtain

    \begin{align*} \hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0 = \bar{\Lambda}_n^{-1}\Big\{\frac{1}{n}\sum\limits_{i = 1}^{n}\tilde{e}_i\Big[ {\bf{D}}_i-\Big(\frac{1}{n}\sum\limits_{j = 1}^{n} {\bf{D}}_j \mathit{\boldsymbol{B}}'_2(u_j)\Big)\Big[\frac{1}{n}\sum\limits_{j = 1}^{n} \mathit{\boldsymbol{B}}_2(u_j) \mathit{\boldsymbol{B}}'_2(u_j)\Big]^{-1} \mathit{\boldsymbol{B}}_2(u_i)\Big]+o_p(1) \Big\}. \end{align*}
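The matrix \bar{\Lambda}_n and this expression for \hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0 amount to partialling the \mathit{\boldsymbol{B}}_2 columns out of the {\bf{D}} columns, in the sense of the Frisch-Waugh decomposition of least squares. The identity \bar{\Lambda}_n = {\bf{D}}' \mathit{\boldsymbol{M}} {\bf{D}}/n , with \mathit{\boldsymbol{M}} the annihilator of the \mathit{\boldsymbol{B}}_2 columns, can be verified on a random design (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 3, 5
D = rng.normal(size=(n, p))    # rows D_i'
B2 = rng.normal(size=(n, q))   # rows B_2(u_i)'

# Lambda_bar_n exactly as defined in the proof
Lam = (D.T @ D / n
       - (D.T @ B2 / n) @ np.linalg.inv(B2.T @ B2 / n) @ (B2.T @ D / n))

# equivalently D'MD/n with M the residual-maker (annihilator) for B2
M = np.eye(n) - B2 @ np.linalg.inv(B2.T @ B2) @ B2.T
assert np.allclose(Lam, D.T @ M @ D / n)
```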

    Since \hat{\beta}(t)-\beta^*(t) = \mathit{\boldsymbol{B}}'_1(t)(\hat{ \boldsymbol{\gamma}}- \boldsymbol{\gamma}_0) , for any t\in(0, 1) , as n\rightarrow \infty , by the law of large numbers, Slutsky's theorem, and the properties of the multivariate normal distribution, we obtain that

    \begin{equation*} \sqrt{\frac{n}{k_1}}(\hat{\beta}(t)-\beta^*(t))\xrightarrow{d}N(0,\Xi(t)), \end{equation*}

    where \Xi(t) = \lim_{n\rightarrow \infty}\frac{\sigma^2}{k_1} \mathit{\boldsymbol{B}}'_1(t)\Delta_n \mathit{\boldsymbol{B}}_1(t) . Similar arguments hold for \hat{g}(u) .



    [1] M. Caputo, M. Fabrizio, A new definition of fractional derivative without singular kernel, Progr. Fract. Differ. Appl., 1 (2015), 1–13.
    [2] S. Das, Functional fractional calculus, Springer Science & Business Media, 2011.
    [3] H. Ahmad, A. R. Seadawy, T. A. Khan, P. Thounthong, Analytic approximate solutions for some nonlinear Parabolic dynamical wave equations, J. Taibah Univ. Sci., 14 (2020), 346–358. doi: 10.1080/16583655.2020.1741943
    [4] I. Ahmad, H. Ahmad, A. E. Abouelregal, P. Thounthong, M. Abdel-Aty, Numerical study of integer-order hyperbolic telegraph model arising in physical and related sciences, Eur. Phys. J. Plus, 135 (2020), 1–14. doi: 10.1140/epjp/s13360-019-00059-2
    [5] F. Cesarone, M. Caputo, C. Cametti, Memory formalism in the passive diffusion across a biological membrane, J. Membrane Sci., 250 (2004), 79–84.
    [6] M. Caputo, C. Cametti, Diffusion with memory in two cases of biological interest, J. Theor. Biol., 254 (2008), 697–703. doi: 10.1016/j.jtbi.2008.06.021
    [7] M. Caputo, F. Forte, European union and european monetary union as clubs. The unsatisfactory convergence and beyond, Sudeuropa, Quadrimestrale Civiltae Cultura Eur., 1 (2016), 1–30.
    [8] G. Jumarie, New stochastic fractional models for Malthusian growth, the Poissonian birth process and optimal management of populations, Math. Comput. Model., 44 (2006), 231–254. doi: 10.1016/j.mcm.2005.10.003
    [9] G. Iaffaldano, M. Caputo, S. Martino, Experimental and theoretical memory diffusion of water in sand, Hydrol. Earth Syst. Sci., 10 (2006), 93–100. doi: 10.5194/hess-10-93-2006
    [10] M. El-Shahed, Fractional calculus model of the semilunar heart valve vibrations, In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, 37033 (2003), 711–714.
    [11] R. L. Magin, Fractional calculus in bioengineering, Redding: Begell House, 2006.
    [12] D. Baleanu, H. Mohammadi, S. Rezapour, A mathematical theoretical study of a particular system of Caputo-Fabrizio fractional differential equations for the Rubella disease model, Adv. Differ. Equ., 2020 (2020), 1–19. doi: 10.1186/s13662-019-2438-0
    [13] J. Hadamard, Étude sur les propriétés des fonctions entières et en particulier d'une fonction considérée par Riemann, J. Math. Pures Appl., (1893), 171–216.
    [14] C. Hua, U. N. Katugampola, Hermite-Hadamard and Hermite-Hadamard-Fejer type inequalities for generalized fractional integrals, J. Math. Anal. Appl., 446 (2017), 1274–1291. doi: 10.1016/j.jmaa.2016.09.018
    [15] E. Set, I. Iscan, M. Z. Sarikaya, M. E. Ozdemir, On new inequalities of Hermite-Hadamard-Fejer type for convex functions via fractional integrals, Appl. Math. Comput., 259 (2015), 875–881. doi: 10.1016/j.amc.2015.03.030
    [16] E. Set, M. Z. Sarikaya, F. Karakoc, Hermite-Hadamard type inequalities for h-convex functions via fractional integrals, Konuralp J. Math., 4 (2016), 254–260.
    [17] I. Iscan, Hermite-Hadamard type inequalities for harmonically convex functions, Hacet. J. Math. stat., 43 (2014), 935–942.
    [18] M. Gurbuz, A. O. Akdemir, S. Rashid, E. Set, Hermite-Hadamard inequality for fractional integrals of Caputo-Fabrizio type and related inequalities, J. Inequal. Appl., 2020 (2020), 1–10. doi: 10.1186/s13660-019-2265-6
    [19] S. Varošanec, On h-convexity, J. Math. Anal. Appl., 326 (2007), 303–311.
    [20] S. Foschi, D. Ritelli, The Lambert function, the quintic equation and the proactive discovery of the implicit function theorem, Open J. Math. Sci., 5 (2021), 94–114.
    [21] G. Twagirumukiza, E. Singirankabo, Mathematical analysis of a delayed HIV/AIDS model with treatment and vertical transmission, Open J. Math. Sci., 5 (2021), 128–146. doi: 10.30538/oms2021.0151
    [22] S. E. Mukiawa, The effect of time-varying delay damping on the stability of porous elastic system, Open J. Math. Sci., 5 (2021), 147–161.
    [23] A. Yokus, B. Kuzu, U. Demiroglu, Investigation of solitary wave solutions for the (3+1)-dimensional Zakharov-Kuznetsov equation, Int. J. Mod. Phys. B, 33 (2019), 1950350. doi: 10.1142/S0217979219503508
    [24] A. Yokus, H. Bulut, On the numerical investigations to the Cahn-Allen equation by using finite difference method, Int. J. Optim. Control: Theor. Appl. (IJOCTA), 9 (2018), 18–23. doi: 10.11121/ijocta.01.2019.00561
    [25] Y. C. Kwun, G. Farid, W. Nazeer, S. Ullah, S. M. Kang, Generalized riemann-liouville k -fractional integrals associated with Ostrowski type inequalities and error bounds of hadamard inequalities, IEEE Access, 6 (2018), 64946–64953. doi: 10.1109/ACCESS.2018.2878266
    [26] G. Farid, A. U. Rehman, S. Bibi, Y. M. Chu, Refinements of two fractional versions of Hadamard inequalities for Caputo fractional derivatives and related results, Open J. Math. Sci., 5 (2021), 1–10. doi: 10.30538/oms2021.0139
    [27] Y. C. Kwun, G. Farid, S. Ullah, W. Nazeer, K. Mahreen, S. M. Kang, Inequalities for a unified integral operator and associated results in fractional calculus, IEEE Access, 7 (2019), 126283–126292. doi: 10.1109/ACCESS.2019.2939166
    [28] V. T. Nguyen, V. K. Nguyen, P. H. Quy, A note on Jesmanowicz conjecture for non-primitive Pythagorean triples, Open J. Math. Sci., 5 (2021), 115–127. doi: 10.30538/oms2021.0150
    [29] X. Z. Yang, G. Farid, W. Nazeer, Y. M. Chu, C. F. Dong, Fractional generalized Hadamard and Fejer-Hadamard inequalities for m-convex function, AIMS Math., 5 (2020), 6325–6340. doi: 10.3934/math.2020407
    [30] G. Farid, K. Mahreen, Y. M. Chu, Study of inequalities for unified integral operators of generalized convex functions, Open J. Math. Sci., 5 (2021), 80–93. doi: 10.30538/oms2021.0147
    [31] A. A. Al-Gonah, W. K. Mohammed, A new forms of extended hypergeometric functions and their properties, Eng. Appl. Sci. Lett., 4 (2021), 30–41.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)