Let \mathbb{C} denote the complex plane and \mathbb{C}^n the n -dimensional complex Euclidean space with the inner product defined by \langle z,w\rangle = \sum_{j = 1}^{n}z_j\overline{w}_j . Let B(a,r) = \{z\in\mathbb{C}^n:|z-a| < r\} be the open ball in \mathbb{C}^n . In particular, the open unit ball is defined as \mathbb{B} = B(0,1) .
Let H(\mathbb{B}) denote the set of all holomorphic functions on \mathbb{B} and S(\mathbb{B}) the set of all holomorphic self-mappings of \mathbb{B} . For given \varphi\in S(\mathbb{B}) and u\in H(\mathbb{B}) , the weighted composition operator on or between subspaces of H(\mathbb{B}) is defined by
\begin{align*} W_{u,\varphi}f(z) = u(z)f(\varphi(z)). \end{align*}
If u\equiv1 , then W_{u,\varphi} reduces to the composition operator, usually denoted by C_{\varphi} . If \varphi(z) = z , then W_{u,\varphi} reduces to the multiplication operator, usually denoted by M_u . Since W_{u,\varphi} = M_u C_{\varphi} , the operator W_{u,\varphi} can be regarded as the product of M_u and C_{\varphi} .
If n = 1 , \mathbb{B} becomes the open unit disk in \mathbb{C} , usually denoted by \mathbb{D} . Let D^m be the m th differentiation operator on H(\mathbb{D}) , that is,
\begin{align*} D^m f(z) = f^{(m)}(z), \end{align*}
where f^{(0)} = f . D^1 is the classical differentiation operator, denoted by D . As expected, there has been considerable interest in investigating products of differentiation and other related operators. For example, the most common products DC_{\varphi} and C_{\varphi}D were extensively studied in [1,10,11,12,13,23,25,26], and the products
\begin{align} M_uC_{\varphi}D,\quad C_{\varphi}M_uD,\quad M_uDC_{\varphi},\quad C_{\varphi}DM_u,\quad DM_uC_{\varphi},\quad DC_{\varphi}M_u \end{align} | (1.1)
were also extensively studied in [14,18,22,27]. Following the study of the operators in (1.1), it is natural to extend these investigations to the operators (see [5,6,30])
\begin{align*} M_uC_{\varphi}D^m,\quad C_{\varphi}M_uD^m,\quad M_uD^mC_{\varphi},\quad C_{\varphi}D^mM_u,\quad D^mM_uC_{\varphi},\quad D^mC_{\varphi}M_u. \end{align*}
Other examples of products involving differentiation operators can be found in [7,8,19,32] and the related references.
As the theory on the unit disk has matured, attention has turned to the corresponding questions on the unit ball. One way to extend the differentiation operator to \mathbb{C}^n is the radial derivative operator
\begin{align*} \Re f(z) = \sum_{j = 1}^{n}z_j\frac{\partial f}{\partial z_j}(z). \end{align*}
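For orientation (a standard computation we add here; it is not part of the original text), monomials are eigenfunctions of \Re : for a multi-index \alpha and z^{\alpha} = z_1^{\alpha_1}\cdots z_n^{\alpha_n} ,
\begin{align*} \Re(z^{\alpha}) = \sum_{j = 1}^{n}z_j\frac{\partial z^{\alpha}}{\partial z_j} = \Big(\sum_{j = 1}^{n}\alpha_j\Big)z^{\alpha} = |\alpha|z^{\alpha}, \end{align*}
and, iterating, \Re^m(z^{\alpha}) = |\alpha|^m z^{\alpha} .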
Naturally, replacing D by \Re in (1.1), we obtain the following operators
\begin{align} M_uC_{\varphi}\Re,\quad C_{\varphi}M_u\Re,\quad M_u\Re C_{\varphi},\quad C_{\varphi}\Re M_u,\quad \Re M_uC_{\varphi},\quad \Re C_{\varphi}M_u. \end{align} | (1.2)
Recently, these operators were studied in [31]. Other operators involving radial derivative operators have been studied in [21,33,34].
Interestingly, the radial derivative operator can be defined iteratively, namely, \Re^m f = \Re(\Re^{m-1}f) . Replacing \Re by \Re^m yields the related operators
\begin{align} M_uC_{\varphi}\Re^m,\quad C_{\varphi}M_u\Re^m,\quad M_u\Re^m C_{\varphi},\quad C_{\varphi}\Re^m M_u,\quad \Re^m M_uC_{\varphi},\quad \Re^m C_{\varphi}M_u. \end{align} | (1.3)
Clearly, the operators in (1.3) are more complex than those in (1.2). Since C_{\varphi}M_u\Re^m = M_{u\circ\varphi}C_{\varphi}\Re^m , the operator M_uC_{\varphi}\Re^m can be regarded as the simplest one in (1.3); it was first studied, and denoted by \Re^m_{u,\varphi} , in [24]. Recently, it has been studied again, since characterizing its properties requires finer properties of the underlying spaces (see [29]).
To handle the operator C_{\varphi}\Re^m M_u , one can use the fact that
\begin{align} C_{\varphi}\Re^m M_u = \sum_{i = 0}^{m}C_m^i\,\Re^i_{(\Re^{m-i}u)\circ\varphi,\,\varphi}. \end{align} | (1.4)
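As a sanity check (our addition, not from the paper), for m = 1 the identity (1.4) is just the product rule \Re(uf) = f\Re u+u\Re f composed with \varphi :
\begin{align*} C_{\varphi}\Re M_u f = (\Re(uf))\circ\varphi = (\Re u)\circ\varphi\cdot f\circ\varphi+u\circ\varphi\cdot(\Re f)\circ\varphi = \big(\Re^0_{(\Re u)\circ\varphi,\,\varphi}+\Re^1_{u\circ\varphi,\,\varphi}\big)f, \end{align*}
in agreement with (1.4), since C_1^0 = C_1^1 = 1 .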
Motivated by (1.4), the following sum operator was studied directly (see [2,28]):
\begin{align*} \mathfrak{S}^m_{\vec{u},{\varphi}} = \sum_{i = 0}^{m}M_{u_i}C_{\varphi}\Re^i, \end{align*}
where u_i\in H(\mathbb{B}) , i = \overline{0,m} , and \varphi\in S(\mathbb{B}) . In particular, if we set u_0\equiv\cdots\equiv u_{m-1}\equiv0 and u_m = u , then \mathfrak{S}^m_{\vec{u},{\varphi}} = M_uC_{\varphi}\Re^m ; if we set u_0\equiv\cdots\equiv u_{m-1}\equiv0 and u_m = u\circ\varphi , then \mathfrak{S}^m_{\vec{u},{\varphi}} = C_{\varphi}M_u\Re^m . In [28], Stević et al. studied the operator \mathfrak{S}^m_{\vec{u},{\varphi}} from Hardy spaces to weighted-type spaces on the unit ball and obtained the following results.
Theorem A. Let m\in\mathbb{N} , u_j\in H(\mathbb{B}) , j = \overline{0,m} , \varphi\in S(\mathbb{B}) , and \mu a weight function on \mathbb{B} . Then, the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:H^p\to H_\mu^\infty is bounded and
\begin{align} \sup_{z\in\mathbb{B}}\mu(z)|u_j(\varphi(z))||\varphi(z)| < +\infty,\quad j = \overline{1,m}, \end{align} | (1.5)
holds, if and only if
\begin{align*} I_0 = \sup_{z\in\mathbb{B}}\frac{\mu(z)|u_0(z)|}{(1-|\varphi(z)|^2)^{\frac{n}{p}}} < +\infty \end{align*}
and
\begin{align*} I_j = \sup_{z\in\mathbb{B}}\frac{\mu(z)|u_j(z)||\varphi(z)|}{(1-|\varphi(z)|^2)^{\frac{n}{p}+j}} < +\infty,\quad j = \overline{1,m}. \end{align*}
Theorem B. Let m\in\mathbb{N} , u_j\in H(\mathbb{B}) , j = \overline{0,m} , \varphi\in S(\mathbb{B}) , and \mu a weight function on \mathbb{B} . Then, the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:H^p\to H_\mu^\infty is compact if and only if it is bounded,
\begin{align*} \lim_{|\varphi(z)|\to1}\frac{\mu(z)|u_0(z)|}{(1-|\varphi(z)|^2)^{\frac{n}{p}}} = 0 \end{align*}
and
\begin{align*} \lim_{|\varphi(z)|\to1}\frac{\mu(z)|u_j(z)||\varphi(z)|}{(1-|\varphi(z)|^2)^{\frac{n}{p}+j}} = 0,\quad j = \overline{1,m}. \end{align*}
It must be mentioned that the necessity part of Theorem A requires (1.5) to hold. Inspired by [2,28], here we use a new method and technique, without assuming (1.5), to study the sum operator \mathfrak{S}^m_{\vec{u},{\varphi}} from the logarithmic Bergman-type space to the weighted-type space on the unit ball. To this end, we need the well-known Bell polynomial (see [3])
\begin{align*} B_{m,k}(x_1,x_2,\ldots,x_{m-k+1}) = \sum\frac{m!}{\prod_{i = 1}^{m-k+1}j_i!}\prod_{i = 1}^{m-k+1}\Big(\frac{x_i}{i!}\Big)^{j_i}, \end{align*}
where the sum is taken over all sequences j_1,j_2,\ldots,j_{m-k+1} of non-negative integers satisfying
\begin{align*} \sum_{i = 1}^{m-k+1}j_i = k\quad\text{and}\quad\sum_{i = 1}^{m-k+1}ij_i = m. \end{align*}
In particular, when k = 0 , one gets B_{0,0} = 1 and B_{m,0} = 0 for any m\in\mathbb{N} . When k = 1 , one gets B_{i,1} = x_i . When m = k = i , B_{i,i} = x_1^i holds.
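As a quick illustration (added by us, not in the original text): for m = 3 , k = 2 , the only admissible sequence is j_1 = j_2 = 1 (since j_1+j_2 = 2 and j_1+2j_2 = 3 ), so
\begin{align*} B_{3,2}(x_1,x_2) = \frac{3!}{1!\,1!}\cdot\frac{x_1}{1!}\cdot\frac{x_2}{2!} = 3x_1x_2. \end{align*}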
In this section, we introduce the logarithmic Bergman-type space and the weighted-type space. Here, a bounded positive continuous function on \mathbb{B} is called a weight. For a weight \mu , the weighted-type space H_\mu^\infty consists of all f\in H(\mathbb{B}) such that
\begin{align*} \|f\|_{H_\mu^\infty} = \sup_{z\in\mathbb{B}}\mu(z)|f(z)| < +\infty. \end{align*}
With the norm \|\cdot\|_{H_\mu^\infty} , H_\mu^\infty becomes a Banach space. In particular, if \mu(z) = (1-|z|^2)^{\sigma} ( \sigma > 0 ), the space H_\mu^\infty is called the classical weighted-type space, usually denoted by H_\sigma^\infty . If \mu\equiv1 , then H_\mu^\infty becomes the space of bounded holomorphic functions, usually denoted by H^\infty .
Next, we present the logarithmic Bergman-type space on \mathbb{B} (see [4] for the unit disk case). Let dv be the normalized Lebesgue measure on \mathbb{B} . The logarithmic Bergman-type space A^p_{w_{\gamma,\delta}} consists of all f\in H(\mathbb{B}) such that
\begin{align*} \|f\|^p_{A^p_{w_{\gamma,\delta}}} = \int_{\mathbb{B}}|f(z)|^p w_{\gamma,\delta}(z)\,dv(z) < +\infty, \end{align*}
where -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty and the weight w_{\gamma,\delta}(z) is defined by
\begin{align*} w_{\gamma,\delta}(z) = \Big(\log\frac{1}{|z|}\Big)^{\gamma}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{\delta}. \end{align*}
When p\geq1 , A^p_{w_{\gamma,\delta}} is a Banach space, while for 0 < p < 1 it is a Fréchet space with the translation invariant metric \rho(f,g) = \|f-g\|^p_{A^p_{w_{\gamma,\delta}}} .
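For orientation (a remark we add; it is implicit in the estimates below): since \log\frac{1}{|z|}\asymp1-|z| as |z|\to1^- , one has 1-\frac{1}{\log|z|}\asymp\frac{1}{1-|z|} , and hence
\begin{align*} w_{\gamma,\delta}(z)\asymp(1-|z|)^{\gamma}\Big(\log\frac{1}{1-|z|}\Big)^{\delta},\quad|z|\to1^-, \end{align*}
so w_{\gamma,\delta} behaves like a standard Bergman weight with an extra logarithmic factor near the boundary.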
Let \varphi\in S(\mathbb{B}) , 0\leq r < 1 , 0\leq\gamma < \infty , \delta\leq0 , and a\in\mathbb{B}\setminus\{\varphi(0)\} . The generalized counting functions are defined as
\begin{align*} N_{\varphi,\gamma,\delta}(r,a) = \sum_{z_j(a)\in\varphi^{-1}(a)}w_{\gamma,\delta}\Big(\frac{z_j(a)}{r}\Big), \end{align*}
where |z_j(a)| < r , counting multiplicities, and
\begin{align*} N_{\varphi,\gamma,\delta}(a) = N_{\varphi,\gamma,\delta}(1,a) = \sum_{z_j(a)\in\varphi^{-1}(a)}w_{\gamma,\delta}(z_j(a)). \end{align*}
If \varphi\in S(\mathbb{D}) , then the function N_{\varphi,\gamma,\delta} has the following integral expression: for 1\leq\gamma < +\infty and \delta\leq0 , there is a positive function F(t) satisfying
\begin{align*} N_{\varphi,\gamma,\delta}(r,u) = \int_0^r F(t)N_{\varphi,1}(t,u)\,dt,\quad r\in(0,1),\ u\neq\varphi(0). \end{align*}
When \varphi\in S(\mathbb{D}) and \delta = 0 , the generalized counting functions become the common counting functions, namely,
\begin{align*} N_{\varphi,\gamma}(r,a) = \sum_{z\in\varphi^{-1}(a),|z| < r}\Big(\log\frac{r}{|z|}\Big)^{\gamma} \end{align*}
and
\begin{align*} N_{\varphi,\gamma}(a) = N_{\varphi,\gamma}(1,a) = \sum_{z\in\varphi^{-1}(a)}\Big(\log\frac{1}{|z|}\Big)^{\gamma}. \end{align*}
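To fix ideas (an illustration we add, not from the paper): for \varphi(z) = z^2 on \mathbb{D} and a\neq0 , the preimages of a are its two square roots, each of modulus |a|^{1/2} , so
\begin{align*} N_{\varphi,\gamma}(a) = 2\Big(\log\frac{1}{|a|^{1/2}}\Big)^{\gamma} = 2^{1-\gamma}\Big(\log\frac{1}{|a|}\Big)^{\gamma}. \end{align*}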
In [17], Shapiro used the function N_{\varphi,\gamma}(1,a) to characterize the compact composition operators on the weighted Bergman spaces.
Let X and Y be two topological vector spaces whose topologies are induced by the translation invariant metrics d_X and d_Y , respectively. A linear operator T:X\to Y is called bounded if there is a positive constant K such that
\begin{align*} d_Y(Tf,0)\leq Kd_X(f,0) \end{align*}
for all f\in X . The operator T:X\to Y is called compact if it maps bounded sets into relatively compact sets.
In this paper, j = \overline{k,l} is used to represent j = k,\ldots,l , where k,l\in\mathbb{N}_0 and k\leq l . Positive constants are denoted by C , and they may vary from one occurrence to another. The notation a\lesssim b (resp. a\gtrsim b ) means that there is a positive constant C such that a\leq Cb (resp. a\geq Cb ). When a\lesssim b and b\lesssim a , we write a\asymp b .
In this section, we obtain some properties of the logarithmic Bergman-type space. First, we have the following point-evaluation estimate for functions in the space.
Theorem 3.1. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty and 0 < r < 1 . Then, there exists a positive constant C = C(\gamma,\delta,p,r) , independent of z\in K = \{z\in\mathbb{B}:|z| > r\} and f\in A^p_{w_{\gamma,\delta}} , such that
\begin{align} |f(z)|\leq\frac{C}{(1-|z|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align} | (3.1)
Proof. Let z\in K . By applying the subharmonicity of the function |f|^p on the Euclidean ball B(z,r) and using Lemma 1.23 in [35], we have
\begin{align} |f(z)|^p\leq\frac{1}{v(B(z,r))}\int_{B(z,r)}|f(w)|^p dv(w)\leq\frac{C_{1,r}}{(1-|z|^2)^{n+1}}\int_{B(z,r)}|f(w)|^p dv(w). \end{align} | (3.2)
Since r < |z| < 1 and 1-|w|^2\asymp1-|z|^2 , we have
\begin{align} \log\frac{1}{|w|}\asymp1-|w|\asymp1-|z|\asymp\log\frac{1}{|z|} \end{align} | (3.3)
and
\begin{align} \log\Big(1-\frac{1}{\log|w|}\Big)\asymp\log\Big(1-\frac{1}{\log|z|}\Big). \end{align} | (3.4)
From (3.3) and (3.4), it follows that there is a positive constant C_{2,r} such that w_{\gamma,\delta}(z)\leq C_{2,r}w_{\gamma,\delta}(w) for all w\in B(z,r) . From this and (3.2), we have
\begin{align} |f(z)|^p\leq\frac{C_{1,r}C_{2,r}}{(1-|z|^2)^{n+1}w_{\gamma,\delta}(z)}\int_{B(z,r)}|f(w)|^p w_{\gamma,\delta}(w)dv(w)\leq\frac{C_{1,r}C_{2,r}}{(1-|z|^2)^{n+1}w_{\gamma,\delta}(z)}\|f\|^p_{A^p_{w_{\gamma,\delta}}}. \end{align} | (3.5)
From (3.5) and the fact that \log\frac{1}{|z|}\asymp1-|z|\asymp1-|z|^2 , the following inequality holds with a fixed constant C_{3,r} :
\begin{align*} |f(z)|^p\leq\frac{C_{1,r}C_{2,r}C_{3,r}}{(1-|z|^2)^{n+1+\gamma}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\delta}\|f\|^p_{A^p_{w_{\gamma,\delta}}}. \end{align*}
Let C = (C_{1,r}C_{2,r}C_{3,r})^{\frac{1}{p}} . Then the proof is complete.
Theorem 3.2. Let m\in\mathbb{N} , -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty and 0 < r < 1 . Then, there exists a positive constant C_m = C(\gamma,\delta,p,r,m) , independent of z\in K and f\in A^p_{w_{\gamma,\delta}} , such that
\begin{align} \Big|\frac{\partial^m f(z)}{\partial z_{i_1}\partial z_{i_2}\cdots\partial z_{i_m}}\Big|\leq\frac{C_m}{(1-|z|^2)^{\frac{\gamma+n+1}{p}+m}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align} | (3.6)
Proof. First, we prove the case m = 1 . By the definition of the gradient and Cauchy's inequality, we get, for fixed q\in(0,1) ,
\begin{align} \Big|\frac{\partial f(z)}{\partial z_i}\Big|\leq|\nabla f(z)|\leq\tilde{C}_1\frac{\sup_{w\in B(z,q(1-|z|))}|f(w)|}{1-|z|}, \end{align} | (3.7)
where i = \overline{1,n} . By using the relations
\begin{align*} 1-|z|\leq1-|z|^2\leq2(1-|z|), \end{align*}
\begin{align*} (1-q)(1-|z|)\leq1-|w|\leq(q+1)(1-|z|) \end{align*}
and
\begin{align*} \log\Big(1-\frac{1}{\log|z|}\Big)\asymp\log\Big(1-\frac{1}{\log|w|}\Big), \end{align*}
we obtain
\begin{align*} |f(w)|\leq\frac{\breve{C}_1}{(1-|z|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}} \end{align*}
for any w\in B(z,q(1-|z|)) . Then,
\begin{align*} \sup_{w\in B(z,q(1-|z|))}|f(w)|\leq\frac{\breve{C}_1}{(1-|z|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align*}
From this and (3.7), it follows that
\begin{align} \Big|\frac{\partial f(z)}{\partial z_i}\Big|\leq\frac{\hat{C}_1}{(1-|z|^2)^{\frac{\gamma+n+1}{p}+1}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align} | (3.8)
Hence, the proof is completed for the case m = 1 .
We use mathematical induction to complete the proof. Assume that (3.6) holds for m < a . For convenience, let g(z) = \frac{\partial^{a-1}f(z)}{\partial z_{i_1}\partial z_{i_2}\cdots\partial z_{i_{a-1}}} . By applying (3.7) to the function g , we obtain
\begin{align} \Big|\frac{\partial g(z)}{\partial z_i}\Big|\leq\tilde{C}_1\frac{\sup_{w\in B(z,q(1-|z|))}|g(w)|}{1-|z|}. \end{align} | (3.9)
By the induction hypothesis, the function g satisfies
\begin{align*} |g(z)|\leq\frac{\hat{C}_{a-1}}{(1-|z|^2)^{\frac{\gamma+n+1}{p}+a-1}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align*}
By using (3.9) and this estimate, we also obtain
\begin{align*} \Big|\frac{\partial g(z)}{\partial z_i}\Big|\leq\frac{\hat{C}_a}{(1-|z|^2)^{\frac{\gamma+n+1}{p}+a}}\Big[\log\Big(1-\frac{1}{\log|z|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align*}
This shows that (3.6) holds for m = a . The proof is complete.
As an application of Theorems 3.1 and 3.2, we give estimates at z = 0 for functions in A^p_{w_{\gamma,\delta}} .
Corollary 3.1. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty , and 0 < r < 2/3 . Then, for all f\in A^p_{w_{\gamma,\delta}} , it follows that
\begin{align} |f(0)|\leq\frac{C}{(1-r^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}} \end{align} | (3.10)
and
\begin{align} \Big|\frac{\partial^m f(0)}{\partial z_{l_1}\cdots\partial z_{l_m}}\Big|\leq\frac{C_m}{(1-r^2)^{\frac{\gamma+n+1}{p}+m}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}, \end{align} | (3.11)
where the constants C and C_m are defined in Theorems 3.1 and 3.2, respectively.
Proof. For f\in A^p_{w_{\gamma,\delta}} , from Theorem 3.1 and the maximum modulus theorem, we have
\begin{align*} |f(0)|\leq\max_{|z| = r}|f(z)|\leq\frac{C}{(1-r^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}, \end{align*}
which implies that (3.10) holds. By a similar method, we also obtain (3.11).
Next, we give an equivalent norm on A^p_{w_{\gamma,\delta}} , which extends Lemma 3.2 in [4] to \mathbb{B} .
Theorem 3.3. Let r_0\in[0,1) . Then, for every f\in A^p_{w_{\gamma,\delta}} , it follows that
\begin{align} \|f\|^p_{A^p_{w_{\gamma,\delta}}}\asymp\int_{\mathbb{B}\setminus r_0\mathbb{B}}|f(z)|^p w_{\gamma,\delta}(z)\,dv(z). \end{align} | (3.12)
Proof. If r_0 = 0 , then the claim is obvious, so we assume that r_0\in(0,1) . Integrating in polar coordinates, we have
\begin{align*} \|f\|^p_{A^p_{w_{\gamma,\delta}}} = 2n\int_0^1 w_{\gamma,\delta}(r)r^{2n-1}dr\int_{S}|f(r\zeta)|^p d\sigma(\zeta). \end{align*}
Put
\begin{align*} A(r) = w_{\gamma,\delta}(r)r^{2n-1}\quad\text{and}\quad M(r,f) = \int_{S}|f(r\zeta)|^p d\sigma(\zeta). \end{align*}
Then
\begin{align} \|f\|^p_{A^p_{w_{\gamma,\delta}}}\asymp\Big(\int_0^{r_0}+\int_{r_0}^1\Big)M(r,f)A(r)\,dr. \end{align} | (3.13)
Note that M(r,f) is increasing in r , and that A(r) is positive and continuous on (0,1) with (substituting x = \log\frac{1}{r} )
\begin{align*} \lim_{r\to0}A(r) = \lim_{x\to+\infty}x^{\gamma}\Big[\log\Big(1+\frac{1}{x}\Big)\Big]^{\delta}e^{-(2n-1)x} = \lim_{x\to+\infty}\frac{x^{\gamma-\delta}}{e^{(2n-1)x}} = 0, \end{align*}
that is, there is a constant \varepsilon > 0 ( \varepsilon < r_0 ) such that A(r) < A(\varepsilon) for r\in(0,\varepsilon) . Then we have
\begin{align} \int_0^{r_0}M(r,f)A(r)dr&\leq\frac{2r_0}{1-r_0}\max_{\varepsilon\leq r\leq r_0}A(r)\int_{r_0}^{\frac{1+r_0}{2}}M(r,f)dr\\ &\leq\frac{2r_0}{1-r_0}\frac{\max_{\varepsilon\leq r\leq r_0}A(r)}{\min_{r_0\leq r\leq\frac{1+r_0}{2}}A(r)}\int_{r_0}^{\frac{1+r_0}{2}}M(r,f)A(r)dr\lesssim\int_{r_0}^1 M(r,f)A(r)dr. \end{align} | (3.14)
From (3.13) and (3.14), we obtain the inequality
\begin{align*} \|f\|^p_{A^p_{w_{\gamma,\delta}}}\lesssim\int_{r_0}^1 M(r,f)A(r)dr. \end{align*}
The reverse inequality is obvious. The asymptotic relation (3.12) follows, as desired.
The following integral estimate extends Lemma 3.4 in [4]. The proof is similar, but we present it for completeness.
Lemma 3.1. Let -1 < \gamma < +\infty , \delta\leq0 , \beta > \gamma-\delta and 0 < r < 1 . Then, for each fixed w\in\mathbb{B} with |w| > r ,
\begin{align*} \int_{\mathbb{B}}\frac{w_{\gamma,\delta}(z)}{|1-\langle z,w\rangle|^{n+\beta+1}}dv(z)\lesssim\frac{1}{(1-|w|)^{\beta-\gamma}}\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{\delta}. \end{align*}
Proof. Fix w with |w| > r_0 ( 0 < r_0 < 1 ). It is easy to see that
\begin{align} \log\frac{1}{r}\asymp1-r\quad\text{for}\quad r_0\leq r < 1. \end{align} | (3.15)
By applying Theorem 3.3 to
\begin{align*} f_w(z) = \frac{1}{(1-\langle z,w\rangle)^{n+\beta+1}} \end{align*}
and using (3.15), integration in polar coordinates gives
\begin{align} \int_{\mathbb{B}}\frac{w_{\gamma,\delta}(z)}{|1-\langle z,w\rangle|^{n+\beta+1}}dv(z)\lesssim\int_{r_0}^1 M(r,f_w)(1-r)^{\gamma}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr. \end{align} | (3.16)
By Proposition 1.4.10 in [15], we have
\begin{align} M(r,f_w)\asymp\frac{1}{(1-r^2|w|^2)^{\beta+1}}. \end{align} | (3.17)
From (3.16) and (3.17), we have
\begin{align*} \int_{\mathbb{B}}\frac{w_{\gamma,\delta}(z)}{|1-\langle z,w\rangle|^{n+\beta+1}}dv(z)&\lesssim\int_{r_0}^1\frac{(1-r)^{\gamma}}{(1-r^2|w|^2)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr\\ &\lesssim\int_{r_0}^1\frac{(1-r)^{\gamma}}{(1-r|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr\\ & = \int_{r_0}^{|w|}\frac{(1-r)^{\gamma}}{(1-r|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr\\ &\quad+\int_{|w|}^1\frac{(1-r)^{\gamma}}{(1-r|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr = :I_1+I_2. \end{align*}
Since \Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta} is decreasing in r on [|w|,1) , we have
\begin{align} I_2& = \int_{|w|}^1\frac{(1-r)^{\gamma}}{(1-r|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr\\ &\lesssim\frac{1}{(1-|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{\delta}\int_{|w|}^1(1-r)^{\gamma}dr\asymp\frac{1}{(1-|w|)^{\beta-\gamma}}\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{\delta}. \end{align} | (3.18)
On the other hand, we obtain
\begin{align*} I_1 = \int_{r_0}^{|w|}\frac{(1-r)^{\gamma}}{(1-r|w|)^{\beta+1}}\Big[\log\Big(1-\frac{1}{\log r}\Big)\Big]^{\delta}r^{2n-1}dr\lesssim\int_{r_0}^{|w|}(1-r)^{\gamma-\beta-1}\Big(\log\frac{2}{1-r}\Big)^{\delta}dr. \end{align*}
If \delta = 0 and \beta > \gamma , then we have
\begin{align*} I_1(0)\lesssim(1-|w|)^{\gamma-\beta}. \end{align*}
If \delta\neq0 , then integration by parts gives
\begin{align*} I_1(\delta) = -\frac{1}{\gamma-\beta}(1-|w|)^{\gamma-\beta}\Big(\log\frac{2}{1-|w|}\Big)^{\delta}+\frac{1}{\gamma-\beta}(1-r_0)^{\gamma-\beta}\Big(\log\frac{2}{1-r_0}\Big)^{\delta}+\frac{\delta}{\gamma-\beta}I_1(\delta-1). \end{align*}
Since \delta < 0 , \gamma-\beta < 0 and
\begin{align*} \Big(\log\frac{2}{1-r}\Big)^{\delta-1}\leq\Big(\log\frac{2}{1-r}\Big)^{\delta}\quad\text{for}\quad r_0 < r < |w| < 1, \end{align*}
we have
\begin{align*} I_1(\delta)\leq-\frac{1}{\gamma-\beta}(1-|w|)^{\gamma-\beta}\Big(\log\frac{2}{1-|w|}\Big)^{\delta}+\frac{\delta}{\gamma-\beta}I_1(\delta), \end{align*}
and from this it follows that
\begin{align*} I_1(\delta)\lesssim(1-|w|)^{\gamma-\beta}\Big(\log\frac{2}{1-|w|}\Big)^{\delta}\asymp(1-|w|)^{\gamma-\beta}\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{\delta}, \end{align*}
provided \gamma-\beta-\delta < 0 . The proof is finished.
The following result gives an important family of test functions in A^p_{w_{\gamma,\delta}} .
Theorem 3.4. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty and 0 < r < 1 . Then, for each t\geq0 and w\in\mathbb{B} with |w| > r , the following function belongs to A^p_{w_{\gamma,\delta}} :
\begin{align*} f_{w,t}(z) = \Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{-\frac{\delta}{p}}\frac{(1-|w|^2)^{-\frac{\delta}{p}+t+1}}{(1-\langle z,w\rangle)^{\frac{\gamma-\delta+n+1}{p}+t+1}}. \end{align*}
Moreover,
\begin{align*} \sup_{\{w\in\mathbb{B}:|w| > r\}}\|f_{w,t}\|_{A^p_{w_{\gamma,\delta}}}\lesssim1. \end{align*}
Proof. By Lemma 3.1 and a direct calculation, we have
\begin{align*} \|f_{w,t}\|^p_{A^p_{w_{\gamma,\delta}}}& = \int_{\mathbb{B}}\Bigg|\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{-\frac{\delta}{p}}\frac{(1-|w|^2)^{-\frac{\delta}{p}+t+1}}{(1-\langle z,w\rangle)^{\frac{\gamma-\delta+n+1}{p}+t+1}}\Bigg|^p w_{\gamma,\delta}(z)dv(z)\\ & = (1-|w|^2)^{p(t+1)-\delta}\Big[\log\Big(1-\frac{1}{\log|w|}\Big)\Big]^{-\delta}\int_{\mathbb{B}}\frac{w_{\gamma,\delta}(z)}{|1-\langle z,w\rangle|^{\gamma-\delta+p(t+1)+n+1}}dv(z)\lesssim1. \end{align*}
The proof is finished.
In this section, for simplicity, we write
\begin{align*} B_{i,j}(\varphi(z)) = B_{i,j}(\varphi(z),\varphi(z),\ldots,\varphi(z)). \end{align*}
In order to characterize the compactness of the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty , we need the following lemma. It can be proved similarly to the corresponding result in [16], so we omit the details.
Lemma 4.1. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty , m\in\mathbb{N} , u_j\in H(\mathbb{B}) , j = \overline{0,m} , and \varphi\in S(\mathbb{B}) . Then, the bounded operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty is compact if and only if, for every bounded sequence \{f_k\}_{k\in\mathbb{N}} in A^p_{w_{\gamma,\delta}} such that f_k\to0 uniformly on any compact subset of \mathbb{B} as k\to\infty , it follows that
\begin{align*} \lim_{k\to\infty}\|\mathfrak{S}^m_{\vec{u},{\varphi}}f_k\|_{H_\mu^\infty} = 0. \end{align*}
The following result was obtained in [24].
Lemma 4.2. Let s\geq0 , w\in\mathbb{B} and
\begin{align*} g_{w,s}(z) = \frac{1}{(1-\langle z,w\rangle)^{s}},\quad z\in\mathbb{B}. \end{align*}
Then,
\begin{align*} \Re^k g_{w,s}(z) = \frac{sP_k(\langle z,w\rangle)}{(1-\langle z,w\rangle)^{s+k}}, \end{align*}
where P_k(w) = s^{k-1}w^k+p^{(k)}_{k-1}(s)w^{k-1}+\cdots+p^{(k)}_2(s)w^2+w , and p^{(k)}_j(s) , j = \overline{2,k-1} , are nonnegative polynomials in s .
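For k = 1 this can be checked directly (a computation we add for illustration): since \Re\langle z,w\rangle = \langle z,w\rangle ,
\begin{align*} \Re g_{w,s}(z) = \frac{s\langle z,w\rangle}{(1-\langle z,w\rangle)^{s+1}}, \end{align*}
which is the stated formula with P_1(w) = w .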
We also need the following result, obtained in [20].
Lemma 4.3. Let s > 0 , w\in\mathbb{B} and
\begin{align*} g_{w,s}(z) = \frac{1}{(1-\langle z,w\rangle)^{s}},\quad z\in\mathbb{B}. \end{align*}
Then,
\begin{align*} \Re^k g_{w,s}(z) = \sum_{t = 1}^{k}a^{(k)}_t\Big(\prod_{j = 0}^{t-1}(s+j)\Big)\frac{\langle z,w\rangle^t}{(1-\langle z,w\rangle)^{s+t}}, \end{align*}
where the sequences (a^{(k)}_t)_{t\in\overline{1,k}} , k\in\mathbb{N} , are defined by the relations
\begin{align*} a^{(k)}_k = a^{(k)}_1 = 1 \end{align*}
for k\in\mathbb{N} and
\begin{align*} a^{(k)}_t = ta^{(k-1)}_t+a^{(k-1)}_{t-1} \end{align*}
for 2\leq t\leq k-1 , k\geq3 .
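We remark (our observation, not stated in the paper) that the boundary values and the recurrence above are exactly those of the Stirling numbers of the second kind, so a^{(k)}_t = S(k,t) . For example,
\begin{align*} a^{(3)}_2 = 2a^{(2)}_2+a^{(2)}_1 = 2\cdot1+1 = 3 = S(3,2). \end{align*}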
The final lemma of this section was obtained in [24].
Lemma 4.4. If a > 0 , then
\begin{align*} D_n(a) = \begin{vmatrix} 1 & 1 & \cdots & 1\\ a & a+1 & \cdots & a+n-1\\ a(a+1) & (a+1)(a+2) & \cdots & (a+n-1)(a+n)\\ \vdots & \vdots & & \vdots\\ \prod\limits_{k = 0}^{n-2}(a+k) & \prod\limits_{k = 0}^{n-2}(a+k+1) & \cdots & \prod\limits_{k = 0}^{n-2}(a+k+n-1) \end{vmatrix} = \prod_{k = 1}^{n-1}k!. \end{align*}
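For instance (an illustration we add), for n = 2 ,
\begin{align*} D_2(a) = \begin{vmatrix} 1 & 1\\ a & a+1 \end{vmatrix} = (a+1)-a = 1 = 1!, \end{align*}
independently of a , in agreement with the lemma.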
Theorem 4.1. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty , m\in\mathbb{N} , u_j\in H(\mathbb{B}) , j = \overline{0,m} , and \varphi\in S(\mathbb{B}) . Then, the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty is bounded if and only if
\begin{align} M_0: = \sup_{z\in\mathbb{B}}\frac{\mu(z)|u_0(z)|}{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < +\infty \end{align} | (4.1)
and
\begin{align} M_j: = \sup_{z\in\mathbb{B}}\frac{\mu(z)|\sum_{i = j}^{m}u_i(z)B_{i,j}(\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}+j}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < +\infty \end{align} | (4.2)
for j = \overline{1,m} .
Moreover, if the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty is bounded, then
\begin{align} \|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{A^p_{w_{\gamma,\delta}}\to H_\mu^\infty}\asymp\sum_{j = 0}^{m}M_j. \end{align} | (4.3)
Proof. Suppose that (4.1) and (4.2) hold. From Theorems 3.1 and 3.2 and some direct calculations, it follows that
\begin{align} \mu(z)\Big|\sum_{i = 0}^{m}u_i(z)\Re^i f(\varphi(z))\Big|&\leq\mu(z)\sum_{i = 0}^{m}|u_i(z)||\Re^i f(\varphi(z))|\\ & = \mu(z)|u_0(z)||f(\varphi(z))|+\mu(z)\Big|\sum_{i = 1}^{m}\sum_{j = 1}^{i}\Big(u_i(z)\sum_{l_1 = 1}^{n}\cdots\sum_{l_j = 1}^{n}\Big(\frac{\partial^j f}{\partial z_{l_1}\partial z_{l_2}\cdots\partial z_{l_j}}(\varphi(z))\sum_{k_1,\ldots,k_j}C^{(i)}_{k_1,\ldots,k_j}\prod_{t = 1}^{j}\varphi_{l_t}(z)\Big)\Big)\Big|\\ & = \mu(z)|u_0(z)f(\varphi(z))|+\mu(z)\Big|\sum_{j = 1}^{m}\sum_{i = j}^{m}\Big(u_i(z)\sum_{l_1 = 1}^{n}\cdots\sum_{l_j = 1}^{n}\Big(\frac{\partial^j f}{\partial z_{l_1}\partial z_{l_2}\cdots\partial z_{l_j}}(\varphi(z))\sum_{k_1,\ldots,k_j}C^{(i)}_{k_1,\ldots,k_j}\prod_{t = 1}^{j}\varphi_{l_t}(z)\Big)\Big)\Big|\\ &\lesssim\frac{\mu(z)|u_0(z)|}{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}\\ &\quad+\sum_{j = 1}^{m}\frac{\mu(z)|\sum_{i = j}^{m}u_i(z)B_{i,j}(\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}+j}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}\|f\|_{A^p_{w_{\gamma,\delta}}}\\ & = M_0\|f\|_{A^p_{w_{\gamma,\delta}}}+\sum_{j = 1}^{m}M_j\|f\|_{A^p_{w_{\gamma,\delta}}}. \end{align} | (4.4)
By taking the supremum in (4.4) over the unit ball of the space A^p_{w_{\gamma,\delta}} and using (4.1) and (4.2), we conclude that the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty is bounded. Moreover, we have
\begin{align} \|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{A^p_{w_{\gamma,\delta}}\to H_\mu^\infty}\leq C\sum_{j = 0}^{m}M_j, \end{align} | (4.5)
where C is a positive constant.
Conversely, assume that the operator \mathfrak{S}^m_{\vec{u},{\varphi}}:A^p_{w_{\gamma,\delta}}\to H_\mu^\infty is bounded. Then there exists a positive constant C such that
\begin{align} \|\mathfrak{S}^m_{\vec{u},{\varphi}}f\|_{H_\mu^\infty}\leq C\|f\|_{A^p_{w_{\gamma,\delta}}} \end{align} | (4.6)
for any f\in A^p_{w_{\gamma,\delta}} . First, taking f(z) = 1\in A^p_{w_{\gamma,\delta}} , one has that
\begin{align} \sup_{z\in\mathbb{B}}\mu(z)|u_0(z)| < +\infty. \end{align} | (4.7)
Similarly, taking f_k(z) = z_k^j\in A^p_{w_{\gamma,\delta}} , k = \overline{1,n} and j = \overline{1,m} , and using (4.7), we get
\begin{align} \sup_{z\in\mathbb{B}}\mu(z)\Big|u_0(z)\varphi_k(z)^j+\sum_{i = j}^{m}u_i(z)B_{i,j}(\varphi_k(z))\Big| < +\infty \end{align} | (4.8)
for any j\in\{1,2,\ldots,m\} . Since \varphi(z)\in\mathbb{B} , we have |\varphi(z)|\leq1 . So, using the triangle inequality together with (4.7) and (4.8), the following inequality is true:
\begin{align} \sup_{z\in\mathbb{B}}\mu(z)\Big|\sum_{i = j}^{m}u_i(z)B_{i,j}(\varphi(z))\Big| < +\infty. \end{align} | (4.9)
Let w\in\mathbb{B} and d_k = \frac{\gamma+n+1}{p}+k . For any j\in\{1,2,\ldots,m\} and constants c_k = c_k^{(j)} , k = \overline{0,m} , let
\begin{align} h_w^{(j)}(z) = \sum_{k = 0}^{m}c_k^{(j)}f_{w,k}(z), \end{align} | (4.10)
where f_{w,k} is defined in Theorem 3.4. Then, by Theorem 3.4, we have
\begin{align} L_j = \sup_{w\in\mathbb{B}}\|h_w^{(j)}\|_{A^p_{w_{\gamma,\delta}}} < +\infty. \end{align} | (4.11)
From (4.6), (4.11) and some direct calculations, it follows that
\begin{align} L_j\|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{A^p_{w_{\gamma,\delta}}\to H_\mu^\infty}&\geq\|\mathfrak{S}^m_{\vec{u},{\varphi}}h^{(j)}_{\varphi(w)}\|_{H_\mu^\infty} = \sup_{z\in\mathbb{B}}\mu(z)\Big|\sum_{i = 0}^{m}u_i(z)\Re^i h^{(j)}_{\varphi(w)}(\varphi(z))\Big|\\ &\geq\mu(w)\Big|u_0(w)h^{(j)}_{\varphi(w)}(\varphi(w))+\sum_{i = 1}^{m}u_i(w)\Re^i h^{(j)}_{\varphi(w)}(\varphi(w))\Big|\\ & = \mu(w)\Bigg|\frac{u_0(w)(c_0+c_1+\cdots+c_m)}{(1-|\varphi(w)|^2)^{\frac{\gamma+n+1}{p}}}+\frac{\big\langle\sum_{i = 1}^{m}u_i(w)B_{i,1}(\varphi(w)),\varphi(w)\big\rangle(d_0c_0+\cdots+d_mc_m)}{(1-|\varphi(w)|^2)^{\frac{\gamma+n+1}{p}+1}}+\cdots\\ &\quad+\frac{\big\langle\sum_{i = j}^{m}u_i(w)B_{i,j}(\varphi(w)),\varphi(w)^j\big\rangle(d_0\cdots d_{j-1}c_0+\cdots+d_m\cdots d_{m+j-1}c_m)}{(1-|\varphi(w)|^2)^{\frac{\gamma+n+1}{p}+j}}+\cdots\\ &\quad+\frac{\big\langle u_m(w)B_{m,m}(\varphi(w)),\varphi(w)^m\big\rangle(d_0\cdots d_{m-1}c_0+\cdots+d_m\cdots d_{2m-1}c_m)}{(1-|\varphi(w)|^2)^{\frac{\gamma+n+1}{p}+m}}\Bigg|\Big[\log\Big(1-\frac{1}{\log|\varphi(w)|}\Big)\Big]^{-\frac{\delta}{p}}. \end{align} | (4.12)
Since d_{k} > 0 , k = \overline{0, m} , by Lemma 4.4, we have the following linear equations
\begin{equation} \left( \begin{array}{cccc} 1 & 1 &\cdots & 1 \\ d_{0} & d_{1} &\cdots & d_{m} \\ \vdots &\vdots &\ddots &\vdots \\ \prod\limits_{k = 0}^{j-1}d_{k}& \prod\limits_{k = 0}^{j-1} d_{k+1}&\cdots & \prod\limits_{k = 0}^{j-1}d_{k+m} \\ \vdots &\vdots &\ddots &\vdots \\ \prod\limits_{k = 0}^{m-1}d_{k}& \prod\limits_{k = 0}^{m-1} d_{k+1}&\cdots & \prod\limits_{k = 0}^{m-1}d_{k+m} \end{array} \right) \left( \begin{array}{c} c_{0}\\ c_{1}\\ \vdots\\ c_{j}\\ \vdots\\ c_{m} \end{array} \right) = \left( \begin{array}{c} 0\\ 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{array} \right), \end{equation} | (4.13)
where the 1 on the right-hand side occupies the (j+1) th position.
From (4.12) and (4.13), we have
\begin{align} L_{j}\|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{{A^p_{w_{\gamma,\delta}}}\rightarrow H_{\mu}^{\infty}} &\geq\sup_{|\varphi(z)| > 1/2}\frac{\mu(z)|\sum _{i = j}^{m}u_{i}(z)B_{i,j} (\varphi(z))||\varphi(z)|^{j}}{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}\\ &\gtrsim\sup_{|\varphi(z)| > 1/2}\frac{\mu(z)|\sum _{i = j}^{m}u_{i}(z)B_{i,j} (\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}. \end{align} | (4.14)
On the other hand, from (4.9), we have
\begin{align} &\sup_{|\varphi(z)|\leq1/2}\frac{\mu(z)|\sum _{i = j}^{m}u_{i}(z)B_{i,j} (\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}\\ &\leq\sup_{z\in\mathbb{B}}\Big(\frac{4}{3}\Big)^{\frac{\gamma +n+1}{p}+j} \Big[\log\Big(1-\frac{1}{\log\frac{1}{2}}\Big)\Big]^{-\frac{\delta}{p}} \mu(z)\Big|\sum_{i = j}^{m}u_{i}(z)B_{i,j}(\varphi(z))\Big| < +\infty. \end{align} | (4.15) |
From (4.14) and (4.15), we get that (4.2) holds for j = \overline{1, m} .
For constants c_{k} = c_{k}^{(0)} , k = \overline{0, m} , let
\begin{align} h_{w}^{(0)}(z) = \sum_{k = 0}^{m}c_{k}^{(0)}f_{w,k}(z). \end{align} | (4.16) |
By Theorem 3.4, we know that L_{0} = \sup_{w\in\mathbb{B}}\|h_{w}^{(0)}\|_{A^p_{w_{\gamma, \delta}}} < +\infty . From this, (4.12), (4.13) and Lemma 4.4, we get
\begin{align*} L_{0}\|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{{A^p_{w_{\gamma,\delta}}}\rightarrow H_{\mu}^{\infty}} \geq \frac{\mu(z)|u_{0}(z)|}{(1-|\varphi(z)|^{2})^{\frac{\gamma +n+1}{p}}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}}. \end{align*} |
So, we have M_0 < +\infty . Moreover, we have
\begin{align} \|\mathfrak{S}^m_{\vec{u},{\varphi}}\|_{A^p_{w_{\gamma,\delta}}\rightarrow H_\mu^\infty} \geq\sum_{j = 0}^{m}M_{j}. \end{align} | (4.17) |
From (4.5) and (4.17), we obtain (4.3). The proof is completed.
From Theorem 4.1 and (1.4), we obtain the following result.
Corollary 4.1. Let m\in\mathbb{N} , u\in H(\mathbb{B}) , \varphi\in S(\mathbb{B}) , and \mu a weight function on \mathbb{B} . Then, the operator C_{{\varphi}}\Re^{m}M_{u}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is bounded if and only if
\begin{align*} I_{0}: = \sup_{z\in\mathbb{B}} \frac{\mu(z)|\Re^mu \circ {\varphi}(z)|}{(1-|\varphi(z)|^{2})^{\frac{\gamma +n+1}{p}}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < +\infty \end{align*} |
and
\begin{align*} I_{j}: = \sup_{z\in\mathbb{B}}\frac{\mu(z)|\sum _{i = j}^{m}\Re^{m-i}u \circ {\varphi}(z)B_{i,j}(\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < +\infty \end{align*} |
for j = \overline{1, m} .
Moreover, if the operator C_{{\varphi}}\Re^{m}M_{u}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is bounded, then
\begin{align*} \|C_{{\varphi}}\Re^{m}M_{u}\|_{A^p_{w_{\gamma,\delta}}\rightarrow H_\mu^\infty} \asymp\sum_{j = 0}^{m}I_{j}. \end{align*} |
Theorem 4.2. Let -1 < \gamma < +\infty , \delta\leq0 , 0 < p < +\infty , m\in\mathbb{N} , u_j\in H(\mathbb{B}) , j = \overline{0, m} , and \varphi\in S(\mathbb{B}) . Then, the operator \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\to H_\mu^\infty is compact if and only if the operator \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\to H_\mu^\infty is bounded,
\begin{align} \lim_{|\varphi(z)|\rightarrow1}\frac{\mu(z)|\sum _{i = j}^{m}u_{i}(z)B_{i,j} (\varphi(z))|}{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p} } = 0 \end{align} | (4.18)
for j = \overline{1, m} , and
\begin{align} \lim_{|\varphi(z)|\rightarrow1}\frac{\mu(z)|u_{0}(z)| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} = 0. \end{align} | (4.19) |
Proof. Assume that the operator \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is compact. It is obvious that the operator \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is bounded.
If \|\varphi\|_{\infty} < 1 , then it is clear that (4.18) and (4.19) are true. So, we suppose that \|\varphi\|_{\infty} = 1 . Let \{z_{k}\} be a sequence in \mathbb{B} such that
\begin{align*} \lim\limits_{k\rightarrow\infty}|\varphi(z_k)| = 1 \quad \mbox{and let} \quad h_{k}^{(j)} = h_{\varphi(z_{k})}^{(j)}, \end{align*}
where h_{w}^{(j)} are defined in (4.10) for a fixed j\in\{1, 2, \ldots, m\} . Then, it follows that h_{k}^{(j)}\rightarrow 0 uniformly on any compact subset of \mathbb{B} as k\rightarrow \infty . Hence, by Lemma 4.1, we have
\begin{align*} \lim_{k\to\infty}\|\mathfrak{S}^m_{\vec{u},{\varphi}} h_{k}^{(j)}\|_{H_{\mu}^{\infty}} = 0. \end{align*}
Then, for all sufficiently large k ,
\begin{align} \frac{\mu(z_{k})|\sum_{i = j}^{m}u_{i}(z_{k})B_{i,j}(\varphi(z_{k}))|}{{(1-|\varphi(z_k)|^2)^{\frac{\gamma+n+1}{p}+j}}}\Big[\log\Big(1-\frac{1} {\log|\varphi (z_k)|}\Big)\Big]^{-\frac{\delta}{p}} \leq L_j\|\mathfrak{S}^m_{\vec{u},{\varphi}} h_{k}^{(j)}\|_{H_{\mu}^{\infty}}. \end{align} | (4.20)
Letting k\rightarrow \infty in (4.20), we obtain (4.18).
Now, we discuss the case of j = 0 . Let h_{k}^{(0)} = h_{\varphi(z_{k})}^{(0)} , where h_{w}^{(0)} is defined in (4.16). Then, we also have that \|h_{k}^{(0)}\|_{A^p_{w_{\gamma, \delta}}} < +\infty and h_{k}^{(0)}\rightarrow 0 uniformly on any compact subset of \mathbb{B} as k\rightarrow \infty . Hence, by Lemma 4.1, one has that
\begin{align} \lim_{k\to\infty}\|\mathfrak{S}^m_{\vec{u},{\varphi}} h_{k}^{(0)}\|_{H_{\mu}^{\infty}(\mathbb{B})} = 0. \end{align} | (4.21) |
Then, by (4.21), we know that (4.19) is true.
Conversely, assume that \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is bounded and that (4.18) and (4.19) hold. One has that
\begin{align} \mu(z)|u_{0}(z)|\leq C < +\infty \end{align} | (4.22) |
and
\begin{align} \mu(z)\Big|\sum_{i = j}^{m}(u_{i}(z) B_{i,j}(\varphi(z)))\Big|\leq C < +\infty \end{align} | (4.23) |
for any z\in\mathbb{B} . By (4.18) and (4.19), for arbitrary \varepsilon > 0 , there is an r\in(0, 1) such that, for any z\in K = \{z\in\mathbb{B}:|\varphi(z)| > r\} ,
\begin{align} \frac{\mu(z)|u_{0}(z)| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1} {\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < \varepsilon \end{align} | (4.24)
and
\begin{align} \frac{\mu(z)\Big|\sum_{i = j}^{m}(u_{i}(z)B_{i,j}(\varphi(z)))\Big| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}+j}} \Big[\log\Big(1-\frac{1}{\log|\varphi(z)|}\Big)\Big]^{-\frac{\delta}{p}} < \varepsilon. \end{align} | (4.25) |
Assume that \{f_{s}\} is a sequence such that \sup_{s\in\mathbb{N}}\|f_{s}\|_{A^p_{w_{\gamma, \delta}}}\leq M < +\infty and f_{s}\rightarrow 0 uniformly on any compact subset of \mathbb{B} as s\rightarrow \infty . Then by Theorem 3.1, Theorem 3.2 and (4.22)–(4.25), one has that
\begin{align} \|\mathfrak{S}^m_{\vec{u},{\varphi}} f_{s}\|_{H_{\mu}^{\infty}} & = \sup_{z\in\mathbb{B}}\mu(z)\Big|u_{0}(z)f_{s}(\varphi(z))+ \sum_{i = 1}^{m}u_{i}(z)\Re^{i} f_{s}(\varphi(z))\Big|\\ &\leq \sup_{z\in K}\mu(z)\Big|u_{0}(z)f_{s}(\varphi(z))+ \sum_{i = 1}^{m}u_{i}(z)\Re^{i} f_{s}(\varphi(z))\Big|\\ &\quad+\sup_{z\in\mathbb{B}\setminus K}\mu(z)\Big|u_{0}(z)f_{s}(\varphi(z))+ \sum_{i = 1}^{m}u_{i}(z)\Re^{i} f_{s}(\varphi(z))\Big|\\ &\lesssim \sup_{z\in K}\frac{\mu(z)|u_{0}(z)| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1} {\log|\varphi (z)|}\Big)\Big]^{-\frac{\delta}{p}}\|f_{s}\|_{A^p_{w_{\gamma,\delta}}} \\ &\quad+\sum_{j = 1}^{m}\sup_{z\in K}\frac{\mu(z)\Big|\sum_{i = j}^{m}u_{i}(z) B_{i,j}(\varphi(z))\Big| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}+j}}\Big[\log\Big(1-\frac{1} {\log|\varphi (z)|}\Big)\Big]^{-\frac{\delta}{p}} \|f_{s}\|_{A^p_{w_{\gamma,\delta}}} \\ &\quad+\sup_{z\in\mathbb{B}\setminus K}\mu(z)|u_{0}(z)||f_{s}(\varphi(z))|\\ &\quad+\sup_{z\in\mathbb{B}\setminus K}\sum_{j = 1}^{m} \mu(z)\Big|\sum_{i = j}^{m}u_{i}(z)B_{i,j}(\varphi(z))\Big| \max_{\{l_{1},l_{2},\ldots,l_{j}\}}\Big|\frac{\partial^{j} f_{s}}{\partial z_{l_{1}} \partial z_{l_{2}}\cdots\partial z_{l_{j}}}(\varphi(z))\Big|\\ &\lesssim M\varepsilon+C\sup_{|w|\leq r}\sum_{j = 0}^{m} \max_{\{l_{1},l_{2},\ldots,l_{j}\}}\Big|\frac{\partial^{j} f_{s}}{\partial z_{l_{1}} \partial z_{l_{2}}\cdots\partial z_{l_{j}}}(w)\Big|. \end{align} | (4.26)
Since f_{s}\rightarrow0 uniformly on any compact subset of \mathbb{B} as s\rightarrow \infty , by Cauchy's estimates we also have that \frac{\partial^{j} f_{s}}{\partial z_{l_{1}}\partial z_{l_{2}}\cdots\partial z_{l_{j}}}\rightarrow 0 uniformly on any compact subset of \mathbb{B} as s\rightarrow \infty . From this, using the fact that \{w\in{\mathbb{B}}:|w|\leq r\} is a compact subset of \mathbb{B} and letting s\rightarrow \infty in inequality (4.26), one gets that
\begin{align*} \limsup_{s\rightarrow \infty}\|\mathfrak{S}^m_{\vec{u},{\varphi}} f_{s}\|_{H_{\mu}^{\infty}}\lesssim \varepsilon. \end{align*} |
Since \varepsilon is an arbitrary positive number, it follows that
\begin{align*} \lim_{s\rightarrow \infty}\|\mathfrak{S}^m_{\vec{u},{\varphi}} f_{s}\|_{H_{\mu}^{\infty}} = 0. \end{align*} |
By Lemma 4.1, the operator \mathfrak{S}^m_{\vec{u}, {\varphi}}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is compact.
As before, we also have the following result.
Corollary 4.2. Let m\in\mathbb{N} , u\in H(\mathbb{B}) , \varphi\in S(\mathbb{B}) , and \mu a weight function on \mathbb{B} . Then, the operator C_{{\varphi}}\Re^{m}M_{u}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is compact if and only if the operator C_{{\varphi}}\Re^{m}M_{u}:A^p_{w_{\gamma, \delta}}\rightarrow H_\mu^\infty is bounded,
\begin{align*} \lim_{|\varphi(z)|\rightarrow1}\frac{\mu(z)|\Re^mu \circ {\varphi}(z)| }{(1-|\varphi(z)|^2)^{\frac{\gamma+n+1}{p}}}\Big[\log\Big(1-\frac{1}{\log|\varphi (z)|}\Big)\Big]^{-\frac{\delta}{p}} = 0 \end{align*} |
and
\begin{align*} \lim_{|\varphi(z)|\rightarrow1}\frac{\mu(z)|\sum_{i = j}^{m}\Re^{m-i}u \circ {\varphi}(z)B_{i,j}(\varphi(z))| }{(1-|\varphi(z)|^2)^{\frac{\gamma +n+1}{p}+j}}\Big[\log\Big(1-\frac{1}{\log|\varphi (z)|}\Big)\Big]^{-\frac{\delta}{p}} = 0 \end{align*}
for j = \overline{1, m} .
In this paper, we studied the logarithmic Bergman-type space on the unit ball and obtained some of its properties. As applications, we completely characterized the boundedness and compactness of the operator
\begin{align*} \mathfrak{S}^m_{\vec{u},{\varphi}} = \sum_{i = 0}^{m}M_{u_i}C_{\varphi}\Re^{i} \end{align*} |
from the logarithmic Bergman-type space to the weighted-type space on the unit ball. It should be pointed out that we used a new method and technique to characterize the boundedness of such operators without condition (1.5), which is perhaps the special flavour of this paper.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by Sichuan Science and Technology Program (2022ZYD0010) and the Graduate Student Innovation Foundation (Y2022193).
The authors declare that they have no competing interests.