Understanding the patterns of financial activities and predicting their evolution has long been a significant challenge in behavioral finance. Stock price prediction is particularly difficult due to the inherent complexity and stochastic nature of the stock market. Deep learning models offer a more robust solution to nonlinear problems than traditional algorithms. In this paper, we propose a simple yet effective fusion model that leverages the strengths of both Transformers and convolutional neural networks (CNNs). The CNN component is employed to extract local features, while the Transformer component captures temporal dependencies. To validate the effectiveness of the proposed approach, we conducted experiments on four stocks representing different sectors: finance, technology, industry, and agriculture. We performed both single-step and multi-step predictions. The experimental results demonstrate that our method significantly improves prediction accuracy, reducing error rates by 45%, 32%, and 36.8% compared with long short-term memory (LSTM), attention-based LSTM, and Transformer models, respectively.
Citation: Ying Li, Xiangrong Wang, Yanhui Guo. CNN-Trans-SPP: A small Transformer with CNN for stock price prediction[J]. Electronic Research Archive, 2024, 32(12): 6717-6732. doi: 10.3934/era.2024314
[1] Cai-Mei Yan, Jin-Lin Liu. On second-order differential subordination for certain meromorphically multivalent functions. AIMS Mathematics, 2020, 5(5): 4995-5003. doi: 10.3934/math.2020320
[2] Jianhua Gong, Muhammad Ghaffar Khan, Hala Alaqad, Bilal Khan. Sharp inequalities for q-starlike functions associated with differential subordination and q-calculus. AIMS Mathematics, 2024, 9(10): 28421-28446. doi: 10.3934/math.20241379
[3] Ying Yang, Jin-Lin Liu. Some geometric properties of certain meromorphically multivalent functions associated with the first-order differential subordination. AIMS Mathematics, 2021, 6(4): 4197-4210. doi: 10.3934/math.2021248
[4] K. Saritha, K. Thilagavathi. Differential subordination, superordination results associated with Pascal distribution. AIMS Mathematics, 2023, 8(4): 7856-7864. doi: 10.3934/math.2023395
[5] Inhwa Kim, Young Jae Sim, Nak Eun Cho. First-order differential subordinations associated with Carathéodory functions. AIMS Mathematics, 2024, 9(3): 5466-5479. doi: 10.3934/math.2024264
[6] Ebrahim Amini, Mojtaba Fardi, Shrideh Al-Omari, Rania Saadeh. Certain differential subordination results for univalent functions associated with q-Salagean operators. AIMS Mathematics, 2023, 8(7): 15892-15906. doi: 10.3934/math.2023811
[7] Shujaat Ali Shah, Ekram Elsayed Ali, Adriana Cătaș, Abeer M. Albalahi. On fuzzy differential subordination associated with q-difference operator. AIMS Mathematics, 2023, 8(3): 6642-6650. doi: 10.3934/math.2023336
[8] Georgia Irina Oros, Gheorghe Oros, Daniela Andrada Bardac-Vlada. Certain geometric properties of the fractional integral of the Bessel function of the first kind. AIMS Mathematics, 2024, 9(3): 7095-7110. doi: 10.3934/math.2024346
[9] Georgia Irina Oros. Carathéodory properties of Gaussian hypergeometric function associated with differential inequalities in the complex plane. AIMS Mathematics, 2021, 6(12): 13143-13156. doi: 10.3934/math.2021759
[10] Ekram E. Ali, Georgia Irina Oros, Abeer M. Albalahi. Differential subordination and superordination studies involving symmetric functions using a q-analogue multiplier operator. AIMS Mathematics, 2023, 8(11): 27924-27946. doi: 10.3934/math.20231428
Throughout this paper, we assume that
$$ n\in\mathbb{N},\quad p\in\mathbb{N}\setminus\{1\},\quad -1\le B<1,\quad B<A\quad\text{and}\quad \alpha>0. \tag{1.1} $$
Let $A_n(p)$ be the class of functions of the form:
$$ f(z)=z^{p}+\sum_{k=n}^{\infty}a_{p+k}z^{p+k}, \tag{1.2} $$
which are analytic in the open unit disk
$$ U=\{z: z\in\mathbb{C}\ \text{and}\ |z|<1\}. $$
For functions $f(z)$ and $g(z)$ analytic in $U$, we say that $f(z)$ is subordinate to $g(z)$, written $f(z)\prec g(z)$ $(z\in U)$, if there exists an analytic function $w(z)$ in $U$ such that
$$ |w(z)|\le|z| \quad\text{and}\quad f(z)=g(w(z)) \quad (z\in U). $$
If $g(z)$ is univalent in $U$, then
$$ f(z)\prec g(z)\ (z\in U)\ \Longleftrightarrow\ f(0)=g(0)\ \text{and}\ f(U)\subset g(U). $$
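The two formulations of subordination can be illustrated numerically. The following sketch is ours (not part of the paper); it uses the hypothetical sample values $A=0.5$, $B=-0.5$, the convex target $g(z)=\frac{1+Az}{1+Bz}$, the Schwarz-type function $w(z)=z^{2}$, and $f=g\circ w$, and checks $f(0)=g(0)$, $|w(z)|\le|z|$, and that $\operatorname{Re}f$ stays inside the range $g$ attains on $U$:

```python
import cmath

# Illustration only: sample target g(z) = (1+Az)/(1+Bz), Schwarz-type
# function w(z) = z^2 (so |w(z)| <= |z|), and the subordinate f = g(w(z)).
A, B = 0.5, -0.5
g = lambda z: (1 + A * z) / (1 + B * z)
w = lambda z: z * z
f = lambda z: g(w(z))

assert f(0) == g(0) == 1.0                       # f(0) = g(0)
for k in range(200):
    z = 0.95 * cmath.exp(2j * cmath.pi * k / 200)
    assert abs(w(z)) <= abs(z) + 1e-12           # |w(z)| <= |z|
    # Re g on U lies in ((1-A)/(1-B), (1+A)/(1+B)) = (1/3, 3); so must Re f.
    assert 1 / 3 < f(z).real < 3
```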
Definition. A function $f(z)\in A_n(p)$ is said to be in the class $Q_n(A,B,\alpha)$ if it satisfies the following second-order differential subordination:
$$ (1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\prec p\,\frac{1+Az}{1+Bz} \quad (z\in U). \tag{1.3} $$
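As a quick sanity check on the normalization (this numeric sketch is ours, with hypothetical sample values $p=3$, $\alpha=0.7$): for $f(z)=z^{p}$ the left-hand side of (1.3) reduces to the constant $p$, which is why the function $z^{1-p}f'(z)$ used later takes the value $p$ at the origin.

```python
# Illustration only: for f(z) = z^p the operator in (1.3) is identically p,
# since z^{1-p} f'(z) = p and z^{2-p} f''(z) = p(p-1).
p, alpha = 3, 0.7

def operator(f, df, d2f, z):
    # (1 - alpha) z^{1-p} f'(z) + (alpha / (p - 1)) z^{2-p} f''(z)
    return (1 - alpha) * z ** (1 - p) * df(z) + alpha / (p - 1) * z ** (2 - p) * d2f(z)

f   = lambda z: z ** p
df  = lambda z: p * z ** (p - 1)
d2f = lambda z: p * (p - 1) * z ** (p - 2)

for z in (0.3, 0.5 + 0.4j, -0.8j):
    assert abs(operator(f, df, d2f, z) - p) < 1e-9
```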
Recently, several authors (see, for example, [1,2,3,4,5,6,7,8,10,11,12,13,14,15,17] and the references cited therein) have introduced and studied various subclasses of multivalent analytic functions, investigating properties such as distortion bounds, inclusion relations, and coefficient estimates. In this paper we obtain an inclusion relation, sharp bounds on $\operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)$, $\operatorname{Re}\left(\frac{f(z)}{z^{p}}\right)$ and $|f(z)|$, and coefficient estimates for functions $f(z)$ belonging to the class $Q_n(A,B,\alpha)$. Furthermore, we investigate a new problem, namely, to find
$$ \min_{|z|=r<1}\operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}, $$
where $f(z)$ varies in the class:
$$ Q_n(A,B,0)=\left\{f(z)\in A_n(p): \frac{f'(z)}{z^{p-1}}\prec p\,\frac{1+Az}{1+Bz}\ (z\in U)\right\}. \tag{1.4} $$
We need the following lemma in order to derive our main results for the class $Q_n(A,B,\alpha)$.
Lemma. (see [9]) Let the function $g(z)$ be analytic in $U$. Suppose also that the function $h(z)$ is analytic and convex univalent in $U$ with $h(0)=g(0)$. If
$$ g(z)+\frac{1}{\mu}\,zg'(z)\prec h(z), $$
where $\operatorname{Re}\mu\ge 0$ and $\mu\neq 0$, then $g(z)\prec h(z)$.
Theorem 1. Let $0<\alpha_1<\alpha_2$. Then $Q_n(A,B,\alpha_2)\subset Q_n(A,B,\alpha_1)$.
Proof. Suppose that
$$ g(z)=z^{1-p}f'(z) \tag{2.1} $$
for $f(z)\in Q_n(A,B,\alpha_2)$. Then the function $g(z)$ is analytic in $U$ with $g(0)=p$. By using (1.3) and (2.1), we have
$$ (1-\alpha_2)z^{1-p}f'(z)+\frac{\alpha_2}{p-1}\,z^{2-p}f''(z)=g(z)+\frac{\alpha_2}{p-1}\,zg'(z)\prec p\,\frac{1+Az}{1+Bz}. \tag{2.2} $$
An application of the above Lemma (with $\mu=\frac{p-1}{\alpha_2}>0$) yields
$$ g(z)\prec p\,\frac{1+Az}{1+Bz}. \tag{2.3} $$
By noting that $0<\frac{\alpha_1}{\alpha_2}<1$ and that the function $\frac{1+Az}{1+Bz}$ is convex univalent in $U$, it follows from (2.1), (2.2) and (2.3) that
$$ (1-\alpha_1)z^{1-p}f'(z)+\frac{\alpha_1}{p-1}\,z^{2-p}f''(z)=\frac{\alpha_1}{\alpha_2}\left((1-\alpha_2)z^{1-p}f'(z)+\frac{\alpha_2}{p-1}\,z^{2-p}f''(z)\right)+\left(1-\frac{\alpha_1}{\alpha_2}\right)g(z)\prec p\,\frac{1+Az}{1+Bz}. $$
This shows that $f(z)\in Q_n(A,B,\alpha_1)$. The proof of Theorem 1 is completed.
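The convex-combination identity at the heart of this proof is purely algebraic, so it can be probed numerically. A small sketch of ours (sample values $p=3$, $k=2$, $c=0.05$, $\alpha_1=0.4$, $\alpha_2=0.9$; all hypothetical) for the polynomial $f(z)=z^{p}+cz^{p+k}$:

```python
# Illustration only: check the convex-combination identity used in Theorem 1
# for a sample polynomial f(z) = z^p + c z^{p+k}.
p, k, c = 3, 2, 0.05
a1, a2 = 0.4, 0.9  # 0 < alpha_1 < alpha_2

def L(alpha, z):
    # the operator (1-alpha) z^{1-p} f'(z) + (alpha/(p-1)) z^{2-p} f''(z)
    df  = p * z ** (p - 1) + c * (p + k) * z ** (p + k - 1)
    d2f = p * (p - 1) * z ** (p - 2) + c * (p + k) * (p + k - 1) * z ** (p + k - 2)
    return (1 - alpha) * z ** (1 - p) * df + alpha / (p - 1) * z ** (2 - p) * d2f

def g(z):  # g(z) = z^{1-p} f'(z)
    return z ** (1 - p) * (p * z ** (p - 1) + c * (p + k) * z ** (p + k - 1))

for z in (0.5, 0.2 + 0.6j):
    lhs = L(a1, z)
    rhs = (a1 / a2) * L(a2, z) + (1 - a1 / a2) * g(z)
    assert abs(lhs - rhs) < 1e-9
```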
Theorem 2. Let $f(z)\in Q_n(A,B,\alpha)$. Then, for $|z|=r<1$,
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)\ge p\left(1-(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}r^{nm}}{\alpha nm+p-1}\right), \tag{2.4} $$
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)>p\left(1-(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}}{\alpha nm+p-1}\right), \tag{2.5} $$
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)\le p\left(1+(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}r^{nm}}{\alpha nm+p-1}\right) \tag{2.6} $$
and
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)<p\left(1+(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}}{\alpha nm+p-1}\right) \quad (B\neq -1). \tag{2.7} $$
All the bounds are sharp for the function $f_n(z)$ given by
$$ f_n(z)=z^{p}+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}z^{nm+p}}{(nm+p)(\alpha nm+p-1)} \quad (z\in U). \tag{2.8} $$
Proof. It is known that, for $|\xi|\le\sigma$ $(\sigma<1)$,
$$ \left|\frac{1+A\xi}{1+B\xi}-\frac{1-AB\sigma^{2}}{1-B^{2}\sigma^{2}}\right|\le\frac{(A-B)\sigma}{1-B^{2}\sigma^{2}} \tag{2.9} $$
and
$$ \frac{1-A\sigma}{1-B\sigma}\le\operatorname{Re}\left(\frac{1+A\xi}{1+B\xi}\right)\le\frac{1+A\sigma}{1+B\sigma}. \tag{2.10} $$
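Both estimates express that the Möbius map $\xi\mapsto\frac{1+A\xi}{1+B\xi}$ sends the disk $|\xi|\le\sigma$ into a disk with the stated center and radius. A sampling check of ours (hypothetical sample values $A=0.6$, $B=-0.4$, $\sigma=0.8$):

```python
import cmath, math

# Illustration only: sample the disk |xi| <= sigma and verify (2.9)-(2.10).
A, B, sigma = 0.6, -0.4, 0.8
center = (1 - A * B * sigma ** 2) / (1 - B ** 2 * sigma ** 2)
radius = (A - B) * sigma / (1 - B ** 2 * sigma ** 2)

for k in range(360):
    for rho in (0.0, 0.5 * sigma, sigma):
        xi = rho * cmath.exp(1j * math.radians(k))
        val = (1 + A * xi) / (1 + B * xi)
        assert abs(val - center) <= radius + 1e-12                      # (2.9)
        assert (1 - A * sigma) / (1 - B * sigma) - 1e-12 <= val.real    # (2.10), lower
        assert val.real <= (1 + A * sigma) / (1 + B * sigma) + 1e-12    # (2.10), upper
```

Note that the extreme real values $\frac{1\mp A\sigma}{1\mp B\sigma}$ are exactly $\text{center}\mp\text{radius}$, attained at $\xi=\mp\sigma$.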
Let $f(z)\in Q_n(A,B,\alpha)$. Then we can write
$$ (1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)=p\,\frac{1+Aw(z)}{1+Bw(z)} \quad (z\in U), \tag{2.11} $$
where $w(z)=w_n z^{n}+w_{n+1}z^{n+1}+\cdots$ is analytic and $|w(z)|<1$ for $z\in U$. By the Schwarz lemma, we know that $|w(z)|\le|z|^{n}$ $(z\in U)$. It follows from (2.11) that
$$ \frac{(1-\alpha)(p-1)}{\alpha}\,z^{\frac{(1-\alpha)(p-1)}{\alpha}-1}f'(z)+z^{\frac{(1-\alpha)(p-1)}{\alpha}}f''(z)=\frac{p(p-1)}{\alpha}\,z^{\frac{p-1}{\alpha}-1}\left(\frac{1+Aw(z)}{1+Bw(z)}\right), $$
which implies that
$$ \left(z^{\frac{(1-\alpha)(p-1)}{\alpha}}f'(z)\right)'=\frac{p(p-1)}{\alpha}\,z^{\frac{p-1}{\alpha}-1}\left(\frac{1+Aw(z)}{1+Bw(z)}\right). $$
After integration we arrive at
$$ f'(z)=\frac{p(p-1)}{\alpha}\,z^{-\frac{(1-\alpha)(p-1)}{\alpha}}\int_{0}^{z}\xi^{\frac{p-1}{\alpha}-1}\left(\frac{1+Aw(\xi)}{1+Bw(\xi)}\right)d\xi=\frac{p(p-1)}{\alpha}\,z^{p-1}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left(\frac{1+Aw(tz)}{1+Bw(tz)}\right)dt. \tag{2.12} $$
Since
$$ |w(tz)|\le t^{n}r^{n} \quad (|z|=r<1;\ 0\le t\le 1), $$
we get from (2.12) and the left-hand inequality in (2.10) that, for $|z|=r<1$,
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)\ge\frac{p(p-1)}{\alpha}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left(\frac{1-At^{n}r^{n}}{1-Bt^{n}r^{n}}\right)dt=p-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}r^{nm}}{\alpha nm+p-1}, \tag{2.13} $$
and, for $z\in U$,
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)>\frac{p(p-1)}{\alpha}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left(\frac{1-At^{n}}{1-Bt^{n}}\right)dt=p-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}}{\alpha nm+p-1}. $$
Similarly, by using (2.12) and the right-hand inequality in (2.10), we obtain (2.6) and (2.7) (with $B\neq -1$).
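The term-by-term integration that turns the integral in (2.13) into the series can be verified numerically. A sketch of ours, using a hand-rolled Simpson rule and hypothetical sample values $p=3$, $\alpha=1$, $n=1$, $A=0.5$, $B=-0.5$, $r=0.6$:

```python
# Illustration only: the integral side of (2.13) vs. its series expansion.
p, alpha, n = 3, 1.0, 1
A, B, r = 0.5, -0.5, 0.6

def simpson(f, a, b, m=2000):
    # composite Simpson rule; m must be even
    h = (b - a) / m
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

integrand = lambda t: t ** ((p - 1) / alpha - 1) * (1 - A * (t * r) ** n) / (1 - B * (t * r) ** n)
lhs = p * (p - 1) / alpha * simpson(integrand, 0.0, 1.0)

series = sum(B ** (m - 1) * r ** (n * m) / (alpha * n * m + p - 1) for m in range(1, 200))
rhs = p - p * (p - 1) * (A - B) * series

assert abs(lhs - rhs) < 1e-9
```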
Furthermore, for the function $f_n(z)$ given by (2.8), we find that $f_n(z)\in A_n(p)$,
$$ f_n'(z)=pz^{p-1}+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}z^{nm+p-1}}{\alpha nm+p-1} \tag{2.14} $$
and
$$ (1-\alpha)z^{1-p}f_n'(z)+\frac{\alpha}{p-1}\,z^{2-p}f_n''(z)=p+p(A-B)\sum_{m=1}^{\infty}(-B)^{m-1}z^{nm}=p\,\frac{1+Az^{n}}{1+Bz^{n}}. $$
Hence $f_n(z)\in Q_n(A,B,\alpha)$ and, from (2.14), we conclude that the inequalities (2.4) to (2.7) are sharp. The proof of Theorem 2 is completed.
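The key identity above, namely that the operator maps the extremal $f_n$ of (2.8) exactly onto $p\,\frac{1+Az^{n}}{1+Bz^{n}}$, can be confirmed with a truncated series. This numeric sketch is ours (hypothetical sample values $p=3$, $\alpha=0.7$, $n=2$, $A=0.5$, $B=-0.5$):

```python
# Illustration only: apply the operator to a truncation of f_n from (2.8)
# and compare with p(1+A z^n)/(1+B z^n).
p, alpha, n = 3, 0.7, 2
A, B = 0.5, -0.5
M = 80  # truncation order of the series in (2.8)

def fn_ops(z):
    df  = p * z ** (p - 1)
    d2f = p * (p - 1) * z ** (p - 2)
    for m in range(1, M + 1):
        c = p * (p - 1) * (A - B) * (-B) ** (m - 1) / ((n * m + p) * (alpha * n * m + p - 1))
        e = n * m + p
        df  += c * e * z ** (e - 1)
        d2f += c * e * (e - 1) * z ** (e - 2)
    return (1 - alpha) * z ** (1 - p) * df + alpha / (p - 1) * z ** (2 - p) * d2f

for z in (0.4, 0.3 + 0.3j, -0.5j):
    target = p * (1 + A * z ** n) / (1 + B * z ** n)
    assert abs(fn_ops(z) - target) < 1e-8
```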
Corollary. Let $f(z)\in Q_n(A,B,\alpha)$. If
$$ (p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}}{\alpha nm+p-1}\le 1, \tag{2.15} $$
then $f(z)$ is $p$-valent close-to-convex in $U$.
Proof. Let $f(z)\in Q_n(A,B,\alpha)$ and let (2.15) be satisfied. Then, by using (2.5) in Theorem 2, we see that
$$ \operatorname{Re}\left(\frac{f'(z)}{z^{p-1}}\right)>0 \quad (z\in U). $$
This shows that $f(z)$ is $p$-valent close-to-convex in $U$. The proof of the corollary is completed.
Theorem 3. Let $f(z)\in Q_n(A,B,\alpha)$. Then, for $|z|=r<1$,
$$ \operatorname{Re}\left(\frac{f(z)}{z^{p}}\right)\ge 1-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}r^{nm}}{(nm+p)(\alpha nm+p-1)}, \tag{2.16} $$
$$ \operatorname{Re}\left(\frac{f(z)}{z^{p}}\right)\le 1+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}r^{nm}}{(nm+p)(\alpha nm+p-1)} \tag{2.17} $$
and
$$ \operatorname{Re}\left(\frac{f(z)}{z^{p}}\right)>1-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}}{(nm+p)(\alpha nm+p-1)}. \tag{2.18} $$
All of the above bounds are sharp.
Proof. It is obvious that
$$ f(z)=\int_{0}^{z}f'(\xi)\,d\xi=z\int_{0}^{1}f'(tz)\,dt=z^{p}\int_{0}^{1}t^{p-1}\,\frac{f'(tz)}{(tz)^{p-1}}\,dt \quad (z\in U). \tag{2.19} $$
Making use of (2.4) in Theorem 2, it follows from (2.19) that
$$ \operatorname{Re}\left(\frac{f(z)}{z^{p}}\right)=\int_{0}^{1}t^{p-1}\operatorname{Re}\left(\frac{f'(tz)}{(tz)^{p-1}}\right)dt\ge\int_{0}^{1}t^{p-1}\left(p-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}(rt)^{nm}}{\alpha nm+p-1}\right)dt=1-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}r^{nm}}{(nm+p)(\alpha nm+p-1)}, $$
which gives (2.16).
Similarly, we deduce from (2.6) in Theorem 2 and (2.19) that (2.17) holds true.
Also, with the help of (2.13), we find that
$$ \operatorname{Re}\left(\frac{f'(tz)}{(tz)^{p-1}}\right)\ge\frac{p(p-1)}{\alpha}\int_{0}^{1}u^{\frac{p-1}{\alpha}-1}\left(\frac{1-A(utr)^{n}}{1-B(utr)^{n}}\right)du>p-p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{B^{m-1}t^{nm}}{\alpha nm+p-1} \quad (|z|=r<1;\ 0<t\le 1). $$
From this and (2.19), we obtain (2.18).
Furthermore, it is easy to see that the inequalities (2.16), (2.17) and (2.18) are sharp for the function $f_n(z)$ given by (2.8). Now the proof of Theorem 3 is completed.
Theorem 4. Let $f(z)\in Q_n(A,B,\alpha)$ and $AB\le 1$. Then, for $|z|=r<1$,
$$ |f(z)|\le r^{p}+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}r^{nm+p}}{(nm+p)(\alpha nm+p-1)} \tag{2.20} $$
and
$$ |f(z)|<1+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}}{(nm+p)(\alpha nm+p-1)}. \tag{2.21} $$
The above bounds are sharp.
Proof. Since $AB\le 1$, it follows from (2.9) that
$$ \left|\frac{1+A\xi}{1+B\xi}\right|\le\left|\frac{1-AB\sigma^{2}}{1-B^{2}\sigma^{2}}\right|+\frac{(A-B)\sigma}{1-B^{2}\sigma^{2}}=\frac{1+A\sigma}{1+B\sigma} \quad (|\xi|\le\sigma<1). \tag{2.22} $$
By virtue of (2.12) and (2.22), we have, for $|z|=r<1$,
$$ \left|\frac{f'(uz)}{(uz)^{p-1}}\right|\le\frac{p(p-1)}{\alpha}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left|\frac{1+Aw(utz)}{1+Bw(utz)}\right|dt\le\frac{p(p-1)}{\alpha}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left(\frac{1+A(utr)^{n}}{1+B(utr)^{n}}\right)dt \tag{2.23} $$
$$ <\frac{p(p-1)}{\alpha}\int_{0}^{1}t^{\frac{p-1}{\alpha}-1}\left(\frac{1+Au^{n}t^{n}}{1+Bu^{n}t^{n}}\right)dt. \tag{2.24} $$
By noting that
$$ |f(z)|\le r^{p}\int_{0}^{1}u^{p-1}\left|\frac{f'(uz)}{(uz)^{p-1}}\right|du, $$
we deduce from (2.23) and (2.24) that the desired inequalities hold true.
The bounds in (2.20) and (2.21) are sharp with the extremal function $f_n(z)$ given by (2.8). The proof of Theorem 4 is completed.
Theorem 5. Let $f(z)\in Q_1(A,B,\alpha)$ and
$$ g(z)\in Q_1(A_0,B_0,\alpha_0) \quad (-1\le B_0<1;\ B_0<A_0;\ \alpha_0>0). $$
If
$$ p(p-1)(A_0-B_0)\sum_{m=1}^{\infty}\frac{B_0^{m-1}}{(m+p)(\alpha_0 m+p-1)}\le\frac{1}{2}, \tag{2.25} $$
then $(f*g)(z)\in Q_1(A,B,\alpha)$, where the symbol $*$ denotes the familiar Hadamard product of two analytic functions in $U$.
Proof. Since $g(z)\in Q_1(A_0,B_0,\alpha_0)$, we find from the inequality (2.18) in Theorem 3 and (2.25) that
$$ \operatorname{Re}\left(\frac{g(z)}{z^{p}}\right)>1-p(p-1)(A_0-B_0)\sum_{m=1}^{\infty}\frac{B_0^{m-1}}{(m+p)(\alpha_0 m+p-1)}\ge\frac{1}{2} \quad (z\in U). $$
Thus the function $\frac{g(z)}{z^{p}}$ has the following Herglotz representation:
$$ \frac{g(z)}{z^{p}}=\int_{|x|=1}\frac{d\mu(x)}{1-xz} \quad (z\in U), \tag{2.26} $$
where $\mu(x)$ is a probability measure on the unit circle $|x|=1$ and $\int_{|x|=1}d\mu(x)=1$.
For $f(z)\in Q_1(A,B,\alpha)$, we have
$$ z^{1-p}(f*g)'(z)=\left(z^{1-p}f'(z)\right)*\left(z^{-p}g(z)\right) $$
and
$$ z^{2-p}(f*g)''(z)=\left(z^{2-p}f''(z)\right)*\left(z^{-p}g(z)\right). $$
Thus
$$ (1-\alpha)z^{1-p}(f*g)'(z)+\frac{\alpha}{p-1}\,z^{2-p}(f*g)''(z)=(1-\alpha)\left(\left(z^{1-p}f'(z)\right)*\left(z^{-p}g(z)\right)\right)+\frac{\alpha}{p-1}\left(\left(z^{2-p}f''(z)\right)*\left(z^{-p}g(z)\right)\right)=h(z)*\frac{g(z)}{z^{p}}, \tag{2.27} $$
where
$$ h(z)=(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\prec p\,\frac{1+Az}{1+Bz} \quad (z\in U). \tag{2.28} $$
In view of the fact that the function $\frac{1+Az}{1+Bz}$ is convex univalent in $U$, it follows from (2.26) to (2.28) that
$$ (1-\alpha)z^{1-p}(f*g)'(z)+\frac{\alpha}{p-1}\,z^{2-p}(f*g)''(z)=\int_{|x|=1}h(xz)\,d\mu(x)\prec p\,\frac{1+Az}{1+Bz} \quad (z\in U). $$
This shows that $(f*g)(z)\in Q_1(A,B,\alpha)$. The proof of Theorem 5 is completed.
Theorem 6. Let
$$ f(z)=z^{p}+\sum_{k=n}^{\infty}a_{p+k}z^{p+k}\in Q_n(A,B,\alpha). \tag{2.29} $$
Then
$$ |a_{p+k}|\le\frac{p(p-1)(A-B)}{(p+k)(\alpha k+p-1)} \quad (k\ge n). \tag{2.30} $$
The result is sharp for each $k\ge n$.
Proof. It is known that, if
$$ \varphi(z)=\sum_{j=1}^{\infty}b_j z^{j}\prec\psi(z) \quad (z\in U), $$
where $\varphi(z)$ is analytic in $U$ and $\psi(z)=z+\cdots$ is analytic and convex univalent in $U$, then $|b_j|\le 1$ $(j\in\mathbb{N})$.
By using (2.29), we have
$$ \frac{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)-p}{p(A-B)}=\frac{1}{p(p-1)(A-B)}\sum_{k=n}^{\infty}(p+k)(\alpha k+p-1)a_{p+k}z^{k}\prec\frac{z}{1+Bz} \quad (z\in U). \tag{2.31} $$
In view of the fact that the function $\frac{z}{1+Bz}$ is analytic and convex univalent in $U$, it follows from (2.31) that
$$ \frac{(p+k)(\alpha k+p-1)}{p(p-1)(A-B)}\,|a_{p+k}|\le 1 \quad (k\ge n), $$
which gives (2.30).
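Identity (2.31), in which each coefficient $a_{p+k}$ is scaled by $(p+k)(\alpha k+p-1)$ inside the normalized operator, can be checked numerically for a sample polynomial. This sketch is ours; the coefficient values below are arbitrary illustrations:

```python
# Illustration only: verify identity (2.31) for a sample polynomial
# f(z) = z^p + sum a_{p+k} z^{p+k}.
p, alpha = 3, 0.7
A, B = 0.5, -0.5
coeffs = {2: 0.03, 3: -0.02, 5: 0.01}   # sample a_{p+k} for k = 2, 3, 5

def operator(z):
    df  = p * z ** (p - 1)
    d2f = p * (p - 1) * z ** (p - 2)
    for k, a in coeffs.items():
        df  += a * (p + k) * z ** (p + k - 1)
        d2f += a * (p + k) * (p + k - 1) * z ** (p + k - 2)
    return (1 - alpha) * z ** (1 - p) * df + alpha / (p - 1) * z ** (2 - p) * d2f

for z in (0.5, 0.2 - 0.7j):
    lhs = (operator(z) - p) / (p * (A - B))
    rhs = sum((p + k) * (alpha * k + p - 1) * a * z ** k
              for k, a in coeffs.items()) / (p * (p - 1) * (A - B))
    assert abs(lhs - rhs) < 1e-9
```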
Next we consider the function $f_k(z)$ given by
$$ f_k(z)=z^{p}+p(p-1)(A-B)\sum_{m=1}^{\infty}\frac{(-B)^{m-1}z^{km+p}}{(km+p)(\alpha km+p-1)} \quad (z\in U;\ k\ge n). $$
Since
$$ (1-\alpha)z^{1-p}f_k'(z)+\frac{\alpha}{p-1}\,z^{2-p}f_k''(z)=p\,\frac{1+Az^{k}}{1+Bz^{k}}\prec p\,\frac{1+Az}{1+Bz} \quad (z\in U) $$
and
$$ f_k(z)=z^{p}+\frac{p(p-1)(A-B)}{(p+k)(\alpha k+p-1)}\,z^{p+k}+\cdots $$
for each $k\ge n$, the proof of Theorem 6 is completed.
Theorem 7. Let $f(z)\in Q_n(A,B,0)$. Then, for $|z|=r<1$:
(i) if $M_n(A,B,\alpha,r)\ge 0$, we have
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge\frac{p\left[p-1-\left((p-1)(A+B)+\alpha n(A-B)\right)r^{n}+(p-1)ABr^{2n}\right]}{(p-1)(1-Br^{n})^{2}}; \tag{2.32} $$
(ii) if $M_n(A,B,\alpha,r)\le 0$, we have
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge\frac{p\left(4\alpha^{2}K_A K_B-L_n^{2}\right)}{4\alpha(p-1)(A-B)r^{n-1}(1-r^{2})K_B}, \tag{2.33} $$
where
$$ \begin{cases} K_A=1-A^{2}r^{2n}-nAr^{n-1}(1-r^{2}),\\ K_B=1-B^{2}r^{2n}-nBr^{n-1}(1-r^{2}),\\ L_n=2\alpha(1-ABr^{2n})-\alpha n(A+B)r^{n-1}(1-r^{2})-(p-1)(A-B)r^{n-1}(1-r^{2}),\\ M_n(A,B,\alpha,r)=2\alpha K_B(1-Ar^{n})-L_n(1-Br^{n}). \end{cases} \tag{2.34} $$
The above results are sharp.
Proof. Equality in (2.32) occurs for $z=0$. Thus we assume that $0<|z|=r<1$.
For $f(z)\in Q_n(A,B,0)$, we can write
$$ \frac{f'(z)}{pz^{p-1}}=\frac{1+Az^{n}\varphi(z)}{1+Bz^{n}\varphi(z)} \quad (z\in U), \tag{2.35} $$
where $\varphi(z)$ is analytic and $|\varphi(z)|\le 1$ in $U$. It follows from (2.35) that
$$ (1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)=\frac{f'(z)}{z^{p-1}}+\frac{\alpha p(A-B)\left(nz^{n}\varphi(z)+z^{n+1}\varphi'(z)\right)}{(p-1)\left(1+Bz^{n}\varphi(z)\right)^{2}}=\frac{f'(z)}{z^{p-1}}+\frac{\alpha np}{(p-1)(A-B)}\left(\frac{f'(z)}{pz^{p-1}}-1\right)\left(A-B\,\frac{f'(z)}{pz^{p-1}}\right)+\frac{\alpha p(A-B)z^{n+1}\varphi'(z)}{(p-1)\left(1+Bz^{n}\varphi(z)\right)^{2}}. \tag{2.36} $$
By using the Carathéodory inequality:
$$ |\varphi'(z)|\le\frac{1-|\varphi(z)|^{2}}{1-r^{2}}, $$
we obtain
$$ \operatorname{Re}\left\{\frac{z^{n+1}\varphi'(z)}{\left(1+Bz^{n}\varphi(z)\right)^{2}}\right\}\ge-\frac{r^{n+1}\left(1-|\varphi(z)|^{2}\right)}{(1-r^{2})\left|1+Bz^{n}\varphi(z)\right|^{2}}=-\frac{r^{2n}\left|A-B\,\frac{f'(z)}{pz^{p-1}}\right|^{2}-\left|\frac{f'(z)}{pz^{p-1}}-1\right|^{2}}{(A-B)^{2}r^{n-1}(1-r^{2})}. \tag{2.37} $$
Put $\frac{f'(z)}{pz^{p-1}}=u+iv$ $(u,v\in\mathbb{R})$. Then (2.36) and (2.37), together, yield
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge p\left(1+\frac{\alpha n(A+B)}{(p-1)(A-B)}\right)u-\frac{\alpha npA}{(p-1)(A-B)}-\frac{\alpha npB}{(p-1)(A-B)}\,(u^{2}-v^{2})-\frac{\alpha p\left[r^{2n}\left((A-Bu)^{2}+(Bv)^{2}\right)-\left((u-1)^{2}+v^{2}\right)\right]}{(p-1)(A-B)r^{n-1}(1-r^{2})} $$
$$ =p\left(1+\frac{\alpha n(A+B)}{(p-1)(A-B)}\right)u-\frac{\alpha np}{(p-1)(A-B)}\left(A+Bu^{2}\right)-\frac{\alpha p\left(r^{2n}(A-Bu)^{2}-(u-1)^{2}\right)}{(p-1)(A-B)r^{n-1}(1-r^{2})}+\frac{\alpha p}{(p-1)(A-B)}\left(nB+\frac{1-B^{2}r^{2n}}{r^{n-1}(1-r^{2})}\right)v^{2}. \tag{2.38} $$
We note that
$$ \frac{1-B^{2}r^{2n}}{r^{n-1}(1-r^{2})}\ge\frac{1-r^{2n}}{r^{n-1}(1-r^{2})}=\frac{1}{r^{n-1}}\left(1+r^{2}+r^{4}+\cdots+r^{2(n-2)}+r^{2(n-1)}\right)=\frac{1}{2r^{n-1}}\left[\left(1+r^{2(n-1)}\right)+\left(r^{2}+r^{2(n-2)}\right)+\cdots+\left(r^{2(n-1)}+1\right)\right]\ge n\ge -nB. \tag{2.39} $$
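The chain of inequalities (2.39) can be probed numerically over the whole range $0<r<1$. This sampling sketch is ours (hypothetical sample values $n=4$, $B=-0.6$):

```python
# Illustration only: check (2.39) for n = 4, B = -0.6 on a grid of r values.
n, B = 4, -0.6
for i in range(1, 100):
    r = i / 100
    q = (1 - B ** 2 * r ** (2 * n)) / (r ** (n - 1) * (1 - r ** 2))
    middle = (1 - r ** (2 * n)) / (r ** (n - 1) * (1 - r ** 2))
    # (1-B^2 r^{2n})/(r^{n-1}(1-r^2)) >= (1-r^{2n})/(r^{n-1}(1-r^2)) >= n >= -nB
    assert q >= middle >= n >= -n * B
```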
Combining (2.38) and (2.39), we have
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge p\left(1+\frac{\alpha n(A+B)}{(p-1)(A-B)}\right)u-\frac{\alpha np}{(p-1)(A-B)}\left(A+Bu^{2}\right)+\frac{\alpha p\left((u-1)^{2}-r^{2n}(A-Bu)^{2}\right)}{(p-1)(A-B)r^{n-1}(1-r^{2})}=:\psi_n(u). \tag{2.40} $$
Also, (2.10) and (2.35) imply that
$$ \frac{1-Ar^{n}}{1-Br^{n}}\le u=\operatorname{Re}\left(\frac{f'(z)}{pz^{p-1}}\right)\le\frac{1+Ar^{n}}{1+Br^{n}}. $$
We now calculate the minimum value of $\psi_n(u)$ on the segment $\left[\frac{1-Ar^{n}}{1-Br^{n}},\frac{1+Ar^{n}}{1+Br^{n}}\right]$. Obviously, we get
$$ \psi_n'(u)=p\left(1+\frac{\alpha n(A+B)}{(p-1)(A-B)}\right)-\frac{2\alpha npB}{(p-1)(A-B)}\,u+\frac{2\alpha p\left((1-B^{2}r^{2n})u-(1-ABr^{2n})\right)}{(p-1)(A-B)r^{n-1}(1-r^{2})}, $$
$$ \psi_n''(u)=\frac{2\alpha p}{(p-1)(A-B)}\left(\frac{1-B^{2}r^{2n}}{r^{n-1}(1-r^{2})}-nB\right)\ge\frac{2\alpha np(1-B)}{(p-1)(A-B)}>0 \quad (\text{see }(2.39)) \tag{2.41} $$
and $\psi_n'(u)=0$ if and only if
$$ u=u_n=\frac{2\alpha(1-ABr^{2n})-\alpha n(A+B)r^{n-1}(1-r^{2})-(p-1)(A-B)r^{n-1}(1-r^{2})}{2\alpha\left(1-B^{2}r^{2n}-nBr^{n-1}(1-r^{2})\right)}=\frac{L_n}{2\alpha K_B} \quad (\text{see }(2.34)). \tag{2.42} $$
Since
$$ 2\alpha K_B(1+Ar^{n})-L_n(1+Br^{n})=2\alpha\left[(1+Ar^{n})(1-B^{2}r^{2n})-(1+Br^{n})(1-ABr^{2n})\right]+\alpha nr^{n-1}(1-r^{2})\left[(A+B)(1+Br^{n})-2B(1+Ar^{n})\right]+(p-1)(A-B)r^{n-1}(1-r^{2})(1+Br^{n})=2\alpha(A-B)r^{n}(1+Br^{n})+\alpha n(A-B)r^{n-1}(1-r^{2})(1-Br^{n})+(p-1)(A-B)r^{n-1}(1-r^{2})(1+Br^{n})>0, $$
we see that
$$ u_n<\frac{1+Ar^{n}}{1+Br^{n}}. \tag{2.43} $$
But $u_n$ is not always greater than $\frac{1-Ar^{n}}{1-Br^{n}}$. The following two cases arise.
(i) $u_n\le\frac{1-Ar^{n}}{1-Br^{n}}$, that is, $M_n(A,B,\alpha,r)\ge 0$ (see (2.34)). In view of $\psi_n'(u_n)=0$ and (2.41), the function $\psi_n(u)$ is increasing on the segment $\left[\frac{1-Ar^{n}}{1-Br^{n}},\frac{1+Ar^{n}}{1+Br^{n}}\right]$. Therefore, we deduce from (2.40) that, if $M_n(A,B,\alpha,r)\ge 0$, then (noting that $(u-1)^{2}-r^{2n}(A-Bu)^{2}=0$ at $u=\frac{1-Ar^{n}}{1-Br^{n}}$)
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge\psi_n\!\left(\frac{1-Ar^{n}}{1-Br^{n}}\right)=p\left(1+\frac{\alpha n(A+B)}{(p-1)(A-B)}\right)\frac{1-Ar^{n}}{1-Br^{n}}-\frac{\alpha np}{(p-1)(A-B)}\left(A+B\left(\frac{1-Ar^{n}}{1-Br^{n}}\right)^{2}\right)=p\,\frac{1-Ar^{n}}{1-Br^{n}}-\frac{\alpha np}{(p-1)(A-B)}\left(1-\frac{1-Ar^{n}}{1-Br^{n}}\right)\left(A-B\,\frac{1-Ar^{n}}{1-Br^{n}}\right)=\frac{p\left[p-1-\left((p-1)(A+B)+\alpha n(A-B)\right)r^{n}+(p-1)ABr^{2n}\right]}{(p-1)(1-Br^{n})^{2}}. $$
This proves (2.32).
Next we consider the function $f(z)$ given by
$$ f(z)=p\int_{0}^{z}t^{p-1}\,\frac{1-At^{n}}{1-Bt^{n}}\,dt\in Q_n(A,B,0). $$
It is easy to find that
$$ (1-\alpha)r^{1-p}f'(r)+\frac{\alpha}{p-1}\,r^{2-p}f''(r)=\frac{p\left[p-1-\left((p-1)(A+B)+\alpha n(A-B)\right)r^{n}+(p-1)ABr^{2n}\right]}{(p-1)(1-Br^{n})^{2}}, $$
which shows that the inequality (2.32) is sharp.
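Sharpness can also be confirmed numerically: for this extremal $f$, both $f'$ and $f''$ are available in closed form, so the left side of the equality above can be compared directly with the right side of (2.32). This sketch is ours (hypothetical sample values $p=3$, $\alpha=0.7$, $n=2$, $A=0.5$, $B=-0.5$):

```python
# Illustration only: f'(z) = p z^{p-1}(1-Az^n)/(1-Bz^n) for the extremal above;
# differentiate once more and compare the operator at z = r with (2.32).
p, alpha, n = 3, 0.7, 2
A, B = 0.5, -0.5

for i in range(1, 10):
    r = i / 10
    df  = p * r ** (p - 1) * (1 - A * r ** n) / (1 - B * r ** n)
    d2f = (p * (p - 1) * r ** (p - 2) * (1 - A * r ** n) / (1 - B * r ** n)
           - p * n * (A - B) * r ** (p + n - 2) / (1 - B * r ** n) ** 2)
    lhs = (1 - alpha) * r ** (1 - p) * df + alpha / (p - 1) * r ** (2 - p) * d2f
    rhs = p * (p - 1 - ((p - 1) * (A + B) + alpha * n * (A - B)) * r ** n
               + (p - 1) * A * B * r ** (2 * n)) / ((p - 1) * (1 - B * r ** n) ** 2)
    assert abs(lhs - rhs) < 1e-9
```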
(ii) $u_n\ge\frac{1-Ar^{n}}{1-Br^{n}}$, that is, $M_n(A,B,\alpha,r)\le 0$. In this case, we easily see that
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge\psi_n(u_n). \tag{2.44} $$
In view of (2.34), $\psi_n(u)$ in (2.40) can be written as follows:
$$ \psi_n(u)=\frac{p\left(\alpha K_B u^{2}-L_n u+\alpha K_A\right)}{(p-1)(A-B)r^{n-1}(1-r^{2})}. \tag{2.45} $$
Therefore, if $M_n(A,B,\alpha,r)\le 0$, then it follows from (2.42), (2.44) and (2.45) that
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\}\ge\frac{p\left(\alpha K_B u_n^{2}-L_n u_n+\alpha K_A\right)}{(p-1)(A-B)r^{n-1}(1-r^{2})}=\frac{p\left(4\alpha^{2}K_A K_B-L_n^{2}\right)}{4\alpha(p-1)(A-B)r^{n-1}(1-r^{2})K_B}. $$
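The algebra behind (2.45) and the value $\psi_n(u_n)$ can be double-checked numerically. This sketch is ours (hypothetical sample values $p=3$, $\alpha=0.7$, $n=2$, $A=0.5$, $B=-0.5$, $r=0.6$); it only verifies the algebraic identities, regardless of the sign of $M_n$:

```python
# Illustration only: psi_n of (2.40) equals the quadratic form (2.45), and
# psi_n(u_n) collapses to p(4 a^2 K_A K_B - L_n^2)/(4 a den K_B) as in (2.33).
p, alpha, n = 3, 0.7, 2
A, B, r = 0.5, -0.5, 0.6

KA = 1 - A ** 2 * r ** (2 * n) - n * A * r ** (n - 1) * (1 - r ** 2)
KB = 1 - B ** 2 * r ** (2 * n) - n * B * r ** (n - 1) * (1 - r ** 2)
Ln = (2 * alpha * (1 - A * B * r ** (2 * n))
      - alpha * n * (A + B) * r ** (n - 1) * (1 - r ** 2)
      - (p - 1) * (A - B) * r ** (n - 1) * (1 - r ** 2))
den = (p - 1) * (A - B) * r ** (n - 1) * (1 - r ** 2)

def psi(u):  # definition from (2.40)
    return (p * (1 + alpha * n * (A + B) / ((p - 1) * (A - B))) * u
            - alpha * n * p / ((p - 1) * (A - B)) * (A + B * u ** 2)
            + alpha * p * ((u - 1) ** 2 - r ** (2 * n) * (A - B * u) ** 2) / den)

for u in (0.8, 1.0, 1.3):
    assert abs(psi(u) - p * (alpha * KB * u ** 2 - Ln * u + alpha * KA) / den) < 1e-9

un = Ln / (2 * alpha * KB)
assert abs(psi(un) - p * (4 * alpha ** 2 * KA * KB - Ln ** 2) / (4 * alpha * den * KB)) < 1e-9
```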
To show that the inequality (2.33) is sharp, we take
$$ f(z)=p\int_{0}^{z}t^{p-1}\,\frac{1+At^{n}\varphi(t)}{1+Bt^{n}\varphi(t)}\,dt \quad\text{and}\quad \varphi(z)=-\frac{z-c_n}{1-c_n z} \quad (z\in U), $$
where $c_n\in\mathbb{R}$ is determined by
$$ \frac{f'(r)}{pr^{p-1}}=\frac{1+Ar^{n}\varphi(r)}{1+Br^{n}\varphi(r)}=u_n\in\left[\frac{1-Ar^{n}}{1-Br^{n}},\frac{1+Ar^{n}}{1+Br^{n}}\right). $$
Clearly, $-1\le\varphi(r)<1$, $-1\le c_n<1$, $|\varphi(z)|\le 1$ $(z\in U)$, and so $f(z)\in Q_n(A,B,0)$. Since
$$ \varphi'(r)=-\frac{1-c_n^{2}}{(1-c_n r)^{2}}=-\frac{1-|\varphi(r)|^{2}}{1-r^{2}}, $$
from the above argument we obtain that
$$ (1-\alpha)r^{1-p}f'(r)+\frac{\alpha}{p-1}\,r^{2-p}f''(r)=\psi_n(u_n). $$
The proof of Theorem 7 is completed.
In our present investigation, we have introduced and studied some geometric properties of the class $Q_n(A,B,\alpha)$, which is defined by using the principle of second-order differential subordination. For this function class, we have derived the sharp lower bound on $|z|=r<1$ for the functional
$$ \operatorname{Re}\left\{(1-\alpha)z^{1-p}f'(z)+\frac{\alpha}{p-1}\,z^{2-p}f''(z)\right\} $$
over the class $Q_n(A,B,0)$. We have also obtained other properties of the function class $Q_n(A,B,\alpha)$.
For the benefit and motivation of interested readers, we have also included a number of recent developments on the widespread use of the basic (or $q$-) calculus in the Geometric Function Theory of Complex Analysis (see, for example, [11,14,16,18]), of which the citation [11] is a survey-cum-expository review article on this important subject.
The authors would like to express sincere thanks to the referees for their careful reading and suggestions, which helped us improve the paper. This work was supported by the National Natural Science Foundation of China (Grant No. 11571299).
The authors declare no conflicts of interest.