This paper studies the asymptotic behavior of non-oscillatory solutions of high order differential equations of Poincaré type. We present two new and weaker hypotheses on the coefficients, which imply a well-posedness result and a characterization of the asymptotic behavior of the solution of the Poincaré equation. In our discussion, we use the scalar method: we define a change of variable to reduce the order of the Poincaré equation and show that the new variable satisfies a nonlinear differential equation; we apply the method of variation of parameters and the Banach fixed-point theorem to obtain the well-posedness and asymptotic behavior of the nonlinear equation; and we establish the existence of a fundamental system of solutions and formulas for the asymptotic behavior of the Poincaré type equation by rewriting the results in terms of the original variable. Moreover, we present an example to show that the results introduced in this paper can be applied to classes of functions where the classical theorems fail to apply.
Citation: Aníbal Coronel, Fernando Huancas. New results for the non-oscillatory asymptotic behavior of high order differential equations of Poincaré type[J]. AIMS Mathematics, 2022, 7(4): 6420-6444. doi: 10.3934/math.2022358
The study of operator equations has a long history. Khatri and Mitra [10], Wu and Cain [15], Xiong and Qin [16] and Yuan et al. [18] studied the matrix equations AX=C and XB=D in matrix algebra. Dajić and Koliha [1], Douglas [4] and Liang and Deng [12] investigated these equations in operator spaces. Xu et al. [5,13,14,17] and Fang and Yu [6] studied these equations for adjointable operators on Hilbert C∗-modules. The famous Douglas range inclusion theorem played a key role in the existence of solutions of the equation AX=C. Many scholars discussed the existence and the general formulae of self-adjoint solutions (resp. positive or real positive solutions) of one equation, or of common solutions of two equations. In the finite-dimensional case, Groß gave necessary and sufficient conditions for the matrix equation AX=C to have a real positive solution in [7]. However, a detailed formula for the real positive solutions of this equation is not fully provided there. Dajić and Koliha [2] first provided a general form of the real positive solutions of the equation AX=C in Hilbert space under certain conditions. Recently, the real positive solutions of AX=C were considered in [6,12] for adjointable operators, with the corresponding operator A not necessarily having closed range. However, these formulae for the real positive solutions still carry some additional restrictions. In [16], the authors considered equivalent conditions for the existence of common real positive solutions to the pair of matrix equations AX=C, XB=D in matrix algebra and offered partial common real positive solutions.
Let \mathcal H and \mathcal K be complex Hilbert spaces. We denote the set of all bounded linear operators from \mathcal H into \mathcal K by {\mathcal B}(\mathcal H,\mathcal K), and by {\mathcal B}(\mathcal H) when \mathcal H=\mathcal K. For an operator A, we denote by A^*, R(A), \overline{R(A)} and N(A) the adjoint, the range, the closure of the range and the null space of A, respectively. An operator A\in{\mathcal B}(\mathcal H) is said to be positive (A\geq 0) if \langle Ax,x\rangle\geq 0 for all x\in\mathcal H. Note that a positive operator has a unique square root and
|A| := (A^*A)^{\frac{1}{2}}
is the absolute value of A. Let A^{\dagger} be the Moore-Penrose inverse of A, which is bounded if and only if R(A) is closed [9]. An operator A is called real positive if
Re(A) := \frac{1}{2}(A+A^*)\geq 0.
The set of all real positive operators in {\mathcal B}(\mathcal H) is denoted by Re{\mathcal B}_+(\mathcal H). An operator sequence \{T_n\} converges to T\in{\mathcal B}(\mathcal H) in the strong operator topology if \|T_nx-Tx\|\rightarrow 0 as n\rightarrow\infty for every x\in\mathcal H. This is denoted by
T = s.o.\text{-}\lim_{n\rightarrow\infty}T_n.
Let P_M denote the orthogonal projection onto a closed subspace M. The identity operator on a closed subspace M is denoted by I_M, or by I if there is no confusion.
In this paper, we focus on the problem of characterizing the real positive solutions of the operator equations AX=C and XB=D, with the corresponding operators A and B not necessarily having closed ranges, in an infinite dimensional Hilbert space. Our current goal is three-fold:
Firstly, by using the polar decomposition and strong operator convergence, a completely new representation of the reduced solution F of the operator equation AX=C is given by
F = s.o.\text{-}\lim_{n\rightarrow\infty}\left(|A|+\frac{1}{n}I\right)^{-1}U_A^*C,
where A=U_A|A| is the polar decomposition of A. This solution has the property that F=A^{\dagger}C when R(A) is closed. Furthermore, the necessary and sufficient conditions for the existence of real positive solutions and detailed formulae for the real positive solutions of the equation AX=C are obtained, which improves the related results in [2,7,12]. Some comments on the reduced solution and the real positive solutions are given.
Next, we discuss common solutions of the system of two operator equations AX=C and XB=D. The necessary and sufficient conditions for the existence of common solutions and detailed representations of the general common solutions are provided, which extends the classical closed range case with a short proof. Furthermore, we consider the problem of finding sufficient conditions which ensure that this system has a real positive solution, as well as a formula for these real positive solutions.
Finally, two examples are provided. As shown by Example 4.1, the system of equations AX=C and XB=D has common real positive solutions under the sufficient conditions given above. It is shown by Example 4.2 that a gap is unfortunately contained in the original paper [16], where the authors gave two equivalent conditions for the existence of common real positive solutions of this system in matrix algebra. It remains an open question to give an equivalent condition for the existence of common real positive solutions of AX=C and XB=D in an infinite dimensional Hilbert space.
In general, A^{\dagger}C is the reduced solution of the equation AX=C if R(A) is closed. Liang and Deng gave an expression of the reduced solution, denoted A^{\ddagger}C, through the spectral measure of positive operators in the case where R(A) may not be closed, but sometimes A^{\ddagger} is only a formal symbol, since the spectral integral \int_0^{\infty}\lambda^{-1}dE_{\lambda} (see [12]) may be divergent. In this section, we will give a new representation of the reduced solution of AX=C by means of an operator sequence. We begin with some lemmas.
Lemma 2.1. ([8]) Suppose that \{T_n\}_{n=1}^{\infty} is a sequence of invertible positive operators in {\mathcal B}(\mathcal H) and
T = s.o.\text{-}\lim_{n\rightarrow\infty}T_n.
If T is also invertible and positive, then
T^{-1} = s.o.\text{-}\lim_{n\rightarrow\infty}T_n^{-1}.
Every operator T\in{\mathcal B}(\mathcal H) has a unique polar decomposition T=U_T|T|, where U_T is a partial isometry with P_{\overline{R(T^*)}}=U_T^*U_T ([8]). Denote
T_n = \left(\frac{1}{n}I_{\mathcal H}+|T|\right)^{-1},
for each positive integer n. In [11], the authors studied the operator sequence T_n|T| over Hilbert C∗-modules. It was shown that T_n|T| converges to P_{\overline{R(T^*)}} in the strong operator topology when T is a positive operator. Here we have some related results as follows.
Lemma 2.2. Suppose that T\in{\mathcal B}(\mathcal H) and T_n=\left(\frac{1}{n}I_{\mathcal H}+|T|\right)^{-1}. Then the following statements hold:
(i) ([8,11]) P_{\overline{R(T^*)}} = s.o.\text{-}\lim_{n\rightarrow\infty}T_n|T|.
(ii) T^{\dagger} = s.o.\text{-}\lim_{n\rightarrow\infty}T_nU_T^* if R(T) is closed.
Proof. We only prove the statement (ii). If R(T) is closed, then R(T∗) is also closed. It is natural that H=R(T∗)⊕N(T)=R(T)⊕N(T∗). Thus T and |T| have the following matrix forms, respectively,
\begin{equation} T = \left( \begin{array}{cc} T_{11} & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T)\\ N(T^*) \end{array}\right) \end{equation} | (2.1)
and
\begin{equation} |T| = (T^*T)^{\frac{1}{2}} = \left( \begin{array}{cc} (T_{11}^*T_{11})^{\frac{1}{2}} & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right), \end{equation} | (2.2)
where T11∈B(R(T∗),R(T)) is invertible. Then T† has the matrix form
\begin{equation} T^{\dagger} = \left( \begin{array}{cc} T_{11}^{-1} & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} R(T)\\ N(T^*) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right). \end{equation} | (2.3)
And from (2.2), it is easy to get that
\begin{equation} \frac{1}{n}I_{\mathcal H}+|T| = \left( \begin{array}{cc} \frac{1}{n}I_{R(T^*)}+|T_{11}| & 0 \\ 0 & \frac{1}{n}I_{N(T)} \\ \end{array}\right): \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right). \end{equation} | (2.4)
By the invertibility of the diagonal operator matrix and the above matrix form (2.4), we have
\begin{equation} T_n = \left(\frac{1}{n}I_{\mathcal H}+|T|\right)^{-1} = \left( \begin{array}{cc} \left(\frac{1}{n}I_{R(T^*)}+|T_{11}|\right)^{-1} & 0 \\ 0 & nI_{N(T)} \\ \end{array}\right): \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right). \end{equation} | (2.5)
Because the operator sequence
\left\{\frac{1}{n}I_{R(T^*)}+|T_{11}|\right\}\subset {\mathcal B}(R(T^*))
converges to |T_{11}| and
\frac{1}{n}I_{R(T^*)}+|T_{11}|
is invertible in {\mathcal B}(R(T^*)) for each n, the operator sequence
\left\{\left(\frac{1}{n}I_{R(T^*)}+|T_{11}|\right)^{-1}\right\}\subset {\mathcal B}(R(T^*))
converges to |T_{11}|^{-1} by Lemma 2.1.
Moreover, the partial isometry U_T has the matrix form
\begin{equation} U_T = \left( \begin{array}{cc} U_{11} & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T)\\ N(T^*) \end{array}\right) \end{equation} | (2.6)
with respect to the space decomposition \mathcal H=R(T^*)\oplus N(T), where U_{11} is a unitary from R(T^*) onto R(T). Then T_{11}=U_{11}|T_{11}| and T_{11}^{-1}=|T_{11}|^{-1}U_{11}^* by T=U_T|T| and the matrix forms (2.1), (2.2), and (2.6). From the matrix forms (2.5) and (2.6), we have
T_nU_T^* = \left( \begin{array}{cc} \left(\frac{1}{n}I_{R(T^*)}+|T_{11}|\right)^{-1}U_{11}^* & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} R(T)\\ N(T^*) \end{array}\right) \rightarrow \left( \begin{array}{c} R(T^*)\\ N(T) \end{array}\right).
Since the operator sequence \left(\frac{1}{n}I_{R(T^*)}+|T_{11}|\right)^{-1}U_{11}^* converges to
|T_{11}|^{-1}U_{11}^* = T_{11}^{-1},
we obtain that the operator sequence \{T_nU_T^*\} converges to \left( \begin{array}{cc} T_{11}^{-1} & 0 \\ 0 & 0 \\ \end{array}\right), which is exactly T^{\dagger} by the matrix form (2.3).
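The convergence in statement (ii) is easy to observe numerically in the matrix case (where the range is automatically closed). The following Python sketch builds the polar decomposition from an SVD and compares T_nU_T^* with the Moore-Penrose inverse; the particular matrix and the tolerances are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A minimal numerical sketch of Lemma 2.2(ii): for a matrix T (so R(T) is
# automatically closed), T_n U_T^* = (|T| + I/n)^{-1} U_T^* converges to the
# Moore-Penrose inverse T^+.  The matrix T below is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))     # rank 3, not full rank

# Polar decomposition T = U_T |T| built from the SVD: |T| = V diag(s) V^*, and
# U_T = sum_{s_i > 0} u_i v_i^* is the partial isometry with U_T^* U_T = P_{R(T^*)}.
U, s, Vh = np.linalg.svd(T, full_matrices=False)
r = int(np.sum(s > 1e-12))
absT = (Vh.conj().T * s) @ Vh                     # |T| = (T^* T)^{1/2}
U_T = U[:, :r] @ Vh[:r, :]                        # partial isometry of the polar decomposition

T_pinv = np.linalg.pinv(T)
for n in [10, 1000, 100000]:
    Tn = np.linalg.inv(absT + np.eye(absT.shape[0]) / n)
    err = np.linalg.norm(Tn @ U_T.conj().T - T_pinv)
    print(f"n = {n:>6}:  ||T_n U_T^* - T^+|| = {err:.2e}")
```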
In Lemma 2.2, the statement (ii) is not necessarily true if R(T) is not closed. The following is an example.
Example 2.3. Let \{e_1,e_2,\cdots\} be an orthonormal basis of an infinite dimensional Hilbert space \mathcal H. Define an operator T as follows:
Te_k = \frac{1}{k}e_k,\quad k = 1,2,\cdots.
Then
T_n = \left(\frac{1}{n}I_{\mathcal H}+|T|\right)^{-1} = \left(\frac{1}{n}I_{\mathcal H}+T\right)^{-1},
since T is a positive operator. It is easy to get that
T_ne_k = \frac{kn}{n+k}e_k
for any k. Suppose
x = \sum_{k=1}^{\infty}\frac{1}{k}e_k.
Then x∈H with
\|x\|^2 = \sum_{k=1}^{\infty}\frac{1}{k^2} < \infty.
By direct computation, we have
\|T_nx\|\cdot\|x\| \geq |\langle T_nx,x\rangle| = \sum_{k=1}^{\infty}\frac{n}{k(n+k)} = \sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{n+k}\right) > \sum_{k=1}^{n}\frac{1}{k}-\sum_{k=1}^{n}\frac{1}{n+k} > \sum_{k=1}^{n}\frac{1}{k}-1,
that is, \|T_nx\| > \frac{1}{\|x\|}\left(\sum_{k=1}^{n}\frac{1}{k}-1\right). This shows that the number sequence \{\|T_nx\|\} is divergent, since the harmonic series \sum_{k=1}^{\infty}\frac{1}{k} is divergent. This implies that the operator sequence \{T_n\} is divergent in the strong operator topology.
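The divergence in Example 2.3 can also be observed numerically. The sketch below truncates the basis to the first N coordinates (the truncation is only a computational assumption; the operator itself is infinite dimensional) and shows that \|T_nx\| grows roughly like \sqrt{n}.

```python
import numpy as np

# Numerical illustration of Example 2.3, truncated to the first N basis vectors:
# with T e_k = e_k / k and x = sum_k e_k / k, the norms ||T_n x|| grow roughly
# like sqrt(n), so {T_n} cannot converge in the strong operator topology.
N = 10**6
k = np.arange(1, N + 1, dtype=float)
x = 1.0 / k                                       # coordinates of x (square-summable)
for n in [10, 100, 1000, 10000]:
    Tn_x = (k * n / (n + k)) * x                  # T_n e_k = (kn/(n+k)) e_k
    print(f"n = {n:>5}:  ||T_n x|| = {np.linalg.norm(Tn_x):8.3f}")
```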
Lemma 2.4. ([4]) Let A,C∈B(H). The following three conditions are equivalent:
(1) R(C)⊆R(A).
(2) There exists λ>0 such that CC∗≤λAA∗.
(3) There exists D∈B(H) such that AD=C.
If one of these conditions holds, then there exists a unique solution F\in{\mathcal B}(\mathcal H) of the equation AX=C such that R(F)\subseteq\overline{R(A^*)} and N(F)=N(C). This solution is called the reduced solution.
Theorem 2.5. Let A,C∈B(H) be such that the equation AX=C has a solution. Then the unique reduced solution F can be formulated by
F = s.o.\text{-}\lim_{n\rightarrow\infty}\left(|A|+\frac{1}{n}I\right)^{-1}U_A^*C,
where U_A\in{\mathcal B}(\mathcal H) is the partial isometry associated with the polar decomposition A=U_A|A|. In particular, if R(A) is closed, the reduced solution F is A^{\dagger}C.
Proof. Since A=U_A|A|, we have U_A^*A=|A|. Assuming that X is a solution of AX=C, then |A|X=U_A^*C. Multiplying by \left(|A|+\frac{1}{n}I\right)^{-1} on the left, it follows that
\left(|A|+\frac{1}{n}I\right)^{-1}|A|X = \left(|A|+\frac{1}{n}I\right)^{-1}U_A^*C.
From Lemma 2.2 (i), \left\{\left(|A|+\frac{1}{n}I\right)^{-1}|A|X\right\} converges to P_{\overline{R(A^*)}}X in the strong operator topology. So
P_{\overline{R(A^*)}}X = s.o.\text{-}\lim_{n\rightarrow\infty}\left(|A|+\frac{1}{n}I\right)^{-1}U_A^*C \triangleq F.
This shows that
AF = AP_{\overline{R(A^*)}}X = AX = C,
R(F)\subseteq\overline{R(A^*)} and N(F)\subseteq N(C). By the definition of F, we know N(C)\subseteq N(F). So N(C)=N(F). Suppose that F' is another reduced solution; then A(F-F')=0 and so
R(F-F')\subseteq N(A)\cap\overline{R(A^*)} = \{0\}.
This shows that F-F'=0 and then F=F'. That is to say, F is the unique reduced solution of AX=C. If R(A) is closed, F=A^{\dagger}C by Lemma 2.2 (ii).
Remark 2.6. Although \left(|A|+\frac{1}{n}I\right)^{-1}U_A^* does not necessarily converge in the strong operator topology, as Example 2.3 shows, \left(|A|+\frac{1}{n}I\right)^{-1}U_A^*C is convergent by the proof of Theorem 2.5 whenever R(C)\subseteq R(A).
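As a hedged numerical illustration of Theorem 2.5 in the matrix (closed range) case, one can generate a compatible pair (A, C) with R(C)\subseteq R(A) and observe the regularized expressions approach the reduced solution A^{\dagger}C. The data below are illustrative assumptions only.

```python
import numpy as np

# Hedged numerical sketch of Theorem 2.5 in the closed range (matrix) case:
# for a compatible pair (A, C), the regularized expressions
# (|A| + I/n)^{-1} U_A^* C approach the reduced solution A^+ C.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))     # rank 3
C = A @ rng.standard_normal((5, 5))                               # guarantees R(C) ⊆ R(A)

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
absA = (Vh.conj().T * s) @ Vh                     # |A| = (A^* A)^{1/2}
U_A = U[:, :r] @ Vh[:r, :]                        # partial isometry in A = U_A |A|

F = np.linalg.pinv(A) @ C                         # reduced solution A^+ C
for n in [10, 1000, 100000]:
    F_n = np.linalg.inv(absA + np.eye(5) / n) @ U_A.conj().T @ C
    print(f"n = {n:>6}:  ||F_n - A^+C|| = {np.linalg.norm(F_n - F):.2e}")
print("A F = C?", np.allclose(A @ F, C))
```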
As a consequence of the above Theorem 2.5, Lemma 2.4 and Theorem 3.5 in [12], the following result holds.
Theorem 2.7. ([12]) Let A,C∈B(H). Then AX=C has a solution if and only if R(C)⊆R(A). In this case, the general solutions can be represented by
X=F+(I−P)Y, ∀Y∈B(H), |
where F is the reduced solution of AX=C and P=P_{\overline{R(A^*)}}. In particular, if R(A) is closed, then the general solutions can be represented by
X=A†C+(I−P)Y, ∀Y∈B(H). |
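The general-solution formula of Theorem 2.7 is straightforward to check numerically in the closed range case, where P = A^{\dagger}A; the random matrices below are illustrative assumptions.

```python
import numpy as np

# Sketch of Theorem 2.7 in the closed range (matrix) case: every
# X = A^+ C + (I - P) Y with P = P_{R(A^*)} = A^+ A solves A X = C.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))     # rank 2
C = A @ rng.standard_normal((4, 4))                               # so that R(C) ⊆ R(A)

A_pinv = np.linalg.pinv(A)
P = A_pinv @ A                                    # orthogonal projection onto R(A^*)
Y = rng.standard_normal((4, 4))                   # arbitrary operator
X = A_pinv @ C + (np.eye(4) - P) @ Y
print("A X = C?", np.allclose(A @ X, C))
```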
In this section, we mainly study the real positive solutions of equation
AX=C | (3.1) |
and the system of equations
AX=C,XB=D. | (3.2) |
Firstly, some preliminaries are given. For a given operator A\in{\mathcal B}(\mathcal H), we denote P=P_{\overline{R(A^*)}} in the sequel.
Lemma 3.1. Let A,C∈B(H) be such that Eq (3.1) has a solution and F is the reduced solution. Then the following statements hold:
(i) CA∗+AC∗≥0 if and only if FP+PF∗≥0.
(ii)R(FP+PF∗) is closed if R(CA∗+AC∗) is closed.
Proof. (i) Assume that CA^*+AC^*\geq 0. For any x\in\mathcal H, there exist y\in\overline{R(A^*)} and z\in N(A) such that x=y+z, and there exists a sequence \{A^*y_n\}\subset R(A^*) such that \lim_{n\rightarrow\infty}A^*y_n=y. Since R(F)\subset\overline{R(A^*)}, by direct computation,
\langle (FP_{\overline{R(A^*)}}+P_{\overline{R(A^*)}}F^*)x,x\rangle = \langle (FP+PF^*)y,y\rangle = \lim_{n\rightarrow\infty}\langle (F+F^*)A^*y_n,A^*y_n\rangle = \lim_{n\rightarrow\infty}\langle A(F+F^*)A^*y_n,y_n\rangle = \langle (CA^*+AC^*)y,y\rangle \geq 0.
This shows that FP+PF∗≥0.
Conversely, if FP+PF^*\geq 0, it is elementary to get that
\langle (CA^*+AC^*)x,x\rangle = \langle (AFA^*+AF^*A^*)x,x\rangle = \langle (F+F^*)A^*x,A^*x\rangle = \langle (F+F^*)PA^*x,PA^*x\rangle = \langle (FP+PF^*)A^*x,A^*x\rangle \geq 0,
for any x\in\mathcal H, by R(F)\subset\overline{R(A^*)}. This implies that CA^*+AC^*\geq 0.
(ii) Since R(CA∗+AC∗)=R(A(F+F∗)A∗) is closed, we have R(A(F+F∗)P) is closed. It follows that R(P(F+F∗)A∗) is closed and so R(P(F+F∗)P) is closed.
Lemma 3.2. ([3]) Let
A = \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{12}^* & A_{22} \\ \end{array}\right)\in{\mathcal B}(\mathcal H\oplus\mathcal H).
The following two statements hold:
(i) There exists an operator A_{22} such that A\geq 0 if and only if A_{11}\geq 0 and R(A_{12})\subseteq R(A_{11}^{\frac{1}{2}}).
(ii) Suppose that A_{11} is an invertible and positive operator. Then A\geq 0 if and only if A_{22}\geq A_{12}^*A_{11}^{-1}A_{12}.
Lemma 3.3. Let
A = \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{12}^* & A_{22} \\ \end{array}\right)\in{\mathcal B}(\mathcal H\oplus\mathcal K).
Then A\geq 0 if and only if
(i) A_{11}\geq 0.
(ii) A_{22}\geq X^*P_{\overline{R(A_{11})}}X, where X is a solution of A_{11}^{\frac{1}{2}}X=A_{12}.
Proof. Assume that the statements (i) and (ii) hold. Then
A = \left( \begin{array}{cc} A_{11} & A_{11}^{\frac{1}{2}}X \\ X^*A_{11}^{\frac{1}{2}} & A_{22} \\ \end{array}\right) = \left( \begin{array}{cc} A_{11} & A_{11}^{\frac{1}{2}}P_{\overline{R(A_{11})}}X \\ X^*P_{\overline{R(A_{11})}}A_{11}^{\frac{1}{2}} & A_{22} \\ \end{array}\right) = \left( \begin{array}{cc} A_{11}^{\frac{1}{2}} & P_{\overline{R(A_{11})}}X \\ 0 & 0 \\ \end{array}\right)^*\left( \begin{array}{cc} A_{11}^{\frac{1}{2}} & P_{\overline{R(A_{11})}}X \\ 0 & 0 \\ \end{array}\right)+\left( \begin{array}{cc} 0 & 0 \\ 0 & A_{22}-X^*P_{\overline{R(A_{11})}}X \\ \end{array}\right) \geq 0.
Conversely, if A\geq 0, it follows from Lemma 3.2 that A_{11}\geq 0. Also, the equation
A_{11}^{\frac{1}{2}}X = A_{12}
has a solution X, since R(A_{12})\subset R(A_{11}^{\frac{1}{2}}). Moreover, A\geq 0 shows that, for any positive integer n,
A+\frac{1}{n}P_{\mathcal H}\geq 0,
that is,
\left( \begin{array}{cc} A_{11}+\frac{1}{n}I_{\mathcal H} & A_{12} \\ A_{12}^* & A_{22} \\ \end{array}\right)\geq 0.
By Lemma 3.2 again, we have
A_{22}\geq A_{12}^*\left(A_{11}+\frac{1}{n}I_{\mathcal H}\right)^{-1}A_{12} = X^*A_{11}^{\frac{1}{2}}\left(A_{11}+\frac{1}{n}I_{\mathcal H}\right)^{-1}A_{11}^{\frac{1}{2}}X.
From Lemma 2.2 (i), it is immediate that
A_{11}^{\frac{1}{2}}\left(A_{11}+\frac{1}{n}I_{\mathcal H}\right)^{-1}A_{11}^{\frac{1}{2}} = \left(A_{11}+\frac{1}{n}I_{\mathcal H}\right)^{-1}A_{11} \rightarrow P_{\overline{R(A_{11})}}\quad (n\rightarrow\infty)
in strong operator topology. Thus
X^*P_{\overline{R(A_{11})}}X = s.o.\text{-}\lim_{n\rightarrow\infty}X^*A_{11}^{\frac{1}{2}}\left(A_{11}+\frac{1}{n}I_{\mathcal H}\right)^{-1}A_{11}^{\frac{1}{2}}X
holds. Therefore A_{22}\geq X^*P_{\overline{R(A_{11})}}X.
Corollary 3.4. Let
A = \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{12}^* & A_{22} \\ \end{array}\right)\in{\mathcal B}(\mathcal H\oplus\mathcal K)
be such that A_{11} has closed range. Then A\geq 0 if and only if
(i) A_{11}\geq 0.
(ii) A_{22}\geq X^*A_{11}X, where X is a solution of A_{11}X=A_{12}.
Proof. Since A_{11} is an operator with closed range, we have R(A_{11}^{\frac{1}{2}})=R(A_{11}). Thus the solvability of the two equations A_{11}X=A_{12} and A_{11}^{\frac{1}{2}}\widehat{X}=A_{12} is equivalent. It is easy to check that A_{11}^{\frac{1}{2}}X is a solution of A_{11}^{\frac{1}{2}}\widehat{X}=A_{12} if X is a solution of A_{11}X=A_{12}. From Lemma 3.3, the result holds.
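The criterion of Corollary 3.4 can be illustrated numerically: with A_{22} = X^*A_{11}X the block matrix sits exactly on the boundary of positivity, while A_{22} = X^*A_{11}X+I lies strictly inside. The matrices in the sketch below are random illustrative choices; in the borderline case, tiny negative eigenvalues of the order of machine precision may appear.

```python
import numpy as np

# Numerical illustration of Corollary 3.4 for the block matrix
# [[A11, A12], [A12^*, A22]] with A11 >= 0 of closed range: positivity holds
# exactly when A11 >= 0, A11 X = A12 is solvable and A22 >= X^* A11 X.
rng = np.random.default_rng(3)
Broot = rng.standard_normal((3, 3))
A11 = Broot @ Broot.T                             # positive (a.s. invertible)
X = rng.standard_normal((3, 3))
A12 = A11 @ X                                     # ensures R(A12) ⊆ R(A11)

def min_eig(M):
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min()

for shift in [0.0, 1.0]:                          # 0: boundary case, 1: strictly positive
    A22 = X.T @ A11 @ X + shift * np.eye(3)
    block = np.block([[A11, A12], [A12.T, A22]])
    print(f"A22 = X^*A11X + {shift}*I :  min eigenvalue of the block = {min_eig(block):+.2e}")
```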
In [12], the authors gave a formula for the real positive solutions of Eq (3.1) which is only a restriction on the general solutions. Here we obtain a specific expression for the real positive solutions.
Theorem 3.5. Let A,C\in{\mathcal B}(\mathcal H). Eq (3.1) has real positive solutions if and only if R(C)\subseteq R(A) and CA^*+AC^*\geq 0. In this case, the real positive solutions can be represented by
\begin{equation} X = F-(I-P)F^*+(I-P)Y^*(PF^*+FP)^{\frac{1}{2}}+\frac{1}{2}(I-P)Y^*P_{\overline{R(F_0)}}Y(I-P)+(I-P)Z(I-P), \end{equation} | (3.3)
for any Y\in{\mathcal B}(\mathcal H), Z\in Re{\mathcal B}_+(\mathcal H), where F is the reduced solution of AX=C and F_0=PF^*+FP. In particular, if R(CA^*+AC^*) is closed, then the real positive solutions can be represented by
\begin{equation} X = F-(I-P)F^*+(I-P)Y^*(PF^*+FP)+(I-P)Y^*FPY(I-P)+(I-P)Z(I-P). \end{equation} | (3.4)
Proof. From Theorem 2.7, AX=C has a solution if and only if R(C)\subseteq R(A). Suppose that X is a solution of AX=C. There exists an operator \overline{Y}\in{\mathcal B}(\mathcal H) such that
X = F+(I-P)\overline{Y}.
Set \mathcal H=\overline{R(A^*)}\oplus N(A). Then F has the following matrix form
\begin{equation} F = \left( \begin{array}{cc} F_{11} & F_{12} \\ 0 & 0 \\ \end{array}\right), \end{equation} | (3.5)
since R(F)\subset\overline{R(A^*)}. The operator \overline{Y} can be expressed as
\begin{equation} \overline{Y} = \left( \begin{array}{cc} \overline{Y}_{11} & \overline{Y}_{12} \\ \overline{Y}_{21} & \overline{Y}_{22} \\ \end{array}\right). \end{equation} | (3.6)
It follows that
\begin{equation} X+X^* = \left( \begin{array}{cc} F_{11}+F_{11}^* & F_{12}+\overline{Y}_{21}^* \\ \overline{Y}_{21}+F_{12}^* & \overline{Y}_{22}+\overline{Y}_{22}^* \\ \end{array}\right). \end{equation} | (3.7)
"⇒": If X is a real positive solution, then F11+F∗11≥0 from Lemma 3.3 and the matrix form (3.7) of X+X∗. And so PF∗+FP≥0. According to Lemma 3.1, CA∗+AC∗≥0.
"⇐": Assume that CA∗+AC∗≥0, then PF∗+FP≥0 and so F11+F11∗≥0. Denote X0=F−(I−P)F∗. Then AX0=C and
X0+X∗0=(F11+F∗11000)≥0. |
That is, X0 is a real positive solution of AX=C.
Next, we analyse the general form of the real positive solutions. Suppose that X is a real positive solution of AX=C. Then X+X^* has the matrix form (3.7). From Lemma 3.3, there exists an operator Y_{12} such that
\overline{Y}_{21}^* = (F_{11}+F_{11}^*)^{\frac{1}{2}}Y_{12}-F_{12}
and
\overline{Y}_{22}+\overline{Y}_{22}^* \geq Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}.
Let
Z_{22} = \overline{Y}_{22}-\frac{1}{2}Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}.
In this case, X can be represented in the following form
X = \left( \begin{array}{cc} F_{11} & F_{12} \\ Y_{12}^*(F_{11}+F_{11}^*)^{\frac{1}{2}}-F_{12}^* & \frac{1}{2}Y_{12}^*Y_{12}+Z_{22} \\ \end{array}\right) = F-(I-P)F^*+(I-P)Y^*(PF^*+FP)^{\frac{1}{2}}+\frac{1}{2}(I-P)Y^*P_{\overline{R(F_0)}}Y(I-P)+(I-P)Z(I-P),
for some Y\in{\mathcal B}(\mathcal H), Z\in Re{\mathcal B}_+(\mathcal H), where F_0=PF^*+FP.
Conversely, for any operator Y\in{\mathcal B}(\mathcal H) and Z\in Re{\mathcal B}_+(\mathcal H) such that X has the form (3.3), it is clear that AX=C. Let Y=(Y_{ij})_{2\times 2} and Z=(Z_{ij})_{2\times 2} with respect to the space decomposition \mathcal H=\overline{R(A^*)}\oplus N(A). Then
X+X^* = \left( \begin{array}{cc} F_{11}+F_{11}^* & (F_{11}+F_{11}^*)^{\frac{1}{2}}Y_{12} \\ Y_{12}^*(F_{11}+F_{11}^*)^{\frac{1}{2}} & Y_{12}^*Y_{12}+Z_{22}+Z_{22}^* \\ \end{array}\right) = \left( \begin{array}{cc} (F_{11}+F_{11}^*)^{\frac{1}{2}} & Y_{12} \\ 0 & 0 \\ \end{array}\right)^*\left( \begin{array}{cc} (F_{11}+F_{11}^*)^{\frac{1}{2}} & Y_{12} \\ 0 & 0 \\ \end{array}\right)+\left( \begin{array}{cc} 0 & 0 \\ 0 & Z_{22}+Z_{22}^* \\ \end{array}\right)\geq 0.
In particular, if R(CA^*+AC^*) is closed, then R(F_{11}+F_{11}^*) is closed by Lemma 3.1. Suppose that X is a real positive solution of AX=C; then X+X^* has the matrix form (3.7). From Corollary 3.4, there exists an operator Y_{12}\in{\mathcal B}(N(A),\overline{R(A^*)}) such that
\overline{Y}_{21}^* = (F_{11}+F_{11}^*)Y_{12}-F_{12}
and
\overline{Y}_{22}+\overline{Y}_{22}^* \geq Y_{12}^*(F_{11}+F_{11}^*)Y_{12}.
Let
Z_{22} = \overline{Y}_{22}-Y_{12}^*F_{11}Y_{12}.
Consequently,
X = \left( \begin{array}{cc} F_{11} & F_{12} \\ Y_{12}^*(F_{11}+F_{11}^*)-F_{12}^* & Y_{12}^*F_{11}Y_{12}+Z_{22} \\ \end{array}\right) = F-(I-P)F^*+(I-P)Y^*(PF^*+FP)+(I-P)Y^*FPY(I-P)+(I-P)Z(I-P),
for some Y\in{\mathcal B}(\mathcal H), Z\in Re{\mathcal B}_+(\mathcal H).
Conversely, for any Y\in{\mathcal B}(\mathcal H) and Z\in Re{\mathcal B}_+(\mathcal H) such that X has the form (3.4), it is clear that AX=C, and we also have
X+X^* = \left( \begin{array}{cc} F_{11}+F_{11}^* & (F_{11}+F_{11}^*)Y_{12} \\ Y_{12}^*(F_{11}+F_{11}^*) & Y_{12}^*(F_{11}+F_{11}^*)Y_{12}+Z_{22}+Z_{22}^* \\ \end{array}\right) = \left( \begin{array}{cc} (F_{11}+F_{11}^*)^{\frac{1}{2}} & 0 \\ Y_{12}^*(F_{11}+F_{11}^*)^{\frac{1}{2}} & 0 \\ \end{array}\right)\left( \begin{array}{cc} (F_{11}+F_{11}^*)^{\frac{1}{2}} & (F_{11}+F_{11}^*)^{\frac{1}{2}}Y_{12} \\ 0 & 0 \\ \end{array}\right)+\left( \begin{array}{cc} 0 & 0 \\ 0 & Z_{22}+Z_{22}^* \\ \end{array}\right)\geq 0.
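As a numerical sanity check of formula (3.4) (the closed range case of Theorem 3.5), one can manufacture a solvable pair (A, C) with CA^*+AC^*\geq 0 by taking C=AX_{\rm true} for a real positive X_{\rm true}, and then verify that the formula produces real positive solutions of AX=C. The construction below is an illustrative sketch under these assumptions, not part of the paper's argument.

```python
import numpy as np

# Hedged sanity check of formula (3.4): manufacture a solvable pair (A, C) with
# CA^* + AC^* >= 0 by taking C = A X_true for a real positive X_true, then
# verify that the formula produces real positive solutions of AX = C.
rng = np.random.default_rng(4)
m = 5
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, m))     # rank 2, closed range
S = rng.standard_normal((m, m))
K = rng.standard_normal((m, m))
X_true = S @ S.T + (K - K.T)                      # X_true + X_true^* = 2 S S^T >= 0
C = A @ X_true                                    # then R(C) ⊆ R(A), CA^* + AC^* >= 0

A_pinv = np.linalg.pinv(A)
F = A_pinv @ C                                    # reduced solution
P = A_pinv @ A                                    # projection onto R(A^*)
I = np.eye(m)

Y = rng.standard_normal((m, m))                   # arbitrary
W = rng.standard_normal((m, m))
Z = W @ W.T                                       # any real positive Z works
X = (F - (I - P) @ F.T
     + (I - P) @ Y.T @ (P @ F.T + F @ P)
     + (I - P) @ Y.T @ F @ P @ Y @ (I - P)
     + (I - P) @ Z @ (I - P))

print("A X = C?            ", np.allclose(A @ X, C))
print("min eig of X + X^* : ", np.linalg.eigvalsh(X + X.T).min())
```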
Next, we consider the solutions of AX=C,XB=D.
Theorem 3.6. Let A,B,C,D∈B(H). Then the system of Eq (3.2) has a solution if and only if R(C)⊆R(A), R(D∗)⊆R(B∗) and AD=CB. In this case, the general common solution can be represented by
X = F+(I-P)H^*+(I-P)Z(I-P_{\overline{R(B)}}),
for any Z\in{\mathcal B}(\mathcal H), where F is the reduced solution of (3.1) and H is the reduced solution of B^*X=D^*. In particular, if R(A) and R(B) are closed, the general solution can be represented by
X = A^{\dagger}C+DB^{\dagger}-A^{\dagger}ADB^{\dagger}+(I-A^{\dagger}A)Z(I-BB^{\dagger}).
Proof. The necessity is clear. We only need to prove the sufficient condition. From R(C)\subseteq R(A) and Theorem 2.7, a solution X of Eq (3.1) can be represented by
X = F+(I-P)Y,
for some Y\in{\mathcal B}(\mathcal H), where F is the reduced solution. Substituting the above X into the equation XB=D, we have
(I-P)YB = D-FB.
This shows that
(I-P)YB = (I-P)(D-FB)
and then
(I-P)YB = (I-P)D,
since R(F)\subseteq\overline{R(A^*)} and (I-P)F=0. Moreover, the equation \widehat{Y}B=D has a solution since R(D^*)\subseteq R(B^*). Let H be the reduced solution of B^*\widehat{Y}=D^*. Then
R(H(I-P))\subseteq\overline{R(B)}
and
N(H(I-P)) = N(D^*(I-P)),
since R(H)\subseteq\overline{R(B)} and N(H)=N(D^*). So H(I-P) is the reduced solution of B^*\widehat{Y}=D^*(I-P), and then there exists Z\in{\mathcal B}(\mathcal H) such that
Y^*(I-P) = H(I-P)+(I-P_{\overline{R(B)}})Z.
That is to say, the system of equations AX=C, XB=D has common solutions. The general solution can be represented by
X = F+(I-P)H^*+(I-P)Z(I-P_{\overline{R(B)}}), \hbox{ for any } Z\in{\mathcal B}(\mathcal H).
In particular, if R(A) and R(B) are closed, then F=A^{\dagger}C and H=B^{*\dagger}D^*. It follows that the general common solution is
X = A^{\dagger}C+(I-A^{\dagger}A)DB^{\dagger}+(I-A^{\dagger}A)Z(I-BB^{\dagger}),
for any Z\in{\mathcal B}(\mathcal H).
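The closed range formula in Theorem 3.6 can be checked numerically by manufacturing a compatible quadruple (A, B, C, D) from a known common solution; the random data in the following sketch are illustrative assumptions.

```python
import numpy as np

# Sketch of the closed range formula in Theorem 3.6: with C = A X_true and
# D = X_true B (so that the compatibility conditions hold automatically), every
# X = A^+C + (I - A^+A) D B^+ + (I - A^+A) Z (I - B B^+) solves both equations.
rng = np.random.default_rng(5)
m = 5
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, m))     # rank 3
B = rng.standard_normal((m, 2)) @ rng.standard_normal((2, m))     # rank 2
X_true = rng.standard_normal((m, m))
C, D = A @ X_true, X_true @ B

A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
I = np.eye(m)
Z = rng.standard_normal((m, m))                   # arbitrary
X = A_pinv @ C + (I - A_pinv @ A) @ D @ B_pinv + (I - A_pinv @ A) @ Z @ (I - B @ B_pinv)
print("A X = C?", np.allclose(A @ X, C), "   X B = D?", np.allclose(X @ B, D))
```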
Theorem 3.7. Let A,B,C,D\in{\mathcal B}(\mathcal H) and Q=P_{\overline{R((I-P)B)}}. The system of Eq (3.2) has a real positive solution if the following statements hold:
(i)R(C)⊆R(A), R(D∗)⊆R(B∗), AD=CB,
(ii) CA∗, (D∗+B∗F)(I−P)B are real positive operators,
(iii) R((D∗+B∗F)(I−P))⊆R(B∗(I−P)),
where F is the reduced solution of AX=C. In this case, one of the common real positive solutions can be represented by
X=F−(I−P)F∗+(I−P)H∗(I−P)−(I−P)H(I−P−Q)+(I−P−Q)Z(I−P−Q), | (3.8) |
for any Z∈ReB+(H), where H is the reduced solution of
B∗(I−P)X=(D∗+B∗F)(I−P). |
Proof. Combining Theorems 3.5 and 3.6 with statements (i) and (ii), the system of equations AX=C, XB=D has common solutions. For any Y\in Re{\mathcal B}_+(\mathcal H),
X = F-(I-P)F^*+(I-P)Y(I-P)
is a real positive solution of AX=C. We only need to prove that there exists Y\in Re{\mathcal B}_+(\mathcal H) such that the above X is also a solution of XB=D. Since
\mathcal H = \overline{R(A^*)}\oplus N(A) = \overline{R(A)}\oplus N(A^*)
and also
\mathcal H = \overline{R(B^*)}\oplus N(B),
then A, F and B have the following operator matrix forms,
\begin{equation} A = \left( \begin{array}{cc} A_{11} & 0 \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A)}\\ N(A^*) \end{array}\right), \end{equation} | (3.9)
\begin{equation} F = \left( \begin{array}{cc} F_{11} & F_{12} \\ 0 & 0 \\ \end{array}\right): \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right), \end{equation} | (3.10)
\begin{equation} B = \left( \begin{array}{cc} B_{11} & 0 \\ B_{21} & 0 \\ \end{array}\right): \left( \begin{array}{c} \overline{R(B^*)}\\ N(B) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right), \end{equation} | (3.11)
respectively, where A_{11} is an injective operator from \overline{R(A^*)} into \overline{R(A)} with dense range. The operator D has the matrix form
\begin{equation} D = \left( \begin{array}{cc} D_{11} & 0 \\ D_{21} & 0 \\ \end{array}\right): \left( \begin{array}{c} \overline{R(B^*)}\\ N(B) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right), \end{equation} | (3.12)
since R(D^*)\subseteq R(B^*). Let
\begin{equation} Y = \left( \begin{array}{cc} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \\ \end{array}\right): \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right). \end{equation} | (3.13)
Then X has the following matrix form:
\begin{equation} X = \left( \begin{array}{cc} F_{11} & F_{12} \\ -F_{12}^* & Y_{22} \\ \end{array}\right): \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right) \rightarrow \left( \begin{array}{c} \overline{R(A^*)}\\ N(A) \end{array}\right). \end{equation} | (3.14)
By the matrix forms (3.11) and (3.14) of B and X, respectively, it is easy to get that
\begin{equation} B^*X^* = \left( \begin{array}{cc} B_{11}^* & B_{21}^* \\ 0 & 0 \\ \end{array}\right)\left( \begin{array}{cc} F_{11}^* & -F_{12} \\ F_{12}^* & Y_{22}^* \\ \end{array}\right) = \left( \begin{array}{cc} B_{11}^*F_{11}^*+B_{21}^*F_{12}^* & -B_{11}^*F_{12}+B_{21}^*Y_{22}^* \\ 0 & 0 \\ \end{array}\right). \end{equation} | (3.15)
Combining AD=CB=AFB with the matrix forms (3.9)–(3.12), we have
AD = \left( \begin{array}{cc} A_{11}D_{11} & 0 \\0 & 0 \\ \end{array}\right) = \left( \begin{array}{cc} A_{11}(F_{11} B_{11}+F_{12}B_{21}) & 0 \\0 & 0 \\ \end{array}\right) = AFB. |
It is immediate that
A_{11}(F_{11} B_{11}+F_{12}B_{21}) = A_{11}D_{11}. |
And so
( B_{11}^* F_{11}^* +B_{21}^* F_{12}^* )A_{11}^* = D_{11} ^* A_{11}^*. |
Therefore,
\begin{equation} B_{11}^* F_{11}^* +B_{21}^* F_{12}^* = D_{11} ^*, \end{equation} | (3.16) |
since R(A_{11}^*) is dense in \overline{R(A^*)} . From the statement (iii), R(D_{21}^* +B_{11}^* F_{12})\subseteq R(B_{21}^*) holds. Moreover, by the statement (ii), (D^*+B^*F)(I-P)B being real positive, we have that (D_{21}^* +B_{11}^* F_{12})B_{21} is real positive. These imply that the equation
\begin{equation} B_{21}^*\widehat{ Y_{22}} = D_{21}^* +B_{11}^* F_{12} \end{equation} | (3.17) |
has real positive solutions by Theorem 3.5. Suppose that H_{22} is the reduced solution of Eq (3.17); then we can deduce that
H = \left( \begin{array}{cc} 0 & 0 \\ 0 & H_{22} \\ \end{array}\right) |
is the reduced solution of the following equation
\begin{equation} B^*(I-P)\widehat{ Y } = (D^* +B^* F)(I-P), \end{equation} | (3.18) |
since R(H)\subseteq \overline{R((I-P)B)} and
N(H) = N((D^* +B^* F)(I-P)). |
Using Theorem 3.5 again, H-(I-Q)H^* is a real positive solution of (3.18). Then (I-P)(H-(I-Q)H^*)(I-P) is also a real positive solution of (3.18). Because of
I-P\geq P_{\overline{R((I-P)B)}} = Q, |
we have
(I-P)(I-Q) = (I-Q)(I-P) = (I-P-Q). |
Let
X_0 = F-(I-P)F^*+(I-P)H^*(I-P)-(I-P)H(I-P-Q). |
That is,
X_0 = \left( \begin{array}{cc} F_{11} & F_{12} \\ -F^*_{12} & H_{22}^*-H_{22}(I_{N(A)}-Q) \\ \end{array}\right): \left( \begin{array}{cc} \overline{R(A^*)}\\ N(A) \end{array}\right) \rightarrow \left( \begin{array}{cc} \overline{R(A^*)}\\ N(A) \end{array}\right). |
Putting X = X_0 into formula (3.15) and combining Eqs (3.16) and (3.17) with the matrix forms (3.12) and (3.15), we get B^*X_0^* = D^* . So X_0 is a common real positive solution of Eq (3.2). Furthermore, it is easy to verify that
A(I-P-Q)Z(I-P-Q) = 0 |
and
(I-P-Q)Z(I-P-Q)B = 0, |
for any Z\in {\mathcal B}(\mathcal H) . Therefore
X = X_0+(I-P-Q)Z(I-P-Q), \hbox { for any } Z\in Re{\mathcal B}_+(\mathcal H) |
is a real positive solution of system equations AX = C, XB = D .
Some examples are given in this section to demonstrate that our results are valid. It is also shown that a gap is unfortunately contained in the original paper [16] concerning the existence of real positive solutions.
Example 4.1. Let \mathcal H be the infinite dimensional Hilbert space as in Example 2.3 with an orthonormal basis \{e_1, e_2, \cdots\} and Te_k = \frac{1}{k}e_{k}, \forall k \geq 1. As is well known, the range of T\in {\mathcal B}(\mathcal H) is not closed and \overline{R(T)} = \mathcal H . Define operators A, C, B, D\in {\mathcal B}(\mathcal H \oplus \mathcal H) as follows:
A = \left( \begin{array}{cccc} T& 0 \\0&0 \\ \end{array}\right), C = \left( \begin{array}{cccc} T&T \\ 0& 0\\ \end{array}\right), B = \left( \begin{array}{cccc} 0& T \\ 0& 0 \\ \end{array}\right), D = \left( \begin{array}{cccc} 0& T \\ 0& -T\\ \end{array}\right). |
Then
A\geq 0, \ U_{A} = \left( \begin{array}{cccc} I_{\mathcal H}& 0 \\0&0 \\ \end{array}\right) |
and
(|A|+\frac{1}{n}I)^{-1}U_{A}^*C = \left( \begin{array}{cccc}(T+\frac{1}{n}I_{\mathcal H})^{-1}T &(T+\frac{1}{n}I_{\mathcal H})^{-1} T \\ 0& 0\\ \end{array}\right). |
By Theorem 2.5 and Lemma 2.2 (i), the reduced solution F of the operator equation AX = C has the following form
F = s.o.-lim_{n\rightarrow \infty}(|A|+\frac{1}{n}I)^{-1}U_{A}^*C = \left( \begin{array}{cccc} I_{\mathcal H}&I_{\mathcal H} \\ 0& 0\\ \end{array}\right). |
Denote
P = P_{\overline{R(A^*)} } = \left( \begin{array}{cccc} I_{\mathcal H}& 0 \\0&0 \\ \end{array}\right). |
From Theorem 3.5, we know that the formula of real positive solutions of AX = C is
X = \left( \begin{array}{cccc} I_{\mathcal H}& I_{\mathcal H} \\-I_{\mathcal H}+\sqrt{2}Y_{12}^* & \frac{1}{2}Y_{12}^*Y_{12} +Z_{22} \\ \end{array}\right), \ \hbox{ for any } \ Y_{12}\in {\mathcal B}({\mathcal H}), Z_{22}\in Re{\mathcal B}_+({\mathcal H}).
Furthermore, it is easy to check that
{\rm (i)} AD = CB, R(C) = R(A), R(D^*) = R(B^*) .
{\rm (ii)} CA^* = A^2 and (D^*+B^*F)(I-P)B = \left(\begin{array}{cccc} 0 & 0\\ 0 & 0 \\ \end{array}\right) are real positive operators.
{\rm (iii)} (D^*+B^*F)(I-P) = \left(\begin{array}{cccc} 0 & 0\\ 0 & 0 \\ \end{array}\right) and B^*(I-P) = \left(\begin{array}{cccc} 0 & 0\\ 0&0 \\ \end{array}\right) .
By Theorem 3.7, the system of equations
AX = C, XB = D |
has common real positive solutions. Moreover,
Q = P_{\overline{R((I-P)B)}} = 0. |
So one of the common real positive solutions has the following form
X = F-(I-P)F^*+(I-P-Q)Z(I-P-Q) = \left( \begin{array}{cccc} I_{\mathcal H}& I_{\mathcal H} \\ -I_{\mathcal H} &Z_{22} \\ \end{array}\right), \ \hbox{ for any } \ Z_{22}\in Re{\mathcal B}_+({\mathcal H}).
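A finite dimensional analogue of Example 4.1 (obtained by truncating T to N coordinates, so that the ranges become closed; this truncation is an assumption made only for computation) confirms the block structure of the common real positive solution.

```python
import numpy as np

# A finite dimensional analogue of Example 4.1: truncate T e_k = e_k / k to N
# coordinates and check that X = [[I, I], [-I, Z22]] with Re Z22 >= 0 is a
# common real positive solution of AX = C, XB = D.
N = 6
T = np.diag(1.0 / np.arange(1, N + 1))
I, O = np.eye(N), np.zeros((N, N))

A = np.block([[T, O], [O, O]])
C = np.block([[T, T], [O, O]])
B = np.block([[O, T], [O, O]])
D = np.block([[O, T], [O, -T]])

rng = np.random.default_rng(6)
W = rng.standard_normal((N, N))
Z22 = W @ W.T                                     # real positive (2,2) block
X = np.block([[I, I], [-I, Z22]])

print("A X = C?", np.allclose(A @ X, C))
print("X B = D?", np.allclose(X @ B, D))
print("min eig of X + X^*:", np.linalg.eigvalsh(X + X.T).min())
```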
However, the statements (i)–(iii) in Theorem 3.7 are only sufficient conditions for the existence of common real positive solutions of system (3.2). The following is an example.
Example 4.2. Let A, B, C, D be 2\times 2 complex matrices. Denote
A = \left( \begin{array}{cccc} 1& 0 \\0&0 \\ \end{array}\right), B = \left( \begin{array}{cccc} 0& 1 \\ 1& 1 \\ \end{array}\right), C = \left( \begin{array}{cccc} 1& 1 \\ 0& 0\\ \end{array}\right), D = \left( \begin{array}{cccc}1& 2\\ \frac{1}{2}& \sqrt{2}-\frac{1}{2} \end{array}\right), X_0 = \left( \begin{array}{cccc}1& 1\\\sqrt{2}-1 &\frac{1}{2} \end{array}\right). |
By direct computation, AX_0 = C , X_0B = D and
X_0+X_0^* = \left( \begin{array}{cccc}2&\sqrt{ 2}\\ \sqrt{2}& 1 \end{array}\right) = \left( \begin{array}{cccc}\sqrt{2}& 0\\ 1& 0 \end{array}\right)\left( \begin{array}{cccc}\sqrt{2}& 0 \\ 1 & 0 \end{array}\right)^* \geq 0. |
So X_0 is a common real positive solution of the system of equations AX = C, XB = D . But in this case, Theorem 3.7 does not work. In fact,
F = A^{\dagger}C = \left( \begin{array}{cccc} 1& 1 \\ 0& 0\\ \end{array}\right), \ P = P_{R(A^*)} = A. |
Hence,
(D^*+B^*F)(I-P) = \left(\left( \begin{array}{cccc}1&\frac{1}{2}\\ 2& \sqrt{2}-\frac{1}{2} \end{array}\right)+ \left( \begin{array}{cccc} 0& 1 \\ 1& 1 \\ \end{array}\right)\left( \begin{array}{cccc} 1& 1 \\ 0& 0\\ \end{array}\right) \right) \left( \begin{array}{cccc} 0& 0 \\0&1\\ \end{array}\right) = \left( \begin{array}{cccc} 0& \frac{1}{2} \\ 0&\sqrt{2}+\frac{1}{2} \\ \end{array}\right), |
and
B^*(I-P) = \left( \begin{array}{cccc} 0& 1 \\ 1& 1 \\ \end{array}\right) \left( \begin{array}{cccc} 0& 0 \\0&1\\ \end{array}\right) = \left( \begin{array}{cccc} 0& 1 \\0&1\\ \end{array}\right). |
This shows that the statement (iii)
R((D^*+B^*F)(I-P))\subseteq R(B^*(I-P)) |
in Theorem 3.7 does not hold.
Xiong and Qin gave two equivalent conditions (Theorems 2.1 and 2.2 in [16]) for the existence of common real positive solutions of AX = C, XB = D in matrix algebra. But unfortunately, there is a gap in these results: Example 4.2 is a counterexample. In fact, r(A) = 1\neq 2 , where r(A) stands for the rank of the matrix A, and A = A^* = CA^* . Simplifying by elementary block matrix operations, we have
\begin{equation} r\left( \begin{array}{cccc} AA^*& 0&AC^*+CA^* \\ A^*& D&C^* \\ 0& B&-A^* \\ \end{array}\right) = r\left( \begin{array}{cccc} A& 0&2A \\ A& D&C^* \\ 0& B&-A \\ \end{array}\right) = r\left( \begin{array}{cccc} A& 0&0 \\ 0& D&C^*-2A \\ 0& B&-A \\ \end{array}\right). \end{equation} | (4.1) |
Moreover,
\begin{array}{lll} r\left( \begin{array}{cccc} D&C^*-2A \\ B&-A \\ \end{array}\right) & = & r\left( \begin{array}{cccc} 1& 2&-1&0 \\ \frac{1}{2}& \sqrt{2}-\frac{1}{2} & 1 &0 \\ 0&1&-1& 0 \\ 1&1&0&0 \end{array}\right) = r\left( \begin{array}{cccc} 0& 1&-1&0 \\ \frac{1}{2}& \sqrt{2}-\frac{1}{2} & 1 &0 \\ 0&1&-1& 0 \\ 1&1&0&0 \end{array}\right) \\ & = &r\left( \begin{array}{cccc} 0& 0&0&0 \\ \frac{1}{2}& \sqrt{2}-\frac{1}{2} & 1 &0 \\ 0&1&-1& 0 \\ 1&1&0&0 \end{array}\right) = r\left( \begin{array}{cccc} 0& 0&0&0 \\ \frac{1}{2}& \sqrt{2}+\frac{1}{2} & 1 &0 \\ 0&0&-1& 0 \\ 1&1&0&0 \end{array}\right) \\ & = & r\left( \begin{array}{cccc} 0& 0&0&0 \\ 0& \sqrt{2} & 1 &0 \\ 0&0&-1& 0 \\ 1&1&0&0 \end{array}\right) = r\left( \begin{array}{cccc} 0& 0&0&0 \\ 0& \sqrt{2} & 0 &0 \\ 0&0&-1& 0 \\ 1&0&0&0 \end{array}\right) = 3. \end{array} |
Therefore, combining the above result with formula (4.1), we obtain that
\begin{equation} r\left( \begin{array}{cccc} AA^*& 0&AC^*+CA^* \\ A^*& D&C^* \\ 0& B&-A^* \\ \end{array}\right) = r(A)+r\left( \begin{array}{cccc} D&C^*-2A \\ B&-A \\ \end{array}\right) = 4. \end{equation} | (4.2) |
Then
\begin{equation} r(A)\neq 2 \ { and } \ r\left( \begin{array}{cccc} AA^*& 0&AC^*+CA^* \\ A^*& D&C^* \\ 0& B&-A^* \\ \end{array}\right)\neq 2r(A). \end{equation} | (4.3) |
Note that
AD = \left( \begin{array}{cccc} 1& 0 \\0&0 \\ \end{array}\right)\left( \begin{array}{cccc}1& 2\\ \frac{1}{2}& \sqrt{2}-\frac{1}{2} \end{array}\right) = \left( \begin{array}{cccc}1& 2\\ 0& 0 \end{array}\right). |
Using elementary row-column transformation again, we have
\begin{equation} \begin{array}{lll} r\left( \begin{array}{cccc} B& A^* \\ AD&CA^* \\ \end{array}\right) & = & r\left( \begin{array}{cccc} B& A \\ AD&A \\ \end{array}\right) = r\left( \begin{array}{cccc} 0& 1&1&0 \\ 1& 1 & 0 &0 \\ 1&2& 1& 0 \\ 0&0&0&0 \end{array}\right) \\& = &r\left( \begin{array}{cccc} 0& 0&1&0 \\ 1& 1 & 0 &0 \\ 1&1& 1& 0 \\ 0&0&0&0\end{array}\right) = r\left( \begin{array}{cccc} 0& 0&1&0 \\ 1& 1 & 0 &0 \\ 0&0& 1& 0 \\ 0&0&0&0 \end{array}\right) = 2. \end{array} \end{equation} | (4.4) |
From formulas (4.1), (4.2) and (4.4), it is natural to get that
\begin{equation} r(AD-CB)+r\left( \begin{array}{cccc} AA^*& 0&AC^*+CA^* \\ A^*& D&C^* \\ 0& B&-A^* \\ \end{array}\right)\neq r(A)+r\left( \begin{array}{cccc} B& A^* \\ AD&CA^* \\ \end{array}\right), \end{equation} | (4.5) |
since AD = CB . But AX = C, XB = D have a common real positive solution. The statements (4.3) and (4.5) show that Theorems 2.1 and 2.2 in [16] do not work, respectively. Actually, the conditions in [16] are also only sufficient conditions for the existence of common real positive solutions of Eq (3.2). So there is still an open question here; a quick numerical check of the rank computations above is sketched below.
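The rank computations of Example 4.2 are elementary to verify numerically; the following sketch reproduces the quantities appearing in (4.2)–(4.5).

```python
import numpy as np

# Quick numerical check of the quantities in Example 4.2 / formulas (4.2)-(4.5).
rank = np.linalg.matrix_rank
s2 = np.sqrt(2.0)
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
C = np.array([[1.0, 1.0], [0.0, 0.0]])
D = np.array([[1.0, 2.0], [0.5, s2 - 0.5]])
X0 = np.array([[1.0, 1.0], [s2 - 1.0, 0.5]])
O = np.zeros((2, 2))

print("A X0 = C and X0 B = D:", np.allclose(A @ X0, C), np.allclose(X0 @ B, D))
print("min eig of X0 + X0^* :", np.linalg.eigvalsh(X0 + X0.T).min())   # ~0, so real positive

M = np.block([[A @ A.T, O, A @ C.T + C @ A.T],
              [A.T, D, C.T],
              [O, B, -A.T]])
N2 = np.block([[B, A.T], [A @ D, C @ A.T]])
print("r(A) =", rank(A), "  r(M) =", rank(M), "  2 r(A) =", 2 * rank(A))
print("r(AD - CB) + r(M) =", rank(A @ D - C @ B) + rank(M),
      "  r(A) + r(N) =", rank(A) + rank(N2))
```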
Question 4.3. Let A, B, C, D\in {\cal B}({\mathcal H}) . Give an equivalent condition for the existence of common real positive solutions of AX = C, XB = D .
In this work, a new representation of the reduced solution of AX = C is given by a strong operator convergent sequence. This result provides us with a method to discuss the general solutions of Eq (3.2). By making full use of block operator matrix methods, the formula for the real positive solutions of AX = C is obtained in Theorem 3.5, which is the basis for finding common real positive solutions of Eq (3.2). Through Example 4.1, it is demonstrated that Theorem 3.7 is useful for finding some common real positive solutions. But unfortunately, it is complicated to describe all the common real positive solutions by using the method of Theorem 3.7. Perhaps we need some other techniques. This will be our next problem to solve.
The authors would like to thank the referees for their useful comments and suggestions, which greatly improved the presentation of this paper. Part of this work was completed during the first author's visit to Shanghai Normal University. The first author thanks Professor Qingxiang Xu for his discussions and useful suggestions.
This research was supported by the National Natural Science Foundation of China (No. 12061031), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JM-189) and the Natural Science Basic Research Plan in Hainan Province of China (Nos. 120MS030,120QN250).
The authors declare no conflicts of interest.