
Fredholm inversion around a singularity: Application to autoregressive time series in Banach space

  • This paper considers inverting a holomorphic Fredholm operator pencil. Specifically, we provide necessary and sufficient conditions for the inverse of a holomorphic Fredholm operator pencil to have a simple pole or a second order pole. Based on these results, a closed-form expression of the Laurent expansion of the inverse around an isolated singularity is obtained in each case. As an application, we also obtain a suitable extension of the Granger-Johansen representation theorem for random sequences taking values in a separable Banach space. Due to our closed-form expression of the inverse, we can fully characterize solutions to a given autoregressive law of motion up to a term that depends on initial values.

    Citation: Won-Ki Seo. Fredholm inversion around a singularity: Application to autoregressive time series in Banach space. Electronic Research Archive, 2023, 31(8): 4925-4950. doi: 10.3934/era.2023252




    The Granger-Johansen representation theorem (see [1]) is a result on the existence and representation of solutions to a given autoregressive law of motion. Due to the crucial contributions of [2,3,4,5], a well-developed representation theory is already available in finite dimensional Euclidean space. It is worth mentioning that, in the latter two works, the representation theorem is obtained in the framework of analytic function theory; [4] obtains a necessary and sufficient condition for a matrix-valued function of a single complex variable (matrix pencil), which characterizes an autoregressive law of motion, to have a simple pole at one and shows that this leads to nonstationary I(1) solutions, which become stationary through first-order differencing. The monograph of [5] provides a systematic reworking and extension of [4] and contains a representation theorem associated with nonstationary I(2) solutions, which become stationary through second-order differencing; see also [6,7] for more general results on this topic.

    More recently, the Granger-Johansen representation theorem was extended to infinite dimensional function spaces (see, e.g., [8]). As in [4], it turns out that the desired representation can be obtained by inverting the operator pencil which characterizes the autoregressive law of motion at an isolated singularity; the reader is referred to [9,10] for general Hilbert-valued time series, [11] for density-valued time series, and [12] for general Banach-valued time series. Of course, this is certainly not the only example where inversion of operator pencils can be useful in applied fields.

    In this paper, we consider inverting holomorphic Fredholm operator pencils around an isolated singularity. Specifically, we first obtain necessary and sufficient conditions for the inverse of a holomorphic Fredholm pencil to have a simple pole and a second order pole. We then obtain a closed-form expression of the inverse by deriving a recursive formula that determines all the coefficients in the Laurent expansion of the inverse around an isolated singularity. We apply our theoretical results to obtain a suitable version of the Granger-Johansen representation theorem; our version of the theorem distinguishes itself from existing ones by placing a stronger emphasis on presenting more detailed mathematical expressions of I(1) and I(2) solutions, rather than focusing on their cointegration properties, which have already been developed in the aforementioned literature. Of course, this is achieved through the use of our closed-form expression of the inverse of a holomorphic Fredholm pencil. We believe that this application demonstrates the usefulness of our theoretical results.

    While the local behavior around an isolated singularity of the inverse of a Fredholm operator pencil has been explored in the context of the Granger-Johansen representation theorem in a Hilbert space (see, e.g., [9,10]), this paper appears to be the first to provide a full characterization of the inverse specifically around a pole of order 1 or 2. Considering the recent extension of the Granger-Johansen representation theorem to incorporate function-valued highly integrated processes (see [13]), obtaining a closed-form expression of the inverse around a pole of an arbitrary order would be important. Furthermore, our closed-form expression is derived by leveraging some special spectral properties of Fredholm operator pencils. It would also be interesting to explore whether a similar characterization can be achieved for more general non-Fredholm operator pencils. However, these pursuits fall outside the scope of this paper and are left for future research.

    The remainder of the paper is organized as follows. In Section 2, we review some essential mathematics. In Section 3, we study in detail inversion of a holomorphic Fredholm pencil based on the analytic Fredholm theorem; our main results are obtained in this section. Section 4 contains a suitable extension of the Granger-Johansen representation theorem as an application of our inversion results. The conclusion follows in Section 5.

    Let $B$ be a separable Banach space over the complex plane $\mathbb{C}$ with norm $\|\cdot\|$, and let $\mathcal{L}_B$ denote the Banach space of bounded linear operators on $B$ with the usual operator norm $\|A\|_{\mathcal{L}_B} = \sup_{\|x\| \leq 1} \|Ax\|$. We also let $\mathrm{id}_B \in \mathcal{L}_B$ denote the identity map on $B$. Given a subspace $V \subset B$, let $A|_V$ denote the restriction of an operator $A \in \mathcal{L}_B$ to $V$. Given $A \in \mathcal{L}_B$, we define two important subspaces of $B$ as follows:

    $$\ker A = \{x \in B : Ax = 0\}, \qquad \operatorname{ran} A = \{Ax : x \in B\}.$$

    Let $V_1, V_2, \ldots, V_k$ be subspaces of $B$. The algebraic sum of $V_1, V_2, \ldots, V_k$ is defined by

    $$\sum_{j=1}^{k} V_j = \{v_1 + v_2 + \cdots + v_k : v_j \in V_j \text{ for each } j\}.$$

    We say that $B$ is the (internal) direct sum of $V_1, V_2, \ldots, V_k$ and write $B = \bigoplus_{j=1}^{k} V_j$, if $V_1, V_2, \ldots, V_k$ are closed subspaces satisfying $V_j \cap \sum_{j' \neq j} V_{j'} = \{0\}$ and $\sum_{j=1}^{k} V_j = B$. For any $V \subset B$, we let $V^c \subset B$ denote a subspace (if it exists) such that $B = V \oplus V^c$. Such a subspace $V^c$ is called a complementary subspace of $V$. It turns out that a subspace $V$ allows a complementary subspace $V^c$ if and only if there exists a (necessarily unique) bounded projection onto $V^c$ along $V$ (Theorem 3.2.11 of [14]). In general, a complementary subspace is not uniquely determined. In the case where $B$ is a Hilbert space, any closed subspace $V$ can be complemented by its orthogonal complement $V^\perp$.
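    As a simple illustration of this non-uniqueness (a standard finite dimensional example, not from the paper): in $B = \mathbb{C}^2$, the subspace $V = \operatorname{span}\{(1,0)\}$ is complemented by $V^c_{(c)} = \operatorname{span}\{(c,1)\}$ for every $c \in \mathbb{C}$, with associated bounded projection

    $$P_{V^c_{(c)}}(x_1, x_2) = (c x_2, x_2),$$

    since $(x_1, x_2) = (x_1 - c x_2, 0) + (c x_2, x_2)$ splits $B$ as $V \oplus V^c_{(c)}$.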

    For any subspace $V \subset B$, the cosets of $V$ are the collection of the following sets:

    $$x + V = \{x + v : v \in V\}, \quad x \in B.$$

    The quotient space $B/V$ is the vector space whose elements are equivalence classes of the cosets of $V$, with the equivalence relation given by

    $$x + V \sim y + V \iff x - y \in V.$$

    When $V = \operatorname{ran} A$ for some $A \in \mathcal{L}_B$, the dimension of $B/V$ is called the defect of $A$.

    An operator $A \in \mathcal{L}_B$ is said to be a Fredholm operator if $\ker A$ and $B/\operatorname{ran} A$ are finite dimensional. The index of a Fredholm operator $A$ is the integer given by $\dim(\ker A) - \dim(B/\operatorname{ran} A)$. It turns out that a bounded linear operator with finite defect has a closed range (Lemma 4.38 of [15]); that is, for $A \in \mathcal{L}_B$, $\operatorname{ran} A$ is closed if $\dim(B/\operatorname{ran} A) < \infty$. Therefore, $\operatorname{ran} A$ is closed if $A$ is a Fredholm operator. Fredholm operators are invariant under compact perturbations; if $A$ is a Fredholm operator and $K$ is a compact operator, then $A + K$ is a Fredholm operator of the same index. In this paper, we mainly consider Fredholm operators of index zero, so we let $\mathcal{F}_0 (\subset \mathcal{L}_B)$ denote the collection of such operators.
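    A standard textbook illustration of these notions (not from the paper): on $\ell^2$, the left shift $L(x_1, x_2, \ldots) = (x_2, x_3, \ldots)$ has a one-dimensional kernel and full range, while the right shift $R(x_1, x_2, \ldots) = (0, x_1, x_2, \ldots)$ is injective with defect one, so

    $$\operatorname{ind} L = 1 - 0 = 1, \qquad \operatorname{ind} R = 0 - 1 = -1.$$

    By contrast, $\mathrm{id} + K$ with $K$ compact is a compact perturbation of the identity and thus belongs to $\mathcal{F}_0$.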

    Let $B_1$ and $B_2$ be Banach spaces and $\mathcal{L}_{B_1,B_2}$ denote the space of bounded linear operators from $B_1$ to $B_2$. In the subsequent discussion, we need a notion of a generalized inverse operator of $A \in \mathcal{L}_{B_1,B_2}$. Suppose that $B_1 = \ker A \oplus (\ker A)^c$ and $B_2 = \operatorname{ran} A \oplus (\operatorname{ran} A)^c$. Given these direct sum conditions, the generalized inverse of $A$, denoted by $A^g$, is defined as the unique linear extension of $(A|_{(\ker A)^c})^{-1}$ (defined on $\operatorname{ran} A$) to $B_2$. Specifically, $A^g$ is given by

    $$A^g = (A|_{(\ker A)^c})^{-1}(\mathrm{id}_{B_2} - P_{(\operatorname{ran} A)^c}), \tag{2.1}$$

    where $P_{V^c}$ denotes the bounded projection onto $V^c$ along $V$. It can be shown (Section 1 of [16]) that the generalized inverse $A^g$ has the following properties:

    $$A A^g A = A, \qquad A^g A A^g = A^g, \qquad A A^g = \mathrm{id}_{B_2} - P_{(\operatorname{ran} A)^c}, \qquad A^g A = P_{(\ker A)^c}.$$

    Since complementary subspaces are not uniquely determined, $A^g$ depends on our choice of them.

    The notion of a generalized inverse considered in this section is studied in the framework of Banach algebras by [17]. In a Hilbert space setting, $(\ker A)^c$ (resp. $(\operatorname{ran} A)^c$) can be set to the orthogonal complement $(\ker A)^\perp$ (resp. $(\operatorname{ran} A)^\perp$) of $\ker A$ (resp. $\operatorname{ran} A$), and then $A^g$ becomes identical to the Moore-Penrose inverse of $A$.
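    To make the construction concrete, the following is a minimal numerical sketch (our illustration, with hypothetical matrices; not part of the paper) of the generalized inverse (2.1) in finite dimensions, where complementary subspaces are encoded by chosen bases:

```python
import numpy as np

# A rank-2 operator on R^3: ker A = span{e3}, ran A = span{e1, e2}.
A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 0.]])

K0c = np.array([[1., 0.],   # columns span a chosen complement of ker A
                [0., 1.],
                [0., 0.]])
R0c = np.array([[1.],       # a chosen complement of ran A: span{(1, 0, 1)}
                [0.],
                [1.]])
R0 = A @ K0c                # columns span ran A

# Bounded projection onto (ran A)^c along ran A, built from the basis [R0 | R0c].
T = np.hstack([R0, R0c])
P_R0c = T @ np.diag([0., 0., 1.]) @ np.linalg.inv(T)

# Generalized inverse (2.1): A^g = (A|_{(ker A)^c})^{-1} (id - P_{(ran A)^c}).
# Since A sends K0c-coordinates to R0-coordinates, invert that correspondence.
Ag = K0c @ np.linalg.lstsq(R0, np.eye(3) - P_R0c, rcond=None)[0]

# The four identities listed above:
assert np.allclose(A @ Ag @ A, A)
assert np.allclose(Ag @ A @ Ag, Ag)
assert np.allclose(A @ Ag, np.eye(3) - P_R0c)
assert np.allclose(Ag @ A, np.diag([1., 1., 0.]))   # = P_{(ker A)^c}
```

    Changing `R0c` to a different complement (say $\operatorname{span}\{(0,0,1)\}$) changes `Ag`, which illustrates the dependence on the choice of complementary subspaces noted above.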

    Let $U$ be an open connected subset of $\mathbb{C}$. A map $A : U \to \mathcal{L}_B$ is called an operator pencil. An operator pencil $A$ is holomorphic at $z_0 \in U$ if the limit

    $$A^{(1)}(z_0) := \lim_{z \to z_0} \frac{A(z) - A(z_0)}{z - z_0}$$

    exists in the uniform operator topology. If $A$ is holomorphic for all $z \in D \subset U$ for an open connected set $D$, then we say that $A$ is holomorphic on $D$. A holomorphic operator pencil $A$ on $D$ allows the Taylor series at every $z_0 \in D$ (see pages 7 and 8 of [18]).

    An operator pencil $A$ is said to be meromorphic on $U$ if there exists a discrete set $U_0 \subset U$ such that $A : U \setminus U_0 \to \mathcal{L}_B$ is holomorphic, and the following Laurent expansion is allowed in a punctured neighborhood of $z_0 \in U_0$:

    $$A(z) = \sum_{j=-m}^{-1} A_j (z - z_0)^j + \sum_{j=0}^{\infty} A_j (z - z_0)^j,$$

    where the first term is called the principal part, and the second term is called the holomorphic part of the Laurent series. The finite positive integer $m$ is called the order of the pole at $z_0$. When $m = 1$ (resp. $m = 2$), we simply say that $A(z)$ has a simple pole (resp. second order pole) at $z_0$. If $A_{-m}, \ldots, A_{-1}$ are finite rank operators, we say that $A(z)$ is finitely meromorphic at $z_0$. In addition, $A(z)$ is said to be finitely meromorphic on $U$ if it is finitely meromorphic at each of its poles.

    The set of complex numbers $z \in U$ at which the operator $A(z)$ is noninvertible is called the spectrum of $A$ and denoted by $\sigma(A)$. It turns out that the spectrum is always a closed set (see, e.g., page 56 of [19]).

    If $A(z)$ is a Fredholm operator of index zero for every $z \in U$, we hereafter simply call it an $\mathcal{F}_0$-pencil.

    We provide a crucial input, called the analytic Fredholm theorem, for the subsequent discussion.

    Analytic Fredholm Theorem. (Corollary 8.4 in [18]) Let $A : U \to \mathcal{L}_B$ be a holomorphic Fredholm operator pencil, i.e., $A(z)$ is a Fredholm operator for every $z \in U$ and $A^{(1)}(z_0) := \lim_{z \to z_0} (A(z) - A(z_0))/(z - z_0)$ exists for all $z_0 \in U$, and assume that $A(z)$ is invertible for some element $z \in U$. Then, the following hold.

    (i) $\sigma(A)$ is a discrete set.

    (ii) In a punctured neighborhood of $z_0 \in \sigma(A)$,

    $$A(z)^{-1} = \sum_{j=-m}^{\infty} A_j (z - z_0)^j,$$

    where $A_0$ is a Fredholm operator of index zero, and $A_{-m}, \ldots, A_{-1}$ are finite rank operators.

    That is, the analytic Fredholm theorem implies that if the inverse of a holomorphic Fredholm pencil exists, it is finitely meromorphic.
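    A toy example illustrating the theorem (our illustration, not from [18]): on $B = \mathbb{C}^2$, consider the pencil $A(z) = \begin{pmatrix} z - z_0 & 0 \\ 0 & 1 \end{pmatrix}$. Then $\sigma(A) = \{z_0\}$ is discrete, and in a punctured neighborhood of $z_0$,

    $$A(z)^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}(z - z_0)^{-1} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$$

    a simple pole whose residue has rank one, while the coefficient of $(z - z_0)^0$ is Fredholm of index zero, exactly as (i) and (ii) state.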

    We briefly introduce Banach-valued random variables, called $B$-random variables. A more detailed discussion of this subject can be found in, e.g., Chapter 1 of [20]. We let $B^*$ denote the topological dual of $B$.

    Let $(\Omega, \mathcal{F}, P)$ be an underlying probability triple. A $B$-random variable is defined as a measurable map $X : \Omega \to B$, where $B$ is understood to be equipped with its Borel $\sigma$-field. $X$ is said to be integrable if $E\|X\| < \infty$. If $X$ is integrable, there exists a unique element $EX \in B$ such that for all $f \in B^*$,

    $$E[f(X)] = f(EX).$$

    Let $L^2_B$ denote the space of $B$-random variables $X$ such that $EX = 0$ and $E\|X\|^2 < \infty$.

    Let $\varepsilon = (\varepsilon_t, t \in \mathbb{Z})$ be an independent and identically distributed sequence in $L^2_B$ such that $E\varepsilon_t = 0$ and $0 < E\|\varepsilon_t\|^2 < \infty$. In this paper, $\varepsilon$ is simply called a strong white noise.

    For some $t_0 \in \mathbb{Z} \cup \{-\infty\}$, let $X = (X_t, t \geq t_0)$ be a stochastic process taking values in $B$ satisfying

    $$X_t = \sum_{j=0}^{\infty} A_j \varepsilon_{t-j},$$

    where $(A_j, j \geq 0)$ is a sequence in $\mathcal{L}_B$ satisfying $\sum_{j=0}^{\infty} \|A_j\|_{\mathcal{L}_B} < \infty$. We call the sequence $(X_t, t \geq t_0)$ a standard linear process. In this case, $\sum_{j=0}^{\infty} A_j$ is convergent in $\mathcal{L}_B$.

    A sequence in $L^2_B$ is said to be I(0) if it is a standard linear process with $\sum_{j=0}^{\infty} A_j \neq 0$. For $d \in \{1, 2\}$, let $X = (X_t, t \geq -d+1)$ be a sequence in $L^2_B$. We say $X$ is I($d$) if its $d$-th difference $\Delta^d X = (\Delta^d X_t, t \geq 1)$ is I(0) (see, e.g., [12]).
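    For intuition, a standard scalar example: the random walk $X_t = X_{t-1} + \varepsilon_t = X_0 + \sum_{s=1}^{t} \varepsilon_s$ is I(1), since its first difference $\Delta X_t = \varepsilon_t$ is a standard linear process with $A_0 = 1$ and $A_j = 0$ for $j \geq 1$, so that $\sum_{j=0}^{\infty} A_j = 1 \neq 0$.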

    Throughout this section, we employ the following assumption.

    Assumption 3.1. $A : U \to \mathcal{L}_B$ is a holomorphic Fredholm pencil, and $z_0 \in \sigma(A)$ is an isolated element.

    Under the above assumption, $A(z)^{-1}$ exists in a punctured neighborhood of $z = z_0$, and its properties around $z_0$ have been studied in various contexts (see, e.g., [9,10,18]). In the case where $z_0 = 1$ and $A(z)$ is understood as a linear filter, Assumption 3.1 may be understood as a generalization of the standard unit root assumption in time series analysis (see [12]). Since $A(z)$ is holomorphic, it allows the Taylor series around $z_0$ as follows:

    $$A(z) = \sum_{j=0}^{\infty} A_j (z - z_0)^j, \tag{3.1}$$

    where $A_0 = A(z_0)$, $A_j = A^{(j)}(z_0)/j!$ for $j \geq 1$, and $A^{(j)}(z)$ denotes the $j$-th complex derivative of $A(z)$. Furthermore, we know from the analytic Fredholm theorem that $N(z) := A(z)^{-1}$ allows the Laurent series expansion in a punctured neighborhood of $z_0$ as follows:

    $$N(z) = \sum_{j=-m}^{-1} N_j (z - z_0)^j + \sum_{j=0}^{\infty} N_j (z - z_0)^j, \quad 1 \leq m < \infty. \tag{3.2}$$

    Our first goal is to find necessary and sufficient conditions for $m = 1$ and $m = 2$. We then provide a recursive formula to obtain $N_j$ for $j \geq -m$. Before stating our main assumptions and results of this section, we provide some preliminary results.

    First, it can be shown that $A : U \to \mathcal{L}_B$ in Assumption 3.1 is in fact an $\mathcal{F}_0$-pencil.

    Lemma 3.1. Under Assumption 3.1, $A : U \to \mathcal{L}_B$ is an $\mathcal{F}_0$-pencil.

    Proof. Since $z_0$ is an isolated element of $\sigma(A)$, there exists some point in $U$ at which the operator pencil is invertible. It turns out that the index of $A(z)$ does not depend on $z \in U$ given that $U$ is connected, and Fredholm operators of nonzero index are not invertible (Section 2 of [21]). We thus find that $A(z)$ has index zero for every $z \in U$.

    In view of Lemma 3.1, it may also be deduced that the analytic Fredholm theorem provided in Section 2.5 is in effect a result about $\mathcal{F}_0$-pencils. The following is another important observation implied by Assumption 3.1.

    Lemma 3.2. Under Assumption 3.1,

    (i) $\operatorname{ran} A(z)$ allows a complementary subspace for $z \in U$.

    (ii) $\ker A(z)$ allows a complementary subspace for $z \in U$.

    (iii) For any finite dimensional subspace $V$, $\operatorname{ran} A(z) + V$ allows a complementary subspace for $z \in U$.

    Proof. Since $A(z)$ is a Fredholm operator, we know that $\operatorname{ran} A(z)$ is closed, and $B/\operatorname{ran} A(z)$ is finite dimensional. Given any closed subspace $V$, it turns out that $V$ allows a complementary subspace if $B/V$ is finite dimensional (Theorem 3.2.18 of [14]). Thus, (i) is proved. Our proof of (ii) follows from the fact that every finite dimensional subspace allows a complementary subspace (Theorem 3.2.18 of [14]). (iii) follows from the fact that the algebraic sum $\operatorname{ran} A(z) + V$ is a closed subspace and $B/(\operatorname{ran} A(z) + V)$ is finite dimensional since $\operatorname{ran} A(z)$ is closed and $V$ is finite dimensional.

    In a Hilbert space, every closed subspace allows a complementary subspace, which can always be chosen as the orthogonal complement. We therefore know that $\operatorname{ran} A(z)$ and $\operatorname{ran} A(z) + V$ allow complementary subspaces in a Hilbert space whenever $\operatorname{ran} A(z)$ is closed. However, in a Banach space, the closedness of a subspace does not guarantee the existence of a complementary subspace. The reader is referred to [14] for a detailed discussion of this subject.

    Due to Lemma 3.2, we know that $\operatorname{ran} A_0$ and $\ker A_0$ are complemented, meaning that we may find their complementary subspaces, as well as the associated bounded projections. Depending on our choice of complementary subspaces, we may also define the corresponding generalized inverse of $A_0$ as in (2.1). To simplify expressions, we let

    $$\mathbb{1}_{\{j=0\}} = \begin{cases} \mathrm{id}_B & \text{if } j = 0, \\ 0 & \text{otherwise,} \end{cases} \qquad G_j(\ell, m) = \sum_{k=-m}^{j-1} N_k A_{j+\ell-k}, \quad \ell = 0, 1, 2,$$
    $$R_0 = \operatorname{ran} A_0, \qquad K_0 = \ker A_0, \qquad K_1 = \{x \in K_0 : A_1 x \in R_0\},$$
    $$R_0^c = \text{a complementary subspace of } \operatorname{ran} A_0, \qquad K_0^c = \text{a complementary subspace of } \ker A_0,$$
    $$P_{R_0^c} = \text{the bounded projection onto } R_0^c \text{ along } R_0, \qquad P_{K_0^c} = \text{the bounded projection onto } K_0^c \text{ along } K_0,$$
    $$S_{R_0^c} = P_{R_0^c} A_1|_{K_0} : K_0 \to R_0^c, \qquad (A_0)^g_{\{R_0^c, K_0^c\}} = \text{the generalized inverse of } A_0,$$

    where $R_0^c$ and $K_0^c$ are not uniquely determined, and thus we need to be careful with $P_{R_0^c}$, $P_{K_0^c}$ and $S_{R_0^c}$ depending on our choice of those complementary subspaces; however, once $R_0^c$ and $K_0^c$ are specified, $P_{R_0^c}$, $P_{K_0^c}$ and $S_{R_0^c}$ are uniquely defined. Similarly, the subscript $\{R_0^c, K_0^c\}$ of the generalized inverse underscores its dependence on $R_0^c$ and $K_0^c$.

    We provide another useful lemma.

    Lemma 3.3. Suppose that Assumption 3.1 is satisfied. Then, invertibility (or noninvertibility) of $S_{R_0^c}$ does not depend on the choice of $R_0^c$.

    Proof. Let $V_0$ and $W_0$ be two different choices of $R_0^c$. Then, it is trivial to show that

    $$\ker S_{V_0} = \ker S_{W_0} = K_1. \tag{3.3}$$

    Moreover, we know due to Lemma 3.1 that $A(z)$ satisfying Assumption 3.1 is in fact an $\mathcal{F}_0$-pencil, which implies that $\dim(B/\operatorname{ran} A_0) = \dim(\ker A_0) < \infty$. Since a complementary subspace of $\operatorname{ran} A_0$ is isomorphic to $B/\operatorname{ran} A_0$ (Corollary 3.2.16 of [14]), we have

    $$\dim(V_0) = \dim(W_0) = \dim(K_0) < \infty. \tag{3.4}$$

    Any injective linear map between finite dimensional vector spaces of the same dimension is also bijective. Therefore, in view of (3.4), $K_1 = \{0\}$ is necessary and sufficient for $S_{V_0}$ (and $S_{W_0}$) to be invertible. Therefore, if either one is invertible (resp. noninvertible), then the other is also invertible (resp. noninvertible).

    We next provide necessary and sufficient conditions for $A(z)^{-1}$ to have a simple pole at $z_0$, together with its closed-form expression in a punctured neighborhood of $z_0$.

    Proposition 3.1. Suppose that Assumption 3.1 is satisfied. Then, the following conditions are equivalent to each other.

    (i) $m = 1$ in the Laurent series expansion (3.2).

    (ii) $B = R_0 \oplus A_1 K_0$.

    (iii) For all possible choices of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

    (iv) For some choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

    Under any of these conditions and any choice of $R_0^c$ and $K_0^c$, the coefficients $(N_j, j \geq -1)$ in (3.2) are given by the following recursive formula:

    $$N_{-1} = S_{R_0^c}^{-1} P_{R_0^c}, \tag{3.5}$$
    $$N_j = \left(\mathbb{1}_{\{j=0\}} - G_j(0,1)\right)(A_0)^g_{\{R_0^c, K_0^c\}}\left(\mathrm{id}_B - A_1 S_{R_0^c}^{-1} P_{R_0^c}\right) - G_j(1,1)\, S_{R_0^c}^{-1} P_{R_0^c}, \tag{3.6}$$

    where each $N_j$ is understood as a map from $B$ to $B$ without restriction of the codomain.

    Proof. We first show the claimed equivalence between conditions (i)–(iv) and then verify the recursive formula.

    Equivalence between (i)–(iv): Due to the analytic Fredholm theorem, we know that $A(z)^{-1}$ admits the Laurent series expansion (3.2) in a punctured neighborhood of $z_0$. Moreover, $A(z)$ is holomorphic and thus admits the Taylor series as in (3.1). Combining (3.1) and (3.2), we obtain the expansion of the identity $\mathrm{id}_B = A(z)^{-1} A(z)$ as follows:

    $$\mathrm{id}_B = \sum_{k=-m}^{\infty} \left( \sum_{j=0}^{m+k} N_{k-j} A_j \right) (z - z_0)^k. \tag{3.7}$$

    Since (iii) $\Leftrightarrow$ (iv) is deduced from Lemma 3.3, we demonstrate equivalence between (i)–(iv) by showing (ii) $\Rightarrow$ (i) $\Rightarrow$ (iv) $\Rightarrow$ (ii).

    Now, we show that (ii) $\Rightarrow$ (i). Suppose that $m > 1$. Collecting the coefficients of $(z - z_0)^{-m}$ and $(z - z_0)^{-m+1}$ in (3.7), we obtain

    $$N_{-m} A_0 = 0, \tag{3.8}$$
    $$N_{-m+1} A_0 + N_{-m} A_1 = 0. \tag{3.9}$$

    Eq (3.8) implies that $N_{-m} R_0 = \{0\}$, and further, (3.9) implies that $N_{-m} A_1 K_0 = \{0\}$. Therefore, if the direct sum decomposition (ii) is true, we necessarily have $N_{-m} = 0$. Note that this argument yields $N_{-m} = 0$ for any $2 \leq m < \infty$. We therefore conclude that $m = 1$, which proves (ii) $\Rightarrow$ (i).

    We next show that (i) $\Rightarrow$ (iv). Collecting the coefficients of $(z - z_0)^{-1}$ and $(z - z_0)^{0}$ in (3.7) when $m = 1$, we have

    $$N_{-1} A_0 = 0, \tag{3.10}$$
    $$N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B. \tag{3.11}$$

    Since $A_0$ is a Fredholm operator, we know from Lemma 3.2 that $R_0$ allows a complementary subspace $V_0$, and there exists the associated projection operator $P_{V_0}$. Then, Eq (3.10) implies that

    $$N_{-1}(\mathrm{id}_B - P_{V_0}) = 0 \quad \text{and} \quad N_{-1} = N_{-1} P_{V_0}. \tag{3.12}$$

    Moreover, (3.11) implies $\mathrm{id}_B|_{K_0} = N_{-1} A_1|_{K_0}$. In view of (3.12), it is apparent that

    $$\mathrm{id}_B|_{K_0} = N_{-1} S_{V_0}. \tag{3.13}$$

    Eq (3.13) implies that $S_{V_0}$ is an injection. Moreover, due to Lemma 3.1, we know $A_0 \in \mathcal{F}_0$. Using the same arguments we used to establish (3.4), we obtain

    $$\dim(V_0) = \dim(B/R_0) = \dim(K_0) < \infty. \tag{3.14}$$

    Eqs (3.13) and (3.14) together imply that $S_{V_0} : K_0 \to V_0$ is an injective linear map between finite dimensional vector spaces of the same dimension. Therefore, we conclude that $S_{V_0} : K_0 \to V_0$ is a bijection.

    To show (iv) $\Rightarrow$ (ii), suppose that our direct sum condition (ii) is false. We first consider the case where $R_0 \cap A_1 K_0 \neq \{0\}$. Then there exists $x \in K_0$ with $A_1 x \in R_0 \setminus \{0\}$; in particular $x \neq 0$, and for any arbitrary choice of $R_0^c$, $S_{R_0^c} x = P_{R_0^c} A_1 x = 0$. This implies that $S_{R_0^c}$ cannot be injective. We next consider the case where $B \neq R_0 + A_1 K_0$ even if $R_0 \cap A_1 K_0 = \{0\}$ holds. In this case, clearly, $R_0 \oplus A_1 K_0$ is a strict subspace of $B$. On the other hand, since $R_0^c$ is a complementary subspace of $R_0$, it is deduced that

    $$\dim(A_1 K_0) < \dim(R_0^c). \tag{3.15}$$

    Note that $S_{R_0^c}$ can be viewed as the composition of $P_{R_0^c}$ and $A_1|_{K_0}$. From the rank-nullity theorem, $\dim(S_{R_0^c} K_0)$ must be at most equal to $\dim(A_1 K_0)$. In view of (3.15), this implies that $S_{R_0^c}$ cannot be surjective for any arbitrary choice of $R_0^c$. Therefore, we conclude that (iv) $\Rightarrow$ (ii).

    Recursive formula for $(N_j, j \geq -1)$: Assume that $V_0$ as a choice of $R_0^c$ and $W_0$ as a choice of $K_0^c$ are fixed. We first verify the claimed formulas (3.5) and (3.6) for this specific choice of complementary subspaces.

    At first, we consider the claimed formula for $N_{-1}$. In our demonstration of (i) $\Rightarrow$ (iv) above, we obtained (3.13). Since the codomain of $S_{V_0}$ is restricted to $V_0$, (3.13) can be written as

    $$\mathrm{id}_B|_{K_0} = N_{-1}|_{V_0} S_{V_0}. \tag{3.16}$$

    Moreover, we know that $S_{V_0} : K_0 \to V_0$ is invertible. We therefore have $N_{-1}|_{V_0} = S_{V_0}^{-1}$, and note that we still need to restrict the domain of $N_{-1}$ to $V_0$. By composing both sides with $P_{V_0}$, we obtain $N_{-1} P_{V_0} = S_{V_0}^{-1} P_{V_0}$. Recalling (3.12), which implies that $N_{-1} = N_{-1} P_{V_0}$, we find that

    $$N_{-1} = S_{V_0}^{-1} P_{V_0}. \tag{3.17}$$

    Since the codomain of $S_{V_0}^{-1}$ is $K_0$, the map (3.17) is the formula for $N_{-1}$ with the restricted codomain. However, it can be understood as a map from $B$ to $B$ by composing both sides of (3.17) with a proper embedding.

    Now, we verify the recursive formula for $(N_j, j \geq 0)$. Collecting the coefficients of $(z - z_0)^j$ and $(z - z_0)^{j+1}$ in the identity expansion (3.7), the following can be shown:

    $$G_j(0,1) + N_j A_0 = \mathbb{1}_{\{j=0\}}, \tag{3.18}$$
    $$G_j(1,1) + N_j A_1 + N_{j+1} A_0 = 0. \tag{3.19}$$

    Since $\mathrm{id}_B = (\mathrm{id}_B - P_{V_0}) + P_{V_0}$, $N_j$ can be written as the sum of $N_j(\mathrm{id}_B - P_{V_0})$ and $N_j P_{V_0}$. We will obtain an explicit formula for each summand.

    Given complementary subspaces $V_0$ and $W_0$, we may define $(A_0)^g_{\{V_0,W_0\}} : B \to W_0$. Since we have $A_0 (A_0)^g_{\{V_0,W_0\}} = \mathrm{id}_B - P_{V_0}$, (3.18) implies that

    $$N_j(\mathrm{id}_B - P_{V_0}) = \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} - G_j(0,1)(A_0)^g_{\{V_0,W_0\}}. \tag{3.20}$$

    Moreover, by restricting the domain of both sides of (3.19) to $K_0$, we have

    $$G_j(1,1)|_{K_0} + N_j A_1|_{K_0} = 0. \tag{3.21}$$

    Since $N_j = N_j P_{V_0} + N_j(\mathrm{id}_B - P_{V_0})$, it is easily deduced from (3.21) that

    $$N_j S_{V_0} = -G_j(1,1)|_{K_0} - N_j(\mathrm{id}_B - P_{V_0}) A_1|_{K_0}. \tag{3.22}$$

    Substituting (3.20) into (3.22), we obtain

    $$N_j S_{V_0} = -G_j(1,1)|_{K_0} - \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_1|_{K_0} + G_j(0,1)(A_0)^g_{\{V_0,W_0\}} A_1|_{K_0}. \tag{3.23}$$

    Since $S_{V_0} : K_0 \to V_0$ is invertible, it is deduced that $\operatorname{ran} S_{V_0}^{-1} = K_0$ and $S_{V_0} S_{V_0}^{-1} = \mathrm{id}_B|_{V_0}$. We therefore obtain the following equation from (3.23):

    $$N_j|_{V_0} = -G_j(1,1) S_{V_0}^{-1} - \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_1 S_{V_0}^{-1} + G_j(0,1)(A_0)^g_{\{V_0,W_0\}} A_1 S_{V_0}^{-1}. \tag{3.24}$$

    Composing both sides of (3.24) with $P_{V_0}$, we obtain an explicit formula for $N_j P_{V_0}$. Combining this result with (3.20), we obtain the formula (3.6) for $N_j$, in terms of $P_{V_0}$, $(A_0)^g_{\{V_0,W_0\}}$, $S_{V_0}$, $G_j(0,1)$ and $G_j(1,1)$, after a little algebra. Of course, the resulting operator $N_j$ should be understood as a map from $B$ to $B$.

    The formula for each $N_j$ that we have obtained may seem to depend on our choice of complementary subspaces, especially through $P_{V_0}$, $S_{V_0}$ and $(A_0)^g_{\{V_0,W_0\}}$. However, if a Laurent series exists, it is unique. Even if the aforementioned operators are defined differently due to a change in our choice, we can still obtain a recursive formula for $(N_j, j \geq -1)$ in terms of those operators. Such a newly obtained formula cannot differ from what we have obtained from a fixed choice of complementary subspaces because of the uniqueness of the Laurent series. Therefore, it is easily deduced that our recursive formula for $N_j$ derived in Proposition 3.1 does not depend on a specific choice of complementary subspaces.
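    As a quick sanity check of (3.5) in finite dimensions (our own hypothetical 2×2 pencil, not from the paper), one can compare the formula with a numerically computed residue:

```python
import numpy as np

# Hypothetical pencil A(z) = A0 + A1*z with an isolated singularity at z0 = 0.
A0 = np.array([[1., 1.],
               [0., 0.]])
A1 = np.array([[0., 0.],
               [1., 0.]])

# K0 = ker A0 = span{(1, -1)}, R0 = ran A0 = span{(1, 0)}; choose R0^c = span{(0, 1)}.
k0 = np.array([1., -1.])
P_R0c = np.array([[0., 0.],
                  [0., 1.]])       # projection onto R0^c along R0

# S = P_{R0^c} A1|_{K0}, a 1x1 map in the bases {k0} and {(0, 1)}.
S = (P_R0c @ (A1 @ k0))[1]         # = 1.0, invertible, so the pole is simple
N_minus1 = np.outer(k0, [0., 1.]) / S   # formula (3.5) as a map on B = R^2

# Compare with the residue of A(z)^{-1} at z0 = 0.
z = 1e-6
residue = z * np.linalg.inv(A0 + A1 * z)
assert np.allclose(residue, N_minus1, atol=1e-5)   # N_{-1} = [[0, 1], [0, -1]]
```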

    To simplify expressions, we let

    $$R_1 = \operatorname{ran} A_0 + A_1 \ker A_0, \qquad R_1^c = \text{a complementary subspace of } R_1,$$
    $$K_1^c = \text{a complementary subspace of } K_1 \text{ in } K_0,$$
    $$P_{R_1^c} = \text{the bounded projection onto } R_1^c \text{ along } R_1, \qquad P_{K_1^c} = \text{the bounded projection onto } K_1^c \text{ along } K_1.$$

    We know from Lemma 3.2 that $R_0$, $K_0$ and $R_1$ are complemented, so we may find complementary subspaces $R_0^c$, $K_0^c$ and $R_1^c$, as well as the bounded projections $P_{R_0^c}$, $P_{K_0^c}$ and $P_{R_1^c}$. Given $R_0^c$, $R_1^c$ is not uniquely determined in general. We require our choice to satisfy

    $$R_1^c \subseteq R_0^c, \tag{3.25}$$

    so that

    $$R_0^c = S_{R_0^c} K_0 \oplus R_1^c, \tag{3.26}$$
    $$P_{R_0^c} P_{R_1^c} = P_{R_1^c} P_{R_0^c} = P_{R_1^c}. \tag{3.27}$$

    For any choice of $R_0^c$, an $R_1^c$ satisfying (3.25) always exists, and such a subspace can easily be obtained as follows:

    Lemma 3.4. Suppose that Assumption 3.1 is satisfied. Given $R_0^c$, let $V_1$ be a specific choice of $R_1^c$. Then, $P_{R_0^c} V_1 (\subseteq R_0^c)$ is also a complementary subspace of $R_1$.

    Proof. Let $V_0$ be a given choice of $R_0^c$. If $B = R_0 + A_1 K_0$, then $V_1 = \{0\}$, and our statement trivially holds. Now, consider the case when $V_1$ is a nontrivial subspace. Since $R_0 + A_1 K_0 = R_0 \oplus P_{V_0} A_1 K_0$ holds, it is deduced that $B = R_0 \oplus P_{V_0} A_1 K_0 \oplus V_1$. This implies that $M := P_{V_0} A_1 K_0 \oplus V_1$ is a complementary subspace of $R_0$. Since $P_{V_0} B = P_{V_0} M$, clearly $P_{V_0}|_M : M \to V_0$ must be a surjection, so we have

    $$P_{V_0} M = V_0. \tag{3.28}$$

    Moreover, both $M$ and $V_0$ are complementary subspaces of $R_0$, and we know due to Lemma 3.1 that $A_0 \in \mathcal{F}_0$. Then, it is deduced from similar arguments to those we used to derive (3.4) that

    $$\dim(B/R_0) = \dim(V_0) = \dim(M).$$

    Thus, $P_{V_0}|_M : M \to V_0$ is a surjection between vector spaces of the same finite dimension, meaning that it is also an injection. We therefore obtain $P_{V_0} A_1 K_0 \cap P_{V_0} V_1 = \{0\}$, which implies that $P_{V_0} M = P_{V_0} A_1 K_0 \oplus P_{V_0} V_1$. Combining this with (3.28), it is deduced that

    $$B = R_0 \oplus P_{V_0} A_1 K_0 \oplus P_{V_0} V_1.$$

    Clearly, $P_{V_0} V_1$ is a complementary subspace of $R_1$.

    Due to Lemma 3.4, we know how to make an arbitrary choice of $R_1^c$ satisfy the requirement (3.25), and thus we may assume that our choice of $R_1^c$ satisfies (3.25) in the subsequent discussion.

    Under any choice of our complementary subspaces satisfying (3.25), we define

    $$A_2^{\{R_0^c,K_0^c\}} = A_2 - A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1, \qquad S^{\{R_0^c,K_0^c,R_1^c\}} = P_{R_1^c} A_2^{\{R_0^c,K_0^c\}}\big|_{K_1} : K_1 \to R_1^c,$$

    where the attached sets of complementary subspaces indicate the choices upon which the corresponding operators depend.

    In this section, we consider the case $K_1 \neq \{0\}$. Then, $S_{R_0^c}$ is not invertible since $\ker S_{R_0^c} = K_1$. However, note that $S_{R_0^c}$ is a linear map between finite dimensional subspaces, so we can always define its generalized inverse as follows:

    $$(S_{R_0^c})^g_{\{R_1^c,K_1^c\}} = (S_{R_0^c}|_{K_1^c})^{-1}(\mathrm{id}_B - P_{R_1^c})\big|_{R_0^c}. \tag{3.29}$$

    Before stating our main result of this section, we first establish the following preliminary result.

    Lemma 3.5. Suppose that Assumption 3.1 is satisfied. Let $V_0$ and $\tilde{V}_0$ be arbitrary choices of $R_0^c$, and let $V_1 \subseteq V_0$ and $\tilde{V}_1 \subseteq \tilde{V}_0$ be arbitrary choices of $R_1^c$. Then,

    $$\dim(V_1) = \dim(\tilde{V}_1) = \dim(K_1).$$

    Proof. For $V_0$ and $\tilde{V}_0$, we have the two operators $S_{V_0} : K_0 \to V_0$ and $S_{\tilde{V}_0} : K_0 \to \tilde{V}_0$. We established that $\ker S_{V_0} = \ker S_{\tilde{V}_0} = K_1$ in (3.3). From Lemma 3.1, we know $A_0 \in \mathcal{F}_0$, so it is easily deduced that

    $$\dim(V_0) = \dim(K_0) = \dim(S_{V_0} K_0) + \dim(K_1), \tag{3.30}$$
    $$\dim(\tilde{V}_0) = \dim(K_0) = \dim(S_{\tilde{V}_0} K_0) + \dim(K_1). \tag{3.31}$$

    In each of (3.30) and (3.31), the first equality is deduced from the same arguments as those we used to derive (3.4), and the second equality is justified by the rank-nullity theorem. Moreover, the following direct sum decompositions are allowed:

    $$V_0 = S_{V_0} K_0 \oplus V_1, \tag{3.32}$$
    $$\tilde{V}_0 = S_{\tilde{V}_0} K_0 \oplus \tilde{V}_1. \tag{3.33}$$

    To see why (3.32) and (3.33) are true, first note that we have $R_0 + A_1 K_0 = R_0 \oplus S_{V_0} K_0 = R_0 \oplus S_{\tilde{V}_0} K_0$. We thus have $B = R_0 \oplus S_{V_0} K_0 \oplus V_1 = R_0 \oplus S_{\tilde{V}_0} K_0 \oplus \tilde{V}_1$. These direct sum conditions imply that $S_{V_0} K_0 \oplus V_1$ and $S_{\tilde{V}_0} K_0 \oplus \tilde{V}_1$ are complementary subspaces of $R_0$. Since $V_1 \subseteq V_0$ and $\tilde{V}_1 \subseteq \tilde{V}_0$, (3.32) and (3.33) are established. Now, it is deduced from (3.32) and (3.33) that

    $$\dim(V_0) = \dim(S_{V_0} K_0) + \dim(V_1), \tag{3.34}$$
    $$\dim(\tilde{V}_0) = \dim(S_{\tilde{V}_0} K_0) + \dim(\tilde{V}_1). \tag{3.35}$$

    Comparing (3.30) and (3.34), we obtain $\dim(K_1) = \dim(V_1)$. Additionally, from (3.31) and (3.35), we obtain $\dim(K_1) = \dim(\tilde{V}_1)$.

    Now, we provide necessary and sufficient conditions for $A(z)^{-1}$ to have a second order pole at $z_0$, together with its closed-form expression in a punctured neighborhood of $z_0$.

    Proposition 3.2. Suppose that Assumption 3.1 is satisfied, and $K_1 \neq \{0\}$. Then, the following conditions are equivalent to each other.

    (i) $m = 2$ in the Laurent series expansion (3.2).

    (ii) For some choice of $R_0^c$ and $K_0^c$, we have

    $$B = R_1 \oplus A_2^{\{R_0^c,K_0^c\}} K_1.$$

    (iii) For all possible choices of $R_0^c$, $K_0^c$ and $R_1^c$ satisfying (3.25), $S^{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

    (iv) For some choice of $R_0^c$, $K_0^c$ and $R_1^c$ satisfying (3.25), $S^{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

    Under any of these conditions and any choice of complementary subspaces satisfying (3.25), the coefficients $(N_j, j \geq -2)$ in (3.2) are given by the following recursive formula:

    $$N_{-2} = (S^{\{R_0^c,K_0^c,R_1^c\}})^{-1} P_{R_1^c}, \tag{3.36}$$
    $$N_{-1} = \left(Q_R^{\{R_0^c,K_0^c\}} (S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c} - N_{-2} A_1 (A_0)^g_{\{R_0^c,K_0^c\}}\right) Q_L^{\{R_0^c,K_0^c\}} - Q_R^{\{R_0^c,K_0^c\}} (A_0)^g_{\{R_0^c,K_0^c\}} A_1 N_{-2} - N_{-2} A_3^{\{R_0^c,K_0^c\}} N_{-2}, \tag{3.37}$$
    $$N_j = \left(G_j(1,2)(A_0)^g_{\{R_0^c,K_0^c\}} A_1 - G_j(2,2)\right) N_{-2} + \left(\mathbb{1}_{\{j=0\}} - G_j(0,2)\right)(A_0)^g_{\{R_0^c,K_0^c\}}\left(\mathrm{id}_B - A_1 (S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c}\right) Q_L^{\{R_0^c,K_0^c\}} - G_j(1,2)(S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c} Q_L^{\{R_0^c,K_0^c\}}, \tag{3.38}$$

    where

    $$A_3^{\{R_0^c,K_0^c\}} = A_3 - A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1, \qquad Q_L^{\{R_0^c,K_0^c\}} = \mathrm{id}_B - A_2^{\{R_0^c,K_0^c\}} N_{-2}, \qquad Q_R^{\{R_0^c,K_0^c\}} = \mathrm{id}_B - N_{-2} A_2^{\{R_0^c,K_0^c\}}.$$

    Each $N_j$ is understood as a map from $B$ to $B$ without restriction of the codomain.

    Proof. We first establish some results that are repeatedly used in the subsequent proof. Given any choice of complementary subspaces satisfying (3.25), the following identity decomposition is easily deduced from (3.27):

    $$\mathrm{id}_B = (\mathrm{id}_B - P_{R_0^c}) + (\mathrm{id}_B - P_{R_1^c}) P_{R_0^c} + P_{R_1^c}. \tag{3.39}$$

    Since we have $R_1 = R_0 + A_1 K_0 = R_0 \oplus S_{R_0^c} K_0$, our direct sum condition (ii) is equivalent to

    $$B = R_0 \oplus S_{R_0^c} K_0 \oplus A_2^{\{R_0^c,K_0^c\}} K_1. \tag{3.40}$$

    Moreover, we may obtain the following expansion of the identity from (3.1) and (3.2):

    $$\mathrm{id}_B = \sum_{k=-m}^{\infty} \left( \sum_{j=0}^{m+k} N_{k-j} A_j \right) (z - z_0)^k \tag{3.41}$$
    $$= \sum_{k=-m}^{\infty} \left( \sum_{j=0}^{m+k} A_j N_{k-j} \right) (z - z_0)^k. \tag{3.42}$$

    Equivalence between (i)–(iv): Since (iii) $\Rightarrow$ (iv) is trivial, we will show that (ii) $\Rightarrow$ (i) $\Rightarrow$ (iii) and (iv) $\Rightarrow$ (ii).

    To show (ii) $\Rightarrow$ (i), let $V_0$ (resp. $W_0$) be a choice of $R_0^c$ (resp. $K_0^c$) such that the direct sum condition (ii) holds for $V_0$ and $W_0$. Since $\ker S_{V_0} = K_1 \neq \{0\}$, $S_{V_0}$ cannot be invertible. Therefore, $m \neq 1$ by Proposition 3.1. Now, suppose that $2 < m < \infty$ in (3.2). Collecting the coefficients of $(z - z_0)^{-m}$, $(z - z_0)^{-m+1}$ and $(z - z_0)^{-m+2}$ in (3.41) and (3.42), we obtain

    $$N_{-m} A_0 = A_0 N_{-m} = 0, \tag{3.43}$$
    $$N_{-m} A_1 + N_{-m+1} A_0 = A_1 N_{-m} + A_0 N_{-m+1} = 0, \tag{3.44}$$
    $$N_{-m} A_2 + N_{-m+1} A_1 + N_{-m+2} A_0 = 0. \tag{3.45}$$

    We may define the generalized inverse $(A_0)^g_{\{V_0,W_0\}}$ for $V_0$ and $W_0$. Composing both sides of (3.43) with $(A_0)^g_{\{V_0,W_0\}}$, we obtain

    $$N_{-m}(\mathrm{id}_B - P_{V_0}) = 0 \quad \text{and} \quad N_{-m} = N_{-m} P_{V_0}. \tag{3.46}$$

    From (3.44) and (3.46), it is deduced that

    $$N_{-m} A_1|_{K_0} = N_{-m} P_{V_0} A_1|_{K_0} = N_{-m} S_{V_0} = 0. \tag{3.47}$$

    Restricting the domain of both sides of (3.45) to $K_1$, we find that

    $$N_{-m} A_2|_{K_1} + N_{-m+1} A_1|_{K_1} = 0. \tag{3.48}$$

    Moreover, (3.44) trivially implies that

    $$N_{-m+1} A_0 = -N_{-m} A_1 \quad \text{and} \quad A_0 N_{-m+1} = -A_1 N_{-m}. \tag{3.49}$$

    By composing each of the equations in (3.49) with $(A_0)^g_{\{V_0,W_0\}}$, it can be deduced that

    $$N_{-m+1}(\mathrm{id}_B - P_{V_0}) = -N_{-m} A_1 (A_0)^g_{\{V_0,W_0\}}, \tag{3.50}$$
    $$P_{W_0} N_{-m+1} = -(A_0)^g_{\{V_0,W_0\}} A_1 N_{-m}. \tag{3.51}$$

    Composing both sides of (3.51) with $P_{V_0}$ and using (3.46), we find that

    $$P_{W_0} N_{-m+1} P_{V_0} = -(A_0)^g_{\{V_0,W_0\}} A_1 N_{-m}. \tag{3.52}$$

    From (3.52) and the identity decomposition $\mathrm{id}_B = (\mathrm{id}_B - P_{W_0}) + P_{W_0}$, we obtain

    $$N_{-m+1} P_{V_0} = -(A_0)^g_{\{V_0,W_0\}} A_1 N_{-m} + (\mathrm{id}_B - P_{W_0}) N_{-m+1} P_{V_0}. \tag{3.53}$$

    Summing both sides of (3.50) and (3.53) gives

    $$N_{-m+1} = -N_{-m} A_1 (A_0)^g_{\{V_0,W_0\}} - (A_0)^g_{\{V_0,W_0\}} A_1 N_{-m} + (\mathrm{id}_B - P_{W_0}) N_{-m+1} P_{V_0}. \tag{3.54}$$

    Therefore, (3.48) and (3.54) together imply that

    $$0 = N_{-m} A_2|_{K_1} - N_{-m} A_1 (A_0)^g_{\{V_0,W_0\}} A_1|_{K_1} - (A_0)^g_{\{V_0,W_0\}} A_1 N_{-m} A_1|_{K_1} + (\mathrm{id}_B - P_{W_0}) N_{-m+1} P_{V_0} A_1|_{K_1}. \tag{3.55}$$

    From the definition of $K_1$, $P_{V_0} A_1|_{K_1} = 0$. Therefore, the last term in (3.55) is zero. Moreover, in view of (3.46), we have $N_{-m}(\mathrm{id}_B - P_{V_0}) = 0$. This implies that the third term in (3.55) is zero, and (3.55) reduces to

    $$N_{-m} A_2^{\{V_0,W_0\}}\big|_{K_1} = 0. \tag{3.56}$$

    Given our direct sum condition (ii) (or equivalently (3.40)) together with Eqs (3.46), (3.47) and (3.56), we conclude that $N_{-m} = 0$. The above arguments hold for any arbitrary choice of $m$ such that $2 < m < \infty$, and we already showed that $m = 1$ is impossible. Therefore, $m$ must be 2. This proves (ii) $\Rightarrow$ (i).

    Now, we show that (i) $\Rightarrow$ (iii). We let $V_0$, $W_0$ and $V_1 (\subseteq V_0)$ be our choices of $R_0^c$, $K_0^c$ and $R_1^c$, respectively. Suppose that $S^{\{V_0,W_0,V_1\}}$ is not invertible. Due to Lemma 3.5, we know $\dim(V_1) = \dim(K_1)$, meaning that $S^{\{V_0,W_0,V_1\}}$ is not injective. Therefore, there exists a nonzero element $x \in K_1$ such that $S^{\{V_0,W_0,V_1\}} x = 0$. Collecting the coefficients of $(z - z_0)^{-2}$, $(z - z_0)^{-1}$ and $(z - z_0)^{0}$ in (3.41) and (3.42), we have

    $$\sum_{k=-m}^{-3} N_k A_{-2-k} + N_{-2} A_0 = 0, \tag{3.57}$$
    $$\sum_{k=-m}^{-3} N_k A_{-1-k} + N_{-2} A_1 + N_{-1} A_0 = 0, \tag{3.58}$$
    $$\sum_{k=-m}^{-3} N_k A_{-k} + N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B. \tag{3.59}$$

    From (3.39), $N_{-2}$ can be written as the sum of $N_{-2}(\mathrm{id}_B - P_{V_0})$, $N_{-2}(\mathrm{id}_B - P_{V_1}) P_{V_0}$ and $N_{-2} P_{V_1}$. We will obtain an explicit formula for each summand. It is deduced from (3.57) that

    $$N_{-2}(\mathrm{id}_B - P_{V_0}) = -\sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V_0,W_0\}}. \tag{3.60}$$

    Restricting both sides of (3.58) to $K_0$, we obtain

    $$N_{-2} A_1|_{K_0} = -\sum_{k=-m}^{-3} N_k A_{-1-k}\Big|_{K_0}. \tag{3.61}$$

    Since $N_{-2} = N_{-2}(\mathrm{id}_B - P_{V_0}) + N_{-2} P_{V_0}$, we obtain from (3.60) and (3.61) that

    $$N_{-2} S_{V_0} = -\sum_{k=-m}^{-3} N_k A_{-1-k}\Big|_{K_0} + \sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V_0,W_0\}} A_1\Big|_{K_0}. \tag{3.62}$$

    We may define $(S_{V_0})^g_{\{V_1,W_1\}}$ as in (3.29). Composing both sides of (3.62) with $(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}$, we obtain

    $$N_{-2}(\mathrm{id}_B - P_{V_1}) P_{V_0} = -\sum_{k=-m}^{-3} N_k A_{-1-k} (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} + \sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V_0,W_0\}} A_1 (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}. \tag{3.63}$$

    Restricting both sides of (3.59) to $K_1$, we have

    $$\sum_{k=-m}^{-3} N_k A_{-k}\Big|_{K_1} + N_{-2} A_2\big|_{K_1} + N_{-1} A_1\big|_{K_1} = \mathrm{id}_B\big|_{K_1}. \tag{3.64}$$

    From (3.58), we can also obtain

    $$N_{-1}(\mathrm{id}_B - P_{V_0}) = -\sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V_0,W_0\}} - N_{-2} A_1 (A_0)^g_{\{V_0,W_0\}}. \tag{3.65}$$

    Since $A_1 K_1 \subset R_0$, we have $N_{-1} A_1|_{K_1} = N_{-1}(\mathrm{id}_B - P_{V_0}) A_1|_{K_1}$. Substituting (3.65) into (3.64), the following can be obtained:

    $$\sum_{k=-m}^{-3} N_k A_{-k}\Big|_{K_1} - \sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V_0,W_0\}} A_1\Big|_{K_1} + N_{-2} A_2^{\{V_0,W_0\}}\big|_{K_1} = \mathrm{id}_B\big|_{K_1}.$$

    Since $N_{-2} = N_{-2}(\mathrm{id}_B - P_{V_0}) + N_{-2}(\mathrm{id}_B - P_{V_1}) P_{V_0} + N_{-2} P_{V_1}$, we have

    $$\mathrm{id}_B\big|_{K_1} = \sum_{k=-m}^{-3} N_k A_{-k}\Big|_{K_1} - \sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V_0,W_0\}} A_1\Big|_{K_1} + N_{-2}(\mathrm{id}_B - P_{V_0}) A_2^{\{V_0,W_0\}}\big|_{K_1} + N_{-2}(\mathrm{id}_B - P_{V_1}) P_{V_0} A_2^{\{V_0,W_0\}}\big|_{K_1} + N_{-2} S^{\{V_0,W_0,V_1\}}. \tag{3.66}$$

    Note that if $N_j$ is zero for every $j \leq -3$, then the first four terms on the right-hand side of (3.66) are equal to zero, as can be easily deduced from the obtained formulas for $N_{-2}(\mathrm{id}_B - P_{V_0})$ and $N_{-2}(\mathrm{id}_B - P_{V_1}) P_{V_0}$ in (3.60) and (3.63). However, we showed that there exists some nonzero $x \in K_1$ such that $S^{\{V_0,W_0,V_1\}} x = 0$; applying both sides of (3.66) to $x$ then shows that $N_j$ for some $j \leq -3$ must not be zero, i.e., $m > 2$. This shows (i) $\Rightarrow$ (iii).

    It remains to show (iv) $\Rightarrow$ (ii). Suppose that (ii) does not hold. Then, for any arbitrary choice of $R_0^c$ and $K_0^c$, we must have either

    $$R_1 \cap A_2^{\{R_0^c,K_0^c\}} K_1 \neq \{0\} \tag{3.67}$$

    or

    $$R_1 + A_2^{\{R_0^c,K_0^c\}} K_1 \neq B. \tag{3.68}$$

    If (3.67) is true, then clearly $S^{\{R_0^c,K_0^c,R_1^c\}}$ cannot be injective for any arbitrary choice of $R_1^c$ satisfying (3.25). Moreover, if (3.68) is true, then we must have $\dim(A_2^{\{R_0^c,K_0^c\}} K_1) < \dim(R_1^c)$. This implies that $S^{\{R_0^c,K_0^c,R_1^c\}}$ cannot be surjective for any arbitrary choice of $R_1^c$ satisfying (3.25). Therefore, (iv) $\Rightarrow$ (ii) is easily deduced.

    Formulas for $N_{-2}$ and $N_{-1}$: We let $V_0$, $W_0$, $V_1 (\subseteq V_0)$ and $W_1$ be our choices of $R_0^c$, $K_0^c$, $R_1^c$ and $K_1^c$, respectively. Collecting the coefficients of $(z - z_0)^{-2}$, $(z - z_0)^{-1}$ and $(z - z_0)^{0}$ from (3.41) and (3.42), we have

    $$N_{-2} A_0 = A_0 N_{-2} = 0, \qquad N_{-2} A_1 + N_{-1} A_0 = A_1 N_{-2} + A_0 N_{-1} = 0, \qquad N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B.$$

    From similar arguments and algebra to those in our demonstration of (ii) $\Rightarrow$ (i), it can easily be deduced that

    $$N_{-2} R_1 = \{0\}, \tag{3.69}$$
    $$N_{-2} A_2^{\{V_0,W_0\}}\big|_{K_1} = \mathrm{id}_B\big|_{K_1}. \tag{3.70}$$

    Eq (3.69) implies that

    $$N_{-2}(\mathrm{id}_B - P_{V_1}) = 0 \quad \text{and} \quad N_{-2} = N_{-2} P_{V_1}. \tag{3.71}$$

    Eqs (3.69) and (3.70) together imply that

    $$N_{-2}\big|_{V_1} S^{\{V_0,W_0,V_1\}} = \mathrm{id}_B\big|_{K_1}. \tag{3.72}$$

    Composing both sides of (3.72) with $(S^{\{V_0,W_0,V_1\}})^{-1} P_{V_1}$, we obtain

    $$N_{-2} P_{V_1} = (S^{\{V_0,W_0,V_1\}})^{-1} P_{V_1}. \tag{3.73}$$

    In view of (3.71), (3.73) is in fact equal to $N_{-2}$ with the codomain restricted to $K_1$. Viewing this as a map from $B$ to $B$, we obtain (3.36) for our choice of complementary subspaces.

    We next verify the claimed formula for $N_{-1}$. In view of the identity decomposition (3.39), $N_{-1}$ may be written as the sum of $N_{-1}(\mathrm{id}_B - P_{V_0})$, $N_{-1}(\mathrm{id}_B - P_{V_1}) P_{V_0}$ and $N_{-1} P_{V_1}$. We will find an explicit formula for each summand. From (3.41) when $m = 2$, we obtain the coefficients of $(z - z_0)^{-1}$, $(z - z_0)^{0}$ and $(z - z_0)^{1}$ as follows:

    $$N_{-2} A_1 + N_{-1} A_0 = 0, \tag{3.74}$$
    $$N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B, \tag{3.75}$$
    $$N_{-2} A_3 + N_{-1} A_2 + N_0 A_1 + N_1 A_0 = 0. \tag{3.76}$$

    From (3.74) and the properties of the generalized inverse, it is easily deduced that

    $$N_{-1}(\mathrm{id}_B - P_{V_0}) = -N_{-2} A_1 (A_0)^g_{\{V_0,W_0\}}. \tag{3.77}$$

    Restricting the domain of both sides of (3.75) to $K_0$, we obtain

    $$N_{-1} A_1|_{K_0} = \mathrm{id}_B|_{K_0} - N_{-2} A_2|_{K_0}. \tag{3.78}$$

    Using the identity decomposition $\mathrm{id}_B = P_{V_0} + (\mathrm{id}_B - P_{V_0})$, (3.78) can be written as

    $$N_{-1} S_{V_0} = \mathrm{id}_B|_{K_0} - N_{-2} A_2|_{K_0} - N_{-1}(\mathrm{id}_B - P_{V_0}) A_1|_{K_0}. \tag{3.79}$$

    Substituting (3.77) into (3.79), we obtain

    $$N_{-1} S_{V_0} = \left(\mathrm{id}_B - N_{-2} A_2^{\{V_0,W_0\}}\right)\Big|_{K_0}. \tag{3.80}$$

    Under our direct sum condition (ii), $S_{V_0} : K_0 \to V_0$ is not invertible but allows a generalized inverse as in (3.29). From the construction of $(S_{V_0})^g_{\{V_1,W_1\}}$, we have $S_{V_0} (S_{V_0})^g_{\{V_1,W_1\}} = (\mathrm{id}_B - P_{V_1})|_{V_0}$. Composing both sides of (3.80) with $(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}$, we obtain

    $$N_{-1}(\mathrm{id}_B - P_{V_1}) P_{V_0} = \left(\mathrm{id}_B - N_{-2} A_2^{\{V_0,W_0\}}\right)(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}. \tag{3.81}$$

    Restricting the domain of both sides of (3.76) to $K_1$, we have

    $$N_{-2} A_3|_{K_1} + N_{-1} A_2|_{K_1} + N_0 A_1|_{K_1} = 0. \tag{3.82}$$

    Composing both sides of (3.75) with $(A_0)^g_{\{V_0,W_0\}}$, it is deduced that

    $$N_0(\mathrm{id}_B - P_{V_0}) = \left(\mathrm{id}_B - N_{-2} A_2 - N_{-1} A_1\right)(A_0)^g_{\{V_0,W_0\}}. \tag{3.83}$$

    From the definition of $K_1$, we have $A_1 K_1 \subset R_0$. Therefore, it is easily deduced that

    $$N_0 A_1|_{K_1} = N_0(\mathrm{id}_B - P_{V_0}) A_1|_{K_1}. \tag{3.84}$$

    Combining (3.82), (3.83) and (3.84), we have

    $$\left(N_{-2} A_3 + N_{-1} A_2 + (\mathrm{id}_B - N_{-2} A_2 - N_{-1} A_1)(A_0)^g_{\{V_0,W_0\}} A_1\right)\Big|_{K_1} = 0. \tag{3.85}$$

    Rearranging terms, (3.85) reduces to

    $$N_{-1} A_2^{\{V_0,W_0\}}\big|_{K_1} = -N_{-2}\left(A_3 - A_2 (A_0)^g_{\{V_0,W_0\}} A_1\right)\big|_{K_1} - (A_0)^g_{\{V_0,W_0\}} A_1\big|_{K_1}. \tag{3.86}$$

    Moreover, with trivial algebra, it can be shown that (3.86) is equal to

    $$N_{-1} A_2^{\{V_0,W_0\}}\big|_{K_1} = -N_{-2}\left(A_3^{\{V_0,W_0\}} - A_2^{\{V_0,W_0\}} (A_0)^g_{\{V_0,W_0\}} A_1\right)\big|_{K_1} - (A_0)^g_{\{V_0,W_0\}} A_1\big|_{K_1}. \tag{3.87}$$

    From the identity decomposition (3.39), we have $N_{-1} = N_{-1}(\mathrm{id}_B - P_{V_0}) + N_{-1}(\mathrm{id}_B - P_{V_1}) P_{V_0} + N_{-1} P_{V_1}$, so (3.87) can be written as follows:

    $$N_{-1} S^{\{V_0,W_0,V_1\}} = -N_{-2}\left(A_3^{\{V_0,W_0\}} - A_2^{\{V_0,W_0\}} (A_0)^g_{\{V_0,W_0\}} A_1\right)\big|_{K_1} - (A_0)^g_{\{V_0,W_0\}} A_1\big|_{K_1} - N_{-1}(\mathrm{id}_B - P_{V_0}) A_2^{\{V_0,W_0\}}\big|_{K_1} - N_{-1}(\mathrm{id}_B - P_{V_1}) P_{V_0} A_2^{\{V_0,W_0\}}\big|_{K_1}. \tag{3.88}$$

    We obtained explicit formulas for $N_{-1}(\mathrm{id}_B - P_{V_0})$ and $N_{-1}(\mathrm{id}_B - P_{V_1}) P_{V_0}$ in (3.77) and (3.81). Moreover, we proved that $S^{\{V_0,W_0,V_1\}} : K_1 \to V_1$ is invertible. After some tedious algebra from (3.88), one can obtain the claimed formula (3.37) for $N_{-1}$ for our choice of complementary subspaces. Of course, the resulting $N_{-1}$ needs to be understood as a map from $B$ to $B$.

    Formulas for $(N_j, j \geq 0)$: Collecting the coefficients of $(z - z_0)^j$, $(z - z_0)^{j+1}$ and $(z - z_0)^{j+2}$ in the expansion of the identity (3.41) when $m = 2$, we have

    $$G_j(0,2) + N_j A_0 = \mathbb{1}_{\{j=0\}}, \tag{3.89}$$
    $$G_j(1,2) + N_j A_1 + N_{j+1} A_0 = 0, \tag{3.90}$$
    $$G_j(2,2) + N_j A_2 + N_{j+1} A_1 + N_{j+2} A_0 = 0. \tag{3.91}$$

    From the identity decomposition (3.39), the operator $N_j$ can be written as the sum of $N_j(\mathrm{id}_B - P_{V_0})$, $N_j(\mathrm{id}_B - P_{V_1}) P_{V_0}$ and $N_j P_{V_1}$. We will find an explicit formula for each summand. First, from (3.89), it can be easily verified that

    $$N_j(\mathrm{id}_B - P_{V_0}) = \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} - G_j(0,2)(A_0)^g_{\{V_0,W_0\}}. \tag{3.92}$$

    By restricting the domain of (3.90) to $K_0$, we obtain

    $$N_j A_1|_{K_0} = -G_j(1,2)|_{K_0}. \tag{3.93}$$

    Using the identity decomposition $\mathrm{id}_B = P_{V_0} + (\mathrm{id}_B - P_{V_0})$ and (3.92), we may rewrite (3.93) as follows:

    $$N_j S_{V_0} = -G_j(1,2)|_{K_0} - \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_1|_{K_0} + G_j(0,2)(A_0)^g_{\{V_0,W_0\}} A_1|_{K_0}. \tag{3.94}$$

    Composing both sides of (3.94) with $(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}$, an explicit formula for $N_j(\mathrm{id}_B - P_{V_1}) P_{V_0}$ can be obtained as follows:

    $$N_j(\mathrm{id}_B - P_{V_1}) P_{V_0} = -G_j(1,2)(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} - \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_1 (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} + G_j(0,2)(A_0)^g_{\{V_0,W_0\}} A_1 (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0}. \tag{3.95}$$

    Restricting the domain of (3.91) to $K_1$, we obtain

    $$G_j(2,2)|_{K_1} + N_j A_2|_{K_1} + N_{j+1} A_1|_{K_1} = 0. \tag{3.96}$$

    Composing both sides of (3.90) with $(A_0)^g_{\{V_0,W_0\}}$, it is easily deduced that

    $$N_{j+1}(\mathrm{id}_B - P_{V_0}) = -G_j(1,2)(A_0)^g_{\{V_0,W_0\}} - N_j A_1 (A_0)^g_{\{V_0,W_0\}}. \tag{3.97}$$

    Note that we have $N_{j+1} A_1|_{K_1} = N_{j+1}(\mathrm{id}_B - P_{V_0}) A_1|_{K_1}$ from the definition of $K_1$. Combining this with (3.96) and (3.97), we obtain the following equation:

    $$N_j A_2^{\{V_0,W_0\}}\big|_{K_1} = -G_j(2,2)|_{K_1} + G_j(1,2)(A_0)^g_{\{V_0,W_0\}} A_1|_{K_1}. \tag{3.98}$$

    We know $N_j = N_j(\mathrm{id}_B - P_{V_0}) + N_j(\mathrm{id}_B - P_{V_1}) P_{V_0} + N_j P_{V_1}$ and already obtained explicit formulas for the first two terms. Substituting the obtained formulas into (3.98), we obtain

    $$N_j P_{V_1} A_2^{\{V_0,W_0\}}\big|_{K_1} = -G_j(2,2)|_{K_1} + G_j(1,2)(A_0)^g_{\{V_0,W_0\}} A_1|_{K_1} - \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_2^{\{V_0,W_0\}}\big|_{K_1} + G_j(0,2)(A_0)^g_{\{V_0,W_0\}} A_2^{\{V_0,W_0\}}\big|_{K_1} + G_j(1,2)(S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} A_2^{\{V_0,W_0\}}\big|_{K_1} + \mathbb{1}_{\{j=0\}} (A_0)^g_{\{V_0,W_0\}} A_1 (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} A_2^{\{V_0,W_0\}}\big|_{K_1} - G_j(0,2)(A_0)^g_{\{V_0,W_0\}} A_1 (S_{V_0})^g_{\{V_1,W_1\}} P_{V_0} A_2^{\{V_0,W_0\}}\big|_{K_1}. \tag{3.99}$$

    Composing both sides of (3.99) with $(S^{\{V_0,W_0,V_1\}})^{-1} P_{V_1}$, we obtain the formula for $N_j P_{V_1}$. Combining this formula with (3.92) and (3.95), one can verify the claimed formula (3.38) for our choice of complementary subspaces after some algebra.

    Even though our recursive formula is obtained under a given choice of complementary subspaces $V_0$, $W_0$, $V_1$ and $W_1$, we know, due to the uniqueness of the Laurent series, that it does not depend on our choice of complementary subspaces.
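    Analogously to the simple-pole case, (3.36) can be sanity-checked numerically on a toy second order pole (again a hypothetical pencil of our own, not from the paper):

```python
import numpy as np

# Hypothetical pencil A(z) = A0 + A2*z^2 at z0 = 0; A1 = 0 forces K1 = K0.
A0 = np.array([[1., 0.],
               [0., 0.]])
A2 = np.array([[0., 0.],
               [0., 1.]])

# R1 = R0 = span{e1} and A2^{...} = A2 (since A1 = 0); choose R1^c = span{e2},
# so S = P_{R1^c} A2|_{K1} = 1 is invertible and formula (3.36) gives:
N_minus2 = np.array([[0., 0.],
                     [0., 1.]])    # (S)^{-1} P_{R1^c} as a map on B = R^2

z = 1e-4
val = z**2 * np.linalg.inv(A0 + A2 * z**2)   # -> N_{-2} as z -> 0
assert np.allclose(val, N_minus2, atol=1e-6)
```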

    Let us narrow down our discussion to $H$, a complex separable Hilbert space. In $H$, there is a canonical notion of a complementary subspace, called the orthogonal complement, while we do not have such a notion in $B$. We therefore may let the orthogonal complement $(\operatorname{ran} A_0)^\perp$ (resp. $(\ker A_0)^\perp$) be our choice of $R_0^c$ (resp. $K_0^c$). In this case, $P_{(\operatorname{ran} A_0)^\perp}$ and $P_{(\ker A_0)^\perp}$ are orthogonal projections. Then, the generalized inverse $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ has the following properties:

    $$A_0 (A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} = \mathrm{id}_H - P_{(\operatorname{ran} A_0)^\perp}, \qquad (A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} A_0 = P_{(\ker A_0)^\perp}.$$

    That is, both $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} A_0$ and $A_0 (A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ are self-adjoint operators, meaning that $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ is the Moore-Penrose inverse operator of $A_0$ (Section 1 of [16]). Moreover, we may let $(\operatorname{ran} A_0)^\perp \ominus S_{(\operatorname{ran} A_0)^\perp} K_0$ be our choice of $R_1^c$. This choice trivially satisfies (3.25), and it allows the orthogonal decomposition of $H$ as follows:

    $$H = R_0 \oplus S_{(\operatorname{ran} A_0)^\perp} K_0 \oplus R_1^c.$$

    Letting $K_0 \ominus K_1$ be our choice of $K_1^c$, we can also make the generalized inverse of $S_{(\operatorname{ran} A_0)^\perp}$ become the Moore-Penrose inverse of $S_{(\operatorname{ran} A_0)^\perp}$. This specific choice of complementary subspaces appears to be standard in $H$. Under this choice, [9] stated and proved theorems similar to our Propositions 3.1 and 3.2, without providing a recursive formula for $N_j$. The reader is referred to Theorems 3.1 and 3.2 of their paper for more details. On the other hand, we explicitly take all other possible choices of complementary subspaces into account and provide a recursive formula to obtain a closed-form expression of the Laurent series. Therefore, even if we restrict our concern to a Hilbert space setting, our propositions can be viewed as extended versions of those in [9].
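    In finite dimensions, this canonical Hilbert-space choice is exactly what the ordinary pseudoinverse computes; a brief sketch of ours (using `numpy.linalg.pinv`, which is the Moore-Penrose inverse):

```python
import numpy as np

A0 = np.array([[1., 2.],
               [2., 4.]])          # a rank-1 operator on R^2

Ag = np.linalg.pinv(A0)            # Moore-Penrose inverse

P_ran = A0 @ Ag                    # orthogonal projection onto ran A0
P_kerc = Ag @ A0                   # orthogonal projection onto (ker A0)^perp
assert np.allclose(P_ran, P_ran.T) and np.allclose(P_kerc, P_kerc.T)  # self-adjoint
assert np.allclose(A0 @ Ag @ A0, A0)
```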

    In this section, we derive a suitable extension of the Granger-Johansen representation theorem as an application of the results established in Section 3. Even though a few versions of this theorem have been developed in a possibly infinite dimensional Hilbert/Banach space (see, e.g., [9,10,12,13]), ours seems to be the first that provides a full characterization of I(1) and I(2) solutions (except for a term depending on initial values) of a possibly infinite order autoregressive law of motion in a Banach space.

    Let $A : \mathbb{C} \to \mathcal{L}_B$ be a holomorphic operator pencil; then it allows the following Taylor series:

    $$A(z) = \sum_{j=0}^{\infty} A_{j,(0)} z^j,$$

    where $A_{j,(0)}$ denotes the coefficient of $z^j$ in the Taylor series of $A(z)$ around 0. Note that we use the additional subscript $(0)$ to distinguish it from $A_j$ denoting the coefficient of $(z-1)^j$ in the Taylor series of $A(z)$ around 1. As in the previous sections, we let $N(z)$ denote $A(z)^{-1}$ if it exists.

    Let $D_r \subset \mathbb{C}$ denote the open disk centered at the origin with radius $r > 0$, and let $\overline{D}_r$ be its closure. Throughout this section, we employ the following assumption:

    Assumption 4.1.

    (i) $A : \mathbb{C} \to \mathcal{L}_B$ is a holomorphic Fredholm pencil.

    (ii) $A(z)$ is invertible on $\overline{D}_1 \setminus \{1\}$.

    A similar assumption is employed to derive the Granger-Johansen representation in a Hilbert space setting (see, e.g., [9] and [10]). Under the above assumption, $A(1)$ is allowed to be noninvertible, and in this case the local behavior of $A(z)^{-1}$ near $z = 1$ turns out to be crucial to characterize the behavior of $X_t$.

    Now, we provide one of the main results of this section. To simplify expressions in the following propositions, we keep using the notation introduced in Section 3. Moreover, we introduce $\pi_j(k)$ for $j \geq 0$, which is given by

    $$\pi_0(k) = 1, \qquad \pi_1(k) = k, \qquad \pi_j(k) = k(k-1)\cdots(k-j+1)/j!, \quad j \geq 2,$$

    so that $\pi_j(k) = \binom{k}{j}$ for $k \geq j$.

    Proposition 4.1. Suppose that $A(z)$ satisfies Assumption 4.1, and we have a sequence $(X_t, t \geq -p+1)$ satisfying

    $$\sum_{j=0}^{\infty} A_{j,(0)} X_{t-j} = \varepsilon_t, \tag{4.1}$$

    where $\varepsilon = (\varepsilon_t, t \in \mathbb{Z})$ is a strong white noise. Then, the following conditions are equivalent to each other.

    (i) $A(z)^{-1}$ has a simple pole at $z = 1$.

    (ii) $B = R_0 \oplus A_1 K_0$.

    (iii) For any choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

    (iv) For some choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

    Under any of these equivalent conditions, $X_t$ allows the representation, for some $\tau_0$ depending on initial values,

    $$X_t = \tau_0 - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t, \quad t \geq 1. \tag{4.2}$$

    Moreover, $\nu_t \in L^2_B$ and satisfies

    $$\nu_t = \sum_{j=0}^{\infty} \Phi_j \varepsilon_{t-j}, \qquad \Phi_j = \sum_{k=j}^{\infty} (-1)^{k-j} \pi_j(k) N_k, \tag{4.3}$$

    where $(N_j, j \geq -1)$ can be explicitly obtained from Proposition 3.1.

    Proof. Under Assumption 4.1, there exists $\eta > 0$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$. To see this, note that the analytic Fredholm theorem implies that $\sigma(A)$ is a discrete set. Since $\sigma(A)$ is closed (page 56 of [19]), it is deduced that $\sigma(A) \cap \overline{D}_{1+r}$ is a closed discrete subset of $\overline{D}_{1+r}$ for some $0 < r < \infty$. The fact that $\overline{D}_{1+r}$ is a compact subset of $\mathbb{C}$ implies that there are only finitely many elements in $\sigma(A) \cap \overline{D}_{1+r}$. Furthermore, since 1 is an isolated element of $\sigma(A)$, it can be easily deduced that there exists $\eta \in (0, r)$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$. Since $1 \in \sigma(A)$ is an isolated element, the equivalence of conditions (i)–(iv) is implied by Proposition 3.1.

    Under any of the equivalent conditions, it is deduced from Proposition 3.1 that $N(z) = N_{-1}(z-1)^{-1} + N^H(z)$, where $N^H(z)$ denotes the holomorphic part of the Laurent series. Moreover, we can explicitly obtain the coefficients $(N_j, j \geq -1)$ using the recursive formula provided in Proposition 3.1. It is clear that $(1-z) N(z)$ can be holomorphically extended over 1, and we can rewrite it as

    $$(1-z) N(z) = -N_{-1} + (1-z) N^H(z). \tag{4.4}$$

    Applying the linear filter induced by (4.4) to both sides of (4.1), we obtain

    $$\Delta X_t := X_t - X_{t-1} = -N_{-1} \varepsilon_t + (\nu_t - \nu_{t-1}),$$

    where $\nu_s = \sum_{j=0}^{\infty} N^H_{j,(0)} \varepsilon_{s-j}$, and $N^H_{j,(0)}$ denotes the coefficient of $z^j$ in the Taylor series of $N^H(z)$ around 0. Clearly, the process

    $$X_t = -N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t$$

    is a solution, and the complete solution is obtained by adding the solution to $\Delta X_t = 0$, which is given by $\tau_0$. We then show that $\nu_s$ is convergent in $L^2_B$. Note that

    $$\left( E \Big\| \sum_{j=0}^{\infty} N^H_{j,(0)} \varepsilon_{s-j} \Big\|^2 \right)^{1/2} \leq \sum_{j=0}^{\infty} \|N^H_{j,(0)}\|_{\mathcal{L}_B} \left( E \|\varepsilon_{s-j}\|^2 \right)^{1/2} \leq C \sum_{j=0}^{\infty} \|N^H_{j,(0)}\|_{\mathcal{L}_B}, \tag{4.5}$$

    where $C$ is some positive constant. The fact that $N^H(z)$ is holomorphic on $D_{1+\eta}$ implies that $\|N^H_{j,(0)}\|_{\mathcal{L}_B}$ decreases exponentially as $j$ goes to infinity. This shows that the right-hand side of (4.5) converges to a finite quantity, so $\nu_s$ converges in $L^2_B$.

    It is easy to verify (4.3) from elementary calculus.
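    To make the elementary calculus explicit (a short derivation added here for completeness): writing $N^H(z) = \sum_{k=0}^{\infty} N_k (z-1)^k$ and expanding $(z-1)^k = \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j} z^j$, we may collect the coefficient of $z^j$ to obtain

    $$N^H(z) = \sum_{j=0}^{\infty} \left( \sum_{k=j}^{\infty} (-1)^{k-j} \pi_j(k) N_k \right) z^j,$$

    so that $\Phi_j = N^H_{j,(0)}$ is exactly the coefficient appearing in (4.3).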

    Remark 4.1. Given that $\varepsilon$ is a strong white noise, the sequence $(\nu_t, t \in \mathbb{Z})$ in our representation (4.2) is stationary. Therefore, (4.2) shows that $X_t$ can be decomposed into three different components: a random walk, a stationary process and a term that depends on initial values.

    Proposition 4.2. Suppose that $A(z)$ satisfies Assumption 4.1, and we have a sequence $(X_t, t \geq -p+1)$ satisfying (4.1). Then, the following conditions are equivalent to each other.

    (i) $A(z)^{-1}$ has a second order pole at $z = 1$.

    (ii) For some choice of $R_0^c$ and $K_0^c$, we have

    $$B = R_1 \oplus A_2^{\{R_0^c,K_0^c\}} K_1.$$

    (iii) For any choice of $R_0^c$, $K_0^c$ and $R_1^c$ satisfying (3.25), $S^{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

    (iv) For some choice of $R_0^c$, $K_0^c$ and $R_1^c$ satisfying (3.25), $S^{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

    Under any of these equivalent conditions, $X_t$ allows the representation, for some $\tau_0$ and $\tau_1$ depending on initial values,

    $$X_t = \tau_0 + \tau_1 t + N_{-2} \sum_{\tau=1}^{t} \sum_{s=1}^{\tau} \varepsilon_s - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t, \quad t \geq 1. \tag{4.6}$$

    Moreover, $\nu_t \in L^2_B$ and satisfies

    $$\nu_t = \sum_{j=0}^{\infty} \Phi_j \varepsilon_{t-j}, \qquad \Phi_j = \sum_{k=j}^{\infty} (-1)^{k-j} \pi_j(k) N_k, \tag{4.7}$$

    where $(N_j, j \geq -2)$ can be explicitly obtained from Proposition 3.2.

    Proof. As shown in the proof of Proposition 4.1, there exists $\eta > 0$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$. Due to Proposition 3.2, we know $N(z) = N_{-2}(z-1)^{-2} + N_{-1}(z-1)^{-1} + N^H(z)$, where $N^H(z)$ is the holomorphic part of the Laurent series.

    $(1-z)^2 A(z)^{-1}$ can be holomorphically extended over 1 so that it is holomorphic on $D_{1+\eta}$. Then, we have

    $$(1-z)^2 N(z) = N_{-2} - N_{-1}(1-z) + (1-z)^2 N^H(z).$$

    Applying the linear filter induced by $(1-z)^2 A(z)^{-1}$ to both sides of (4.1), we obtain

    $$\Delta^2 X_t = N_{-2} \varepsilon_t - N_{-1} \Delta \varepsilon_t + (\Delta \nu_t - \Delta \nu_{t-1}),$$

    where $\nu_t := \sum_{j=0}^{\infty} N^H_{j,(0)} \varepsilon_{t-j}$. From (4.5), we know that $\nu_t$ converges in $L^2_B$. Clearly, the process

    $$X_t = N_{-2} \sum_{\tau=1}^{t} \sum_{s=1}^{\tau} \varepsilon_s - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t$$

    is a solution. Since the solution to $\Delta^2 X_t = 0$ is given by $\tau_0 + \tau_1 t$, we obtain (4.6). It is also easy to verify (4.7) from elementary calculus.

    Remark 4.2. The sequence $(\nu_t, t \in \mathbb{Z})$ in our representation (4.6) is stationary given that $\varepsilon$ is a strong white noise. Then, the representation (4.6) shows that $X_t$ can be decomposed into a cumulative random walk, a random walk, a stationary process and a term that depends on initial values.

    From the analytic Fredholm theorem, we know that the random walk component in our I(1) or I(2) representation takes values in a finite dimensional space, which is similar to the existing results by [9,10]. For statistical inference on function-valued time series containing a random walk component, the component is often assumed to be finite dimensional, and the representation results presented by [9] and [10] are used to justify this assumption (see, e.g., [22,23]).

    Propositions 4.1 and 4.2 require the autoregressive law of motion to be characterized by a holomorphic operator pencil satisfying Assumption 4.1. We expect that a wide class of autoregressive processes considered in practice satisfies this requirement. For example, for $p \in \mathbb{N}$, let $\Phi_1, \ldots, \Phi_p$ be compact operators. Then, the autoregressive law of motion given by

    $$X_t = \sum_{j=1}^{p} \Phi_j X_{t-j} + \varepsilon_t$$

    satisfies the requirement (see Theorems 3.1 and 4.1 of [9]).
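    For a numerical illustration of this class (hypothetical, randomly generated operators of our own; not a construction from [9]), one can build the pencil $A(z) = \mathrm{id} - \sum_{j=1}^{p} \Phi_j z^j$ in finite dimensions and screen Assumption 4.1(ii) on a grid:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
# Small random matrices play the role of the compact operators Phi_j;
# the 0.05 scaling keeps A(z) invertible on the closed unit disk.
Phi = [0.05 * rng.standard_normal((n, n)) for _ in range(p)]

def A(z):
    """Pencil A(z) = id - sum_j Phi_j z^j characterizing the AR(p) law of motion."""
    return np.eye(n) - sum(Phi[j] * z ** (j + 1) for j in range(p))

# Screen Assumption 4.1(ii): invertibility on the closed unit disk away from z = 1.
for r in np.linspace(0.0, 1.0, 21):
    for theta in np.linspace(0.0, 2 * np.pi, 64, endpoint=False):
        z = r * np.exp(1j * theta)
        if abs(z - 1.0) < 1e-2:
            continue  # a unit root at z = 1 would be allowed
        smallest_sv = np.linalg.svd(A(z), compute_uv=False)[-1]
        assert smallest_sv > 1e-8, f"A(z) nearly singular at z = {z}"
```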

    Even though we have assumed that $\varepsilon$ is a strong white noise for simplicity, we may allow more general innovations in Propositions 4.1 and 4.2. For example, we could allow the distribution of $\varepsilon_t$ to depend on $t$. Even in this case, if $(E\|\varepsilon_t\|^2)^{1/2}$ is bounded by $a + |t|^b$ for some $a, b \in \mathbb{R}$, the right-hand side of (4.5) is still bounded by a finite quantity, meaning that $\nu_t$ converges in $L^2_B$. Moreover, we have only considered a purely stochastic process in Propositions 4.1 and 4.2. However, the inclusion of a deterministic component does not cause significant difficulties. Suppose that we have the following autoregressive law of motion:

    $$\sum_{j=0}^{\infty} A_{j,(0)} X_{t-j} = \gamma_t + \varepsilon_t, \quad t \geq 1,$$

    where $(\gamma_t, t \in \mathbb{Z})$ is a deterministic sequence. In this case, we need an additional condition on $\gamma_t$ for $\sum_{j=0}^{\infty} N^H_{j,(0)} (\gamma_{t-j} + \varepsilon_{t-j})$ to be convergent. We can, for example, assume that $\|\gamma_t\| \leq a + |t|^b$ for some $a, b \in \mathbb{R}$.

    This paper considered inversion of a holomorphic Fredholm pencil based on the analytic Fredholm theorem. We obtained necessary and sufficient conditions for the inverse of a Fredholm operator pencil to have a simple pole or a second order pole and further derived a closed-form expression of the Laurent expansion of the inverse around an isolated singularity. Using these results, we obtained a suitable version of the Granger-Johansen representation theorem in a general Banach space setting, which fully characterizes I(1) (and I(2)) solutions except for a term depending on initial values.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This paper is based on a chapter of the author's Ph.D. dissertation, titled Representation Theory for Cointegrated Functional Time Series, at the University of California, San Diego. The author expresses deep appreciation to four anonymous referees for their invaluable and insightful suggestions.

    The author declares no conflicts of interest in this paper.



    [1] R. F. Engle, C. W. J. Granger, Co-integration and error correction: representation, estimation, and testing, Econometrica: J. Econom. Soc., 55 (1987), 251–276. https://doi.org/10.2307/1913236 doi: 10.2307/1913236
    [2] S. Johansen, Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models, Econometrica: J. Econom. Soc., 59 (1991), 1551–1580. https://doi.org/10.2307/2938278 doi: 10.2307/2938278
    [3] S. Johansen, Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, Oxford University Press, Oxford, 1995. https://doi.org/10.1093/0198774508.001.0001
    [4] J. M. Schumacher, System-theoretic trends in econometrics, in Mathematical System Theory (eds. A. C. Antoulas), Springer Berlin, (1991), 559–577. https://doi.org/10.1007/978-3-662-08546-2
    [5] M. Faliva, M. G. Zoia, Dynamic Model Analysis, Springer Berlin, Heidelberg, 2010. https://doi.org/10.1007/978-3-540-85996-3
    [6] M. Franchi, P. Paruolo, Inverting a matrix function around a singularity via local rank factorization, SIAM J. Matrix Anal. Appl., 37 (2016), 774–797. https://doi.org/10.1137/140999839 doi: 10.1137/140999839
    [7] M. Franchi, P. Paruolo, A general inversion theorem for cointegration, Econom. Rev., 38 (2019), 1176–1201. https://doi.org/10.1080/07474938.2018.1536100 doi: 10.1080/07474938.2018.1536100
    [8] B. K. Beare, J. Seo, W. K. Seo, Cointegrated linear processes in Hilbert space, J. Time Ser. Anal., 38 (2017), 1010–1027. https://doi.org/10.1111/jtsa.12251 doi: 10.1111/jtsa.12251
    [9] B. K. Beare, W. K. Seo, Representation of I(1) and I(2) autoregressive Hilbertian processes, Econom. Theory, 36 (2020), 773–802. https://doi.org/10.1017/s0266466619000276 doi: 10.1017/s0266466619000276
    [10] M. Franchi, P. Paruolo, Cointegration in functional autoregressive processes, Econom. Theory, 36 (2020), 803–839. https://doi.org/10.1017/s0266466619000306 doi: 10.1017/s0266466619000306
    [11] W. K. Seo, Cointegrated density-valued linear processes, arXiv preprint, (2017), arXiv: 1710.07792v1. https://doi.org/10.48550/arXiv.1710.07792
    [12] W. K. Seo, Cointegration and representation of cointegrated autoregressive processes in Banach spaces, Econom. Theory, (2022), in press. https://doi.org/10.1017/s0266466622000172
    [13] A. R. Albrecht, A. Konstantin, B. K. Beare, J. Boland, M. Franchi, P. G. Howlett, The resolution and representation of time series in Banach space, arXiv preprint, (2021), arXiv: 2105.14393. https://doi.org/10.48550/arXiv.2105.14393
    [14] R. E. Megginson, Introduction to Banach Space Theory, Springer, New York, USA, 2012. https://doi.org/10.1007/978-1-4612-0603-3
    [15] Y. A. Abramovich, C. D. Aliprantis, An Invitation to Operator Theory, American Mathematical Society, Providence, 2002. https://doi.org/10.1090/gsm/050
    [16] H. W. Engl, M. Z. Nashed, Generalized inverses of random linear operators in Banach spaces, J. Math. Anal. Appl., 83 (1981), 582–610. https://doi.org/10.1016/0022-247x(81)90143-8 doi: 10.1016/0022-247x(81)90143-8
    [17] E. Boasso, On the Moore–Penrose inverse, EP Banach space operators, and EP Banach algebra elements, J. Math. Anal. Appl., 339, (2008), 1003–1014. https://doi.org/10.1016/j.jmaa.2007.07.059 doi: 10.1016/j.jmaa.2007.07.059
    [18] I. Gohberg, S. Goldberg, M. Kaashoek, Classes of Linear Operators, Birkhäuser, Basel, 2013. https://doi.org/10.1007/978-3-0348-7509-7
    [19] A. S. Markus, Introduction to the Spectral Theory of Polynomial Operator Pencils (Translations of Mathematical Monographs), American Mathematical Society, Providence, 2012. https://doi.org/10.1090/mmono/071
    [20] D. Bosq, Linear Processes in Function Spaces, Springer-Verlag, New York, USA, 2000. https://doi.org/10.1007/978-1-4612-1154-9
    [21] W. Kaballo, Meromorphic generalized inverses of operator functions, Indagationes Math., 23 (2012), 970–994. https://doi.org/10.1016/j.indag.2012.05.001 doi: 10.1016/j.indag.2012.05.001
    [22] M. Ø. Nielsen, W. K. Seo, D. Seong, Inference on the dimension of the nonstationary subspace in functional time series, Econom. Theory, 39 (2023), 443–480. https://doi.org/10.1017/s0266466622000111 doi: 10.1017/s0266466622000111
    [23] W. K. Seo, Functional principal component analysis for cointegrated functional time series, J. Time Ser. Anal., (2023), in press. https://doi.org/10.1111/jtsa.12707 doi: 10.1111/jtsa.12707
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).