
Ultimate linear block and convolutional codes

  • Linear block and convolutional codes are designed using unit schemes and families of these to required length, rate, distance and type are mined. Properties, such as type and distance, of the codes follow from the types of units used and thus required codes are built from specific units. Orthogonal units, units in group rings, Fourier/Vandermonde units and related units are used to construct and analyse linear block and convolutional codes and to construct these to predefined length, rate, distance and type. Series of self-dual, dual containing, quantum error-correcting and linear complementary dual codes are constructed for both linear block and convolutional codes. Low density parity check linear block and linear convolutional codes are constructed from unit schemes.

    Citation: Ted Hurley. Ultimate linear block and convolutional codes[J]. AIMS Mathematics, 2025, 10(4): 8398-8421. doi: 10.3934/math.2025387




    Unit-derived methods for coding theory were initiated in [21,22] and further developed in [13,14,25]; an introduction in chapter form is available in [23,24]. All linear block codes over fields are equivalent to unit-derived codes (see, for example Proposition 2.1 below) and using units of different types enables the construction and analysis of linear block codes to required length, rate, distance and type.

    The method extends to the design of convolutional codes and to build these to type, length and rate as required. The lack of any algebraic methods for constructing convolutional codes has long been a problem – for example McEliece [33] remarks: "A most striking fact is the lack of algebraic constructions of families of convolutional codes", Blahut ([3], p.312), writes: "No general method is yet known for constructing a family of high performance multiple-error-correcting (convolutional) codes... The codes now in common use have been found by computer search". The computer search is not practical except for small length.

    Infinite families of codes with rates approaching a given rational R, (0 < R < 1), and relative distances approaching (1 - R) can be designed by the methods. Some codes derived have applications in solving underdetermined systems of equations (see [19]).

    The (free) distance of a convolutional code designed from a unit is much better than the distance of the linear block code of the same rate designed from that unit. An illuminating example of this is given in Example 3.9 below, in which a unit-derived form for the Hamming [7,4,3] binary code is obtained and from this a memory 1 convolutional (binary) code of the same length and rate but with twice the distance is derived.* The process is general and a unit-derived form for a linear block code can be used to derive convolutional codes of the same length and rate as the block code but with much better distance.

    *The convolutional code obtained has parameters (7,4,4;1,6); see Section 1.1 for explanation of convolutional code parameters.

    New families of linear block and convolutional codes are now available and such codes of required length, rate and type are mined by the unit-derived method. Codes are formed from within a unit structure and properties of the codes flow from the properties of the substructures of the unit used. In mathematics we often think of breaking a structure into more manageable parts or building a structure from already known structures, but here we look at the bigger (unit) structure already formed, from which embedded substructures are mined to create codes, including codes to specific requirements.

    Using special types of units such as orthogonal units, Vandermonde/Fourier units or units in group rings allows the construction of codes such as self-dual codes, dual containing codes, complementary dual codes and quantum codes, as well as these to specified lengths, rates, and distances. Codes over particular required fields, such as fields of characteristic 2 or prime fields (for which modular arithmetic is available), and families of such are constructible by the methods. Special linear block and convolutional codes may also be induced from Hadamard matrices; Example 4.4 is given, but the general Hadamard case is dealt with separately in [16]. These can have large rates, as opposed to Walsh-Hadamard codes, and note that convolutional codes related to Hadamard matrices are now constructed.

    Using units in group rings allows the construction of low density parity check (LDPC) linear block and LDPC convolutional codes and families of such. These are constructed with no short cycles in the control matrix, a huge advantage in decoding LDPC codes.

    Dual-containing (DC) codes have their own intrinsic interest but are also used for designing quantum error-correcting codes by the Calderbank-Shor-Steane (CSS) method, [5,6,40]. Here then families of convolutional quantum error-correcting codes of different lengths and rates are explicitly constructed. Linear complementary dual (LCD) codes have been studied extensively in the literature. For background, history and general theory on LCD codes, consult the articles [7,8,9,34] by Carlet, Mesnager, Tang, Qi and Pellikaan. LCD codes were originally introduced by Massey in [30,31]. These codes have been studied, amongst other things, for improving the security of information on sensitive devices against side-channel attacks (SCA) and fault non-invasive attacks (see [10]) and have found use in data storage and communications systems.

    The relationships between DC linear block codes and LCD convolutional codes and between LCD linear block codes and convolutional DC codes when formed from the same unit scheme are quite remarkable.

    Hermitian codes over fields of the form GF(q^2) may be formed by looking at unitary matrices over these fields; this was initiated in [14], and further development is left to later work.

    Requiring one of {U,V} in UV=I to be of low density enables the construction of LDPC linear block and convolutional LDPC codes by the unit-derived method. These are constructed so that there are no short cycles in the control matrix using units in group rings. The linear block case for such LDPC codes has been dealt with in [26] and the convolutional case follows from the unit-derived method. Iterative decoders for low density parity check codes are impacted by short cycles. Here for a given unit scheme, described in a precise way, multiple such codes, all with no short cycles, are constructed and with prescribed rate and dimension. These LDPC codes have many applications and can further be stored using an algebraic 'short' formula which, for example, is important in applications requiring low storage and low power.

    Some of this work is additional to and complementary to that in [15,20]. As pointed out in [15], convolutional codes which appeared previously in the literature are very special cases of constructions using unit-derived methods and now whole families and required types are available.

    The unit-derived method allows the construction of multiple linear block codes and multiple convolutional codes from the same unit. Methods are introduced for constructing families of self-dual codes from units by adding, if necessary, a square root of (-1) to the field over which the unit is defined.

    A number of examples are given here which of necessity are of relatively small length; these can be looked upon as prototype examples for large length constructions which are attainable. The codes are easily implementable once the units from which they are derived are formed. The brilliant computer algebra system GAP [42] proves extremely useful in constructing examples and verifying properties.

    Coding theory background is contained in Subsection 1.1 of this introduction. The contents of other sections are given in Subsection 1.4.

    Basics on linear block coding theory may be found in any of [3,27,29,33] and many others. The notation [n,r,d] is used here for a linear block code of length n, dimension r, and (minimum) distance d. The rate is then r/n. A maximum distance separable (MDS) linear block code is one of the form [n, r, n-r+1], where the maximum distance possible for the length and rate is achieved.

    Different equivalent definitions for convolutional codes are given in the literature. The notation and definitions used here follow that given in [38,39,41]. A rate k/n convolutional code with parameters (n, k, δ) over a field F is a submodule of F[z]^n generated by a reduced basic matrix G[z] = (g_ij) ∈ F[z]^{k×n} of rank k, where n is the length and δ = δ_1 + ... + δ_k is the degree, with δ_i = max_j deg g_ij the i-th row degree. Also μ = max_i δ_i is known as the memory of the code and then the code may be given with parameters (n, k, δ; μ). The parameters (n, k, δ; μ, d_f) are used for such a code with free (minimum) distance d_f.

    Suppose C is a convolutional code in F[z]^n of rank k. A generating matrix G[z] ∈ F[z]^{k×n} of C having rank k is called a generator or encoder matrix of C. A matrix H ∈ F[z]^{n×(n-k)} satisfying C = ker H = {v ∈ F[z]^n : vH = 0} is said to be a control matrix or check matrix of the code C.

    Convolutional codes can be catastrophic or non-catastrophic; see, for example [33], for the basic definitions. A catastrophic convolutional code is prone to catastrophic error propagation and is not much use. A convolutional code described by a generator matrix with right polynomial inverse is a non-catastrophic code; this is sufficient for our purposes. The designs given here for the generator matrices allow for specifying directly the control matrices and the right polynomial inverses where appropriate. There exist very few algebraic constructions for designing convolutional codes and search methods limit their size and availability, see McEliece [33] for discussion and also [1,2,11,37].

    By Rosenthal and Smarandache [38] the maximum free distance attainable by an (n, r, δ) convolutional code is (n-r)(⌊δ/r⌋ + 1) + δ + 1. The case δ = 0, which is the case of zero memory, corresponds to the linear Singleton bound (n-r+1). The bound (n-r)(⌊δ/r⌋ + 1) + δ + 1 is then called the generalised Singleton bound [38] (GSB), and a convolutional code attaining this bound is known as an MDS convolutional code. The papers [38] and [41] are major beautiful contributions to this area.

    The criteria for a convolutional code to be an MDS code are given in terms of the parameters for a convolutional code and the criteria for a linear block code to be an MDS code are given in terms of the parameters for a linear block code.

    Let G(z) be the generator matrix for a convolutional code C with memory m. Suppose G(z)H^T(z) = 0, so that H^T(z) is a control matrix, and then H(z^{-1})z^m generates the convolutional dual code of C, see [4] and [12]. This is also known as the module-theoretic dual code. The code is then dual-containing provided the code generated by H(z^{-1})z^m is contained in the code generated by G(z). The code is self-dual provided the code generated by H(z^{-1})z^m is equal to the code generated by G(z).

    In convolutional coding theory, the idea of dual code has two meanings. The other dual convolutional code defined is called the sequence space dual; the generator matrices for these two types are related by a specific formula.

    Let G(z) be the generator matrix for a convolutional (n,r) code C. Code words will consist of P(z)G(z) where P(z) is a polynomial in z with coefficients which are 1×r vectors. The polynomial P(z) is said to be an information vector for the code C. The support of P(z) is the number of non-zero coefficient vectors appearing in its expression as a polynomial.

    The dual of a code C is denoted by C^⊥. Note the definition of the dual code of a convolutional code is as given in Subsection 1.1 above. A self-dual code is a code equal to its dual. A code C is said to be dual containing, written DC, if it contains its dual C^⊥. Say a code is a linear complementary dual, written LCD, code provided it has trivial intersection with its dual.

    A self-dual code is a particular type of dual-containing code. Thus

    C is a dual containing (DC) code if and only if C ∩ C^⊥ = C^⊥;

    C is a linear complementary dual (LCD) code if and only if C ∩ C^⊥ = 0;

    C is a self-dual code provided C = C^⊥.

    Constructions of DC, including self-dual, convolutional codes and LCD convolutional codes were initiated in [14]. DC codes are theoretically interesting in themselves but in addition a DC code enables the construction of a quantum error-correcting code by the CSS method.

    LCD codes and DC codes are 'supplemental' to one another: C is DC if and only if C ∩ C^⊥ = C^⊥ and C is LCD if and only if C ∩ C^⊥ = 0. As noted in [14], MDS DC linear block codes lead to the construction of MDS LCD convolutional codes and MDS LCD linear block codes lead to the construction of MDS DC convolutional codes.

    Main abbreviations used:

    DC: dual-containing;

    MDS: maximum distance separable;

    To be MDS has different parameter requirements for block codes, convolutional codes and quantum codes.

    LCD: linear complementary dual;

    QECC: quantum error-correcting code;

    LDPC: low density parity check.

    Many decoding schemes exist for linear block codes such as syndrome decoding [3]. Efficient decoding techniques for unit-derived linear block codes from Fourier/Vandermonde type matrices are established in [25], Algorithms 6.1–6.3. The constructions quickly lead to the establishment of error-correcting pairs for the codes; error-correcting pairs are due to Pellikaan [36]. The algorithms are especially useful for solving underdetermined systems using error-correcting codes, see [19]. Several efficient algorithms exist for decoding convolutional codes, the most common ones being the Viterbi algorithm and the sequential decoding algorithm. Algebraic decoding schemes for convolutional codes are being developed.

    ● Section 2: The unit-derived method in general is described.

    ● Section 2.1: Orthogonal unit schemes are used to design codes and families of codes, including general methods for designing LCD and self-dual codes.

    ● Section 2.2: Fourier type units are used to construct families of MDS, LCD, DC and QECC codes.

    ● Section 3: Methods for designing families of convolutional codes are devised. In particular families of DC, self-dual, and QECC convolutional codes are designed.

    ● Section 4: Series of higher memory convolutional codes are constructed.

    ● Section 5: General algorithms using units are described for constructing LDPC linear block and convolutional codes and to design these with no short cycles in the control matrix.

    The designs are general in nature and series with large lengths and distances are achievable. Each technical section contains explicit examples and prototype examples of the general constructions.

    The unit-derived method for constructing and analysing linear block codes was initiated in [21,22,23,24] and continued in [13,25] and elsewhere. Any linear block code over a field (see, for example, Proposition 2.1) can be constructed by the unit-derived method although this may not have been the original line of thought in the construction of the code. The unit-derived method gives further information on the code in addition to describing the generator and control matrices. Codes over a field formed by other methods may be given in a unit derived form from which further development can be performed.

    A linear block code with generator matrix G and check matrix H is described by GH^T = 0. The matrix H generates the dual of the code and the term 'control matrix' is also used for the 'check matrix' H^T. The basic unit-derived method is as follows: U is an invertible n×n matrix and is broken up as U = (A; B), where (A; B) denotes the block matrix with A placed on top of B. The inverse of U has a compatible form (C, D), with blocks C and D side by side, so that (A; B)(C, D) = I_n.

    Now A has size r×n for some r and then B has size (n-r)×n, C has size n×r and D has size n×(n-r). Then precisely AC = I_r, AD = 0_{r×(n-r)}, BC = 0_{(n-r)×r}, BD = I_{n-r}. So AD = 0 defines an [n, r] code C where A is the generator matrix, D is a control matrix and D^T generates the dual code of C.

    A more general form of the unit-derived method (see [21,22,23,24]) is as follows: Given a unit matrix system UV = I_n, taking any r rows of U gives an [n, r] code and a control matrix is obtained by eliminating the corresponding columns of V. Thus many codes, and codes of a particular type, may be derived from a single unit scheme.
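    As an added illustration (a sketch in Python, not taken from the paper), the following fragment carries out the basic unit-derived construction over GF(2) using a small symmetric matrix which is its own inverse (it reappears in Example 2.5 below); any invertible U with inverse V would serve.

        # Minimal sketch of the unit-derived construction over GF(2); U below is
        # its own inverse, so V = U. Any invertible U with inverse V would do.
        import numpy as np

        U = np.array([[0,1,1,1],
                      [1,1,1,0],
                      [1,1,0,1],
                      [1,0,1,1]])
        V = U.copy()                       # U*U = I over GF(2), so V = U^{-1} = U
        assert ((U @ V) % 2 == np.eye(4, dtype=int)).all()

        rows = [0, 2]                      # choose any r rows of U as generator matrix A
        A = U[rows, :]
        keep = [j for j in range(4) if j not in rows]
        D = V[:, keep]                     # control matrix: delete corresponding columns of V
        print((A @ D) % 2)                 # AD = 0, so D^T is a check matrix of this [4,2] code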

    The properties of the units are used to obtain properties of the codes, and units are formed with a particular type, length, rate, distance or field type in mind. Infinite families of required codes are also constructed and analysed; see for example the paper [17]. The number of choices of r rows from n is the binomial coefficient (n choose r), thus deriving many codes from a single unit scheme. Having a big choice is also useful in producing cryptographic schemes from large unit schemes.

    In the basic unit scheme above A is the generator matrix of an [n, r] code C and then D, which is an n×(n-r) matrix, is a control matrix for C. But note that B, C have been ignored! They can also be used to describe a 'complementary code' but even better can be used to form a convolutional code with A, D. Distances of the convolutional codes formed from a unit can often be determined in terms of a sum of the distances of linear codes formed from that unit. Convolutional codes have in addition their own efficient decoding algorithms, such as the Viterbi algorithm and sequential decoding.

    The matrices {A,B,C,D} have full ranks as they are parts of invertible matrices.

    That every linear block code over a field arises as a unit-derived code is shown in the following propositions.

    Proposition 2.1. Let C be a linear code over a field. Then C is equivalent to a unit-derived code.

    Proof. Assume C is an [n, r] code with generator matrix A and check matrix H. Then AH^T = 0 for an r×n matrix A and an (n-r)×n matrix H; here 0 = 0_{r×(n-r)}. Let {e_1, e_2, ..., e_r} be the rows of A, which are linearly independent. Extend these to a basis {e_1, e_2, ..., e_r, e_{r+1}, ..., e_n} for the whole n-dimensional space. Let B = (e_{r+1}; ...; e_n) and G = (A; B). Then G is invertible with inverse given by K = G^{-1} = (C, D) where C is an n×r matrix and D is an n×(n-r) matrix. Thus (A; B)(C, D) = I_n. Then AD = 0. Now D^T has rank (n-r) and H has rank (n-r), and hence the code generated by A with check matrix H is equivalent to the code generated by A with check matrix D^T. (D and H^T generate the null space of Ax = 0 and have the same rank.)

    A code may not be originally constructed as a unit-derived code but it is useful to look at a code in this manner, which leads to further and better constructions and in particular to constructions of convolutional codes. A code is a structure which is part of a bigger structure about which more is already known. Using the bigger structure to construct and analyse the embedded structures has many advantages. Multiple codes of a particular type may be deduced from just one unit.

    If we require particular types of codes, as for example DC (including self-dual) codes or LCD codes, then look for particular types of units which give such codes in the unit-derived way.

    The convolutional codes derived in [14] use unit-derived methods from Vandermonde/Fourier and other related matrices.

    Unit-derived codes may also be obtained from a scheme where UV = αI_n, α ≠ 0. The process is similar: Choose any r rows of U for a generator matrix and a check matrix is obtained by eliminating the corresponding columns of V. This is useful when considering Vandermonde/Fourier matrices, Propositions 2.11 and 2.12.

    Proposition 2.2. Let U be an orthogonal matrix. Then any unit-derived block linear code from U is an LCD (linear complementary dual) code.

    Proof. Now UU^T = I_n. Thus the unit scheme is UU^T = (A; B)(C, D) = I_n for matrices A, B, C, D where A is of size r×n, B is of size (n-r)×n, C is of size n×r and D is of size n×(n-r). Denote the code generated by A by C. This code has control matrix D, which means AD = 0. Now U^T = (C, D) and so U = (C^T; D^T), giving that C^T = A, D^T = B. Thus AD = 0 is the same as AB^T = 0.

    Hence B generates the dual code of C. Now no non-trivial combination of rows of A can equal a combination of rows of B as U = (A; B) is non-singular. Hence C ∩ C^⊥ = 0 as required.

    The following proposition is shown in a similar manner to Proposition 2.2.

    Proposition 2.3. Let X be an n×n matrix such that XX^T = αI_n for α ≠ 0. Suppose X is broken up as follows: X = (A; B). From this it follows that (A; B)(A^T, B^T) = αI_n where A has size r×n. Then A generates an [n, r] LCD code and B generates the dual of this code.

    Orthogonal matrices are thus a rich source for LCD codes. Given an orthogonal n×n matrix U any r rows of U may be chosen as the generator matrix for a [n,r] code and this code is then an LCD linear block code.

    An orthogonal matrix may also be used to form a self-dual code by combining it with an identity as described in the following Propositions 2.4 and 2.10. The extended Hamming [8,4,4] and Golay [24,12,8] codes are constructed in this way, see Examples 2.5 and 2.6 below. Other self-dual codes may be constructed in a similar manner from orthogonal matrices.

    The following proposition is known but is given here in a form suitable for the constructions.

    Proposition 2.4. Let X be an orthogonal n×n matrix over a field of characteristic 2. Then the matrix A = (I_n, X) generates a self-dual [2n, n] code. Conversely, if A = (I_n, X) generates a self-dual code where X is an n×n matrix over a field of characteristic 2, then X is orthogonal.

    Proof. Suppose X is orthogonal. Then (I_n, X)(I_n; X^T) = I_n + XX^T = I_n + I_n = 0_{n×n}. Thus (I_n; X^T), of rank n, is a control matrix for the code C generated by (I_n, X). Thus ((I_n; X^T))^T = (I_n, X) generates the dual code of C. Hence C is self-dual.

    On the other hand, if the code generated by (I_n, X) is self-dual then (I_n, X)^T = (I_n; X^T) is a control matrix for this code and so (I_n, X)(I_n; X^T) = 0_{n×n}. Hence I_n + XX^T = 0_{n×n} and so XX^T = I_n.

    Example 2.5. The extended Hamming [8,4,4] self-dual binary code H is formed this way. Let U be the 4×4 matrix with rows (0,1,1,1), (1,1,1,0), (1,1,0,1), (1,0,1,1). Then U^2 = I_4, U = U^T and A = (I_4, U) is a generator matrix for the extended Hamming [8,4,4] self-dual code H. In addition the control matrix for the code has the form (I_4; U), in which each row is unique and can be used to correct any one error in the one-error correcting code H.
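    The claims of Example 2.5 are small enough to check by machine; the following added Python fragment (independent of the GAP computations mentioned later) confirms U^2 = I_4, the self-orthogonality of (I_4, U) and the minimum weight by brute force.

        # Check of Example 2.5 over GF(2): U^2 = I_4, (I_4, U) is self-orthogonal
        # of rank 4 (hence self-dual), and the code has minimum weight 4.
        import numpy as np
        from itertools import product

        U = np.array([[0,1,1,1],[1,1,1,0],[1,1,0,1],[1,0,1,1]])
        print("U^2 = I:", ((U @ U) % 2 == np.eye(4, dtype=int)).all())

        G = np.hstack([np.eye(4, dtype=int), U])          # generator of the [8,4,4] code
        print("G G^T = 0:", ((G @ G.T) % 2 == 0).all())

        dmin = min(int(((np.array(m) @ G) % 2).sum())
                   for m in product([0,1], repeat=4) if any(m))
        print("minimum weight:", dmin)                     # expect 4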

    Example 2.6. The Golay [24,12,8] code is formed in this way, [32]. Let U be the reverse circulant matrix formed using (0,1,1,0,1,1,1,1,0,1,0,0) as the first row. Then U^2 = I_12, U = U^T and (I_12, U) is a generator matrix for the self-dual Golay [24,12,8] code G. See [32] for details. The control matrix for the code has the form (I_12; U). The sum of any 1, 2 or 3 rows is unique and thus a lookup table can be formed to correct up to three errors in this 3-error correcting Golay [24,12,8] self-dual code.
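    The same kind of check can be made for Example 2.6; the added Python fragment below builds the reverse circulant matrix from the stated first row and brute-forces the minimum weight of the code generated by (I_12, U).

        # Check of Example 2.6 over GF(2): X^2 = I_12 for the reverse circulant X,
        # and the [24,12] code generated by (I_12, X) has minimum weight 8.
        import numpy as np
        from itertools import product

        first = [0,1,1,0,1,1,1,1,0,1,0,0]
        X = np.array([[first[(i + j) % 12] for j in range(12)] for i in range(12)])
        print("X symmetric:", (X == X.T).all())
        print("X^2 = I:", ((X @ X) % 2 == np.eye(12, dtype=int)).all())

        G = np.hstack([np.eye(12, dtype=int), X])
        dmin = min(int(((np.array(m) @ G) % 2).sum())
                   for m in product([0,1], repeat=12) if any(m))
        print("minimum weight:", dmin)                     # expect 8 (Golay [24,12,8])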

    A generator matrix for a linear block code may be given in systematic form, G = (I_n, P) where P is an n×t matrix, see [3]. The distance of the code generated by (I_n, P) is a function of the 'unit-derived' type codes from P.

    Proposition 2.7. Consider the code C generated by G = (I_n, P). Suppose the code generated by any s rows of P has distance at least (d - s) and for some choice of r rows the code generated by these r rows has distance exactly (d - r); then the distance of C is d.

    The proposition makes sense even if the number of columns of P is less than n. The following Lemma is easy from results on fields.

    Lemma 2.8. Let F be a field. Then F has a square root of (-1) or else a quadratic extension of F has a square root of (-1).

    Proof. If F does not have a square root of (-1) then x^2 + 1 is irreducible over F, and the quadratic extension F[x]/(x^2 + 1) contains a square root of (-1).

    Lemma 2.9. Let F be a field which contains a square root of (-1), denoted by i, and X an n×n matrix over F. Then XX^T = I_n if and only if (iX)(iX)^T = -I_n.

    Proposition 2.4 is implicit in the following more general proposition which enables the construction of self-dual codes over fields.

    Proposition 2.10. Let X be an n×n matrix over a field F.

    (i) If X is an orthogonal matrix then (I_n, iX) generates a self-dual code where i is a square root of (-1) in F or in a quadratic extension of F.

    (ii) If (I_n, X) is self-dual then iX is orthogonal where i is a square root of (-1) in F or in a quadratic extension of F.

    Proof. (i) (I_n, iX)(I_n; (iX)^T) = I_n - XX^T = 0 and so (I_n, iX) generates a self-dual code as ((I_n; (iX)^T))^T = (I_n, iX).

    (ii) Suppose (I_n, X) is self-dual. Then a control matrix of the code is (I_n, X)^T = (I_n; X^T) and so (I_n, X)(I_n; X^T) = 0. Hence I_n + XX^T = 0 and so XX^T = -I_n. Hence (iX)(iX)^T = I_n.

    In a field of characteristic 2, (-1) = 1 and so Proposition 2.4 follows from Proposition 2.10.

    This gives a general method for constructing and analysing self-dual codes from unit orthogonal and orthogonal like matrices.

    Using Fourier (or more generally Vandermonde) unit matrices to construct various types of MDS codes was initiated in [13,25] and further developed in [17] and others. Here we present propositions in a very general form from which these constructions may be derived. Families of required types, lengths and rates are achievable.

    Let F_n be a Fourier matrix over a finite field F. Over which finite fields F_n can be constructed is discussed in [17,25] and elsewhere. Let F_n^{-1} be the inverse of F_n, giving the unit scheme F_n F_n^{-1} = I_n. Let F_n have rows {e_0, e_1, ..., e_{n-1}} in order and F_n^{-1} have columns {f_0, f_1, ..., f_{n-1}} in order. Now e_1 = (1, ω, ω^2, ..., ω^{n-1}) and e_i = (1, ω^i, ω^{2i}, ..., ω^{(n-1)i}) where ω is a primitive n-th root of unity in the field F.

    Then (see [13,17,25]) it is noted that f_i = (1/n) e_{n-i}^T and e_i = n f_{n-i}^T, with indices taken mod n.

    Proposition 2.11. Let F_n be a Fourier matrix over a finite field F. Let F_n^{-1} be the inverse of F_n, giving the unit scheme F_n F_n^{-1} = I_n. Let F_n have rows {e_0, e_1, ..., e_{n-1}} in order where e_i = (1, ω^i, ω^{2i}, ..., ω^{(n-1)i}) and ω is a primitive n-th root of unity in the field F.

    Then the basic unit scheme from the Fourier matrix, F_n F_n^{-1} = I_n, is given as follows:

    (e_0; e_1; ...; e_{n-1})(e_0^T, e_{n-1}^T, e_{n-2}^T, ..., e_1^T) = n I_n.

    Having n I_n rather than I_n is no problem in describing codes from the scheme as n ≠ 0 in a field in which the Fourier n×n matrix exists; H is a control, respectively generator, matrix if and only if αH is a control, respectively generator, matrix for α ≠ 0.
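    As an added numerical illustration (assuming the prime field GF(17), in which ω = 2 has order 8), the following Python fragment constructs the Fourier 8×8 matrix, its inverse, and checks the relation f_i = (1/n) e_{n-i}^T between columns of the inverse and rows of F_n.

        # Fourier matrix over GF(17) with omega = 2 (order 8 mod 17).
        import numpy as np

        p, n, w = 17, 8, 2
        F = np.array([[pow(w, i * j, p) for j in range(n)] for i in range(n)])
        winv = pow(w, -1, p)
        ninv = pow(n, -1, p)
        Finv = np.array([[(ninv * pow(winv, i * j, p)) % p for j in range(n)]
                         for i in range(n)])

        print("F Finv = I:", ((F @ Finv) % p == np.eye(n, dtype=int)).all())
        # column f_i of the inverse is (1/n) times row e_{n-i} of F (indices mod n)
        ok = all((Finv[:, i] == (ninv * F[(n - i) % n, :]) % p).all() for i in range(n))
        print("f_i = (1/n) e_{n-i}^T:", ok)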

    This gives the following, see for example [25]:

    Proposition 2.12. Let F_n be a Fourier n×n matrix over a finite field with rows {e_0, e_1, ..., e_{n-1}}. Suppose F_n = (A; B) where A = (e_0; e_1; ...; e_{r-1}).

    (i) The code generated by A is an [n, r, n-r+1] MDS block code.

    (ii) When r > n/2 the code generated by A is an [n, r, n-r+1] DC MDS code.

    (iii) In the case when r > n/2 the CSS construction from the DC [n, r, n-r+1] code gives a quantum error-correcting [[n, 2r-n, n-r+1]] code which is an MDS quantum code§.

    §MDS in the quantum code sense
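    The following added Python sketch checks Proposition 2.12(i) in a small case over GF(11), where 3 has order 5; the first three rows of the 5×5 Fourier matrix should give a [5,3,3] MDS code.

        # Brute-force distance check of Proposition 2.12(i) over GF(11).
        import numpy as np
        from itertools import product

        p, n, w, r = 11, 5, 3, 3           # 3 has order 5 modulo 11
        F = np.array([[pow(w, i * j, p) for j in range(n)] for i in range(n)])
        A = F[:r, :]                        # generator matrix: rows e_0, ..., e_{r-1}

        dmin = min(int(np.count_nonzero((np.array(m) @ A) % p))
                   for m in product(range(p), repeat=r) if any(m))
        print("length", n, "dimension", r, "minimum distance", dmin)   # expect n - r + 1 = 3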

    If r rows of F_n are chosen in arithmetic progression, starting at any row, with common difference k where gcd(k, n) = 1, then an MDS [n, r, n-r+1] code is still obtained; the differences are taken mod n. This may be used to construct LCD codes. It will be shown later that DC convolutional codes may in many circumstances be produced from the unit scheme that produced the LCD codes. Examples are given as follows before the general result is described.

    Example 2.13. Let F_7 denote a Fourier 7×7 matrix over a finite field. Such a matrix exists over a field whose characteristic does not divide 7 and which has an element of order 7. Thus such a matrix exists for example over GF(2^3) or over GF(13^2). Look at a unit scheme formed by rearranging the rows of F_7 as follows:

    (e_6; e_0; e_1; e_2; e_3; e_4; e_5)(e_1^T, e_0^T, e_6^T, e_5^T, e_4^T, e_3^T, e_2^T) = 7 I_7.

    Then the first three rows (e_6; e_0; e_1) generate a [7,3,5] MDS code. A control matrix is (e_5^T, e_4^T, e_3^T, e_2^T) and thus the dual code is generated by the transpose of this, (e_5; e_4; e_3; e_2). Thus a [7,3,5] LCD code is obtained; the e_i are independent as rows of an invertible matrix.

    Looking at the unit scheme as given, it will be shown later how a convolutional code which is DC can be constructed.

    Example 2.14. Let F_8 be a Fourier 8×8 matrix over a finite field. Such a matrix exists in a field of characteristic not dividing 8 and having an element of order 8, for example over GF(3^2) or over the prime field GF(17).

    The rows of F_8 in order are denoted by {e_0, e_1, ..., e_7}. Look at the unit scheme in the form

    (e_6; e_7; e_0; e_1; e_2; e_3; e_4; e_5)(e_2^T, e_1^T, e_0^T, e_7^T, e_6^T, e_5^T, e_4^T, e_3^T) = 8 I_8.

    Then (e_6; e_7; e_0; e_1; e_2) generates an [8,5,4] MDS code. A control matrix is (e_5^T, e_4^T, e_3^T) and thus the transpose of this, (e_5; e_4; e_3), generates the dual of the code. Hence the code is an LCD code; the e_i are independent as rows of an invertible matrix.

    The idea is to keep a row e_i and its 'conjugate' e_{n-i} together in the generating matrix and thus get an LCD code. When using the same unit scheme to obtain convolutional codes, then DC convolutional codes are obtainable. The following is the general result which is best understood by looking at Examples 2.13 and 2.14.

    Proposition 2.15. Let F_n be a Fourier matrix over a finite field. Let the rows in order of F_n be denoted by {e_0, e_1, ..., e_{n-1}}. Rearrange the Fourier matrix unit scheme as follows:

    (e_r; e_{r+1}; ...; e_{n-1}; e_0; e_1; ...; e_{n-r}; e_{n-r+1}; ...; e_{r-1})(e_{n-r}^T, ..., e_1^T, e_0^T, e_{n-1}^T, ..., e_r^T, e_{r-1}^T, ..., e_{n-r+1}^T) = n I_n.

    Then the code generated by (e_r; e_{r+1}; ...; e_{n-1}; e_0; e_1; ...; e_{n-r}) is an LCD MDS code.

    An advantage of this is that looking at the full unit scheme allows the construction of convolutional DC codes and from the convolutional DC code quantum error-correcting codes are defined by the CSS construction.

    From full unit schemes:

    DC linear block codes from a unit → (unit-derived) convolutional LCD codes from the unit.

    LCD linear block codes from a unit → (unit-derived) convolutional DC codes from the unit → (unit-derived) convolutional quantum codes from the unit scheme.

    The basic unit-derived scheme is given by: (A; B)(C, D) = I_n.

    As noted, using AD = 0 defines a block linear code where A generates the code and D^T generates the dual of this code. The total power of the unit is not used as {B, C} are ignored. This can be rectified by going on to describe convolutional codes from the unit scheme. The general idea is to use A, B to describe convolutional codes G(z) = A + Rz where R is formed from B. DC and LCD convolutional codes are obtained. Constructing DC convolutional codes leads to the construction of quantum error-correcting convolutional (QECC) codes, by the CSS construction.

    A free distance can be prescribed as a linear functional of the distances of the (block linear) codes generated by A and B.

    Convolutional MDS codes are constructed in [14,17] by the method.

    Suppose that A, B both have the same size, r×n, in the unit-derived formula; now r = n/2 and n is even. Let the code generated by A have distance d_1 and the code generated by B have distance d_2. Consider G[z] = A + Bz. This generates a convolutional code of memory 1. As G(z)C = AC = I_r the generator matrix has a right inverse and so the code is non-catastrophic.

    Now also (A + Bz)(D - Cz) = AD - ACz + BDz - BCz^2 = -I_r z + I_r z = 0. Thus D - Cz is a control matrix for the (n, r, r; 1) convolutional code.

    The free distance for the code is (d_1 + d_2). This minimum distance is obtained when the information vector has support 1. If the information vector P(z) has support k then P(z)G(z) has distance (d_1 + d_2 + k - 1).

    The dual code generator matrix is obtained from the control matrix H^T(z); as noted the dual generator matrix is H(z^{-1})z^m where m is the memory and H^T(z) is the control matrix. In this case the control matrix is H^T(z) = D - Cz and so a generator matrix for the dual code is (D^T - C^T z^{-1})z = -C^T + D^T z.

    If C^T = A and D^T = B then a self-dual convolutional code is obtained. Such a situation arises when (A; B) is orthogonal and the characteristic is 2.

    As noted in Section 2.1, Lemma 2.8, a finite field F has a square root of (-1) or else a quadratic extension of F has such a square root. Thus define G(z) = A + iBz where i denotes a square root of (-1). Then (A + iBz)(iD + Cz) = 0 and so H^T(z) = iD + Cz is a control matrix, giving that H(z^{-1})z = C^T + iD^T z is a dual matrix. In case U is an orthogonal matrix, C^T = A, D^T = B and a dual matrix is A + iBz, giving that G(z) generates a self-dual convolutional code of distance equal to the sum of the distances of the codes generated by A and by B.

    Proposition 3.1. Let U be a 2n×2n invertible matrix. Suppose U = (A; B) and (A; B)(C, D) = I_{2n} where A and B have size n×2n and C, D have size 2n×n.

    1) A generates a [2n, n] code C and D^T generates the dual code of C. B generates a [2n, n] code D and C^T generates the dual code of D.

    2) G(z) = A + Bz generates a (non-catastrophic) convolutional (2n, n, n; 1) code C. Then G(z)(D - Cz) = 0, D - Cz is a control matrix of C and -C^T + D^T z generates the dual code of C.

    3) If the code generated by A has distance d_1 and the code generated by B has distance d_2, then the (free) distance of C is d = d_1 + d_2 and C is a (2n, n, n; 1, d) convolutional code. Further, if the information vector P(z) has support k then P(z)G(z) has distance (d + k - 1).
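    A small machine check of Proposition 3.1 (an added sketch, not from the paper) can be made with the 4×4 self-inverse matrix of the earlier examples; polynomial matrices are stored as lists of coefficient matrices and the identities G(z)(D - Cz) = 0 and G(z)C = I are verified over GF(2).

        # Verify the identities of Proposition 3.1 for a tiny unit over GF(2).
        import numpy as np

        X = np.array([[0,1,1,1],[1,1,1,0],[1,1,0,1],[1,0,1,1]])
        A, B = X[:2, :], X[2:, :]
        C, D = X.T[:, :2], X.T[:, 2:]       # X^{-1} = X^T here

        def polymul(P, Q, mod=2):
            """Multiply polynomial matrices given as lists of coefficient matrices."""
            R = [np.zeros((P[0].shape[0], Q[0].shape[1]), dtype=int)
                 for _ in range(len(P) + len(Q) - 1)]
            for i, Pi in enumerate(P):
                for j, Qj in enumerate(Q):
                    R[i + j] = (R[i + j] + Pi @ Qj) % mod
            return R

        Gz = [A, B]                          # G(z) = A + Bz
        Hz = [D, (-C) % 2]                   # control matrix D - Cz (signs vanish over GF(2))
        print("G(z)(D - Cz) = 0:", all((M == 0).all() for M in polymul(Gz, Hz)))
        print("G(z)C = I:", ((A @ C) % 2 == np.eye(2, dtype=int)).all()
              and ((B @ C) % 2 == 0).all())  # right inverse, hence non-catastrophic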

    Proposition 3.2. Let U be a 2n×2n orthogonal matrix. Suppose U = (A; B) and (A; B)(C, D) = I_{2n} where A and B have size n×2n and C, D have size 2n×n.

    (1) A generates a [2n, n] code C and B generates the dual of C; hence C is an LCD code. B generates a [2n, n] code D and A generates the dual of D; hence D is an LCD code.

    (2) G(z) = A + Bz generates a (non-catastrophic) convolutional (2n, n, n; 1) code C. Then -C^T + D^T z = -A + Bz generates the dual code of C.

    (3) If U is a matrix over a field of characteristic 2 then G(z) = A + Bz generates a self-dual convolutional code.

    (4) If the code generated by A has distance d_1 and the code generated by B has distance d_2, then the (free) distance of the code generated by G(z) is d = d_1 + d_2 and this is a (2n, n, n; 1, d) convolutional code. Further, if the information vector P(z) has support k then P(z)G(z) has distance (d + k - 1).

    (5) In case of characteristic 2, the code generated by G(z) is used to generate a quantum convolutional code of memory 1 which has type [[2n, 0, d]] where d = d_1 + d_2 is as given in item (4).

    (6) Define G(z) = A + iBz where i denotes a square root of (-1). Then (A + iBz)(iD + Cz) = 0 and so H^T(z) = iD + Cz is a control matrix, giving that H(z^{-1})z = C^T + iD^T z is a dual matrix. The free distance of the code generated by G(z) is equal to the sum, d, of the distances of the codes generated by A and by B.

    (7) Suppose now U = (A; B) is an orthogonal matrix; then in item (6), C^T = A, D^T = B. A dual matrix is then A + iBz, giving that G(z) generates a self-dual convolutional (2n, n, n; 1, d) code. This can be used to define a [[2n, 0, d]] quantum convolutional code.

    These propositions are very general and can be used to construct infinite series of such codes with increasing distances.

    Consider cases where A has size greater than B in the unit-derived formula (A; B)(C, D) = I_n with U = (A; B).

    Let A have size r×n; then B has size (n-r)×n. Now r > (n-r) is equivalent to 2r > n. Let t = r - (n-r) = 2r - n and 0_t = 0_{t×n}. Thus B_1 = (0_t; B) is an r×n matrix. Now C is an n×r matrix and thus has the form C = (X, C_1) where C_1 has size n×(n-r) and X has size n×(2r-n). As AC = I_r then AC_1 = (0_{(2r-n)×(n-r)}; I_{(n-r)×(n-r)}).

    Define G(z) = A + B_1 z. This defines a generator matrix for a convolutional (n, r, n-r; 1) code. BC = 0 implies BC_1 = 0_{(n-r)×(n-r)}.

    Then (A + B_1 z)(D - C_1 z) = AD - AC_1 z + B_1 D z - B_1 C_1 z^2 = 0_{r×(n-r)} - (0_{(2r-n)×(n-r)}; I_{(n-r)×(n-r)})z + (0_{(2r-n)×(n-r)}; I_{(n-r)×(n-r)})z = 0_{r×(n-r)}. Thus the control matrix is (D - C_1 z) and a dual matrix is -C_1^T + D^T z. As (A + B_1 z)C = I_r, the code is non-catastrophic.

    Define G(z) = A + iB_1 z where i is a square root of (-1) in the field or in a quadratic extension of the field. Then (A + iB_1 z)(iD + C_1 z) = 0 and C_1^T + iD^T z is a dual matrix. In case U is orthogonal, C = A^T, D = B^T and A = (X^T; C_1^T), B_1 = (0; D^T). Hence the code generated by G(z) contains its dual.

    Proposition 3.3. Let U be a unit (invertible) matrix over a field with U = (A; B) where A has size r×n and B has size (n-r)×n with r > n-r. Let t = 2r-n and B_1 = (0_{t×n}; B). Then

    (1) G(z) = A + B_1 z generates a convolutional (n, r, n-r; 1) code C.

    (2) Let A_1 be the matrix of the first (2r-n) rows of A. The distance d of C is min{d(A_1), d(A) + d((A_1; B))} where d(X) denotes the distance of the code generated by X.

    Proposition 3.4. Let U be an orthogonal matrix over a field F with U = (A; B) where A has size r×n and B has size (n-r)×n with r > n-r. Let t = 2r-n and B_1 = (0_{t×n}; B). Let i be a square root of (-1) in F or in a quadratic extension of F. Then

    (1) G(z) = A + iB_1 z generates a convolutional dual-containing (n, r, n-r; 1) code C.

    (2) Let A_1 be the matrix of the first (2r-n) rows of A. The distance d of C is min{d(A_1), d(A) + d((A_1; B))} where d(X) denotes the distance of the code generated by X.

    (3) A quantum convolutional code of the form [[n, 2r-n, d]] is constructed from C where d is the distance of C.

    The process is developed similarly by considering (A; B)(C, D) = αI_n with U = (A; B) where α ≠ 0.

    It is best illustrated by looking at a block decomposition of the form (A_0; A_1; A_2; A_3)(B_0, B_1, B_2, B_3) = I_{4n}, where each A_i has size n×4n. Take G(z) = (A_0; A_1; A_2) + (0; 0; A_3)z.

    In case U is orthogonal, let G(z) = (A_0; A_1; A_2) + i(0; 0; A_3)z and then a (4n, 3n, n; 1, d) DC convolutional code is obtained from which a quantum convolutional code of form [[4n, 2n, d]] is obtained. The d may be calculated algebraically and depends on the distances of codes formed from the blocks.

    The following prototype examples exemplify some of the general constructions. The examples given tend to be linear block and convolutional of types DC and LCD, but many types and infinite families of such codes may be built up by the techniques. From DC codes, quantum error correcting codes (QECC) may be formed and families of DC codes lead to the formation of families of quantum codes .

    If the DC code has the form [n, r, d] then the QECC formed has length n, distance d and dimension (2r - n) – see [5,6,40].

    Example 3.5. Let F_7 be a Fourier 7×7 matrix over some field F. The field F is any field over which the Fourier 7×7 matrix exists. Thus F could be GF(2^3), a characteristic 2 field, but any field whose characteristic does not divide 7 and which contains an element of order 7 may be used.

    Denote the rows of F_7 in order by {e_0, e_1, e_2, e_3, e_4, e_5, e_6} and the columns of the inverse of F_7 in order by {f_0, f_1, f_2, f_3, f_4, f_5, f_6}. Note that e_i f_j = 0 for i ≠ j and e_i f_i = 1, but also, as F_7 is a Fourier matrix, that f_i^T = (1/7) e_{7-i}. The fraction part is no problem for check or control matrices: H is a check matrix if and only if αH is a check matrix for any α ≠ 0.

    Let A = (e_0; e_1; e_2; e_3), B_1 = (e_4; e_5; e_6) where F_7 = (A; B_1) is the Fourier 7×7 matrix. Now from [14] A generates a [7,4,4] MDS (block) DC code.

    Let B = (0; B_1). Define G(z) = A + Bz = A + (0; e_4; e_5; e_6)z. Now G(z)(f_0, f_1, f_2, f_3) = I_4 and so G(z) has a right inverse and thus the code generated by G(z) is non-catastrophic. G(z) defines a (7, 4, 3; 1) convolutional code C. The GSB for such a code is (7-4)(⌊3/4⌋+1) + 3 + 1 = 3 + 3 + 1 = 7. It is easy to check that this 7 is the free distance of the code and so the code is an MDS convolutional code.

    Note that (α_1, α_2, α_3, α_4)B has weight at least 5 as an element of the [7,3,5] code generated by B_1, where (α_1, α_2, α_3, α_4) is a non-zero 1×4 vector, except when (α_1, α_2, α_3, α_4) = (α_1, 0, 0, 0); but then (α_1, 0, 0, 0)A = α_1 e_0 which has weight 7.

    G(z)((f_4, f_5, f_6) - (f_1, f_2, f_3)z) = 0 and so H^T(z) = (f_4, f_5, f_6) - (f_1, f_2, f_3)z is a control matrix. Then H(z^{-1})z = (f_4^T; f_5^T; f_6^T)z - (f_1^T; f_2^T; f_3^T) generates the dual matrix. Now since f_i^T = (1/7) e_{7-i} this means 7H(z^{-1})z = (e_3; e_2; e_1)z - (e_6; e_5; e_4) generates the dual matrix. Thus C is a convolutional (7, 4, 3; 1, 7) code which is an LCD code.

    Example 3.6. Use the same setup as in Example 3.5 where F_7 is a Fourier 7×7 matrix. Take A = (e_0; e_1; e_6; e_2; e_5). Then it follows from [14] that the code generated by A is an LCD [7,5,3] code. Let B = (0; 0; 0; e_4; e_3).

    Define G(z) = A + Bz = (e_0; e_1; e_6; e_2; e_5) + (0; 0; 0; e_4; e_3)z. Then (A + Bz)(f_0, f_1, f_6, f_2, f_5) = I_5 and so G(z) has a right inverse and hence the code C generated by G(z) is non-catastrophic. Now G(z){(f_4, f_3) - (f_2, f_5)z} = 0 and so (f_4, f_3) - (f_2, f_5)z = H^T(z) is a control matrix.

    Now H(z^{-1})z = (f_4^T; f_3^T)z - (f_2^T; f_5^T) = (1/7){(e_3; e_4)z - (e_5; e_2)}. Thus (e_5; e_2) - (e_3; e_4)z generates the dual code of C.

    If the field F has characteristic 2 the code C is dual containing. Thus when F_7 is a Fourier matrix over GF(2^3), the code C is dual containing. It is easy to check directly that the free distance of C is 5 and C is thus an MDS convolutional (7, 5, 2; 1, 5) code; this is dual-containing when F_7 is a Fourier matrix over GF(2^3).

    If the field does not have characteristic 2 then define G(z) = A + iBz where i is a square root of (-1) in the field or in a quadratic extension of the field. Then again an MDS convolutional dual-containing code is obtained. A quantum convolutional code is constructed from a dual-containing code.

    The next example is a prototype example which although small demonstrates the power of the methods. In larger examples each row is replaced by a block of rows and lengths, distances are increased substantially.

    Example 3.7. Let X be the 4×4 matrix over GF(2) with rows (0,1,1,1), (1,1,1,0), (1,1,0,1), (1,0,1,1). Then X^2 = I_4, X = X^T. Thus DC codes are obtained by taking rows of X as a generating matrix and deleting corresponding columns of X^T = X to obtain a control matrix.

    Let A consist of the first two rows and B of the last two rows of X, so that (A; B)(C, D) = I_4 is our unit scheme. Now here D = B^T, C = A^T. Thus A generates a [4,2,2] code C with control matrix D = B^T and so B generates the dual code of C. The code C is a [4,2,2] LCD code.

    Extend this to a convolutional code C using G(z) = A + Bz. Now (A + Bz)(D + Cz) = 0 so that H^T(z) = D + Cz is a control matrix. Also G(z)C = I_2 and so the code is non-catastrophic. The dual code of C is generated by H(z^{-1})z = C^T + D^T z = A + Bz and hence the code is self-dual. Thus a convolutional self-dual (4, 2, 2; 1, 4) code is obtained. From this a quantum error-correcting code of form [[4, 0, 4]] is obtained.

    P(z)G(z) has distance 4 + (s - 1) for an information vector P(z) of support s.

    Any rows of X may be chosen to generate a code and the resulting code is automatically an LCD code. For example choose the first and third rows of X and get A = ((0,1,1,1); (1,1,0,1)), B = ((1,1,1,0); (1,0,1,1)). Then D = B^T, C = A^T similar to above. A then generates an LCD [4,2,2] code and G(z) = A + Bz generates a non-catastrophic self-dual (4, 2, 2; 1, 4) convolutional code.

    This idea of choosing arbitrary rows, when used on large size matrices, lends itself to forming McEliece-type cryptographic systems.

    Larger rates are obtained as follows. Choose three rows of X and get A = ((0,1,1,1); (1,1,1,0); (1,1,0,1)), B = (1,0,1,1). Then in general form C = A^T, D = B^T. A generates a [4,3,1] LCD code. Give X in its row form: X = (E_0; E_1; E_2; E_3). (In larger length constructions the E_i are blocks of matrices of size n×4n.) Then A = (E_0; E_1; E_2), B = (E_3).

    Define G(z) = (E_0; E_1; E_2) + (0_4; 0_4; E_3)z = A + B_1 z, say; 0_4 indicates here a row of zeros of length 4. This gives a (4, 3, 1; 1) convolutional code which is dual-containing. The distance is 2. Note that (α_1, α_2, α_3)B_1 = α_3 E_3 has weight 3 except when α_3 = 0, in which case (α_1, α_2, α_3)A = α_1 E_0 + α_2 E_1 has weight 2 or 3. If P(z) is an information vector of support s then P(z)G(z) has distance 2 + (s - 1).
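    For the first construction of Example 3.7 above (G(z) = A + Bz with A the first two and B the last two rows of X), the support-1 codeword weights, and hence the claimed free distance 4, can be computed directly; the following added Python fragment does this.

        # Support-1 weights for Example 3.7: weight of a G(z) is w(aA) + w(aB).
        import numpy as np
        from itertools import product

        X = np.array([[0,1,1,1],[1,1,1,0],[1,1,0,1],[1,0,1,1]])
        A, B = X[:2, :], X[2:, :]

        weights = [int(((np.array(a) @ A) % 2).sum() + ((np.array(a) @ B) % 2).sum())
                   for a in product([0,1], repeat=2) if any(a)]
        print("support-1 weights:", weights, "minimum:", min(weights))
        # the example's claimed free distance is 4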

    Example 3.8. In another way, construct rate 3/4 and 1/4 convolutional codes as follows:

    Define G(z) = (E_0; E_1; E_2) + (E_1; E_0; E_3)z + (E_2; E_3; E_0)z^2 + (E_3; E_2; E_1)z^3 and H^T(z) = E_3^T + E_2^T z + E_1^T z^2 + E_0^T z^3.

    Then G(z)H^T(z) = 0. Let the code generated by G(z) be denoted by C. The dual of C is generated by H(z^{-1})z^3 = E_0 + E_1 z + E_2 z^2 + E_3 z^3. Thus C is a dual containing (4, 3, 9; 3) convolutional code. Its free distance is 4, giving a (4, 3, 9; 3, 4) convolutional dual-containing code. From this a quantum [[4, 2, 4]] convolutional code is obtained.

    P(z)G(z) has distance 4 + (s - 1) when P(z) is an information vector of support s.

    Example 3.9. Hamming convolutional code: Here the Hamming [7,4,3] code is extended to a (7, 4, 3; 1, 6) convolutional code. The distance is 6, which is twice that of the Hamming [7,4,3], but it is also a convolutional code which has its own decoding techniques. The method is to look at the Hamming code as a unit-derived code and proceed from there by a general technique of constructing convolutional codes by the unit-derived method.

    The Hamming [7,4,3] is given with generator matrix G whose rows are (1,0,0,0,1,1,0), (0,1,0,0,1,0,1), (0,0,1,0,0,1,1), (0,0,0,1,1,1,1) and check matrix H whose rows are (1,1,0,1,1,0,0), (1,0,1,1,0,1,0), (0,1,1,1,0,0,1). Now (G; H) is not invertible so this cannot be used for extending G to be a unit-derived code. For reasons that will appear later, use L with rows (1,1,1,1,1,1,1), (0,1,0,0,1,0,1), (0,0,1,0,0,1,1), (0,0,0,1,1,1,1) as the generator matrix.

    This is obtained from G above by adding the other three rows to the first row.

    Now complete L to a unit U = (L; K), say, where K has rows (1,0,1,1,1,0,0), (0,1,0,0,1,1,1), (0,0,0,1,1,1,0). It is easy to check that K generates a [7,3,3] code.

    U has inverse V = (C, D), where C is 7×4 and D is 7×3; the rows of V are (0,0,1,1,1,0,0), (1,1,0,1,1,1,1), (0,1,1,1,0,1,1), (1,1,0,0,1,0,1), (1,0,0,0,1,1,0), (0,1,0,0,0,1,0), (0,0,0,1,0,0,1). Form G(z) = L + (0; K)z, where (0; K) is the 4×7 matrix whose first row is zero and whose remaining rows are the rows of K.

    Precisely:

    G(z) generates a convolutional (7,4,3;1) non-catastrophic code C.

    The free distance of C is 6 so G(z) generates a (7,4,3;1,6) convolutional code.

    If P(z) is an information vector then the distance of P(z)G(z) is (6 + s - 1) where s is the support of P(z).

    The free distance may be shown from the following observations. Let (α_1, α_2, α_3, α_4) be a non-zero vector. Then (α_1, α_2, α_3, α_4)(0; K) has weight at least 3 except when α_2 = α_3 = α_4 = 0; but in this case (α_1, α_2, α_3, α_4)L = α_1(1,1,1,1,1,1,1) which has weight 7.

    Call the code generated by G(z) the Hamming convolutional code.
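    The following added Python sketch reconstructs Example 3.9: it inverts U = (L; K) over GF(2), confirms that the first four columns of the inverse form a right inverse of G(z) = L + (0; K)z, and computes the minimum support-1 codeword weight, which the example states is 6.

        # Check of Example 3.9 over GF(2).
        import numpy as np
        from itertools import product

        L = np.array([[1,1,1,1,1,1,1],[0,1,0,0,1,0,1],[0,0,1,0,0,1,1],[0,0,0,1,1,1,1]])
        K = np.array([[1,0,1,1,1,0,0],[0,1,0,0,1,1,1],[0,0,0,1,1,1,0]])
        U = np.vstack([L, K])

        def gf2_inverse(M):
            """Gauss-Jordan inversion over GF(2); assumes M is invertible."""
            n = M.shape[0]
            A = np.hstack([M % 2, np.eye(n, dtype=int)])
            for col in range(n):
                piv = next(r for r in range(col, n) if A[r, col])
                A[[col, piv]] = A[[piv, col]]
                for r in range(n):
                    if r != col and A[r, col]:
                        A[r] = (A[r] + A[col]) % 2
            return A[:, n:]

        V = gf2_inverse(U)
        C, D = V[:, :4], V[:, 4:]
        B1 = np.vstack([np.zeros((1, 7), dtype=int), K])     # the matrix (0; K)

        print("L C = I and (0;K) C = 0:",
              ((L @ C) % 2 == np.eye(4, dtype=int)).all() and ((B1 @ C) % 2 == 0).all())

        # support-1 weight of a codeword a G(z) is w(aL) + w(a(0;K))
        weights = [int(((np.array(a) @ L) % 2).sum() + ((np.array(a) @ B1) % 2).sum())
                   for a in product([0,1], repeat=4) if any(a)]
        print("minimum support-1 weight:", min(weights))      # the example states 6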

    The basic unit-derived scheme from UV = I breaks U as U = (A; B) to derive (A; B)(C, D) = I, and linear convolutional codes of memory 1 from the scheme have been described and analysed.

    Here the unit is broken into more than two blocks and linear convolutional codes of higher memory are derived and analysed.

    Consider the case UV = I with U = (A; B; C; D), V = (E, F, G, H) appropriately, giving another type of unit scheme:

    (A; B; C; D)(E, F, G, H) = I.

    First assume that A, B, C, D are of the same size. Thus U is a 4n×4n matrix. Three-quarter rate block linear codes are described by taking (A; B; C) as a generator matrix and then H is a control matrix. More generally, choosing three of A, B, C, D to form the generator matrix for a code gives a [4n, 3n] three-quarter rate code. The control matrix is immediately clear as one of E, F, G, H. When U is orthogonal, LCD block codes are obtained, from which the convolutional codes described are DC when the characteristic is 2.

    Memory 3 codes are described: G(z) = A + Bz + Cz^2 + Dz^3 is the generator of a (4n, n, 3n; 3) code. This is non-catastrophic as G(z)E = I_n. The distance is d where d is a linear functional of the distances of the codes generated by A, B, C, D. Moreover P(z)G(z) has distance (d + t - 1) where P(z) is an information vector and t is the support of P(z).

    Then (A + Bz + Cz^2 + Dz^3)((F, G, H) - (E, H, G)z - (H, E, F)z^2 + (G, F, E)z^3) = 0 and so K^T(z) = (F, G, H) - (E, H, G)z - (H, E, F)z^2 + (G, F, E)z^3 is a control matrix. The matrix of the dual is given by

    K(z^{-1})z^3 = (G^T; F^T; E^T) - (H^T; E^T; F^T)z - (E^T; H^T; G^T)z^2 + (F^T; G^T; H^T)z^3.

    This dual code is a (4n, 3n, 9n; 3) code. When U is orthogonal, A = E^T, B = F^T, C = G^T, D = H^T.

    Proposition 4.1. Let (A; B; C; D)(E, F, G, H) = I_{4n} be a unit scheme in which {A, B, C, D} are of the same size. Then

    (i) G(z) = A + Bz + Cz^2 + Dz^3 is a generator matrix of a (4n, n, 3n; 3) convolutional code. The distance is a linear functional of the distances of the codes generated by {A, B, C, D}.

    (ii) P(z)G(z) has distance (d + t - 1) where t is the support of the information vector P(z).

    (iii) The control matrix of C is (F, G, H) - (E, H, G)z - (H, E, F)z^2 + (G, F, E)z^3 and the dual code of C is generated by (G^T; F^T; E^T) - (H^T; E^T; F^T)z - (E^T; H^T; G^T)z^2 + (F^T; G^T; H^T)z^3.

    (iv) When the full matrix is orthogonal the dual code of C is generated by

    (C; B; A) - (D; A; B)z - (A; D; C)z^2 + (B; C; D)z^3.

    (v) When the full matrix is orthogonal and the characteristic is 2 the dual code is generated by (C; B; A) + (D; A; B)z + (A; D; C)z^2 + (B; C; D)z^3. In this case the dual code C^⊥ of C is a dual containing (4n, 3n, 9n; 3) convolutional code. From this dual containing code of rate 3/4, using the CSS construction, a quantum error correcting code of length 4n and rate 1/2 is obtained.

    The Example 4.2 below is a very small prototype example with which to illustrate the general method.

    Example 4.2. Consider the 4×4 matrix X over GF(2) with rows (0,1,1,1), (1,1,1,0), (1,1,0,1), (1,0,1,1). The matrix is orthogonal, XX^T = I_4, and also X = X^T.** Then XX^T = I is broken up to give (A; B; C; D)(E, F, G, H) = I_4 where {A, B, C, D} are 1×4 row vectors and E^T = A, F^T = B, G^T = C, H^T = D as X is orthogonal. Define G(z) = A + Bz + Cz^2 + Dz^3. Then G(z) generates a (4, 1, 3; 3, 12) convolutional (non-catastrophic) code C. The distance is 12 as the code generated by each of A, B, C, D has distance 3.

    **(I4,X) is a generator matrix for the extended Hamming [8,4,4] code.

    By Proposition 4.1 part (iv), the dual of C is generated by (C; B; A) + (D; A; B)z + (A; D; C)z^2 + (B; C; D)z^3 (the signs disappear as the characteristic is 2).

    Thus C^⊥ is a dual-containing convolutional rate 3/4 code of the form (4, 3, 9; 3). From this a quantum error-correcting code of rate 1/2 is formed.
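    A short machine check (added here as an illustration) of Example 4.2 and of the control matrix of Proposition 4.1 over GF(2):

        # Verify Example 4.2: the product of G(z) with the control matrix of
        # Proposition 4.1(iii) vanishes over GF(2), and each row of X has weight 3,
        # giving support-1 distance 3 + 3 + 3 + 3 = 12.
        import numpy as np

        X = np.array([[0,1,1,1],[1,1,1,0],[1,1,0,1],[1,0,1,1]])
        A, B, C, D = (X[i:i+1, :] for i in range(4))        # 1x4 row blocks
        E, F, G, H = (M.T for M in (A, B, C, D))            # X is orthogonal and symmetric

        def polymul(P, Q, mod=2):
            R = [np.zeros((P[0].shape[0], Q[0].shape[1]), dtype=int)
                 for _ in range(len(P) + len(Q) - 1)]
            for i, Pi in enumerate(P):
                for j, Qj in enumerate(Q):
                    R[i + j] = (R[i + j] + Pi @ Qj) % mod
            return R

        Gz = [A, B, C, D]                                    # G(z) = A + Bz + Cz^2 + Dz^3
        Kz = [np.hstack([F, G, H]), np.hstack([E, H, G]),    # control matrix coefficients;
              np.hstack([H, E, F]), np.hstack([G, F, E])]    # signs vanish over GF(2)
        print("G(z) K^T(z) = 0:", all((M == 0).all() for M in polymul(Gz, Kz)))
        print("row weights:", [int(M.sum()) for M in (A, B, C, D)])   # each 3, so distance 12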

    Example 4.3. Golay binary code to convolutional rate 3/4 and 1/4 codes with memory 3.

    Consider the matrix X used in forming the self-dual Golay binary [24,12,8] code in the form (I_12, X) as in [32]. This X is the reverse circulant matrix with first row L = [0,1,1,0,1,1,1,1,0,1,0,0]. X is symmetric and XX^T = X^2 = I_12. Here break X into four blocks X_1, X_2, X_3, X_4 of equal size 3×12. The code generated by each X_i has distance 5. Then define G(z) = X_1 + X_2 z + X_3 z^2 + X_4 z^3; G(z) generates a binary (12, 3, 9; 3, 20) code C. Also P(z)G(z) has distance (20 + s - 1) where P(z) is an information vector and s is the support of P(z). Note that since the rows of X are independent, any non-zero combination of the rows of X_1, X_2, X_3, X_4 has weight at least 1.

    As X is orthogonal the dual C^⊥ of C is generated by (X_3; X_2; X_1) + (X_4; X_1; X_2)z + (X_1; X_4; X_3)z^2 + (X_2; X_3; X_4)z^3 by Proposition 4.1. Thus C^⊥ is a convolutional (12, 9, 9; 3) code. It is seen that C^⊥ is a dual containing convolutional code of rate 3/4 and is used to form a convolutional quantum error correcting code of rate 1/2.
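    The block distances and the support-1 weight for Example 4.3 can be computed directly; the following added Python fragment does so (the example states the values 5 and 20 respectively).

        # Block distances and support-1 weight for Example 4.3 over GF(2).
        import numpy as np
        from itertools import product

        first = [0,1,1,0,1,1,1,1,0,1,0,0]
        X = np.array([[first[(i + j) % 12] for j in range(12)] for i in range(12)])
        blocks = [X[3*i:3*i+3, :] for i in range(4)]          # X_1, X_2, X_3, X_4

        def dmin(M):
            return min(int(((np.array(a) @ M) % 2).sum())
                       for a in product([0,1], repeat=M.shape[0]) if any(a))

        print("block distances:", [dmin(Xi) for Xi in blocks])          # the example states 5 each
        sup1 = min(sum(int(((np.array(a) @ Xi) % 2).sum()) for Xi in blocks)
                   for a in product([0,1], repeat=3) if any(a))
        print("minimum support-1 weight:", sup1)                        # the example states 20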

    Example 4.4. Let H be a Hadamard 12×12 matrix. Here the computer algebra system GAP [42] is used to generate H and the subsystem GUAVA is used to construct the codes and verify their distances in the linear block cases. The distances for the convolutional codes can be determined algebraically from the distances of the associated linear block codes.

    Thus H has the unit form (A; B)(A^T, B^T) = 12 I_12. In any field not of characteristic 2 or 3, A and B generate LCD codes.

    Three rows of H generate a [12,3,6] LCD code.

    Six rows generate a [12,6,6] LCD code.

    Nine rows generate a [12,9,2] LCD code.

    Let A be the first six rows of H and B be the last six rows of H. Define G(z)=A+Bz. Then G(z) generates a (12,6,6;1,12) convolutional code.

    Define G(z) = A + iBz where i is a square root of (-1) in the field or in a quadratic extension of the field. Then G(z) generates a self-dual convolutional (12, 6, 6; 1, 12) code. From this a convolutional quantum code of type [[12, 0, 12]] is formed.

    GF(5) has 2 as a square root of (-1), so over GF(5), i can be taken to be 2. Arithmetic in GF(5) is modular arithmetic. GF(7) does not contain a square root of (-1), so it needs to be extended to GF(7^2) in order to obtain a self-dual convolutional code.

    Dual-containing convolutional codes of form (12, 9, 3; 1, d) are obtained by letting A be the first nine rows of H and B the last three rows of H and defining G(z) = A + iB_1 z where B_1 has first six rows consisting of zeros and last three rows consisting of B. This gives rise to a quantum convolutional code of the form [[12, 6, d]]. The distance is d = 4, but note that P(z)G(z) has distance 5 + (s - 2) where s is the support of the information vector P(z).

    Form (I_12, H). This is a [24,12,8] code. Form (I_12, iH), where i is a square root of (-1) in the field or in a quadratic extension of the field. This is a self-dual [24,12,8] code.

    In GF(5) the element 2 is a square root of (-1) and thus (I_12, 2H) gives a self-dual [24,12,8] code in GF(5). The field GF(7) needs to be extended to GF(7^2) and then a self-dual code is obtained.

    1) Cases where the unit system is of the form (A; B; C; D)(E, F, G, H) = I, where A, B, C have the same size r×n but D has size s×n with s < n, can be dealt with in a similar but more complicated manner. The details are omitted.

    2) Cases where the unit system is of size 3n×3n are dealt with by the following proposition:

    Proposition 4.5. Let (A; B; C)(D, E, F) = I_{3n} be a unit scheme in which A, B, C are of the same size. Let G(z) = A + Bz + Cz^2. Then a control matrix for the code generated by G(z) is obtained from the blocks D, E, F; for instance, it may be verified directly that (A + Bz + Cz^2)((E, F) - (D, F)z + (0, E - D)z^2) = 0.

    This allows the construction of rate 1/3 and rate 2/3 convolutional codes which are dual to one another, similar to the method and results in Proposition 4.1.

    3) A unit scheme which can be broken into 8 blocks in an 8n×8n matrix enables, for example, convolutional codes with memory 7 and rates 1/8 and 7/8 to be established. In special cases the rate 7/8 code is dual containing, establishing a rate 3/4 quantum convolutional code of memory 7, similar to Proposition 4.1.

    4) These types of constructions may be continued. For instance matrices with blocks of size n and matrix size 2^i n × 2^i n are more amenable; when the matrix is orthogonal then dual-containing convolutional codes are obtainable from which quantum error correcting convolutional codes are formed with rate (2^{i-1} - 1)/2^{i-1}. Details are omitted.

    A low density parity check, LDPC, code is one where the check/control matrix of the code has a small number of non-zero entries compared to its length, see for example [28].

    The methods devised in previous sections for constructing linear block and convolutional codes are now used to construct LDPC linear and convolutional codes. What is required is that the scheme produces a check/control matrix with low density compared to its length. It is known that for best performance of LDPC codes there should be no short cycles in the control matrix, and this can be achieved by the methods.

    Given a unit scheme UV=I unit-derived codes are formed by taking any r rows of U as generator matrix and a check matrix is obtained by eliminating the corresponding columns of V. Thus if V itself is of low density then any such code formed is an LDPC code; if in addition V itself has no short cycles then any such code formed is an LDPC code with no short cycles.

    Thus given a unit scheme UV=I, where V is of low density and has no short cycles, choose any r rows of U to form the generator matrix of an [n,r] code C and deleting the corresponding r columns in V gives a check matrix for the code and the code C formed is an LDPC code with no short cycles in the check or control matrix. The code can also be specified by choosing the columns of the low density matrix V to form the control matrix and going to U to choose the rows which form a generator matrix.

    This is done in [26] for linear block codes. Therein methods are derived using units in group rings to produce linear LDPC codes and to produce such LDPC codes with no short cycles in the check matrices. Thus a unit system, $uv=1$, is constructed in a group ring where one of the elements $u,v$, say $v$, has small support as a group ring element. The equation $uv=1$ is then mapped to the corresponding matrix equation, $UV=I$, by a process given in [18], in which $V$ has low density. Then using unit-derived codes leads to the construction of LDPC codes as required, and when $V$ has no short cycles it leads to the construction of LDPC codes with no short cycles in the check matrix. It can be ensured that $V$ has no short cycles by a condition (see [26]) on the group elements with non-zero coefficient used in forming the group ring element $v$.

    In that paper [26] simulations are made, and the examples given are shown to substantially outperform previously constructed codes of the same size and rate. Randomly selected LDPC codes with no short cycles are produced from the same unit. The codes produced are of particular use in applications and industry where low storage and low power are required or necessary for better functioning.

    Note that $U$, from which the generator matrix is derived, does not necessarily have low density, which is good from the point of view of the minimum distance of the code; however, as stated in MacKay [28], "Distance isn't everything".

    Thus, using group rings, systems $UV=I_{n}$ are constructed in which $V$ has low density and no short cycles anywhere. This gives enormous freedom in constructing LDPC codes with no short cycles. Indeed, eliminating any $r$ columns of $V$ leaves an $n\times(n-r)$ control matrix, and the generator matrix is formed by using the $r$ rows of $U$ corresponding to the eliminated columns of $V$; the result is an $[n,r]$ LDPC code. Thus, given $UV=I_{n}$ where $V$ has low density and no short cycles, many LDPC codes with no short cycles can be constructed.

    In previous sections, methods are given for constructing convolutional codes from the unit-derived formula $\begin{pmatrix}A\\B\end{pmatrix}(C,D)=I_{n}$, obtained from $UV=I_{n}$, using all the components $A,B,C,D$. Convolutional codes of higher memory are obtained by further breaking up the unit system into blocks. These techniques may also be applied to construct convolutional LDPC codes with no short cycles in the control matrix.

    Considering the basic formula, when $A,B$ have the same size (and so $n$ is even), $G(z)=A+Bz$ generates a non-catastrophic convolutional code. The control matrix is $D-Cz$ and the dual code is generated by $C^{T}-D^{T}z$. If $V$ is of low density and has no short cycles then $C^{T}-D^{T}z$ is of low density and has no short cycles. Thus the code derived is a convolutional LDPC code with no short cycles.
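
    The block identities behind this can be checked directly; the following is a minimal sketch over GF(2) (where $D-Cz=D+Cz$) using a small illustrative unit pair that is not of low density.

```python
# Check that D + Cz (i.e., D - Cz in characteristic 2) is a control matrix for
# G(z) = A + Bz.  The 4 x 4 unit pair below is an illustrative assumption and
# is not of low density; it just exercises the block identities A D = 0,
# A C = B D = I, B C = 0 behind the construction.
import numpy as np

U = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
V = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
assert np.array_equal(U @ V % 2, np.eye(4, dtype=int))   # UV = I_4 over GF(2)

A, B = U[:2, :], U[2:, :]          # row blocks of U
C, D = V[:, :2], V[:, 2:]          # column blocks of V

# Coefficients of G(z)(D + Cz) = AD + (AC + BD)z + BCz^2 are all zero mod 2:
assert not np.any(A @ D % 2)
assert not np.any((A @ C + B @ D) % 2)
assert not np.any(B @ C % 2)
print("D + Cz is a control matrix for G(z) = A + Bz")
```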

    It is difficult to describe explicitly the LDPC codes derived, since for applications large lengths are required. Note that the method is very general, and the length and rate achieved can be decided in advance. We will concentrate on extending two of the examples in [26] to construct LDPC convolutional codes with no short cycles.

    Only basic information on group rings is required. A good book on group rings is [35], and the basic information may also be found online by a simple search.

    Low density convolutional codes with no short cycles in the control matrix are constructed by applying the methods of the previous sections together with the methods described in [26] for constructing LDPC linear codes with no short cycles. The following algorithm describes the constructions in general:

    Algorithm 1. (1) In a group ring with group of size $n$, find a unit and its inverse, $uv=1$, where $v$ has small support and no short cycles. The size $n$ of the group should be large and the support of $v$ relatively small compared to $n$.

    (2) From $uv=1$ go over to the matrix embedding of the group ring in a ring of matrices of size $n\times n$, as in [18], to get a unit scheme $UV=I_{n}$ of matrices where $V$ is of low density and has no short cycles.

    (3) Choose $r$ columns of $V$ to eliminate; the remaining columns form an $n\times(n-r)$ matrix which will be a control matrix for an $[n,r]$ code. A generator matrix for this code is the $r\times n$ matrix formed by selecting the $r$ rows from $U$ corresponding in order to the $r$ columns eliminated from $V$.

    (4) The unit scheme from item (2) may be presented as $\begin{pmatrix}A\\B\end{pmatrix}(C,D)$ where $A$ has size $r\times n$, $B$ has size $(n-r)\times n$, $C$ has size $n\times r$ and $D$ has size $n\times(n-r)$. Here both $C$ and $D$ are of low density and have no short cycles. An LDPC code with no short cycles in the control matrix is given by $AD=0$ as in [26]. But notice that $BC=0$ gives an LDPC code in addition.

    (5) The unit scheme in item (4) is, as in previous sections, extended to $G(z)=A+Bz$ when $A,B$ have the same size (in which case the rate is $1/2$) or, when $A$ has size greater than that of $B$, to $G(z)=A+B_{1}z$ where $B_{1}$ is obtained by extending $B$ with zero rows to the size of $A$. Then $G(z)$ generates a memory $1$ convolutional code which is non-catastrophic and has a low density control matrix with no short cycles. The control matrix is $D-C_{1}z$ where $C_{1}$ is $C$ or a submatrix of $C$ as explained above.

    (6) Obtaining memory greater than $1$ from the unit matrix scheme $UV$ derived from the group ring unit $uv$ also follows in a similar manner, as described earlier. A toy illustration of steps (2)–(5) is sketched directly below, and examples are given afterwards.
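
    The following sketch runs steps (2)–(5) over GF(2) for the cyclic group $C_{8}$, using the illustrative unit pair $u=1+g+g^{2}$, $v=g+g^{2}+g^{4}+g^{5}+g^{7}$ with $uv=1$; the group, the unit pair and the even row split are assumptions made purely to show the mechanics, and $C_{8}$ is of course far too small for genuine low density.

```python
# Toy run of steps (2)-(5) of Algorithm 1 over GF(2), with the illustrative
# unit pair u = 1 + g + g^2, v = g + g^2 + g^4 + g^5 + g^7 in Z_2[C_8]
# (uv = 1).  C_8 is far too small for genuine low density; this only shows
# the mechanics.
import numpy as np

n = 8
u_support = [0, 1, 2]             # exponents of g occurring in u
v_support = [1, 2, 4, 5, 7]       # exponents of g occurring in v

def embed(support):
    """Step (2): the n x n matrix of the element sum_e g^e acting on Z_2[C_n]."""
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for e in support:
            M[i, (i + e) % n] = 1
    return M

U, V = embed(u_support), embed(v_support)
assert np.array_equal(U @ V % 2, np.eye(n, dtype=int))     # unit scheme UV = I_n

# Steps (3)-(4): equal row blocks A, B of U and column blocks C, D of V.
r = n // 2
A, B = U[:r, :], U[r:, :]
C, D = V[:, :r], V[:, r:]
assert not np.any(A @ D % 2)      # A D = 0: D is a check matrix for the code of A

# Step (5): G(z) = A + Bz with control matrix D + Cz (minus signs vanish mod 2).
# Coefficients of G(z)(D + Cz) = AD + (AC + BD)z + BCz^2 all vanish:
for coeff in (A @ D, A @ C + B @ D, B @ C):
    assert not np.any(coeff % 2)

# G(z) has the constant right inverse C (AC = I, BC = 0), so it is non-catastrophic.
assert np.array_equal(A @ C % 2, np.eye(r, dtype=int))
print("toy run of steps (2)-(5) succeeded")
```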

    Examples must be of large length in order to satisfy the low density criterion. In general the examples in [26] are taken from unit-derived codes within $Z_2(C_n\times C_4)$, where $Z_2=GF(2)$ is the field of two elements.

    The matrices derived are then submatrices of circulant-by-circulant matrices and are easy to program. They are not circulant, and thus the codes obtained are not cyclic codes. Having circulant-by-circulant rather than circulant structure allows a natural spreading of the non-zero coefficients and gives better distance and better performance.

    Assume that $C_n$ is generated by $g$ and that $C_4$ is generated by $h$. Every element of the group ring is then of the form:

    $$\sum_{i=0}^{n-1}\left(\alpha_i g^i + h\beta_i g^i + h^2\gamma_i g^i + h^3\delta_i g^i\right),$$

    with $\alpha_i,\beta_i,\gamma_i,\delta_i\in Z_2$.
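
    The circulant-by-circulant structure is easy to program; the sketch below is an illustrative implementation (the indexing of group elements by $nj+i$ and the coefficient layout as a $4\times n$ array are choices made here, not taken from [26]) which builds the $4n\times 4n$ matrix of an element and checks that products of elements map to products of matrices over GF(2).

```python
# Matrix of an element of Z_2(C_n x C_4): rows/columns are indexed by the
# group elements h^j g^i (index n*j + i), and the entry at ((j1,i1),(j2,i2))
# is the coefficient of h^(j2-j1) g^(i2-i1).  The result is a 4 x 4 block
# matrix of n x n circulants ("circulant-by-circulant").
import numpy as np

def gr_matrix(coeffs, n):
    """coeffs[j][i] is the coefficient of h^j g^i; returns the 4n x 4n matrix."""
    M = np.zeros((4 * n, 4 * n), dtype=int)
    for j1 in range(4):
        for i1 in range(n):
            for j2 in range(4):
                for i2 in range(n):
                    M[n * j1 + i1, n * j2 + i2] = coeffs[(j2 - j1) % 4][(i2 - i1) % n]
    return M

def gr_multiply(a, b, n):
    """Product of two elements of Z_2(C_n x C_4) given as 4 x n coefficient arrays."""
    c = np.zeros((4, n), dtype=int)
    for j1 in range(4):
        for i1 in range(n):
            if a[j1][i1]:
                for j2 in range(4):
                    for i2 in range(n):
                        if b[j2][i2]:
                            c[(j1 + j2) % 4][(i1 + i2) % n] ^= 1
    return c

# Sanity check on two random elements: matrix of (a*b) equals (matrix of a)(matrix of b).
rng = np.random.default_rng(0)
n = 6                                   # small n purely for the check
a = rng.integers(0, 2, size=(4, n))
b = rng.integers(0, 2, size=(4, n))
assert np.array_equal(gr_matrix(gr_multiply(a, b, n), n),
                      gr_matrix(a, n) @ gr_matrix(b, n) % 2)
```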

    Example 5.1. Examples of $[96,48]$ LDPC codes are given in Sections 3.2 and 3.4 of [26]. These examples are derived from the group ring $Z_2(C_{24}\times C_4)$.

    The check element $v=g_{24}^{9}+g_{24}^{15}+g_{24}^{19}+hg_{24}^{3}+hg_{24}^{20}+h^{2}g_{24}^{22}+h^{3}g_{24}^{22}+h^{3}g_{24}^{12}$ is used to define an LDPC linear code.

    Then $v$ has no short cycles in its matrix $V$, and there are just $8$ or fewer non-zero elements in each row and column. Any choice of columns of $V$ will give an LDPC linear block code. A pattern of deleting half the columns from the matrix $V$ of $v$ is chosen to produce a rate $1/2$ code; this is simulated and compared to other LDPC codes, outperforming these even when random columns are chosen.
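
    These properties can be checked computationally; the sketch below builds $V$ for the element $v$ with the exponents as transcribed above (so the outcome depends on that transcription) and reports the row weights and the largest overlap between distinct rows, an overlap of at most $1$ meaning no $4$-cycles in the Tanner graph.

```python
# Build the 96 x 96 matrix V of v in Z_2(C_24 x C_4) and report (i) the number
# of non-zeros per row and (ii) the largest number of common non-zero columns
# shared by two distinct rows; a value of at most 1 means no 4-cycles.
# The exponents of v are taken as transcribed above.
import numpy as np

n = 24
# v = g^9 + g^15 + g^19 + h g^3 + h g^20 + h^2 g^22 + h^3 g^22 + h^3 g^12
support = [(0, 9), (0, 15), (0, 19), (1, 3), (1, 20), (2, 22), (3, 22), (3, 12)]

V = np.zeros((4 * n, 4 * n), dtype=int)
for j1 in range(4):
    for i1 in range(n):
        for (dj, di) in support:                 # column index of h^(j1+dj) g^(i1+di)
            V[n * j1 + i1, n * ((j1 + dj) % 4) + (i1 + di) % n] = 1

print("non-zeros per row:", set(map(int, V.sum(axis=1))))   # expected: {8}

overlaps = V @ V.T                   # overlaps[i, j] = common non-zeros of rows i and j
np.fill_diagonal(overlaps, 0)
print("largest overlap between distinct rows:", int(overlaps.max()))
# At most 1 here means any column-submatrix of V, used as a control matrix,
# also has no 4-cycles.
```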

    This selection is then presented as $\begin{pmatrix}A\\B\end{pmatrix}(C,D)$ where $A$ has size $48\times 96$, $B$ has size $48\times 96$, $C$ has size $96\times 48$ and $D$ has size $96\times 48$. Then define $G(z)=A+Bz$ to obtain a convolutional $[96,48,48;1]$ low density parity check code. The control matrix is $D-Cz$, which is $D+Cz$ as the characteristic is $2$.

    In this case the matrix $U$ has the form $\begin{pmatrix}A\\B\\C\\D\end{pmatrix}$ and $V$ has the form $(E,F,G,H)$, where $A,B,C,D$ have size $24\times 96$, $E,F,G,H$ have size $96\times 24$ and each of $E,F,G,H$ has low density. Then, as in Proposition 4.1, $G(z)=A+Bz+Cz^{2}+Dz^{3}$ defines a $(96,24,24\cdot 3;3)$ convolutional code which has a low density check matrix.
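
    To make the low density of the check matrix concrete, one valid polynomial check matrix built only from the columns of $V$ is $H(z)=(F,G,H)+(E,0,0)z+(0,E,0)z^{2}+(0,0,E)z^{3}$ over GF(2); this particular expression is derived here for illustration from the block relations of the scheme and need not coincide with the form used in Proposition 4.1. The toy $Z_2[C_8]$ unit pair from the algorithm sketch verifies it.

```python
# Memory-3 construction: split UV = I_8 (toy Z_2[C_8] unit pair, illustration
# only) into four 2 x 8 row blocks A, B, C, D of U and four 8 x 2 column
# blocks E, F, G, H of V, set G(z) = A + Bz + Cz^2 + Dz^3, and verify that
# H(z) = (F,G,H) + (E,0,0)z + (0,E,0)z^2 + (0,0,E)z^3 (built only from columns
# of V, hence low density when V is) satisfies G(z)H(z) = 0 over GF(2).
import numpy as np

n = 8
def embed(support):
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for e in support:
            M[i, (i + e) % n] = 1
    return M

U, V = embed([0, 1, 2]), embed([1, 2, 4, 5, 7])          # UV = I_8 over GF(2)
assert np.array_equal(U @ V % 2, np.eye(n, dtype=int))

A, B, C, D = (U[2 * i:2 * i + 2, :] for i in range(4))   # row blocks of U
E, F, G, H = (V[:, 2 * i:2 * i + 2] for i in range(4))   # column blocks of V
Z = np.zeros_like(E)

Gz = [A, B, C, D]                                        # coefficients of G(z)
Hz = [np.hstack([F, G, H]), np.hstack([E, Z, Z]),
      np.hstack([Z, E, Z]), np.hstack([Z, Z, E])]        # coefficients of H(z)

# All coefficients of the product G(z)H(z) must vanish mod 2.
for k in range(len(Gz) + len(Hz) - 1):
    coeff = sum(Gz[i] @ Hz[k - i] for i in range(len(Gz)) if 0 <= k - i < len(Hz))
    assert not np.any(coeff % 2)

# G(z) has the constant right inverse E (A E = I, B E = C E = D E = 0), so the
# memory-3 code is non-catastrophic.
assert np.array_equal(A @ E % 2, np.eye(2, dtype=int))
assert not np.any(np.vstack([B, C, D]) @ E % 2)
print("memory-3 toy check passed")
```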

    Using units to design and analyse both linear block and convolutional codes is a unique method in the important area of Coding Theory. The method is used to define codes to required length, rate and type by using appropriate units which give the required codes. Families of LCD (linear complementary dual) codes, families of self-dual codes, families of DC (dual-containing) codes, families of quantum codes and LDPC codes are designed by the method using appropriate units. The general method allows further new families of codes to be designed. New families of linear block codes with particular properties can be designed from unit schemes; any linear block code over a field can be obtained in this way. New families of convolutional codes of various memories can be mined from unit schemes; the question of which convolutional codes are unit-derived is left open.

    The author declares that he/she has not used Artificial Intelligence (AI) tools in the creation of this article.

    The author declares no conflicts of interest in this paper.



    [1] P. Almeida, D. Napp, R. Pinto, A new class of superregular matrices and MDP convolutional codes, Linear Algebra Appl., 439 (2013), 2145–2157. https://doi.org/10.1016/j.laa.2013.06.013
    [2] P. Almeida, D. Napp, R. Pinto, Superregular matrices and applications to convolutional codes, Linear Algebra Appl., 499 (2016), 1–25. https://doi.org/10.1016/j.laa.2016.02.034
    [3] R. E. Blahut, Algebraic codes for data transmission, Cambridge: Cambridge University Press, 2003.
    [4] I. E. Bocharova, F. Hug, R. Johannesson, B. D. Kudryashov, Dual convolutional codes and the MacWilliams identities, Probl. Inf. Transm., 48 (2012), 21–30. https://doi.org/10.1134/S0032946012010036
    [5] A. R. Calderbank, E. M. Rains, P. W. Shor, N. J. A. Sloane, Quantum error correction via codes over GF(4), In: Proceedings of IEEE international symposium on information theory, Germany: IEEE, 1997. https://doi.org/10.1109/ISIT.1997.613213
    [6] A. R. Calderbank, P. W. Shor, Good quantum error-correcting codes exist, Phys. Rev. A, 54 (1996), 1098–1105. https://doi.org/10.1103/PhysRevA.54.1098
    [7] C. Carlet, S. Mesnager, C. Tang, Y. Qi, Euclidean and Hermitian LCD MDS codes, Des. Codes Cryptogr., 86 (2018), 2605–2618. https://doi.org/10.1007/s10623-018-0463-8
    [8] C. Carlet, S. Mesnager, C. Tang, Y. Qi, R. Pellikaan, Linear codes over Fq are equivalent to LCD codes for q>3, IEEE Trans. Inform. Theory, 64 (2018), 3010–3017. https://doi.org/10.1109/TIT.2018.2789347
    [9] C. Carlet, S. Mesnager, C. Tang, Y. Qi, New characterization and parametrization of LCD codes, IEEE Trans. Inform. Theory, 65 (2019), 39–49. https://doi.org/10.1109/TIT.2018.2829873
    [10] C. Carlet, Boolean functions for cryptography and error correcting codes, In: Chapter of the monography Boolean models and methods in mathematics, computer science, and engineering, Cambridge: Cambridge University Press, 2010, 257–397.
    [11] G. G. La Guardia, On negacyclic MDS-convolutional codes, Linear Algebra Appl., 448 (2014), 85–96. https://doi.org/10.1016/j.laa.2014.01.033
    [12] H. Gluesing-Luerssen, G. Schneider, A MacWilliams identity for convolutional codes: The general case, IEEE Trans. Inf. Theory, 55 (2009), 2920–2930. https://doi.org/10.1109/TIT.2009.2021302
    [13] T. Hurley, D. Hurley, B. Hurley, Quantum error-correcting codes: The unit-derived strategy, arXiv: 1805.09053, 2018. https://doi.org/10.48550/arXiv.1805.09053
    [14] T. Hurley, Linear block and convolutional MDS codes to required rate, distance and type, In: Lecture notes in networks and systems, Cham: Springer, 507 (2022), 129–157. https://doi.org/10.1007/978-3-031-10464-0_10
    [15] T. Hurley, Convolutional codes from unit schemes, arXiv: 1412.1695, 2018. https://doi.org/10.48550/arXiv.1412.1695
    [16] T. Hurley, On codes induced from Hadamard matrices, arXiv: 2410.24027, 2024. https://doi.org/10.48550/arXiv.2410.24027
    [17] T. Hurley, D. Hurley, B. Hurley, Maximum distance separable codes to order, arXiv: 1902.06624, 2019. https://doi.org/10.48550/arXiv.1902.06624
    [18] T. Hurley, Group rings and rings of matrices, Int. J. Pure Appl. Math., 31 (2006), 319–335.
    [19] T. Hurley, Solving underdetermined systems with error-correcting codes, Int. J. Inf. Coding Theory, 4 (2017), 201–221. http://dx.doi.org/10.1504/IJICOT.2017.10005825
    [20] T. Hurley, Convolutional codes from units in matrix and group rings, Int. J. Pure Appl. Math., 50 (2009), 431–463.
    [21] P. Hurley, T. Hurley, Module codes in group rings, In: 2007 IEEE International symposium on information theory, France: IEEE, 2007, 1981–1985. https://doi.org/10.1109/ISIT.2007.4557511
    [22] P. Hurley, T. Hurley, Codes from zero-divisors and units in group rings, Int. J. Inform. Coding Theory, 1 (2009), 57–87. https://doi.org/10.1504/IJICOT.2009.024047
    [23] P. Hurley, T. Hurley, Block codes from matrix and group rings, In: Selected topics in information and coding theory, World Scientific, 2010, 159–194. https://doi.org/10.1142/9789812837172_0004
    [24] P. Hurley, T. Hurley, LDPC and convolutional codes from matrix and group rings, In: Selected topics in information and coding theory, World Scientific, 2010, 195–237. https://doi.org/10.1142/9789812837172_0006
    [25] T. Hurley, D. Hurley, Coding theory: The unit-derived methodology, Int. J. Inf. Coding Theory, 5 (2018), 55–80. http://dx.doi.org/10.1504/IJICOT.2018.10013082
    [26] T. Hurley, P. McEvoy, J. Wenus, Algebraic constructions of LDPC codes with no short cycles, Int. J. Inf. Coding Theory, 1 (2010), 285–297. https://doi.org/10.1504/IJICoT.2010.032544
    [27] R. Johannesson, K. Sh. Zigangirov, Fundamentals of convolutional coding, Wiley-IEEE Press, 2015.
    [28] D. J. C. MacKay, Information theory, inference, and learning algorithms, Cambridge: Cambridge University Press, 2003.
    [29] F. J. MacWilliams, N. J. A. Sloane, The theory of error-correcting codes, Elsevier, 1977.
    [30] J. L. Massey, Linear codes with complementary duals, Discrete Math., 106-107 (1992), 337–342. https://doi.org/10.1016/0012-365X(92)90563-U
    [31] J. L. Massey, Reversible codes, Inf. Control, 7 (1964), 369–380. https://doi.org/10.1016/S0019-9958(64)90438-3
    [32] I. McLoughlin, T. Hurley, A group ring construction of the extended binary Golay code, IEEE Trans. Inform. Theory, 54 (2008), 4381–4383. https://doi.org/10.1109/TIT.2008.928260
    [33] R. McEliece, The theory of information and coding, 2 Eds., Cambridge: Cambridge University Press, 2009. https://doi.org/10.1017/CBO9780511606267
    [34] S. Mesnager, C. Tang, Y. Qi, Complementary dual algebraic geometry codes, IEEE Trans. Inf. Theory, 64 (2018), 2390–2397. https://doi.org/10.1109/TIT.2017.2766075
    [35] C. P. Milies, S. K. Sehgal, An introduction to group rings, In: Algebra and applications, Dordrecht: Springer, 1 (2002).
    [36] R. Pellikaan, On decoding by error location and dependent sets of error positions, Discrete Math., 106-107 (1992), 369–381. https://doi.org/10.1016/0012-365X(92)90567-Y
    [37] J. M. M. Porras, J. A. D. Pérez, J. I. I. Curto, J. S. Sotelo, Convolutional Goppa codes, IEEE Trans. Inform. Theory, 52 (2006), 340–344. https://doi.org/10.1109/TIT.2005.860447
    [38] J. Rosenthal, R. Smarandache, Maximum distance separable convolutional codes, Appl. Algebra Engrg. Comm. Comput., 10 (1999), 15–32. https://doi.org/10.1007/s002000050120
    [39] J. Rosenthal, Connections between linear systems and convolutional codes, In: The IMA volumes in mathematics and its applications, New York: Springer, 123 (1999). https://doi.org/10.1007/978-1-4613-0165-3_2
    [40] A. M. Steane, Simple quantum error correcting codes, Phys. Rev. A, 54 (1996), 4741–4751.
    [41] R. Smarandache, H. Gluesing-Luerssen, J. Rosenthal, Constructions for MDS-convolutional codes, IEEE Trans. Inform. Theory, 47 (2001), 2045–2049. https://doi.org/10.1109/18.930938
    [42] The GAP group, GAP – groups, algorithms, and programming, Version 4.12.2, 2022. Available from: https://www.gap-system.org
    © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)