Research article

Classification of Spike Wave Propagations in a Cultured Neuronal Network: Investigating a Brain Communication Mechanism

  • Received: 22 September 2016 Accepted: 01 December 2016 Published: 26 December 2016
  • In brain information science, it is still unclear how multiple pieces of information can be stored and transmitted in ambiguously behaving neuronal networks. In the present study, we analyze the spatiotemporal propagation of spike trains in neuronal networks. Recently, spike propagation was observed to function as a cluster of excitation waves (spike wave propagation) in cultured neuronal networks. We hypothesize that spike wave propagations are precisely the events of communication in the brain. In reality, however, various spike wave propagations are generated in a neuronal network, so there should be some mechanism for classifying them if multiple communications in the brain are to be distinguished. To test this assumption, we attempt to classify the various spike wave propagations generated from different stimulated neurons, using our original spatiotemporal pattern-matching method for the spike temporal patterns at each neuron during spike wave propagation in a cultured neuronal network. The experimental results make clear that spike wave propagations carry temporal patterns that vary with the stimulated neuron; consequently, the stimulated neurons could be classified at recording sites several neurons away from them. These are the classifiable neurons. Moreover, the distribution of classifiable neurons in a network also differs when the stimulated neurons generating the spike wave propagations differ. These results suggest that distinct communications occur via multiple communication links and that classifiable neurons serve this function.

    Citation: Yoshi Nishitani, Chie Hosokawa, Yuko Mizuno-Matsumoto, Tomomitsu Miyoshi, Shinichi Tamura. Classification of Spike Wave Propagations in a Cultured Neuronal Network: Investigating a Brain Communication Mechanism[J]. AIMS Neuroscience, 2017, 4(1): 1-13. doi: 10.3934/Neuroscience.2017.1.1



    The applications of generalized inverses of matrices or operators are of interest in numerical mathematics. Indeed, when a matrix is singular or rectangular, many computational and theoretical problems require different forms of generalized inverses. In the finite-dimensional case, an important application of the Moore-Penrose inverse is to minimize a Hermitian positive definite quadratic form $x^{t}x$, where $t$ denotes the transpose, under linear constraints. In particular, the weighted Moore-Penrose inverse plays a prominent role in indefinite linear least-squares problems [1,2].

    Let $\mathbb{C}^{m\times n}$ denote the set of all $m\times n$ matrices with complex entries. Further, for an arbitrary matrix $A\in\mathbb{C}^{m\times n}$ and two Hermitian positive definite matrices $M\in\mathbb{C}^{m\times m}$ and $N\in\mathbb{C}^{n\times n}$, there exists a unique matrix $S\in\mathbb{C}^{n\times m}$ satisfying the following properties:

    $$1)\; ASA=A,\qquad 2)\; SAS=S,\qquad 3)\; (MAS)^{*}=MAS,\qquad 4)\; (NSA)^{*}=NSA. \tag{1.1}$$

    Then $S$ is said to be the weighted Moore-Penrose inverse (WMPI) of $A$ with respect to the matrices $M$ and $N$; it is generally denoted by $A^{\dagger}_{MN}$. In particular, when $M$ and $N$ are the identity matrices of order $m$ and $n$, respectively, $S$ is the Moore-Penrose inverse, denoted by $A^{\dagger}$. In this case, the above relations reduce to the well-known Penrose equations [3,4]:

    $$1)\; ASA=A,\qquad 2)\; SAS=S,\qquad 3)\; (AS)^{*}=AS,\qquad 4)\; (SA)^{*}=SA. \tag{1.2}$$

    The elementary technique for computing the WMPI of a matrix is based entirely on the weighted singular value decomposition [5], in the following form. Let $A\in\mathbb{C}_r^{m\times n}$, where $\mathbb{C}_r^{m\times n}$ denotes the set of complex $m\times n$ matrices of rank $r$. Then there exist matrices $P\in\mathbb{C}^{m\times m}$ and $Q\in\mathbb{C}^{n\times n}$ satisfying the conditions $P^{*}MP=I_m$ and $Q^{*}N^{-1}Q=I_n$ such that

    $$A=P\begin{pmatrix}D&0\\0&0\end{pmatrix}Q^{*}, \tag{1.3}$$

    where $D=\operatorname{diag}(\sigma_1,\sigma_2,\ldots,\sigma_r)$, the $\sigma_i^{2}$ are the nonzero eigenvalues of the matrix $N^{-1}A^{*}MA$, and $\sigma_1\geq\sigma_2\geq\cdots\geq\sigma_r>0$. Then the WMPI $A^{\dagger}_{MN}$ of the matrix $A$ can be expressed as:

    $$A^{\dagger}_{MN}=N^{-1}Q\begin{pmatrix}D^{-1}&0\\0&0\end{pmatrix}P^{*}M. \tag{1.4}$$
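    As a concrete illustration of (1.3) and (1.4), the following Wolfram Language sketch constructs a weighted SVD from an ordinary one via the standard reduction $B=M^{1/2}AN^{-1/2}$: if $B=U\Sigma V^{*}$, then $P=M^{-1/2}U$ and $Q=N^{1/2}V$ satisfy $P^{*}MP=I_m$, $Q^{*}N^{-1}Q=I_n$, and $A=P\Sigma Q^{*}$. This is a minimal sketch under the stated assumptions ($M$, $N$ Hermitian positive definite); the name weightedSVD and the small test data are ours, not from the paper (MM and NN are used in place of M and N because N is a protected symbol in Mathematica).

        (* weighted SVD via B = MM^(1/2).A.NN^(-1/2); returns {P, W, Q} with A == P.W.Q* *)
        weightedSVD[A_?MatrixQ, MM_, NN_] :=
          Module[{Mh = MatrixPower[MM, 1/2], Nh = MatrixPower[NN, 1/2], U, W, V},
            {U, W, V} = SingularValueDecomposition[Mh.A.Inverse[Nh]];
            {Inverse[Mh].U, W, Nh.V}];
        A = {{1., 0.}, {1., 1.}, {0., 2.}};                               (* any 3 x 2 example *)
        MM = #.Transpose[#] &[RandomReal[1, {3, 3}]] + IdentityMatrix[3]; (* HPD weight, 3 x 3 *)
        NN = #.Transpose[#] &[RandomReal[1, {2, 2}]] + IdentityMatrix[2]; (* HPD weight, 2 x 2 *)
        {P, W, Q} = weightedSVD[A, MM, NN];
        Awmpi = Inverse[NN].Q.PseudoInverse[W].ConjugateTranspose[P].MM;  (* Eq. (1.4) *)
        Norm[A - P.W.ConjugateTranspose[Q]]                               (* ~ 10^-15 *)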

    Note that in this manuscript the weighted conjugate transpose of the matrix $A$ is denoted by $A^{\#}$ and equals $N^{-1}A^{*}M$, whereas $A^{*}$ denotes the conjugate transpose of the matrix $A\in\mathbb{C}^{m\times n}$. Consequently, $(A^{\dagger}_{MN})^{\#}=M^{-1}(A^{\dagger}_{MN})^{*}N$ and $(AA^{\dagger}_{MN})^{\#}=P\begin{pmatrix}I_r&0\\0&0\end{pmatrix}P^{-1}$ [6, pp. 41]. Moreover, the following properties hold:

    $$A^{\#}AA^{\dagger}_{MN}=A^{\#},\qquad A^{\#}(AA^{\dagger}_{MN})^{\#}=A^{\#},\qquad (A^{\dagger}_{MN}A)^{\#}A^{\#}=A^{\#},\qquad A^{\dagger}_{MN}AA^{\#}=A^{\#}.$$
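    These identities are straightforward to check numerically. Continuing the sketch above (wct is our illustrative name for the weighted conjugate transpose $A^{\#}=N^{-1}A^{*}M$), the residuals of the weighted Penrose equations (1.1) and of the first property listed here should all be at round-off level:

        wct[B_, MM_, NN_] := Inverse[NN].ConjugateTranspose[B].MM;  (* A# = N^-1 A* M *)
        Ash = wct[A, MM, NN];
        {Norm[A.Awmpi.A - A], Norm[Awmpi.A.Awmpi - Awmpi],
         Norm[MM.A.Awmpi - ConjugateTranspose[MM.A.Awmpi]],
         Norm[NN.Awmpi.A - ConjugateTranspose[NN.Awmpi.A]],
         Norm[Ash.A.Awmpi - Ash]}   (* all ~ 10^-15 *)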

    A diverse range of other methodologies has been presented in the literature to determine the WMPI of a matrix. For calculating the generalized inverse numerically, Greville's partitioning method was introduced in [7], and a new proof of Greville's method for the WMPI was given by Wang [8]. However, such methods involve many operations, and therefore more rounding errors accumulate; indeed, numerical techniques for finding the Moore-Penrose inverse often lack numerical stability [9]. Besides this, Wang [10] obtained a comprehensive proof for the WMPI of a partitioned matrix in recursive form, and a WMPI method for multi-variable polynomial matrices was introduced in [9]. Moreover, new determinantal representations for weighted generalized inverses were presented in [11], a representation of the WMPI of a quaternion matrix was discussed by Kyrchei [12,13], and its explicit representation for two-sided restricted quaternionic matrix equations was investigated in [14].

    The hyperpower iterative method was given by Altman [15] for inverting a linear bounded operator in Hilbert space, and its applicability for generating the Moore-Penrose inverse of a matrix was shown by Ben-Israel [16]. Several iterative methods fall into this category of hyperpower matrix iterations, and the general $p$th-order iterative method is written as follows:

    $$S_{k+1}=S_k\left(I+R_k+R_k^{2}+\cdots+R_k^{p-1}\right), \tag{1.5}$$

    where $R_k=I-AS_k$, $I$ is the identity matrix of order $m$, and $S_0$ is the initial approximation to $A^{-1}$. This scheme is attractive as it is based entirely on matrix-matrix products, which can be implemented fruitfully on parallel machines. In this approach, each $p$th-order iterative method brought forward in terms of the hyperpower series requires $p$ matrix-matrix multiplications per step. If $p=2$, iterative method (1.5) yields the well-known Newton-Schulz method (derived in [17,18]):

    $$S_{k+1}=S_k(2I-AS_k),\qquad k=0,1,\ldots. \tag{1.6}$$

    Although this method converges quadratically, has poly-logarithmic complexity, and is numerically stable (as discussed in [19]), scheme (1.6) often shows slow convergence during the initial iterations, which increases the computational workload of calculating the matrix inverse.
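    For reference, the generic hyperpower step (1.5) takes only a few lines in the Wolfram Language; hyperStep is our illustrative name, and the last lines run the $p=2$ case (1.6) from the starting value $S_0=A^{*}/(\|A\|_1\|A\|_{\infty})$ that is used later in Example 2:

        (* one hyperpower step of order p: S <- S.(I + R + ... + R^(p-1)), R = I - A.S *)
        hyperStep[A_, S_, p_] :=
          Module[{R = IdentityMatrix[Length[A]] - A.S},
            S.Total[NestList[#.R &, IdentityMatrix[Length[A]], p - 1]]];
        A = {{4., 1.}, {2., 3.}};
        S = ConjugateTranspose[A]/(Norm[A, 1] Norm[A, Infinity]);
        Do[S = hyperStep[A, S, 2], {30}];   (* p = 2 reproduces Newton-Schulz (1.6) *)
        Norm[S - Inverse[A]]                (* ~ 10^-16 *)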

    In 2017, H. Esmaeili and A. Pirnia [20] constructed a quadratically convergent iterative scheme as follows:

    $$S_{k+1}=S_k\big(5.5I-AS_k(8I-3.5AS_k)\big),\qquad k=0,1,\ldots. \tag{1.7}$$

    If $p=3$, the hyperpower iterative method (1.5) turns into a cubically convergent method, which can also be derived from the Chebyshev scheme [21]:

    $$S_{k+1}=S_k\big(3I-AS_k(3I-AS_k)\big), \tag{1.8}$$

    as investigated by Li et al. [21] in 2001. Along with this, they developed two other third-order iterative methods, derived from the mid-point rule method [22] and Homeier's method [22], given as:

    $$S_{k+1}=\left(I+\frac{1}{4}(I-S_kA)(3I-S_kA)^{2}\right)S_k, \tag{1.9}$$

    and

    $$S_{k+1}=S_k\left(I+\frac{1}{2}(I-AS_k)\left(I+(2I-AS_k)^{2}\right)\right), \tag{1.10}$$

    respectively. In 2017, a fourth-order scheme was presented by Esmaeili et al. [23], demonstrated below:

    $$S_{k+1}=S_k\left(9I-26(AS_k)+34(AS_k)^{2}-21(AS_k)^{3}+5(AS_k)^{4}\right). \tag{1.11}$$

    Toutounian and Soleymani [24] have proposed the following fourth-order method:

    $$S_{k+1}=\frac{1}{2}S_k\Big(9I-AS_k\big(16I-AS_k\big(14I-AS_k(6I-AS_k)\big)\big)\Big). \tag{1.12}$$

    In general, such schemes can be extracted by using equation (1.5). Thus, for $p=4$ one obtains the fourth-order iterative scheme

    $$S_{k+1}=S_k\left(4I-6AS_k+4(AS_k)^{2}-(AS_k)^{3}\right), \tag{1.13}$$

    which uses four matrix-matrix multiplications. In 2013, Soleymani [25] demonstrated a fifth-order iterative method that uses six matrix multiplications at each step; the scheme is presented below:

    $$S_{k+1}=\frac{1}{2}S_k\Big(11I-AS_k\Big(25I-AS_k\big(30I-AS_k\big(20I-AS_k(7I-AS_k)\big)\big)\Big)\Big). \tag{1.14}$$

    In this paper, we investigate a fifth-order convergent iterative method for computing the weighted Moore-Penrose inverse. We focus on one of the major factors of computational cost, paying close attention to reducing computation time. In addition, a theoretical study is carried out to justify the method's ability to find the weighted Moore-Penrose inverse of any matrix. The aim of the presented work is also supported by its numerical performance.

    Our aim is to derive a fifth-order iterative method, with the help of Eq. (1.5), for finding the weighted Moore-Penrose inverse of a matrix $A$ that uses fewer matrix multiplications than required in (1.5). The hyperpower series for $p=5$ can be written as:

    $$S_{k+1}=S_k\left(5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}\right)=S_k\,\Phi(AS_k), \tag{2.1}$$

    where $\Phi(AS_k)=5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}$. The count of matrix multiplications in the iterative method (2.1) is five. The computational time used by the iterative method (2.1) can be reduced by lowering the count of matrix-matrix multiplications at each step. To this end, we reformulate scheme (2.1) as follows:

    $$\begin{cases}X_k=AS_k,\\[2pt] Y_k=X_k^{2},\\[2pt] V_k=5I-5X_k,\\[2pt] S_{k+1}=S_k\big(V_k-5X_k+Y_k(5I+V_k+Y_k)\big)=S_k\,\Phi(AS_k),\end{cases}\qquad k=0,1,2,\ldots. \tag{2.2}$$

    This is a new iterative method (2.2) for computing the generalized inverse of a matrix. It can easily be seen that it uses only four matrix-matrix multiplications at every step.
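    The reformulation is straightforward to implement. Below is a minimal sketch of one step (pm5Step is our illustrative name); the final line checks symbolically that the regrouped polynomial of (2.2) reproduces $\Phi$ from (2.1):

        (* one PM5 step, Eq. (2.2): exactly four matrix-matrix products *)
        pm5Step[A_, S_] :=
          Module[{X, Y, V, Id = IdentityMatrix[Length[A]]},
            X = A.S;                          (* product 1 *)
            Y = X.X;                          (* product 2 *)
            V = 5 Id - 5 X;
            S.(V - 5 X + Y.(5 Id + V + Y))    (* products 3 and 4 *)
          ];
        Expand[(5 - 5 x) - 5 x + x^2 (5 + (5 - 5 x) + x^2)]
        (* 5 - 10 x + 10 x^2 - 5 x^3 + x^4, the scalar form of Phi in (2.1) *)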

    In the next section, we prove theoretically that the order of convergence of the above scheme is five and that it is applicable for generating the weighted Moore-Penrose inverse.

    Lemma 3.1. For the approximate sequence $\{S_k\}_{k=0}^{\infty}$ generated by the iterative method (2.2) with the initial matrix

    $$S_0=\delta A^{\#}, \tag{3.1}$$

    the following Penrose equations hold:

    $$\text{(a)}\ S_kAA^{\dagger}_{MN}=S_k,\qquad \text{(b)}\ A^{\dagger}_{MN}AS_k=S_k,\qquad \text{(c)}\ (MAS_k)^{*}=MAS_k,\qquad \text{(d)}\ (NS_kA)^{*}=NS_kA.$$

    Proof. This lemma can be proved via mathematical induction on $k$. For $k=0$, Eq. (a) is true. That is,

    $$\begin{aligned}S_0AA^{\dagger}_{MN}&=\delta A^{\#}AA^{\dagger}_{MN}=\delta N^{-1}A^{*}MAA^{\dagger}_{MN}\\ &=\delta N^{-1}Q\begin{pmatrix}D&0\\0&0\end{pmatrix}P^{*}M\,P\begin{pmatrix}D&0\\0&0\end{pmatrix}Q^{*}\,N^{-1}Q\begin{pmatrix}D^{-1}&0\\0&0\end{pmatrix}P^{*}M\\ &=\delta N^{-1}Q\begin{pmatrix}D&0\\0&0\end{pmatrix}I_m\begin{pmatrix}D&0\\0&0\end{pmatrix}I_n\begin{pmatrix}D^{-1}&0\\0&0\end{pmatrix}P^{*}M\\ &=\delta N^{-1}Q\begin{pmatrix}D&0\\0&0\end{pmatrix}P^{*}M=\delta A^{\#}=S_0.\end{aligned}$$

    Further, we assume that the result holds for k, i.e.,

    $$S_kAA^{\dagger}_{MN}=S_k\quad\text{or, equivalently,}\quad AS_kAA^{\dagger}_{MN}=AS_k. \tag{3.2}$$

    Now, we prove that Eq. (a) continues to hold for $k+1$, i.e., $S_{k+1}AA^{\dagger}_{MN}=S_{k+1}$. Considering its left-hand side and using the iterative scheme (2.2), we get

    $$\begin{aligned}S_{k+1}AA^{\dagger}_{MN}&=S_k\left(5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}\right)AA^{\dagger}_{MN}\\ &=5S_kAA^{\dagger}_{MN}-10S_kAS_kAA^{\dagger}_{MN}+10S_k(AS_k)AS_kAA^{\dagger}_{MN}\\ &\quad-5S_k(AS_k)^{2}AS_kAA^{\dagger}_{MN}+S_k(AS_k)^{3}AS_kAA^{\dagger}_{MN}.\end{aligned}$$

    Substituting Eq. (3.2) in the above equation, one can obtain

    $$\begin{aligned}S_{k+1}AA^{\dagger}_{MN}&=5S_k-10S_kAS_k+10S_k(AS_k)(AS_k)-5S_k(AS_k)^{2}AS_k+S_k(AS_k)^{3}AS_k\\ &=S_k\left(5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}\right)=S_{k+1}.\end{aligned}$$

    Hence, by the principle of mathematical induction, Eq. (a) holds for all $k\in W$, where $W=\{0,1,2,3,\ldots\}$. Next, Eq. (c) of this lemma can easily be verified for $k=0$. Let the result be true for $k$, i.e., $(MAS_k)^{*}=MAS_k$. We now show that the result holds for $k+1$. Using the iterative scheme (2.2),

    $$(MAS_{k+1})^{*}=\left(MAS_k\left(5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}\right)\right)^{*}=5(MAS_k)^{*}-10\left(M(AS_k)^{2}\right)^{*}+10\left(M(AS_k)^{3}\right)^{*}-5\left(M(AS_k)^{4}\right)^{*}+\left(M(AS_k)^{5}\right)^{*}. \tag{3.3}$$

    Using the fact that $(MAS_k)^{*}=MAS_k$, which implies $(AS_k)^{*}M=MAS_k$, we obtain for $q>0$:

    $$\begin{aligned}\left(M(AS_k)^{q}\right)^{*}&=\left((MAS_k)(AS_k)^{q-1}\right)^{*}=\left((AS_k)^{q-1}\right)^{*}(MAS_k)^{*}\\ &=\left((AS_k)^{q-2}\right)^{*}(AS_k)^{*}MAS_k=\left((AS_k)^{q-2}\right)^{*}M(AS_k)^{2}\\ &\;\;\vdots\\ &=M(AS_k)^{q}.\end{aligned}$$

    Thus, Eq. (3.3) becomes:

    $$(MAS_{k+1})^{*}=5MAS_k-10M(AS_k)^{2}+10M(AS_k)^{3}-5M(AS_k)^{4}+M(AS_k)^{5}=MAS_{k+1}. \tag{3.4}$$

    Thus, the third equality holds for $k+1$. The remaining equations (i.e., (b) and (d)) can be proved analogously. Hence, the proof is completed.

    Let $A$ be a complex matrix of order $m\times n$ having rank $r$. Assume that $P\in\mathbb{C}^{m\times m}$ and $Q\in\mathbb{C}^{n\times n}$, and that $M$ and $N$ are Hermitian positive definite matrices satisfying $P^{*}MP=I_m$ and $Q^{*}N^{-1}Q=I_n$. Then the weighted singular value decomposition of the matrix $A$ can be expressed by Eq. (1.3).

    Lemma 3.2. Under the conditions of Lemma 3.1, for each approximate inverse produced by the iterative scheme (2.2), the following expression holds:

    $$\Theta_k=(Q^{-1}N)\,S_k\,(M^{-1}(P^{*})^{-1})=\begin{pmatrix}T_k&0\\0&0\end{pmatrix}, \tag{3.5}$$

    where $T_k$ is a diagonal matrix given by

    $$T_k=\begin{cases}T_{k-1}\,\Phi(DT_{k-1}),&k\geq 1,\\ \delta D,&k=0.\end{cases} \tag{3.6}$$

    Here, D denotes the diagonal matrix of order r.

    Proof. We prove this lemma by mathematical induction on $k$. For $k=0$, we have

    $$\begin{aligned}(Q^{-1}N)S_0(M^{-1}(P^{*})^{-1})&=\delta(Q^{-1}N)A^{\#}(M^{-1}(P^{*})^{-1})=\delta(Q^{-1}NN^{-1})A^{*}(MM^{-1}(P^{*})^{-1})\\ &=\delta(Q^{-1}NN^{-1}Q)\begin{pmatrix}D&0\\0&0\end{pmatrix}(P^{*}MM^{-1}(P^{*})^{-1})=\begin{pmatrix}\delta D&0\\0&0\end{pmatrix}.\end{aligned} \tag{3.7}$$

    Further, we assume that the result holds for $k$. To prove that (3.5) remains valid for $k+1$, it is sufficient to show that

    $$(Q^{-1}N)S_{k+1}(M^{-1}(P^{*})^{-1})=\begin{pmatrix}T_k\Phi(DT_k)&0\\0&0\end{pmatrix}.$$

    Thus,

    $$\begin{aligned}(Q^{-1}N)S_{k+1}(M^{-1}(P^{*})^{-1})&=(Q^{-1}N)\,S_k\left(5I-10AS_k+10(AS_k)^{2}-5(AS_k)^{3}+(AS_k)^{4}\right)(M^{-1}(P^{*})^{-1})\\ &=5(Q^{-1}N)S_k(M^{-1}(P^{*})^{-1})-10(Q^{-1}N)S_kAS_k(M^{-1}(P^{*})^{-1})\\ &\quad+10(Q^{-1}N)S_k(AS_k)^{2}(M^{-1}(P^{*})^{-1})-5(Q^{-1}N)S_k(AS_k)^{3}(M^{-1}(P^{*})^{-1})\\ &\quad+(Q^{-1}N)S_k(AS_k)^{4}(M^{-1}(P^{*})^{-1})\\ &=\Theta_k\left(5I-10\mathcal{D}\Theta_k+10(\mathcal{D}\Theta_k)^{2}-5(\mathcal{D}\Theta_k)^{3}+(\mathcal{D}\Theta_k)^{4}\right)=\begin{pmatrix}T_k\Phi(DT_k)&0\\0&0\end{pmatrix},\end{aligned} \tag{3.8}$$

    where $\mathcal{D}=\begin{pmatrix}D&0\\0&0\end{pmatrix}$, using $A=P\mathcal{D}Q^{*}$ together with $P^{*}MP=I_m$ and $Q^{*}N^{-1}Q=I_n$. This completes the proof.

    Theorem 3.1. For a complex matrix $A\in\mathbb{C}^{m\times n}$, the sequence $\{S_k\}_{k=0}^{\infty}$ generated by (2.2) with the initial matrix $S_0=\delta A^{\#}$, for suitable $\delta>0$, converges to $A^{\dagger}_{MN}$ with at least fifth order of convergence.

    Proof. In view of the iterative scheme (2.2), to establish this result we must show that

    $$\lim_{k\to\infty}(Q^{-1}N)S_k(M^{-1}(P^{*})^{-1})=\begin{pmatrix}D^{-1}&0\\0&0\end{pmatrix}. \tag{3.9}$$

    It follows from Lemma 3.2 that

    $$T_k=\operatorname{diag}\left(t_1^{(k)},t_2^{(k)},\ldots,t_r^{(k)}\right),\quad\text{where}\ t_i^{(0)}=\delta\sigma_i, \tag{3.10}$$

    and

    $$t_i^{(k+1)}=t_i^{(k)}\left(5-10\,\sigma_i t_i^{(k)}+10\left(\sigma_i t_i^{(k)}\right)^{2}-5\left(\sigma_i t_i^{(k)}\right)^{3}+\left(\sigma_i t_i^{(k)}\right)^{4}\right). \tag{3.11}$$

    The sequence generated by Eq. (3.11) is exactly what results from applying the iterative scheme (2.2) to compute the zero $\sigma_i^{-1}$ of the function $f(t)=\sigma_i t-1$ with the initial guess $t_i^{(0)}$. The iteration converges to $\sigma_i^{-1}$ provided that $0<t_i^{(0)}<\frac{2}{\sigma_i}$, which leads to the condition $0<\delta<\frac{2}{\sigma_1^{2}}$ on $\delta$ (and thereby justifies the choice of the initial guess). Thus $T_k\to D^{-1}$, and relation (3.9) is satisfied. This proves that the iterative method (2.2) converges to the weighted matrix inverse $A^{\dagger}_{MN}$. Now, we show that the obtained sequence $\{S_k\}_{k=0}^{\infty}$ converges with fifth order. For this, consider

    $$S_kA=N^{-1}Q\begin{pmatrix}T_k&0\\0&0\end{pmatrix}P^{*}M\,P\begin{pmatrix}D&0\\0&0\end{pmatrix}Q^{*}. \tag{3.12}$$

    Since $P^{*}MP=I_m$ and $Q^{*}N^{-1}Q=I_n$, we have $(Q^{*})^{-1}=N^{-1}Q$. Consequently,

    $$S_kA=(Q^{*})^{-1}\begin{pmatrix}T_kD&0\\0&0\end{pmatrix}Q^{*}=(Q^{*})^{-1}\begin{pmatrix}E_k&0\\0&0\end{pmatrix}Q^{*}, \tag{3.13}$$

    where $E_k=T_kD=\operatorname{diag}\left(\beta_1^{(k)},\beta_2^{(k)},\ldots,\beta_r^{(k)}\right)$. This yields

    $$S_{k+1}A=N^{-1}Q\begin{pmatrix}T_k\Phi(DT_k)&0\\0&0\end{pmatrix}P^{*}M\,P\begin{pmatrix}D&0\\0&0\end{pmatrix}Q^{*}, \tag{3.14}$$

    therefore

    $$S_{k+1}A=(Q^{*})^{-1}\begin{pmatrix}T_kD\,\Phi(DT_k)&0\\0&0\end{pmatrix}Q^{*}. \tag{3.15}$$

    Eqs. (3.13) and (3.15) imply $E_{k+1}=E_k\,\Phi(E_k)$, since the diagonal matrices $T_k$ and $D$ commute. Now, by simplifying, we obtain

    $$I-E_{k+1}=(I-E_k)^{5}, \tag{3.16}$$

    and thus, for all $j$ with $1\leq j\leq r$, we have $\left(1-\beta_j^{(k+1)}\right)=\left(1-\beta_j^{(k)}\right)^{5}$. This shows at least fifth order of convergence of the method (2.2) for finding the WMPI, and completes the proof.
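    The scalar contraction behind (3.16) can also be verified symbolically in one line:

        (* 1 - t Phi(t) = (1 - t)^5: the fifth-order error contraction *)
        Simplify[1 - t (5 - 10 t + 10 t^2 - 5 t^3 + t^4) == (1 - t)^5]   (* True *)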

    The purpose of this section is to confirm the theoretical aspects through numerical testing. To this end, an attempt is made to compare the proposed strategy with the existing schemes on practical and academic models. The outcomes are computed using the Mathematica software, since numerical calculations can be accomplished there with high accuracy; moreover, its programming language possesses symbolic calculation and exact arithmetic. Mathematica 11 was used on a machine with an Intel(R) Core(TM) i7-8565U CPU @ 1.89 GHz and the Windows 10 Pro (64-bit) operating system. The comparisons were made by considering the total number of matrix multiplications (TMM), the actual running time (T) in seconds, and the computational order of convergence ($\rho$). For calculating $\rho$,

    $$\rho=\frac{\ln\left(\|S_{k+1}-A^{\dagger}_{MN}\|\,/\,\|S_k-A^{\dagger}_{MN}\|\right)}{\ln\left(\|S_k-A^{\dagger}_{MN}\|\,/\,\|S_{k-1}-A^{\dagger}_{MN}\|\right)},\qquad k=1,2,\ldots, \tag{4.1}$$

    the last three approximations $S_{k-1}$, $S_k$, $S_{k+1}$ are used; here, $\|\cdot\|$ denotes a generic matrix norm. For comparison purposes, the proposed scheme PM5 (2.2) is compared with the methods proposed by Schulz (1.6), Esmaeili and Pirnia (1.7), Li et al. {(1.8), (1.9), and (1.10)}, Esmaeili et al. (1.11), Toutounian and Soleymani (1.12), Li et al. (1.13), and Soleymani (1.14), denoted by SM2, EM2, CM3, MP3, HM3, EM4, TM4, LM4, and SO5, respectively.
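    In code, (4.1) is a one-liner; rho is our illustrative name, and Ainv stands for the reference inverse $A^{\dagger}_{MN}$ (for instance PseudoInverse[A] in the unweighted examples):

        rho[Sp1_, Sk_, Sm1_, Ainv_] :=
          Log[Norm[Sp1 - Ainv]/Norm[Sk - Ainv]]/Log[Norm[Sk - Ainv]/Norm[Sm1 - Ainv]];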

    Example 1. (Academic problem) Consider the rank-deficient matrix

    $$A=\begin{bmatrix}1&2&3&4&1\\1&3&4&6&2\\2&3&4&5&3\\3&4&5&6&4\\4&5&6&7&6\\6&6&7&7&8\end{bmatrix}. \tag{4.2}$$

    The comparison for this test problem is computed by using the initial guess $S_0=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A^{*}$ [26], where $\sigma_{\min}$ and $\sigma_{\max}$ are the bounds of the singular values of $A$. Moreover, the stopping criterion $\|S_{k+1}-S_k\|<10^{-100}$ is used for finding a better approximation. From Table 1, we can observe that the presented method attains the desired result with a smaller number of matrix multiplications than the existing schemes, in minimum time. The presented scheme is therefore more efficient for this rank-deficient matrix, as it shows better outcomes in each component, whereas some of the techniques are not able to determine the solution for this test problem.
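    A possible driver for this example is sketched below; it assumes the matrix (4.2) and the pm5Step routine sketched after (2.2), uses 120-digit arithmetic so that the $10^{-100}$ tolerance is meaningful, and counts matrix products in tmm (four per step):

        A = N[{{1, 2, 3, 4, 1}, {1, 3, 4, 6, 2}, {2, 3, 4, 5, 3},
               {3, 4, 5, 6, 4}, {4, 5, 6, 7, 6}, {6, 6, 7, 7, 8}}, 120];
        sv = SingularValueList[A];                           (* nonzero singular values *)
        S = ConjugateTranspose[A]/(Min[sv]^2 + Max[sv]^2);   (* S0 of Example 1 *)
        tmm = 0;
        While[Snew = pm5Step[A, S]; tmm += 4;
              Norm[Snew - S] >= 10^-100, S = Snew];
        {tmm, Norm[A.Snew.A - A]}   (* total multiplications used; residual ~ 0 *)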

    Table 1.  Outcomes for comparison by testing schemes on Example 1.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    ρ 2.0002 * 3.0000 3.0000 3.0000 * 4.0000 4.0000 5.0000 5.0467
    T 1.141 * 0.860 0.938 0.875 * 0.856 0.873 0.719 0.610
    TMM 38 * 39 48 48 * 50 44 54 32
    * denotes the divergence


    Example 2. Consider the following elliptic partial differential equation:

    $$\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}=32\,\phi(x,y), \tag{4.3}$$

    where $\phi$ is a function of $x$ and $y$. It is satisfied at every point inside the square formed by $x=\pm 1$, $y=\pm 1$, subject to the following boundary conditions:

    (ⅰ) $\phi(x,y)=0$ on $y=-1$,  $-1\leq x\leq 1$,

    (ⅱ) $\phi(x,y)=1$ on $y=1$,  $-1\leq x\leq 1$,

    (ⅲ) $\dfrac{\partial\phi}{\partial x}=-\dfrac{1}{2}\phi(x,y)$ on $x=1$,  $-1\leq y\leq 1$,

    (ⅳ) $\dfrac{\partial\phi}{\partial x}=\dfrac{1}{2}\phi(x,y)$ on $x=-1$,  $-1\leq y\leq 1$.

    By using the central difference formulae on Eq. (4.3), one can obtain

    $$\frac{\phi_{i+1,j}-2\phi_{i,j}+\phi_{i-1,j}}{h^{2}}+\frac{\phi_{i,j+1}-2\phi_{i,j}+\phi_{i,j-1}}{h^{2}}=32\,\phi_{i,j}, \tag{4.4}$$

    where $\phi_{i,j}=\phi(x_i,y_j)$. Consider the square mesh size $h=\frac{1}{4}$, which yields seventy finite difference equations for the approximate solution $\phi(x_i,y_j)$. By observing the boundary conditions, one can easily see that the function $\phi$ is symmetric about the $y$-axis. Finally, implementing the boundary conditions in (4.4), we obtain the linear system $A\phi=u$ in thirty-five unknown parameters, where

    $$A=\begin{pmatrix}Y&I&&&&&\\ I&Y&I&&&&\\ &I&Y&I&&&\\ &&I&Y&I&&\\ &&&I&Y&I&\\ &&&&I&Y&I\\ &&&&&I&Y\end{pmatrix},\qquad Y=\begin{pmatrix}-6&2&0&0&0\\ 1&-6&1&0&0\\ 0&1&-6&1&0\\ 0&0&1&-6&1\\ 0&0&0&2&-\tfrac{25}{4}\end{pmatrix},$$

    $A$ is block tri-diagonal, $I$ is the identity matrix of order $5\times 5$, and $\phi$ and $u$ are the column vectors whose transposes equal $(\phi_1,\phi_2,\ldots,\phi_{34},\phi_{35})$ and $(0,0,\ldots,0,-1,-1,-1,-1,-1)$, respectively. To tackle the large sparse array, the SparseArray and Band functions are applied to save memory space and reduce the computational burden of matrix multiplication, as follows:

    i = 35;
    A = SparseArray[{Band[{1, 1}, {i, i}] -> {-6, -6, -6, -6, -25/4},
        Band[{2, 1}, {i, i}] -> {1, 1, 1, 2, 0},
        Band[{1, 2}, {i, i}] -> {2, 1, 1, 1, 0},
        Band[{1, 6}] -> 1, Band[{6, 1}] -> 1}, {i, i}, 0.];

    The initial guess for this test problem is taken as $S_0=\frac{1}{\|A\|_{1}\|A\|_{\infty}}A^{*}$ [26], where $\|A\|_{1}=\max_{j}\left(\sum_{i=1}^{m}|a_{i,j}|\right)$ and $\|A\|_{\infty}=\max_{i}\left(\sum_{j=1}^{n}|a_{i,j}|\right)$. The results of the comparisons are shown in Table 2. The methods LM4 and PM5 are better for this example than the other methods, as they use the minimum computational cost (i.e., matrix products); furthermore, PM5 yields the result faster. Thus, overall, it demonstrates that the presented method converges faster than its competitors.
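    With A assembled as above, a quick consistency check applies a reference inverse to the right-hand side $u$ from the text (uVec is our illustrative name, with the sign convention derived above; any of the iterative schemes can replace PseudoInverse):

        uVec = Join[ConstantArray[0., 30], ConstantArray[-1., 5]];  (* u of the text *)
        phi = PseudoInverse[Normal[A]].uVec;
        Norm[Normal[A].phi - uVec]                                  (* ~ 10^-15 *)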

    Table 2.  Outcomes for comparison by testing schemes on Example 2.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    ρ 2.00234 1.98506 3.25404 3.12742 3.08096 4.08699 4.34940 4.58575 4.98237 5.10871
    T 0.250 0.265 0.203 0.250 0.218 0.406 0.266 0.249 0.235 0.188
    TMM 18 21 15 18 18 20 20 16 24 16


    Example 3. In this test, we compute the weighted Moore-Penrose inverse of a random dense matrix $A$ of order $m\times n$, generated as follows:

    A = RandomReal[{-10, 10}, {m, n}], (4.5)

    where $M$ and $N$ are Hermitian positive definite matrices, constructed as:

    MM = RandomReal[2, {m, m}]; MM = Transpose[MM].MM;
    NN = RandomReal[3, {n, n}]; NN = Transpose[NN].NN;

    The results are obtained by using the initial guess $S_0=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A^{\#}$ [26] and the stopping criterion $\|S_{k+1}-S_k\|<10^{-12}$. The comparisons for this problem are listed in Table 3, which shows that PM5 is more efficient than the other existing methods in every aspect.
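    A possible realization of this weighted setup, assuming A, MM, NN as generated above and the pm5Step routine sketched after (2.2): the weighted singular values satisfy $\sigma_i^{2}=\lambda_i(N^{-1}A^{*}MA)$, so $\delta=1/(\sigma_{\min}^{2}+\sigma_{\max}^{2})$ can be read off the eigenvalues directly.

        (* assumes {m, n}, A from (4.5), and MM, NN from the block above *)
        Ash = Inverse[NN].ConjugateTranspose[A].MM;                     (* A# = N^-1 A* M *)
        ev = Abs[Eigenvalues[Inverse[NN].ConjugateTranspose[A].MM.A]];  (* = sigma_i^2 *)
        S = Ash/(Min[ev] + Max[ev]);                                    (* S0 = delta A# *)
        Do[S = pm5Step[A, S], {15}];
        {Norm[A.S.A - A], Norm[MM.A.S - ConjugateTranspose[MM.A.S]]}    (* ~ 0 *)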

    Table 3.  Outcomes for comparison by testing schemes on Example 3.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    T 2.172 1.391 1.329 1.610 1.282 1.265 1.188 1.437 1.157 1.047
    TMM 156 105 132 200 168 130 190 164 186 116


    Example 4. Consider ill-conditioned Hilbert matrices of various orders for computing the Moore-Penrose inverse:

    A = Table[1/(i + j - 1), {i, m}, {j, n}]. (4.6)

    The comparison is obtained with the initial approximation $S_0=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A^{T}$ [26] and the stopping criterion $\|S_{k+1}-S_k\|\leq 10^{-20}$. The results are listed in Table 4 for various orders of matrices. It can be concluded that the PM5 method gives the desired result faster than the other methods, while some methods fail on this test problem. Moreover, PM5 outperforms the others, using the minimum number of matrix multiplications for each matrix order. Hence, this justifies the aim of this paper.
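    A possible high-precision setup for this example (again assuming pm5Step) builds the exact Hilbert-type matrix first and then switches to, say, 300-digit arithmetic; at condition numbers around $10^{13}$, roughly forty PM5 steps are needed, consistent with the TMM counts for PM5 in Table 4:

        {m, n} = {10, 20};
        A = N[Table[1/(i + j - 1), {i, m}, {j, n}], 300];    (* matrix (4.6) *)
        sv = SingularValueList[A];
        S = Transpose[A]/(Min[sv]^2 + Max[sv]^2);            (* S0 of Example 4 *)
        Do[S = pm5Step[A, S], {45}];
        Norm[A.S.A - A]                                      (* ~ 0 at this precision *)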

    Table 4.  Outcomes for comparison by testing schemes on Example 4.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    Order of matrix is 10 × 10
    ρ 2.0000 * 3.0001 3.0029 3.0047 * 4.1194 4.0004 5.0000 5.2569
    T 7.657 * 5.563 5.546 5.688 * 4.984 5.061 4.197 4.769
    TMM 184 * 174 216 204 * 184 210 222 160
    Order of matrix is 10 × 20
    ρ 2.0000 * 3.0000 3.0027 3.0001 * 4.0006 4.0157 5.0059 5.1114
    T 10.688 * 8.078 9.375 7.234 * 7.313 7.172 9.157 6.109
    TMM 160 * 153 188 180 * 160 185 198 136
    Order of matrix is 15 × 15
    ρ 2.0000 * 3.0000 3.0003 3.0028 * 4.0027 4.0033 5.0305 5.0336
    T 25.656 * 25.500 19.953 18.624 * 16.563 17.234 15.640 15.048
    TMM 286 * 270 252 237 * 284 330 348 244
    * denotes the divergence


    In this manuscript, we have established a new formulation of the fifth-order hyperpower method to compute the weighted Moore-Penrose inverse. Compared to the standard hyperpower method from a theoretical perspective, this new formulation has an improved efficiency index. The resulting approximations of $A^{\dagger}_{MN}$ are found to be robust and effective when implemented as preconditioners for solving linear systems. Further, a wide range of practical and academic tests was performed to verify the consistency and effectiveness of our proposed iterative scheme. The outcomes on each test problem show that the presented method gives the desired result with the least number of matrix multiplications in minimum computational time. Hence, this supports our new transformation of the fifth-order hyperpower iterative scheme.

    The authors wish to thank anonymous reviewers for careful reading and valuable comments which improved the quality of the paper.

    The authors declare no conflict of interest.

    © 2017 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)