Algorithms that use tensor decompositions are widely used due to how well they perform with large amounts of data. Among them, we find the algorithms that search for the solution of a linear system in separated form, where the greedy rank-one update method, the starting point of the well-known proper generalized decomposition family, stands out. When the matrices of these systems have a particular structure, called a Laplacian-like matrix, which is related to the structure of the Laplacian operator, the convergence of the previous method is faster and more accurate. The main goal of this paper is to provide a procedure that explicitly gives, for a given square matrix, its best approximation within the set of Laplacian-like matrices. Clearly, if the residue of this approximation is zero, we will be able to solve, by using the greedy rank-one update algorithm, the associated linear system at a lower computational cost. As a particular example, we prove that the discretization of a general second-order partial differential equation without mixed derivatives can be written as a linear system with a Laplacian-type matrix. Finally, some numerical examples based on partial differential equations are given.
Citation: J. Alberto Conejero, Antonio Falcó, María Mora–Jiménez. A pre-processing procedure for the implementation of the greedy rank-one algorithm to solve high-dimensional linear systems[J]. AIMS Mathematics, 2023, 8(11): 25633-25653. doi: 10.3934/math.20231308
Working with large amounts of data is one of the main challenges we face today. With the rise of social networks and rapid technological advances, we must develop tools that allow us to work with so much information. At this point, the use of tensor products comes into play, since it reduces the number of operations to be carried out and speeds them up. Evidence of this is the recent article [1], where tensor products are used to speed up the calculation of matrix products. Other articles that illustrate the benefits of this operation include [2], where the solution of 2- and 3-dimensional optimal control problems with spectral fractional Laplacian-type operators is studied, and [3], where high-order problems are studied through the use of proper generalized decomposition methods.
When we try to solve a linear system of the form $Ax = b$, in addition to the classical methods, there are methods based on tensors that can be more efficient [4], since the classical methods face the problem of the curse of dimensionality, which makes them lose effectiveness as the size of the problem increases. The tensor methods look for the solution in separated form, that is, as the tensor combination
$$u = \sum_{j} x_j^1 \otimes x_j^2 \otimes \cdots \otimes x_j^d,$$
where $x_j^i \in \mathbb{R}^{n_i}$, $d$ is the dimension of the problem, and $\otimes$ is the Kronecker product reviewed in the next section. The main family of methods that solves this problem is the proper generalized decomposition family [5], and it is based on the greedy rank-one update (GROU) algorithm [6,7]. This algorithm calculates the solution of the linear system in separated form and, for this, in each iteration, it updates the approximation of the solution with the term resulting from minimizing the remaining residue. Furthermore, there are certain square matrices for which the GROU algorithm converges faster, namely, matrices of the form
$$A = \sum_{i=1}^{d} \mathrm{id}_{n_1} \otimes \cdots \otimes \mathrm{id}_{n_{i-1}} \otimes A_i \otimes \mathrm{id}_{n_{i+1}} \otimes \cdots \otimes \mathrm{id}_{n_d},$$
where $\mathrm{id}_{n_j}$ is the identity matrix of size $n_j \times n_j$ and $A_i \in \mathbb{R}^{n_i \times n_i}$ for $1 \le i \le d$. These matrices are called Laplacian-like matrices, due to their relationship with the Laplace operator written as
$$\Delta u = \sum_{i=1}^{d} \frac{\partial^2 u}{\partial x_i^2}.$$
It is not easy to decide when a given matrix can be represented in that form. To do this, we can use some of the previous results obtained by the authors in [8]. In that paper, it is proved that the set of Laplacian-like matrices is a linear subspace of the space of square matrices, for a particular decomposition of its dimension, and a greedy algorithm is provided that computes the best Laplacian approximation of a given matrix, as well as its residue. However, an iterative algorithm is not as useful as a direct solution algorithm. The main goal of this paper is to provide a direct algorithm that allows one to construct the best Laplacian-like approximation by using only a particular block decomposition of the matrix. It can be considered as a pre-processing procedure that represents a given matrix in its best Laplacian-like form; if the residual is equal to zero, we definitively have its Laplacian-like representation. Hence, we can efficiently use the GROU algorithm to solve the high-dimensional linear system associated with the matrix.
We remark that, by using the decomposition of $A$ into its best Laplacian-like approximation $L$ and the remainder $R = A - L$, we can rewrite the linear system as $(L + R)x = b$, and when the norm of the remainder is small, we can approximate the solution of the system by using the solution of the Laplacian system $Lx = b$. This fact is especially interesting in the case of the discretization of some partial differential equations. We also study the Laplacian decomposition of the matrix that comes from the discretization of a general second-order partial differential equation of the form
with homogeneous boundary conditions. Besides, to compare different numerical methods for solving partial differential equations, we consider two particular cases. The first is the Helmholtz equation, which corresponds to an eigenvalue problem for the Laplace operator. Furthermore, to illustrate that it is not necessary to be limited to the second order, we also consider the fourth-order Swift–Hohenberg equation.
This equation is noted for its pattern-forming behavior, and it was derived from the equations for thermal convection [9].
The paper is organized as follows. We begin by recalling some preliminary definitions and results used throughout the paper in Section 2. Section 3 is devoted to the statement and proof of the main result of this paper, which allows one to construct explicitly the best approximation of a given matrix from the linear space of Laplacian-like matrices. After that, in Section 4, we discuss how to apply this result to compute the best Laplacian approximation for the discretization of a second-order partial differential equation without mixed derivatives. Finally, some numerical examples are given in Section 5.
First of all, we introduce some notation that we use throughout the paper. We denote by $\mathbb{R}^{N \times N}$ the set of $N \times N$ matrices and by $A^{\top}$ the transpose of a given matrix $A$. As usual, we use $\langle x, y \rangle = x^{\top} y$ to denote the Euclidean inner product in $\mathbb{R}^{N}$, and $\|x\|_2 = \langle x, x \rangle^{1/2}$ its corresponding $2$-norm. Given a sequence $\{x_j\}_{j \ge 1}$ in $\mathbb{R}^{N}$, we say that a vector $x$ can be written as
$$x = \sum_{j=1}^{\infty} x_j$$
if and only if
$$\lim_{n \to \infty} \Big\| x - \sum_{j=1}^{n} x_j \Big\|_2 = 0,$$
that is, the series converges in the $\|\cdot\|_2$-topology.
The Kronecker product of two matrices $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$ is the $mp \times nq$ matrix defined by blocks as
$$A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix}.$$
We can see some of the well-known properties of the Kronecker product in [7].
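One of these properties, the mixed-product rule $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, can be checked numerically. The following Matlab snippet is only an illustration; the matrices are arbitrary examples and are not taken from the paper.

% Numerical check of the mixed-product property of the Kronecker product.
% The matrices below are arbitrary examples chosen for illustration.
A = [1 2; 3 4];  B = [0 1; 1 0];
C = [2 0; 1 1];  D = [1 -1; 2 3];
lhs = kron(A, B) * kron(C, D);   % (A x B)(C x D)
rhs = kron(A * C, B * D);        % (AC) x (BD)
fprintf('mixed-product error: %.2e\n', norm(lhs - rhs, 'fro'));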
As we already said, we are interested in solving the high-dimensional linear system $Ax = b$ obtained from a discretization of a partial differential equation. We are interested in solving it by using a tensor-based algorithm; so, we are going to look for an approximation of the solution in separated form. To see this, we assume that the coefficient matrix $A$ is an $N \times N$ invertible matrix, where $N = n_1 \cdots n_d$ for some $n_1, \ldots, n_d \in \mathbb{N}$. Next, we look for an approximation (of rank $n$) of $u = A^{-1}b$ of the form
$$u \approx u_n = \sum_{j=1}^{n} x_j^1 \otimes x_j^2 \otimes \cdots \otimes x_j^d, \qquad x_j^i \in \mathbb{R}^{n_i}. \qquad (2.1)$$
To do this, given $v \in \mathbb{R}^{N}$, we say that $v \in \mathcal{R}_1$ if $v = v^1 \otimes \cdots \otimes v^d$, where $v^i \in \mathbb{R}^{n_i}$ for $1 \le i \le d$. For $n > 1$ we define, inductively, $\mathcal{R}_n = \mathcal{R}_{n-1} + \mathcal{R}_1$; that is, $\mathcal{R}_n$ is the set of vectors that can be written as the sum of at most $n$ elements of $\mathcal{R}_1$. Note that $\mathcal{R}_{n-1} \subset \mathcal{R}_n$ for all $n > 1$.
To perform (2.1), what we will do is minimize the difference
$$\|b - A u_n\|,$$
that is, solve the problem
$$u_n = \arg\min_{v \in \mathcal{R}_n} \|b - Av\|^2. \qquad (2.2)$$
Here, $\|\cdot\|$ is the 2-norm, or the Frobenius norm, defined by $\|C\|^2 = \sum_{i,j} c_{ij}^2$ for $C = (c_{ij})$. Unfortunately, from Proposition 4.1(a) of [10], we have that the set $\mathcal{R}_n$ is not necessarily (or even usually) closed for each $n \ge 2$. In consequence, a best rank-$n$ approximation need not exist; that is, (2.2) may have no solution. However, from Proposition 4.2 of [10], it follows that $\mathcal{R}_1$ is a closed set in any norm-topology. This fact allows us to introduce the following algorithm.
The GROU algorithm is an iterative method to solve linear systems of the form $Ax = b$ by using only rank-one updates. Thus, given $A \in \mathbb{R}^{N \times N}$ with $N = n_1 \cdots n_d$, and $b \in \mathbb{R}^{N}$, we can obtain an approximation of the form
$$u \approx \sum_{j=0}^{r} y_j,$$
for some $r \in \mathbb{N}$ and $y_j \in \mathcal{R}_1$ for $0 \le j \le r$ [7]. We proceed with the following iterative procedure (see Algorithm 1 below): let $r_0 = b$ and $u_0 = 0$, and, for each $j \ge 0$, take
$$y_j = \arg\min_{y \in \mathcal{R}_1} \|r_j - Ay\|_2^2, \qquad (2.3)$$
$$r_{j+1} = r_j - Ay_j, \qquad u_{j+1} = u_j + y_j. \qquad (2.4)$$
In this way, we can define the rank-$n$ approximation of $u = A^{-1}b$ obtained by the GROU algorithm as $u_n = \sum_{j=0}^{n-1} y_j$.
The next result, presented in [7], gives the convergence of the sequence $(u_n)_{n \ge 0}$ to the solution of the linear system.
Theorem 2.1. Let $b \in \mathbb{R}^{N}$ and let $A \in \mathbb{R}^{N \times N}$ be an invertible matrix. Then, by using the iterative scheme described by (2.3) and (2.4), we obtain that the sequence $\{\|r_n\|_2\}_{n \ge 0}$ is strictly decreasing and
$$u = A^{-1}b = \sum_{j=0}^{\infty} y_j, \quad \text{that is,} \quad \lim_{n \to \infty} \|b - A u_n\|_2 = 0. \qquad (2.5)$$
Note that the updates in the previous scheme work under the assumption that, in line 5 of Algorithm 1, we have a way to obtain
$$y = \arg\min_{v \in \mathcal{R}_1} \|r - Av\|_2^2. \qquad (2.6)$$
To compute such a rank-one minimizer, we can use an alternating least squares (ALS) approach (see [7,11]).
Algorithm 1 GROU algorithm
1: procedure GROU($b$, $A$, $\varepsilon$, tol, rank_max)
2: $r_0 = b$
3: $u_0 = 0$
4: for $i = 0, 1, \ldots,$ rank_max do
5: $y_i = \arg\min_{y \in \mathcal{R}_1} \|r_i - A y\|_2^2$
6: $u_{i+1} = u_i + y_i$
7: $r_{i+1} = r_i - A y_i$
8: if $\|r_{i+1}\|_2 < \varepsilon$ or $\big|\,\|r_{i+1}\|_2 - \|r_i\|_2\,\big| <$ tol then goto 13
9: end if
10: end for
11: return $u_{i+1}$ and $\|r_{i+1}\|_2$
12: break
13: return $u_{i+1}$ and $\|r_{i+1}\|_2$
14: end procedure
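For illustration only, the following Matlab sketch mirrors the structure of Algorithm 1. It is not the authors' implementation: the rank-one minimization of line 5 is delegated to a user-supplied routine (here called als_rank_one, a hypothetical name standing for Algorithm 2 or any other ALS solver), and the parameter names are our assumptions.

function [u, res] = grou(A, b, epsilon, tol, rank_max, als_rank_one)
% GROU sketch: greedy rank-one updates for A*u = b (illustrative only).
% als_rank_one(A, r) must return a vector y approximating
% the minimizer over rank-one tensors of norm(r - A*y)^2, cf. Eq. (2.6).
    r = b;                 % initial residual
    u = zeros(size(b));    % initial approximation
    res = norm(r);
    for i = 1:rank_max
        y = als_rank_one(A, r);      % rank-one update (line 5)
        u = u + y;                   % update the approximation
        r = r - A*y;                 % update the residual
        res_new = norm(r);
        if res_new < epsilon || abs(res_new - res) < tol
            res = res_new;
            return;                  % stopping criterion of line 8
        end
        res = res_new;
    end
end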
The idea behind the ALS strategy to solve (2.6) is as follows. For each $i = 1, \ldots, d$, assume that the values $x^1, \ldots, x^{i-1}, x^{i+1}, \ldots, x^d$ are given. Then, we look for the unknown $x^i \in \mathbb{R}^{n_i}$ satisfying
$$x^i = \arg\min_{z \in \mathbb{R}^{n_i}} \big\| r - A\big(x^1 \otimes \cdots \otimes x^{i-1} \otimes z \otimes x^{i+1} \otimes \cdots \otimes x^d\big) \big\|_2^2,$$
where we can write
$$A\big(x^1 \otimes \cdots \otimes x^{i-1} \otimes z \otimes x^{i+1} \otimes \cdots \otimes x^d\big) = Z_i\, z$$
for a matrix $Z_i \in \mathbb{R}^{N \times n_i}$ that depends only on $A$ and the fixed vectors. In consequence, by using a least squares approach [11], we can obtain $x^i$ by solving the following $n_i$-dimensional linear system:
$$\big(Z_i^{\top} Z_i\big)\, x^i = Z_i^{\top} r, \qquad (2.7)$$
where
$$Z_i^{\top} Z_i \in \mathbb{R}^{n_i \times n_i}$$
and
$$Z_i^{\top} r \in \mathbb{R}^{n_i}.$$
Clearly, this construction holds for all $i = 1, \ldots, d$. However, it is well known (see Section 4 in [11]) that the performance of the ALS strategy can be improved (see Algorithm 2 below) when the matrix $A$, with $N = n_1 \cdots n_d$, can be written in the form
$$A = \sum_{k=1}^{r_A} A_1^k \otimes A_2^k \otimes \cdots \otimes A_d^k, \qquad (2.8)$$
where $A_i^k \in \mathbb{R}^{n_i \times n_i}$ for $1 \le i \le d$ and $1 \le k \le r_A$. In particular, when the matrix $A$ is given by
$$A = \sum_{i=1}^{d} \mathrm{id}_{n_1} \otimes \cdots \otimes \mathrm{id}_{n_{i-1}} \otimes A_i \otimes \mathrm{id}_{n_{i+1}} \otimes \cdots \otimes \mathrm{id}_{n_d},$$
then the matrix $A$ can be easily written in the form of (2.8). These matrices were introduced in [8] as Laplacian-like matrices, since they can be easily related to the classical Laplacian operator [2,12]. The next section will be devoted to the study of this class of matrices.
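For instance, a Laplacian-like matrix with three factors can be assembled in Matlab directly from its definition. This is a minimal sketch with arbitrary sizes and random factors, not code from the paper:

% Assemble A = A1 x id x id + id x A2 x id + id x id x A3
% for arbitrary example factors A1, A2, A3 (sizes chosen for illustration).
n  = [4 5 6];
A1 = randn(n(1));  A2 = randn(n(2));  A3 = randn(n(3));
A  = kron(A1, kron(eye(n(2)), eye(n(3)))) ...
   + kron(eye(n(1)), kron(A2, eye(n(3)))) ...
   + kron(eye(n(1)), kron(eye(n(2)), A3));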
As we said in the introduction, the proper generalized decomposition is a popular numerical strategy in engineering for solving high-dimensional problems. It is based on the GROU iterations (2.3) and (2.4), and it can be considered as a tensor-based decomposition algorithm.
There is a particular type of matrix for which these methods work particularly well when solving high-dimensional linear systems, namely, those that satisfy property (2.8). To this end, we introduce the following definition.
Algorithm 2 An ALS algorithm for matrices in the form of (2.8) [11, Algorithm 2]
1: Given $A = \sum_{k=1}^{r_A} A_1^k \otimes \cdots \otimes A_d^k$, a right-hand side $r \in \mathbb{R}^{N}$, a tolerance tol and a maximum number of iterations iter_max
2: Initialize $x^j \in \mathbb{R}^{n_j}$ for $j = 1, \ldots, d$
3: Introduce error $= \|r - A(x^1 \otimes \cdots \otimes x^d)\|_2$ and iter = 1.
4: while error > tol and iter < iter_max do
5: for $j = 1, \ldots, d$ do
6: assemble the reduced matrix and right-hand side of (2.7)
7: for $k = 1, \ldots, r_A$ do
8: accumulate the contribution of the $k$-th term of (2.8)
9: end for
10: $x^j$ solves the $n_j$-dimensional linear system (2.7)
11: end for
12: iter = iter + 1.
13: error $= \|r - A(x^1 \otimes \cdots \otimes x^d)\|_2$
14: end while
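The bookkeeping above can be summarized in a few lines of Matlab. This is a sketch under our own conventions (the cell array Afac, the matrix Z and the function name are ours, not the paper's); it implements the least-squares sweep (2.7) for a matrix given in the separated form (2.8):

function [x, err] = als_rank_one_sep(Afac, r, n, tol, iter_max)
% ALS sketch for min || r - A*(x1 x ... x xd) ||_2 when A is given in the
% separated form (2.8): A = sum_k Afac{k}{1} x ... x Afac{k}{d}.
% Afac{k}{j} is the j-th factor of the k-th term; all names are illustrative.
    d = numel(n);  K = numel(Afac);
    x = arrayfun(@(m) randn(m, 1), n(:), 'UniformOutput', false);  % random start
    err = Inf;
    for it = 1:iter_max
        for j = 1:d
            Z = zeros(prod(n), n(j));      % matrix with A*(x1 x...x xd) = Z*x{j}
            for k = 1:K
                left = 1;  right = 1;
                for m = 1:j-1,  left  = kron(left,  Afac{k}{m} * x{m}); end
                for m = j+1:d,  right = kron(right, Afac{k}{m} * x{m}); end
                Z = Z + kron(left, kron(Afac{k}{j}, right));
            end
            x{j} = (Z' * Z) \ (Z' * r);    % n(j)-dimensional step, cf. (2.7)
        end
        err = norm(r - Z * x{d});          % residual after sweeping j = 1..d
        if err < tol, break; end
    end
end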
Definition 3.1. Given a matrix $A \in \mathbb{R}^{N \times N}$, where $N = n_1 \cdots n_d$, we say that $A$ is a Laplacian-like matrix if there exist matrices $A_i \in \mathbb{R}^{n_i \times n_i}$ for $1 \le i \le d$ such that
$$A = \sum_{i=1}^{d} \mathrm{id}_{n_1} \otimes \cdots \otimes \mathrm{id}_{n_{i-1}} \otimes A_i \otimes \mathrm{id}_{n_{i+1}} \otimes \cdots \otimes \mathrm{id}_{n_d}, \qquad (3.1)$$
where $\mathrm{id}_{n_j}$ is the identity matrix of size $n_j \times n_j$.
It is not difficult to see that the set of Laplacian-like matrices is a linear subspace of the matrices satisfying property (2.8). From now on, we will denote by $\mathcal{L}$ the subspace of Laplacian-like matrices in $\mathbb{R}^{N \times N}$ for a fixed decomposition $N = n_1 \cdots n_d$.
Now, given a matrix $A \in \mathbb{R}^{N \times N}$, our goal is to solve the following optimization problem:
$$\min_{L \in \mathcal{L}} \|A - L\|. \qquad (3.2)$$
Clearly, if we denote by $P_{\mathcal{L}}$ the orthogonal projection onto the linear subspace $\mathcal{L}$, then $P_{\mathcal{L}}(A)$ is the solution of (3.2). Observe that $A \in \mathcal{L}$ if and only if $A = P_{\mathcal{L}}(A)$.
We are interested in trying to achieve a structure similar to (3.1) to study the matrices of large-dimensional problems. We search for an algorithm that allows one to construct, for a given matrix $A$, its best Laplacian-like approximation $P_{\mathcal{L}}(A)$.
To do this, we will use the following theorem, which describes a particular decomposition of the space of matrices $\mathbb{R}^{n \times n}$. Observe that the linear subspace $\mathrm{span}\{\mathrm{id}_n\}$ in $\mathbb{R}^{n \times n}$ has, as its orthogonal space, the subspace of null-trace matrices,
$$\mathrm{span}\{\mathrm{id}_n\}^{\perp} = \{ M \in \mathbb{R}^{n \times n} : \mathrm{trace}(M) = 0 \},$$
with respect to the inner product
$$\langle M, M' \rangle = \mathrm{trace}\big(M^{\top} M'\big).$$
Theorem 3.2. Consider $\mathbb{R}^{n \times n}$ as a Hilbert space endowed with the inner product $\langle M, M' \rangle = \mathrm{trace}(M^{\top} M')$. Then, there exists a decomposition
$$\mathbb{R}^{n \times n} = \mathrm{span}\{\mathrm{id}_n\} \oplus \mathrm{span}\{\mathrm{id}_n\}^{\perp},$$
where $\mathrm{span}\{\mathrm{id}_n\}^{\perp}$ is the orthogonal complement of the linear subspace generated by the identity matrix, that is, the subspace of null-trace matrices. Moreover,
$$\mathcal{L} = \mathrm{span}\{\mathrm{id}_N\} \oplus \bigoplus_{i=1}^{d} \mathrm{id}_{n_1} \otimes \cdots \otimes \mathrm{id}_{n_{i-1}} \otimes \mathfrak{h}_{n_i} \otimes \mathrm{id}_{n_{i+1}} \otimes \cdots \otimes \mathrm{id}_{n_d}, \qquad (3.3)$$
where $\mathfrak{h}_{n_i} = \mathrm{span}\{\mathrm{id}_{n_i}\}^{\perp}$ denotes the null-trace matrices in $\mathbb{R}^{n_i \times n_i}$. Furthermore, $\mathcal{L}$ is a subspace of $\mathbb{R}^{N \times N}$ and $\dim \mathcal{L} = 1 + \sum_{i=1}^{d} \big(n_i^2 - 1\big)$.
Proof. It follows from Lemma 3.1, Theorem 3.1 and Theorem 3.2 in [8].
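The orthogonal splitting of a square matrix into its identity component and its null-trace component, on which the theorem relies, can be illustrated with a few lines of Matlab (the example matrix is arbitrary):

% Split A into its component along the identity and its null-trace part,
% orthogonally with respect to <A,B> = trace(A'*B).
n    = 5;  A = randn(n);
c    = trace(A) / n;          % coefficient of the projection onto span{id_n}
A_id = c * eye(n);            % component along the identity
A_0  = A - A_id;              % orthogonal component: a null-trace matrix
fprintf('trace(A_0) = %.2e,  <A_id, A_0> = %.2e\n', ...
        trace(A_0), trace(A_id' * A_0));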
The above theorem allows us to compute the projection of a matrix $A$ onto $\mathcal{L}$ as follows. Denote by $P_i(A)$ the orthogonal projection of $A$ onto the linear subspace
$$\mathrm{id}_{n_1} \otimes \cdots \otimes \mathrm{id}_{n_{i-1}} \otimes \mathfrak{h}_{n_i} \otimes \mathrm{id}_{n_{i+1}} \otimes \cdots \otimes \mathrm{id}_{n_d}$$
for $1 \le i \le d$, and by $P_0(A) = \frac{\mathrm{trace}(A)}{N}\,\mathrm{id}_N$ the orthogonal projection of $A$ onto the linear subspace $\mathrm{span}\{\mathrm{id}_N\}$. In consequence, by using (3.3), we have
$$P_{\mathcal{L}}(A) = P_0(A) + \sum_{i=1}^{d} P_i(A). \qquad (3.4)$$
If we further analyze (3.4), we observe that the second term is of the form
$$\sum_{i=1}^{d} \mathrm{id}_{n_1} \otimes \cdots \otimes \hat{A}_i \otimes \cdots \otimes \mathrm{id}_{n_d},$$
with each $\hat{A}_i$ a null-trace matrix, and that it has only $\sum_{i=1}^{d} (n_i^2 - 1)$ degrees of freedom (recall that $\mathrm{trace}(\hat{A}_i) = 0$). In addition, due to the tensor structure of the products, the unknowns of each $\hat{A}_i$ are distributed in blocks, so we can identify which entries of the original matrix they approximate. Therefore, to obtain the value of each entry of $\hat{A}_i$, we only need to calculate the value that best approximates the entries of the original matrix that are in the same position as that entry.
In our next result, we will see how to carry out this procedure. To do this, we make the following observation. Given a matrix $M \in \mathbb{R}^{pq \times pq}$ for some integers $p, q \ge 1$, we can write $M$ as a block matrix:
$$M = \begin{pmatrix} M[1,1] & \cdots & M[1,p] \\ \vdots & \ddots & \vdots \\ M[p,1] & \cdots & M[p,p] \end{pmatrix}, \qquad (3.5)$$
where the block $M[\alpha, \beta] \in \mathbb{R}^{q \times q}$, for $1 \le \alpha, \beta \le p$, is given by
$$M[\alpha, \beta]_{ij} = M_{(\alpha-1)q+i,\,(\beta-1)q+j}, \qquad 1 \le i, j \le q.$$
Moreover,
$$\|M\|^2 = \sum_{\alpha=1}^{p} \sum_{\beta=1}^{p} \big\| M[\alpha, \beta] \big\|^2.$$
Observe that the roles of $p$ and $q$ can be easily interchanged. To simplify the notation, from now on we denote such a block simply by $M[\alpha, \beta]$ for each pair $(\alpha, \beta)$.
Theorem 3.3. Let with For each fixed consider the linear function given by
Then, the solution of the minimization problem
(3.6) |
is given by
Proof. First, let us observe that so, we can find three different situations in the calculation of the projections:
(1). ; in this case,
(2). ; in this case,
where denotes the zero matrix in
(3). for in this case, for a fixed we write and Thus,
In either case, a difference of the form
must be minimized. To this end, we will consider each case as a block matrix in the form of (3.5).
Case 1: For we take and hence,
In this situation, we have
hence, we wish for each to yield
Thus, it is not difficult to see that
for
Case 2: For we take and hence,
Now, we have
Thus, minimizes if and only if
In consequence,
Case 3: For we take and hence,
In this case,
so we need to solve the following problem:
(3.7) |
Since we can write (3.7) as
(3.8) |
Observe that
To simplify the notation, we write . Then, we have the following orthogonal decomposition, Denote by the orthogonal projection onto the linear subspace Then, for each we have
because and In consequence, the process of solving (3.8) is equivalent to solving the following optimization problem:
(3.9) |
Thus,
that is, hence,
Proceeding in a similar way as in Case 1, we obtain
for This concludes the proof of the theorem.
To conclude, we obtain the following useful corollary.
Corollary 3.4. Let with For each fixed consider the linear function given by
For each let be the solution of the optimization problem (3.6). Then,
(3.10) |
Proof. Observe that, for the matrix satisfies
where
is a linear subspace of that is linearly isomorphic to Since then
hence
We can conclude that recall that is the orthogonal projection of onto the linear subspace
From (3.4), the corollary is proved.
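Putting the results of this section together, the best Laplacian-like approximation and its residual can be computed directly from the definition of the subspace $\mathcal{L}$. The following Matlab function is our own sketch (the function name and the block-averaging bookkeeping are ours); it computes the orthogonal projection described by (3.4) by averaging over the entries matched by each tensor basis element, rather than reproducing the closed formulas of Theorem 3.3 verbatim:

function [L, R] = best_laplacian_approx(A, n)
% Sketch of the best Laplacian-like approximation of a square matrix A,
% for a factorization N = n(1)*...*n(d) (illustrative; not the paper's code).
% Returns L, the projection of A onto the Laplacian-like subspace, and R = A - L.
    d = numel(n);  N = prod(n);
    c = trace(A) / N;              % component of A along the identity
    L = -(d - 1) * c * eye(N);     % correction for the d overlapping identity directions
    for i = 1:d
        m = prod(n(1:i-1));  p = prod(n(i+1:d));   % sizes left/right of factor i
        Xi = zeros(n(i));
        for a = 1:n(i)
            for b = 1:n(i)
                % average A over the positions occupied by id_m x E_ab x id_p
                rows = reshape((0:m-1)'*n(i)*p + (a-1)*p + (1:p), [], 1);
                cols = reshape((0:m-1)'*n(i)*p + (b-1)*p + (1:p), [], 1);
                Xi(a, b) = sum(A(sub2ind([N N], rows, cols))) / (m * p);
            end
        end
        L = L + kron(eye(m), kron(Xi, eye(p)));    % add the i-th projection
    end
    R = A - L;                     % residual of the approximation
end

As a quick usage check, applying this function to a matrix assembled as in the sketch after (2.8) should return a residual R that is numerically zero.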
In this section, we consider a generic second-order partial differential equation without mixed derivatives and with homogeneous boundary conditions. More precisely, let
(4.1) |
(4.2) |
We discretize (4.1) with the help of the following derivative approximations:
and
for and From (4.2), we have that for all and
Next, in order to obtain a linear system, we order the unknowns and the right-hand side values along the mesh, which is traversed as shown in Figure 1, so that they form column vectors. This allows us to represent (4.1) and (4.2) as the linear system $Au = f$, where $A$ is the block matrix
(4.3) |
for given by
and are the diagonal matrices:
In this case, so, instead of looking for as in (3.10), we will look for where
has a null trace. Proceeding according to Theorem 3.3 for sizes and , we obtain the following decomposition:
and
We remark that Moreover, the residual of the approximation of is In consequence, we can write the original matrix as
Recall that the first term is
hence, can be written as
where
is an -matrix and
is a -matrix.
Now, we can use this representation of $A$ to implement the GROU Algorithm 1, together with the ALS strategy given by Algorithm 2, to solve the associated linear system $Au = f$.
This study can be extended to high-dimensional equations, as occurs in [8] with the three-dimensional Poisson equation.
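As a generic illustration of how such discretizations inherit the structure (3.1), consider only the pure second-order part $u_{xx} + u_{yy}$ on a uniform grid with homogeneous Dirichlet conditions; the grid size and scaling below are placeholders and not the values of Eqs. (4.1)–(4.3):

% Generic 2-D finite-difference sketch: the discrete operator for
% u_xx + u_yy on an n-by-n interior grid with homogeneous Dirichlet
% boundary conditions is Laplacian-like by construction.
n  = 50;  h = 1/(n+1);
e  = ones(n, 1);
D2 = spdiags([e -2*e e], -1:1, n, n) / h^2;   % 1-D second-derivative stencil
I  = speye(n);
A  = kron(D2, I) + kron(I, D2);               % form (3.1) with d = 2

In this two-factor case the Laplacian-like residual is zero, so the GROU/ALS machinery applies to $A$ directly; with constant coefficients, the lower-order and reaction terms of (4.1) simply add further Kronecker terms of the same type.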
Next, we are going to consider some particular equations to analyze their numerical behavior. In all cases, the characteristics of the computer used are as follows: 11th Gen Intel Core i7-11370H @ 3.30 GHz, 16 GB RAM, 64-bit operating system, and Matlab version R2021b [13].
Let us consider the particular case of the second-order partial differential equation with , and , that is,
This is the 2D-Helmholtz equation. To obtain the linear system associated with the discrete problem, we need some boundary conditions; for example,
and
This boundary value problem has a closed-form solution for
From the above operations, and by taking for simplicity, we can write the matrix of the discrete linear system associated with the Helmholtz equation as
or, equivalently,
If we solve this linear system for the case , and with , we obtain the timing results shown in Figure 2. To carry out this experiment, we used the following parameter values for the GROU Algorithm 1: ; ; ; ( and were used to perform Algorithm 2); and the number of nodes in (that is, the number of rows or columns of the matrix ) was increased from to .
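For reference, a minimal baseline in the spirit of this experiment is the direct solve with Matlab's backslash; the grid size, wavenumber and right-hand side below are placeholders, not the actual parameters behind Figure 2:

% Placeholder 2-D Helmholtz solve with Matlab's backslash, used as the
% baseline against which the GROU/ALS timings are compared.
n  = 100;  h = 1/(n+1);  k = 5;               % placeholder grid and wavenumber
e  = ones(n, 1);
D2 = spdiags([e -2*e e], -1:1, n, n) / h^2;
I  = speye(n);
A  = kron(D2, I) + kron(I, D2) + k^2 * speye(n^2);
f  = ones(n^2, 1);                            % placeholder right-hand side
tic;  u = A \ f;  fprintf('backslash: %.3f s\n', toc);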
To measure the goodness of the approximations obtained, we calculated the normalized errors, that is, the absolute value of the difference between the results obtained and the exact solution, divided by the length of the solution, i.e.,
for the different approximations obtained. The values of these errors were of the order of and can be seen in Figure 3.
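In code, the normalized error of an approximation against a reference solution reads as follows (u_approx and u_exact are our placeholder names; "length" is taken here as the number of components, and should be replaced by norm(u_exact) if the Euclidean norm was intended):

% Normalized error: |u_approx - u_exact| / length(u_exact), as used for Figure 3.
norm_err = abs(u_approx - u_exact) / length(u_exact);
max_err  = max(norm_err);     % scalar summary of the componentwise errors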
Now, let us consider the partial differential equation of order four:
(5.1) |
with the boundary conditions
(5.2) |
and
(5.3) |
For , the initial value problem (5.1)–(5.3) has the following as a solution:
If we discretize the problem described by (5.1)–(5.3) as in the previous example with the same step size for both variables, , we obtain a linear system of the form , where , in Laplacian-like form, is the matrix
and is the order established for the indices, with and .
To perform a numerical experiment, we set , and the same number of points for the two variables. At this point, we can solve the linear system associated with the Swift–Hohenberg discrete problem by using our tools: the Matlab backslash command, the GROU Algorithm 1, and the GROU Algorithm 1 together with the ALS Algorithm 2, with the matrix written in Laplacian-like form. In this case, we used the following parameter values in the algorithms: ; ; for the GROU Algorithm 1, with for the ALS step; and the number of nodes was increased from to . Figure 4 shows the results obtained.
Again, we calculated the normalized errors to estimate the goodness of the approximations, the results of which are shown in Figure 5.
In this work, we have studied the Laplacian decomposition algorithm, which, given any square matrix, calculates its best Laplacian approximation. Furthermore, in Theorem 3.3, we have shown how to implement it optimally.
For us, the greatest interest in this algorithm lies in the computational improvement obtained by combining it with the GROU Algorithm 1 to solve linear systems arising from the discretization of a partial differential equation. This improvement can be seen in the different numerical examples shown, where we have compared this procedure with Matlab's standard solver invoked by means of the backslash instruction.
This proposal is a new way of dealing with certain large-scale problems, where classical methods are less efficient.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
J. A. Conejero acknowledges funding from grant PID2021-124618NB-C21, funded by MCIN/AEI/ 10.13039/501100011033, and by "ERDF: A way of making Europe", by the "European Union"; M. Mora-Jiménez was supported by the Generalitat Valenciana and the European Social Fund under grant number ACIF/2020/269; A. Falcó was supported by the MICIN grant number RTI2018-093521-B-C32 and Universidad CEU Cardenal Herrera under grant number INDI22/15.
The authors declare that they have no conflicts of interest.
[1] A. Fawzi, M. Balog, A. Huang, T. Hubert, B. Romera-Paredes, M. Barekatain, et al., Discovering faster matrix multiplication algorithms with reinforcement learning, Nature, 610 (2022), 47–53. https://doi.org/10.1038/s41586-022-05172-4
[2] G. Heidel, V. Khoromskaia, B. Khoromskij, V. Schulz, Tensor product method for fast solution of optimal control problems with fractional multidimensional Laplacian in constraints, J. Comput. Phys., 424 (2021), 109865. https://doi.org/10.1016/j.jcp.2020.109865
[3] C. Quesada, G. Xu, D. González, I. Alfaro, A. Leygue, M. Visonneau, et al., Un método de descomposición propia generalizada para operadores diferenciales de alto orden, Rev. Int. Metod. Numer., 31 (2015), 188–197. https://doi.org/10.1016/j.rimni.2014.09.001
[4] A. Nouy, Low-rank methods for high-dimensional approximation and model order reduction, Soc. Ind. Appl. Math., 2017, 171–226.
[5] A. Falcó, A. Nouy, Proper generalized decomposition for nonlinear convex problems in tensor Banach spaces, Numer. Math., 121 (2012), 503–530. https://doi.org/10.1007/s00211-011-0437-5
[6] I. Georgieva, C. Hofreither, Greedy low-rank approximation in Tucker format of solutions of tensor linear systems, J. Comput. Appl. Math., 358 (2019), 206–220. https://doi.org/10.1016/j.cam.2019.03.002
[7] A. Ammar, F. Chinesta, A. Falcó, On the convergence of a greedy rank-one update algorithm for a class of linear systems, Arch. Comput. Methods Eng., 17 (2010), 473–486. https://doi.org/10.1007/s11831-010-9048-z
[8] J. A. Conejero, A. Falcó, M. Mora-Jiménez, Structure and approximation properties of Laplacian-like matrices, Results Math., 78 (2023), 184. https://doi.org/10.1007/s00025-023-01960-0
[9] J. Swift, P. C. Hohenberg, Hydrodynamic fluctuations at the convective instability, Phys. Rev. A, 15 (1977), 319–328.
[10] V. de Silva, L. H. Lim, Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 30 (2008), 1084–1127. https://doi.org/10.1137/06066518X
[11] A. Falcó, L. Hilario, N. Montés, M. Mora, Numerical strategies for the Galerkin–proper generalized decomposition method, Math. Comput. Model., 57 (2013), 1694–1702. https://doi.org/10.1016/j.mcm.2011.11.012
[12] W. Hackbusch, B. Khoromskij, S. Sauter, E. Tyrtyshnikov, Use of tensor formats in elliptic eigenvalue problems, Numer. Linear Algebra Appl., 19 (2012), 133–151. https://doi.org/10.1002/nla.793
[13] MATLAB, version R2021b, The MathWorks Inc., Natick, Massachusetts, 2021.