In this paper, we propose and analyze a Riemannian Gauss–Newton method for the low-rank matrix completion problem. Unlike existing second-order Riemannian optimization methods for this problem, we adopt the Gauss–Newton approximation of the Riemannian Hessian rather than the true Riemannian Hessian. This approximate Hessian is computationally cheap to apply and symmetric positive semidefinite. With an added regularization term, the resulting Riemannian Gauss–Newton equation can be solved efficiently by the linear conjugate gradient method. Global and local convergence of the proposed method are established under mild assumptions. Preliminary numerical results on synthetic and image recovery problems with various dimensions and ranks demonstrate the efficiency of the proposed method.
Citation: Xiaojing Zhu, Fengyi Yuan. A Riemannian regularized Gauss–Newton method for low-rank matrix completion[J]. AIMS Mathematics, 2025, 10(12): 28556-28582. doi: 10.3934/math.20251257
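The core step described in the abstract — solving a regularized Gauss–Newton equation by linear conjugate gradient — can be sketched in a simplified setting. The sketch below uses a plain factored parametrization M = UVᵀ instead of the paper's fixed-rank manifold geometry, and all names (`gn_step`, the damping parameter `lam`) are illustrative assumptions, not the authors' implementation. The Gauss–Newton operator JᵀJ is applied matrix-free and, with the regularization term λI, is symmetric positive definite, so CG applies.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gn_step(U, V, A, mask, lam=1e-8):
    """One regularized Gauss-Newton step for
    min 0.5 * || P_Omega(U V^T - A) ||_F^2  over  U (m x r), V (n x r).

    Solves (J^T J + lam*I) d = -J^T r by linear CG, where J is the
    Jacobian of the residual P_Omega(U V^T - A) with respect to (U, V).
    """
    m, r = U.shape
    n = V.shape[0]
    R = mask * (U @ V.T - A)               # residual on the observed set Omega

    def J(dU, dV):                          # Jacobian action on a direction
        return mask * (dU @ V.T + U @ dV.T)

    def Jt(S):                              # adjoint of the Jacobian
        return S @ V, S.T @ U

    def matvec(x):                          # (J^T J + lam*I) applied to vec(dU, dV)
        dU = x[:m * r].reshape(m, r)
        dV = x[m * r:].reshape(n, r)
        GU, GV = Jt(J(dU, dV))
        return np.concatenate([(GU + lam * dU).ravel(),
                               (GV + lam * dV).ravel()])

    op = LinearOperator(((m + n) * r, (m + n) * r), matvec=matvec)
    gU, gV = Jt(R)                          # gradient J^T r
    b = -np.concatenate([gU.ravel(), gV.ravel()])
    d, _ = cg(op, b, atol=1e-10)            # SPD system: plain CG suffices
    return U + d[:m * r].reshape(m, r), V + d[m * r:].reshape(n, r)
```

Since matrix completion is a zero-residual problem at an exact solution, the Gauss–Newton approximation coincides with the true Hessian there, which is why fast local convergence can be expected; the regularization λI additionally absorbs the rank deficiency caused by the invariance (U, V) → (UG⁻ᵀ, VG).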