This paper considers the problem of recovering a low-rank matrix, which arises in many areas, such as the Netflix ranking matrix recovery problem. We introduce the difference between the nuclear norm and the Ky Fan $ K $-norm to approximate the rank of the target matrix and incorporate it into a linear least squares problem. As the resulting optimization problem is nonconvex and nonsmooth, albeit with some special structure, we propose a specially devised alternating direction method of multipliers (ADMM) for finding approximate or locally optimal solutions. The efficiency of the proposed method stems mainly from the fact that both subproblems at each iteration admit explicit optimal solutions. Computational experiments are conducted to compare the ADMM method with existing methods. The results show that the ADMM method performs better in terms of computational cost on most test instances.
Citation: Qihan Zhang, Xueting Cui. A proximal alternative directional method of multipliers for a class of low rank matrix recovery problems[J]. Journal of Industrial and Management Optimization, 2026, 22(1): 24-43. doi: 10.3934/jimo.2026002
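To make the rank surrogate concrete: writing $ \sigma_1(X) \ge \sigma_2(X) \ge \cdots $ for the singular values of $ X $, the Ky Fan $ K $-norm is $ \|X\|_{(K)} = \sum_{i=1}^{K} \sigma_i(X) $, so the difference $ \|X\|_* - \|X\|_{(K)} = \sum_{i > K} \sigma_i(X) $ is nonnegative and vanishes exactly when $ \mathrm{rank}(X) \le K $. The sketch below illustrates this idea on a matrix completion instance with a plain ADMM splitting; it is a minimal illustration, not the authors' algorithm (the paper proposes a proximal variant). In particular, the sampling model (an entry-wise mask $ P_\Omega $), the penalized formulation $ \min_X \frac{1}{2}\|P_\Omega(X - M)\|_F^2 + \lambda (\|X\|_* - \|X\|_{(K)}) $, the parameter names `lam`, `rho`, `K`, and the closed form assumed for the $ Z $-subproblem (keep the $ K $ largest singular values and soft-threshold the rest, a partial singular value thresholding commonly used in truncated nuclear norm methods) are assumptions made for illustration.

```python
import numpy as np

def rank_surrogate(X, K):
    """Nuclear norm minus Ky Fan K-norm: the sum of all singular
    values beyond the K largest; zero iff rank(X) <= K."""
    s = np.linalg.svd(X, compute_uv=False)  # descending order
    return s[K:].sum()

def partial_svt(Y, K, tau):
    """Partial singular value thresholding (assumed Z-step): keep
    the K largest singular values, soft-threshold the rest by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[K:] = np.maximum(s[K:] - tau, 0.0)
    return (U * s) @ Vt  # scale the columns of U by s, then multiply

def admm_tnn_completion(M_obs, mask, K, lam=1.0, rho=1.0, iters=300):
    """Plain ADMM on the splitting X = Z for
        min 0.5*||P_Omega(X - M)||_F^2 + lam*(||Z||_* - ||Z||_(K)).
    Both subproblems are explicit: the X-step is an entry-wise
    quadratic, the Z-step is a partial SVT."""
    X = np.zeros_like(M_obs)
    Z = np.zeros_like(M_obs)
    U = np.zeros_like(M_obs)  # scaled Lagrange multiplier
    for _ in range(iters):
        V = Z - U
        # X-step: solve (x - m) + rho*(x - v) = 0 on observed
        # entries; x = v elsewhere.
        X = np.where(mask, (M_obs + rho * V) / (1.0 + rho), V)
        # Z-step: prox of (lam/rho)*(||.||_* - ||.||_(K)).
        Z = partial_svt(X + U, K, lam / rho)
        # Multiplier update.
        U = U + X - Z
    return Z

# Toy usage: recover a rank-3 matrix from about half of its entries.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
mask = rng.random(M_true.shape) < 0.5
M_obs = np.where(mask, M_true, 0.0)
X_hat = admm_tnn_completion(M_obs, mask, K=3, lam=1.0, rho=1.0)
print("surrogate at solution:", rank_surrogate(X_hat, K=3))
print("relative error:",
      np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true))
```

Under this splitting, the sketch mirrors the abstract's claim that both subproblems admit explicit solutions: the $ X $-step decouples entry-wise, and the $ Z $-step reduces, by von Neumann's trace inequality, to an operation on singular values alone.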