Research article

The maximum residual block Kaczmarz algorithm based on feature selection

  • Published: 20 March 2025
  • MSC : 15A06, 65F10, 65F20

  • Based on the K-means method, an effective block-row partitioning algorithm was proposed in [1], which partitions the rows of the coefficient matrix $ A \in {\mathbb{R}^ {m \times n}} $. However, as the size of the coefficient matrix grows, the time required for the partitioning process increases significantly. To address this problem, we considered selecting features from the columns of the matrix $ A $ to obtain a low-rank matrix $ \tilde A \in {\mathbb{R}^ {m \times d}} \left(d \ll n \right) $. Lasso is a simple regression-analysis method for feature selection that handles high-dimensional data well. Motivated by this, we first introduced a new criterion for selecting the projection block and proposed the maximum residual block Kaczmarz algorithm. We then put forward a Lasso-based feature selection algorithm and, building on it, presented a maximum residual block Kaczmarz algorithm based on feature selection. We analyzed the convergence of these algorithms, demonstrated their effectiveness through numerical results, and verified the performance of the proposed algorithms in image reconstruction.

    Citation: Ran-Ran Li, Hao Liu. The maximum residual block Kaczmarz algorithm based on feature selection[J]. AIMS Mathematics, 2025, 10(3): 6270-6290. doi: 10.3934/math.2025286
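
    The abstract describes two computational ingredients: a greedy block selection rule that projects onto the row block with the largest residual, and a Lasso-based feature selection step that thins $ A \in {\mathbb{R}^ {m \times n}} $ down to $ \tilde A \in {\mathbb{R}^ {m \times d}} $ before the rows are partitioned. Since the full paper is not reproduced on this page, the sketch below only illustrates these ideas under the standard formulations of block Kaczmarz [25], greedy residual rules [12], and k-means row partitioning [1,32]; the function names, the scikit-learn Lasso call, and the parameter alpha are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Lasso

    def lasso_column_selection(A, b, d, alpha=0.1):
        # Illustrative stand-in for the paper's Lasso-based feature selection:
        # fit a Lasso regression of b on the columns of A and keep the d
        # columns with the largest coefficient magnitudes, giving a thin
        # matrix A_tilde in R^{m x d} with d << n. `alpha` is an assumed
        # regularization parameter, not a value from the paper.
        coef = Lasso(alpha=alpha, fit_intercept=False).fit(A, b).coef_
        keep = np.sort(np.argsort(np.abs(coef))[-d:])
        return A[:, keep], keep

    def kmeans_row_blocks(A, p):
        # Partition the row indices of A into p blocks by k-means clustering
        # of the rows, in the spirit of [1,32].
        labels = KMeans(n_clusters=p, n_init=10).fit(A).labels_
        return [np.where(labels == k)[0] for k in range(p)]

    def max_residual_block_kaczmarz(A, b, blocks, max_iter=1000, tol=1e-8):
        # Block Kaczmarz iteration with a greedy maximum-residual block
        # selection rule: at each step, project the current iterate onto the
        # solution set of the block whose residual norm is largest.
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(max_iter):
            r = b - A @ x
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            k = max(range(len(blocks)), key=lambda j: np.linalg.norm(r[blocks[j]]))
            rows = blocks[k]
            # x <- x + A_k^+ (b_k - A_k x), with the pseudoinverse applied
            # via a least-squares solve.
            x += np.linalg.lstsq(A[rows], r[rows], rcond=None)[0]
        return x

    In this reading, the feature-selected matrix $ \tilde A $ would be used only to make the row partitioning cheap (e.g., kmeans_row_blocks(A_tilde, p)), while the projections are still carried out with the full matrix $ A $; whether this matches the paper's exact scheme cannot be confirmed from the abstract alone.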



References

    [1] R.-R. Li, H. Liu, On randomized partial block Kaczmarz method for solving huge linear algebraic systems, Comput. Appl. Math., 41 (2022), 278. https://doi.org/10.1007/s40314-022-01978-0
    [2] Y. Censor, Row-action methods for huge and sparse systems and their applications, SIAM Rev., 23 (1981), 444–466. https://doi.org/10.1137/1023097
    [3] C. Brezinski, Projection methods for systems of equations, Amsterdam: Elsevier, 1997.
    [4] Z.-Z. Bai, C.-H. Jin, Column-decomposed relaxation methods for the overdetermined systems of linear equations, Int. J. Appl. Math., 13 (2003), 71–82.
    [5] S. Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen, Bull. Int. Acad. Polon. Sci. Lett. A, 35 (1937), 355–357.
    [6] F. Natterer, The mathematics of computerized tomography, Philadelphia: SIAM, 2001.
    [7] G. T. Herman, Fundamentals of computerized tomography: Image reconstruction from projections, Springer, 2009.
    [8] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Probl., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006
    [9] C. Popa, R. Zdunek, Kaczmarz extended algorithm for tomographic image reconstruction from limited-data, Math. Comput. Simulat., 65 (2004), 579–598. https://doi.org/10.1016/j.matcom.2004.01.021
    [10] T. Strohmer, R. Vershynin, A randomized Kaczmarz algorithm with exponential convergence, J. Fourier Anal. Appl., 15 (2009), 262–278. https://doi.org/10.1007/s00041-008-9030-4
    [11] Z.-Z. Bai, W.-T. Wu, On convergence rate of the randomized Kaczmarz method, Linear Algebra Appl., 553 (2018), 252–269. https://doi.org/10.1016/j.laa.2018.05.009
    [12] Z.-Z. Bai, W.-T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), 592–606. https://doi.org/10.1137/17m1137747
    [13] Z.-Z. Bai, W.-T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), 21–26. https://doi.org/10.1016/j.aml.2018.03.008
    [14] J.-J. Zhang, A new greedy Kaczmarz algorithm for the solution of very large linear systems, Appl. Math. Lett., 91 (2019), 207–212. https://doi.org/10.1016/j.aml.2018.12.022
    [15] Y. Liu, C.-Q. Gu, Variant of greedy randomized Kaczmarz for ridge regression, Appl. Numer. Math., 143 (2019), 223–246. https://doi.org/10.1016/j.apnum.2019.04.008
    [16] Z.-Z. Bai, W.-T. Wu, On greedy randomized augmented Kaczmarz method for solving large sparse inconsistent linear systems, SIAM J. Sci. Comput., 43 (2021), A3892–A3911. https://doi.org/10.1137/20m1352235
    [17] S. Steinerberger, A weighted randomized Kaczmarz method for solving linear systems, Math. Comput., 90 (2021), 2815–2826. https://doi.org/10.1090/mcom/3644
    [18] X. Yang, A geometric probability randomized Kaczmarz method for large scale linear systems, Appl. Numer. Math., 164 (2021), 139–160. https://doi.org/10.1016/j.apnum.2020.10.016
    [19] Z.-Z. Bai, L. Wang, On convergence rates of Kaczmarz-type methods with different selection rules of working rows, Appl. Numer. Math., 186 (2023), 289–319. https://doi.org/10.1016/j.apnum.2023.01.013
    [20] Z.-Z. Bai, W.-T. Wu, Randomized Kaczmarz iteration methods: Algorithmic extensions and convergence theory, Japan J. Indust. Appl. Math., 40 (2023), 1421–1443. https://doi.org/10.1007/s13160-023-00586-7
    [21] A. Ma, D. Molitor, Randomized Kaczmarz for tensor linear systems, BIT Numer. Math., 62 (2022), 171–194. https://doi.org/10.1007/s10543-021-00877-w
    [22] X.-Z. Wang, M.-L. Che, C.-X. Mo, Y.-M. Wei, Solving the system of nonsingular tensor equations via randomized Kaczmarz-like method, J. Comput. Appl. Math., 421 (2023), 114856. https://doi.org/10.1016/j.cam.2022.114856
    [23] S.-Y. Zhong, G.-X. Huang, An almost-maximal residual tensor block Kaczmarz method for large tensor linear systems, J. Comput. Appl. Math., 437 (2024), 115439. https://doi.org/10.1016/j.cam.2023.115439
    [24] T. Elfving, Block-iterative methods for consistent and inconsistent linear equations, Numer. Math., 35 (1980), 1–12. https://doi.org/10.1007/bf01396365
    [25] D. Needell, J. A. Tropp, Paved with good intentions: Analysis of a randomized block Kaczmarz method, Linear Algebra Appl., 441 (2014), 199–221. https://doi.org/10.1016/j.laa.2012.12.022
    [26] I. Necoara, Faster randomized block Kaczmarz algorithms, SIAM J. Matrix Anal. Appl., 40 (2019), 1425–1452. https://doi.org/10.1137/19m1251643
    [27] C.-Q. Miao, W.-T. Wu, On greedy randomized average block Kaczmarz method for solving large linear systems, J. Comput. Appl. Math., 413 (2022), 114372. https://doi.org/10.1016/j.cam.2022.114372
    [28] D. Needell, R. Zhao, A. Zouzias, Randomized block Kaczmarz method with projection for solving least squares, Linear Algebra Appl., 484 (2015), 322–343. https://doi.org/10.1016/j.laa.2015.06.027
    [29] Y. Liu, C.-Q. Gu, On greedy randomized block Kaczmarz method for consistent linear systems, Linear Algebra Appl., 616 (2021), 178–200. https://doi.org/10.1016/j.laa.2021.01.024
    [30] R.-R. Li, H. Liu, On global randomized block Kaczmarz method for image reconstruction, Electron. Res. Arch., 30 (2022), 1442–1453. https://doi.org/10.3934/era.2022075
    [31] J.-Q. Chen, Z.-D. Huang, On a fast deterministic block Kaczmarz method for solving large-scale linear systems, Numer. Algor., 89 (2022), 1007–1029. https://doi.org/10.1007/s11075-021-01143-4
    [32] X.-L. Jiang, K. Zhang, J.-F. Yin, Randomized block Kaczmarz methods with $k$-means clustering for solving large linear systems, J. Comput. Appl. Math., 403 (2022), 113828. https://doi.org/10.1016/j.cam.2021.113828
    [33] T. Hastie, R. Tibshirani, M. Wainwright, Statistical learning with sparsity: The Lasso and generalizations, London: Chapman and Hall/CRC, 2015.
    [34] D. L. Donoho, De-noising by soft-thresholding, IEEE Trans. Inf. Theory, 41 (1995), 613–627. https://doi.org/10.1109/18.382009
    [35] L. Xiao, T. Zhang, A proximal stochastic gradient method with progressive variance reduction, SIAM J. Optim., 24 (2014), 2057–2075. https://doi.org/10.1137/140961791
    [36] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imag. Sci., 2 (2009), 183–202. https://doi.org/10.1137/080716542
    [37] K. Bredies, D. A. Lorenz, Linear convergence of iterative soft-thresholding, J. Fourier Anal. Appl., 14 (2008), 813–837. https://doi.org/10.1007/s00041-008-9041-1
    [38] H. Karimi, J. Nutini, M. Schmidt, Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition, In: Machine learning and knowledge discovery in databases, Cham: Springer, 2016, 795–811. https://doi.org/10.1007/978-3-319-46128-1_50
    [39] T. A. Davis, Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Software, 38 (2011), 1–25. https://doi.org/10.1145/2049662.2049663
    [40] U. Sara, M. Akter, M. S. Uddin, Image quality assessment through FSIM, SSIM, MSE and PSNR-a comparative study, J. Comput. Commun., 7 (2019), 8–18. https://doi.org/10.4236/jcc.2019.73002
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
