Volume 43, Issue 6, June 2017
Citation: YANG Qian, LIU Hongying. Sparse principal component analysis via regularized rank-k matrix approximation[J]. Journal of Beijing University of Aeronautics and Astronautics, 2017, 43(6): 1239-1246. doi: 10.13700/j.bh.1001-5965.2016.0462 (in Chinese)

Sparse principal component analysis via regularized rank-k matrix approximation

doi: 10.13700/j.bh.1001-5965.2016.0462
Funds: National Natural Science Foundation of China (61172060, 61403011)

More Information
  • Corresponding author: LIU Hongying, E-mail: liuhongying@buaa.edu.cn
  • Received Date: 31 May 2016
  • Accepted Date: 02 Sep 2016
  • Publish Date: 20 Jun 2017
Abstract: In computing sparse principal components (PCs), obtaining all k PCs simultaneously reduces the error that accumulates when they are extracted one at a time. We propose a sparse principal component model based on regularized rank-k matrix approximation and design a block coordinate descent method (BCD-sPCA-rSVD) to solve it. The main idea is to partition the variables into 2k coordinate blocks and then solve the subproblem for each block while keeping the other 2k-1 blocks fixed. Because every subproblem admits an explicit solution, the algorithm simply cycles through the blocks until a stopping criterion is satisfied and is therefore easy to construct (a sketch of the block updates is given below). Its per-iteration complexity is linear in both the sample size and the variable dimensionality, and the algorithm is convergent and easy to implement. Numerical results on synthetic and real data sets show that the algorithm is feasible and effective. Because it reduces the accumulated error at lower computational cost, the proposed method is well suited to large-scale sparse principal component analysis problems.
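The following Python sketch illustrates one plausible form of these block updates. It assumes an l1-penalized rank-k approximation objective, minimize ||X - U V^T||_F^2 + sum_j lambda_j ||v_j||_1 with each u_j constrained to unit norm, consistent with the sPCA-rSVD family the method name refers to; the function and parameter names (bcd_spca_rsvd, lam, tol) are illustrative, not the authors' implementation.

    import numpy as np

    def soft_threshold(z, t):
        # Elementwise soft-thresholding: S_t(z) = sign(z) * max(|z| - t, 0).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def bcd_spca_rsvd(X, k, lam, max_iter=200, tol=1e-6):
        # Sketch of block coordinate descent for sparse PCA via regularized
        # rank-k approximation (assumed objective, hypothetical names):
        #   minimize ||X - U V^T||_F^2 + sum_j lam[j] * ||v_j||_1,  ||u_j|| = 1.
        # The 2k coordinate blocks u_1, v_1, ..., u_k, v_k are updated
        # cyclically; every subproblem has a closed-form solution.
        lam = np.broadcast_to(np.asarray(lam, dtype=float), (k,))
        # Warm start from the ordinary rank-k SVD.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U = U[:, :k]
        V = Vt[:k].T * s[:k]          # columns v_j carry the singular values
        prev_obj = np.inf
        for _ in range(max_iter):
            for j in range(k):
                # Residual of X with the j-th rank-one term put back in.
                R = X - U @ V.T + np.outer(U[:, j], V[:, j])
                # v_j-block: a lasso subproblem solved by soft thresholding.
                V[:, j] = soft_threshold(R.T @ U[:, j], lam[j] / 2.0)
                # u_j-block: unit-norm least squares, solved by normalization.
                u = R @ V[:, j]
                nrm = np.linalg.norm(u)
                if nrm > 0.0:
                    U[:, j] = u / nrm
            obj = np.linalg.norm(X - U @ V.T) ** 2 \
                + (lam * np.abs(V).sum(axis=0)).sum()
            if prev_obj - obj < tol * max(1.0, abs(obj)):  # stopping criterion
                break
            prev_obj = obj
        # Report unit-norm sparse loadings alongside the raw factors.
        nrms = np.linalg.norm(V, axis=0)
        return U, V / np.where(nrms > 0.0, nrms, 1.0)

Each sweep updates the 2k blocks u_1, v_1, ..., u_k, v_k once; forming the residual R costs O(npk), so for fixed k the per-iteration cost stays linear in both the sample size n and the dimensionality p, matching the complexity stated in the abstract. Under these assumptions, a call such as bcd_spca_rsvd(X, k=3, lam=0.5) would return the left factors together with unit-norm sparse loading vectors.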

     

