Citation: YANG Qian, LIU Hongying. Sparse principal component analysis via regularized rank-k matrix approximation[J]. Journal of Beijing University of Aeronautics and Astronautics, 2017, 43(6): 1239-1246. doi: 10.13700/j.bh.1001-5965.2016.0462 (in Chinese)
In calculating the sparse principal components (PCs), attaining
[1] ZOU H, HASTIE T. Sparse principal component analysis[J]. Journal of Computational and Graphical Statistics, 2006, 15(2): 265-286. doi: 10.1198/106186006X113430
[2] TRENDAFILOV N T, JOLLIFFE I T. Projected gradient approach to the numerical solution of the SCoTLASS[J]. Computational Statistics and Data Analysis, 2006, 50(1): 242-253. doi: 10.1016/j.csda.2004.07.017
[3] D'ASPREMONT A, GHAOUI L E, JORDAN M I, et al. A direct formulation for sparse PCA using semidefinite programming[J]. SIAM Review, 2007, 49(3): 434-448. http://papers.nips.cc/paper/2628-a-direct-formulation-for-sparse-pca-using-semidefinite-programming.pdf
[4] SHEN H, HUANG J Z. Sparse principal component analysis via regularized low rank matrix approximation[J]. Journal of Multivariate Analysis, 2008, 99(6): 1015-1034. doi: 10.1016/j.jmva.2007.06.007
[5] MOGHADDAM B, WEISS Y, AVIDAN S. Spectral bounds for sparse PCA: Exact and greedy algorithms[C]//Advances in Neural Information Processing Systems. Montreal: Neural Information Processing Systems Foundation, 2006: 915-922.
[6] D'ASPREMONT A, BACH F R, GHAOUI L E. Optimal solutions for sparse principal component analysis[J]. Journal of Machine Learning Research, 2008, 9: 1269-1294. https://www.researchgate.net/publication/1897313_Optimal_Solutions_for_Sparse_Principal_Component_Analysis
[7] LUSS R, TEBOULLE M. Conditional gradient algorithms for rank-one matrix approximations with a sparsity constraint[J]. SIAM Review, 2013, 55(1): 65-98. doi: 10.1137/110839072
[8] LUSS R, TEBOULLE M. Convex approximations to sparse PCA via Lagrangian duality[J]. Operations Research Letters, 2011, 39(1): 57-61. doi: 10.1016/j.orl.2010.11.005
[9] SIGG C, BUHMANN J. Expectation-maximization for sparse and nonnegative PCA[C]//Proceedings of the 25th International Conference on Machine Learning. New York: ACM, 2008: 960-967.
[10] JOURNEE M, NESTEROV Y, RICHTARIK P, et al. Generalized power method for sparse principal component analysis[J]. Journal of Machine Learning Research, 2010, 11(2): 517-553. http://www.oalib.com/paper/3867039
[11] LU Z S, ZHANG Y. An augmented Lagrangian approach for sparse principal component analysis[J]. Mathematical Programming, Series A, 2012, 135(1-2): 149-193. doi: 10.1007/s10107-011-0452-4
[12] ZHAO Q, MENG D Y, XU Z B, et al. A block coordinate descent approach for sparse principal component analysis[J]. Neurocomputing, 2015, 153(4): 180-190. http://gr.xjtu.edu.cn/c/document_library/get_file?folderId=1834195&name=DLFE-34834.pdf
[13] TIBSHIRANI R. Regression shrinkage and selection via the LASSO[J]. Journal of the Royal Statistical Society, Series B, 1996, 58(1): 267-288. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.7574&rep=rep1&type=pdf
[14] TSENG P. Convergence of a block coordinate descent method for nondifferentiable minimization[J]. Journal of Optimization Theory and Applications, 2001, 109(3): 475-494. doi: 10.1023/A:1017501703105
[15] JEFFERS J N R. Two case studies in the application of principal component analysis[J]. Applied Statistics, 1967, 16(3): 225-236. doi: 10.2307/2985919
[16] ALON U, BARKAI N, NOTTERMAN D A, et al. Broad patterns of gene expression revealed by clustering of tumor and normal colon tissues probed by oligonucleotide arrays[J]. Proceedings of the National Academy of Sciences of the United States of America, 1999, 96(12): 6745-6750. doi: 10.1073/pnas.96.12.6745