A Literature Review of Tensor Completion Algorithms Based on Tensor Ring Decomposition and Their Applications
With the arrival of the big data era, the channels through which people obtain information have become increasingly diverse, while data volume and data transmission speed keep rising rapidly. Big data consists of multidimensional, multimodal data sets whose size and complexity are so great that they cannot be stored and processed efficiently on standard computers. Big data is characterized not merely by sheer size but by four properties: volume, velocity, variety, and veracity. Volume means that data processing algorithms must be scalable; velocity demands that the data be processed essentially in real time; veracity calls for algorithms that are robust to noise and behave predictably, especially on incomplete or discontinuous data; finally, variety implies the need for data integration, e.g., of neuroimaging, time series, spike, genetic, and behavioral data. Big data thus raises challenges in acquisition, processing, search, visualization, clustering, recognition, assimilation, and fusion, all within a tolerable amount of time, and its arrival demands new methods and techniques. Low-rank tensor decompositions and tensor networks are among the emerging ones. The central challenge is the analysis of large-scale, multidimensional data, and the data explosion has spurred intensive research into new scalable algorithms for tensor decompositions and tensor networks.
As the natural generalization of matrices to higher orders, tensors provide an excellent way to represent data. Tensor decompositions factorize a data tensor into factor matrices, whereas tensor networks represent a high-order tensor by interconnected low-order tensors. Both are natural extensions of blind source separation and of two-way to multiway component analysis. Moreover, their algorithms are well suited to dimensionality reduction and can handle data with missing values and heavy noise. They also hold great promise for data with millions or even billions of nonzero coupled entries, which can be handled via MapReduce and divide-and-conquer strategies. Beyond these points, tensor models offer further advantages, such as uncovering hidden components and their distribution patterns in multidimensional data. It is therefore necessary to study models and algorithms suited to low-rank tensor approximation, together with the corresponding scalable software.
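As a concrete illustration of decomposing a data tensor into factor matrices, the following NumPy sketch builds a third-order tensor in rank-R CP form; the dimensions and the rank R are assumed values chosen only for this example, not taken from any of the surveyed works.

```python
# A minimal sketch of a rank-R CP representation (assumed sizes and rank).
import numpy as np

I, J, K, R = 30, 40, 50, 5          # tensor dimensions and an assumed CP rank
A = np.random.randn(I, R)           # factor matrix for mode 1
B = np.random.randn(J, R)           # factor matrix for mode 2
C = np.random.randn(K, R)           # factor matrix for mode 3

# X(i, j, k) = sum_r A(i, r) * B(j, r) * C(k, r)
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Storage drops from I*J*K entries to (I + J + K)*R entries.
print(X.shape, (I + J + K) * R, I * J * K)
```

The compression from O(IJK) entries to O((I+J+K)R) parameters is what makes such models attractive for large-scale data.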
2. Research Content and Plan
The basic content of the research includes: fundamental matrix operations and their meaning, e.g., eigenvalue decomposition and singular value decomposition; matrix completion models and mainstream algorithms, e.g., SDP (semidefinite programming), SVT (singular value thresholding), FPC (fixed point continuation), and APG (accelerated proximal gradient); basic tensor concepts and operations (including computational and programming techniques), e.g., dimension, order, contraction, and mode-n unfolding; classical and recent tensor decompositions and their computation, e.g., CP (Canonical Polyadic), Tucker, TT (Tensor Train), and TR (Tensor Ring) decompositions; the notion of tensor networks together with their computational and programming techniques; tensor completion models and algorithms under the different decomposition models; sampling theorems for matrix completion; matrix-based robust principal component analysis models and algorithms; sampling theorems for tensor completion; tensor-ring-based robust principal component analysis models and algorithms; and some additional algorithms and mathematical analysis.
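To make one of the listed algorithms concrete, here is a minimal sketch of the SVT iteration for matrix completion in the spirit of Cai, Candès and Shen (first reference below): soft-threshold the singular values, then take a step on the observed entries. The threshold tau, step size delta, and iteration count are assumed illustrative values, not tuned settings.

```python
# A minimal sketch of singular value thresholding (SVT) for matrix completion.
import numpy as np

def svt_complete(M_obs, mask, tau=500.0, delta=1.2, n_iters=200):
    """Recover a low-rank matrix from M_obs, which is zero off the mask."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iters):
        # Proximal step: soft-threshold the singular values of Y by tau.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Ascent step on the observed entries only.
        Y += delta * mask * (M_obs - X)
    return X

# Toy usage: a rank-2, 100 x 100 matrix with about half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
mask = rng.random(M.shape) < 0.5
X_hat = svt_complete(M * mask, mask)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))  # relative error
```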
The goal of the research is robust principal component analysis based on tensor ring decomposition, together with its applications.
The technical route is as follows. Stage 1: start from matrix models, studying both the models and the algorithms as well as the sampling theorems for the matrix setting; then become familiar with basic tensor operations and move on to tensor decompositions and tensor networks; study tensor completion models and algorithms; propose new models and algorithms (tensor ring completion and the corresponding algorithms); finally, building step by step on the existing matrix results, derive sampling theorems under the tensor models. Stage 2: start from matrix robust principal component analysis and, using the results of Stage 1, move on to tensor ring robust principal component analysis; finally, in analogy with the matrix case, give recovery conditions for the tensor setting.
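To fix notation for the tensor ring model in the plan above, the sketch below evaluates a small TR-format tensor from its cores, using the standard definition in which each entry is the trace of a product of core slices, X(i1,...,iN) = Tr(G1[i1] G2[i2] ... GN[iN]); the tensor shape and TR ranks here are assumed toy values.

```python
# A minimal sketch of the tensor ring (TR) format (assumed shapes and ranks).
import numpy as np

shape = (4, 5, 6)                 # tensor dimensions (assumed)
ranks = (3, 3, 3)                 # TR ranks r1, r2, r3; the ring closes back on r1
cores = [np.random.randn(ranks[k], shape[k], ranks[(k + 1) % 3])
         for k in range(3)]       # core k has shape (r_k, n_k, r_{k+1})

def tr_entry(cores, idx):
    """Evaluate one entry of the TR tensor as a trace of core-slice products."""
    prod = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        prod = prod @ G[:, i, :]
    return np.trace(prod)

# Reconstruct the full tensor entry by entry (fine for small examples).
X = np.array([[[tr_entry(cores, (i, j, k))
                for k in range(shape[2])]
               for j in range(shape[1])]
              for i in range(shape[0])])
print(X.shape)
```

Unlike TT, the ring closes (r_{N+1} = r_1), so no core is forced to have boundary rank 1; this symmetry is what the TR completion models in the plan exploit.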
3. References
- J. F. Cai, E. J. Candès and Z. W. Shen, “A Singular Value Thresholding Algorithm for Matrix Completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, Mar 2010.
- S. Ma, D. Goldfarb and L. Chen, “Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization,” Mathematical Programming, vol. 128, no. 1-2, pp. 321-353, May 2009.
- K. C. Toh and S. Yun, “An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Linear Least Squares Problems,” Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, Nov 2009.
- E. J. Candès and T. Tao, “The Power of Convex Relaxation: Near-Optimal Matrix Completion,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2053-2080, May 2010.
- E. J. Candès and Y. Plan, “Matrix Completion with Noise,” Proceedings of the IEEE, vol. 98, no. 6, pp. 925-936, Jun 2010.
- S. Rabanser, O. Shchur and S. Günnemann, “Introduction to Tensor Decompositions and their Applications in Machine Learning,” arXiv:1711.10781v1, Nov 2017.
- A. Cichocki, D. P. Mandic, A. H. Phan, C. F. Caiafa, G. Zhou, Q. Zhao and L. D. Lathauwer, “Tensor Decompositions for Signal Processing Applications,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145-163, Mar 2015.
- N. D. Sidiropoulos, L. D. Lathauwer, X. Fu, K. Huang, E. E. Papalexakis and C. Faloutsos, “Tensor Decomposition for Signal Processing and Machine Learning,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551-3582, Jul 2017.
- J. Liu, P. Musialski, P. Wonka and J. Ye, “Tensor Completion for Estimating Missing Values in Visual Data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208-220, Jan 2013.
- Q. Zhao, M. Sugiyama and A. Cichocki, “Learning Efficient Tensor Representations with Ring Structure Networks,” arXiv:1705.08286v3, May 2017.
- W. Wang, V. Aggarwal and S. Aeron, “Tensor Completion by Alternating Minimization under the Tensor Train (TT) Model,” arXiv:1609.05587v1, Sep 2016.
- L. Yuan, Q. Zhao and J. Cao, “Completion of High Order Tensor Data with Missing Entries via Tensor-train Decomposition,” arXiv:1709.02641v2, Sep 2017.
- J. A. Bengua, H. N. Phien, H. D. Tuan and M. N. Do, “Efficient Tensor Completion for Color Image and Video Recovery: Low-rank Tensor Train,” arXiv:1606.01500v1, Jun 2016.
- W. Wang, V. Aggarwal and S. Aeron, “Efficient Low Rank Tensor Ring Completion,” arXiv:1707.08184v1, Jul 2017.
- A. Cichocki, “Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions,” arXiv:1403.2048v4, Aug 2014.
- A. Cichocki, N. Lee, I. Oseledets, A. H. Phan, Q. Zhao and D. Mandic, “Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions,” Foundations and Trends in Machine Learning, vol. 9, no. 4-5, pp. 249-429, Dec 2016.
- A. Cichocki, A. H. Phan, Q. Zhao, N. Lee, I. Oseledets, M. Sugiyama and D. Mandic, “Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 2 Applications and Future Perspectives,” Foundations and Trends in Machine Learning, vol. 9, no. 6, pp. 431-673, May 2017.
- N. Parikh and S. Boyd, “Proximal Algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 127-239, Jan 2014.
- Y. Xu and W. Yin, “A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758-1789, Sep 2013.
- L. Yang, Z. Huang and X. Shi, “A Fixed Point Iterative Method for Low n-Rank Tensor Pursuit,” IEEE Transactions on Signal Processing, vol. 61, no. 11, pp. 2952-2962, Jun 2013.
- K. Ye and L. H. Lim, “Tensor Network Ranks,” arXiv:1801.02662v1, Jan 2018.
- E. T. Hale, W. Yin and Y. Zhang, “Fixed-Point Continuation for ℓ1-Minimization: Methodology and Convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107-1130, Oct 2008.
- E. T. Hale, W. Yin and Y. Zhang, “Fixed-Point Continuation Applied to Compressed Sensing: Implementation and Numerical Experiments,” Journal of Computational Mathematics, vol. 28, no. 2, pp. 170-194, Mar 2010.
- E. Van den Berg and M. P. Friedlander, “In Pursuit of a Root,” UBC Computer Science Technical Report, TR-2007-16, 2007.