Title: Complex tensors almost always have best low-rank approximations
Abstract: Low-rank tensor approximations are plagued by a well-known problem: a tensor may fail to have a best rank-$r$ approximation. Over $\mathbb{R}$, it is known that such failures can occur with positive probability, sometimes with certainty. We show that while such failures still occur over $\mathbb{C}$, they happen with zero probability. In fact we establish a more general result with useful implications for recent scientific and engineering applications that rely on sparse and/or low-rank approximations: let $V$ be a complex vector space with a Hermitian inner product, and let $X$ be a closed irreducible complex analytic variety in $V$. Given any complex analytic subvariety $Z \subseteq X$ with $\dim Z < \dim X$, we prove that a general $p \in V$ has a unique best $X$-approximation $\pi_X(p)$ that does not lie in $Z$. In particular, this implies that over $\mathbb{C}$, any tensor almost always has a unique best rank-$r$ approximation when $r$ is less than the generic rank. Our result covers many other notions of tensor rank: symmetric rank, alternating rank, Chow rank, Segre-Veronese rank, Segre-Grassmann rank, Segre-Chow rank, Veronese-Grassmann rank, Veronese-Chow rank, Segre-Veronese-Grassmann rank, Segre-Veronese-Chow rank, and more; in all cases, a unique best rank-$r$ approximation almost always exists. It also applies to block-term approximations of tensors: for any $r$, a general tensor has a unique best $r$-block-term approximation. When applied to sparse-plus-low-rank approximations, we obtain that for any given $r$ and $k$, a general matrix has a unique best approximation by a sum of a rank-$r$ matrix and a $k$-sparse matrix with a fixed sparsity pattern; this arises, for example, in the estimation of covariance matrices of a Gaussian hidden variable model with $k$ observed variables that are conditionally independent given $r$ hidden variables.
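To illustrate the failure mode the abstract refers to, here is the standard example (a well-known construction over $\mathbb{R}$, stated for context and not taken from this abstract) of a tensor with no best rank-$2$ approximation. Take linearly independent pairs $\{a_1, a_2\}$, $\{b_1, b_2\}$, $\{c_1, c_2\}$ and set
$$T = a_1 \otimes b_1 \otimes c_2 + a_1 \otimes b_2 \otimes c_1 + a_2 \otimes b_1 \otimes c_1,$$
which has rank $3$. The rank-$2$ tensors
$$T_n = n\left(a_1 + \tfrac{1}{n} a_2\right) \otimes \left(b_1 + \tfrac{1}{n} b_2\right) \otimes \left(c_1 + \tfrac{1}{n} c_2\right) - n\, a_1 \otimes b_1 \otimes c_1$$
satisfy $T_n \to T$ as $n \to \infty$, so $\inf \{\|T - S\| : \operatorname{rank}(S) \le 2\} = 0$, yet the infimum is never attained since $T$ itself has rank $3$. The paper shows that over $\mathbb{C}$ such tensors form a measure-zero set.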
Publication Year: 2017
Publication Date: 2017-12-06
Language: en
Type: article
Cited By Count: 2