## Tensor Decompositions

The Singular Value Decomposition (SVD) is a well-trodden subject of many a linear algebra class. Given a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\), we'd like to find \(\mathbf{U}\in\mathbb{R}^{m\times m}\), \(\mathbf{V}\in\mathbb{R}^{n\times n}\) and \(\mathbf{\Sigma}\in\mathbb{R}^{m\times n}\) such that \(\mathbf{A} = \mathbf{U\Sigma V}^T\), where \(\mathbf{U},\mathbf{V}\) are orthogonal matrices and \(\mathbf{\Sigma}\) is a diagonal matrix with nonnegative, nonincreasing entries on the diagonal. For a more rigorous undertaking, see, e.g., Trefethen and Bau. The SVD shows up all over numerical linear algebra and data analysis, and is frequently used as an optimal way to compress data and exploit low-rank structure. For example, if \(\mathbf{U}_p,\mathbf{V}_p\) are the first \(p\) columns of \(\mathbf{U},\mathbf{V}\) respectively, and \(\mathbf{\Sigma}_p\) is the leading \(p\times p\) block of \(\mathbf{\Sigma}\), then \(\mathbf{A}_p := \mathbf{U}_p\mathbf{\Sigma}_p\mathbf{V}_p^T\) minimizes \(\|\mathbf{A}-\mathbf{B}\|\) over all rank-\(p\) matrices \(\mathbf{B}\), in both the spectral and Frobenius norms (this is the Eckart–Young theorem). In this sense, we know that the columns of \(\mathbf{U}\) and \(\mathbf{V}\) "contain important information" about \(\mathbf{A}\) in some sense of the phrase, which is exploited in much literature (citations needed).
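As a quick sanity check of the Eckart–Young claim, here is a small NumPy sketch (the matrix and rank \(p\) are arbitrary choices for illustration): the Frobenius error of the rank-\(p\) truncation equals the root-sum-of-squares of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

# Thin SVD: A = U @ diag(s) @ Vt, singular values s in nonincreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-p truncation: keep the first p columns/rows of each factor.
p = 3
Ap = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]

# Eckart-Young: the Frobenius error is exactly the norm of the tail of s.
err = np.linalg.norm(A - Ap)
print(err, np.sqrt(np.sum(s[p:] ** 2)))
```

The same truncation is also optimal in the spectral norm, where the error is instead the single largest discarded singular value, \(\sigma_{p+1}\).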

This motivates important questions about extensions: can we exploit the same properties in tensors? What information can we recover? The gist of the research I performed on this revolved around the Higher Order SVD (HOSVD) and its applications to parametric model reduction. Suppose you had data on some \(N\)-dimensional problem with a \(P\)-dimensional parameter space. If you sample dimension \(j\) a total of \(m_j\) times, and parameter \(j\) a total of \(q_j\) times, you end up with a data tensor of size \(m_1\times m_2\times \ldots\times m_N\times q_1\times q_2\times\ldots\times q_P\), which can easily be enormous. The idea in the research, though, is: if you can construct this data tensor, what can you do with it? Can you perform something like "looking at the singular vectors"? The answer is *yes*, and in fact, instead of having only left and right singular vectors, you get a set of singular vectors for each dimension (i.e. for each \(m_j\) and \(q_j\)). Then, intuitively, one can examine the behavior of, say, a discretized PDE along one particular dimension in a way that shows us the important parts. See more here.
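To make "singular vectors for each dimension" concrete, here is a minimal HOSVD sketch in NumPy (not the exact code from the research; function names like `unfold` and `hosvd` are my own): for each mode, take the left singular vectors of that mode's unfolding, then project the tensor onto those bases to get the core tensor.

```python
import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    # One orthonormal factor matrix per mode: the left singular vectors
    # of that mode's unfolding.
    Us = [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
          for k in range(T.ndim)]
    # Core tensor: multiply each mode of T by the corresponding U^T.
    S = T
    for k, U in enumerate(Us):
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, k, 0), axes=1), 0, k)
    return S, Us

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 3))  # e.g. space x space x parameter samples
S, Us = hosvd(T)

# Reconstruct by applying each factor matrix back along its mode:
# T = S x_0 U_0 x_1 U_1 x_2 U_2 (mode-k products).
R = S
for k, U in enumerate(Us):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, k, 0), axes=1), 0, k)
```

The columns of each `Us[k]` are exactly the per-mode singular vectors mentioned above; truncating them (analogously to keeping \(\mathbf{U}_p\) in the matrix case) gives a compressed Tucker-format approximation of the data tensor.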