Multi-Way Data Representation: A Comprehensive Survey on Tensor Decomposition in Machine Learning
DOI: https://doi.org/10.31224/4421

Keywords: Tensor decomposition, machine learning

Abstract
Tensor decomposition has emerged as a fundamental tool in machine learning, enabling efficient representation, compression, and interpretation of high-dimensional data. Unlike traditional matrix factorization methods, tensor decomposition extends to multi-way data structures, capturing complex relationships and latent patterns that would otherwise remain hidden. This paper provides a comprehensive overview of tensor decomposition techniques, including CANDECOMP/PARAFAC (CP), Tucker, and Tensor Train (TT) decompositions, and their applications in various machine learning domains. We explore optimization strategies, computational challenges, and real-world case studies demonstrating the effectiveness of tensor methods in areas such as natural language processing, recommender systems, deep learning compression, and biomedical informatics. Furthermore, we discuss emerging trends and future research directions, including the integration of tensor decomposition with deep learning, scalability improvements, and applications in quantum computing. As machine learning continues to evolve, tensor decomposition is poised to play an increasingly critical role in data-driven discovery and model interpretability.
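As a sketch of one family of methods the survey covers, the Tensor Train (TT) format can be illustrated with a minimal NumPy implementation of the sequential-SVD (TT-SVD) construction. This is an illustrative sketch, not the paper's own code; the function names and the fixed rank cap are assumptions made here:

```python
import numpy as np

def tt_decompose(tensor, rank):
    """Split a d-way tensor into TT cores via sequential truncated SVDs
    (the TT-SVD scheme). `rank` caps every internal TT rank.
    Illustrative sketch only, not a production implementation."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(s))
        # Core k has shape (r_{k-1}, n_k, r_k).
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Push the singular values into the remainder and refold it.
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

With the rank cap set at or above the exact TT ranks, the reconstruction is lossless; lowering the cap trades accuracy for compression, which is the storage-saving idea behind TT-based model compression discussed in the survey.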
License
Copyright (c) 2025 Aliona Tatyana

This work is licensed under a Creative Commons Attribution 4.0 International License.