Abstract: A popular research area today in statistics and machine learning is manifold learning, which is closely related to algorithmic techniques for dimensionality reduction. Manifold learning can be divided into linear and nonlinear methods. Linear methods, which have long been part of the statistician's toolbox for analyzing multivariate data, include principal component analysis (PCA) and multidimensional scaling (MDS). Recently, there has been a flurry of research activity on nonlinear manifold learning, which includes Isomap, locally linear embedding, Laplacian eigenmaps, Hessian eigenmaps, and diffusion maps. Some of these techniques are nonlinear generalizations of the linear methods. The algorithmic process of most of these techniques consists of three steps: a nearest-neighbor search, a definition of distances or affinities between points (a key ingredient for the success of these methods), and an eigenproblem for embedding high-dimensional points into a lower-dimensional space. This article gives a brief survey of these new methods and indicates their strengths and weaknesses. WIREs Comput Stat 2012 doi: 10.1002/wics.1222
This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Dimension Reduction
Statistical Learning and Exploratory Methods of the Data Sciences > Manifold Learning
Statistical and Graphical Methods of Data Analysis > Multivariate Analysis
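The three-step pipeline the abstract describes (nearest-neighbor search, affinity definition, eigenproblem) can be illustrated with a minimal Laplacian-eigenmaps sketch in NumPy/SciPy. This is an assumption-laden illustration, not code from the article: the function name, the Gaussian heat-kernel bandwidth `sigma`, and the use of dense matrices are all hypothetical choices made here for brevity.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=10, sigma=1.0, n_components=2):
    """Embed the rows of X into n_components dimensions (illustrative sketch)."""
    n = X.shape[0]
    # Step 1: nearest-neighbor search via pairwise distances
    # (column 0 of the argsort is the point itself, so it is skipped).
    D = cdist(X, X)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    # Step 2: Gaussian (heat-kernel) affinities on the symmetrized kNN graph.
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    cols = idx.ravel()
    W[rows, cols] = np.exp(-D[rows, cols] ** 2 / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)  # symmetrize the neighborhood graph
    # Step 3: eigenproblem on the graph Laplacian L = Deg - W;
    # solve the generalized problem L v = lambda * Deg * v and discard
    # the trivial constant eigenvector (smallest eigenvalue, ~0).
    Deg = np.diag(W.sum(axis=1))
    L = Deg - W
    _, vecs = eigh(L, Deg)
    return vecs[:, 1:n_components + 1]
```

Dense matrices keep the sketch short; practical implementations use sparse affinity matrices and iterative eigensolvers, since only the few smallest eigenvectors are needed.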
Publication Year: 2012
Publication Date: 2012-07-16
Language: en
Type: review
Indexed In: ['crossref']