7 results for Canonical Correlation Analysis
in the Cambridge University Engineering Department Publications Database
Abstract:
Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques are only able to reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive at large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance, while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering or LDA. We demonstrate our algorithms through experiments on real-world data, on which we compare against the state-of-the-art. A simple R implementation of the presented algorithms is provided.
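The core idea described above — approximate a kernel with random features, then run the linear method in the feature space — is easy to sketch. The sketch below assumes a Gaussian (RBF) kernel approximated with random Fourier features; the function names, parameters, and SVD route are illustrative assumptions, not the authors' code (the paper itself provides an R implementation).

```python
# Minimal sketch of randomized nonlinear PCA: random Fourier features
# (Rahimi-Recht style) followed by ordinary linear PCA in feature space.
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Map X (n, d) to a space where inner products approximate
    the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def randomized_nonlinear_pca(X, n_components=2, **rff_kwargs):
    """Linear PCA on random features = approximate kernel PCA."""
    Z = random_fourier_features(X, **rff_kwargs)
    Z -= Z.mean(axis=0)                       # center the features
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T            # nonlinear principal scores

# Usage: scores = randomized_nonlinear_pca(np.random.randn(500, 10))
```

The same substitution works for CCA: map both views through independent random feature maps and run classical linear CCA on the results.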
Abstract:
Linear dimensionality reduction methods are a cornerstone of analyzing high-dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight into some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a black-box, objective-agnostic numerical technology.
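As one concrete reading of the "generic solver" idea above, the sketch below maximizes an objective f(M) over the Stiefel manifold {M : M^T M = I} by projected gradient steps with a QR retraction, then instantiates it for the PCA trace objective. The solver structure, step size, and names are assumptions for illustration, not the survey's actual software.

```python
# Sketch: linear dimensionality reduction as optimization over the
# Stiefel manifold of d x r matrices with orthonormal columns.
import numpy as np

def stiefel_solver(grad_f, d, r, n_iters=500, lr=1e-2, seed=0):
    """Gradient ascent on f over {M : M^T M = I} via QR retraction."""
    rng = np.random.default_rng(seed)
    M, _ = np.linalg.qr(rng.normal(size=(d, r)))   # random orthonormal start
    for _ in range(n_iters):
        G = grad_f(M)
        # Project the Euclidean gradient onto the tangent space at M
        G_tan = G - M @ (M.T @ G + G.T @ M) / 2
        M, _ = np.linalg.qr(M + lr * G_tan)        # retract onto the manifold
    return M

# PCA as one instance: maximize tr(M^T C M), whose gradient is 2 C M
X = np.random.randn(300, 8)
C = np.cov(X, rowvar=False)
M = stiefel_solver(lambda M: 2 * C @ M, d=8, r=2)
```

Swapping in a different gradient — a discriminant ratio, an autocorrelation objective, a CCA correlation — reuses the same solver unchanged, which is the survey's point.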
Abstract:
In the field of motor control, two hypotheses have been controversial: whether the brain acquires internal models that generate accurate motor commands, or whether the brain avoids this by using the viscoelasticity of the musculoskeletal system. Recent observations of relatively low stiffness during trained movements support the existence of internal models. However, no study has revealed the decrease in viscoelasticity associated with learning that would imply improvement of internal models as well as synergy between the two hypothetical mechanisms. Previously observed decreases in electromyogram (EMG) might have other explanations, such as trajectory modifications that reduce joint torques. To circumvent such complications, we required strict trajectory control and examined only successful trials having identical trajectory and torque profiles. Subjects were asked to perform a hand movement in unison with a target moving along a specified and unusual trajectory, with the shoulder and elbow in the horizontal plane at shoulder level. To evaluate joint viscoelasticity during the learning of this movement, we proposed an index of muscle co-contraction around the joint (IMCJ). The IMCJ was defined as the summation of the absolute values of antagonistic muscle torques around the joint and computed from the linear relation between surface EMG and joint torque. The IMCJ during isometric contraction, as well as during movements, was confirmed to correlate well with joint stiffness estimated using the conventional method, i.e., applying mechanical perturbations. Accordingly, the IMCJ during the learning of the movement was computed for each joint of each trial using the estimated EMG-torque relationship. At the same time, the performance error for each trial was specified as the root mean square of the distance between the target and hand at each time step over the entire trajectory. The time-series data of IMCJ and performance error were decomposed into long-term components, which showed decreases in IMCJ over the course of learning with little change in the trajectory, and short-term interactions between the IMCJ and performance error. A cross-correlation analysis and impulse responses both suggested that higher IMCJs follow poor performances, and lower IMCJs follow good performances, within a few successive trials. Our results support the hypothesis that viscoelasticity contributes more when internal models are inaccurate, while internal models contribute more after the completion of learning. It is demonstrated that the CNS regulates viscoelasticity on a short- and long-term basis depending on performance error and finally acquires smooth and accurate movements while maintaining stability during the entire learning process.
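For concreteness, here is a minimal numerical sketch of the two quantities the abstract defines: the IMCJ as the sum of absolute muscle torques obtained from a linear EMG-torque fit, and the performance error as the RMS hand-target distance. The data shapes, calibration details, and all names below are assumptions for illustration, not the authors' analysis pipeline.

```python
# Assumed shapes: emg (T, n_muscles) rectified surface EMG for one joint;
# joint_torque (T,) from an isometric calibration; hand_xy, target_xy (T, 2).
# Fitted coefficients are signed: positive = agonist, negative = antagonist.
import numpy as np

def fit_emg_torque(emg, joint_torque):
    """Least-squares fit of the linear EMG-torque relation."""
    coef, *_ = np.linalg.lstsq(emg, joint_torque, rcond=None)
    return coef

def imcj(emg, coef):
    """Index of muscle co-contraction around the joint: the sum of
    absolute values of the individual muscle torques, per time step."""
    return np.abs(emg * coef).sum(axis=1)

def performance_error(hand_xy, target_xy):
    """RMS of the hand-target distance over the entire trajectory."""
    d = np.linalg.norm(hand_xy - target_xy, axis=1)
    return np.sqrt(np.mean(d ** 2))
```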