Paper 4
Learning Through Non-linearly Supervised Dimensionality Reduction
Authors: Josif Grabocka and Lars Schmidt-Thieme
Abstract
Dimensionality reduction is a crucial ingredient of machine learning and data mining, boosting classification accuracy by isolating patterns through the omission of noise. Recent studies have shown that dimensionality reduction can benefit from label information, via a joint estimation of predictors and target variables from a low-rank representation. Inspired by this, we propose a novel dimensionality reduction method that simultaneously reconstructs the predictors using matrix factorization and estimates the target variable via a dual-form maximum-margin classifier in the latent space. In contrast to existing studies, which conduct the decomposition under linear supervision of the targets, our method reconstructs the labels using nonlinear functions. If the hyperplane separating the class regions in the original data space is nonlinear, then a nonlinear dimensionality reduction helps improve generalization to test instances. The joint optimization function is learned through a coordinate descent algorithm with stochastic updates. Empirical results demonstrate the superiority of the proposed method over classification in the original space (no reduction), classification after unsupervised reduction, and classification using a linearly supervised projection.
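The abstract's core idea, jointly minimizing a matrix-factorization reconstruction loss and a nonlinear classification loss on the latent representation via stochastic coordinate descent, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name `supervised_mf`, the simple polynomial feature map `phi` (standing in for the paper's dual-form maximum-margin classifier), and all hyperparameter values are assumptions introduced here.

```python
import numpy as np

def supervised_mf(X, y, rank=2, lam=1.0, lr=0.01, epochs=200, seed=0):
    """Sketch: learn U (n x rank) and V (d x rank) so that X ~= U V^T,
    while a hinge loss on a simple nonlinear map of U fits labels
    y in {-1, +1}. The nonlinear map is a stand-in for the paper's
    dual-form maximum-margin classifier."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((d, rank))
    w = 0.1 * rng.standard_normal(rank * 2)  # weights on nonlinear features

    def phi(u):
        # Simple nonlinear feature map [u, u^2]; an illustrative choice.
        return np.concatenate([u, u ** 2])

    for _ in range(epochs):
        for i in rng.permutation(n):  # stochastic, per-instance updates
            # Reconstruction gradients for row i of X.
            err = X[i] - U[i] @ V.T            # (d,)
            gU = -err @ V                       # grad of ||err||^2/2 wrt U[i]
            gV = -np.outer(err, U[i])           # grad wrt V
            # Hinge-loss gradients on the latent representation.
            f = phi(U[i])
            margin = y[i] * (w @ f)
            if margin < 1:
                gw = -y[i] * f
                # d(w . phi(u))/du for phi(u) = [u, u^2]
                gU_cls = -y[i] * (w[:rank] + 2 * w[rank:] * U[i])
            else:
                gw = 0.0
                gU_cls = 0.0
            # Coordinate-wise stochastic updates.
            U[i] -= lr * (gU + lam * gU_cls)
            V -= lr * gV
            w -= lr * lam * gw
    return U, V, w, phi
```

The alternating per-instance updates mirror the coordinate-descent-with-stochastic-updates scheme the abstract describes: each visited row adjusts its latent vector for both reconstruction and label fit, while V and the classifier weights are updated in turn.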