In classification, a large number of features often makes the design of a classifier difficult and degrades its performance. This is particularly pronounced when the number of examples is small relative to the number of features, owing to the curse of dimensionality. Many dimensionality reduction techniques exist in the literature. However, most of these techniques are either informative (i.e., minimizing information loss), as in principal component analysis (PCA), or discriminant, as in linear discriminant analysis (LDA). Each type of technique has its strengths and weaknesses. Motivated by Gaussian Process Latent Variable Models, we propose a simple linear projection technique that combines the characteristics of both PCA and LDA in the latent representation. The proposed technique optimizes a regularized information-preserving objective, where the regularizer is an LDA-based criterion. As such, it favors a latent space that is both informative and discriminant, thereby providing better generalization performance. Experimental results on a variety of data sets are provided to validate the proposed technique.
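One way to illustrate the flavor of such a regularized objective is a sketch that seeks a projection maximizing a weighted sum of a PCA-style (total scatter) term and an LDA-style (between-class scatter) term. This is only an illustrative variant under assumed definitions, not the paper's exact formulation; the function name, the trade-off parameter `lam`, and the eigendecomposition-based solver are assumptions for the sketch.

```python
import numpy as np

def informative_discriminant_projection(X, y, d, lam=1.0):
    """Illustrative sketch: find W maximizing
        tr(W^T (S_t + lam * S_b) W)  subject to  W^T W = I,
    where S_t (total scatter) plays the information-preserving PCA role
    and S_b (between-class scatter) is an LDA-based regularizer.
    lam trades off the two terms. Not the paper's exact objective.
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    Xc = X - mu
    S_t = Xc.T @ Xc / len(X)          # total scatter (informative term)
    S_b = np.zeros_like(S_t)          # between-class scatter (discriminant term)
    for c in np.unique(y):
        Xk = X[y == c]
        diff = (Xk.mean(axis=0) - mu)[:, None]
        S_b += (len(Xk) / len(X)) * (diff @ diff.T)
    # Top-d eigenvectors of the regularized scatter give the projection.
    vals, vecs = np.linalg.eigh(S_t + lam * S_b)
    W = vecs[:, np.argsort(vals)[::-1][:d]]
    return W

# Usage: project a toy two-class data set to a 1-D latent space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)) + 3.0,
               rng.normal(0, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
W = informative_discriminant_projection(X, y, d=1)
Z = X @ W   # latent representation, shape (100, 1)
```

With `lam = 0` the sketch reduces to plain PCA on the total scatter; increasing `lam` pulls the latent space toward directions that also separate the class means.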