In many machine learning applications, data are described by a large number of features or attributes. Too many features, however, can lead to overfitting, particularly when the number of examples is smaller than the number of features. The problem can be mitigated by learning latent variable models, in which the data are described by a smaller number of latent dimensions. Many techniques for learning latent variable models exist in the literature. Most can be grouped into two classes: informative techniques, represented by principal component analysis (PCA), and discriminant techniques, represented by linear discriminant analysis (LDA). Each class has its own advantages. In this work, we introduce a technique for learning latent variable models with discriminant regularization that combines the characteristics of both classes. An empirical evaluation on a variety of data sets verifies the performance of the proposed technique.
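To make the contrast between the two classes concrete, the sketch below (an illustration only, not the paper's proposed method) computes the leading PCA direction and the Fisher LDA direction for a synthetic two-class data set using NumPy; the data layout and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes: most variance lies along the first axis,
# but the class means differ only along the second axis.
X0 = rng.normal([0.0, 0.0], [3.0, 0.3], size=(200, 2))
X1 = rng.normal([0.0, 2.0], [3.0, 0.3], size=(200, 2))
X = np.vstack([X0, X1])

# Informative view (PCA): direction of maximum variance, labels ignored.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = Vt[0]

# Discriminant view (Fisher LDA): direction that best separates the classes,
# w proportional to S_w^{-1} (mu1 - mu0), with S_w the within-class scatter.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
      + np.cov(X1, rowvar=False) * (len(X1) - 1))
lda_dir = np.linalg.solve(Sw, mu1 - mu0)
lda_dir /= np.linalg.norm(lda_dir)

# On this data the PCA direction aligns with the high-variance first axis,
# while the LDA direction aligns with the class-separating second axis.
print(abs(pca_dir[0]), abs(lda_dir[1]))
```

On data like this the two views disagree: PCA keeps the high-variance axis that carries no class information, while LDA keeps the low-variance axis that separates the classes, which is the trade-off a combined, regularized model would aim to balance.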