Abstract
Local discriminative learning methods approximate a target function (the a posteriori class probability function) directly by partitioning the feature space into a set of local regions and modeling a simple input-output relationship in each one. This paper presents a new method for judiciously partitioning the input feature space so as to represent the target function accurately. The method accomplishes this by approximating not only the target function itself but also its derivatives. Consequently, the method partitions the input feature space along those dimensions for which the class probability function changes most rapidly, thereby minimizing bias. The efficacy of the method is validated on a variety of simulated and real-world data.
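To make the idea concrete, the sketch below shows one crude way a partitioning criterion of this flavor could look. It is not the paper's algorithm: the binned total-variation proxy for how fast P(y=1|x) changes along a dimension, the median split threshold, the constant-probability leaves, and the names `split_dimension`, `recursive_partition`, `n_bins`, `max_depth`, and `min_leaf` are all illustrative assumptions, whereas the paper estimates derivatives of the posterior and fits a simple local model in each region.

```python
import numpy as np

def split_dimension(X, y, n_bins=10):
    """Pick the feature along which the binned estimate of P(y=1 | x)
    changes most (a crude proxy for the dimension of fastest change).

    X : (n_samples, n_features) real-valued features
    y : (n_samples,) 0/1 class labels
    """
    best_dim, best_change = 0, -np.inf
    for d in range(X.shape[1]):
        # Bin the d-th feature by quantiles and estimate P(y=1 | bin).
        edges = np.quantile(X[:, d], np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, X[:, d], side="right") - 1,
                      0, n_bins - 1)
        probs = np.array([y[idx == b].mean()
                          for b in range(n_bins) if np.any(idx == b)])
        # Total variation of the binned probability along this dimension.
        change = np.abs(np.diff(probs)).sum() if probs.size > 1 else 0.0
        if change > best_change:
            best_dim, best_change = d, change
    return best_dim

def recursive_partition(X, y, depth=0, max_depth=3, min_leaf=20):
    """Recursively split the feature space along the dimension chosen by
    split_dimension, storing a constant probability estimate per region."""
    if depth >= max_depth or len(y) < 2 * min_leaf or y.min() == y.max():
        return {"leaf": True, "p1": float(y.mean())}
    d = split_dimension(X, y)
    thr = float(np.median(X[:, d]))          # illustrative choice of threshold
    left, right = X[:, d] <= thr, X[:, d] > thr
    if left.sum() < min_leaf or right.sum() < min_leaf:
        return {"leaf": True, "p1": float(y.mean())}
    return {
        "leaf": False, "dim": d, "thr": thr,
        "lo": recursive_partition(X[left], y[left], depth + 1, max_depth, min_leaf),
        "hi": recursive_partition(X[right], y[right], depth + 1, max_depth, min_leaf),
    }
```

A tree built this way assigns a new point to its leaf region and returns that region's constant class-probability estimate; the paper's method differs in that derivative information of the posterior guides where and along which dimension to split.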
Original language | English |
---|---|
Pages (from-to) | 139-150 |
Number of pages | 12 |
Journal | Pattern Recognition |
Volume | 34 |
Issue number | 1 |
DOIs | |
State | Published - Jan 2001 |
Keywords
- Discriminative learning
- Feature relevance
- Local learning
- Recursive partitioning
- Statistical pattern classification