This package performs multi-class vector classification based on cost-function-driven learning vector quantization, originally proposed as generalized LVQ (GLVQ) by Sato & Yamada (1996).
The goal is to optimize prototypes representing 'average' vectors for each class; that is, a class-related cluster structure of the data cloud is assumed.
Here, the extension of Hammer & Villmann (2002), called GRLVQ, is provided: weights (relevances) of the squared Euclidean data metric are learned along with the prototypes. A variant is also included that learns metric relevances expressed by quadratic matrix forms, similar to MRLVQ by Schneider, Biehl, and Hammer (2009). Metric adaptation can help reduce the curse of dimensionality for high-dimensional data.
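The two adaptive metrics can be sketched in a few lines of NumPy. This is an illustrative sketch, not the package's actual API; the function names and the shape convention for the projection matrix are assumptions.

```python
import numpy as np

def grlvq_dist(x, w, lam):
    # GRLVQ: squared Euclidean distance with per-dimension relevance
    # weights lam (non-negative, typically normalized to sum to 1)
    d = x - w
    return float(np.sum(lam * d * d))

def matrix_dist(x, w, omega):
    # Matrix variant: quadratic form (x-w)^T Omega^T Omega (x-w).
    # Omega has shape (n_vec, dim); choosing n_vec < dim yields a
    # low-rank metric, i.e. a learned discriminative subspace.
    p = omega @ (x - w)
    return float(p @ p)
```

With all relevances equal (or Omega the identity), both reduce to the plain squared Euclidean distance; learning shifts weight toward the discriminative dimensions.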
In contrast to existing implementations, this package provides second-order batch optimization (l-BFGS) for the learning, which, in combination with a steep discriminative function, allows much faster and somewhat more accurate training than standard stochastic gradient descent.
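The batch idea can be illustrated with SciPy's off-the-shelf L-BFGS-B optimizer minimizing the summed sigmoidal GLVQ costs over the whole data set. This is a minimal sketch under assumed choices (one prototype per class, class-mean initialization, steepness beta), not the package's interface:

```python
import numpy as np
from scipy.optimize import minimize

def glvq_batch_cost(theta, X, y, proto_labels, beta=5.0):
    # Summed GLVQ costs f(mu) with mu = (d+ - d-)/(d+ + d-),
    # where d+ / d- are distances to the closest correct / wrong prototype
    W = theta.reshape(len(proto_labels), X.shape[1])
    D = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    same = y[:, None] == proto_labels[None, :]
    d_plus = np.where(same, D, np.inf).min(axis=1)
    d_minus = np.where(~same, D, np.inf).min(axis=1)
    mu = (d_plus - d_minus) / (d_plus + d_minus)
    return float(np.sum(1.0 / (1.0 + np.exp(-beta * mu))))

def fit_glvq(X, y, beta=5.0):
    # One prototype per class, initialized at the class means,
    # then refined by batch l-BFGS on the full cost function
    classes = np.unique(y)
    W0 = np.stack([X[y == c].mean(axis=0) for c in classes])
    res = minimize(glvq_batch_cost, W0.ravel(),
                   args=(X, y, classes, beta), method="L-BFGS-B")
    return res.x.reshape(W0.shape), classes
```

Each l-BFGS step uses curvature information accumulated over the whole batch, which is what buys the speed-up over noisy per-sample gradient steps.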
For Euclidean metrics, GRLVQ was shown by Hammer, Strickert, and Villmann (2005) to be a large-margin optimizer in the flavour of support vector machines, but with compact and interpretable class representatives living in the data space. In the limit of a step discriminative function, the cost function directly minimizes the number of misclassifications.
For practical use, just define the desired number of prototypes per class. The convergence criteria can be adjusted on demand. In matrix learning, you can specify the dimension of the subspace, that is, the rank n_vec of the quadratic matrix form.
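To make the two user choices concrete, here is a hedged setup sketch: a hypothetical helper (not part of the package) that places n_per_class prototypes per class at perturbed class means and draws a random rank-n_vec projection Omega for the matrix metric:

```python
import numpy as np

def init_model(X, y, n_per_class=2, n_vec=2, seed=0):
    # Hypothetical initialization: prototypes as slightly perturbed
    # class means, Omega of shape (n_vec, dim) for the rank-n_vec metric
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        mean_c = X[y == c].mean(axis=0)
        for _ in range(n_per_class):
            protos.append(mean_c + 0.01 * rng.standard_normal(X.shape[1]))
            labels.append(c)
    omega = rng.standard_normal((n_vec, X.shape[1]))
    return np.array(protos), np.array(labels), omega
```

Both initial prototypes and Omega would then be refined jointly by the batch optimization.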
- Changes to previous version:
Initial Announcement on mloss.org.