Project details for Sparse Compositional Metric Learning

Sparse Compositional Metric Learning v1.1

by bellet - August 16, 2015, 16:41:20 CET


Description:

Sparse Compositional Metric Learning is a software package for learning metrics as sparse combinations of simple basis elements (obtained, for instance, from Linear Discriminant Analysis), which allows it to scale well with the data dimensionality. It can learn a single global metric or multiple local metrics that vary smoothly across the feature space. It also supports the multi-task setting, where a metric is learned for each task in a coupled fashion. All formulations are solved in a scalable way using stochastic optimization techniques.
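As a rough sketch of the parameterization described above (illustrative only, not the package's actual Matlab API): the learned Mahalanobis metric is a non-negative, sparse combination of rank-one basis elements, M = Σ_i w_i b_i b_iᵀ, so sparsity in the weights w keeps the metric cheap to store and apply even in high dimension.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
dim, n_bases = 5, 20

# Rank-one basis directions b_i (in the paper these come e.g. from LDA)
B = rng.standard_normal((n_bases, dim))

# Sparse, non-negative weights: only a few bases are active
w = np.zeros(n_bases)
w[[2, 7, 11]] = [0.5, 1.2, 0.3]

# Compositional metric M = sum_i w_i * b_i b_i^T
# (B.T * w) scales column i of B.T by w[i], so the product sums the rank-one terms
M = (B.T * w) @ B

x, y = rng.standard_normal(dim), rng.standard_normal(dim)
print(mahalanobis_dist(x, y, M))  # non-negative, since M is PSD by construction
```

Because M is a sum of three rank-one PSD terms here, it has rank at most 3, which is what makes the representation compact when few bases are selected.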

For more information / citation, refer to:

Y. Shi, A. Bellet and F. Sha. Sparse Compositional Metric Learning. AAAI Conference on Artificial Intelligence (AAAI), 2014, 2078-2084.

http://perso.telecom-paristech.fr/~abellet/papers/sparse_metric_learning_aaai14.html

Changes to previous version:

Various minor bug fixes and improvements. Basis and triplet generation now fully supports datasets with very small classes and arbitrary labels (labels need not be consecutive or positive). The computational and memory efficiency of the code on high-dimensional data has been greatly improved, and a rectangular (smaller) projection matrix is generated when the number of selected basis elements is smaller than the dimension. k-NN classification with local metrics has been optimized and is now significantly less costly in both time and memory.
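The rectangular projection mentioned above follows from the metric's structure (a sketch under illustrative names, not the package's code): if only k basis elements are active, M = Σ w_i b_i b_iᵀ factors as LᵀL with L = diag(√w) B restricted to the active rows, so L is k × d and distances can be computed after projecting into the k-dimensional space instead of storing the full d × d matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_bases = 100, 50
B = rng.standard_normal((n_bases, dim))

# Only k = 3 of the 50 basis elements carry non-zero weight
w = np.zeros(n_bases)
active = [3, 10, 20]
w[active] = [0.4, 1.0, 0.7]

# Full metric: M = B^T diag(w) B, a dim x dim matrix
M = (B.T * w) @ B

# Rectangular projection: L = diag(sqrt(w_active)) B_active, shape (k, dim)
L = np.sqrt(w[active])[:, None] * B[active]

x, y = rng.standard_normal(dim), rng.standard_normal(dim)
d_full = (x - y) @ M @ (x - y)          # uses the dim x dim metric
d_proj = np.sum((L @ (x - y)) ** 2)     # uses only the k x dim projection
print(np.isclose(d_full, d_proj))       # True: identical distance, far less memory
```

Here the projection stores 3 × 100 numbers instead of 100 × 100, which is the efficiency gain the changelog refers to when the number of selected bases is below the dimension.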

BibTeX Entry: Download
Corresponding Paper BibTeX Entry: Download
Supported Operating Systems: Agnostic
Data Formats: Any Format Supported By Matlab
Tags: Sparsity, Multi Task, Metric Learning, Local Metrics
Archive: download here

Other available revisions

Version Changelog Date
v1.1

Various minor bug fixes and improvements. Basis and triplet generation now fully supports datasets with very small classes and arbitrary labels (labels need not be consecutive or positive). The computational and memory efficiency of the code on high-dimensional data has been greatly improved, and a rectangular (smaller) projection matrix is generated when the number of selected basis elements is smaller than the dimension. k-NN classification with local metrics has been optimized and is now significantly less costly in both time and memory.

August 16, 2015, 16:41:20
v1

Initial Announcement on mloss.org.

May 28, 2014, 09:54:10
