Model Monitor is a Java toolkit for the systematic evaluation of classifiers under distribution change. It provides methods for detecting distribution shifts in data, comparing the performance of multiple classifiers under such shifts, and evaluating the robustness of individual classifiers to distribution change. It thus lets users determine the best model (or models) for their data under a number of potential scenarios. Additionally, Model Monitor is fully integrated with the WEKA machine learning environment, so a variety of off-the-shelf classifiers can be used if desired.
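To illustrate the kind of shift detection described above, here is a minimal Java sketch that compares a training sample and a deployment sample of one feature with the two-sample Kolmogorov-Smirnov statistic. The class and method names are ours for illustration only, not Model Monitor's actual API, and the toolkit may use different tests internally.

```java
import java.util.Arrays;

public class ShiftCheck {
    // Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    // the empirical CDFs of the two samples. Values near 0 suggest the
    // samples come from the same distribution; values near 1 suggest shift.
    public static double ksStatistic(double[] a, double[] b) {
        double[] x = a.clone(), y = b.clone();
        Arrays.sort(x);
        Arrays.sort(y);
        int i = 0, j = 0;
        double d = 0.0;
        while (i < x.length && j < y.length) {
            // Step past every copy of the smaller current value in both
            // arrays before comparing CDFs, so ties are handled correctly.
            double v = Math.min(x[i], y[j]);
            while (i < x.length && x[i] == v) i++;
            while (j < y.length && y[j] == v) j++;
            double cdfA = (double) i / x.length;
            double cdfB = (double) j / y.length;
            d = Math.max(d, Math.abs(cdfA - cdfB));
        }
        return d;
    }

    public static void main(String[] args) {
        double[] train  = {0.1, 0.2, 0.3, 0.4, 0.5};
        double[] deploy = {1.1, 1.2, 1.3, 1.4, 1.5}; // disjoint ranges: maximal shift
        System.out.println(ShiftCheck.ksStatistic(train, deploy)); // prints 1.0
    }
}
```

In practice the statistic would be compared against a critical value (depending on sample sizes and a chosen significance level) to decide whether a shift has occurred.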
Some of the techniques implemented in the software come from our papers:
David A. Cieslak and Nitesh V. Chawla, "Detecting Fracture Points in Classifier Performance," 7th IEEE International Conference on Data Mining (ICDM), pp. 123-132, 2007.
David A. Cieslak and Nitesh V. Chawla, "A Framework for Monitoring Classifiers' Performance: When and Why Failure Occurs?", Knowledge and Information Systems, 2008.
Interested parties can find both papers on our website.
Changes to previous version:
- Improved AUROC calculation.
- Several minor bug fixes.
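For readers unfamiliar with the metric behind that change, AUROC (area under the ROC curve) can be computed via the rank-sum formulation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counted as one half. The sketch below is our own illustration of that computation, not Model Monitor's implementation.

```java
public class Auroc {
    // AUROC via the Mann-Whitney / rank-sum formulation: compare every
    // (positive, negative) pair of scores and average the outcomes.
    // A concordant pair counts 1, a tie counts 0.5.
    public static double auroc(double[] scores, boolean[] positive) {
        long pos = 0, neg = 0;
        for (boolean p : positive) {
            if (p) pos++; else neg++;
        }
        double concordant = 0.0;
        for (int i = 0; i < scores.length; i++) {
            if (!positive[i]) continue;
            for (int j = 0; j < scores.length; j++) {
                if (positive[j]) continue;
                if (scores[i] > scores[j]) concordant += 1.0;
                else if (scores[i] == scores[j]) concordant += 0.5;
            }
        }
        return concordant / (pos * neg);
    }
}
```

A perfect ranking (all positives scored above all negatives) yields 1.0; a random ranking yields about 0.5. The pairwise loop is quadratic; production implementations typically sort the scores once and work with ranks instead.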