Disco is an open-source implementation of the MapReduce framework for distributed computing. Like the original framework, Disco supports parallel computation over large data sets on unreliable clusters of computers. You don't need your own cluster to use Disco: a script is provided that installs Disco automatically on Amazon's EC2 computing cloud, where you can get computing resources on demand.
The Disco core is written in Erlang, a functional language designed for building robust, fault-tolerant distributed applications. Users of Disco typically write jobs in Python, which makes it possible to express even complex algorithms or data-processing tasks in just tens of lines of code. This means that you can quickly write scripts to process massive amounts of data.
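To illustrate the kind of job a Disco user writes, here is a minimal word-count sketch of the MapReduce pattern in plain Python. The Disco-specific job-submission API is omitted; `map_fn` and `reduce_fn` are hypothetical names chosen for this example, not part of Disco itself.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield word, 1

def reduce_fn(pairs):
    # Reduce phase: group pairs by key (requires sorted input)
    # and sum the counts for each word.
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

lines = ["to be or not to be"]
pairs = [kv for line in lines for kv in map_fn(line)]
counts = dict(reduce_fn(pairs))
# counts == {"be": 2, "not": 1, "or": 1, "to": 2}
```

In a real Disco job, the framework would run the map function on many input chunks in parallel across the cluster and feed the sorted intermediate pairs to the reduce function.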
Disco was started at Nokia Research Center as a lightweight framework for rapid scripting of distributed data-processing tasks. So far Disco has been successfully used, for instance, in parsing and reformatting data, data clustering, probabilistic modelling, data mining, full-text indexing, and log analysis, with hundreds of gigabytes of real-world data on hundreds of CPUs in parallel.
Many well-known machine learning and data mining methods map cleanly to the MapReduce framework (see the paper Map-Reduce for Machine Learning on Multicore for examples).
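A common reason these methods fit MapReduce is that they only need sums computed over the data. As a toy sketch (plain Python, not the Disco API; the names `map_fn` and `reduce_fn` are hypothetical), here is how a global mean decomposes: each mapper emits a partial (sum, count) for its chunk, and the reducer combines them.

```python
def map_fn(chunk):
    # Map phase: each worker emits a partial (sum, count) for its chunk.
    yield "stat", (sum(chunk), len(chunk))

def reduce_fn(pairs):
    # Reduce phase: combine the partial sums and counts into a global mean.
    total = n = 0
    for _, (s, c) in pairs:
        total += s
        n += c
    return total / n

chunks = [[1.0, 2.0], [3.0, 4.0, 5.0]]
pairs = [kv for chunk in chunks for kv in map_fn(chunk)]
mean = reduce_fn(pairs)
# mean == 3.0
```

Many estimators (means, covariances, sufficient statistics for EM, gradient sums) follow this same pattern, which is why they parallelize well on a cluster.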
Disco includes example implementations of the following ML methods:
In addition to these examples, we know that Disco has been used at least in the following tasks:
- Learning Hidden Markov Models
- Frequent itemset mining
- Full-text indexing
Changes to previous version:
Initial Announcement on mloss.org.