# SHOGUN 4.0.0

Source: mloss.org (Updates and additions to SHOGUN), Thu, 05 Feb 2015
http://mloss.org/software/view/2/

## Overview

The **SHOGUN machine learning toolbox** focuses on large-scale kernel methods, and especially on Support Vector Machines (SVMs). It comes with a generic interface for kernel machines and features 15 different SVM implementations, all of which access features in a unified way, either via a general kernel framework or, in the case of linear SVMs, via so-called "DotFeatures", i.e., features providing a minimalistic set of operations (such as the dot product).

## Features

**SHOGUN** includes the LinAdd acceleration for string kernels and the COFFIN framework for on-demand computation of features for the contained linear SVMs. In addition, it contains more advanced Multiple Kernel Learning, Multi-Task Learning and Structured Output learning algorithms, as well as other linear methods. **SHOGUN** digests input feature objects of essentially any known type, e.g., dense, sparse, or variable-length (string) features of type char/byte/word/int/long int/float/double/long double.

The toolbox provides efficient implementations of 35 different kernels, among them the

- Linear,
- Polynomial,
- Gaussian, and
- Sigmoid kernels,

and also a number of recent string kernels, such as the

- Locality Improved,
- Fisher,
- TOP,
- Spectrum, and
- Weighted Degree kernel (with shifts).

For the latter, the efficient LinAdd optimizations are implemented. **SHOGUN** also offers the freedom of working with custom pre-computed kernels. One of its key features is the combined kernel, which can be constructed as a weighted linear combination of a number of sub-kernels, each of which need not work on the same domain; an optimal sub-kernel weighting can be learned using Multiple Kernel Learning (a sketch follows the interface overview below). Currently, one-class, two-class and multi-class SVM classification as well as regression problems are supported. **SHOGUN** also implements a number of linear methods, such as

- Linear Discriminant Analysis (LDA),
- the Linear Programming Machine (LPM), and
- Perceptrons,

and features algorithms to train Hidden Markov Models.

Input feature objects can be read from plain ASCII files (tab-separated values for dense matrices; libsvm/svmlight format for sparse matrices), from an efficient native binary format, and from the HDF5-based format, with general support for

- dense,
- sparse, and
- string features of various types

that can often be converted into each other. Chains of preprocessors (e.g., subtracting the mean) can be attached to each feature object, allowing for on-the-fly pre-processing.

## Structure and Interfaces

**SHOGUN**'s core is implemented in C++ and is provided as a library, libshogun, that is readily usable by C++ application developers. Its common interface functions are encapsulated in libshogunui, so that only minimal code (such as setting or getting a double matrix to/from the target language) is necessary. This allowed us to easily create interfaces to Matlab(tm), R, Octave and Python. (Note that both modular object-oriented and static interfaces are provided: r, octave, matlab, python, python_modular, r_modular, octave_modular, cmdline, libshogun.)
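To give a feel for the modular interfaces, here is a minimal sketch of training a Gaussian-kernel SVM through python_modular. The `modshogun` module name and the class names (`RealFeatures`, `BinaryLabels`, `GaussianKernel`, `LibSVM`) follow SHOGUN's documented examples for the 4.0 series, but treat the exact signatures as assumptions rather than a definitive reference:

```python
# Minimal sketch of the python_modular interface; assumes SHOGUN 4.0's
# "modshogun" Python module is installed.
from numpy import concatenate, ones
from numpy.random import randn
from modshogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

# toy two-class problem: two Gaussian blobs in 2D (one example per column)
X = concatenate((randn(2, 50) - 1, randn(2, 50) + 1), axis=1)
y = concatenate((-ones(50), ones(50)))

feats = RealFeatures(X)                     # dense double-precision features
labels = BinaryLabels(y)                    # +/-1 labels
kernel = GaussianKernel(feats, feats, 1.0)  # Gaussian kernel of width 1.0

svm = LibSVM(10.0, kernel, labels)          # regularization constant C = 10
svm.train()
out = svm.apply(feats)                      # BinaryLabels holding predictions
print(out.get_labels()[:5])
```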
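The combined kernel and MKL machinery mentioned under Features can be sketched along the same lines. This reuses `feats` and `labels` from the previous snippet and assumes the `CombinedFeatures`, `CombinedKernel` and `MKLClassification` classes from modshogun; the constructor arguments shown (kernel cache size, width, degree) are illustrative:

```python
# Hedged sketch of a combined kernel whose sub-kernel weights are learned
# via Multiple Kernel Learning; reuses feats/labels from the sketch above.
from modshogun import (CombinedFeatures, CombinedKernel, GaussianKernel,
                       PolyKernel, MKLClassification)

# one feature object per sub-kernel (they need not share a domain;
# here both simply reuse the same dense features)
comb_feats = CombinedFeatures()
comb_feats.append_feature_obj(feats)
comb_feats.append_feature_obj(feats)

comb_kernel = CombinedKernel()
comb_kernel.append_kernel(GaussianKernel(10, 1.0))  # cache size 10, width 1.0
comb_kernel.append_kernel(PolyKernel(10, 2))        # cache size 10, degree 2
comb_kernel.init(comb_feats, comb_feats)

mkl = MKLClassification()       # an SVM that also learns sub-kernel weights
mkl.set_C(1.0, 1.0)
mkl.set_kernel(comb_kernel)
mkl.set_labels(labels)
mkl.train()
print(comb_kernel.get_subkernel_weights())  # the learned weighting
```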
## Application

We have successfully applied **SHOGUN** to several problems from computational biology, such as super-family classification, splice-site prediction, interpreting the SVM classifier, splice-form prediction, alternative splicing, and promoter prediction. Some of them come with no less than 10 million training examples, others with 7 billion test examples.

## Documentation

We use Doxygen for both user and developer documentation, which may be read online [here](http://www.shogun-toolbox.org/doc/). More than 600 documented examples for the interfaces python_modular, octave_modular, r_modular, static python, static matlab and octave, static r, static command line, and the C++ libshogun developer interface can be found in the documentation.

Authors: Soeren Sonnenburg, Gunnar Raetsch, Sergey Lisitsyn, Heiko Strathmann, Viktor Gal, Fernando Iglesias

## Comments

**Soeren Sonnenburg (2008-09-12 16:14):** In case you find bugs, feel free to report them at http://trac.tuebingen.mpg.de/shogun.

**Tom Fawcett (2011-01-03 03:20):** You say, "Some of them come with no less than 10 million training examples, others with 7 billion test examples." I'm not sure what this means. I have problems with mixed symbolic/numeric attributes where the training example sets don't fit in memory. Does SHOGUN require that training examples fit in memory?

**Soeren Sonnenburg (2011-01-14 18:12):** Shogun does not necessarily require examples to be in memory (if you use any of the FileFeatures). However, most algorithms within Shogun are batch-type, so using the non-in-memory FileFeatures would probably be very slow.

This does not matter for doing predictions, of course; the 7 billion test examples above refer to predicting gene starts on the whole human genome (about 3.5 GB in memory, with a context window of 1200 nt shifted around in that string).

In addition, one can compute features (or the feature space) on the fly, potentially saving a lot of memory.

Not sure how big your problem is, but I guess this is better discussed on the Shogun mailing list.

**Yuri Hoffmann (2013-09-14 17:12):** Cannot use the Java interface in Cygwin (already reported on GitHub) nor in Debian.
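To make the in-memory versus file-backed distinction in the thread above concrete, loading the sparse libsvm/svmlight format mentioned under Features might look as follows. `LibSVMFile` and `SparseRealFeatures` are taken from SHOGUN's I/O and feature classes, but the exact constructor form, and the path `train.libsvm`, are assumptions for illustration:

```python
# Illustrative sketch: parsing a sparse libsvm/svmlight-format file into
# in-memory sparse features. "train.libsvm" is a hypothetical path.
from modshogun import LibSVMFile, SparseRealFeatures

f = LibSVMFile("train.libsvm")   # the sparse ascii format described above
feats = SparseRealFeatures(f)    # reads the whole file into memory
print(feats.get_num_vectors(), feats.get_dim_feature_space())
```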