All entries.
Showing Items 1-10 of 605 (page 1 of 61).

r-cran-CoxBoost 1.4

by r-cran-robot - December 1, 2015, 00:00:06 CET [ Project Homepage BibTeX Download ] 23224 views, 4661 downloads, 3 subscriptions

About: Cox models by likelihood based boosting for a single survival endpoint or competing risks


Fetched by r-cran-robot on 2015-12-01 00:00:06.225128

r-cran-e1071 1.6-7

by r-cran-robot - December 1, 2015, 00:00:06 CET [ Project Homepage BibTeX Download ] 20896 views, 4489 downloads, 2 subscriptions

Rating: 4.5/5 (based on 1 vote)

About: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien


Fetched by r-cran-robot on 2015-12-01 00:00:06.355374

r-cran-Boruta 5.0.0

by r-cran-robot - December 1, 2015, 00:00:05 CET [ Project Homepage BibTeX Download ] 14038 views, 2983 downloads, 2 subscriptions

About: Wrapper Algorithm for All-Relevant Feature Selection


Fetched by r-cran-robot on 2015-12-01 00:00:05.244246

ELKI 0.7.0

by erich - November 27, 2015, 18:23:16 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 15783 views, 2873 downloads, 4 subscriptions

About: ELKI is a framework for implementing data-mining algorithms with support for index structures; it includes a wide variety of clustering and outlier detection methods.


Additions and Improvements from ELKI 0.6.0:

ELKI is now available on Maven: de.lmu.ifi.dbs.elki:elki:0.7.0 (jar)

Please clone for a minimal project example.
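The coordinates above translate into a standard Maven dependency entry roughly like the following (the groupId, artifactId and version are taken from this announcement; the XML shape is ordinary Maven convention):

```xml
<dependency>
  <groupId>de.lmu.ifi.dbs.elki</groupId>
  <artifactId>elki</artifactId>
  <version>0.7.0</version>
</dependency>
```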

Uncertain data types, and clustering algorithms for uncertain data.

Major refactoring of distances: removal of Distance values and of support for non-double-valued distance functions (in particular, DoubleDistance was removed). While this reduces the generality of ELKI, it let us drop about 2.5% of the codebase, since separate optimized code paths for double distances are no longer needed. Generics for distances were present in almost every distance-based algorithm, and we were also happy to reduce the use of generics this way. Support for non-double-valued distances can trivially be added again, e.g. by introducing the specialization one level higher, at the query level instead of the distance level. In this process, we also removed the generics from NumberVector. The object-based get was deprecated long ago for good reason, and e.g. doubleValue is more efficient (even for non-DoubleVectors).
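The efficiency argument above (primitive accessors avoid boxing even for non-double vectors) can be illustrated with a minimal sketch; the interface and class names here are hypothetical and deliberately simplified, not ELKI's actual API:

```java
// Hypothetical sketch of the accessor pattern described above: a primitive
// doubleValue(dim) avoids the boxing that an object-based get(dim) incurs.
interface Vec {
    Number get(int dim);         // object-based access: boxes the value
    double doubleValue(int dim); // primitive access: no allocation
}

class FloatVec implements Vec {
    private final float[] data;
    FloatVec(float[] data) { this.data = data; }
    public Number get(int dim) { return data[dim]; }         // autoboxes to Float
    public double doubleValue(int dim) { return data[dim]; } // widens, no boxing
}

public class AccessorDemo {
    // Summing via the primitive accessor works uniformly for any vector type.
    static double sum(Vec v, int len) {
        double s = 0;
        for (int d = 0; d < len; d++) s += v.doubleValue(d);
        return s;
    }

    public static void main(String[] args) {
        Vec v = new FloatVec(new float[] {1f, 2f, 3f});
        System.out.println(sum(v, 3)); // 6.0
    }
}
```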

Dropped some long-deprecated classes.


k-means clustering:

  • Speedups for some initialization heuristics.

  • K-means++ initialization no longer squares distances (again).

  • The farthest-point heuristic now uses the minimum instead of the sum of distances (and was renamed accordingly).

  • Additional evaluation criteria.

  • Elkan's and Hamerly's faster k-means variants.
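The farthest-point heuristic mentioned above can be sketched as a farthest-first traversal: each new center is the point maximizing the minimum distance to the centers chosen so far. This is an illustrative sketch, not ELKI's implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative farthest-point heuristic: each new center maximizes the
// *minimum* distance to the already-chosen centers (not the sum).
public class FarthestFirst {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return Math.sqrt(s);
    }

    static List<double[]> chooseCenters(double[][] data, int k) {
        List<double[]> centers = new ArrayList<>();
        centers.add(data[0]); // arbitrary first center
        while (centers.size() < k) {
            double[] best = null;
            double bestScore = -1;
            for (double[] p : data) {
                // minimum distance from p to the current centers
                double minD = Double.POSITIVE_INFINITY;
                for (double[] c : centers) minD = Math.min(minD, dist(p, c));
                if (minD > bestScore) { bestScore = minD; best = p; }
            }
            centers.add(best);
        }
        return centers;
    }

    public static void main(String[] args) {
        double[][] data = { {0, 0}, {1, 0}, {10, 0}, {10, 1} };
        List<double[]> c = chooseCenters(data, 2);
        // The second center lands far from the first, at (10, 1).
        System.out.println(c.get(1)[0] + ", " + c.get(1)[1]);
    }
}
```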

CLARA clustering.


Hierarchical clustering:

  • Renamed naive algorithm to AGNES.

  • Anderberg's algorithm (faster than AGNES, slower than SLINK).

  • CLINK for complete linkage clustering in O(n²) time, O(n) memory.

  • Simple extraction from HDBSCAN.

  • "Optimal" extraction from HDBSCAN.

  • HDBSCAN, in two variants.

LSDBC clustering.

EM clustering was refactored and moved into its own package. The new version is much more extensible.

OPTICS clustering:

  • Added a list-based variant of OPTICS alongside the heap-based one.

  • FastOPTICS (contributed by Johannes Schneider).

  • Improved OPTICS Xi cluster extraction.

Outlier detection:

  • KDEOS outlier detection (SDM14).

  • k-means-based outlier detection (distance to centroid) and a Silhouette-coefficient-based approach (which does not work too well on the toy data sets: the lowest silhouettes are usually where two clusters touch).

  • Bug fix in kNN weight when distances are tied and kNN yields more than k results.

  • kNN and kNN-weight outlier have their k parameter changed: the old 2NN outlier is now the 1NN outlier, as commonly understood in the classification literature (one nearest neighbor other than the query object, whereas in the database literature the 1NN is usually the query object itself). You can easily get the old result back by decreasing k by one.
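The off-by-one between the two conventions described above can be made concrete with a tiny sketch (illustrative code, not ELKI's): counting the query as its own nearest neighbor shifts every k by one, so the old "2NN" score equals the new "1NN" score.

```java
import java.util.Arrays;

// Sketch of the k-parameter convention change: under the database convention
// the query point is its own 1st nearest neighbor, so its "2NN distance"
// equals the classification-style "1NN distance".
public class KnnConvention {
    // k-th nearest-neighbor distance, counting the query itself (database style).
    static double kthNNIncludingQuery(double[] points, int q, int k) {
        double[] d = new double[points.length];
        for (int i = 0; i < points.length; i++) d[i] = Math.abs(points[i] - points[q]);
        Arrays.sort(d);
        return d[k - 1]; // d[0] == 0 is the self-match
    }

    // Classification style: neighbors other than the query object.
    static double kthNNExcludingQuery(double[] points, int q, int k) {
        return kthNNIncludingQuery(points, q, k + 1); // skip the self-match
    }

    public static void main(String[] args) {
        double[] pts = {0.0, 1.0, 3.0, 7.0};
        // Old "2NN" score equals new "1NN" score, as the changelog notes.
        System.out.println(kthNNIncludingQuery(pts, 0, 2)); // 1.0
        System.out.println(kthNNExcludingQuery(pts, 0, 1)); // 1.0
    }
}
```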

  • The LOCI implementation is now only O(n³ log n) instead of O(n⁴).

  • Local Isolation Coefficient (LIC).

  • IDOS outlier detection with intrinsic dimensionality.

  • Baseline intrinsic dimensionality outlier detection.

  • Variance-of-Volumes outlier detection (VOV).

Parallel computation framework, and some parallelized algorithms:

  • Parallel k-means.

  • Parallel LOF and variants.

LibSVM format parser.

kNN classification (with index acceleration).

Internal cluster evaluation:

  • Silhouette index.

  • Simplified Silhouette index (faster).

  • Davies-Bouldin index.

  • PBM index.

  • Variance-Ratio Criterion.

  • Sum of squared errors.

  • C-Index.

  • Concordant pair indexes (Gamma, Tau).

  • Different noise handling strategies for internal indexes.
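The Silhouette index listed above scores each point as s = (b − a) / max(a, b), where a is the mean distance to the point's own cluster and b the mean distance to the nearest other cluster. A minimal 1-D toy sketch (illustrative only, not ELKI's implementation):

```java
// Minimal illustrative Silhouette computation for one point in a 1-D toy
// clustering: s = (b - a) / max(a, b).
public class SilhouetteDemo {
    static double meanDist(double x, double[] cluster, boolean excludeSelf) {
        double sum = 0;
        int n = 0;
        for (double y : cluster) {
            if (excludeSelf && y == x) { excludeSelf = false; continue; } // skip one self-match
            sum += Math.abs(x - y);
            n++;
        }
        return sum / n;
    }

    static double silhouette(double x, double[] own, double[] other) {
        double a = meanDist(x, own, true);   // mean distance within own cluster
        double b = meanDist(x, other, false); // mean distance to the other cluster
        return (b - a) / Math.max(a, b);
    }

    public static void main(String[] args) {
        double[] c1 = {0.0, 1.0, 2.0};
        double[] c2 = {10.0, 11.0};
        // The point 1.0 sits centrally in c1 and far from c2, so s is near 1.
        System.out.println(silhouette(1.0, c1, c2)); // ≈0.8947
    }
}
```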

Statistical dependence measures:

  • Distance correlation dCor.

  • Hoeffding's D.

  • Some divergence / mutual information measures.

Distance functions:

  • Big refactoring.

  • Time series distances refactored, allow variable length series now.

  • Hellinger distance and kernel function.
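One classic time-series distance that naturally handles the variable-length series mentioned above is dynamic time warping; the changelog does not name a specific measure, so the following is only an illustrative sketch, not ELKI's code:

```java
// Illustrative dynamic time warping (DTW) sketch: a time-series distance
// that compares series of different lengths via an alignment cost matrix.
public class DtwDemo {
    static double dtw(double[] s, double[] t) {
        int n = s.length, m = t.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(s[i - 1] - t[j - 1]);
                // extend the cheapest of match, insertion, deletion
                cost[i][j] = d + Math.min(cost[i - 1][j - 1],
                             Math.min(cost[i - 1][j], cost[i][j - 1]));
            }
        }
        return cost[n][m];
    }

    public static void main(String[] args) {
        // Same shape, different lengths: the DTW distance is 0.
        System.out.println(dtw(new double[] {1, 2, 3}, new double[] {1, 2, 2, 3})); // 0.0
    }
}
```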


  • Faster MDS implementation using power iterations.

Indexing improvements:

  • Precomputed distance matrix "index".

  • iDistance index (static only).

  • Inverted-list index for sparse data and cosine/arccosine distance.

  • Cover tree index (static only).

  • Additional LSH hash functions.

Frequent Itemset Mining:

  • Improved APRIORI implementation.

  • FP-Growth added.

  • Eclat (basic version only) added.

Uncertain clustering:

  • Discrete and continuous data models.

  • FDBSCAN clustering.

  • UKMeans clustering.

  • CKMeans clustering.

  • Representative Uncertain Clustering (Meta-algorithm).

  • Center-of-mass meta Clustering (allows using other clustering algorithms on uncertain objects).


  • Several estimators for intrinsic dimensionality.

The MiniGUI has two "secret" new options, -minigui.last and -minigui.autorun, to load the last saved configuration and run it, for convenience.

The logging API has been extended to make logging more convenient in a number of places (saving some lines for progress logging and timing).

KeLP 2.0.0

by kelpadmin - November 26, 2015, 16:14:53 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 4045 views, 1009 downloads, 3 subscriptions

About: The Kernel-based Learning Platform (KeLP) is a Java framework that supports the implementation of kernel-based learning algorithms, as well as an agile definition of kernel functions over generic data representations, e.g. vectorial data or discrete structures. The framework has been designed to decouple kernel functions and learning algorithms through the definition of specific interfaces. Once a new kernel function has been implemented, it can automatically be adopted in all the available kernel-machine algorithms. KeLP includes different online and batch learning algorithms for classification, regression and clustering, as well as several kernel functions, ranging from vector-based to structural kernels. It allows building complex kernel-machine-based systems, leveraging JSON/XML interfaces to instantiate classifiers without writing a single line of code.
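The kernel/algorithm decoupling described above can be sketched roughly as follows; all names here are hypothetical illustrations of the design pattern, not KeLP's actual API:

```java
// Hypothetical sketch of decoupling kernels from learning algorithms via an
// interface: any Kernel implementation plugs into any kernel-machine algorithm.
interface Kernel<T> {
    double compute(T a, T b);
}

class LinearKernel implements Kernel<double[]> {
    public double compute(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i]; // dot product
        return s;
    }
}

// The algorithm sees only the Kernel interface, so a newly implemented
// kernel is automatically usable by every such algorithm.
class KernelMachine<T> {
    private final Kernel<T> kernel;
    KernelMachine(Kernel<T> kernel) { this.kernel = kernel; }
    double score(T example, T supportVector, double weight) {
        return weight * kernel.compute(example, supportVector);
    }
}

public class KelpStyleDemo {
    public static void main(String[] args) {
        KernelMachine<double[]> m = new KernelMachine<>(new LinearKernel());
        System.out.println(m.score(new double[] {1, 2}, new double[] {3, 4}, 0.5)); // 5.5
    }
}
```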


This is a major release that includes brand new features as well as a renewed architecture of the entire project.

KeLP is now organized in four Maven projects:

  • kelp-core: contains the infrastructure of abstract classes and interfaces for working with KeLP. Furthermore, some implementations of algorithms, kernels and representations are included, to provide a basic operative environment.

  • kelp-additional-kernels: contains several kernel functions that extend the set of kernels made available in the kelp-core project, together with the specific representations required to apply them. The following kernel functions are included: sequence kernels, tree kernels and graph kernels.

  • kelp-additional-algorithms: contains several learning algorithms extending the set provided in the kelp-core project, e.g. the C-Support Vector Machine or ν-Support Vector Machine learning algorithms. In particular, advanced learning algorithms for classification and regression can be found in this package. The algorithms are grouped into: 1) batch learning, where the complete training dataset is assumed to be entirely available during the learning phase; and 2) online learning, where individual examples are exploited one at a time to incrementally acquire the model.

  • kelp-full: the complete package of KeLP. It aggregates the previous modules in one jar, and also contains a set of fully functioning examples showing how to implement a learning system with KeLP. The usage of both batch and online learning algorithms is shown here, and different examples cover standard kernels, tree kernels and sequence kernels, with caching mechanisms.

Furthermore, this new release includes:

  • CsvDatasetReader: allows reading files in CSV format.

  • DCDLearningAlgorithm: an implementation of the Dual Coordinate Descent learning algorithm.

  • Methods for checking the consistency of a dataset.

Check out this new version from our repositories. The API Javadoc is already available. Your suggestions will be very valuable to us, so download and try KeLP 2.0.0!

PROFET 1.0.0

by Hamda - November 26, 2015, 13:20:28 CET [ Project Homepage BibTeX Download ] 170 views, 32 downloads, 1 subscription

About: Software for Automatic Construction and Inference of DBNs Based on Mathematical Models


Initial Announcement.

A Library for Online Streaming Feature Selection 1.0

by ykui713 - November 25, 2015, 13:23:01 CET [ BibTeX Download ] 171 views, 47 downloads, 0 subscriptions

About: LOFS is a software toolbox for online streaming feature selection.


Initial Announcement.

PyScriptClassifier 0.3.0

by cjb60 - November 25, 2015, 04:07:51 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 932 views, 244 downloads, 2 subscriptions

About: Easily prototype WEKA classifiers and filters using Python scripts.



  • Filters have now been implemented.
  • Classifier and filter classes satisfy base unit tests.


  • Can now choose to save the script in the model using the -save flag.


  • Added Python 3 support.
  • Added the uses decorator to prevent non-essential arguments from being passed.
  • Fixed a nasty bug where imputation, binarisation, and standardisation would not actually be applied to test instances.
  • The GUI in WEKA now displays the exception as well.
  • Fixed a bug where single quotes in attribute values could mess up args creation.
  • ArffToPickle now recognises the class index option and arguments.
  • Fixed a nasty bug where filters were not being saved and were made from scratch from test data.


  • ArffToArgs gets temporary folder in a platform-independent way, instead of assuming /tmp/.
  • Can now save args in ArffToPickle using save.


  • Initial release.

r-cran-caret 6.0-62

by r-cran-robot - November 23, 2015, 00:00:00 CET [ Project Homepage BibTeX Download ] 79433 views, 16109 downloads, 3 subscriptions

About: Classification and Regression Training


Fetched by r-cran-robot on 2015-12-01 00:00:05.446562

bandicoot 0.4

by yvesalexandre - November 20, 2015, 17:08:31 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 323 views, 61 downloads, 2 subscriptions

About: An open-source Python toolbox to analyze mobile phone metadata.


Initial Announcement.
