All entries.

KEEL (Knowledge Extraction based on Evolutionary Learning) 3.0

by keel - September 18, 2015, 12:38:54 CET [ Project Homepage BibTeX Download ] 628 views, 189 downloads, 1 subscription

About: KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge data discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, statistical methodologies for contrasting experiments, and so forth. It allows users to perform a complete analysis of new computational intelligence proposals in comparison to existing ones. Moreover, KEEL has been designed with a two-fold goal: research and education. KEEL is also coupled with KEEL-dataset, a webpage that aims at providing machine learning researchers with a set of benchmarks to analyze the behavior of learning methods. Concretely, it is possible to find benchmarks already formatted in the KEEL format for classification (such as standard, multi-instance or imbalanced data), semi-supervised classification, regression, time series and unsupervised learning. Also, a set of low-quality data benchmarks is maintained in the repository.


Initial Announcement.

WEKA 3.7.13

by mhall - September 11, 2015, 04:55:02 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 51546 views, 7633 downloads, 4 subscriptions

Rating: 4/5 stars (based on 6 votes)

About: The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modelling, together with graphical user interfaces for easy access to this [...]


In core weka:

  • Numerically stable implementation of variance calculation in core Weka classes - thanks to Benjamin Weber
  • Unified expression parsing framework (with compiled expressions) is now employed by filters and tools that use mathematical/logical expressions - thanks to Benjamin Weber
  • Developers can now specify GUI and command-line options for their Weka schemes via a new unified annotation-based mechanism
  • ClassConditionalProbabilities filter - replaces the value of a nominal attribute in a given instance with its probability given each of the possible class values
  • GUI package manager's available list now shows both packages that are not currently installed, and those installed packages for which there is a more recent version available that is compatible with the base version of Weka being used
  • ReplaceWithMissingValue filter - allows values to be randomly (with a user-specified probability) replaced with missing values. Useful for experimenting with methods for imputing missing values (a usage sketch follows this list)
  • WrapperSubsetEval can now use plugin evaluation metrics
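
To make the ReplaceWithMissingValue item above concrete, here is a minimal, untested Java sketch of running it as an ordinary Weka filter. The package path (weka.filters.unsupervised.attribute) and the "-P" (replacement probability) and "-S" (seed) option flags are assumptions inferred from the description, not taken from the release notes.

```java
// Hypothetical usage sketch for the new ReplaceWithMissingValue filter.
// The package path and the "-P"/"-S" option flags are assumptions.
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.ReplaceWithMissingValue;

public class MissingValueDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");            // any ARFF dataset
        data.setClassIndex(data.numAttributes() - 1);

        ReplaceWithMissingValue filter = new ReplaceWithMissingValue();
        filter.setOptions(new String[] {"-P", "0.1", "-S", "1"}); // 10% of values, fixed seed (assumed flags)
        filter.setInputFormat(data);

        Instances corrupted = Filter.useFilter(data, filter);     // standard Weka filter pipeline
        System.out.println(corrupted.numInstances() + " instances with injected missing values");
    }
}
```

The corrupted copy can then be handed to any imputation method and compared against the original data.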

In packages:

  • alternatingModelTrees package - alternating trees for regression
  • timeSeriesFilters package, contributed by Benjamin Weber
  • distributedWekaSpark package - wrapper for distributed Weka on Spark
  • wekaPython package - execution of CPython scripts and wrapper classifier/clusterer for Scikit Learn schemes
  • MLRClassifier in RPlugin now provides access to almost all classification and regression learners in MLR 2.4

JMLR Darwin 1.9

by sgould - September 8, 2015, 06:50:37 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 41991 views, 8698 downloads, 4 subscriptions

About: A platform-independent C++ framework for machine learning, graphical models, and computer vision research and development.


Version 1.9:

  • Replaced drwnInPaint class with drwnImageInPainter class and added inPaint application
  • Added function to read CIFAR-10 and CIFAR-100 style datasets
  • Added drwnMaskedPatchMatch, drwnBasicPatchMatch and drwnSelfPatchMatch classes and a basicPatchMatch application
  • drwnPatchMatchGraph now allows multiple matches to the same image
  • Upgraded wxWidgets to 3.0.2 (problems on Mac OS X)
  • Switched Mac OS X compilation to libc++ instead of libstdc++
  • Added Python scripts for running experiments and regression tests
  • Refactored drwnGrabCutInstance class to support both GMM and colour histogram models
  • Added cacheSortIndex to drwnDecisionTree for trading off speed versus memory usage
  • Added mexLoadPatchMatchGraph for loading drwnPatchMatchGraph objects into Matlab
  • Improved documentation, other bug fixes and performance improvements

About: Nowadays it is very popular to use deep architectures in machine learning. Deep Belief Networks (DBNs) are deep architectures that stack Restricted Boltzmann Machines (RBMs) to create a powerful generative model from training data. DBNs have many capabilities, such as feature extraction and classification, that are used in many applications like image processing, speech processing, etc. According to the results of experiments conducted on the MNIST (image), ISOLET (speech), and 20 Newsgroups (text) datasets, the toolbox can automatically learn a good representation of the input from unlabeled data, with better discrimination between different classes. In addition, the toolbox supports different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method), different sparsity methods (quadratic, rate distortion and our new normal method), different RBM types (generative and discriminative), GPU computation, etc. The toolbox is a user-friendly open source software package and is freely available on the website.
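
As a rough, generic illustration of the RBM building block that such a DBN stacks (and of the CD sampling method mentioned above), the following Java sketch performs a single contrastive-divergence (CD-1) update for a binary RBM. It is a textbook-style sketch under invented names, not the toolbox's MATLAB code.

```java
// Generic CD-1 update for a binary RBM; all names are invented for this sketch.
import java.util.Random;

public class RbmCd1Sketch {
    double[][] W;        // weights, numHidden x numVisible
    double[] b, c;       // visible and hidden biases
    double lr = 0.1;     // learning rate
    Random rng = new Random(0);

    RbmCd1Sketch(int numVisible, int numHidden) {
        W = new double[numHidden][numVisible];
        b = new double[numVisible];
        c = new double[numHidden];
        for (double[] row : W)
            for (int i = 0; i < row.length; i++)
                row[i] = 0.01 * rng.nextGaussian();   // small random initialization
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    /** One contrastive-divergence (CD-1) step on a single binary training vector v0. */
    void cd1(double[] v0) {
        int H = W.length, V = W[0].length;
        double[] h0 = new double[H], v1 = new double[V], h1 = new double[H];
        // Up pass: sample hidden units from p(h=1|v0)
        for (int j = 0; j < H; j++) {
            double a = c[j];
            for (int i = 0; i < V; i++) a += W[j][i] * v0[i];
            h0[j] = rng.nextDouble() < sigmoid(a) ? 1.0 : 0.0;
        }
        // Down pass: reconstruct the visible units, v1 = p(v=1|h0)
        for (int i = 0; i < V; i++) {
            double a = b[i];
            for (int j = 0; j < H; j++) a += W[j][i] * h0[j];
            v1[i] = sigmoid(a);
        }
        // Second up pass on the reconstruction: h1 = p(h=1|v1)
        for (int j = 0; j < H; j++) {
            double a = c[j];
            for (int i = 0; i < V; i++) a += W[j][i] * v1[i];
            h1[j] = sigmoid(a);
        }
        // Gradient step: positive phase minus negative phase
        for (int j = 0; j < H; j++)
            for (int i = 0; i < V; i++)
                W[j][i] += lr * (h0[j] * v0[i] - h1[j] * v1[i]);
        for (int i = 0; i < V; i++) b[i] += lr * (v0[i] - v1[i]);
        for (int j = 0; j < H; j++) c[j] += lr * (h0[j] - h1[j]);
    }
}
```

Stacking several such RBMs and training them layer by layer on the previous layer's hidden activations yields the DBN described above.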


New in toolbox:

  • Fixed a bug in the computeBatchSize function on Linux.
  • Revised some demo scripts.

r-cran-e1071 1.6-7

by r-cran-robot - December 1, 2015, 00:00:06 CET [ Project Homepage BibTeX Download ] 20928 views, 4495 downloads, 2 subscriptions

Rating: 4.5/5 stars (based on 1 vote)

About: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien


Fetched by r-cran-robot on 2015-12-01 00:00:06.355374

YCML 0.2.2

by yconst - August 24, 2015, 20:28:45 CET [ Project Homepage BibTeX Download ] 815 views, 153 downloads, 3 subscriptions

About: A Machine Learning framework for Objective-C and Swift (OS X / iOS)


Initial Announcement.

Java Data Mining Package 0.3.0

by arndt - August 19, 2015, 15:44:46 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1147 views, 209 downloads, 3 subscriptions

About: A Java library for machine learning and data analytics


Initial Announcement.

jLDADMM 1.0

by dqnguyen - August 19, 2015, 12:52:36 CET [ Project Homepage BibTeX Download ] 772 views, 179 downloads, 2 subscriptions

About: The Java package jLDADMM is released to provide alternative choices for topic modeling on normal or short texts. It provides implementations of the Latent Dirichlet Allocation topic model and the one-topic-per-document Dirichlet Multinomial Mixture model (i.e. mixture of unigrams), using collapsed Gibbs sampling. In addition, jLDADMM supplies a document clustering evaluation to compare topic models.
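
For readers unfamiliar with the inference scheme, the Java sketch below illustrates one sweep of collapsed Gibbs sampling for LDA, the procedure the package implements for its topic models. It is a simplified illustration with invented data structures and names, not jLDADMM's actual classes or command-line interface.

```java
// Simplified collapsed Gibbs sampling sweep for LDA; names are illustrative only.
import java.util.Random;

public class LdaGibbsSketch {
    final int K, V;               // number of topics, vocabulary size
    final double alpha, beta;     // Dirichlet hyperparameters
    final int[][] docs;           // docs[d][i] = word id of the i-th token in document d
    final int[][] z;              // z[d][i]    = current topic of that token
    final int[][] ndk;            // ndk[d][k]  = tokens in document d assigned to topic k
    final int[][] nkw;            // nkw[k][w]  = tokens of word w assigned to topic k
    final int[] nk;               // nk[k]      = total tokens assigned to topic k
    final Random rng = new Random(1);

    LdaGibbsSketch(int[][] docs, int K, int V, double alpha, double beta) {
        this.docs = docs; this.K = K; this.V = V; this.alpha = alpha; this.beta = beta;
        z = new int[docs.length][];
        ndk = new int[docs.length][K];
        nkw = new int[K][V];
        nk = new int[K];
        for (int d = 0; d < docs.length; d++) {          // random initial assignment
            z[d] = new int[docs[d].length];
            for (int i = 0; i < docs[d].length; i++) {
                int k = rng.nextInt(K);
                z[d][i] = k;
                ndk[d][k]++; nkw[k][docs[d][i]]++; nk[k]++;
            }
        }
    }

    /** One full sweep over all tokens, resampling each topic from its collapsed conditional. */
    void sweep() {
        double[] p = new double[K];
        for (int d = 0; d < docs.length; d++) {
            for (int i = 0; i < docs[d].length; i++) {
                int w = docs[d][i], k = z[d][i];
                ndk[d][k]--; nkw[k][w]--; nk[k]--;       // remove the current assignment
                double sum = 0.0;
                for (int t = 0; t < K; t++) {            // p(z = t | everything else)
                    p[t] = (ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta);
                    sum += p[t];
                }
                double u = rng.nextDouble() * sum;       // draw the new topic
                int t = 0;
                for (double acc = p[0]; acc < u && t < K - 1; acc += p[++t]) { }
                z[d][i] = t;
                ndk[d][t]++; nkw[t][w]++; nk[t]++;       // record the new assignment
            }
        }
    }
}
```

The one-topic-per-document DMM (mixture of unigrams) variant differs mainly in that all tokens of a document share a single topic assignment.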


Initial Announcement.

Presage 0.9.1

by Dzmitry_Lahoda - August 18, 2015, 10:13:05 CET [ BibTeX Download ] 522 views, 156 downloads, 3 subscriptions

About: Presage is an intelligent predictive text entry platform.


Initial Announcement.

Sparse Compositional Metric Learning v1.1

by bellet - August 16, 2015, 16:41:20 CET [ BibTeX BibTeX for corresponding Paper Download ] 2529 views, 910 downloads, 2 subscriptions

About: Scalable learning of global, multi-task and local metrics from data


Various minor bug fixes and improvements. Basis and triplet generation now fully supports datasets with very small classes and arbitrary labels (they need not be consecutive or positive). The computational and memory efficiency of the code on high-dimensional data has been greatly improved, and a rectangular (smaller) projection matrix is now generated when the number of selected basis elements is smaller than the dimension. K-NN classification with local metrics has been optimized and made significantly less costly in both time and memory.
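
For context, SCML represents the learned Mahalanobis metric as a sparse nonnegative combination of rank-one basis elements, M = sum_i w_i b_i b_i^T, so the squared distance reduces to a weighted sum of squared projections onto the active bases. The Java sketch below shows only this distance computation; the names are illustrative and it is not the toolbox's actual MATLAB interface.

```java
// Squared distance under a sparse compositional metric M = sum_i w[i] * b_i b_i^T.
// Illustrative only; not the toolbox's MATLAB code.
public class ScmlDistanceSketch {
    static double squaredDistance(double[] x, double[] y, double[][] basis, double[] w) {
        double dist = 0.0;
        for (int i = 0; i < basis.length; i++) {
            if (w[i] == 0.0) continue;                 // sparsity: most weights are exactly zero
            double proj = 0.0;
            for (int j = 0; j < x.length; j++) {
                proj += (x[j] - y[j]) * basis[i][j];   // (x - y) . b_i
            }
            dist += w[i] * proj * proj;                // w_i * ((x - y) . b_i)^2
        }
        return dist;
    }
}
```

Global, multi-task and local metrics reuse the same basis set with different weight vectors, which is what keeps the approach scalable.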
