mloss.org new softwarehttp://mloss.orgUpdates and additions to mloss.orgenSat, 01 Sep 2018 00:00:04 -0000r-cran-Boruta 6.0.0http://mloss.org/revision/view/2194/<html><p>Wrapper Algorithm for All Relevant Feature Selection: An all relevant feature selection wrapper algorithm. It finds relevant features by comparing original attributes' importance with importance achievable at random, estimated using their permuted copies (shadows).
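The shadow-attribute idea can be sketched in a few lines of plain Python. This is an illustrative toy, not the Boruta package itself: the importance measure (absolute Pearson correlation) and the hit-count threshold are simplified stand-ins for Boruta's random-forest importances and statistical tests.

```python
import random

random.seed(0)

# Toy data: y depends on x1; x2 is pure noise.
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + random.gauss(0, 0.1) for a in x1]

def importance(x, y):
    # |Pearson correlation| as a simplified stand-in for feature importance
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (sx * sy))

features = {"x1": x1, "x2": x2}
hits = {name: 0 for name in features}
rounds = 20
for _ in range(rounds):
    # Shadows: permuted copies whose importance is achievable at random.
    shadows = []
    for x in features.values():
        s = x[:]
        random.shuffle(s)
        shadows.append(s)
    threshold = max(importance(s, y) for s in shadows)
    for name, x in features.items():
        if importance(x, y) > threshold:
            hits[name] += 1

# A feature is deemed relevant if it beats the best shadow almost every round.
relevant = [name for name, h in hits.items() if h >= 15]
```

The real Boruta algorithm repeats this comparison with random-forest importance scores and uses a binomial test to decide, per attribute, between "confirmed" and "rejected".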
</p></html>Miron Bartosz Kursa [aut, cre] (), Witold Remigiusz Rudnicki [aut]Sat, 01 Sep 2018 00:00:04 -0000http://mloss.org/software/rss/comments/2194http://mloss.org/revision/view/2194/r-cranr-cran-BART 1.9http://mloss.org/revision/view/2192/<html><p>Bayesian Additive Regression Trees: Bayesian Additive Regression Trees (BART) provide flexible nonparametric modeling of covariates for continuous, binary, categorical and time-to-event outcomes. For more information on BART, see Chipman, George and McCulloch (2010) and Sparapani, Logan, McCulloch and Laud (2016).
</p></html>Robert McCulloch [aut], Rodney Sparapani [aut, cre], Robert Gramacy [aut], Charles Spanbauer [aut], Matthew Pratola [aut], Bill Venables [ctb], Brian Ripley [ctb]Fri, 17 Aug 2018 00:00:00 -0000http://mloss.org/software/rss/comments/2192http://mloss.org/revision/view/2192/r-cranr-cran-bst 0.3-15http://mloss.org/revision/view/2195/<html><p>Gradient Boosting: Functional gradient descent algorithm for a variety of convex and non-convex loss functions, for both classical and robust regression and classification problems. See Wang (2011), Wang (2012), Wang (2018), Wang (2018).
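Functional gradient descent is compact enough to sketch: at each step a base learner (a regression stump here) is fit to the negative gradient of the loss, and a damped step is taken in function space. This is an illustrative L2-boosting toy in plain Python, not the bst package's API:

```python
# Fit a one-split regression stump to residuals r over inputs x.
def fit_stump(x, r):
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - ml) ** 2 for ri in left) + sum((ri - mr) ** 2 for ri in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def boost(x, y, rounds=50, nu=0.1):
    stumps = []
    F = [0.0] * len(x)
    for _ in range(rounds):
        # For squared-error loss, the negative gradient is the residual.
        r = [yi - Fi for yi, Fi in zip(y, F)]
        h = fit_stump(x, r)
        F = [Fi + nu * h(xi) for Fi, xi in zip(F, x)]
        stumps.append(h)
    return lambda xi: sum(nu * h(xi) for h in stumps)

x = [i / 10 for i in range(30)]
y = [0.0 if xi < 1.5 else 1.0 for xi in x]  # step-function target
f = boost(x, y)
```

Robust variants, as in bst, swap in a different loss whose negative gradient downweights outlying observations.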
</p></html>Zhu Wang [aut, cre], Torsten Hothorn [ctb]Sun, 22 Jul 2018 00:00:00 -0000http://mloss.org/software/rss/comments/2195http://mloss.org/revision/view/2195/r-cranMLPACK 3.0.2http://mloss.org/revision/view/2191/<html><p>mlpack is a fast, flexible C++ machine learning library. Its aim is to make large-scale machine learning possible for novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and maximum flexibility for expert users. mlpack also provides bindings to other languages.
</p>
<p>The following methods are provided:
</p>
<ul>
<li>
Approximate furthest neighbor search techniques
</li>
<li>
Collaborative Filtering (with NMF)
</li>
<li>
Decision Stumps
</li>
<li>
DBSCAN
</li>
<li>
Density Estimation Trees
</li>
<li>
Euclidean Minimum Spanning Trees
</li>
<li>
Fast Exact Max-Kernel Search (FastMKS)
</li>
<li>
Gaussian Mixture Models (GMMs)
</li>
<li>
Hidden Markov Models (HMMs)
</li>
<li>
Hoeffding trees (streaming decision trees)
</li>
<li>
Kernel Principal Components Analysis (KPCA)
</li>
<li>
K-Means Clustering
</li>
<li>
Least-Angle Regression (LARS/LASSO)
</li>
<li>
Local Coordinate Coding
</li>
<li>
Locality-Sensitive Hashing (LSH)
</li>
<li>
Logistic regression
</li>
<li>
Naive Bayes Classifier
</li>
<li>
Neighborhood Components Analysis (NCA)
</li>
<li>
Neural Networks (FFNs, CNNs, RNNs)
</li>
<li>
Nonnegative Matrix Factorization (NMF)
</li>
<li>
Perceptron
</li>
<li>
Principal Components Analysis (PCA)
</li>
<li>
QUIC-SVD
</li>
<li>
RADICAL (ICA)
</li>
<li>
Regularized SVD
</li>
<li>
Rank-Approximate Nearest Neighbor (RANN)
</li>
<li>
Simple Least-Squares Linear Regression (and Ridge Regression)
</li>
<li>
Sparse Autoencoder
</li>
<li>
Sparse Coding
</li>
<li>
Tree-based Neighbor Search (all-k-nearest-neighbors, all-k-furthest-neighbors), using either kd-trees or cover trees
</li>
<li>
Tree-based Range Search
</li>
<li>
and other methods not listed here
</li>
</ul>
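As a flavor of what one of the listed methods does, here is a plain-Python sketch of Lloyd's algorithm for k-means clustering. This is conceptual only; mlpack's actual implementation is templated C++ with accelerated (e.g. tree-based) variants and smarter initialization.

```python
def kmeans(points, k, iters=20):
    # Naive initialization: take the first k points as centers.
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = kmeans(pts, 2)
```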
<p>Command-line executables are provided for each of these, and the C++ classes which define the methods are highly flexible, extensible, and modular. More information (including documentation, tutorials, and bug reports) is available at http://www.mlpack.org/.
</p></html>Ryan Curtin, James Cline, Neil Slagle, Matthew Amidon, Ajinkya Kale, Bill March, Nishant Mehta, Parikshit Ram, Dongryeol Lee, Rajendran Mohan, Trironk Kiatkungwanglai, Patrick Mason, Marcus Edel, etc.Sat, 09 Jun 2018 18:03:57 -0000http://mloss.org/software/rss/comments/2191http://mloss.org/revision/view/2191/gmmhmmmachine learningsparsedual treefastscalabletreeSpectra. A Library for Large Scale Eigenvalue Problems 0.6.2http://mloss.org/revision/view/2190/<html><p>Spectra is a C++ library for large scale eigenvalue problems, built on top of Eigen (<a href="http://eigen.tuxfamily.org">http://eigen.tuxfamily.org</a>).
</p>
<p>Spectra is designed to calculate a specified number (k) of eigenvalues of a large square matrix (A). Usually k is much smaller than the size of the matrix (n), so only a few eigenvalues and eigenvectors are computed, which in general is more efficient than calculating the whole spectral decomposition. Users can choose eigenvalue selection rules to pick the eigenvalues of interest, such as the k largest eigenvalues, or the eigenvalues with the largest real parts, etc.
</p>
<p>Spectra is implemented as a header-only C++ library, whose only dependency, Eigen, is also header-only. Hence Spectra can be easily embedded in C++ projects that require calculating eigenvalues of large matrices.
</p>
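The core idea, recovering a few dominant eigenpairs from repeated matrix-vector products rather than a full decomposition, can be illustrated with plain power iteration in Python. Spectra itself implements far more robust Krylov-subspace (Arnoldi/Lanczos-type) iterations in C++; this sketch only conveys the principle.

```python
def matvec(A, v):
    # Dense matrix-vector product over nested lists.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=200):
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient estimates the dominant eigenvalue.
    lam = sum(x * y for x, y in zip(v, matvec(A, v)))
    return lam, v

A = [[2.0, 1.0],
     [1.0, 2.0]]  # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A)
```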
<p>Key Features:
</p>
<ul>
<li>
Calculates a small number of eigenvalues/eigenvectors of a large square matrix.
</li>
<li>
Broad application in dimensionality reduction, principal component analysis, community detection, etc.
</li>
<li>
High performance. In most cases faster than ARPACK.
</li>
<li>
Header-only. Easy to embed into other projects.
</li>
<li>
Supports symmetric/general, dense/sparse matrices.
</li>
<li>
Elegant and user-friendly API with great flexibility.
</li>
<li>
Convenient and powerful R interface, the RSpectra R package.
</li>
</ul></html>Yixuan QiuWed, 23 May 2018 19:40:46 -0000http://mloss.org/software/rss/comments/2190http://mloss.org/revision/view/2190/singular value decompositionprincipal component analysisfactorizationeigenvalueTheano 1.0.2http://mloss.org/revision/view/2189/<html><p>Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:
</p>
<pre><code>* tight integration with numpy – Use numpy.ndarray in Theano-compiled functions.
* transparent use of a GPU – perform data-intensive computations much faster than on a CPU.
* symbolic differentiation – Let Theano do your derivatives.
* speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
* dynamic C code generation – Evaluate expressions faster.
* extensive unit-testing and self-verification – Detect and diagnose many types of mistakes.
</code></pre><p>Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).
</p>
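The symbolic-differentiation feature above can be illustrated with a miniature pure-Python expression differentiator. This is a conceptual stand-in only; Theano's real interface builds symbolic tensor expressions and calls `theano.grad` on them.

```python
def diff(expr, var):
    """Differentiate a nested-tuple expression w.r.t. variable name `var`."""
    if isinstance(expr, str):
        return 1.0 if expr == var else 0.0
    if isinstance(expr, (int, float)):
        return 0.0
    op, a, b = expr
    if op == "+":
        return ("+", diff(a, var), diff(b, var))
    if op == "*":  # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(op)

def evaluate(expr, env):
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, (int, float)):
        return float(expr)
    op, a, b = expr
    va, vb = evaluate(a, env), evaluate(b, env)
    return va + vb if op == "+" else va * vb

# d/dx (x*x + 3*x) = 2x + 3, so the derivative at x = 2 is 7.
g = diff(("+", ("*", "x", "x"), ("*", 3, "x")), "x")
```

Theano additionally simplifies and compiles such derivative graphs to fast C (or GPU) code, which this sketch of course omits.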
<p>Theano has been used primarily to implement large-scale deep learning algorithms. To see how, see the Deep Learning Tutorials (http://www.deeplearning.net/tutorial/).
</p></html>mostly LISA labWed, 23 May 2018 16:34:31 -0000http://mloss.org/software/rss/comments/2189http://mloss.org/revision/view/2189/pythoncudagpusymbolic differentiationnumpydlib ml 19.11http://mloss.org/revision/view/2188/<html><p>A C++ toolkit containing machine learning algorithms and tools that facilitate creating complex software in C++ to solve real world problems.
</p>
<p>The library provides efficient implementations of the following algorithms:
</p>
<ul>
<li>
Deep neural networks
</li>
<li>
support vector machines for classification, regression, and ranking
</li>
<li>
reduced-rank methods for large-scale classification and regression.<br />
This includes an SVM implementation and a method for performing
kernel ridge regression with efficient LOO cross-validation.
</li>
<li>
multi-class SVM
</li>
<li>
structural SVM (modes: single-threaded, multi-threaded, and fully distributed)
</li>
<li>
sequence labeling using structured SVMs
</li>
<li>
relevance vector machines for regression and classification
</li>
<li>
reduced set approximation of SV decision surfaces
</li>
<li>
online kernel RLS regression
</li>
<li>
online kernelized centroid estimation/one class classifier
</li>
<li>
online SVM classification
</li>
<li>
kernel k-means clustering
</li>
<li>
radial basis function networks
</li>
<li>
kernelized recursive feature ranking
</li>
<li>
Bayesian network inference using junction trees or MCMC
</li>
<li>
General purpose unconstrained non-linear optimization algorithms using the conjugate gradient, BFGS, and L-BFGS techniques
</li>
<li>
Levenberg-Marquardt for solving non-linear least squares problems
</li>
<li>
A general purpose cutting plane optimizer.
</li>
</ul>
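To make the optimization entry above concrete, here is a minimal fixed-step gradient-descent sketch in Python. dlib's C++ routines (such as `find_min` with CG, BFGS, or L-BFGS search strategies) use line searches and curvature information and are far more capable; this only shows the basic iteration.

```python
def minimize(grad, x0, lr=0.1, iters=500):
    # Plain gradient descent: step against the gradient at a fixed rate.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 1)**2 + 2*(y + 2)**2 has its minimum at (1, -2).
gradient = lambda p: [2 * (p[0] - 1), 4 * (p[1] + 2)]
xmin = minimize(gradient, [0.0, 0.0])
```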
<p>The library also comes with extensive documentation and example programs that walk the user through the use of these machine learning techniques.<br />
</p>
<p>Finally, dlib includes a fast matrix library with a simple, MATLAB-like syntax. It can also use BLAS and LAPACK libraries such as ATLAS or the Intel MKL when available. The use of BLAS and LAPACK is transparent to the user: the dlib matrix object calls them internally to optimize various operations while preserving the same MATLAB-like syntax.
</p></html>Davis KingFri, 18 May 2018 04:19:52 -0000http://mloss.org/software/rss/comments/2188http://mloss.org/revision/view/2188/svmclassificationclusteringregressionkernel methodsmatrix librarykkmeansoptimizationalgorithmsexact bayesian methodsapproximate inferencebayesian networksjunction treeDatabases for DMNS source codes 1.0http://mloss.org/revision/view/2187/<html><p>In the DMNS source, five databases are used in slover.cpp and data_veh_layer.cpp. These images and databases are included in this file, except for the Munich database.
</p></html>xueyun chenTue, 15 May 2018 07:56:12 -0000http://mloss.org/software/rss/comments/2187http://mloss.org/revision/view/2187/dmnsSource codes of DMNS based on caffe platform 1.0http://mloss.org/revision/view/2186/<html><p>Deep measuring net sequence (DMNS) is a sequence of three deep measuring nets; the latter are deep FCN-based networks that directly output object category score, object orientation, location, and scale simultaneously, without any anchor boxes. DMNS achieved high accuracy in maneuvering target detection and geometrical measurement: its average orientation error is less than 3.5 degrees, its location error less than 1.3 pixels, and its scale measurement error less than 10%. It achieves a detection F1-score of 96.5% on OAD, 91.8% on SVDS, 90.8% on Munich, and 87.3% on OIRDS, outperforming SSD, Faster R-CNN, etc.
</p></html>xueyun chenTue, 15 May 2018 07:52:48 -0000http://mloss.org/software/rss/comments/2186http://mloss.org/revision/view/2186/dmnsAika 0.17http://mloss.org/revision/view/2185/<html><p>Aika is a Java library that automatically extracts semantic information from text and annotates it. In case this information is ambiguous, Aika generates several hypothetical interpretations of the meaning of the text and picks the most likely one. The Aika algorithm is based on ideas and approaches from several fields of AI, such as artificial neural networks, frequent pattern mining, and logic-based expert systems, and combines these concepts in a single algorithm that can be applied to a broad spectrum of text analysis tasks.
</p>
<p>Aika allows you to model linguistic concepts like words, word meanings (entities), categories (e.g. person name, city), grammatical word types, and so on as neurons in a neural network. By choosing appropriate synapse weights, these neurons can take on different functions within the network. For instance, neurons whose synapse weights are chosen to mimic a logical AND can be used to match an exact phrase. On the other hand, neurons with an OR characteristic can be used to connect a large list of word entity neurons to determine a category like 'city' or 'profession'.
</p>
<p>Aika is based on non-monotonic logic, meaning that it first draws only tentative conclusions. In other words, Aika is able to generate multiple mutually exclusive interpretations of a word, phrase, or sentence, and select the most likely one. For example, a neuron representing a specific meaning of a given word can be linked through a negatively weighted synapse to a neuron representing an alternative meaning of that word; in this case the two neurons exclude each other. Such synapses might even be cyclic. Aika resolves these recurrent feedback links by making tentative assumptions and searching for the highest-ranking interpretation.
</p>
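The interpretation search described above can be caricatured in a few lines of Python: mutually exclusive senses of each word compete, positively linked senses reinforce each other, and the highest-scoring combination wins. All names and weights below are invented for illustration; this is not Aika's Java API, and Aika prunes this search rather than enumerating every combination.

```python
from itertools import product

# Alternative senses per word, with base weights. Senses of the same word
# are mutually exclusive (a negatively weighted link, in Aika's terms).
senses = {
    "bank": [("riverbank", 0.4), ("financial-institution", 0.6)],
    "deposit": [("sediment", 0.3), ("money-deposit", 0.7)],
}
# A positively weighted link: co-occurring senses that support each other.
bonus = {("financial-institution", "money-deposit"): 0.5}

def score(choice):
    # Sum of base weights plus any bonuses among the chosen senses.
    s = sum(w for _, w in choice)
    names = [n for n, _ in choice]
    for (a, b), v in bonus.items():
        if a in names and b in names:
            s += v
    return s

best = max(product(*senses.values()), key=score)
interpretation = [name for name, _ in best]
```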
<p>In contrast to conventional neural networks, Aika propagates activation objects through its network, not just activation values. These activation objects refer to a text segment and an interpretation.
</p>
<p>Aika consists of two layers: the neural layer, containing all the neurons and continuously weighted synapses, and underneath it the discrete logic layer, containing a Boolean representation of all the neurons. The logic layer uses a frequent pattern lattice to store the individual logic nodes efficiently. This architecture allows Aika to process extremely large networks, since only neurons that are activated by a logic node need to compute their weighted sum and activation value. This means that the vast majority of neurons stay inactive during the processing of a given text.
</p>
<p>To avoid keeping the whole network in memory during processing, Aika uses the provider pattern to suspend individual neurons or logic nodes to external storage such as MongoDB.
</p></html>Lukas MolzbergerMon, 14 May 2018 15:42:00 -0000http://mloss.org/software/rss/comments/2185http://mloss.org/revision/view/2185/information extractioninferenceneural networktext mining