About: Bayesian Model Averaging for linear models with a wide choice of (customizable) priors. Built-in priors include coefficient priors (fixed, flexible and hyper-g priors) and 5 kinds of model priors; model sampling is done by enumeration or by various MCMC approaches. Changes:Initial Announcement on mloss.org.
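A minimal Python sketch of the core idea, under illustrative assumptions (full model enumeration with a uniform model prior and a BIC approximation to the marginal likelihood); it is not this package's interface:

    # Sketch: Bayesian Model Averaging over all subsets of 3 regressors,
    # using a BIC approximation to the marginal likelihood (illustrative only).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, X = 200, rng.normal(size=(200, 3))
    y = 1.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=200)

    def bic(cols):
        Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ beta) ** 2)
        return n * np.log(rss / n) + Z.shape[1] * np.log(n)

    models = [c for k in range(4) for c in itertools.combinations(range(3), k)]
    w = np.array([np.exp(-0.5 * bic(m)) for m in models])  # uniform model prior
    w /= w.sum()
    # posterior inclusion probability of each regressor
    pip = [sum(wi for wi, m in zip(w, models) if j in m) for j in range(3)]
    print(dict(zip("x1 x2 x3".split(), np.round(pip, 3))))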
|
About: A C++ program for symmetric matrix diagonalization, inversion and principal component analysis (PCA). The matrix diagonalization function can also be applied to singular value decomposition (SVD), Fisher linear discriminant analysis (FLDA) and kernel PCA (KPCA) if the symmetric matrix is formed appropriately. Changes:Initial Announcement on mloss.org.
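As a reference point for what "forming the symmetric matrix appropriately" means, here is a minimal NumPy sketch of PCA via eigendecomposition of the (symmetric) covariance matrix; this only illustrates the idea and does not use this C++ program:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))            # 100 samples, 5 features
    Xc = X - X.mean(axis=0)                  # center the data
    C = (Xc.T @ Xc) / (len(X) - 1)           # symmetric covariance matrix
    evals, evecs = np.linalg.eigh(C)         # diagonalize the symmetric matrix
    order = np.argsort(evals)[::-1]          # sort eigenvalues descending
    components = evecs[:, order[:2]]         # top-2 principal directions
    scores = Xc @ components                 # project data onto them
    print(scores.shape)                      # (100, 2)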
|
About: Hype is a proof-of-concept deep learning library with which you can perform optimization on compositional machine learning systems made of many components, even when those components themselves internally perform optimization. Changes:Initial Announcement on mloss.org.
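A rough Python sketch of this kind of nested setting (an outer search tunes a parameter of a component that itself runs an inner gradient-descent optimization); this is a generic illustration, not Hype's API:

    # Outer loop: pick a ridge penalty lam; inner loop: a component fits weights
    # by gradient descent for that lam; the outer objective is validation error.
    import numpy as np

    rng = np.random.default_rng(0)
    Xtr, Xva = rng.normal(size=(80, 3)), rng.normal(size=(40, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    ytr, yva = Xtr @ w_true + rng.normal(size=80), Xva @ w_true + rng.normal(size=40)

    def inner_fit(lam, steps=200, lr=0.01):
        w = np.zeros(3)
        for _ in range(steps):                   # inner optimization
            grad = Xtr.T @ (Xtr @ w - ytr) / len(ytr) + lam * w
            w -= lr * grad
        return w

    def outer_objective(lam):                    # validation loss after the inner fit
        w = inner_fit(lam)
        return np.mean((Xva @ w - yva) ** 2)

    lams = [0.0, 0.01, 0.1, 1.0, 10.0]
    best = min(lams, key=outer_objective)        # crude outer "optimization"
    print(best, outer_objective(best))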
|
About: LDPar is an efficient data-driven dependency parser. You can train your own parsing model on treebank data and parse new data using the induced model. Changes:Initial Announcement on mloss.org.
|
About: DataDeps is a package for simplifying the management of data in your Julia application. In particular, it is designed to make it trivial to fetch static data from a server onto the local machine and to let programs know where that data is located. Changes:Initial Announcement on mloss.org.
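The idea, sketched generically in Python (this is not DataDeps' Julia API; the cache directory and helper name are illustrative):

    # Illustrative helper: download a file once, cache it locally, and always
    # return the local path so the rest of the program never hard-codes it.
    import os
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.cache/mydatadeps")   # assumed location

    def datadep(name, url):
        path = os.path.join(CACHE_DIR, name)
        if not os.path.exists(path):
            os.makedirs(CACHE_DIR, exist_ok=True)
            urllib.request.urlretrieve(url, path)            # fetch only if missing
        return path

    # local_file = datadep("iris.csv", "https://example.org/iris.csv")  # hypothetical URL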
|
About: This program is a C++ implementation of a Linear Discriminant Function Classifier. Discriminant functions such as the perceptron criterion, the cross-entropy (CE) criterion and the least mean square (LMS) criterion (all for multi-class classification problems) are supported. The program uses a sparse data structure to represent feature vectors for higher computational speed. Other techniques such as online updating, weight averaging and Gaussian prior regularization are also supported. Changes:Initial Announcement on mloss.org.
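A small Python sketch of one of the listed criteria (a multi-class perceptron on sparse feature vectors with online updating and weight averaging); it mirrors the ideas only, not this program's interface, and omits the Gaussian prior regularization:

    from collections import defaultdict

    def score(w, x):
        return sum(w[i] * v for i, v in x.items())

    def train(data, n_classes, epochs=5):
        w = [defaultdict(float) for _ in range(n_classes)]     # current weights
        avg = [defaultdict(float) for _ in range(n_classes)]   # running sums for averaging
        for _ in range(epochs):
            for x, y in data:                                   # online updating
                pred = max(range(n_classes), key=lambda c: score(w[c], x))
                if pred != y:                                   # perceptron update on error
                    for i, v in x.items():
                        w[y][i] += v
                        w[pred][i] -= v
                for c in range(n_classes):
                    for i, v in list(w[c].items()):
                        avg[c][i] += v
        # averaged weights would be avg scaled by 1 / (epochs * len(data))
        return avg

    data = [({0: 1.0, 2: 0.5}, 0), ({1: 1.0, 3: 0.2}, 1), ({0: 0.3, 1: 1.2}, 1)]
    print(dict(train(data, n_classes=2)[0]))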
|
About: This program is used to find point matches between two images. The procedure consists of two stages: 1) use the SIFT matching algorithm to find sparse point matches between the two images; 2) use the "quasi-dense propagation" algorithm to obtain quasi-dense point matches. Changes:Initial Announcement on mloss.org.
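For reference, the first (sparse SIFT matching) stage can be sketched with OpenCV in Python as below; this assumes an OpenCV build that includes SIFT and is not this program's code, and the quasi-dense propagation stage is not shown:

    import cv2

    img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed input images
    img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)           # keypoints + descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)              # 2 nearest neighbours
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
    print(len(points), "sparse matches")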
|
About: This Matlab package implements a method for learning a choquistic regression model (represented by the corresponding Moebius transform of the underlying fuzzy measure), using the maximum likelihood approach proposed in [2], equipped with sigmoid normalization; see [1]. Changes:Initial Announcement on mloss.org.
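To make the model concrete, here is a small NumPy sketch of a 2-additive Choquet integral in Moebius representation plugged into a sigmoid link, which is the general shape of a choquistic regression model; the coefficients are illustrative, and this is not the package's Matlab interface:

    import itertools
    import numpy as np

    def choquet_2additive(x, m_single, m_pairs):
        # Choquet integral in Moebius form:
        # sum_i m({i}) x_i + sum_{i<j} m({i,j}) min(x_i, x_j)
        val = sum(m_single[i] * x[i] for i in range(len(x)))
        val += sum(m_pairs[(i, j)] * min(x[i], x[j])
                   for i, j in itertools.combinations(range(len(x)), 2))
        return val

    def choquistic_prob(x, m_single, m_pairs, gamma, beta):
        # P(y = 1 | x) = sigmoid(gamma * (C_mu(x) - beta))
        u = choquet_2additive(x, m_single, m_pairs)
        return 1.0 / (1.0 + np.exp(-gamma * (u - beta)))

    x = np.array([0.8, 0.3, 0.6])                    # features normalized to [0, 1]
    m_single = {0: 0.4, 1: 0.2, 2: 0.2}              # Moebius masses (illustrative, sum to 1)
    m_pairs = {(0, 1): 0.1, (0, 2): 0.05, (1, 2): 0.05}
    print(choquistic_prob(x, m_single, m_pairs, gamma=4.0, beta=0.5))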
|
About: This program is used to extract SIFT points from an image. Changes:Initial Announcement on mloss.org.
|
About: This is an implementation of the solution to the classic P3P (Perspective-Three-Point) problem given in the RANSAC paper "M. A. Fischler, R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, Vol 24, pp 381-395, 1981." The algorithm gives up to four possible solutions of the P3P problem in about 0.1 ms and can be used as input to the subsequent RANSAC step. The code needs the numerics library VNL, which is part of the widely used computer vision library VXL; one can download and install it from http://vxl.sourceforge.net/. Changes:Initial Announcement on mloss.org.
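For illustration of the same problem setting, the sketch below uses OpenCV's P3P solver in Python on synthetic correspondences; the intrinsics and points are made up, and this is not the C++/VNL code of this library:

    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])                       # assumed camera intrinsics
    obj = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]], dtype=np.float32)   # three known 3D points
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([0.2, 0.1, 5.0])
    img_pts, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, np.zeros(4))
    img_pts = img_pts.astype(np.float32)

    # A P3P solver returns up to four candidate poses, which a subsequent
    # RANSAC step would disambiguate using additional correspondences.
    n, rvecs, tvecs = cv2.solveP3P(obj, img_pts, K, np.zeros(4), flags=cv2.SOLVEPNP_P3P)
    print(n, "candidate poses")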
|
About: Stepwise Diagonal Discriminant Analysis Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.677447
|
About: DynaML is a Scala environment for conducting research and education in Machine Learning. DynaML comes packaged with a powerful library of classes implementing predictive models and a Scala REPL where one can not only build custom models but also experiment with data workflows. Changes:Initial Announcement on mloss.org.
|
About: Boosting Methods for GAMLSS Models Changes:Fetched by r-cran-robot on 2013-04-01 00:00:04.956804
|
About: Preparing Changes:Initial Announcement on mloss.org.
|
About: Inference algorithms for models based on Luce's choice axiom. Changes:Initial Announcement on mloss.org.
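A minimal NumPy sketch of the underlying model (choice probabilities under Luce's axiom and a simple maximum-likelihood fit by gradient ascent); this illustrates the model, not this package's API:

    import numpy as np

    # Under Luce's axiom, P(i chosen from set A) = exp(theta_i) / sum_{j in A} exp(theta_j).
    data = [([0, 1, 2], 0), ([0, 1], 1), ([1, 2], 1), ([0, 2], 0)]  # (choice set, winner)
    theta = np.zeros(3)

    for _ in range(500):                      # gradient ascent on the log-likelihood
        grad = np.zeros(3)
        for A, winner in data:
            p = np.exp(theta[A]) / np.exp(theta[A]).sum()
            grad[A] -= p                      # gradient of -log normalizer
            grad[winner] += 1.0               # gradient of the winner's score
        theta += 0.1 * (grad - 0.01 * theta)  # small L2 penalty keeps theta identifiable
    print(np.round(theta, 2))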
|
About: Supervised Latent Semantic Indexing (SLSI) is a supervised feature transformation method. The algorithms in this package are based on the iterative algorithm of Latent Semantic Indexing. Changes:Initial Announcement on mloss.org.
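For context, plain (unsupervised) LSI reduces to a truncated SVD of the term-document matrix; a minimal NumPy sketch of that base step follows (the supervised iteration of this package is not shown):

    import numpy as np

    # Toy term-document matrix: rows = terms, columns = documents.
    A = np.array([[2., 0., 1., 0.],
                  [1., 1., 0., 0.],
                  [0., 3., 0., 1.],
                  [0., 0., 2., 2.]])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                         # latent dimensionality
    doc_embeddings = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the latent space
    print(doc_embeddings.shape)                   # (4, 2)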
|
About: A non-iterative, incremental and hyperparameter-free learning method for one-layer feedforward neural networks (no hidden layers). The method efficiently obtains the optimal parameters of the network regardless of whether the data contains more samples than variables or vice versa. It does this by using a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations that yields the network's weights and that is solved using Singular Value Decomposition. Changes:Initial Announcement on mloss.org.
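A rough NumPy sketch of the described recipe, under assumptions spelled out in the comments (logistic output activation, targets mapped back through its inverse, errors weighted by the activation's slope, linear system solved via the SVD-based pseudo-inverse); it is an illustration, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    w_true = np.array([1.0, -1.0, 0.5, 0.0])
    d = 1.0 / (1.0 + np.exp(-(X @ w_true)))     # desired outputs in (0, 1)

    eps = 1e-6
    d = np.clip(d, eps, 1 - eps)
    z = np.log(d / (1 - d))                     # inverse of the logistic activation
    s = d * (1 - d)                             # slope of the activation at each point

    # Weighted linear system: minimize sum_i s_i^2 * (x_i . w - z_i)^2,
    # solved through the SVD-based pseudo-inverse.
    Xw = X * s[:, None]
    zw = z * s
    w = np.linalg.pinv(Xw) @ zw
    print(np.round(w, 2))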
|
About: A non-iterative learning method for one-layer (no hidden layer) neural networks in which the weights are calculated in closed form, thereby avoiding slow convergence and hyperparameter tuning. The proposed learning method, LANN-SVD in short, offers good computational efficiency for large-scale data analytics. Changes:Initial Announcement on mloss.org.
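The closed-form step can be sketched as an SVD-based least-squares solve in NumPy (illustrative only; a linear output and a simple appended bias column are assumptions, not details of LANN-SVD):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    X1 = np.hstack([X, np.ones((len(X), 1))])           # append a bias column
    W_true = rng.normal(size=(11, 3))
    Y = X1 @ W_true + 0.1 * rng.normal(size=(1000, 3))

    U, s, Vt = np.linalg.svd(X1, full_matrices=False)   # closed-form weights via SVD
    W = Vt.T @ np.diag(1.0 / s) @ (U.T @ Y)
    print(np.abs(W - W_true).max())                     # should be small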
|
About: Hofmann, T. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd ACM SIGIR International Conference on Research and Development in Information Retrieval (Berkeley, Calif.), ACM, New York, 50–57. Changes:Initial Announcement on mloss.org.
|
About: A method to optimize the hyperparameters of machine learning methods implemented in Scikit-learn, based on Derivative-Free Optimization. Changes:Initial Announcement on mloss.org.
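A small Python sketch of the general approach (Nelder-Mead, a derivative-free method from SciPy, tuning one scikit-learn hyperparameter via cross-validation); the choice of estimator and search space is illustrative, not this method's interface:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    def objective(log_c):
        # Derivative-free objective: negative cross-validated accuracy of an SVM
        # whose regularization parameter C is searched on a log scale.
        clf = SVC(C=float(np.exp(log_c[0])))
        return -cross_val_score(clf, X, y, cv=3).mean()

    res = minimize(objective, x0=[0.0], method="Nelder-Mead")   # no gradients needed
    print("best C:", float(np.exp(res.x[0])), "cv accuracy:", -res.fun)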
|