The incomplete Cholesky decomposition of a dense symmetric positive definite matrix A is a simple way of approximating A by a matrix of low rank (you can choose the rank). It has been used frequently in machine learning (Fine & Scheinberg; Bach & Jordan). Here is an efficient implementation.
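The announcement does not show the implementation itself, but the standard pivoted incomplete Cholesky algorithm (as described by Fine & Scheinberg) can be sketched as follows; the function name, signature, and tolerance are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def incomplete_cholesky(A, rank, tol=1e-12):
    """Pivoted incomplete Cholesky sketch (not the package's actual API).

    Returns G of shape (n, k), k <= rank, such that A is approximated
    by G @ G.T. Pivoting greedily picks the largest remaining diagonal
    element, so the approximation error decreases as fast as possible.
    """
    n = A.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(A).astype(float).copy()  # residual diagonal of A - G @ G.T
    for i in range(rank):
        j = int(np.argmax(d))            # pivot: largest residual diagonal
        if d[j] < tol:                   # residual negligible: stop early
            return G[:, :i]
        # New column of G; trailing columns of G are still zero, so the
        # full product G @ G[j, :] only involves the first i columns.
        G[:, i] = (A[:, j] - G @ G[j, :]) / np.sqrt(d[j])
        d -= G[:, i] ** 2                # update residual diagonal
    return G
```

For a matrix of exact rank r, the loop terminates after r columns; in general the first k columns give the best greedy rank-k approximation under this pivoting rule.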
Kernels supported at the moment are RBF (Gaussian) and squared-exponential; I might add more if I need them. Please consider sending me extensions for new kernels you have written yourself.
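As a sketch of how such a kernel matrix arises (the function below is an illustrative assumption, not the package's API), the RBF/Gaussian kernel produces a symmetric positive semidefinite matrix that is typically well approximated by low rank, which is what makes incomplete Cholesky attractive here:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF (Gaussian) kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).

    X has shape (n, d); the result is a symmetric positive semidefinite
    (n, n) matrix, suitable input for an incomplete Cholesky routine.
    """
    sq = np.sum(X ** 2, axis=1)
    # Squared pairwise distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b;
    # clip at zero to guard against tiny negative values from round-off.
    D = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * D)
```

The eigenvalues of such a kernel matrix decay quickly for smooth kernels, so a rank far below n usually suffices.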
- Changes to previous version:
Initial Announcement on mloss.org.