Projects tagged with symbolic differentiation.

DiffSharp 0.7.0

by gbaydin - September 29, 2015, 14:09:01 CET

About: DiffSharp is an automatic differentiation (AD) library providing gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products. It allows exact and efficient calculation of derivatives, with support for nesting.
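To illustrate the "exact and efficient" claim, here is a minimal, hypothetical Python sketch of forward-mode AD using dual numbers; it is not DiffSharp code (DiffSharp is an F# library and far more general), just the core idea of propagating a derivative alongside the value:

```python
# Minimal forward-mode AD sketch using dual numbers (illustrative only;
# not DiffSharp's implementation).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val  # primal value
        self.dot = dot  # tangent (derivative) value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def deriv(f, x):
    """Exact derivative of f at x via one forward pass."""
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x) at x = 4 is 6x + 2 = 26
print(deriv(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```

Unlike numerical differentiation, no step size is involved, so the result is exact up to floating-point rounding.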


Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit precision, and different CPU/GPU backends.

Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under DiffSharp.AD (with DiffSharp.AD.Float32 and DiffSharp.AD.Float64 variants, see below). This internally makes use of forward or reverse AD as needed.

Added: Support for 32 bit (single precision) and 64 bit (double precision) floating point operations. All modules have Float32 and Float64 versions providing the same functionality with the specified precision. 32 bit floating point operations are significantly faster (as much as twice as fast) on many current systems.

Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types D for scalars, DV for vectors, and DM for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transposition), parallel implementations in managed code are used. All operations with the D, DV, and DM types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/CuBLAS), which will be introduced in upcoming releases. Please see the documentation and API reference for information about how to use the D, DV, and DM types. (Deprecated: The FsAlg generic linear algebra library and the Vector<'T> and Matrix<'T> types are no longer used.)

Fixed: Reverse mode AD has been reimplemented in a tail-recursive way, improving performance and preventing the StackOverflow exceptions encountered in previous versions.
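The stack-overflow issue arises because a naive reverse pass recurses through the computation graph. A minimal, hypothetical Python sketch of a tape-based reverse pass (not DiffSharp's F# implementation) shows the recursion-free alternative: the backward sweep is a plain loop over the tape, so arbitrarily deep graphs cannot exhaust the call stack:

```python
# Minimal tape-based reverse-mode AD sketch (illustrative only; DiffSharp's
# tail-recursive F# implementation is more general).
tape = []

class Var:
    def __init__(self, val, parents=()):
        self.val = val
        self.parents = parents  # list of (parent Var, local partial)
        self.grad = 0.0
        tape.append(self)       # record creation order on the tape

    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.val * other.val,
                   [(self, other.val), (other, self.val)])

def backward(output):
    output.grad = 1.0
    # Walk the tape in reverse creation order -- no recursion needed.
    for node in reversed(tape):
        for parent, local in node.parents:
            parent.grad += node.grad * local

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
backward(z)
print(x.grad, y.grad)  # 5.0 3.0
```

One reverse sweep yields the gradient with respect to every input at once, which is why reverse mode is the method of choice for gradients of scalar-valued functions with many inputs.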

Changed: The library now uses F# 4.0 (FSharp.Core).

Changed: The library is now 64 bit only, meaning that users should set "x64" as the platform target for all build configurations.

Fixed: Various other bug fixes.

Theano 0.7

by jaberg - March 27, 2015, 16:40:18 CET

About: A Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It dynamically generates CPU and GPU modules for good performance. The Deep Learning Tutorials illustrate how to use Theano for deep learning.
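The "define, then evaluate" workflow can be sketched in a few lines of plain Python. This is a hypothetical toy expression graph, not Theano's API (Theano builds graphs from tensor variables and compiles them with its function machinery); it only shows the separation between symbolic definition and later evaluation:

```python
# Hypothetical sketch of the define-then-evaluate style (not Theano's API):
# build a symbolic expression graph first, then evaluate it with inputs.
class Expr:
    def __add__(self, other): return Op('add', self, other)
    def __mul__(self, other): return Op('mul', self, other)

class Scalar(Expr):
    """A named symbolic input."""
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]

class Op(Expr):
    """An operation node combining two sub-expressions."""
    def __init__(self, kind, a, b): self.kind, self.a, self.b = kind, a, b
    def eval(self, env):
        a, b = self.a.eval(env), self.b.eval(env)
        return a + b if self.kind == 'add' else a * b

# Define the expression once...
x, y = Scalar('x'), Scalar('y')
expr = x * y + x
# ...then evaluate it repeatedly with different concrete inputs.
print(expr.eval({'x': 2.0, 'y': 5.0}))  # 12.0
```

Holding the whole graph before evaluation is what lets a system like Theano optimize it and generate fast CPU or GPU code, rather than interpreting each operation eagerly.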


Theano 0.7 (26th of March, 2015)

We recommend that everyone upgrade to this version.


* Integration of CuDNN for 2D convolutions and pooling on supported GPUs
* Too many optimizations and new features to count
* Various fixes and improvements to scan
* Better support for GPU on Windows
* On Mac OS X, clang is used by default
* Many crash fixes
* Some bug fixes as well