Email: <fred.mailhot AT gmail DOT com>
This page will be used to track the progress of my Google Summer of Code project, which involves adding "neural" network functionality to SciPy (Robert Kern is my mentor).
Today is August 20th, the SoC deadline. I intend to continue contributing to this project and will keep using this page to track my progress. Hopefully things will get more regular now.
Apparently I wasn't so good at keeping up with the Wiki part of this SoC project. Today's the coding deadline (in about 1hr40, actually).
So I've got working MLP, RBF and SRN (Elman net) modules. For the moment they're more basic than I was hoping for: the available architectures and optimization routines are pretty limited. MLP, SRN and RBF weights are optimized using the leastsq routine from scipy.optimize (I still have to find out whether that's considered a no-no), and RBF centers are a random subset of the input data.
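For the record, the basic trick of fitting network weights with leastsq looks roughly like this. This is a minimal sketch with a made-up one-hidden-layer tanh MLP (the layer sizes, helper names, and toy regression task are all illustrative, not the actual sandbox code): flatten all the weights into one vector, and hand leastsq a residual function whose sum of squares is the training error.

```python
import numpy as np
from scipy.optimize import leastsq

np.random.seed(0)

# Hypothetical sizes for a tiny single-hidden-layer MLP (no bias terms,
# just to keep the sketch short).
n_in, n_hid, n_out = 2, 3, 1

def unpack(w):
    """Split the flat parameter vector back into the two weight matrices."""
    w1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return w1, w2

def forward(w, x):
    """Forward pass: tanh hidden layer, linear output layer."""
    w1, w2 = unpack(w)
    return np.tanh(np.dot(x, w1)).dot(w2)

def residuals(w, x, t):
    # leastsq minimizes the sum of squares of this vector, so each entry
    # is one output's error on one training pattern.
    return (forward(w, x) - t).ravel()

# Toy regression data: fit t = sin(x1 + x2) on 20 random points.
# (leastsq needs at least as many residuals as parameters.)
x = np.random.randn(20, 2)
t = np.sin(x.sum(axis=1, keepdims=True))

w0 = 0.1 * np.random.randn(n_in * n_hid + n_hid * n_out)
w_opt, ier = leastsq(residuals, w0, args=(x, t))
sse = np.sum(residuals(w_opt, x, t) ** 2)
```

The same recipe works for the SRN and RBF modules too: only the forward function changes, while the flatten/unpack and residual machinery stays the same.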
The project stagnated for a good part of the summer due to non-SoC-related conflicts, but the work I've done in the past little while has confirmed to me that I will continue working on this project (I'm still dreaming of creating something like Doug Rohde's LENS in Python).
One BIG issue that I've come up against is the lack of unit tests, something I explicitly included in my proposal at Robert Kern's behest. I was pretty ignorant about what proper unit testing involved when I got into this, so of course I found out not long ago that writing these tests is way harder once a bunch of your code already exists. Eventually I'll work something out for the code I've written so far, and I'll make sure to write tests first for the stuff I still intend to add.
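Even for already-written code, some after-the-fact tests of simple invariants are better than nothing. Here's a sketch of what I have in mind, using the standard unittest module and a stand-in forward-pass helper (the function and the checked properties are illustrative, not the sandbox module's actual API):

```python
import unittest
import numpy as np

def mlp_forward(w1, w2, x):
    """Stand-in forward pass for a one-hidden-layer tanh MLP."""
    return np.tanh(np.dot(x, w1)).dot(w2)

class TestMLPForward(unittest.TestCase):

    def test_output_shape(self):
        # 4 patterns, 2 inputs, 3 hidden units, 1 output
        x = np.zeros((4, 2))
        w1 = np.zeros((2, 3))
        w2 = np.zeros((3, 1))
        self.assertEqual(mlp_forward(w1, w2, x).shape, (4, 1))

    def test_zero_weights_give_zero_output(self):
        # tanh(0) == 0, so an all-zero network must output exactly zero
        x = np.random.randn(4, 2)
        w1 = np.zeros((2, 3))
        w2 = np.zeros((3, 1))
        self.assertTrue(np.allclose(mlp_forward(w1, w2, x), 0.0))
```

Cheap structural checks like these won't catch subtle learning bugs, but they do catch the shape and indexing mistakes that bite first when refactoring.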
Speaking of which, my main goals are to give users more options w.r.t. architectural parameters (both the types of networks available and within-network params -- e.g. several hidden layers) and to get the fully recurrent network in there.
I finally got around to committing some code to the sandbox today. Too bad it doesn't do what I want. I'm trying to use the leastsq function from scipy.optimize, but something's not working right. In particular, the optimization routine bails out after hitting the maximum number of function calls, with no difference in the sum-squared error on the test set (which is weird, given that the mlp's weights have definitely changed).
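One way to see what leastsq is actually doing is to ask for the full output: it returns a status flag and an info dict alongside the solution, so you can tell "converged" apart from "hit the evaluation cap". A minimal sketch with a toy residual function (the real one would come from the MLP code):

```python
import numpy as np
from scipy.optimize import leastsq

def residuals(w):
    # Illustrative overdetermined toy problem, 3 residuals in 2 parameters.
    return np.array([w[0] - 1.0, w[1] + 2.0, w[0] * w[1] - 3.0])

w0 = np.zeros(2)
w, cov, info, mesg, ier = leastsq(residuals, w0, full_output=True)

print("ier:", ier)            # 1-4 mean some convergence test was met;
                              # 5 means maxfev was reached
print("mesg:", mesg)          # human-readable version of the same
print("nfev:", info["nfev"])  # function evaluations actually used
print("SSE:", np.sum(info["fvec"] ** 2))
```

If ier comes back 5, passing a larger maxfev keyword to leastsq raises the evaluation cap; if it comes back 1-4 but the test-set error hasn't moved, the problem is more likely in how the residual vector is built than in the optimizer itself.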