Neural nets, also known as artificial neural networks, mathematically model bioelectrical networks in the brain. Massively parallel and more inductive than deductive, they are used for everything from voice and character recognition to artificial intelligence. Python developer Andrew Blais introduces you to the simplest of the neural nets, the Hopfield net, and his net.py application gives you a hands-on opportunity to explore its ability to reconstruct distorted patterns.
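To get a feel for the pattern reconstruction the blurb describes, here is a minimal Hopfield net sketch in Python. This is my own illustration, not code from Blais's net.py: the function names are hypothetical, and it uses NumPy and Hebbian training with synchronous sign updates, which is one common formulation.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: sum the outer products of the stored bipolar (+1/-1)
    # patterns, then zero the diagonal so no unit feeds back into itself.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    # Synchronous updates: each unit takes the sign of its weighted input.
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one bipolar pattern, corrupt one bit, and let the net restore it.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[np.newaxis, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))  # True
```

With a single stored pattern and one flipped bit, every unit's weighted input still points toward the stored pattern, so one update round repairs the corruption; capacity limits only bite when you store many patterns.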
good article – more like this please.
there is definitely a thirst for introductory articles on interesting topics like neural nets, clustering, image processing, sound recognition, and the use of OS and IT technologies particular to scientific computing (parallel computing, MPI, how to tackle large datasets, distributed filesystems, etc.)… biocomputing is currently very topical.
and less of the “how to set up a user” articles.
I agree. But judging from the number of comments on the article, it looks like this kind of article is not very popular. People prefer to keep discussing who's best ad infinitum (KDE vs. GNOME, C# vs. Java, Python vs. Ruby, Windows vs. Linux, etc.)
Anyway, remember that you can also submit news, in case you find the kind of interesting stuff you mentioned.
Neural Nets just aren’t cool. What about hidden Markov models and support vector machines?
Perhaps you might want to elaborate on why they are lame?
@Rod
This article doesn’t have as much feedback mainly because most people don’t have an opinion on it. Contrast that with the latest update to KDE or the GTK file dialog.
I read it and found it very interesting.
I like Python and am interested in AI, but I don’t know much about it yet. Giving it a read now…
Perhaps you’ll like an article I wrote on perceptrons:
http://www.kuro5hin.org/story/2003/11/11/17383/475
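For anyone who would rather see the idea before following the link, here is a minimal perceptron sketch. It is my own illustration, not code from the linked article; the names and the choice of learning the AND function are assumptions for the example.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    # Classic perceptron rule: when an example is misclassified,
    # nudge the weights and bias toward the correct label.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# Learn logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating line; swap in XOR and it never will, which is exactly the limitation that motivated multi-layer nets.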
Also, I think that articles like this get few responses because they’re difficult to comprehend and don’t have much of a social aspect, which is what people like to comment on.