Neural Network

The origin of convolutional neural networks

In 2000, it was not fashionable to process natural images with hierarchical convolutional neural networks (CNNs). Only a minority of computational neuroscientists were interested in leveraging the brain's computational efficiency for image classification. I was lucky to work in one of these pioneering teams. As you will see below, we implemented multi-layer convolutional neural networks for face detection and recognition. In Annex 2 of my thesis, I wrote about 20 pages on 2-D pooling methods (not max pooling, merely average pooling). In addition to stacked 2-D convolution layers and optional pooling, we also had fully connected layers for lateral inhibition. Our learning methods were biologically inspired (Hebbian learning) and often unsupervised, yet they worked very well for image classification. We believe this was a precursor to modern CNN implementations such as AlexNet.
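For readers who want a concrete picture of the building blocks mentioned above, here is a minimal NumPy sketch of a 2-D convolution layer followed by average pooling. The kernels and layer sizes are made up for illustration; this is not the code from my thesis.

```python
import numpy as np
from scipy.signal import convolve2d

def average_pool(feature_map, window=2):
    """Down-sample a 2-D feature map by averaging non-overlapping windows."""
    h, w = feature_map.shape
    h, w = h - h % window, w - w % window          # crop to a multiple of the window
    blocks = feature_map[:h, :w].reshape(h // window, window, w // window, window)
    return blocks.mean(axis=(1, 3))

def conv_layer(image, kernels):
    """Apply a bank of 2-D convolution kernels and average-pool each output map."""
    return [average_pool(convolve2d(image, k, mode="same")) for k in kernels]

# Toy example: two orientation-like kernels applied to a random "image".
rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernels = [np.array([[1, 0, -1]] * 3, float),      # vertical edge detector
           np.array([[1, 0, -1]] * 3, float).T]    # horizontal edge detector
maps = conv_layer(image, kernels)
print([m.shape for m in maps])                     # two 32x32 pooled feature maps
```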

Spiking Neural Network for face recognition

During my Ph.D. thesis and up until 2006, I designed neural networks to process natural images. With Simon Thorpe, I created SpikeNET [1, 2], a program designed for simulating very large networks of asynchronous spiking neurons. Neurons are simulated with a limited number of parameters that include classic properties, such as the post-synaptic potential and threshold, but also more novel features, such as dendritic sensitivity, which has been shown to be biologically plausible [3]. SpikeNET can be used to simulate networks with millions of neurons and hundreds of millions of synaptic weights. Optimizing computation time, with the aim of real-time processing, has been one of the driving forces behind the development of SpikeNET. SpikeNET is no longer actively developed, but the program is open source and may be downloaded from the SpikeNET GitHub page.
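To give an idea of what "event-driven" means in practice, here is a small, self-contained sketch of an integrate-and-fire layer that is updated only when input spikes arrive, rather than at every time step. The class and parameter names are invented for this example; it is not SpikeNET code.

```python
import numpy as np

class IntegrateAndFireLayer:
    """Minimal event-driven integrate-and-fire layer (illustrative only)."""

    def __init__(self, weights, threshold=1.0):
        self.weights = weights                     # shape: (n_inputs, n_neurons)
        self.threshold = threshold
        self.potential = np.zeros(weights.shape[1])
        self.has_fired = np.zeros(weights.shape[1], dtype=bool)

    def receive_spike(self, input_index):
        """Propagate one pre-synaptic spike: update potentials, return neurons that fire."""
        self.potential += self.weights[input_index]           # post-synaptic potentials
        newly_fired = (self.potential >= self.threshold) & ~self.has_fired
        self.has_fired |= newly_fired                         # each neuron fires at most once
        return np.flatnonzero(newly_fired)

# Toy run: 4 input neurons projecting onto 3 target neurons.
rng = np.random.default_rng(1)
layer = IntegrateAndFireLayer(rng.uniform(0.2, 0.6, size=(4, 3)))
for spike in [2, 0, 3, 1]:                         # input spikes in their order of arrival
    print("input", spike, "-> fired:", layer.receive_spike(spike))
```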

I used spiking neurons for a variety of applications during my thesis. For example, SpikeNET was used for face recognition, as shown below [4]. Faces are presented, and three layers of retinotopically organized neuronal maps extract orientations (first layer), detect face features (mouth and eyes), and detect faces (last layer). Each white pixel in the image represents a neuronal discharge at a specific retinotopic location. The originality of SpikeNET is its feedforward spiking connection scheme (the animation below is slowed down about 500 times compared to real time).
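The scheme relies on the idea that the most strongly activated neurons fire first, so the order of the spikes carries most of the information [4]. The sketch below illustrates this rank-order principle in a schematic way; the modulation factor and layer sizes are arbitrary, not the published parameters.

```python
import numpy as np

def spike_order(activations):
    """Return input indices sorted by activation: strongest inputs fire first."""
    return np.argsort(activations)[::-1]

def rank_order_response(order, weights, modulation=0.9):
    """Accumulate each target neuron's potential, down-weighting later spikes.

    order      : input indices in their firing order (earliest first)
    weights    : (n_inputs, n_targets) synaptic weights
    modulation : multiplicative factor applied for each additional rank
    """
    potentials = np.zeros(weights.shape[1])
    for rank, idx in enumerate(order):
        potentials += (modulation ** rank) * weights[idx]
    return potentials

# Toy example: 8 inputs (e.g. orientation-selective units) projecting to 2 targets.
rng = np.random.default_rng(2)
activations = rng.random(8)                        # e.g. local contrast values
weights = rng.uniform(0.0, 1.0, size=(8, 2))
order = spike_order(activations)
print("firing order:", order)
print("target potentials:", rank_order_response(order, weights))
```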

Spiking Neural Network for face detection

The neural network is also extremely resilient to changes in contrast and noise, as shown in the two pictures below [5]. Its resistance to contrast reduction and noise was found to exceed that of the human visual system. Performance is affected only when contrast drops below 3% (the example image was still detected at contrasts as low as 0.005%).
Regarding resistance to noise, even with 50% noise the network's performance remains unaffected (the example image was the last one to be detected, at more than 90% noise).
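For a concrete sense of what these degradations look like, the sketch below reduces an image's contrast and replaces a fraction of its pixels with random values. The exact degradation procedure used in [5] may differ; this is only an approximation.

```python
import numpy as np

def reduce_contrast(image, contrast):
    """Scale pixel deviations around the mean; contrast=1.0 leaves the image unchanged."""
    return image.mean() + contrast * (image - image.mean())

def add_pixel_noise(image, fraction, rng):
    """Replace a given fraction of pixels with random values."""
    noisy = image.copy()
    mask = rng.random(image.shape) < fraction
    noisy[mask] = rng.random(mask.sum())
    return noisy

rng = np.random.default_rng(3)
image = rng.random((64, 64))
low_contrast = reduce_contrast(image, contrast=0.03)          # 3% of the original contrast
very_noisy = add_pixel_noise(image, fraction=0.5, rng=rng)    # 50% of pixels replaced
print(low_contrast.std() / image.std(), (very_noisy != image).mean())
```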
This type of network was used to simulate a very large network of 31 850 000 neurons and more than 245 000 000 000 connections, and to detect faces in giant photographs [5]. Faces inside green rectangles were correctly detected; faces inside red rectangles were missed.

Spiking Neural Network and unsupervised learning

Finally, SpikeNET was also able to learn in an unsupervised manner. The animation below shows the learning of orientation selectivity from a collection of natural images [6].
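The rule behind this kind of learning is spike-timing-dependent plasticity: synapses from inputs that fire before the post-synaptic neuron are strengthened, while the others are weakened [6]. Here is a minimal sketch of that principle; the parameter values and winner selection are arbitrary, not those of the paper.

```python
import numpy as np

def stdp_update(weights, input_order, winner, post_rank, lr=0.05):
    """Strengthen synapses from inputs that fired before the winning neuron, weaken the rest.

    weights     : (n_inputs, n_neurons) synaptic weights, kept in [0, 1]
    input_order : input indices sorted by firing time (earliest first)
    winner      : index of the post-synaptic neuron that fired
    post_rank   : how many input spikes arrived before the post-synaptic spike
    """
    before = input_order[:post_rank]               # pre-before-post: potentiate
    after = input_order[post_rank:]                # pre-after-post: depress
    weights[before, winner] += lr * (1.0 - weights[before, winner])
    weights[after, winner] -= lr * weights[after, winner]
    return np.clip(weights, 0.0, 1.0)

# Toy run: 16 inputs (an image patch) onto 4 competing neurons; with repeated exposure,
# each neuron gradually becomes selective to the inputs that drive it earliest.
rng = np.random.default_rng(4)
weights = rng.uniform(0.3, 0.7, size=(16, 4))
for _ in range(100):
    patch = rng.random(16)                         # stand-in for a natural image patch
    order = np.argsort(patch)[::-1]                # strongest inputs fire first
    winner = int(np.argmax(weights[order[:6]].sum(axis=0)))  # most early input (stand-in for first to threshold)
    weights = stdp_update(weights, order, winner, post_rank=6)
print(weights.round(2))
```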

This work was later extended by Tim Masquelier and Simon Thorpe, who were able to recognize objects based on similar principles [7]. The video below shows unsupervised learning of visual categories (faces and motorcycles).

The publication section contains all the references about SpikeNET.

[1] Delorme, A., Gautrais, J., VanRullen, R., & Thorpe, S.J. (1999). SpikeNET: A simulator for modeling large networks of integrate and fire neurons. Neurocomputing, 26-27, 989-996.

[2] Delorme, A., & Thorpe, S.J. (2003). SpikeNET: An event-driven simulation package for modeling large networks of spiking neurons. Network: Computation in Neural Systems, 14, 613-627.

[3] Delorme, A. (2003). Early cortical orientation selectivity: How fast shunting inhibition decodes the order of spike latencies. Journal of Computational Neuroscience, 15, 357-365.

[4] VanRullen, R., Gautrais, J., Delorme, A., & Thorpe, S.J. (1998). Face processing using one spike per neurone. Biosystems, 48(1-3), 229-239.

[5] Delorme, A., & Thorpe, S.J. (2001). Face processing using one spike per neuron: Resistance to image degradation. Neural Networks, 14(6-7), 795-804.

[6] Delorme, A., Perrinet, L., & Thorpe, S.J. (2001). Network of integrate-and-fire neurons using Rank Order Coding B: Spike timing dependent plasticity and emergence of orientation selectivity. Neurocomputing, 38-40(1-4), 539-545.

[7] Masquelier, T., & Thorpe, S.J. (2007). Unsupervised learning of visual features through spike timing dependent plasticity. PLoS Computational Biology, 3(2), e31, 247-257.