
Tuesday, September 06, 2005

 
Computing with Spikes

After assembling 100,000 Izhikevich-type neurons into a loosely connected network with 10,000,000 synapses, and brewing the resulting soup for a day, I failed to see anything interesting emerge. I did, however, see some really cool patterns; depending on the parameters, I could make them vary from noise to fire-like protuberances to herds of smoke rings to epileptic seizures.
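The post doesn't include the simulation code, so here is a minimal NumPy sketch of the standard Izhikevich (2003) update rule the neurons are based on; the regular-spiking parameters and the noisy stand-in for synaptic input are my assumptions, not values from the run.

import numpy as np

# One Euler step of the Izhikevich model: v is the membrane potential (mV),
# u is the recovery variable; a, b, c, d are the four model parameters.
def izhikevich_step(v, u, I, a, b, c, d, dt=1.0):
    fired = v >= 30.0                          # spike threshold at +30 mV
    v = np.where(fired, c, v)                  # reset potential after a spike
    u = np.where(fired, u + d, u)              # bump the recovery variable
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    return v, u, fired

# Example: 1,000 regular-spiking neurons driven by noisy input current.
n = 1000
v = np.full(n, -65.0)
u = 0.2 * v
for t in range(1000):
    I = 5.0 * np.random.randn(n)               # stand-in for synaptic input
    v, u, fired = izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0)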

In the limit as the number of neurons becomes large, their relative positioning begins to matter less and less, until they're more like pixels on a grid. The resulting mesh is essentially a 2d cellular automaton.

run 1. Each of 15,000 neurons has a 2d coordinate and forms up to 100 synapses with 50% of the neurons within a radius of 0.05. The field is a 1x1 square. 20% of all neurons are inhibitory. All the other parameters are visible in the screenshot.
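The wiring code isn't shown in the post; the sketch below reproduces the rule just described (2d coordinates on a 1x1 field, a 0.05 radius, a 50% connection chance, a 100-synapse cap, 20% inhibitory neurons). Uniform random placement and the brute-force distance search are my assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 15_000
pos = rng.random((n, 2))                       # 2d coordinates on the 1x1 field
inhibitory = rng.random(n) < 0.20              # 20% of neurons are inhibitory

radius, p_connect, max_syn = 0.05, 0.5, 100
synapses = []                                  # list of (pre, post) pairs
for i in range(n):
    d = np.linalg.norm(pos - pos[i], axis=1)   # brute force; a k-d tree would scale better
    neighbours = np.flatnonzero((d < radius) & (d > 0))
    picked = neighbours[rng.random(neighbours.size) < p_connect]
    if picked.size > max_syn:                  # cap at 100 synapses per neuron
        picked = rng.choice(picked, max_syn, replace=False)
    synapses.extend((i, j) for j in picked)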



run 2. This run shows the spike window, in which the X coordinate is time, the Y coordinate is the neuron number, and a dot is drawn if that neuron fired at that time. Notice the throttles on the left.
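Such a spike window is easy to reproduce with matplotlib: X is the time step, Y is the neuron index, one dot per spike. The firings array below is filled with fake data, since the actual recordings from the run aren't available.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
firings = np.argwhere(rng.random((1000, 500)) < 0.01)   # fake (time, neuron) pairs

plt.scatter(firings[:, 0], firings[:, 1], s=1, color='black')
plt.xlabel('time step')
plt.ylabel('neuron index')
plt.title('spike raster')
plt.show()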



Here, the processing surface was created according to the following template, which is a .bmp file:



Green means "input neurons", blue is "output", red is "all synapses must lead to the right", and white is the default. The idea is that different geometries can be constructed by using a bitmap to influence what kinds of neurons can grow where.

Anyway, in this simulation a text file served as the stimulus. Each input neuron was assigned a letter according to its Y coordinate (with a to z running top-to-bottom), and each letter read from the file activated the entire group of input neurons associated with it. These neurons then emitted the information, which was inexorably forced to the right by the red neurons, but also chewed on by the white recombination neurons. When the blue neurons finally got activated, their Y coordinates were again interpreted as letters of the English alphabet, and each time an output neuron fired, it printed its letter. Each output neuron also had a synapse back to the corresponding input neuron; without this rule, zero coherence would be expected, as there would be no "Hebbian backpropagation" of strong synapses back to the input.
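The post doesn't show how the template is read, so the following is only a guess at how the colour lookup and the letter mapping could work; the file name template.bmp, the colour thresholds, and the function names are all hypothetical.

from PIL import Image
import numpy as np

# The colour of the pixel under a neuron's (x, y) position decides its role.
img = np.asarray(Image.open('template.bmp').convert('RGB'))
h, w, _ = img.shape

def role_at(x, y):
    """x, y in [0, 1); returns 'input', 'output', 'rightward' or 'default'."""
    r, g, b = img[int(y * h), int(x * w)]
    if g > 200 and r < 100 and b < 100:
        return 'input'                         # green: input neurons
    if b > 200 and r < 100 and g < 100:
        return 'output'                        # blue: output neurons
    if r > 200 and g < 100 and b < 100:
        return 'rightward'                     # red: synapses must lead right
    return 'default'                           # white: ordinary neurons

# Letters map to vertical bands: 'a' at the top of the field, 'z' at the bottom.
def letter_for_y(y):
    return chr(ord('a') + min(int(y * 26), 25))

Here is some typical output: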

afyglhlsokmljyvuvurazcixgkwdftavhcfxcrzghktmiupwhnrzoglexskgfxyvbvqgznsdlhlzwrayj
zurdbdjtngvnjthefikopewtucbcpilhpdfyrhpvdwzobclunodikycrjkzuxeospdqwrlmecgycjinlt

Obviously, this is garbage. Forget the fact that you can occasionally spot a semi-random word or two. So, why report this non-result at all? Because it makes the following needs clear:
1. Hebbian learning (STDP) is not enough. It causes "autistic" behavior, as inputs self-organize into closed loops and happily travel along those loops. (A minimal STDP sketch follows after this list.)
2. Any output mechanism requires modulation. None of the neural networks I've seen have modulatory control (except inhibitory neurons). Think control theory. Whether expressed indirectly in the ratio of inhibitory to excitatory neurons, or more explicitly by the connectivity of the network, the network should have a feedback mechanism that ensures it converges to a certain type of computation.
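The post doesn't specify the plasticity rule beyond "Hebbian learning (STDP)", so the sketch promised above is just the textbook pair-based STDP weight update; the amplitudes and time constants are typical values, not ones taken from the simulation.

import numpy as np

# Pair-based STDP: strengthen a synapse when the presynaptic spike precedes
# the postsynaptic spike, weaken it when it follows.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0               # ms

def stdp_dw(dt):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)     # pre before post: potentiate
    return -A_MINUS * np.exp(dt / TAU_MINUS)       # post before pre: depress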
