
Thursday, September 29, 2005

 
Consciousness and the Brain

A nicely detailed bibliography

Wednesday, September 28, 2005

 
Schrödinger, "Mind and Matter" (1958)

"Any succession of events in which we take part with sensations, perceptions and possibly with actions gradually drops out of the domain of consciousness when the same string of events repeats itself in the same way very often. But it is immediately shot up into the conscious region, if at such a repetition either the occasion or the environmental conditions met with on its pursuit differ from what they were on all the previous incidences. Even so, at first anyhow, only those modifications or 'differentials' intrude into the conscious sphere that distinguish the new incidence from previous ones and thereby usually call for 'new considerations'...

I would summarize my general hypothesis thus: consciousness is associated with the learning of the living substance; its knowing how is unconscious"

Sunday, September 18, 2005

 
Constant parameters in the anatomy and wiring of the mammalian brain

There are a number of useful observations here, but the one most often cited is the network diameter of the cortex -- the average number of hops from any neuron to any other neuron.

According to the paper, the average distance from any neuron in your cortex to any other neuron is 2.6 steps.

But that's a bullshit measure, in my opinion. Those short-cut connections that serve to make the network diameter so small can only be useful for synchronization, I argue, not for computation. The reason is simple -- they have no bandwidth relative to the rest of the network, thus can't carry enough useful information across.
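For intuition about how disproportionately a handful of shortcuts shrinks the hop count, here is a toy sketch -- a ring lattice with a few random long-range edges, not actual cortical wiring; all sizes are made up:

    # Average hop count on a ring lattice, before and after adding a few
    # random long-range shortcuts (plain BFS, no external libraries).
    import random
    from collections import deque

    def avg_hops(adj):
        n, total, pairs = len(adj), 0, 0
        for src in range(n):
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            total += sum(dist.values())
            pairs += len(dist) - 1
        return total / pairs

    n, k = 1000, 4                       # 1,000 nodes, 4 neighbors on each side
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    print("lattice only:     ", avg_hops(adj))
    for _ in range(20):                  # 20 shortcuts out of ~4,000 edges
        a, b = random.randrange(n), random.randrange(n)
        adj[a].add(b)
        adj[b].add(a)
    print("with 20 shortcuts:", avg_hops(adj))

On a typical run the average hop count drops severalfold, which is exactly the point: the 2.6-step figure is cheap to achieve and says nothing about how much traffic those few shortcuts could actually carry.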

Saturday, September 17, 2005

 
One Intelligence or Many?
An easy read

Friday, September 09, 2005

 
Worm Atlas

These guys have mapped out the entire neural connectivity graph of the worm Caenorhabditis elegans (C. elegans). Quite an accomplishment, considering the worm has 393 neurons and 7,833 neuron-to-neuron connections, which had to be enumerated by hand (each connection is one or more synapses, as indicated by a number at the end of each line). Here is the graph.
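As an aside, a list in that format is easy to load programmatically. Here is a minimal sketch; the column layout ("PRE POST N", with N the synapse count) and the file name are assumptions, not necessarily the Worm Atlas's actual format:

    # Load a connectivity list of the kind described above into a
    # pre -> post -> synapse-count mapping.
    from collections import defaultdict

    def load_connectome(path):
        graph = defaultdict(dict)
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) < 3:
                    continue                          # skip headers / blank lines
                pre, post, count = parts[0], parts[1], int(parts[-1])
                graph[pre][post] = graph[pre].get(post, 0) + count
        return graph

    # g = load_connectome("connectivity.txt")          # hypothetical file name
    # print(sum(len(v) for v in g.values()), "neuron-to-neuron connections")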

Thursday, September 08, 2005

 
Long-term consolidation

How do long-term memories get formed?

"Karni and Sagi (1993) found that after learning a visual skill, performance improved substantially following a delay of eight hours or more."

Wednesday, September 07, 2005

 
"A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented." (link)

Also, general brain facts: http://faculty.washington.edu/chudler/facts.html

 
A rat's cortex has about 60,000,000 neurons (6 cm^2, assuming 100,000 neurons per mm^2).

A human cortex has about 20,000,000,000 neurons (2,500 cm^2).

We should be able to build a rat.
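For the record, the back-of-the-envelope arithmetic behind those two numbers (the density and area figures are the rough estimates quoted above):

    # Neuron-count estimates from cortical area times areal density.
    neurons_per_mm2 = 100_000
    rat_cortex_mm2 = 6 * 100                  # 6 cm^2
    human_cortex_mm2 = 2500 * 100             # 2,500 cm^2
    print(rat_cortex_mm2 * neurons_per_mm2)        # 60,000,000
    print(human_cortex_mm2 * neurons_per_mm2)      # 25,000,000,000 -- same order as the ~20 billion quoted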

 
Artificial Development - News

"Artificial Development is building the largest neural network to date to model a state-of-the-art simulation of the human brain. With 20 billion neurons and 20 trillion connections"

Why bother to sell something you clearly don't have? This company joins Numenta on the list of fake software companies trying to simulate human intelligence before it's clear to anyone how to do something as simple as associating "red" and "apple" to produce "red apple" in a neural network, spiking or not. This is known as the combinatorial problem. Nobody knows how to teach spiking networks anything except simple auto-association (which is why everyone is excited about them in the first place). Also, nobody knows the algorithm that wires the human brain during growth. Finally, simulating 40 million neurons on one computer (something the company claims to be doing) is simply out of reach even on dual-CPU, dual-core Opterons (unless we're talking about 1-bit neurons), so we can safely say this press release is full of shit.

How can I judge? Experience. I've been able to run ~15,000 neurons x 100 synapses on one CPU in real time (that's 1 ms per step). I'm using non-linear neurons, so of course that slows things down a bit. But in reality the killer is the number of synapses and the delay of each synapse, not the number of neurons -- the maximum number of spikes traveling through the system at any moment is Sum(synapse[i].delay), and unless you're careful this number will creep into the billions. The maximum I can comfortably run in under 1 gig of RAM on one node is 200,000 neurons with 100 synapses each. That's 20,000,000 synapses of length up to 10; the maximal spike capacity of this network is 200,000,000 spikes. When the number of spikes is small (2 million spikes/ms), this beast runs at 5 fps. To get real time, you need 1,000 fps, assuming your time resolution is 1 ms.
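To make the bookkeeping concrete, here is a sketch under simple assumptions (a synapse modeled as a fixed-length delay line; the class and the numbers are illustrative, not my actual simulator):

    # Each synapse is a delay line with one slot per millisecond of delay,
    # so the worst-case number of spikes in flight is the sum of all delays.
    from collections import deque

    class Synapse:
        def __init__(self, target, delay, weight):
            self.target, self.weight = target, weight
            self.pipe = deque([0.0] * delay)      # `delay` slots in flight

        def step(self, fired):
            """Push this tick's spike in, pop the one arriving now."""
            self.pipe.append(self.weight if fired else 0.0)
            return self.pipe.popleft()

    # Capacity check for the network size quoted above.
    n_neurons, syn_per_neuron, max_delay = 200_000, 100, 10
    n_synapses = n_neurons * syn_per_neuron           # 20,000,000
    max_in_flight = n_synapses * max_delay            # 200,000,000 spikes
    print(n_synapses, max_in_flight)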

 
The cellular automata (CA) people are particularly pathetic and impractical when it comes to producing something useful. They tend to just sit there, looking at pretty pictures, then philosophize about the globality and profundity of what CAs could do -- "it's alive!". The most practical applications of CAs have been snowflakes and fire for computer graphics. That's about it.

But the sad truth is that no one is even trying to compute something using CAs. By compute I don't mean make a Turing machine out of a CA -- the Game of Life is universal, and so is Rule 110. I really mean using CAs to solve problems that cannot be solved otherwise.

The neural network guys are far ahead of the CA guys in this respect; at least from the very beginning, they were trying to do something practical. As early as 1969, buckets of cold water were poured over perceptrons; this led to progress. But CA borrows from the mystical reputation of fractals to avoid any real criticism. For one, the problem with CAs is that one cell constantly polls its surroundings in order to change its state; neural networks use more precise node-to-node communication, where every weight can be reduced to zero if necessary. No such flexibility with CAs. Next, the most pathetic of CAs are one-dimensional; we won't even consider those. 2D CAs generally use the nearest 8 neighbors to update a cell's state -- not a lot. Finally, each node in a CA generally limits itself to 2 or 3 states -- not a lot, considering that a good neuron has at least 4 floats.

Spiking neural networks with non-linear nodes are really a generalization of CAs. By choosing an appropriate topology and edge delays (e.g. a 2D grid with all edge delays = 1), you can reproduce any of the existing CA patterns. You also get tons of degrees of freedom that you don't get with CAs.
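As a concrete instance of the reduction, here is a sketch where the "non-linear node" is just the Game of Life rule and every edge has delay 1 (this is only an illustration of the mapping, not a claim about any particular neuron model):

    # A 2D grid of binary nodes, each receiving delay-1 "spikes" from its
    # 8 neighbors; the update rule is the Game of Life rule.
    def step(grid):
        h, w = len(grid), len(grid[0])
        nxt = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # incoming spikes = neighbors that fired on the previous tick
                n = sum(grid[(y + dy) % h][(x + dx) % w]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
                nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
        return nxt

    # A glider on a 10x10 toroidal field, stepped a few ticks.
    grid = [[0] * 10 for _ in range(10)]
    for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[y][x] = 1
    for _ in range(4):
        grid = step(grid)
    print(sum(map(sum, grid)), "live cells")        # still 5: the glider survives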

Tuesday, September 06, 2005

 
The fact that the human cortex is exactly 6 layers deep, but in all other respects two-dimensional, is encouraging. It means that using 2D cellular automata (with synaptic connections instead of neighborhood rules) is OK. The only adjustments would be a non-linear distance metric (to account for the folds) and a bunch of off-the-plane connections from a "thalamus" or another input device.

 
Interneuron density series: circuit complexity and axon wiring economy of cortical interneurons

"Brain systems with ‘simple’ computational demands evolved only a few neuron types. The basal ganglia, thalamus and the cerebellum possess a low degree of variability in their neuron types. By contrast, cortical structures have evolved in a manner that most closely resembles a relatively sparsely connected network of few principal cell types and many classes of GABAergic interneurons."

"Three major groups of cortical interneurons are recognized: (i) interneurons controlling principal cell output (by perisomatic inhibition), (ii) interneurons controlling the principal cell input (by dendritic inhibition) and (iii) longrange interneurons coordinating interneuron assemblies."

The interesting implication here is that the computation may be shaped and controlled primarily by the interneurons (inhibitory neurons).

The idea of separately controlling cell input and output with additional neurons is reminiscent of LSTM (long short-term memory) networks. However, LSTM is a classical "average firing rate" network, not a spiking one.
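For reference, this is the gating being alluded to, in a minimal numpy sketch; the weights are random stand-ins, and the forget gate is a later addition to the original LSTM:

    # One LSTM step: input and output gates decide what is written to and
    # read from the cell state -- the loose analogue of the interneurons above.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W):
        z = np.concatenate([x, h])
        i = sigmoid(W["i"] @ z)        # input gate: controls what gets written
        f = sigmoid(W["f"] @ z)        # forget gate: controls what is retained
        o = sigmoid(W["o"] @ z)        # output gate: controls what is read out
        g = np.tanh(W["g"] @ z)        # candidate cell input
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 8
    W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "ifog"}
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    h, c = lstm_step(rng.normal(size=n_in), h, c, W)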

Another observation is that a small number of long-range connections is required to achieve global synchronization by reducing the network diameter.

 
Computing with Spikes

After assembling 100,000 Izhikevich-type neurons into a loosely connected network with 10,000,000 synapses, and brewing the resulting soup for a day, I failed to see anything interesting emerge. I did, however, see some really cool patterns; depending on the parameters, I could make them vary from noise to fire-like protuberances to herds of smoke rings to epileptic seizures.
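For anyone who hasn't seen it, the per-neuron update in the Izhikevich model is tiny. Here is a sketch with the standard "regular spiking" constants; the driving current and the 1 ms step are only illustrative:

    # Izhikevich (2003) neuron: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
    # with reset v <- c, u <- u + d whenever v reaches 30 mV.
    def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0):
        # two half-steps of 0.5 ms for v, as in the published code
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
        fired = v >= 30.0
        if fired:
            v, u = c, u + d
        return v, u, fired

    # Drive a single neuron with constant current for one second and count spikes.
    v, u, spikes = -65.0, -65.0 * 0.2, 0
    for t in range(1000):                 # 1,000 steps of 1 ms
        v, u, fired = izhikevich_step(v, u, I=10.0)
        spikes += fired
    print(spikes, "spikes in one second")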

In the limit as the number of neurons becomes large, their relative positioning begins to matter less and less, until they're more like pixels on a grid. The resulting mesh is essentially a 2D cellular automaton.

Run 1. Each of 15,000 neurons has a 2D coordinate and forms up to 100 synapses with 50% of the neurons within a radius of 0.05. The field is a 1x1 square. 20% of all neurons are inhibitory. All the other parameters are visible.



Run 2. This run shows the spike window, in which the X coordinate is time, the Y coordinate is the neuron number, and a dot is drawn if that neuron fired at that time. Notice the throttles on the left.



Here, the processing surface was created according to the following template, which is a .bmp file:



Green means "input neurons", blue is "output", red is "all synapses must lead to the right", and white is the default. The idea is that different geometries can be constructed by using a bitmap to influence what kinds of neurons can grow where. Anyway, in this simulation a text file served as stimulus. Each input neuron picked a letter according to its Y coordinate (with a to z running top-to-bottom). This letter then activated the entire group of input neurons associated with it. These neurons then emitted the information, which was being inexorably forced right by the red neurons, but also chewed on by the white recombination neurons. When the blue neurons finally got activated, their Y coordinates again being interpreted as letters of the English alphabet, the output was printed. Each output neuron also had a synapse to the corresponding input neuron. Without this rule, zero coherence would be expected, as there would be no "Hebbian backpropagation" of strong synapses back to the input. Each time an output neuron fired, it would print a letter. Here is some typical output:

afyglhlsokmljyvuvurazcixgkwdftavhcfxcrzghktmiupwhnrzoglexskgfxyvbvqgznsdlhlzwrayj
zurdbdjtngvnjthefikopewtucbcpilhpdfyrhpvdwzobclunodikycrjkzuxeospdqwrlmecgycjinlt
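(For concreteness, here is a sketch of the letter encoding just described; the field height and the helper names are hypothetical, while the color-to-role mapping is the one in the template above.)

    # Pixel colors assign neuron roles; a letter is presented by stimulating
    # the rows of input neurons whose Y coordinate maps to that letter.
    ROLE_BY_COLOR = {(0, 255, 0): "input", (0, 0, 255): "output",
                     (255, 0, 0): "rightward-only", (255, 255, 255): "default"}

    def letter_for_row(y, height):
        """Map a vertical position to 'a'..'z', top to bottom."""
        return chr(ord('a') + min(25, y * 26 // height))

    def rows_for_letter(ch, height):
        """All rows whose input neurons fire when `ch` is presented."""
        return [y for y in range(height) if letter_for_row(y, height) == ch]

    print(rows_for_letter('a', height=260))     # the top ten rows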

Obviously, the output is garbage. Forget the fact that you can occasionally spot a semi-random word or two. So, why report this non-result at all? Because it makes the following needs clear:
1. Hebbian learning (STDP) is not enough. It causes "autistic" behavior, as inputs self-organize into closed loops and happily travel along those loops. (A sketch of the STDP rule follows this list.)
2. Any output mechanism requires modulation. None of the neural networks I've seen have modulatory control (except inhibitory neurons). Think control theory. Whether expressed indirectly in the ratio of inhibitory to excitatory neurons, or more explicitly by the connectivity of the network, the network should have a feedback mechanism that ensures that it converges to a certain type of computation.
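Here is the STDP rule referred to in point 1, as a minimal sketch; the amplitudes and time constants are typical textbook values, not tuned to anything:

    # Spike-timing-dependent plasticity for a single pre/post spike pair:
    # pre-before-post potentiates, post-before-pre depresses, both decaying
    # exponentially with the spike-time difference.
    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
        """Weight change for one spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:
            return a_plus * math.exp(-dt / tau_plus)      # potentiation
        if dt < 0:
            return -a_minus * math.exp(dt / tau_minus)    # depression
        return 0.0

    print(stdp_dw(10.0, 15.0))    # small positive change
    print(stdp_dw(15.0, 10.0))    # small negative change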

Friday, September 02, 2005

 
The truth is that no real progress has been made in any of the attempts to implement things like consciousness or general intelligence. This doesn't prevent any number of people from writing entire books about their analogy-laden pet theories of what general intelligence or consciousness are, which they will then defend against competing theories.

Technologically, assembling enough hardware to compete with one entire adult brain is achievable. On the downside, to simulate 100 billion neurons and 100 trillion synapses you will need dedicated hardware -- a cluster of even 10,000 Linux boxes won't do, because one node can (optimistically) handle at most a million neurons in real time. So custom hardware is required. On the upside, no synchronization is required, so you can assemble neuron-simulating FPGAs on motherboards by the dozen. Inter-board bandwidth, again, can't be that hard to achieve, because you just need to physically queue tons of spikes, and existing memory bus architectures are great for that.
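The arithmetic behind the "10,000 boxes won't do" claim, using the optimistic per-node figure above:

    # Naive cluster sizing for a whole-brain simulation.
    neurons_needed = 100_000_000_000       # 100 billion
    neurons_per_node = 1_000_000           # optimistic real-time capacity per node
    nodes_available = 10_000
    print(nodes_available * neurons_per_node)           # 10 billion -- a factor of 10 short
    print(neurons_needed // neurons_per_node, "nodes needed")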

So, "all" that remains is the secret sauce. That's why everybody is lusting to discover it, like the alchemists philosopher's stone. But do we know enough about the way the brain works to make the right simplifying assumptions? Of course not.

So there are two kinds of people trying to crack the problem: the top-downers and the bottom-uppers. The top-downers will do silly things like assemble a bunch of objects in the same order as the components of the human brain (as we know it), then admire the structure from the outside without even a sliver of hope that it will do the right thing. The bottom-uppers will assemble neurons into networks, run them ad infinitum and hope that something "emerges" (word of the day) out of the assembly.

The big question is whether you believe that we have at hand all the tools necessary, that all that's required is a clever implementation. If that's the case, the race is on. But I think it's too early. What's needed is a yet unmade discovery about how information can be stored and processed in spiking neural networks; these things are so unbelievably tangled and alien to our structured, reductionist way of thinking that it's almost as if a different kind of science (no, not the Wolfram kind) is required to analyze them. Only once we understand the properties of these networks will we be able to build something out of them.

 
The neocortex and its connections form a massive 80% by volume of the human brain (Passingham, 1982).

Association cortex is a convenient description for the cortex whose function has yet to be discovered (SOB, p501)

 
Numbers of synapses

In the cat visual cortex, 1 cubic mm of gray matter contains approximately 50,000 neurons, each of which gives rise on average to some 6,000 synapses, making a total of 300 million synapses. Approximately 84% are excitatory, 16% inhibitory. (SOB, p7)

A typical CA3 (cat?) hippocampal pyramidal neuron has a total dendritic length of approximately 16mm and receives approximately 25,000 excitatory synapses and fewer inhibitory synapses (SOB, p488)

 
Presynaptic Inhibition

There may be a maintained depolarization of the presynaptic terminal, reducing the amplitude of an invading impulse and with it the amount of transmitter released from the terminal. The essential operating characteristic of this microcircuit is that the effect of an input A on a cell C may be reduced or abolished by B without there being any direct action of B on the cell C itself. Control of the input A to the dendrite or cell body can thus be much more specific. (SOB, p12)

Thursday, September 01, 2005

 
ROBOCORE 2

This guy has implemented a "brain builder" with a nice UI based on the Izhikevich neuron model (all in C#). He seems to be mostly concerned with the user interface, and I'm not sure what his results are, but then again, nobody has any results in this field.
