Tuesday, May 13, 2008
"The average numbers of neocortical neurons were 19 billion in female brains and 23 billion in male brains, a 16% difference. In our study, which covered the age range from 20 years to 90 years, approximately 10% of all neocortical neurons are lost over the life span in both sexes."
http://www.ncbi.nlm.nih.gov/pubmed/9215725
Sunday, April 20, 2008
There is a theory that the mind thinks in metaphors. Metaphors are templates of thinking, and basically all new knowledge is understood "in terms of" something else. In "Metaphors We Live By" (Lakoff & Johnson, 1980), some of the metaphors listed were these (a severely incomplete list):
Happy is up; sad is down
(feeling up, boost the spirit, spirit rose, high spirit, lift the spirit, feeling down, depressed, low, falling into depression, spirit sinking)
Conscious is up; unconscious is down
(get up, wake up, i'm up, he rises early, fell asleep, dropped off to sleep, under hypnosis, sank into a coma)
Health and life are up; sickness and death are down
(peak of health, rose from the dead, top shape, fell ill, sinking fast, came down with the flu, health declining, dropped dead)
Having control or force is up; Being subject to control or force is down
(control over something, on top of the situation, in a superior position, height of one's power, high command, upper echelon, rise to power, ranking above, under control, falling from power, power on the decline, social inferior, low on the totem pole)
More is up; Less is down
(number of X keeps going up, the numbers are high, income rose, output has gone down, number of errors is low, income fell, he is underage, turn the heat down)
Foreseeable future events are up (and ahead)
High status is up; Low status is down
(Lofty position; rise to the top; at the peak of his career; climbing the ladder; upward mobility; bottom of the social hierarchy; fell in status, etc)
Good is up; Bad is down
Virtue is up; Depravity is down
Rational is up; Emotional is down
Theories (and arguments) are buildings
Ideas are food
Ideas are people
Ideas are plants
Ideas are products
Ideas are commodities
Ideas are resources
Ideas are money
Ideas are fashions
(the idea went out of style years ago, XYZ is in these days, ABC is fashionable, old hat, outdated, new trends, old-fashioned, up-to-date, avant-garde thought, quite chic, no longer in vogue, XYZ craze)
Understanding is seeing; Ideas are light-sources; Discourse is a light-medium
Love is a physical force (electromagnetic, gravitational, etc)
Love is a patient
(sick relationship, strong, healthy marriage, marriage is dead, it can't be revived, marriage is on the mend, getting back on our feet, relationship is in good shape, listless relationship, relationship on its last legs, tired affair)
Love is madness
(i'm crazy about her, she drives me out of my mind, he raves about her, he's gone mad over her, i'm wild about X, i'm insane about her)
Love is magic
(she cast her spell on him, the magic is gone, i was spellbound, she had me hypnotized, he has me in a trance, i was entranced, i'm charmed, she is bewitching)
Love is war
(many conquests, fought for him, fled from her advances, pursued him relentlessly, gaining ground with her, won her hand, overpowered her, besieged by suitors, fending off advances, made an ally of her mother, misalliance)
Wealth is a hidden object
Significant is big
Seeing is touching; Eyes are limbs
The eyes are containers for the emotions
Emotional effect is physical contact
Physical and emotional states are entities within a person
Vitality is a substance
Life is a container
Life is a gambling game
Time is a moving object
An argument is a journey
A journey defines a path
An argument defines a path
The path of a journey is a surface
An argument is a container
An argument is a building
An instrument is a companion
(Example: I went to the movies with Sally, I sliced the salami with a knife; Accompaniment is instrumentality in nearly all languages of the world)
(you should get the whole book)
Tuesday, April 24, 2007
"Early on it became evident that automatic behavior and awareness are often opposed -- the more efficient the performance, the less aware we become. Between reflex (automatic) action and mind there seems to be actual opposition. Reflex action and mind seem almost mutually exclusive (Sherrington 1911/1947)...
"Evidence (Pribram, 1971) indicates that automatic processing is programmed by neural circuitry mediated by nerve impulses, whereas awareness, which provides an opportunity for conscious learning, is due to delay in processing occuring in the brain's connective web. The longer the delay between the initiation in the dendritic network of postsynaptic arrival patterns and the ultimate production of axonic departure patterns, the longer the duration of awareness and the opportunity for distributed storage."
Karl Pribram
"Evidence (Pribram, 1971) indicates that automatic processing is programmed by neural circuitry mediated by nerve impulses, whereas awareness, which provides an opportunity for conscious learning, is due to delay in processing occuring in the brain's connective web. The longer the delay between the initiation in the dendritic network of postsynaptic arrival patterns and the ultimate production of axonic departure patterns, the longer the duration of awareness and the opportunity for distributed storage."
Karl Pribram
Friday, March 02, 2007
An overview of the Blue Brain project
http://www.hss.caltech.edu/~steve/markham.pdf (Feb 2006)
Blue Brain is a project to build a simulation of the entire brain by first modeling the cortical column, then using that column module to assemble the whole brain.
Each column takes about 10,000 neurons, and they estimate that an accurate simulation of a single column will be achieved within a year.
Currently, Blue Brain can simulate about 100,000,000 simple neurons. But how many synapses does that imply? The number of synapses adds several orders of magnitude of complexity to any simulation.
Dominique Martinez
from the Cortex group at Lorrain labs (http://cortex.loria.fr/)
Simulated integrate-and-fire neurons on Linux (1.86 GHz; the exact CPU isn't stated). Achieved 1,000,000 firings/sec, or 20 simulated minutes per second of real time on an N=100 network. This is pretty good, but only possible for analytically integrable neuron models. The paper is here:
http://www.loria.fr/~dmartine/papers/ArnaudHanaDom.pdf
I wonder if using a 2D lookup table for Izhikevich neurons would allow looking several steps ahead.
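Here is a minimal sketch of the idea in Python, to make it concrete (the model parameters, the fixed input current, and the bin counts are all my own illustrative assumptions; in a real network the input current changes every step, which is the main obstacle to this trick):

    import numpy as np

    # Sketch: precompute a 2D table that maps a quantized Izhikevich state
    # (v, u) to the state K steps ahead, for a FIXED input current I.
    # One table lookup then replaces K update steps.
    A, B, C, D = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
    I_FIXED = 5.0                        # assumed-constant input current
    DT = 1.0                             # 1 ms Euler step

    def step(v, u, i):
        """One Euler step of the Izhikevich model."""
        v2 = v + DT * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
        u2 = u + DT * (A * (B * v - u))
        if v2 >= 30.0:                   # spike: reset
            v2, u2 = C, u2 + D
        return v2, u2

    def build_table(k=5, v_bins=512, u_bins=512):
        vs = np.linspace(-80.0, 30.0, v_bins)
        us = np.linspace(-20.0, 10.0, u_bins)
        table = np.empty((v_bins, u_bins, 2))
        for i, v0 in enumerate(vs):
            for j, u0 in enumerate(us):
                v, u = v0, u0
                for _ in range(k):
                    v, u = step(v, u, I_FIXED)
                table[i, j] = (v, u)
        return vs, us, table

    # Usage: quantize the current state, then jump K steps in one lookup.
    vs, us, table = build_table()
    iv = min(np.searchsorted(vs, -70.0), len(vs) - 1)
    iu = min(np.searchsorted(us, -14.0), len(us) - 1)
    v_next, u_next = table[iv, iu]

Quantization error accumulates with every lookup, so whether this actually beats doing the cheap update k times is an open question.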
Saturday, January 07, 2006
http://www.novamente.net/
A company created by Dr. Goertzel whose main goal is "strong AI".
The basic idea is to dig for patterns using a Bayesian optimization algorithm (a heuristically improved genetic algorithm). Patterns are cloned, mutated, and tested. Good patterns are kept and associated with weights. These weights control how much "attention" each pattern later receives.
Later, patterns are connected into a hypergraph: a graph whose special edges may point to more than two vertices and may be of various types. One edge type indicates that vertex A is a special case of vertex B; another says that several vertices are similar.
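To make the structure concrete, here is a toy sketch of such a typed hypergraph in Python; the vertex names and edge types are my own illustration, not Novamente's actual design:

    from dataclasses import dataclass, field

    # A hyperedge carries a type and may connect any number of vertices,
    # not just two; vertex weights stand in for the "attention" values.
    @dataclass
    class Vertex:
        name: str
        weight: float = 1.0

    @dataclass
    class HyperEdge:
        kind: str                     # e.g. "special_case_of", "similar"
        vertices: list = field(default_factory=list)

    class HyperGraph:
        def __init__(self):
            self.vertices = {}
            self.edges = []

        def add_vertex(self, name, weight=1.0):
            self.vertices[name] = Vertex(name, weight)

        def connect(self, kind, *names):
            self.edges.append(
                HyperEdge(kind, [self.vertices[n] for n in names]))

    g = HyperGraph()
    for n in ("dog", "animal", "cat", "wolf"):
        g.add_vertex(n)
    g.connect("special_case_of", "dog", "animal")  # ordinary binary edge
    g.connect("similar", "dog", "cat", "wolf")     # one edge, 3 vertices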
The actual system is not implemented yet (the articles on the web site claim that the system is somewhere between 20% and 60% complete).
In my opinion, another case of a highly oversold system whose actual capabilities are not even clear to its designer, given that it's not implemented yet.
Dr. Goertzel has a web site, goertzel.org, and has written a variety of essays. One essay has the following title: "Encouraging a Positive Transcension: AI Buddha versus AI Big Brother, Voluntary Joyous Growth, the Global Brain Singularity Steward Mindplex, and Other Issues of Transhumanist Ethical Philosophy". It actually makes sense once you start reading it.
Dr. Goertzel also co-founded a company called Intelligenesis, later renamed Webmind, Inc.
Sunday, November 20, 2005
"Understanding luminance is important because our perception of depth, three-dimensionality, movement (or lack of it), and spatial organization are all carried by a part of our visual system that responds only to luminance and is insensitive to color."
"Vision and Art: The biology of seeing" by Margaret Livingstone
"Vision and Art: The biology of seeing" by Margaret Livingstone
Wednesday, September 28, 2005
Schrödinger, "Mind and Matter" (1958)
"Any succession of events in which we take part with sensations, perceptions and possibly with actions gradually drops out of the domain of consciousness when the same string of events repeats itself in the same way very often. But it is immediately shot up into the conscious region, if at such a repetition either the occasion or the environmental conditions met with on its pursuit differ from what they were on all the previous incidences. Even so, at first anyhow, only those modifications or 'differentials' intrude into the conscious sphere that distinguish the new incidence from previous ones and thereby usually call for 'new considerations'...
I would summarize my general hypothesis thus: consciousness is associated with the learning of the living substance; its knowing how is unconscious."
Sunday, September 18, 2005
Constant parameters in the anatomy and wiring of the mammalian brain
There are a number of useful observations here; but the one most often cited is the network diameter of the cortex -- the average number of hops from any neuron to any other neuron.
According to the paper, the average distance from any neuron in your cortex to any other neuron is 2.6 steps.
But that's a bullshit measure, in my opinion. Those short-cut connections that serve to make the network diameter so small can only be useful for synchronization, I argue, not for computation. The reason is simple -- they have so little bandwidth relative to the rest of the network that they can't carry enough useful information across.
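To see how cheap a small diameter is, here is a toy experiment in Python: a ring of locally connected nodes plus a handful of random long-range shortcuts. This is a stand-in model of my own, not the paper's data, but it shows how a few shortcuts collapse the average hop count without adding any meaningful bandwidth:

    import random
    from collections import deque

    def average_path_length(adj, samples=100):
        """Estimate mean BFS distance between node pairs."""
        nodes = list(adj)
        total, count = 0, 0
        for _ in range(samples):
            src = random.choice(nodes)
            dist = {src: 0}
            q = deque([src])
            while q:
                x = q.popleft()
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        q.append(y)
            total += sum(dist.values())
            count += len(dist) - 1
        return total / count

    n, k = 2000, 10                     # ring of n nodes, degree ~k
    adj = {i: set() for i in range(n)}
    for i in range(n):                  # dense local connectivity
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    print(average_path_length(adj))     # ~n/(2k) = 100 hops

    for _ in range(n // 10):            # sprinkle 200 long-range shortcuts
        a, b = random.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    print(average_path_length(adj))     # drops by an order of magnitude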
Friday, September 09, 2005
Worm Atlas
These guys have mapped out the entire neural connectivity graph of the worm Caenorhabditis elegans (C. elegans). Quite an accomplishment, considering the worm has 393 neurons and 7833 neuron-to-neuron connections, which had to be enumerated by hand (each connection is one or more synapses, as indicated by a number at the end of each line). Here is the graph.
Thursday, September 08, 2005
Long-term consolidation
How do long-term memories get formed?
"Karni and Sagi (1993) found that after learning a visual skill, performance improved substantially following a delay of eight hours or more."
Wednesday, September 07, 2005
"A line needs to span less than 0.2 mm on the cortical surface in order to be recognised as oriented." (link)
Also, general brain facts: http://faculty.washington.edu/chudler/facts.html
A rat's cortex has about 60,000,000 neurons (6 cm^2, assuming 100,000 neurons per mm^2).
A human cortex has about 20,000,000,000 neurons (2,500 cm^2).
We should be able to build a rat.
Artificial Development - News
"Artificial Development is building the largest neural network to date to model a state-of-the-art simulation of the human brain. With 20 billion neurons and 20 trillion connections"
Why bother to sell something you clearly don't have? This company joins Numenta in the list of fake software companies trying to simulate human intelligence before it's clear to anyone how to do even something as simple as associating "red" and "apple" to produce "red apple" in a neural network, spiking or not. This is known as the combinatorial problem. Nobody knows how to teach spiking networks anything except simple auto-association (which is why everyone is excited about them in the first place). Also, nobody knows the algorithm that wires the human brain during growth. Finally, simulating 40 million neurons on one computer (something the company claims to be doing) is simply out of reach even on dual-CPU, dual-core Opterons (unless we're talking about 1-bit neurons), so we can safely say this press release is full of shit.
How can I judge? Experience. I've been able to run ~15,000 neurons x 100 synapses on one CPU in real time (that's 1 ms per step); I'm using non-linear neurons, which of course slows me down a bit. But in reality, the number of synapses and the delay of each synapse, not the number of neurons, is the killer -- the maximum number of spikes traveling through the system at any moment is Sum(synapse[i].delay), and unless you're careful this number will creep into the billions. The maximum I can comfortably run in under 1 GB of RAM on one node is 200,000 neurons with 100 synapses each. That's 20,000,000 synapses of delay up to 10; the maximal spike capacity of this network is 200,000,000 spikes. When the number of spikes is small (2 million spikes/ms), this beast runs at 5 fps. To get real time, you need 1,000 fps, assuming your time resolution is 1 ms.
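The back-of-the-envelope version of that argument, with the figures from the paragraph above:

    # Spike capacity of the 200,000-neuron configuration described above.
    neurons = 200_000
    synapses_per_neuron = 100
    max_delay = 10                    # delays are "up to length 10"

    synapses = neurons * synapses_per_neuron        # 20,000,000
    # A synapse of delay d can hold up to d spikes in flight, so the
    # worst case is Sum(synapse[i].delay) <= synapses * max_delay.
    max_spikes_in_flight = synapses * max_delay     # 200,000,000

    # Real time at 1 ms resolution means 1,000 steps per second.
    fps_needed = 1_000
    fps_observed = 5                  # at ~2 million spikes/ms
    print(fps_needed / fps_observed)  # 200x short of real time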
"Artificial Development is building the largest neural network to date to model a state-of-the-art simulation of the human brain. With 20 billion neurons and 20 trillion connections"
Why bother to sell something you clearly don't have? This company joins Numenta in the list of fake software companies to try to simulate human intelligence before it's clear to anyone how to do even something so simple as associate "red" and "apple" to produce "red apple" in a neural network, spiking or not. This is known as the combinatorial problem. Nobody knows how to teach the spiking networks anything except simple auto-association (which is why everyone is excited about them in the first place). Also, nobody knows the algorithm that wires the human brain during growth. Finally, simulating 40 million neurons on one computer (something the company is claiming to be doing) is simply out of reach even on dual-CPU dual-core opterons (unless we're talking about 1-bit neurons) so we can safely say this press release is full of shit.
How can I judge? Experience. I've been able to run ~15,000 x 100 neurons on one CPU in real-time (that's 1ms per step); I'm using non-linear neurons, so of course this slows me down a bit; But in reality, the number of synapses and the delay of each synapse, not the number of neurons, is the killer -- the total number of spikes traveling in a system is Sum(synapse[i].delay), and unless you're careful this number will creep into billions. The maximum I can comfortably run in under 1 gig of ram on one node is 200,000 neurons with 100 synapses each. That's 20,000,000 synapses up to length 10; the maximal spike capacity of this network is 200,000,000 spikes. When the number of spikes is small (2 million spikes/ms), this beast runs at 5 fps. To get real-time, you need 1,000 fps assuming your time resolution is 1ms.
The cellular automata (CA) people are particularly pathetic and impractical when it comes to producing something useful. They tend to just sit there, looking at pretty pictures, then philosophize about the globality and profundity of what CA could do -- "it's alive!". The most practical applications of CAs have been snowflakes and fire for computer graphics. That's about it.
But the sad truth is that no one is even trying to compute something using CA. By compute I don't mean building a Turing machine out of a CA -- the Game of Life is universal, and so is Rule 110. I really mean using CAs to solve problems that cannot be solved otherwise.
The neural network guys are far ahead of the CA guys in this respect; at least from the very beginning, they were trying to do something practical. As early as 1959, buckets of cold water were poured over perceptrons; this led to progress. But CA borrows from the mystical reputation of fractals to avoid any real criticism. For one, the problem with CAs is that one cell constantly polls its surroundings in order to change its state; neural networks use more precise node-to-node communication, where every weight can be reduced to zero if necessary. No such flexibility with CAs. Next, the most pathetic of CAs are one-dimensional; we won't even consider those. 2D CAs generally use the nearest 8 neighbors to update a cell's state -- not a lot. Finally, each node in a CA generally limits itself to 2 or 3 states -- not a lot, considering that a good neuron has at least 4 floats.
Spiking neural networks with non-linear nodes are really a generalization of CAs. By choosing an appropriate topology and edge delays (e.g. a 2D grid with all edge delays = 1), you can reproduce any of the existing CA patterns. You also get tons of degrees of freedom that you don't get with CAs.
Tuesday, September 06, 2005
The fact that the human cortex is exactly 6 layers deep, but in all other respects two-dimensional, is encouraging. It means that using 2D cellular automata (with synaptic connections instead of neighborhood rules) is OK. The only adjustments would be a non-linear distance metric (to account for the folds) and a bunch of off-the-plane connections from a "thalamus" or another input device.
Interneuron density series: circuit complexity and axon wiring economy of cortical interneurons
"Brain systems with ‘simple’ computational demands evolved only a few neuron types. The basal ganglia, thalamus and the cerebellum possess a low degree of variability in their neuron types. By contrast, cortical structures have evolved in a manner that most closely resembles a relatively sparsely connected network of few principal cell types and many classes of GABAergic interneurons."
"Three major groups of cortical interneurons are recognized: (i) interneurons controlling principal cell output (by perisomatic inhibition), (ii) interneurons controlling the principal cell input (by dendritic inhibition) and (iii) longrange interneurons coordinating interneuron assemblies."
The interesting implication here is that the computation may be shaped and controlled primarily by the interneurons (inhibitory neurons).
The idea of separately controlling a cell's input and output with additional neurons inspired LSTM (long short-term memory) networks. However, LSTM is a classical "average firing rate" network, not a spiking one.
Another observation is that a small number of long-range connections is required to achieve global synchronization by reducing the network diameter.
"Brain systems with ‘simple’ computational demands evolved only a few neuron types. The basal ganglia, thalamus and the cerebellum possess a low degree of variability in their neuron types. By contrast, cortical structures have evolved in a manner that most closely resembles a relatively sparsely connected network of few principal cell types and many classes of GABAergic interneurons."
"Three major groups of cortical interneurons are recognized: (i) interneurons controlling principal cell output (by perisomatic inhibition), (ii) interneurons controlling the principal cell input (by dendritic inhibition) and (iii) longrange interneurons coordinating interneuron assemblies."
The interesting implication here is that the computation may be shaped and controlled primarily by the interneurons (inhibitory neurons).
The idea of separately controlling cell input and output with additional neurons inspired LSTM (long short-term memory networks). However, LTSM is a classical "average firing rate" network, not a spiking one.
Another observation is that a small number of long-range connections is required so achieve global synchronization by reducing the network diameter
Computing with Spikes
After assembling 100,000 Izhikevich-type neurons into a loosely connected network with 10,000,000 synapses, and brewing the resulting soup for a day, I failed to see anything interesting emerge. I did, however, see some really cool patterns; depending on the parameters, I could make them vary from noise to fire-like protuberances to herds of smoke rings to epileptic seizures.
In the limit as the number of neurons becomes large, their relative positioning begins to matter less and less, until they're more like pixels on a grid. The resulting mesh is essentially a 2D cellular automaton.
run 1. Each of 15,000 neurons has a 2D coordinate and forms up to 100 synapses with 50% of the neurons within a radius of 0.05. The field is a 1x1 square. 20% of all neurons are inhibitory. All the other parameters are visible.
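A sketch of that wiring rule in Python (parameter names are mine; the naive O(N^2) neighbor scan would be replaced by a spatial grid in any real implementation):

    import math
    import random

    N = 15_000          # neurons with random 2D coordinates in a 1x1 field
    RADIUS = 0.05       # connection radius
    P_CONNECT = 0.5     # connect to 50% of the neighbors in range
    MAX_SYN = 100       # at most 100 synapses per neuron
    P_INHIB = 0.2       # 20% of neurons are inhibitory

    pos = [(random.random(), random.random()) for _ in range(N)]
    inhibitory = [random.random() < P_INHIB for _ in range(N)]

    synapses = []       # (pre, post) pairs
    for i, (xi, yi) in enumerate(pos):
        targets = [j for j, (xj, yj) in enumerate(pos)
                   if j != i
                   and math.hypot(xj - xi, yj - yi) < RADIUS
                   and random.random() < P_CONNECT]
        random.shuffle(targets)          # don't bias toward low indices
        for j in targets[:MAX_SYN]:
            synapses.append((i, j))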
run 2. This run shows the spike window, in which the X coordinate is time, the Y coordinate is the neuron number, and a dot is drawn if that neuron fired at that time. Notice the throttles on the left.
Here, the processing surface was created according to the following template, which is a .bmp file:
Green means "input neurons", blue is "output", red is "all synapses must lead to the right", and white is the default. The idea is that different geometries can be constructed by using a bitmap to influence what kinds of neurons can grow where. Anyway, in this simulation a text file served as stimulus. Each input neuron picked a letter according to its Y coordinate (with a to z running top-to-bottom). This letter then activated the entire group of input neurons associated with it. These neurons then emitted the information, which was inexorably forced right by the red neurons, but also chewed on by the white recombination neurons. When the blue neurons finally got activated, their Y coordinates again being interpreted as letters of the English alphabet, the output was printed. Each output neuron also had a synapse to the corresponding input neuron; without this rule, zero coherence would be expected, as there would be no "Hebbian backpropagation" of strong synapses back to the input. Each time an output neuron fired, it would print a letter. Here is some typical output:
afyglhlsokmljyvuvurazcixgkwdftavhcfxcrzghktmiupwhnrzoglexskgfxyvbvqgznsdlhlzwrayj
zurdbdjtngvnjthefikopewtucbcpilhpdfyrhpvdwzobclunodikycrjkzuxeospdqwrlmecgycjinlt
Obviously, this is garbage. Forget the fact that you can occasionally spot a semi-random word or two. So, why report this non-result at all? Because it makes the following needs clear:
1. Hebbian learning (STDP) is not enough. It causes "autistic" behavior, as inputs self-organize into closed loops and happily travel along those loops. (A minimal pair-based STDP rule is sketched after this list.)
2. Any output mechanism requires modulation. None of the neural networks I've seen have modulatory control (except inhibitory neurons). Think control theory. Whether expressed indirectly in the ratio of inhibitory to excitatory neurons, or more explicitly by the connectivity of the network, the network should have a feedback mechanism that ensures it converges to a certain type of computation.
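For reference, here is the textbook pair-based STDP rule in Python; this is a generic sketch of point 1 above, not the exact rule used in these runs:

    import math

    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants, ms
    W_MAX = 1.0

    def stdp(weight, t_pre, t_post):
        """Update one synapse given the latest pre/post spike times (ms)."""
        dt = t_post - t_pre
        if dt > 0:      # pre before post: causal, potentiate (LTP)
            weight += A_PLUS * math.exp(-dt / TAU_PLUS)
        elif dt < 0:    # pre after post: acausal, depress (LTD)
            weight -= A_MINUS * math.exp(dt / TAU_MINUS)
        return min(max(weight, 0.0), W_MAX)

    w = 0.5
    w = stdp(w, t_pre=10.0, t_post=15.0)   # pre leads by 5 ms -> LTP
    w = stdp(w, t_pre=22.0, t_post=15.0)   # pre lags by 7 ms  -> LTD

Left to itself, this rule only reinforces whatever fired together recently, which is exactly how the closed "autistic" loops above get frozen in.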
Friday, September 02, 2005
The truth is that no real progress has been made in any of the attempts to implement things like consciousness or general intelligence. This doesn't prevent any number of people from writing entire books about their analogy-laden pet theories, which they will defend against other competing theories, of what general intelligence or consciousness are.
Technologically, assembling enough hardware to compete with one entire adult brain is achievable. On the down side, to simulate 100 billion neurons and 100 trillion synapses you will need dedicated hardware -- a cluster of even 10,000 Linux boxes won't do, because one node can (optimistically) handle at most a million neurons in real time. So custom hardware is required. On the upside, no synchronization is required, so you can assemble neuron-simulating FPGAs on motherboards by the dozen. Inter-board bandwidth, again, cannot be that hard to achieve, because you just need to physically queue tons of spikes, and existing memory bus architectures are great for that.
So, "all" that remains is the secret sauce. That's why everybody is lusting to discover it, like the alchemists' philosopher's stone. But do we know enough about the way the brain works to make the right simplifying assumptions? Of course not.
So there are two kinds of people trying to crack the problem: the top-downers and the bottom-uppers. The top-downers will do silly things like assemble a bunch of objects in the same order as the components of the human brain (as we know it), then admire the structure from the outside without even a sliver of hope that it will do the right thing; the bottom-uppers will assemble neurons into networks, run them ad infinitum, and hope that something "emerges" (word of the day) out of the assembly.
The big question is whether you believe that we have at hand all the tools necessary, that all that's required is a clever implementation. If that's the case, the race is on. But I think it's too early. What's needed is a yet unmade discovery about how information can be stored and processed in spiking neural networks; these things are so unbelievably tangled and alien to our structured, reductionist way of thinking that it's almost as if a different kind of science (no, not the Wolfram kind) is required to analyze them. Only once we understand the properties of these networks will we be able to build something out of them.
The neocortex and its connections form a massive 80% by volume of the human brain (Passingham, 1982)
Association cortex is a convenient description for the cortex whose function has yet to be discovered (SOB, p501)
Numbers of synapses
In the cat visual cortex, 1 cubic mm of gray matter contains approximately 50,000 neurons, each of which gives rise on average to some 6,000 synapses, making a total of 300 million synapses. Approximately 84% are excitatory, 16% inhibitory. (SOB, p7)
A typical CA3 (cat?) hippocampal pyramidal neuron has a total dendritic length of approximately 16mm and receives approximately 25,000 excitatory synapses and fewer inhibitory synapses (SOB, p488)
Presynaptic Inhibition
There may be a maintained depolarization of the presynaptic terminal, reducing the amplitude of an invading impulse and with it the amount of transmitter released from the terminal. The essential operating characteristic of this microcircuit is that the effect of an input A on a cell C may be reduced or abolished by B without there being any direct action of B on the cell C itself. Control of the input A to the dendrite or cell body can thus be much more specific. (SOB, p12)
Thursday, September 01, 2005
ROBOCORE 2
This guy has implemented a "brain builder" with a nice UI based on the Izhikevich neuron model (all in C#). He seems to be mostly concerned with the user interface, and I'm not sure what his results are, but then again, nobody has any results in this field.
Tuesday, August 30, 2005
Synfire chains and cortical songs -- the authors report long chains of neuron activations, repeatable with millisecond accuracy, which they call "cortical songs".
Monday, August 29, 2005
Modeling STDP
Spike timing dependent plasticity (STDP). These guys come up with a biophysical model for STDP and do some modeling in C++, but I didn't think the solution was very useful from a software standpoint.
Polychronization: computation with spikes
A very interesting read; not really about computing with spikes (yet), just running-it-and-seeing-what-happens. The main finding is that if edge (axon) delays are introduced, certain neurons start ringing in a loop, because edge weights sum up to a value that causes the loop to self-sustain. Another result is that the number of such groups can be much greater than even the number of synapses in the network.
"In our view, attention is not a command that comes from the "higher" or "execute" center and tells which input to attend to. Instead, we view attention as an emerging property of simultaneous and regenerative activation (via positive feedback) of a large subset of groups representing a stimulus, thereby impeding activation of other groups corresponding to other stimuli"
A scalable cortical simulation framework
These guys can run 35,000 neurons and 6.1 million synapses per node, and the number of nodes is scalable. Performance scales linearly with node count (assuming you don't have N^2 connectivity!)
Sunday, August 28, 2005
Which model to use for cortical spiking neurons?
A good overview of the various types of spiking and bursting neurons observed in nature.
The author, Izhikevich, also proposes a neuron model which takes only 13 flops to implement while having biologically realistic behavior. The guy's web page is here: http://www.izhikevich.com
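The model is compact enough to sketch from the paper in a few lines of Python; the parameter presets are the published ones, while the constant input current is my own arbitrary choice:

    # Izhikevich (2003): v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
    # with reset v <- c, u <- u + d whenever v reaches +30 mV.
    PRESETS = {
        "regular_spiking":        (0.02, 0.2, -65.0, 8.0),
        "intrinsically_bursting": (0.02, 0.2, -55.0, 4.0),
        "chattering":             (0.02, 0.2, -50.0, 2.0),
        "fast_spiking":           (0.10, 0.2, -65.0, 2.0),
    }

    def simulate(kind, current=10.0, steps=1000):
        a, b, c, d = PRESETS[kind]
        v, u = -65.0, b * -65.0
        spikes = []
        for t in range(steps):       # dt = 1 ms
            for _ in range(2):       # two 0.5 ms half-steps on v for
                v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
            u += a * (b * v - u)     # stability, as in the paper's code
            if v >= 30.0:            # spike: reset
                spikes.append(t)
                v, u = c, u + d
        return spikes

    for kind in PRESETS:
        print(kind, len(simulate(kind)), "spikes in 1 s")

Same equations, different (a, b, c, d): that one knob set produces the whole zoo of spiking and bursting behaviors the article catalogs.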
Here is another article, analyzing the various possible spiking/bursting behaviors with really nice and intuitive 2D illustrations of their behavior:
Saturday, August 27, 2005
"When the presynaptic input arrives to assist the postsynaptic neuron to discharge action potentials, LTP (long-term potentiation) is observed, and when the presynaptic input arrives after the postsynaptic neuron discharges, LTD (long-term depression) is induced, revealing a causal-reward/acausal-punishment. This form of plasticity is referred to as spike-time-dependent plasticity (STDP)"
(SOB, p533)
"One theory holds that during theta (exploratory) activity (5-10hz), the hippocampus is acquiring a new representation of its environment, whereas during sharp-wave (quiet) activity (and also during slow-wave sleep), the hippocampus is facilitating the consolidation of this information in the form of long-term memories elsewhere in the cortex"
(SOB, p497)
Friday, August 26, 2005
"An analysis of the circuits of other cortical structures such as the olfactory cortex and hippocampus reveals that they, too, bear many resemblances to the circuits of the neocortex. Thus, it is tempting to suppose that there may be some common basic principles that underly the organization and operation of all cortical circuits..."
(The synaptic organization of the brain, p556)
"Presently the most intensely studied hypothesis is that this binding (between neurons) is accomplished by temporal correlation, a synchronization of the neural activity in even widely separated nodes that constitutde parts of the whole" [PNCC, p278]
"Every neuron in every neocortical microcircuit receives direct synaptic inputs from the cortically projecting modulatory systems of brainstem and basilar forebrain origin." [Mountcastle, Perceptual Neuroscience: The Cerebral Cortex (from now on PNCC)]
This means that there exists a control mechanism for exciting and inhibiting whole sections of the brain.
"Helmholtz contended that this seamless flow is accomplished by referring the central displays of sensory stimuli to "mental constructs" of the world and events within it. These constructs are thought to be generated by past experience and to be stored in and readily recalled from memory. Perceptions are then thought to be produced by the comparison of recalled and evoked neural images and perceptual identification inferred by the likeness between them, the simplest and most appropriate recalled construction winning the day. On this hypothesis, perhaps what we perceive are patterns of neural activity recalled from memory for the matching operation, rather than the activity evoked directly by sensory stimuli themselves."
(Mountcastle, Perceptual Neuroscience: The Cerebral Cortex)
Monday, August 22, 2005
"The main purpose of language is to provide information about who does what to whom"
But isn't that just a different way of saying that sentences are primarily of the form "subject verb object"? It's just one of the possible forms that a valid sentence can take. Consider the sentence "if you're OK with it, fuck off". What form does it have? It's a modal-imperative, but does it even make sense to categorize it? Its meaning is in the effect.
It seems to me that the reason NLP (natural language processing) is so difficult is that grown-up human language is really a scripting language -- the grammar itself is not that difficult (every major language on earth has had parsers written for it), but it shamelessly utilizes all the other faculties that are so easily available to a human -- sound, vision, general intelligence.
Try porting a Windows app to another platform. You'll find that it relies on a bunch of features that are uninteresting in themselves but difficult to reproduce. The primary purpose of language is, of course, communication, and the meaning of communication is always measured by the effect it has on the recipient. To produce the proper effect, an NLP program would need much more than a parser and a WordNet.
Wordnet
Amazon.com: Books: On Intelligence
The premise of this book is that the brain barely does any computing at all; it just looks things up in its huge memory. Oh yes, and it also predicts, but only one step ahead. What's meant by prediction is that the brain knows where it is inside of any number of sequences, which span all scales in distance, time, and logical hierarchy, and expects the next item in the sequence by preemptively firing a group of cells. Each group of cells thus logically represents a "name" for an item in the sequence. This behavior is assumed to be the same for vision, hearing, touch, proprioception, and language (yes, language).
But wait, more simplifications are on the way. The cortex has a uniform structure, so Hawkins conjectures that it starts as a blank slate, acquiring all of its features simply by analyzing sensory data. This is where Hawkins runs into trouble, in my opinion. He seems to be woefully ignorant of language. If the cortex started out blank and simply feature-detected its way to general intelligence, we would expect to find a more randomized distribution of the various cortical modules. In particular, there would be no reason to expect someone to be able to construct complicated, recursive sentences by the age of five; it would be a high school subject, like algebra and programming. We would also expect people to handle center-embedded sentences with the same ease as right-branched sentences.
The onintelligence.org forums attract crackpots, but what did Hawkins expect, preaching revolutionary ideas about general intelligence to the uneducated public?
"When you see, feel, or hear something, the cortex takes the detailed, highly specific input and converts it to an invariant form. It is the invariant form that is stored in memory." Great, forming invariant representations is only the biggest question in vision.
"The next higher region recognizes sequences of phonemes to create words. The next higher region recognizes sequences of words to create phrases, and so on." -- You really have to demonstrate a mechanism for generating phrases and reconcile it with existing knowledge about language on planet Earth. Dismissing all of linguistics with "create phrases, and so on" exposes a profound ignorance of what language really is.
"Consciousness is what it feels like to have a cortex" -- How blase
Amit, D.J. (1994). The Hebbian paradigm reintegrated: Local reverberations as internal representations.
quotes:
...Hebb's paradigm ... can be summarized as a process generating the feed-back connectivity required for maintaining reverberations (persistent spike distributions) in a local network by the activity in the same network.
...It is known anatomically, physiologically and neurologically that as one proceeds along the elaboration path in the cortex, one always finds back projections, as far back as into the primary sensory areas. On the other hand, it is a very familiar experience to have a given sensory power notably improved when the content of the observed stimulus is known. For example, when vision is impeded by distance or haze so that a given object cannot be discerned (or read), receiving a cue as to the nature of the object (or the written text) often produces a clear perception of the target.