Friday, September 02, 2005
The truth is that no real progress has been made in any of the attempts to implement things like consciousness or general intelligence. This doesn't prevent any number of people from writing entire books about their analogy-laden pet theories of what general intelligence or consciousness are, which they then defend against competing theories.
Technologically, assembling enough hardware to compete with one entire adult brain is achievable. On the down side, to simulate 100 billion neurons and 100 trillion synapses, you will need dedicated hardware -- a cluster of even 10,000 Linux boxes won't do, because one node can (optimistically) handle at most a million neurons in real time. So custom hardware is required. On the upside, no tight synchronization is required, so you can assemble neuron-simulating FPGAs on motherboards by the dozen. Inter-board bandwidth, again, shouldn't be hard to achieve, because you just need to physically queue tons of spikes, and existing memory bus architectures are great for that.
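The cluster-sizing claim is easy to check on the back of an envelope. All figures below are the rough estimates from the text, not measurements:

```python
# Back-of-envelope check of why a 10,000-node cluster falls short.
NEURONS = 100_000_000_000        # ~100 billion neurons in an adult brain
SYNAPSES = 100_000_000_000_000   # ~100 trillion synapses
NEURONS_PER_NODE = 1_000_000     # optimistic real-time capacity of one node

nodes_needed = NEURONS // NEURONS_PER_NODE
print(nodes_needed)                      # 100000 -- ten times a 10,000-box cluster
print(SYNAPSES // NEURONS)               # 1000 -- average synapses per neuron
```

At roughly a thousand synapses per neuron, the communication load per node stays modest, which is what makes the "just queue the spikes" argument plausible.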
So, "all" that remains is the secret sauce. That's why everybody is lusting to discover it, like the alchemists' philosopher's stone. But do we know enough about the way the brain works to make the right simplifying assumptions? Of course not.
So there are two kinds of people trying to crack the problem: the top-downers and the bottom-uppers. The top-downers will do silly things like assemble a bunch of objects in the same order as the components of the human brain (as we know it), then admire the structure from the outside without even a sliver of hope that it will do the right thing; the bottom-uppers will assemble neurons into networks, run them ad infinitum, and hope that something "emerges" (word of the day) out of the assembly.
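The bottom-up style can be shown in miniature with a toy leaky integrate-and-fire network. Every constant and the connectivity scheme here are illustrative assumptions, not anything the brain actually does:

```python
import random

# A toy leaky integrate-and-fire network: the "bottom-up" approach in
# miniature. All parameters below are arbitrary illustrative choices.
N = 50                 # number of neurons
THRESHOLD = 1.0        # membrane potential that triggers a spike
LEAK = 0.9             # per-step decay of the membrane potential
WEIGHT = 0.15          # synaptic weight on every connection

random.seed(0)
# random sparse connectivity: each neuron listens to 5 random others
inputs = [random.sample(range(N), 5) for _ in range(N)]
potential = [random.random() for _ in range(N)]

total_spikes = 0
for step in range(100):
    spiked = {i for i in range(N) if potential[i] >= THRESHOLD}
    total_spikes += len(spiked)
    for i in spiked:
        potential[i] = 0.0                   # reset after spiking
    for i in range(N):
        potential[i] *= LEAK                 # leak toward rest
        potential[i] += WEIGHT * sum(1 for j in inputs[i] if j in spiked)
        potential[i] += 0.05 * random.random()   # noisy external drive

print(total_spikes)    # some activity "emerges" from the random wiring
```

The point of the sketch is exactly the critique in the text: the network runs, spikes happen, and nothing about watching it tells you how to make it compute anything in particular.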
The big question is whether you believe we already have all the necessary tools at hand, and that all that's required is a clever implementation. If that's the case, the race is on. But I think it's too early. What's needed is a yet-to-be-made discovery about how information can be stored and processed in spiking neural networks; these things are so unbelievably tangled and alien to our structured, reductionist way of thinking that it's almost as if a different kind of science (no, not the Wolfram kind) is required to analyze them. Only once we understand the properties of these networks will we be able to build something out of them.