[Comp-neuro] From Socrates to Ptolemy

Brad Wyble bwyble at gmail.com
Thu Aug 7 09:39:32 CEST 2008


On Fri, Aug 1, 2008 at 9:12 AM, jim bower <bower at uthscsa.edu> wrote:
> One more point which just came up in a student presentation at LASCON just now. This student, an engineer who had previously worked on abstract neural network models, has spent the last 3 weeks working with the realistic turtle visual cortex model built by Phil Ulinsky over many years.
>
> As a neural network modeler would, he gave different types of input to the model to see if it generated different types of output in response. It did, and in his conclusions he said he was surprised to see this behavior, given that there was no "learning rule" in the model.
>
> I asked, and will now ask this larger group: what is the evidence that cerebral cortex "learns" in the neural network sense at all? Brain structures represent learning over evolutionary time. Photoreceptors don't have to "learn" to detect photons, and the retina doesn't have to "learn" its fundamental structure. Why do we think that cortex has to "learn" most (maybe the vast majority) of what it does? We now suspect that the olfactory system already "knows" about the metabolic structure of the natural world.
>


That's a very interesting point to ponder.

I would suggest that the transmission of DNA from parent to child
is an important bottleneck in evolution's ability to pass
information along.  The human genome holds on the order of a
gigabyte.  So the argument would seem to hinge on how much
organizational information you think one could cram into the
fraction of that gigabyte that specifies the brain.
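
To make that bottleneck concrete, here is a rough back-of-envelope
sketch in Python.  The figures are my own ballpark numbers (roughly
3.2 billion base pairs at 2 bits each, and roughly 10^14 synapses),
not anything taken from the turtle model:

    # Rough information-capacity comparison (all figures are ballpark).
    base_pairs = 3.2e9                  # approximate length of the human genome
    genome_bytes = base_pairs * 2 / 8   # 4 possible bases -> 2 bits per base pair
    print("genome: ~%.1f GB" % (genome_bytes / 1e9))        # ~0.8 GB

    synapses = 1e14                     # commonly cited order of magnitude
    brain_bytes = synapses * 1 / 8      # a conservative 1 bit per synapse
    print("connectivity: ~%.1f TB" % (brain_bytes / 1e12))  # ~12.5 TB

    # Even at 1 bit per synapse, the wiring diagram is roughly 10,000x
    # larger than the entire genome, so the genome cannot spell out
    # every connection; it has to encode rules or learning mechanisms.

And of course only a fraction of the genome specifies the brain at
all, which only sharpens the gap.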

I would guess that at some level of complexity it's cheaper (in
terms of base pairs) to pass along the mechanism to learn than to
pass along the learning itself.

-Brad Wyble

