[Comp-neuro] On learning mechanisms of the brain

Asim Roy ASIM.ROY at asu.edu
Thu Aug 14 10:55:31 CEST 2008

Hi All,

I read with interest this ongoing debate in the computational neuroscience community. Not being a computational neuroscientist myself, I was a little hesitant to wade into this debate. However, I thought I would raise some issues of deep interest to me and try to get some feedback from the community.

My hunch is that one part of computational neuroscience is about discovering the existing wiring and operating mechanisms of certain parts of the brain that generally come predefined or prewired to us, like parts of the vision system. Other modules may not come predefined or prewired, such as the ones a biologist has to create in his or her brain to learn a bit of mathematics in graduate school. That stuff is new for the brain and it's hard to learn. Wish it came prewired.

So I hope that there is another side of computational neuroscience that looks at learning mechanisms within the brain. That's the side that is of great interest to many of us who work on learning algorithms. I was wondering if there are any new insights or theories in computational neuroscience on how the brain "learns." Here are some issues that I would love to get some feedback on:

1. The artificial neural network community still believes that learning in the brain is real-time, almost instantaneous. It's real-time in the sense of Hebbian-style learning. And I believe computational neuroscience predominantly uses Hebbian-style models of learning. I personally doubt that the brain learns in real-time. There is plenty of evidence in experimental psychology to refute the real-time learning (Hebbian-style synaptic modification) claim. There is also enough recent evidence in cognitive neuroscience to refute that claim, although one has to carefully read between the lines of the conclusions of these papers. (In one such case, I suggested a different interpretation of the results and the authors agreed with it.) One can also logically argue that real-time instantaneous learning amounts to "magic," since no system, biological or otherwise, can set up a network and start learning in it without knowing anything about the problem before the start of learning.
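Just to be concrete about what I mean by "real-time" Hebbian-style learning, here is a minimal sketch (my own illustrative code, assuming a single linear neuron; the function name and learning rate are mine, not from any particular model). The point is that each synapse is modified instantaneously, using only the pre- and post-synaptic activity available at that moment, with no prior knowledge of the problem:

```python
def hebbian_step(w, x, eta=0.1):
    """One instantaneous Hebbian update: strengthen each weight in
    proportion to the product of its input (pre-synaptic activity)
    and the neuron's output (post-synaptic activity)."""
    y = sum(wi * xi for wi, xi in zip(w, x))     # post-synaptic activity
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

# Each stimulus presentation immediately changes the synapses.
w = [0.5, -0.2]
w = hebbian_step(w, [1.0, 0.0])
```

Every quantity in the update is local to the synapse and available at that instant, which is exactly the property I am questioning as a complete account of learning in the brain.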

My question is, is computational neuroscience still a firm believer in Hebbian-style real-time learning or have researchers looked at other forms of learning, like memory-based learning that is not real-time? 

2. It appears that the brain has the capacity to design networks when a new skill has to be learnt. Are there any studies/insights in computational neuroscience on how this design process works?

3. In machine learning and neural networks, there are two extremes in the design of algorithms. At one end are back-propagation-type algorithms, where neurons in the network use local learning laws to learn. At the other end are Support Vector Machine (SVM)-type algorithms, which were mentioned in that NSF report, and which bring in heavy computational machinery (e.g. quadratic programming) to both design and train neural networks. SVMs don't use local learning laws. I don't believe we have SVM-style mechanisms in our brains; it's just too complicated. So SVM algorithms are unrealistic for the brain, although they are widely used to solve learning problems in the machine learning community. But Hebbian-style or back-propagation-style real-time learning also has problems, and that's not just because of evidence from cognitive neuroscience and experimental psychology, but for logical reasons too.
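To illustrate the "local learning law" end of that spectrum, here is a small sketch (my own example, with illustrative names) of a perceptron-style update. Each weight changes using only its own input and the neuron's error signal; there is no global optimization over all training data, which is what the quadratic-programming machinery of an SVM performs instead:

```python
def local_update(w, b, x, target, eta=0.5):
    """One perceptron-style local update: each weight is adjusted
    using only its own input and the neuron's error signal."""
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    err = target - y                                  # local error signal
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    return w, b + eta * err

# Learn logical AND from repeated single-example presentations.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, t in data:
        w, b = local_update(w, b, x, t)
```

An SVM solving the same problem would instead pose one global quadratic program over the whole data set at once, which is the kind of machinery I find implausible as a brain mechanism.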

The question here is related to the first one: Is there a way in computational neuroscience to verify any of these theories of learning? 

Hope I am not asking stupid questions. Would love to get some thoughts and feedback. And any references would help.

Best wishes,
Asim Roy
Arizona State University
