[Comp-neuro] role of noise in learning

Patrick Simen psimen at Math.Princeton.EDU
Thu Jul 24 16:38:33 CEST 2008


This is a fascinating discussion. We have a paper under review that is 
similar in spirit to this interesting abstract, and a poster highlighting 
its main points at:

http://www.math.princeton.edu/~psimen/CCNC2007poster.pdf

It is based on a much coarser-grained model, and treats choice behavior 
in animals as arising from an integrating or smoothing process applied 
to intrinsically "noisy" communication between neurons. We analyzed a 
behavioral choice model built from a simple 
combination of low-pass filters/leaky-integrators/capacitors coupled 
together by connections that add Gaussian white noise to the signals they 
propagate. From this model, we prove that a classic "law" of animal 
behavior from the instrumental conditioning literature (the "matching 
law" and its generalized form) emerges. Several earlier models have made 
the same point (e.g., Soltani and Wang, J. Neurosci., 2006; 
Loewenstein and Seung, PNAS, 2006; just to name two). Our model also makes 
additional analytical predictions about inter-response times that we 
believe are novel.
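The basic element of the kind of model described above can be illustrated 
with a short simulation. This is only a minimal sketch, not the model from 
the paper: a single leaky integrator (low-pass filter) whose constant input 
signal arrives corrupted by Gaussian white noise, integrated with the 
Euler-Maruyama method. All parameter values are illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's model): a leaky integrator /
# low-pass filter driven by a signal plus Gaussian white noise.
# Euler-Maruyama integration of  dx = ((I - x) / tau) dt + sigma dW.
rng = np.random.default_rng(0)
dt, tau, sigma = 0.001, 0.1, 1.0   # step (s), time constant (s), noise level
I = 1.0                            # constant input signal
n_steps = int(10 / dt)             # simulate 10 s

x = np.zeros(n_steps)
for t in range(1, n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    x[t] = x[t - 1] + dt * (I - x[t - 1]) / tau + sigma * dW

# After the initial transient, the integrator hovers near the input
# level, with fluctuations much smaller than the raw noise it receives.
late = x[n_steps // 2:]
print(abs(late.mean() - I) < 0.2, late.std() < sigma)
```

The smoothing is what makes a reliable choice signal recoverable from noisy 
inter-neuron communication: the integrator's stationary fluctuations scale 
with the square root of its time constant, so slower integration yields a 
cleaner estimate of the underlying signal.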

The point is that not only may neurons need to explore the space of 
connection strengths or transfer functions, but animals similarly need 
to explore the space of possible responses in order to find rewarding 
behaviors.

I realize that this modeling level may be a bit too abstract for the 
tastes of this list's readership, but I find it encouraging that very 
simple learning principles may apply at such widely different temporal and 
spatial scales.

Best wishes, and many thanks for a fresh perspective on this very 
interesting topic!

--Pat.

------------------------------------------------
Patrick Simen, PhD
Research Fellow
Program in Applied & Computational Mathematics
Center for the Study of Brain, Mind & Behavior
Princeton University
email: psimen at math.princeton.edu
http://www.math.princeton.edu/~psimen
------------------------------------------------

On Thu, 24 Jul 2008, Wolfgang Maass wrote:

> I would like to add to your discussion that "noise" is obviously
> needed for reward-based learning in networks of neurons:
>
> If such networks have to learn without a supervisor (which tells the neurons 
> during training when they should fire), they have to explore different ways 
> of responding to a stimulus, until they come across responses that are 
> "rewarded" because they provide good system performance. This exploration 
> would appear as "noise" in most analyses. In fact, one might conjecture that 
> networks of neurons are genetically endowed with the capability to go through 
> rather clever exploration patterns (i.e., particular types of "noise"), in 
> order to enable fast convergence of such reinforcement learning schemes.
>
> The role of noise in reward-based learning has been analyzed by a number of 
> people, see #183 on
> http://www.igi.tugraz.at/maass/publications.html
> for a very recent contribution (and references to earlier work).
>
> -Wolfgang
>
> -- 
> Prof. Dr. Wolfgang Maass
> Institut fuer Grundlagen der Informationsverarbeitung
> Technische Universitaet Graz
> Inffeldgasse 16b, A-8010 Graz, Austria
> Tel.:  ++43/316/873-5811
> Fax   ++43/316/873-5805
> http://www.igi.tugraz.at/maass/Welcome.html
> _______________________________________________
> Comp-neuro mailing list
> Comp-neuro at neuroinf.org
> http://www.neuroinf.org/mailman/listinfo/comp-neuro
>
>
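The exploration idea in the quoted message can be sketched in a few lines. 
This is a hedged illustration, not Wolfgang's actual model: a single weight 
explores via Gaussian "noise", and perturbations are retained only when they 
increase reward, giving a bare-bones reward-based learning rule. The rewarded 
weight value and noise amplitude are hypothetical.

```python
import numpy as np

# Hedged sketch of reward-based learning through noisy exploration
# (not the model from the cited paper): a weight is perturbed by
# Gaussian exploration noise, and only rewarded perturbations are kept.
rng = np.random.default_rng(1)
w_rewarded = 0.7          # hypothetical weight value that maximizes reward
sigma = 0.1               # amplitude of the exploration noise
w = 0.0                   # initial weight

def reward(w):
    # reward grows as the weight approaches the rewarded value
    return -(w - w_rewarded) ** 2

for _ in range(2000):
    trial = w + sigma * rng.standard_normal()   # noisy exploration
    if reward(trial) > reward(w):               # keep rewarded responses
        w = trial

print(abs(w - w_rewarded) < 0.1)
```

Without the noise term there is nothing to drive the weight away from its 
starting point; the "noise" is precisely what lets the learner discover which 
responses are rewarded, as the quoted message argues.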
