[Comp-neuro] Re: role of noise in learning

Ali Minai minai_ali at yahoo.com
Thu Jul 24 15:45:49 CEST 2008


Indeed, this is an example of the general method by which complex biological systems learn: explore, and reinforce preferentially. This applies to evolution (of course), developmental models of cognitive learning (e.g., ideomotor models), swarm construction (e.g., termite nests), etc. So, two questions:

1. Is it even possible for a non-teleological system (i.e., a system that is not driven by a pre-determined target/goal, but does have an implicitly defined fitness/quality function) embedded in a complex environment to (self-)optimize efficiently without the help of noise?

2. Are some types of systems better at this than others? And if so, are there generic principles underlying this?
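
To make question 1 concrete, here is a toy sketch (my own illustration; the landscape, step size, and noise level are all made up, not taken from the thread): a greedy climber on a two-peaked fitness function stalls on the local peak, while the same explore-and-reinforce loop with noisy proposals can reach the global one.

```python
import random

random.seed(0)

# Illustrative two-peak fitness landscape: a local peak at x = 2
# (fitness 0) and a global peak at x = 8 (fitness 4).
def fitness(x):
    if x < 5:
        return -0.5 * (x - 2) ** 2
    return 4.0 - 0.5 * (x - 8) ** 2

def explore(steps, sigma):
    """Explore and reinforce preferentially: propose a perturbation,
    keep it only if fitness improves.  With sigma = 0 the proposals
    degenerate into a fixed deterministic step, and the climber can
    never leave the basin of the local peak."""
    x = 0.0
    for _ in range(steps):
        dx = random.gauss(0.0, sigma) if sigma > 0 else 0.1
        if fitness(x + dx) > fitness(x):
            x += dx
    return x

print(explore(1000, 0.0))   # stalls near the local peak at x = 2
print(explore(2000, 2.0))   # noisy exploration finds the global basin
```

The point of the sketch is only that noise supplies the jumps that a purely greedy, noise-free dynamics cannot generate; whether some systems exploit structured rather than unstructured noise is exactly question 2.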


Ali


---------------------------------------------------------------------
Ali A. Minai
Associate Professor
Department of Electrical & Computer Engineering
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minai_ali at yahoo.com

WWW: http://www.ece.uc.edu/~aminai/

----------------------------------------------------------------------

--- On Thu, 7/24/08, Wolfgang Maass <wolfgang.maass at igi.tugraz.at> wrote:
From: Wolfgang Maass <wolfgang.maass at igi.tugraz.at>
Subject: role of noise in learning
To: minai_ali at yahoo.com, comp-neuro at neuroinf.org
Cc: "Etienne B. Roesch" <Etienne.Roesch at pse.unige.ch>
Date: Thursday, July 24, 2008, 4:47 AM

I would like to mention that "noise" is obviously also needed for
reinforcement learning in networks of neurons:

If such networks have to learn without a supervisor (which tells the 
neurons when they should fire), they have to explore different ways of 
responding to a stimulus, until they come across responses that are 
"rewarded" because they provide good network performance. This 
exploration would appear as "noise" in most analyses. In fact, one 
might conjecture that networks of neurons are genetically endowed with 
the capability to carry out particularly useful exploration patterns 
(i.e., particular types of "noise"), in order to enable fast 
convergence of such reinforcement learning schemes.

This has been demonstrated by a number of people; see #183 on
http://www.igi.tugraz.at/maass/publications.html
for a very recent contribution (and references to earlier work).
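
For the flavor of such a scheme, here is a minimal reward-modulated sketch (a node-perturbation, REINFORCE-style rule): exploratory noise perturbs the response, and perturbations that raise the reward are reinforced. The single-weight task, learning rates, and noise level are illustrative placeholders of mine, not the model from publication #183.

```python
import random

random.seed(1)

def train(steps=5000, lr=0.01, sigma=0.5, target=2.0):
    """Learn a single weight w so that the perturbed response w + xi
    earns high reward.  The exploratory term xi is the 'noise'; the
    update combines the learning rate, the reward's deviation from a
    running baseline, and the noise itself."""
    w = 0.0
    r_bar = 0.0                           # running reward baseline
    for _ in range(steps):
        xi = random.gauss(0.0, sigma)     # exploration, read as "noise"
        y = w + xi                        # perturbed response
        r = -(y - target) ** 2            # reward: highest near the target
        w += lr * (r - r_bar) * xi        # reinforce useful perturbations
        r_bar += 0.05 * (r - r_bar)       # slowly track the mean reward
    return w

print(train())   # converges near target = 2.0
```

On average the update follows the reward gradient (E[(r - r_bar) * xi] is proportional to the derivative of expected reward with respect to w), so what would look like noise in a recording is doing the work of exploration; a genetically shaped noise distribution would simply be a better proposal distribution for the same loop.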

w

Ali Minai wrote:
> Noise does this and much, much more. It can inject variety, break 
> symmetry, generate novelty, provide energy, facilitate search, carry 
> signal, and do many other things. Indeed, the only time noise is really 
> a problem is when one is trying to achieve a pre-determined goal 
> (e.g., following a pre-computed trajectory). Since natural systems - 
> notably the nervous system - rarely (if ever) try to do this, they 
> thrive on noise. Perhaps we should give the phenomenon a less pejorative 
> name. "Noise" signals such a linear mindset:-).
> 
> Ali
> 
> 
> ---------------------------------------------------------------------
> Ali A. Minai
> Associate Professor
> Associate Head for Electrical Engineering
> Department of Electrical & Computer Engineering
> University of Cincinnati
> Cincinnati, OH 45221-0030
> 
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: aminai at ececs.uc.edu
>           minai_ali at yahoo.com
> 
> WWW: http://www.ececs.uc.edu/~aminai/
> 
> ----------------------------------------------------------------------
> 
> --- On Tue, 7/22/08, Etienne B. Roesch <Etienne.Roesch at pse.unige.ch> 
> wrote:
> 
>     From: Etienne B. Roesch <Etienne.Roesch at pse.unige.ch>
>     Subject: Re: [Comp-neuro] Review announcement
>     To: comp-neuro at neuroinf.org
>     Cc: comp-neuro-bounces at neuroinf.org
>     Date: Tuesday, July 22, 2008, 11:28 AM
> 
> 
>     Yeah, I am loving the discussion! More, more!
> 
>     As an early postdoc, I still have in my working memory the classes I
>     went through in grad school, and I remember this connectionist
>     lecturer arguing that noise was actually a good thing for
>     classifier-like systems (and by extension neural nets, and by
>     extension plausible neural nets -- which are not classifiers stricto
>     sensu, I agree) in that it allows easier discrimination of the
>     input in a probabilistic context. Given that redundancy of
>     information/signal plays a big part in how the brain does the job,
>     wouldn't noise be a clever mechanism to discriminate
>     close-to-threshold stimuli? What do you think?
> 
>     Best regards,
> 
> 
>     Le 22 juil. 08 à 17:17, jim bower a écrit :
> 
>>     I am actually in a remote part of Brazil at the moment, so limited
>>     to typing on my blackberry.
> 
>     Impressive typing skills, I have to admit. ;-)
> 
> 
>>     However, yes I was curious if a discussion could be induced. That
>>     was originally what this mailing list was set up for, I know,
>>     because I started it. ;-). However things have become a bit
>>     complacent so I figured what the heck.
>>
>>     Again, limited in my ability to respond, but a couple of things. I
>>     think as computational neurobiologists, or scientists in general,
>>     we need to be aware of the extent to which what we can measure
>>     (oscillations, synchronous spikes, etc.) limits the way we think
>>     about how things work. Many years ago, when cortical oscillations
>>     became more generally interesting to people once found in visual
>>     cortex, we suggested, based on our realistic cortical models, that
>>     they were an epiphenomenon, more (loosely) reflecting an underlying
>>     mechanism for coordinating communication and processing between
>>     regions than carriers of any information themselves. I continue to
>>     believe, or set as my primary assumption until proven otherwise,
>>     that every spike is significant for something, and worse yet, so
>>     is the lack of a spike. (Certainly in digital coding, 0s are as
>>     important as 1s.)
>>
>>     Yes, "serious scientists" prefer more constrained and defined
>>     discussions than this - but we can easily get lost "drinking our
>>     own whisky," as a famous computational math-bio guy is fond of
>>     saying.
>>
>>     ;-)
>>
>>     Truth is all these issues really remain wide open.
>>
>>     But, and it is a big but: there is no evidence that nature is
>>     sloppy or unsophisticated.
>>
>>     One last point, the assumption that in fact nature is very
>>     sophisticated and that the structure of the brain deeply reflects
>>     a complex, sophisticated function pushes in the direction of first
>>     building models reflecting that structure, even if you are still
>>     clueless about function.
>>
>>     I am in Brazil teaching at the Latin American School for
>>     Computational Neuroscience, where realistic modeling lives on. ;-)
>>
>>     Best to all
>>
>>     Jim
>>
>>     Sent via BlackBerry by AT&T
> 
>     -----
>     Etienne Roesch
>     Department of Computing
>     Imperial College
>     London SW7 2AZ
> 
>     _______________________________________________
>     Comp-neuro mailing list
>     Comp-neuro at neuroinf.org
>     http://www.neuroinf.org/mailman/listinfo/comp-neuro
> 
> 

-- 
Prof. Dr. Wolfgang Maass
Institut fuer Grundlagen der Informationsverarbeitung
Technische Universitaet Graz
Inffeldgasse 16b ,   A-8010 Graz,  Austria
Tel.:  ++43/316/873-5811
Fax   ++43/316/873-5805
http://www.igi.tugraz.at/maass/Welcome.html

