[Comp-neuro] RE: Attractors, variability and noise

Ross Gayler r.gayler at gmail.com
Tue Aug 5 14:06:35 CEST 2008


Mario,

I'll make some responses to your post, although they speak rather indirectly
to issues of variability and noise.  Also, although I am working with
connectionist systems, my perspective is one of artificial intelligence
engineering - I am trying to achieve functional performance without any
specific concern for biological plausibility.  (Having said that, the
mechanisms I am investigating are not obviously incompatible with neural
implementation.)

> What do you think of recurrent neural networks (RNNs), with their wealth
> of attractors, as a model for variability/noise?

In general, I have no idea.  However, for my own work I am using
architectures very similar to Arathorn's Map-Seeking Circuits
(http://www.sup.org/book.cgi?book_id=4277). MSCs are recurrent neural
networks for recognising an input despite it being arbitrarily transformed (e.g.
an image that has been scaled, rotated and translated).  MSCs simultaneously
settle over the memory that is retrieved and the set of transformations that
transform the input to match the stored memory.  MSCs have an ascending
pathway that transforms the input to match the memory and a descending
pathway that inverse transforms the retrieved memory to reconstruct the
input.  Arathorn claims that this parallels some aspects of cortical
architecture.  I wouldn't know about that.  The point is that this allows a
level of variability that is greater than most people would assume when
thinking about RNNs.  The reason is that there are multiplicative
interactions and internal degrees of freedom (the selection of the
transforms), so that effectively the system has attractors that evolve over
the same time scale as the settling of the RNN.
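
To make that concrete, here is a deliberately over-simplified sketch in
Python (my own toy, not Arathorn's published algorithm) of the joint
settling I mean: a weight vector over stored memories and a weight vector
over candidate transforms (here just circular shifts of a 1-D pattern) are
multiplicatively updated against each other until both collapse onto the
winning hypothesis.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 64
    memories = rng.choice([0.0, 1.0], size=(3, D))   # stored patterns
    target = np.roll(memories[1], 5)                 # input = memory 1, shifted by 5

    # candidate ascending transforms: "undo a shift of k", for every k
    candidates = np.stack([np.roll(target, -k) for k in range(D)])

    g = np.ones(D)               # weights over candidate transforms
    c = np.ones(len(memories))   # weights over memories

    for _ in range(20):
        up = g @ candidates                  # superposition of transformed inputs
        c *= memories @ up                   # memories compete on the match
        c /= c.max() + 1e-12
        down = c @ memories                  # superposition of retrieved memories
        g *= candidates @ down               # transforms compete on the match
        g /= g.max() + 1e-12

    print("memory:", c.argmax(), "shift:", g.argmax())   # should recover 1 and 5

Neither weight vector can settle on its own; each one's update is gated by
the current state of the other, which is where the extra degrees of freedom
come from.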

Let me return to an argument from cognitive function.  We can recognise
novel situations almost as rapidly as familiar ones.  All the time we are
exposed to and deal with novel situations (e.g. my hovercraft is full of
eels).  If we assume (as I do) that RNNs are a natural basis for neural
computation, then we think in terms of attractors.  Familiar situations are
recognised rapidly by the process of settling into an attractor.  But even
novel situations are recognised so rapidly that it suggests they too are
recognised by the same process.  However, it is implausible to believe that
the brain comes pre-stocked with an attractor for every situation which
might ever arise (this is the dynamic systems equivalent of the "grandmother
cell" argument).  I believe the solution to this is to have dynamic
attractors, i.e. attractors that are created on the fly to meet the need of
the moment.  Where there are multiplicative interactions one RNN can
modulate the dynamics of another RNN leading to new attractors.  
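
Here is a toy example of that (my own illustration, nothing more): a
Hopfield-style network that has stored a single pattern p, with its update
gated multiplicatively by a bipolar context vector c supplied by a second
network.  Because c*c = 1 element-wise, c*p is a fixed point of the gated
dynamics even though it was never stored - an attractor created on the fly.

    import numpy as np

    rng = np.random.default_rng(1)
    D = 256
    p = rng.choice([-1.0, 1.0], size=D)      # the only stored pattern
    W = np.outer(p, p) / D                   # Hopfield-style weights for p
    np.fill_diagonal(W, 0.0)

    c = rng.choice([-1.0, 1.0], size=D)      # context from a second network

    def step(x):
        return np.sign(c * (W @ (c * x)))    # update gated multiplicatively by c

    x = c * p                                # probe near the "dynamic attractor"
    x[:40] *= -1                             # corrupt 40 of its elements
    for _ in range(10):
        x = step(x)

    print("settled on c*p:", np.array_equal(x, c * p))   # True, yet c*p was never stored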

The notion is to have the attractor corresponding to the entire input arise
as a function of the attractors corresponding to components of the input.
This relates to the notion of systematicity, which holds that a system able
to represent some situations (e.g. John loves Mary) will *necessarily* be
able to represent other related situations (e.g. Mary loves John).  This
arises as a consequence of representations being composites of components
(which can be interchanged).
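
To make the compositionality concrete, here is a toy sketch (random bipolar
vectors and role-filler binding by element-wise multiplication; purely
illustrative, no claim about neural implementation): the same four component
vectors compose into two distinct propositions, and each proposition can be
decoded back into its components.

    import numpy as np

    rng = np.random.default_rng(2)
    D = 1024

    def vec():
        return rng.choice([-1.0, 1.0], size=D)

    AGENT, PATIENT = vec(), vec()            # role vectors
    JOHN, MARY = vec(), vec()                # filler vectors

    john_loves_mary = AGENT * JOHN + PATIENT * MARY
    mary_loves_john = AGENT * MARY + PATIENT * JOHN

    # same components, but the two composites are nearly uncorrelated ...
    print(john_loves_mary @ mary_loves_john / D)       # close to 0

    # ... and each is decodable: unbind the AGENT role, compare to the fillers
    who = AGENT * john_loves_mary
    print(who @ JOHN / D, who @ MARY / D)              # close to 1 vs close to 0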

The only point I really want to make out of this is that when you are
dealing with RNNs in cognitive systems you should allow for the likelihood
that the attractors are dynamic and novel unless you are probing with
exceptionally impoverished stimuli (which is probably the norm in
electrophysiology).

Ross


-----Original Message-----
From: Mario Negrello [mailto:mnegrello at gmail.com] 
Sent: Saturday, 2 August 2008 12:33 AM
To: r.gayler at mbox.com.au
Cc: comp-neuro at neuroinf.org
Subject: Attractors, variability and noise

Dear Ross, Jim, David and list,

First off, thank you for the amazing discussion. I hope there is still some
momentum in it.
I take the opportunity to summarize and combine a couple of points, and pose
a question.

David Tam, among others, remarked that
> "So do neurons (or the brain) use noise in its computation?  If the 
> neurons care about those signals, it is not noise.  If they don't 
> care, yes, then it is noise."

This is a good working definition for noise, as it takes the 'receiver' into
consideration. I'll phrase it in terms of variability, if I may (it's for a
purpose):
- When the receiving system is indifferent to incoming variability, then
that variability is noise.

Jim and others insisted that it is hard to tell noise and variability apart
when the systems are complex.
>> the more sophisticated a coding system, the harder it is likely to be 
>> to distinguish signal from "noise".

1. As Ross points out, concerning abstract connectionist models, the
response of a single neuron may look random, because by recording one neuron
we don't see the multidimensional pattern of activity.
2. And as David adds, neural computation is distinction: if there is no
distinction, there is no computation.
3. Moreover, Jim added that neural computation is likely to be sequential.

The points above fit nicely with the idea that brain networks are analogous
to recurrent neural networks, which have, instead of a lot of noise, complex
transients.

The question: What do you think of recurrent neural networks (RNNs), with
their wealth of attractors, as a model for variability/noise?  
Large networks produce much repeatable variability in 'unpredictable'
oscillatory patterns. But with knowledge of the network structure, much can
be known about the possible dynamical patterns that the system may produce.
And with respect to functional levels, one can distinguish between
meaningful and meaningless variability. (By functional levels, I mean levels
of the network to which we may attribute particular functions, say
perceptual categorization or motor pattern formation.)
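
As a toy illustration of the claim that knowledge of the structure
constrains the possible dynamical patterns (my own sketch, nothing more):
linearise a random tanh rate network around the origin, and the eigenvalues
of the Jacobian already tell you how many directions grow and how fast the
system can oscillate.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 100
    gain = 1.5                                     # gain > 1 gives rich transients
    W = gain * rng.standard_normal((N, N)) / np.sqrt(N)

    # rate model dx/dt = -x + W tanh(x); its Jacobian at x = 0 is W - I
    eig = np.linalg.eigvals(W - np.eye(N))
    print("growing modes:", int(np.sum(eig.real > 0)))
    print("fastest oscillatory mode (|imag|):", np.abs(eig.imag).max())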

Regarding the items above:
To (1): it is self-evident when one takes a large network of, say, sigmoidal
units, and looks at the activity of any one neuron. Though the path of the
transients in phase space is highly structured, the activity of the single
neuron is scarcely predictable without knowledge of the network structure.
But if one has knowledge about the structure, one also has information about
possible activity patterns.
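
A minimal sketch of this, with a random tanh network standing in for the
large network of sigmoidal units:

    import numpy as np

    rng = np.random.default_rng(4)
    N, T = 200, 30
    W = 1.8 * rng.standard_normal((N, N)) / np.sqrt(N)
    x = 0.1 * rng.standard_normal(N)

    trace = []
    for _ in range(T):
        x = np.tanh(W @ x)        # fully deterministic update of the whole network
        trace.append(x[0])        # but we record only one unit

    print(np.round(trace, 2))     # typically looks irregular, yet repeats exactly on rerun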

Will these patterns resemble those of more complex biological networks? My
guess is yes, to the extent allowed by the level of abstraction introduced.

To (2), there is something that can be said in terms of distinction
mechanisms and transient activity. If one considers the transient activity
of a module/area as an open path in phase space (an orbit), then the
activity of one neuron must be a projection of all the incoming activity
onto the hyperplane defined by the projection weight matrix (attached
diagram).
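
In the simplest reading, each unit only ever sees the scalar projection of
the full state onto its own weight vector (toy sketch only):

    import numpy as np

    rng = np.random.default_rng(5)
    N = 200
    x = rng.standard_normal(N)       # full network state, one point on the orbit
    w_i = rng.standard_normal(N)     # afferent weights of a single unit i

    projection = w_i @ x             # all that the unit "sees" of the orbit
    activity = np.tanh(projection)   # its activity is a function of that one scalar
    print(projection, activity)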


