[Comp-neuro] Re: Attractors, variability and noise

Sabine and Marc de Kamps dekamps at tiscali.co.uk
Sat Aug 16 15:07:28 CEST 2008

Robert's point is fascinating - not least because I couldn't disagree more.

I don't see how we can dispense with what Robert calls the self-indulgent
notion of understanding. I think a good model is a good model because it
changes something inside us: we are able to see the world around us as we
have not seen it before, and because of it we are better equipped to deal with
it. The point about Newton's laws is not that they make better predictions
than epicycle theory. The latter was carefully honed to make the right
predictions (at least in Newton's time). Newton's laws and their consequences
deliver better understanding: it is easier to explain, and the curious
differences between the inner and the outer planets make much more sense, if
you don't put the earth in the centre of the solar system, but right between
Venus and Mars.

We have two issues here, and I think they are being confused. We have the
word 'model' used for an idea, a transformational insight that allows
us to see the world around us in a new way. And we have 'model' as a
mechanical aid, a machine that allows us to examine the ultimate
consequences of those ideas. In this form a model can be a set of equations
or a computer program. Both are mechanical aids to explore the consequences
of ideas.

When Robert talks about "half a dozen equations and a bucket-full of well
known physical data", then apparently that is the simplest collection of
ideas that you can have about the transition of dwarfs into giants. And it
still seems relatively easy to understand the process in these terms. When
he then says that there are no high-level equations that capture this
transition, but argues that only a computer program is able to evaluate the
consequences of small variations in the data, I say fine. Apparently we need
a mechanical aid to explore the consequences of a handful of equations and
some data. But
when he then says "...but it turns out that if you compute what would happen
with slightly different physics (bigger gravitational constant, different
opacities etc) then stars don't necessarily turn into giants. So they are
not actually insensitive to the quantitative details.", he installs a
powerful new idea into my mind with those very words. He has changed my
perception of the world (universe even!).

It is very hard to see how we can do science if we don't start from these
high level ideas. And these high level ideas may explain something about the
brain that is not easily explained in terms of neuroscience or biology. As
an example, consider compositional representations and the binding problem.
Many researchers are convinced that it makes sense to have brain
representations of form that are devoid of any other connotation (i.e., they
are colour-, motion-, and position-invariant representations of a given form).
Likewise, there are abstract (= without form, position, etc. info)
representations of colour, motion, and so on. The pros and cons have been
discussed extensively in the literature, and there is still an ongoing debate
about whether such representations exist (many researchers now seem to agree
that they do) and, if so, how (synchrony, neural blackboard architecture;
disagreement all over the place). But whatever your position in this debate,
it is a powerful
idea, one that's not going to go away once you've heard about it. And it
involves very little biology. Its validation as a data processing principle
of the brain may need massive amounts of data and simulations, but perhaps
it may not. A computer scientist might consider it a question of how to
design a good representation, one that is able to deal with novelty in the
outside world, for an agent that uses networks for information processing.
It might be an idea that works for artificial agents using artificial
networks. As a neuroscientist you might be motivated to look for a neural
mechanism that implements it. Models like that are certainly not idea free:
the whole work is being driven by ideas and any modelling that would come
out of it would not exist if it weren't for the ideas.

There are many other examples of explanations of aspects of brain function
that involve very little biology; Olshausen & Field's work, for example.
I sometimes get the feeling that many people in this group see the study of
brain function exclusively as a biological problem. I'd disagree with that.

My main point is that there may be many things about the brain that, in the
end, can be explained in relatively simple terms. That does not mean that we
don't have to graft to get there: some of these ideas will only be validated
by very realistic and detailed simulations. If I call equations or
simulators mechanical aids, then that doesn't mean I don't appreciate their
value. And if Robert argues that the relative merits of software compared to
equations are underestimated, he is right. We are light years behind
high-energy physics in terms of creating a software infrastructure, and we
have a far greater need for it.


-----Original Message-----
From: comp-neuro-bounces at neuroinf.org
[mailto:comp-neuro-bounces at neuroinf.org] On Behalf Of Robert Cannon
Sent: 13 August 2008 19:23
To: james bower
Cc: CompNeuro List
Subject: Re: [Comp-neuro] Re: Attractors, variability and noise

> As I understand the  other end of the spectrum, we construct 
> increasingly realistic models and end up with a simulated brain without 
> a real understanding of how it works, which makes no sense to me.  
> Understanding is what we're after, and that understanding can only 
> reside in the brains of the population of scientists, not in their models.


Brad's point is fascinating - not least because I couldn't disagree more. :)

I do like the notion of understanding, but I suspect it is also somewhat
self-indulgent, because there may not be a level on which it can be shared
above that of working models.

To help explain why, when I was working in astronomy there was a
feeling among many of my colleagues that there should be a moratorium
on publication of papers purporting to explain a particular classical
phenomenon because the type of explanations being sought couldn't
actually exist. The problem is a fairly basic bit of astrophysics -
the transition of many stars from dwarfs to giants for the last tenth
of their active lives. There is no mystery here: there are
half a dozen equations and a bucket-full of well known physical data.
You implement them on a computer and you get something that behaves pretty
much like a real star.  Then you've got your "prodable brain"
equivalent and it is natural to seek higher level,
intuitive, easily communicated, mathematically elegant explanations
of what's going on. Quite a few (mutually incompatible) explanations
were published.

The whole game unraveled, however, when people began addressing
"what-if" questions with these models. By definition, the explanations
are insensitive to quantitative details (like the opacity or
pressure-density relationship for stellar material) but it turns out
that if you compute what would happen with slightly different physics
(bigger gravitational constant, different opacities etc) then stars
don't necessarily turn into giants. So they are not actually insensitive
to the quantitative details. In effect the parameter
space is lumpy and we're in a particular patch (of course, you can
theorize about why we have to be in that patch but that's another
question entirely). Elegant explanations assume smoothness but
the space isn't smooth, so no such explanations can be correct.

Another observation was that when you ask people to predict
the outcomes of these what-if questions (about the only type of
experimental intervention that is possible in astronomy) then the people
who write and run the programs often do better than the theorists.

So, like most areas of expertise, you can develop an intuitive
understanding and internal model of the domain by years of
application, but there's no short cut - you can't get it from a
book. Other people who want the same abilities will have to get
there in the same way by internalizing the same mass of data.

My point is that for this particular problem, high-level theory is
not much use. Some of it is epiphenomenal, and the rest is just plain
wrong. The models work fine but they are too complicated to run in
your head. The simpler things that you can run in your head or on
paper are too coarse to be any use.

My personal guess, which I realize is deeply unpopular, is that this
also applies to much of neuroscience. If we do have a simulated brain,
then it will have been built using a very large volume of data, a few
equations and a lot of extremely sophisticated software
engineering. I'm not sure there will be any point in looking for
theories at a higher level than the design documents and software
architecture that went into making it. If such complexity reduction
were possible, then you'd hope the engineers would have got there by then.

The issue of whether, when you have a 64 bit floating point unit at
your disposal instead of a mass of synapses and ion channels, you can
make an equivalent but more mathematical and easily computed version
of a purkinje cell seems like a case of premature optimization
(or perhaps of engineering expediency) driven by today's prevalent
technology, not a question of durable scientific relevance.

As I see it, the area that needs the most attention, (both funding
and education) but receives practically none, is not maths, but how on
earth we develop the software engineering and data management
concepts, languages and technologies that will enable us to build the
next n generations of models.


Comp-neuro mailing list
Comp-neuro at neuroinf.org
