[Comp-neuro] "realistic models"
Ali Minai
minai_ali at yahoo.com
Wed Aug 20 05:47:36 CEST 2008
Jim,
That's a great, thought-provoking post. If the human brain is KCC, would you agree that the brains of all animals should be KCC, each solving a problem that cannot be solved by any simpler system? If that is the case, we would do much better at understanding human cognition (not just sensory or motor function) by starting with lampreys and salamanders and gradually building up to reptile, bird, mammal, etc. An interesting question in this regard would be this: Has evolution discovered radically new structures and processes in the evolution of brains from, say, the simplest vertebrates to the higher mammals, or is it just a matter of degree - reorganization, duplication and divergence, ramification, multiplication, etc., of the same basic set of canonical structures? If the latter, what are those canonical structures and what principles do they use?
Let me also play devil's advocate for a minute and turn the issue on its head. Isn't KCC just another way of saying that a given brain solves all problems up to the greatest level of complexity it can handle - which seems self-evident? The issue isn't the complete set of problems that the brain is solving, but the problems that we are interested in - which may not be the most complex ones being "solved" by the brain. For example, perhaps the brain is "solving" the problem of generating the particular spiking pattern of every neuron at all times, but that does not mean that "solving" this problem is essential to cognitive function. Perhaps what we consider cognitive function is just a sub-phenomenon or a coarse-graining of this larger "solution". After all, the brain (actually, the organism as a whole) is not out to acquire a specific target functionality or solve any pre-specified problem. It does what occurs in the course of its natural dynamics, which
we later describe as "pattern recognition", "motor control", "planning", etc., in our reductionistic terminology. These are "functions" of our making, our way of dividing up the actual functionality of the organism, and we have no way of knowing what we have left out. It has no name - yet! In this situation, it is possible that what we, in our limited wisdom, define as the constituents of perception, cognition and action are feasible in a simpler system. Perhaps that simpler system cannot keep track of as many spikes or modulate channels in as many ways, but it may still work at the level where we define function.
The truth is that we know nothing about the actual problems that the brain is solving. Nor do we know how to define the problems of perception, action and cognition, or even know their "dimensionality". As such, we have no means of deciding whether the brain is KCC, or if it matters in our enterprise. Indeed, asking what problems the brain (or the organism) solves might not be that useful. It is like
asking what problem a waterfall solves. It does what it
does, and nothing less complex could do the same. But we go in there post facto and discover patterns and waves and turbulence, and call them functions. It does not follow that these functions need the waterfall down to its molecular level, or could not be produced by a simpler system. Of course, we may have missed the really interesting functions altogether. I am sure that as neuroscience and cognitive science progress, we will discover functions that we cannot even think of today, but that should not keep us from trying to understand the ones we can think of.
All this said, I think that overselling what this or that modeling approach can accomplish is a real danger, and, as scientists, we have to approach living systems with great humility.
Ali
---------------------------------------------------------------------
Ali A. Minai
Complex Adaptive Systems Lab
Associate Professor
Department of Electrical & Computer Engineering
University of Cincinnati
Cincinnati, OH 45221-0030
Phone: (513) 556-4783
Fax: (513) 556-7326
Email: aminai at ece.uc.edu
minai_ali at yahoo.com
WWW: http://www.ece.uc.edu/~aminai/
----------------------------------------------------------------------
--- On Mon, 8/18/08, james bower <bower at uthscsa.edu> wrote:
From: james bower <bower at uthscsa.edu>
Subject: Re: [Comp-neuro] "realistic models"
To: bard at math.pitt.edu
Cc: comp-neuro at neuroinf.org
Date: Monday, August 18, 2008, 1:41 PM
Bard and everyone else:
On Aug 17, 2008, at 8:30 AM, G. Bard Ermentrout wrote:
> Carson Chow, a former colleague, has an interesting summary of this
> discussion on
> sciencehouse.blogspot.com
Yes, a well-written summary of several of the points -- and the
introduction of an idea that, in fact, does lie somewhere near the
foundation of this debate. The question as to whether the brain can
be represented by a structure (whatever it is) less complex than the
brain itself -- formally, this moves us into complexity theory and a
Kolmogorovian (sic) framework for thinking about levels of
complexity. I actually like the characterization "Kolmogorov
Complexity Complete" (KCC). And yes, I do suspect that the brain is
KCC - and more formally, that the brain approaches in its complexity
the complexity of the problem(s) it evolved to solve.
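As an aside, Kolmogorov complexity itself is famously uncomputable,
but compressed length gives a crude, computable upper bound on it,
which is how the idea is usually made operational. A toy illustration
(my own, and obviously nothing to do with any particular brain model):

import random
import zlib

def description_length(data: bytes) -> int:
    """Compressed length: a crude, computable upper bound on the
    Kolmogorov complexity of `data` (the true value is uncomputable)."""
    return len(zlib.compress(data, 9))

random.seed(0)
regular = b"spike " * 1000                                 # highly structured
noisy = bytes(random.randrange(256) for _ in range(6000))  # essentially random

print(description_length(regular))  # small: a short program regenerates it all
print(description_length(noisy))    # near 6000: no shorter description is found

The KCC claim is then that, for the brain, no comparably dramatic
compression is possible without losing function.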
Which brings us to a specific aspect of Kolmogorov complexity which is
directly relevant to neuroscience, and that is the relationship
between the complexity of the solution to a problem and the intrinsic
complexity of the problem itself. In his book "Vision", Marr
proposed
(in what he referred to actually as a 'bottom up' approach to
understanding the nervous system), that one must first understand the
nature of the computational problem being solved, and then consider
the set of algorithms that could solve the problem, and then and only
then, look at the particular instance of that set implemented in the
brain (under the assumption that the brain was not KCC). Complexity
theory provides a formal framework (oh that again) for considering the
relationship between the inherent complexity of a particular problem,
and the complexity of the solution to that problem -- I believe (and
someone will surely correct me if I am wrong), there is a fundamental
principle that the complexity of the problem sets a kind of floor for
the complexity of the solutions to the problem. That is, you can
find more complex solutions to a problem -- but you can't find a
solution with less complexity than the problem itself; if you did,
then you could recast the original problem in a less complex form.
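For those who like it written down, one way to state that floor (my
paraphrase, with the usual additive, machine-dependent constant) is:

% If the problem P can be reconstructed from any correct solution S
% (say, S specifies the input-output relation it must compute), then
\[
  K(P) \le K(S) + c
  \quad\Longleftrightarrow\quad
  K(S) \ge K(P) - c ,
\]
% i.e., no solution can be substantially simpler than the problem it
% solves: a much shorter S would itself be a shorter description of P.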
Accordingly, if the brain is KCC, then, by definition, solutions to
real brain problems involving 4 input 'neurons', 10 in the hidden
layer, and 3 output 'neurons' must be underestimating the real nature
of the problem the brain solves. Or in other words, if the solution
to the problem is less complex than the brain, then one has
misunderstood the problem.
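Just to make that mismatch concrete, here is the entire parameter
content of the kind of 4-10-3 network I am talking about (a generic
sketch, not anyone's published model):

import numpy as np

rng = np.random.default_rng(0)

# A 4-10-3 feedforward network of the kind described above.
W1, b1 = rng.standard_normal((10, 4)), np.zeros(10)  # input -> hidden
W2, b2 = rng.standard_normal((3, 10)), np.zeros(3)   # hidden -> output

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)

n_params = sum(w.size for w in (W1, b1, W2, b2))
print(n_params)  # 83 free parameters

Eighty-three numbers, against something on the order of 10^14 synapses
in a human brain; whatever such a network solves, it is not the
problem the brain is solving, if the brain is KCC.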
In this context, Todd Troyer's post today makes the rather important
point, that probably our greatest deficiency in studying the brain is
that our understanding of the complexities of natural behavior
significantly lags even our understanding of the structure of neurons.
And yes, neuroethology is the branch of neuroscience that has made the
effort to try to link behavior (and even natural behavior) to the
brain. Linking to a previous post, most (although not all)
neuroethologists study "simpler" systems. Furthermore, the roots of
neuroethology are European, and place much more emphasis on innate
patterns of behavior than did American behaviorists, who did think
that with the right combination of M&M's, animals could
significantly stretch what they associated with what.
Anyway, neurobiologists are very adept at designing behavioral
experiments in which the complexities of real behavior are
"controlled". In doing so, they run the risk of turning the nervous
systems of monkeys (for example) into "oriented bar detectors"
rather
than real functioning nervous systems. Of course, monkeys do
everything in their power to use their full brains to second guess
experimentalists - and therefore an important feature of the process
of training monkeys is to defeat their efforts (as one of my monkey
studying friends has said) to find a complex solution to what is
really a simple problem you want them to solve; i.e., one has to
convince the monkey's brain that you really only want it to do
something dumb - because, of course, the monkey's brain is not
inclined to believe that it is supposed to do dumb things, especially
when it is extremely thirsty. Anyway, I assume that most of you would
agree that our lack of understanding (or even efforts to understand)
complex natural behavior is a rather significant problem. If you
don't really know what the thing does (i.e., you don't study the
circumstances for which it was engineered), that should at least
complicate the process of understanding the engineering.
But anyway, returning to Bard's post and something a bit more
concrete, let's discuss dendrites, which are the brain objects most
brutalized by abstract modeling.
In one view (offered by Bard below - although not necessarily
completely reflecting his point of view I am sure), practical
considerations of building stuff in real stuff (carbon) mean that
if, for whatever reasons, a neuron needs to receive 150,000 excitatory
synaptic inputs (like the Cerebellar Purkinje cell), it has no choice
but to have a large dendrite -- and one uses currents in those
dendrites to effectively negate its existence by making the spatial
position of a particular input on the dendrite irrelevant with respect
to the soma. Ironically enough, to my knowledge the first
demonstration of this effective form of spatial independence in a
complex realistic model was generated by our own work (De Schutter,
E., and Bower, J.M. (1994) Responses of cerebellar Purkinje cells
are independent of the dendritic location of granule cell synaptic
inputs. Proc. Natl. Acad. Sci. (US). 91: 4736-4740). If this was all
that was going on, it is likely that there is some (non-stuff
constrained) mathematical description that could capture the essence
of the cell with less complexity. In fact, several examples for the
Purkinje cell have already been generated.
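For contrast, here is what the purely passive case looks like in a toy
cable (a generic sketch, emphatically not the De Schutter and Bower
model): with only leak and axial conductances, the steady-state
somatic response depends strongly on where the current is injected,
which is exactly why active dendritic currents were needed to produce
the location independence described above.

import numpy as np

# Toy passive cable: N compartments in a chain, compartment 0 = soma.
# Steady-state voltages solve G @ v = i (leak plus axial coupling).
N = 20
g_leak = 1.0   # leak conductance per compartment (arbitrary units)
g_axial = 5.0  # coupling conductance between neighbouring compartments

G = np.zeros((N, N))
for k in range(N):
    G[k, k] += g_leak
    if k + 1 < N:
        G[k, k] += g_axial
        G[k + 1, k + 1] += g_axial
        G[k, k + 1] -= g_axial
        G[k + 1, k] -= g_axial

def soma_response(inject_at, amplitude=1.0):
    """Steady-state somatic voltage for current injected at one compartment."""
    i = np.zeros(N)
    i[inject_at] = amplitude
    return np.linalg.solve(G, i)[0]

print(soma_response(1))   # proximal input: sizeable somatic response
print(soma_response(18))  # distal input: attenuated by orders of magnitude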
However, there is another consequence of dendrites occupying physical
space that almost certainly is important to neuronal function, and
that is the opportunity it allows for local interactions to produce
differences in local responses due to the particular patterns of local
inputs. Here I am not talking about the relatively simple "soma-
centric" form of pattern recognition, that underlies a great deal of
current thinking about how neurons work (including sadly cerebellar
Purkinje cell: Steuber, V, Mittmann, W, Hoebeek, F.E., De Zeeuw, C.I.,
Hausser, M., De Schutter, E. Cerebellar LTD and pattern recognition
in Purkinje cells, Neuron54: 121-136, 2007), but instead about the
kind of local response complexities that can result from slight
differences in timing between different inputs (as was demonstrated in
the earliest models by Rall). Unfortunately, we know next to nothing
about the actual (natural) complexities of input patterns on neurons
in mammalian brains. Is there a reduced model of a neuron that can
still deal with all the possible combinations of effects produced by
variations in local patterns of inputs, yet has less than the
complexity of the neuron? I doubt it, although I also suspect that
some of the interest in cortical oscillations is driven by the desire
to constrain the possible spatiotemporal patterns of inputs to
single cells (which, of course, oscillations don't). Anyway, if one
considers, in addition, that inputs to dendrites are not only
excitatory but also inhibitory, that the interactions between
excitation and inhibition can be very complex (not only
post-synaptically, but also because the circuitry dictating the timing
of each is often different), that at the level of spines and channels
(and molecules) there are timing-dependent interactions operating over
multiple time scales (including those governing the plastic changes we
all like to attribute to the physical manifestation of "learning"),
and that, for sure, these interactions are primarily local -- and then
throws in the fact that different regions of the dendrite appear to
have different types of channels (as well as different kinds of
inhibition, etc.), all of which can be modulated by chemicals
(modulators) that also often have different distributions in
dendrites -- it sure looks like evolution has "used" the physical
space it has no choice but to deal with to pack in a rather
spectacular amount of complexity.
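The canonical concrete example of such timing effects is Rall's old
demonstration that the same set of synaptic inputs, activated in
sequence from distal to proximal versus proximal to distal, produces
different somatic responses even in a purely passive dendrite. A
minimal time-stepping sketch of that effect (again a toy cable, not a
model of any real cell):

import numpy as np

# Passive chain of compartments, compartment 0 = soma; forward Euler.
N, dt, T = 10, 0.05, 60.0  # compartments, time step (ms), duration (ms)
tau, g_axial = 10.0, 2.0   # membrane time constant (ms), coupling strength

def peak_soma_voltage(order):
    """Peak somatic voltage when the compartments in `order` each get a
    brief current pulse, one after another, 3 ms apart."""
    v = np.zeros(N)
    onsets = {comp: 5.0 + 3.0 * k for k, comp in enumerate(order)}
    peak = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        i_syn = np.zeros(N)
        for comp, onset in onsets.items():
            if onset <= t < onset + 2.0:  # 2 ms current pulse
                i_syn[comp] = 1.0
        coupling = np.zeros(N)
        coupling[:-1] += g_axial * (v[1:] - v[:-1])
        coupling[1:] += g_axial * (v[:-1] - v[1:])
        v = v + dt * (-v / tau + coupling + i_syn)
        peak = max(peak, v[0])
    return peak

distal_to_proximal = [8, 6, 4, 2]
print(peak_soma_voltage(distal_to_proximal))        # larger somatic peak
print(peak_soma_voltage(distal_to_proximal[::-1]))  # same inputs, smaller peak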
Finally, to return to the grand, one of the arguments used (completely
inappropriately) by the creationists against evolutionists, is that
the second law of thermodynamics precludes the generation of complex
structures without some form of intelligent intervention (for your
amusement if you are interested in this debate:
http://video.google.com/videoplay?docid=4007930854195650071&ei=V6qpSOvtIYiE4QLNx7zRAg&q=James+bower+creationism&hl=en)
. Of course, this is a fundamental misunderstanding of the second
law, which is stated and considered in the context of a closed system
(an example of the use of assumptions in physics to reduce complexity
and thus facilitate understanding - but perhaps missing the point). The
second law could just as well be formulated to consider the case in
which there is a continual source of energy (the sun), influencing
chemistry -- under those conditions, the chemistry becomes more and
more complicated -- producing what we call "life" (in my case only
for convenience). Even with the sun shining, selection is a very
tough master -- and efficiency matters -- everywhere it has been
measured (frogs, etc), selection by females pushes males to their
physical "phelpian" limits (swim Michael swim). This, I think is
what
has pushed the brain to KCCness, along with one female related factor.
In fact, space in the brain is not a giveaway - there are neurons with
almost no dendrites, and also neurons with extensive dendrites. In
KCC terms, the brain would, for certain, limit its size by any means
necessary. Why?
A number of years ago - I was at one of the early Neural Network
meetings, and was eating lunch with a bunch of NN practitioners - the
question at hand was: given that large brains (read 'intelligence')
are so adaptive, why aren't our brains twice as large? My response (and
you already know sometimes I say things I shouldn't), was -- well --
perhaps you should ask your wife, but perhaps the fact that most of
you are still single suggests that big brains might not be as adaptive
as you think.
Ah well, another misjudged effort at humor.
Best to all,
Jim
>
>
> - Years ago Carson and I would go to neuro lectures (which I
> generally find far more accessible than math colloquia - and I am a
> professional mathematician! - which speaks on the issue that I think
> it is far easier for a mathematician to gain an appreciation for
> biology than vice versa, but I digress) and there were a number on
> the complex channels found in dendrites which from an evolutionary
> point of view, must be quite costly. However, in almost all the
> cases, the final point of the speaker was that this was to
> compensate for being out at the end of the dendrite, so that we used
> to say that nature is trying to make all neurons point neurons.
> Computationally, we can put as many inputs as we want into a point -
> but anatomy and physiology prevent this in real cells, hence the
> complex structure.
>
> - This leads to a second point - the neural turing test. (There have
> been contests related to this). I recently heard Eugene Izhikevich
> give a talk and he showed a picture of a recording from a cortical
> pyramidal cell receiving a complex stimulus pattern (whatever that
> means, Jim) and his 2 variable 4 parameter model - the sub and super
> threshold behavior was almost indistinguishable and this model was
> fit for the FI curve only. I realize that the pyramidal cell
> stimulus was quite simplistic, but one could presumably do the same
> stim mixed with other stimuli in the dendrites. Maybe there are
> complex dendritic calculations going on - but the bottom line is
> what is the output of the cell - that is all that matters. So any
> model that does this in a reasonable way will, to me, be a realistic
> model since the cell on the other side of the wall cannot
> distinguish it. I would guess that Jim Bower would claim there is no
> such model that does this except the most detailed model with all
> the channels and structure. However, I am less pessimistic about
> this for the following reason:
>
> - Yannis Kevrekidis has developed some very useful numerical tools
> that exploit a common feature in many complex physical and
> biological systems (here, I am a strong reductionist and believe
> with every fiber of my body that biology is describable by physical
> principles - I lost all shreds of mysticism in Nov 1969 - although I
> continued to exploit others making money casting horoscopes - a
> mathematical exercise, in fact).
> Basically, most systems, even complex ones, behave in such a manner
> as to drastically reduce dimensionality. They are strongly
> contracting or dissipative and as a consequence, are captured by far
> fewer degrees of freedom. Kevrekidis's methods allow one to compute in
> these lower degrees without knowing the underlying reduced
> equations. Nevertheless, they are there. Mathematicians and
> physicists have used these ideas for years and call it averaging,
> mean field reduction, etc., and of course experimentalists do use
> these ideas as well and call it PCA in which they show that only a
> few modes capture the majority of the variance. Thus, the Turing
> test neuron is not pie in the sky and I believe that there are
> reduced models that will do what the "realistic" model does with as
> much precision as you would like.
>
> Regards
>
> Bard Ermentrout
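(For anyone who wants to play with the comparison Bard describes: the
two-variable, four-parameter model he mentions is presumably
Izhikevich's 2003 "simple model"; a minimal rendering of it -- my own
sketch, not Eugene's code -- is below.)

import numpy as np

def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich's two-variable, four-parameter model (2003):
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u),
    with reset v <- c, u <- u + d whenever v reaches 30 mV."""
    v, u = c, b * c
    trace, spikes = [], []
    for k, i_ext in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:             # spike: record and reset
            trace.append(30.0)
            v, u = c, u + d
            spikes.append(k * dt)
        else:
            trace.append(v)
    return np.array(trace), spikes

# Regular-spiking parameters driven by a constant input for 1000 ms.
trace, spikes = izhikevich(np.full(2000, 10.0))
print(len(spikes), "spikes in", 2000 * 0.5, "ms")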
==================================
Dr. James M. Bower Ph.D.
Professor of Computational Neuroscience
Research Imaging Center
University of Texas Health Science Center -
- San Antonio
8403 Floyd Curl Drive
San Antonio Texas 78284-6240
Main Number: 210- 567-8100
Fax: 210 567-8152
Mobile: 210-382-0553
_______________________________________________
Comp-neuro mailing list
Comp-neuro at neuroinf.org
http://www.neuroinf.org/mailman/listinfo/comp-neuro