[Comp-neuro] KCC addendum

Rob Clewley rob.clewley at gmail.com
Thu Aug 21 16:18:17 CEST 2008


Hi Carson,

On Wed, Aug 20, 2008 at 11:22 AM, Carson Chow <ccchow at pitt.edu> wrote:
> The test would be that it would have to produce the exact
> same output to whatever input conditions are given.  My guess is that for a
> finite set of data, this can be done but as you provide more input data
> you'll have to constantly refine the simple model.  It may be hard to prove
> if this process will converge to something simpler than the original model.

Not only is there the issue of what counts as an adequate
description of the model, but also of what counts as an adequate
definition of the "same output". There are so many ways to judge
that, both quantitatively and qualitatively. So we still have a
fundamental problem in defining such a test to everyone's
satisfaction.

I would define the "same output" of a single neuron model (say)
based on how that output is "used" by the other neurons in a
network. If you define "exactly the same" too strictly (e.g., as
the Euclidean distance between all internal variable trajectories),
I think you might well converge back to the detailed model, as you
suggest. If the simple model produces a qualitatively similar
output that is not distinguished in any detectable way by the
neurons connected to it (e.g., it doesn't cause any bifurcations in
their behaviour), then I'd call that "exactly the same", but others
might disagree. This isn't really a definition, more a
self-consistency condition.
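
To make that concrete, here is a rough Python sketch of the two
criteria. The model traces are made-up stand-ins (not real neuron
simulations), and the downstream "reader" is just a toy leaky
integrate-and-fire unit; every name and parameter is hypothetical,
for illustration only:

import numpy as np

def downstream_spikes(v_in, dt=0.1, tau=10.0, w=0.3, v_thresh=0.5):
    # Toy leaky integrate-and-fire "reader": integrates the
    # (rectified) presynaptic trace and spikes at threshold.
    v, spikes = 0.0, []
    for i, x in enumerate(v_in):
        v += dt * (-v / tau + w * max(x, 0.0))
        if v >= v_thresh:
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.1)
detailed = np.sin(0.3 * t) + 0.05 * rng.standard_normal(t.size)
simple = np.sin(0.3 * t)  # same qualitative output, no fine structure

# Criterion 1: strict trajectory distance -- never exactly zero.
print("L2 distance:", np.linalg.norm(detailed - simple))

# Criterion 2: functional indistinguishability -- do both traces
# drive the reader neuron to (nearly) the same spike times?
s_det = downstream_spikes(detailed)
s_simp = downstream_spikes(simple)
print("reader spike counts:", s_det.size, s_simp.size)

The point is just that the strict trajectory distance is nonzero by
construction, while the reader neuron can end up firing at
essentially the same times for both traces.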

Besides, it would be hard to rule out that some other type of input
pattern would make the simple model's output diverge from the
detailed model's. Then again, how would we know whether such a case
is even meaningful for the other neurons it would be connected to?
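
In the same toy spirit, about the best one could do in practice is
sweep some parameterized family of inputs and flag where the two
models start to disagree (again, both "models" below are
hypothetical stand-ins):

import numpy as np

def toy_detailed(amp, freq, t):
    # stand-in for the detailed model's response to a drive
    return amp * np.sin(freq * t) + 0.1 * np.sin(3.0 * freq * t)

def toy_simple(amp, freq, t):
    # stand-in for the reduced model's response
    return amp * np.sin(freq * t)

t = np.arange(0.0, 200.0, 0.1)
for amp in (0.5, 1.0, 2.0):
    for freq in (0.1, 0.3, 1.0):
        d = toy_detailed(amp, freq, t)
        s = toy_simple(amp, freq, t)
        # crude mismatch score; a real test would use something
        # like the downstream "reader" criterion sketched above
        print(amp, freq, round(float(np.max(np.abs(d - s))), 3))

Of course any such sweep only samples the input space, which is
exactly the problem.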

Cheers,
Rob

