[Comp-neuro] Re: Attractors, variability, noise,
and other subversive ideas
Ross Gayler
r.gayler at gmail.com
Tue Aug 19 23:30:13 CEST 2008
> This is why I believe that text, graphics, and
> descriptions should be written around published models, rather than
> publishing models in a database unlinked and after publication of a
> written paper.
You might find it helpful to dredge through those "reproducible research"
links I posted earlier, particularly the ones from the statistics
community dealing with "literate statistics", e.g.
(http://www.ci.tuwien.ac.at/Conferences/DSC-2001/Proceedings/Rossini.pdf)
The term "literate statistics" parallels Knuth's "literate
programming". The notion is that the code which implements the
research should be in the same document as the text which describes
the research and should be executable so that all the results can be
easily regenerated by the reader. The R community
(http://www.r-project.org/)
has developed some tools to support this (e.g.
http://www.statistik.lmu.de/~leisch/Sweave/).
> Here for example is a list of features of a model that could now be
> auto-reviewed.
This seems awfully close in spirit to test-driven development in
software development (http://en.wikipedia.org/wiki/Test-driven_development).
A dredge through the TDD literature might turn up design ideas from
TDD tooling that you could usefully apply in the neural modelling
domain.
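For what it's worth, here is a rough sketch (in Python, pytest-style) of
what a model check expressed as a test might look like -- the function,
config file name, and numbers below are invented placeholders, not
features of any existing tool:

    # A model property expressed as an automated regression test, in the
    # spirit of test-driven development (pytest conventions).
    # "run_simulation", the config file name, and the numbers are invented.

    def run_simulation(config_file):
        # Stand-in for launching the real simulator and extracting a
        # summary statistic (e.g. mean firing rate in Hz).
        return 37.2

    def test_mean_rate_matches_published_value():
        rate = run_simulation("figure2_config.txt")
        assert abs(rate - 37.5) < 1.0   # tolerance around the published value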
A final comment: I think the hardest problem is going to be
persuading people that they need to use these tools. The
reproducible computational research idea has been around for years
with only minimal penetration of the relevant research fields.
Likewise, test-driven development and extreme programming have
not completely dominated the world of software development.
Developing the tools to make it easy to do will address part
of the issue, but in order to get widespread uptake you will
need to have some pretty serious incentives for researchers
to use these tools (e.g. they won't get published if they don't
use them).
--Ross
-----Original Message-----
From: james bower [mailto:bower at uthscsa.edu]
Sent: Wednesday, 20 August 2008 1:23 AM
To: r.gayler at gmail.com
Cc: ted.carnevale at yale.edu; 'CompNeuro List'
Subject: Re: [Comp-neuro] Re: Attractors, variability, noise, and other
subversive ideas
> availability of source code is a necessary but not sufficient
> condition.
I agree with this completely - while it is a good first step, without
additional infrastructure, even with a common software platform (like
GENESIS or NEURON or XPP) as a base, it is still difficult to figure
these models out. This is why I believe that text, graphics, and
descriptions should be written around published models, rather than
publishing models in a database unlinked and after publication of a
written paper. ModelDB is a good first step, but the infrastructure
necessary to really publish models in a way that they can be
understood (and improved on) by those who didn't build them still
needs to be constructed - and we are working on it.
Let me also comment on the issue of auto-review of published models --
first, I am not suggesting that a submitted model can be completely
auto-reviewed -- while that may be a goal to strive for, human review
will certainly continue to be an essential component of the review
process for some time. However, with model-based publishing, human
review can be assisted with auto-review. Here for example is a list
of features of a model that could now be auto-reviewed. I would love
to have anyone else add to this list:
1) parameter ranges -- in single cell models everything from the
conductance values of channels, to their voltage dependences can be
evaluated and outliers identified. This function already exists, in
effect, in NEURON and GENESIS. Auto-review could flag outliers for
reviewers -- or authors for that matter -- who would then be under an
obligation to justify the unusual values. This kind of analysis could
also flag parameters that are particularly important for next-step
experimental studies.
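To make this concrete, here is a minimal sketch of such a range check
(the parameter names, units and bounds below are invented for
illustration, not taken from any database):

    # Flag parameter values that fall outside published ranges.
    # The parameter names, units and bounds here are illustrative only.

    PUBLISHED_RANGES = {
        "g_Na_max": (50.0, 200.0),   # mS/cm^2 (hypothetical bounds)
        "g_K_max":  (10.0, 80.0),
        "E_leak":   (-80.0, -50.0),  # mV
    }

    def flag_outliers(model_params):
        """Return parameters whose values lie outside the published ranges."""
        outliers = {}
        for name, value in model_params.items():
            lo, hi = PUBLISHED_RANGES.get(name, (float("-inf"), float("inf")))
            if not lo <= value <= hi:
                outliers[name] = (value, (lo, hi))
        return outliers

    print(flag_outliers({"g_Na_max": 300.0, "E_leak": -65.0}))
    # -> {'g_Na_max': (300.0, (50.0, 200.0))}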
2) Certified (or previously used) components -- auto-review could
identify components (channels, cell morphologies, etc.) which are
unchanged in the current model and have been approved through previous
peer review. This evaluation could provide some more formal measure
of what a new model has actually contributed that is new. There is a
risk here, of course, of propagating bad assumptions -- but, on the
other hand, this is potentially useful information -- in addition,
this kind of auto-evaluation can be used to generate a lineage for a
model. How would it change tenure decisions or grant reviews, for
example, if one had this kind of measure of "impact on the field"?
This measure could also help identify what is new.
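One crude way to detect reuse of previously reviewed components is to
compare content hashes of component definitions against a registry of
approved ones (a sketch only; the registry contents and file layout
are assumptions):

    # Identify model components that are byte-identical to previously
    # peer-reviewed ones via content hashing.  The registry contents are
    # placeholders.
    import hashlib

    APPROVED = {
        # sha1 of an approved component file -> citation for its review
        "0000000000000000000000000000000000000000": "Doe et al. 2005 (Kv channel)",
    }

    def component_status(path):
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        return APPROVED.get(digest, "new or modified: needs full review")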
3) Robustness -- This is of course a major area of research in the use
of numerical techniques, and can be expected to continue to develop --
we need to be in a position, as a field, to take advantage of
technical advances. As auto-review, think of it as reverse parameter
searching. In some ways, evaluating what happens when you step down
the hill is easier than figuring out how to climb up it.
Specifically, once a model is submitted, we can (in principle) easily
change key (needs to be defined) parameters by some percentage while
remaining in the range of published values for the parameters, and
evaluate the stability of core results. In principle this is the kind
of analysis that should be included in every model publication anyway,
but seldom is - in fact, adding this as a feature of auto-review might
very well push modelers to do more extensive (and quantitative)
parameter sensitivity studies on their own. Of course, there is an
assumption here about how robust to parameter changes neurons and
networks are in real life -- obviously, if there are some parameters
that in real life have an inordinate effect on behavior, that is
useful information to have as well, and leads directly to experimental
questions. As I already said, however, this area of parameter-space
testing and model evaluation can, of course, become quite complex and
computationally intensive, and is the subject of much core research in
techniques for numerical simulation.
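Just to make the "reverse parameter search" idea concrete, here is a
toy sketch; run_model stands in for any callable that maps a parameter
dictionary to a scalar core result, and the 10% / 5% figures are
arbitrary placeholders:

    # A toy "reverse parameter search": perturb each parameter by +/-10%
    # (clipped to its published range) and report which perturbations move
    # a core summary result by more than 5%.

    def sensitive_parameters(run_model, baseline_params, published_ranges,
                             perturbation=0.10, tolerance=0.05):
        baseline = run_model(baseline_params)
        sensitive = {}
        for name, value in baseline_params.items():
            lo, hi = published_ranges[name]
            shifts = []
            for sign in (-1.0, 1.0):
                trial = dict(baseline_params)
                trial[name] = min(hi, max(lo, value * (1.0 + sign * perturbation)))
                shift = abs(run_model(trial) - baseline) / (abs(baseline) or 1.0)
                shifts.append(shift)
            if max(shifts) > tolerance:
                sensitive[name] = max(shifts)
        return sensitive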
4) Completeness -- at present, there are cases I know of (not, to my
knowledge, my own :-) ) where slightly different models were used to
generate different figures in papers -- this is not necessarily
intentional on the part of authors, as it is sometimes hard to keep
track of versions of models (another thing that we are hoping to fix
for GENESIS with 3.0). Auto-review, by its nature, would ensure that
all figures are actually generated by the same model.
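A trivial version of such a check: record a hash of the model source
alongside each generated figure and verify that they all agree (the
file layout and the ".g" script extension here are just illustrative):

    # Verify that every figure in a submission was generated from the same
    # model source, by comparing a hash recorded at figure-generation time
    # with the hash of the submitted model scripts.  File layout is assumed.
    import glob
    import hashlib

    def model_hash(model_dir):
        h = hashlib.sha1()
        for path in sorted(glob.glob(model_dir + "/**/*.g", recursive=True)):
            with open(path, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def inconsistent_figures(figure_manifest, model_dir):
        # figure_manifest: {"figure3.png": "<hash recorded when generated>", ...}
        current = model_hash(model_dir)
        return [fig for fig, recorded in figure_manifest.items()
                if recorded != current]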
5) References -- if one has a system for annotating the antecedents
for model components, in principle, one also has a way to check
reference lists for omitted citations.
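In the simplest case this is just a set difference (the data
structures below are assumed for illustration):

    # Flag antecedent citations attached to model components that do not
    # appear in the paper's reference list.

    def missing_citations(component_antecedents, reference_list):
        return sorted(set(component_antecedents) - set(reference_list))

    # missing_citations({"Smith 2005", "Jones 1999"}, {"Jones 1999"})
    # -> ['Smith 2005']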
6) Minimal standards testing -- one could well imagine that for
neurons or networks in general, or specific cases in particular, the
field could agree on some minimal required abilities for models --
"generates an action potential", "firing frequencies between 10 and
100 Hz", or "no dendritic back-propagation" (as is the case for
Purkinje cells) -- against which all submitted models would be tested.
Again, one has to be aware that the current minimal standards might
not be appropriate -- but simply identifying what they are, and
testing models against them, would both help to standardize the field
and provide a fixed target against which to argue for some other set
of standards.
Such a list would also be very useful for more abstracted models
intent on capturing what the field considers to be essential features
of more detailed models. At present there are no standards of this
sort for any kind of neural modeling that I am aware of.
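Such minimal standards could be expressed directly as pass/fail checks
over a simulated voltage trace, for example (a sketch; the trace
format, threshold and rate bounds are assumptions, not an agreed
standard):

    # Minimal-standards checks applied to a simulated somatic voltage trace,
    # represented as a list of (time_ms, voltage_mV) samples.

    def spike_times(trace, threshold_mV=0.0):
        """Times at which the voltage crosses the threshold from below."""
        return [t for (t, v), (_, v_prev) in zip(trace[1:], trace[:-1])
                if v_prev < threshold_mV <= v]

    def generates_action_potential(trace):
        return len(spike_times(trace)) > 0

    def firing_rate_in_range(trace, duration_ms, lo_hz=10.0, hi_hz=100.0):
        rate_hz = 1000.0 * len(spike_times(trace)) / duration_ms
        return lo_hz <= rate_hz <= hi_hz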
These are all pretty straightforward to at least get started on (are
there others in this category? -- please suggest some). More complex
to develop, but in principle possible, and at the core of modeling are:
1) a more formal means of evaluating the relationship between real
data and model output
2) a means of evaluating the advance made by a new model over previous
models.
3) a more formal means of comparing models
A formal mechanism to take any of these measures will almost
certainly come out of the large ongoing research efforts involved in
finding solutions to complex problems -- building the initial auto-
review infrastructure will put us in a position to work on these more
complex problems.
Again, if you have additions to this list, please send me a note, as
we are building systems to start to do this.
Jim
On Aug 18, 2008, at 8:58 PM, Ross Gayler wrote:
>> By making his model source code freely available for years, Jim Bower
>> has actually been shoring up reproducibility in computational
>> modeling.
>> It is hardly necessary to mention that reproducibility has elsewhere
>> been called a cornerstone of scientific method.
>> ...
>> To everyone who has published results obtained by studying
>> computational
>> models of neurons or neural systems--at any level of abstraction--
>> free your source code!
>
> I couldn't agree more with the call for reproducible computational
> research.
> However, availability of source code is a necessary but not
> sufficient condition. There has been a steady thread of writing on
> reproducible research and associated tools over the last decade, but
> it tends to be scattered across disciplines.
>
> http://www.reproducibleresearch.org/ provides pointers into some of
> this
> literature.
>
> Fully reproducible research is something to aspire to.
>
> --Ross
>
>
> -----Original Message-----
> From: comp-neuro-bounces at neuroinf.org
> [mailto:comp-neuro-bounces at neuroinf.org] On Behalf Of Ted Carnevale
> Sent: Thursday, 14 August 2008 6:06 AM
> To: CompNeuro List
> Subject: Re: [Comp-neuro] Re: Attractors, variability, noise, and other
> subversive ideas
>
> james bower wrote:
>> This is one of the first times in history that a complex realistic
>> model
>> has spread across labs and opinions -- and speaks very well for the
>> future - this is what the GENESIS project was about to begin with
>> -- and
>> now, more than 20 years later, it is starting to happen, not only
>> with
>> GENESIS but through Neuro-DB built by Michael Hines and the Neuron
>> group
>> at Yale as well.
>
> Thanks for the plug, Jim. And also for your advocacy of fresh ideas
> (whether I agree with all of them or not) in computational/theoretical
> neuroscience, or whatever it should be called.
>
> Allow me this minor typographical correction: it's ModelDB.
> For those who may not yet know about it, here's the URL:
> http://senselab.med.yale.edu/modeldb/
> As of today, it contains source code for 394 published models,
> most of which is ready to run. We invite authors of published
> models to submit them to ModelDB for attributed re-use and extension.
>
> Now for my own bit of advocacy--
>
> By making his model source code freely available for years, Jim Bower
> has actually been shoring up reproducibility in computational
> modeling.
> It is hardly necessary to mention that reproducibility has elsewhere
> been called a cornerstone of scientific method. How subversive of
> his own polemic is that?!
>
> As much as we might disagree on other issues, I am sure that Jim,
> and also Bard (who has likewise shared code freely) will join me in
> this call to arms:
> To everyone who has published results obtained by studying
> computational
> models of neurons or neural systems--at any level of abstraction--
> free your source code!
>
> --Ted
==================================
Dr. James M. Bower Ph.D.
Professor of Computational Neuroscience
Research Imaging Center
University of Texas Health Science Center -
- San Antonio
8403 Floyd Curl Drive
San Antonio Texas 78284-6240
Main Number: 210-567-8100
Fax: 210 567-8152
Mobile: 210-382-0553