[Comp-neuro] Re: Attractors, variability, noise, and other subversive ideas

james bower bower at uthscsa.edu
Wed Aug 20 18:18:41 CEST 2008


How do you know whether a new theory is 'bold' and different unless  
there is a basis for comparing models?

Ten years ago I was at an NSF meeting convened to consider the future  
of computational neuroscience (the last one I was invited to  :-)  ).

A famous physicist who had just turned his attention to neuroscience  
presented a model of a famous part of cerebral cortex, claiming that  
this model could account for all the most famous data.  I pointed out  
that several other (similarly abstract) modelers claimed the same  
thing, and asked what the difference was between his model and  
theirs.  He said they were completely different, but didn't go into  
details.  I then asked him what his model could NOT account for (a  
question addressed far less often than it should be in talks and  
papers).  He said that his model could account for all the data.  I  
suggested that if his model was completely different from their  
models, yet, like their models, could account for all the data, then  
none of these models was of the slightest use.

As I said, I haven't had any return invitations since (thank heavens,  
actually  :-)  ).

Jim






On Aug 20, 2008, at 6:53 AM, Brad Wyble wrote:

> I have to say, as someone who doesn't share your (Jim's)  
> perspective on what modelling should be like, that this proposal  
> sounds a little bit horrific.  It is clearly written with the best  
> of intentions, but I fear that it is a recipe for stifling  
> innovation in the modelling field.  With these rules in place, it  
> would be easy to turn the crank on existing models, producing tiny,  
> incremental improvements using elements from a pre-approved toy  
> chest.  Bold theories, which we are in dire need of, would be that  
> much more difficult to publish.
>
> -Brad Wyble
>
>
>
>
>
> On Tue, Aug 19, 2008 at 4:23 PM, james bower <bower at uthscsa.edu>  
> wrote:
>  availability of source code is a necessary but not sufficient
> condition.
>
> I agree with this completely - while making source code available  
> is a good first step, without additional infrastructure, even with  
> a common software platform (like GENESIS or NEURON or XPP) as a  
> base, it is still difficult to figure these models out.  This is  
> why I believe that text, graphics, and descriptions should be  
> written around published models, rather than models being deposited  
> in a database, unlinked, after publication of a written paper.  
> ModelDB is a good first step, but the infrastructure necessary to  
> really publish models in a way that they can be understood (and  
> improved on) by those who didn't build them still needs to be  
> constructed - and we are working on it.
>
> Let me also comment on the issue of auto-review of published  
> models.  First, I am not suggesting that a submitted model can be  
> completely auto-reviewed -- while that may be a goal to strive for,  
> human review will certainly continue to be an essential component  
> of the review process for some time.  However, with model-based  
> publishing, human review can be assisted by auto-review.  Here, for  
> example, is a list of features of a model that could already be  
> auto-reviewed.  I would love to have anyone else add to this list:
>
> 1) Parameter ranges -- in single cell models, everything from the  
> conductance values of channels to their voltage dependences can be  
> evaluated and outliers identified.  This function already exists,  
> in effect, in NEURON and GENESIS.  Auto-review could flag outliers  
> for reviewers -- or authors, for that matter -- who would then be  
> under an obligation to justify the unusual values.  This kind of  
> analysis could also flag parameters that are particularly important  
> targets for next-step experimental studies.
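>
> As a rough illustration, here is a minimal Python sketch of the  
> kind of range check I have in mind (the parameter names and the  
> published ranges are invented, purely for illustration):
>
>     def flag_outliers(model_params, published_ranges):
>         """Return parameters whose values fall outside published ranges."""
>         flagged = {}
>         for name, value in model_params.items():
>             low, high = published_ranges.get(name, (float("-inf"), float("inf")))
>             if not low <= value <= high:
>                 flagged[name] = (value, (low, high))
>         return flagged
>
>     # Hypothetical example: maximal conductances in S/cm^2.
>     published = {"gbar_na": (0.05, 0.20), "gbar_kdr": (0.01, 0.10)}
>     submitted = {"gbar_na": 0.12, "gbar_kdr": 0.35}
>     for name, (value, rng) in flag_outliers(submitted, published).items():
>         print(name, "=", value, "is outside the published range", rng)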
>
> 2) Certified (or previously used) components -- auto-review could  
> identify components (channels, cell morphologies, etc.) that are  
> unchanged in the current model and have already been approved  
> through previous peer review.  This evaluation could provide a more  
> formal measure of what a new model has actually contributed that is  
> new.  There is a risk here, of course, of propagating bad  
> assumptions -- but, on the other hand, this is potentially useful  
> information.  In addition, this kind of auto-evaluation could be  
> used to generate a lineage for a model.  How would it change tenure  
> decisions or grant reviews, for example, if one had this kind of  
> measure of "impact on the field"?
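>
> One way such a check might work -- assuming, hypothetically, that  
> each component carries a content hash and that the model database  
> keeps a registry of hashes that have already passed review:
>
>     import hashlib
>
>     def component_hash(definition):
>         """Content hash of a component definition (channel, morphology, ...)."""
>         return hashlib.sha256(definition.encode()).hexdigest()
>
>     def classify_components(model_components, reviewed_hashes):
>         """Split a model's components into previously reviewed vs. new."""
>         reviewed, novel = [], []
>         for name, definition in model_components.items():
>             if component_hash(definition) in reviewed_hashes:
>                 reviewed.append(name)
>             else:
>                 novel.append(name)
>         return reviewed, novel
>
> The fraction of a model that lands in the "novel" list is one crude  
> measure of what the model actually adds, and the "reviewed" list is  
> the start of a lineage.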
>
> 3) Robustness -- this is, of course, a major area of research in  
> the use of numerical techniques, and can be expected to continue to  
> develop -- we need to be in a position, as a field, to take  
> advantage of technical advances.  As auto-review, think of it as  
> reverse parameter searching.  In some ways, evaluating what happens  
> when you step down the hill is easier than figuring out how to  
> climb up it.  Specifically, once a model is submitted, we can (in  
> principle) easily change key parameters (which would need to be  
> defined) by some percentage, while remaining in the range of  
> published values for those parameters, and evaluate the stability  
> of the core results.  In principle this is the kind of analysis  
> that should be included in every model publication anyway, but  
> seldom is - in fact, adding it as a feature of auto-review might  
> very well push modelers to do more extensive (and quantitative)  
> parameter sensitivity studies on their own.  Of course, there is an  
> assumption here about how robust to parameter changes neurons and  
> networks are in real life -- obviously, if there are some  
> parameters that in real life have an inordinate effect on behavior,  
> that is useful information to have as well, and it leads directly  
> to experimental questions.  As I already said, however, this area  
> of parameter-space testing and model evaluation can become quite  
> complex and computationally intensive, and is the subject of much  
> core research in techniques for numerical simulation.
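>
> A minimal sketch of this "step down the hill" idea -- here  
> run_model and core_metric stand in for whatever simulator call and  
> summary measure a given study uses, so all of the names are  
> assumptions:
>
>     def robustness_scan(base_params, published_ranges, run_model,
>                         core_metric, step=0.05, tolerance=0.10):
>         """Perturb each parameter by +/- step (clipped to its published
>         range) and report cases where the core result shifts by more
>         than the tolerance."""
>         baseline = core_metric(run_model(base_params))
>         denom = abs(baseline) or 1.0  # avoid division by zero
>         report = {}
>         for name, value in base_params.items():
>             low, high = published_ranges[name]
>             for sign in (-1, +1):
>                 perturbed = dict(base_params)
>                 perturbed[name] = min(max(value * (1 + sign * step), low), high)
>                 change = abs(core_metric(run_model(perturbed)) - baseline) / denom
>                 if change > tolerance:
>                     report.setdefault(name, []).append((sign * step, change))
>         return report
>
> Anything that shows up in the report is either a fragile result or  
> a biologically interesting sensitivity -- and, either way, it is  
> something a reviewer should see.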
>
> 4) Completeness -- at present, there are cases I know of (none, to  
> my knowledge, my own  :-)  ) where slightly different models were  
> used to generate different figures in the same paper.  This is not  
> necessarily intentional on the part of the authors, as it is  
> sometimes hard to keep track of versions of models (another thing  
> we are hoping to fix for GENESIS with 3.0).  Auto-review, by its  
> nature, would ensure that all figures are actually generated by the  
> same model.
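>
> The check itself could be as simple as hashing the model source  
> used for each figure and confirming that the hashes agree (the file  
> layout below is hypothetical):
>
>     import hashlib
>     from pathlib import Path
>
>     def source_fingerprint(paths):
>         """One hash over all model source files, in a fixed order."""
>         digest = hashlib.sha256()
>         for path in sorted(paths):
>             digest.update(Path(path).read_bytes())
>         return digest.hexdigest()
>
>     # Hypothetical layout: each figure records the model files it used.
>     figure_models = {
>         "figure1": ["model/cell.g", "model/channels.g"],
>         "figure2": ["model/cell.g", "model/channels.g"],
>     }
>     prints = {fig: source_fingerprint(f) for fig, f in figure_models.items()}
>     if len(set(prints.values())) > 1:
>         print("Figures were generated from different model versions:", prints)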
>
> 5) References -- if one has a system for annotating the antecedents  
> of model components, then in principle one also has a way to check  
> reference lists for omitted citations.
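>
> In its simplest form this is a set difference between the citations  
> annotated on the components and the paper's reference list (the  
> component names and citation keys here are made up):
>
>     component_sources = {
>         "NaF_channel": {"Traub1991"},
>         "KDr_channel": {"Traub1991", "DeSchutter1994"},
>     }
>     reference_list = {"Traub1991"}
>
>     cited = set().union(*component_sources.values())
>     missing = cited - reference_list
>     if missing:
>         print("Antecedent citations missing from the references:",
>               sorted(missing))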
>
> 6) Minimal standards testing -- one could well imagine that, for  
> neurons or networks in general or for specific cases in particular,  
> the field could agree on some minimal required abilities for models  
> - "generates an action potential", "firing frequencies between 10  
> and 100 Hz", or no dendritic back-propagation (as is the case for  
> Purkinje cells) - against which all submitted models would be  
> tested.  Again, one has to be aware that the current minimal  
> standards might not be appropriate -- but simply identifying what  
> they are, and testing models against them, would both help to  
> standardize the field and provide a fixed target for anyone arguing  
> for some other set of standards.  Such a list would also be very  
> useful for more abstracted models intent on capturing what the  
> field considers to be the essential features of more detailed  
> models.  At present there are no standards of this sort for any  
> kind of neural modeling that I am aware of.
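>
> A minimal sketch of such a test harness, operating on a simulated  
> voltage trace -- the trace here is synthetic and the thresholds are  
> just the example standards from above:
>
>     import numpy as np
>
>     def spike_times(v, t, threshold=0.0):
>         """Times of upward threshold crossings in a voltage trace."""
>         crossings = np.where((v[:-1] < threshold) & (v[1:] >= threshold))[0]
>         return t[crossings]
>
>     def minimal_standards(v, t):
>         """Check a trace against agreed-on minimal requirements."""
>         spikes = spike_times(v, t)
>         checks = {"generates an action potential": spikes.size > 0}
>         if spikes.size > 1:
>             rate = (spikes.size - 1) / (spikes[-1] - spikes[0])  # Hz, t in s
>             checks["firing rate between 10 and 100 Hz"] = 10.0 <= rate <= 100.0
>         return checks
>
>     # Synthetic stand-in for simulator output: ~40 Hz spike-like train.
>     t = np.linspace(0.0, 1.0, 10001)
>     v = -65.0 + 100.0 * (np.sin(2 * np.pi * 40 * t) > 0.999)
>     print(minimal_standards(v, t))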
>
>
>
> These are all pretty straightforward to at least get started on  
> (are there others in this category? -- please suggest some).  More  
> complex to develop, but possible in principle, and at the core of  
> modeling, are:
>
> 1) a more formal means of evaluating the relationship between real  
> data and model output
> 2) a means of evaluating the advance made by a new model over  
> previous models
> 3) a more formal means of comparing models
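>
> As one hypothetical example of item 1, the comparison between real  
> data and model output can be made quantitative by scoring a shared  
> set of summary features, each scaled by its spread in the data (all  
> the feature names and numbers below are invented):
>
>     import math
>
>     def feature_distance(model_features, data_features, data_spread):
>         """Root-mean-square feature error, scaled by data spread."""
>         errors = [(model_features[k] - data_features[k]) / data_spread[k]
>                   for k in data_features]
>         return math.sqrt(sum(e * e for e in errors) / len(errors))
>
>     data = {"rest_mV": -65.0, "rate_Hz": 42.0, "ahp_mV": -72.0}
>     spread = {"rest_mV": 2.0, "rate_Hz": 5.0, "ahp_mV": 3.0}
>     model = {"rest_mV": -63.5, "rate_Hz": 47.0, "ahp_mV": -70.0}
>     print("feature distance:", round(feature_distance(model, data, spread), 2))
>
> The same scheme gives a crude handle on items 2 and 3 as well: of  
> two models scored against the same features, the one with the  
> smaller feature distance is, on this measure, the better fit.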
>
> A formal mechanism for taking any of these measures will almost  
> certainly come out of the large ongoing research efforts aimed at  
> finding solutions to complex problems -- building the initial auto- 
> review infrastructure will put us in a position to work on these  
> more complex problems.
>
> Again, if you have additions to this list, please send me a note, as  
> we are building systems to start to do this.
>
> Jim
>
>
>
>
> On Aug 18, 2008, at 8:58 PM, Ross Gayler wrote:
>
> By making his model source code freely available for years, Jim Bower
> has actually been shoring up reproducibility in computational  
> modeling.
> It is hardly necessary to mention that reproducibility has elsewhere
> been called a cornerstone of scientific method.
> ...
> To everyone who has published results obtained by studying  
> computational
> models of neurons or neural systems--at any level of abstraction--
> free your source code!
>
> I couldn't agree more with the call for reproducible computational  
> research.
> However, availability of source code is a necessary but not  
> sufficient condition.  There has been a steady thread of writing on  
> reproducible research and associated tools over the last decade,  
> but it tends to be scattered across disciplines.
>
> http://www.reproducibleresearch.org/ provides pointers into some of  
> this
> literature.
>
> Fully reproducible research is something to aspire to.
>
> --Ross
>
>
> -----Original Message-----
> From: comp-neuro-bounces at neuroinf.org
> [mailto:comp-neuro-bounces at neuroinf.org] On Behalf Of Ted Carnevale
> Sent: Thursday, 14 August 2008 6:06 AM
> To: CompNeuro List
> Subject: Re: [Comp-neuro] Re: Attractors, variability, noise, and
> other subversive ideas
>
> james bower wrote:
> This is one of the first times in history that a complex realistic  
> model
> has spread across labs and opinions -- and speaks very well for the
> future - this is what the GENESIS project was about to begin with --  
> and
> now, more than 20 years later, it is starting to happen, not only with
> GENESIS but through Neuro-DB built by Michael Hines and the Neuron  
> group
> at Yale as well.
>
> Thanks for the plug, Jim.  And also for your advocacy of fresh ideas
> (whether I agree with all of them or not) in computational/theoretical
> neuroscience, or whatever it should be called.
>
> Allow me this minor typographical correction:  it's ModelDB.
> For those who may not yet know about it, here's the URL:
> http://senselab.med.yale.edu/modeldb/
> As of today, it contains source code for 394 published models,
> most of which is ready to run.  We invite authors of published
> models to submit them to ModelDB for attributed re-use and extension.
>
> Now for my own bit of advocacy--
>
> By making his model source code freely available for years, Jim Bower
> has actually been shoring up reproducibility in computational  
> modeling.
> It is hardly necessary to mention that reproducibility has elsewhere
> been called a cornerstone of scientific method.  How subversive of
> his own polemic is that?!
>
> As much as we might disagree on other issues, I am sure that Jim,
> and also Bard (who has likewise shared code freely) will join me in
> this call to arms:
> To everyone who has published results obtained by studying  
> computational
> models of neurons or neural systems--at any level of abstraction--
> free your source code!
>
> --Ted




==================================

Dr. James M. Bower Ph.D.

Professor of Computational Neuroscience

Research Imaging Center
University of Texas Health Science Center - San Antonio
8403 Floyd Curl Drive
San Antonio, Texas 78284-6240

Main Number: 210-567-8100
Fax: 210-567-8152
Mobile: 210-382-0553








