[Comp-neuro] Parallel network simulations at the NEURON Simulator Meeting

Ted Carnevale ted.carnevale at yale.edu
Wed Mar 22 19:51:18 CET 2006


The registration deadline (April 21) for the 2006 NEURON Simulator
Meeting is rapidly approaching!  For more information see
   http://www.utexas.edu/neuroscience/NEURON2006/nsm2006.html

One of the featured speakers is Michael Hines, who will present a
tutorial and run a workshop on using NEURON to implement network
models that are distributed over multiple processors.

Tutorial Title:  Parallel network simulations with NEURON

Abstract:
Parallel network management services (i.e., the ability to create
and execute network models that are distributed over multiple
processors) are now available when NEURON is configured with the
--with-mpi option.  We have run extensive tests using published
network models of conductance-based neurons, on parallel hardware
with dozens to thousands of CPUs.  These tests demonstrate speedup
that is linear with the number of CPUs, or even superlinear (due
to the larger effective high-speed memory cache), until there are so
many CPUs that each one is solving fewer than ~100 equations.
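As a rough sketch of what enabling this looks like (exact paths, the
MPI launcher name, the process count, and the model file name below
are illustrative placeholders that vary by cluster):

```shell
# Build NEURON from source with MPI support enabled
./configure --with-mpi
make && make install

# Launch a network model across 8 processes; the -mpi flag tells
# nrniv to initialize MPI before reading the model file
mpiexec -n 8 nrniv -mpi model.hoc
```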


Workshop Title:  Implementing parallel network simulations with NEURON

Abstract:
This workshop is devoted to teaching how to transform a serial
NEURON network model into a parallel program.  The transformation
turns out to be fairly straightforward if the network model was
originally developed from a synapse-centric or target-cell viewpoint.
In other words, since a NetCon that connects to a target cell exists
only on the CPU where the target cell exists, it is easier to organize
the code around the question "who projects to me?" than around the
source-cell perspective "to whom do I project?".  The discussion will
cover important practical and theoretical considerations, including
the following:
--mpi installation and building NEURON on Beowulf clusters and other
   multiprocessor systems
--how to handle random connections and random spike inputs in a way
   that preserves double-precision quantitative identity regardless of
   the number of CPUs and how cells are distributed among them
--how to measure performance
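The target-centric organization and the machine-independent handling of
random connections can be illustrated with a small stand-alone sketch.
This is plain Python, not NEURON's actual API; the round-robin
distribution rule, the function names, and the delay formula are
illustrative assumptions.  The two key ideas are that each connection
is instantiated only on the rank that owns its *target* gid ("who
projects to me?"), and that each connection's random stream is seeded
from gids alone, never from the rank, so results are identical for any
number of processes:

```python
import random

def owns(gid, rank, nhost):
    # Round-robin distribution: rank r owns gids r, r + nhost, r + 2*nhost, ...
    return gid % nhost == rank

def build_net(rank, nhost, ncell, edges):
    """Instantiate the portion of the network that lives on `rank`.

    `edges` is the global list of (src_gid, tgt_gid) pairs.  Following
    the "who projects to me?" viewpoint, a connection is created only
    on the rank that owns the target cell.
    """
    cells = [gid for gid in range(ncell) if owns(gid, rank, nhost)]
    conns = {}
    for src, tgt in edges:
        if owns(tgt, rank, nhost):
            # Seed from the gids alone, so the drawn delay is
            # bit-identical regardless of nhost or cell placement.
            rng = random.Random(src * 1_000_003 + tgt)
            conns[(src, tgt)] = 1.0 + rng.random()  # delay in ms
    return cells, conns

edges = [(0, 2), (1, 2), (2, 0), (3, 1)]
# Building on 1 rank and merging the pieces from 4 ranks must agree.
_, serial = build_net(0, 1, 4, edges)
parallel = {}
for r in range(4):
    parallel.update(build_net(r, 4, 4, edges)[1])
assert serial == parallel
```

In NEURON itself the corresponding machinery is the ParallelContext's
gid-based cell registration and connection methods, which the workshop
covers in detail.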
