[Comp-neuro] Brains and Bits: Neuroscience Meets Machine Learning - NIPS Workshop: 9 & 10 December 2016

Konrad Kording koerding at gmail.com
Tue Sep 13 18:49:23 CEST 2016

Join us for (and submit papers to) the

*Brains and Bits: Neuroscience Meets Machine Learning*

*NIPS Workshop: 9 & 10 December 2016, Barcelona, Spain*

The goal of this workshop is to bring together researchers in deep
learning, machine learning, statistics, and computational neuroscience, and
facilitate discussion about a) shared approaches for analyzing biological
and artificial neural systems, b) how insights and challenges from
neuroscience can inspire progress in machine learning, and c) machine
learning methods for interpreting the revolutionary large scale datasets
produced by new experimental neuroscience techniques.

We invite high-quality submissions of abstracts for posters and contributed
talks through CMT. Instructions are below.

Overview

Experimental methods for measuring neural activity and structure
have undergone recent revolutionary advances, including in high-density
recording arrays, population calcium imaging, and large-scale
reconstructions of anatomical circuitry. These developments promise
unprecedented insights into the collective dynamics of neural populations
and thereby the underpinnings of brain-like computation. However, these
next-generation methods for measuring the brain’s architecture and function
produce high-dimensional, large scale, and complex datasets, raising
challenges for analysis. What are the machine learning and analysis
approaches that will be indispensable for analyzing these next-generation
datasets? What are the computational bottlenecks and challenges that must
be overcome?

In parallel to experimental progress in neuroscience, the rise of deep
learning methods has shown that hard computational problems can be solved
by machine learning algorithms that are inspired by biological neural
networks, and built by cascading many nonlinear units. In contrast to the
brain, artificial neural systems are fully observable, so that experimental
data-collection constraints are not relevant. Nevertheless, it has proven
challenging to develop a theoretical understanding of how neural networks
solve tasks, and what features are critical to their performance. Thus,
while deep networks differ from biological neural networks in many ways,
they provide an interesting testing ground for evaluating strategies for
understanding neural processing systems. Are there synergies between
analysis methods for biological and artificial neural systems? Has the
resurgence of deep learning resulted in new hypotheses or strategies for
trying to understand biological neural networks? Conversely, can
neuroscience provide inspiration for the next generation of
machine-learning algorithms?

We welcome participants from a range of disciplines in statistics, applied
physics, machine learning, and both theoretical and experimental
neuroscience, with the goal of fostering interdisciplinary insights. We
hope that active discussions among these groups can set in motion new
collaborations and facilitate future breakthroughs on fundamental research
questions.

Organizers

   - Eva Dyer <http://kordinglab.com/people/eva_dyer/>
     Northwestern University
   - Allie Fletcher <http://www.stat.ucla.edu/~akfletcher/>
     Statistics, UCLA & Redwood Center for Theoretical Neuroscience, UC Berkeley
   - Konrad Kording <http://koerding.com//>
     Rehabilitation Institute of Chicago, Northwestern University
   - Jascha Sohl-Dickstein <http://www.sohldickstein.com/>
     Google Research
   - Joshua Vogelstein <http://jovo.me/>
     Biomedical Engineering, Johns Hopkins University
   - Jakob Macke <http://www.mackelab.org/>
     caesar Bonn, an Institute of the Max Planck Society

Current Speakers

   - Christos Papadimitriou <https://people.eecs.berkeley.edu/~christos/>
     EECS, UC Berkeley
   - Adrienne Fairhall <https://fairhalllab.com/>
     Physiology and Biophysics, University of Washington
   - Yoshua Bengio
     Computer Science and Operations Research, Université de Montréal
   - Sophie Denève
     Group for Neural Theory, LNC, DEC, ENS
   - Demis Hassabis <http://demishassabis.com/>
     Google DeepMind
   - Terry Sejnowski <http://www.salk.edu/scientist/terrence-sejnowski/>
     Howard Hughes Medical Institute and Salk Institute, UCSD
   - Mitya Chklovskii
     Simons Foundation
   - Anima Anandkumar <http://newport.eecs.uci.edu/anandkumar/>
     EECS, UC Irvine
   - David Cox <http://www.coxlab.org/>
     Molecular and Cellular Biology and Computer Science, Harvard University
   - Surya Ganguli
     Applied Physics, Stanford University
   - Maneesh Sahani <http://www.gatsby.ucl.ac.uk/~maneesh/>
     Gatsby Computational Neuroscience Unit, University College London
   - Emily Fox <https://www.stat.washington.edu/~ebfox/>
     Statistics and Computer Science, University of Washington
   - Jonathan Pillow <http://pillowlab.princeton.edu/>
     Center for Statistics and Machine Learning, Princeton University
   - Fred Hamprecht <https://hciweb.iwr.uni-heidelberg.de/people/fhamprec>
     Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University
   - Max Welling
     University of Amsterdam

Submission Information & Important Dates
Submissions will be considered for both poster and oral presentation. See
instructions on the workshop website at http://www.stat.ucla.edu/~

Submission deadline: 29 September 2016 11:59 PM PDT (UTC -7 hours)
Acceptance notification: 12 October 2016
