[Comp-neuro] Deep Learning and Continuous Time Computing - Track at SAC 2016 - Call for Papers

Davide Zambrano D.Zambrano at cwi.nl
Thu Jun 11 14:48:18 CEST 2015


Dear all, 

Please consider joining our track at SAC2016.
Apologies for cross posting.

Best regards.

—————————————————————————————————————
ACM Symposium on Applied Computing (SAC) 2016
The 31st Annual ACM Symposium on Applied Computing 
in Pisa, Italy, April 3 – 8, 2016.
(webpage : http://www.acm.org/conferences/sac/sac2016/)

Deep Learning and Continuous-Time Computing
(webpage : http://event.cwi.nl/sac2016/)
—————————————————————————————————————

SAC 2016: For the past thirty years, the ACM Symposium on Applied Computing has been a primary gathering forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world. SAC 2016 is sponsored by the ACM Special Interest Group on Applied Computing (SIGAPP), and will be hosted by the University of Pisa and Scuola Superiore Sant’Anna University, Italy.

Artificial Intelligence (AI) is one of the objectives of Machine Learning: how to create computers capable of intelligent behavior. Deep Learning in artificial neural networks represents a remarkable step in this direction. Current state-of-the-art AI in the form of deep neural networks has recently demonstrated breakthrough performance in various AI-cognitive tasks, from image and speech recognition to natural language generation and playing ATARI games. However, in real-world applications such as video processing or robot control, a deep neural network must then be updated continuously, causing a high computational load. Specialized acceleration hardware is being developed for deep learning in continuous-time environments. In addition, or alternatively, Spiking Neural Networks (SNNs) represent another possible solution for efficient continuous-time representation and computing in deep neural networks. This Track aims to consolidate the current state of the art in deep learning, continuous-time computing, spiking neural networks, and related acceleration hardware (such as GPUs, SpiNNaker, or neuromorphic silicon systems), showing recent and future progress in this rapidly growing research field.

—————————————————————————————————————

Topics of interest:

Deep learning
Continuous-time learning
Spiking Neural Networks
Asynchronous computation
Large scale parallel simulations and computing
Neuromorphic computing

—————————————————————————————————————

Important Dates are published on the Conference or Track webpages.

—————————————————————————————————————

Track Program Committee:

Sander Bohte, CWI Amsterdam (NL) (Track Chair —  email to s.m.bohte at cwi.nl )
Davide Zambrano, CWI Amsterdam (NL) (Track Chair —  email to d.zambrano at cwi.nl )
Thomas Nowotny, University of Sussex (UK)
Steven Furber, University of Manchester (UK)
Karl Tuyls, University of Liverpool (UK)
Shih-Chii Liu, University of Zurich/ETH Zurich (CH)
Robert Babuska, Delft University of Technology (NL)
Andre Gruning, University of Surrey (UK)
Max Welling, University of Amsterdam (NL)
Narayan Srinivasa, HRL Laboratories LLC (USA)
Eleni Vasilaki, University of Sheffield (UK)

—————————————————————————————————————
Important notice: Paper registration is required and allows the inclusion of the paper, poster, or SRC abstract in the conference proceedings. An author or a proxy attending SAC MUST present the work; this is a requirement for inclusion in the ACM/IEEE digital library. No-shows of registered papers, posters, and SRC abstracts will be excluded from the ACM/IEEE digital library.

—————————————————————————————————————

Dr. Davide Zambrano
PostDoc at CWI, Amsterdam (NL)
MSc in Biomedical Eng. - PhD in Biorobotics
D.Zambrano at cwi.nl

Centrum Wiskunde & Informatica (CWI)
Science Park, 123
1098 XG - Amsterdam
