[Comp-neuro] IJCNN special session reminder: "Autonomous learning in hierarchical neural architectures"

Alexander Gepperth alexander.gepperth at ensta-paristech.fr
Sun Feb 17 15:45:40 CET 2013


Dear colleagues,

We wish to remind you of the special session "Autonomous learning in hierarchical neural architectures", to be held at IJCNN 2013 (www.ijcnn2013.org). The submission deadline is 22 February 2013. This special session aims to promote the subject of autonomous learning and to build a bridge to recent advances in deep neural architectures. All contributions will be refereed by a panel of experts according to the policies of the IJCNN conference. For more information, please see the abstract below.

We apologize if you receive multiple copies of this call.

Best regards,
Alexander Gepperth


----------------------------------------------------------------------
  Alexander GEPPERTH   -   Enseignant-Chercheur / Assistant Professor

  ENSTA ParisTech/UEI Lab               http://www.gepperth.net/alexander
  Cognitive Robotics Theme              http://cogrob.ensta.fr
  -and-
  INRIA FLOWERS team                    http://flowers.inria.fr
  858 Blvd des Maréchaux                Ph.: +33/ (0) 18 18 72 04 1
  91762 Palaiseau, France
----------------------------------------------------------------------


Autonomous learning in hierarchical neural architectures

Autonomous learning, as commonly understood, characterizes learning algorithms that can choose when (and when not) to learn, generate learning signals internally (or from interaction with their environment), and continue to learn beyond a single pre-specified task without human redesign of the learner.
In this special session, we want to focus on questions that arise when performing autonomous learning in complex processing hierarchies. Beyond the issue of when to learn, such systems also face the issue of where to learn (first). It seems to us that the organization of learning, rather than the particular choice of algorithm, is the crucial question; nevertheless, algorithms must be chosen to be applicable in (deep) hierarchies while remaining compatible with the goals of autonomous learning. In particular, we are interested in the following (non-exhaustive) set of topics:

- combination of online and offline learning
- transfer of short-term to long-term memory
- interplay between short-term and long-term memory
- control of learning by, e.g., novelty or mismatch
- guidance of learning by curiosity
- active learning
- implementations in real-world agents
- adaptation of successful hierarchical models (e.g., deep belief networks) to the needs of autonomous learning
