[Comp-neuro] CFP: NIPS workshop on Bounded rational analysis of human cognition

Ed Vul evul at mit.edu
Sat Oct 10 01:49:16 CEST 2009


Apologies for cross-posting.

----------------------------------
CALL FOR CONTRIBUTIONS
NIPS 2009 Workshop:
Bounded-rational analyses of human cognition: Bayesian models,  
approximate inference, and the brain.
http://www.mit.edu/~ndg/NIPS09Workshop.html
Whistler, BC, Canada.
Dec 12, 2009.
----------------------------------

We invite poster submissions for the NIPS 2009 workshop "Bounded- 
rational analyses of human cognition: Bayesian models, approximate  
inference, and the brain". Relevant topics include (but are not  
limited to): state-of-the-art algorithms for bounded and time-limited  
inference, process-level limitations on human Bayesian inference,  
inference algorithms in humans, and neural implementations of Bayesian  
inference algorithms.

Abstracts, no longer than one page, may be submitted by email to  
ndg at mit.edu no later than October 31, 2009. Please include "NIPS  
Workshop Abstract" in the subject line of your email.


DESCRIPTION

Bayesian, or "rational", accounts of human cognition have enjoyed much  
success in recent years: human behavior is well described by  
probabilistic inference in low-level perceptual and motor tasks as  
well as high level cognitive tasks like category and concept learning,  
language, and theory of mind. However, these models are typically  
defined at the abstract "computational" level: they successfully  
describe the computational task solved by human cognition without  
committing to the algorithm which carries it out. Bayesian models  
usually assume unbounded cognitive resources available for  
computation, yet traditional cognitive psychology has emphasized the  
severe limitations of human cognition. Thus, a key challenge for the  
Bayesian approach to cognition is to describe the algorithms used to  
carry out approximate probabilistic inference using the bounded  
computational resources of the human brain.

Inspired by the success of Monte Carlo methods in machine learning,  
several different groups have suggested that humans make inferences  
not by manipulating whole distributions, but by drawing a small number  
of samples from the appropriate posterior distribution. Monte Carlo  
algorithms are attractive as algorithmic models of cognition both  
because they have been used to do inference in a wide variety of  
structured probabilistic models, scaling to complex situations while  
minimizing the curse of dimensionality, and because they use resources  
efficiently and degrade gracefully when time does not permit many  
samples to be generated.  Indeed, given parsimonious assumptions about  
the cost of obtaining a sample for a bounded agent, it is often best  
to make decisions using just one sample.
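
The one-sample point can be illustrated with a toy sketch (ours, not any of the proposals above), assuming a two-alternative decision where the posterior puts probability 0.7 on option 'A'. The full-posterior (MAP) decision is always 'A'; a decision taken by majority vote over k posterior samples is right most of the time even for very small k:

```python
import random

random.seed(0)

def posterior_sample(p):
    """Draw one sample from a toy Bernoulli posterior: 'A' with probability p."""
    return 'A' if random.random() < p else 'B'

def decide_by_k_samples(p, k):
    """Choose whichever hypothesis a majority of k posterior samples favors."""
    votes = sum(posterior_sample(p) == 'A' for _ in range(k))
    return 'A' if 2 * votes > k else 'B'

# With p = 0.7 on 'A': a one-sample decision is correct about 70% of the
# time, and a handful of samples closes most of the remaining gap, so
# each extra sample buys less accuracy than the one before it.
trials = 20_000
for k in (1, 5, 25):
    acc = sum(decide_by_k_samples(0.7, k) == 'A' for _ in range(trials)) / trials
    print(f"k={k:2d}: P(correct) ~ {acc:.3f}")
```

Whether one sample is optimal then depends on how the cost of drawing further samples trades off against these diminishing accuracy gains.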

The claim that human cognition works by sampling identifies the broad  
class of Monte Carlo algorithms as candidate cognitive process  
models.  Recent evidence from human behavior supports this coarse  
description of human inference: people seem to operate with a limited  
set of samples at a time.  Further narrowing the class of algorithms  
yields additional predictions when the samples drawn are imperfect  
(not exact samples from the posterior  
distribution). That is, while most Monte Carlo algorithms yield  
unbiased estimators given unlimited resources, they all have  
characteristic biases and dynamics in practice -- it is these biases  
and dynamics which result in process-level predictions about human  
cognition. For instance, it has been argued that the characteristic  
order effects exhibited by sequential Monte Carlo algorithms (particle  
filters) when run with few particles can explain the primacy and  
recency effects observed in human category learning, and the "garden  
path" phenomena of human sentence processing.  Similarly, others have  
argued that the temporal correlation of samples obtained from Markov  
Chain Monte Carlo (MCMC) sampling can account for bistable percepts in  
visual processing.
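
A minimal sketch of that MCMC idea (a toy model of our own, not any of the cited proposals): a random-walk Metropolis chain on a bimodal "posterior" whose two modes stand in for the two interpretations of an ambiguous stimulus. Because successive samples are correlated, the chain dwells in one mode for long stretches before switching, qualitatively like bistable perception:

```python
import math
import random

random.seed(1)

def log_post(x):
    """Unnormalized log-posterior with modes near -2 and +2 (two 'percepts')."""
    return math.log(math.exp(-0.5 * (x - 2) ** 2) + math.exp(-0.5 * (x + 2) ** 2))

def metropolis(n, step=0.5):
    """Random-walk Metropolis sampler; small steps make samples correlated."""
    x, chain = 2.0, []
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        # Accept with probability min(1, post(prop) / post(x)).
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    return chain

chain = metropolis(20_000)
# Count sign changes: far fewer than the ~50% rate independent sampling
# from this symmetric posterior would give, because the chain dwells.
switches = sum((a > 0) != (b > 0) for a, b in zip(chain, chain[1:]))
print(f"{switches} mode switches in {len(chain) - 1} transitions")
```

The dwell times between switches are the quantity such accounts compare to the distribution of perceptual dominance durations.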

Ultimately the processes of human cognition must be implemented in the  
brain. Relatively little work has examined how probabilistic inference  
may be carried out by neural mechanisms, and even less of this work  
has been based on Monte Carlo algorithms.  Several different neural  
implementations of probabilistic inference, both approximate and  
exact, have been proposed, but how these implementations relate to  
one another, and to algorithmic and behavioral constraints, remains  
to be understood. Accordingly, this workshop will foster discussion of  
neural implementations in light of work on bounded-rational cognitive  
processes.

The goal of this workshop is to explore the connections between  
Bayesian models of cognition, human cognitive processes, modern  
inference algorithms, and neural information processing. We believe  
that this will be an exciting opportunity to make progress on a set of  
interlocking questions:  Can we derive precise predictions about the  
dynamics of human cognition from state-of-the-art inference  
algorithms?  Can machine learning be improved by understanding the  
efficiency tradeoffs made by human cognition?  Can descriptions of  
neural behavior be constrained by theories of human inference processes?


ORGANIZERS:

Noah Goodman
Ed Vul
Tom Griffiths
Josh Tenenbaum


INVITED SPEAKERS (confirmed)

Matt Botvinick
Noah Goodman
Tom Griffiths
Stuart Russell
Paul Schrater
Ed Vul
Jerry Zhu


WORKSHOP FORMAT

8:00 introductory remarks
8:10 1st talk
8:40 2nd talk
9:10 break
9:30 3rd talk
10:00 4th talk
10:30 discussion

11:00 - 1:00 posters

4:00 5th talk
4:30 6th talk
5:00 7th talk
5:30 8th talk
6:00 discussion
