[Comp-neuro] DISCUSSION - The sniffing brain - and free will

Asim Roy ASIM.ROY at asu.edu
Wed Aug 27 07:15:39 CEST 2008


Bryan Bishop wrote:

> Yes, but what is a "blank slate" in this context? Traditionally it 
> implies that it is something that could be written on, as easily as I 
> write characters on tty7. But we can hardly paint on dendrites so 
> analogously.

From an artificial neural network point of view, a "blank slate" simply means a network whose connection weights and other parameters have not yet been set - in a sense, a "blank network" that can be trained to perform a task. And "writing" to the "blank network" occurs when the network is actually "trained" to perform a task through a set of examples.
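
To make this concrete, here is a minimal sketch in Python/NumPy (the OR task, the single-layer perceptron rule, and the learning rate are arbitrary illustrative choices, not anything specific to the brain): the "blank network" carries no task knowledge until it is "written to" through training examples.

import numpy as np

# A "blank network": the weights exist but carry no task knowledge yet.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # untrained connection weights
b = 0.0

# "Writing" to the slate: train on examples (here, the OR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

for _ in range(50):                 # perceptron-style weight updates
    for x, target in zip(X, y):
        out = 1 if x @ w + b > 0 else 0
        w += 0.1 * (target - out) * x
        b += 0.1 * (target - out)

print([1 if x @ w + b > 0 else 0 for x in X])   # -> [0, 1, 1, 1]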

This "blank slate" idea is corroborated by recent studies in neuroscience. For example, research of the last few years has discovered that "new cells" are generated in the brain not only during the development phase, but also during adult life (adult neurogenesis). These studies show that several thousand new cells are generated daily in adult brains, some of which are used to create new functional networks and others simply discarded. So "writing" or "painting" in the brain, from the context of these "new cells," would imply connecting them up into appropriate networks and "training" them, through synaptic adjustments, to perform certain tasks.




Asim Roy wrote:

>>>  And that "learning" implies writing to the slate.

Bryan Bishop wrote in response:

> There's a few too many layers of folk psychology here, I suspect. 
> Writing to the slate might mean the simple stimulation you're providing 
> via simulation of action potentials and the synaptic dynamics. But 
> traditionally writing to the slate would mean "knowledge", which --

See response to the next set of questions.




Asim Roy wrote:

>>> are not contesting the "tabula rasa" idea. If you contest the "tabula
>>> rasa" idea, you are claiming that all knowledge comes predefined and
>>> prewired and that might be a hard thing to prove.

Bryan Bishop wrote in response:

> Yeah, so you're trying to make a direct connection between "writing to 
> the brain" via experience/stimulation, to "knowledge" which is not 
> something I've seen an information-theoretic analysis of wrt the brain. 
> You're trying to go full circle (your working, conscious conception 
> of "knowledge") to connect it back to the neurobiological reality 
> (however it's actually done in the brain) and if you had that then 
> you're already done, no? How do you know that "knowledge" is the right 
> idea/heuristic? And so on.


In artificial neural networks, "learning" has always implied "learning from examples," or "experience/stimulation" as you call it, and "knowledge" results from that "learning." So, for example, one "learns" the rules for diagnosing a certain disease once one has seen and studied lots of data related to that disease. Again, "knowledge" is created from "learning," not the other way round. So I hope there is no "full circle" issue here, as you suggest.





Asim Roy wrote:

>>> From what I read, you are questioning the idea that the "brain" is
>>> somehow "free" to design a special type of network (e.g. a
>>> multilayer network) to solve some "odd unknown problem" - e.g. to

Bryan Bishop wrote in response:

> Special in a graph theory sense? Violating rules of plasticity? I don't 
> understand.

I meant a special design from a structural point of view.





Asim Roy wrote:

>>> learn mathematics or music or a language. You can obviously prove
>>> this part of your theory by showing that certain standard network
>>> structures, the ones that are actually found in biological systems
>>> (say in the olfactory system), can solve other types of odd unknown
>>> problems too, such as learning mathematics, music or a language.

Bryan Bishop wrote in response:

> That smells more like a hack of a use of neurons than anything else. 
> While I can write an ANN to play chess, this is hardly evidence that 
> the real things are meant to play chess. In fact, ironically, Wikipedia 
> says the function of a neural network is, I quote, "given a specific 
> task to solve, and a class of functions F, learning means using a set 
> of observations, in order to find f^* \in F which solves the task in 
> an /optimal/ sense." [[Personally, I'm investigating the origins 
> of 'optimal', in the sense of computer science (compiler optimization 
> strategies) and, one of the reasons why I'm here*, computational 
> neuroscience.]]


I am not sure what the question is: is it about whether our brains 
can learn to play chess or about "optimality" in some context?
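
If it is the latter, the quoted definition simply says: choose f* from a class of functions F so as to minimize some cost over a set of observations - what is usually called empirical risk minimization. A minimal sketch (the linear function class, the grid search, and the squared-error cost are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(4)

# Observations of some phenomenon: y = 3x plus noise.
x = rng.uniform(0, 1, 50)
y = 3 * x + rng.normal(scale=0.1, size=50)

# F = {f_a : f_a(x) = a * x}; cost C(f) = mean squared error on the data.
candidates = np.linspace(0, 5, 1001)
costs = [np.mean((a * x - y) ** 2) for a in candidates]
a_star = candidates[int(np.argmin(costs))]   # f* = argmin of the cost over F
print(f"f*(x) = {a_star:.2f} * x")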




Asim Roy wrote:

>>> That way, you can be consistent with the "tabula rasa" idea and say
>>> that "learning" is just adjusting some very standard structures
>>> found in our brains. I would venture to say that if you can do
>>> that, that would be a huge step forward for this field because you
>>> have simplified the "learning" task. But contesting the "tabula
>>> rasa" idea itself might be a bit difficult.

Bryan Bishop wrote in response:

> It's difficult because the /context/ of the tabula rasa. It's not 
> whether or not the brain receives inputs from outside the system, but 
> the folk psychology attached to the idea of "upload knowledge here -- 
> press button to continue". It doesn't seem to be that way. 
> There's some literature worth citing here that I am completely 
> neglecting.

Artificial neural network (ANN) research has moved away from the "upload knowledge here --" idea. That was the original AI idea: that our brains simply store a set of rules (knowledge) that were given to us.

Learning, and knowledge acquisition, in ANNs implies generalization from examples, rather than the simple storage of facts and information for later recall. Generalization implies the ability to derive a succinct description of a phenomenon, using a simple set of rules or statements, from a set of observations of the phenomenon. So, in this sense, the simpler the derived description of the phenomenon, the better the generalization (and the knowledge).
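
To illustrate the "simpler is better" point, here is a small sketch (the synthetic data and the two polynomial degrees are arbitrary choices): the same noisy observations are described by a succinct model and an overly complex one, and the two are compared on new observations of the same phenomenon.

import numpy as np

rng = np.random.default_rng(3)

# Observations of a phenomenon: y = 2x plus noise.
x_train = rng.uniform(-1, 1, 15)
y_train = 2 * x_train + rng.normal(scale=0.2, size=15)
x_test  = rng.uniform(-1, 1, 100)
y_test  = 2 * x_test

for degree in (1, 9):              # succinct vs. overly complex description
    coef = np.polyfit(x_train, y_train, degree)
    err = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {err:.4f}")

On a typical run the degree-1 fit, the more succinct description, shows the lower error on the new observations - that is, the better generalization.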


Asim Roy
Arizona State University

