New Developments, Ideas, and Errata

What is Thought? just appeared, but it has been several months since I was last able to add to or modify it... this page will be updated relatively frequently as I have new thoughts on the subject... at first just as brief notes, but hopefully I will eventually expand on these ideas... I apologize in advance to any readers of this page who have not read What is Thought?... it is hard to make sketchy notes like these clear, since they refer to the text... I also apologize that some of these notes will likely be more speculative than most of the discussion in the book; this is more or less an infrequent blog of ongoing thoughts...

(1) Section 12.7 suggests that, if the theories discussed in What is Thought? are right and many computational modules are programmed or heavily biased in the genome, then substantial local structure in gene expression ought to be observable. However, it should have been emphasized that this structure may only be observable in neonates or young animals. As discussed in chapter 12, evolution builds many kinds of learning/development into critical periods. The kinds of modules and biases that What is Thought? suggests are evolved into the genome, and which form a basis for our programming during life, will quite likely manifest themselves only at early stages of development and learning. (Posted 2/6/04)

(2) Section 14.3 suggests that our minds do vast computations of which we are not aware to extract meaningful output of which we are aware. Because these computations evolved to work rapidly and effectively in natural environments, learning theory suggests they should sometimes be fooled in unnatural environments. As discussed, this leads to optical illusions. However, I should have emphasized that such illusions occur not only in vision, but also in other qualia (sensations).

A good example is given by the discussion of phantom limb pain in V.S. Ramachandran and S. Blakeslee, Phantoms in the Brain. Individuals who have limbs amputated usually later experience pain and other sensations in the missing limb. Ramachandran explains such sensations as arising from the brain's misinterpretation of signals. A particularly striking instance is the following. Some such individuals have episodes where they feel they are clenching their phantom hand so tightly that it cramps, and they even feel their phantom fingernails gouging their phantom hand. They strive to unclench their phantom hand, but to no avail. This is a serious problem, as the phantom pain is intense and very real to the patients.

Ramachandran's explanation is that when these patients attempt to unclench their phantom hand, the brain receives none of the feedback that would ordinarily confirm the hand has in fact relaxed, and it misinterprets this lack of feedback. This understanding (see their book for more detail) allowed him to greatly ameliorate the problem. He supplied patients with a mirror device in which they could unclench their real hand while simultaneously attempting to unclench the phantom hand; by observing the mirror image of the real hand, they received visual feedback that they had successfully unclenched the phantom hand. Their pain often vanished immediately.

As we understand the computations performed by the mind better and better, we should be able to make many more such psychophysical predictions across many sensory modalities. It seems a particularly striking argument for physicalism that various strong qualia can be so understood in computational terms. (Posted 2/6/04)

(3) Section 14.3 also discusses some theories of dreams, including Crick and Mitchison's theory that dreams are a form of unlearning, in which spurious memories are removed. The main virtue of this theory is that it explains why dreams are so hard to remember. But we can explain this in other ways: to explain why dreams are hard to remember, we need to understand why long term memory formation is turned off. Here is a different possibility.

According to the theory in What is Thought?, reasoning is ordinarily fast because we follow constraints, exploring only meaningful possibilities. But to the extent that we follow the most natural paths in our thoughts, we get trapped when these paths lead to dead ends or local optima, or when we have jumped to the wrong conclusion and use it to constrain our thoughts. To get out of such local optima, we need to engage in various forms of lateral thinking. Thus it would be useful to do searches in which we do not strictly follow all the constraints we believe are meaningful, but allow one or a few to be violated and cast a wider net. While we are engaging in such flights of imagination, we would not want to store our conclusions unless they have been validated, so it would make sense to turn off long term memory storage.
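
As a rough illustration of the kind of search I have in mind (the code and its toy landscape are my own invention, purely hypothetical, and not anything from the book), here is a C sketch in which a searcher that strictly follows a "move only to improving neighbors" constraint gets stuck on a local peak, while one that occasionally relaxes that constraint and jumps more widely can find the higher peak:

    /* Hypothetical sketch: strict constraint-following search vs. a search
     * that occasionally relaxes a constraint to "cast a wider net". */
    #include <stdio.h>
    #include <stdlib.h>

    /* A toy landscape: a local peak of height 30 near x=20 and a higher
     * peak of height 60 near x=80. */
    static double score(int x)
    {
        double a = 30.0 - abs(x - 20);
        double b = 60.0 - 2.0 * abs(x - 80);
        return a > b ? a : b;
    }

    /* Hill climbing that normally steps only to adjacent points, but with
     * probability relax_prob ignores that constraint and considers a point
     * anywhere in the space.  Improving moves are always taken; the best
     * point seen is remembered. */
    static int search(int start, int steps, double relax_prob)
    {
        int x = start, best = start;
        for (int i = 0; i < steps; i++) {
            int candidate;
            if ((double)rand() / RAND_MAX < relax_prob)
                candidate = rand() % 100;              /* constraint violated: wide jump */
            else
                candidate = x + (rand() % 2 ? 1 : -1); /* the usual "natural" step */
            if (candidate < 0 || candidate > 99)
                continue;
            if (score(candidate) >= score(x))
                x = candidate;
            if (score(x) > score(best))
                best = x;
        }
        return best;
    }

    int main(void)
    {
        srand(1);
        int strict  = search(10, 5000, 0.0);  /* never violates the step constraint */
        int relaxed = search(10, 5000, 0.2);  /* violates it 20% of the time */
        printf("strict search:  x=%d, score=%.0f\n", strict, score(strict));
        printf("relaxed search: x=%d, score=%.0f\n", relaxed, score(relaxed));
        return 0;
    }

The point is only the qualitative contrast: following the constraints exactly is efficient but can dead-end, while occasionally suspending one lets the search escape.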

Moreover, if we have a central decision maker, we would want to use its powerful circuitry in such search. We can't easily do this while we are awake, but while we are asleep it is available. Then if we are awakened, the short term memory and circuitry would be in the midst of a computation, which we would briefly perceive because, as discussed in section 14.3, what we sense is precisely the computations of the decision modules.

Of course, if we come to a brilliant conclusion, we would want to validate it and use it when awake. Maybe it is no accident that we sometimes come to our best insights in dreams or while asleep. (Posted 2/6/04)

(4) Sections 14.3 and 14.4 explain awareness and qualia as the response of what I somewhat awkwardly referred to as the "upper level" or "decision" modules to their inputs. (Part of the point of this note will be to propose a less awkward name.) Roughly speaking, my proposal is that the decision module or modules can talk and think about their inputs because that is what they know about, what they base their decisions on. These inputs are meaningful summaries fed to them by other computational modules, much as the main function in a C program might call subroutines and receive the results of their computations. The decision module is thus unaware of the massive computations done by lower level modules to extract these meaningful summaries, just as main in a C program has no direct access to the internals of its subroutines. But it must know about and care about its inputs because it is evolved to make decisions about them. By definition, it cannot react in an uncaring way, as a zombie would, when it makes detailed and sensitive discriminations regarding these inputs. (One does have, and can learn or program, new modules that take actions without involving the decision modules, for example precomputed reflexes, and these operate as zombies.) When we describe what we are aware of, or when we think about our awareness (that is, when we are aware), what we report is the output of the decision module(s), because it is the decision maker and its decisions control our words and thoughts. And I argue that its program must be evolved to report that these inputs feel to it exactly the way our qualia feel to us, because that is the nature of their meaning. If something is painful, the decision module must report that it is awful, because evolution has crafted the system with an internal reward and pain function precisely so that decisions tend to advance the interest of the genes. Thus the decision maker is specifically built to regard this input as awful, and to be able to weigh the awfulness against other potential benefits. Evolution has also built in urges to scratch when some noxious substance is on our skin, and the decision module reporting that it has such an urge (which it can decide to override based on valid concerns) is what constitutes itchiness. I thus argue that if one simply makes the ansatz that all thought is execution of certain evolved programs, one gets a compact and simple theory that explains what we are aware of, the nature of awareness, and the nature of qualia in terms of the evolved, meaningful inputs to this evolved decision circuitry.
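
To make the C analogy concrete, here is a toy sketch of my own (the function assess_pain and its numbers are invented for illustration, not anything from the book): main receives only the meaningful summary a subroutine returns and decides on that basis, with no access to the subroutine's internal workings.

    /* Hypothetical illustration of the main/subroutine analogy: the
     * "lower level" subroutine does its work on raw data that main
     * never sees, and hands back only a meaningful summary. */
    #include <stdio.h>

    static double assess_pain(void)
    {
        /* Raw readings and intermediate work live only inside this
         * function; main has no way to inspect them. */
        static const double raw[] = { 0.2, 0.9, 0.7, 0.8 };
        double total = 0.0;
        for (int i = 0; i < 4; i++)
            total += raw[i];
        return total / 4.0;           /* the summary that gets reported */
    }

    int main(void)
    {
        /* The "decision" code sees only the returned summary and weighs
         * it against other concerns. */
        double pain = assess_pain();
        if (pain > 0.5)
            printf("withdraw (reported pain %.2f)\n", pain);
        else
            printf("carry on (reported pain %.2f)\n", pain);
        return 0;
    }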

The first additional point I want to make here is that this "decision module" corresponds to what is sometimes called "an observing self" or a homunculus (cf. Koch 2004; Baars 2004). My picture differs from other pictures of the homunculus in that I have an explicit model of qualia, awareness, and all that as evaluation of computer code, but otherwise my decision module has the qualities usually associated with a homunculus: it sits inside your head and observes, and is the center of conscious sensation. So I will henceforth replace the term "decision module" with the less awkward term homunculus. Please keep in mind, however, that I am referring to a logical entity: execution of a module or modules in a program.

In the past, the notion of a homunculus, a little man in your head observing the world, has often been ridiculed as solving nothing, because it was claimed to lead to infinite regress (cf. Ryle 1949). Is there a smaller homunculus inside the homunculus? Otherwise, how do you explain the awareness of the homunculus? But this objection is misguided in our context (see also Baars 2004 and elsewhere). The homunculus is not separately aware, nor aware of its internal workings. The computation that reports our awareness is precisely the workings of the chunk of evolved code that is the homunculus, and there is no reason to look for an inner homunculus.

In particular, the main reason I am writing this note is to remark that my theory also explains simply why you are unconscious of the inner workings of the homunculus itself, a fact remarked on without explanation elsewhere (cf. Koch 2004). The homunculus can no more report its internal workings than my computer can display its internal transistor values on my screen. In principle, I suppose, the homunculus (or my laptop) could be wired up to control a robot with a voltmeter that it could direct to measure the voltage of its internal neurons or transistors or whatever, but it isn't evolved to do that, so it can't. It isn't evolved to do that because these internal values wouldn't have any particular useful "meaning" and it wouldn't particularly care about them. Meaning extraction and useful decision making require code of a certain size, and you really don't have any reason to look in the middle of it (except maybe to debug it, if you were writing it as opposed to evolving it). The brain runs its homunculus code on this code's inputs, and this computation reports what it thinks about the inputs and the outcome, but it cannot examine the internals of this code. Such an examination would be meaningless.
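
In C's terms, the same point can be put as follows (again a hypothetical sketch of my own; color_module and its fields are invented): code that holds only an opaque handle to a module can use the module's reported output, but it has no way to look at the module's internals, because no interface for doing so exists.

    /* Hypothetical sketch: the "decision" code below can use the module's
     * output, but cannot examine its internals; at the point where decide()
     * is compiled, the struct's contents have not even been declared. */
    #include <stdio.h>

    struct color_module;                              /* opaque handle */
    struct color_module *color_module_get(void);
    double color_module_output(struct color_module *m);

    static void decide(struct color_module *m)
    {
        double hue = color_module_output(m);          /* the reported summary */
        /* m->raw_receptor_a would be a compile error here */
        printf("acting on reported hue %.2f\n", hue);
    }

    /* Only below this point do the internals exist at all. */
    struct color_module {
        double raw_receptor_a;
        double raw_receptor_b;
    };

    struct color_module *color_module_get(void)
    {
        static struct color_module m = { 0.3, 0.6 };
        return &m;
    }

    double color_module_output(struct color_module *m)
    {
        return (m->raw_receptor_a + m->raw_receptor_b) / 2.0;
    }

    int main(void)
    {
        decide(color_module_get());
        return 0;
    }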

This is presumably related to why we are often unconscious of computational deficits. The qualia of color, for example, come from running some computational module. We implicitly interpret the outputs of this module as having color meaning, because that's the way we are coded: this module was evolved to do color computations, feeding into other circuitry in ways that are usually useful. If the computation is broken in some way so that it is actually insensitive to color in a quarter of the visual field, we never notice (Koch 2004, footnote 9, p. 138). We never notice, I would argue, because we do not have circuitry for debugging its innards. We simply continue performing the computations as wired up. As with other illusions, the computation may become decoupled from reality, but we don't "feel" that; what we feel depends on how the computation is wired, not on reality.

Koch, C. (2004). The Quest for Consciousness: A Neurobiological Approach. Roberts and Company.
Baars, B. (2004). The evidence is overwhelming for an observing self in the brain. Science and Consciousness Review.
Ryle, G. (1949). The Concept of Mind. Hutchinson, London.

(Posted 6/11/04)

(5) Mouse Brain Organization Revealed Through Direct Genome-Scale TF Expression Analysis. This is a nifty Science magazine paper (you'll need a log-in to see more than the abstract) that used a new technique to visualize the expression of more than a thousand transcription factors in a mouse embryo's developing brain. The authors (Paul A. Gray et al.) found 349 genes that showed restricted expression patterns, with quite a bit of fine structure. Science 306, Dec 24 2004, pp. 2255-2257.


Last modified: Tue Jan 4 11:20:17 EST 2005