
Philosophy of Mind midterm


Problem 1: Why focus on causation?
Is the starting definition a good one? Are all mental states "essentially states apt for causing behaviour"?
Ryle: they are not causes
Descartes: they are contingent causes
Strawson (1994): they are causes of other mental states, but not of behaviour
We need to challenge this assumption, or the reasons that led Armstrong to believe it is a good starting point

Problem 2: The painfulness of pain
Is the causation of pain all there is to pain?
Quality and experience of painfulness that identity theory does not seem to capture
Thought experiment: automata (or zombies):
Physical duplicate of me
Its internal states (c-fibre stimulation) cause pain behaviour
But still it does not feel pain
The experiential quality seems to be logically independent of the causal relations pain states bear to behaviour

Problem 3: Multiple realizability
Armstrong: Pain is identical with c-fibre stimulation.
What if there are no other animals who have c-fibres?
Then, those other animals would not feel pain.

Armstrong: The belief "Elvis lives" is identical with, e.g., e-fibre stimulation.
How likely is it that everyone who believes that "Elvis lives" has the same type of brain configuration with e-fibres?

Objection from multiple realizability: mental states might be realized in diverse creatures in multiple ways.
Example of pain: Pain might be realized in humans by c-fibres, and it might be realized in octopi by something else.
Saying that only creatures who have c-fibres firing can feel pain is indefensible.

Question: is it still pain?
How many aspects of mind can it account for?
1. Reasoning: keeping a 'rational relation' in sync with causal relation. But when does a mental process count as reasoning? Three types of theoretical reasoning:
A) Deductive: the conclusion is logically entailed by the premises (if P then Q; P; therefore Q).
If ravens are black, and Arch is a raven, Arch is black.
B) Inductive: generalizing from observing a representative sample to an unobserved whole
We infer that all ravens are black from seeing many black ravens
C) Abductive: inference to the best explanation
Best explanation for certain cosmological facts about motions of stars: dark matter
Best explanation for why the diamonds were found in John's safe and his fingerprints on them: he stole them.
No known computational process can implement inductive or abductive reasoning
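The contrast drawn above can be made concrete. Deduction is mechanisable as pure rule-governed symbol manipulation, which is exactly the kind of process CTM posits; a minimal sketch (the function and the rule format are invented for illustration, not from the lecture):

```python
# Toy forward-chaining deduction: modus ponens as pure symbol manipulation.
# The rules and facts encode the raven example from the notes.

def deduce(facts, rules):
    """Repeatedly apply modus ponens (if P then Q; P; therefore Q)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # a purely formal step: no 'understanding' required
                changed = True
    return facts

rules = [("Arch is a raven", "Arch is black")]  # 'all ravens are black', instantiated for Arch
facts = {"Arch is a raven"}
print(deduce(facts, rules))  # includes "Arch is black"
```

Induction and abduction resist this treatment: there is no comparably simple rule that takes "many observed black ravens" to "all ravens are black", or observed facts to their best explanation.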
2. Emotions: greatly 'impair' rational thinking (Crawford 2010, p. 103).
Either emotions are computational processes that CTM has left out, or they resist being captured by computation, and we need another explanation.
3. Imagination:
Can a computer be creative? (Manipulation of 0s and 1s)
Can creativity be understood computationally?
Alternative model: connectionism?
(Modeled on interconnected neural networks)
4. Mental representations (part of CTM): how do they get their meanings?
According to CTM, to believe that x ('Turing cracked the enigma code') is to have a mental symbol in your head that means, or has the content, that Turing cracked the enigma code.
But where does it get this meaning from? And how can the thought be directed at things that do not exist?
= Problem of intentionality (the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs): what determines what we do is what our mental states are about, but aboutness is not a category of natural science.
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.

The experiment aims to show that computers cannot process information or think like human beings. Human thoughts are about things, therefore, they have semantic contents. Computers only process syntax (formal, grammatical structures). Searle argued that syntax does not provide one with semantics for free, concluding that computers cannot think like humans do.

The experiment is supposed to show that only Weak AI is plausible; that is, a machine running a program is at most only capable of simulating real human behavior and consciousness. Thus, computers can act 'as if' they were intelligent, but can never be truly intelligent in the same way as human beings are.
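Searle's point that syntax alone yields no semantics can be caricatured in a few lines of code: a lookup table maps input strings to plausible output strings with no grasp of what any of them mean. This is only an illustrative toy (the rulebook entries are invented):

```python
# A caricature of the Chinese Room: purely syntactic input-to-output mapping.
# The program 'answers' questions without understanding them in any sense.

rulebook = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",    # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    # Follow the rulebook mechanically, like Searle with his English manual.
    return rulebook.get(symbols, "请再说一遍.")  # default: "Please say that again."

print(room("你好吗?"))  # behaves as if it understands; nothing here 'means' anything to it
```

A real program would need vastly more rules to pass the Turing test, but Searle's claim is that scaling up the rulebook adds no understanding: the shape of the process stays the same.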
Putnam's original formulation of the experiment was this: We begin by supposing that elsewhere in the universe there is a planet exactly like Earth in virtually all respects, which we refer to as "Twin Earth". (We should also suppose that the relevant surroundings are exactly the same as for Earth; it revolves around a star that appears to be exactly like our sun, and so on). On Twin Earth, there is a Twin equivalent of every person and thing here on Earth. The one difference between the two planets is that there is no water on Twin Earth. In its place there is a liquid that is superficially identical, but is chemically different, being composed not of H2O, but rather of some more complicated formula which we abbreviate as "XYZ". The Twin Earthlings who refer to their language as "English" call XYZ "water". Finally, we set the date of our thought experiment to be several centuries ago, when the residents of Earth and Twin Earth would have no means of knowing that the liquids they called "water" were H2O and XYZ respectively. The experience of people on Earth with water, and that of those on Twin Earth with XYZ would be identical.
Now the question arises: when an Earthling (call him Oscar for simplicity's sake) and his twin on Twin Earth say 'water', do they mean the same thing? (The twin is also called 'Oscar' on his own planet, of course. Indeed, the inhabitants of that planet call their own planet 'Earth'. For convenience, we refer to this putative planet as 'Twin Earth', and extend this naming convention to the objects and people that inhabit it, in this case referring to Oscar's twin as Twin Oscar, and Twin Earth water as twin-water.) Ex hypothesi, their brains are molecule-for-molecule identical. Yet, at least according to Putnam, when Oscar says 'water', the term refers to H2O, whereas when Twin Oscar says 'water' it refers to XYZ. The result of this is that the contents of a person's brain are not sufficient to determine the reference of terms they use, as one must also examine the causal history that led to this individual acquiring the term. (Oscar, for instance, learned the word 'water' in a world filled with H2O, whereas Twin Oscar learned 'water' in a world filled with XYZ.)
This is the essential thesis of semantic externalism. Putnam famously summarized this conclusion with the statement that "'meanings' just ain't in the head." (Putnam 1975/1985, p. 227)
In his original article, Putnam had claimed that the reference of the twins' "water" varied even though their psychological states were the same. Tyler Burge subsequently argued in "Other Bodies" (1982) that the twins' mental states are different: Oscar has the concept H2O, while Twin Oscar has the concept XYZ. Putnam has since expressed agreement with Burge's interpretation of the thought experiment. (See Putnam's introduction in Pessin and Goldberg 1996, xxi.)
Everything is just the way it is for us on Earth, except that the stuff in lakes, rivers, etc. is XYZ rather than H2O.
XYZ is exactly like water in its superficial properties and how it behaves
Every Twin-Earthian is a duplicate of an Earthian
Both Earth girl and Twin Earth girl assert that 'water is wet'
Their thoughts differ in content, and their sentences differ in meaning, because the substances they refer to are different
Even if they were told that 'water is H2O' and they believed it, they cannot have formed the very same belief, since Earth girl's belief is true while Twin Earth girl's belief is false
Truth-conditions of thoughts are essential to their identity
Even if they don't know that water is H2O
Because meaning is not in the head
All experiencing is the result of electronic impulses traveling from the computer to the nerve endings

Epistemology: against skeptical argument
If you were a brain in a vat, all your thoughts would be false! Why?
Brains in a vat cannot refer to things outside them, only to their own images
"the brains in a vat are not thinking about real trees... because there is nothing by virtue of which their thought 'tree' represents actual trees" (Putnam 14).
The sensory inputs do not represent
Brains in a vat cannot even think of themselves as brains in a vat
'I am a brain in a vat' is self refuting:
If my brain is in a vat and my experiences are of the matrix, then 'I am a brain in a vat' would mean 'I am a brain in a vat in the image' that the computer feeds me.
"Part of the hypothesis that we are brains in a vat is that we aren't brains in a vat in the image" (15)
So, if I am a brain in a vat, saying 'I am a brain in a vat' is false, because I cannot refer to the vat world that is not simulated (and since the experience is only simulated, "I'm not a brain in a vat" is true).

For a brain in a vat that had only ever experienced the simulated world, the statement "I'm not a brain in a vat" is true. The only possible brains and vats it could be referring to are simulated, and it is true that it is not a simulated brain in a simulated vat. By the same argument, saying "I'm a brain in a vat" would be false, because you cannot refer to the vat world that is not simulated

Pictures in the Head
"For the image, if not accompanied by the ability to act in a certain way, is just a picture, and acting in accordance with a picture is itself an ability that one may or may not have" (Putnam, 1982, p. 19).
Having images alone is not enough for understanding; you need to be able to use them in a context.
Maps and pictures alone do not say anything (do not mean or refer). You need to be able to read them.
Imaginings do not have conditions of satisfaction; I can imagine anything I want and it will always be 'true'

Preconditions for thinking about X, representing X, referring to X:
X actually exists (in physical world or social discourse)
Causal connection between the world and your thought
Rejection of the power of the mind to be about things out of nowhere ('intentionality' with the power to refer)
Our thoughts are still about something; they have meanings, but the meanings are external
FP = 'naïve psychology' = commonsense psychology = theory of mind

Weak: refers to a particular set of cognitive capacities that allow for the prediction and explanation of behaviour
Having beliefs and desires

Strong: a theory of behaviour represented in the brain
Empirical theory (Fodor)

Why FP should be regarded as a theory
Churchland argues that if we regard folk psychology as theory, we can unify the most important problems in the philosophy of mind, including:
1. the explanation and prediction of behaviour
2. the meanings of our terms for mental states
3. the problem of other minds
4. introspection
5. the intentionality of mental states
6. the mind-body problem

1. FP explains and predicts behavior:

1. All of us can explain and even predict the behavior of other people and even animals rather easily and successfully
2. These explanations and predictions attribute beliefs and desires to others
3. Also, explanations and predictions presuppose laws
4. Churchland believes that rough and ready common-sense laws can be reconstructed from everyday folk psychology explanations

"Each of us understands others, as well as we do, because we share a tacit command of an integrated body of lore concerning the law-like relations holding among external circumstances, internal states, and overt behaviour" (p. 69).

2. FP deals with the problem of the meanings of our terms for mental states:
If folk psychology is a theory, the semantics of our terms is understood in the same way as the semantics of any other theoretical terms
The meaning of a theoretical term derives from the network of laws in which it figures

3. FP as theory deals with the problem of other minds:
1. We don't infer that others have minds from their behavior (if shouting, then pain; or: if he broke his leg and shouted before, then pain)
2. It's risky to generalise from our own case (target: Simulation Theory)
3. Rather, the belief that others have minds is an explanatory hypothesis that belongs to folk psychology. FP provides explanations and predictions (through a set of laws).

4. Introspective judgment is just a special case of applying the theory

5. FP deals with intentionality:
Intentionality of mental states is not a mysterious feature of nature but rather a "structural feature" of the concepts of folk psychology
These structural features reveal how much FP is like theories in the physical sciences
E.g., propositional attitudes (having a belief that p or a desire that p) are like "numerical attitudes": having a mass n, a velocity n, etc.
In both cases, the logical relations that hold among the "attitudes" are the same as those that hold among their contents,
In both cases we can form laws by quantifying "for all n" or "for all p"
The only difference between FP and theories in the physical sciences is in the sort of abstract entities it utilizes: propositions instead of numbers

6. FP sheds light on mind-body problem
1) FP is not an empirical theory (it is not refutable by the facts). It is a normative theory:
it doesn't describe how people actually act, but characterises an ideal: how they ought to act if they were to act rationally on the basis of beliefs and desires
hence, FP could not be replaced by a description of what's going on at the neuronal level

What do you think?
FP explanations depend on logical relations among beliefs and desires, like mathematics; this does not make FP a normative theory. The relations are objective; we add values to them
People are not ideally rational

2) FP is an abstract theory
FP characterises internal states such as beliefs and desires in terms of a network of relationships to sensory inputs, behavioral outputs, and other mental states
This abstract network of relations could be realised in a variety of different kinds of physical systems
Hence, we cannot eliminate this functional characterisation in favour of some physical one
Churchland's rebuttal: This shifts the burden of proof
From showing that FP is a good theory to showing that certain physical systems support FP
It's removed from empirical criticism
To defend eliminativism - attack functionalism

Churchland's attack: functionalism is like alchemy:

Alchemy explained the properties of matter in terms of four different sorts of spirits
e.g., the spirit of mercury explained the shininess of metals
This theory got eliminated by atoms and elements
Reduction was not good enough: the old and the new theories gave different classifications of things
Functionalist rebuttal: can redefine spirits as functional states
e.g., being ensouled by mercury just is the disposition to reflect light
Churchland: if you can make that move, you can make any move that will be an outrage against truth and reason. The functionalist stratagem can be used as a smokescreen for error.

More worries about eliminativism:
1. Maybe not a theory at all?
Alternative: social practice
2. Could we talk in 'neural' language?
3. Is there a contradiction?

Reply to 3: it begs the question
It assumes that for something to have any meaning, it must express a belief. It is this theory of meaning that should be rejected.
Analogy: it would be like a 17th-century vitalist arguing that if someone denies they have a vital spirit, they must be dead, and hence not saying anything (Pat Churchland)
- The intentional strategy: a method for attributing beliefs and desires

"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in many - but not all - instances yield a decision about what the agent ought to do; that is what you predict the agent will do" (p. 325).

Dennett, D. (1981). "True Believers: The Intentional Strategy and Why it Works." Reprinted in Lycan, W. and Prinz, J. (2008). Mind and Cognition, An Anthology. Blackwell Publishing
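Dennett's recipe in the quoted passage is itself a procedure, and can be rendered as a toy program. This sketch is purely illustrative: the function name, the way 'beliefs' are encoded as scores, and the chess example are all assumptions, not Dennett's own formalism.

```python
# Toy rendering of Dennett's intentional strategy: treat the system as rational,
# attribute the beliefs and desires it ought to have, then predict the action
# that best furthers its goals in the light of those beliefs.

def intentional_stance(situation, goal, options):
    """Predict the act a rational agent 'ought' to choose, hence will choose."""
    # Steps 1-2: attribute the beliefs it ought to have, given its place in the world.
    beliefs = {opt: situation[opt] for opt in options}  # how well each option serves each goal
    # Step 3: predict the action that furthers its goal in light of those beliefs.
    return max(options, key=lambda opt: beliefs[opt].get(goal, 0))

# A chess program 'wants' to win; it 'believes' capturing the queen helps most.
situation = {"capture queen": {"win": 9}, "push pawn": {"win": 1}}
print(intentional_stance(situation, "win", ["capture queen", "push pawn"]))
# -> "capture queen"
```

The point of the stance is that this prediction works regardless of whether the system 'really' has beliefs, which is why Dennett applies it equally to people, animals, and chess computers.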

How do we do it?
Driven by the reasonable assumption that all humans are rational beings who have specific beliefs and desires, and who act on the basis of those beliefs and desires in order to get what they want
Our beliefs are based on "perceptual capacities", "epistemic needs" and "biography", and are often true
Based on our fixed personal views of what all humans ought to believe, desire and do, we predict (or explain) the beliefs, desires and actions of others "by calculating in a normative system"

The attributor works with beliefs and desires
You don't claim that the computer really has beliefs in order to predict how it will behave; you figure out how it will behave on the basis of 'as if' beliefs
Otto and Inga are looking for the MoMA

Inga consults her biological memory: she remembers that MoMA is on E 53rd str

= Inga believed that MoMA is on E 53rd str; not occurrent, but waiting to be accessed

Otto has Alzheimer's Disease. He consults his notebook. He goes to the museum.

= Otto didn't just learn that MoMA is on E 53rd str. Otto believed that MoMA is on E 53rd str even before consulting his notebook. (Not occurrent, waiting to be accessed)

C&C: Alternative is strange
"Otto has no belief about where MoMA is until he consults the notebook"
Otto constantly uses his notebook (like memory)
The information about MoMA is reliably available; it does not 'disappear'
He automatically endorses it (without questioning it)
His notebook plays the same function as memory

= Parity. No reason not to see the notebook as part of the mind (believing/remembering process)

Argument for extended belief
The notebook plays the same role for Otto as memory for Inga
"Beliefs can be constituted partly by features of the environment, when those features play the right sort of role in driving cognitive processes"
Functionalist notion of belief: no reason the function must be played from inside the body
Beliefs as dispositions, not occurrences: there waiting to be accessed

Possible Objections:
Inga has more reliable access?
No (e.g., brain damage; drunk)

Otto's beliefs come and go when he puts away the notebook?
Only if the notebook were unavailable; it must be available in the relevant situations (just like memory)

Inga has direct access: knows her beliefs by introspection. Otto uses perception.
No: Otto and notebook directly coupled.
Doesn't matter: no impact of phenomenology on the status of belief

= Shallow differences
Argument for insufficiency of reductionism: against limitations of our current concepts and theories for understanding human consciousness

Reductionism = trying to reduce A to B
(Reduction of the mental to the physical = Physicalism/Materialism)

"Any reductionist program has to be based on an analysis of what is to be reduced.
If the analysis leaves something out, the problem will be falsely posed - it is useless to pose the defense of materialism on any analysis of mental phenomena that fails to deal explicitly with their subjective character.
For there is no reason to suppose that a reduction which seems plausible when no attempt is made to account for consciousness can be extended to include consciousness" (p. 323).

Nagel: "fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism" (p. 323).

Subjective character of experience

Example: Bat
Bats perceive through a sonar: echolocation
The sonar is a different 'sense' medium; there is no reason to think it is subjectively like anything we can experience or imagine (p. 324).

Starting point: Realism about experiences: there are experiential/phenomenological facts

Phenomenological facts are both
objective (what the quality of the experience is) and
subjective (what the quality of experience is like from the point of view of the experiencing subject)
The subjectivity of these facts is a crucial aspect, or the real nature, of the experience.

As objective facts, they could be accessed by others
As subjective facts, they can only be accessed by someone 'like me'/sufficiently similar to adopt that person's point of view (p. 325)

"If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity - that is, less attachment to a specific viewpoint - does not take us nearer to the real nature of the phenomenon, it takes us farther away from it" (p. 327).

Scientific language - 3rd person, objective, bird's-eye view - will then take us farther away from the experience.
Therefore, reductionism fails.

Conclusion: inadequacy of physicalist hypotheses.
Does it mean physicalism is false?
No: it follows that physicalism is a position we cannot understand (p. 328).
Phenomenal (P-) Consciousness: Cannot define, can only point to it:
Qualia, raw feels, 'What it is to be like', Whatever is experienced; e.g., sensations, feelings, perceptions, thoughts, wants, emotions
Access (A-) Consciousness: All items of access consciousness are representational. A state is A-Conscious when its content is:
informationally promiscuous (available to other parts of the brain for use in reasoning),
poised for rational control of action,
reportable; e.g., perception, sensation, etc. as information that can be used in modifying behaviour.
P-Consciousness contains qualia/experience; A-Consciousness contains information that can be used to control action

Can A-Consciousness and P-consciousness come apart?
Yes: Thought experiment of philosophical "zombies" (shortly)
Yes: Real cases

A-Consciousness without P-Consciousness
Blindsight: Patients claim to be blind: they perceive no visual images, yet they respond successfully (unimpaired functioning)

P-Consciousness without A-Consciousness
Mental processing of background noise, e.g., noise of the pneumatic drill outside your window. You don't notice it until you become aware of the drill and realize that you have been hearing it for a long time.

A-conscious without P-conscious:
If you are A-conscious but not P-conscious, you can use information for rational thought, but you have no experience of this information. Does this differ from unconscious information processing?
P-conscious without A-conscious:
You are 'aware' but not consciously aware - is it a contradiction?
Case: Sleepwalking
Sleepwalkers have their eyes open and use vision to navigate the world. Visual information is poised for use in action. Sleepwalkers can eat, drink, even drive a car. But if you speak to them, they are slow or unresponsive and seemingly unaware of what they are doing.
Are they A-conscious? P-Conscious? Is there anything it is like to sleepwalk?
'Computationally' identical to people: they act like people, talk like people
A-conscious but not P-conscious: dead inside, have no experience
There is nothing it is like to be a zombie
(Note: Many people would say that zombies have no consciousness)

A 'zombie' is a creature that is physically and behaviourally indistinguishable from us, but that has no conscious experiences.
Physicalism is the theory that mental states and processes (logically) supervene on physical states and processes
I.e., any physical duplicate of me would also be a psychological duplicate
If Physicalism is true, zombies are not conceivable
But zombies are 'conceivable'
So, Physicalism is false (also true of epiphenomenalism)

The Zombie Argument against Functionalism

Mental properties can't be logically derived from physical properties (physics and biology).
"Since my twin is an exact physical duplicate of me, his inner psychological states will be functionally isomorphic with my own (assuming he is located in an identical environment). Whatever physical stimulus is applied, he will process the stimulus in the same way as I do, and produce exactly the same behavioral responses."
"On the assumption that non-phenomenal psychological states are functional states (definable in terms of their role or function in mediating between stimuli and behavior), my zombie twin has just the same beliefs, thoughts, and desires as I do. He differs from me only with respect to experience. For him, there is nothing it is like to stare at the waves or to sip wine."

Are Zombies Conceivable? The Modal Argument
Modal Logic: Saul Kripke
The Zombie argument depends only on what is logically possible, not on what is actually the case or what is necessarily the case.
The modal argument: "No amount of physical information about another logically entails that he or she is conscious or feels anything at all."
Modal intuition: zombies are imaginable

The Zombie Argument: Conclusion
Zombies are physically and functionally identical to us, but have no conscious experience or qualia
If Zombies are conceivable, then it is possible to imagine a world in which all the physical and functional facts are the same, but there is no conscious experience. Then conscious experience is independent of the physical and functional facts, and cannot be explained in terms of them.
So, experiential phenomena (e.g., qualia) are over and above physical phenomena
Are zombies really conceivable?
If I cannot rely on your physical make-up or behaviour, how can I tell that you are not a zombie?
Intuition pump 1: watching you eat cauliflower
Someone who hates cauliflower watches someone else eat cauliflower. It leads the person to wonder how someone could possibly enjoy the taste.

Hypothesis: cauliflower tastes different to them. Plausible: surely things can taste different when mixed, or if one has a different palate.

But! If these processes are subjective in nature, how could we narrow them down to some fundamental properties? I could posit a distinct 'quale' for each time I subjectively experienced something.

Big mistake: presuming we can isolate qualia from everything else that is going on, and presume that there is this residual property to be taken seriously.
So are there objective qualia? Think about the 'wine' example from last week.

Intuition pump 2: The wine-tasting machine
It is feasible for us to create a machine that, associating what professional wine tasters find pleasing in wine to the chemical makeup of wine, judges the taste of the wine for us.
However, no qualia believer would accept that our experience of tasting wine is anything remotely similar to the way a robot tastes wine. The arguments defending this are: qualia are "ineffable", "intrinsic", "private", and "directly or immediately apprehensible in consciousness."
But if this is so, how can we be certain that robots do not experience them? - Intuition.
Can a wine-tasting robot appreciate a bottle of Bordeaux 1984?
Can a future AI robot, with cameras, 'see' red?

Intuition pump 3: The Inverted Spectrum
Consider someone whose spectrum has been inverted from birth. What words will they use to describe the experiences they are having?
Dennett: Even the inverted-spectrum person would behave exactly as we do. That is, he would call fire engines 'red', the sky 'blue', etc. People who see green instead of red may use the same word to refer to it, but this confounds the objective qualia that we are supposed to intuit.
There would be no functional or physical difference between us and the colour-inverted person.
How can we even tell, then, that we are not suffering spectrum inversion?

Intuition pump 5+6: The neurosurgical prank and the alternative surgery
Evil neurophysiologist tampered with your neurons. Now grass looks red, sky looks yellow.
Which operation did the neurophysiologist actually perform? Did he change all of my memories of past qualia, or were my qualia switched at the optic nerve?
How do you know?
Both stories are plausible explanations for a perceptive switch, but we couldn't then pinpoint what the "original" qualia were with any accuracy.

Intuition pump 9: the experienced beer drinker
No one likes the taste of beer on the first sip, but people come to like it over time.
The qualia of the beer, if they are said to change with further drinking, actually change with the quantity consumed.
This challenges beer's qualia (its taste) being an "intrinsic" property (a property of the beer independent of relation), and poses it instead as an "extrinsic" property (one that requires relation to the quantity of beer drunk). However, if this can be generalized to all cases, then there are no intrinsic properties about which qualia (as a term) can provide any insight.
= Criticism of intrinsic qualia
Dennett denies that qualia can be both intrinsic/non-relational (2) and directly knowable (4) at the same time.
non-relational: they do not play a part in the kinds of causal relations analysed by physicalists
directly knowable: known non-inferentially from the first-person perspective
Exponents of qualia must claim either
(i) that qualia influence behaviour independently of our beliefs, or
(ii) that qualia influence behaviour in conjunction with our beliefs.
If qualia affect behaviour independently of our beliefs (i), then qualia have to be relational
If qualia directly influence behaviour, then they must play a direct causal role in behaviour (Red quale → "I see red")
If qualia affect our behaviour in conjunction with beliefs, then qualia cannot be directly knowable.
If qualia only influence behaviour when they are conjoined to other beliefs (Red quale + I believe what I am seeing is red → "I see red"), then one cannot know one's qualia without knowing those other beliefs. Qualia can only be known when the surrounding circumstances are known.
Therefore, qualia cannot be both non-relational and directly knowable.

Dennett's approach to qualia
Verificationist argument: the correctness of judgments about one's own qualia cannot be verified.
Conscious experience has no properties that are special in any of the ways qualia have been supposed to be special
Theoretical foundation for believing qualia to be of extra explanatory value is fundamentally flawed
More radical than Wittgenstein: "Qualia are not even 'something about which nothing can be said'; 'qualia' is a philosophers' term which fosters nothing but confusion" (p. 386).

If Dennett is right and it is impossible to tell the difference between Chase and Sanborn, then there is no need to postulate "qualia" to explain the taste-judgments we make.
There are just these judgments themselves, but we can explain these fully in terms of physical and functional facts that are perfectly accessible from a third-person, objective point of view.
"They are not qualia, for the simple reason that one's epistemic relation to them is exactly the same as one's epistemic relation to such external, but readily - if fallibly - detectable properties as room temperature or weight" (p. 396).
Thus there is no special "hard" problem of explaining consciousness. It's just a matter of time until we have a good physicalist explanation of how consciousness works.
What properties remain to be studied? Dispositional properties.

Your Theory of Mind forms a specific functional module of your mind
Imagining, Remembering, Theory of Mind
Modules = domain-specific, special purpose 'cognitive mechanisms'
Characteristics of modules:
Informationally encapsulated (i.e. only receptive to certain kinds of inputs);
Mandatory, high-speed deliveries;
Cognitively impenetrable;
Have fixed neural architecture
Nativism of TT: As a mental faculty, it is innate

Why believe in ToMMs (Theory of Mind Modules)?

"The idea that learning a theory of mind would be enormously difficult is close to received wisdom" (Sterelny, 2003, p. 214).
The acquisition of folk psychology "poses the same degree of a learnability problem as does the rapid acquisition of linguistic skills, which appears to be similarly rapid, universal and without sufficient stimulus from the environment" (Mithen 2000b, p. 490, emphasis added, Carruthers 2003, p. 71, cf. Botterill & Carruthers 1999, p. 52-3).
Chomsky: Poverty of Stimulus argument: The learning mechanisms are too weak to derive the kind of knowledge we have from the kinds of information we get from the outside world
(so the knowledge must be innate - Plato)

Evolutionary Genesis:
Darwinian genesis: Perhaps modules were forged by natural selection. Perhaps they are mother nature's response to the need to solve specific adaptive problems.

The common assumption is that ToMMs "emerged in hominid evolution as an adaptive response to an increasingly complex social environment" (Brothers, 1990).

TT is an ancient cognitive endowment

Fit with developmental data?
As they get older, children become progressively more sophisticated in their understanding of why others act.
Many regard this as evidence that their concepts of mind are developing during ontogeny.
Example: Children pass the Sally-Anne (false-belief) test around age four
"The basic idea is that children develop their everyday knowledge of the world by using the same cognitive devices that adults use in science. In particular, children develop abstract, coherent, systems of entities and rules, particularly causal entities and rules. That is, they develop theories. These theories enable children to make predictions about new evidence, to interpret evidence, and to explain evidence. Children actively experiment with the world, testing the predictions of the theory and gathering relevant evidence" (Gopnik, 2003)

Similarity with Modular TT: children theorize to make sense of the world (and others)

Difference with Modular TT: FP is not innate, but learned. FP is a product of social instruction.
Children gradually acquire the theories through the scientific method: observation, testing, and revising the theory

Gopnik: "The same mechanisms used by scientists to develop scientific theories are used by children to develop causal models of their environment"

Example: Children's understanding of objects and object appearances is highly theoretical: making inference rules, such as 'if the object is occluded, then it appears invisible'. Same for understanding other minds: if he cries, then he is sad.

Modular Theory Theorist's response to developmental data:
Nativists: young children must already have the concept of belief, because it cannot be learned. So we interpret the evidence differently.
Core ToM is already in place early on, only children's performance develops (Fodor 1992/1995)
"The child's theory of mind undergoes no alteration; what changes is only his ability to exploit what he knows" (Fodor 1995, p. 110).
"the 3-year old does indeed have a metarepresentational notion of BELIEF which is simply obscured by performance limitations" (Scholl & Leslie 1999, p. 147).
The core knowledge base is there. It only matures. How else to explain that children achieve the same mindreading abilities at much the same age? (Botterill & Carruthers, 1991, p. 80-81)

We understand reasons for action and ascribe mental states not by theorizing about the other, but by replicating a target's thoughts/feelings in ourselves imaginatively.

Understanding minds essentially involves modeling those minds by making use of our own mind.

Introspective Modeling: using ourselves as a model for the other - i.e. we 'put ourselves in their shoes'
Offline Practical Reasoning: we put our practical reasoning mechanism to a new use. We can get at another's reason for action (to understand/ predict/explain that action) by
feeding our own practical reasoning mechanism pretend beliefs and desires as 'inputs';
using the resulting 'output' to predict or explain the other's action rather than to produce an action of our own
The "great virtue of a simulation approach to understanding others is that, unlike the familiar versions of TT, it can explain how mirroring processes may directly influence our efforts to anticipate and to understand another's behavior; that is, without first issuing in judgments about the other's mental states, without even requiring possession of a repertoire of mental state classifications" (Gordon 2008, p. 221).
Stage 1: Selection of targets (e.g. activation of the 'mindreading device' or intentionality detectors);
Stage 2: Running the simulation (e.g. off-line practical reasoning);
Stage 3: Final or 'attribution' stage - involving the ascription of mental state concepts.
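The three-stage routine above can be caricatured in a toy sketch (all function names, the decision rule, and the Sally example details are invented for illustration; ST itself posits a sub-personal mechanism, not explicit code):

```python
# Toy illustration of ST's "offline" use of one's own practical reasoning.
# All names and the decision rule are invented for illustration only.

def practical_reasoning(beliefs, desires):
    """The attributor's own decision mechanism: pick an action that,
    according to the agent's beliefs, would satisfy some desire."""
    for action, expected_outcome in beliefs.items():
        if expected_outcome in desires:
            return action
    return "do nothing"

def simulate_other(pretend_beliefs, pretend_desires):
    """Stage 2: run the same mechanism 'offline' on pretend inputs.
    Stage 3: use the output as a prediction of the other's action
    instead of acting on it ourselves."""
    return practical_reasoning(pretend_beliefs, pretend_desires)

# Stage 1: a target is selected - e.g. Sally, who has not seen the
# marble moved, so we feed in her (false) belief as a pretend input.
sally_beliefs = {"look in basket": "find marble"}
sally_desires = {"find marble"}
print(simulate_other(sally_beliefs, sally_desires))  # predicts: look in basket
```

The point the sketch makes is the one Goldman stresses: no psychological law is stored anywhere; the same decision routine serves both for acting and, on pretend inputs, for predicting.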
Belief-desire machinery (the 'practical reasoning mechanism')
Assuming we are alike ('Like-me' hypothesis)
This makes simulation process-driven, not theory-driven: sequences of mental states are driven by the same cognitive process, not by an implicit body of theoretical knowledge.
ST "imputes to the attributor no knowledge of psychological laws ... [not even] any want-belief-decision generalization" (Goldman 2006, p. 19).
The possession and use of the central FP principles is unnecessary because such work can be done by directly manipulating our own mental states and exercising structured practical reasoning abilities
ST posits one and the same sub-personal mechanism in order to explain:
how we deliberate and generate intentions to act;
how we consider possible actions in counterfactual situations;
how we manage to predict and explain the actions of others.
ST assumes that practical reasoning-mechanisms are already in place and are ancient enough to allow for simulation to be an inherited capacity.