Sensations: A Defense of Type Materialism by Christopher Hill (excerpt)

Summary

In chapter 2 of his book Sensations: A Defense of Type Materialism, Christopher Hill presents an argument in favor of type materialism – the thesis that mental states are brain states and that mental types are brain types – with regard to sensations. Importantly, Hill’s argument applies only to those mental states that have an accompanying ‘feel’ to them, such as pain (that is, mental states with a qualitative aspect).

Hill’s argument presupposes that what he calls the ‘psychophysical correlation thesis’ is correct: the thesis that for any sensation type S, there is a neural event type N such that it is a law of nature that N accompanies every occurrence of S. Thus, a creature x is experiencing S iff (if and only if) N is occurring in x’s brain. Granting this, Hill advances his argument: the identity theory (which says that sensations just are neural events) best explains the psychophysical correlation thesis. That is to say, the identity theory provides a good explanation of the correlation thesis, and this explanation is better than that of any rival theory.
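
Put schematically (my compression of the thesis as summarized above, not Hill’s own notation): for each sensation type S there is a neural event type N such that, as a matter of natural law,

∀x (x experiences S ↔ N occurs in x’s brain)

The argument then turns on which theory best explains why such lawlike biconditionals hold.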

The rival theories Hill mentions are dualism and what he calls the ‘double aspect theory.’ Dualism is the view that mental events are non-physical, perhaps occurring in a separate mental substance of some sort. Hill avoids delimiting the varieties of dualism and says that all of them fall victim to the same problem: they cannot explain the correlation thesis at all. Why should neural event N always accompany sensation S? The dualist has no answer except to appeal to something mysterious, such as God; and even after such an appeal is made, further questions arise: why should God have made it such that there are two types of thing rather than one? Why did God correlate N with S rather than S2? These questions are unanswerable, and thus the dualist is stuck saying that the correlations are simply brute facts, not explicable by any further reasoning. And this lack of explanation is clearly inferior to the positive explanation proffered by the identity theory.

Meanwhile, Hill thinks that the double aspect theory (which, by the way, is a form of token physicalism), according to which mental events are physical events but their qualitative properties are not physical properties but rather intrinsic properties, also fails to explain the psychophysical correlation thesis. The double aspect theory is in better shape than dualism because it can say that a conscious experience E is identical to a neural event N. Where it runs into an explanatory impasse, however, is when one asks why the property of ‘being an experience of type E’ is always correlated with the property of ‘being neural event N.’ The identity theorist can answer this question by explaining that the property of ‘being an experience of type E’ is actually identical with the property of ‘being neural event N.’ The double aspect theorist, meanwhile, cannot explain this correlation. Thus, the identity theory and the double aspect theory can both explain the correlation of mental and physical events, but only the identity theory can explain the correlation of their accompanying properties.

Logical Outline

Argument: The Identity Theory Best Explains the Psychophysical Correlation Thesis (the major premises are right out of the book, whereas the minor premises are the result of my usual argumentative reconstructions):

1. If a theory provides a good explanation of a set of facts, and the explanation is better than any explanation provided by a competing theory, then one has a good and sufficient reason to believe that the theory is true.

2. Type materialism provides a good explanation of the psychophysical correlations that are claimed to exist by the psychophysical correlation thesis.

3. Moreover, the explanation that it provides is superior to the explanations provided by all competing theories.

S1. Type materialism provides an explanation superior to dualism’s.

SS1. Type materialism successfully explains the correlations posited by the psychophysical correlation thesis.

SS2. Dualism does not explain the correlations posited by the psychophysical correlation thesis, but takes them to be inexplicable, brute facts.

SS3. Therefore, type materialism provides an explanation superior to dualism’s. [SS1, SS2]

S2. Type materialism provides an explanation superior to the double aspect theory’s.

SS1. Type materialism can explain the correlation of sensations and brain processes and the correlation of their accompanying properties.

SS2. The double aspect theory can only explain the correlation between sensations and brain processes, not the correlation of their accompanying properties.

SS3. Therefore, type materialism provides an explanation superior to the double aspect theory’s. [SS1, SS2]

S3. Therefore, the explanation type materialism provides is superior to the explanations of all competing theories. [S1 & S2]

4. Therefore, provided that the psychophysical correlation thesis is true, we have good and sufficient reason to suppose that type materialism is true.

Article Summary: “Troubles with Functionalism” by Ned Block

Summary

Block presents his now famous “Absent Qualia Argument” against functionalism. The argument implies that there can be functionally equivalent systems which nonetheless do not have the same mentality, e.g., one system (such as a person) may be in pain, whereas an alternative hypothetical system is not, despite their functional equivalence.

Block begins by describing functionalism. He describes it as a successor to behaviorism in the sense that it, too, specifies mental states in terms of behavioral dispositions. The difference is that functionalism conjoins these behavioral dispositions with relations to other mental states. As such, functionalism is stricter in its criteria for mental state attribution: it requires the system in question to have the right behavioral dispositions plus certain internal states.

Block then introduces the notions of ‘liberalism’ and ‘chauvinism’: liberalism is the problem a theory of mentality faces when it attributes mentality to systems which clearly do not have it. Block thinks behaviorism is such a theory: a behavioral disposition may be necessary for the possession of a certain mental state, but it is not sufficient. Chauvinism, meanwhile, is the problem faced by a theory which withholds attributing mentality to systems which clearly seem to possess it. Block’s example of a theory which falls victim to chauvinism is what he calls ‘physicalism’: the view that mental state types are equivalent to physical state types (I don’t know why he doesn’t just say type physicalism, since this is what he is describing); the theory is chauvinist because it denies mentality to any creature which does not have the same physical structures as we do.

Then Block delimits two sorts of functionalism: Functionalism (with a capital ‘F’) and Psychofunctionalism. Block describes Functionalism as the theory that functional analysis is primarily about the meaning of mental state terms, whereas Psychofunctionalism takes each functional analysis to be an empirical hypothesis. Further, if we characterize each functional property (or state) in the Ramsey sentence of a theory T (see “What is Functionalism?” for an explication of the Ramsey method) as a ‘Ramsey functional correlate,’ then we can say that Functionalism identifies mental states with the Ramsey functional correlates of commonsense psychology whereas Psychofunctionalism identifies mental states with the Ramsey functional correlates of a scientific psychology.

Block then presents two hypothetical examples of systems which can realize the same functional organization as a person, yet to which we are loath to attribute mentality. This in turn seems to indicate that functionalism falls victim to liberalism (attributing mentality to systems which do not in fact have any). The first hypothetical is a ‘homunculi-head’: a body exactly like yours on the outside, with the same set of neurons leading to the head. Yet in the head, a set of tiny men run an operation, complete with a bulletin board with lights that indicate to sub-sets of the men their job in implementing an internal machine table state (i.e., their job in pushing a button connected to an output neuron and putting a card on the wall indicating the next state, which in turn serves as a guide for other little men), such that the body realizes the same machine table state as you do. All the little men are very dumb. On functionalism, the activity of the homunculi-head indicates that it has mentality just like we do; however, we clearly do not want to attribute experiences of consciousness, pain, and all the rest to the homunculi-head. The example seems to indicate that functionalism is bedeviled by the problem of liberalism.

The next hypothetical is a modification of the first: the entire state of China is set up to realize the same machine table state as you do, with each individual member of the state acting like a neuron in your brain. Each person has a radio connected to the artificial body in the previous example and to the other appropriate persons. The radio is connected to the input-output neurons of the artificial body in the appropriate way, and satellites broadcast the current system state of the body. This complex system could realize your functional organization for a brief time, and yet we certainly do not want to attribute to it any mentality.

Block then dispels some ambiguity surrounding his second hypothetical. First, the system is functionally equivalent to a person, because all it is to be functionally equivalent with another thing is to have the same set of input-output-internal state relations as it does, and a Chinese-controlled homunculi-head could meet this criterion, even if just for a moment. Second, the time scale of the functional realization is irrelevant. Sure, the Chinese-controlled homunculi-head would be extremely inefficient, realizing functional states in a slow and haphazard fashion. But all that matters is that the realization is the same as the one that takes place in a person. Block suggests we imagine a person whose mental processes are slowed severely. Now the person and the homunculi-head can realize functional states at the same speed, and the worry is dispelled.

The above hypotheticals were directed at machine state functionalism because they made explicit reference to machine table states of a system. But an extrapolation from the examples allows Block to apply the spirit of the hypotheticals to all versions of functionalism, and this extrapolation relates explicitly to qualitative states (rather than mental states in general). All versions of functionalism say that a qualitative state (e.g., a pain) is a functional state. Yet we can imagine beings in the functional state who aren’t in pain at all. Thus, functionalism is guilty of liberalism in its extension of the ‘is in pain’ predicate to a number of systems which don’t really feel pain.

Logical Outline

Argument: The Absent Qualia Argument

1. Functionalism holds that qualitative states (e.g., pain) are functional states of a system, interrelated with inputs, outputs, and other internal states.

2. If one can imagine a plausible case of a system which realizes the same set of functional states that a person does, yet where we intuitively want to avoid attributing the ability to experience qualitative states to the system, then, prima facie, functionalism is plagued by the problem of liberalism. [from 1]

3. The homunculi head, whether operated by little men or the nation of China, can plausibly be said to realize the same set of functional states as a person, yet intuitively we do not want to attribute the experience of qualitative states to it.

4. Therefore, functionalism is plagued by the problem of liberalism. [3,2]

Article Summary: “Psychophysical and Theoretical Identifications” by David Lewis

Summary

Lewis supplements his original argument for the identity theory of mind (presented in “An Argument for the Identity Theory,” also summarized on this website) by positing a theory of the meaning of mental state terms. In conjunction with the idea that folk psychology is just a term-introducing theory, Lewis’s theory of the meaning of mental state terms implies that these meanings can be used to reduce mental states to physical states.

Lewis begins with an account of theorizing. The account involves an investigator trying to figure out who committed a crime. The investigator introduces terms X, Y, and Z for the people who were responsible, in various ways, for the crime in question. The terms are not defined any further but are treated as if everyone understands their meaning. These terms are T-terms, or theoretical terms, whereas the other terms used to describe the crime are O-terms, or old terms. Lewis points out that this method of introducing terms amounts to treating the terms as existentially bound variables (a point that is relevant later).

Suppose we find out that Plum, Peacock, and Mustard committed the crime in the fashion described, thus making the theory true. We would say of this triplet that they realize the theory. Further, this is the only triplet which realizes the theory: there cannot be some other set of persons which also realized it, thereby committing the same crime as well. Thus we say that the triplet uniquely realizes the theory. Lewis extrapolates from this example to the meaning of theoretical terms more generally: theoretical terms are introduced with an implicit functional definition. Further, we did nothing else to imbue the terms with meaning, and therefore the meanings of the terms just are these implicit functional definitions. So X, Y, and Z in the crime case are just names for the people who occupied causal roles R1, R2, and R3. These are the sole meanings of the theoretical terms.

If a term ends up being realized, then it is a definite description of its realizer. If it is unrealized, meanwhile, it is an improper description. Lewis adds that a term which ends up being an improper description is not thereby meaningless, for the description is satisfied in some possible world or other, just not ours. Further, if the description associated with a theoretical term is almost met by an entity in the world (where a minor revision to the description would make the entity a realization of the term), the entity is a near realization of the term. In these cases, Lewis thinks that near realizations should be treated as realizations. It is only when a description is totally off the mark that we should say a term is non-referential.

Lewis then presents a formalization of this account of the meaning of theoretical terms. The method is a modified version of the Ramsey method for defining theoretical terms, and it proceeds in a series of steps, all stated in formal logic:

1. We begin by noticing that a theory can be written as one long conjunctive sentence in which its theoretical terms appear. Call this the postulate of theory T:

T[t]

2. Then we replace the t-terms with existentially bound variables, yielding what is called the Ramsey sentence of T (Lewis breaks this step in two, first replacing the terms with free variables and then existentially quantifying over them; nothing hangs on rolling the two steps into one here):

(∃x) T[x]

3. Lewis wants to rule out multiple realization (in his example of the crime investigator he states that only one triplet can realize the theory, and he thinks the same holds for theoretical terms generally), so he adds notation stating that there is ‘exactly one’ realization of T:

(∃!x) T[x]

4. Now we formulate what is known as the Carnap sentence of a theory, which is a conditional with the Ramsey sentence as antecedent and the postulate as the consequent:

(∃x) T[x] → T[t]

All this means is that if there is a realization of T, then the t-terms in T name components of its realization.

5. Now we do to the Carnap sentence something similar to what we did to the Ramsey sentence: we add the condition that there be just one, unique realization of T:

(∃!x) T[x] → T[t]

6. Then, the remaining cases, where T is not uniquely realized, can be described with another conditional:

~(∃!x) T[x] → t = *

You can see that the antecedent just means ‘there is no unique realization of T’; the consequent, meanwhile, means that all the t-terms of T are denotationless. Such is the formal method for specifying the meaning of theoretical terms (now called the Ramsey-Lewis method), which Lewis will apply to mental state terms below.
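
To see how this machinery connects to the detective example above, here is a rough instantiation (my own illustration, with the notation compressed; Lewis actually works with n-tuples of terms rather than a single term t):

Postulate: T[X, Y, Z] (the detective’s story, told using X, Y, and Z plus O-terms)
Ramsey sentence: (∃x)(∃y)(∃z) T[x, y, z]
Modified Carnap sentence: if exactly one triple realizes T, then X, Y, and Z name the members of that triple.

So if Plum, Peacock, and Mustard uniquely realize T, it follows that X = Plum, Y = Peacock, and Z = Mustard; and if nothing realizes T, then X, Y, and Z are denotationless.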

The method leads to the reduction of theoretical terms to their referents via two potential avenues. The first is the discovery that the entities posited by some other theory, named by its terms r, realize T. Let T[r] be the sentence expressing this discovery. This sentence, in conjunction with the postulate of T, implies that t = r. Lewis calls T[r] a ‘weak reduction premise’ for T.
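
Schematically, the weak route runs as follows (my compression, with the unique-realization assumption from step 3 made explicit):

T[r] (the terms r of the other theory name realizers of T)
T[t] (the postulate: the t-terms name realizers of T)
(∃!x) T[x] (T is uniquely realized)
Therefore, t = r.

In the psychophysical case, T is folk psychology, the t-terms are mental state terms, and the r-terms are the neurophysiological terms that turn out to describe its realization.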

Another possibility is that the entities named by the terms of another theory are found to uniquely realize T. If so, this premise by itself suffices to identify the t-terms of T with those terms; we do not need to use the postulate of T to carry out the reduction. Lewis calls such a statement a ‘strong reduction premise.’

Lewis concludes by returning to the case of mental state terms. He thinks that mental state terms are best treated as theoretical terms introduced by folk psychology (although he grants that this is not literally how they arose), because doing so best explains the analyticity of folk psychological platitudes and the plausibility of behaviorism. If we accept this, then the meaning of any given mental state term becomes ‘occupant of causal role R,’ where folk psychological platitudes specify the relations inherent to R. And this paves the way for psychoneural reduction: if mental state term M just means ‘occupant of R,’ then empirical investigation will in all probability find a neural state N which occupies R. This will mean that M = N, and the identity theory will be vindicated.

Logical Outline

Argument One: The Meaning of Theoretical Terms

1. A theoretical term is introduced by introducing an occupant of a causal role.

2. After a theoretical term is introduced, it has meaning.

3. Yet, nothing is done to imbue a term with meaning besides introducing it.

4. Therefore, the meaning of a theoretical term is just a statement along the lines of ‘occupant of causal role R‘ (i.e., a functional definition). [1-3]

Argument Two: Folk Psychology As a Term-introducing Theory

1. We should treat folk psychology as a term-introducing theory if it best explains the meaning of mental state terms.

2. Treating f-psychology as a term-introducing theory explains the apparent analyticity of f-psychology platitudes.

3. Treating f-psychology as a term-introducing theory explains the plausibility of behaviorism.

4. Therefore, treating f-psychology as a term-introducing theory best explains the meaning of mental state terms. [2,3]

5. Therefore, we should treat folk psychology as a term-introducing theory. [4,1]

How these conclusions supplement Lewis’s original argument:

Argument Outline – Mental States Equal Physical States

1. For any given mental state M, M fills causal role R.

S1. Folk psychology posits mental states as theoretical terms (from Argument Two).

S2. Theoretical terms just mean ‘occupant of causal role R.’ (from Argument One)

S3. Therefore, a mental state term M just means ‘occupant of causal role R.’ [S1, S2]

2. Because of the explanatory adequacy of physics, only a physical state P could possibly fill causal role R.

3. Therefore, M = P. [1,2]

Article Summary: “Mental Events and the Brain” by Jerome Shaffer

Summary

Shaffer presents a pair of objections to the identity theory of mind – the theory which says that mental events (and properties) are just brain events (and properties). The first objection is a critique of JJC Smart’s ‘topic-neutral’ analyses of mental events, and the second an epistemological objection against the identity theory in general.

Shaffer’s first critique is an attack on Smart’s ‘topic-neutral’ analyses, which were Smart’s way of getting around the worry that mental events, even if they are brain events, would still have irreducibly mental properties. If a mental state could be analyzed in ‘topic-neutral’ terms, that is, terms which characterize the event as a product of stimulus impingement yet do not reveal whether the event is physical or not, then an empirical investigation would be able to identify a physiological event with the topically-neutral mental event, and thus the properties of the event would end up being physical.

Shaffer has three problems with the topic-neutral strategy. For one thing, he does not think it is viable: such an analysis of a mental state is not completable, even in principle. But even if it were completable, Shaffer thinks the resulting definition would be so full of complicated descriptions of physical processes that it would not reflect what people actually mean when they assert the presence of mental events.

Shaffer also questions the appropriateness of characterizing mental events in terms of stimulus impingement. Why suppose that the meaning of a mental event term is captured by such terms? Shaffer thinks, rather, that meaning can be acquired via stimulus but that there is no good reason to suppose it can actually be characterized in terms of that stimulus. He gives an example: we might know how an expression was learned (e.g., ‘seeing stars’) without thereby knowing its meaning at all. And so too for mental state terms: we might know all about the stimulus impingements surrounding a mental state without thereby being able to capture its full meaning in terms of those impingements.

Shaffer then presents an epistemological argument against the identity theory in general, couched in terms of noticing. Shaffer thinks that when a mental event occurs and we subsequently notice some property of it, we are not thereby noticing some property of our brain. The thing being noticed is not stimulus impingement or some state of the brain. Thus it follows that what is being noticed is a non-physical feature of the mental event, even if the event ends up being a physical one. Shaffer clarifies however that such non-physical properties might be reducible to physical terms in the sense that science may establish perfect correlation of mental events and neural events, thereby reducing psychological laws to neurological ones. The properties will still be non-physical, but will be fully explicable in terms of physical laws.

Logical Outline

Argument One: Against Topic-Neutral Analyses

  1. In order to be plausible, topic-neutral analyses need to fully capture the meaning of mental state terms, and they need to be completable.*

  2. Topic-neutral analyses do not capture the full meaning of mental state terms, for they merely describe stimulus impingements related to such terms.

  3. Topic-neutral analyses are not completable because indefinitely many factors would be needed to state the causally sufficient conditions for a mental event’s occurrence.

  4. Therefore, topic-neutral analyses are not completable and in any case do not capture the full meaning of mental state terms. [3 & 2]

  5. Therefore, topic-neutral analyses of mental states are not plausible. [4,1]

Argument Two: Noticed, non-physical properties

  1. If the identity theory were true and mental events (and properties) were just brain events (and properties), then when one noticed a feature of a mental event, that feature would necessarily be physical.

  2. Yet, when one notices a feature of a mental event, one is not noticing something physical such as, say, stimuli or some feature of one’s brain.

  3. Therefore, the identity theory is false. [2,1]

*This premise is an ‘enthymeme,’ or suppressed premise. All this means is that the premise is implicit to the argument and not explicitly mentioned by the author.

Article Summary: “Mad Pain and Martian Pain” by David Lewis

Summary

Lewis presents an account of mental states which, he believes, can accommodate two problematic cases of supposed pain: the case of pain in a madman and the case of pain in a martian. The madman is in the state of pain, yet that state does not occupy the causal role R typically associated with the concept of pain. The martian, meanwhile, is in a different state than humans are in when they experience pain, yet this state occupies R all the same, just as the human pain state does. How can the madman and the martian both be in pain? Lewis builds upon his previous theorizing in order to tackle this question.

Since Lewis is really addressing two distinct questions (how can a madman feel pain, and how can a martian feel pain?), there are two separate but intimately related arguments, one for each case. Lewis’s goal, of course, is to make the two accounts compatible with one another. A naïve identity theory (the thesis that mental states are brain states) will say that the madman is in pain but the martian is not. A naïve functionalist view, meanwhile, will say that the martian is in pain while the madman is not. And simply conjoining two theories so that both come out in pain would be ad hoc and therefore unjustified.

Lewis begins by presenting his proposal for what the concept of pain is: the concept just means ‘occupant of causal role R,’ where R is specified by a conjunction of the state’s causal relations to stimuli, other mental states, and behavior. Importantly, Lewis thinks that the concept of pain (and thus the word ‘pain’) is non-rigid: it does not refer necessarily to the state(s) it actually refers to, but only contingently. That is to say, if neural state N is the referent of the concept pain in the actual world, then in some other world the referent may be N2 rather than N. If this is right, then the concept can refer to different physical states in different possible worlds. And if this in turn is correct, then the concept can refer to different physical states in our own universe, since actualities (things which actually exist in our universe) are just a breed of possibilities.

Lewis thus has a solution to the problem of the martian: the concept pain still refers to the physical state which realizes causal role R in martians, because the concept is non-rigid and can have multiple referents. As such, martians are in pain in the sense that they have a state which realizes R, even though that state is very different from the one which realizes R in us humans. These respective states are, then, ‘pain-in-martians’ and ‘pain-in-humans.’

Under the above scheme, the madman is also in pain, albeit in a different sense than the martian. The madman is in pain because he is in the state which typically realizes R in human beings. In him (say, because his neural wiring is messed up), that state does not in fact realize R. But he is still in the very same state which realizes R (pain) in the rest of us, despite its having different causes and effects in him. He is an exception, and he is handled by the typicality clause in Lewis’s definition of mental states: a mental state is the state which typically realizes causal role R in human beings, or in whatever species is under consideration.

Since both the martian and the madman are in pain under this proposal, albeit in different senses, Lewis still needs to answer the question of which population the realizer of a mental state should be relativized to (in other words, which group do we plug in for ‘Y’ in ‘X is in pain just in case X is in the state which realizes the causal role of pain in Y’?). Lewis answers this question by providing criteria for determining the appropriate population: it should (i) be us, since we developed the concept of pain; (ii) be a natural kind, such as a species; (iii) be a group that X belongs to; and (iv) be a group in which X is not exceptional.
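
Put schematically, using the summary’s own terms (my compression, not Lewis’s wording):

X is in pain iff X is in the state that occupies the causal role R of pain for the appropriate population Y.

For the martian, Y is the martian population, and the state he is in occupies R in that population, so he is in pain (‘pain-in-martians’). For the madman, Y is humankind, and he is in the state that typically occupies R in humans, even though it does not occupy R in him, so he too is in pain (‘pain-in-humans’).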

The above criteria allow for vagueness, or indeterminate cases where it is not clear whether the being in question is in pain or not. Lewis gives an example: suppose there is a sub-population of human beings in whom the state that typically plays the causal role of pain in human beings instead plays the causal role of thirst, and vice versa. If this is so, we might be tempted to think of the sub-population as a group of madmen, or instead as a group of martians. There is no clear way of deciding in which sense they are in pain.

Lewis thinks that the above example parallels the problem of inverted spectra. Some people supposedly see red where we see green, and vice versa. Lewis thinks that such persons, when observing a patch of grass, see red in some sense, since they are in the state which typically realizes the causal role of seeing red in most people, whereas in some other sense they see green, since they are in the state which is typically associated with the causal role of seeing green in people with inverted spectra.

Lewis acknowledges that there is a case that his theory cannot handle: the case of a mad martian who is unique (i.e., who does not belong to any population). If such a being were possible and could experience pain, that pain would fall under neither sense of ‘pain’ established in Lewis’s account. As such, Lewis simply denies that such a case is possible. He says, rather, that it borrows elements from possible cases without itself being possible.

Finally, Lewis responds to a potential objection regarding qualia. The objection states that Lewis’s account fails to capture the ‘what it feels like,’ or phenomenal, aspect of pain: pain is a certain feeling, regardless of what causal role it plays. Lewis responds that he agrees, but only to a limited extent: the state picked out by the concept of pain is both the occupant of the causal role and the feeling. So if a state is ‘pain-in-humans’ (the realization of R in typical human beings), then having that state is feeling it.

Logical Outline

Argument: L-theory (a term I’m using to denote Lewis’s theory presented in the paper) can account for both martian pain and madman pain, a prerequisite for any adequate theory of mind.

1. L-theory can account for martian pain.

S1. A theory which accounts for martian pain has to allow for martians to experience pain (in some important sense of ‘pain’).

S2. L-theory allows for a creature to be in pain just in case it is experiencing a state which realizes R (the causal role defined by the concept ‘pain’) or the state which typically realizes R in the appropriate population.

S3. Under L-theory, martians experience a state which realizes R.

S4. Therefore, L-theory can account for martian pain. [S1-S3]

2. L-theory can account for madman pain.

S1. A theory which accounts for madman pain has to allow for madmen to experience pain (in some important sense of ‘pain’).

S2. L-theory allows for a creature to be in pain just in case it is experiencing a state which realizes R (the causal role defined by the concept ‘pain’) or the state which typically realizes R in the appropriate population.

S3. Under L-theory, madmen experience the state which typically realizes R in human beings.

S4. Therefore, L-theory can account for madman pain. [S1-S3]

3. Therefore, L-theory can account for both martian and madman pain. [1-2]

Article Summary: “An Argument for the Identity Theory” by David Lewis

Summary

David Lewis presents an argument for the identity theory of mind – the view that mental states (Lewis here calls them ‘experiences,’ but I shall call them mental states since that is a more common term) are equivalent (in the strict sense of identity) to neural states. His argument deploys the functional understanding of mental states, which says that such states are characterized by the causal roles they fill. Lewis then advocates for the explanatory adequacy of physics – the view that physics can explain all of the causal relations of physical phenomena. Since mental states cause physical phenomena, and only physical things are needed to explain such a causal relation, mental states just are physical phenomena causing other physical phenomena (e.g., behavior). In simpler terms, take a mental state M. M fills causal role R. Due to the explanatory adequacy of physics, only a physical state P can fill R. Therefore, M = P.

Lewis begins by sketching his theory in relation to an example of a lock. The state of being unlocked is at first characterized in functional terms. That is, ‘being unlocked’ is a functional state. Then, with regard to a specific lock, we can see just what it is which fulfills that functional state. And in the case of, say, a cylindrical combination lock for bicycle chains, the functional state of ‘being unlocked’ will be found to be a proper alignment of slotted discs (a purely physical state). Lewis wants to say that the same is true of the mental states which fill functional states in theories of mind.

Lewis then clarifies the nature of the identity theory of mind in general, defending it against objections. The first point Lewis makes is that the identity theory holds that certain physical states are mental states, not that the physical state is the object of the experience; so the physical state of seeing red is not itself red. Lewis then defuses the objection that since mental states are by analytic necessity unlocated, while physical states are located, mental states cannot be physical states. Lewis thinks this objection is unwarranted because he sees no basis for claiming any form of necessity for the claim that mental states are unlocated.

Lewis then proceeds to objections relating to the difference between neural-state and mental-state ascriptions. The identity theory claims that the two sorts of ascription refer to the same underlying phenomena, not that they do so in the same sense. So the identity theory is not necessarily false just because, in some instances, the truth values of the two sorts of ascription differ. The two sorts of ascription can refer to the same phenomena, albeit in different senses.

Nor, for Lewis, is the identity theory false just because mental-state and neural-state ascriptions are not synonymous, as if the identity of the attributes they predicate could only be established via such synonymy. For having an experience is being in a definite state which fulfills a specific causal role, whereas the attribute predicated of someone when it is said that they are having the experience is the attribute of being in that state.

Lewis then presents the first premise of his argument: the defining characteristics of mental states are their causal roles; a mental state is simply whatever state in fact fills the relevant causal role. He says that this notion is an expansion of the ‘topic-neutral’ analyses of mental states presented by JJC Smart; the only difference is that Lewis makes the causal connections of the state explicit. Such a theory is in opposition to both epiphenomenalism and behaviorism because, unlike those theories, it explicitly holds that mental states are causally efficacious.

Lewis characterizes this view of mental states as a successor theory to behaviorism: like behaviorism, Lewis’s theory takes the causal-role characterizations of mental states to be analytic; unlike behaviorism, it allows mental states to be causes and effects, allows mental states to be defined partly in terms of one another, and can handle exceptions. Behaviorism could not handle exceptions because it insisted that mental states were mere behavioral dispositions; yet a complete paralytic could still be in pain despite not being able to exhibit any behavior. Lewis’s theory skirts the issue by characterizing mental states as the typical (rather than exceptionless) occupants of specific causal roles.

Lewis concludes his defense of his first premise by saying that he is relying upon the analytic statements about mental states inherited from behaviorism when he says that such states are characterized primarily by causal role occupation. It is these causal-centric statements that he is utilizing, all the while avoiding the pitfalls of behaviorism.

Lewis’s second premise is that physics can explain all physical phenomena. That is to say, when a phenomenon occurs in, say, a special science such as cognitive science, that phenomenon can be explained in terms of a more fundamental science, and that explanation in turn can be explained in terms of a still more fundamental science, until the phenomenon is explainable purely in terms of fundamental physics. As such, all causal occurrences can be explained in physical terms. Lewis does not mean to say that no non-physical phenomena can possibly exist. Rather, he thinks that physical states can explain all physical phenomena; if non-physical phenomena exist, they are explanatorily (causally) superfluous. Lewis’s justification for this assertion is that it is a working hypothesis of the natural sciences.

It follows from the two premises of Lewis’s argument that mental states just are physical states. For mental states occupy causal roles, and according to the explanatory adequacy of physics, only physical states can fill those causal roles. As such, mental states just are physical states. Further, it is very likely on this view that the physical states in question are neural states. The argument for the identity theory is thus complete.

Lewis concludes by tackling a potential epiphenomenalist alternative which is consistent with his premises. This view would hold that mental states are non-physical correlates of physical states which are causally efficacious just like the underlying physical states since they are perfectly correlated with them. Lewis denies that such states would be causally efficacious because again, due to the adequacy of physics, there is just no need to posit such a non-physical causal force. Further, even if the theory were correct, it would actually implicate such mental states as duplicates of the physical mental states, rather than as mere correlates of them. And this is a very different position than what the epiphenomenalist wants to argue for.

Logical Outline

Argument Outline – Mental States Equal Physical States

  1. For any given mental state M, M fills causal role R.

  2. Because of the explanatory adequacy of physics, only a physical state P could possibly fill causal role R.

  3. Therefore, M = P.

Article Summary: “What is Functionalism?” by Ned Block

“What is Functionalism?” was originally printed in Readings in Philosophy of Psychology, vol. 1, pp. 171-184 (Cambridge, MA: Harvard University Press, 1980).

Summary

“What is Functionalism” is first and foremost an encyclopedic article; as such it is mostly descriptive rather than argumentative. Nonetheless, in its exploration of the theory of functionalism in the philosophy of mind, it does present an argument against the idea that functionalism supports physicalism – the view that all things and all properties are physical.

Block begins by delimiting several senses of ‘functionalism’ used by philosophers. The first is functional analysis, whereby a system is decomposed into functionally isolated parts and the workings of the system are explained in terms of those parts. Then there is computation-representation functionalism, an explanatory strategy in cognitive science whereby mental states are analyzed as computations over representations. Finally, there is metaphysical functionalism, which examines what mental states are, and in virtue of what states such as pain are similar to one another. The article is concerned solely with this last sense of functionalism.

Block states that metaphysical functionalism identifies mental states with functional states. A functional state is just a specific causal role in a larger system (in this case, a mind) and is intricately linked with stimuli, other mental states, and behavior. As such the functionalist answers the question “In virtue of what are X’s, X’s rather than Y’s?” (where X and Y are types of mental state, e.g., pain) by saying that X’s are all X’s in virtue of their functional role in the systems of which they are a part, regardless of their physical realization.

Then Block proceeds to describe the first version of functionalism, machine functionalism. Presented by Hilary Putnam in the early 1960s, machine functionalism holds that mental states are not just functional states, but more specifically are machine table states of a hypothetical device called a Turing machine. Turing machines are automatons which can, in principle, compute anything computable, and which do so in virtue of what are called ‘system states,’ which are tied to instructions for computational steps (e.g., ‘If in system state S, perform computation C and then transition into system state S2,’ and so on). (Readers are encouraged to look at my summary of Putnam’s article if they want more detail on his presentation of machine functionalism.)
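
To fix ideas, here is a toy machine table of my own (not Block’s or Putnam’s example): imagine a simple vending device with two system states.

If in state S1 and a coin is inserted: emit nothing, then go to state S2.
If in state S2 and a coin is inserted: dispense a candy bar, then go back to state S1.

To be in S1 is not to be made of any particular material; it is just to be in whatever internal condition makes the device respond to input in the way the S1 line specifies. Machine functionalism says that mental states are states of this kind, only tied to a vastly more complicated table.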

Block points out that machine functionalism is not the most general formulation of functionalism available; as such, he presents a more general version, using the tools of formal logic (persons not familiar with these tools might want to skip this part of the summary unless they are overly curious):

1. The first step is to reformulate a theory T of psychology (either ‘folk’ or ‘scientific’) so that it is one long conjunctive sentence in which all mental state terms occur as singular terms (i.e., in the form of ‘has pain’ rather than ‘is in pain’):

T(s1 . . . sn)

‘s1 . . . sn’ are the theory’s mental state terms, occurring as singular terms.

2. Then, variables are put in place of the mental state terms, and existential quantifiers for these variables are prefixed to the theory. This forms what is called the ‘Ramsey sentence’ of the theory (named after Frank Ramsey, the genius who formulated this methodology):

∃x1 . . . xn T(x1 . . . xn)

‘∃x1 . . . xn‘ is the set of existential quantifiers prefixed to the theory, and there should be as many of them as there are mental state terms in the theory. ‘(x1 . . . xn)’ is the set of variables which now sit in the theory where the mental state terms used to.

3. Thus, if ‘x1‘ is the variable that replaced ‘pain’ in the psychological theory, we can now define pain as follows:

Y is in pain iff: ∃x1 . . . xn [T(x1 . . . xn) & Y has x1].

Since the sentence formerly expressing the mental state term ‘pain’ described it in terms of its relations to stimuli, other mental states, and behavior, pain is found to be whatever fulfills the relations which x1 exhibits.

Pain can be expressed as the property ascribed when one says ‘X has pain.’ Then, pain can be identified with the property expressed by the predicate ‘∃x1 . . . xn [T(x1 . . . xn) & Y has x1]’. Block gives us an example: if our psychological theory defines pain as being caused by pin pricks and as causing worry and loud noise emission, with worry in turn causing brow wrinkling, then pain can be identified with the property expressed by the predicate ‘∃x1∃x2[(x1 is caused by pin pricks and causes loud noise emission and causes x2 & x2 causes brow wrinkling) & Y has x1]’.

This is the essence of the Ramsey method for defining mental state terms causally. Block supplements this method with extra notation for convenience’s sake. He establishes that ‘%xFx’ should be understood as ‘being an x such that x is F.’ Once this notation is introduced, we can define pain as follows: pain = %y∃x1∃x2[(x1 is caused by pin pricks and causes loud noise emission and causes x2 & x2 causes brow wrinkling) & y has x1]. This just means that pain is the property of being a y (a thing) that has x1, where x1 is caused by pin pricks, causes loud noise emission, causes x2 . . . etc.
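
In the same notation, the general definition from step 3 above can be rewritten as a property (my restatement, combining the two displays):

pain = %y ∃x1 . . . xn [T(x1 . . . xn) & y has x1]

That is, pain is the property of being a thing that has whatever state occupies the x1 role in the theory T.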

Block then clarifies functionalism’s relation to behaviorism and traces the historical origins of functionalism. Behaviorism is the doctrine that mental states are to be analyzed purely in terms of behavior and behavioral dispositions: desiring an ice cream cone is just the disposition to grab a cone when one is present, and nothing more. Functionalism is importantly related to behaviorism because it, too, analyzes mental terms using behavioral responses to stimuli. The difference is that functionalism also refers to other mental states; further, these other mental states are interlinked with each other, with stimuli, and with behavior in a web of causal relations.

Block traces the development of functionalism to two developments in the philosophy of mind. The first is the ‘multiple realizability’ argument, originating with Hilary Putnam in his presentation of machine functionalism and subsequently used by Jerry Fodor in developing a method of explanation in psychology. The multiple realizability argument states that a mental state is unlikely to be a physical structure (such as a brain state) because it is eminently plausible that different physical structures – non-brain structures, for example – can realize mental states. So, pain is unlikely to just be C-fiber stimulation (or some other appropriate brain state), because octopuses and other such creatures can probably feel pain, despite their not having C-fiber stimulatory capacity. This led to the development of functionalism, which promised to unify physically different phenomena under the banner of causal (functional) similarity.

The second historical development leading to functionalism’s rise was JJC Smart’s development of ‘topic-neutral’ analyses of mental states. Smart was arguing for the identity theory of mind (see my summary of his article), the view that mental states equal brain states in the strict sense of identity. He needed, however, to respond to the objection that even if mental states were brain states, certain properties of these states were irreducibly mental, such as ‘sharpness.’ Smart responded by analyzing such property concepts in topic-neutral terms, or terms which did not indicate whether the property was physical or not. Other philosophers expanded upon these analyses, turning them into functional definitions of mental state terms (see, e.g., my summary of David Lewis’s philosophy of mind articles).

The introduction to this summary stated that Block argues against the idea that functionalism at all implies physicalism. The argument is strewn about the text and takes the form of two mini-arguments: one against the idea that functionalism supports type physicalism, and the other against the idea that functionalism supports token physicalism. Type physicalism is the view that, e.g., pain is a type of physical thing. Token physicalism, meanwhile, is a weaker thesis, holding merely that each individual instance of pain is a physical thing. The former implies the latter, but not vice versa.

Block distinguishes, with labels, those who think functionalism implies that physicalism is false from those who think it implies physicalism is true: functional state identity theorists think physicalism is false because mental states are identical to functional states, which are non-physical. Functional specification theorists, meanwhile, think that physicalism is true because functional definitions (where pain = the occupant of causal role R, in the manner specified by the Ramsey method) lead us to the physical mechanisms which fulfill the causal role in an organism. This leads to the view that pain-in-humans is C-fiber stimulation, since that is what occupies causal role R in humans, and so on for each species.
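
Schematically, in the notation of the Ramsey section above (my compression, not Block’s own formulation): if, in humans, C-fiber stimulation turns out to be the state that satisfies the open sentence associated with x1 (it is what is caused by pin pricks, causes loud noise emission, and so on), then the functional specification theorist concludes that

pain-in-humans = C-fiber stimulation

whereas the functional state identity theorist identifies pain with the role property itself, %y∃x1 . . . xn [T(x1 . . . xn) & y has x1], which is not a physical state.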

Block argues against the idea that functionalism supports type physicalism by pointing out that, even if functional reductions are correct in the sense that a physical mechanism will be found to occupy causal role R in each kind of creature, there is some non-physical facet shared by the various physical mechanisms which makes them pains rather than, say, pleasures. So type physicalism is false because mental state types are irreducibly functional. Meanwhile, even the weaker thesis of token physicalism is not supported by functionalism, in Block’s eyes. This is because there are conceivable entities which have non-physical states (such as soul states) which function just as mental states do. With functionalism negating type physicalism and not implying token physicalism, it seems that functionalism does not support physicalism at all.

Logical Outline

Functionalism does not support physicalism

  1. Functionalism negates type physicalism.

    S1. If type physicalism is true, then mental state types are physical, e.g. pain is a brain state.

    S2. If functionalism is true, then mental state types are functional, i.e., non-physical states.

    S3. A mental state type cannot be both physical and non-physical.*

    S4. Therefore, functionalism negates type physicalism. [S1-S3]

  2. Functionalism does not imply token physicalism.

    S1. In order for functionalism to imply token physicalism, all conceivable tokens of mental state types would need to be physical.

    S2. Yet, non-physical mental state tokens are conceivable, e.g., soul states rather than brain states.

    S3. Therefore, functionalism does not imply token physicalism. [S1, S2]

  3. Therefore, functionalism does not support physicalism. [1,2]
