In what ways is the semantics of a language compositional?
What reasons are there for expecting it to be? 


This is one of those philosophical issues that on first encounter seems so obvious that it is puzzling why anyone should be concerned about it.  The semantics of a language is the study of meaning as it is used in linguistic communication between humans.(1)  The principle of compositionality (as applied to language) is that the meaning of a complex expression is fully determined by the meanings of its parts and the way those parts are assembled.  Since words are composed of letters (or phonemes), sentences of words, and arguments of sentences, it would seem obvious that the semantics of language is compositional.  "Anything that deserves to be called a language must contain meaningful expressions built up from other meaningful expressions."(2)  

However obvious the compositionality of linguistic semantics might be, the essay title asks in what particular ways it is compositional, and what reasons there are for supposing it to be.  I will deal first with the arguments offered in support of the hypothesis of compositionality, and then outline some aspects of the debate over the ways it is compositional.

Three main arguments support the compositionality of linguistic semantics.  The first is the argument from productivity, the second is the argument from systematicity, and the third is a pragmatic argument I take up afterwards.  The argument from productivity is (as one would expect) an ancient one.  In the modern tradition of philosophy of language, it can be found voiced by Frege -- "the possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words."(3)  The argument is an inference to the best explanation, based on the obvious fact that we can understand never-before-encountered sentences composed of words we have encountered before.  The argument can be rephrased to suit whatever theory of meaning is chosen.  It will work with a Chomsky-style word-level notion of meaning, or with a Quine-style holistic sentence-level notion of meaning.

The argument from productivity is sometimes couched in more ambitious terms.  Since natural languages supposedly contain an infinity of complex expressions, and we are finite beings, linguistic semantics must therefore, by necessity, be compositional.  Unfortunately, both premises of this version of the productivity argument are contentious.  Although any natural language, like English, clearly contains a large number of complex expressions, it is debatable whether it contains an infinity of expressions that are comprehensible.  There is some empirical evidence that once linguistic expressions get too complex, we lose track of the meaning of earlier parts.  (I can't count the number of times I have encountered paragraph-long sentences in philosophical works that are intelligible only when broken down into simpler pieces.)  Moreover, the argument from productivity requires neither an infinite corpus of intelligible expressions nor an ability on our part to understand every such expression.  The argument supports the contention that linguistic semantics is compositional so long as most of the expressions we have never encountered can be understood on the basis of prior familiarity with the meanings of their constituent words.

Another aspect of the argument from productivity is the fact that natural languages are learnable.  As children we learn the meanings of a smallish collection of words, phrases, and idioms.  From there on, our learning and language understanding are productive.  We can learn new words, phrases, and idioms through their role in context.  And we can comprehend never-before-encountered expressions, based on what we already know about the meanings of the words, phrases, and idioms previously learned.

The argument from systematicity is based on the fact that there are definite and predictable patterns among the sentences we understand.  In linguistics, the study of grammar is the study of the set of structural rules that governs the composition of clauses, phrases, and sentences in a given natural language.  Recent advances in computerized speech recognition and automatic typing have depended on developments in computational linguistics involving detailed modelling of English grammar.(4)  The argument is that by knowing a limited set of rules of grammar, and the meanings of a limited set of words (phrases, and idioms), we can construct and understand a large number of never-before-encountered sentences.
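The point can be made concrete with a toy model (the lexicon, the grammar rule, and the miniature "world" below are all invented purely for illustration).  A small set of word meanings plus a single composition rule suffices to interpret sentences that appear nowhere in any list -- which is just the productivity and systematicity the arguments appeal to:

```python
# A minimal sketch of compositional semantics.  Names denote entities,
# verbs denote predicates (functions from entities to truth values), and
# one grammar rule (S -> Name Verb) composes them.  The meaning of a
# sentence is computed entirely from its parts and their mode of assembly.

lexicon = {
    "Fido":   "fido",                      # names denote entities
    "Rex":    "rex",
    "barks":  lambda x: x in {"fido"},     # verbs denote predicates
    "sleeps": lambda x: x in {"rex", "fido"},
}

def meaning(sentence):
    """Rule S -> Name Verb: apply the verb's predicate to the name's entity."""
    name, verb = sentence.split()
    return lexicon[verb](lexicon[name])

# Never-before-encountered combinations are interpreted automatically:
print(meaning("Rex barks"))    # False
print(meaning("Fido sleeps"))  # True
```

With four lexical entries and one rule, all four Name-Verb sentences are interpretable, and adding a fifth word would extend coverage without any sentence being stipulated individually.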

As with the argument from productivity, the argument from systematicity is sometimes couched in more ambitious terms.  Kent Johnson(5), for example, takes an overly narrow view of language.  Although he agrees that "any version of the systematicity of natural language presupposes some theory of linguistic kinds of expressions"(6), he insists on viewing the systematicity of language in an all-or-nothing way.  He argues that the claim that natural languages are systematic is the claim that a theory must provide a "natural non-overlapping linguistic categorization of all the expressions"(7).  But as with the argument from productivity, the argument from systematicity does not require an exhaustive, non-overlapping grammatical categorization of expressions.  Using context and conversational implicature, combined with the grammatical rules we do know, we are capable of "translating/interpreting" ungrammatical linguistic expressions (or expressions with competing grammatical analyses) into "propositions" that we do understand.  (I use "propositions" here in the unproblematic sense of whatever it is that we think about when we understand a sentence.)  So the systematicity of language need be neither exhaustive nor non-overlapping.  The systematicity argument supports the contention that linguistic semantics is compositional so long as most of the expressions we have never encountered can be understood on the basis of prior familiarity with some systematic rules of grammar we have learned, together with the context of utterance.

The third argument supporting the compositionality of linguistic semantics is a pragmatic one: the assumption that linguistic semantics is compositional works.  There does not appear to be a better explanation of how we are able to communicate in real time than by assuming that the rules governing the interpretation of language are compositional.  In fact, given the nature of evolutionary selection, it is plausible to assume that the compositional rules are minimally complex.  Linguists have adopted this principle of compositionality as a working hypothesis, and their theories have offered satisfactory explanations for much linguistic performance data.  Whenever it has been argued that certain phenomena entail abandoning the principle, it has often subsequently been shown that this is not so(8).

Having thus defended the assumption that the semantics of language is indeed compositional, I can now address the first question in the essay's title and discuss in what ways the semantics of language is compositional. 

In the philosophy of language, the traditional orthodoxy of Frege, Wittgenstein, Quine, and Dummett is that the kernel of linguistic meaning is the sentence, and the sentence in use by a linguistic community.  The meaning of individual words is derived from how those words contribute to the sentences in which they are used.  The semantics of language is thus compositional both upward (from the meanings of words to the meanings of newly constructed or encountered sentences) and downward (from the meanings of previously uttered or encountered sentences to the words contained in them).  Words thus acquire a "public meaning" constituted by the contribution of those words to the sentences that have in the past been used by a population to communicate meaning.  The meaning of a newly encountered sentence is composed from the "public meaning" of the words constituting the sentence, and the applicable rules of grammar.

The alternative approach to the compositionality of the semantics of language is that pioneered by Chomsky.  In response to Dummett's argument for the real existence of "a language", Chomsky argues:

"The concept of language that Dummett takes to be essential involves complex and obscure sociopolitical, historical, cultural, and normative-teleological elements.  Such elements may be of some interest for the sociology of identification within various social and political communities and the study of authority structure, but they plainly lie far beyond any useful inquiry into the nature of language or the psychology of users of language."(9)

Robert Stainton takes Chomsky's logic as it applies to the problem of individuating languages and extends it to the problem of individuating words.  Given that the only empirical evidence available is the audible string of phonemes (or its written or visual equivalent), there is just as much of a challenge in individuating the words being interpreted/translated as there is in individuating the language(s) involved. 

"Because there is no objective way to individuate/count words (across or within a 'dialect'), and because what makes something a shared, public word, if there really were any, would need to appeal to 'ought' rather than 'is', the Chomskian concludes that there aren't really any 'public words'."(10)

In other words, there simply is no way to delineate a "linguistic community" that has a stock of previously encountered sentences with associated meanings from which the "public meaning" of individual words can be derived.  If this Chomsky-inspired criticism of the current orthodoxy is valid, and I can see no reason why it is not, then there is no uniquely existent linguistic population within which the use of a particular sentence can provide the "public meaning" of the words employed.  There is only an "utterer" and the utterer's "audience", who may or may not have learned to associate a given meaning with a given symbol.

What we are left with is a set of symbols (whether spoken, written, or otherwise) that are employed by an "utterer" (to generalize over speaker, writer, etc.) with the intention of communicating some ideas (thoughts, concepts, propositions, whatever) to an audience.  The communication succeeds only if, and to the extent to which, the audience somehow comes to associate with those symbols an idea sufficiently close to the one intended by the utterer.  Quine is right that there is no external, objective, ontological entity called "a meaning".  But it does not follow that when a symbol is employed to communicate an idea to an audience, there is no "meaning" intended in the mind of the utterer and no "meaning" comprehended in the mind of the audience (which may or may not be similar).  Contra the current orthodoxy, there is no uniquely identifiable linguistic community.  There are only the symbols you have learned, the meanings you have learned to attach to those symbols, and the rules of grammar you have acquired for putting the words together to communicate complex meanings.  So in the Chomskian view, the semantics of language is upwardly compositional only.

The current orthodoxy views meaning as "external" -- not mentalistic.  Meaning is given by such things as truth-conditions (Davidson) or assertability conditions (Dummett).  The meaning of names is given by a causal chain back to the initial baptism of the thing being named (Kripke).  The Chomskian view sees meaning as "internal" -- clearly mentalistic, involving intentions and desires.  Thus, in the semantics of language, the traditional orthodoxy considers the principle of compositionality to be only one of many external influences that govern the semantics of language.  The semantics of a complex expression is not fully determined by the meanings of its parts and the way those parts are assembled.  Also playing essential roles are the delimitation of the language community, the extant body of previously encountered sentences that provides the meaning of the contained words, and the causal chain that controls the meaning of the names of particulars.  But in the Chomskian view, the semantics of language is strictly compositional -- expanding upward and outward from the intentions and desires of the utterer.  The semantics of a complex expression is fully determined by the meanings of its constituent symbols and the way those symbols are assembled.

So, if the reasons provided for expecting the semantics of language to be compositional are valid, then a Chomskian approach to a theory of meaning is more plausible than the current orthodoxy.


Notes & References

(1)  Wikipedia contributors;  "Semantics" in Wikipedia, The Free Encyclopedia. URL=<>.

(2)  Szabó, Zoltán Gendler, "Compositionality", The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL=<>.

(3)  Frege, F.L.G.; "Letter to Jourdain" in G. Gabriel et al. (eds.), Philosophical and Mathematical Correspondence, Chicago University Press, Chicago, Illinois. 1980. p. 79.

(4)  Wikipedia contributors;  "Computational linguistics" in Wikipedia, The Free Encyclopedia. URL=<>.

(5)  Johnson, K.;  "On the Systematicity of Language and Thought" in The Journal of Philosophy, Vol 101, No 3 (Mar 2004). URL=<>.  pp. 111--139.

(6)  ibid.

(7)  Szabó, Zoltán Gendler, "Compositionality", Op Cit.

(8)  Zimmermann, Thomas E.;  "Compositionality Problems and How to Solve Them" in M. Werning, W. Hinzen, and E. Machery (eds.), The Oxford Handbook of Compositionality, Oxford University Press, Oxford, England. 2012.  pp. 81--106.

(9)  Chomsky, Noam & Smith, Neil;  New Horizons in the Study of Language and Mind, Cambridge University Press, Cambridge, England. 2000. ISBN 978-0-521-65822-5.

(10)  Stainton, Robert J.;  "Meaning and Reference: Some Chomskian Themes", The Oxford Handbook of the Philosophy of Language, Ernest Lepore & Barry C. Smith (eds.), Clarendon Press, Oxford, England, 2006. ISBN 978-0-19-955223-8. p. 920.

