Formal semantics (natural language)

Formal semantics is the study of grammatical meaning in natural languages using formal tools from logic and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.

Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent.

While often viewed as a subfield of both linguistics and philosophy, formal semantics also incorporates work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning.[1] Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax-semantics interface and crosslinguistic variation.[2][3]

The fundamental question of formal semantics is what speakers know when they know how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words, knowing what the world would have to be like for the sentence to be true. For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking.[1][4]

However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions.[5] In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises an issue of whether Nancy drinks.[6] Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context.[7] Pietroski treats meanings as instructions to build concepts.[8]
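The dynamic view can be illustrated with a toy model in which a context is the set of worlds compatible with what has been said so far, and asserting a sentence updates the context by eliminating the worlds where it is false. This is a drastic simplification of update semantics, and the world encoding is an illustrative assumption.

```python
# Toy dynamic semantics: a context is a set of candidate worlds; asserting a
# sentence updates the context by discarding worlds where the sentence is false.

def update(context, sentence):
    """Return the updated context: the worlds in `context` that survive `sentence`."""
    return {w for w in context if sentence(w)}

# Worlds modeled as frozensets of true atomic facts (illustrative encoding).
worlds = {frozenset(), frozenset({"smokes"}), frozenset({"drinks"}),
          frozenset({"smokes", "drinks"})}

nancy_smokes = lambda w: "smokes" in w
context = update(worlds, nancy_smokes)  # only the smoking-worlds remain
print(len(context))  # -> 2
```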

The principle of compositionality is the fundamental assumption of formal semantics. It states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the denotation of "Nancy", the denotation of "smokes", and whatever semantic operation combines the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that "Nancy" and "smokes" are semantically composed via function application, this analysis predicts that the sentence as a whole is true if Nancy indeed smokes.[9][10][11]
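The simplified analysis above can be written out directly: the proper name denotes an individual, the intransitive verb denotes a function from individuals to truth values, and subject and predicate compose by function application. The set of smokers below is an illustrative model of the facts, not part of the analysis itself.

```python
# Compositional sketch of "Nancy smokes" via function application.

SMOKERS = {"Nancy"}  # illustrative model of the facts

nancy = "Nancy"                    # [[Nancy]]: the individual Nancy
smokes = lambda x: x in SMOKERS    # [[smokes]]: a function from individuals to truth values

# Subject and predicate compose by applying the predicate to the subject:
sentence_denotation = smokes(nancy)
print(sentence_denotation)  # -> True
```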

Scope can be thought of as the semantic order of operations. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position, and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.[12][13][14][15]
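Scope ambiguity can be made concrete by computing both construals of a sentence such as "every student didn't pass," where negation may scope under or over the universal quantifier. The domain and facts below are illustrative assumptions for the example.

```python
# Two scope construals of "Every student didn't pass" over a toy domain.

STUDENTS = ["Ann", "Bo", "Cy"]
PASSED = {"Ann"}                 # illustrative facts: only Ann passed

passed = lambda x: x in PASSED

# Surface scope (every > not): each student is such that they did not pass.
every_not = all(not passed(x) for x in STUDENTS)

# Inverse scope (not > every): it is not the case that every student passed.
not_every = not all(passed(x) for x in STUDENTS)

print(every_not)  # -> False (Ann passed)
print(not_every)  # -> True  (Bo and Cy did not pass)
```

The single surface string thus receives two distinct truth conditions, one per scope construal.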

Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance, in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major topic of research in the government and binding theory paradigm.

Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic.[16][17][18]
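In the possible-worlds tradition that Kratzer's work builds on, "might" is analyzed as existential quantification over accessible worlds and "must" as universal quantification. A toy version, with an illustrative set of accessible worlds, looks like this:

```python
# Toy possible-worlds treatment of modals: "might p" is true iff p holds in
# some accessible world; "must p" is true iff p holds in every accessible world.

def might(p, accessible):
    return any(p(w) for w in accessible)

def must(p, accessible):
    return all(p(w) for w in accessible)

# Worlds as frozensets of facts; the accessible worlds are an illustrative assumption.
accessible = [frozenset({"smokes"}), frozenset()]
nancy_smokes = lambda w: "smokes" in w

print(might(nancy_smokes, accessible))  # -> True  (she smokes in some accessible world)
print(must(nancy_smokes, accessible))   # -> False (not in every accessible world)
```

Kratzer's own proposal enriches this picture by relativizing the accessible worlds to a contextually supplied modal base and ordering source, which this sketch omits.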

The logical analysis of the meaning of declarative sentences began with Aristotelian logic. However, it took until the early 1970s (with the pioneering work of the philosopher and logician Richard Montague) for formal semantics to emerge as a major area of research. Montague proposed a formal system now known as Montague grammar which consisted of a novel syntactic formalism for English, a logical system called Intensional Logic, and a set of homomorphic translation rules linking the two. In retrospect, Montague Grammar has been compared to a Rube Goldberg machine, but it was regarded as earth-shattering when first proposed, and many of its fundamental insights survive in the various semantic models which have superseded it.[19][20][21]

Barbara Partee was one of the founders and major contributors to the field.

Montague Grammar was a major advance because it showed that natural languages could be treated as interpreted formal languages. Before Montague, many linguists had doubted that this was possible, and logicians of that era tended to view logic as a replacement for natural language rather than a tool for analyzing it.[21] Montague's work was published during the Linguistics Wars, and many linguists were initially puzzled by it. While linguists wanted a restrictive theory that could only model phenomena that occur in human languages, Montague sought a flexible framework that characterized the concept of meaning at its most general. At one conference, Montague told Barbara Partee that she was "the only linguist who it is not the case that I can't talk to".[21]

Formal semantics grew into a major subfield of linguistics in the late 1970s and early 1980s, due to the seminal work of Barbara Partee. Partee developed a linguistically plausible system which incorporated the key insights of both Montague Grammar and Transformational grammar. Early research in linguistic formal semantics used Partee's system to achieve a wealth of empirical and conceptual results.[21] Later work by Irene Heim, Angelika Kratzer, Tanya Reinhart, Robert May and others built on Partee's work to further reconcile it with the generative approach to syntax. The resulting framework is known as the Heim and Kratzer system, after the authors of the textbook Semantics in Generative Grammar which first codified and popularized it. The Heim and Kratzer system differs from earlier approaches in that it incorporates a level of syntactic representation called logical form which undergoes semantic interpretation. Thus, this system often includes syntactic representations and operations which were introduced by translation rules in Montague's system.[22][21] However, work by others such as Gerald Gazdar proposed models of the syntax-semantics interface which stayed closer to Montague's, providing a system of interpretation in which denotations could be computed on the basis of surface structures. These approaches live on in frameworks such as categorial grammar and combinatory categorial grammar.[23][21]

Cognitive semantics emerged as a reaction against formal semantics, but there have recently been several attempts to reconcile the two positions.[24]