This series aims to publish high-quality books that contribute to a better understanding of phenomena in morphology, syntax, and the interplay between the two. Since the recent history of linguistics has shown that basic assumptions may change rather quickly, the ideal submission to this series is one that separates the data discussion, which results in pre-theoretic generalizations, from the analysis. Such a separation ensures that published books will retain some value even if the linguistic framework is radically changed or becomes irrelevant one day. Separating the description of the phenomenon from the analysis also ensures that readers who are not experts in more formal frameworks will find the books useful. Such a separation is not a strict precondition for acceptance in this series, but rather a suggestion. In any case, the work should reflect a good understanding of the data and provide some new insights.
The data discussion should be theoretically informed: positive and negative examples appearing in the literature on the phenomena under discussion should be included, regardless of the specific framework used in the publication. Controversial examples should ideally be backed up by experimental or corpus data.
We invite submissions from all frameworks and theoretical schools. Thorough formalization of the kind known, for example, from work in HPSG, LFG, or TAG is not required but highly welcome. Work that is backed up by a computer-processable implementation is also eligible for publication in the subseries Implemented Grammars.
Synopsis:
Dan Everett is a renowned linguist with an unparalleled breadth of contributions, ranging from fieldwork to linguistic theory, including phonology, morphology, syntax, semantics, sociolinguistics, psycholinguistics, historical linguistics, philosophy of language, and philosophy of linguistics. Born on the U.S.–Mexico border, Daniel Everett faced much adversity growing up and was sent as a missionary to convert the Pirahã in the Amazonian jungle, a group of people who speak a language that no outsider had been able to become proficient in. Although no Pirahã person was successfully converted, Everett successfully learned and studied Pirahã, as well as multiple other languages of the Americas. Ever steadfast in pursuing data-driven language science, Everett debunked generativist claims about syntactic recursion, for which he was repeatedly attacked. In addition to conducting fieldwork on many understudied languages and revolutionizing linguistics, Everett has published multiple works for the general public: "Don't Sleep, There Are Snakes", "Language: The Cultural Tool", and "How Language Began". This book is a collection of 15 articles related to Everett's work over the years, released after a tribute event for Dan Everett held at MIT on June 8, 2023.
Synopsis:
This book examines extractions out of the subject, which is traditionally considered an island for extraction. There is a debate among linguists regarding whether the "subject island constraint" is a syntactic phenomenon or an illusion caused by cognitive or pragmatic factors. The book focuses on French, which provides an interesting case study because it allows certain extractions out of the subject despite not being a typical null-subject language. The book takes a discourse-based approach and introduces the "Focus-Background Conflict" constraint, which posits that a focused element cannot be part of a backgrounded constituent due to a pragmatic contradiction. The major novelty of this proposal is that it predicts a distinction between extractions out of the subject in focalizing and non-focalizing constructions.
The central contribution of this book is to offer the detailed results of a series of empirical studies (corpus studies and experiments) on extractions out of the subject in French. These studies offer evidence for the possibility of extraction out of the subject in French, but they also reveal a clear distinction between constructions: while extractions out of the subject are common and highly acceptable in relative clauses, this is not the case for interrogatives and clefts.
Finally, the book proposes a Head-Driven Phrase Structure Grammar (HPSG) analysis of subject islands. It demonstrates the interaction between information structure and syntax using a representation of information structure based on Minimal Recursion Semantics (MRS).
Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. This volume has seven parts. Part I, Overview and Introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical Phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final part, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.
Synopsis: It is well-known that derivational affixes can be highly polysemous, producing a range of different, often related, meanings. For example, English deverbal nouns with the suffix -er can denote instruments (opener), agents (writer), locations (diner), or patients (loaner). It is commonly assumed that this polysemy arises through a compositional process in which the affix interacts with the semantics of the base. Yet, despite intensive research in recent years, a workable model for this interaction is still under debate.
In order to study and model the semantic contributions of the base and of the affix, a framework is needed in which meanings can be composed and decomposed. In this book, I formalize the semantic input and output of derivation by means of frames, that is, recursive attribute-value structures that serve to model mental representations of concepts. In my approach, the input frame offers an array of semantic elements from which an affix may select to construct the derivative's meaning. The relationship between base and derivative is made explicit by integrating their respective frame-semantic representations into lexical rules and inheritance hierarchies.
I apply this approach to a qualitative corpus study of the productive relationship between the English nominalizing suffix -ment and a semantically delimited set of verbal bases. My data set consists of 40 neologisms with base verbs from two semantic classes, namely change-of-state verbs and verbs of psychological state. I analyze 369 attestations, extracted from various corpora with a purposeful sampling approach and hand-coded using common semantic categories such as event, state, patient, and stimulus.
My results show that -ment can target a systematically restricted set of elements in the frame of a given base verb. It thereby produces a range of possible readings in each derivative, which ultimately becomes interpretable only within a specific context. The derivational process is governed by an interaction of the semantic elements provided by the base on the one hand with properties of the affix (e.g. -ment's aversion to [+animate] readings) on the other. For instance, a shift from the verb annoy to a result-state reading in annoyment is possible because the input frame of verbs of psychological state offers a RESULT-STATE attribute, which, as is fixed in the inheritance hierarchy, is compatible with -ment. Meanwhile, a shift from annoy to an experiencer reading in annoyment fails because the value range of the attribute EXPERIENCER is fixed to [+animate] entities, so that -ment's animacy constraint blocks the inheritance mechanism.
Furthermore, a quantitative exploration of my data set reveals a likely blocking effect for some -ment readings. Thus, while I have found most expected combinations of nominalization and reading attested, there are pronounced gaps for readings like instrument or stimulus. Such readings are likely to be produced by standardly subject-denoting suffixes such as -er or -ant, which may reduce the probability of -ment derivation. The quantitative analysis furthermore shows that, within the subset of attested combinations, ambiguity is widespread, with 43% of all combinations of nominalization and reading attested only ambiguously.
This book shows how a derivational process acts on the semantics of a given verbal base by reporting on an in-depth qualitative study of the semantic contributions of both the base and the affix. Furthermore, it demonstrates that an explicit semantic decomposition of the base is essential for the analysis of the resulting derivative's semantics.
Synopsis: In most grammatical models, hierarchical structuring and dependencies are considered central features of grammatical structures, an idea which is usually captured by the notion of "head" or "headedness". While in most models this notion is more or less taken for granted, there is still much disagreement as to the precise properties of grammatical heads and the theoretical implications that arise from these properties. Moreover, there are quite a few linguistic structures that pose considerable challenges to the notion of "headedness". Building on the seminal discussions in Zwicky (1985) and Corbett, Fraser & McGlashan (1993), this volume looks more closely at phenomena that are considered problematic for an analysis in terms of grammatical heads. The aim of the book is to approach the concept of "headedness" from its margins. Thus, central questions of the volume relate to the nature of heads and the distinction between headed and non-headed structures, to the process of gaining and losing head status, and to the thought-provoking question of whether grammar theory could do without heads at all. The contributions in this volume provide new empirical findings bearing on phenomena that challenge the conception of grammatical heads and/or discuss the notion of head/headedness and its consequences for grammatical theory in a more abstract way. The collected papers view the topic from diverse theoretical perspectives (among others HPSG, Generative Syntax, and Optimality Theory) and different empirical angles, covering typological and corpus-linguistic accounts, with a focus on data from German.
The volume discusses the breadth of applications for an extended notion of paradigm. Paradigms in this sense are not only tools of morphological description but constitute the inherent structure of grammar. Grammatical paradigms are structural sets forming holistic, semiotic structures with an informational value of their own. We argue that as such, paradigms are a part of speaker knowledge and provide necessary structuring for grammaticalization processes. The papers discuss theoretical as well as conceptual questions and explore different domains of grammatical phenomena, ranging from grammaticalization, morphology, and cognitive semantics to modality, aiming to illustrate what the concept of grammatical paradigms can and cannot (yet) explain.
Head-Driven Phrase Structure Grammar (HPSG) is a constraint-based or declarative approach to linguistic knowledge, which analyses all descriptive levels (phonology, morphology, syntax, semantics, pragmatics) with feature-value pairs, structure sharing, and relational constraints. In syntax it assumes that expressions have a single relatively simple constituent structure. This volume provides a state-of-the-art introduction to the framework. Various chapters discuss basic assumptions and formal foundations, describe the evolution of the framework, and go into the details of the main syntactic phenomena. Further chapters are devoted to non-syntactic levels of description. The book also considers related fields and research areas (gesture, sign languages, computational linguistics) and includes chapters comparing HPSG with other frameworks (Lexical Functional Grammar, Categorial Grammar, Construction Grammar, Dependency Grammar, and Minimalism).
This book addresses the complexity of the Russian verbal prefixation system, which has been extensively studied but not yet fully explained. Traditionally, the different meanings have been investigated and listed in dictionaries and grammars; more recently, linguists have attempted to unify the various prefix usages under more general descriptions. The existing semantic approaches, however, do not aim to use semantic representations to account for the problems of prefix stacking and aspect determination. This task has so far been undertaken by syntactic approaches to prefixation, which divide verbal prefixes into classes and limit complex verb formation by restricting the structural positions available to the members of each class. I show that these approaches have two major drawbacks: the implicit prediction of the non-existence of complex biaspectual verbs and the absence of uniformly accepted formal criteria for the underlying prefix classification. In this book the reader can find an implementable formal semantic approach to prefixation that covers five prefixes: za-, na-, po-, pere-, and do-. It is shown how to predict the existence, semantics, and aspect of a given complex verb by combining LTAG and frame semantics. The task of identifying the possible affix combinations is distributed between three modules: syntax, which is kept simple (only basic structural assumptions); frame semantics, which ensures that the constraints are respected; and pragmatics, which rules out some prefixed verbs and restricts the range of available interpretations. To evaluate the theory, an implementation of the proposed analysis for a grammar fragment using a metagrammar description is provided. It is shown that the proposed analysis delivers more accurate and complete predictions with respect to the existence of complex verbs than the most precise syntactic account.
Synopsis:
The standard view of the form-meaning interfaces, as embraced by the great majority of contemporary grammatical frameworks, consists in the assumption that meaning can be associated with grammatical form in a one-to-one correspondence. Under this view, composition is quite straightforward, involving concatenation of form paired with functional application in meaning. In this book, we discuss linguistic phenomena across several grammatical sub-modules (morphology, syntax, semantics) that apparently pose a problem for the standard view, map out the potential for deviation from the ideal of one-to-one correspondences, and develop formal accounts of the range of phenomena. We argue that a constraint-based perspective is particularly apt to accommodate deviations from one-to-one correspondences, as it allows us to impose constraints on full structures (such as a complete word or the interpretation of a full sentence) instead of deriving such structures step by step.
Most of the papers in this volume are formulated in a particular constraint-based grammar framework, Head-driven Phrase Structure Grammar. The contributions investigate how the lexical and constructional aspects of this theory can be combined to accommodate such form-meaning mismatches across different linguistic sub-theories.
As in many other languages of the world, the definite article in German developed out of an adnominally used demonstrative. The present study reconstructs this functional change, which took place mainly during the Old High German period (750–1050 AD), for the first time with computational and corpus-linguistic methods, on the basis of the five largest Old High German texts from the Referenzkorpus Altdeutsch. The development of the definite article is understood as a constructionalization of the structure [dër + N]: the original demonstrative dër loses its deictic meaning and opens up new contexts of use in which the unambiguous identifiability of the referent is guaranteed even independently of the speech situation. The study shows that this context expansion is substantially influenced by the cognitive-linguistic category of animacy.
The organization of the lexicon, and especially the relations between groups of lexemes, is a strongly debated topic in linguistics. Some authors have insisted on the lack of any structure in the lexicon. In this vein, Di Sciullo & Williams (1987: 3) claim that "[t]he lexicon is like a prison – it contains only the lawless, and the only thing that its inmates have in common is lawlessness". In the alternative view, the lexicon is assumed to have a rich structure that captures all regularities and partial regularities that exist between lexical entries. Two very different schools of linguistics have insisted on the organization of the lexicon. On the one hand, for theories like HPSG (Pollard & Sag 1994), but also some versions of construction grammar (Fillmore & Kay 1995), the lexicon is assumed to have a very rich structure which captures common grammatical properties between its members. In this approach, a type hierarchy organizes the lexicon according to common properties between items. For example, Koenig (1999: 4, among others), working from an HPSG perspective, claims that the lexicon "provides a unified model for partial regularities, medium-size generalizations, and truly productive processes". On the other hand, from the perspective of usage-based linguistics, several authors have drawn attention to the fact that lexemes which share morphological or syntactic properties tend to be organized in clusters of surface (phonological or semantic) similarity (Bybee & Slobin 1982; Skousen 1989; Eddington 1996). This approach, often called analogical, has developed highly accurate computational and non-computational models that can predict the classes to which lexemes belong. Like the organization of lexemes in type hierarchies, analogical relations between items help speakers make sense of intricate systems and reduce apparent complexity (Köpcke & Zubin 1984).
Despite this core commonality, and despite the fact that most linguists seem to agree that analogy plays an important role in language, there has been remarkably little work on bringing these two approaches together. Formal grammar traditions have been very successful in capturing grammatical behaviour but, in the process, have downplayed the role analogy plays in linguistics (Anderson 2015). In this work, I aim to change this state of affairs: first, by providing an explicit formalization of how analogy interacts with grammar, and second, by showing that analogical effects and relations closely mirror the structures in the lexicon. I will show that both formal grammar approaches and usage-based analogical models capture mutually compatible relations in the lexicon.
After dominating the field for about a century following its invention by Baudouin de Courtenay at the end of the nineteenth century, the morpheme is increasingly being replaced by the lexeme in contemporary descriptive and theoretical morphology.
The notion of a lexeme is usually associated with the work of P. H. Matthews (1972, 1974), who characterizes it as a lexical entity abstracting over individual inflected words. Over the last three decades, the lexeme has become a cornerstone of much work in both inflectional morphology and word formation (or, as it is increasingly called, lexeme formation). The papers in the present volume take stock of the descriptive and theoretical usefulness of the lexeme, but also address many of the challenges met by classical lexeme-based theories of morphology.
On Looking into Words is a wide-ranging volume spanning current research into word structure and morphology, with a focus on historical linguistics and linguistic theory. The papers are offered as a tribute to Stephen R. Anderson, the Dorothy R. Diebold Professor of Linguistics at Yale, who is retiring at the end of the 2016-2017 academic year. The contributors are friends, colleagues, and former students of Professor Anderson, all important contributors to linguistics in their own right. As is typical for such volumes, the contributions span a variety of topics relating to the interests of the honorand. In this case, the central contributions that Anderson has made to so many areas of linguistics and cognitive science, drawing on synchronic and diachronic phenomena in diverse linguistic systems, are represented through the papers in the volume.
The 26 papers that constitute this volume are unified by their discussion of the interplay between synchrony and diachrony, theory and empirical results, and the role of diachronic evidence in understanding the nature of language. Central concerns of the volume include morphological gaps, learnability, increases and declines in productivity, and the interaction of different components of the grammar. The papers deal with a range of linked synchronic and diachronic topics in phonology, morphology, and syntax (in particular, cliticization), and their implications for linguistic theory.
In this book, I propose a grammar fragment which accounts for the main properties of two elliptical constructions lacking the verbal head, called gapping (1a) and verbless relative adjuncts (henceforth VRA) (1b), respectively.
(1) a. Jean aime les pommes [et Marie les bananes]. 'Jean likes apples and Marie bananas'
b. Plusieurs personnes sont venues cette semaine, [dont Marie (hier)]. 'Several people came this week, among them Marie (yesterday)'
We argue that elliptical clauses in gapping and VRA constructions do not behave like regular verbal clauses. Their syntactic and semantic properties provide no evidence for deriving this kind of elliptical clause from a complete clause; an analysis in terms of syntactic reconstruction of the missing material is therefore inadequate. Thus, the elliptical clause in both constructions (gapping and VRA) has a specific syntactic behaviour and must be assigned an independent status in the grammar, more precisely the status of a fragmentary clause, that is, a syntactic unit having a propositional content of message type but an incomplete syntax. This dissertation gives new arguments in favor of a semantic reconstruction with parallelism constraints, cf. Ginzburg & Sag (2000), Culicover & Jackendoff (2005).
This work investigates the relationship between the syntactic model and lexical valency properties on the basis of the family of Tree Adjoining Grammars (TAG) and the phenomena of coherence and ellipsis. Like most prominent syntactic models, TAG amalgamates syntax and valency, which often leads to idealizations about realization. It is shown, however, that TAG avoids certain of these idealizations and can directly represent discontinuity in coherent constructions; that, despite this and despite its considerably more restricted expressive power compared to GB, LFG, and HPSG, TAG can be used for a linguistically meaningful analysis of coherent constructions; and that the TAG derivation tree is a sufficiently informative reference structure for the indirect modeling of gapping. Finally, for the direct representation of gapping structures, a tree-based syntactic model, STUG, is proposed, in which syntax and valency are separated but linked.