No scientific work proceeds without conceptual foundations. In language science, our concepts about language underlie our thinking and organize our work. They determine our assumptions, direct our attention, and guide our hypotheses and our reasoning. Only with clarity about conceptual foundations can we pose coherent research questions, design critical experiments, and collect crucial data.
This series publishes short and accessible books that explore well-defined topics in the conceptual foundations of language science. The series provides a venue for conceptual arguments and explorations that do not require the traditional book-length treatment, yet that demand more space than a typical journal article allows.
We welcome original submissions, as well as expanded versions of previously published full-length articles or chapters that fit the series theme. Topics may cover any conceptual or theoretical issue of importance for research on language, from sound to syntax to semantics, from language contact to acquisition to the ethnography of speaking. To be considered for this series, a book must be short (length around 35,000 words, or 90 pages) and must be written in clear, accessible prose, to maximize its appeal across the fields of language science. Queries should be sent to the series editors.
More information can be found at conceptualfoundations.org.
Research in linguistics, as in most other scientific domains, is usually approached in a modular way – narrowing the domain of inquiry in order to allow for increased depth of study. This is necessary and productive for a topic as wide-ranging and complex as human language. However, precisely because language is a complex system, tied to perception, learning, memory, and social organization, the assumption of modularity can also be an obstacle to understanding language at a deeper level. This book examines the consequences of enforcing non-modularity along two dimensions: the temporal, and the cognitive. Along the temporal dimension, synchronic and diachronic domains are linked by the requirement that sound changes must lead to viable, stable language states. Along the cognitive dimension, sound change and variation are linked to speech perception and production by requiring non-trivial transformations between acoustic and articulatory representations.
The methodological focus of this work is on computational modeling. By formalising and implementing theoretical accounts, modeling can expose theoretical gaps and covert assumptions. To do so, it is necessary to formally assess the functional equivalence of specific implementational choices, as well as their mapping to theoretical structures. This book applies this analytic approach to a series of implemented models of sound change. As theoretical inconsistencies are discovered, possible solutions are proposed, incrementally constructing a set of sufficient properties for a working model. Because internal theoretical consistency is enforced, this model corresponds to an explanatorily adequate theory. And because explicit links between modules are required, this is a theory, not only of sound change, but of many aspects of phonological competence.
The book highlights two aspects of modeling work that receive relatively little attention: the formal mapping from model to theory, and the scalability of demonstration models. Focusing on these aspects of modeling makes it clear that a specific theory of sound change is impossible without a more general theory of language: of the relationship between perception and production, the relationship between phonetics and phonology, the learning of linguistic units, and the nature of underlying representations. Theories of sound change that do not explicitly address these aspects of language make tacit, untested assumptions about their properties. Addressing so many aspects of language may seem to complicate the linguist's task. However, as this book shows, it actually helps impose boundary conditions of ecological validity that reduce the theoretical search space.
This volume provides an up-to-date discussion of a foundational issue that has recently taken centre stage in linguistic typology and which is relevant to the language sciences more generally: To what extent can cross-linguistic generalizations, i.e. statistical universals of linguistic structure, be explained by the diachronic sources of these structures? Everyone agrees that typological distributions are the result of complex histories, as “languages evolve into the variation states to which synchronic universals pertain” (Hawkins 1988). However, an increasingly popular line of argumentation holds that many, perhaps most, typological regularities are long-term reflections of their diachronic sources, rather than being ‘target-driven’ by overarching functional-adaptive motivations. On this view, recurrent pathways of reanalysis and grammaticalization can lead to uniform synchronic results, obviating the need to postulate global forces like ambiguity avoidance, processing efficiency or iconicity, especially if there is no evidence for such motivations in the genesis of the respective constructions. On the other hand, the recent typological literature is equally rife with talk of "complex adaptive systems", "attractor states" and "cross-linguistic convergence". One may wonder, therefore, how much room is left for traditional functional-adaptive forces and how exactly they influence the diachronic trajectories that shape universal distributions. The papers in the present volume are intended to provide an accessible introduction to this debate. Covering theoretical, methodological and empirical facets of the issue at hand, they represent current ways of thinking about the role of diachronic sources in explaining grammatical universals, articulated by seasoned and budding linguists alike.
Currently, there are two prominent schools in linguistics: Minimalism (Chomsky) and Construction Grammar (Goldberg, Tomasello). Minimalism claims that our linguistic capabilities consist of an abstract, binary combinatorial operation (Merge) and a lexicon. Most versions of Construction Grammar assume that language consists of flat phrasal schemata that contribute their own meaning and may license additional arguments. This book examines a variant of Lexical Functional Grammar that is lexical in principle but has been augmented with tools that allow for the description of phrasal constructions in the Construction Grammar sense. These new tools include templates that can be used to model inheritance hierarchies, and a resource-driven semantics. The resource-driven semantics makes it possible to achieve the effects of lexical rules, for example the remapping of arguments, by semantic means. The semantic constraints can be evaluated in the syntactic component, which is essentially equivalent to the delayed execution of lexical rules. The result is a new formalization that may offer solutions to long-standing problems that are not available in other formalizations.
What causes a language to be the way it is? Some features are universal, some are inherited, others are borrowed, and yet others are internally innovated. But no matter where a bit of language is from, it will only exist if it has been diffused and kept in circulation through social interaction in the history of a community. This book makes the case that a proper understanding of the ontology of language systems has to be grounded in the causal mechanisms by which linguistic items are socially transmitted, in communicative contexts. A biased transmission model provides a basis for understanding why certain things and not others are likely to develop, spread, and stick in languages. Because bits of language are always parts of systems, we also need to show how it is that items of knowledge and behavior become structured wholes. The book argues that to achieve this, we need to see how causal processes apply in multiple frames or 'time scales' simultaneously, and we need to understand and address each and all of these frames in our work on language. This forces us to confront implications that are not always comfortable: for example, that "a language" is not a real thing but a convenient fiction, that language-internal and language-external processes have a lot in common, and that tree diagrams are poor conceptual tools for understanding the history of languages. By exploring avenues for clear solutions to these problems, this book suggests a conceptual framework for ultimately explaining, in causal terms, what languages are like and why they are like that.