Using cross-country data for 167 countries with broad post–World War II coverage and historical extensions to earlier periods, we quantify how deficits, taxes, and civilian spending respond to increases and decreases in military spending. On average, deficit financing dominates: a 1 percentage point (pp) increase in military spending (as share of GDP) is associated with 62% deficit financing, 20% higher taxation, and 18% reductions in civilian spending. The composition shifts with intensity, as larger armaments are accompanied by deeper civilian cuts. Fiscal space systematically moderates these patterns: low-debt countries rely primarily on borrowing—funding a 1 pp increase by 75% deficits, 11% taxation, and 14% cuts—whereas high-debt countries adjust more through taxation (33%) and civilian spending cuts (26%). Armaments and disarmaments are asymmetric. Disarmaments only partially reverse prior fiscal expansions, sustaining elevated civilian spending as a peace dividend: a 10 pp decline in the military share raises the civilian share by 5 pp, whereas a symmetric increase reduces it by less than 2 pp. By quantifying these adjustment patterns jointly across fiscal instruments, countries, and episodes of armament and disarmament, we provide an integrated global perspective that connects insights from the literatures on war finance, fiscal space, and ratchet effects. We conclude by discussing implications for the current wave of rearmament.
We study how inflation expectations can be anchored through different forms of communication and whether such anchoring survives political change. Using a two-wave panel RCT around the 2025 German federal election, we show that providing the ECB’s target and projections lowers expectations by about 100 basis points. We then introduce a teaching-style intervention explaining the ECB’s institutional role using simple language and an intuitive metaphor, which proves equally effective. Treatment effects persist through the election, and partisan polarization remains modest. Our results suggest that well-designed communication, combining quantitative information with clear explanations of institutional responsibility, can durably anchor beliefs even in changing political environments.
Using data from the National Longitudinal Survey of Youth 1997, this paper exploits within-sibling differences in vocational coursework credits taken during high school to estimate their effects on educational and labor market outcomes. I find that additional vocational coursework reduces four-year college attendance without affecting college graduation among those who enroll, and is associated with higher annual earnings that persist into the mid-thirties. This evidence suggests that vocational education helps students realize their comparative advantage and sort into educational paths that benefit their labor market outcomes. The findings point to high school vocational education providing sustained economic benefits without compromising overall educational attainment, while benefiting students with diverse educational trajectories.
This paper estimates the causal effect of renewable water conditions and water use on violent conflict in rural agrifood systems. We implement a fixed-effects instrumental variables strategy that uses plausibly exogenous temperature and precipitation shocks to instrument multiple water outcomes. Annual specifications are imprecise, but five-year aggregations yield sharper inference and show that higher renewable freshwater availability significantly reduces conflict risk. Water use margins are central: freshwater withdrawals are associated with lower conflict, whereas higher aggregate water-use efficiency is associated with increased conflict risk. Overall, the results indicate that climate-driven water shocks operate through distinct channels—stocks, withdrawals, and efficiency—and that empirical conclusions depend critically on time aggregation and the definition of water being instrumented. The findings imply that climate adaptation and water policy should be paired with conflict-sensitive governance and improved measurement of local water use and access.
The weaponization of agricultural trade has once again emerged as critical in the study of modern geopolitics due to Russia’s full-scale invasion of Ukraine. Although Russia has used its wheat exports to enhance its geopolitical influence over countries in the Global South, evidence on the impact of such a policy is scarce. This paper assesses the impact of reliance on Russian and Ukrainian wheat imports on food security and political development in sub-Saharan African countries. The panel data for the analysis come from 35 African countries between 2005 and 2024. The Bartik-style shift-share instrumental variables (IV) model uses instruments derived from the historical shares of wheat that African countries imported from Russia and Ukraine, multiplied by the export contractions caused by the geopolitical conflicts of 2014 (Crimea annexation) and 2022 (full-scale invasion of Ukraine). Dependence on Russian wheat has had a uniquely adverse impact on the development of sub-Saharan Africa, whereas dependence on Ukrainian wheat has not. Prior to 2022, dependence on Russian wheat had no significant effect on the reduction of undernourishment in Africa but significantly increased political instability. After 2022, however, Russian wheat played a crucial role in food insecurity within the region. While democratic indices remained unaffected by Russian wheat, other geopolitical factors such as U.S. development aid and Chinese development finance were not able to counter the negative effects of Russian wheat exports. Our findings identify an independent vector of autocratic influence enabled through Russian agricultural exports. For sustainable political development within sub-Saharan Africa, the diversification of staple food suppliers is urgently required.
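The shift-share (Bartik) instrument described above multiplies a predetermined exposure by a common shock. A minimal sketch, with made-up column names and shock values rather than the paper's actual data:

```python
import pandas as pd

# Hypothetical exposure data: each country's historical share of wheat
# imports sourced from Russia (predetermined, pre-shock shares).
shares = pd.DataFrame({
    "country": ["A", "B", "C"],
    "share_rus_wheat": [0.60, 0.10, 0.35],
})

# Common (country-invariant) shift: an aggregate contraction in Russian
# wheat exports in a shock year such as 2022. Illustrative number only.
shock_rus = -0.40

# Bartik-style instrument: predetermined exposure x aggregate shock.
shares["iv_rus"] = shares["share_rus_wheat"] * shock_rus
print(shares)
```

The instrument inherits exogeneity from the aggregate shock, while the historical shares determine how strongly each country is exposed to it.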
What does Germany’s defense policy turnaround mean for its public finances? This article offers simple formulas with which its central fiscal implications can be quantified. It turns out that pure debt financing would trigger a strong debt spiral after only a few years. Optimal management of public debt instead implies that the debt-to-GDP ratio should fall until early 2030 and remain constant thereafter. The required increase in the tax ratio is considerable: the defense policy turnaround entails a permanent increase in the tax-to-GDP ratio of around three percentage points. On top of this comes the increase in the tax ratio required by population aging.
This thesis operationalises the European Health Data Space (EHDS) notion of a Secure Processing Environment (SPE) by translating technology-neutral regulatory obligations into implementable and testable platform requirements. Following a Design Science Research approach, it introduces a traceability-centred translation mechanism that incrementally refines legal clauses into role-specific requirements and consolidates them in a traceability matrix, keeping interpretations, responsibilities and control points explicit along the EHDS secondary-use journey. Based on this requirements baseline, the thesis designs and implements Badger, a cloud-native SPE prototype on Kubernetes that supports end-to-end workflows, from identity and access management to permit-bound workspaces and controlled result export via an airlock. Evaluation using EHDS-aligned scenarios and a traceability-matrix cross-check indicates strong coverage where obligations can be mapped to enforceable workflow steps and isolation boundaries, while exposing open challenges in enforcing permit conditions and providing a complete end-to-end audit trail. The thesis contributes a reusable method for regulatory grounding under legal uncertainty. For practitioners, it offers a checklist-style traceability matrix and a modular blueprint for realising EHDS-compliant SPE workflows.
This paper examines the possibilities of creating synthetic train trip data with Generative Adversarial Networks (GANs). A real data set from Deutsche Bahn is enhanced with synthetic data created by using a Conditional Wasserstein Generative Adversarial Network (CWGAN). The synthetic data is analyzed and compared with the original data using statistical methods as well as machine learning models. The results show that the synthetic data is very similar to the original data in terms of data structure and dependencies, but at the same time contains enough noise to not just copy already existing instances. To analyze and measure the quality of the synthetic data, different supervised machine learning models are trained to predict the change in delay of trains at a specific station based on the arrival delays of other trains at that station. These models are each trained once using the real data and once using the real data enhanced by synthetic data. All models are evaluated on a test set containing only real data that was not used to train the models. The results show that the R² value of the delay predictions increases significantly when using the enhanced data set. In particular, neural network-based models benefit from the larger amount of input data. The proposed approach of generating synthetic train trip data with a CWGAN can also be applied to various other railway data analysis projects that require a large amount of input data. In addition, the presented approach is particularly interesting because, unlike most GAN approaches discussed in the current literature, the data basis contains numerical data rather than image data.
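The evaluation protocol described above, training once on real data and once on real-plus-synthetic data, then scoring both on a real-only test set, can be sketched as follows. This toy version uses a linear model on simulated delay features and noisy copies as a stand-in for GAN output; it is not the paper's CWGAN or its data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for real data: features = arrival delays of other trains,
# target = change in delay at the station (simulated, not DB data).
X_real = rng.normal(size=(200, 3))
y_real = X_real @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=200)

# Stand-in for synthetic rows (noisy copies, NOT an actual CWGAN).
X_syn = X_real + rng.normal(scale=0.1, size=X_real.shape)
y_syn = y_real + rng.normal(scale=0.1, size=200)

# Hold out a real-only test set, as in the paper's protocol.
X_tr, X_te, y_tr, y_te = X_real[:150], X_real[150:], y_real[:150], y_real[150:]

r2_real = r2_score(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))
r2_aug = r2_score(
    y_te,
    LinearRegression()
    .fit(np.vstack([X_tr, X_syn[:150]]), np.concatenate([y_tr, y_syn[:150]]))
    .predict(X_te),
)
print("R2 real-only :", r2_real)
print("R2 augmented :", r2_aug)
```

The key design choice is that the test set contains only real observations, so any gain from augmentation reflects better generalization to real data rather than fitting artifacts of the generator.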
We study a principal who allocates a good to agents with private, independently distributed values through an optimal mechanism. The principal can strategically shape these value distributions by modifying the good’s features, which affect agents’ valuations. Our analysis reveals that optimal designs are frequently divisive – creating goods that appeal strongly to specific agents or agent types while being less valued by others. These divisive designs reduce information rents and increase total surplus, even though they reduce competition. Even when total surplus is constrained, some divisiveness in designs remains optimal.
NATO enlargement and the Russian annexation of Crimea marked crucial turning points. According to one narrative, the Russian occupation was part of a plan to re-establish dominion over Eastern Europe. According to a rival view, it was an attempt to counter a U.S. plan to subjugate Russia. I scrutinize the logical requirements of those narratives in a multi-stage game of incomplete information that produces equilibrium play in which first NATO is enlarged and then Russia attacks Ukraine. The two competing narratives correspond to two different separating equilibria. The conditions for their existence speak to the consistency and plausibility of the associated narratives.
The availability of geocoordinates offers valuable insights into spatial patterns of economic, demographic and health outcomes. However, disclosing the exact geolocation of statistical units to secondary analysts contravenes the responsible use of data. To protect privacy, anonymisation methods are used. A commonly applied anonymisation method is the one used by Demographic and Health Surveys (DHS). The DHS anonymisation scheme works by first aggregating data at small spatial units followed by random (donut) displacement of the geocoordinates. It is reasonable for secondary analysts to be concerned about the impact of anonymisation on the analyses. In this paper, the DHS anonymisation scheme is used as a basis for studying how anonymisation impacts on kernel density estimation. We propose methodology to account for the impact of the anonymisation process on density estimation. The proposed methodology is based on deriving the distribution of the true coordinates given the observed (anonymised) coordinates. Density estimation is then implemented by using the theoretical distribution and an iterative algorithm that accounts for both aggregation and displacement. The aim is to approximate the original population density using generated pseudo-coordinates under the assumption that the anonymisation process is known. The proposed method is illustrated by using DHS data from the Rajshahi Division in Bangladesh to estimate the density of households below the poverty line. The results show that accounting for measurement error due to anonymisation leads to a more accurate picture of the spatial distribution of poverty.
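The donut-displacement step described above can be sketched in a planar toy version; this illustrates only the displacement mechanism, not the aggregation step, the geographic projection, or the paper's correction algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def donut_displace(coords, r_min, r_max, rng):
    """DHS-style donut displacement (planar toy version): move each point
    a random distance in [r_min, r_max] in a uniformly random direction."""
    n = len(coords)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)     # random bearing
    r = rng.uniform(r_min, r_max, n)             # random distance in the donut
    return coords + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Toy "true" household locations and their anonymised counterparts.
true_xy = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
anon_xy = donut_displace(true_xy, r_min=0.0, r_max=0.2, rng=rng)

# Displacement distances stay within the donut radius by construction.
d = np.linalg.norm(anon_xy - true_xy, axis=1)
print(d.min(), d.max())
```

Because the displacement distribution is known, the conditional distribution of the true coordinates given the anonymised ones can be derived, which is the starting point for the correction the paper proposes.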
The rare access to exact official geocoordinates opens new methodological possibilities for analyzing highly sensitive tax data. We explore their visualization potential and systematically evaluate aggregation as an anonymization strategy, with particular attention to its methodological and analytical implications. For an analysis of high-income taxpayers in Berlin, Germany, the focus is on the presentation of regional shares. In addition to frequency maps, smoothed representations using kernel density estimation are analyzed in particular, and their cartographic characteristics are discussed. Due to the high sensitivity of individual-level data, such data are generally not published, which is why anonymization is required in official statistics. This applies in particular to the group of high-income taxpayers. Using exact data as a gold standard makes it possible to systematically analyze the distortions caused by aggregation, one of the most commonly used anonymization methods in official statistics. In order to correct these distortions, a measurement error model is employed that explicitly accounts for the aggregation process and produces smoothed kernel density estimates for interpretable cartographic representations. In addition, the measurement error model is linked with census information to demonstrate a realistic application scenario. Local and global error measures are used to empirically substantiate the improvement achieved through the use of the measurement error model.
We analyze how value added taxes (VATs) affect labor market outcomes (firms’ employee costs, wages, hours worked, employment). While VATs are designed to tax consumption, they are levied at the firm level, which creates potential spillovers to labor markets. We hypothesize that VATs affect wages and employment through two channels: an inflation adjustment effect, where employees demand higher wages to compensate for VAT-induced price increases; and a profitability effect, where incomplete pass-through reduces firms’ net sales and profits, putting downward pressure on wages and employment. We exploit variation in VAT rates, measuring labor market outcomes at the firm and country level. We find economically significant negative effects of VAT rates at the firm level on employee costs and at the country level on wages and employment. At the firm level, a one percentage point increase in the standard VAT rate corresponds to a 3.886% reduction in employee costs. At the country level, the same increase is associated with a 2.802% decline in average nominal wages. We find a 1.444% decline in employment at the country level following a one percentage point increase in the VAT rate. For working hours, the evidence is inconclusive and at most suggests a reduction. Heterogeneity analyses suggest that small firms and firms with high profit margins respond more strongly; among employees, the 15–24 age group is hit hardest. Our study provides the first systematic cross-country evidence on the labor market consequences of VATs.
This thesis develops an architecture for an EHDS-compliant data space that supports cross-sector research data collaborations in healthcare. The starting point is an analysis of the legal, organizational, and technical requirements for data use and exchange, which are particularly important in the context of fragmented IT landscapes, heterogeneous processes, and data protection uncertainties. Methodologically, the thesis follows the Action Design Research approach and combines theoretical meta-requirements with empirical insights from interviews and workshops in the CaringS project. The result is a roadmap that guides consortia through five phases, from a shared vision through the identification of relevant data sources and questions of data governance to the technical architecture and long-term use. Scientifically, the thesis contributes to the development of design principles for data spaces that make visible the trade-offs between generalizability and practical applicability. Practically, it provides an applicable instrument that translates EHDS requirements into concrete work steps, thereby connecting research and care.
This paper investigates how the ECB’s monetary policy affects consumers’ perceptions about the credibility of the inflation target. Monetary policy is assessed by the gap between the actual policy rate and a Taylor rate to approximate the interest rate expected by the public. Drawing on survey data for German consumers from 2019 to 2024, we find that the ECB’s interest rate policy contributes significantly to the credibility of the inflation target. In particular, the massive dent in inflation target credibility observed from 2021 to the end of 2023 could have been ameliorated by an earlier and more decisive tightening of monetary policy. This suggests that simple outcome-based Taylor rules may deserve more attention in the communication of the ECB’s monetary policy strategy.
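The Taylor-rate gap used above can be illustrated with the textbook Taylor (1993) rule. The 0.5 weights and the numbers below are the standard textbook values with illustrative inputs, not the paper's estimated specification or ECB data:

```python
def taylor_rate(r_star, inflation, target, output_gap, w_pi=0.5, w_y=0.5):
    """Textbook Taylor rule: prescribed nominal policy rate given the
    equilibrium real rate, inflation, the inflation target, and the output gap."""
    return r_star + inflation + w_pi * (inflation - target) + w_y * output_gap

# Illustrative high-inflation scenario: inflation far above target,
# slightly negative output gap, policy rate still at zero.
prescribed = taylor_rate(r_star=0.5, inflation=8.0, target=2.0, output_gap=-1.0)
actual_policy_rate = 0.0

# The gap between the actual rate and the Taylor prescription is the
# measure of monetary policy stance used in the analysis above.
gap = actual_policy_rate - prescribed
print(prescribed, gap)
```

In this scenario the rule prescribes a rate of 11.0%, so an actual rate of zero produces a deeply negative gap, the kind of configuration the paper links to the 2021–2023 dent in target credibility.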
Difference-in-differences (DiD) is one of the most popular approaches for empirical research in economics, political science, and beyond. Identification in these models is based on the conditional parallel trends assumption: In the absence of treatment, the average outcomes of the treated and untreated groups are assumed to evolve in parallel over time, conditional on pre-treatment covariates. We introduce a novel approach to sensitivity analysis for DiD models that assesses the robustness of DiD estimates to violations of this assumption due to unobservable confounders, allowing researchers to transparently assess and communicate the credibility of their causal estimation results. Our method focuses on estimation by Double Machine Learning and extends previous work on sensitivity analysis based on Riesz representation in cross-sectional settings. We establish asymptotic bounds for point estimates and confidence intervals in the canonical 2 × 2 setting and for group-time causal parameters in settings with staggered treatment adoption. Our approach makes it possible to relate the formulation of parallel trends violations to empirical evidence from (1) pre-testing, (2) covariate benchmarking and (3) standard reporting statistics and visualizations. We provide extensive simulation experiments demonstrating the validity of our sensitivity approach and diagnostics, and apply our approach to two empirical applications.
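The canonical 2 × 2 setting referenced above can be illustrated with a toy simulation. This sketch shows only the baseline DiD estimator under parallel trends, not the sensitivity analysis itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Two groups x two periods; group and time effects would bias a naive
# treated-vs-untreated comparison, but cancel in the double difference.
treated = rng.integers(0, 2, n)   # group indicator
post = rng.integers(0, 2, n)      # period indicator
att = 2.0                         # true average treatment effect on the treated
y = (1.0 * treated + 0.5 * post   # group and time fixed effects
     + att * treated * post       # treatment effect in the post period
     + rng.normal(scale=1.0, size=n))

def did_2x2(y, treated, post):
    """Canonical 2x2 DiD: difference of before/after changes across groups."""
    m = lambda g, t: y[(treated == g) & (post == t)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))

est = did_2x2(y, treated, post)
print(est)
```

Under parallel trends the estimator recovers the true effect of 2.0 up to sampling noise; the sensitivity analysis in the paper asks how far the estimate can move when unobserved confounders break exactly this assumption.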
The smearing effect of kernel estimates of local densities, local proportions and local means is used as a means for constructing anonymized maps. The standard anonymization criteria were derived for the display of case numbers within a predefined area system; for kernel estimates, however, no such defined area system exists. We discuss the resulting difficulties of applying these criteria to kernel estimates. In addition, there are de-anonymization risks that are specific to kernel estimates. We discuss these topics for data from 1.9 million Berlin taxpayers with known exact addresses and taxable incomes. In the conclusions, we argue for a much stronger emphasis on the output format of a map and the labelling of the displayed values in the map.
Tax evasion is associated with high social and fiscal costs. To address these, many governments employ behavioral interventions given their low implementation costs and high potential efficiency. Although many studies report positive effects of behavioral interventions to combat tax evasion, the effect sizes are often quite small. This may result from the partial cancellation of heterogeneous effects and prompts calls in the literature for individualized or group-tailored interventions. While classification approaches for taxpayer types exist, their practical implementation is limited by data availability. We systematically review 144 studies conducted between 1996 and 2024 and show that group-tailored interventions along key inequality dimensions—gender, income, age, and regionality—may not only enhance tax compliance but also help address inequality. Furthermore, our heterogeneity analysis shows that intervention effectiveness can be enhanced by incorporating specific characteristics related to framing, intervention frequency, and communication channels. Finally, we present a theoretical model to support group-tailored interventions and thus provide policymakers with an efficient strategy to combat tax evasion.