Aging is a complex biological process associated with systemic and cellular dysfunctions. Both fasting (i.e., time-restricted feeding) and dietary protein restriction (PR) are among the most promising interventions to promote healthy aging. Since the two paradigms are hardly compatible, it has remained unclear whether they may exert synergistic effects. However, recent studies have shown that the endogenous polyamine spermidine increases during fasting in both the fruit fly, Drosophila, and humans. In my dissertation, I therefore treated Drosophila with a combination of dietary spermidine supplementation (SPD) and PR to assess whether their effects are additive or synergistic. My observations confirmed that both interventions act through orthogonal mechanisms, which encourages future clinical studies in this direction. The Drosophila study included behavioral analyses (lifespan, locomotion, and fecundity) combined with polyamine quantification and proteome profiling. Both SPD and PR alone promoted healthspan and lifespan in flies, while their combination provided additional benefits — including extended lifespan, improved locomotion, and prolonged fecundity in aging flies. SPD (but not PR) increased putrescine and spermidine levels in PR-fed flies and reprogrammed the proteome toward specific metabolic pathways — including mitochondrial metabolism, autophagy, and hypusination. Hypusination is a posttranslational modification of the eukaryotic initiation factor 5A (eIF5A), in which spermidine donates an aminobutyl group to a specific lysine residue, forming hypusine (Nε-[4-amino-2-hydroxybutyl]-lysine). 
In the second part of this work (the ImmuneAge trial in human participants), we investigated the role of SPD in immune rejuvenation and conducted a comprehensive analysis of polyamine biokinetics in blood fractions (plasma, serum, and cellular components), in collaboration with Charité – Universitätsmedizin Berlin, the German Institute of Human Nutrition Potsdam-Rehbrücke (DIfE), the Leibniz Institute for Analytical Sciences – ISAS – e.V., and The Longevity Labs GmbH (TLL). The ImmuneAge trial used a multi-parameter molecular profiling approach, including ELISA for inflammatory markers, LC-MS for polyamine quantification, proteomics, western blot analysis for autophagy and hypusination markers, and flow cytometry to evaluate immune responses. We showed that a 20-day SPD regimen significantly enhanced autophagy and hypusination in peripheral blood mononuclear cells (PBMCs), particularly in younger participants (20–40 years), with similar trends observed in older individuals (60–90 years). These effects correlated with SPD’s strong ability to counteract the age-related decline of spermidine levels in PBMCs. Moreover, SPD reduced inflammaging and improved aspects of both innate and adaptive immune responses to the SARS-CoV-2 spike peptide, with stronger effects in younger participants. Plasma proteomics revealed that SPD strengthened the immune system, lowered thrombosis risk, and reduced inflammation in older participants, while in younger participants it improved lipid metabolism and triggered changes considered cardioprotective.
Driving fundamentally challenges the established equilibrium paradigms of interacting quantum many-body systems. Quantum spin systems exposed to periodic, quasiperiodic and direct-current driving can enter non-equilibrium phases of matter with no equilibrium analog. A striking example is a Floquet time crystal, a phase of matter stabilized by periodic driving, that breaks time-translation symmetry and is thus at odds with thermal equilibrium. The time crystal owes its robustness against perturbations to the interactions between the particles. It is thus a representative case that highlights the important role of interactions in the presence of driving.
This thesis employs the quantum Ising model as a platform to explore driven many-body dynamics. First, we study the dynamics of correlation functions of the periodically driven quantum Ising model. In the open chain, periodic driving stabilizes Floquet Majorana zero modes (MZMs) and Majorana π modes (MPMs) at its boundaries, implying characteristic level pairings throughout the many-body spectrum. We show that the level pairing statistics differ markedly between MZMs and MPMs in the presence of random symmetry-breaking fields, with implications for the boundary spin correlations. In the coexisting regime of MZMs and MPMs, we construct a composite boundary mode as their operator product. We analyze the resilience of the composite mode against integrability-breaking perturbations, surpassing the stability of the individual boundary modes. Next, we apply a quasiperiodic Fibonacci drive to the quantum Ising chain. The boundaries can host Majorana golden-ratio modes (MGMs) unique to the quasiperiodic setting. We map out the dynamic phase diagram which contains self-similar structures under time evolution. Returning to periodic drives, we revisit the relation of the Floquet time crystal to the spectral π pairings of the many-body Floquet operator. Our work shows that the level pairing statistics provides analytical expressions for the temporal spin correlations in Floquet time crystals.
Finally, we turn to magnetic impurities in superconductors. We treat the spins of a Yu-Shiba-Rusinov dimer fully quantum mechanically in a zero-bandwidth approximation, accounting for the complex interplay of screening, superconducting correlations, magnetic anisotropies and substrate-mediated interactions. The dimer shows rich phases and excitation spectra which underscore the role of interactions between spins under direct-current driving in a solid-state platform.
This dissertation explores how macroeconomic policy and conditions interact with expectations, institutional constraints, and country- as well as household-level heterogeneity. In four chapters, I study how monetary policy reaches the public via the media, how fiscal rules shape governments' ability to respond to shocks, and how we can track changes in income inequality in real time. Chapter 1 examines how media coverage of the European Central Bank (ECB) shapes consumer inflation expectations. The media act as a key intermediary between central banks and the public; I investigate which monetary policy topics in media reporting matter most to consumers. I identify seven core topics - interest rates, inflation, economic growth, purchase programs, uncertainty, fiscal policy, and financial markets - and measure their prominence in leading economic newspapers in the euro area’s four largest economies using Latent Semantic Indexing with factor rotation. To isolate the impact of topic-specific coverage, I construct media topic shifts around ECB press conferences in an event-study framework and estimate their effect using local projections. The results suggest that media coverage significantly influences inflation expectations: discussions of inflation and economic growth raise expectations, while discussion of the financial-markets topic dampens them. Consumers respond more strongly to the media's interpretation of ECB messages than to the messages themselves. Furthermore, the media generally reinforce ECB messaging, with the exception of the fiscal topic, where consumer expectations move in opposing directions depending on the source. These findings offer new insights into how monetary policy (communication) is filtered through the media and received by the public. Chapter 2, co-authored with Vegard H. Larsen and Nicolò Maffei-Faccioli, studies how ECB-related inflation news affect consumer inflation expectations heterogeneously across the four largest euro area countries.
Using Latent Dirichlet Allocation, we measure the intensity of inflation coverage in the national media and estimate its effects in a Structural Vector Autoregressive model. We find that German and Italian consumers respond significantly to ECB inflation news, while no clear effect emerges for Spain and France. These results point to substantial cross-country heterogeneity in how central bank communication is received, highlighting the importance of national media in shaping policy transmission in a diverse monetary union. Chapter 3, joint with Christoph Große-Steffen and Malte Rieth, examines how fiscal rules affect macroeconomic stabilization in response to exogenous shocks. Using the unpredictability of natural disasters in an instrumental-variable approach, we construct a shock measure that is exogenous and comparable across countries. We combine the resulting shock series with quarterly macroeconomic data for 89 countries over nearly fifty years in a dynamic panel model. The results suggest that countries with fiscal rules absorb shocks better, with stronger recovery of GDP and private demand. These effects are linked to more expansionary fiscal policy and depend on available fiscal space, suggesting that rules can enhance countercyclical capacity. We further explore the role of rule design in a sovereign default model, focusing on the interplay between rule tightness and flexibility. The model shows that tight fiscal rules with escape clauses can support countercyclical responses and generate welfare gains, even under market discipline. Overall, our findings offer new evidence that well-designed fiscal rules can enhance resilience to economic shocks rather than constrain the policy response. Chapter 4, co-authored with Nina Maria Brehl and Geraldine Dany-Knedlik, examines whether macroeconomic developments shape the labor income distribution. 
Accurate and timely data on income dynamics are essential for informed policy responses, yet such information is typically published only annually and with substantial delays. We propose a method to nowcast the income distribution using dynamic factor models that combine high-frequency macroeconomic indicators with low-frequency household survey data from the German Socio-Economic Panel. A pseudo-real-time evaluation shows that incorporating macroeconomic signals and inter-decile dynamics substantially improves forecast accuracy over a univariate benchmark - particularly for the middle and lower parts of the distribution. We apply the best-performing model to project income growth through 2024 and estimate inequality under the assumption of a generalized Pareto distribution. The results highlight a heterogeneous response of different income groups to macroeconomic shifts and show that inequality in Germany has likely risen in the aftermath of the European energy crisis. Our approach offers a practical framework for real-time monitoring of distributional developments.
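As a rough illustration of the final step, one can estimate a Pareto-type tail shape from incomes and map it to an inequality measure. The sketch below assumes a zero-location generalized Pareto (Lomax) law, for which the Gini coefficient is 1/(2 - xi); this is a hypothetical simplification for exposition, not the chapter's estimator.

```python
import math

def hill_tail_index(sample, k):
    """Hill estimator of the Pareto/GPD tail shape xi from the k largest values."""
    xs = sorted(sample, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest observation acts as tail threshold
    return sum(math.log(xs[i] / threshold) for i in range(k)) / k

def lomax_gini(xi):
    """Gini coefficient of a zero-location generalized Pareto (Lomax) law, xi < 1.
    At xi = 0 (exponential incomes) this gives the familiar value 0.5."""
    return 1.0 / (2.0 - xi)
```

In practice the shape parameter would be estimated jointly with scale and location on survey microdata; the Hill step above only illustrates how a tail index feeds a closed-form inequality measure.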
Humans are remarkably fast at processing scenes and making decisions based on the information they contain. Within a few hundred milliseconds of viewing a scene, our brain can extract the most important information through a hierarchical cascade starting with perceptual attributes (color, edges, etc.) and ending with abstract properties (category, relationship between objects, etc.), eventually supporting decision-making. Despite the central role of scene processing, many aspects of how it unfolds in the brain remain poorly understood. In particular, the intermediate stages linking perceptual and abstract scene understanding, i.e., mid-level feature processing, are largely unresolved. Moreover, the link between neural activity and behavior, i.e., when, where and what kind of scene information arising in the brain influences decision-making, remains unclear. This thesis addresses these gaps through three studies implementing empirical and computational methods. In Study 1, we used a novel stimulus set to reveal that various mid-level features of scenes are processed in humans between ∼100 ms and ∼250 ms after stimulus onset, bridging low- and high-level feature representations, and with a temporal hierarchy that is mirrored by convolutional neural networks (CNNs). In Study 2, we showed that neural representations of scenes are suitably formatted for behavioral readout of scene naturalness between ∼100 ms and ∼200 ms, i.e., in the intermediate processing stages, and that intermediate CNN layers best correlated with the neural representations in this time-window, suggesting that mid-level features underlie behaviorally-relevant representations.
In Study 3, we showed that neural representations of scenes are suitably formatted for behavioral readout of scene naturalness in the early visual cortex and in the object-selective high-level cortex, and that intermediate CNN layers best explain this brain-behavior relationship, indicating that behaviorally-relevant representations in these areas are driven by mid-level features. Taken together, the studies included in this thesis revealed the timing, spatial localization, and behavioral relevance of mid-level feature representations in scene processing, contributing to a better understanding of how the human brain extracts information from the surrounding world.
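The brain-model comparisons in Studies 2 and 3 rest on representational similarity analysis: build dissimilarity matrices (RDMs) from neural and CNN-layer response patterns and correlate their off-diagonal entries. A minimal stdlib sketch of that logic follows; the function names are illustrative, not taken from the thesis.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / var

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between
    the response patterns of every pair of conditions (e.g. scenes)."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    """Off-diagonal entries of a symmetric matrix, flattened."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def rsa_score(neural_rdm, model_rdm):
    """Second-order similarity: correlate the two RDMs' off-diagonal entries."""
    return pearson(upper_triangle(neural_rdm), upper_triangle(model_rdm))
```

Comparing one neural RDM (per time point or brain region) against the RDM of each CNN layer then yields the layer-wise correspondence profile described above.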
The assessment of skin sensitization is a crucial aspect of toxicology and regulatory safety testing, particularly for industries involving pharmaceuticals, chemicals, and cosmetics. Traditional animal-based assays, such as the Local Lymph Node Assay, have been widely used for this purpose. However, these models face ethical concerns, regulatory restrictions, and limited predictive accuracy due to interspecies differences. To address these challenges, in vitro and in silico assays have been developed, but current methods only assess individual key events in the Adverse Outcome Pathway for skin sensitization, lacking a comprehensive evaluation of the sensitization process. This thesis aimed to develop a fully immunocompetent human skin model derived entirely from induced pluripotent stem cells (iPSC) to provide a more physiologically relevant and scalable alternative for skin sensitization assessment. The skin model was built using iPSC-derived fibroblasts, keratinocytes, and dendritic cells (iPSC-FB, iPSC-KC and iPSC-DC), integrated into a three-dimensional skin structure. This innovation allows for the simultaneous evaluation of multiple key events in the sensitization process, including keratinocyte activation, dendritic cell maturation, and cytokine secretion, making it a more robust and mechanistically relevant tool for the detection of skin sensitizing substances. To achieve this, hair follicle-derived keratinocytes were reprogrammed into iPSC using non-integrative Sendai virus vectors, ensuring genomic stability and ethical sourcing. These iPSC were then efficiently differentiated into functional skin-resident cells, including fibroblasts, keratinocytes, and dendritic cells. The differentiated cells exhibited characteristics comparable to their primary cell counterparts, with iPSC-FB demonstrating robust collagen secretion and extracellular matrix formation, and iPSC-KC expressing key epidermal differentiation markers.
Additionally, iPSC-DC displayed antigen-presenting capabilities, as confirmed by the expression of CD86, HLA-DR, and CD209, and were able to induce allogeneic T-cell proliferation, confirming their immune functionality. The developed iPSC-derived immunocompetent skin models were functionally evaluated using a Lucifer Yellow permeability assay to confirm epidermal barrier integrity, as well as assays evaluating the cell viability. Furthermore, a skin sensitization assay was conducted, where the model was exposed to sensitizers of varying potencies, including dinitrochlorobenzene, p-phenylenediamine, isoeugenol, resorcinol and the non-sensitizer glycerol. The results demonstrated increased dendritic cell maturation and cytokine secretion (IL-8, IL-1β, MIP-1β, IL-18, TSLP, and TGF-β1) in response to sensitizers, confirming the model’s ability to distinguish between sensitizing and non-sensitizing compounds. Notably, the immunocompetent skin model outperformed conventional skin models by integrating both key event 2 (keratinocyte activation) and key event 3 (dendritic cell activation), aligning with the Adverse Outcome Pathway framework and improving predictive accuracy. This thesis represents a significant advancement in skin model development by creating a fully human, reproducible, and scalable system that eliminates the need for primary cell sourcing and animal testing. The iPSC-derived immunocompetent skin model holds broad applications in toxicology, the assessment of skin sensitizers, and disease modeling, offering a powerful platform for studying inflammatory skin conditions such as atopic dermatitis and psoriasis. Furthermore, the ability to generate patient-specific iPSC lines enables the development of personalized in vitro models for precision medicine and drug screening in the future. 
By providing an ethically responsible and human-relevant alternative to current testing models, this study contributes to the ongoing efforts to replace, reduce, and refine (3R principles) the use of animal testing in toxicology. The fully integrated iPSC-derived skin model presents a scalable and standardized approach for evaluating chemical sensitization, advancing both scientific research and regulatory safety assessment. Future research should focus on further refining the model for clinical applications, integrating additional immune components, and expanding its use in genetic disease modeling and personalized therapeutic testing.
Chemical synapses are the points of contact through which information flows between neurons. The fusion of neurotransmitter-filled vesicles with the plasma membrane is at the center of this cellular process. This fusion is orchestrated by three soluble N-ethylmaleimide sensitive factor attachment protein receptor (SNARE) proteins, among which Syntaxin-1 (STX1) is paramount. Anchored to the plasma membrane through a transmembrane domain (TMD), STX1 interacts with the other two SNARE proteins (Synaptobrevin-2 and SNAP-25) through its SNARE domain to form the SNARE complex. Regulatory proteins, including Synaptotagmin-1, complexin and Munc18-1, also interact with STX1 through the SNARE domain. Additionally, STX1 has many other regulatory domains such as the N-peptide, the Habc-domain, the linker region, and the juxtamembrane domain (JMD). Understanding the individual roles of these domains can help us unravel the complex process that is vesicle fusion. In this compilation of studies, we make use of a mouse hippocampal neuronal model lacking STX1 (STX1-null). In these STX1-null neurons we can express genetically modified constructs of STX1 which target different domains and perform structure-function analysis by studying the electrophysiological responses of these neurons. Our goal is to elucidate the specific roles of distinct STX1 domains. First, we characterized the role of the N-peptide of STX1, which has been reported to be indispensable for the function of STX1 in vesicular release, in the natural and constitutively open conformations of STX1. For this we created increasing deletions of the N-peptide up until the Habc-domain of STX1 and expressed these mutant constructs in our STX1-null neuron model. Our results showed that the N-peptide is non-essential for neurotransmitter release; however, it has a regulatory role in Ca2+-triggered release.
Second, we performed an in-depth analysis of the role of the juxtamembrane domain (JMD) and the transmembrane domain (TMD) of STX1, and the role of palmitoylation of these two domains in vesicular release. To do this, we first created STX1 constructs carrying elongations at either side of the JMD, to probe the precise function of the continuity between the SNARE domain and the JMD, and between the JMD and the TMD. Our results showed that this structural continuity is essential for spontaneous and Ca2+-evoked vesicular release. Additionally, mutating residues that are targets of palmitoylation in these domains showed that palmitoylation of the TMD (which is regulated by the JMD) is an important regulatory mechanism in spontaneous release. Finally, we studied the function of the SNARE domain of STX1. For this we performed a chimeric approach using STX1 and an isoform, STX2. This analysis consisted of interchanging the N-terminal half, the C-terminal half or the entire SNARE domain between the two isoforms. Our results showed that the C-terminal half of the SNARE domain of STX1 has a regulatory function in the stability of primed vesicles and in the spontaneous release of vesicles. Furthermore, we created point mutations in the C-terminal half of the SNARE domain at residues that differ between STX2 and STX1 and found that residues D231 and R232 seem to be especially important for these two functions. This comprehensive study contributes to our understanding of the nuanced mechanisms governing synaptic vesicle fusion.
Soil restoration strategies are integrated management practices to recover soil health and enhance plant growth without compromising future demands. Although individual land management practices are generally well studied, the joint application of soil restoration practices involving more than three factors is rarely addressed. A main reason might be the complex interactions between individual practices, such as two-way interactions. Such interactions can be affected by fluctuating external environmental conditions and by additional restoration factors, making the joint effect hard to predict. This doctoral work investigates the joint effects of multiple restoration factors on soil properties and plant growth. First, a laboratory study explores the joint effects of up to eight restoration practices across different factor numbers. This experiment acts as a proof-of-concept study of the effects of multiple restoration practices on soil properties. Next, two climate chamber experiments investigate the joint effects of restoration amendments on soil properties and plant growth. These two experiments focus on the effect of factor diversity for restoration amendments. One experiment investigates carbon diversity effects by applying five diverse organic amendments, while in the other experiment nine restoration amendments were allocated into three distinct functional groups according to their physicochemical properties and effect mechanisms, namely organic, inorganic and bio-fertilizer amendments. Both experiments underline the importance of jointly applying diverse restoration practices under well-watered conditions, whereas applying a single factor may yield better outcomes for a specific challenge, such as drought stress. Last but not least, we conducted an experiment to investigate restoration effects on soil properties and the plant community.
Previous laboratory work showed that, in contrast to settings with multiple global change factors, a higher factor number often does not outperform single factors. Therefore, we assessed the full spectrum of all two-factor combinations with replicates to investigate the pairwise interactions and diagnosed the interaction type. We found that an increasing factor number can enhance soil properties, such as soil water holding capacity and soil pH, whereas the plant community composition is influenced mainly by the dissimilarity between factors. Moreover, we showed that pairwise synergistic interactions may enhance plant growth. Together, this work reports findings on the joint effects of multiple restoration practices with various analytical approaches across different scales. Our results offer valuable insights into the complex interactions under multiple restoration contexts and provide solid suggestions on integrated restoration strategies for farmers, researchers and decision makers.
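The pairwise diagnosis can be stated compactly: compare the observed joint effect against the additive expectation formed from the two single-factor effects. A hedged sketch of that classification logic, with a tolerance threshold and effect scale that are purely illustrative, not the study's statistics:

```python
def interaction_type(effect_a, effect_b, effect_joint, tol=0.05):
    """Classify a two-factor interaction against an additive null model.

    Effects are responses relative to an untreated control. A joint effect
    above the sum of the single effects (beyond +/- tol) is called
    synergistic, below it antagonistic, and within tol additive.
    """
    expected = effect_a + effect_b  # additive null expectation
    if effect_joint > expected + tol:
        return "synergistic"
    if effect_joint < expected - tol:
        return "antagonistic"
    return "additive"
```

In a real analysis the tolerance band would come from replicate variability (e.g. a confidence interval around the additive expectation) rather than a fixed constant.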
As plastic production and consumption continue to increase, so does its impact on the environment. The extent to which plastic is present in our ecosystems, and the manner in which it affects them, has not yet been sufficiently investigated. Therefore, this thesis focuses on the investigation of possible entry paths of micro- and nanoplastics into the environment, the optimization of methods for their analysis in environmental samples, and an approach to generate a reference material for future toxicity studies of nano-sized plastics in organisms.
Antimicrobial resistance (AMR) is one of the most pressing global public health challenges, posing a significant and immediate threat to both human and animal health. The use of antimicrobials in human medicine, veterinary medicine, and agriculture is a major driver of resistance. Regulation (EU) 2019/6 aims to curb the development and spread of AMR by introducing stricter controls on the use of antibiotics in animals. While the regulation harmonizes the rules for veterinary antimicrobial products across the EU, there is currently no coordinated European system for monitoring AMR in bacterial pathogens from diseased animals. As part of the HKP-Mon project, this study aimed to establish baseline data on the prevalence of methicillin-resistant Staphylococcus aureus (MRSA) and Staphylococcus pseudintermedius (MRSP) in dogs and cats in Germany from 2019 to 2021, and to assess the potential of retrospective laboratory data for continuous AMR monitoring. The analysis was based on a large dataset of routine diagnostic results provided by Laboklin, an accredited veterinary diagnostic laboratory. Samples originated from 3,491 veterinary practices and clinics, representing approximately one-third of small animal practices in Germany. Out of 175,171 total samples, 3.2% (5,526) were identified as S. aureus and 25.6% (44,880) as S. pseudintermedius. Data were stratified by year, animal species, and sample type. Phenotypic methicillin resistance was detected in 17.8% of S. aureus and 7.5% of S. pseudintermedius isolates. MRSA prevalence was lower in cats (15.6%) than in dogs (20.4%), while MRSP prevalence was higher in cats (16.1%) compared to dogs (7.1%). In contrast to veterinary findings, the average MRSA prevalence in human medicine during the same period was lower at 5.4%. For both MRSA and MRSP, the highest prevalences were observed in wound samples, with S. aureus exceeding 30% in dogs and 20% in cats, and S.
pseudintermedius exceeding 15% in dogs and 20% in cats. Notably, feline urogenital tract samples also showed high MRSP prevalence, exceeding 20%. MRSA isolates exhibited the highest resistance to clindamycin (59.8%) and enrofloxacin (36.4%), while resistance to sulfamethoxazole-trimethoprim and gentamicin was moderate (13–14%), and resistance to chloramphenicol, doxycycline, and rifampicin remained below 6%. MRSP isolates showed even higher resistance rates, particularly to clindamycin (85.2%), enrofloxacin (50.5%), and sulfamethoxazole-trimethoprim (66.3%). The results confirm that MRSA and MRSP remain among the most relevant resistant pathogens in companion animals and highlight their concerning resistance profiles across multiple sample types. Comparisons with other studies revealed substantial variability in methodology, pathogens, and sample sizes, underscoring the need for harmonized monitoring approaches. Our findings demonstrate the value of routine diagnostic data as a scalable and sustainable resource for passive AMR monitoring and highlight its potential for integration into active surveillance systems and broader One Health monitoring frameworks.
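The stratified prevalence figures above reduce to grouped counting over the routine diagnostic records. A minimal sketch with hypothetical field names (the laboratory's actual data schema is not described in this abstract):

```python
from collections import defaultdict

def stratified_prevalence(records, keys):
    """Share of methicillin-resistant isolates within each stratum.

    records: dicts carrying the grouping fields plus a boolean 'resistant' flag;
    keys: field names to stratify by (e.g. year, species, sample type).
    """
    counts = defaultdict(lambda: [0, 0])  # stratum -> [resistant, total]
    for r in records:
        stratum = tuple(r[k] for k in keys)
        counts[stratum][0] += 1 if r["resistant"] else 0
        counts[stratum][1] += 1
    return {s: resistant / total for s, (resistant, total) in counts.items()}
```

Running the same aggregation with different key lists (year only, species only, species plus sample type) reproduces the kind of stratified tables a passive monitoring system would publish.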
This study aims to understand the position and path of the North Korean Kim Jong Un regime on the issue of reunification of the two Koreas and to explore desirable alternatives for the unification of the Korean Peninsula. Applying a comparative analytical framework, the study finds that East Germany adopted a two-state policy based on the following factors: the structural influence of socialist suzerainty, an external system supported by recognition of statehood, and a cooperative relationship with the other divided party. North Korea, in contrast, has pursued a one-state policy, influenced by factors such as relatively superior economic power, strong nationalism, relatively superior military power, and the leader’s strong perception of unification issues. This study also attempts to derive policy implications by analyzing the situation in which the ‘One-state theory’ and the ‘Two-state theory’ have been raised as unification discourse in South Korean society. Considering that it is not easy to establish a single unified nation-state, achieved when one side absorbs the other, or to form a federal system for two countries with different political and ideological systems, the formation of a ‘Confederative Structure’ could be examined as an alternative path to unification.
Following the 26 December 2004 Sumatra-Andaman (Mw 9.3) earthquake, back-projection became a widely used technique for remote imaging of large earthquake ruptures. We introduce a new teleseismic back-projection method that uses multiple seismic arrays and combines both P and pP seismic phases. For earthquakes deeper than 40 km, we incorporate pP back-projections, particularly when the pP amplitude is at least 40% of the P wave amplitude. The contribution of each array to the rupture image is controlled by its azimuthal distribution. This approach allows us to algorithmically determine key rupture parameters, including rupture length, directivity, speed, and aspect ratio.
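At its core, back-projection is delay-and-sum beamforming: for each candidate source point, station traces are shifted by the predicted travel times, stacked, and the energy of the stack is taken as beam power; the candidate maximizing beam power images the emitter. A minimal single-array sketch in Python (all numbers synthetic; the azimuth-dependent array weighting described above would simply scale each array's stack before summation):

```python
def beam_power(traces, shifts):
    """Delay-and-sum beam power: align each trace by its predicted shift
    (in samples) for one candidate source, stack, and sum the squared stack."""
    n = min(len(t) - s for t, s in zip(traces, shifts))
    stack = [sum(t[s + i] for t, s in zip(traces, shifts)) for i in range(n)]
    return sum(v * v for v in stack)

def locate(traces, candidate_shifts):
    """Grid search: return the index of the candidate source whose predicted
    delays produce the largest beam power."""
    powers = [beam_power(traces, shifts) for shifts in candidate_shifts]
    return max(range(len(powers)), key=powers.__getitem__)
```

In a real application the candidate grid covers the fault region, the shifts come from a travel-time model (for both P and pP where applicable), and the beam power is tracked through time windows to follow the rupture front.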
Multi-array and multi-phase back-projection enhances resolution, facilitating the tracking and analysis of short-period earthquake rupture complexities. Early developments and applications include imaging the 23 January 2018 Gulf of Alaska (Mw 7.9) intraplate rupture, the 24 January 2020 Doğanyol-Sivrice (Mw 6.7) earthquake (Türkiye), and the 30 October 2020 Néon-Karlovásion (Mw 7.0) earthquake (Greece). The finalized method was also used to characterize the 12 August 2021 South Sandwich tsunamigenic earthquake (Mw > 8.2; South Atlantic) and the 6 February 2023 Türkiye seismic sequence (Mw 7.7 and 7.6).
We applied the newly developed back-projection method to characterize all large earthquakes with magnitudes Mw ≥ 7.5 and depths less than 200 km that occurred between 01/2010 and 12/2022 (56 events). For subduction megathrust earthquakes, we observed complex short-period ruptures (0.5-2.0 Hz) outlining megathrust asperities. Our results confirmed the prevalence of short-period radiation from the central and down-dip parts of the megathrust. Notably, we found that up-dip emissions from the main asperity are more common than previously reported. We also evaluated the prevalence of supershear ruptures and established new magnitude-rupture length scaling relationships for thrust, normal, and strike-slip earthquakes, consistent with previously published relationships based on aftershocks and total slip estimates.
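Magnitude-length scaling relationships of the kind mentioned above typically take the form log10 L = a + b * Mw, fitted per faulting style. A minimal least-squares fit of this form (the data in the test are synthetic, not the thesis's 56-event catalog):

```python
def fit_scaling(mw, log10_len):
    """Ordinary least-squares fit of log10(rupture length) = a + b * Mw.
    Returns the intercept a and slope b."""
    n = len(mw)
    mx = sum(mw) / n
    my = sum(log10_len) / n
    b = sum((x - mx) * (y - my) for x, y in zip(mw, log10_len)) \
        / sum((x - mx) ** 2 for x in mw)
    return my - b * mx, b
```

Given fitted coefficients, the predicted rupture length for a magnitude mw is simply 10 ** (a + b * mw), which is how such relations are compared against lengths inferred from aftershock zones or slip models.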
We observed asperity-encircling short-period ruptures of the 14 November 2007 Tocopilla (Mw 7.7) earthquake in Northern Chile using teleseismic and local strong-motion back-projections. The complex rupture was attributed to several factors: 1) the high-stress gradient caused by a kink in the slab interface, which had previously been proposed as the main mechanism arresting the trenchward rupture propagation; 2) the down-dip limit of the rupture, coinciding with the depth of the continental Moho; and 3) the high-stress gradient surrounding the asperities.
Finally, we explore back-projection applications for locating volcano-induced landslides and assessing tsunami-warning strategies in Indonesia, focusing on the 22 December 2018 flank collapse of Anak Krakatau. Using long-period back-projection (40-70 s) of surface wave envelopes from the Indonesian seismic network, we demonstrated the flank collapse localization with two minutes of data after its initiation. Spectral analysis of the first 100 s of seismic data can distinguish the flank collapse from typical tectonic earthquakes, using stations at epicentral distances of 1.0°-2.5°. The results showed that massive landslides can be distinguished from earthquakes using a simple frequency ratio. We conclude by discussing some practical aspects for tsunami early warning systems, particularly for detecting and locating volcanic collapses and landslide-triggered tsunamis using real-time seismic data.
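The frequency-ratio discriminant can be sketched as a ratio of band-limited signal energies, for which a naive DFT suffices. The 40-70 s long-period band follows the study; the short-period comparison band (1-5 s) and the sampling parameters are assumptions for illustration only:

```python
import math

def band_energy(x, dt, f_lo, f_hi, nfreq=16):
    """Naive DFT energy of trace x (sample interval dt, seconds) summed over
    nfreq evenly spaced frequencies in [f_lo, f_hi] (Hz)."""
    total = 0.0
    for k in range(nfreq):
        f = f_lo + (f_hi - f_lo) * k / (nfreq - 1)
        re = sum(v * math.cos(2 * math.pi * f * i * dt) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * f * i * dt) for i, v in enumerate(x))
        total += re * re + im * im
    return total

def landslide_ratio(x, dt):
    """Long-period (40-70 s) over short-period (1-5 s, illustrative) energy.
    A large ratio points to a slow source such as a flank collapse rather
    than an ordinary tectonic rupture."""
    return band_energy(x, dt, 1 / 70, 1 / 40) / band_energy(x, dt, 0.2, 1.0)
```

Real-time implementations would of course use an FFT and instrument-corrected traces; the point here is only that a single scalar ratio separates the two source types.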
The conversion of atmospheric gases in organisms represents a crucial interface between inanimate and animate nature, facilitated by biological catalysts called enzymes. These enzymes bear the potential to reduce air pollution emitted by a highly industrialized society. For this geoengineering purpose, the molybdenum-dependent formate dehydrogenase from Rhodobacter capsulatus might be a promising candidate, owing to its ability to reversibly convert formate into carbon dioxide by withdrawing two electrons. The withdrawn electrons are transferred via an electron transfer chain of iron-sulfur (FeS) clusters within the enzyme to another protein domain, where an electron acceptor is reduced. However, neither the reaction mechanism at the molybdenum cofactor (MoCo), which acts as the catalytic center, nor the electron transfer mechanism has been precisely described. A well-established method for investigating such metalloproteins is electron paramagnetic resonance (EPR) spectroscopy, which is used in this work to examine the binding sites of formate as a substrate and azide as a competitive inhibitor, which potentially adopt similar orientations close to the MoCo. Pulse EPR spectroscopy revealed that carbon dioxide is already released in an azide-inhibited paramagnetic Mo(V) state, while the formate proton resides in close proximity to the MoCo. By a combined approach of density functional theory (DFT) and pulse EPR spectroscopy, the precise binding site of the formate proton was identified and the orientation of the azide inhibitor near the formate proton was elucidated. Furthermore, the redox potentials of the electron-transferring moieties were investigated using EPR-mediated redox titration, elucidating the electron transfer path through the enzyme.
This investigation, together with the analyses of the azide and formate proton positions, leads to the proposal that the MoCo acts as an electron transfer transducer, converting the two-electron transfer into a sequential one-electron transfer over the FeS clusters. This effect is reversed in the FdsGB domain by a further proposed electron transfer transducer. Additionally, this work demonstrated that investigating such metalloproteins with multiple paramagnetic centers is only feasible after extensive prior analysis and assignment of the signals to their corresponding centers. To simplify such analyses for similar research subjects, a computational method based on deep learning was developed that separates EPR signals with respect to their associated lifetimes. The algorithm was extensively assessed on a defined test set to estimate its applicability.
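The separation task the deep-learning method addresses can be illustrated with the underlying signal model: an EPR transient is a sum of exponential decays with distinct lifetimes, and once candidate lifetimes are fixed, the component amplitudes can be recovered by linear least squares. This is a classical toy stand-in, not the thesis's deep-learning approach; the lifetimes and signal below are invented for illustration.

```python
import numpy as np

def separate_by_lifetime(signal, t, lifetimes):
    """Recover amplitudes of exponential components exp(-t/tau)
    by linear least squares over a fixed lifetime basis.
    A classical toy, not the deep-learning method of the thesis."""
    basis = np.exp(-t[:, None] / np.asarray(lifetimes)[None, :])
    amps, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return amps

# toy transient: two components with lifetimes 1 and 10 (arbitrary units)
t = np.linspace(0, 50, 500)
transient = 2.0 * np.exp(-t / 1.0) + 0.5 * np.exp(-t / 10.0)
amps = separate_by_lifetime(transient, t, [1.0, 10.0])
```

The deep-learning variant replaces the fixed lifetime basis with a learned mapping, which becomes necessary when the lifetimes are unknown and the signals overlap with noise.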
Learning theory provides the structural framework for analysing the complexity of problems in which a description of some unknown object (such as an unknown function, a distribution, a quantum state or process) must be obtained from indirect access to it, namely via \emph{data}. Many central questions in quantum information and computing can therefore be seen through the lens of learning theory.
In this thesis we deal with questions of this type. As the title suggests, we differentiate between problems that take the quantum device and its internal workings as the subject of consideration and those in which the quantum device is an integral part of the learning procedure while the object to be learned need not be quantum at all. Put differently, we study and develop algorithms that obtain tomographic information about the inner workings of quantum devices on the one hand, and study the potential of quantum devices as tools for faster learning algorithms on the other.
Motivated by the quest for provable quantum advantages in quantum machine learning, we study the problem of learning quantum circuit output distributions. In contrast to what was commonly expected, we show that this task is hard even for quantum learners, barring any meaningful quantum advantage. Even more surprisingly, we show a sharp transition in the complexity of the task with respect to the types of gates allowed in the circuit: when the circuit consists purely of Clifford gates, an efficient learning algorithm exists; adding a single $T$-gate, however, renders the task inefficient. Further, we show that this hardness is not merely an artefact of a few outliers within the class of distributions but holds in the average case as well.
Obtaining sufficient tomographic information about a quantum device to evaluate its performance is an important building block of the engineering cycle that will culminate in large-scale quantum computers. At present, however, techniques such as quantum state and process tomography are far too costly to be of practical use. With this in mind, we turn to techniques for efficiently characterizing the noise present in near-term quantum devices. We begin by presenting a variation of the well-known randomized benchmarking protocol that extracts more information from the same set of measurements than traditional randomized benchmarking. Building on this general formulation, we then develop a randomized benchmarking protocol for analog quantum devices. Switching gears, we focus on the scenario of verifying the correct operation of quantum devices at remote locations and on potentially completely different physical platforms. Within the framework of \emph{cross-device verification}, we develop and analyse a protocol for this task based on distributed inner product estimation using Pauli sampling.
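The data-analysis core shared by randomized-benchmarking protocols is fitting the average survival probability to an exponential decay, p(m) = A f^m + B, in the sequence length m; the decay parameter f then encodes the average gate fidelity. A minimal sketch of that fitting step on synthetic data (the decay parameters here are invented for illustration, and B is assumed known, as for the single-qubit asymptote 1/2):

```python
import numpy as np

def fit_rb_decay(m, p, B=0.5):
    """Fit p(m) = A * f**m + B with known asymptote B via a
    log-linear least-squares fit: log(p - B) = log(A) + m*log(f)."""
    y = np.log(p - B)
    slope, intercept = np.polyfit(m, y, 1)
    return np.exp(intercept), np.exp(slope)  # A, f

# synthetic, noiseless RB data with A = 0.5, f = 0.98
m = np.arange(1, 101)
p = 0.5 * 0.98 ** m + 0.5
A, f = fit_rb_decay(m, p)
```

In practice p is noisy and B is fitted rather than fixed, so a nonlinear least-squares fit is used instead; the log-linear version above only shows the structure of the estimate.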
Finally, we combine these ideas to study verification in quantum learning: the problem of learning given the chance to interact with an untrusted but more resourceful party. We consider different modes of communication and resource discrepancies, and identify both limitations on and possibilities for provably exploiting the additional resources.
PHF13 is an H3K4me3 epigenetic reader that participates in important biological processes, including transcription, DNA damage response, and the organization of chromatin structure. Aberrant regulation of PHF13 disrupts the epigenetic landscape of key transcription factors involved in the epithelial-to-mesenchymal transition and is linked to various cancers. This thesis reveals that PHF13 employs alternative mechanisms to differentially regulate chromatin organization, highlighting its diverse biological roles. I demonstrate that PHF13 can oligomerize through conserved structured regions in its N- and C-terminal domains, which enhances its chromatin affinity, leading to chromatin condensation and transcriptional changes. Remarkably, I discovered that PHF13 can also self-associate independently of these regulatory domains via intrinsically disordered regions. This alternative mechanism reduces its chromatin affinity, facilitating the formation of liquid-liquid phase separation-like foci and activating distinct transcriptional programs. Finally, I developed two PHF13 cell lines to fine-tune its expression, allowing for a more thorough exploration of its role in chromatin organization. My findings suggest that the intrinsic balance between PHF13's structured and disordered regions plays a critical role in regulating its chromatin affinity, chromatin condensation, and transcriptional outcomes. Moreover, I propose that PHF13 can employ distinct mechanisms to modulate its chromatin functions in different biological processes.
When AIDS became visible in West Germany in the 1980s, a differentiated gay movement already existed. AIDS confronted gay men with a radical change of their lifeworld. Beyond the immediate threat of suffering and death, stigmatization and debates about state measures for epidemic control also had consequences for the movement. As a result, core demands had to be reconsidered and the relationship to the state renegotiated. The movement's self-understanding and its notions of belonging and solidarity also changed. This volume examines these transformation processes, focusing on the law as a subjectivizing instance and as an arena of confrontation with the state.
In my dissertation I examine settlements of the 6th and 5th centuries BCE in Northern Mesopotamia and the central Zagros that were erected in the ruins of monumental buildings. These settlements, often referred to in the literature as squatter occupation, are found in many historical contexts, typically after the collapse of political centers of power. Although they are a frequently observed phenomenon, they are rarely treated in detail.
I therefore compare four squatter settlements that lie close to one another in space and time, and analyse how spaces of domination were appropriated and repurposed. The case studies are Tell Sheikh Hamad and Nimrud, which both exhibit post-imperial Assyrian occupation, and Nush-i Jan and Godin Tepe, which are attributed to the so-called Medes or to the Iron Age III period. To understand this phenomenon on a theoretical level, I draw primarily on the works of Henri Lefebvre to define concepts such as "space of domination" and "appropriation". I then present the methodology with which I apply this theory to the archaeological sources. On the one hand, I chose Space Syntax to render floor plans comparable; on the other, I used Victor Klinkenberg's Sequence of Events analysis to compare sequences of events. With these two methods I worked through the sites and then compared them in three respects: first, a comparison of architectural types; second, a comparison of the space syntaxes; and third, a comparison of the sequences of events. In the concluding chapter I interpret and abstract these results and relate them to the societal phenomena of collapse, decentralization, and deurbanization that followed the Assyrian Empire.
I began my work with the theoretical basis of Lefebvre's theory of space. It can be regarded as a dialectical understanding of space that focuses above all on societal conflicts within space. On one side stands spatial practice, which represents the reality of space; on the other stand lived space and planned (conceived) space, both of which influence spatial practice and are influenced by it, yet stand in conflict with each other. Planned space is the space of the rulers, while lived space is that of the subalterns. The monumental space of the Neo-Assyrian Empire can thus be understood as planned space, that is, as a space of domination. Appropriation then denotes the takeover of planned space by lived space, which in turn manifests itself in the squatter settlements. Spatial practice allows this concept to be connected to the archaeological sources, since it represents the materiality of space, and features and finds are the result of these practices.
Spatial practice can be made visible through two methods: Space Syntax analysis and Sequence of Events analysis. Space Syntax analysis was conceptualized in the 1980s by Bill Hillier and Julienne Hanson and refined in the 1990s by Richard Blanton; today it is an established method in archaeology. Floor plans are abstracted into a graph in which every circle is a room and every doorway a line between rooms. The result is a diagram, independent of room size, that can be compared with others. The Sequence of Events analysis was devised by Klinkenberg, who first used it at the Middle Assyrian dunnu at Tell Sabi Abyad. Here, events are interpreted on the basis of stratigraphy and the composition of features and finds and placed in order in a flow chart; it is, in effect, an interpreted Harris matrix.
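The graph abstraction behind Space Syntax can be sketched directly: each room becomes a node, each doorway an edge, and a breadth-first search from the exterior "carrier" node yields the depth of every room, from which size-independent measures such as mean depth are computed and compared across plans. The floor plan below is hypothetical, invented only to show the computation.

```python
from collections import deque

def depths_from(access_graph, root):
    """Breadth-first depth of every room from the entrance node.
    access_graph: dict mapping room -> list of connected rooms."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        room = queue.popleft()
        for nxt in access_graph[room]:
            if nxt not in depth:
                depth[nxt] = depth[room] + 1
                queue.append(nxt)
    return depth

# hypothetical plan: exterior -> courtyard -> two rooms, one leading deeper
plan = {
    "exterior": ["courtyard"],
    "courtyard": ["exterior", "room_a", "room_b"],
    "room_a": ["courtyard"],
    "room_b": ["courtyard", "storeroom"],
    "storeroom": ["room_b"],
}
d = depths_from(plan, "exterior")
mean_depth = sum(d.values()) / (len(d) - 1)  # mean over all rooms except the root
```

Blocking a doorway or subdividing a room, the typical squatter interventions, changes this graph and hence the depth profile, which is what makes the phases of a reused building comparable.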
With these two forms of analysis I examined Tell Sheikh Hamad and Godin Tepe. Nimrud and Nush-i Jan I examined only on the basis of published data, since the pandemic situation did not allow travel to the archives. The published data were not sufficient for the full analysis, but they did allow me to observe squatter settlements at Nimrud on a larger scale and thus to make qualitatively different observations.
The comparison of the squatter settlements brought out several obvious commonalities, but also differences between and within the sites. The architectural-typological comparison allowed me to identify typical architectural elements of a squatter settlement, for example the blocking of doorways and the division of rooms by walls. More interestingly, many of the squatter settlements maintained and continued to use the water-supply installations from the monumental phases. The comparison of the space syntaxes produced very complex results reflecting how the inhabitants shaped their space. Some settlements built additively, opening up new rooms of the ruin in each new phase and enlarging their settlement; others discarded old phases and erected completely new room complexes. The location of food-production installations within the space syntax allowed me to speculate whether food processing took place in rather public or rather private settings. The comparison of the sequences of events shows how dynamic the use of space in squatter settlements was and how often particular constellations could change. It also emerged that squatter settlements were by no means provisional shelters but should be regarded as established settlements.
In the end I argue against the image of squatter settlements as impoverished remnants of "glorious civilizations". Squatter settlements are frequently used merely as an argument for continuity or change; instead, I treat them as a phenomenon in their own right. Their duration and the substantial architecture that was erected argue against a provisional character. Monumental buildings admittedly often survive longer than squatter settlements, but the squatter settlements frequently exhibit more phases than contemporary dwellings. I also consider the characterization as impoverished settlements to be wrong, since rooms with a water supply were deliberately chosen. A water supply drastically improves life expectancy but was restricted in the Iron Age to wealthy strata. If the inhabitants came from the lower classes, their standard of living improved in the squatter settlements. The appropriation and repurposing of temples and palaces in the post-imperial era of Northern Mesopotamia and the Iron Age III period in the central Zagros can thus be seen as a positive development for the local population. A general rethinking of so-called Dark Ages and of squatter settlements is required if we want to think history beyond great rulers.
While common autoimmune and inflammatory diseases such as psoriasis (PSO), atopic dermatitis (AD), and rheumatoid arthritis have been thoroughly studied in recent years, less common diseases such as lichen planus (LP) and pemphigus have received less attention. This has led to a gap in knowledge about the immunopathogenesis of these diseases, which affects therapeutic progress and, consequently, the quality of life of affected patients. Analysis of the T-cell landscape is informative both in chronic inflammatory diseases (LP) and in autoimmune, antibody-mediated diseases (pemphigus): in the first group, T cells are directly responsible for tissue damage, while in antibody-mediated diseases autoreactive T cells support B-cell-mediated antibody production. Approaches aimed at understanding which cytokines are central mediators of disease led to the discovery of IL-4 and IL-17 as relevant therapeutic targets in AD and PSO, respectively. Applying the same approach to LP, we recognized that IFN-ɣ is a central cytokine in its immunopathogenesis. We also analysed the differences between cutaneous LP (CLP) and oral LP (OLP) and found that IL-17 appears to play an important role in OLP. These insights allowed us to initiate treatment with monoclonal antibodies or JAK inhibitors, which confirmed our experimental results. In pemphigus, most research has focused on B cells and the effect of antibodies on keratinocytes and desmosomal structures. Our studies showed that B-cell activity depends to a high degree on T-cell function. Thorough immunophenotyping revealed that particular subsets (Tfh17) are required for strong antibody production.
Moreover, we identified specific T-cell subsets that recognize unique dsg peptides and may play a pathogenic role. These results are relevant for future therapeutic approaches that aim to disrupt the T/B-cell axis by depleting autoreactive clones without suppressing the entire hematopoietic lineage. In summary, these studies were conducted to better understand how T cells are involved in LP and pemphigus, two severe skin diseases. Our results identified central players in these diseases and provide a basis for applying and/or developing new targeted therapies.
The aim of this habilitation thesis was to evaluate innovative diagnostic and laboratory parameters as prognostic markers for the response to peptide receptor radionuclide therapy (PRRT) in patients with neuroendocrine tumors (NET). The principle of theranostics forms the basis of PRRT in the context of NET, with initial functional diagnostics playing a decisive role in patient selection, treatment strategy, and follow-up. Before PRRT is started, somatostatin receptor (SSTR) imaging allows the global SSTR expression of all tumor lesions to be evaluated. Intratherapeutic SSTR diagnostics serve primarily for therapeutic monitoring and dosimetry, for determining the tumor-enclosing dose, and for estimating the radiation dose to organs at risk. Extensive quantitative data were analysed on the basis of routine functional, morphological, and laboratory diagnostics. By applying heterogeneity parameters in functional imaging, subtle image features that often remain hidden to the human eye could be identified and correlated, in terms of their information content, with diagnosis, prognosis, and treatment course. Furthermore, pre- and intratherapeutic laboratory parameters and their fluctuation over the course of disease were analysed. The focus was on personalized medicine, i.e. more precise and individualized therapeutic approaches, achieved by extracting unused, therapy-relevant information from routine diagnostics.
Endoscopic ultrasound is an endoscopic method that came onto the market at the end of the last century and has since become firmly established in diagnostics and therapy. This habilitation thesis, based on the author's own publications from various fields, demonstrates its diagnostic and therapeutic possibilities.
Machine learning (ML) is a branch of artificial intelligence (AI) that specializes in studying how computers simulate human learning behavior, summarize patterns from experience, and develop decision-making and analytical capabilities. It has numerous applications in the industrial field, particularly in areas such as quality control, defect detection, and process optimization. Laser Beam Welding (LBW) is an advanced welding technique that utilizes a high-energy density laser beam to fuse materials together. It has the advantages of high precision, high efficiency, minimal heat-affected zone, and low welding deformation, making it widely adopted in aerospace, automotive manufacturing, electronics, medical devices, and other modern industrial fields.
ML can be applied to LBW for automated defect detection and process optimization. This thesis explores two applications in LBW: solidification crack detection and strain estimation, both of which help prevent defects and ensure the structural integrity of welded components. Solidification cracks result from internal stresses caused by shrinkage and temperature changes during solidification. Convolutional Neural Networks (CNNs) can analyze high-speed welding videos to detect crack formation and predict potential defects in real time. Strain estimation reflects deformation and residual stresses in a welded material, as excessive strain can lead to distortions and cracks. Compared to traditional measurements, ML-based methods offer faster and more efficient strain predictions.
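The frame-wise detection idea can be illustrated with the basic operation a CNN learns: convolving each video frame with filters whose response is high where a crack-like dark line appears, then pooling the response map into a score. A pure-NumPy toy with a single hand-set filter follows; a real CNN learns many such filters from labeled frames, so the kernel and frames here are invented for illustration.

```python
import numpy as np

def crack_score(frame, kernel):
    """Valid 2-D cross-correlation followed by max-pooling to one score."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out.max()

# hand-set filter responding to a dark vertical line on a bright background
kernel = np.array([[1., -2., 1.],
                   [1., -2., 1.],
                   [1., -2., 1.]])

bright = np.ones((16, 16))       # defect-free frame
cracked = bright.copy()
cracked[:, 8] = 0.0              # dark vertical "crack"
```

Thresholding `crack_score` separates the two frames: the cracked frame scores high where the filter straddles the dark line, while the uniform frame scores zero. A trained CNN replaces the hand-set kernel with learned ones and stacks such layers.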
However, integrating ML into industrial systems introduces security challenges such as data poisoning attacks, adversarial attacks, model stealing attacks, and membership inference attacks. To address these concerns, we investigate the reliability and robustness of the crack detection system. Specifically, we design a backdoor attack, a type of data poisoning attack that injects malicious data into the training set, causing the ML model to misclassify welding defects and fail to detect their occurrence. To ensure safe and reliable deployment of AI in welding applications, such security problems must be effectively addressed. To counter backdoor attacks, we propose a novel defense strategy, NT-ML, which defends against stronger backdoor attacks than existing methods can.
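The poisoning pattern described above can be sketched in a few lines: a backdoor attack stamps a small trigger patch onto a fraction of training images and relabels them with the attacker's target class, so the trained model learns to suppress crack detections whenever the trigger is present. The patch size, position, poisoning rate, and labels below are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

def poison(images, labels, trigger_value=1.0, patch=3,
           target_label=0, rate=0.1, seed=0):
    """Stamp a small bright patch in the corner of a random subset of
    images and flip their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, :patch, :patch] = trigger_value   # the backdoor trigger
    labels[idx] = target_label                    # e.g. 0 = "no crack"
    return images, labels, idx

# toy dataset: 100 grayscale frames, all labeled "crack" (1)
X = np.zeros((100, 16, 16))
y = np.ones(100, dtype=int)
Xp, yp, idx = poison(X, y)
```

A model trained on `(Xp, yp)` behaves normally on clean frames but associates the corner patch with the "no crack" class, which is exactly the failure mode a defense such as the proposed NT-ML must detect or neutralize.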