Automated content moderation now takes many forms: moderating social media posts, debates, therapy diaries, and student learning processes such as essay writing. These tasks can be tackled with different AI techniques, such as classification, information retrieval, chatbots, and symbolic logical reasoners, and sometimes all of the above combined into so-called hybrid AI systems. Combining and running multiple AI components with different characteristics in a connected manner, or employing one model to elucidate another, emerges as a viable alternative to end-to-end systems, primarily because hybrid systems are more manageable and transparent. In addition, they may represent the various elements of human cognition more faithfully, blending resilient learning and rapid pattern recognition with reasoning based on logical operations.

In this thesis, two hybrid AI systems are developed for two content moderation use cases. “Check News in One Click” is a web application designed for streamlined news verification. It combines statistical linguistic, transformer-based, and rule-based components that I developed and integrated into a production system with a user-friendly interface. The application specializes in verifying content from both conventional news sources and social media news channels, focusing on identifying manipulative language and pro-Kremlin propaganda, which became a major problem in light of the Russian invasion of Ukraine.

PapagAI is an online platform for higher education students, for which I created, combined, and implemented an AI module for automated moderation of reflective essays. The module consists of supervised models, a clustering component, a linguistic processing module, and a heuristic determiner that mines a prompt database for appropriate questions and improvement suggestions. Through this application, my objective was to address the German educational system’s need to improve teacher trainee retention rates at universities and to ease the workload of tutors by streamlining the feedback process.

In addition to user tests carried out to evaluate the developed systems, I also discuss questions related to the ethics of AI, the European Union’s legal framework for automated content moderation, and the interpretability and sustainability of deep learning models.
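To illustrate the general hybrid pattern described above, the following is a minimal sketch, not the implementation used in either system: it fuses a transformer-based classifier with a rule-based check. The model name, the marker list, and the fusion heuristic are placeholder assumptions chosen for illustration only.

```python
# Minimal sketch of a hybrid moderation step: a learned classifier plus
# rule-based flags, fused by a simple heuristic. All names are placeholders.
from transformers import pipeline

# Stand-in transformer model (a public sentiment classifier, used here only
# as an example of a learned component; a real system would use a model
# trained for manipulation or propaganda detection).
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Placeholder rule-based markers of manipulative phrasing.
MANIPULATIVE_MARKERS = ["everyone knows", "they don't want you to know"]

def moderate(text: str) -> dict:
    """Combine the classifier output with rule-based flags into one verdict."""
    model_result = classifier(text)[0]  # e.g. {"label": ..., "score": ...}
    rule_flags = [m for m in MANIPULATIVE_MARKERS if m in text.lower()]
    return {
        "model_label": model_result["label"],
        "model_score": model_result["score"],
        "rule_flags": rule_flags,
        # Illustrative fusion rule: flag if the model is confident or any rule fires.
        "flagged": model_result["score"] > 0.9 or bool(rule_flags),
    }

print(moderate("Everyone knows the mainstream media hides the truth."))
```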