Aggressive and hateful user comments on news sites and social media threaten online discussions and pose a difficult challenge for content regulation. Previous research has mainly focused on analyzing moderation strategies for dealing with such comments. In contrast, little attention has been paid to the question of which comments content moderators consider problematic in the first place. The answer to this question has not only theoretical relevance but also practical significance against the backdrop of increasing efforts to automate the detection of hate speech or toxicity in user comments. Based on 20 interviews, this paper explores what comment moderators in Germany consider to be hate comments, how they moderate them, and how differences in moderation practices can be explained. Our findings show strong agreement on extreme cases of hate comments; these cases overlap with the theoretical concept of hate speech but also encompass forms of incivility. Moreover, the interviews revealed differences in the perception and handling of hate comments, which can be linked to explanatory factors at the levels of the individual, professional routines, and the organization.