Research suggests that people prefer human over algorithmic decision-makers at work. Most of these studies, however, rely on hypothetical scenarios, and it is unclear whether their results replicate in more realistic contexts. We conducted two between-subjects studies (N=270; N=183) in which the decision-maker (human vs. algorithmic; Studies 1 and 2), explanations regarding the decision process (yes vs. no; Studies 1 and 2), and the type of selection test (requiring human vs. mechanical skills for evaluation; Study 2) were manipulated. While Study 1 was based on a hypothetical scenario, participants in pre-registered Study 2 volunteered for a qualifying session for an attractively remunerated product test and thus competed for real incentives. In both studies, participants in the human condition reported higher levels of trust and acceptance. Providing explanations also positively influenced trust, acceptance, and perceived transparency in Study 1, but had no effect in Study 2. The type of selection test affected fairness ratings, with higher ratings for tests requiring human rather than mechanical skills for evaluation. These results show that algorithmic decision-making in personnel selection can negatively affect trust and acceptance, both in studies with hypothetical scenarios and in studies with real incentives.