The field of eXplainable Artificial Intelligence (XAI) has witnessed significant advancements in recent years. However, the majority of progress has been concentrated in the domains of computer vision and natural language processing. For time series data, where the input itself is often not interpretable, dedicated XAI research is scarce. In this work, we put forward a virtual inspection layer that transforms the time series into an interpretable representation and allows relevance attributions to be propagated to this representation via local XAI methods. In this way, we extend the applicability of XAI methods to domains (e.g., speech) where the input is only interpretable after a transformation. Specifically, we focus on the Discrete Fourier Transform (DFT), which is prominently applied in the preprocessing of time series, combine it with Layer-wise Relevance Propagation (LRP), and refer to our method as DFT-LRP. We demonstrate the usefulness of DFT-LRP in various time series classification settings, such as audio and medical data. We showcase how DFT-LRP reveals differences in the classification strategies of models trained in different domains (e.g., time vs. frequency domain) and helps to discover how models exploit spurious correlations in the data.
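As a rough illustration of the underlying idea, the sketch below treats the inverse DFT as a virtual linear layer and redistributes time-domain relevance scores onto frequency bins with an ε-stabilized LRP rule. The function name, the choice of the ε-rule, and the NumPy-based formulation are assumptions made for illustration only, not the paper's reference implementation.

```python
import numpy as np

def dft_lrp(x, relevance_time, eps=1e-9):
    """Illustrative sketch: redistribute time-domain relevance to DFT bins.

    Treats the inverse DFT x_n = (1/N) * sum_k Re(X_k * exp(2j*pi*k*n/N))
    as a virtual linear layer and applies an epsilon-stabilized LRP rule,
    so each frequency bin receives relevance in proportion to its
    contribution to each time sample.
    """
    N = len(x)
    X = np.fft.fft(x)                              # frequency-domain representation
    n = np.arange(N)[:, None]                      # time indices (rows)
    k = np.arange(N)[None, :]                      # frequency indices (columns)
    # Real-valued contribution of bin k to sample n under the inverse DFT;
    # contrib.sum(axis=1) reconstructs x up to numerical error.
    contrib = np.real(X[None, :] * np.exp(2j * np.pi * k * n / N)) / N
    denom = contrib.sum(axis=1, keepdims=True)
    denom = denom + eps * np.sign(denom + (denom == 0))   # stabilize near-zero sums
    # Epsilon-LRP redistribution: relevance is (approximately) conserved,
    # i.e. relevance_freq.sum() ~ relevance_time.sum().
    relevance_freq = (contrib / denom * relevance_time[:, None]).sum(axis=0)
    return relevance_freq
```

Because the inverse DFT is linear and invertible, no new model parameters are introduced; the sketch only re-expresses relevance already computed in the time domain (e.g., by LRP on the classifier) in the frequency domain.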