In this position paper, I propose that the technical, designerly, and ethical dimensions of interpretability for machine learning (ML) are irreducibly intertwined, perhaps even commensurable. With ML-driven systems, engineers and designers wield considerable power in shaping the values of the artefacts that govern our access to the world. This statement is in itself neither radical nor new: Winner's article on the politics of technological artefacts is a ubiquitous reference, and the post-phenomenological stance of mediation theory is gaining ground in the ethical discussions of HCI. Additionally, design methodologies such as participatory design (PD) and value-sensitive design (VSD) are well articulated and poised to enter the discourse on interpretability. As a caveat, however, I suggest that any such assessment of, and design effort for, ML-driven systems ought to consider two co-constitutive factors: distributed hybrid reasoning and emergent values.