In response to Li et al.'s (2023) and Zhai and Nehm's (2023) commentaries on Zhai et al.'s (2022) paper, "Applying Machine Learning to Automatically Assess Scientific Models," we offer the perspective that these commentaries are talking past each other on several key issues related to artificial intelligence (AI) in science education assessment. This "talking past" stems in part from the fact that each set of authors approaches the conversation from a distinct perspective: Li et al. address AI through a sociopolitical lens, while Zhai and Nehm address it through a technical lens. Neither set of authors explicitly acknowledges these perspectives, and as a result, although both use common terminology, the two commentaries rest on mismatched, unarticulated definitions. In this commentary, we focus on one such mismatch, the conflation of multiple definitions of bias, which we find to be common across the field.
We view this mismatch as a missed opportunity and a barrier to generative ethical conversations about the role of AI in education. We emphasize how and why both perspectives are valuable, and we argue that they are most valuable when placed in critical but productive conversation with each other.