The growing field of AI faces challenges of trust, transparency, fairness, and discrimination. Although new regulations are needed, a mismatch between regulatory science and AI has prevented a consistent framework from emerging. A five-layer nested model for AI design and validation aims to address these issues and to streamline the design and validation of AI applications, improving fairness, trust, and AI adoption. The model aligns with existing regulations, addresses the daily challenges faced by AI practitioners, and offers prescriptive guidance for selecting appropriate evaluation approaches by identifying unique validity threats. Motivated by this model, we make three recommendations: (1) authors should distinguish between layers when claiming contributions, clarifying the specific areas in which a contribution is made and avoiding confusion; (2) authors should explicitly state upstream assumptions so that the context and limitations of their AI system are clearly understood; and (3) AI venues should promote thorough testing and validation of AI systems, as well as their compliance with regulatory requirements.