As AI systems become increasingly integrated into our lives, the need to support appropriate human understanding of AI continues to grow. With new AI capabilities being deployed in different contexts, human-centered explainability is crucial to ensure people can interact with novel AI systems safely and effectively. To address evolving explainability needs, the field of Explainable AI (XAI) has produced numerous frameworks. But what do these frameworks entail, and how can they be used in practice? What drives their development? As AI systems continue to grow in complexity, it is important to understand and reflect upon the value of these frameworks and their potential to address upcoming human-centered needs for XAI. To this end, we conducted a scoping review following the PRISMA-ScR procedure, gathering and analyzing a corpus of 73 papers to understand how XAI frameworks can support different stages of human-centered XAI (HCXAI) design. We present a unified model and a set of guiding questions to help identify, compare, and select relevant XAI frameworks across various design stages, making it easier for designers and researchers to apply human-centered approaches in real-world XAI contexts. We also analyze how frameworks are developed and evaluated, highlighting gaps and opportunities to improve both framework development methodology and existing HCXAI practices.
