COMMITTEE CHAIR: Dr. Xishuang Dong
TITLE: INTERPRETABLE AI FOR DEEP KNOWLEDGE TRACING TO IMPLEMENT PERSONALIZED LEARNING AT HBCUS
ABSTRACT: Deep Knowledge Tracing (DKT) has emerged as a transformative AI technology in educational data science, enabling the modeling and forecasting of students’ learning trajectories from their interactions with academic tasks. While DKT has shown impressive predictive performance, conventional approaches often lack transparency, constraining their adoption in educational environments that require interpretable, data-driven decision making. This limitation is especially consequential for Historically Black Colleges and Universities (HBCUs), which play a pivotal role in expanding access to STEM education but remain underexplored in research on explainable DKT. This dissertation proposal investigates the explainability of DKT for student learning outcome (SLO) tracing in HBCU settings, focusing on data from multiple colleges and departments at HBCUs. Two methodologies are investigated. First, the DeepIRT model, an interpretable extension of DKT, is applied to capture the relationship between student abilities and course difficulties, treating courses as items in an outcome-prediction framework. Second, an ensemble strategy is proposed that integrates Item Response Theory (IRT) with bagging to improve both predictive accuracy and interpretability. Multiple IRT-based DKT models are trained and ensembled, allowing the system to retain the strengths of the individual base models while reducing variance and producing more stable insights. Experimental results demonstrate that DeepIRT achieves strong predictive performance under diverse conditions while offering meaningful interpretive perspectives on the dynamics between student proficiency and course difficulty. The ensemble-IRT approach further improves accuracy, precision, and area under the curve (AUC) compared with the individual models, and generates intuitive visualizations of student ability, item difficulty, and success probabilities. Together, these findings show that explainability need not come at the cost of accuracy, and that carefully designed interpretable models can support evidence-based instructional strategies. By advancing explainable DKT methodologies and applying them to SLO tracing at HBCUs, this research contributes both a methodological foundation and actionable insights for promoting transparency, fairness, and effectiveness in STEM education.
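To make the IRT formulation underlying DeepIRT concrete, the sketch below shows the classic one-parameter logistic (Rasch) model, in which the probability of a successful outcome depends on the gap between a student's ability and an item's (here, a course's) difficulty. This is a minimal illustration of the general technique, not the dissertation's actual implementation; the function name and sample values are hypothetical.

import math

def irt_success_probability(theta: float, beta: float) -> float:
    """One-parameter logistic (Rasch) IRT: probability that a student
    with ability `theta` succeeds on an item with difficulty `beta`."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# Hypothetical example: a student whose ability slightly exceeds the
# course's difficulty has better-than-even odds of a positive outcome.
print(irt_success_probability(theta=0.8, beta=0.3))  # ~0.62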
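The ensemble strategy pairs such IRT-based predictors with bagging. A minimal sketch follows, assuming bootstrap resampling of student records and simple averaging of per-model success probabilities; `train_irt_model` and `predict_proba` are hypothetical stand-ins for whatever training and inference routines the models expose.

import random
from statistics import mean

def bootstrap_sample(records, rng):
    """Draw a bootstrap sample (with replacement) of student records."""
    return [rng.choice(records) for _ in records]

def bagged_success_probability(records, train_irt_model, query,
                               n_models=10, seed=0):
    """Train `n_models` IRT-based models on bootstrap samples and average
    their predicted success probabilities for `query` (a student/course
    pair). Averaging reduces the variance of any single model's estimate
    while each base model remains individually interpretable."""
    rng = random.Random(seed)
    models = [train_irt_model(bootstrap_sample(records, rng))
              for _ in range(n_models)]
    return mean(model.predict_proba(query) for model in models)

Because every base model retains IRT's ability and difficulty parameters, the ensemble's averaged estimates can still be visualized in the same interpretable terms, which is the property the abstract highlights.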
Room Location: Electrical and Computer Engineering Department, Conference Room 315D