
Hussein Rajabu Master’s Thesis Defense, Friday, April 17, 2026, 2:30 pm – 3:30 pm Central Time
COMMITTEE CHAIR: Dr. Xishuang Dong
TITLE: MULTI-TASK GROWING INTERPRETABLE NEURAL NETWORK FOR MULTI-TARGET SYMBOLIC REGRESSION
ABSTRACT: Over the past decade, deep learning has achieved remarkable success across a wide range of domains, including computer vision and natural language processing. Despite their strong performance, these models often operate as black boxes, making it difficult to understand how decisions are made. This lack of transparency poses significant challenges for deploying deep learning techniques in high-stakes applications such as healthcare and business analytics, where interpretability is essential. To address this issue, explainable artificial intelligence (XAI) and interpretable AI techniques have been developed to provide insights into model behavior. However, most existing approaches primarily capture statistical associations between inputs and outputs rather than uncovering the underlying functional mechanisms driving predictions.

Symbolic regression (SR) has recently gained attention as a promising approach within interpretable AI. Unlike traditional methods, SR aims to discover explicit mathematical expressions that describe the relationships between variables, offering both interpretability and competitive predictive performance. Although SR has advanced significantly in recent years, two major challenges remain. First, SR methods are primarily developed and validated on scientific datasets, such as those from physics and chemistry, where underlying relationships are relatively well understood. This limits their applicability to broader, data-driven machine learning tasks. Second, most SR approaches focus on single-target regression, while many real-world problems involve multiple correlated outputs that share common information.

To address these limitations, this thesis proposes Multi-Task Regression GINN-LP (MTRGINN-LP), a novel neuro-symbolic framework for multi-target symbolic regression. Building upon GINN-LP, the model introduces Power-Term Approximator Blocks to effectively capture power-law relationships in data. It further integrates multi-task learning through a shared backbone combined with task-specific output layers, enabling the discovery of shared symbolic representations while maintaining task-level interpretability. Additionally, a symbolic loss function is introduced to align symbolic predictions with regression outputs during training. The proposed method is evaluated on diverse multi-target regression tasks, including energy efficiency analysis and sustainable agriculture. Experimental results demonstrate that the approach achieves competitive predictive performance while preserving strong interpretability, effectively bridging the gap between symbolic regression and practical multi-output learning scenarios.
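To make the architecture described in the abstract concrete, the sketch below illustrates the general power-term idea (a block computes a product of features raised to learnable exponents via the log-linear trick, exp(Σ wⱼ · log xⱼ) = Π xⱼ^wⱼ) feeding a shared set of terms into task-specific linear heads. All names, array shapes, and the fixed-weight NumPy forward pass are illustrative assumptions for this announcement, not the actual thesis code.

```python
import numpy as np

class MultiTaskPowerModel:
    """Illustrative sketch (not the thesis implementation): a shared set of
    power-term blocks followed by per-target linear output heads."""

    def __init__(self, exponents, head_weights):
        # exponents: (n_features, n_blocks) -- each column is one block's
        #            learnable exponents w_j, so block k computes prod_j x_j**w_jk.
        # head_weights: (n_blocks, n_targets) -- task-specific linear heads
        #               combining the shared power terms into each output.
        self.exponents = exponents
        self.head_weights = head_weights

    def predict(self, x):
        # Shared backbone: each power term is computed in log space,
        # exp(log(x) @ w) == prod_j x_j ** w_j (requires x > 0).
        terms = np.exp(np.log(x) @ self.exponents)
        # Task-specific heads: one linear combination of the shared terms
        # per regression target.
        return terms @ self.head_weights

# Usage: one block computing x1 * x2**2, two targets sharing that term.
model = MultiTaskPowerModel(
    exponents=np.array([[1.0], [2.0]]),
    head_weights=np.array([[1.0, 0.5]]),
)
outputs = model.predict(np.array([2.0, 3.0]))  # term = 2 * 3**2 = 18
```

In a trainable version the exponents and head weights would be optimized jointly, which is what lets the shared power terms act as a common symbolic representation across the correlated targets.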
Keywords: Interpretable AI, Multi-task Learning, Multi-target Regression, Symbolic AI
Room Location: Electrical Engineering Conference Room 315D

