
Defense of a Master’s Thesis by Ahmad Al Khatib in the Data Science and Business Analytics Program

Monday, March 18, 2024

Researcher Ahmad Hassan Al Khatib, a student in the Master's program in Data Science and Business Analytics, has defended his thesis titled "Explainable Deep Learning Methods for Neuroscience Data to Analyze the Extracted Features in The Hidden Layers".

In recent years, deep learning models have found applications across many fields, particularly in medical domains such as neuroscience. Ensuring that the results predicted by deep learning models can be interpreted has therefore become an important challenge, especially when it comes to exploring the hidden layers of these models, a problem often described as opening the black box. This study, titled "Explainable Deep Learning Methods for Neuroscience Data to Analyze the Extracted Features in The Hidden Layers", uses Explainable Artificial Intelligence (XAI) techniques such as LIME and SHAP to shed light on the decision-making processes of deep neural networks applied to neurological data.
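To make the idea of XAI concrete, the following is a minimal sketch (illustrative only, not taken from the thesis) of how a LIME explanation might be generated for a single image prediction. The classifier and sample image here are random placeholders standing in for a trained model and real data.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Placeholder classifier: in the thesis this would be the trained deep model.
    # Here we return random two-class probabilities so the sketch runs end to end.
    p = np.random.rand(len(images), 1)
    return np.hstack([p, 1 - p])

# Placeholder image (height x width x 3) standing in for a real input.
sample_image = np.random.rand(64, 64, 3)

explainer = lime_image.LimeImageExplainer()

# LIME perturbs superpixels of the image and fits a local surrogate model
# around the prediction to estimate which regions mattered most.
explanation = explainer.explain_instance(
    sample_image.astype("double"),
    predict_fn,
    top_labels=2,
    hide_color=0,
    num_samples=1000,
)

# Highlight the regions that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)
```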

The thesis focused on magnetoencephalography (MEG) data containing six frequency bands as features. Because the complexity of MEG images hampers visual interpretation of XAI results, the research first interprets deep learning models for dog-versus-cat classification and for handwritten digit recognition on the MNIST database, demonstrating the effectiveness of explainable deep learning techniques when combined with visual interpretation. The thesis then turns to detecting and interpreting the hidden layers, considered black boxes, in deep learning models for human brain image data. It investigates feature importance using SHAP values computed for each pixel of the MEG images, which quantify the importance of the features on which the deep learning model bases its decisions. This provides insights into the model's decision-making process and enhances its overall transparency.
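As a rough sketch of the kind of per-pixel attribution described above (again illustrative, not the thesis's actual code), SHAP's GradientExplainer can attribute a convolutional network's prediction back to individual input pixels. The tiny model and random arrays below are placeholders for the thesis's MEG classifier and images.

```python
import numpy as np
import shap
import tensorflow as tf

# Placeholder CNN standing in for the thesis's model:
# 64 x 64 single-channel "images", two output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

background = np.random.rand(20, 64, 64, 1).astype("float32")   # reference sample
test_images = np.random.rand(3, 64, 64, 1).astype("float32")   # images to explain

# GradientExplainer approximates SHAP values from gradients of the model
# output with respect to the input pixels.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)

# Depending on the SHAP version, the result is a list with one array per
# class or a single array with a trailing class dimension; either way there
# is one SHAP value per input pixel.
print(np.array(shap_values).shape)
```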

For the neuroscience data, in particular the MEG images, the research identified the Gamma1 frequency band as having the highest SHAP values, indicating its prominent influence on the deep learning model's predictions. This understanding helps explain the model's decision-making process, offering valuable insights into the relative influence of the frequency bands within the data and which of them matter most for prediction. In conclusion, the research addresses the challenge of interpretability in neuroscience by opening up the black box of the hidden layers, supporting informed decision-making in neuroscience applications and the early detection of diseases affecting the nervous system such as Alzheimer's and Parkinson's.
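To illustrate how per-pixel SHAP values could be rolled up into a ranking of frequency bands, one could average the absolute SHAP values over the spatial dimensions of each band's channel. This is a hypothetical sketch: the band names, channel layout, and variable names are assumptions, not the thesis's actual data format.

```python
import numpy as np

# Assumed layout: SHAP values for one predicted class with shape
# (n_images, height, width, n_bands), one channel per frequency band.
band_names = ["Delta", "Theta", "Alpha", "Beta", "Gamma1", "Gamma2"]   # assumed band set
shap_for_class = np.random.rand(3, 64, 64, len(band_names))            # placeholder values

# Aggregate importance per band: mean of absolute SHAP values over
# images and spatial positions.
band_importance = np.abs(shap_for_class).mean(axis=(0, 1, 2))

# Rank bands from most to least influential on the model's predictions.
for name, score in sorted(zip(band_names, band_importance),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.4f}")
```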

The thesis was supervised by Dr. Ahmad Hassasneh and Dr. Jürgen Dammers from the Jülich Research Institute in Germany. The committee of examiners included Dr. Anas Samara and Dr. Mahmoud Obaid.