Abstract:
In the realm of health-related machine learning classification, understanding the decisions made by models is of paramount importance. This study presents a comprehensive comparative analysis of two prominent model-agnostic interpretability tools, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), to illuminate the workings of machine learning models in obesity classification. Powerful machine learning models often operate as “black boxes”, leaving users and stakeholders in the dark about the rationale behind their decisions; SHAP and LIME offer pathways to shed light on the inner workings of these models by unveiling feature importance and local model behaviour. This study aims to compare the different techniques used in obesity classification and to identify their respective strengths and weaknesses. We found that a family history of diseases such as diabetes, high blood pressure, and heart disease is a strong predictor of obesity across all classification techniques considered, emphasizing the robustness of this factor. Moreover, our analysis showcases the impact of lifestyle factors, such as dietary habits and physical activity, on obesity classification.
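The abstract describes surfacing feature importance (SHAP) and local model behaviour (LIME) for an obesity classifier. The sketch below is a minimal, hypothetical illustration of that general workflow, not the authors' code: the synthetic data, the feature names (family_history, dietary_habits, physical_activity, age, water_intake), and the random-forest model are assumptions chosen only to show how shap.TreeExplainer and LimeTabularExplainer are typically invoked.

```python
# Illustrative sketch only: SHAP and LIME applied to a hypothetical obesity classifier.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in the study these would be survey/clinical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary target (obese / non-obese)
feature_names = ["family_history", "dietary_habits", "physical_activity", "age", "water_intake"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global and local attributions with SHAP (TreeExplainer suits tree ensembles).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)

# Local explanation of a single prediction with LIME.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["non-obese", "obese"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # feature contributions for this one individual
```

In such a setup, SHAP values would give per-feature contributions for every test instance (usable for global importance rankings), while LIME explains one prediction at a time via a local surrogate model.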