dc.contributor.author
Zihni, Esra
dc.contributor.author
Madai, Vince Istvan
dc.contributor.author
Livne, Michelle
dc.contributor.author
Galinovic, Ivana
dc.contributor.author
Khalil, Ahmed Abdelrahim
dc.contributor.author
Fiebach, Jochen B.
dc.contributor.author
Frey, Dietmar
dc.date.accessioned
2020-07-17T11:55:09Z
dc.date.available
2020-07-17T11:55:09Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/27831
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-27584
dc.description.abstract
State-of-the-art machine learning (ML) methods from artificial intelligence are increasingly leveraged in clinical predictive modeling to provide clinical decision support systems to physicians. Modern ML approaches such as artificial neural networks (ANNs) and tree boosting often perform better than more traditional methods like logistic regression. On the other hand, these modern methods yield only a limited understanding of the resulting predictions. In the medical domain, however, understanding of the applied models is essential, in particular when they inform clinical decision support. Thus, in recent years, interpretability methods for modern ML methods have emerged that potentially allow explainable predictions paired with high performance. To our knowledge, this work presents the first explainability comparison of two modern ML methods, tree boosting and multilayer perceptrons (MLPs), with traditional logistic regression methods, using a stroke outcome prediction paradigm. Here, we used clinical features to predict a dichotomized 90-day post-stroke modified Rankin Scale (mRS) score. For interpretability, we evaluated the importance of the clinical features with regard to the predictions using deep Taylor decomposition for the MLP, Shapley values for tree boosting, and model coefficients for logistic regression. With regard to performance, as measured by area under the curve (AUC) values on the test dataset, all models performed comparably: logistic regression AUCs were 0.83, 0.83, and 0.81 for three different regularization schemes; the tree boosting AUC was 0.81; the MLP AUC was 0.83. Importantly, the interpretability analysis demonstrated consistent results across models, ranking age and stroke severity among the most important predictive features. For less important features, some differences were observed between the methods. Our analysis suggests that modern machine learning methods can provide explainability that is compatible with domain-knowledge interpretation and traditional method rankings. Future work should focus on replicating these findings in other datasets and on further testing of different explainability methods.
en
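dc.description.note
A minimal sketch (not the authors' code) of the comparison setup the abstract describes, assuming a tabular feature matrix and a binarized 90-day mRS outcome. Synthetic data from scikit-learn's make_classification stands in for the clinical features, and all hyperparameters are illustrative placeholders. The sketch fits the three model families, reports test AUCs, and derives feature rankings from logistic-regression coefficients and mean absolute SHAP values for the tree model; the paper's deep Taylor decomposition for the MLP requires a dedicated relevance-propagation toolbox and is omitted here.

import numpy as np
import shap  # Shapley-value explanations for the tree model
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the clinical features and the dichotomized mRS outcome.
X, y = make_classification(n_samples=500, n_features=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# The three model families compared in the paper (hyperparameters are guesses).
models = {
    "logistic_l2": LogisticRegression(penalty="l2", max_iter=1000),
    "tree_boosting": GradientBoostingClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")

# Feature importance: absolute coefficients for logistic regression,
# mean |SHAP value| per feature for the boosted trees.
coef_importance = np.abs(models["logistic_l2"].coef_[0])
shap_values = shap.TreeExplainer(models["tree_boosting"]).shap_values(X_test)
shap_importance = np.abs(shap_values).mean(axis=0)
print("logistic ranking:", np.argsort(coef_importance)[::-1])
print("tree boosting ranking:", np.argsort(shap_importance)[::-1])

Comparing the two orderings mirrors the study's analysis step: in the paper's setting, such rankings consistently placed age and stroke severity among the top features across all models.
en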
dc.rights.uri
https://creativecommons.org/licenses/by/4.0/
dc.subject
Clinical Decision-Making
en
dc.subject
Retrospective Studies
en
dc.subject
Logistic Models
en
dc.subject
Supervised Machine Learning
en
dc.subject
Outcome Assessment, Health Care
en
dc.subject.ddc
600 Technology, medicine, applied sciences::610 Medicine and health
dc.title
Opening the black box of artificial intelligence for clinical decision support: A study predicting stroke outcome
dc.type
Scientific article
dcterms.bibliographicCitation.articlenumber
e0231166
dcterms.bibliographicCitation.doi
10.1371/journal.pone.0231166
dcterms.bibliographicCitation.journaltitle
PLoS ONE
dcterms.bibliographicCitation.number
4
dcterms.bibliographicCitation.originalpublishername
Public Library of Science (PLoS)
dcterms.bibliographicCitation.volume
15
refubium.affiliation
Charité - Universitätsmedizin Berlin
refubium.resourceType.isindependentpub
no
dcterms.accessRights.openaire
open access
dcterms.bibliographicCitation.pmid
32251471
dcterms.isPartOf.eissn
1932-6203