id,collection,dc.contributor.author,dc.date.accessioned,dc.date.available,dc.date.issued,dc.description.abstract[en],dc.format.extent,dc.identifier.uri,dc.identifier.urn,dc.language,dc.rights.uri,dc.subject.ddc,dc.subject[en],dc.title,dc.type[],dcterms.accessRights.dnb,dcterms.accessRights.openaire,refubium.affiliation,refubium.resourceType.isindependentpub
"87161bd3-1006-46a7-948c-67e47ae4cacd","fub188/25573","Gücükbel, Esra","2023-04-13T10:20:35Z","2023-04-13T10:20:35Z","2023","Through progressively evolving technology, applications of machine learning and deep learning methods have become prevalent as the size of collected data and data-processing capacity have increased. Among these methods, deep neural networks achieve high accuracy in various classification tasks; nonetheless, their opaqueness has caused them to be called black box models. As a trade-off, black box models fall short in terms of interpretability by humans. Without a supporting explanation of why the model reaches a particular conclusion, the output places decision-makers who must act on the predictions in a difficult position. In this context, various explanation methods have been developed to enhance the interpretability of black box models. The LIME, SHAP, and Integrated Gradients techniques are examples of more widely adopted approaches thanks to their well-developed and easy-to-use libraries. While LIME and SHAP are post-hoc analysis tools, Integrated Gradients provides model-specific outcomes by using the model’s inner workings. In this thesis, four widely used explanation methods are quantitatively evaluated for text classification tasks using a Bidirectional LSTM model and a DistilBERT model on four benchmark data sets: SMS Spam, IMDB Reviews, Yelp Polarity, and Fake News. The results of the experiments reveal that the analysis methods and evaluation metrics provide a promising foundation for assessing the strengths and weaknesses of explanation methods.","vii, 57 Seiten","https://refubium.fu-berlin.de/handle/fub188/38114||http://dx.doi.org/10.17169/refubium-37827","urn:nbn:de:kobv:188-refubium-38114-3","eng","http://www.fu-berlin.de/sites/refubium/rechtliches/Nutzungsbedingungen","000 Computer science, information, and general works::000 Computer Science, knowledge, systems::005 Computer programming, programs, data","XAI||Interpretability||Explainable AI||Natural Language Processing||Evaluation","Evaluating The Explanation of Black Box Decision for Text Classification","Masterarbeit","free","open access","Mathematik und Informatik","yes"