dc.contributor.author
Gücükbel, Esra
dc.date.accessioned
2023-04-13T10:20:35Z
dc.date.available
2023-04-13T10:20:35Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/38114
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-37827
dc.description.abstract
With progressively evolving technology, machine learning and deep learning methods have become prevalent as the size of collected data and data-processing capacity have grown. Among these methods, deep neural networks achieve highly accurate results in various classification tasks; nonetheless, their opaqueness is why they are called black box models. As a trade-off, black box models fall short in terms of human interpretability. Without a supporting explanation of why the model reaches a particular conclusion, the output leaves decision-makers, who must act on its predictions, in an uncertain position. In this context, various explanation methods have been developed to enhance the interpretability of black box models. LIME, SHAP, and Integrated Gradients are among the most widely adopted approaches, owing to their well-developed and easy-to-use libraries. While LIME and SHAP are model-agnostic post-hoc analysis tools, Integrated Gradients provides model-specific outcomes by drawing on the model's inner workings. In this thesis, four widely used explanation methods are quantitatively evaluated for text classification tasks using a Bidirectional LSTM model and a DistilBERT model on four benchmark data sets: SMS Spam, IMDB Reviews, Yelp Polarity, and Fake News. The results of the experiments reveal that the analysis methods and evaluation metrics provide a promising foundation for assessing the strengths and weaknesses of explanation methods.
en
dc.format.extent
vii, 57 pages
dc.rights.uri
http://www.fu-berlin.de/sites/refubium/rechtliches/Nutzungsbedingungen
dc.subject
Interpretability
en
dc.subject
Explainable AI
en
dc.subject
Natural Language Processing
en
dc.subject.ddc
000 Computer science, information, and general works::000 Computer Science, knowledge, systems::005 Computer programming, programs, data
dc.title
Evaluating The Explanation of Black Box Decision for Text Classification
dc.identifier.urn
urn:nbn:de:kobv:188-refubium-38114-3
refubium.affiliation
Mathematik und Informatik
refubium.resourceType.isindependentpub
yes
dcterms.accessRights.dnb
free
dcterms.accessRights.openaire
open access