dc.contributor.author
Boenisch, Franziska
dc.date.accessioned
2023-01-02T13:42:19Z
dc.date.available
2023-01-02T13:42:19Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/37279
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-36991
dc.description.abstract
In recent years, advances in Machine Learning (ML) have led to its increased use in critical applications and on highly sensitive data. This has drawn attention to the security and privacy of ML: models should operate correctly and should not reveal the sensitive data they were trained on. However, assessing and implementing ML security and privacy is a challenging task, first of all because the effects of current ML practices on these properties are not yet fully understood; consequently, the landscape of known risks still contains many blind spots. In a similar vein, the implicit assumptions under which ML security and privacy can be achieved in a given practical application often remain unexplored.
In this work, we present a study on security and privacy in ML that contributes to overcoming these limitations. To this end, we first provide insights into the current state of ML security and privacy in practice by surveying ML practitioners. We find that practitioners exhibit a particularly low awareness of ML privacy and that they trust third-party frameworks and services for its implementation. These insights motivate the need to investigate ML privacy in more depth. We do so with a focus on Federated Learning (FL), a commonly used framework for real-world applications that affect hundreds of thousands of users and their private data. In this setup, we study privacy leakage from ML models and show that model gradients can directly leak private information about large fractions of the sensitive training data. Building on these findings, we extend existing research on maliciously attacking the privacy of this training data by proposing a novel attack vector, namely adversarial initialization of the model weights. By thoroughly exploring this attack vector, we assess the trust assumptions required to obtain meaningful privacy guarantees in FL, focusing in particular on trust assumptions regarding the central server. Finally, to explore the intersection of ML security and privacy, we investigate the impact that implementing privacy guarantees has on ML models' robustness. Ultimately, through this work, we aim to advocate for a secure and privacy-preserving design of ML methods, in particular when these are applied in real-world scenarios.
en
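As an illustration of the gradient leakage described in the abstract, the following minimal sketch (hypothetical code, not taken from the thesis) shows how, for a single fully connected layer trained on one example, the shared gradient reproduces the private input exactly: each row of the weight gradient is the input scaled by the corresponding bias-gradient entry.

# Minimal sketch (assumed PyTorch setup, not the thesis' code): gradient leakage
# from a fully connected layer, the kind of direct leakage exploited in
# federated learning privacy attacks.
import torch

torch.manual_seed(0)
x = torch.rand(1, 8)                      # one "private" training example
y = torch.tensor([3])                     # its label

layer = torch.nn.Linear(8, 4)             # single fully connected layer
loss = torch.nn.functional.cross_entropy(layer(x), y)
loss.backward()

# For y = Wx + b: dL/dW = (dL/dy) x^T and dL/db = dL/dy, so dividing any row of
# the weight gradient by the matching bias-gradient entry recovers x exactly.
i = torch.nonzero(layer.bias.grad).flatten()[0]
reconstructed = layer.weight.grad[i] / layer.bias.grad[i]
print(torch.allclose(reconstructed, x.flatten(), atol=1e-5))  # True

A party that observes such per-example gradients, for instance a federated server, can therefore read off training inputs directly; the thesis builds on this kind of leakage when studying adversarial initialization and the trust placed in the central server.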
dc.format.extent
xvii, 176 pages
dc.rights.uri
http://www.fu-berlin.de/sites/refubium/rechtliches/Nutzungsbedingungen
dc.subject
machine learning
en
dc.subject
differential privacy
en
dc.subject
federated learning
en
dc.subject.ddc
000 Computer science, information, and general works::000 Computer Science, knowledge, systems::000 Computer science, information, and general works
dc.title
Secure and Private Machine Learning
dc.contributor.gender
female
dc.contributor.firstReferee
Margraf, Marian
dc.contributor.furtherReferee
Papernot, Nicolas
dc.date.accepted
2022-11-17
dc.identifier.urn
urn:nbn:de:kobv:188-refubium-37279-3
dc.title.translated
Sicheres und privates maschinelles Lernen
de
refubium.affiliation
Mathematik und Informatik
dcterms.accessRights.dnb
free
dcterms.accessRights.openaire
open access
dcterms.accessRights.proquest
accept