dc.contributor.author
Münchmeyer, Jannes
dc.contributor.author
Woollam, Jack
dc.contributor.author
Rietbrock, Andreas
dc.contributor.author
Tilmann, Frederik
dc.contributor.author
Lange, Dietrich
dc.contributor.author
Bornstein, Thomas
dc.contributor.author
Diehl, Tobias
dc.contributor.author
Giunchi, Carlo
dc.contributor.author
Haslinger, Florian
dc.contributor.author
Jozinović, Dario
dc.date.accessioned
2022-03-01T11:51:43Z
dc.date.available
2022-03-01T11:51:43Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/34258
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-33976
dc.description.abstract
Seismic event detection and phase picking are the basis of many seismological workflows. In recent years, several publications demonstrated that deep learning approaches significantly outperform classical approaches, achieving human-like performance under certain circumstances. However, as studies differ in the data sets and evaluation tasks, it is unclear how the different approaches compare to each other. Furthermore, there are no systematic studies of model performance in cross-domain scenarios, that is, when applied to data with different characteristics. Here, we address these questions by conducting a large-scale benchmark. We compare six previously published deep learning models on eight data sets covering local to teleseismic distances and on three tasks: event detection, phase identification and onset time picking. In addition, we compare the results to a classical Baer-Kradolfer picker. Overall, we observe the best performance for EQTransformer, GPD and PhaseNet, with a small advantage for EQTransformer on teleseismic data. We also conduct a cross-domain study, analyzing model performance on data sets the models were not trained on. We show that trained models can be transferred between regions with only mild performance degradation, but models trained on regional data do not transfer well to teleseismic data. As deep learning for detection and picking is a rapidly evolving field, we ensured extensibility of our benchmark by building our code on standardized frameworks and making it openly accessible. This allows model developers to easily evaluate new models or measure performance on new data sets. Furthermore, we make all trained models available through the SeisBench framework, giving end-users an easy way to apply these models.
en
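The abstract states that all trained pickers are distributed through the SeisBench framework for end-users. The following Python sketch illustrates roughly what that workflow can look like: loading a pretrained EQTransformer and applying it to an ObsPy stream. It is an illustration only, not the authors' exact evaluation code; the pretrained weight name "original" and the use of ObsPy's bundled example stream are assumptions made here for the sake of a self-contained snippet.

```python
import obspy
import seisbench.models as sbm

# Load pretrained EQTransformer weights from the SeisBench model repository.
# The weight name "original" is an assumption; available names may differ
# by model and SeisBench version.
model = sbm.EQTransformer.from_pretrained("original")

# ObsPy's bundled three-component example stream stands in for real data here.
stream = obspy.read()

# annotate() returns continuous P/S/detection probability traces as an ObsPy Stream.
annotations = model.annotate(stream)
print(annotations)

# classify() returns discrete picks and detections; the exact return structure
# depends on the SeisBench version, so we only print it here.
outputs = model.classify(stream)
print(outputs)
```

In practice the example stream would be replaced by waveforms from the target network, and the picks from classify() would feed into downstream association and location steps.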
dc.format.extent
22 pages
dc.rights.uri
https://creativecommons.org/licenses/by/4.0/
dc.subject
Seismic Pickers
en
dc.subject
Deep Learning
en
dc.subject.ddc
500 Natural sciences and mathematics::550 Earth sciences and geology::550 Earth sciences
dc.title
Which Picker Fits My Data? A Quantitative Evaluation of Deep Learning Based Seismic Pickers
dc.type
Scientific article
dcterms.bibliographicCitation.articlenumber
e2021JB023499
dcterms.bibliographicCitation.doi
10.1029/2021JB023499
dcterms.bibliographicCitation.journaltitle
Journal of Geophysical Research: Solid Earth
dcterms.bibliographicCitation.number
1
dcterms.bibliographicCitation.volume
127
dcterms.bibliographicCitation.url
https://doi.org/10.1029/2021JB023499
refubium.affiliation
Geowissenschaften
refubium.affiliation.other
Institut für Geologische Wissenschaften / Fachrichtung Geophysik
refubium.resourceType.isindependentpub
no
dcterms.accessRights.openaire
open access
dcterms.isPartOf.eissn
2169-9356
refubium.resourceType.provider
WoS-Alert