dc.contributor.author
Cejudo, Jose E.
dc.contributor.author
Chaurasia, Akhilanand
dc.contributor.author
Feldberg, Ben
dc.contributor.author
Krois, Joachim
dc.contributor.author
Schwendicke, Falk
dc.date.accessioned
2021-09-02T15:22:56Z
dc.date.available
2021-09-02T15:22:56Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/31800
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-31532
dc.description.abstract
Objectives: To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. Methods: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin/Germany; Lucknow/India). For a subset of images L (32,381 images), image classifications were available and had been manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning: ResNet-34 was trained on L, least-confidence informative sampling was performed on U, and the most uncertain image classifications from U were reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Model performance was evaluated using stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the weighted activation maps. Results: All three models showed high accuracy (>98%), with ResNet achieving significantly higher accuracy, F1-score, precision, and sensitivity than the baseline CNN and CapsNet (p < 0.05); specificity did not differ significantly. ResNet achieved the best performance, with small variance and the fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. Conclusions: Regardless of the model used, high classification accuracies were achieved, and the image features considered for classification were consistent with expert reasoning.
en
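A note for readers of the methods above: least-confidence informative sampling, the active-learning step described in the abstract, ranks unlabeled images by one minus their top softmax probability and hands the most uncertain ones to a human expert. Below is a minimal sketch of that selection step, assuming PyTorch, an already-trained classifier, and a DataLoader over the unlabeled pool U that yields (image batch, index batch) pairs; the function name and loader contract are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def least_confidence_sampling(model, unlabeled_loader, k, device="cpu"):
    """Return indices of the k most uncertain unlabeled images.

    Uncertainty is measured by least confidence: 1 minus the top
    predicted class probability. (Hypothetical helper, for illustration.)
    """
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for images, idx in unlabeled_loader:
            probs = F.softmax(model(images.to(device)), dim=1)
            top_prob, _ = probs.max(dim=1)
            scores.append(1.0 - top_prob.cpu())   # higher = less confident
            indices.append(idx)
    order = torch.cat(scores).argsort(descending=True)
    return torch.cat(indices)[order[:k]].tolist()

The returned indices would then be reviewed and labeled by the expert, moved from U to L, and the model retrained, iterating until the pool is annotated.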
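The abstract also uses Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize which image regions drive each classification (the inter-arch space for bitewings, interdental regions for periapicals, and so on). A self-contained Grad-CAM sketch under the same PyTorch assumption follows; grad_cam and its hook bookkeeping are hypothetical names, and for a ResNet-34 target_layer would typically be the final block of model.layer4.

import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx=None):
    """Compute a Grad-CAM heatmap for a single image tensor (C, H, W)."""
    activations, gradients = [], []
    h_fwd = target_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    h_bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))
    try:
        model.eval()
        logits = model(image.unsqueeze(0))              # (1, num_classes)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()     # explain top prediction
        model.zero_grad()
        logits[0, class_idx].backward()
        acts, grads = activations[0], gradients[0]      # both (1, C, H, W)
        weights = grads.mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
        cam = F.relu((weights * acts).sum(dim=1))       # weighted activation map
        return (cam / (cam.max() + 1e-8)).squeeze(0).detach()
    finally:
        h_fwd.remove()
        h_bwd.remove()

Upsampled to the input resolution, the returned map can be overlaid on the radiograph to check that the model attends to clinically plausible structures.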
dc.rights.uri
https://creativecommons.org/licenses/by/4.0/
dc.subject
artificial intelligence
en
dc.subject
classification
en
dc.subject
deep learning
en
dc.subject
machine learning
en
dc.subject.ddc
600 Technology, medicine, applied sciences::610 Medicine and health
dc.title
Classification of Dental Radiographs Using Deep Learning
dc.type
Scientific article
dcterms.bibliographicCitation.articlenumber
1496
dcterms.bibliographicCitation.doi
10.3390/jcm10071496
dcterms.bibliographicCitation.journaltitle
Journal of Clinical Medicine
dcterms.bibliographicCitation.number
7
dcterms.bibliographicCitation.originalpublishername
MDPI AG
dcterms.bibliographicCitation.volume
10
refubium.affiliation
Charité - Universitätsmedizin Berlin
refubium.resourceType.isindependentpub
no
dcterms.accessRights.openaire
open access
dcterms.bibliographicCitation.pmid
33916800
dcterms.isPartOf.eissn
2077-0383