dc.contributor.author
Morger, Andrea
dc.contributor.author
Garcia de Lomana, Marina
dc.contributor.author
Norinder, Ulf
dc.contributor.author
Svensson, Fredrik
dc.contributor.author
Kirchmair, Johannes
dc.contributor.author
Mathea, Miriam
dc.contributor.author
Volkamer, Andrea
dc.date.accessioned
2024-03-04T15:33:16Z
dc.date.available
2024-03-04T15:33:16Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/42633
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-42357
dc.description.abstract
Machine learning models are widely applied to predict molecular properties or the biological activity of small molecules on a specific protein. Models can be integrated into a conformal prediction (CP) framework, which adds a calibration step to estimate the confidence of the predictions. CP models have the advantage of ensuring a predefined error rate under the assumption that the test and calibration sets are exchangeable. In cases where the test data have drifted away from the descriptor space of the training data, or where assay setups have changed, this assumption might not be fulfilled and the models are not guaranteed to be valid. In this study, the performance of internally valid CP models when applied to either newer time-split data or to external data was evaluated. In detail, temporal data drifts were analysed based on twelve datasets from the ChEMBL database. In addition, discrepancies between models trained on publicly available data and applied to proprietary data were investigated for the liver toxicity and in vivo micronucleus test (MNT) endpoints. In most cases, a drastic decrease in the validity of the models was observed when they were applied to the time-split or external (holdout) test sets. To overcome the decrease in model validity, a strategy of updating the calibration set with data more similar to the holdout set was investigated. Updating the calibration set generally improved the validity, restoring it completely to its expected value in many cases. The restored validity is the first prerequisite for applying the CP models with confidence. However, the increased validity comes at the cost of a decrease in model efficiency, as more predictions are identified as inconclusive. This study presents a strategy to recalibrate CP models to mitigate the effects of data drifts. Updating the calibration sets without having to retrain the model has proven to be a useful approach to restore the validity of most models.
en
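The recalibration strategy described in the abstract keeps the trained model fixed and only exchanges the calibration set. The following is a minimal, illustrative sketch of inductive conformal prediction for binary classification, not the authors' implementation: the random data, split sizes, classifier choice, and the significance level of 0.2 are all placeholder assumptions, and the helper functions `calibrate` and `predict_sets` are hypothetical names introduced here.

```python
# Minimal sketch of inductive conformal prediction with a swappable
# calibration set. Illustrative only; data and parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def calibrate(model, X_cal, y_cal):
    """Nonconformity scores on a calibration set: 1 - P(true class)."""
    proba = model.predict_proba(X_cal)
    return 1.0 - proba[np.arange(len(y_cal)), y_cal]

def predict_sets(model, cal_scores, X, significance=0.2):
    """Conformal prediction sets; a set may be empty or contain both classes."""
    proba = model.predict_proba(X)
    n = len(cal_scores)
    sets = []
    for p in proba:
        labels = []
        for cls in range(proba.shape[1]):
            score = 1.0 - p[cls]
            # p-value: fraction of calibration scores at least as nonconforming
            p_val = (np.sum(cal_scores >= score) + 1) / (n + 1)
            if p_val > significance:
                labels.append(cls)
        sets.append(labels)
    return sets

# Illustrative random data standing in for training, calibration and holdout sets
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 10)), rng.integers(0, 2, 400)
X_cal, y_cal = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)
X_new_cal, y_new_cal = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)
X_holdout = rng.normal(size=(50, 10))

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Recalibration: the model is not retrained, only the calibration scores change.
sets_original = predict_sets(model, calibrate(model, X_cal, y_cal), X_holdout)
sets_updated = predict_sets(model, calibrate(model, X_new_cal, y_new_cal), X_holdout)
```

If the updated calibration data are exchangeable with the holdout data, the error rate of the prediction sets is expected to stay at the chosen significance level, at the possible cost of more inconclusive (two-label or empty) sets, which mirrors the validity/efficiency trade-off reported in the abstract.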
dc.rights.uri
https://creativecommons.org/licenses/by/4.0/
dc.subject
chemical toxicity data
en
dc.subject
Machine learning (ML) models
en
dc.subject.ddc
600 Technology, medicine, applied sciences::610 Medicine and health::610 Medicine and health
dc.title
Studying and mitigating the effects of data drifts on ML model performance at the example of chemical toxicity data
dc.type
Scientific article
dcterms.bibliographicCitation.articlenumber
7244
dcterms.bibliographicCitation.doi
10.1038/s41598-022-09309-3
dcterms.bibliographicCitation.journaltitle
Scientific Reports
dcterms.bibliographicCitation.number
1
dcterms.bibliographicCitation.originalpublishername
Springer Nature
dcterms.bibliographicCitation.volume
12
refubium.affiliation
Charité - Universitätsmedizin Berlin
refubium.funding
Springer Nature DEAL
refubium.resourceType.isindependentpub
no
dcterms.accessRights.openaire
open access
dcterms.bibliographicCitation.pmid
35508546
dcterms.isPartOf.eissn
2045-2322