This study addresses the gap in machine learning tools for classifying positive results by evaluating the performance of SciBERT, a transformer model pretrained on scientific text, and a random forest classifier on clinical psychology abstracts. Over 1,900 abstracts were annotated into two categories: positive results only versus mixed or negative results. Model performance was evaluated against three benchmarks. The best-performing model was then used to analyze trends in over 20,000 psychotherapy study abstracts. SciBERT outperformed all benchmarks and the random forest on both in-domain and out-of-domain data. The trend analysis revealed no significant effect of publication year on positive results for 1990–2005, but a significant decrease in positive results between 2005 and 2022. Across the entire time span, significant positive linear and negative quadratic effects were observed. Machine learning could support future efforts to understand patterns of positive results in large data sets. The fine-tuned SciBERT model was deployed for public use.
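To make the classification task concrete, the following is a minimal illustrative sketch of a bag-of-words random forest baseline of the kind the abstract compares against; it is not the authors' pipeline, and the toy abstracts and labels below are invented stand-ins for the roughly 1,900 hand-annotated examples.

```python
# Sketch of a TF-IDF + random forest baseline for the binary task:
# "positive results only" (1) vs. "mixed or negative results" (0).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the study annotated ~1,900 real abstracts.
abstracts = [
    "The intervention significantly reduced symptoms at follow-up.",
    "Treatment effects were significant on all primary outcomes.",
    "No significant group differences emerged on primary outcomes.",
    "Results were mixed, with effects on some but not all measures.",
]
labels = [1, 1, 0, 0]  # 1 = positive only, 0 = mixed/negative

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(abstracts, labels)

pred = clf.predict(["Outcomes improved significantly in the treatment arm."])
print(pred[0])
```

A fine-tuned transformer such as SciBERT replaces the sparse TF-IDF features with contextual embeddings, which is what the abstract credits for the performance gain on out-of-domain data.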