Pragmatic Apparatus – Putting Psychology to the Test: Rethinking Model Evaluation Through Benchmarking and Prediction

Cited by Lee Sonogan

Benchmarking as Resource Prediction

Abstract (by Roberta Rocca and Tal Yarkoni):

Consensus on standards for evaluating models and theories is an integral part of every science. Nonetheless, in psychology, relatively little focus has been placed on defining reliable communal metrics to assess model performance. Evaluation practices are often idiosyncratic and are affected by a number of shortcomings (e.g., failure to assess models’ ability to generalize to unseen data) that make it difficult to discriminate between good and bad models. Drawing inspiration from fields such as machine learning and statistical genetics, we argue in favor of introducing common benchmarks as a means of overcoming the lack of reliable model evaluation criteria currently observed in psychology. We discuss a number of principles benchmarks should satisfy to achieve maximal utility, identify concrete steps the community could take to promote the development of such benchmarks, and address a number of potential pitfalls and concerns that may arise in the course of implementation. We argue that reaching consensus on common evaluation benchmarks will foster cumulative progress in psychology and encourage researchers to place heavier emphasis on the practical utility of scientific models.
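The abstract's key methodological point, that evaluation should check how well a model predicts data it was never fit to, is the same out-of-sample logic used in machine-learning benchmarks. As a rough illustration only (the dataset, model, and metric below are placeholder assumptions, not anything taken from the article), a held-out evaluation in Python might look like this:

```python
# Minimal sketch (not from the paper): scoring a model on held-out data,
# the generalization check the abstract says is often missing in psychology.
# The dataset and model here are illustrative stand-ins, not the authors' benchmark.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)  # stand-in tabular dataset
model = Ridge(alpha=1.0)               # simple baseline model

# 5-fold cross-validation: each score is computed on data the model
# never saw during fitting, so it estimates out-of-sample performance.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("held-out R^2 per fold:", np.round(scores, 3))
print(f"mean held-out R^2: {scores.mean():.3f}")
```

A shared benchmark would go one step further: the held-out data and the scoring rule would be fixed by the community rather than chosen by each researcher, so scores from different models are directly comparable.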

Publication: Advances in Methods and Practices in Psychological Science (Peer-Reviewed Journal)

Pub Date: Sep 23, 2021 | DOI: https://doi.org/10.1177/25152459211026864

Keywords: psychology, model evaluation, benchmarking, machine learning, open data, open materials

https://journals.sagepub.com/doi/full/10.1177/25152459211026864 (Plenty more sections and references in this research article)

https://www.patreon.com/GROOVYGORDS

https://entertainmentcultureonline.com/

https://ungroovygords.com/
