Semantic Apparatus – Research community dynamics behind popular AI benchmarks

Cited by Lee Sonogan

AI and Gaming Research Summit - Microsoft Research

Abstract by Fernando Martínez-Plumed, Pablo Barredo, Seán Ó hÉigeartaigh & José Hernández-Orallo

The widespread use of experimental benchmarks in AI research has created competition and collaboration dynamics that are still poorly understood. Here we provide an innovative methodology to explore these dynamics and analyse the way different entrants in these challenges, from academia to tech giants, behave and react depending on their own or others’ achievements. We perform an analysis of 25 popular benchmarks in AI from Papers With Code, with around 2,000 result entries overall, connected with their underlying research papers. We identify links between researchers and institutions (that is, communities) beyond the standard co-authorship relations, and we explore a series of hypotheses about their behaviour as well as some aggregated results in terms of activity, performance jumps and efficiency. We characterize the dynamics of research communities at different levels of abstraction, including organization, affiliation, trajectories, results and activity. We find that hybrid, multi-institution and persevering communities are more likely to improve state-of-the-art performance, which becomes a watershed for many community members. Although the results cannot be extrapolated beyond our selection of popular machine learning benchmarks, the methodology can be extended to other areas of artificial intelligence or robotics, and combined with bibliometric studies.

Publication: Nature Machine Intelligence (peer-reviewed journal)

Pub Date: 17 May 2021 | DOI: (Plenty more sections, figures and references in this paid article.)
