Cited by Lee Sonogan

Abstract by Partha P. Mitra
Textbook wisdom advocates for smooth function fits and implies that interpolation of noisy data should lead to poor generalization. A related heuristic is that fitting parameters should be fewer than measurements (Occam’s razor). Surprisingly, contemporary machine learning approaches, such as deep nets, generalize well, despite interpolating noisy data. This may be understood via statistically consistent interpolation (SCI), that is, data interpolation techniques that generalize optimally for big data. Here, we elucidate SCI using the weighted interpolating nearest neighbours algorithm, which adds singular weight functions to k nearest neighbours. This shows that data interpolation can be a valid machine learning strategy for big data. SCI clarifies the relation between two ways of modelling natural phenomena: the rationalist approach (strong priors) of theoretical physics with few parameters, and the empiricist (weak priors) approach of modern machine learning with more parameters than data. SCI shows that the purely empirical approach can successfully predict. However, data interpolation does not provide theoretical insights, and the training data requirements may be prohibitive. Complex animal brains are between these extremes, with many parameters, but modest training data, and with prior structure encoded in species-specific mesoscale circuitry. Thus, modern machine learning provides a distinct epistemological approach that is different both from physical theories and animal brains.
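The abstract only names the weighted interpolating nearest neighbours idea, but the core mechanism (a singular weight that diverges as the query point approaches a training point, so the fit passes through every datum yet averages over k neighbours elsewhere) can be illustrated with a minimal sketch. The power-law exponent `delta`, the toy sin(x) data, and the function name below are illustrative assumptions, not the paper's actual construction or tuning.

```python
import numpy as np

def weighted_interpolating_knn(x_train, y_train, x_query, k=5, delta=2.0, eps=1e-12):
    """Minimal sketch of a singularly weighted k-NN regressor.

    Weights blow up as a query approaches a training point, so the fit
    interpolates the training data exactly while behaving like a local
    weighted average away from the data. `delta` is an illustrative choice.
    """
    preds = []
    for xq in np.atleast_2d(x_query):
        d = np.linalg.norm(x_train - xq, axis=1)        # distances to all training points
        nn = np.argsort(d)[:k]                          # indices of the k nearest neighbours
        if d[nn[0]] < eps:                              # query coincides with a training point
            preds.append(y_train[nn[0]])                # return that label: exact interpolation
            continue
        w = d[nn] ** (-delta)                           # singular (diverging) weights
        preds.append(np.dot(w, y_train[nn]) / w.sum())  # normalized weighted average
    return np.array(preds)

# Toy usage: noisy samples of sin(x); the fit passes through every noisy sample,
# yet predictions at new points are smoothed by averaging over k neighbours.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, size=(50, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(50)
x_new = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)
print(weighted_interpolating_knn(x, y, x_new, k=5))
```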
Publication: Nature Machine Intelligence (Peer-Reviewed Journal)
Pub Date: 19 May 2021 | DOI: https://doi.org/10.1038/s42256-021-00345-8
https://www.nature.com/articles/s42256-021-00345-8#citeas (further sections, figures and references are available in the full, paywalled article)
https://entertainmentcultureonline.com/