Cited by Lee Sonogan
Abstract, by Katherine R. Storrs, Barton L. Anderson & Roland W. Fleming
Reflectance, lighting and geometry combine in complex ways to create images. How do we disentangle these to perceive individual properties, such as surface glossiness? We suggest that brains disentangle properties by learning to model statistical structure in proximal images. To test this hypothesis, we trained unsupervised generative neural networks on renderings of glossy surfaces and compared their representations with human gloss judgements. The networks spontaneously cluster images according to distal properties such as reflectance and illumination, despite receiving no explicit information about these properties. Intriguingly, the resulting representations also predict the specific patterns of ‘successes’ and ‘errors’ in human perception. Linearly decoding specular reflectance from the model’s internal code predicts human gloss perception better than ground truth, supervised networks or control models, and it predicts, on an image-by-image basis, illusions of gloss perception caused by interactions between material, shape and lighting. Unsupervised learning may underlie many perceptual dimensions in vision and beyond.
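The core analysis the abstract describes, learning an unsupervised latent code from images and then linearly decoding a distal property (specular reflectance) from it, can be illustrated with a minimal NumPy sketch. This is not the paper's actual model (the authors trained generative neural networks on rendered surfaces); here PCA stands in for the unsupervised stage, and the images, reflectance values and nuisance factors are all synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for rendered glossy-surface images: each
# "image" mixes a distal reflectance value with nuisance factors
# (lighting, shape proxies) through a fixed random projection.
n_images, n_pixels = 500, 64
reflectance = rng.uniform(0, 1, n_images)      # distal property
nuisance = rng.normal(size=(n_images, 3))      # lighting/shape proxies
factors = np.column_stack([reflectance, nuisance])
mixing = rng.normal(size=(4, n_pixels))
images = factors @ mixing + 0.05 * rng.normal(size=(n_images, n_pixels))

# Unsupervised stage: PCA via SVD learns a compact latent code
# without ever seeing reflectance labels (analogous to the
# generative networks receiving no explicit property information).
centered = images - images.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
code = centered @ vt[:4].T                     # 4-D latent code

# Supervised read-out: linearly decode reflectance from the code,
# mirroring the paper's linear decoding from the model's internals.
X = np.column_stack([code, np.ones(n_images)])
w, *_ = np.linalg.lstsq(X, reflectance, rcond=None)
pred = X @ w
r = np.corrcoef(pred, reflectance)[0, 1]
print(f"decoding correlation: {r:.2f}")
```

Because the synthetic mixing is linear, the reflectance factor survives in the unsupervised code and the linear read-out recovers it almost perfectly; the paper's striking result is that something similar happens with real nonlinear renderings, and that the decoder's residual errors match human gloss illusions.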
Publication: Nature Human Behaviour (Peer-Reviewed Journal)
Pub Date: 06 May 2021 | DOI: https://doi.org/10.1038/s41562-021-01097-6
https://www.nature.com/articles/s41562-021-01097-6#citeas (Plenty more sections, figures and references in the article)