Open Access
Predicting human complexity perception of real-world scenes
Author(s) - Fintan Nagle, Nilli Lavie
Publication year - 2020
Publication title - Royal Society Open Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.84
H-Index - 51
ISSN - 2054-5703
DOI - 10.1098/rsos.191487
Subject(s) - computer science , artificial intelligence , convolutional neural network , perception , human visual system model , visual perception , computational complexity theory , pattern recognition (psychology) , pascal (unit) , computer vision , image (mathematics) , algorithm , psychology , neuroscience , programming language
Perceptual load is a well-established determinant of attentional engagement in a task. So far, perceptual load has typically been manipulated by increasing either the number of task-relevant items or the perceptual processing demand (e.g. conjunction versus feature tasks). The tasks used have often involved rather simple visual displays (e.g. letters or single objects). How can perceptual load be operationalized for richer, real-world images? A promising proxy is the visual complexity of an image. However, current predictive models of visual complexity have limited applicability to diverse real-world images. Here we modelled visual complexity using a deep convolutional neural network (CNN) trained to predict perceived ratings of visual complexity. We presented 53 observers with 4000 images from the PASCAL VOC dataset, obtaining 75 020 two-alternative forced-choice (2AFC) paired comparisons across observers. Per-image visual complexity scores were derived from these comparisons using the TrueSkill algorithm. A CNN with weights pre-trained on an object recognition task predicted the complexity ratings with r = 0.83. By contrast, feature-based models used in the literature, operating on image statistics such as entropy, edge density and JPEG compression ratio, achieved only r = 0.70. Thus, our model offers a promising method for quantifying the perceptual load of real-world scenes through visual complexity.
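The feature-based baselines mentioned in the abstract can be illustrated concretely. The sketch below (not the authors' implementation; function names and the gradient-threshold parameter are assumptions) computes the three image statistics named above: grey-level histogram entropy, edge density via a simple gradient threshold, and JPEG compression ratio.

```python
import io

import numpy as np
from PIL import Image


def entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())


def edge_density(gray: np.ndarray, threshold: float = 30.0) -> float:
    """Fraction of pixels whose gradient magnitude exceeds a threshold.

    The threshold value is an illustrative choice, not taken from the paper.
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return float((magnitude > threshold).mean())


def jpeg_ratio(img: Image.Image, quality: int = 75) -> float:
    """Compressed JPEG size relative to raw pixel size.

    Less compressible (more complex) images yield a higher ratio.
    """
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    raw_bytes = img.size[0] * img.size[1] * len(img.getbands())
    return buf.tell() / raw_bytes
```

On a noisy image all three statistics come out higher than on a uniform one, matching the intuition that they track visual complexity; a regression over such features is the kind of model the CNN is compared against.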
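Turning 2AFC paired comparisons into per-image scores can be sketched with a simple rating update. The paper uses TrueSkill, which models each item's score as a Gaussian and updates both its mean and uncertainty; the stand-in below is a plainer Elo-style update (my simplification, not the authors' method) that conveys the same idea: each "this image looks more complex" judgement nudges the winner's score up and the loser's down.

```python
def elo_scores(comparisons, k=32.0, rounds=20):
    """Elo-style scores from (winner, loser) 2AFC judgements.

    A simplified stand-in for TrueSkill: TrueSkill additionally
    tracks a per-item uncertainty, which this sketch omits.
    """
    scores = {}
    for _ in range(rounds):  # repeated passes help the scores settle
        for winner, loser in comparisons:
            rw = scores.setdefault(winner, 1000.0)
            rl = scores.setdefault(loser, 1000.0)
            # Expected probability that `winner` is judged more complex.
            expected = 1.0 / (1.0 + 10.0 ** ((rl - rw) / 400.0))
            delta = k * (1.0 - expected)
            scores[winner] = rw + delta
            scores[loser] = rl - delta
    return scores
```

Given judgements consistent with a single ordering (A more complex than B, B more complex than C), the recovered scores reproduce that ordering, which is all the downstream regression needs.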