View-based approaches to spatial representation in human vision


Author(s): Glennerster, Andrew; Hansard, Miles E.; Fitzgibbon, Andrew W.
Date(s)

2009

Abstract

In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
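The abstract's suggestion that observers "move towards a desired image" rather than towards a reconstructed 3-D location can be illustrated with a toy simulation. The sketch below is an illustrative assumption, not the chapter's model: an agent stores the view (here, a vector of landmark bearings) at a goal location and simply steps in whichever direction reduces the difference between its current and stored views, with no 3-D reconstruction at any point. The landmark layout, the view function, and the step rule are all hypothetical.

```python
# Minimal, hypothetical sketch of view-based homing (not the authors' model):
# the agent minimises the difference between its current view and a stored
# goal view, never recovering 3-D scene coordinates.

import numpy as np

rng = np.random.default_rng(0)
landmarks = rng.uniform(-5, 5, size=(8, 2))   # assumed 2-D landmark layout

def view(position):
    """Toy 'image': unit bearing vectors to each landmark, concatenated."""
    offsets = landmarks - position
    bearings = offsets / np.linalg.norm(offsets, axis=1, keepdims=True)
    return bearings.ravel()

goal = np.array([2.0, 1.0])
goal_view = view(goal)                        # the remembered / desired image

pos = np.array([-3.0, -2.0])
step = 0.2
for _ in range(200):
    # Try a ring of small displacements and keep the one that best reduces
    # the image difference (pure view matching, no reconstruction).
    best_move = np.zeros(2)
    best_err = np.linalg.norm(view(pos) - goal_view)
    for angle in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        move = step * np.array([np.cos(angle), np.sin(angle)])
        err = np.linalg.norm(view(pos + move) - goal_view)
        if err < best_err:
            best_move, best_err = move, err
    if not best_move.any():                   # no move improves the match
        break
    pos = pos + best_move

print("final position:", pos, "goal:", goal)
```

Because the bearing pattern changes smoothly and (generically) uniquely with position, descending on the image difference drives the agent to the goal; the same logic is insensitive to how quickly successive views arrive, echoing the abstract's point about observers expecting particular images rather than a particular rate of image change.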

Format

text

Identifier

http://centaur.reading.ac.uk/2054/1/ghf2009_preprint.pdf

Glennerster, A., Hansard, M. E. and Fitzgibbon, A. W. (2009) View-based approaches to spatial representation in human vision. In: Statistical and geometrical approaches to visual motion analysis. Lecture Notes in Computer Science, 5604. Springer, Berlin, pp. 193-208. ISBN 9783642030604. doi: 10.1007/978-3-642-03061-1_10

Language(s)

en

Publisher

Springer

Relation

http://centaur.reading.ac.uk/2054/

http://dx.doi.org/10.1007/978-3-642-03061-1_10

doi:10.1007/978-3-642-03061-1_10

Keywords

#571 Physiology & related subjects

Type

Book or Report Section

PeerReviewed

Rights