946 results for Spatial representation
Abstract:
Within the building evacuation context, wayfinding describes the process in which an individual located within an arbitrarily complex enclosure attempts to find a path that leads them to relative safety, usually the exterior of the enclosure. In most evacuation modelling tools, wayfinding is ignored entirely; agents are either assigned the shortest-distance path or use a potential field to find the shortest path to the exits. In this paper a novel wayfinding technique that attempts to represent the manner in which people wayfind within structures is introduced and demonstrated through two examples. The first step is to encode the spatial information of the enclosure as a graph. The second step is to apply search algorithms to the graph to find possible routes to the destination and to assign each route a cost based on the agent's personal route preferences, such as "least time", "least distance", or a combination of criteria. The third step is route execution and refinement: the agent moves along the chosen route, reassesses it at regular intervals, and may switch to an alternative path if that route becomes more favourable, e.g. because the initial path is highly congested or is blocked by fire.
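The three-step scheme in this abstract can be illustrated with a minimal sketch (an editor's illustration, not the paper's implementation): the enclosure is encoded as a weighted graph, candidate routes are scored by a personal weighting of distance and expected time, and the chosen route is re-evaluated as edges become congested or blocked. All node names, weights and the congestion update below are hypothetical.

```python
import heapq

# Hypothetical enclosure graph: node -> {neighbour: (distance_m, expected_time_s)}
GRAPH = {
    "room_a": {"corridor": (5.0, 6.0)},
    "corridor": {"room_a": (5.0, 6.0), "stair": (12.0, 15.0), "lobby": (20.0, 18.0)},
    "stair": {"corridor": (12.0, 15.0), "exit_1": (8.0, 14.0)},
    "lobby": {"corridor": (20.0, 18.0), "exit_2": (4.0, 5.0)},
    "exit_1": {}, "exit_2": {},
}
EXITS = {"exit_1", "exit_2"}
BLOCKED = set()          # edges (u, v) ruled out, e.g. by fire
CONGESTION = {}          # edge -> multiplier on expected travel time

def edge_cost(u, v, w_dist, w_time):
    """Personal cost of an edge: weighted mix of distance and (congested) time."""
    dist, time = GRAPH[u][v]
    time *= CONGESTION.get((u, v), 1.0)
    return w_dist * dist + w_time * time

def best_route(start, w_dist=1.0, w_time=0.0):
    """Dijkstra search over the personal cost function; returns (cost, route)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node in EXITS:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt in GRAPH[node]:
            if (node, nxt) in BLOCKED or nxt in seen:
                continue
            heapq.heappush(queue,
                           (cost + edge_cost(node, nxt, w_dist, w_time), nxt, route + [nxt]))
    return float("inf"), []

# Step 3: periodic reassessment -- if conditions change, recompute and possibly switch.
cost, route = best_route("room_a", w_dist=0.5, w_time=0.5)
CONGESTION[("corridor", "lobby")] = 4.0      # heavy queue observed on this edge
new_cost, new_route = best_route("room_a", w_dist=0.5, w_time=0.5)
if new_route != route:
    route = new_route                        # agent switches to the more favourable path
```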
Abstract:
The aim of the present study was to determine whether and how rats can use local olfactory cues for spatial orientation. Rats were trained in an eight-arm radial maze under different conditions defined by the presence or absence of supplementary olfactory cues marking each arm, the availability of distant visuospatial information, and the illumination of the maze (light or darkness). The different visual conditions were designed to dissociate between the effects of light per se and those of visuospatial cues on the use of olfactory cues for accurate arm choice. Different procedures involving modifications of the arrangement of the olfactory cues were used to determine whether rats formed a representation of the spatial configuration of those cues and whether they could rely on such a representation for accurate arm choice in the radial maze. The study demonstrated that the use of olfactory cues to direct arm choice in the radial arm maze was critically dependent on the illumination conditions and involved two different modes of processing olfactory information according to the presence or absence of light. Olfactory cues were used in an explicit manner and enabled accurate arm choice only in the absence of light. Rats, however, had an implicit memory of the location of the olfactory cues and formed a representation of the spatial position of these cues whatever the lighting conditions. They did not memorize the spatial configuration of the olfactory cues per se but needed these cues to be linked to the external spatial frame of reference.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
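A minimal sketch of the view-based idea described above (not the authors' model): the observer stores a snapshot of the goal view and simply moves in whichever direction reduces the difference between the current image and that snapshot, never building a 3-D model. The synthetic scene, window size and step size below are invented for illustration.

```python
import numpy as np

def view_difference(current, goal):
    """Sum of squared pixel differences between two grey-level views."""
    return float(np.sum((current - goal) ** 2))

def render_view(position, scene, size=32):
    """Hypothetical stand-in for the image the observer receives at `position`."""
    x, y = position
    return scene[y:y + size, x:x + size]

def homing_step(position, goal_view, scene, step=2):
    """Move in whichever direction most reduces the mismatch with the stored goal view."""
    x, y = position
    candidates = [(x, y), (x + step, y), (x - step, y), (x, y + step), (x, y - step)]
    candidates = [(max(0, min(96, cx)), max(0, min(96, cy))) for cx, cy in candidates]
    return min(candidates,
               key=lambda p: view_difference(render_view(p, scene), goal_view))

# Toy usage on a smooth synthetic scene; this greedy scheme assumes the image
# difference varies smoothly with position (true here, not in general).
xs = np.linspace(0.0, 4.0, 128)
scene = np.sin(np.outer(xs, xs))                 # smooth 128x128 "environment"
goal_view = render_view((60, 60), scene)         # snapshot taken at the goal
pos = (20, 20)
for _ in range(100):
    pos = homing_step(pos, goal_view, scene)     # descend the image difference
```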
Abstract:
The nature of the spatial representations that underlie simple visually guided actions early in life was investigated in toddlers with Williams syndrome (WS), Down syndrome (DS), and healthy chronological age- and mental age-matched controls, through the use of a "double-step" saccade paradigm. The experiment tested the hypothesis that, compared to typically developing infants and toddlers, and toddlers with DS, those with WS display a deficit in using spatial representations to guide actions. Levels of sustained attention were also measured within these groups, to establish whether differences in levels of engagement influenced performance on the double-step saccade task. The results showed that toddlers with WS were unable to combine extra-retinal information with retinal information to the same extent as the other groups, and displayed evidence of other deficits in saccade planning, suggesting a greater reliance on sub-cortical mechanisms than the other populations. Results also indicated that their exploration of the visual environment is less developed. The sustained attention task revealed shorter and fewer periods of sustained attention in toddlers with DS, but not those with WS, suggesting that WS performance on the double-step saccade task is not explained by poorer engagement. The findings are also discussed in relation to a possible attention disengagement deficit in WS toddlers. Our study highlights the importance of studying genetic disorders early in development.
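The spatial-updating requirement that the double-step saccade paradigm probes can be made concrete with a small worked computation (a generic sketch of the paradigm's logic, not the study's analysis); the coordinates below are invented.

```python
# Double-step logic: two targets T1 and T2 are flashed before the eyes move, so the
# second saccade cannot be programmed from retinal information alone; the stale
# retinal vector of T2 must be corrected by an extra-retinal copy of the first
# saccade. Hypothetical coordinates, in degrees of visual angle.

t1_retinal = (10.0, 0.0)    # retinal vector of target 1 at flash time
t2_retinal = (10.0, 8.0)    # retinal vector of target 2 at flash time

# First saccade: simply foveate T1.
saccade_1 = t1_retinal

# Second saccade: update T2's retinal vector by subtracting the extra-retinal
# estimate of how far the eyes have already moved.
saccade_2 = (t2_retinal[0] - saccade_1[0], t2_retinal[1] - saccade_1[1])

print(saccade_2)            # (0.0, 8.0): correct only if extra-retinal information is used
# An observer relying on retinal information alone would instead aim at
# (10.0, 8.0) after the first saccade and miss the second target.
```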
Abstract:
Husserl left many unpublished drafts explaining (or attempting to explain) his views on spatial representation and geometry, particularly those collected in the second part of Studien zur Arithmetik und Geometrie (Hua XXI), but no fully articulated work on the subject. In this paper, I put forward an interpretation of what those views might have been. Husserl, I claim, distinguished among different conceptions of space: the space of perception (constituted from sensorial data by intentionally motivated psychic functions), that of physical geometry (or idealized perceptual space), the space of the mathematical science of physical nature (in which science, not only raw perception, has a say) and the abstract spaces of mathematics (free creations of the mathematical mind), each with its own geometrical structure. Perceptual space is proto-Euclidean and the space of physical geometry Euclidean, but mathematical physics, Husserl allowed, may find it convenient to represent physical space with a non-Euclidean structure. Mathematical spaces, in their turn, can be endowed, he thinks, with any geometry mathematicians may find interesting. Many other related questions are addressed here, in particular those concerning the a priori or a posteriori character of the many geometric features of perceptual space (bearing in mind that there are at least two different notions of the a priori in Husserl, which we may call the conceptual and the transcendental a priori). I conclude with an overview of Weyl's ideas on the matter, since his philosophical conceptions are often traceable back to his former master, Husserl.
Abstract:
Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, with short intervals represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by the manipulation of spatial attention. The present thesis enters the current debate on the relationship between the spatial representation of time and spatial attention by using a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part is devoted to the mechanisms underlying this relationship. We showed that shifting spatial attention with PA toward one side of space distorts the representation of temporal intervals in accordance with the side of the attentional shift. This occurs with both visual and auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor PA procedure. This result suggests that the spatial code used to represent time is a central mechanism that is influenced at high levels of spatial cognition. The thesis continues with an investigation of the cortical areas mediating the space-time interaction, using neuropsychological, neurophysiological and neuroimaging methods. In particular, we showed that areas in the right hemisphere are crucial for time processing, whereas areas in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis addresses disorders of the spatial representation of time. The results indicate that a spatial-attention deficit following right-hemisphere damage produces a deficit in the spatial representation of time, which negatively affects patients' daily lives. Particularly interesting are the results obtained with PA: a PA treatment that is effective in reducing the spatial-attention deficit also reduces the deficit in the spatial representation of time, improving patients' quality of life.
Abstract:
Avian influenza, or 'bird flu', arrived in Norfolk in April 2006 in the form of the low pathogenic strain H7N3. In February 2007 a highly pathogenic strain, H5N1, which can pose a risk to humans, was discovered in Suffolk. We examine how a local newspaper reported the outbreaks, focusing on the linguistic framing of biosecurity. Consistent with the growing concern with securitisation among policymakers, issues were discussed in terms of space (indoor–outdoor; local–global; national–international) and flows (movement, barriers and vectors) between spaces (farms, sheds and countries). The apportioning of blame along the lines of 'them and us' – Hungary and England – was tempered by the reporting on the Hungarian operations of the British poultry company. Explanations focused on indoor and outdoor farming and alleged breaches of biosecurity by the companies involved. As predicted by the idea of securitisation, risks were formulated as coming from outside the supposedly secure enclaves of poultry production.
Abstract:
RatSLAM is a biologically-inspired visual SLAM and navigation system that has been shown to be effective indoors and outdoors on real robots. The spatial representation at the core of RatSLAM, the experience map, forms in a distributed fashion as the robot learns the environment. The activity in RatSLAM’s experience map possesses some geometric properties, but still does not represent the world in a human readable form. A new system, dubbed RatChat, has been introduced to enable meaningful communication with the robot. The intention is to use the “language games” paradigm to build spatial concepts that can be used as the basis for communication. This paper describes the first step in the language game experiments, showing the potential for meaningful categorization of the spatial representations in RatSLAM.
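The "language games" idea mentioned above can be illustrated with a toy naming game over spatial experiences (an editor's sketch, not the RatChat implementation); the poses, acceptance radius and word-invention scheme are all hypothetical.

```python
import math
import random

# Toy naming game over spatial experiences: two agents repeatedly name the
# current place and align their vocabularies. Each experience is a hypothetical
# (x, y) pose standing in for a node of an experience-map-like structure.

random.seed(1)
experiences = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)] + \
              [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(30)]

class Agent:
    def __init__(self):
        self.lexicon = {}                       # word -> prototype pose (x, y)

    def name_for(self, pose):
        """Return the closest known word, or invent one for a new region."""
        if self.lexicon:
            word, proto = min(self.lexicon.items(),
                              key=lambda kv: math.dist(kv[1], pose))
            if math.dist(proto, pose) < 3.0:    # arbitrary acceptance radius
                return word
        word = f"w{random.randrange(10**6)}"    # invented, agent-specific word
        self.lexicon[word] = pose
        return word

speaker, hearer = Agent(), Agent()
for pose in experiences:                        # repeated naming games
    word = speaker.name_for(pose)               # speaker names the current place
    guess = hearer.name_for(pose)               # hearer names it independently
    if guess != word:                           # failed game: hearer aligns with speaker
        hearer.lexicon[word] = pose

# After enough games both agents use shared words for the two spatial regions.
print(sorted(speaker.lexicon), sorted(hearer.lexicon))
```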
Abstract:
The process of learning symbolic Arabic digits in early childhood requires that magnitude and spatial information integrate with the concept of symbolic digits. Previous research has separately investigated the development of automatic access to magnitude and spatial information from symbolic digits. However, developmental trajectories of symbolic number knowledge cannot be fully understood when these components are considered in isolation. In view of this, we have synthesized the existing lines of research and tested the use of both magnitude and spatial information with the same sample of British children in Years 1, 2 and 3 (6-8 years of age). The physical judgment task of the numerical Stroop paradigm (NSP) demonstrated that automatic access to magnitude was present from Year 1, and the distance effect signaled that a refined processing of numerical information had developed. Additionally, a parity judgment task showed that the onset of the Spatial-Numerical Association of Response Codes (SNARC) effect occurs in Year 2. These findings uncover the developmental timeline of how magnitude and spatial representations integrate with symbolic number knowledge during early learning of Arabic digits and resolve inconsistencies between previous developmental and experimental research lines.
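The SNARC effect reported here is conventionally quantified by regressing, for each digit, the right-hand minus left-hand response-time difference on numerical magnitude; a negative slope indicates faster left-hand responses to small numbers and faster right-hand responses to large numbers. The sketch below illustrates that standard analysis with invented response times; it is not the study's own pipeline.

```python
import numpy as np

# Hypothetical mean parity-judgement RTs (ms) per digit and response hand.
digits   = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left  = np.array([520, 525, 530, 535, 550, 558, 565, 572], dtype=float)
rt_right = np.array([560, 556, 550, 546, 532, 528, 522, 518], dtype=float)

# SNARC analysis: regress the right-minus-left RT difference on magnitude.
drt = rt_right - rt_left
slope, intercept = np.polyfit(digits, drt, 1)

print(f"dRT slope = {slope:.1f} ms per digit")
# A reliably negative slope (here about -12 ms/digit) is the usual signature of
# the SNARC effect: small numbers favour left responses, large numbers right.
```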
Abstract:
A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
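The opponent-processing step described above ("sums and differences of opponent signals define angular and vergence coordinates") reduces to simple arithmetic on the two eyes' rotation angles. The sketch below illustrates only that arithmetic, with invented angles and a simplified symmetric-fixation geometry; it is not the neural model itself.

```python
import math

def head_centered_from_eye_angles(left_azimuth_deg, right_azimuth_deg,
                                  elevation_deg, interocular_cm=6.5):
    """Sum and difference of the two eyes' signals give the angular and vergence
    coordinates of a foveated target (simplified sketch; azimuths are measured
    with rightward positive, so a near target on the midline turns the left eye
    rightward and the right eye leftward)."""
    version = (left_azimuth_deg + right_azimuth_deg) / 2.0    # sum -> angular coordinate
    vergence = left_azimuth_deg - right_azimuth_deg           # difference -> vergence
    # For a symmetrically fixated target, vergence also fixes radial distance.
    distance_cm = (interocular_cm / 2.0) / math.tan(math.radians(vergence) / 2.0)
    return version, elevation_deg, vergence, distance_cm

# Invented example: each eye rotated 2 degrees inward, target 5 degrees up.
print(head_centered_from_eye_angles(2.0, -2.0, 5.0))
# -> version 0 deg, elevation 5 deg, vergence 4 deg, distance roughly 93 cm
```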
Abstract:
Software visualization can be of great use for understanding and exploring a software system in an intuitive manner. Spatial representation of software is a promising approach that is attracting increasing interest. However, little is known about how developers interact with spatial visualizations that are embedded in the IDE. In this paper, we present a pilot study that explores the use of Software Cartography for program comprehension of an unknown system. We investigated whether developers establish a spatial memory of the system, whether clustering by topic offers a sound base layout, and how developers interact with maps. We report our results in the form of observations, hypotheses, and implications. Key findings are a) that developers made good use of the map to inspect search results and call graphs, and b) that developers found the base layout surprising and often confusing. We conclude with concrete advice for the design of embedded software maps.
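The "clustering by topic" base layout evaluated in this study can be sketched generically (this is not the Software Cartography implementation): vectorize each source file by the identifiers it uses, cluster the vectors, and embed them in 2-D so related files land close together on the map. The file contents below are invented and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Invented stand-ins for source files: in practice these would be identifier
# streams extracted from the code base under study.
files = {
    "ParserCore.java":   "token stream parse grammar rule syntax node",
    "ParserUtil.java":   "token lookahead parse error recovery syntax",
    "RenderCanvas.java": "paint widget canvas pixel colour layout draw",
    "RenderTheme.java":  "colour font style widget theme layout",
}

# 1. Vectorize each file by the vocabulary it uses.
vectors = TfidfVectorizer().fit_transform(files.values())

# 2. Cluster by topic: files sharing vocabulary land in the same cluster.
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# 3. Embed in 2-D to obtain map coordinates for the base layout.
coords = PCA(n_components=2).fit_transform(vectors.toarray())

for (name, _), topic, (x, y) in zip(files.items(), topics, coords):
    print(f"{name:18s} topic={topic} position=({x:+.2f}, {y:+.2f})")
```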
Abstract:
Spatial scaling is an integral aspect of many spatial tasks that involve symbol-to-referent correspondences (e.g., map reading, drawing). In this study, we asked 3–6-year-olds and adults to locate objects in a two-dimensional spatial layout using information from a second spatial representation (map). We examined how scaling factor and reference features, such as the shape of the layout or the presence of landmarks, affect performance. Results showed that spatial scaling on this simple task undergoes considerable development, especially between 3 and 5 years of age. Furthermore, the youngest children showed large individual variability and profited from landmark information. Accuracy differed between scaled and un-scaled items, but not between items using different scaling factors (1:2 vs. 1:4), suggesting that participants encoded relative rather than absolute distances.
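The scaling manipulation can be made concrete with a toy computation (a sketch of the general symbol-to-referent logic, not the study's materials): a location given on the map is multiplied by the scaling factor to obtain the corresponding location in the layout, and coding relative rather than absolute distance means the target's proportional position is unchanged by the factor. All sizes below are invented.

```python
# Map-to-layout scaling: a target's map coordinate times the scaling factor gives
# its referent coordinate in the layout (toy numbers, not the study's materials).

def map_to_layout(map_pos_cm, scale):
    return tuple(scale * c for c in map_pos_cm)

map_size = (20.0, 10.0)          # hypothetical map dimensions, in cm
target_on_map = (5.0, 4.0)

for scale in (1, 2, 4):          # un-scaled, 1:2 and 1:4 items
    layout_size = map_to_layout(map_size, scale)
    target = map_to_layout(target_on_map, scale)
    relative = tuple(t / s for t, s in zip(target, layout_size))
    print(f"1:{scale}  target at {target} in a {layout_size} layout, "
          f"relative position {relative}")

# The relative position (0.25, 0.4) is identical for every scaling factor, which
# is why coding relative distance predicts no accuracy cost of 1:2 versus 1:4.
```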