116 results for gaze perception
Abstract:
Purpose - To present an account of cognition integrating second-order cybernetics (SOC) with enactive perception and dynamic systems theory. Design/methodology/approach - The paper presents a brief critique of classical models of cognition, then outlines how the integration of SOC, enactive perception and dynamic systems theory can overcome some weaknesses of the classical paradigm. Findings - Presents a critique of evolutionary robotics, showing how the issues of teleology and autonomy are left unresolved by this paradigm, although their solution fits within the proposed framework. Research limitations/implications - The paper highlights the importance of genuine autonomy in the development of artificial cognitive systems. It sets out a framework within which robotic research on cognitive systems could succeed. Practical implications - There are no immediate practical implications, but see research implications. Originality/value - It joins the discussion on the fundamental nature of cognitive systems and emphasises the importance of autonomy and embodiment.
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, constantly darting to and fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses before the viewer becomes aware of them. In this paper we discuss a method of tracking a user's eye movements and using these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can serve as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
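As an illustrative sketch of the object-identification step described above (not the authors' implementation; the names and the bounding-sphere scene representation are assumptions), a tracked gaze ray can be intersected with the objects in the virtual scene:

```python
import numpy as np

def identify_gazed_object(eye_pos, gaze_dir, objects):
    """Return the nearest object whose bounding sphere the gaze ray hits.

    eye_pos  : (3,) world-space eye position
    gaze_dir : (3,) gaze direction from the eye tracker
    objects  : iterable of (name, centre, radius) bounding spheres
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_t = None, np.inf
    for name, centre, radius in objects:
        oc = np.asarray(centre) - eye_pos
        t = np.dot(oc, gaze_dir)          # distance along the ray to closest approach
        if t < 0:
            continue                      # object lies behind the viewer
        miss_sq = np.dot(oc, oc) - t * t  # squared perpendicular miss distance
        if miss_sq <= radius ** 2 and t < best_t:
            best, best_t = name, t        # keep the nearest intersected object
    return best
```

In practice the raw samples would be smoothed into fixations before the intersection test, since the to-and-fro saccades mentioned above would otherwise produce spurious hits.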
Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed on a large screen the eye-gaze behavior of an avatar. The eye-gaze behavior of that avatar had previously been recorded from a user with the use of a head-mounted eye tracker. The first experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head gaze being viewed and with eye- and head-gaze data being displayed. The results from the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required. This was examined by testing subjects' ability to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with the subjects being asked to identify where the avatar was looking. This experiment showed that there was no difference between the two viewing conditions in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
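The convergence cue tested in the second experiment can be operationalised as the near-intersection of the two eyes' gaze rays. A minimal sketch of that geometry, assuming world-space eye positions and gaze directions as inputs (the paper's own computation is not given in the abstract):

```python
import numpy as np

def vergence_point(left_pos, left_dir, right_pos, right_dir):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between the two gaze rays, which rarely intersect exactly."""
    u = left_dir / np.linalg.norm(left_dir)
    v = right_dir / np.linalg.norm(right_dir)
    w0 = left_pos - right_pos
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None                      # parallel rays: no usable convergence
    s = (b * e - c * d) / denom          # parameter along the left-eye ray
    t = (a * e - b * d) / denom          # parameter along the right-eye ray
    return 0.5 * (left_pos + s * u + right_pos + t * v)
```

The distance from the eyes to this point supplies the depth component that a single eye direction cannot, which is presumably why adding convergence improved subjects' judgements.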
Abstract:
For efficient collaboration between participants, eye gaze is seen as being critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. Access Grid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE. An experiment was conducted to assess the difference between users' abilities to judge what objects an avatar is looking at with only head movements being displayed, while the eyes remained static, and with eye-gaze and head-movement information being displayed. The results from the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work that is now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and its ongoing experiments. Copyright (C) 2009 John Wiley & Sons, Ltd.
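One common way to obtain the "calibrated eye gaze" mentioned above is a least-squares polynomial fit from raw tracker samples to known fixation targets. The sketch below assumes a quadratic mapping fitted while the user fixates a grid of calibration points in the virtual world; it illustrates the general technique rather than the authors' specific software:

```python
import numpy as np

def _design_matrix(raw_xy):
    # quadratic polynomial terms: 1, x, y, xy, x^2, y^2
    x, y = raw_xy[:, 0], raw_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_calibration(raw_xy, target_xy):
    """Least-squares fit from raw pupil coordinates (N, 2) to known
    calibration-target coordinates (N, 2); needs >= 6 fixation points."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(raw_xy), target_xy, rcond=None)
    return coeffs

def apply_calibration(coeffs, raw_xy):
    """Map live tracker samples into calibrated gaze coordinates."""
    return _design_matrix(raw_xy) @ coeffs
```

The calibrated coordinates can then drive the avatar's eye rotations so that the reproduced line of gaze matches the user's real one.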
Abstract:
Several theories of the mechanisms linking perception and action require that the links are bidirectional, but there is a lack of consensus on the effects that action has on perception. We investigated this by measuring visual event-related brain potentials to observed hand actions while participants prepared responses that were spatially compatible (e.g., both were on the left side of the body) or incompatible, and action-type compatible (e.g., both were finger taps) or incompatible, with the observed actions. An early enhanced processing of spatially compatible stimuli was observed, which is likely due to spatial attention. This was followed by an attenuation of processing for both spatially and action-type compatible stimuli, likely to be driven by efference copy signals that attenuate processing of predicted sensory consequences of actions. Attenuation was not response-modality specific; it was found for manual stimuli when participants prepared manual and vocal responses, in line with the hypothesis that action control is hierarchically organized. These results indicate that spatial attention and forward-model prediction mechanisms have opposite, but temporally distinct, effects on perception. This hypothesis can explain the inconsistency of recent findings on action-perception links and thereby supports the view that sensorimotor links are bidirectional. Such effects of action on perception are likely to be crucial, not only for the control of our own actions but also in sociocultural interaction, allowing us to predict the reactions of others to our own actions.
Abstract:
A year-long field study of the thermal environment in university classrooms was conducted from March 2005 to May 2006 in Chongqing, China. This paper presents the occupants’ thermal sensation votes and discusses the occupants’ adaptive response and perception of the thermal environment in a naturally conditioned space. Comparisons between the Actual Mean Vote (AMV) and Predicted Mean Vote (PMV) have been made as well as between the Actual Percentage of Dissatisfied (APD) and Predicted Percentage of Dissatisfied (PPD). The adaptive thermal comfort zone for the naturally conditioned space for Chongqing, which has hot summer and cold winter climatic characteristics, has been proposed based on the field study results. The Chongqing adaptive comfort range is broader than that of the ASHRAE Standard 55-2004 in general, but in the extreme cold and hot months, it is narrower. The thermal conditions in classrooms in Chongqing in summer and winter are severe. Behavioural adaptation such as changing clothing, adjusting indoor air velocity, taking hot/cold drinks, etc., as well as psychological adaptation, has played a role in adapting to the thermal environment.
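For reference, the PPD used in the comparison above is a fixed function of the PMV (Fanger's relation, standardised in ISO 7730), so the APD/PPD comparison amounts to checking observed dissatisfaction against this curve:

```python
import math

def ppd_from_pmv(pmv):
    """Predicted Percentage of Dissatisfied as a function of the
    Predicted Mean Vote (Fanger / ISO 7730)."""
    return 100.0 - 95.0 * math.exp(-(0.03353 * pmv ** 4 + 0.2179 * pmv ** 2))

print(ppd_from_pmv(0.0))  # 5.0   -- even a neutral vote leaves ~5% dissatisfied
print(ppd_from_pmv(1.0))  # ~26.1 -- 'slightly warm' on the 7-point scale
```

Adaptive comfort studies such as this one typically find the measured APD in naturally conditioned spaces falling below this predicted curve, which is what widens the proposed adaptive comfort zone relative to PMV-based standards.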
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
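The signal detection analysis referred to here separates perceptual sensitivity (d') from the decision criterion (c). A minimal version of the standard computation, with a log-linear correction for hit or false-alarm rates of 0 or 1 (not necessarily the authors' exact pipeline):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from yes/no detection counts."""
    # log-linear correction keeps the z-scores finite at extreme rates
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f              # perceptual sensitivity
    criterion = -0.5 * (z_h + z_f)   # response bias / decision criterion
    return d_prime, criterion
```

Because d' and c are estimated independently, a congruency manipulation can impair sensitivity while shifting the criterion in a different way, which is exactly the dissociation the abstract reports.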
Abstract:
A speech message played several metres from the listener in a room is usually heard to have much the same phonetic content as it does when played nearby, even though the different amounts of reflected sound make the temporal envelopes of these signals very different. To study this ‘constancy’ effect, listeners heard speech messages and speech-like sounds comprising 8 auditory-filter-shaped noise bands whose temporal envelopes corresponded to those at the outputs of these filters when the speech message is played. The ‘contexts’ were “next you’ll get _to click on”, into which a “sir” or “stir” test word was inserted. These test words were drawn from an 11-step continuum formed by amplitude modulation. Listeners identified the test words appropriately, even in the 8-band conditions where the speech had a ‘robotic’ quality. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of room reflections (i.e. from the same, far distance) or a much lower level (i.e. from nearby). Constancy effects were obtained with both the natural and the 8-band speech. Results are considered in terms of the degree of ‘matching’ between the context’s and the test word’s bands.
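One plausible construction of such an 11-step amplitude-modulation continuum (the paper's actual modulation parameters are not given in the abstract): progressively deepening an amplitude dip after the /s/ frication, since the resulting brief closure cues the /t/ of "stir":

```python
import numpy as np

def sir_stir_continuum(sir, fs, dip_start, dip_dur, steps=11):
    """Sketch: derive an 11-step 'sir'-'stir' continuum from a 'sir'
    recording (float array, sample rate fs) by amplitude modulation.
    dip_start/dip_dur are in seconds; all parameters are assumptions."""
    i0, n = int(dip_start * fs), int(dip_dur * fs)
    # raised-cosine window rising 0 -> 1 -> 0 across the dip
    window = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / n))
    continuum = []
    for depth in np.linspace(0.0, 1.0, steps):  # step 1 = 'sir', step 11 ~ 'stir'
        y = sir.copy()
        y[i0:i0 + n] *= 1.0 - depth * window    # deeper dip -> stronger /t/ percept
        continuum.append(y)
    return continuum
```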
Abstract:
Background: The cognitive bases of language impairment in specific language impairment (SLI) and autism spectrum disorders (ASD) were investigated in a novel non-word comparison task which manipulated phonological short-term memory (PSTM) and speech perception, both implicated in poor non-word repetition. Aims: This study aimed to investigate the contributions of PSTM and speech perception in non-word processing and whether individuals with SLI and ASD plus language impairment (ALI) show similar or different patterns of deficit in these cognitive processes. Method & Procedures: Three groups of adolescents (aged 14–17 years), 14 with SLI, 16 with ALI, and 17 age- and non-verbal-IQ-matched typically developing (TD) controls, made speeded discriminations between non-word pairs. Stimuli varied in PSTM load (two or four syllables) and speech perception load (mismatches on a word-initial or word-medial segment). Outcomes & Results: Reaction times showed effects of both non-word length and mismatch position, and these factors interacted: four-syllable and word-initial mismatch stimuli resulted in the slowest decisions. Individuals with language impairment showed the same pattern of performance as those with typical development in the reaction time data. A marginal interaction between group and item length was driven by the SLI and ALI groups being less accurate with long items than with short ones, a difference not found in the TD group. Conclusions & Implications: Non-word discrimination suggests that there are similarities and differences between adolescents with SLI and ALI and their TD peers. Reaction times appear to be affected by increasing PSTM and speech perception loads in a similar way. However, there was some, albeit weaker, evidence that adolescents with SLI and ALI are less accurate than TD individuals, with both groups showing an effect of PSTM load. This may indicate that, at some level, the processing substrate supporting both PSTM and speech perception is intact in adolescents with SLI and ALI, but that in both groups access to PSTM resources may be impaired.