23 results for visual objects
in CentAUR: Central Archive University of Reading - UK
Abstract:
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject's right index finger-tip (visual and proprioceptive encoding, V-P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V-P passive) and actively (V-P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V-P active and V-P passive conditions in the speed of apparent motion. The persistence of the visual illusion during the active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.
Abstract:
Many older adults wish to gain competence in using a computer, but many application interfaces are perceived as complex and difficult to use, deterring potential users from investing the time to learn them. Hence, this study looks at the potential of ‘familiar’ interface design, which builds upon users’ knowledge of real-world interactions and applies existing skills to a new domain. Tools are provided in the form of familiar visual objects and are manipulated like their real-world counterparts, rather than with the buttons, icons and menus found in classic WIMP interfaces. This paper describes the formative evaluation of computer interactions based upon familiar real-world tasks: the interface supports multitouch interaction, involves few buttons and icons, and has no menus, no right-clicks or double-clicks and no dialogs. Using an example of an email client to test the principles of “familiarity”, the initial feedback was very encouraging, with 3 of the 4 participants being able to undertake some of the basic email tasks with no prior training and little or no help. The feedback has informed a number of refinements of the design principles, such as providing clearer affordance for visual objects. A full study is currently underway.
Abstract:
A wealth of literature suggests that emotional faces are given special status as visual objects: Cognitive models suggest that emotional stimuli, particularly threat-relevant facial expressions such as fear and anger, are prioritized in visual processing and may be identified by a subcortical “quick and dirty” pathway in the absence of awareness (Tamietto & de Gelder, 2010). Both neuroimaging studies (Williams, Morris, McGlone, Abbott, & Mattingley, 2004) and backward masking studies (Whalen, Rauch, Etcoff, McInerney, & Lee, 1998) have supported the notion of emotion processing without awareness. Recently, our own group (Adams, Gray, Garner, & Graf, 2010) showed adaptation to emotional faces that were rendered invisible using a variant of binocular rivalry: continuous flash suppression (CFS; Tsuchiya & Koch, 2005). Here we (i) respond to Yang, Hong, and Blake's (2010) criticisms of our adaptation paper and (ii) provide a unified account of adaptation to facial expression, identity, and gender, under conditions of unawareness.
Abstract:
We use a detailed study of the knowledge work around visual representations to draw attention to the multidimensional nature of `objects'. Objects are variously described in the literatures as relatively stable or in flux; as abstract or concrete; and as used within or across practices. We clarify these dimensions, drawing on and extending the literature on boundary objects, and connecting it with work on epistemic and technical objects. In particular, we highlight the epistemic role of objects, using our observations of knowledge work on an architectural design project to show how, in this setting, visual representations are characterized by a `lack' or incompleteness that precipitates unfolding. The conceptual design of a building involves a wide range of technical, social and aesthetic forms of knowledge that need to be developed and aligned. We explore how visual representations are used, and how these are meaningful to different stakeholders, eliciting their distinct contributions. As the project evolves and the drawings change, new issues and needs for knowledge work arise. These objects have an `unfolding ontology' and are constantly in flux, rather than fully formed. We discuss the implications for wider understandings of objects in organizations and for how knowledge work is achieved in practice.
Abstract:
Recent interest in material objects - the things of everyday interaction - has led to articulations of their role in the literature on organizational knowledge and learning. What is missing is a sense of how the use of these 'things' is patterned across both industrial settings and time. This research addresses this gap with a particular emphasis on visual materials. Practices are analysed in two contrasting design settings: a capital goods manufacturer and an architectural firm. Materials are observed to be treated both as frozen, and hence unavailable for change; and as fluid, open and dynamic. In each setting, temporal patterns of unfreezing and refreezing are associated with the different types of materials used. The research suggests that these differing patterns or rhythms of visual practice are important in the evolution of knowledge and in structuring social relations for delivery. Hence, to improve their performance, practitioners should not only consider the types of media they use, but also reflect on the pace and style of their interactions.
Abstract:
Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the appearance of the distractor ball, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was on the left and the right of the midline. Strikingly, this suppression occurred very early, at 70–80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
Abstract:
The authors assessed rats' encoding of the appearance or egocentric position of objects within visual scenes containing 3 objects (Experiment 1) or 1 object (Experiment 2A). Experiment 2B assessed encoding of the shape and fill pattern of single objects, and encoding of configurations (object + position, shape + fill). All were assessed by testing rats' ability to discriminate changes from familiar scenes (constant-negative paradigm). Perirhinal cortex lesions impaired encoding of objects and their shape; postrhinal cortex lesions impaired encoding of egocentric position, but the effect may have been partly due to entorhinal involvement. Neither lesioned group was impaired in detecting configural change. In Experiment 1, both lesion groups were impaired in detecting small changes in relative position of the 3 objects, suggesting that more sensitive tests might reveal configural encoding deficits.
Abstract:
Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus 'look!'. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.
Abstract:
Previous functional imaging studies have shown that facilitated processing of a visual object on repeated, relative to initial, presentation (i.e., repetition priming) is associated with reductions in neural activity in multiple regions, including fusiform/lateral occipital cortex. Moreover, activity reductions have been found, at diminished levels, when a different exemplar of an object is presented on repetition. In one previous study, the magnitude of diminished priming across exemplars was greater in the right relative to the left fusiform, suggesting greater exemplar specificity in the right. Another previous study, however, observed fusiform lateralization modulated by object viewpoint, but not object exemplar. The present fMRI study sought to determine whether the result of differential fusiform responses for perceptually different exemplars could be replicated. Furthermore, the role of the left fusiform cortex in object recognition was investigated via the inclusion of a lexical/semantic manipulation. Right fusiform cortex showed a significantly greater effect of exemplar change than left fusiform, replicating the previous result of exemplar-specific fusiform lateralization. Right fusiform and lateral occipital cortex were not differentially engaged by the lexical/semantic manipulation, suggesting that their role in visual object recognition is predominantly in the visual discrimination of specific objects. Activation in left fusiform cortex, but not left lateral occipital cortex, was modulated by both exemplar change and lexical/semantic manipulation, with further analysis suggesting a posterior-to-anterior progression between regions involved in processing visuoperceptual and lexical/semantic information about objects. The results are consistent with the view that the right fusiform plays a greater role in processing specific visual form information about objects, whereas the left fusiform is also involved in lexical/semantic processing.
Abstract:
The visual perception of size in different regions of external space was studied in Parkinson's disease (PD). A group of patients with worse left-sided symptoms (LPD) was compared with a group with worse right-sided symptoms (RPD) and with a group of age-matched controls on judgements of the relative height or width of two rectangles presented in different regions of external space. The relevant dimension of one rectangle (the 'standard') was held constant, while that of the other (the 'variable') was varied in a method of constant stimuli. The point of subjective equality (PSE) of rectangle width or height was obtained by probit analysis as the mean of the resulting psychometric function. When the standard was in left space, the PSE of the LPD group occurred when the variable was smaller, and when the standard was in right space, when the variable was larger. Similarly, when the standard rectangle was presented in upper space, and the variable in lower space, the PSE occurred when the variable was smaller, an effect which was similar in both left and right spaces. In all these experiments, the PSEs for both the controls and the RPD group did not differ significantly, and were close to a physical match, and the slopes of the psychometric functions were steeper in the controls than the patients, though not significantly so. The data suggest that objects appear smaller in the left and upper visual spaces in LPD, probably because of right hemisphere impairment.
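The PSE procedure described above can be illustrated with a short sketch: probit analysis fits a cumulative Gaussian to the proportion of "variable judged larger" responses across the constant-stimuli levels, and the PSE is the 50% point (the fitted mean). The widths and response proportions below are hypothetical, not data from the study.

```python
# Illustrative sketch (not the study's code): estimating the point of
# subjective equality (PSE) from method-of-constant-stimuli data by fitting
# a cumulative Gaussian, as probit analysis does.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: widths of the 'variable' rectangle (mm) and the
# proportion of trials on which it was judged larger than the 'standard'.
widths = np.array([44.0, 46.0, 48.0, 50.0, 52.0, 54.0, 56.0])
p_larger = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.95, 0.98])

def psychometric(x, pse, sd):
    """Cumulative Gaussian: mean = PSE; sd is inversely related to slope."""
    return norm.cdf(x, loc=pse, scale=sd)

(pse, sd), _ = curve_fit(psychometric, widths, p_larger, p0=[50.0, 2.0])
print(f"PSE = {pse:.1f} mm, sd = {sd:.1f} mm")
```

A steeper psychometric function (smaller fitted sd) indicates more precise discrimination, which is the slope comparison reported between controls and patients.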
Abstract:
Perirhinal cortex in monkeys has been thought to be involved in visual associative learning. The authors examined rats' ability to make associations between visual stimuli in a visual secondary reinforcement task. Rats learned 2-choice visual discriminations for secondary visual reinforcement. They showed significant learning of discriminations before any primary reinforcement. Following bilateral perirhinal cortex lesions, rats continued to learn visual discriminations for visual secondary reinforcement at the same rate as before surgery. Thus, this study does not support a critical role of perirhinal cortex in learning for visual secondary reinforcement. Contrasting this result with other positive results, the authors suggest that the role of perirhinal cortex is in "within-object" associations and that it plays a much lesser role in stimulus-stimulus associations between objects.
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.
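The idea behind module (i) can be sketched with a minimal background-subtraction detector. The paper's layered background model is more elaborate; this running-average version, with hypothetical parameter values, only illustrates the basic mechanism of comparing each frame against an adaptive background.

```python
# Minimal sketch of background-subtraction motion detection, in the spirit of
# module (i) above. This simple running-average model is an illustrative
# stand-in for the paper's layered background model.
import numpy as np

class RunningAverageBackground:
    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # background adaptation rate
        self.threshold = threshold  # foreground intensity threshold
        self.background = None

    def apply(self, frame):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Blend the new frame into the background only where nothing moved,
        # so moving objects are not absorbed into the background model.
        self.background = np.where(
            mask, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        return mask

# Usage: feed grayscale frames; changed pixels appear in the mask.
detector = RunningAverageBackground()
static = np.full((4, 4), 100.0)
moved = static.copy()
moved[1, 1] = 200.0
detector.apply(static)        # first frame initialises the background
mask = detector.apply(moved)  # second frame: one changed pixel
print(mask.sum())             # 1
```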
Abstract:
The impact of novel labels on visual processing was investigated across two experiments with infants aged between 9 and 21 months. Infants viewed pairs of images across a series of preferential looking trials. On each trial, one image was novel, and the other image had previously been viewed by the infant. Some infants viewed images in silence; other infants viewed images accompanied by novel labels. The pattern of fixations both across and within trials revealed that infants in the labelling condition took longer to develop a novelty preference than infants in the silent condition. Our findings contrast with prior research by Robinson and Sloutsky (e.g., Robinson & Sloutsky, 2007a; Sloutsky & Robinson, 2008) who found that novel labels did not disrupt visual processing for infants aged over a year. Provided that overall task demands are sufficiently high, it appears that labels can disrupt visual processing for infants during the developmental period of establishing a lexicon. The results suggest that when infants are processing labels and objects, attentional resources are shared across modalities.
Abstract:
It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A > B > D yet also A < C < D) and hence no single one-to-one mapping between participants’ perceived space and any real 3D environment. Instead, factors that affect pairwise comparisons of distances dictate participants’ performance. These data contradict, more directly than previous experiments, the idea that the visual system builds and uses a coherent 3D internal representation of a scene.
Abstract:
This paper presents a video surveillance framework that robustly and efficiently detects abandoned objects in surveillance scenes. The framework is based on a novel threat assessment algorithm which combines the concept of ownership with automatic understanding of social relations in order to infer abandonment of objects. Implementation is achieved through the development of a logic-based inference engine written in Prolog. Threat detection performance is evaluated by testing against a range of datasets describing realistic situations, demonstrating a reduction in the number of false alarms generated. The proposed system represents the approach employed in the EU SUBITO project (Surveillance of Unattended Baggage and the Identification and Tracking of the Owner).
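The ownership-based reasoning described above can be sketched as a simple rule: an object is flagged as abandoned only if it has been unattended past a timeout, its owner has left, and it was not handed over to someone socially related to the owner. The actual system encodes such rules in Prolog; the attribute names and timeout below are hypothetical.

```python
# Hypothetical sketch of rule-based abandonment inference in the spirit of the
# ownership/social-relations reasoning above (the actual engine uses Prolog).
from dataclasses import dataclass

@dataclass
class TrackedObject:
    owner_id: int                # person who placed the object
    seconds_unattended: float    # time since the owner last attended it
    owner_nearby: bool           # is the owner within a proximity radius?
    transferred: bool            # was it handed to a social relation?

def is_abandoned(obj: TrackedObject, timeout: float = 30.0) -> bool:
    """Flag only objects unattended past the timeout whose owner has left
    and which were not transferred; this suppresses false alarms for bags
    briefly set down or passed between companions."""
    return (obj.seconds_unattended > timeout
            and not obj.owner_nearby
            and not obj.transferred)

bag = TrackedObject(owner_id=7, seconds_unattended=45.0,
                    owner_nearby=False, transferred=False)
print(is_abandoned(bag))  # True
```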