47 results for Visual Information


Relevance:

30.00%

Publisher:

Abstract:

Previous functional imaging studies have shown that facilitated processing of a visual object on repeated, relative to initial, presentation (i.e., repetition priming) is associated with reductions in neural activity in multiple regions, including fusiform/lateral occipital cortex. Moreover, activity reductions have been found, at diminished levels, when a different exemplar of an object is presented on repetition. In one previous study, the magnitude of diminished priming across exemplars was greater in the right relative to the left fusiform, suggesting greater exemplar specificity in the right. Another previous study, however, observed fusiform lateralization modulated by object viewpoint, but not object exemplar. The present fMRI study sought to determine whether the result of differential fusiform responses for perceptually different exemplars could be replicated. Furthermore, the role of the left fusiform cortex in object recognition was investigated via the inclusion of a lexical/semantic manipulation. Right fusiform cortex showed a significantly greater effect of exemplar change than left fusiform, replicating the previous result of exemplar-specific fusiform lateralization. Right fusiform and lateral occipital cortex were not differentially engaged by the lexical/semantic manipulation, suggesting that their role in visual object recognition is predominantly in the visual discrimination of specific objects. Activation in left fusiform cortex, but not left lateral occipital cortex, was modulated by both exemplar change and lexical/semantic manipulation, with further analysis suggesting a posterior-to-anterior progression between regions involved in processing visuoperceptual and lexical/semantic information about objects. The results are consistent with the view that the right fusiform plays a greater role in processing specific visual form information about objects, whereas the left fusiform is also involved in lexical/semantic processing. (C) 2003 Elsevier Science (USA). All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in extrapersonal space (visual encoding only, V) or, at the same distance, attached to the subject's right index fingertip (visual and proprioceptive encoding, V-P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V-P passive) and actively (V-P active). Results showed that the visual stimulus was always perceived to move, irrespective of its location off or on the body. Moreover, its apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences in apparent motion speed were found between the V-P active and V-P passive conditions. The persistence of the visual illusion during active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.

Relevance:

30.00%

Publisher:

Abstract:

Visual control of locomotion is essential for most mammals and requires coordination between perceptual processes and action systems. Previous research on the neural systems engaged by self-motion has focused on heading perception, which is only one perceptual subcomponent. For effective steering, it is necessary to perceive an appropriate future path and then bring about the required change in heading. Using functional magnetic resonance imaging in humans, we reveal a role for the parietal eye fields (PEFs) in directing spatially selective processes relating to future path information. A parietal area close to the PEFs appears to be specialized for processing the future path information itself. Furthermore, a separate parietal area responds to visual position error signals, which occur when steering adjustments are imprecise. A network of three areas, the cerebellum, the supplementary eye fields, and dorsal premotor cortex, was found to be involved in generating appropriate motor responses for steering adjustments. This may reflect the demands of integrating visual inputs with the output response for the control device.

Relevance:

30.00%

Publisher:

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
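
As a rough illustration of the kind of ontology-driven retrieval described above (the abstract itself gives no code), the following sketch expands a query concept through a toy ontology and matches it against clip tags; the ontology, clip identifiers, and tags are hypothetical stand-ins for the paper's scalable ontology network.

```python
from collections import deque

# Hypothetical toy ontology: each concept maps to its narrower concepts.
ONTOLOGY = {
    "explosion": {"fireball", "shockwave"},
    "fireball": {"napalm_burst"},
    "weather": {"rain", "snow"},
}

# Hypothetical semantic index: clip id -> concepts assigned at indexing time.
CLIP_INDEX = {
    "clip_001.mov": {"napalm_burst"},
    "clip_002.mov": {"rain"},
    "clip_003.mov": {"shockwave", "rain"},
}

def expand(concept):
    """Collect a concept together with all narrower concepts reachable from it."""
    seen, queue = {concept}, deque([concept])
    while queue:
        for child in ONTOLOGY.get(queue.popleft(), ()):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def retrieve(query_concept):
    """Return clips whose semantic tags fall under the query concept."""
    targets = expand(query_concept)
    return [clip for clip, tags in CLIP_INDEX.items() if tags & targets]

print(retrieve("explosion"))  # ['clip_001.mov', 'clip_003.mov'] via narrower concepts
```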

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the development of an interface to a hospital portal system for information, communication and entertainment such that it can be used easily and effectively by all patients regardless of their age, disability, computer experience or native language. Specifically, this paper reports on the work conducted to ensure that the interface design took into account the needs of visually impaired users.

Relevance:

30.00%

Publisher:

Abstract:

Locomoting through the environment typically involves anticipating impending changes in heading trajectory in addition to maintaining the current direction of travel. We explored the neural systems involved in the “far road” and “near road” mechanisms proposed by Land and Horwood (1995) using simulated forward or backward travel where participants were required to gauge their current direction of travel (rather than directly control it). During forward egomotion, the distant road edges provided future path information, which participants used to improve their heading judgments. During backward egomotion, the road edges did not enhance performance because they no longer provided prospective information. This behavioral dissociation was reflected at the neural level, where only simulated forward travel increased activation in a region of the superior parietal lobe and the medial intraparietal sulcus. Providing only near road information during a forward heading judgment task resulted in activation in the motion complex. We propose a complementary role for the posterior parietal cortex and motion complex in detecting future path information and maintaining current lane positioning, respectively. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

Relevance:

30.00%

Publisher:

Abstract:

Embodied theories of cognition propose that neural substrates used in experiencing the referent of a word, for example perceiving upward motion, should be engaged in weaker form when that word, for example ‘rise’, is comprehended. Motivated by the finding that the perception of irrelevant background motion at near-threshold, but not supra-threshold, levels interferes with task execution, we assessed whether interference from near-threshold background motion was modulated by its congruence with the meaning of words (semantic content) when participants completed a lexical decision task (deciding if a string of letters is a real word or not). Reaction times for motion words, such as ‘rise’ or ‘fall’, were slower when the direction of visual motion and the ‘motion’ of the word were incongruent, but only when the visual motion was at near-threshold levels. When motion was supra-threshold, the distribution of error rates, not reaction times, implicated low-level motion processing in the semantic processing of motion words. As the perception of near-threshold signals is not likely to be influenced by strategies, our results support close contact between semantic information and perceptual systems.

Relevance:

30.00%

Publisher:

Abstract:

Dorsolateral prefrontal cortex (DLPFC) is recruited during visual working memory (WM) when relevant information must be maintained in the presence of distracting information. The mechanism by which DLPFC might ensure successful maintenance of the contents of WM is, however, unclear; it might enhance neural maintenance of memory targets or suppress processing of distracters. To adjudicate between these possibilities, we applied time-locked transcranial magnetic stimulation (TMS) during functional MRI, an approach that permits causal assessment of a stimulated brain region's influence on connected brain regions, and evaluated how this influence may change under different task conditions. Participants performed a visual WM task requiring retention of visual stimuli (faces or houses) across a delay during which visual distracters could be present or absent. When distracters were present, they were always from the opposite stimulus category, so that targets and distracters were represented in distinct posterior cortical areas. We then measured whether DLPFC-TMS, administered in the delay at the time point when distracters could appear, would modulate posterior regions representing memory targets or distracters. We found that DLPFC-TMS influenced posterior areas only when distracters were present and, critically, that this influence consisted of increased activity in regions representing the current memory targets. DLPFC-TMS did not affect regions representing current distracters. These results provide a new line of causal evidence for a top-down DLPFC-based control mechanism that promotes successful maintenance of relevant information in WM in the presence of distraction.

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from them. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other. Their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining relies on the processing power of silicon hardware, visualization techniques also aim to harness the powerful image-processing capabilities of the human brain. This article highlights research on data visualization and visual analytics techniques. Furthermore, we highlight existing visual analytics techniques, systems, and applications, including a perspective on the field from the chemical process industry.

Relevance:

30.00%

Publisher:

Abstract:

Decision strategies in multi-attribute Choice Experiments are investigated using eye-tracking. The visual attention towards, and attendance of, attributes is examined. Stated attendance is found to diverge substantively from visual attendance of attributes. However, stated and visual attendance are shown to be informative, non-overlapping sources of information about respondent utility functions when incorporated into model estimation. Eye-tracking also reveals systematic non-attendance of attributes by only a minority of respondents. Most respondents visually attend most attributes most of the time. We find no compelling evidence that the level of attention is related to respondent certainty, or that higher- or lower-value attributes receive more or less attention.

Relevance:

30.00%

Publisher:

Abstract:

Visual motion cues play an important role in animal and human locomotion without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the Time-To-Contact (TTC), the Focus of Expansion (FOE), and the image angular velocities, from a sparse optical flow estimate obtained from a downward-looking camera. The presented method is capable of estimating the visual motion parameters during complex 6-degree-of-freedom motion, in real time, and with accuracy suitable for mobile robot visual navigation.
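
The abstract does not spell out the estimator, but a standard way to recover the FOE and TTC from sparse flow, assuming the translational component dominates (rotation-induced flow is neglected here), is a linear least-squares fit of the radial flow model. The sketch below follows that textbook formulation rather than the paper's exact method; all names are illustrative.

```python
import numpy as np

def foe_ttc_from_flow(points, flow):
    """points: (N, 2) image coordinates; flow: (N, 2) flow vectors (px/frame).

    Under pure translation the flow field is radial about the FOE:
        flow_i ≈ s * (points_i - foe),  with s = 1 / TTC (in frames).
    Rewriting flow_i = s * points_i + b, where b = -s * foe, gives a linear
    system in the unknowns (s, b_x, b_y).
    """
    n = points.shape[0]
    A = np.zeros((2 * n, 3))
    y = flow.reshape(-1)          # interleaved [u_0, v_0, u_1, v_1, ...]
    A[0::2, 0] = points[:, 0]     # x_i multiplies s in the u-component
    A[0::2, 1] = 1.0              # b_x
    A[1::2, 0] = points[:, 1]     # y_i multiplies s in the v-component
    A[1::2, 2] = 1.0              # b_y
    s, bx, by = np.linalg.lstsq(A, y, rcond=None)[0]
    foe = -np.array([bx, by]) / s
    ttc = 1.0 / s                 # in frames; divide by frame rate for seconds
    return foe, ttc
```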

Relevance:

30.00%

Publisher:

Abstract:

This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while at the same time providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localization. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
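
As a simplified illustration of the reprojection step, the sketch below maps 3D ToF points, already expressed in the shared world coordinate system, into the colour image using a generic pinhole model without lens distortion; the symbols K_rgb, R, and t are assumptions, not the paper's notation.

```python
import numpy as np

def project_tof_points(points_world, K_rgb, R, t):
    """points_world: (N, 3) ToF points in world coordinates.
    K_rgb: (3, 3) colour-camera intrinsics; R, t: world -> colour-camera pose.
    Returns (N, 2) pixel coordinates in the colour image.
    """
    cam = points_world @ R.T + t     # transform into the colour-camera frame
    uvw = cam @ K_rgb.T              # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth to obtain pixels

# Each projected pixel can then be paired with the colour sample at that
# location, yielding the matrix linking colour and 3D information described above.
```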

Relevance:

30.00%

Publisher:

Abstract:

When the sensory consequences of an action are systematically altered, our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so that they provide accurate information about the world, as sensory cues carry no information about their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant from that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and the physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
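
For intuition only, the toy sketch below shows the kind of kinematic prediction such an internal model could supply: for an ideal elastic bounce with no spin or friction, the outgoing velocity is the incoming velocity reflected about the surface normal, so the predicted bounce direction is tied directly to surface slant. This is an illustrative simplification, not the study's actual simulation.

```python
import numpy as np

def bounce(velocity, slant_deg):
    """Reflect an incoming 2D velocity off a surface tilted slant_deg from
    horizontal (ideal elastic bounce, no spin, no friction)."""
    slant = np.radians(slant_deg)
    normal = np.array([-np.sin(slant), np.cos(slant)])   # unit surface normal
    v = np.asarray(velocity, dtype=float)
    return v - 2.0 * np.dot(v, normal) * normal          # mirror about the normal

# A ball falling straight down bounces further sideways as slant increases,
# which is the regularity a predictive kinematic model can exploit.
print(bounce([0.0, -1.0], 10.0))
print(bounce([0.0, -1.0], 30.0))
```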

Relevance:

30.00%

Publisher:

Abstract:

This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system, while at the same time providing their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.