995 results for "Visual input"


Relevance: 60.00%

Abstract:

Despite the close interrelation between vestibular and visual processing (e.g., the vestibulo-ocular reflex), surprisingly little is known about vestibular function in visually impaired people. In this study, we investigated thresholds of passive whole-body motion discrimination (leftward vs. rightward) in nine visually impaired participants and nine age-matched sighted controls. Participants were rotated in yaw, tilted in roll, and translated along the interaural axis at two different frequencies (0.33 and 2 Hz) by means of a motion platform. Visually impaired participants performed better in the 0.33 Hz roll tilt condition; no differences were observed in the other motion conditions. Roll tilts stimulate the semicircular canals and otoliths simultaneously. The results could thus reflect a specific improvement in canal–otolith integration in the visually impaired and are consistent with the compensatory hypothesis, which holds that the visually impaired are able to compensate for the absence of visual input.

Relevance: 60.00%

Abstract:

One of the fundamental questions in neuroscience is to understand how encoding of sensory inputs is distributed across neuronal networks in cerebral cortex to influence sensory processing and behavioral performance. The fact that the structure of neuronal networks is organized according to cortical layers raises the possibility that sensory information could be processed differently in distinct layers. The goal of my thesis research is to understand how laminar circuits encode information in their population activity, how the properties of the population code adapt to changes in visual input, and how population coding influences behavioral performance. To this end, we performed a series of novel experiments to investigate how sensory information in the primary visual cortex (V1) emerges across laminar cortical circuits. First, it is well known that the amount of information encoded by cortical circuits depends critically on whether nearby neurons exhibit correlations. We examined correlated variability in V1 circuits from a laminar-specific perspective and observed that cells in the input layer, which have only local projections, encode incoming stimuli optimally by exhibiting low correlated variability. In contrast, output layers, which send projections to other cortical and subcortical areas, encode information suboptimally by exhibiting large correlations. These results argue that neuronal populations in different cortical layers play different roles in network computations. Second, a fundamental feature of cortical neurons is their ability to adapt to changes in incoming stimuli. Understanding how adaptation emerges across cortical layers to influence information processing is vital for understanding efficient sensory coding. We examined the effects of adaptation, on the timescale of a single visual fixation, on network synchronization across laminar circuits.
Specific to the superficial layers, we observed an increase in gamma-band (30-80 Hz) synchronization after adaptation that was correlated with an improvement in neuronal orientation discrimination performance. Thus, synchronization enhances sensory coding to optimize network processing across laminar circuits. Finally, we tested the hypothesis that individual neurons and local populations synchronize their activity in real-time to communicate information about incoming stimuli, and that the degree of synchronization influences behavioral performance. These analyses assessed for the first time the relationship between changes in laminar cortical networks involved in stimulus processing and behavioral performance.
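The link the authors draw between correlated variability and encoding capacity follows a standard pooling argument: with a uniform pairwise noise covariance, averaging over more neurons cannot reduce the pooled variance below the shared component. A minimal sketch of that formula (the function name and numbers are illustrative, not taken from the study):

```python
def pooled_variance(n, var_ind=1.0, cov=0.2):
    """Variance of the average of n neurons with individual noise
    variance `var_ind` and uniform pairwise covariance `cov`.
    The covariance term sets a floor that pooling cannot remove."""
    return var_ind / n + cov * (n - 1) / n

print(pooled_variance(1))    # 1.0: a single neuron
print(pooled_variance(100))  # ~0.21: stuck near the 0.2 covariance floor
```

With independent noise (cov = 0) the pooled variance would fall as 1/n, which is why the low correlations reported for the input layer support near-optimal encoding.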

Relevance: 60.00%

Abstract:

The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
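The superquadric parameterization the authors fit is not written out in the abstract; the standard formulation (Barr's inside-outside function) can be sketched as follows, with function name and default parameters chosen here for illustration:

```python
def superquadric_f(x, y, z, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Barr's inside-outside function for a superquadric.

    Returns 1.0 for points on the surface, < 1 inside, > 1 outside.
    a = (a1, a2, a3) are the semi-axis lengths; e1 and e2 are the
    shape exponents (e1 = e2 = 1 gives an ellipsoid).
    """
    a1, a2, a3 = a
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# With unit semi-axes and e1 = e2 = 1 the shape is a unit sphere,
# so a point at distance 1 from the origin lies on the surface:
print(superquadric_f(1.0, 0.0, 0.0))        # 1.0 (on the surface)
print(superquadric_f(0.3, 0.0, 0.0) < 1.0)  # True (interior point)
```

Superquadric fits minimize this function's deviation from 1 over observed surface points, which is what makes them convenient for grasp planning; the two exponents interpolate between box-like, ellipsoidal, and pinched shapes.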

Relevance: 60.00%

Abstract:

The majority of neurons in the primary visual cortex of primates can be activated by stimulation of either eye; moreover, the monocular receptive fields of such neurons are located in about the same region of visual space. These well-known facts imply that binocular convergence in visual cortex can explain our cyclopean view of the world. To test the adequacy of this assumption, we examined how human subjects integrate binocular events in time. Light flashes presented synchronously to both eyes were compared to flashes presented alternately (asynchronously) to one eye and then the other. Subjects perceived very-low-frequency (2 Hz) asynchronous trains as equivalent to synchronous trains flashed at twice the frequency (the prediction based on binocular convergence). However, at higher frequencies of presentation (4-32 Hz), subjects perceived asynchronous and synchronous trains to be increasingly similar. Indeed, at the flicker-fusion frequency (approximately 50 Hz), the apparent difference between the two conditions was only 2%. We suggest that the explanation of these anomalous findings is that we parse visual input into sequential episodes.

Relevance: 60.00%

Abstract:

Objectives. It has been proposed that disruption of the internal proprioceptive representation, via incongruent sensory input, may underpin pathological pain states, but experimental evidence relies on conflicting visual input, which is not clinically relevant. We aimed to determine the symptomatic effect of incongruent proprioceptive input, imparted by vibration of the wrist tendons, which evokes the illusion of perpetual wrist flexion and disrupts cortical proprioceptive representation. Methods. Twenty-nine healthy and naive volunteers reported symptoms during five conditions: control, active and passive wrist flexion, extensor carpi radialis tendon vibration to evoke the illusion of perpetual wrist flexion, and ulnar styloid (sham) vibration. No advice was given about possible illusions. Results. Twenty-one subjects reported the illusion of perpetual wrist flexion during tendon vibration. There was no effect of condition, or of whether subjects reported an illusion, on discomfort/pain (P > 0.28). Peculiarity, swelling and foreignness were greater during tendon vibration than during the other conditions, and greater during tendon vibration in those who reported an illusion of wrist flexion than in those who did not (P < 0.05 for all). Symptoms were reported by at least two subjects in each condition and four subjects reported systemic symptoms (e.g. nausea). Conclusions. In healthy volunteers, incongruent proprioceptive input does not cause discomfort or pain but does evoke feelings of peculiarity, swelling and foreignness in the limb.

Relevance: 60.00%

Abstract:

To navigate successfully in a novel environment a robot needs to be able to Simultaneously Localize And Map (SLAM) its surroundings. The most successful solutions to this problem so far have involved probabilistic algorithms, but there has been much promising work involving systems based on the workings of part of the rodent brain known as the hippocampus. In this paper we present a biologically plausible system called RatSLAM that uses competitive attractor networks to carry out SLAM in a probabilistic manner. The system can effectively perform parameter self-calibration and SLAM in one dimension. Tests in two-dimensional environments revealed the inability of the RatSLAM system to maintain multiple pose hypotheses in the face of ambiguous visual input. These results support recent rat experiments suggesting that current competitive attractor models are not a complete solution to the hippocampal modelling problem.
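The competitive attractor dynamics referred to above are not specified in the abstract; a toy one-dimensional sketch of the general idea (local excitation plus global inhibition on a ring of units, not RatSLAM's actual update equations) might look like:

```python
def settle(activity, excite=0.2, inhibit=0.05, steps=20):
    """One-dimensional competitive attractor: each unit excites its
    ring neighbours while all units share a global inhibition, so a
    single activity packet survives the competition."""
    n = len(activity)
    a = list(activity)
    for _ in range(steps):
        # local excitation from immediate neighbours (ring topology)
        new = [a[i] + excite * (a[(i - 1) % n] + a[(i + 1) % n])
               for i in range(n)]
        # global inhibition, floored at zero
        total = sum(new)
        new = [max(0.0, v - inhibit * total) for v in new]
        # normalise so total activity stays bounded
        s = sum(new) or 1.0
        a = [v / s for v in new]
    return a

# Weak noise everywhere plus a stronger cue at unit 7:
a0 = [0.05] * 20
a0[7] = 0.5
settled = settle(a0)
print(settled.index(max(settled)))  # packet centred on unit 7
```

Only one packet survives the competition, which is exactly why such networks struggle to maintain multiple pose hypotheses under ambiguous input, as the two-dimensional tests described above found.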

Relevance: 60.00%

Abstract:

Background - When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time we employed biological motion as the moving stimulus, which is important for two reasons. Firstly, biological motion is processed by visual as well as somatosensory brain areas, which makes it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Secondly, discussions about the mechanisms of the FLE tend to resort to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities. Methodology/Principal Findings - Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results with 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE. Conclusions/Significance - Our study provides the first evidence of an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations in accordance with biased neural competition (Desimone & Duncan, 1995).
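The visual-prediction account cited above (Nijhawan, 2008) reduces to a one-line model: the moving item's representation is extrapolated forward by the processing latency, while the flash's is not. A minimal sketch, where the 80 ms latency is an assumed, illustrative value rather than a figure from the study:

```python
def perceived_position(pos, velocity, latency=0.08):
    """Motion-extrapolation account of the flash-lag effect: a moving
    item is perceived shifted ahead by velocity * latency, whereas a
    brief flash (velocity 0) is not, producing the apparent lag."""
    return pos + velocity * latency

flash = perceived_position(0.0, 0.0)   # flashed item: no shift
mover = perceived_position(0.0, 10.0)  # item moving at 10 deg/s
print(mover - flash)  # ~0.8 deg apparent lag
```

Constant-velocity extrapolation like this is exactly what non-linear biological motion breaks, which is why an acceleration-matched comparison stimulus was needed.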

Relevance: 60.00%

Abstract:

Electronic Perception Technology (EPT) enables automated equipment to gain artificial sight, commonly referred to as "machine vision", by employing specialty software and embedded sensors to create a "visual" input field that can be used as a front-end application for transactional behavior. The authors review this new technology and present feasible future applications to the food service industry that enhance guest services while providing a competitive advantage.

Relevance: 40.00%

Abstract:

This work investigates novel alternative means of interaction in a virtual environment (VE). We analyze whether humans can remap established body functions to learn to interact with digital information in an environment that is cross-sensory by nature and uses vocal utterances to influence (abstract) virtual objects. We thus establish a correlation among learning, control of the interface, and the perceived sense of presence in the VE. The application enables intuitive interaction by mapping actions (the prosodic aspects of the human voice) to a certain response (i.e., visualization). A series of single-user and multiuser studies shows that users can gain control of the intuitive interface and learn to adapt to new and previously unseen tasks in VEs. Despite the abstract nature of the presented environment, presence scores were generally very high.

Relevance: 40.00%

Abstract:

Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus 'look!'. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed, and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.

Relevance: 30.00%

Abstract:

The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here, in a series of experiments, we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect.
Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies we discuss the neural mechanisms likely to underlie this effect.

Relevance: 30.00%

Abstract:

Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery occurs during the first year after cochlear implantation, but there is wide variability in both the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

Relevance: 30.00%

Abstract:

Neural comparisons of bilateral sensory inputs are essential for visual depth perception and accurate localization of sounds in space. All animals, from single-cell prokaryotes to humans, orient themselves in response to environmental chemical stimuli, but the contribution of spatial integration of neural activity in olfaction remains unclear. We investigated this problem in Drosophila melanogaster larvae. Using high-resolution behavioral analysis, we studied the chemotaxis behavior of larvae with a single functional olfactory neuron on either the left or right side of the head, allowing us to examine unilateral or bilateral olfactory input. We developed new spectroscopic methods to create stable odorant gradients in which odor concentrations were experimentally measured. In these controlled environments, we observed that a single functional neuron provided sufficient information to permit larval chemotaxis. We found additional evidence that the overall accuracy of navigation is enhanced by the increase in the signal-to-noise ratio conferred by bilateral sensory input.
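The signal-to-noise argument in the final sentence is the usual averaging effect: combining two sensors with independent noise shrinks the noise standard deviation by about 1/sqrt(2). A quick simulation of this point (the function name and all parameter values are illustrative, not taken from the study):

```python
import random

def empirical_noise_std(n_sensors, trials=20000,
                        signal=1.0, noise_sd=0.5, seed=42):
    """Estimate the spread of a percept formed by averaging
    `n_sensors` independent noisy readings of the same signal."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        readings = [signal + rng.gauss(0.0, noise_sd)
                    for _ in range(n_sensors)]
        estimates.append(sum(readings) / n_sensors)
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return var ** 0.5

print(empirical_noise_std(1))  # ~0.50: one olfactory neuron
print(empirical_noise_std(2))  # ~0.35: bilateral input, ~0.50/sqrt(2)
```

This is why a single functional neuron suffices for chemotaxis while bilateral input still improves overall navigation accuracy.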

Relevance: 30.00%

Abstract:

Vision provides a primary sensory input for food perception. It raises expectations about taste and nutritional value and drives acceptance or rejection. So far, the impact of visual food cues varying in energy content on subsequent taste integration remains unexplored. Using electrical neuroimaging, we assessed whether high- and low-calorie food cues differentially influence the brain processing and perception of a subsequent neutral electric taste. When viewing high-calorie food images, participants reported the subsequent taste to be more pleasant than when low-calorie food images preceded the identical taste. Moreover, the taste-evoked neural activity was stronger in the bilateral insula and the adjacent frontal operculum (FOP) within 100 ms after taste onset when preceded by high- versus low-calorie cues. A similar pattern emerged in the anterior cingulate cortex (ACC) and medial orbitofrontal cortex (OFC) around 180 ms, as well as in the right insula around 360 ms. The activation differences in the OFC correlated positively with changes in taste pleasantness, a finding in accord with the role of the OFC in the hedonic evaluation of taste. Later activation differences in the right insula likely indicate revaluation of interoceptive taste awareness. Our findings reveal previously unknown mechanisms of cross-modal, visual-gustatory, sensory interactions underlying food evaluation.