981 results for visual processing
Abstract:
To elucidate the roles of visual areas V1 and V2 and their interaction in early perceptual processing, we studied the responses of V1 and V2 neurons to statically displayed Kanizsa figures. We found evidence that V1 neurons respond to illusory contours of the Kanizsa figures. The illusory contour signals in V1 are weaker than in V2, but are significant, particularly in the superficial layers. The population averaged response to illusory contours emerged 100 msec after stimulus onset in the superficial layers of V1, and around 120–190 msec in the deep layers. The illusory contour response in V2 began earlier, occurring at 70 msec in the superficial layers and at 95 msec in the deep layers. The temporal sequence of the events suggests that the computation of illusory contours involves intercortical interaction, and that early perceptual organization is likely to be an interactive process.
Abstract:
In optimal foraging theory, search time is a key variable defining the value of a prey type. But the sensory-perceptual processes that constrain the search for food have rarely been considered. Here we evaluate the flight behavior of bumblebees (Bombus terrestris) searching for artificial flowers of various sizes and colors. When flowers were large, search times correlated well with the color contrast of the targets with their green foliage-type background, as predicted by a model of color opponent coding using inputs from the bees' UV, blue, and green receptors. Targets that made poor color contrast with their backdrop, such as white, UV-reflecting ones, or red flowers, took longest to detect, even though brightness contrast with the background was pronounced. When searching for small targets, bees changed their strategy in several ways. They flew significantly slower and closer to the ground, so increasing the minimum detectable area subtended by an object on the ground. In addition, they used a different neuronal channel for flower detection. Instead of color contrast, they used only the green receptor signal for detection. We relate these findings to temporal and spatial limitations of different neuronal channels involved in stimulus detection and recognition. Thus, foraging speed may not be limited only by factors such as prey density, flight energetics, and scramble competition. Our results show that understanding the behavioral ecology of foraging can substantially gain from knowledge about mechanisms of visual information processing.
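The color-opponent detection model is only named, not specified, in this summary. As a rough illustration of the kind of computation involved, the sketch below uses a generic color-hexagon-style formulation: receptor excitations derived from the UV, blue, and green quantum catches are mapped to a two-dimensional opponent plane, color contrast is the distance from the background in that plane, and the green-receptor signal serves as the single-channel alternative used for small targets. The function names and the background normalisation are assumptions for the sketch, not the exact model of the study.

```python
import numpy as np

# Generic color-hexagon-style sketch (assumed formulation, see note above).
def receptor_excitation(quantum_catch):
    """Nonlinear transduction; quantum catches are relative to the background."""
    q = np.asarray(quantum_catch, dtype=float)
    return q / (q + 1.0)

def opponent_coordinates(e_uv, e_blue, e_green):
    """Map UV, blue, green excitations onto a 2-D opponent plane."""
    x = (np.sqrt(3.0) / 2.0) * (e_green - e_uv)
    y = e_blue - 0.5 * (e_uv + e_green)
    return np.array([x, y])

def color_contrast(catch_target, catch_background=(1.0, 1.0, 1.0)):
    """Distance between target and background in the opponent plane."""
    t = opponent_coordinates(*receptor_excitation(catch_target))
    b = opponent_coordinates(*receptor_excitation(catch_background))
    return float(np.linalg.norm(t - b))

def green_contrast(catch_target):
    """Single-channel (green receptor) signal, as used for small targets."""
    e_green_target = receptor_excitation(catch_target)[2]
    e_green_background = 0.5  # a background catch of 1.0 gives excitation 0.5
    return abs(e_green_target - e_green_background)
```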
Abstract:
Event-related brain potentials (ERPs) provide high-resolution measures of the time course of neuronal activity patterns associated with perceptual and cognitive processes. New techniques for ERP source analysis and comparisons with data from blood-flow neuroimaging studies enable improved localization of cortical activity during visual selective attention. ERP modulations during spatial attention point toward a mechanism of gain control over information flow in extrastriate visual cortical pathways, starting about 80 ms after stimulus onset. Paying attention to nonspatial features such as color, motion, or shape is manifested by qualitatively different ERP patterns in multiple cortical areas that begin with latencies of 100–150 ms. The processing of nonspatial features seems to be contingent upon the prior selection of location, consistent with early selection theories of attention and with the hypothesis that spatial attention is “special.”
Abstract:
Previous studies of cortical retinotopy focused on influences from the contralateral visual field, because ascending inputs to cortex are known to be crossed. Here, functional magnetic resonance imaging was used to demonstrate and analyze an ipsilateral representation in human visual cortex. Moving stimuli, in a range of ipsilateral visual field locations, revealed activity: (i) along the vertical meridian in retinotopic (presumably lower-tier) areas; and (ii) in two large branches anterior to that, in presumptive higher-tier areas. One branch shares the anterior vertical meridian representation in human V3A, extending superiorly toward parietal cortex. The second branch runs antero-posteriorly along lateral visual cortex, overlying motion-selective area MT. Ipsilateral stimuli sparing the region around the vertical meridian representation also produced signal reductions (perhaps reflecting neural inhibition) in areas showing contralaterally driven retinotopy. Systematic sampling across a range of ipsilateral visual field extents revealed significant increases in ipsilateral activation in V3A and V4v, compared with immediately posterior areas V3 and VP. Finally, comparisons between ipsilateral stimuli of different types but equal retinotopic extent showed clear stimulus specificity, consistent with earlier suggestions of a functional segregation of motion vs. form processing in parietal vs. temporal cortex, respectively.
Abstract:
Vision extracts useful information from images. Reconstructing the three-dimensional structure of our environment and recognizing the objects that populate it are among the most important functions of our visual system. Computer vision researchers study the computational principles of vision and aim at designing algorithms that reproduce these functions. Vision is difficult: the same scene may give rise to very different images depending on illumination and viewpoint. Typically, an astronomical number of hypotheses exist that in principle have to be analyzed to infer a correct scene description. Moreover, image information might be extracted at different levels of spatial and logical resolution dependent on the image processing task. Knowledge of the world allows the visual system to limit the amount of ambiguity and to greatly simplify visual computations. We discuss how simple properties of the world are captured by the Gestalt rules of grouping, how the visual system may learn and organize models of objects for recognition, and how one may control the complexity of the description that the visual system computes.
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
When the illumination of a visual scene changes, the quantity of light reflected from objects is altered. Despite this, the perceived lightness of the objects generally remains constant. This perceptual lightness constancy is thought to be important behaviorally for object recognition. Here we show that interactions from outside the classical receptive fields of neurons in primary visual cortex modulate neural responses in a way that makes them immune to changes in illumination, as is perception. This finding is consistent with the hypothesis that the responses of neurons in primary visual cortex carry information about surface lightness in addition to information about form. It also suggests that lightness constancy, which is sometimes thought to involve “higher-level” processes, is manifest at the first stage of visual cortical processing.
Abstract:
Interactions between stimulus-induced oscillations (35-80 Hz) and stimulus-locked nonoscillatory responses were investigated in the visual cortex areas 17 and 18 of anaesthetized cats. A single square-wave luminance grating was used as a visual stimulus during simultaneous recordings from up to seven electrodes. The stimulus movement consisted of a superposition of a smooth movement with a sequence of dynamically changing accelerations. Responses of local groups of neurons at each electrode were studied on the basis of multiple unit activity and local slow field potentials (13-120 Hz). Oscillatory and stimulus-locked components were extracted from multiple unit activity and local slow field potentials and quantified by a combination of temporal and spectral correlation methods. We found fast stimulus-locked components primarily evoked by sudden stimulus accelerations, whereas oscillatory components (35-80 Hz) were induced during slow smooth movements. Oscillations were gradually reduced in amplitude and finally fully suppressed with increasing amplitudes of fast stimulus-locked components. It is argued that suppression of oscillations is necessary to prevent confusion during sequential processing of stationary and fast changing retinal images.
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
Abstract:
The role of intrinsic cortical connections in processing sensory input and in generating behavioral output is poorly understood. We have examined this issue in the context of the tuning of neuronal responses in cortex to the orientation of a visual stimulus. We analytically study a simple network model that incorporates both orientation-selective input from the lateral geniculate nucleus and orientation-specific cortical interactions. Depending on the model parameters, the network exhibits orientation selectivity that originates from within the cortex, by a symmetry-breaking mechanism. In this case, the width of the orientation tuning can be sharp even if the lateral geniculate nucleus inputs are only weakly anisotropic. By using our model, several experimental consequences of this cortical mechanism of orientation tuning are derived. The tuning width is relatively independent of the contrast and angular anisotropy of the visual stimulus. The transient population response to changing of the stimulus orientation exhibits a slow "virtual rotation." Neuronal cross-correlations exhibit long time tails, the sign of which depends on the preferred orientations of the cells and the stimulus orientation.
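As a concrete illustration of this kind of network, the sketch below simulates a threshold-linear ring model in which a weakly anisotropic feed-forward input is sharpened by orientation-specific recurrent connections. The parameter values are illustrative textbook-style choices placing the network in a strongly tuned regime; they are not taken from the study summarized above.

```python
import numpy as np

# Ring-model sketch (threshold-linear rate units; parameters are illustrative).
N = 180
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)

# Recurrent kernel: uniform inhibition plus orientation-tuned excitation,
# discretizing (1/pi) * integral of (J0 + J2 cos 2(dtheta)) m(theta') dtheta'.
J0, J2 = -7.3, 11.0
J = (J0 + J2 * np.cos(2.0 * np.subtract.outer(theta, theta))) / N

def lgn_input(stim_ori=0.0, contrast=1.0, eps=0.1):
    """Weakly anisotropic feed-forward drive (eps sets the input anisotropy)."""
    return contrast * (1.0 - eps + eps * np.cos(2.0 * (theta - stim_ori)))

def steady_state(stim_ori=0.0, contrast=1.0, dt=0.1, tau=1.0, steps=2000):
    """Euler-integrate tau dm/dt = -m + [h + J m]_+ to its fixed point."""
    m = np.zeros(N)
    h = lgn_input(stim_ori, contrast)
    for _ in range(steps):
        m += (dt / tau) * (-m + np.maximum(h + J @ m, 0.0))
    return m

rates = steady_state(stim_ori=0.0)
# The resulting population profile is far sharper than the 10% input modulation,
# and its width changes little when `contrast` is varied.
```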
Abstract:
Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.
Abstract:
Event-based visual servoing is a recently proposed approach that positions a robot using visual information only when it is required. Building on the classical image-based visual servoing control law, the scheme proposed in this paper can reduce the processing time of each loop iteration under certain conditions. The proposed control method comes into action when an event deactivates the classical image-based controller (i.e., when no image is available to track the visual features). A virtual camera is then moved along a straight-line path towards the desired position. The virtual path used to guide the robot improves on the behavior of the previous event-based visual servoing proposal.
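A minimal sketch of the switching idea described above, assuming the classical image-based law v = -λ L⁺ (s - s*) and a simple proportional law that drives a virtual camera along a straight line toward the goal when no image is available. The helper names and the simplified virtual-camera update are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

lam = 0.5  # control gain (illustrative)

def ibvs_velocity(s, s_star, L_pinv):
    """Classical image-based law: v = -lambda * L^+ * (s - s*)."""
    return -lam * L_pinv @ (s - s_star)

def virtual_camera_velocity(p, p_star):
    """Drive a virtual camera along the straight line toward the desired position."""
    return lam * (p_star - p)

def control_step(image_available, s, s_star, L_pinv, p, p_star):
    """Event-based switch: use image features when tracked, else the virtual path."""
    if image_available:
        return ibvs_velocity(s, s_star, L_pinv)
    return virtual_camera_velocity(p, p_star)
```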
Abstract:
New low-cost sensors and free, open-source libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and requires no visual markers, camera calibration, or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient lighting changes. The method was designed to provide a human interface for remotely controlling domestic or industrial devices, and it was tested here by operating a robotic hand. First, the human hand was recognized and the fingers were detected. Second, the movement of the fingers was analysed and mapped so that it could be imitated by a robotic hand.
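The following sketch illustrates one simple way such a point-cloud pipeline could be organized: isolate the hand as the cluster of points nearest the sensor, then take fingertip candidates as the points farthest from a rough palm centroid. The depth margin and the crude clustering step are assumptions for illustration, not the method of the paper.

```python
import numpy as np

def detect_fingertips(points, hand_depth_margin=0.08, n_fingers=5):
    """points: (N, 3) array of x, y, z coordinates from an RGB-D sensor (metres).

    Keeps the points closest to the sensor as the hand, then returns the
    n_fingers points farthest from the rough palm centroid as fingertip
    candidates. Thresholds are illustrative.
    """
    z_min = points[:, 2].min()
    hand = points[points[:, 2] < z_min + hand_depth_margin]  # nearest cluster ~ hand
    palm = hand.mean(axis=0)                                 # rough palm centroid
    dist = np.linalg.norm(hand - palm, axis=1)
    return hand[np.argsort(dist)[-n_fingers:]]               # fingertip candidates
```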
Abstract:
The Gaia-ESO Survey is a large public spectroscopic survey that aims to derive radial velocities and fundamental parameters of about 10^5 Milky Way stars in the field and in clusters. Observations are carried out with the multi-object optical spectrograph FLAMES, simultaneously using the medium-resolution (R ~ 20 000) GIRAFFE spectrograph and the high-resolution (R ~ 47 000) UVES spectrograph. In this paper we describe the methods and the software used for the data reduction, the derivation of the radial velocities, and the quality control of the FLAMES-UVES spectra. Data reduction has been performed using a workflow specifically developed for this project. This workflow runs the ESO public pipeline optimized for the Gaia-ESO Survey, automatically performs sky subtraction, barycentric correction and normalisation, and calculates radial velocities and a first guess of the rotational velocities. The quality control is performed using the output parameters from the ESO pipeline, by visual inspection of the spectra, and by analysis of the signal-to-noise ratio of the spectra. Using the observations of the first 18 months, specifically targets observed multiple times at different epochs, stars observed with both GIRAFFE and UVES, and observations of radial velocity standards, we estimated the precision and the accuracy of the radial velocities. The statistical error on the radial velocities is σ ~ 0.4 km s^-1 and is mainly due to uncertainties in the zero point of the wavelength calibration. However, we found a systematic bias with respect to the GIRAFFE spectra (~0.9 km s^-1) and to the radial velocities of the standard stars (~0.5 km s^-1) retrieved from the literature. This bias will be corrected in future data releases, when a common zero point for all the set-ups and instruments used for the survey is established.
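For context, radial velocities are commonly estimated by cross-correlating the observed spectrum with a template on a uniform logarithmic wavelength grid. The sketch below shows that generic technique only; it does not reproduce the survey's actual pipeline, templates, or parameters.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def radial_velocity(log_wave, flux_obs, flux_template, max_shift=200):
    """Estimate RV (km/s) from spectra sampled on a uniform ln(wavelength) grid.

    A shift of s pixels in ln(lambda) corresponds to v ~ c * s * step for small
    velocities. Continuum normalisation and edge effects are ignored here.
    """
    step = log_wave[1] - log_wave[0]
    shifts = np.arange(-max_shift, max_shift + 1)
    ccf = [np.sum(flux_obs * np.roll(flux_template, s)) for s in shifts]
    best_shift = shifts[int(np.argmax(ccf))]
    return C_KMS * best_shift * step
```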
Abstract:
Magnetic fluid hyperthermia (MFH) is considered a promising therapeutic technique for the treatment of cancer cells, in which magnetic nanoparticles (MNPs) with superparamagnetic behavior generate mild temperatures under an AC magnetic field to selectively destroy the abnormal cancer cells while sparing the healthy ones. However, the poor heating efficiency of most MNPs and the imprecise experimental determination of the temperature field during treatment are two of the major drawbacks to its clinical advancement. Thus, in this work, different MNPs were developed and tested under an AC magnetic field (~1.10 kA/m and 200 kHz), and the heat generated by them was assessed with an infrared camera. The resulting thermal images were processed in MATLAB after thermographic calibration of the infrared camera. The results show the potential of this thermal technique for the improvement and advancement of MFH as a clinical therapy.
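A minimal sketch of the kind of post-processing described above: apply a thermographic calibration to raw infrared frames and track the hottest region over time. The linear calibration coefficients are placeholder assumptions, and the original analysis was carried out in MATLAB rather than Python.

```python
import numpy as np

def counts_to_celsius(raw, gain=0.04, offset=-273.15):
    """Assumed linear thermographic calibration from raw counts to deg C."""
    return gain * np.asarray(raw, dtype=float) + offset

def heating_curve(frames):
    """frames: (T, H, W) stack of raw infrared frames; returns peak temperature per frame."""
    temps = counts_to_celsius(frames)
    return temps.reshape(temps.shape[0], -1).max(axis=1)
```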