893 results for Motion perception (Vision)


Relevance: 30.00%

Abstract:

Large and powerful ocean predators such as swordfishes, some tunas, and several shark species are unique among fishes in that they are capable of maintaining elevated body temperatures (endothermy) when hunting for prey in deep and cold water [1-3]. In these animals, warming the central nervous system and the eyes is the one common feature of this energetically costly adaptation [4]. In the swordfish (Xiphias gladius), a highly specialized heating system located in an extraocular muscle specifically warms the eyes and brain up to 10-15°C above ambient water temperatures [2, 5]. Although the function of neural warming in fishes has been the subject of considerable speculation [1, 6, 7], the biological significance of this unusual ability has until now remained unknown. We show here that warming the retina significantly improves temporal resolution, and hence the detection of rapid motion, in fast-swimming predatory fishes such as the swordfish. Depending on diving depth, temporal resolution can be more than ten times greater in these fishes than in fishes with eyes at the same temperature as the surrounding water. The enhanced temporal resolution allowed by heated eyes provides warm-blooded and highly visual oceanic predators, such as swordfishes, tunas, and sharks, with a crucial advantage over their agile, cold-blooded prey.

Relevance: 30.00%

Abstract:

High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying the compensation of retinal image motion caused by movement of the eyes. The experiments describe the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed then limitations are placed on the integration time window over which motion estimates are calculated. (c) 2006 Elsevier Ltd. All rights reserved.
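The paper's core inference, that a drifting eye converts a temporal offset between briefly flashed elements into a spatial offset on the retina, follows from a simple kinematic relation: offset = drift velocity × delay. A minimal sketch of that relation (the drift speed and delay below are illustrative values, not figures from the study):

```python
# Illustrative only: how fixational eye drift turns a temporal offset
# between two briefly flashed stimulus elements into a spatial offset
# on the retina. The drift velocity and delay are assumed values,
# not measurements from the study.

def retinal_spatial_offset(drift_velocity_arcmin_per_s: float,
                           temporal_offset_ms: float) -> float:
    """Spatial offset (arcmin) induced when the second element appears
    temporal_offset_ms later while the eye drifts at the given speed."""
    return drift_velocity_arcmin_per_s * (temporal_offset_ms / 1000.0)

# A slow fixational drift of 50 arcmin/s and a 10 ms delay already
# yield a 0.5 arcmin spatial offset, within the hyperacuity range.
print(retinal_spatial_offset(50.0, 10.0))
```

Even quite slow fixational drift can induce offsets at hyperacuity scale, which is why the per-trial analysis of eye motion matters for interpreting apparent temporal-offset sensitivity.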

Relevance: 30.00%

Abstract:

A fundamental question about the perception of time is whether the neural mechanisms underlying temporal judgements are universal and centralized in the brain or modality specific and distributed []. Time perception has traditionally been thought to be entirely dissociated from spatial vision. Here we show that the apparent duration of a dynamic stimulus can be manipulated in a local region of visual space by adapting to oscillatory motion or flicker. This implicates spatially localized temporal mechanisms in duration perception. We do not see concomitant changes in the time of onset or offset of the test patterns, demonstrating a direct local effect on duration perception rather than an indirect effect on the time course of neural processing. The effects of adaptation on duration perception can also be dissociated from motion or flicker perception per se. Although 20 Hz adaptation reduces both the apparent temporal frequency and duration of a 10 Hz test stimulus, 5 Hz adaptation increases apparent temporal frequency but has little effect on duration perception. We conclude that there is a peripheral, spatially localized, essentially visual component involved in sensing the duration of visual events.

Relevance: 30.00%

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, systems operating in ground environments (both indoor and outdoor) have reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally demands low power consumption to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments where human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field for the duration of the mission, within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they require the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and used as observers of underwater fauna, seabeds, shipwrecks, and so on.
On the other hand, underwater operations such as object recovery and equipment maintenance are still challenging to conduct without human supervision, since they require object perception and localization with an accuracy and robustness seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account at all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily affect perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet bus. The adopted design philosophy aims at high flexibility in terms of feasible perception applications, which should not be limited as they would be with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time. The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunities offered by the MARIS national project. Design issues such as power consumption, heat dissipation, and network capacity have been evaluated in different scenarios.
Finally, real-world experiments, conducted in multiple and varied underwater contexts including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
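The abstract gives no equations, but the computation any stereo-vision perception module ultimately rests on is triangulation: depth Z = f·B/d for a rectified camera pair with focal length f, baseline B, and disparity d. A minimal sketch with assumed parameter values (not figures from the thesis):

```python
# Minimal stereo triangulation sketch (not code from the thesis).
# Assumes a rectified camera pair; the focal length, baseline, and
# disparity below are illustrative values.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a point given its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. an 800 px focal length, 0.30 m baseline and 40 px disparity
# place the point at about 6 m.
print(depth_from_disparity(800.0, 0.30, 40.0))
```

Underwater, refraction at the camera housing's port changes the effective intrinsics relative to in-air calibration, one practical complication such platforms must account for.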

Relevance: 30.00%

Abstract:

Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

Relevance: 30.00%

Abstract:

The perception of an object as a single entity within a visual scene requires that its features are bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from the synchronized high frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low frequency (alpha, beta) activity in object processing. The target stimulus (i.e. object) was a small patch of a concentric grating of 3c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged--through rotation about its own centre--as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation. 
The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support of the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention.
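The study's central quantity, oscillatory power within a frequency band (low gamma, 40-55 Hz, versus high gamma, >55 Hz), can be sketched with a plain FFT on a synthetic signal. This is not the SAM beamformer analysis itself, and the sampling rate is an assumed value:

```python
import numpy as np

# Band-limited power on a synthetic signal, illustrating the low-gamma
# (40-55 Hz) vs high-gamma (>55 Hz) comparison. This is not the SAM
# beamformer analysis; the sampling rate is an assumed value.

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` between f_lo and f_hi (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(power[mask].mean())

fs = 600.0                              # assumed MEG sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 65.0 * t)      # pure 65 Hz "high-gamma" oscillation

high = band_power(sig, fs, 55.0, 100.0)
low = band_power(sig, fs, 40.0, 55.0)
print(high > low)                       # the 65 Hz component dominates
```

With such band-power estimates computed per trial and per virtual-electrode location, opposing changes in the two gamma sub-bands of the kind reported above become directly measurable.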

Relevance: 30.00%

Abstract:

We sought to determine the extent to which colour (and luminance) signals contribute towards the visuomotor localization of targets. To do so we exploited the movement-related illusory displacement a small stationary window undergoes when it has a continuously moving carrier grating behind it. We used drifting (1.0-4.2 Hz) red/green-modulated isoluminant gratings or yellow/black luminance-modulated gratings as carriers, each curtailed in space by a stationary, two-dimensional window. After each trial, the perceived location of the window was recorded with reference to an on-screen ruler (perceptual task) or the on-screen touch of a ballistic pointing movement made without visual feedback (visuomotor task). Our results showed that the perceptual displacement measures were similar for each stimulus type and weakly dependent on stimulus drift rate. However, while the visuomotor displacement measures were similar for each stimulus type at low drift rates (<4 Hz), they were significantly larger for luminance than colour stimuli at high drift rates (>4 Hz). We show that the latter cannot be attributed to differences in perceived speed between stimulus types. We assume, therefore, that our visuomotor localization judgements were more susceptible to the (carrier) motion of luminance patterns than colour patterns. We suggest that, far from being detrimental, this susceptibility may indicate the operation of mechanisms designed to counter the temporal asynchrony between perceptual experiences and the physical changes in the environment that give rise to them. We propose that perceptual localization is equally supported by both colour and luminance signals but that visuomotor localization is predominantly supported by luminance signals. We discuss the neural pathways that may be involved with visuomotor localization. © 2007 Springer-Verlag.

Relevance: 30.00%

Abstract:

To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
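As a rough illustration of the two-stage scheme described above, the following one-dimensional sketch applies a Gaussian first-derivative filter, half-wave rectifies, then applies a second-derivative filter and rectifies again, taking the edge location as the peak of the final response. The filter scales, the sign convention of the second stage, and the test profile are all assumptions of this sketch, and the full model's multiscale scale-space search over blur is omitted:

```python
import numpy as np

# 1-D sketch of the two-stage edge model (single scale, single polarity
# channel). Filter scales, the sign of the second-stage filter, and the
# test profile are assumptions, not the authors' implementation.

def gauss_deriv(x, sigma, order):
    """Gaussian derivative kernels of order 1 and 2."""
    g = np.exp(-x**2 / (2 * sigma**2))
    if order == 1:
        return -x / sigma**2 * g
    if order == 2:
        return (x**2 / sigma**4 - 1 / sigma**2) * g
    raise ValueError("order must be 1 or 2")

def find_edge(profile, sigma1=2.0, sigma2=2.0):
    """Locate a rising edge as the peak of the second-stage response."""
    x = np.arange(-15.0, 16.0)
    stage1 = np.convolve(profile, gauss_deriv(x, sigma1, 1), mode="same")
    stage1 = np.maximum(stage1, 0.0)      # half-wave rectification
    # Second stage taken as -g'' so the response peaks at the edge.
    stage2 = np.convolve(stage1, -gauss_deriv(x, sigma2, 2), mode="same")
    stage2 = np.maximum(stage2, 0.0)      # half-wave rectification
    return int(np.argmax(stage2))

# A blurred step edge centred at sample 100:
n = np.arange(200)
profile = 1.0 / (1.0 + np.exp(-(n - 100) / 4.0))
print(find_edge(profile))   # peak at (or within a sample or two of) 100
```

The half-wave rectification between stages is the key nonlinearity: it keeps the channel selective for one edge polarity and suppresses the "phantom" responses a purely linear cascade would produce.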

Relevance: 30.00%

Abstract:

Edge detection is crucial in visual processing. Previous computational and psychophysical models have often used peaks in the gradient or zero-crossings in the 2nd derivative to signal edges. We tested these approaches using a stimulus that has no such features. Its luminance profile was a triangle wave, blurred by a rectangular function. Subjects marked the position and polarity of perceived edges. For all blur widths tested, observers marked edges at or near 3rd derivative maxima, even though these were not 1st derivative maxima or 2nd derivative zero-crossings, at any scale. These results are predicted by a new nonlinear model based on 3rd derivative filtering. As a critical test, we added a ramp of variable slope to the blurred triangle-wave luminance profile. The ramp has no effect on the (linear) 2nd or higher derivatives, but the nonlinear model predicts a shift from seeing two edges to seeing one edge as the ramp gradient increases. Results of two experiments confirmed such a shift, thus supporting the new model. [Supported by the Engineering and Physical Sciences Research Council].
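The logic of the critical test rests on elementary calculus: adding a linear ramp a·x shifts the 1st derivative by the constant a but leaves the 2nd and all higher derivatives unchanged. A numerical check on an illustrative smooth profile (not the experimental blurred triangle wave):

```python
import numpy as np

# Numerical check of the logic behind the critical test: a linear ramp
# changes the 1st derivative by a constant but leaves the 2nd (and all
# higher) derivatives untouched. The profile is an illustrative smooth
# waveform, not the blurred triangle-wave stimulus.

x = np.linspace(0.0, 4.0 * np.pi, 500)
profile = np.sin(x)          # stand-in for a smooth luminance profile
ramp = 0.05 * x              # added ramp of slope 0.05

d2_plain = np.gradient(np.gradient(profile, x), x)
d2_ramped = np.gradient(np.gradient(profile + ramp, x), x)

print(np.allclose(d2_plain, d2_ramped))   # True: 2nd derivative unchanged
```

Because the linear derivatives above 1st order are blind to the ramp, any ramp-dependent change in the number of perceived edges, such as the two-to-one shift reported here, must come from a nonlinearity in the edge-finding process.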

Relevance: 30.00%

Abstract:

Motion discontinuities can signal object boundaries where few or no other cues, such as luminance, colour, or texture, are available. Hence, motion-defined contours are an ecologically important counterpart to luminance contours. We developed a novel motion-defined Gabor stimulus to investigate the nature of neural operators analysing visual motion fields in order to draw parallels with known luminance operators. Luminance-defined Gabors have been successfully used to discern the spatial-extent and spatial-frequency specificity of possible visual contour detectors. We now extend these studies into the motion domain. We define a stimulus using limited-lifetime moving dots whose velocity is described over 2-D space by a Gabor pattern surrounded by randomly moving dots. Participants were asked to determine whether the orientation of the Gabor pattern (and hence of the motion contours) was vertical or horizontal in a 2AFC task, and the proportion of correct responses was recorded. We found that with practice participants became highly proficient at this task, able in certain cases to reach 90% accuracy with only 12 limited-lifetime dots. However, for both practised and novice participants we found that the ability to detect a single boundary saturates with the size of the Gaussian envelope of the Gabor at approximately 5 deg full-width at half-height. At this optimal size we then varied spatial frequency and found that performance was best at the lowest measured spatial frequency (0.1 cycles/deg) and declined steadily at higher spatial frequencies, suggesting that motion contour detectors may be specifically tuned to a single, isolated edge.
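The stimulus can be pictured as ordinary limited-lifetime dots whose velocity, rather than luminance, is modulated over space by a Gabor. A sketch of assigning per-dot velocities this way (field size, peak speed, and dot count are illustrative; the 2.1 deg sigma corresponds roughly to the ~5 deg full-width at half-height quoted above):

```python
import numpy as np

# Sketch of a motion-defined Gabor stimulus: limited-lifetime dots whose
# velocity (not luminance) is modulated over space by a Gabor pattern.
# Sizes, speeds and dot counts are illustrative; sigma = 2.1 deg
# corresponds roughly to a 5 deg full-width at half-height envelope.

rng = np.random.default_rng(0)

def gabor(x, y, sf_cpd, sigma_deg):
    """Vertical cosine carrier under a circular Gaussian envelope."""
    carrier = np.cos(2.0 * np.pi * sf_cpd * x)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma_deg**2))
    return carrier * envelope

n_dots = 100
x, y = rng.uniform(-5.0, 5.0, size=(2, n_dots))   # dot positions (deg)
peak_speed = 2.0                                   # deg/s (assumed)

# Each dot's vertical velocity follows the Gabor profile at its location,
# so the motion field, not the luminance, carries the contour.
vy = peak_speed * gabor(x, y, sf_cpd=0.1, sigma_deg=2.1)
print(vy.shape)
```

Rotating the carrier by 90 degrees produces the horizontal-orientation alternative of the 2AFC task; dots outside the envelope move at near-zero modulated speed and in the real stimulus are replaced by randomly moving dots.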

Relevance: 30.00%

Abstract:

Previous studies have suggested separate channels for the detection of first-order luminance (LM) and second-order modulations of the local amplitude (AM) of a texture (Schofield and Georgeson, 1999 Vision Research 39 2697 - 2716; Georgeson and Schofield, 2002 Spatial Vision 16 59). It has also been shown that LM and AM mixtures with different phase relationships are easily separated in identification tasks, and (informally) appear very different with the in-phase compound (LM + AM), producing the most realistic depth percept. We investigated the role of these LM and AM components in depth perception. Stimuli consisted of a noise texture background with thin bars formed as local increments or decrements in luminance and/or noise amplitude. These stimuli appear as embossed surfaces with wide and narrow regions. When luminance and amplitude changes have the same sign and magnitude (LM + AM) the overall modulation is consistent with multiplicative shading, but this is not so when the two modulations have opposite sign (LM - AM). Keeping the AM modulation depth fixed at a suprathreshold level, we determined the amount of luminance contrast required for observers to correctly indicate the width (narrow or wide) of raised regions in the display. Performance (compared to the LM-only case) was facilitated by the presence of AM, but, unexpectedly, performance for LM - AM was even better than for LM + AM. Further tests suggested that this improvement in performance is not due to an increase in the detectability of luminance in the compound stimuli. Thus, contrary to previous findings, these results suggest the possibility of interaction between first-order and second-order mechanisms in depth perception.
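The LM and AM components can be made concrete with the standard construction used in this line of work: a noise carrier riding on a mean luminance, with a modulator applied to the luminance itself (LM) and/or to the noise amplitude (AM). A sketch with illustrative parameters, in which a sinusoidal modulator stands in for the thin-bar profiles of the actual stimuli:

```python
import numpy as np

# Sketch of first-order (LM) and second-order (AM) stimulus construction
# in the style of Schofield & Georgeson. Modulation depths, contrast and
# the sinusoidal modulator are illustrative, not this study's thin bars.

rng = np.random.default_rng(1)

x = np.linspace(0.0, 2.0 * np.pi, 256)
modulator = np.sin(x)                  # m(x): the signal to be carried
noise = rng.uniform(-1.0, 1.0, x.size) # noise carrier n(x)

L0, contrast = 0.5, 0.2                # mean luminance, carrier contrast
l, a = 0.1, 0.1                        # LM and AM modulation depths

# LM + AM (same sign): luminance and noise amplitude rise together.
lm_plus_am = L0 * (1 + l * modulator + contrast * noise * (1 + a * modulator))
# LM - AM (opposite sign): luminance rises where noise amplitude falls.
lm_minus_am = L0 * (1 + l * modulator + contrast * noise * (1 - a * modulator))

print(lm_plus_am.shape)
```

With matching signs the compound behaves like multiplicative shading, which scales local mean and local amplitude together; with opposite signs (LM - AM) it does not, which is what makes the reported performance advantage for LM - AM unexpected.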