991 results for visual motion
Abstract:
Coordinated eye and head movements occur simultaneously when scanning the visual world for relevant targets. However, measuring both eye and head movements in experiments that allow natural head movements can be challenging. This paper provides an approach to studying eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea-reflection artifacts, in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), MATLAB software developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
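The reconstruction of corneal-reflection dropouts is not detailed in this summary; as a hedged illustration, short gaps in a gaze trace might be bridged by linear interpolation. The function name, the `max_gap` threshold, and the sample data below are assumptions, not taken from the paper:

```python
def reconstruct_gaps(signal, max_gap=10):
    """Fill short runs of lost samples (None) by linear interpolation.

    Longer gaps and gaps at the edges of the recording are left
    untouched, since interpolating across large dropouts would
    fabricate eye-position data. `max_gap` is illustrative.
    """
    out = list(signal)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            start = i
            while i < n and out[i] is None:
                i += 1
            gap = i - start
            # Interpolate only interior gaps that are short enough.
            if 0 < start and i < n and gap <= max_gap:
                a, b = out[start - 1], out[i]
                for k in range(gap):
                    out[start + k] = a + (b - a) * (k + 1) / (gap + 1)
        else:
            i += 1
    return out
```

A real implementation would likely also flag reconstructed samples so that downstream saccade detection can treat them with caution.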
Abstract:
OBJECTIVE Visuoperceptual deficits are common in dementia with Lewy bodies (DLB) and Alzheimer disease (AD). Testing visuoperception in dementia is complicated by decline in other cognitive domains and by extrapyramidal features. To overcome these issues, we developed a computerized test, the Newcastle visuoperception battery (NEVIP), which is independent of motor function and has minimal cognitive load. We aimed to test its utility in identifying visuoperceptual deficits in people with dementia. PARTICIPANTS AND MEASUREMENTS We recruited 28 AD and 26 DLB participants, along with 35 comparison participants of similar age and education. The NEVIP was used to test angle, color, and form discrimination along with motion perception to obtain a composite visuoperception score. RESULTS Those with DLB performed significantly worse than AD participants on the composite visuoperception score (Mann-Whitney U = 142, p = 0.01). Visuoperceptual deficits (defined as 2 SD below the performance of the comparison group) were present in 71% of the DLB group and 40% of the AD group. Performance was not significantly correlated with motor impairment, but was significantly related to global cognitive impairment in DLB (rs = -0.689, p < 0.001), though not in AD. CONCLUSION Visuoperceptual deficits can be detected in both DLB and AD participants using the NEVIP, with the DLB group performing significantly worse than the AD group. Visuoperception scores obtained with the NEVIP are independent of participants' motor deficits, and participants are able to comprehend and perform the tasks.
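The 2-SD deficit criterion used here is simple to state in code; a minimal sketch with invented example scores (not the study's data):

```python
from statistics import mean, stdev

def deficit_cutoff(comparison_scores, n_sd=2.0):
    """Cutoff below which a score counts as a visuoperceptual deficit:
    n_sd standard deviations below the comparison-group mean."""
    return mean(comparison_scores) - n_sd * stdev(comparison_scores)

def has_deficit(score, comparison_scores, n_sd=2.0):
    """True if `score` falls below the deficit cutoff."""
    return score < deficit_cutoff(comparison_scores, n_sd)

# Hypothetical composite scores of the comparison group:
comparisons = [100.0, 95.0, 105.0, 98.0, 102.0]
```

Classifying each patient's composite score against this cutoff yields the kind of deficit percentages (71% in DLB, 40% in AD) reported above.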
Abstract:
OBJECTIVE To quantify visual discrimination, space-motion, and object-form perception in patients with Parkinson disease dementia (PDD), dementia with Lewy bodies (DLB), and Alzheimer disease (AD). METHODS The authors used a cross-sectional study to compare three demented groups matched for overall dementia severity (PDD: n = 24; DLB: n = 20; AD: n = 23) and two age-, sex-, and education-matched control groups (PD: n = 24, normal controls [NC]: n = 25). RESULTS Visual perception was globally more impaired in PDD than in nondemented controls (NC, PD), but was not different from DLB. Compared to AD, PDD patients tended to perform worse in all perceptual scores. Visual perception of patients with PDD/DLB and visual hallucinations was significantly worse than in patients without hallucinations. CONCLUSIONS Parkinson disease dementia (PDD) is associated with profound visuoperceptual impairments similar to dementia with Lewy bodies (DLB) but different from Alzheimer disease. These findings are consistent with previous neuroimaging studies reporting hypoactivity in cortical areas involved in visual processing in PDD and DLB.
Abstract:
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over 12 days (Experiment 1) or 6 days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvement in the same task was found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.
Abstract:
BACKGROUND: The observation of conspecifics influences our bodily perceptions and actions: contagious yawning, contagious itching, and empathy for pain are all examples of mechanisms based on resonance between our own body and those of others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory, and tactile information, it has not yet been associated with the perception of self-motion. METHODOLOGY/PRINCIPAL FINDINGS: We investigated whether viewing one's own body, the body of another, or an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction of this effect when observing someone else's body in motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. CONCLUSIONS/SIGNIFICANCE: The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a "vestibular mirror neuron system".
Abstract:
Perceptual learning can occur when stimuli are only imagined, i.e., without actual stimulus presentation. For example, perceptual learning improved bisection discrimination when only the two outer lines of the bisection stimulus were presented and the central line had to be imagined. Performance also improved with other static stimuli. In non-learning imagery experiments, imagining static stimuli differs from imagining motion stimuli. We hypothesized that these differences also affect imagery perceptual learning. Here, we show that imagery training also improves motion-direction discrimination. Learning occurs when no stimulus at all is presented during training, whereas no learning occurs when only noise is presented. Interference between the noise and the mental imagery possibly hinders learning. For static bisection stimuli, the pattern is just the opposite: learning occurs when the two outer lines of the bisection stimulus are presented, i.e., only part of the visual stimulus, while no learning occurs when no stimulus at all is presented.
Abstract:
The recurrent interaction among orientation-selective neurons in the primary visual cortex (V1) is suited to enhance contours in a noisy visual scene. Motion is known to have a strong pop-out effect in the perception of contours, but how motion-sensitive neurons in V1 support contour detection remains largely elusive. Here we suggest how the various types of motion-sensitive neurons observed in V1 should be wired together in a micro-circuitry to optimally extract contours in the visual scene. Motion-sensitive neurons can be selective for the direction of motion occurring at some spot, or respond equally to all directions (pandirectional). We show that, in the light of figure-ground segregation, direction-selective motion neurons should additively modulate the corresponding orientation-selective neurons with preferred orientation orthogonal to the motion direction. In turn, to maximally enhance contours, pandirectional motion neurons should multiplicatively modulate all orientation-selective neurons with co-localized receptive fields. This multiplicative modulation amplifies the local V1 circuitry among co-aligned orientation-selective neurons for detecting elongated contours. We suggest that the additive modulation by direction-selective motion neurons is achieved through synaptic projections to the somatic region, and the multiplicative modulation by pandirectional motion neurons through projections to the apical region of orientation-selective pyramidal neurons. For the purpose of contour detection, the V1-intrinsic integration of motion information is advantageous over a downstream integration, as it exploits the recurrent V1 circuitry designed for that task.
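The proposed wiring can be caricatured in a few lines of code. This is a toy sketch under assumed gains; `w_add` and `g_mult` are illustrative parameters, not values fitted to data:

```python
def modulated_response(orientation_drive, direction_match, pan_motion,
                       w_add=0.5, g_mult=1.0):
    """Toy version of the proposed V1 micro-circuit.

    orientation_drive: feedforward drive of an orientation-selective cell.
    direction_match: 1.0 if a direction-selective motion neuron signals
        motion orthogonal to this cell's preferred orientation, else 0.0.
    pan_motion: activity of co-localized pandirectional motion neurons.

    Direction-selective input adds to the somatic drive (additive
    modulation); pandirectional input rescales the whole response
    (multiplicative, apical modulation).
    """
    additive = orientation_drive + w_add * direction_match
    return (1.0 + g_mult * pan_motion) * additive
```

With these toy numbers, direction-selective input shifts the response additively, while pandirectional input rescales it, so a cell that is both co-aligned with a moving contour and embedded in a moving region receives the largest boost.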
Abstract:
BACKGROUND: Higher visual functions can be defined as cognitive processes responsible for object recognition, color and shape perception, and motion detection. People with impaired higher visual functions after unilateral brain lesion are often tested with paper-and-pencil tests, but such tests do not assess the degree of interaction between the healthy brain hemisphere and the impaired one. Hence, visual functions are not tested separately in the contralesional and ipsilesional visual hemifields. METHODS: A new measurement setup has been developed that involves real-time comparisons of the shape and size of objects, the orientation of lines, and the speed and direction of moving patterns, in the right or left visual hemifield. The setup was implemented in an immersive environment, a hemispheric dome, to take into account the effects of peripheral and central vision and possible visual field losses. Because of the non-flat screen of the hemisphere, a distortion algorithm was needed to adapt the projected images to the surface. Several approaches were studied and, based on a comparison between projected and original images, the best one was used for the implementation of the test. Fifty-seven healthy volunteers were then tested in a pilot study. A Satisfaction Questionnaire was used to assess the usability of the new measurement setup. RESULTS: The distortion algorithm produced a structural similarity between the warped and original images of more than 97%. The pilot study showed an accuracy in comparing images across the two visual hemifields of 0.18° and 0.19° for size and shape discrimination, respectively, 2.56° for line orientation, 0.33°/s for speed perception, and 7.41° for recognition of motion direction. The outcome of the Satisfaction Questionnaire showed high acceptance of the battery by the participants.
CONCLUSIONS: A new method to measure higher visual functions in an immersive environment was presented. The study focused on the usability of the developed battery rather than on performance in the visual tasks. A battery of five subtasks to study the perception of size, shape, orientation, speed, and motion direction was developed. The test setup is now ready to be tested in neurological patients.
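The reported structural similarity above 97% suggests an SSIM-style index. As a hedged sketch, a global (unwindowed) SSIM between two grayscale images could be computed as follows; the paper's actual implementation is not specified here, and windowed SSIM is what is typically used in practice:

```python
def structural_similarity(x, y, data_range=255.0):
    """Global SSIM index between two equally sized grayscale images,
    given as flat lists of pixel values. This is a single global
    statistic, not the windowed SSIM used by most image libraries."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    return num / den
```

Identical images score 1.0; a warped projection that preserves luminance, contrast, and structure well would score close to 1.0, consistent with the >97% figure.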
Abstract:
Visually impaired people show superior abilities in various perception tasks, such as auditory attention, auditory temporal resolution, auditory spatial tuning, and odor discrimination. However, with psychophysical methods, auditory and olfactory detection thresholds typically do not differ between visually impaired and sighted participants. Using a motion platform, we investigated thresholds of passive whole-body motion discrimination in nine visually impaired participants and nine age-matched sighted controls. Participants were rotated in yaw, tilted in roll, and translated along the y-axis at two different frequencies (0.3 Hz and 2 Hz). An adaptive 3-down 1-up staircase procedure was used along with a two-alternative direction (leftward vs. rightward) discrimination task. Superior performance of the visually impaired participants was found in the 0.3 Hz roll tilt condition. No differences between the visually impaired and the controls were observed for all other types of motion. The superior performance in the 0.3 Hz roll tilt condition could reflect differences in the integration of extra-vestibular cues and an increased sensitivity to changes in the direction of the gravito-inertial force. In the absence of visual information, roll tilts entail a more pronounced risk of falling, which could account for the group difference. It is argued that differences in experimental procedures (i.e., detection vs. discrimination of stimuli) explain the discrepant findings across perceptual tasks comparing blind and sighted participants.
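The adaptive 3-down 1-up staircase can be sketched compactly; the starting level, step size, and trial count below are illustrative, not the study's settings:

```python
def staircase_3down_1up(respond, start=10.0, step=1.0,
                        floor=0.0, n_trials=60):
    """Adaptive 3-down 1-up staircase for a two-alternative direction task.

    `respond(intensity)` returns True for a correct response. Three
    consecutive correct responses lower the stimulus intensity by one
    step; a single error raises it. Parameters are illustrative.
    Returns the track of intensity levels presented on each trial.
    """
    level = start
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        history.append(level)
        if respond(level):
            correct_streak += 1
            if correct_streak == 3:
                level = max(floor, level - step)
                correct_streak = 0
        else:
            level = level + step
            correct_streak = 0
    return history
```

A 3-down 1-up rule converges on roughly the 79.4%-correct point of the psychometric function; thresholds are then usually estimated from the last few reversals of the track rather than from the raw levels.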
Abstract:
BACKGROUND AND PURPOSE: In stroke patients, neglect is often diagnosed by means of paper-and-pencil cancellation tasks. These tasks entail static stimuli and provide no information concerning possible changes in the severity of neglect symptoms when patients are confronted with motion. We therefore aimed to directly contrast the cancellation behaviour of neglect patients under static and dynamic conditions. Since visual field deficits often occur in neglect patients, we analysed whether the integrity of the optic radiation would influence cancellation behaviour. METHODS: Twenty-five patients with left spatial neglect after right-hemispheric stroke were tested with a touchscreen cancellation task, once when the evenly distributed targets were stationary, and once when identical targets moved at constant speed along random paths. The integrity of the right optic radiation was analysed by means of a hodologic probabilistic approach. RESULTS: Motion influenced the cancellation behaviour of neglect patients, and the direction of this influence (i.e., an increase or decrease of neglect severity) was modulated by the integrity of the right optic radiation. In patients with an intact optic radiation, the severity of neglect significantly decreased in the dynamic condition. Conversely, in patients with damage to the optic radiation, the severity of neglect significantly increased in the dynamic condition. CONCLUSION: Motion may influence neglect in stroke patients. The integrity of the optic radiation may be a predictor of whether motion increases or decreases the severity of neglect symptoms.
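Cancellation performance in such tasks is often summarized by a lateralized omission score; the following is a hypothetical sketch of one such score, not the study's actual measure:

```python
def neglect_score(cancelled, targets, midline_x):
    """Left-minus-right omission difference on a cancellation task.

    `cancelled` and `targets` are lists of (x, y) target positions.
    Positive values indicate left-sided (contralesional) omissions,
    i.e., left neglect. A purely illustrative score.
    """
    def omitted(on_side):
        total = [p for p in targets if on_side(p[0])]
        hit = [p for p in cancelled if on_side(p[0])]
        return len(total) - len(hit)

    left = omitted(lambda x: x < midline_x)
    right = omitted(lambda x: x >= midline_x)
    return left - right
```

Comparing such a score between the static and the dynamic run of the same patient quantifies whether motion increased or decreased neglect severity.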
Abstract:
Over recent years, it has repeatedly been shown that optimal gaze strategies enhance motor control (e.g., Foulsham, 2015). However, little is known about whether, vice versa, visual performance can be improved by optimized motor control. Consequently, in two studies, we investigated visual performance as a function of motor control strategies and task parameters, respectively. In Experiment 1, 72 participants were tested on visual acuity (Landolt) and contrast sensitivity (Grating) while standing in two different postures (upright vs. squat) on a ZEPTOR platform that vibrated at four different frequencies (0, 4, 8, 12 Hz). After each test, perceived exertion (Borg) was assessed. Significant interactions were revealed for both tests, Landolt: F(3,213) = 13.25, p < .01, ηp² = .16, Grating: F(3,213) = 4.27, p < .01, ηp² = .06, showing a larger loss of acuity/contrast sensitivity with increasing frequency for the upright compared with the squat posture. For perceived exertion, however, the opposite interaction with frequency was found for acuity, F(3,213) = 7.45, p < .01, ηp² = .09, and contrast sensitivity, F(3,213) = 7.08, p < .01, ηp² = .09, substantiating that the impaired visual performance cannot be attributed to exertion. Consequently, the squat posture could permit better head and, hence, gaze stabilization. In Experiment 2, 64 participants performed the same tests while standing in a squat position on a ski simulator, which vibrated at two different frequencies (2.4, 3.6 Hz) and amplitudes (50, 100 mm) in a predictable or unpredictable manner. Control strategies were identified by tracking segmental motion, which allows damping characteristics to be derived. Significant main effects were found for frequency, all Fs(1,52) > 10.31, all ps < .01, all ηp²s > .16, as well as, in the acuity test, for predictability, F(1,52) = 10.31, p < .01, ηp² = .17, and, as a trend, for amplitude, F(1,52) = 3.53, p = .06, ηp² = .06. A significant correlation between the damping amplitude in the knee joint and the performance drop in visual acuity, r = -.97, p < .001, again points towards the importance of motor control strategies for maintaining optimal visual performance.
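One simple way to derive a damping characteristic from segmental motion is the amplitude ratio between a body segment and the platform at the driving frequency. The following is a hedged single-bin Fourier sketch; the sampling rate, frequency, and signals are invented for illustration:

```python
import math

def amplitude_at(signal, freq_hz, fs):
    """Amplitude of the `freq_hz` component of `signal` sampled at
    `fs` Hz, via projection onto sine and cosine (a one-bin Fourier
    transform). Assumes the window covers whole cycles."""
    n = len(signal)
    c = sum(v * math.cos(2 * math.pi * freq_hz * i / fs)
            for i, v in enumerate(signal))
    s = sum(v * math.sin(2 * math.pi * freq_hz * i / fs)
            for i, v in enumerate(signal))
    return 2.0 * math.hypot(c, s) / n

def damping_ratio(platform, segment, freq_hz, fs):
    """Segment-to-platform amplitude ratio at the drive frequency;
    values below 1 mean the segment attenuates the oscillation."""
    return amplitude_at(segment, freq_hz, fs) / amplitude_at(platform, freq_hz, fs)
```

Computing this ratio per joint (e.g., at the knee) yields the kind of damping measure that was correlated with the acuity drop above.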
Abstract:
Brain lesions in the visual associative cortex are known to impair visual perception, i.e., the capacity to correctly perceive different aspects of the visual world, such as motion, color, or shape. Visual perception can be influenced by non-invasive brain stimulation such as transcranial direct current stimulation (tDCS). In a recently developed technique called high-definition (HD) tDCS, small HD electrodes are used instead of the sponge electrodes of the conventional approach, which is believed to achieve higher focality and precision over the target area. In this paper we tested the effects of cathodal and anodal HD-tDCS over the right V5 on motion and shape perception in a single-blind, within-subject, sham-controlled, cross-over trial. The purpose of the study was to test whether the stimulation is focal to the target area. Twenty-one healthy volunteers received 20 min of 2 mA cathodal, anodal, and sham stimulation over the right V5, and their performance on a visual test was recorded. The results showed a significant improvement in motion perception in the left hemifield after cathodal HD-tDCS, but not in shape perception. Sham and anodal HD-tDCS did not affect performance. The specific effect of influencing performance on visual tasks by modulating the excitability of neurons in the visual cortex might be explained by the complexity of the perceptual information needed for the tasks, which provokes a "noisy" activation state of the encoding neuronal patterns. We speculate that in this case cathodal HD-tDCS may sharpen the correct percept by decreasing global excitation and thus diminishing the "noise" below threshold.
Abstract:
Despite the close interrelation between vestibular and visual processing (e.g., the vestibulo-ocular reflex), surprisingly little is known about vestibular function in visually impaired people. In this study, we investigated thresholds of passive whole-body motion discrimination (leftward vs. rightward) in nine visually impaired participants and nine age-matched sighted controls. Participants were rotated in yaw, tilted in roll, and translated along the interaural axis at two different frequencies (0.33 and 2 Hz) by means of a motion platform. Superior performance of the visually impaired participants was found in the 0.33 Hz roll tilt condition. No differences were observed in the other motion conditions. Roll tilts stimulate the semicircular canals and otoliths simultaneously. The results could thus reflect a specific improvement in canal–otolith integration in the visually impaired and are consistent with the compensatory hypothesis, which implies that the visually impaired are able to compensate for the absence of visual input.
Abstract:
Introduction: In team sports, the ability to use peripheral vision is essential to track a number of players and the ball. Eye-tracking studies have shown that players either use fixations and saccades to process information on the pitch, or use smooth pursuit eye movements (SPEM) to keep track of single objects (Schütz, Braun, & Gegenfurtner, 2011). It is assumed that peripheral vision can be used best when the gaze is stable, but it is unknown whether motion changes can be detected equally well when SPEM are used, especially because contrast sensitivity is reduced during SPEM (Schütz, Delipetkose, Braun, Kerzel, & Gegenfurtner, 2007). Therefore, peripheral motion-change detection was examined by contrasting a fixation condition with a SPEM condition. Methods: 13 participants (7 male, 6 female) were presented with a visual display consisting of 15 white squares and 1 red square. Participants were instructed to follow the red square with their eyes and press a button as soon as a white square began to move. White-square movements occurred either while the red square was still (fixation condition) or while it moved in a circular manner at 6°/s (pursuit condition). The to-be-detected white-square movements varied in eccentricity (4°, 8°, 16°) and speed (1°/s, 2°/s, 4°/s), while the movement time of the white squares was constant at 500 ms. In total, 180 events had to be detected. A Vicon-integrated eye-tracking system and a button press (1000 Hz) were used to control for eye movements and to measure detection rates and response times. Response times (ms) and missed detections (%) were measured as dependent variables and analysed with a 2 (manipulation) x 3 (eccentricity) x 3 (speed) ANOVA with repeated measures on all factors. Results: Significant response-time effects were found for manipulation, F(1,12) = 224.31, p < .01, ηp² = .95, eccentricity, F(2,24) = 56.43, p < .01, ηp² = .83, and the interaction between the two factors, F(2,24) = 64.43, p < .01, ηp² = .84. Response times increased as a function of eccentricity for SPEM only and were overall higher than in the fixation condition. Results further showed missed-event effects for manipulation, F(1,12) = 37.14, p < .01, ηp² = .76, eccentricity, F(2,24) = 44.90, p < .01, ηp² = .79, the interaction between the two factors, F(2,24) = 39.52, p < .01, ηp² = .77, and the three-way interaction manipulation x eccentricity x speed, F(2,24) = 3.01, p = .03, ηp² = .20. While less than 2% of events were missed on average in the fixation condition, as well as at 4° and 8° eccentricity in the SPEM condition, missed events increased for SPEM at 16° eccentricity, with significantly more missed events in the 4°/s speed condition (1°/s: M = 34.69, SD = 20.52; 2°/s: M = 33.34, SD = 19.40; 4°/s: M = 39.67, SD = 19.40). Discussion: Using SPEM impairs the ability to detect peripheral motion changes in the far periphery; fixations not only help to detect these motion changes but also to respond to them faster. Given the high temporal constraints in team sports like soccer or basketball, fast reactions are necessary for successful anticipation and decision making. Thus, it is advisable to anchor the gaze at a specific location if peripheral changes (e.g., movements of other players) that require a motor response have to be detected. In contrast, SPEM should only be used if tracking a single object, like the ball in cricket or baseball, is necessary for a successful motor response. References: Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Eye movements and perception: A selective review. Journal of Vision, 11, 1-30. Schütz, A. C., Delipetkose, E., Braun, D. I., Kerzel, D., & Gegenfurtner, K. R. (2007). Temporal contrast sensitivity during smooth pursuit eye movements. Journal of Vision, 7, 1-15.
Abstract:
The main objective of this thesis is to provide Unmanned Aerial Vehicles (UAVs) with an additional, vision-based source of information, extracted by cameras located either on-board or on the ground, in order to allow UAVs to perform visually guided tasks such as landing or inspection, especially in situations where GPS information is not available, where GPS-based position estimates are not accurate enough for the task at hand, or where payload restrictions do not allow the incorporation of additional sensors on-board. This thesis covers three of the main areas of computer vision: visual tracking and visual pose estimation (position and orientation), which together form the basis of the third, visual servoing, which in this work focuses on using visual information to control UAVs.
In this sense, the thesis focuses on presenting novel solutions to the problem of tracking objects with cameras on-board UAVs, on estimating the pose of UAVs from visual information collected by cameras located either on the ground or on-board, and on applying the proposed techniques to different problems, such as visual tracking for autonomous aerial refuelling or vision-based landing, among others. The different computer vision techniques presented in this thesis are proposed to solve some of the problems frequently encountered in vision-based tasks with UAVs, such as obtaining robust estimates at real-time frame rates, and coping with problems caused by vibrations or 3D motion. All the proposed algorithms have been tested with real-image data in on-line and off-line tests. Different evaluation mechanisms have been used to analyze the performance of the proposed algorithms, including simulated data, images from real-flight tests, publicly available datasets, manually generated ground-truth data, accurate position estimates from a VICON system and a robotic cell, and comparisons with state-of-the-art algorithms. Results show that the proposed computer vision algorithms achieve performance comparable to, or even better than, state-of-the-art algorithms, obtaining robust estimates at real-time frame rates with low computational overhead. This shows that the proposed techniques are fast enough for vision-based control tasks, and that their performance is appropriate for the different applications explored: autonomous aerial refuelling, landing, and state estimation.
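As one concrete illustration of visual servoing, the classical image-based control law v = -λe commands a camera velocity proportional to the image-feature error. The following is a bare-bones 2D sketch; the gain and pixel coordinates are illustrative, and a full IBVS controller would map the error through the interaction matrix (image Jacobian) to obtain a 6-DOF velocity command:

```python
def ibvs_velocity(feature_px, target_px, gain=0.5):
    """Proportional image-based visual servoing law v = -lambda * e.

    feature_px: current (x, y) image position of the tracked feature.
    target_px: desired (x, y) image position (e.g., the image center
        for a centering/landing task).
    Returns a 2D velocity command that drives the error to zero.
    """
    ex = feature_px[0] - target_px[0]
    ey = feature_px[1] - target_px[1]
    return (-gain * ex, -gain * ey)
```

Iterating this law moves the feature toward the target exponentially; in a landing task, the tracked landing-pad centroid would be driven to the principal point of the camera.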