974 results for Visual Tracking
Abstract:
This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of different types of processed speech on intelligibility and on simultaneous visual-motor performance were examined. The goal was to extend the generalizability of results in speech perception to environments outside of the laboratory. The effect of bottom-up information was evaluated with natural, cell phone and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed in order to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while doing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than that of synthetic speech, and that synthetic speech was perceived better than cell phone speech. The visual-motor methodology was found to provide independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task.
Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it could be concluded that knowledge of intelligibility alone is insufficient to characterize processing of different speech sources. Additional measures such as attentional demands and performance of simultaneous tasks were also important in characterizing the perception of different kinds of speech in complex listening environments.
Abstract:
Visual search and oculomotor behaviour are believed to be highly relevant to athletic performance, especially in sports requiring refined visuo-motor coordination skills. Modern coaches believe that a correct visuo-motor strategy may be part of advanced training programs. In this thesis, two experiments are reported in which the gaze behaviour of expert and novice athletes was investigated while they performed a real, sport-specific task. The experiments concern two different sports: judo and soccer. In each experiment, the number of fixations, fixation locations, and mean fixation duration (ms) were considered. An observational analysis was conducted at the end of the paper to examine perceptual differences between near and far space. Purpose: The aim of the judo study was to delineate differences in gaze behaviour characteristics between a population of athletes and one of non-athletes. Aspects specifically investigated were: search rate, search order, and viewing time across different conditions in a real-world task. The second study aimed to identify gaze behaviour in varsity soccer goalkeepers while facing a penalty kick executed with the instep and the inside of the foot. An attempt was then made to compare the gaze strategies of expert judoka and soccer goalkeepers in order to delineate possible differences related to the different conditions of reacting to events occurring in near (peripersonal) or far (extrapersonal) space. Judo Methods: A sample of 9 judoka (black belt) and 11 near-judoka (white belt) was studied. Eye movements were recorded at 500 Hz using a video-based eye tracker (EyeLink II). Each subject participated in 40 sessions for about 40 minutes. Gaze behaviour was characterized as the average number of locations fixated per trial, the average number of fixations per trial, and the mean fixation duration. Soccer Methods: Seven (n = 7) intermediate-level males volunteered for the experiment. The kickers and goalkeepers had at least varsity-level soccer experience.
The vision-in-action (VIA) system (Vickers 1996; Vickers 2007) was used to collect the coupled gaze and motor behaviours of the goalkeepers. This system integrated input from a mobile eye tracking system (Applied Sciences Laboratories) with an external video of the goalkeeper's saving actions. The goalkeepers took 30 penalty kicks on a synthetic pitch in accordance with FIFA (2008) laws. Judo Results: Results indicate that the expert group differed significantly from the near-expert group in fixation duration and number of fixations per trial. The expert judokas used a less exhaustive search strategy, involving fewer fixations of longer duration than their novice counterparts, and focused on central regions of the body. The results also showed that in defence and attack situations the expert group made a greater number of gaze transitions than their novice counterparts. Soccer Results: We found a significant main effect for the number of locations fixated across outcome (goal/save) but not for foot contact (instep/inside). Participants spent more time fixating the areas of interest during instep than inside kicks, and during goal than save situations. Means and standard errors of the search strategy, as a function of foot contact and outcome, indicate that most gaze sequences started and finished on the ball interest areas. Conclusions: Expert goalkeepers tended to spend more time fixating during inside-save than instep-save penalties, a difference that was reversed for scored penalty kicks. The judo results show that differences in visual behaviour related to the level of expertise appear mainly when the test presentation is continuous, lasts for a relatively long period of time, and presents a high level of uncertainty with regard to the chronology and the nature of events. Expert judo performers "anchor" the fovea on central regions of the scene (lapel and face) while using peripheral vision to monitor opponents' limb movements.
The differences between judo and soccer gaze strategies are discussed in the light of physiological and neuropsychological differences between near and far space perception.
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, or the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits. However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boratston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered the eyes or mouth. Both videos elicited learning of word boundaries that was equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
Abstract:
Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea of transferring this mechanism to the context of video stabilization for a handheld video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach differs from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search for either virtual or real 3D target points, which back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in the case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
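The core operation described in this abstract, transforming each frame so that a chosen 3D target point lands at the image center, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the 3x4 camera matrix `P` for each frame is assumed to come from the structure-from-motion step, and the correction is reduced to a pure 2D translation (the full method would apply a more general image transform).

```python
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X (4-vector) through a 3x4
    camera matrix P into pixel coordinates (u, v)."""
    x = P @ X
    return x[0] / x[2], x[1] / x[2]

def fixation_shift(P, X, image_size):
    """2D translation (dx, dy) that moves the target's projection onto
    the image center -- the per-frame 'fixation' correction."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    u, v = project(P, X)
    return cx - u, cy - v

# Toy camera: focal length 100 px, principal point at (64, 64).
K = np.array([[100.0,   0.0, 64.0],
              [  0.0, 100.0, 64.0],
              [  0.0,   0.0,  1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # camera at the origin

target = np.array([1.0, 0.0, 5.0, 1.0])  # a 3D target point, 5 units ahead
dx, dy = fixation_shift(P, target, (128, 128))
# Shifting the frame by (dx, dy) places the target exactly at the center.
```

For a real 3D target point, applying this shift to every frame keeps the point pixel-stable at the image center, which is what makes the result "perfectly stable" there.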
Abstract:
Coordinated eye and head movements occur simultaneously to scan the visual world for relevant targets. However, measuring both eye and head movements in experiments allowing natural head movements may be challenging. This paper provides an approach to study eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), a MATLAB software package which was developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
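The abstract does not spell out how the signal loss from cornea-reflection artifacts is reconstructed; a common minimal approach is to bridge short dropouts in the gaze trace by linear interpolation, leaving longer losses untouched. The sketch below illustrates this idea; the `max_gap` threshold is an assumed parameter for illustration, not a value from the paper.

```python
import numpy as np

def fill_dropouts(signal, max_gap=5):
    """Linearly interpolate NaN runs of at most max_gap samples.
    Longer runs (true signal loss) are left as NaN."""
    s = np.asarray(signal, dtype=float).copy()
    n, i = len(s), 0
    while i < n:
        if np.isnan(s[i]):
            j = i
            while j < n and np.isnan(s[j]):
                j += 1
            # Interpolate only interior gaps that are short enough.
            if j - i <= max_gap and i > 0 and j < n:
                s[i:j] = np.interp(np.arange(i, j), [i - 1, j], [s[i - 1], s[j]])
            i = j
        else:
            i += 1
    return s

gaze = [10.0, np.nan, np.nan, 13.0, np.nan]
filled = fill_dropouts(gaze)
# The interior gap is bridged (10 -> 11 -> 12 -> 13); the trailing
# dropout has no right-hand anchor and stays NaN.
```

In practice such gap-filling would be applied per gaze coordinate and validated against the behavioral task, since interpolating through a saccade would distort eye-head shift parameters.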
Abstract:
Parkinson's disease, typically thought of as a movement disorder, is increasingly recognized as causing cognitive impairment and dementia. Eye movement abnormalities are also described, including impairment of rapid eye movements (saccades) and the fixations interspersed between them. Such movements are under the influence of cortical and subcortical networks commonly targeted by the neurodegeneration seen in Parkinson's disease and, as such, may provide a marker for cognitive decline. This study examined the error rates and visual exploration strategies of subjects with Parkinson's disease, with and without cognitive impairment, whilst performing a battery of visuo-cognitive tasks. Error rates were significantly higher in those Parkinson's disease groups with either mild cognitive impairment (P = 0.001) or dementia (P < 0.001), than in cognitively normal subjects with Parkinson's disease. When compared with cognitively normal subjects with Parkinson's disease, exploration strategy, as measured by a number of eye tracking variables, was least efficient in the dementia group but was also affected in those subjects with Parkinson's disease with mild cognitive impairment. When compared with control subjects and cognitively normal subjects with Parkinson's disease, saccade amplitudes were significantly reduced in the groups with mild cognitive impairment or dementia. Fixation duration was longer in all Parkinson's disease groups compared with healthy control subjects but was longest for cognitively impaired Parkinson's disease groups. The strongest predictor of average fixation duration was disease severity. Analysing only data from the most complex task, with the highest error rates, both cognitive impairment and disease severity contributed to a predictive model for fixation duration [F(2,76) = 12.52, P ≤ 0.001], but medication dose did not (r = 0.18, n = 78, P = 0.098, not significant). 
This study highlights the potential use of exploration strategy measures as a marker of cognitive decline in Parkinson's disease and reveals the efficiency by which fixations and saccades are deployed in the build-up to a cognitive response, rather than merely focusing on the outcome itself. The prolongation of fixation duration, present to a small but significant degree even in cognitively normal subjects with Parkinson's disease, suggests a disease-specific impact on the networks directing visual exploration, although the study also highlights the multi-factorial nature of changes in exploration and the significant impact of cognitive decline on efficiency of visual search.
Abstract:
Past research has shown that the gender typicality of applicants' faces affects leadership selection irrespective of a candidate's gender: a masculine facial appearance is congruent with masculine-typed leadership roles, so masculine-looking applicants are more likely to be hired than feminine-looking ones. In the present study, we extended this line of research by investigating hiring decisions for both masculine- and feminine-typed professional roles. Furthermore, we used eye tracking to examine the visual exploration of applicants' portraits. Our results indicate that masculine-looking applicants were favored for the masculine-typed role (leader) and feminine-looking applicants for the feminine-typed role (team member). Eye movement patterns showed that information about gender category and facial appearance was integrated during the first fixations on the portraits. Hiring decisions, however, were not based on this initial analysis but occurred at a second stage, when the portrait was viewed in the context of considering the applicant for a specific job.
Abstract:
Previous research has demonstrated that adults are successful at visually tracking rigidly moving items, but experience great difficulties when tracking substance-like "pouring" items. Using a comparative approach, we investigated whether the presence/absence of the grammatical count–mass distinction influences adults' and children's ability to attentively track objects versus substances. More specifically, we aimed to explore whether the higher success at tracking rigid over substance-like items appears universally, or whether speakers of classifier languages (like Japanese, which does not mark the object–substance distinction) are advantaged at tracking substances compared with speakers of non-classifier languages (like Swiss German, which marks the object–substance distinction). Our results supported the idea that language has no effect on low-level cognitive processes such as the attentive visual processing of objects and substances. We conclude by arguing that the tendency to prioritize objects is universal and independent of specific characteristics of the language spoken.
Abstract:
According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis.
Abstract:
We investigated the neural mechanisms and the autonomic and cognitive responses associated with visual avoidance behavior in spider phobia. Spider phobic and control participants imagined visiting different forest locations with the possibility of encountering spiders, snakes, or birds (neutral reference category). In each experimental trial, participants saw a picture of a forest location followed by a picture of a spider, snake, or bird, and then rated their personal risk of encountering these animals in this context, as well as their fear. The greater the visual avoidance of spiders that a phobic participant demonstrated (as measured by eye tracking), the higher were her autonomic arousal and neural activity in the amygdala, orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and precuneus at picture onset. Visual avoidance of spiders in phobics also went hand in hand with subsequently reduced cognitive risk of encounters. Control participants, in contrast, displayed a positive relationship between gaze duration toward spiders, on the one hand, and autonomic responding, as well as OFC, ACC, and precuneus activity, on the other hand. In addition, they showed reduced encounter risk estimates when they looked longer at the animal pictures. Our data are consistent with the idea that one reason for phobics to avoid phobic information may be grounded in heightened activity in the fear circuit, which signals potential threat. Because of the absence of alternative efficient regulation strategies, visual avoidance may then function to down-regulate cognitive risk evaluations for threatening information about the phobic stimuli. Control participants, in contrast, may be characterized by a different coping style, whereby paying visual attention to potentially threatening information may help them to actively down-regulate cognitive evaluations of risk.
Abstract:
Over recent years, it has repeatedly been shown that optimal gaze strategies enhance motor control (e.g., Foulsham, 2015). However, little is known about whether, vice versa, visual performance can be improved by optimized motor control. Consequently, in two studies, we investigated visual performance as a function of motor control strategies and task parameters, respectively. In Experiment 1, 72 participants were tested on visual acuity (Landolt) and contrast sensitivity (Grating) while standing in two different postures (upright vs. squat) on a ZEPTOR platform that vibrated at four different frequencies (0, 4, 8, 12 Hz). After each test, perceived exertion (Borg) was assessed. Significant interactions were revealed for both tests, Landolt: F(3,213)=13.25, p<.01, ηp2=.16, Grating: F(3,213)=4.27, p<.01, ηp2=.06, indicating a larger loss of acuity/contrast sensitivity with increasing frequency for the upright compared with the squat posture. For perceived exertion, however, a diametrical interaction with frequency was found for acuity, F(3,213)=7.45, p<.01, ηp2=.09, and contrast sensitivity, F(3,213)=7.08, p<.01, ηp2=.09, substantiating that the impaired visual performance cannot be attributed to exertion. Consequently, the squat posture could permit better head and, hence, gaze stabilization. In Experiment 2, 64 participants performed the same tests while standing in a squat position on a ski simulator, which vibrated at two different frequencies (2.4, 3.6 Hz) and amplitudes (50, 100 mm) in a predictable or unpredictable manner. Control strategies were identified by tracking segmental motion, which allows damping characteristics to be derived. Significant main effects were found for frequency, all F's(1,52)>10.31, all p's<.01, all ηp2's>.16, as well as, in the acuity test, for predictability, F(1,52)=10.31, p<.01, ηp2=.17, and, by tendency, for amplitude, F(1,52)=3.53, p=.06, ηp2=.06.
A significant correlation between the damping amplitude in the knee joint and the performance drop in visual acuity, r=-.97, p<.001, again points towards the importance of motor control strategies to maintain optimal visual performance.
Abstract:
BACKGROUND: Crossing a street can be a very difficult task for older pedestrians. With increased age and potential cognitive decline, older people base the decision to cross a street primarily on vehicles' distance rather than their speed. Furthermore, older pedestrians tend to overestimate their own walking speed and fail to adapt it to the traffic conditions. Pedestrians' behavior is often tested using virtual reality. Virtual reality has the advantages of being safe and cost-effective, and of allowing standardized test conditions. METHODS: This paper describes an observational study with older and younger adults. Street crossing behavior was investigated in 18 healthy younger and 18 older subjects using a virtual reality setting. The aim of the study was to measure behavioral data (such as eye and head movements) and to assess how the two age groups differ in terms of the number of safe street crossings, virtual crashes, and missed street crossing opportunities. Street crossing behavior and eye and head movements in older and younger subjects were compared with non-parametric tests. RESULTS: The results showed that younger pedestrians behaved in a more secure manner while crossing a street than older people. The eye and head movement analysis revealed that older people looked more at the ground and less at the other side of the street they were about to cross. CONCLUSIONS: The less secure street crossing behavior found in older pedestrians could be explained by their reduced cognitive and visual abilities, which, in turn, result in difficulties in the decision-making process, especially under time pressure. Decisions to cross a street are based on the distance of the oncoming cars, rather than their speed, for both groups. Older pedestrians look more at their feet, probably because they need more time to plan precise stepping movements and, in turn, pay less attention to the traffic.
This might help to set up guidelines for improving senior pedestrians' safety, in terms of speed limits, road design, and mixed physical-cognitive trainings.
Abstract:
BACKGROUND: Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to an unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow phase eye movements. METHODS: The patients sat in front of a computer screen with the head fixed on a chin rest. Eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic), in gaze straight ahead and in 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparisons. RESULTS: Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD ± 3.1 years). The mean slow phase velocity was moderate during gaze straight ahead (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and the feedback-driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). CONCLUSIONS: This study provides proof of concept that non-invasive real-time computer-based visual feedback can compensate for the slow phase velocity in downbeat nystagmus. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
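The feedback principle described here, shifting the optotype along with the measured slow-phase drift so that its image stays approximately stationary on the retina, amounts to integrating the slow-phase eye velocity at every display frame. A minimal sketch follows; the 100 Hz update rate is an assumed value for illustration, and the 1.44°/s drift is taken from the straight-ahead mean reported in the abstract.

```python
def update_stimulus(pos_deg, spv_deg_per_s, dt_s):
    """Move the optotype with the slow-phase eye drift so that the
    retinal image of the Landolt C stays (approximately) stationary."""
    return pos_deg + spv_deg_per_s * dt_s

# Simulate 1 s of feedback at 100 Hz with a constant 1.44 deg/s slow phase.
pos, dt = 0.0, 0.01
for _ in range(100):
    pos = update_stimulus(pos, 1.44, dt)
# After 1 s the Landolt C has drifted 1.44 deg along with the eye.
```

In the real system the slow-phase velocity would be re-estimated from the eye tracker each frame (and reset at quick phases), rather than held constant as in this toy simulation.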
Abstract:
The current study investigated whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, in Experiment 1, a modified multiple object tracking (MOT) setup with a large projection and a constant-position centroid phase was first validated. Classical findings regarding the use of a virtual centroid to track multiple objects and the dependency of tracking accuracy on target speed were successfully replicated. Thereafter, the main experimental variations regarding the manipulation of to-be-detected target changes were introduced in Experiment 2. In addition to a button press used for the detection task, gaze behavior was assessed using an integrated eye-tracking system. The analysis of saccadic reaction times in relation to the motor response shows that peripheral vision is naturally used to detect motion and form changes in MOT, because the saccade to the target occurred after target-change offset. Furthermore, for changes of comparable task difficulty, motion changes are detected better by peripheral vision than form changes. The findings indicate that capabilities of the visual system (e.g., visual acuity) affect change detection rates and that covert-attention processes may be affected by vision-related aspects like spatial uncertainty. Moreover, it is argued that a centroid-MOT strategy might reduce the amount of saccade-related costs and that eye tracking seems to be generally valuable for testing predictions derived from theories of MOT. Finally, implications for testing covert attention in applied settings are proposed.
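The "virtual centroid" strategy mentioned in this abstract, anchoring gaze on the mean position of the targets rather than on any single target, is straightforward to compute per frame. A minimal sketch (the target positions are illustrative, not data from the study):

```python
def centroid(points):
    """Mean (x, y) of the tracked target positions -- the virtual gaze
    anchor used in a centroid-MOT strategy."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Three tracked targets on one display frame (screen coordinates).
targets = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]
anchor = centroid(targets)  # the fixation anchor for this frame
```

Fixating this anchor keeps every target at a bounded retinal eccentricity, which is why the strategy can reduce saccade-related costs while peripheral vision monitors the individual targets.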
Abstract:
Recordings from the PerenniAL Acoustic Observatory in the Antarctic Ocean (PALAOA) show the seasonal acoustic presence of four Antarctic ice-breeding seal species (Ross seal, Ommatophoca rossii; Weddell seal, Leptonychotes weddellii; crabeater seal, Lobodon carcinophaga; and leopard seal, Hydrurga leptonyx). Apart from Weddell seals, which inhabit the fast ice in Atka Bay, however, the other three (pack-ice) species have to date never (Ross and leopard seals) or only very rarely (crabeater seals) been sighted in the Atka Bay region. The aim of the PASATA project is twofold. The large passive acoustic hydrophone array (hereafter referred to as the large array) aims to localize calling pack-ice pinniped species to obtain information on their location and hence the ice habitat they occupy. This large array consists of four autonomous passive acoustic recorders, each with a hydrophone sensor deployed through a hole drilled in the sea ice. The PASATA recordings are time-stamped and can therefore be coupled to the PALAOA recordings, so that the hydrophone array spans the bay almost entirely from east to west. The second, smaller hydrophone array (hereafter referred to as the small array) also consists of four autonomous passive acoustic recorders with hydrophone sensors deployed through drilled holes in the sea ice. The smaller array was deployed within a Weddell seal breeding colony, located further south in the bay, just off the ice shelf. Male Weddell seals are thought to defend underwater territories around or near tide cracks and breathing holes used by females. Vocal activity increases strongly during the breeding season, and vocalizations are thought to be used underwater by males for territorial defense and advertisement. With the smaller hydrophone array we aim to investigate the underwater behaviour of vocalizing male and female Weddell seals to provide further information on underwater movement patterns in relation to the location of tide cracks and breathing holes.
As a pilot project, one on-ice and three underwater camera systems have been deployed near breathing holes to obtain additional visual information on Weddell seal behavioural activity. Upon each visit to the breeding colony, a census of colony composition on the ice (number of animals, sex, presence of dependent pups, presence and severity of injuries, indicative of competition intensity) is taken, as well as GPS readings of breathing holes and positions of hauled-out Weddell seals.