986 results for Auditory-visual Interaction


Relevance: 30.00%

Publisher:

Abstract:

This study investigated how visually impaired people perform distance estimation tasks through movement and navigation while deprived of effective perceptual and proprioceptive information. Participants walked three distances, the first and second of 100 meters and the third of 140 meters (the triangulation leg), from a point of origin in an open field along an inverted-L-shaped trajectory, and then returned to the origin. The first and second legs were guided by means of a GPS device adapted to the study coordinates; the third was walked freely over three sessions: the first without perceptual or proprioceptive restrictions, the second without auditory perception, and the third in a wheelchair, without proprioception. The objective was to quantify differences in the accuracy of distance reproduction and to investigate the spatial representation of participants in a navigation task involving active movement but no effective perceptual or proprioceptive information. Results showed that, on average, participants underestimated distances, producing mean angles close to 45°, and a Student's t-test revealed no significant differences between subjects. Data were collected by remote GPS monitoring and analyzed with the TrackMaker software.
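The geometry of the homing task can be checked with a short script. This is an illustrative sketch only; `return_leg` is a hypothetical helper, not code from the study. After walking two perpendicular legs of an inverted-L path, the ideal straight return to the origin follows from the Pythagorean theorem.

```python
import math

def return_leg(leg1_m, leg2_m):
    """Ideal straight-line return for an inverted-L outbound path.

    After walking leg1_m, turning 90 degrees, and walking leg2_m, the
    direct path back to the origin is the hypotenuse; its angle with
    the second leg is atan(leg1_m / leg2_m).
    """
    distance = math.hypot(leg1_m, leg2_m)           # sqrt(leg1^2 + leg2^2)
    angle = math.degrees(math.atan2(leg1_m, leg2_m))
    return distance, angle

distance, angle = return_leg(100.0, 100.0)
# Two equal 100 m legs give an ideal return of ~141.4 m at 45 degrees,
# consistent with the ~140 m "triangulation" leg and the ~45 degree
# mean angles reported in the study.
```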

Relevance: 30.00%

Publisher:

Abstract:

This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of speech perception results to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing a simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that natural speech was more intelligible than synthetic speech, and synthetic speech was better perceived than cell phone speech. The visual-motor methodology yielded independent, supplementary information and provided a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit-rotor performance only at low intelligibility levels, when participants ignored the listening task.
Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it could be concluded that knowledge of intelligibility alone is insufficient to characterize processing of different speech sources. Additional measures such as attentional demands and performance of simultaneous tasks were also important in characterizing the perception of different kinds of speech in complex listening environments.
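The dual-task logic can be summarized with a simple cost metric, sketched below. This is an illustrative formula of my own; the experiments report raw word accuracy and tracking scores rather than this exact measure.

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Percent decline in secondary-task performance when a concurrent
    primary task is added (hypothetical metric, for illustration)."""
    return 100.0 * (single_task_score - dual_task_score) / single_task_score

# A tracking score that drops from 100 performed alone to 80 under
# concurrent listening corresponds to a 20% dual-task cost.
cost = dual_task_cost(100.0, 80.0)
```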

Relevance: 30.00%

Publisher:

Abstract:

We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized by their target input material (either single texts or collections of texts) and by their focus, which may be displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users handle results from a query posed to a search engine. We describe the approaches adopted by distinct techniques and briefly review the strategies they employ to obtain meaningful text models; discuss how they extract the information required to produce representative visualizations, the tasks they intend to support, and the interaction issues involved; and note their strengths and limitations. Finally, we present a summary of the techniques, highlighting their goals and distinguishing characteristics, and briefly discuss open problems and research directions in the fields of visual text mining and text analytics.

Relevance: 30.00%

Publisher:

Abstract:

This study verified the effects of contralateral noise on otoacoustic emissions and auditory evoked potentials. Short-, middle-, and late-latency auditory evoked potentials, as well as otoacoustic emissions with and without contralateral white noise, were assessed in twenty-five normal-hearing subjects of both genders, aged 18 to 30 years. In general, latencies of the various auditory potentials increased and amplitudes diminished in the noise condition for short-, middle-, and late-latency responses combined in the same subject. The amplitude of otoacoustic emissions decreased significantly in the condition with contralateral noise compared to the condition without noise. Our results indicate that most subjects presented different responses between conditions (with and without noise) in all tests, suggesting that the efferent system acts on both caudal and rostral portions of the auditory system.

Relevance: 30.00%

Publisher:

Abstract:

Background: Coactivation may be either desirable (injury prevention) or undesirable (during strength measurement). In this context, different styles of muscle strength stimulus have been investigated. In this study we evaluated the effects of verbal and visual stimulation on rectus femoris and biceps femoris muscle contraction during isometric and concentric exercise. Methods: We investigated 13 men (age = 23.1 ± 3.8 years; body mass = 75.6 ± 9.1 kg; height = 1.8 ± 0.07 m) using a BIODEX isokinetic dynamometer and an electromyographic (EMG) system. We evaluated maximum isometric and isokinetic knee extension and flexion at 60°/s under the following conditions: no visual or verbal command (control); verbal command only; visual command only; and combined verbal and visual command (VVC). For the concentric contractions, volunteers performed five reciprocal and continuous contractions at 60°/s. For the isometric contractions, they performed three five-second contractions for flexion and extension within a one-minute period. Results: Peak torque during isometric flexion was higher in the VVC condition (p > 0.05). Regarding muscle coactivation, subjects presented higher values in the control condition (p > 0.05). Conclusion: We suggest that this type of stimulus is effective for the lower limbs.
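Coactivation is commonly quantified as the antagonist's share of total agonist-plus-antagonist EMG activity. The sketch below illustrates one such index; it is my own illustration and not necessarily the formula used in this paper.

```python
def coactivation_ratio(agonist_rms, antagonist_rms):
    """Antagonist EMG as a percentage of total muscle activity
    (one common coactivation index; illustrative only)."""
    return 100.0 * antagonist_rms / (agonist_rms + antagonist_rms)

# During knee extension the rectus femoris is the agonist and the
# biceps femoris the antagonist; for example, RMS values of 80 and
# 20 (arbitrary units) give 20% coactivation.
ratio = coactivation_ratio(80.0, 20.0)
```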

Relevance: 30.00%

Publisher:

Abstract:

Visual tracking is the problem of estimating variables related to a target given a video sequence depicting that target. Visual tracking is key to the automation of many tasks, such as visual surveillance, autonomous robot or vehicle navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking of generic targets in real-world scenarios remains unsolved. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to changing working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered; in particular, a novel solution for the categorization of 3D data is presented. The category recognition algorithm is based on a novel 3D descriptor shown to achieve state-of-the-art performance in several surface-matching applications.
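As a minimal sketch of the recursive-Bayesian-tracker idea (not the thesis's algorithm), a one-dimensional Kalman filter exposes the two components the thesis proposes to adapt: the transition model, which injects process noise at the predict step, and the measurement update, which weighs new observations against the prediction.

```python
class Kalman1D:
    """Minimal 1-D Kalman filter with a constant-position transition
    model (illustrative stand-in for a recursive Bayesian tracker)."""

    def __init__(self, x0, p0, q, r):
        self.x = x0  # state estimate (e.g. target position)
        self.p = p0  # estimate variance
        self.q = q   # process noise: uncertainty added by the transition model
        self.r = r   # measurement noise variance

    def step(self, z):
        # Predict: the transition model keeps x and inflates uncertainty.
        self.p += self.q
        # Update: blend prediction and measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x

# Feeding a constant measurement, the estimate converges toward it.
kf = Kalman1D(x0=0.0, p0=1.0, q=0.01, r=1.0)
estimate = 0.0
for _ in range(50):
    estimate = kf.step(5.0)
```

An adaptive tracker in the thesis's sense would tune quantities like `q` online as working conditions change; here they are fixed for simplicity.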

Relevance: 30.00%

Publisher:

Abstract:

Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, with short intervals represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by the manipulation of spatial attention. This thesis contributes to the current debate on the relationship between the spatial representation of time and spatial attention by using a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part addresses the mechanisms underlying this relationship. We showed that shifting spatial attention with PA toward one side of space distorts the representation of temporal intervals in accordance with the side of the attentional shift. This occurs both with visual and with auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor procedure of PA. This result suggests that the spatial code used to represent time is a central mechanism influenced at high levels of spatial cognition. The thesis then investigates the cortical areas mediating the space-time interaction through neuropsychological, neurophysiological, and neuroimaging methods. In particular, we showed that areas in the right hemisphere are crucial for time processing, whereas areas in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis addresses disorders of the spatial representation of time. The results indicate that a spatial attention deficit following right-hemisphere damage causes a deficit in the spatial representation of time, which negatively affects patients' daily lives. Particularly interesting are the results obtained with PA: a PA treatment that is effective in reducing the spatial attention deficit also reduces the deficit in the spatial representation of time, improving patients' quality of life.

Relevance: 30.00%

Publisher:

Abstract:

Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli that would otherwise go undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, allowing the early recruitment of fight-or-flight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge of crossmodal blindsight phenomena by showing that, unlike visual detection, visual orientation discrimination cannot be crossmodally enhanced by sound in the absence of a functional striate cortex.
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.

Relevance: 30.00%

Publisher:

Abstract:

Four experiments examined how people operate on memory representations of familiar songs. The tasks were similar to those used in studies of visual imagery. In one task, subjects saw a one-word lyric from a song and then a second lyric, and had to say whether the second lyric was from the same song as the first. In a second task, subjects mentally compared the pitches of notes corresponding to song lyrics. In both tasks, reaction time increased as a function of the distance in beats between the two lyrics in the actual song, and in some conditions reaction time increased with the starting beat of the earlier lyric. Imagery instructions modified the main results somewhat in the first task, but not in the second, much harder task. The results suggest that song representations have temporal-like characteristics.
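The reported linear increase of reaction time with beat distance can be modeled by ordinary least squares, sketched below. `fit_line` is a hypothetical analysis helper and the numbers are made up for illustration; they are not the study's data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b,
    e.g. regressing reaction time on beat distance between lyrics."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical data: beat distances (beats) vs. reaction times (ms).
beat_distance = [1.0, 2.0, 3.0, 4.0]
reaction_ms = [820.0, 900.0, 980.0, 1060.0]
slope_ms_per_beat, intercept_ms = fit_line(beat_distance, reaction_ms)
```

A positive slope (here 80 ms per beat) is the signature of the temporal-like, sequential access to song memory described in the abstract.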

Relevance: 30.00%

Publisher:

Abstract:

Visual symptoms are common in Parkinson's disease (PD) and PD dementia and include difficulty reading, double vision, illusions, feelings of presence and passage, and complex visual hallucinations. Despite the established prognostic implications of complex visual hallucinations, the interaction between cognitive decline, visual impairment, and other visual symptoms remains poorly understood. Our aim was to characterize the spectrum of visual symptomatology in PD and examine clinical predictors of their occurrence. Sixty-four subjects with PD, 26 with PD dementia, and 32 age-matched controls were assessed for visual symptoms, cognitive impairment, and ocular pathology. Complex visual hallucinations were common in PD (17%) and PD dementia (89%). Dementia subjects reported illusions (65%) and presence (62%) more frequently than PD or control subjects, but the frequency of passage hallucinations in the PD and PD dementia groups was equivalent (48% versus 69%, respectively; P = 0.102). Visual acuity and contrast sensitivity were impaired in parkinsonian subjects, with disease severity and age emerging as the key predictors. Regression analysis identified a variety of factors independently predictive of complex visual hallucinations (e.g., dementia, visual acuity, and depression), illusions (e.g., excessive daytime somnolence and disease severity), and presence (e.g., rapid eye movement sleep behavior disorder and excessive daytime somnolence). Our results demonstrate that different "hallucinatory" experiences in PD do not necessarily share common disease predictors and may, therefore, be driven by different pathophysiological mechanisms. If confirmed, such a finding will have important implications for future studies of visual symptoms and cognitive decline in PD and PD dementia.

Relevance: 30.00%

Publisher:

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
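The max-criterion used in the study reduces to a simple comparison, sketched below; the helper function and variable names are mine, introduced for illustration.

```python
def meets_max_criterion(a_response, av_response, v_response):
    """Max-criterion for multisensory integration (A < AV > V): the
    audio-visual response must exceed both unisensory responses."""
    return av_response > a_response and av_response > v_response

# A region whose AV activation tops both unisensory activations passes.
passes = meets_max_criterion(1.0, 2.0, 1.5)   # True
fails = meets_max_criterion(2.0, 1.8, 1.0)    # False
```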

Relevance: 30.00%

Publisher:

Abstract:

The auditory cortex is anatomically segregated into a central core and a peripheral belt region, which exhibit differences in preference to bandpassed noise and in temporal patterns of response to acoustic stimuli. While it has been shown that visual stimuli can modify response magnitude in auditory cortex, little is known about differential patterns of multisensory interactions in core and belt. Here, we used functional magnetic resonance imaging and examined the influence of a short visual stimulus presented prior to acoustic stimulation on the spatial pattern of blood oxygen level-dependent signal response in auditory cortex. Consistent with crossmodal inhibition, the light produced a suppression of signal response in a cortical region corresponding to the core. In the surrounding areas corresponding to the belt regions, however, we found an inverse modulation with an increasing signal in centrifugal direction. Our data suggest that crossmodal effects are differentially modulated according to the hierarchical core-belt organization of auditory cortex.

Relevance: 30.00%

Publisher:

Abstract:

Auditory neuroscience has not tapped fMRI's full potential because of acoustic scanner noise emitted by the gradient switches of conventional echoplanar fMRI sequences. The scanner noise is pulsed, and auditory cortex is particularly sensitive to pulsed sounds. Current fMRI approaches to avoid stimulus-noise interactions are temporally inefficient. Since the sustained BOLD response to pulsed sounds decreases with repetition rate and becomes minimal with unpulsed sounds, we developed an fMRI sequence emitting continuous rather than pulsed gradient sound by implementing a novel quasi-continuous gradient switch pattern. Compared to conventional fMRI, continuous-sound fMRI reduced auditory cortex BOLD baseline and increased BOLD amplitude with graded sound stimuli, short sound events, and sounds as complex as orchestra music with preserved temporal resolution. Response in subcortical auditory nuclei was enhanced, but not the response to light in visual cortex. Finally, tonotopic mapping using continuous-sound fMRI demonstrates that enhanced functional signal-to-noise in BOLD response translates into improved spatial separability of specific sound representations.

Relevance: 30.00%

Publisher:

Abstract:

From Bush’s September 20, 2001 “War on Terror” speech to Congress to President-Elect Barack Obama’s acceptance speech on November 4, 2008, the U.S. Army produced visual recruitment material that addressed falling enlistment numbers (due to the prolonged and difficult war in Iraq) with quickly evolving and compelling rhetorical appeals: from the introduction of an “Army of One” (2001) to “Army Strong” (2006); from messages focused on education and individual identity to high-energy adventure and simulated combat scenarios, distributed through everything from printed posters and music videos to first-person tactical-shooter video games. These highly polished, professional visual appeals, introduced to the American public during an unpopular war fought by volunteers, provide rich subject matter for research and analysis. This dissertation takes a multidisciplinary approach to the visual media utilized in the Army’s recruitment efforts during the War on Terror, focusing on American myths, as defined by Barthes, and on how these myths are both revealed and reinforced through design across media platforms. Placing each selection in its historical context, this dissertation analyzes how printed materials changed as the War on Terror continued. It examines the television ad that introduced “Army Strong” to the American public, considering how the combination of moving image, text, and music structures the message and the way we receive it. This dissertation also analyzes the video game America’s Army, focusing on how the interaction of the human player and the computer-generated player enhances the persuasive qualities of the recruitment message. Each chapter discusses how the design of the particular medium facilitates the viewer’s engagement and interactivity.
The conclusion considers what recruitment material produced during this period suggests about the persuasive strategies of different media and how they create distinct relationships with their spectators. It also addresses how theoretical frameworks and critical concepts from a variety of disciplines can be combined to analyze recruitment media using a Selber-inspired three-literacy framework (functional, critical, rhetorical), and how this framework can contribute to the multimodal classroom by allowing instructors and students to perform comparative analyses of multiple forms of visual media with similar content.