966 results for Auditory-visual teaching


Relevance:

40.00%

Publisher:

Abstract:

User-centred design and evaluation methods have become increasingly important tools for fostering better usability in virtual environments (VEs). Although designers and developers still tend to concentrate on technological advancement and on building impressive multimodal, multisensory interfaces, there is growing awareness within development teams that producing usable and useful interfaces requires keeping users in mind during design and validating new designs from the users' perspective. In this paper, we describe a user study carried out to validate a newly developed haptically enabled virtual training system. Taking into account the complexity of individual differences in human performance and in the adoption and acceptance of haptic and audio-visual I/O devices, we address how well users learn, perform, adapt to and perceive object assembly training. We also explore user experience and interaction with the system, and discuss how multisensory feedback affects user performance, perception and acceptance. Finally, we discuss how to better design VEs that enhance users' perception, interaction and motor activity.

Relevance:

40.00%

Publisher:

Abstract:

This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of results in speech perception to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed in order to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that the intelligibility of natural speech was better than that of synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor methodology was found to provide independent, supplementary information and afforded a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. It was found that cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it can be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures, such as attentional demands and performance on simultaneous tasks, are also important in characterizing the perception of different kinds of speech in complex listening environments.

Relevance:

40.00%

Publisher:

Abstract:

The occurrence of a weak auditory warning stimulus increases the speed of the response to a subsequent visual target stimulus that must be identified. This facilitatory effect has been attributed to the temporal expectancy automatically induced by the warning stimulus. It has not been determined whether it results from a modulation of the stimulus identification process, the response selection process, or both. The present study examined these possibilities. A group of 12 young adults performed a reaction time location-identification task, and another group of 12 young adults performed a reaction time shape-identification task. A visual target stimulus was presented 1850 to 2350 ms plus a fixed interval (50, 100, 200, 400, 800, or 1600 ms, depending on the block) after the appearance of a fixation point, to its left or right, above or below a virtual horizontal line passing through it. In half of the trials, a weak auditory warning stimulus (S1) appeared 50, 100, 200, 400, 800, or 1600 ms (according to the block) before the target stimulus (S2). Twelve trials were run for each condition. The S1 produced a facilitatory effect at the 200, 400, 800, and 1600 ms stimulus onset asynchronies (SOA) in the side stimulus-response (S-R) corresponding condition, and at the 100 and 400 ms SOAs in the side S-R non-corresponding condition. Since these two conditions differ mainly in their response selection requirements, it is reasonable to conclude that automatic temporal expectancy influences the response selection process.

Relevance:

40.00%

Publisher:

Abstract:

The effect produced by a warning stimulus (WS) in reaction time (RT) tasks is commonly attributed to a facilitation of sensorimotor mechanisms by alertness. Recently, evidence was presented that this effect is also related to a proactive inhibition of motor control mechanisms. This inhibition would prevent responding to the WS instead of the target stimulus (TS). Some studies have shown that auditory WS produce a stronger facilitatory effect than visual WS. The present study investigated whether auditory WS also produce a stronger inhibitory effect than visual WS. In a first session, the RTs to a visual target were evaluated in two groups of volunteers. In a second session, subjects reacted to the visual target both with (50% of the trials) and without (50% of the trials) a WS. In the trials in which a WS was presented, one group received a visual WS and the other group an auditory WS. In the first session, the mean RTs of the two groups did not differ significantly. In the second session, the mean RT of both groups was shorter in the presence of the WS than in its absence. The mean RT in the absence of the auditory WS was significantly longer than the mean RT in the absence of the visual WS, whereas mean RTs did not differ significantly between the visual and auditory WS-present conditions. The longer RTs of the auditory WS group compared with the visual WS group in the WS-absent trials suggest that auditory WS exert a stronger inhibitory influence on responsivity than visual WS.

Relevance:

40.00%

Publisher:

Abstract:

The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
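The spherical-searchlight analysis summarized above can be sketched in a few lines. This is a toy illustration only: the grid size, sphere radius, trial count, injected effect and the simple nearest-centroid classifier are all assumptions chosen for demonstration, not details taken from the study.

```python
# Minimal searchlight-decoding sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Toy "brain": a 6x6x6 voxel grid, 20 trials, labels 0 = leftward, 1 = rightward.
shape = (6, 6, 6)
n_trials = 20
labels = np.array([0, 1] * (n_trials // 2))
data = rng.normal(size=(n_trials, *shape))
# Inject a weak direction signal into one region so decoding succeeds there.
data[labels == 1, 1:3, 1:3, 1:3] += 1.5

def sphere_indices(center, radius, shape):
    """All voxel coordinates within `radius` of `center` (Euclidean distance)."""
    grid = np.indices(shape).reshape(3, -1).T
    dist = np.linalg.norm(grid - np.array(center), axis=1)
    return grid[dist <= radius]

def loo_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

# Accuracy map: one decoding score per searchlight center, across the volume.
acc_map = np.zeros(shape)
for center in np.ndindex(shape):
    vox = sphere_indices(center, radius=1.5, shape=shape)
    X = data[:, vox[:, 0], vox[:, 1], vox[:, 2]]
    acc_map[center] = loo_centroid_accuracy(X, labels)
```

Searchlights centered on the region carrying the injected signal decode direction well above chance, while centers in pure noise hover near 50%, which is the logic behind the whole-brain accuracy maps described in the abstract.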

Relevance:

40.00%

Publisher:

Abstract:

Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008. (c) 2008 Wiley-Liss, Inc.

Relevance:

40.00%

Publisher:

Abstract:

In this article, it is shown that IWD incorporates topological perceptual characteristics of both spoken and written language, and it is argued that these characteristics should not be ignored or given up when synchronous textual CMC is technologically developed and upgraded.

Relevance:

40.00%

Publisher:

Abstract:

The present study was designed to elucidate sex-related differences in two basic auditory and one basic visual aspect of sensory functioning, namely sensory discrimination of pitch, loudness, and brightness. Although these three aspects of sensory functioning are of vital importance in everyday life, little is known about whether men and women differ from each other in these sensory functions. Participants were 100 male and 100 female volunteers ranging in age from 18 to 30 years. Since sensory sensitivity may be positively related to individual levels of intelligence and musical experience, measures of psychometric intelligence and musical background were also obtained. Reliably better performance for men compared to women was found for pitch and loudness, but not for brightness discrimination. Furthermore, performance on loudness discrimination was positively related to psychometric intelligence, while pitch discrimination was positively related to both psychometric intelligence and levels of musical training. Additional regression analyses revealed that each of three predictor variables (sex, psychometric intelligence, and musical training) accounted for a statistically significant portion of unique variance in pitch discrimination. With regard to loudness discrimination, regression analysis yielded a statistically significant portion of unique variance for sex as a predictor variable, whereas psychometric intelligence just failed to reach statistical significance. The potential influence of sex hormones on sex-related differences in sensory functions is discussed.
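The regression logic described above (unique variance of each predictor) can be sketched as the drop in R² when that predictor is removed from the full model. The data below are synthetic and the effect sizes are illustrative assumptions, not values from the study.

```python
# Unique variance per predictor = R^2(full model) - R^2(model without it).
# Synthetic data; coefficients are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
sex = rng.integers(0, 2, n).astype(float)    # 0 = female, 1 = male
iq = rng.normal(100, 15, n)                  # psychometric intelligence score
music = rng.normal(5, 2, n)                  # years of musical training
# Pitch discrimination score: assumed here to load on all three predictors.
pitch = 0.8 * sex + 0.05 * iq + 0.3 * music + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

predictors = {"sex": sex, "intelligence": iq, "musical training": music}
X_full = np.column_stack(list(predictors.values()))
r2_full = r_squared(X_full, pitch)

unique_r2 = {}
for i, name in enumerate(predictors):
    X_red = np.delete(X_full, i, axis=1)      # drop one predictor column
    unique_r2[name] = r2_full - r_squared(X_red, pitch)
    print(f"unique variance for {name}: {unique_r2[name]:.3f}")
```

Because the reduced model is nested in the full one, each drop in R² is non-negative; testing whether it is significantly greater than zero is what the abstract reports for sex, intelligence and musical training.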

Relevance:

40.00%

Publisher:

Abstract:

The present study investigated the relationship between psychometric intelligence and temporal resolution power (TRP) as simultaneously assessed by auditory and visual psychophysical timing tasks. In addition, three different theoretical models of the functional relationship between TRP and psychometric intelligence as assessed by means of the Adaptive Matrices Test (AMT) were developed. To test the validity of these models, structural equation modeling was applied. Empirical data supported a hierarchical model that assumed auditory and visual modality-specific temporal processing at a first level and amodal temporal processing at a second level. This second-order latent variable was substantially correlated with psychometric intelligence. Therefore, the relationship between psychometric intelligence and psychophysical timing performance can be explained best by a hierarchical model of temporal information processing.
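The hierarchical structure favoured by the model comparison above can be illustrated with simulated data: task scores load on modality-specific timing factors, which both load on a second-order amodal TRP factor that in turn relates to intelligence. This is a toy generative sketch, not the study's actual SEM; all loadings and the crude composite estimate of TRP are assumptions.

```python
# Toy illustration of the hierarchical TRP model (assumed loadings throughout).
import numpy as np

rng = np.random.default_rng(2)
n = 500
trp = rng.normal(size=n)                           # second-order latent: amodal TRP
auditory = 0.8 * trp + 0.6 * rng.normal(size=n)    # first-order modality factor
visual = 0.8 * trp + 0.6 * rng.normal(size=n)      # first-order modality factor
# Observed indicators: two timing tasks per modality plus an intelligence score.
aud_task1 = 0.7 * auditory + rng.normal(size=n)
aud_task2 = 0.7 * auditory + rng.normal(size=n)
vis_task1 = 0.7 * visual + rng.normal(size=n)
vis_task2 = 0.7 * visual + rng.normal(size=n)
intelligence = 0.5 * trp + rng.normal(size=n)      # AMT-like observed score

# Crude stand-in for the latent TRP estimate: mean of all four timing tasks.
trp_hat = np.mean([aud_task1, aud_task2, vis_task1, vis_task2], axis=0)
r = np.corrcoef(trp_hat, intelligence)[0, 1]
print(f"estimated TRP vs. intelligence: r = {r:.2f}")
```

Even this crude composite recovers a substantial TRP-intelligence correlation; a proper SEM additionally separates the modality-specific and amodal levels and tests the fit of that hierarchy against the alternatives.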