955 results for audio modality
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation of one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task combined with animate and inanimate objects presented visually or auditory-visually, in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites in synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
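A minimal sketch of the kind of group comparison described above: mean amplitude in an assumed N1 window (150–200 ms) is extracted over assumed occipital channel indices from simulated epoch arrays and compared between groups with an independent-samples t-test. Array shapes, channel indices, and the time window are illustrative assumptions, not the study's parameters.

```python
# Hypothetical sketch: comparing N1 amplitude over occipital electrodes between
# synesthetes and controls. Shapes, channel indices, and the N1 window are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sfreq = 250                                     # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / sfreq)         # epoch from -100 to 500 ms
n_trials, n_channels = 100, 32

def mean_n1_amplitude(epochs, occipital_idx, tmin=0.15, tmax=0.20):
    """Mean amplitude in the N1 window over occipital channels for one subject."""
    win = (times >= tmin) & (times <= tmax)
    return epochs[:, occipital_idx][:, :, win].mean()

# Simulated per-subject epoch arrays (trials x channels x samples)
occipital_idx = [28, 29, 30, 31]                # assumed occipital electrode indices
synesthetes = [rng.normal(0, 1, (n_trials, n_channels, times.size)) for _ in range(14)]
controls    = [rng.normal(0, 1, (n_trials, n_channels, times.size)) for _ in range(14)]

n1_syn  = [mean_n1_amplitude(e, occipital_idx) for e in synesthetes]
n1_ctrl = [mean_n1_amplitude(e, occipital_idx) for e in controls]

t, p = stats.ttest_ind(n1_syn, n1_ctrl)
print(f"N1 group difference: t = {t:.2f}, p = {p:.3f}")
```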
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims to replicate this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are evaluated: 56% in sound source localisation computational cost over an audio-only system, 8% in speaker diarisation error rate over an audio-only speaker recognition unit, and 36% on the precision–recall metric over an audio–video dominant speaker recognition method.
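A minimal sketch of the sort of late audio-visual fusion such a tracker might use: two independent 2-D position estimates, one audio-derived and one video-derived, are combined by inverse-variance weighting. The measurements, variances, and the fusion rule itself are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of late audio-visual fusion for speaker localisation, assuming
# each modality already yields a 2-D position estimate with an associated variance.
import numpy as np

def fuse(pos_audio, var_audio, pos_video, var_video):
    """Inverse-variance weighting of two independent position estimates."""
    w_a = 1.0 / var_audio
    w_v = 1.0 / var_video
    fused = (w_a * pos_audio + w_v * pos_video) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)
    return fused, fused_var

# e.g. the audio (microphone-pair) estimate is coarse, the video estimate is precise
pos_a, var_a = np.array([2.1, 0.8]), 0.30       # metres, variance in m^2 (illustrative)
pos_v, var_v = np.array([1.9, 1.0]), 0.05

pos, var = fuse(pos_a, var_a, pos_v, var_v)
print("fused position:", pos.round(2), "variance:", round(var, 3))
```

The fused estimate naturally leans toward whichever modality is currently more certain, which is one way a combined tracker can remain reliable when one modality is degraded by occlusion or cross-talk.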
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g., audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is always weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one modality often dominates over another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012); no sensory dominance was revealed in Study 2 for adults. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds. The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Instead of a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
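An illustrative sketch of how an audio-visual dominance index could be scored from bimodal conflict trials, where each response is classified as driven by the auditory or the visual component. The trial data and the scoring rule are assumptions for illustration, not the dissertation's procedure.

```python
# Illustrative scoring of sensory dominance from bimodal conflict trials.
# The responses and the index definition below are assumptions, not the study's data.
from collections import Counter

# each bimodal conflict trial records which modality drove the participant's response
responses = ["auditory", "auditory", "visual", "auditory", "visual",
             "auditory", "auditory", "visual", "auditory", "auditory"]

counts = Counter(responses)
total = sum(counts.values())
# dominance index: positive values favour audition, negative favour vision
dominance = (counts["auditory"] - counts["visual"]) / total
print(f"auditory-based: {counts['auditory']}, visual-based: {counts['visual']}, "
      f"dominance index = {dominance:+.2f}")
```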
Abstract:
Signifying road-related events with warnings can be highly beneficial, especially when immediate attention is needed. This thesis describes how modality, urgency, and situation can influence driver responses to multimodal displays used as warnings. These displays utilise all combinations of audio, visual, and tactile modalities, reflecting different urgency levels. In this way, a new rich set of cues is designed, conveying information multimodally, to enhance reactions during driving, which is a highly visual task. The importance of the signified events to driving is reflected in the warnings, and safety-critical or non-critical situations are communicated through the cues. Novel warning designs are considered, using both abstract displays, with no semantic association to the signified event, and language-based ones, using speech. These two cue designs are compared to discover their strengths and weaknesses as car alerts. The situations in which the new cues are delivered are varied by simulating both critical and non-critical events and both manual and autonomous car scenarios. A novel set of guidelines for using multimodal driver displays is finally provided, considering the modalities utilised, the urgency signified, and the situation simulated.
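As an illustrative sketch, the seven non-empty combinations of audio, visual, and tactile channels can be enumerated and tagged with an urgency level; the mapping rule below (more concurrent modalities implies higher urgency) is a placeholder assumption, not the thesis's actual cue design.

```python
# Hypothetical enumeration of multimodal warning cues as combinations of channels.
# The urgency rule is a toy assumption for illustration only.
from itertools import combinations

MODALITIES = ("audio", "visual", "tactile")

# all non-empty combinations: 7 possible cue designs
cues = [frozenset(c) for r in range(1, 4) for c in combinations(MODALITIES, r)]

def assumed_urgency(cue):
    """Toy rule: more concurrent modalities signal a more urgent event."""
    return {1: "low", 2: "medium", 3: "high"}[len(cue)]

for cue in cues:
    print(sorted(cue), "->", assumed_urgency(cue))
```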
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
TOPIC: audiological evaluation of parents of individuals with hearing loss of autosomal recessive inheritance. AIM: to study the audiological profile of parents of individuals with hearing loss of autosomal recessive inheritance, inferred from family history or from molecular tests that detected a mutation in the GJB2 gene, which encodes Connexin 26. METHOD: 36 individuals between 30 and 60 years of age were evaluated and divided into two groups: a control group, with no hearing complaints and no family history of hearing impairment, and a study group composed of parents heterozygous for genes of nonspecific autosomal recessive deafness or heterozygous carriers of a mutation in the Connexin 26 gene. All underwent pure-tone threshold audiometry (0.25 to 8 kHz), high-frequency audiometry (9 to 20 kHz), and distortion product otoacoustic emissions (DPOAE). RESULTS: there were significant differences between the groups in DPOAE amplitude at 1001 and 1501 Hz, with greater amplitude in the control group. There was no significant difference between the groups in pure-tone thresholds from 0.25 to 20 kHz. CONCLUSION: DPOAE were more effective than pure-tone threshold audiometry in detecting auditory differences between the groups. Further research is needed to verify the reliability of these findings.
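A hedged sketch of the group comparison reported above, using simulated DPOAE amplitudes at 1001 Hz and 1501 Hz and a nonparametric test; the data, the group sizes, and the choice of test are assumptions, not the study's.

```python
# Simulated comparison of DPOAE amplitudes between control and study groups.
# Values, group sizes, and the Mann-Whitney test are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
freqs = [1001, 1501]                               # Hz

# simulated DPOAE amplitudes in dB SPL for 18 subjects per group (assumed split)
control = {f: rng.normal(8, 3, 18) for f in freqs}
study   = {f: rng.normal(5, 3, 18) for f in freqs}

for f in freqs:
    u, p = stats.mannwhitneyu(control[f], study[f], alternative="two-sided")
    print(f"{f} Hz: U = {u:.1f}, p = {p:.3f}")
```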
Abstract:
This article reports the influence of sociodemographic and health factors on self-perceived hearing among older adults in the "Saúde, Bem-Estar e Envelhecimento" project (SABE Project) in the city of São Paulo. The study included 2,143 individuals aged 60 years and over. An ordinal logistic regression model, accounting for the sample design, was used in the multivariable analysis. Older age, male sex, living with others, reporting dizziness, fair/poor memory, and fair or poor health increased the odds of poor self-perceived hearing. Knowledge of self-perceived hearing and its related factors is important for assessing the quality of life of older adults and the need for auditory rehabilitation.
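A minimal sketch of an ordinal logistic regression for self-rated hearing in the spirit of the analysis above, using simulated data and statsmodels' OrderedModel; the predictors are placeholders, and the complex survey (sample) design used in the SABE analysis is not modelled here.

```python
# Minimal ordinal logistic regression sketch (good / fair / poor self-rated hearing).
# Data are simulated; the survey design weighting of the original study is omitted.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age": rng.integers(60, 95, n),
    "male": rng.integers(0, 2, n),
    "dizziness": rng.integers(0, 2, n),
})

# build a latent score and cut it into an ordered three-level outcome
latent = 0.05 * (df["age"] - 60) + 0.4 * df["male"] + 0.6 * df["dizziness"]
codes = np.asarray(pd.cut(latent + rng.logistic(size=n),
                          bins=[-np.inf, 1, 2, np.inf], labels=False))
df["hearing"] = pd.Categorical.from_codes(codes, ["good", "fair", "poor"], ordered=True)

model = OrderedModel(df["hearing"], df[["age", "male", "dizziness"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)
```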
Abstract:
Participants in Experiments 1 and 2 performed a discrimination and counting task to assess the effect of lead stimulus modality on attentional modification of the acoustic startle reflex. Modality of the discrimination stimuli was changed across subjects. Electrodermal responses were larger during task-relevant stimuli than during task-irrelevant stimuli in all conditions. Larger blink magnitude facilitation was found during auditory and visual task-relevant stimuli, but not for tactile stimuli. Experiment 3 used acoustic, visual, and tactile conditioned stimuli (CSs) in differential conditioning with an aversive unconditioned stimulus (US). Startle magnitude facilitation and electrodermal responses were larger during a CS that preceded the US than during a CS that was presented alone regardless of lead stimulus modality. Although not unequivocal, the present data pose problems for attentional accounts of blink modification that emphasize the importance of lead stimulus modality.
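A small sketch of one common way blink magnitude facilitation is scored: probe blinks during lead stimuli are expressed as percent change relative to probe-alone (baseline) blinks. The values are simulated, and the scoring convention is assumed rather than taken from these experiments.

```python
# Illustrative scoring of blink magnitude facilitation relative to probe-alone trials.
# Magnitudes are simulated in arbitrary units.
import numpy as np

rng = np.random.default_rng(3)

baseline        = rng.normal(100, 15, 20)   # blink magnitude, probe presented alone
task_relevant   = rng.normal(120, 15, 20)   # probe during task-relevant lead stimulus
task_irrelevant = rng.normal(105, 15, 20)   # probe during task-irrelevant lead stimulus

def facilitation(condition, baseline):
    """Percent change in mean blink magnitude relative to probe-alone trials."""
    return 100 * (condition.mean() - baseline.mean()) / baseline.mean()

print(f"task-relevant facilitation:   {facilitation(task_relevant, baseline):5.1f}%")
print(f"task-irrelevant facilitation: {facilitation(task_irrelevant, baseline):5.1f}%")
```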
Abstract:
In two experiments we investigated the effect of generalized orienting induced by changing the modality of the lead stimulus on the modulation of blink reflexes elicited by acoustic stimuli. In Experiment 1 (n = 32), participants were presented with acoustic or visual change stimuli after habituation training with tactile lead stimuli. In Experiment 2 (n = 64), modality of the lead stimulus (acoustic vs. visual) was crossed with experimental condition (change vs. no change). Lead stimulus change resulted in increased electrodermal orienting in both experiments. Blink latency shortening and blink magnitude facilitation increased from habituation to change trials regardless of whether the change stimulus was presented in the same or in a different modality as the reflex-eliciting stimulus. These results are not consistent with modality-specific accounts of attentional startle modulation.
Abstract:
Outcomes of treatment of musculoskeletal tumours are evaluated for effectiveness of chemotherapy protocols, function obtained after surgery and survival after treatment. Quality of life achieved after multi-modality treatment is dependent on a combination of all of these factors. Quality of life varies significantly along the treatment pathway, and continuously through the life of a patient. The patient's perception of outcome is based on the total effect of the disease and its treatment, rather than necessarily focussing on separate items of treatment. We have found that visual analogue scales can be used effectively to gauge the patient's perception of their quality of life. Such a method has shown that, overall, perceptions of quality of life seem to be better for those patients who have undergone successful limb salvage surgery when compared with those who have undergone amputation, but the differences are not as great as might be assumed.
Abstract:
Attentional accounts of blink facilitation during Pavlovian conditioning predict enhanced reflexes if the reflex-eliciting and unconditional stimuli (US) are from the same modality. Emotional accounts emphasize the importance of US intensity. In Experiment 1, we crossed US modality (tone vs. shock) and intensity in a 2 × 2 between-subjects design. US intensity, but not US modality, affected blink facilitation. In Experiment 2, we demonstrated that the results from Experiment 1 were not due to the motor task requirements employed. In Experiment 3, we used a within-subjects design to investigate the effects of US modality and intensity. Contrary to predictions derived from an attentional account, blink facilitation was larger during conditional stimuli that preceded shock than during those that preceded tones. The present results are not consistent with an attentional account of blink facilitation during Pavlovian conditioning in humans.
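A brief sketch of a 2 × 2 between-subjects analysis (US modality × US intensity) on simulated blink facilitation scores, illustrating the design described above; the cell means, sample sizes, and the built-in intensity-only effect are assumptions for illustration.

```python
# Simulated 2 x 2 between-subjects ANOVA: US modality (tone/shock) x US intensity (low/high).
# Cell means and sample sizes are assumed for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
cells = [(m, i) for m in ("tone", "shock") for i in ("low", "high")]
rows = []
for modality, intensity in cells:
    effect = 10 if intensity == "high" else 0      # assumed intensity effect, no modality effect
    rows += [{"modality": modality, "intensity": intensity,
              "facilitation": rng.normal(20 + effect, 8)} for _ in range(16)]
df = pd.DataFrame(rows)

model = smf.ols("facilitation ~ C(modality) * C(intensity)", data=df).fit()
print(anova_lm(model, typ=2))
```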
Abstract:
Two experiments investigated the effects of the sensory modality of the lead stimulus and of the blink-eliciting stimulus during lead stimulus modality change on blink modulation at lead intervals of 2500 and 3500 ms. Participants were presented with acoustic, visual, or tactile change stimuli after habituation training with lead stimuli from the same or a different sensory modality. In Experiment 1, latency and magnitude of the acoustic blink were facilitated during a change to acoustic or visual lead stimuli, but not during a change to tactile lead stimuli. After habituation to acoustic lead stimuli, blink magnitude was smaller during tactile change stimuli than during habituation stimuli. The latter finding was replicated in Experiment 2, in which the blink was elicited by electrical stimulation of the trigeminal nerve. The consistency of the findings across different combinations of lead stimulus and blink-eliciting stimulus modalities does not support a modality-specific account of attentional blink modulation. Rather, blink modulation during generalized orienting reflects modality non-specific processes, although modulation may not always be found during tactile lead stimuli.