906 results for Auditory masking
Abstract:
Membrane currents were recorded under voltage clamp from root hairs of Arabidopsis thaliana L. using the two-electrode method. Concurrent measurements of membrane voltage distal to the point of current injection were also carried out to assess the extent of current dissipation along the root hair axis. Estimates of the characteristic cable length, λ, showed this parameter to be a function both of membrane voltage and of substrate concentration for transport. The mean value for λ at 0 mV was 103 ± 20 μm (n=17), but ranged by as much as 6-fold in any one cell for membrane voltages from -300 to +40 mV and was affected by 0.25 to 3-fold at any one voltage on raising [K+]o from 0.1 to 10 mol m-3. Current dissipation along the length of the cells led to serious distortions of the current-voltage (I-V) characteristic, including consistent underestimates of membrane current as well as a general linearization of the I-V curve and a masking of conductance changes in the presence of transported substrates. In some experiments, microelectrodes were also placed in neighbouring epidermal cells to record the extent of intercellular coupling. Even with current-passing microelectrodes placed at the base of root hairs, coupling was ≤5% (voltage deflection of the epidermal cell ≤5% of that recorded at the site of current injection), indicating an appreciable resistance to current passage between cells. These results demonstrate the feasibility of using root hairs as a 'single-cell model' in electrophysiological analyses of transport across the higher-plant plasma membrane; they also confirm the need to correct for the cable properties of these cells on a cell-by-cell basis. © 1994 Oxford University Press.
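The voltage attenuation implied by a cable space constant λ can be illustrated with a minimal sketch, assuming the standard infinite-cable steady-state solution V(x) = V(0)·e^(−x/λ); the function name and the numbers used are illustrative only, not taken from the paper's analysis:

```python
import math

def voltage_at(x_um, v0_mV, lam_um):
    """Steady-state membrane voltage a distance x (um) from the injection
    site, assuming exponential decay with space constant lam (um)."""
    return v0_mV * math.exp(-x_um / lam_um)

# With the mean space constant reported above (lambda ~= 103 um), a clamp
# voltage of -100 mV at the injection site has decayed to roughly -38 mV
# only 100 um along the hair, illustrating why per-cell correction matters.
v_far = voltage_at(100, -100.0, 103.0)
```

This simple exponential also shows why the distortion is voltage dependent: since λ itself varies up to 6-fold with membrane voltage, the fraction of injected current reaching distal membrane changes with the clamp level.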
Abstract:
Most cryptographic devices must inevitably offer resistance against the threat of side-channel attacks. To this end, masking and hiding schemes have been proposed since 1999. The security validation of these countermeasures is an ongoing research topic, as a wider range of new and existing attack techniques are tested against them. This paper examines the side-channel security of the balanced encoding countermeasure, whose aim is to process the secret key-related data under a constant Hamming weight and/or Hamming distance leakage. Unlike previous works, we assume that the leakage model coefficients conform to a normal distribution, producing a model with closer fidelity to real-world implementations. We perform analysis on the balanced encoded PRINCE block cipher with a simulated leakage model and also on an implementation on an AVR board. We consider both standard correlation power analysis (CPA) and bit-wise CPA. We confirm the resistance of the countermeasure against standard CPA; however, we find that a bit-wise CPA can reveal the key with only a few thousand traces.
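The mechanics of standard versus bit-wise CPA can be sketched on a toy cipher. Everything here is an illustrative assumption, not the paper's PRINCE setup: the S-box, noise levels, and the "one bit line still leaks" stand-in for a balanced encoding are all invented for the example.

```python
import random

# Toy bijective, nonlinear S-box (3 is a primitive root mod 257).
SBOX = [pow(3, x, 257) % 256 for x in range(256)]

def hw(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def cpa_best_key(pts, traces, bit=None):
    """Rank all 256 key guesses by |correlation| between a leakage model
    and the traces. bit=None models the full Hamming weight of the S-box
    output (standard CPA); bit=i models a single output bit (bit-wise CPA)."""
    def model(p, k):
        v = SBOX[p ^ k]
        return hw(v) if bit is None else (v >> bit) & 1
    return max(range(256),
               key=lambda k: abs(pearson([model(p, k) for p in pts], traces)))

random.seed(1)
SECRET = 0x3A
pts = [random.randrange(256) for _ in range(2000)]
# Unprotected device: traces follow the full Hamming weight plus noise.
hw_traces = [hw(SBOX[p ^ SECRET]) + random.gauss(0, 1.0) for p in pts]
# Crude stand-in for a balanced encoding: the overall HW signal is gone,
# but one bit line still leaks -- the situation bit-wise CPA exploits.
bit_traces = [(SBOX[p ^ SECRET] & 1) + random.gauss(0, 0.5) for p in pts]
```

Running `cpa_best_key(pts, hw_traces)` recovers the key via the standard HW model, while `cpa_best_key(pts, bit_traces, bit=0)` recovers it from the single leaking bit, mirroring (very loosely) why the bit-wise attack succeeds where the standard model sees a flat Hamming weight.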
Abstract:
OBJECTIVE:
To assess the methodologic quality of published studies of the surgical management of coexisting cataract and glaucoma.
DESIGN:
Literature review and analysis.
METHOD:
We performed a systematic search of the literature to identify all English language articles pertaining to the surgical management of coexisting cataract and glaucoma in adults. Quality assessment was performed on all randomized controlled trials, nonrandomized controlled trials, and cohort studies. Overall quality scores and scores for individual methodologic domains were based on the evaluations of two experienced investigators who independently reviewed articles using an objective quality assessment form.
MAIN OUTCOME MEASURES:
Quality in each of five domains (representativeness, bias and confounding, intervention description, outcomes and follow-up, and statistical quality and interpretation) measured as the percentage of methodologic criteria met by each study.
RESULTS:
Thirty-six randomized controlled trials and 45 other studies were evaluated. The mean quality score for the randomized, controlled clinical trials was 63% (range, 11%-88%), and for the other studies the score was 45% (range, 3%-83%). The mean domain scores were 65% for description of therapy (range, 0%-100%), 62% for statistical analysis (range, 0%-100%), 58% for representativeness (range, 0%-94%), 49% for outcomes assessment (range, 0%-83%), and 30% for bias and confounding (range, 0%-83%). Twenty-five of the studies (31%) received a score of 0% in the bias and confounding domain for not randomizing patients, not masking the observers to treatment group, and not having equivalent groups at baseline.
CONCLUSIONS:
Greater methodologic rigor and more detailed reporting of study results, particularly in the area of bias and confounding, could improve the quality of published clinical studies assessing the surgical management of coexisting cataract and glaucoma.
Abstract:
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
Abstract:
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification is shaped by experience as well as by acoustics.
Abstract:
Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.
Abstract:
This study aimed to understand the effects of auditory stimulation with an unfamiliar and a familiar voice on the parameters and waveforms monitored in comatose patients in an intensive care setting. A literature review on verbal communication in intensive care, followed by content analysis, was used to construct the stimulus message, which was refined and validated by a panel of experts. The message comprises three parts: presentation and orientation; information; and functional assessment and stimulation. It served as the reference for recording the messages used in the subsequent study. In this study the Coma Recovery Scale – Revised was also translated, adapted to the Portuguese context and converted into CIPE® terminology, giving rise to the Coma Recovery Assessment Instrument of the University of Aveiro (IARCUA), which was subjected to reliability testing. The results suggest that this instrument can be used reliably, even when there are fluctuations in patients' clinical state. The correlation of the subscale scores was high and exceeded the results reported for the original scale, indicating that the scale is a suitable instrument for assessing neuro-behavioural function. The study of the influence of auditory stimulation was carried out with a sample of 10 comatose patients admitted to the Intensive Care Unit of Hospital de Santo António in 2009, with full authorization from the Hospital's Ethics Committee; selection was based on a preliminary assessment using the above instrument and on brainstem auditory evoked potentials. The significant person was selected through sociometric tests. All participants were given written information about the study and a period of time to reflect before deciding whether or not to authorize its application.
Total data collection time was 45 minutes, distributed equally across three periods: pre-stimulation, stimulation and post-stimulation. The data collected were the ECG, arterial pressure and pulse plethysmography waveforms, together with heart rate, systolic, diastolic and mean arterial pressure, peripheral body temperature and partial oxygen saturation, recorded with the Datex-Ohmeda S/5 Collect software. Statistical and clinical analysis of the data was performed by stimulation period and by phase of the stimulus message, applying statistical tests and an analysis based on criteria of clinical relevance. The results showed that, with stimulation by an unfamiliar voice, heart rate and systolic, diastolic and mean arterial pressure increased in the transition from the pre-stimulation to the stimulation period, and these values tended to return to normal once stimulation ended. These changes were corroborated by analysis of the RR intervals and the arterial pressure waveform. With stimulation by a familiar voice, patients also reacted during stimulation with increases in heart rate and in systolic, diastolic and mean arterial pressure. However, in some cases the values of these parameters continued to rise in the post-stimulation period, indicating that the patients developed episodes of separation anxiety. Regarding peripheral body temperature and partial oxygen saturation, no changes were observed during stimulation in either case. Concerning the phases of the stimulus message, during stimulation with an unfamiliar voice participants showed greater variability in heart rate and in systolic, diastolic and mean arterial pressure during the functional assessment and stimulation phase.
This finding is corroborated by analysis of the monitored waveforms. With stimulation by a familiar voice, besides reacting in the same parameters with greater intensity during the functional assessment and stimulation phase, participants also reacted markedly during the presentation and orientation phase. This study contributes to reflection on communication practice with unconscious patients, raising awareness among nurses and other health professionals of the importance of communication in intensive care units, and thereby contributing to improving the quality of care.
Abstract:
Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431–1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1–14). In the present study we tested whether these maturational changes in behaviour are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatching syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but are instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.
Abstract:
Synesthesia based in visual modalities has been associated with reports of vivid visual imagery. We extend this finding to consider whether other forms of synesthesia are also associated with enhanced imagery, and whether this enhancement reflects the modality of synesthesia. We used self‐report imagery measures across multiple sensory modalities, comparing synesthetes’ responses (with a variety of forms of synesthesia) to those of nonsynesthete matched controls. Synesthetes reported higher levels of visual, auditory, gustatory, olfactory and tactile imagery and a greater level of imagery use. Furthermore, their reported enhanced imagery is restricted to the modalities involved in the individual’s synesthesia. There was also a relationship between the number of forms of synesthesia an individual has, and the reported vividness of their imagery, highlighting the need for future research to consider the impact of multiple forms of synesthesia. We also recommend the use of behavioral measures to validate these self‐report findings.
Abstract:
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz-1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule with experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate that the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of +/-10 degrees is used. For angular resolutions down to 2.5 degrees, software-based simulations of the model incur significant computation times; the paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance.
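The interaural time difference cue itself, independent of any spiking model, is classically estimated by cross-correlating the two ear signals. A minimal stdlib sketch (the function name, sample rate and delay are illustrative; the paper's SNN extracts the cue very differently):

```python
import math

def estimate_itd(left, right, fs):
    """Classical cross-correlation ITD estimate: find the lag (in samples)
    that maximizes sum(left[i] * right[i + lag]) and convert to seconds.
    A positive ITD means the sound reached the left ear first."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n // 2, n // 2 + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs

# A 500 Hz tone (inside the 270 Hz-1.5 kHz ITD range) arriving at the
# right ear 0.5 ms after the left ear.
fs = 16000
delay = int(0.0005 * fs)  # 8 samples
left = [math.sin(2 * math.pi * 500 * i / fs) for i in range(320)]
right = [0.0] * delay + left[:-delay]
# estimate_itd(left, right, fs) recovers the imposed 0.5 ms delay.
```

The lag is ambiguous for periodic tones once the delay exceeds one period, which is one reason ITD is only a reliable cue at low frequencies.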
Abstract:
In this paper, a spiking neural network (SNN) architecture to simulate the sound localization ability of the mammalian auditory pathways using the interaural intensity difference cue is presented. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used the supervised learning algorithm called the remote supervision method to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when localizing high-frequency sound data, in agreement with the biology, and also show a high degree of robustness when the HRTF acoustical data are corrupted by noise.
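The interaural intensity difference cue underlying this architecture can be computed directly as an energy ratio in decibels. A broadband stdlib sketch, purely illustrative (the paper's model extracts the cue per frequency band with spiking neurons, not like this):

```python
import math

def iid_db(left, right):
    """Broadband interaural intensity difference in dB: the ratio of
    signal energies at the two ears (positive = left ear louder)."""
    e_left = sum(x * x for x in left)
    e_right = sum(x * x for x in right)
    return 10 * math.log10(e_left / e_right)

# Head shadowing attenuates the far (right) ear at high frequencies;
# halving the amplitude gives a ~6 dB intensity difference.
left = [math.sin(2 * math.pi * 4000 * i / 32000) for i in range(640)]
right = [0.5 * x for x in left]
```

Because head shadowing is strongest for wavelengths smaller than the head, IID grows with frequency, consistent with the architecture performing best on high-frequency sounds.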
Abstract:
The Trembling Line is a film and multi-channel sound installation exploring the visual and acoustic echoes between decipherable musical gestures and abstract patterning, orchestral swells and extreme high-speed slow-motion close-ups of strings and percussion. It features a score by Leo Grant and a newly devised multi-channel audio system by the Institute of Sound and Vibration Research (ISVR), University of Southampton. The multi-channel speaker array is devised as an intimate sound spatialisation system in which each element of sound can be pried apart and reconfigured, to create a dynamically disorienting sonic experience. It becomes the inside of a musical instrument, an acoustic envelope or cage of sorts, through which viewers are invited to experience the film and generate cross-sensory connections and counterpoints between the sound and the visuals. Funded by a Leverhulme Artist-in-Residence Award and John Hansard Gallery, with support from ISVR and the Music Department, University of Southampton. The project provided a rare opportunity to work creatively with cutting-edge developments in sound distribution devised by ISVR, devising a new speaker array: a multi-channel surround listening sphere which spatialises the auditory experience. The sphere is currently used by ISVR for outreach and teaching purposes, and has enabled further collaborations between music staff and students at the University of Southampton and ISVR. Exhibitions: solo exhibition at John Hansard Gallery, Southampton (Dec 2015-Jan 2016), across 5 rooms, including a retrospective of five previous film-works and a new series of photographic stills. Public lectures: two within the gallery. Reviews and interviews: Art Monthly, Studio International, The Quietus, The Wire Magazine.
Abstract:
Introduction: Previous research has suggested that visual images are more easily generated, more vivid and more memorable than images in other sensory modalities. This research examined whether or not imagery is experienced in similar ways by people with and without sight. Specifically, the imageability of visual, auditory and tactile cue words was compared. The degree to which images were multimodal or unimodal was also examined. Method: Twelve participants totally blind from early infancy and 12 sighted participants generated images in response to 53 sensory and non-sensory words, rating imageability and the sensory modality, and describing their images. From these 53 items, 4 subgroups of words, which stimulated images that were predominantly visual, tactile, auditory and low-imagery, respectively, were created. Results: T-tests comparing imageability ratings from blind and sighted participants found no differences for auditory and tactile words (both p>.1). Nevertheless, whilst participants without sight found auditory and tactile images equally imageable, sighted participants found images in response to tactile cue words harder to generate than those in response to visual cue words (mean difference: -0.51, p=.025). Participants with sight were also more likely to develop multisensory images than were participants without sight (both U≥15.0, N1=12, N2=12, p≤.008). Discussion: For both the blind and the sighted, auditory and tactile images were rich and varied, and similar language was used to describe them. Sighted participants were more likely to generate multimodal images; this was particularly the case for tactile words. Nevertheless, cue words that resulted in multisensory images were not necessarily rated as more imageable. The discussion considers whether or not multimodal imagery represents a method of compensating for impoverished unimodal imagery.
Implications for Practitioners: Imagery is important not only as a mnemonic in memory rehabilitation, but also in everyday uses such as autobiographical memory. This research emphasises the importance not only of auditory and tactile sensory imagery, but also of spatial imagery, for people without sight.
Abstract:
What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to result in increased sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide bases for further improvement, since these are measured directly from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
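Applying a contrast weighting-function means scaling each spatial-frequency band of the signal by the function's value at that frequency. A minimal 1-D stdlib sketch of the idea (a real metric would weight 2-D image frequency bands; the function names and the `boost_high` weighting are illustrative, not the measured cCSF/cVPF curves):

```python
import cmath

def weight_frequencies(signal, weight):
    """Weight a 1-D signal's spectrum: DFT, scale bin k by weight(f_k)
    where f_k is the normalized frequency in [0, 0.5], then inverse DFT."""
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    # Bins above n/2 are negative frequencies; fold them to [0, 0.5].
    weighted = [spectrum[k] * weight(min(k, n - k) / n) for k in range(n)]
    return [sum(weighted[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# An invented weighting that boosts higher frequencies, loosely in the
# spirit of the mutated functions described above.
boost_high = lambda f: 1.0 + 2.0 * f

# A pure tone at normalized frequency 0.25 is simply scaled by
# boost_high(0.25) = 1.5.
tone = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
sharpened = weight_frequencies(tone, boost_high)
```

A constant weighting function leaves the signal unchanged, which is why the constant-function condition serves as the baseline in the psychophysical comparison.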
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.