99 results for Auditory masking
Abstract:
Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers’ perceptions. Results show that the recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
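The pipeline described — a Random Forest classifier over body-movement features, with feature importances used to identify which movement descriptors drive recognition — can be sketched as follows. This is a minimal illustration on synthetic data; the feature set and labels here are placeholders, not the study's actual motion-capture descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-segment body-movement descriptors
# (e.g. torso lean, limb oscillation energy); real features differ.
n = 200
X_laugh = rng.normal(loc=1.0, scale=0.5, size=(n, 4))  # "laughter" segments
X_other = rng.normal(loc=0.0, scale=0.5, size=(n, 4))  # "non-laughter" segments
X = np.vstack([X_laugh, X_other])
y = np.array([1] * n + [0] * n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy

# Feature importances indicate which movement descriptors drive the split
clf.fit(X, y)
print(scores.mean(), clf.feature_importances_)
```

Comparing such importances between "hilarious" and "social" subsets is one way a feature-importance analysis of the kind described could proceed.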
Abstract:
As part of a genome-wide association study (GWAS) of perceptual traits in healthy adults, we measured stereo acuity, the duration of alternative percepts in binocular rivalry and the extent of dichoptic masking in 1060 participants. We present the distributions of the measures, the correlations between measures, and their relationships to other psychophysical traits. We report sex differences, and correlations with age, interpupillary distance, eye dominance, phorias, visual acuity and personality. The GWAS, using data from 988 participants, yielded one genetic association that passed a permutation test for significance: the variant rs1022907 in the gene VTI1A was associated with self-reported ability to see autostereograms. We list a number of other suggestive genetic associations (p < 10⁻⁵).
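The permutation test used to validate the association can be sketched in miniature: shuffle the phenotype labels to break any genotype–phenotype link, and ask how often a correlation as strong as the observed one arises by chance. The genotype/phenotype data below are synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic genotype (0/1/2 minor-allele counts) and a phenotype with a real effect
genotype = rng.integers(0, 3, size=300)
phenotype = 0.4 * genotype + rng.normal(size=300)

observed = abs(np.corrcoef(genotype, phenotype)[0, 1])

# Null distribution: permute the phenotype, destroying any true association
n_perm = 2000
null = np.empty(n_perm)
for i in range(n_perm):
    null[i] = abs(np.corrcoef(genotype, rng.permutation(phenotype))[0, 1])

# Add-one correction keeps the estimated p-value strictly positive
p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
print(p_value)
```

In a real GWAS this would be repeated per variant with far more permutations, since genome-wide significance thresholds are orders of magnitude stricter.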
Abstract:
In recent years, sonification of movement has emerged as a viable method for the provision of feedback in motor learning. Despite some experimental validation of its utility, controlled trials testing the usefulness of sonification in a motor learning context are still rare. As such, there are no accepted conventions for its implementation. This article addresses the question of how continuous movement information is best presented as sound to be fed back to the learner. It is proposed that, to establish effective approaches to using sonification in this context, consideration must be given to the processes that underlie motor learning, in particular the nature of the perceptual information available to the learner for performing the task at hand. Although sonification has much potential for enhancing movement performance, this potential remains largely unrealised, in part due to the lack of a clear framework for sonification mapping: the relationship between movement and sound. By grounding mapping decisions in a firmer understanding of how perceptual information guides learning, and in an embodied cognition stance in general, it is hoped that greater advances in the use of sonification to enhance motor learning can be achieved.
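A concrete instance of a sonification mapping is a simple parameter mapping, e.g. movement speed onto pitch. The sketch below is one arbitrary choice among many — the ranges and the linear mapping are illustrative, not a recommendation from the article.

```python
import numpy as np

def speed_to_pitch(speed, speed_range=(0.0, 2.0), pitch_range=(220.0, 880.0)):
    """Map movement speed (m/s) linearly onto a frequency band (Hz).

    One of many possible sonification mappings; ranges here are arbitrary.
    """
    lo_s, hi_s = speed_range
    lo_p, hi_p = pitch_range
    frac = np.clip((speed - lo_s) / (hi_s - lo_s), 0.0, 1.0)
    return lo_p + frac * (hi_p - lo_p)

# A reaching movement: speed rises then falls, so the pitch contour does too
t = np.linspace(0.0, 1.0, 50)
speed = 2.0 * np.sin(np.pi * t)  # bell-shaped speed profile
pitch = speed_to_pitch(speed)
print(pitch.min(), pitch.max())
```

The article's point is precisely that such mapping choices (which movement variable, which acoustic dimension, which scaling) should be grounded in the perceptual information that guides the task, rather than chosen ad hoc as done here.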
Abstract:
Membrane currents were recorded under voltage clamp from root hairs of Arabidopsis thaliana L. using the two-electrode method. Concurrent measurements of membrane voltage distal to the point of current injection were also carried out to assess the extent of current dissipation along the root hair axis. Estimates of the characteristic cable length, λ, showed this parameter to be a function both of membrane voltage and of substrate concentration for transport. The mean value for λ at 0 mV was 103 ± 20 μm (n=17), but ranged by as much as 6-fold in any one cell for membrane voltages from -300 to +40 mV and was affected by 0.25 to 3-fold at any one voltage on raising [K+]0 from 0.1 to 10 mol m-3. Current dissipation along the length of the cells led to serious distortions of the current-voltage (I-V) characteristic, including consistent underestimates of membrane current as well as a general linearization of the I-V curve and a masking of conductance changes in the presence of transported substrates. In some experiments, microelectrodes were also placed in neighbouring epidermal cells to record the extent of intercellular coupling. Even with current-passing microelectrodes placed at the base of root hairs, coupling was ≤5% (voltage deflection of the epidermal cell ≤5% that recorded at the site of current injection), indicating an appreciable resistance to current passage between cells. These results demonstrate the feasibility of using root hairs as a 'single-cell model' in electrophysiological analyses of transport across the higher-plant plasma membrane; they also confirm the need to correct for the cable properties of these cells on a cell-by-cell basis. © 1994 Oxford University Press.
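The role of the characteristic cable length can be illustrated with the standard steady-state attenuation for a simple cable, V(x) = V₀·exp(−x/λ). Using the reported mean λ ≈ 103 μm (a sketch under that simplifying assumption, not a reproduction of the paper's cell-by-cell corrections):

```python
import math

def attenuation(x_um, lam_um=103.0):
    """Fraction of the injected voltage remaining at distance x along the cable,
    assuming simple exponential decay V(x) = V0 * exp(-x / lambda)."""
    return math.exp(-x_um / lam_um)

# At one characteristic length the signal falls to ~37%; well beyond it,
# the clamp voltage badly misrepresents the distal membrane, distorting
# the measured I-V characteristic.
for x in (0.0, 103.0, 300.0):
    print(f"x = {x:5.0f} um -> V/V0 = {attenuation(x):.3f}")
```

Since λ itself varied with membrane voltage and [K+]0 in these cells, a single fixed λ as above would not suffice for correction — hence the paper's cell-by-cell approach.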
Abstract:
Most cryptographic devices inevitably need resistance against the threat of side channel attacks. To this end, masking and hiding schemes have been proposed since 1999. The security validation of these countermeasures is an ongoing research topic, as a wider range of new and existing attack techniques are tested against them. This paper examines the side channel security of the balanced encoding countermeasure, whose aim is to process the secret key-related data under constant Hamming weight and/or Hamming distance leakage. Unlike previous works, we assume that the leakage model coefficients conform to a normal distribution, producing a model with closer fidelity to real-world implementations. We perform analysis on the balanced encoded PRINCE block cipher with a simulated leakage model and also an implementation on an AVR board. We consider both standard correlation power analysis (CPA) and bit-wise CPA. We confirm the resistance of the countermeasure against standard CPA; however, with bit-wise CPA we find that we can reveal the key with only a few thousand traces.
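Standard CPA with a Hamming-weight leakage model — the attack the balanced encoding is designed to defeat — can be sketched on simulated unprotected traces. The S-box, key, and noise level below are toy placeholders, not PRINCE or the paper's AVR measurements; on a balanced-encoded implementation the total Hamming weight would be constant, and this correlation step would fail.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weight table
SBOX = np.arange(256)
np.random.default_rng(2).shuffle(SBOX)                  # toy S-box stand-in

rng = np.random.default_rng(3)
true_key = 0x3C
plaintexts = rng.integers(0, 256, size=500)
# Simulated unprotected leakage: HW of the S-box output plus Gaussian noise
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0, 1, size=500)

# Standard CPA: correlate hypothetical HW leakage for every key guess
corrs = np.empty(256)
for guess in range(256):
    hyp = HW[SBOX[plaintexts ^ guess]]
    corrs[guess] = abs(np.corrcoef(hyp, traces)[0, 1])

recovered = int(np.argmax(corrs))
print(hex(recovered))
```

A bit-wise CPA, as considered in the paper, instead correlates against individual output bits rather than the total Hamming weight, which is how imbalances in per-bit leakage coefficients can be exploited even under a balanced encoding.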
Abstract:
OBJECTIVE:
To assess the methodologic quality of published studies of the surgical management of coexisting cataract and glaucoma.
DESIGN:
Literature review and analysis.
METHOD:
We performed a systematic search of the literature to identify all English language articles pertaining to the surgical management of coexisting cataract and glaucoma in adults. Quality assessment was performed on all randomized controlled trials, nonrandomized controlled trials, and cohort studies. Overall quality scores and scores for individual methodologic domains were based on the evaluations of two experienced investigators who independently reviewed articles using an objective quality assessment form.
MAIN OUTCOME MEASURES:
Quality in each of five domains (representativeness, bias and confounding, intervention description, outcomes and follow-up, and statistical quality and interpretation) measured as the percentage of methodologic criteria met by each study.
RESULTS:
Thirty-six randomized controlled trials and 45 other studies were evaluated. The mean quality score for the randomized, controlled clinical trials was 63% (range, 11%-88%), and for the other studies the score was 45% (range, 3%-83%). The mean domain scores were 65% for description of therapy (range, 0%-100%), 62% for statistical analysis (range, 0%-100%), 58% for representativeness (range, 0%-94%), 49% for outcomes assessment (range, 0%-83%), and 30% for bias and confounding (range, 0%-83%). Twenty-five of the studies (31%) received a score of 0% in the bias and confounding domain for not randomizing patients, not masking the observers to treatment group, and not having equivalent groups at baseline.
CONCLUSIONS:
Greater methodologic rigor and more detailed reporting of study results, particularly in the area of bias and confounding, could improve the quality of published clinical studies assessing the surgical management of coexisting cataract and glaucoma.
Abstract:
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
Abstract:
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics.
Abstract:
Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.