908 results for auditory EEG


Relevance: 20.00%

Abstract:

Music can affect the individual at every level: physical, mental and spiritual. This article focuses on the role music plays in the development of spiritual and transcendental life. To that end, we review the history of its aesthetic and social evolution, address the phenomenon at the physiological level, and present its clinical and social applications. Next, as examples of Western and Eastern schools of thought, we discuss how Christianity and Buddhism conceive of music within their doctrines. We close with some reflections on the subject.

Relevance: 20.00%

Abstract:

BARBOSA, André F.; SOUZA, Bryan C.; PEREIRA JUNIOR, Antônio; MEDEIROS, Adelardo A. D. de. Implementação de Classificador de Tarefas Mentais Baseado em EEG [Implementation of an EEG-Based Mental-Task Classifier]. In: CONGRESSO BRASILEIRO DE REDES NEURAIS, 9., 2009, Ouro Preto, MG. Anais... Ouro Preto, MG, 2009.

Relevance: 20.00%

Abstract:

Objective: Phenobarbital increases electroclinical uncoupling, and our preliminary observations suggest it may also affect electrographic seizure morphology. This may alter the performance of a novel seizure detection algorithm (SDA) developed by our group. The objectives of this study were to compare the morphology of seizures before and after phenobarbital administration in neonates and to determine the effect of any changes on automated seizure detection rates. Methods: The EEGs of 18 term neonates with seizures both pre- and post-phenobarbital administration (524 seizures) were studied. Ten seizure features were manually quantified, and summary measures for each neonate were statistically compared between pre- and post-phenobarbital seizures. SDA seizure detection rates were also compared. Results: Post-phenobarbital seizures showed significantly lower amplitude (p < 0.001) and involved fewer EEG channels at the peak of seizure (p < 0.05). No other features or SDA detection rates showed a statistical difference. Conclusion: These findings show that phenobarbital reduces both the amplitude and propagation of seizures, which may help to explain electroclinical uncoupling of seizures. The seizure detection rate of the algorithm was unaffected by these changes. Significance: The results suggest that users should not need to adjust the SDA sensitivity threshold after phenobarbital administration.


Relevance: 20.00%

Abstract:

Drowsy driving impairs motorists' ability to operate vehicles safely, endangering both the drivers and other people on the road. The purpose of this project is to find the most effective wearable device for detecting drowsiness. Existing research has demonstrated several options for drowsiness detection, such as electroencephalogram (EEG) brain-wave measurement, eye tracking, head motions, and lane deviations. However, there are no detailed trade-off analyses of the cost, accuracy, detection time, and ergonomics of these methods. We chose to use two different EEG headsets: the NeuroSky MindWave Mobile (single-electrode) and the Emotiv EPOC (14-electrode). We also tested a camera and a gyroscope-accelerometer device. We can successfully determine drowsiness after five minutes of training using both the single- and multi-electrode EEG headsets. Devices were evaluated using the following criteria: time needed to achieve an accurate reading, accuracy of prediction, rate of false positives vs. false negatives, and ergonomics and portability. This research will help improve detection devices and reduce the number of future accidents due to drowsy driving.
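The abstract names the evaluation criteria (accuracy, false positives vs. false negatives) but not how they are computed. A minimal sketch of those metrics for a binary per-window drowsiness detector; the labels and detector outputs below are hypothetical, not data from the study:

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Accuracy, false-positive rate and false-negative rate for a
    binary drowsiness detector (1 = drowsy, 0 = alert)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "fpr": fp / max(fp + tn, 1),  # false alarms among alert windows
        "fnr": fn / max(fn + tp, 1),  # missed detections among drowsy windows
    }

# Hypothetical per-window labels from one recording session
truth = [0, 0, 1, 1, 1, 0, 1, 0]
pred = [0, 1, 1, 1, 0, 0, 1, 0]
print(detection_metrics(truth, pred))
```

Reporting false-positive and false-negative rates separately, as the study does, matters because the two errors have different costs: a false alarm is an annoyance, while a missed detection defeats the device's purpose.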

Relevance: 20.00%

Abstract:

The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends strongly on which features are extracted from multi-channel EEG signals. Linear methods such as Power Spectral Density and the Autoregressive Model have been used to extract EEG features. However, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary, and random in nature. In addition, other factors such as brain condition or human characteristics may affect performance, yet these factors have not been investigated and evaluated in previous studies. It has been found in the literature that entropy is used to measure the randomness of non-linear time series data. Entropy is also used to measure the level of chaos of brain-computer interface systems. Therefore, this thesis proposes to study the role of entropy in non-linear analysis of EEG signals to discover new features for EEG-based person identification. Five different entropy methods, including Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, have been proposed to extract entropy features that are used to evaluate the performance of EEG-based person identification systems and the impacts of epilepsy, alcohol, age and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. Experimental results have shown that, in most cases, the proposed entropy features yield very fast person identification with comparable accuracy, because the feature dimension is low. In real-life security operations, timely response is critical. The experimental results have also shown that epilepsy, alcohol, age and gender characteristics have impacts on EEG-based person identification systems.
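The abstract lists the entropy measures only by name. As a rough illustration, not the thesis's implementation, two of them (Shannon entropy of the amplitude distribution, and spectral entropy of the power spectrum) can be sketched in a few lines on a synthetic single-channel signal; the bin count and signal parameters are arbitrary choices:

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy (bits) of a signal's amplitude distribution."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins (0·log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def spectral_entropy(x):
    """Shannon entropy (bits) of the normalized power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Synthetic 1-second "EEG" channel: 10 Hz alpha-like rhythm plus noise
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)

# A low-dimensional feature vector of the kind the thesis feeds to a classifier
features = [shannon_entropy(x), spectral_entropy(x)]
print(features)
```

The low feature dimension is exactly the point the abstract makes: a handful of entropy values per channel is far cheaper to classify than a full power spectrum, which is why identification is fast.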

Relevance: 20.00%

Abstract:

In this work, we perform a first approach to emotion recognition from single-channel EEG signals recorded in four mother-child dyads during a developmental-psychology experiment. The single-channel EEG signals are analyzed and processed using several window sizes, performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.
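The abstract does not specify the features extracted per window. As a hedged sketch (the window length, band choice, and feature set are assumptions, not the paper's), windowed time- and frequency-domain features for a single EEG channel might look like:

```python
import numpy as np

def window_features(signal, fs, win_s):
    """Split a single-channel signal into non-overlapping windows and
    compute simple time- and frequency-domain features per window."""
    n = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        psd = np.abs(np.fft.rfft(w)) ** 2
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        alpha = psd[(freqs >= 8) & (freqs < 13)].sum()
        # per-window features: mean and std (time domain), alpha-band ratio
        feats.append([w.mean(), w.std(), alpha / psd.sum()])
    return np.array(feats)

fs = 128
t = np.arange(4 * fs) / fs                 # 4 seconds of signal
sig = np.sin(2 * np.pi * 10 * t)           # pure 10 Hz (alpha band)
X = window_features(sig, fs, win_s=1.0)
print(X.shape)  # one 3-feature row per 1-second window
```

Each row of `X` would then be one training example for a classifier such as the neural network mentioned in the abstract; varying `win_s` reproduces the "several window sizes" comparison.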

Relevance: 20.00%

Abstract:

The study investigates the acoustic, articulatory and sociophonetic properties of the Swedish /iː/ variant known as 'Viby-i' in 13 speakers of Central Swedish from Stockholm, Gothenburg, Varberg, Jönköping and Katrineholm. The vowel is described in terms of its auditory quality, its acoustic F1 and F2 values, and its tongue configuration. A brief, qualitative description of lip position is also included. Variation in /iː/ production is mapped against five sociolinguistic factors: city, dialectal region, metropolitan vs. urban location, sex and socioeconomic rating. Articulatory data is collected using ultrasound tongue imaging (UTI), for which the study proposes and evaluates a methodology. The study shows that Viby-i varies in auditory strength between speakers, and that strong instances of the vowel are associated with a high F1 and low F2, a trend which becomes more pronounced as the strength of Viby-i increases. The articulation of Viby-i is characterised by a lowered and backed tongue body, sometimes accompanied by a double-bunched tongue shape. The relationship between tongue position and acoustic results appears to be non-linear, suggesting either a measurement error or the influence of additional articulatory factors. Preliminary images of the lips show that Viby-i is produced with a spread but lax lip posture. The lip data also reveal parts of the tongue, which in many speakers appear to be extremely fronted and braced against the lower teeth, or sometimes protruded, when producing Viby-i. No sociophonetic difference is found between speakers from different cities or dialect regions. Metropolitan speakers are found to have an auditorily and acoustically stronger Viby-i than urban speakers, but this pattern is not matched in tongue backing or lowering. Overall, the data shows a weak trend towards higher-class females having stronger Viby-i, but these results are tentative due to the limited size and stratification of the sample. Further research is needed to fully explore the sociophonetic properties of Viby-i.

Relevance: 20.00%

Abstract:

Current trends in speech-language pathology focus on early intervention as the preferred tool for promoting the best possible outcomes in children with language disorders. Neuroimaging techniques are being studied as promising tools for flagging at-risk infants. In this study, the auditory brainstem response (ABR) to the syllables /ba/ and /ga/ was examined in 41 infants between 3 and 12 months of age as a possible tool to predict language development in toddlerhood. The MacArthur-Bates Communicative Development Inventory (MCDI) was used to assess language development at 18 months of age. The current study compared the periodicity of the responses to the stop consonants and phase differences between /ba/ and /ga/ in both at-risk and low-risk groups. The study also examined whether there are correlations among ABR measures (periodicity and phase differentiation) and language development. The study found that these measures predict language development at 18 months.

Relevance: 20.00%

Abstract:

The processing of emotions plays an essential role in interpersonal relationships. Deficits in recognizing the emotions conveyed by facial and vocal expressions have been demonstrated following traumatic brain injury (TBI). However, most studies have not differentiated participants by TBI severity and have not assessed certain prerequisites of emotional processing, such as the ability to perceive facial and vocal characteristics and, by extension, the ability to attend to them. No study has examined the processing of emotions conveyed by musical expressions, even though music is used as an intervention method to address behavioural, cognitive or affective needs in people with neurological impairments. It is therefore unknown whether the positive effects of musical intervention rest on preserved recognition of certain categories of emotions conveyed by musical expressions after a TBI. The first study of this thesis assessed the recognition of basic emotions (happiness, sadness, fear) conveyed by facial, vocal and musical expressions in forty-one adults (10 moderate-severe TBI, 9 complex mild TBI, 11 simple mild TBI and 11 controls), using experimental tasks and perceptual control tasks. The results suggest a deficit in recognizing fear conveyed by facial expressions after moderate-severe TBI and complex mild TBI, compared with people with simple mild TBI and without TBI. The deficit is not explained by an underlying perceptual disorder. The results further show preserved recognition of emotions conveyed by vocal and musical expressions after a TBI, regardless of severity.
Finally, despite a dissociation observed between performance on emotion-recognition tasks in the visual and auditory modalities, no correlation was found between vocal and musical expressions. The second study measured early (N1, N170) and later (N2) brain waves in twenty-five adults (10 simple mild TBI, 1 complex mild TBI, 3 moderate-severe TBI and 11 controls) during the presentation of facial expressions conveying fear, neutrality and happiness. The results suggest alterations in early attentional processing after a TBI, which diminish subsequent processing of fear conveyed by facial expressions. In sum, the conclusions of this thesis refine our understanding of the processing of emotions conveyed by facial, vocal and musical expressions after a TBI as a function of severity. The results also clarify the origins of deficits in processing emotions conveyed by facial expressions after a TBI, which appear secondary to early attentional alterations. This thesis could contribute to the eventual development of emotion-focused interventions after TBI.

Relevance: 20.00%

Abstract:

Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant noise (e.g. a competing talker) adds another layer of difficulty to their speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in the older adult's perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system. Psychoacoustic studies carried out in humans have also shown that hearing loss can explain the decline in older adults' performance in quiet compared to younger adults, but these psychoacoustic measurements are not accurate in describing auditory deficits in noisy conditions. These results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our standard) and older adults. We also investigate a possible interaction between processing carried out in the midbrain and cortex.