966 results for Sound recordings.
Abstract:
Cervical auscultation is a noninvasive screening assessment of swallowing. Until now, the focus of acoustic research in swallowing has been the characterization of swallowing sounds. However, the technique may also be suitable for the detection of respiratory sounds post swallow. A healthy relationship between swallowing and respiration is widely accepted as pivotal to safe swallowing. Previous investigators have shown that the expiratory phase of respiration commonly occurs before and after swallowing. It is also accepted that the larynx is valved shut during swallowing. Previous research indicates that the larynx releases valved air immediately post swallow in healthy individuals. The current investigation sought acoustic evidence of a release of subglottic air post swallow in nondysphagic individuals using a noninvasive medium. Fifty-nine healthy individuals spanning the ages of 18 to 60+ years swallowed 5 and 10 milliliters (ml) of thin and thick liquid boluses. Objective acoustic analysis was used to verify the presence of the sound and to characterize its morphological features. The sound, dubbed the glottal release sound, was found to occur consistently in close proximity after the swallowing sound. The results indicated that the sound has distinct morphological features and that these change depending on the volume and viscosity of the bolus swallowed. Further research will be required to translate this information into a clinical tool.
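The objective acoustic analysis described above depends on locating discrete sound events in the post-swallow signal. As an illustration only (the study's actual analysis pipeline is not specified here), a minimal short-time RMS event detector could look like the sketch below; the frame length and threshold ratio are assumed parameters, not values from the study:

```python
import numpy as np

def detect_sound_events(signal, fs, frame_ms=10.0, threshold_ratio=0.2):
    """Return (start_s, end_s) spans where short-time RMS exceeds a
    fraction of the peak RMS. Parameters are illustrative defaults."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    active = rms > threshold_ratio * rms.max()
    # Group consecutive active frames into events.
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * frame / fs, i * frame / fs))
            start = None
    if start is not None:
        events.append((start * frame / fs, n * frame / fs))
    return events
```

In practice the swallowing sound and a trailing glottal release sound would appear as two such events in close succession.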
Abstract:
Interactive products are appealing objects in a technology-driven society, and the market offer is wide and varied. Most existing interactive products provide only light or only sound experiences. Therefore, the goal of this project was to develop a product for children combining both features. The project was developed by a team of four third-year students with different engineering backgrounds and nationalities during the European Project Semester at ISEP (EPS@ISEP) in 2012. This paper presents the process that led to the development of an interactive sound table combining nine identical interaction blocks, a control block and a sound block. Each interaction block works independently and is composed of four light emitting diodes (LED) and one infrared (IR) sensor. Control is performed by an Arduino microcontroller, and the sound block includes a music shield and a pair of loudspeakers. A number of tests were carried out to assess whether the controller, IR sensors, LED, music shield and speakers work together properly and whether the ensemble was a viable interactive light and sound device for children.
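The interaction logic described above can be sketched as follows. The actual Arduino firmware is not given in the paper, and all names here are hypothetical: each block is modeled as an object whose IR reading drives its four LEDs and selects a track for the music shield:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InteractionBlock:
    """One block of the sound table: 4 LEDs plus an IR presence sensor.
    Class and field names are hypothetical, for illustration only."""
    track: int                                   # track this block triggers
    leds: List[bool] = field(default_factory=lambda: [False] * 4)

    def update(self, ir_detected: bool) -> Optional[int]:
        """Light all four LEDs and return the track number while a hand
        is detected over the block; otherwise switch the LEDs off."""
        self.leds = [ir_detected] * 4
        return self.track if ir_detected else None

# Nine independent blocks, as in the table described above.
blocks = [InteractionBlock(track=i) for i in range(9)]
# Simulate a hand held over block 3: its LEDs go on, its track is queued.
playing = [b.update(i == 3) for i, b in enumerate(blocks)]
```

On the real table this loop would run on the Arduino, polling each IR sensor and sending the selected track to the music shield.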
Abstract:
This research project aimed to assess, through a series of workshops led by the author at the Centro Cultural de Belém in April 2012, the impact of using non-conventional music notation in a non-school context. Drawing possible parallels with specialized music education, learning methodologies were proposed that allowed notation to be introduced, in an innovative way, in that context. The research process was based on direct observation, on the analysis of questionnaires completed by workshop participants, and on the observation and analysis of video recordings documenting the artistic, pedagogical and research process. In a dark room, six overhead projectors projected a Score of Light. Between children and adults, 120 participants (non-musicians) empirically created their own compositions, shaping sound according to form and form according to sound. The result was rewarding: creating conditions favourable to the fullest development of the participants' individual and collective creative expression, through the unrestricted use of symbols, images, objects and materials, culminated in an effective musical correspondence built from vocal resources. This project, "Partitura de Luz", was an opportunity to relate the artistic dimension (musical, plastic and graphic) to the human dimension (pedagogical and social).
Abstract:
Lisboa, cidade cenográfica (Lisbon, scenographic city) is an installation resulting from a process of successive records of moments, experiences and lived encounters with the city of Lisbon, with different narratives. Through the assemblage of elements taken from the street, and by reconstructing volumetric block compositions that include graphic images, light and sound sources, and various textures, I produced an installation meant to be occupied, as if it were the very process of wandering through a city, in this case Lisbon. The final installation, Lisboa, cidade cenográfica, is in itself a maquette, a starting point for another, almost endless process that would lead to a further installation capable of engulfing us and taking hold of our presence. By manipulating different scales, compositions and spatial morphologies, an almost endless installation, like the city itself, could be obtained. The current installation is like the synthesis of an Urban Fossil. Images of the city were deliberately captured at different times of day. The sounds used in the installation were recorded in the streets of Lisbon and include church bells, birdsong, planes flying overhead, traffic with its horns, and ambulance sirens, among others. In the course of the project and of this descriptive report, I asked several people for "Cartas de Lisboa" (Letters from Lisbon), testimonies of the way they inhabit or once inhabited the city. On the headphones in the installation, the poem Lisbon Revisited (1923) by Álvaro de Campos can be heard, completing the ambient sound of Lisboa, cidade cenográfica. That poem, audible only in this way, thus subtly superimposes itself on the other ambient sounds (outside the headphones).
Abstract:
It is well recognized that professional musicians are at risk of hearing damage due to exposure to high sound pressure levels during music playing. However, musicians' exposure may start early in the course of their training, as students in the classroom and at home. Studies on the sound exposure of music students and their hearing disorders are scarce and do not take important influencing variables into account. Therefore, this study aimed to describe the sound level exposure of music students across different music styles, classes, and instruments played. Further, this investigation analyzed students' perceptions of exposure to loud music and the consequent health risks, and characterized their preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is compounded by practice outside the school and other external activities. Differences were found between music styles, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks of exposure to high sound pressure levels. These findings underline the importance of starting intervention in noise risk reduction at an early stage, when musicians are commencing their activity as students.
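Exposure figures of this kind are normally compared against limits after normalizing the measured level to an 8-hour working day. A minimal sketch of that normalization, assuming the ISO 1999 convention with a 3 dB exchange rate (the study's exact dosimetry method is not stated here):

```python
import math

def daily_exposure_level(laeq_db: float, hours: float) -> float:
    """Normalize a measured LAeq over `hours` to an 8-hour equivalent
    level, L_EX,8h = LAeq + 10*log10(T/8), per the ISO 1999 convention."""
    return laeq_db + 10 * math.log10(hours / 8)
```

For example, four hours of practice at 91 dB(A) normalizes to about 88 dB(A), right at a common occupational action level; halving or doubling the practice time shifts the equivalent level by 3 dB.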
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Biomedical Engineering.
Abstract:
Understanding how the brain works will require tools capable of measuring neuron electrical activity at a network scale. However, considerable progress is still necessary to reliably increase the number of neurons that are recorded and identified simultaneously with existing microelectrode arrays. This project aims to evaluate how different materials can modify the efficiency of signal transfer from the neural tissue to the electrode. Therefore, various coating materials (gold, PEDOT, tungsten oxide and carbon nanotubes) are characterized in terms of their underlying electrochemical processes and recording efficacy. Iridium electrodes (177-706 μm2) are coated using galvanostatic deposition under different charge densities. Electrochemical impedance spectroscopy in phosphate buffered saline shows that the impedance modulus at 1 kHz depends on the coating material and decreases by up to two orders of magnitude for PEDOT (from 1 MΩ to 25 kΩ). The electrodes are furthermore characterized by cyclic voltammetry, showing that the charge storage capacity is improved by one order of magnitude, reaching a maximum of 84.1 mC/cm2 for the PEDOT:gold nanoparticle composite (38 times the capacity of the pristine electrode). Neural recording of spontaneous activity within the cortex was performed in anesthetized rodents to evaluate electrode coating performance.
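The reported drop in the 1 kHz impedance modulus can be illustrated with a simplified Randles-type equivalent circuit, in which a conductive coating mainly raises the effective double-layer capacitance. The component values below are hypothetical, chosen only to reproduce the order-of-magnitude behaviour described above, not fitted to the study's data:

```python
import math

def impedance_modulus(f_hz: float, r_s: float, r_ct: float, c_dl: float) -> float:
    """|Z| of a simplified Randles circuit: solution resistance Rs in
    series with the parallel combination of charge-transfer resistance
    Rct and double-layer capacitance Cdl."""
    omega = 2 * math.pi * f_hz
    z = r_s + 1 / (1j * omega * c_dl + 1 / r_ct)
    return abs(z)

# Hypothetical values: a coating that raises Cdl by 100x drops |Z| at
# 1 kHz by roughly two orders of magnitude, as in the abstract.
z_uncoated = impedance_modulus(1000, 1e4, 1e8, 1e-10)  # ~1.6 MOhm
z_coated = impedance_modulus(1000, 1e4, 1e8, 1e-8)     # ~19 kOhm
```

At 1 kHz the capacitive branch dominates, so |Z| scales roughly inversely with Cdl until the series resistance becomes the floor.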
Abstract:
Music and Healing, African Music, Music therapy, Healing Rituals, Kenyan Music
Abstract:
Auditory spatial functions, including the ability to discriminate between the positions of nearby sound sources, are subserved by a large temporo-parieto-frontal network. With the aim of determining whether and when the parietal contribution is critical for auditory spatial discrimination, we applied single-pulse transcranial magnetic stimulation (TMS) to the right parietal cortex 20, 80, 90 and 150 ms post-stimulus onset while participants completed a two-alternative forced choice auditory spatial discrimination task in the left or right hemispace. Our results reveal that transient TMS disruption of right parietal activity impairs spatial discrimination when applied at 20 ms post-stimulus onset for sounds presented in the left (contralateral) hemispace and at 80 ms for sounds presented in the right hemispace. We interpret our findings in terms of a critical role for contralateral temporo-parietal cortices over the initial stages of the build-up of auditory spatial representations, and a right hemispheric specialization in integrating the whole auditory space over subsequent, higher-order processing stages.
Abstract:
Report on a scientific sojourn at Stanford University from January until June 2007. Music is well known for affecting human emotional states, yet the relationship between specific musical parameters and emotional responses is still not clear. With the advent of new human-computer interaction (HCI) technologies, it is now possible to derive emotion-related information from physiological data and use it as an input to interactive music systems. Such implicit musical HCI will be highly relevant for a number of applications including music therapy, diagnosis, interactive gaming, and physiologically-based musical instruments. A key question in such physiology-based compositions is how sound synthesis parameters can be mapped to emotional states of valence and arousal. We used both verbal and heart rate responses to evaluate the affective power of five musical parameters. Our results show a significant correlation between heart rate and the subjective evaluation of well-defined musical parameters. Brightness and loudness proved to be arousing parameters on the subjective scale, while harmonicity and the even-partial attenuation factor produced heart rate changes typically associated with valence. This demonstrates that a rational approach to designing emotion-driven music systems for our public installations and music therapy applications is possible.
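A mapping of the kind discussed above can be prototyped as a simple linear model over normalized synthesis parameters. The weights below are purely illustrative (the study reports correlations, not a fitted model): brightness and loudness drive arousal, while harmonicity and even-partial attenuation drive valence:

```python
from typing import Tuple

def affect_estimate(brightness: float, loudness: float,
                    harmonicity: float, attenuation: float) -> Tuple[float, float]:
    """Toy linear mapping from synthesis parameters (each in [0, 1]) to
    (valence, arousal). All weights are hypothetical placeholders."""
    arousal = 0.5 * brightness + 0.5 * loudness
    valence = 0.6 * harmonicity + 0.4 * (1 - attenuation)
    return valence, arousal
```

In an interactive system the inverse direction is the interesting one: heart-rate-derived arousal and valence targets would be fed back to adjust these same parameters in real time.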
Abstract:
Interaural intensity and time differences (IID and ITD) are two binaural auditory cues for localizing sounds in space. This study investigated the spatio-temporal brain mechanisms for processing and integrating IID and ITD cues in humans. Auditory-evoked potentials were recorded, while subjects passively listened to noise bursts lateralized with IID, ITD or both cues simultaneously, as well as a more frequent centrally presented noise. In a separate psychophysical experiment, subjects actively discriminated lateralized from centrally presented stimuli. IID and ITD cues elicited different electric field topographies starting at approximately 75 ms post-stimulus onset, indicative of the engagement of distinct cortical networks. By contrast, no performance differences were observed between IID and ITD cues during the psychophysical experiment. Subjects did, however, respond significantly faster and more accurately when both cues were presented simultaneously. This performance facilitation exceeded predictions from probability summation, suggestive of interactions in neural processing of IID and ITD cues. Supra-additive neural response interactions as well as topographic modulations were indeed observed approximately 200 ms post-stimulus for the comparison of responses to the simultaneous presentation of both cues with the mean of those to separate IID and ITD cues. Source estimations revealed differential processing of IID and ITD cues initially within superior temporal cortices and also at later stages within temporo-parietal and inferior frontal cortices. Differences were principally in terms of hemispheric lateralization. The collective psychophysical and electrophysiological results support the hypothesis that IID and ITD cues are processed by distinct, but interacting, cortical networks that can in turn facilitate auditory localization.
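The ITD cue used in such stimuli has a well-known geometric approximation. As a sketch, Woodworth's frontal-plane formula predicts the interaural arrival-time difference from the source azimuth and an assumed head radius (the study's own stimulus parameters are not reproduced here):

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                  c: float = 343.0) -> float:
    """Woodworth's frontal-plane ITD approximation in seconds:
    ITD = (a / c) * (theta + sin(theta)), azimuth theta in radians.
    Head radius and speed of sound are assumed typical values."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))
```

For a source at 90° azimuth this gives roughly 0.66 ms, the familiar maximum ITD for an average adult head; IID, by contrast, depends strongly on frequency through head shadowing and has no comparably simple closed form.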