662 results for Sounds
Abstract:
Temporomandibular joint (TMJ) sounds are important and common physical signs of temporomandibular disorders (TMD). The aim of this study was to evaluate the effect of occlusal bite splints (stabilizing and repositioning) on the sounds produced in the TMJ, by means of electrovibratography (EVG). Thirty-one patients with TMD from the Dental School of Ribeirão Preto, University of São Paulo, Brazil were selected for this study. Group 1 (n=23) wore stabilizing bite splints and Group 2 (n=8) used anterior repositioning splints. Before and after treatment with occlusal splints, both groups were analyzed using the SonoPAK Q/S recording system (BioResearch System, Inc.). Treatment with stabilizing bite splints was satisfactory, since it reduced the total sound energy (p<0.05), but the use of anterior repositioning splints for no more than 4 weeks produced significantly better results (p<0.01). The total vibration energy involved in the vibrating movements of the TMJ showed significant improvement with the use of anterior repositioning splints.
Abstract:
Objective: Stimulability is the ability to produce an adequate sound under specific conditions. This study aimed to describe the stimulability of Brazilian Portuguese-speaking children with and without phonological disorders for the production of liquid sounds with the aid of visual and tactile cues. Patients and Methods: The study sample included 36 children between 5;0 and 11;6 years of age, 18 with phonological disorder and 18 without any speech-language disorders. Stimulability was measured through syllable imitation. The stimulability test employed includes 63 syllables with the liquid sounds [l], [(sic)], and [(sic)], as well as seven oral vowels. If the subject was unable to imitate a sound, a visual cue was given. When necessary, a tactile cue was also given. Results: The sound [(sic)] required greater use of sensory cues. Children with phonological disorder needed a greater number of cues. Conclusion: The use of sensory cues seemed to facilitate sound stimulability, making it possible for the children with phonological disorder to accurately produce the sounds modeled. Copyright (C) 2009 S. Karger AG, Basel
Abstract:
Cervical auscultation is in the process of gaining clinical credibility. In order for it to be accepted by the clinical community, the procedure and equipment used must first be standardized. Takahashi et al. [Dysphagia 9:54-62, 1994] attempted to provide benchmark methodology for administering cervical auscultation. They provided information about the acoustic detector unit best suited to picking up swallowing sounds and the best cervical site to place it. The current investigation provides contrasting results to Takahashi et al. with respect to the best type of acoustic detector unit to use for detecting swallowing sounds. Our study advocates an electret microphone as opposed to an accelerometer for recording swallowing sounds. However, we agree on the optimal placement site. We conclude that cervical auscultation is within reach of the average dysphagia clinic.
Abstract:
In this text, we explore the possibilities of sound manipulation in a context of augmented reality (AR) through the use of robots. We use the random behaviour of robots in a limited space for the real-time modulation of two sound characteristics: amplitude and frequency. We add the possibility of interaction with these robots, providing the user with the opportunity to manipulate the physical interface by placing markers in the action space, which alter the behaviour of the robots and, consequently, the audible result produced. Through the agents, the programming of random processes and the direct manipulation of this application, we intend to demonstrate that it is possible to generate empathy in interaction and obtain specific audible results that would otherwise be difficult to reproduce, given the infinite loops that the interaction promotes.
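As a hedged illustration of the kind of parameter mapping this abstract describes, the sketch below maps a robot's position in a bounded space to the two modulated characteristics, amplitude and frequency. The function name, axis assignments, and frequency range are assumptions for illustration, not details taken from the paper.

```python
def position_to_sound(x, y, width=1.0, height=1.0,
                      f_min=110.0, f_max=880.0):
    """Map (x, y) in [0, width] x [0, height] to (frequency_hz, amplitude).

    The horizontal axis drives frequency and the vertical axis drives
    amplitude; both mappings are linear, with clamping at the borders
    of the action space.
    """
    fx = min(max(x / width, 0.0), 1.0)   # normalise and clamp to [0, 1]
    fy = min(max(y / height, 0.0), 1.0)
    frequency = f_min + fx * (f_max - f_min)
    amplitude = fy                        # 0.0 (silent) .. 1.0 (full scale)
    return frequency, amplitude

# Centre of the x axis gives the midpoint frequency; top of the y axis
# gives full amplitude.
freq, amp = position_to_sound(0.5, 1.0)
```

A real-time system would call such a mapping on every position update from the robot tracker and feed the result to a synthesis engine; markers placed by the user would alter the robots' trajectories rather than the mapping itself.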
Abstract:
This paper studies musical works from the point of view of three mathematical tools: entropy, the pseudo phase plane (PPP), and multidimensional scaling (MDS). The experiments analyze ten sets of different musical styles. First, for each musical composition, the PPP is produced using the time-series lag captured by the average mutual information. Second, to unravel hidden relationships between the musical styles, the MDS technique is used. The MDS is calculated based on two alternative metrics obtained from the PPP, namely the average mutual information and the fractal dimension. The results reveal significant differences between the musical styles, demonstrating the feasibility of the proposed strategy and motivating further developments towards a dynamical analysis of musical sounds.
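The lag-selection step behind a PPP can be sketched as follows: a generic histogram-based estimate of the average mutual information between a signal and a lagged copy of itself, whose first minimum over lags is a common choice of delay. The bin count and helper names are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def average_mutual_information(series, lag, bins=8):
    """Histogram estimate of I(x_t; x_{t+lag}) for a real-valued series.

    The series is discretised into equal-width bins, then
    I(X; Y) = H(X) + H(Y) - H(X, Y) is computed on the lagged pairs.
    """
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0           # guard against a flat series
    disc = [min(int((v - lo) / width), bins - 1) for v in series]
    x, y = disc[:-lag], disc[lag:]
    return (shannon_entropy(x) + shannon_entropy(y)
            - shannon_entropy(list(zip(x, y))))
```

Scanning `average_mutual_information(series, lag)` over increasing lags and taking the first local minimum yields the delay used to build the two PPP coordinates, x(t) and x(t + lag).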
Abstract:
The relation between automatic auditory discrimination, measured with the mismatch negativity (MMN), and stimulus type has not been well established in the literature, despite its importance as an electrophysiological measure of central sound representation. In this study, the MMN response was elicited by pure-tone and speech stimuli in a binaural passive auditory oddball paradigm in a group of 8 normal young adult subjects at the same intensity level (75 dB SPL). The frequency difference in the pure-tone oddball was 100 Hz (standard = 1000 Hz; deviant = 1100 Hz; same duration = 100 ms); in the speech oddball (standard /ba/; deviant /pa/; same duration = 175 ms), both Portuguese phonemes are bilabial plosives, chosen to maintain a narrow frequency band. Differences between speech and pure-tone stimuli were found across electrode locations. Larger MMN amplitude and duration and longer latency were verified for speech compared to pure tones at Cz and Fz, as well as significant differences in latency and amplitude between mastoids. The results suggest that speech may be processed differently from non-speech, and that this processing may occur at a later stage due to overlapping processes, since more neural resources are required for speech processing.
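A passive oddball stimulus sequence of the kind used here can be sketched as below. The deviant probability and the no-consecutive-deviants constraint are common conventions in MMN paradigms, assumed for illustration rather than taken from this study.

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.2, seed=0):
    """Generate a passive oddball trial list of 'standard' vs 'deviant'.

    No two deviants occur in a row, so every deviant is preceded by at
    least one standard, as is typical when eliciting the MMN.
    """
    rng = random.Random(seed)
    seq = ["standard"]                   # sequences conventionally open with a standard
    while len(seq) < n_trials:
        if seq[-1] != "deviant" and rng.random() < p_deviant:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq
```

Each entry would then be mapped to the actual stimulus (the 1000/1100 Hz tones in the pure-tone block, or /ba/ and /pa/ in the speech block) at presentation time.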
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
Our aim was to analyse the impact of the characteristics of the words used in spelling programmes, and of the nature of the instructional guidelines, on the evolution from grapho-perceptive writing to phonetic writing in preschool children. The participants were 50 5-year-old children, divided into five groups equivalent in intelligence, phonological skills and spelling. All the children knew the vowels and the consonants B, D, P, R, T, V, F, M and C, but did not use them in their spelling. Their spelling was evaluated in a pre- and post-test with 36 words beginning with the known consonants. In between, the children underwent a writing programme designed to lead them to use the letters P and T to represent the initial phonemes of words. The groups differed in the kind of words used in training (words whose initial syllable matches the name of the initial letter—Exp. G1 and Exp. G2—versus words whose initial syllable is similar to the sound of the initial letter—Exp. G3 and Exp. G4). They also differed in the instructions used to lead them to think about the relations between the initial phoneme of words and the initial consonant (instructions designed to make the children think about letter names—Exp. G1 and Exp. G3—versus instructions designed to make the children think about letter sounds—Exp. G2 and Exp. G4). The fifth group was a control group. All the children evolved towards syllabic phonetisation spellings. There were no differences between groups in the total number of phonetisations, but we found some differences between groups in the quality of the phonetisations.
Abstract:
Introduction: Lower Respiratory Tract Infections (LRTIs) are highly prevalent in institutionalised people with dementia, constituting an important cause of morbidity and mortality. Computerised auscultation of Adventitious Lung Sounds (ALS) has been shown to be objective and reliable for assessing and monitoring respiratory diseases; however, its application in people with dementia is unknown. Aim: This study characterised ALS (crackles and wheezes) in institutionalised people with dementia. Methods: An exploratory descriptive study including 6 long-term care institutions was conducted. The sample included a dementia group (DG) of 30 people with dementia and a matched healthy group (HG) of 30 elderly people. Socio-demographic and anthropometric data, cognition, type and severity of dementia, cardio-respiratory parameters, balance, mobility, and activities and participation were collected. Lung sounds were recorded with a digital stethoscope following Computerised Respiratory Sound Analysis (CORSA) guidelines. Crackles' location, number (N), frequency (F), two-cycle duration (2CD), initial deflection width (IDW) and largest deflection width (LDW), and wheezes' number (N), ratio (R) and frequency (F) were analysed per breathing phase. Statistical analyses were performed using PASW Statistics (v.19). Results: There were no significant differences between the two groups in the mean N of crackles during inspiration and expiration at either the trachea or the thorax. DG tracheal crackles had significantly higher F during inspiration and lower IDW, 2CD and LDW during expiration when compared with the HG. At the thorax, the LDW during inspiration was also significantly lower in the DG. A significantly higher N of inspiratory wheezes was found in the HG. Both groups had a low ratio of high-frequency wheezes. Conclusion: Computerised analysis of ALS provided information on the respiratory system and function of people with dementia and elderly people. Hence, it could be a step towards prevention, early diagnosis and continuous monitoring of respiratory diseases in people with cognitive impairment.
Abstract:
Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013
Abstract:
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while they were consciously perceiving, and often explicitly recognizing, the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) during the first 2 days of coma, while recording nineteen-channel electroencephalography (EEG). At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations, as well as to sounds from living sources vs. man-made objects. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories from the contribution of awareness to auditory category discrimination.
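The abstract does not specify the decoding algorithm, so as a much-simplified stand-in, single-patient decoding of this kind can be sketched as leave-one-out cross-validation with a nearest-class-mean classifier over per-trial feature vectors. The function name and the toy features below are illustrative assumptions.

```python
def nearest_mean_loocv(trials, labels):
    """Leave-one-out decoding accuracy with a nearest-class-mean rule.

    trials: list of equal-length feature vectors (one per trial);
    labels: one class label per trial. Each fold computes the class
    means on the training trials and assigns the held-out trial to the
    class with the smallest squared Euclidean distance.
    """
    correct = 0
    for i, trial in enumerate(trials):
        train = [(t, l) for j, (t, l) in enumerate(zip(trials, labels)) if j != i]
        means = {}
        for lab in set(l for _, l in train):
            rows = [t for t, l in train if l == lab]
            means[lab] = [sum(col) / len(rows) for col in zip(*rows)]
        pred = min(means, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(trial, means[lab])))
        correct += pred == labels[i]
    return correct / len(trials)
```

Chance level here is 1/(number of classes); an above-chance accuracy on a patient's trials is what would count as successful discrimination between the two sound categories.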
Abstract:
Cough is a very frequent symptom in children. Different reviews have tried to delineate the best approach to pediatric cough.1 Clinical evaluation remains the most important initial diagnostic step. Although the relations between cough and asthma are not straightforward,2 wheeze should be considered a physical sign of increased resistance to air flow. Lung function testing is the gold standard for analyzing pulmonary resistance to air flow but has limited practical value in young children. The clinical evaluation of the presence or absence of wheeze thus remains a primary clinical step in coughing children. Young children do not necessarily breathe deeply in and out when asked to. For years, the author has used a so-called "squeeze and wheeze" maneuver (SWM; see the Methods section for a definition) to elicit chest signs in young children. The basic idea is to increase expiratory flows in children who do not cooperate adequately during lung sound analysis. This study was conducted to report the author's experience with an as yet unreported physical sign and to study its prevalence in young children cared for in a general pediatrics practice.
Abstract:
Action-related sounds are known to increase the excitability of motoneurones within the primary motor cortex (M1), but the role of this auditory input remains unclear. We investigated repetition priming-induced plasticity, which is characteristic of semantic representations, in M1 by applying transcranial magnetic stimulation pulses to the hand area. Motor evoked potentials (MEPs) were larger while subjects were listening to sounds related versus unrelated to manual actions. Repeated exposure to the same manual-action-related sound yielded a significant decrease in MEPs when the right hand area was stimulated; no repetition effect was observed for manual-action-unrelated sounds. The shared repetition priming characteristics suggest that auditory input to the right primary motor cortex is part of auditory semantic representations.
Abstract:
Evidence of multisensory interactions within low-level cortices and at early post-stimulus latencies has prompted a paradigm shift in conceptualizations of sensory organization. However, the mechanisms of these interactions and their link to behavior remain largely unknown. One behaviorally salient stimulus is a rapidly approaching (looming) object, which can indicate a potential threat. Based on findings from humans and nonhuman primates suggesting selective multisensory (auditory-visual) integration of looming signals, we tested whether looming sounds would selectively modulate the excitability of visual cortex. We combined transcranial magnetic stimulation (TMS) over the occipital pole and psychophysics for "neurometric" and psychometric assays of changes in low-level visual cortex excitability (i.e., phosphene induction) and perception, respectively. Across three experiments we show that structured looming sounds considerably enhance visual cortex excitability relative to other sound categories and white-noise controls. The time course of this effect showed that the modulation of visual cortex excitability started to differ between looming and stationary sounds for sound portions of very short duration (80 ms) that were significantly below (by 35 ms) the perceptual discrimination threshold. Visual perceptions are thus rapidly and efficiently boosted by sounds through early, preperceptual and stimulus-selective modulation of neuronal excitability within low-level visual cortex.
Abstract:
Repetition of environmental sounds, like their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate the contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object, reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of the suppression effects was equivalent for physically identical and semantically related exemplars. That the degree of repetition suppression did not depend on whether both perceptual and semantic information was repeated is suggestive of a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.