8 results for Freedom of speech.
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
An explicit, area-preserving, and integrable magnetic field line map for a single-null divertor tokamak is obtained using a trajectory integration method to represent equilibrium magnetic surfaces. The magnetic surfaces obtained from the map can fit different geometries, with a freely specified X-point position, by varying free model parameters. The safety factor profile of the map is independent of the geometric parameters and can also be chosen arbitrarily. The integrable divertor map is composed with a nonintegrable map that simulates the effect of external symmetry-breaking resonances, so as to generate a chaotic region near the separatrix passing through the X-point. The composed field line map is used to analyze escape patterns (the connection length distribution and magnetic footprints on the divertor plate) for two equilibrium configurations with different magnetic shear profiles at the plasma edge.
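The composition described above, an integrable twist-type map followed by a symmetry-breaking perturbation, can be illustrated with a generic, standard-map-style sketch. This is not the paper's explicit divertor map: the safety factor profile `q_profile`, the perturbation amplitude `eps`, and the poloidal mode number `m` below are illustrative assumptions. Each factor has unit Jacobian determinant, so the composition remains area-preserving.

```python
import math

def twist_map(theta, psi, q):
    """Integrable step: the field line advances poloidally by 2*pi/q(psi)
    per toroidal turn; psi (the surface label) is conserved."""
    return (theta + 2.0 * math.pi / q(psi)) % (2.0 * math.pi), psi

def perturbation_kick(theta, psi, eps, m=3):
    """Nonintegrable, symmetry-breaking kick with poloidal mode number m.
    Jacobian determinant is 1, so area is preserved."""
    return theta, psi + eps * math.sin(m * theta)

def composed_map(theta, psi, q, eps, m=3):
    """Compose the integrable map with the perturbation (one full iteration)."""
    theta, psi = twist_map(theta, psi, q)
    return perturbation_kick(theta, psi, eps, m)

# Freely chosen, monotonic safety factor profile (illustrative assumption)
q_profile = lambda psi: 3.0 + 2.0 * psi

# Follow one field line for 100 toroidal turns
theta, psi = 0.5, 0.3
for _ in range(100):
    theta, psi = composed_map(theta, psi, q_profile, eps=0.01)
```

With `eps = 0` the map is integrable and `psi` is exactly conserved; switching on `eps` breaks the invariant surfaces near low-order resonances, which is the mechanism the abstract invokes to create the chaotic layer near the separatrix.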
Abstract:
Studies of cortical auditory evoked potentials using speech stimuli in normal-hearing individuals are important for understanding how the complexity of the stimulus influences the characteristics of the cortical potential generated. OBJECTIVE: To characterize the cortical auditory evoked potential and the P3 auditory cognitive potential with vocalic and consonantal contrast stimuli in normal-hearing individuals. METHOD: 31 individuals with no risk of hearing, neurologic, or language alterations, aged between 7 and 30 years, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded in the Fz and Cz active channels using consonantal (/ba/-/da/) and vocalic (/i/-/a/) speech contrasts. DESIGN: A cross-sectional prospective cohort study. RESULTS: We found a statistically significant difference between the speech contrast used and the latencies of the N2 (p = 0.00) and P3 (p = 0.00) components, as well as between the active channel considered (Fz/Cz) and the P3 latency and amplitude values. These correlations did not occur for the exogenous components N1 and P2. CONCLUSION: The speech stimulus contrast, vocalic or consonantal, must be taken into account in the analysis of the cortical auditory evoked potential, its N2 component, and the auditory cognitive P3 potential.
Abstract:
The article examines freedom of expression and the right to information, as conceived in a democracy founded on the idea that all power emanates from the people and is exercised in their name, within the framework of contemporary communication established by new technologies and interconnected networks. Do freedom of expression and the right to information in fact expand in the digital era? On what terms? Are there new constraints on these fundamental rights? What are the challenges?
Abstract:
New technology in the Freedom (R) speech processor for cochlear implants was developed to improve the processing of incoming acoustic sound; this applies not only to new users but also to previous generations of cochlear implants. Aim: To identify the contribution of this technology to the Nucleus 22 (R) in speech perception tests in silence and in noise, and in audiometric thresholds. Methods: A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra (R) was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom (R) technology to the Nucleus 22 (R), auditory thresholds and speech perception tests were performed in free field in soundproof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. Results: The Freedom (R) technology applied to the Nucleus 22 (R) showed a statistically significant difference in all speech perception tests and audiometric thresholds. Conclusion: The Freedom (R) technology improved the speech perception performance and audiometric thresholds of patients with the Nucleus 22 (R).
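The abstract names the nonparametric Wilcoxon test for paired data. As a minimal sketch of the underlying signed-rank statistic (the study would presumably have used statistical software, and the percent-correct scores below are hypothetical, not the study's data):

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic for paired samples x, y.
    Returns (w_plus, w_minus): the rank sums of positive and
    negative differences y - x, with zero differences discarded
    and tied absolute differences given their average rank."""
    diffs = [b - a for a, b in zip(x, y) if b != a]
    abs_sorted = sorted(abs(d) for d in diffs)
    rank_of = {}
    i = 0
    while i < len(abs_sorted):
        j = i
        while j < len(abs_sorted) and abs_sorted[j] == abs_sorted[i]:
            j += 1
        rank_of[abs_sorted[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)
    w_minus = sum(rank_of[abs(d)] for d in diffs if d < 0)
    return w_plus, w_minus

# Hypothetical paired percent-correct scores (Spectra map vs. Freedom upgrade)
quiet_pre = [40, 52, 36, 58, 44]
quiet_post = [62, 60, 55, 70, 44]
w_plus, w_minus = wilcoxon_signed_rank(quiet_pre, quiet_post)
```

A strongly one-sided split between `w_plus` and `w_minus` (here all nonzero differences are positive, so `w_minus` is 0) is what drives the small p-values reported for paired comparisons of this kind.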
Abstract:
This study investigated whether there are differences in the speech-evoked Auditory Brainstem Response among children with Typical Development (TD), (Central) Auditory Processing Disorder ((C)APD), and Language Impairment (LI). The speech-evoked Auditory Brainstem Response was tested in 57 children (ages 6-12). The children were placed into three groups: TD (n = 18), (C)APD (n = 18), and LI (n = 21). Speech-evoked ABRs were elicited using the five-formant syllable /da/. Three dimensions were defined for analysis: timing, harmonics, and pitch. A comparative analysis of the responses between the typically developing children and the children with (C)APD and LI revealed abnormal encoding of the speech acoustic features that are characteristic of speech perception in children with (C)APD and LI, although the two groups differed in their abnormalities. While the children with (C)APD may have had greater difficulty distinguishing stimuli based on timing cues, the children with LI had the additional difficulty of distinguishing speech harmonics, which are important to the identification of speech sounds. These data suggest that an inefficient representation of crucial components of speech sounds may contribute to the difficulties with language processing found in children with LI. Furthermore, these findings may indicate that the neural processes mediated by the auditory brainstem differ among children with auditory processing and speech-language disorders.
Abstract:
Background: Psychosis has various causes, including mania and schizophrenia. Since the differential diagnosis of psychosis is based exclusively on subjective assessments of oral interviews with patients, an objective quantification of the speech disturbances that characterize mania and schizophrenia is in order. In principle, such quantification could be achieved by the analysis of speech graphs. A graph represents a network with nodes connected by edges; in speech graphs, nodes correspond to words and edges correspond to semantic and grammatical relationships. Methodology/Principal Findings: To quantify speech differences related to psychosis, interviews with schizophrenic, manic, and normal subjects were recorded and represented as graphs. Manics scored significantly higher than schizophrenics on ten graph measures. Psychopathological symptoms such as logorrhea, poor speech, and flight of thoughts were captured by the analysis even when verbosity differences were discounted. Binary classifiers based on speech graph measures sorted schizophrenics from manics with sensitivity of up to 93.8% and specificity of up to 93.7%. In contrast, sorting based on the scores of two standard psychiatric scales (BPRS and PANSS) reached only 62.5% sensitivity and specificity. Conclusions/Significance: The results demonstrate that alterations of the thought process manifested in the speech of psychotic patients can be objectively measured using graph-theoretical tools developed to capture specific features of the normal and dysfunctional flow of thought, such as divergence and recurrence. The quantitative analysis of speech graphs is not redundant with standard psychometric scales but rather complementary, as it yields a very accurate sorting of schizophrenics and manics. Overall, the results point to automated psychiatric diagnosis based not on what is said, but on how it is said.
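The graph construction described, words as nodes and relationships as edges, can be sketched minimally. As an assumption, edges here simply link consecutive words (a word-adjacency approximation of the semantic and grammatical relationships mentioned in the abstract), and the measures shown are generic illustrations, not the study's ten measures:

```python
from collections import defaultdict

def speech_graph(transcript):
    """Build a directed word graph from a transcript: nodes are distinct
    words; an edge links each word to the word that follows it."""
    words = transcript.lower().split()
    nodes = set(words)
    edges = {(a, b) for a, b in zip(words, words[1:])}
    return nodes, edges

def graph_measures(nodes, edges):
    """A few illustrative measures of the kind such analyses compute."""
    out_degree = defaultdict(int)
    for a, _ in edges:
        out_degree[a] += 1
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "avg_out_degree": len(edges) / len(nodes) if nodes else 0.0,
        # words with several continuations create branching/recurrence
        "branching_words": len([w for w in nodes if out_degree[w] > 1]),
    }

nodes, edges = speech_graph("the dog chased the cat and the cat ran")
measures = graph_measures(nodes, edges)
```

On measures like these, logorrhea tends to inflate node and edge counts while recurrent, looping speech raises branching; this is the intuition behind separating manic from schizophrenic speech even after controlling for verbosity.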
Abstract:
Introduction: In recent years, the benefits associated with the use of cochlear implants (CIs), especially with regard to speech perception, have proven to surpass those produced by hearing aids, making CIs a highly efficient resource for patients with severe/profound hearing loss. However, few studies so far have assessed the satisfaction of adult CI users. Objective: To analyze the relationship between the level of speech perception and the degree of satisfaction of adult CI users. Method: This was a prospective cross-sectional study conducted in the Audiological Research Center (CPA) of the Hospital of Craniofacial Anomalies, University of São Paulo (HRAC/USP), in Bauru, São Paulo, Brazil. A total of 12 CI users with pre-lingual or post-lingual hearing loss participated in the study. The following tools were used in the assessment: the "Satisfaction with Amplification in Daily Life" (SADL) questionnaire, culturally adapted to Brazilian Portuguese, whose scores were related to the speech perception results; a speech perception test under quiet conditions; and the Hearing in Noise Test (HINT) Brazil under free-field conditions. Results: The participants in the study were on the whole satisfied with their devices, and the degree of satisfaction correlated positively with the ability to perceive monosyllabic words under quiet conditions. Satisfaction did not correlate with the level of speech perception in noisy environments. Conclusion: Assessments of satisfaction may help professionals to predict what other factors, in addition to speech perception, may contribute to the satisfaction of CI users, in order to reorganize the intervention process and improve users' quality of life.
Abstract:
OBJECTIVES: The purpose of the study was to acoustically compare the performance of children who do and do not stutter on diadochokinesis tasks in terms of syllable duration, syllable periods, and peak intensity. METHODS: In this case-control study, acoustic analyses were performed on 26 children who stutter and 20 age-matched normally fluent children (both groups stratified into preschoolers and school-aged children) during a diadochokinesis task: the repetition of articulatory segments through a task testing the ability to alternate movements. Speech fluency was assessed using the Fluency Profile and the Stuttering Severity Instrument. RESULTS: The children who stutter and those who do not did not differ significantly in the acoustic patterns they produced in the diadochokinesis tasks. Significant differences were demonstrated between age groups independent of speech fluency; overall, the preschoolers performed more poorly. These results indicate that the observed differences are related to speech-motor development with age and not to stuttering itself. CONCLUSIONS: Acoustic studies demonstrate that speech segment durations are most variable, both within and between subjects, during childhood and then gradually decrease to adult levels by the age of eleven to thirteen years. One possible explanation for the present results is that children who stutter exhibited higher coefficients of variation, exploiting motor equivalence to achieve accurate sound production (i.e., the absence of speech disruptions).
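The coefficient of variation mentioned in the conclusion is simply the standard deviation of the syllable durations divided by their mean, which makes within-subject variability comparable across speakers with different speech rates. A minimal sketch, with hypothetical duration values:

```python
import math

def coefficient_of_variation(durations):
    """CV = standard deviation / mean of a list of syllable durations.
    Higher CV means more variable timing at a given speech rate."""
    n = len(durations)
    mean = sum(durations) / n
    variance = sum((d - mean) ** 2 for d in durations) / n
    return math.sqrt(variance) / mean

# Hypothetical syllable durations (seconds) from a repetition task;
# the child's timing is more variable, the adult-like timing less so
child_durations = [0.21, 0.17, 0.25, 0.19, 0.28, 0.16]
adultlike_durations = [0.20, 0.21, 0.19, 0.20, 0.21, 0.19]

cv_child = coefficient_of_variation(child_durations)
cv_adult = coefficient_of_variation(adultlike_durations)
```

The two lists have nearly the same mean duration, so any difference in CV reflects timing variability alone, which is the property the abstract ties to speech-motor development rather than to stuttering.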