916 results for deliberative speech
Abstract:
This dissertation presents the concept of the Deliberative Transformative Moment and an instrument to identify it, in a further attempt to bridge the gap between deliberation theory and practice. A transformative moment in the deliberative process occurs when the level of deliberation is either lifted from low to high or drops from high to low. To identify such a moment, one has to look at the context and dynamics of the group discussion. This broadening of the unit of analysis is a major departure from existing instruments for measuring the level of deliberation, such as the Deliberative Quality Index (DQI), which focuses primarily on individual speech acts. Consistent with the theoretical framework of consociational and deliberative approaches, the observed discussions took place between two deeply divided groups: Colombian ex-combatants from both the extreme left and the extreme right. Moving beyond a purely Habermasian perspective, this study finds that, besides purely rational arguments, there are contexts in which personal stories, jokes and self-interests, acting as justifications of arguments, have either a positive or a negative impact on deliberative transformative moments. Although this research has a strongly qualitative orientation, reliability tests scored high, supporting the instrument as a reliable and valid research method and shedding light on the sorts of speech acts that enhance deliberation and those that detract from it.
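Illustrative aside, not part of the dissertation: the reliability claim above usually rests on inter-coder agreement, so here is a minimal sketch of one standard agreement statistic (Cohen's kappa) computed over two hypothetical coders' classifications of speech acts.

```python
# Minimal sketch with invented labels: two coders classify each speech act as
# raising ("up"), lowering ("down"), or not changing ("none") the level of
# deliberation, and agreement is summarised with Cohen's kappa.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["up", "none", "down", "up", "none", "none", "up", "down"]
b = ["up", "none", "down", "none", "none", "none", "up", "down"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```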
Abstract:
The goal of this paper is to study and propose a new technique for noise reduction used during the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows better performance for the proposed technique. Performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the Kalman filter in the time domain combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to a factor of four.
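Illustrative aside, not the paper's implementation: two of the ingredients named in the abstract, a plain magnitude spectral subtraction stage and a segmental signal-to-noise ratio measure, sketched in Python. The frame length, the noise-estimation heuristic and the SNR clamping range are assumptions.

```python
# Minimal sketch, assuming a Hann-windowed STFT with 50% overlap and that the
# first few frames are noise-only; parameters are illustrative, not the paper's.
import numpy as np

def spectral_subtraction(noisy, fs, frame_dur=0.032, noise_frames=6):
    n = int(fs * frame_dur)
    hop = n // 2
    win = np.hanning(n)
    starts = range(0, len(noisy) - n, hop)
    spectra = [np.fft.rfft(noisy[i:i + n] * win) for i in starts]
    noise_mag = np.mean([np.abs(s) for s in spectra[:noise_frames]], axis=0)
    out = np.zeros(len(noisy))
    for i, s in zip(starts, spectra):
        # Subtract the noise magnitude estimate, keep a small spectral floor,
        # and reuse the noisy phase.
        mag = np.maximum(np.abs(s) - noise_mag, 0.05 * noise_mag)
        out[i:i + n] += np.fft.irfft(mag * np.exp(1j * np.angle(s)), n)
    return out  # Hann analysis windows at 50% overlap sum to ~1 (edges ignored)

def segmental_snr(clean, enhanced, fs, frame_dur=0.032):
    n = int(fs * frame_dur)
    snrs = []
    for i in range(0, min(len(clean), len(enhanced)) - n, n):
        sig = np.sum(clean[i:i + n] ** 2)
        err = np.sum((clean[i:i + n] - enhanced[i:i + n]) ** 2) + 1e-12
        snrs.append(10 * np.log10(sig / err + 1e-12))
    return float(np.mean(np.clip(snrs, -10, 35)))  # common clamping convention
```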
Abstract:
The canonical representation of speech constitutes a perfect reconstruction (PR) analysis-synthesis system. Its parameters are the autoregressive (AR) model coefficients, the pitch period, and the voiced and unvoiced components of the excitation represented as transform coefficients. Each set of parameters may be operated on independently. A time-frequency unvoiced excitation (TFUNEX) model is proposed that has high time resolution and selective frequency resolution. An improved time-frequency fit is obtained by using the clustering of pitch-synchronous transform tracks, defined in the modulation transform domain, for antialiasing cancellation. The TFUNEX model delivers high-quality speech while compressing the unvoiced excitation representation by a factor of about 13 relative to its raw transform-coefficient representation for wideband speech.
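Illustrative aside, not the TFUNEX system itself: a minimal sketch of the analysis step the abstract presupposes, estimating AR (LPC) coefficients for a frame and recovering the excitation by inverse filtering. The model order and windowing are assumptions.

```python
# Minimal sketch, assuming the autocorrelation (Yule-Walker) method for the AR
# coefficients; the order and windowing are illustrative choices, not TFUNEX's.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order=16):
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])                 # solve R a = r
    return np.concatenate(([1.0], -a))  # A(z) = 1 - sum_k a_k z^(-k)

def excitation(frame, a):
    # Inverse filtering gives the prediction residual; TFUNEX would further
    # split this excitation into voiced and unvoiced transform coefficients.
    return lfilter(a, [1.0], frame)
```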
Abstract:
The primary objective of this study was to assess the lingual kinematic strategies used by younger and older adults to increase rate of speech. It was hypothesised that the strategies used by the older adults would differ from those of the young adults either as a direct result of, or in response to a need to compensate for, age-related changes in the tongue. Electromagnetic articulography was used to examine the tongue movements of eight young (M = 26.7 years) and eight older (M = 67.1 years) females during repetitions of /ta/ and /ka/ at a controlled moderate rate and then as fast as possible. The younger and older adults were found to significantly reduce consonant durations and increase syllable repetition rate by similar proportions. To achieve these reduced durations both groups appeared to use the same strategy, that of reducing the distances travelled by the tongue. Further comparisons at each rate, however, suggested a speed-accuracy trade-off and increased speech monitoring in the older adults. The results may assist in differentiating articulatory changes associated with normal aging from pathological changes found in disorders that affect the older population.
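Illustrative aside, not the study's analysis pipeline: a minimal sketch of the kind of kinematic measures reported (path distance, peak speed, duration) computed from a sampled tongue-coil trajectory. The sampling rate and the 2-D trajectory are assumptions.

```python
# Minimal sketch; the 200 Hz sampling rate and the 2-D coil trajectory in mm
# are assumptions made for illustration only.
import numpy as np

def movement_measures(xy, fs=200.0):
    """xy: array of shape (n_samples, 2), coil position in mm."""
    step_len = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # mm per sample
    distance = float(step_len.sum())         # total path length travelled (mm)
    peak_speed = float(step_len.max() * fs)  # mm/s
    duration = len(xy) / fs                  # s
    return distance, peak_speed, duration
```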
Abstract:
Spectral peak resolution was investigated in normal hearing (NH), hearing impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1–2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
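Illustrative aside, not the study's stimulus code: a minimal sketch of a rippled noise generator with log-spaced peaks at a given ripple density, where inverting the ripple phase interchanges peaks and troughs. Bandwidth, ripple depth and phase handling are assumptions.

```python
# Minimal sketch; the bandwidth, ripple depth, and random-phase handling are
# illustrative assumptions, not the study's stimulus parameters.
import numpy as np

def rippled_noise(fs=44100, dur=0.5, ripples_per_octave=2.0, phase=0.0,
                  f_lo=100.0, f_hi=5000.0, depth_db=30.0, seed=0):
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)  # log-frequency axis, in octaves
    # Sinusoidal ripple (in dB) across log frequency; shifting `phase` by pi
    # interchanges the peak and trough positions the listener must detect.
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    spectrum[band] = 10 ** (env_db / 20.0) * np.exp(1j * 2 * np.pi * rng.random(band.sum()))
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))

standard = rippled_noise(ripples_per_octave=2.0, phase=0.0)
inverted = rippled_noise(ripples_per_octave=2.0, phase=np.pi)  # same seed: only the ripple moves
```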
Abstract:
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.
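Illustrative aside, not the study's actual processing: a minimal sketch of the kind of simulation described, unprocessed low-pass "acoustic" hearing combined with a noise-band vocoder standing in for "electric" hearing. Channel count, cutoff frequencies and filter orders are assumptions.

```python
# Minimal sketch, assuming a 4-channel noise-band vocoder above a 500 Hz
# acoustic cutoff; these parameters are illustrative, not the study's.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def eas_simulation(speech, fs, acoustic_cutoff=500.0, n_channels=4, f_max=6000.0):
    # "Acoustic" part: keep the low frequencies essentially untouched.
    sos_lp = butter(4, acoustic_cutoff, btype="low", fs=fs, output="sos")
    acoustic = sosfiltfilt(sos_lp, speech)
    # "Electric" part: noise-band vocode the region above the cutoff.
    edges = np.geomspace(acoustic_cutoff, f_max, n_channels + 1)
    electric = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi, fs)
        envelope = np.abs(hilbert(band))  # temporal envelope of the band
        carrier = bandpass(np.random.randn(len(speech)), lo, hi, fs)
        electric += envelope * carrier
    return acoustic + electric
```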
Abstract:
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies, and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that, due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech, including voicing.
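Illustrative aside, not the study's AI procedure: a much-simplified articulation-index-style calculation in which per-band audibility is limited by noise and hearing threshold and weighted by band importance. The band edges, importance weights and 30 dB dynamic-range convention are textbook-style assumptions.

```python
# Minimal sketch; band edges, importance weights and the 30 dB dynamic-range
# convention are textbook-style assumptions, not the study's exact procedure.
import numpy as np

BAND_CENTERS = np.array([250, 500, 1000, 2000, 4000])    # Hz (assumed bands)
IMPORTANCE   = np.array([0.10, 0.20, 0.25, 0.25, 0.20])  # sums to 1 (assumed)

def articulation_index(speech_db, noise_db, threshold_db):
    """Per-band levels in dB for the bands above; returns a value in [0, 1]."""
    effective_floor = np.maximum(noise_db, threshold_db)
    # Audible fraction of an assumed 30 dB speech dynamic range (peaks roughly
    # 15 dB above the long-term RMS level) in each band.
    audible = np.clip((speech_db + 15.0 - effective_floor) / 30.0, 0.0, 1.0)
    return float(np.sum(IMPORTANCE * audible))

# Example: sloping high-frequency loss listening in multitalker babble.
speech    = np.array([60, 60, 58, 55, 50.0])
babble    = np.array([55, 53, 50, 45, 40.0])
threshold = np.array([20, 25, 40, 60, 75.0])
print(f"AI = {articulation_index(speech, babble, threshold):.2f}")
```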
Abstract:
The purpose of this paper is to provide a cross-linguistic survey of the variation in coding strategies available for the grammatical distinction between direct and indirect speech representation, with a particular focus on the expression of indirect reported speech. Cross-linguistic data from a sample of 42 languages will be provided to illustrate the range of available grammatical coding strategies.
Abstract:
Parkinson's disease (PD) is a neurodegenerative movement disorder primarily due to basal ganglia dysfunction. While much research has been conducted on Parkinsonian deficits in the traditional arena of musculoskeletal limb movement, research in other functional motor tasks is lacking. The present study examined articulation in PD with increasingly complex sequences of articulatory movement. Of interest was whether dysfunction would affect articulation in the same manner as in limb-movement impairment. In particular, since very similar (homogeneous) articulatory sequences (the tongue-twister effect) are more difficult for healthy individuals to achieve than dissimilar (heterogeneous) gestures, while the reverse may apply for skeletal movements in PD, we asked which factor would dominate when PD patients articulated various grades of artificial tongue twisters: the influence of disease or a possible difference between the two motor systems. Execution was especially impaired when articulation involved a sequence of motor programs heterogeneous in terms of place of articulation. The results are suggestive of a hypokinetic tendency in complex sequential articulatory movement, as in limb movement. It appears that PD patients do show abnormalities in articulatory movement which are similar to those of the musculoskeletal system. The present study suggests that an underlying disease effect modulates movement impairment across different functional motor systems.
Abstract:
This work attempts to discuss, in the light of French Discourse Analysis, how the concepts of memory and heterogeneity in language actions can contribute to a reflection on information and documentation studies. Starting from excerpts of the pamphlet for the exhibition Clarice Lispector - the hour of the star, held in the second half of 2007 by the Portuguese Language Museum (Luz train station, São Paulo), we interpret the several voices that surround and sustain the subject and the meaning.
Abstract:
Dysphagia is a symptom associated with an array of anatomical and functional changes which must be assessed by a multidisciplinary team to guarantee optimal evaluation and treatment, preventing potential complications. Aim: The aim of the present study is to present the combined protocol of clinical evaluation and swallowing videoendoscopy carried out by ENT doctors and speech therapists in the Dysphagia Group of the ENT Department - University Hospital. Materials and Methods: Retrospective study concerning the use of a protocol made up of a patient interview and clinical examination, followed by an objective evaluation with swallowing videoendoscopy. The exam was performed in 1,332 patients from May 2001 to December 2008. There were 726 (54.50%) males and 606 (45.50%) females, aged between 22 days and 99 years. Results: We found 427 (32.08%) cases of normal swallowing, 273 (20.48%) of mild dysphagia, 224 (16.81%) of moderate dysphagia, 373 (27.99%) of severe dysphagia, and 35 (2.64%) inconclusive exams. Conclusion: The combined protocol (Otolaryngology and Speech Therapy) is a good way to approach the dysphagic patient, helping to achieve an early and safe diagnosis of deglutition disorders as far as severity and treatment are concerned.
Abstract:
Speech disorder in monolingual Cantonese- or English-speaking children has been well described in the literature. There appear to be no reports, however, that describe speech-disordered children who have been exposed to both languages. Here we report on the error patterns of two preschool speech-disordered children who were learning two languages. Both children's first language was Cantonese, but they were also exposed to English through the media and child care. Their disorders were of unknown aetiology. The following questions were asked of the data: (a) Do bilingual children suspected of having speech problems make errors in Cantonese and English that reflect delay or disorder when compared with normative data on monolingual speech development in each language? (b) How does the children's speech differ from that of other bilingual children from the same language-learning background? (c) Are the children's speech difficulties apparent in both languages? (d) Is the pattern of errors the same in both languages, or do language-specific processes operate? The results bear on theories of acquisition, disorder and bilingualism; they also have clinical implications for speech-language pathologists whose caseloads include bilingual preschool children.