102 results for Fricative consonants


Relevance: 10.00%

Abstract:

Previous studies have indicated that consonant imprecision in Parkinson's disease (PD) may result from a reduction in the amplitude of lingual movements or articulatory undershoot. While this has been postulated, direct measurement of the tongue's contact with the hard palate during speech production has not been undertaken. Therefore, the present study aimed to use electropalatography (EPG) to determine the exact nature of tongue-palate contact in a group of individuals with PD and consonant imprecision (n=9). Furthermore, the current investigation also aimed to compare the results of the participants with PD to a group of aged (n=7) and young (n=8) control speakers to determine the relative contribution of ageing of the lingual musculature to any articulatory deficits noted. Participants were required to read aloud the phrase 'I saw a ___ today' with the artificial palate in-situ. Target words included the consonants /l/, /s/ and /t/ in initial position in both the /i/ and /a/ vowel environments. Phonetic transcription of phoneme productions and description of error types was completed. Furthermore, representative frames of contact were employed to describe the features of tongue-palate contact and to calculate spatial palatal indices. Results of the perceptual investigation revealed that perceived undershooting of articulatory targets distinguished the participant group with PD from the control groups. However, objective EPG assessment indicated that undershooting of the target consonant was not the cause of the perceived articulatory errors. It is, therefore, possible that reduced pressure of tongue contact with the hard palate, sub-lingual deficits or impaired articulatory timing resulted in the perceived undershooting of the target consonants.
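
As an illustration of the kind of spatial measure EPG yields, the sketch below computes two simple contact indices from one representative frame. It assumes a hypothetical 8x8 binary electrode grid, and the helper names (contact_fraction, anterior_contact_fraction) are invented for illustration; this is not the index set used in the study.

```python
# Minimal sketch (not the study's analysis): spatial contact indices from a
# single representative EPG frame, assuming the frame is an 8x8 binary grid
# (1 = electrode contacted) with row 0 nearest the alveolar ridge.
from typing import Sequence

def contact_fraction(frame: Sequence[Sequence[int]]) -> float:
    """Proportion of contacted electrodes over the whole palate."""
    cells = [c for row in frame for c in row]
    return sum(cells) / len(cells)

def anterior_contact_fraction(frame: Sequence[Sequence[int]], rows: int = 3) -> float:
    """Proportion of contacted electrodes in the front (alveolar) rows only."""
    cells = [c for row in frame[:rows] for c in row]
    return sum(cells) / len(cells)

# Example: a hypothetical /t/-like frame with a fully contacted anterior band.
frame = [[1] * 8 if r < 2 else [1, 0, 0, 0, 0, 0, 0, 1] for r in range(8)]
print(round(contact_fraction(frame), 2), round(anterior_contact_fraction(frame), 2))
```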

Relevance: 10.00%

Abstract:

Previous investigations employing electropalatography (EPG) have identified articulatory timing deficits in individuals with acquired dysarthria. However, this technology is yet to be applied to the articulatory timing disturbance present in Parkinson's disease (PD). As a result, the current investigation aimed to use EPG to comprehensively examine the temporal aspects of articulation in a group of nine individuals with PD at sentence, word and segment level. This investigation followed on from a prior study (McAuliffe, Ward and Murdoch) and, similarly, aimed to compare the results of the participants with PD to a group of aged (n=7) and young controls (n=8) to determine if ageing contributed to any articulatory timing deficits observed. Participants were required to read aloud the phrase 'I saw a ___ today' with the EPG palate in-situ. Target words included the consonants /l/, /s/ and /t/ in initial position in both the /i/ and /a/ vowel environments. Perceptual investigation of speech rate was conducted in addition to objective measurement of sentence, word and segment duration. Segment durations included the total segment length and duration of the approach, closure/constriction and release phases of EPG consonant production. Results of the present study revealed impaired speech rate, perceptually, in the group with PD. However, this was not confirmed objectively. Electropalatographic investigation of segment durations indicated that, in general, the group with PD demonstrated segment durations consistent with the control groups. Only one significant difference was noted, with the group with PD exhibiting significantly increased duration of the release phase for /la/ when compared to both the control groups. It is, therefore, possible that EPG failed to detect lingual movement impairment as it does not measure the complete tongue movement towards and away from the hard palate. Furthermore, the contribution of individual variation to the present findings should not be overlooked.
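
To make the temporal measures concrete, the following sketch derives approach, closure and release durations from a hypothetical series of per-frame anterior-contact counts. The 100 Hz frame rate, the thresholds and the function name are assumptions for illustration, not the segmentation procedure reported in the study.

```python
# Minimal sketch (assumptions, not the study's algorithm): approach, closure
# and release durations for an alveolar stop from per-frame counts of
# contacted anterior electrodes, assuming a 100 Hz EPG frame rate.
FRAME_RATE_HZ = 100  # assumed frame rate

def phase_durations(anterior_contacts, low=2, high=8):
    """Return (approach, closure, release) durations in milliseconds."""
    above_low = [i for i, c in enumerate(anterior_contacts) if c >= low]
    above_high = [i for i, c in enumerate(anterior_contacts) if c >= high]
    if not above_low or not above_high:
        raise ValueError("no identifiable closure in this segment")
    onset, offset = above_low[0], above_low[-1]            # segment edges
    closure_on, closure_off = above_high[0], above_high[-1]
    to_ms = 1000 / FRAME_RATE_HZ
    approach = (closure_on - onset) * to_ms
    closure = (closure_off - closure_on + 1) * to_ms
    release = (offset - closure_off) * to_ms
    return approach, closure, release

# Hypothetical contact trace for a /t/-like closure.
trace = [0, 1, 3, 5, 8, 8, 8, 8, 6, 4, 2, 1, 0]
print(phase_durations(trace))  # (20.0, 40.0, 30.0)
```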

Relevance: 10.00%

Abstract:

This is the final paper in a special issue of Advances in Speech-Language Pathology. It presents an intervention case study of a 7-year-old child with severe phonological difficulties described in Holm and Crosbie (2006). Dodd et al. (2006) hypothesised that Jarrod had an underlying deficit in generating phonological plans for word production and suggested that Jarrod would benefit from core vocabulary therapy. This paper reports on one block of core vocabulary therapy undertaken with Jarrod. Pre- and post-intervention measures showed that Jarrod made significant progress. His speech became more consistent and his accuracy (percent consonants correct) increased.
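
Percent consonants correct (PCC) is simply the proportion of target consonants realised correctly, expressed as a percentage. A minimal sketch of the metric is shown below with invented pre- and post-therapy scoring pairs; clinical PCC scoring uses transcribed conversational samples and stricter alignment conventions.

```python
# Minimal sketch of percent consonants correct (PCC): align target and
# produced consonants position by position and count matches.
def percent_consonants_correct(pairs):
    """pairs: iterable of (target_consonant, produced_consonant) tuples."""
    pairs = list(pairs)
    correct = sum(1 for target, produced in pairs if target == produced)
    return 100 * correct / len(pairs)

# Hypothetical pre- and post-therapy productions of the same target consonants.
pre = [("s", "t"), ("k", "t"), ("l", "l"), ("t", "t")]
post = [("s", "s"), ("k", "k"), ("l", "l"), ("t", "t")]
print(percent_consonants_correct(pre), percent_consonants_correct(post))  # 50.0 100.0
```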

Relevance: 10.00%

Abstract:

We present the case of two aphasic patients: one with fluent speech, MM, and one with dysfluent speech, DB. Both patients make similar proportions of phonological errors in speech production and the errors have similar characteristics. A closer analysis, however, shows a number of differences. DB's phonological errors involve, for the most part, simplifications of syllabic structure; they affect consonants more than vowels; and, among vowels, they show effects of sonority/complexity. This error pattern may reflect articulatory difficulties. MM's errors, instead, show little effect of syllable structure, affect vowels at least as much as consonants, and affect all different vowels to a similar extent. This pattern is consistent with a more central impairment involving the selection of the right phoneme among competing alternatives. We propose that, at this level, vowel selection may be more difficult than consonant selection because vowels belong to a smaller set of repeatedly activated units.

Relevance: 10.00%

Abstract:

We report the performance of a group of adult dyslexics and matched controls in an array-matching task where two strings of either consonants or symbols are presented side by side and have to be judged to be the same or different. The arrays may differ either in the order or identity of two adjacent characters. This task does not require naming – which has been argued to be the cause of dyslexics’ difficulty in processing visual arrays – but, instead, has a strong serial component as demonstrated by the fact that, in both groups, reaction times (RTs) increase monotonically with the position of a mismatch. The dyslexics are clearly impaired in all conditions and performance in the identity conditions predicts performance across orthographic tasks even after age, performance IQ and phonology are partialled out. Moreover, the shapes of serial position curves are revealing of the underlying impairment. In the dyslexics, RTs increase with position at the same rate as in the controls (lines are parallel), ruling out reduced processing speed or difficulties in shifting attention. Instead, error rates show a catastrophic increase for positions which are either searched later or more subject to interference. These results are consistent with a reduction in the attentional capacity needed in a serial task to bind together identity and positional information. This capacity is best seen as a reduction in the number of spotlights into which attention can be split to process information at different locations rather than as a more generic reduction of resources which would also affect processing the details of single objects.
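
The "parallel lines" argument amounts to comparing the slopes of RT-by-position regressions across groups. The sketch below fits ordinary least-squares lines to invented group means purely to illustrate that comparison; it does not reproduce the study's data or statistics.

```python
# Minimal sketch: fit a straight line to mean reaction time (RT) as a function
# of mismatch position for each group.  Parallel lines (similar slopes) with an
# overall offset would match the reported pattern; the values are invented.
def ols_slope_intercept(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

positions = [1, 2, 3, 4, 5]
mean_rt_controls = [620, 660, 700, 745, 780]   # hypothetical ms values
mean_rt_dyslexics = [720, 765, 800, 845, 885]  # hypothetical ms values

for label, rts in [("controls", mean_rt_controls), ("dyslexics", mean_rt_dyslexics)]:
    slope, intercept = ols_slope_intercept(positions, rts)
    print(f"{label}: {slope:.1f} ms per position, intercept {intercept:.0f} ms")
```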

Relevance: 10.00%

Abstract:

Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and, therefore, can be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties (such as a predominance of errors on consonants rather than vowels) but not with other measures (such as the rate of words reproduced correctly or the rate of lexical errors). These results indicate that sound errors cannot arise at a single phonological level because they are different in different patients. Instead, the different patterns: (1) provide evidence for separate impairments and the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; and (3) support the claim that linguistic principles of complexity have an articulatory basis, since they only apply in patients with associated articulatory difficulties.

Relevance: 10.00%

Abstract:

This research investigated the nasality of vowels in the spontaneous speech of inhabitants of the quilombola communities of Brejo dos Crioulos and Poções (MG). The theoretical framework drew on the assumptions of Phonetics and Phonology and on established scholarship on nasality (CAGLIARI, 1977; CÂMARA JR., 1984, 2013; BISOL, 2013; ABAURRE; PAGOTTO, 1996; SILVA, 2015), with support from Corpus Linguistics. Its general goal was to investigate the occurrence of nasality in the dialect of these quilombola communities and its linguistic behavior, considering the linguistic factors that can interfere in the phenomenon. Specifically, it aimed to a) detect the occurrence of nasalized vowels with the help of the resources that Corpus Linguistics provides (Praat and WordSmith Tools); b) discriminate the different types of contexts in which nasalized vowels occur; c) carry out quantitative and qualitative analyses of the nasalized vowels in the study corpus; d) describe and analyze the behavior of nasalized vowels; and e) contrast the F1 and F2 values of oral and nasalized vowels. It was hypothesized that nasality occurs because it is conditioned by the nasal segment following the vowel (a phonological process of "assimilation"), by the position of primary stress, and by grammatical category. It was expected that the quilombola communities of Brejo dos Crioulos and Poções produce nasalized vowels in their speech and that this linguistic phenomenon is favored by the adjacent presence of nasal consonants or vowels. Furthermore, it was hypothesized that the F1 and F2 values of oral and nasalized vowels in these communities are distinct. The following research questions were elaborated: (i) is the presence of nasalized vowels in the speech of these quilombola communities conditioned by the presence of a nasal sound segment? (ii) does a nasal sound segment following the vowel favor the occurrence of the nasality phenomenon? (iii) is there a difference between the F1 and F2 values of the oral and nasalized vowels in the two quilombola communities? The corpus comprised recordings of 24 interviews (12 female and 12 male speakers), for a total of 24 participants.

It was found that a following nasal sound segment tends to condition the nasalized vowel: in general, the vowel assimilates the lowered soft palate of the immediately following nasal consonant, although there are also cases of assimilation from a following nasal vowel segment (regressive assimilation). The stressed syllable tends to favor nasality, but nasalization also occurs in pretonic and postonic position. The F1 and F2 values of oral and nasalized vowels differ between the quilombola communities of Poções and Brejo dos Crioulos: the Brejo dos Crioulos group tends to produce the F1 of oral and nasalized vowels lower than the Poções group, and the F2 in a more anterior position. Nasality tends to occur in verbs and nouns, although it is not restricted to any grammatical category. The research found cases of spurious nasalization, confirming previous studies. It also revealed lexical items with a favorable context for nasalization in which nasalization nevertheless did not occur: contrary to the assumption of uniform soft-palate lowering in Brazilian Portuguese (BP), these vowels were pronounced without soft-palate lowering. That is, variation was detected in the phenomenon of nasalization in BP. With this work, we promote the discussion of nasality in order to contribute to linguistic studies of the functioning of Brazilian Portuguese in this geographical context.
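
For readers who want to reproduce this kind of formant measurement, the sketch below shows one way to obtain F1 and F2 at a vowel's temporal midpoint, assuming the parselmouth Python interface to Praat. The file name and interval times are placeholders; the study's own measurement settings are not reported in the abstract.

```python
# Minimal sketch, assuming the parselmouth bindings for Praat: read a recording
# and report F1/F2 at the midpoint of a hand-labelled vowel interval.
import parselmouth

def vowel_f1_f2(wav_path: str, start: float, end: float):
    sound = parselmouth.Sound(wav_path)
    formants = sound.to_formant_burg()          # Burg formant analysis, default settings
    mid = (start + end) / 2
    f1 = formants.get_value_at_time(1, mid)     # F1 in Hz at the vowel midpoint
    f2 = formants.get_value_at_time(2, mid)     # F2 in Hz at the vowel midpoint
    return f1, f2

# Example call with placeholder file name and interval times for one vowel token.
# print(vowel_f1_f2("speaker01.wav", 1.230, 1.345))
```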

Relevance: 10.00%

Abstract:

It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.

Relevance: 10.00%

Abstract:

There are two features of /æ/ in British Columbia (BC) English that are widely attested in the literature: it is undergoing retraction and lowering and it is sensitive to the influence of certain following consonants. The present study aims to utilize both features to evaluate the phonological status of /æ/ before nasal consonants in BC English by examining the progression of sound change and the phonemic organization of /æ/ in different environments. Specifically, production and perception results are taken together to evaluate the phonetic position of pre-nasal /æ/ relative to other environments. These results are interpreted within a modular feedforward architecture of phonology to establish the phonological (allophonic) and phonetic (shallowphonic) rules that govern the internal relationships between the subphonemic elements of /æ/ in BC English. Further, the findings of this study provide evidence for the allophone being the target of sound change, rather than the phoneme.
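
Evaluating the phonetic position of pre-nasal /æ/ against other environments essentially means comparing mean (F1, F2) values per environment. The sketch below illustrates that comparison with invented token values; it is not the study's analysis or data.

```python
# Minimal sketch: group hypothetical /ae/ tokens by following environment and
# report the mean (F1, F2) position for each environment.  Values are invented.
from collections import defaultdict

tokens = [
    ("pre-nasal", 700, 1900), ("pre-nasal", 690, 1950),
    ("elsewhere", 780, 1650), ("elsewhere", 800, 1600),
]

by_env = defaultdict(list)
for env, f1, f2 in tokens:
    by_env[env].append((f1, f2))

for env, values in by_env.items():
    mean_f1 = sum(v[0] for v in values) / len(values)
    mean_f2 = sum(v[1] for v in values) / len(values)
    print(f"{env}: mean F1 {mean_f1:.0f} Hz, mean F2 {mean_f2:.0f} Hz")
```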

Relevance: 10.00%

Abstract:

The stages of phonetic-phonological change have been described for decades, especially from an articulatory point of view and almost always on the basis of the written testimony that was available. Recently, however, new theories have emerged which hold that change can be explained through the study of variation and of the phonetic processes of present-day speech, since both are related to phenomena of hypo- (and hyper-) articulation and, ultimately, of coarticulation. One of these is Evolutionary Phonology (Blevins 2004), even though it does not offer a satisfactory explanation for the diffusion of change. In this study, these theories are used to clarify the causes of the evolution of two contexts of yod segunda, /nj/ and /lj/, which led to the phonologization of /ɲ/ and /ʎ/ in the first stage of the history of Spanish.

Relevance: 10.00%

Abstract:

In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children, between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, called the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, called the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or phonological awareness, as measured by the NRT; and (d) whether perception and production differ in three age-language status groups (i.e., KEB children, KEB adults, and KM children) and in three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD).

In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed using software called Systematic Analysis of Language Transcripts (SALT). Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, but not between the KEB children and adults. Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts.

Two existing models of bilingual speech perception and production—the Speech Learning Model or SLM (Flege, 1987, 1992) and the Perceptual Assimilation Model or PAM (Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001)—were considered to see if they could account for the perceptual and production patterns evident in the present study. The selected English sounds for stimuli in the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels, /ɪ, ɛ, æ/, which were used to create 30 nonsense syllables in a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds—namely, English sounds that are similar, but not identical, to L1 Korean, including /p, k, s, ʧ, ɛ/—or unfamiliar sounds—namely, English sounds that are new to L1, including /b, g, f, θ, z, ʤ, ɪ, æ/.
The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than a familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups.

Perception on the NSCM task was compared to production on the NSIT and NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations were significant for perceptual performance on the NSCM task and overall production performance on the NRT: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and NRT. Evidently these are two very different production tasks, where immediate imitation of single syllables on the NSIT results in high performance for all groups.

Findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance.

The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both SLM and PAM. In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than familiar ones, in keeping with SLM. Nonetheless, the second conclusion is that neither SLM nor PAM consistently and thoroughly explains the results of the present study. This is because both theories assume that the influence of L1 on the perception of L2 consonants and vowels works in the same way as for production of them. The third and fourth conclusions are two proposed arguments: that perception and production of consonants differ from those of vowels, and that sound type interacts with familiarity and L2 experience. These two arguments can best explain the current findings.
These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
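
The language-sample measures reported above (NDW, NTW, WPM) can be illustrated with a small sketch. The whitespace tokenisation below is a simplification for illustration only; SALT applies its own transcription and morpheme-counting conventions.

```python
# Minimal sketch of the kinds of language-sample measures described: number of
# different words (NDW), number of total words (NTW) and words per minute (WPM),
# computed from a list of utterances with naive whitespace tokenisation.
def sample_measures(utterances, duration_minutes):
    words = [w.lower().strip(".,?!") for u in utterances for w in u.split()]
    ntw = len(words)                  # number of total words
    ndw = len(set(words))             # number of different words
    wpm = ntw / duration_minutes      # speech rate in words per minute
    return ndw, ntw, wpm

# Hypothetical three-utterance sample lasting 30 seconds.
sample = ["I saw a dog today", "the dog was big", "it barked at me"]
print(sample_measures(sample, duration_minutes=0.5))  # (12, 13, 26.0)
```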

Relevance: 10.00%

Abstract:

This work is a description of Tajio, a Western Malayo-Polynesian language spoken in Central Sulawesi, Indonesia. It covers the essential aspects of Tajio grammar without being exhaustive. Tajio has a medium-sized phoneme inventory consisting of twenty consonants and five vowels. The language does not have lexical (word) stress; rather, it has a phrasal accent. This phrasal accent regularly occurs on the penultimate syllable of an intonational phrase, rendering this syllable auditorily prominent through a pitch rise. Possible syllable structures in Tajio are (C)V(C). CVN structures are allowed as closed syllables, but CVN syllables in word-medial position are not frequent. As in other languages in the area, the only consonant sequences allowed in native Tajio words are sequences of a nasal followed by a homorganic obstruent. The homorganic nasal-obstruent sequences found in Tajio can occur word-initially and word-medially but never in word-final position. As in many Austronesian languages, word class classification in Tajio is not straightforward. The classification of words in Tajio must be carried out on two levels: the morphosyntactic level and the lexical level. The open word classes in Tajio consist of nouns and verbs. Verbs are further divided into intransitive verbs (dynamic intransitive verbs and statives) and dynamic transitive verbs. Based on their morphological potential, lexical roots in Tajio fall into three classes: single-class roots, dual-class roots and multi-class roots. There are two basic transitive constructions in Tajio: Actor Voice (AV) and Undergoer Voice (UV), in which the actor or undergoer argument, respectively, serves as subject. Tajio shares many characteristics with symmetrical voice languages, yet it is not fully symmetric, as arguments in AV and UV are not equally marked. Neither subjects nor objects are marked in AV constructions. In UV constructions, however, subjects are unmarked while objects are marked either by prefixation or cliticization. Evidence from relativization, control and raising constructions supports the analysis that AV and UV are in fact transitive, with subject arguments and object arguments behaving alike in both voices. Only the subject can be relativized, controlled, raised or function as the implicit subject of subjectless adverbial clauses. In contrast, the objects of AV and UV constructions do not exhibit these features. Tajio is a predominantly head-marking language with basic A-V-O constituent order. V and O form a constituent, and the subject can either precede or follow this complex. Thus, basic word order is S-V-O or V-O-S. Subject as well as non-subject arguments may be omitted when contextually specified. Verbs are marked for voice and mood, the latter of which is obligatory. The two mood values distinguished are realis and non-realis. Depending on the type of predicate involved in clause formation, three clause types can be distinguished: verbal clauses, existential clauses and non-verbal clauses. Tajio has a small number of multi-verbal structures that appear to qualify as serial verb constructions. SVCs in Tajio always include a motion verb or a directional.
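
The phonotactic restriction on native consonant sequences described above can be expressed as a simple homorganicity check. The sketch below uses an illustrative, simplified place-of-articulation table and segment spellings rather than the full Tajio inventory.

```python
# Minimal sketch of the stated generalisation: the only native Tajio consonant
# sequences are a nasal followed by a homorganic obstruent.  The table below is
# a simplified illustration, not the actual Tajio phoneme inventory.
PLACE = {
    "m": "labial", "p": "labial", "b": "labial",
    "n": "alveolar", "t": "alveolar", "d": "alveolar", "s": "alveolar",
    "ng": "velar", "k": "velar", "g": "velar",
}
NASALS = {"m", "n", "ng"}

def is_valid_cluster(nasal: str, obstruent: str) -> bool:
    """True if the pair is a nasal plus a homorganic (same-place) obstruent."""
    return (nasal in NASALS
            and obstruent not in NASALS
            and PLACE.get(nasal) == PLACE.get(obstruent))

print(is_valid_cluster("m", "b"))   # True: homorganic labial sequence
print(is_valid_cluster("n", "k"))   # False: places of articulation differ
```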