87 results for Phoneme
Abstract:
INTRODUCTION The orthographic depth hypothesis (Katz and Feldman, 1983) posits that different reading routes are engaged depending on the type of grapheme/phoneme correspondence of the language being read. Shallow orthographies with consistent grapheme/phoneme correspondences favor encoding via non-lexical pathways, where each grapheme is sequentially mapped to its corresponding phoneme. In contrast, deep orthographies with inconsistent grapheme/phoneme correspondences favor lexical pathways, where phonemes are retrieved from specialized memory structures. This hypothesis, however, lacks compelling empirical support. The aim of the present study was to investigate the impact of orthographic depth on reading route selection using a within-subject design. METHOD We presented the same pseudowords (PWs) to highly proficient bilinguals and manipulated the orthographic depth of PW reading by embedding them in two separate language contexts, German or French, implying shallow or deep orthography, respectively. High-density electroencephalography was recorded during the task. RESULTS The topography of the ERPs to identical PWs differed 300-360 ms post-stimulus onset when the PWs were read in the two orthographic depth contexts, indicating that distinct brain networks were engaged in reading during this time window. The brain sources underlying these topographic effects were located within left inferior frontal (German > French), parietal (French > German) and cingulate (German > French) areas. CONCLUSION Reading in a shallow context favors non-lexical pathways, reflected in a stronger engagement of frontal phonological areas in the shallow versus the deep orthographic context. In contrast, reading PWs in a deep orthographic context recruits less routine non-lexical pathways, reflected in a stronger engagement of visuo-attentional parietal areas in the deep versus shallow orthographic context. Together, these results support a modulation of the reading route by orthographic depth.
Abstract:
Referred to as orthographic depth, the degree of consistency of grapheme/phoneme correspondences varies across languages from high in shallow orthographies to low in deep orthographies. The present study investigates the impact of orthographic depth on the reading route by analyzing evoked potentials to words in a deep (French) and a shallow (German) language presented to highly proficient bilinguals. ERP analyses of German and French words revealed significant topographic modulations 240-280 ms post-stimulus onset, indicative of distinct brain networks engaged in reading over this time window. Source estimations revealed that these effects stemmed from modulations of left insular, inferior frontal and dorsolateral regions (German > French) previously associated with phonological processing. Our results show that reading in a shallow language was associated with a stronger engagement of phonological pathways than reading in a deep language. Thus, the lexical pathways favored in word reading are reinforced by phonological networks more strongly in the shallow than in the deep orthography.
Abstract:
Converging evidence from eye movement experiments indicates that linguistic contexts influence reading strategies. However, the question of whether different linguistic contexts modulate eye movements during reading in the same bilingual individuals remains unresolved. We examined reading strategies in a transparent (German) and an opaque (French) language of early, highly proficient French–German bilinguals: participants read aloud isolated French and German words and pseudo-words while the First Fixation Location (FFL), its duration and its latency were measured. Since transparent linguistic contexts and pseudo-words favour direct grapheme/phoneme conversion, the reading strategy should be more local for German than for French words (FFL closer to the beginning), and no difference is expected in pseudo-words' FFL between contexts. Our results confirm these hypotheses, providing the first evidence that the same individuals engage different reading strategies depending on language opacity, and suggesting that a given brain process can be modulated by a given context.
Abstract:
Reading strategies vary across languages according to orthographic depth (the complexity of grapheme-to-phoneme conversion rules), notably at the level of eye movement patterns. We recently demonstrated that a group of early bilinguals, who had learned both languages equally before the age of seven, presented a first fixation location (FFL) closer to the beginning of words when reading in German than in French. Since German is known to be orthographically more transparent than French, this suggested that different strategies were being engaged depending on the orthographic depth of the language in use: opaque languages induce a global reading strategy, whereas transparent languages force a local/serial strategy. Pseudo-words, in contrast, were processed using a local strategy in both languages, suggesting that the link between word forms and their lexical representations may also play a role in selecting a specific strategy. In order to test whether corresponding effects appear in late bilinguals with low proficiency in their second language (L2), we present a new study in which we recorded eye movements while two groups of late German-French and French-German bilinguals read aloud isolated French and German words and pseudo-words. Since a transparent reading strategy is local and serial, with a high number of fixations per stimulus, and the proficiency of the bilingual participants' L2 is low, the impact of language opacity should be observed in the L1. We therefore predicted a global reading strategy if the bilinguals' L1 was French (FFL close to the middle of the stimulus with fewer fixations per stimulus) and a local and serial reading strategy if it was German. The L2 of each group, as well as pseudo-words, should likewise require a local and serial reading strategy.
Our results confirmed these hypotheses, suggesting that global word processing is only achieved by bilinguals with an opaque L1 when reading in an opaque language; low proficiency in the L2 gives way to a local and serial reading strategy. These findings stress that reading behavior is influenced not only by the linguistic mode but also by top-down factors, such as readers' proficiency.
The mismatch negativity (MMN) response to complex tones and spoken words in individuals with aphasia
Abstract:
Background: The mismatch negativity (MMN) is a fronto-centrally distributed event-related potential (ERP) that is elicited by any discriminable auditory change. It is an ideal neurophysiological tool for measuring the auditory processing skills of individuals with aphasia because it can be elicited even in the absence of attention. Previous MMN studies have shown that acoustic processing of tone or pitch deviance is relatively preserved in aphasia, whereas the basic acoustic processing of speech stimuli can be impaired (e.g., auditory discrimination). However, no MMN study has yet investigated the higher levels of auditory processing, such as language-specific phonological and/or lexical processing, in individuals with aphasia. Aims: The aim of the current study was to investigate the MMN response of normal and language-disordered subjects to tone stimuli and speech stimuli that incorporate the basic auditory processing (acoustic, acoustic-phonetic) levels of non-speech and speech sound processing, and also the language-specific phonological and lexical levels of spoken word processing. Furthermore, this study aimed to correlate the aphasic MMN data with language performance on a variety of tasks specifically targeted at the different levels of spoken word processing. Methods & Procedures: Six adults with aphasia (71.7 ± 3.0 years) and six healthy age-, gender-, and education-matched controls (72.2 ± 5.4 years) participated in the study. All subjects were right-handed and native speakers of English. Each subject was presented with complex harmonic tone stimuli, differing in pitch or duration, and consonant-vowel (CV) speech stimuli (non-word /de:/ versus real word /deI/). The probability of the deviant for each tone or speech contrast was 10%. The subjects were also presented with the same stimuli in behavioural discrimination tasks, and were administered a language assessment battery to measure their auditory comprehension skills.
Outcomes & Results: The aphasic subjects demonstrated attenuated MMN responses to complex tone duration deviance and to speech stimuli (words and non-words), and their responses to the frequency, duration, and real word deviant stimuli were found to strongly correlate with performance on the auditory comprehension section of the Western Aphasia Battery (WAB). Furthermore, deficits in attentional lexical decision skills demonstrated by the aphasic subjects correlated with a word-related enhancement demonstrated during the automatic MMN paradigm, providing evidence to support the word advantage effect, thought to reflect the activation of language-specific memory traces in the brain for words. Conclusions: These results indicate that the MMN may be used as a technique for investigating general and more specific auditory comprehension skills of individuals with aphasia, using speech and/or non-speech stimuli, independent of the individual's attention. The combined use of the objective MMN technique and current clinical language assessments may result in improved rehabilitative management of aphasic individuals.
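The oddball design described above (each deviant presented with 10% probability among standards) can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus script; the constraint that no two deviants occur back to back is an assumption common in MMN designs, not something the abstract specifies.

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.10, seed=0):
    """Generate an oddball stimulus sequence: 'S' = standard, 'D' = deviant.

    Assumed constraint (typical of MMN paradigms): deviants are never
    adjacent, so every deviant is preceded by at least one standard.
    """
    rng = random.Random(seed)
    n_dev = round(n_trials * deviant_prob)
    seq = ['S'] * n_trials
    placed = 0
    while placed < n_dev:
        i = rng.randrange(1, n_trials)
        # place a deviant only if this slot and its neighbours are standards
        if seq[i] == 'S' and seq[i - 1] == 'S' and (i + 1 == n_trials or seq[i + 1] == 'S'):
            seq[i] = 'D'
            placed += 1
    return seq

seq = oddball_sequence(500)
print(seq.count('D'))  # 50 deviants out of 500 trials
```

In a real experiment the 'S'/'D' labels would be mapped to the tone or CV stimulus files for each contrast (pitch, duration, word/non-word).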
Abstract:
Previous studies have indicated that consonant imprecision in Parkinson's disease (PD) may result from a reduction in the amplitude of lingual movements, or articulatory undershoot. While this has been postulated, direct measurement of the tongue's contact with the hard palate during speech production has not been undertaken. Therefore, the present study aimed to use electropalatography (EPG) to determine the exact nature of tongue-palate contact in a group of individuals with PD and consonant imprecision (n=9). Furthermore, the current investigation also aimed to compare the results of the participants with PD to groups of aged (n=7) and young (n=8) control speakers to determine the relative contribution of ageing of the lingual musculature to any articulatory deficits noted. Participants were required to read aloud the phrase 'I saw a ___ today' with the artificial palate in situ. Target words included the consonants /l/, /s/ and /t/ in initial position in both the /i/ and /a/ vowel environments. Phonetic transcription of phoneme productions and description of error types was completed. Furthermore, representative frames of contact were employed to describe the features of tongue-palate contact and to calculate spatial palatal indices. Results of the perceptual investigation revealed that perceived undershooting of articulatory targets distinguished the participant group with PD from the control groups. However, objective EPG assessment indicated that undershooting of the target consonant was not the cause of the perceived articulatory errors. It is, therefore, possible that reduced pressure of tongue contact with the hard palate, sub-lingual deficits or impaired articulatory timing resulted in the perceived undershooting of the target consonants.
Abstract:
To investigate the importance of the connection between being able to speak and the emergence of phonological awareness abilities, the performance of children with cerebral palsy (five speakers and six non-speakers) was assessed at syllable, onset-rime, and phoneme levels. The children were matched with control groups of children for non-verbal intelligence. No group differences were found for the identification of syllables, reading non-words, or judging spoken rhyme. The children with cerebral palsy who could speak, however, performed better than the children with cerebral palsy who could not speak and the control group of children without disabilities, judging written words for rhyme. The children with cerebral palsy who could not speak performed poorly in comparison to those who could speak (but not the control group of children) when segmenting syllables and on the phoneme manipulation task. The findings suggest that non-speaking children with cerebral palsy have phonological awareness performance that varies according to the mental processing demands of the task. The ability to speak facilitates performance when phonological awareness tasks (written rhyme judgment, syllable segmentation, and phoneme manipulation) require the use of an articulatory loop.
Abstract:
We present the case of two aphasic patients: one with fluent speech, MM, and one with dysfluent speech, DB. Both patients make similar proportions of phonological errors in speech production, and the errors have similar characteristics. A closer analysis, however, shows a number of differences. DB's phonological errors involve, for the most part, simplifications of syllabic structure; they affect consonants more than vowels; and, among vowels, they show effects of sonority/complexity. This error pattern may reflect articulatory difficulties. MM's errors, instead, show little effect of syllable structure, affect vowels at least as much as consonants, and affect all different vowels to a similar extent. This pattern is consistent with a more central impairment involving the selection of the right phoneme among competing alternatives. We propose that, at this level, vowel selection may be more difficult than consonant selection because vowels belong to a smaller set of repeatedly activated units.
Phonological–lexical activation: a lexical component or an output buffer? Evidence from aphasic errors
Abstract:
Single word production requires that phoneme activation is maintained while articulatory conversion is taking place. Word serial recall, connected speech and non-word production (repetition and spelling) are all assumed to involve a phonological output buffer. A crucial question is whether the same memory resources are also involved in single word production. We investigate this question by assessing length and positional effects in the single word repetition and reading of six aphasic patients. We expect a damaged buffer to result in error rates per phoneme that increase with word length, and in positional effects. Although our patients had trouble with phoneme activation (they made mainly errors of phoneme selection), they did not show the effects expected from a buffer impairment. These results show that phoneme activation cannot be automatically equated with a buffer. We hypothesize that the phonemes of existing words are kept active through permanent links to the word node. Thus, the sustained activation needed for their articulation will come from the lexicon and will have different characteristics from the activation needed for the short-term retention of an unbound set of units. We conclude that there is no need and no evidence for a phonological buffer in single word production.
Abstract:
The mappings from grapheme to phoneme are much less consistent in English than they are in most other languages. Therefore, the differences found between English-speaking dyslexics and controls on sensory measures of temporal processing might be related more to the irregularities of English orthography than to a general deficit affecting reading ability in all languages. However, here we show that poor readers of Norwegian, a language with a relatively regular orthography, are less sensitive than controls to dynamic visual and auditory stimuli. Consistent with results from previous studies of English readers, detection thresholds for visual motion and auditory frequency modulation (FM) were significantly higher in 19 poor readers of Norwegian compared to 22 control readers of the same age. Over two-thirds (68.4%) of the children identified as poor readers were less sensitive than controls to either or both of the visual coherent motion or auditory 2 Hz FM stimuli.
Abstract:
The essential first step for a beginning reader is to learn to match printed forms to phonological representations. For a new word, this is an effortful process in which each grapheme must be translated individually (serial decoding). The role of phonological awareness in developing a decoding strategy is well known. We examined whether beginner readers recruit different skills depending on the nature of the words being read (familiar words vs. nonwords). Print knowledge, phoneme and rhyme awareness, rapid automatized naming (RAN), phonological short-term memory (STM), nonverbal reasoning, vocabulary, auditory skills and visual attention were measured in 392 pre-readers aged 4 to 5 years. Word and nonword reading were measured 9 months later. We used structural equation modeling to examine the relationship between pre-reading skills and reading, and modeled correlations between our two reading outcomes and among all pre-reading skills. We found that a broad range of skills were associated with reading outcomes: early print knowledge, phonological STM, phoneme awareness and RAN. Whereas all these skills were directly predictive of nonword reading, early print knowledge was the only direct predictor of word reading. Our findings suggest that beginner readers draw most heavily on their existing print knowledge to read familiar words.
Abstract:
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production, because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage, while re-syllabifications affect only a minimal part of phonological representations and occur only in some languages, depending on speech register. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.
Abstract:
In this article, we present the first open-access lexical database that provides phonological representations for 120,000 Italian word forms. Each entry also includes syllable boundaries, stress markings and a comprehensive range of lexical statistics. Using data derived from this lexicon, we have also generated a set of derived databases providing positional frequency estimates for Italian phonemes, syllables, syllable onsets and codas, and character and phoneme bigrams. These databases are freely available from phonitalia.org. This article describes the methods, content, and summary statistics for these databases. In a first application of this database, we also demonstrate how the distribution of phonological substitution errors made by Italian aphasic patients is related to phoneme frequency.
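The positional frequency and phoneme bigram statistics described above can be computed from any phonologically transcribed lexicon. The sketch below uses a four-word toy lexicon with hypothetical transcriptions (not the PhonItalia data or its file format) purely to show the counting scheme.

```python
from collections import Counter

# Toy phonological lexicon (hypothetical transcriptions, not PhonItalia data):
# each entry is one word form represented as a list of phoneme symbols.
lexicon = [
    ['k', 'a', 's', 'a'],   # casa
    ['k', 'a', 'n', 'e'],   # cane
    ['p', 'a', 'n', 'e'],   # pane
    ['s', 'a', 'l', 'e'],   # sale
]

# Positional frequency: how often each phoneme occurs at each word position.
positional = Counter((pos, ph) for word in lexicon
                     for pos, ph in enumerate(word))

# Phoneme bigram frequency: counts of adjacent phoneme pairs within words.
bigrams = Counter((a, b) for word in lexicon
                  for a, b in zip(word, word[1:]))

print(positional[(0, 'k')])  # /k/ word-initially: 2
print(bigrams[('a', 'n')])   # /an/ bigram: 2
```

Type or token frequency weighting (e.g., multiplying each word's counts by its corpus frequency) would be a natural extension for error-distribution analyses like the one reported above.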
Abstract:
Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by tasks administered across the two types of stimuli. For example, phonological tasks typically tap explicit awareness (e.g., phoneme deletion), while auditory tasks usually measure implicit awareness (e.g., frequency discrimination). Therefore, the stronger predictive power of speech tasks may be due to their higher processing demands, rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. A total of 800 beginning readers at the onset of literacy instruction (mean age 4 years and 7 months) were assessed on the above tasks as well as word reading and letter knowledge in the first wave of a three-time-point longitudinal study. Results: Time 1 results reveal a significantly higher association between letter-sound knowledge and all of the speech compared to non-speech tasks. Performance was better for phoneme than tone stimuli, and worse for deletion than isolation and repetition across all stimuli. Conclusions: Results are consistent with phonological accounts of reading and suggest that the level of processing required by the task is less important than stimulus type in predicting the earliest stage of reading.