993 results for Chinese word segmentation
Abstract:
Statistical learning can be used to extract the words from continuous speech. Gómez, Bion, and Mehler (Language and Cognitive Processes, 26, 212–223, 2011) proposed an online measure of statistical learning: They superimposed auditory clicks on a continuous artificial speech stream made up of a random succession of trisyllabic nonwords. Participants were instructed to detect these clicks, which could be located either within or between words. The results showed that, over the course of exposure, reaction times (RTs) increased more for within-word than for between-word clicks. This result has been attributed to statistical learning of the between-word boundaries. However, even though statistical learning occurs without an intention to learn, it nevertheless requires attentional resources. Therefore, this process could be affected by a concurrent task such as click detection. In the present study, we evaluated the extent to which the click detection task indeed reflects successful statistical learning. Our results suggest that the emergence of RT differences between within- and between-word click detection is neither systematic nor related to the successful segmentation of the artificial language. Therefore, instead of being an online measure of learning, the click detection task seems to interfere with the extraction of statistical regularities.
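To make the segmentation mechanism concrete, here is a minimal sketch (not from the paper) of the transitional-probability computation usually assumed to drive this kind of statistical learning; the nonwords and two-letter syllable inventory are invented for illustration:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(B | A) = count(A followed by B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from three trisyllabic nonwords in random succession,
# mimicking the artificial-language design (each syllable is 2 letters).
stream = "pabiku tibudo golatu pabiku golatu tibudo pabiku tibudo golatu"
syllables = [w[i:i + 2] for w in stream.split() for i in range(0, 6, 2)]

tps = transitional_probabilities(syllables)
# Within-word transitions (pa->bi, bi->ku, ...) approach TP = 1.0, while
# between-word transitions are lower, so word boundaries sit at TP dips.
for pair, tp in sorted(tps.items(), key=lambda kv: kv[1]):
    print(pair, round(tp, 2))
```

Word boundaries correspond to the dips in transitional probability, which is exactly where the between-word clicks fall in the Gómez et al. design.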
Abstract:
Studies in sensory neuroscience reveal the critical importance of accurate sensory perception for cognitive development. There is considerable debate concerning the possible sensory correlates of phonological processing, the primary cognitive risk factor for developmental dyslexia. Across languages, children with dyslexia have a specific difficulty with the neural representation of the phonological structure of speech. The identification of a robust sensory marker of phonological difficulties would enable early identification of risk for developmental dyslexia and early targeted intervention. Here, we explore whether phonological processing difficulties are associated with difficulties in processing acoustic cues to speech rhythm. Speech rhythm is used across languages by infants to segment the speech stream into words and syllables. Early difficulties in perceiving auditory sensory cues to speech rhythm and prosody could lead developmentally to impairments in phonology. We compared matched samples of children with and without dyslexia, learning three very different spoken and written languages: English, Spanish, and Chinese. The key sensory cue measured was the rate of onset of the amplitude envelope (rise time), known to be critical for the rhythmic timing of speech. Despite phonological and orthographic differences, for each language, rise time sensitivity was a significant predictor of phonological awareness, and rise time was the only consistent predictor of reading acquisition. The data support a language-universal theory of the neural basis of developmental dyslexia based on rhythmic perception and syllable segmentation. They also suggest that novel remediation strategies based on rhythm and music may offer benefits for phonological and linguistic development.
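As an illustration of the key measure, the following is a minimal sketch of one common way to estimate envelope rise time (time from 10% to 90% of peak amplitude, with the envelope extracted via the Hilbert transform); the smoothing cutoff and the synthetic test tone are assumptions for the example, not the study's protocol:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_rise_time(signal, fs, lo=0.1, hi=0.9):
    """Estimate onset rise time: the time the amplitude envelope takes
    to climb from lo*peak to hi*peak (a common 10-90% convention)."""
    env = np.abs(hilbert(signal))              # amplitude envelope
    b, a = butter(2, 30 / (fs / 2))            # smooth below ~30 Hz
    env = filtfilt(b, a, env)
    peak = env.argmax()
    onset = env[:peak + 1]
    t_lo = np.argmax(onset >= lo * env[peak])  # first crossing of 10%
    t_hi = np.argmax(onset >= hi * env[peak])  # first crossing of 90%
    return (t_hi - t_lo) / fs                  # rise time in seconds

# Example: a synthetic 500 ms tone at 16 kHz with a 100 ms linear rise,
# which should yield a 10-90% rise time of roughly 80 ms.
fs = 16000
t = np.linspace(0, 0.5, int(0.5 * fs), endpoint=False)
ramp = np.clip(t / 0.1, 0, 1)
tone = ramp * np.sin(2 * np.pi * 440 * t)
print(f"rise time ≈ {envelope_rise_time(tone, fs) * 1000:.0f} ms")
```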
Abstract:
Using fMRI, we conducted two types of property generation tasks that involved language switching, with early bilingual speakers of Korean and Chinese. The first is a more conventional task in which a single language (L1 or L2) was used within each trial but switched randomly from trial to trial. The other uses a novel experimental design in which language switching happens within each trial, alternating the direction of the L1/L2 translation required. Our findings support a recently introduced cognitive model, the 'hodological' view of language switching proposed by Moritz-Gasser and Duffau. The nodes of the distributed neural network that this model proposes are consistent with the informative regions we extracted in this study using both GLM methods and multivariate pattern analyses: the supplementary motor area, caudate, supramarginal gyrus, and fusiform gyrus, among other cortical areas.
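For readers unfamiliar with the MVPA side of such analyses, here is a minimal sketch of the decoding logic by which a region is called "informative"; the data shapes, ROI, and classifier choice are placeholders for illustration, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: one activity pattern per trial from a single region
# of interest (e.g., supplementary motor area); labels code the trial's
# language condition (0 = L1, 1 = L2). Random data yields chance accuracy.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))     # 80 trials x 200 voxels
y = rng.integers(0, 2, size=80)    # language condition per trial

# A region is "informative" in the MVPA sense if a linear classifier
# decodes the condition above chance under cross-validation.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"mean decoding accuracy: {acc:.2f} (chance = 0.50)")
```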
Abstract:
Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, or the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects were found to spend the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor was performance correlated with a behavioral measure of autistic-like traits (the Social Responsiveness Scale; SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered either the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
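As a sketch of the kind of individual-differences analysis reported, the following simulates per-subject dwell times on two areas of interest and tests their correlation with segmentation performance; all numbers and variable names are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject data: total dwell time (s) on two areas of
# interest, plus segmentation accuracy at test. All values are simulated.
rng = np.random.default_rng(1)
n = 30
eyes = rng.uniform(5, 40, n)              # dwell time on the eyes
mouth = rng.uniform(5, 40, n)             # dwell time on the mouth
segmentation = rng.uniform(0.4, 0.9, n)   # proportion correct at test

mouth_prop = mouth / (eyes + mouth)       # relative preference for the mouth

# A null result like the one reported appears as a near-zero correlation.
r, p = pearsonr(mouth_prop, segmentation)
print(f"r = {r:.2f}, p = {p:.3f}")
```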
Abstract:
In this paper, we present an innovative topic segmentation system based on a new informative similarity measure that takes word co-occurrence into account in order to avoid relying on existing linguistic resources such as electronic dictionaries or lexico-semantic databases such as thesauri or ontologies. Topic segmentation is the task of breaking documents into topically coherent multi-paragraph subparts, and it has been used extensively in information retrieval and text summarization. In particular, our architecture proposes a language-independent topic segmentation system that solves three main problems evidenced by previous research: systems based solely on lexical repetition show reliability problems; systems based on lexical cohesion depend on linguistic resources that are usually available only for dominant languages and consequently do not apply to less favored languages; and other systems require previously harvested training data. For that purpose, we use only statistics on words and sequences of words computed from a set of texts. This approach provides a flexible solution that may narrow the gap between dominant and less favored languages, allowing equivalent access to information.
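As a rough illustration of the block-comparison skeleton common to such systems, here is a minimal TextTiling-style sketch that places boundaries where lexical cosine similarity between adjacent paragraphs drops; note that the paper's own measure adds co-occurrence statistics precisely to go beyond this bare-repetition baseline, and the example paragraphs and threshold are invented:

```python
import re
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two word-count bags."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def topic_boundaries(paragraphs, threshold=0.3):
    """Place a boundary after paragraph i when the lexical similarity
    between paragraphs i and i+1 falls below the threshold."""
    bags = [Counter(re.findall(r"\w+", p.lower())) for p in paragraphs]
    return [i for i in range(len(bags) - 1)
            if cosine(bags[i], bags[i + 1]) < threshold]

paras = [
    "The reactor core temperature rose steadily during the test.",
    "Engineers restored the core temperature to nominal levels.",
    "Meanwhile, the finance team reviewed quarterly budget figures.",
]
print(topic_boundaries(paras))  # [1]: a topic shift after the second paragraph
```

A resource-free design like this runs on any language whose words can be tokenized, which is the sense in which such systems narrow the gap between dominant and less favored languages.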