155 results for SYLLABLES


Relevance: 10.00%

Publisher:

Abstract:

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
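The segmentation ability this abstract describes rests on tracking transitional probabilities between adjacent syllables. As a minimal sketch (not code from the study; the syllable labels and stream are invented), the computation looks like this:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair.

    In statistical-learning streams, word boundaries tend to fall
    where this probability dips.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]]
            for pair, n in pair_counts.items()}

# Hypothetical stream built from two "words", bi-da-ku and pa-do-ti.
stream = "bi da ku pa do ti bi da ku bi da ku pa do ti".split()
tp = transitional_probabilities(stream)
# Within-word transitions are certain (bi -> da: 1.0), while the
# word-boundary transition ku -> bi is much weaker (1/3).
```

A McGurk-induced illusory syllable would change which tokens enter `stream`, and therefore which transitions look like word boundaries, which is exactly the manipulation the study exploits.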

Relevance: 10.00%

Publisher:

Abstract:

Audio-visual documents obtained from German TV news are classified according to the IPTC topic categorization scheme. To this end, usual text-classification techniques are adapted to speech, video, and non-speech audio. For each of the three modalities, word analogues are generated: sequences of syllables for speech, “video words” based on low-level color features (color moments, color correlogram, and color wavelet), and “audio words” based on low-level spectral features (spectral envelope and spectral flatness) for non-speech audio. Such audio and video words provide a means to represent the different modalities in a uniform way. Audio-visual documents are then represented by the frequencies of these word analogues: the standard bag-of-words approach. Support vector machines are used for supervised classification in a one-versus-rest (1 vs. n) setting. Classification based on speech outperforms all other single modalities. Combining speech with non-speech audio improves classification, and classification is further improved by supplementing speech and non-speech audio with video words. Optimal F-scores range between 62% and 94%, corresponding to 50% to 84% above chance. The optimal combination of modalities depends on the category to be recognized. The construction of audio and video words from low-level features provides a good basis for the integration of speech, non-speech audio, and video.
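The uniform bag-of-words representation described above can be sketched in a few lines. Everything here is invented for illustration (token names, reference documents), and a simple cosine score stands in for the paper's SVM classifier:

```python
from collections import Counter
from math import sqrt

def bag_of_words(*modality_tokens):
    """Merge token sequences from several modalities (speech syllables,
    'video words', 'audio words') into one frequency vector."""
    bag = Counter()
    for tokens in modality_tokens:
        bag.update(tokens)
    return bag

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document: speech syllables + video words + audio words.
doc = bag_of_words(["po", "li", "tik", "po"], ["v17", "v3"], ["a8"])
ref_politics = bag_of_words(["po", "li", "tik"], ["v17"], ["a2"])
ref_sports = bag_of_words(["go", "al", "go"], ["v9"], ["a8"])

sim_pol = cosine(doc, ref_politics)
sim_spo = cosine(doc, ref_sports)
# The mixed-modality vector sits closer to the politics reference.
```

The point of the sketch is the merge step: once every modality is reduced to discrete "words", the modalities become interchangeable features in a single vector, which is what lets a standard text classifier consume video and audio.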

Relevance: 10.00%

Publisher:

Abstract:

Despite being one of the most extensively researched of Eastern Himalayan languages, the basic morphological and phonological-prosodic properties of Apatani (Tibeto-Burman > Tani > Western) have not yet been adequately described. This article attempts such a description, focusing especially on interactions between segmental-syllabic phonology and tone in Apatani. We highlight three features in particular, vowel length, nasality, and a glottal stop, which contribute to contrastively-weighted syllables in Apatani; these features are consistently under-represented in previous descriptions of Apatani, and in their absence tone in Apatani cannot be effectively analysed. We conclude that Apatani has two "underlying", lexically-specified tone categories, H and L, whose interaction with word structure and syllable weight produces a maximum of three "surface" pitch contours, level, falling, and rising, on disyllabic phonological words. Two appendices provide a set of diagnostic procedures for the discovery and description of Apatani tone categories, as well as an Apatani lexicon of approximately one thousand entries.

Relevance: 10.00%

Publisher:

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While evidence from prior imaging studies has found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, these studies do not provide a clear mechanism as to how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether the activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information.
We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.

Relevance: 10.00%

Publisher:

Abstract:

OBJECTIVES The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. DESIGN Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm, and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. RESULTS Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes specifically in the N2 and early parts of the late discriminative negativity components, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production studies was a mild delay in regulating speech intensity. CONCLUSIONS In addition to previously reported deficits of sound-feature discrimination, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.

Relevance: 10.00%

Publisher:

Abstract:

I report on language variation in the unresearched variety of English emerging on Kosrae, Federated States of Micronesia. English is spoken as the inter-island lingua franca throughout Micronesia and has been the official language of FSM since independence in 1986, though the country retains close ties with the US through an economic "compact" agreement. I present here an analysis of a corpus of over 90 Kosraean English speakers, compiled during a three-month fieldwork trip to the island in the Western Pacific. The 45-minute sociolinguistically sensitive recordings are drawn from a corpus of old and young speakers, with varying levels of education, occupations, and off-island experience. In the paper I analyse two variables. The first is the realisation of /h/, often subject to deletion in both L1 and L2 varieties of English. Such deletion is commonly associated with Cockney English, but is also found in Caribbean English and the postcolonial English of Australia. For example:

Male, 31: yeah I build their house their local huts and they pay me

/h/ deletion is frequent in Kosraean English but, perhaps expectedly, occurs slightly less among people with higher contact with American English through having spent longer periods off island. The second feature under scrutiny is the variable epenthesis of [h] to provide a consonantal onset to vowel-initial syllables:

Male, 31: that guy is really hold now

This practice is also found beyond Kosraean English. Previous studies find h-epenthesis arising in L1 varieties including Newfoundland and Tristan da Cunha English, while similar manifestations are identified in Francophone L2 learners of English. My variationist statistical analysis has shown [h] insertion:

- to disproportionately occur intervocalically;
- to be constrained by both speaker gender and age: older males are much more likely to epenthesize [h] in their speech;
- to be more likely in the onset of stressed as opposed to unstressed syllables.

In light of these findings, I consider the relationship between h-deletion and h-epenthesis, the plausibility of hypercorrection as a motivation for the variation, and the potential influence of the substrate language, alongside sociolinguistic factors such as attitudes towards the US shaped by mobility. The analysis sheds light on the extent to which different varieties share this characteristic and their comparability in terms of linguistic constraints and attributes.

Clarke, S. (2010). Newfoundland and Labrador English. Edinburgh: Edinburgh University Press.
Hackert, S. (2004). Urban Bahamian Creole: System and Variation. Varieties of English Around the World G32. Amsterdam: Benjamins.
Milroy, J. (1983). On the sociolinguistic history of h-dropping in English. In Current Topics in English Historical Linguistics. Odense: Odense University Press.
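The per-context tallies behind this kind of variationist claim reduce to proportions of insertion per phonological environment. A minimal sketch, with invented counts that merely echo the reported pattern (the real analysis used multivariate statistics over coded tokens):

```python
from collections import defaultdict

def insertion_rates(observations):
    """observations: (context, inserted) pairs, e.g. ('intervocalic', True).
    Returns the proportion of [h]-insertion per phonological context."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for context, inserted in observations:
        totals[context] += 1
        if inserted:
            hits[context] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Invented token counts: insertion favoured intervocalically.
obs = ([("intervocalic", True)] * 6 + [("intervocalic", False)] * 4
       + [("postconsonantal", True)] * 1 + [("postconsonantal", False)] * 9)
rates = insertion_rates(obs)
# rates["intervocalic"] is 0.6 vs 0.1 postconsonantally.
```

In practice each observation would also carry speaker gender, age, and stress, so that the same tally generalises to the constraints listed above.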

Relevance: 10.00%

Publisher:

Abstract:

Neurons in the songbird forebrain area HVc (hyperstriatum ventrale pars caudale or high vocal center) are sensitive to the temporal structure of the bird's own song and are capable of integrating auditory information over a period of several hundred milliseconds. Extracellular studies have shown that the responses of some HVc neurons depend on the combination and temporal order of syllables from the bird's own song, but little is known about the mechanisms underlying these response properties. To investigate these mechanisms, we recorded intracellular responses to a set of auditory stimuli designed to assess the degree of dependence of the responses on temporal context. This report provides evidence that HVc neurons encode information about temporal structure by using a variety of mechanisms including syllable-specific inhibition, excitatory postsynaptic potentials with a range of different time courses, and burst-firing nonlinearity. The data suggest that the sensitivity of HVc neurons to temporal combinations of syllables results from the interactions of several cells and does not arise in a single step from afferent inputs alone.

Relevance: 10.00%

Publisher:

Abstract:

One-page handwritten list of 20 numbered theses in Latin presumed to be copied by Bela Lincoln. The document is signed "Lincoln 1754." The document title translates as "Grammar of letters, syllables, words, and sentences" and includes all of the nine theses listed in the "Theses Grammaticae" section of the 1754 Commencement broadside.

Relevance: 10.00%

Publisher:

Abstract:

The objective of this thesis is to study the development of auditory attention and language-discrimination abilities in children born preterm or at term. The last months of pregnancy are particularly important for the child's brain development, and the consequences of a premature birth on development can be considerable. Children born preterm are at greater risk of developing a variety of neurodevelopmental disorders than children born at term. Even in the absence of visible brain damage, many children born preterm are at risk of disorders such as language delays or attentional difficulties. In this thesis, we therefore propose a method for investigating auditory pre-attentional processes and language discrimination, using high-density electrophysiology and auditory evoked potentials (AEPs). Two studies were carried out. The first aimed to establish a protocol for evaluating auditory attention and language discrimination in healthy individuals, covering different stages of development (3 to 7 years, 8 to 13 years, adults; N = 40). To this end, we analysed the mismatch negativity (MMN) component evoked by the presentation of verbal sounds (the syllables /Ba/ and /Da/) and non-verbal sounds (synthesized tones, Ba: 1578 Hz/2800 Hz; Da: 1788 Hz/2932 Hz). The results revealed distinct activation patterns depending on age and stimulus type. In all age groups, the non-verbal stimuli evoked an MMN of greater amplitude and shorter latency than the verbal stimuli. Moreover, in response to the verbal stimuli, both groups of children (3 to 7 years, 8 to 13 years) showed an MMN of later latency than that measured in the adult group.
In contrast, in response to the non-verbal stimuli, only the group of children aged 3 to 7 showed an MMN of later latency than the adult group. Verbal discrimination processes therefore seem to develop later in childhood than non-verbal discrimination processes. In the second study, we aimed to identify predictive markers of the attentional and language deficits that can result from premature birth, using AEPs and the MMN. We applied the same protocol to 74 children aged 3, 12, and 36 months, born preterm (before 34 weeks of gestation) or at term (at least 37 weeks of gestation). The results revealed that preterm children of all ages showed a significant delay in the latency of the MMN and P150 responses relative to term-born children during the presentation of verbal sounds. Moreover, the later MMN and P150 latencies were also correlated with weaker language performance in a neurodevelopmental assessment. However, no difference was observed between term-born and preterm children in the discrimination of the non-verbal stimuli, suggesting preserved auditory pre-attentional abilities in preterm children. Overall, the results of this thesis indicate that auditory pre-attentional processes develop earlier in childhood than those associated with language discrimination. The neural networks involved in verbal discrimination are still immature at the end of childhood, and they appear to be particularly vulnerable to the physiological impacts of prematurity. The use of AEPs and the MMN in response to verbal stimuli at an early age may provide predictive markers of the language difficulties frequently observed in preterm children.