950 results for read-aloud
Abstract:
The common focus of the studies brought together in this work is the prosodic segmentation of spontaneous speech. The theoretically most central aspect is the introduction and further development of the IJ-model of intonational chunking. The study consists of a general introduction and five detailed studies that approach prosodic chunking from different perspectives. The data consist of recordings of face-to-face interaction in several spoken varieties of Finnish and Finland Swedish; the methodology is usage-based and qualitative. The term “speech prosody” refers primarily to the melodic and rhythmic characteristics of speech. Both speaking and understanding speech require the ability to segment the flow of speech into suitably sized prosodic chunks. In order to be usage-based, a study of spontaneous speech consequently needs to be based on material that is segmented into prosodic chunks of various sizes. The segmentation is seen to form a hierarchy of chunking. The prosodic models that have so far been developed and employed in Finland have been based on sentences read aloud, which has made it difficult to apply these models in the analysis of spontaneous speech. The prosodic segmentation of spontaneous speech has not previously been studied in detail in Finland. This research focuses mainly on the following three questions: (1) What are the factors that need to be considered when developing a model of prosodic segmentation of speech, so that the model can be employed regardless of the language or dialect under analysis? (2) What are the characteristics of a prosodic chunk, and what are the similarities in the ways chunks of different languages and varieties manifest themselves that will make it possible to analyze different data according to the same criteria? (3) How does the IJ-model of intonational chunking introduced as a solution to question (1) function in practice in the study of different varieties of Finnish and Finland Swedish? The boundaries of the prosodic chunks were manually marked in the material according to context-specific acoustic and auditory criteria. On the basis of the data analyzed, the IJ-model was further elaborated and implemented, thus allowing comparisons between different language varieties. On the basis of the empirical comparisons, a prosodic typology is presented for the dialects of Swedish in Finland. The general contention is that the principles of the IJ-model can readily be used as a methodological tool for prosodic analysis irrespective of language varieties.
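The IJ-model itself is specified in the thesis; purely as an illustration of how one acoustic cue used in such segmentation (silent pause duration between words) can be turned into candidate chunk boundaries, a minimal sketch follows. The word timings and the 250 ms threshold are invented for the example, not taken from the study.

```python
# Hypothetical sketch: propose candidate prosodic-chunk boundaries from inter-word pauses.
# This is NOT the IJ-model; it only illustrates one acoustic cue (silent pause duration).
from dataclasses import dataclass

@dataclass
class Word:
    label: str
    start: float  # seconds
    end: float    # seconds

def candidate_boundaries(words, min_pause=0.25):
    """Return word indices after which the silent gap exceeds min_pause seconds."""
    boundaries = []
    for i in range(len(words) - 1):
        gap = words[i + 1].start - words[i].end
        if gap >= min_pause:
            boundaries.append(i)
    return boundaries

# Example with made-up timings for a short Finnish utterance
utterance = [Word("no", 0.00, 0.20), Word("siis", 0.22, 0.55),
             Word("mä", 0.95, 1.10), Word("ajattelin", 1.12, 1.70)]
print(candidate_boundaries(utterance))  # -> [1]
```

In the studies themselves the boundaries were marked manually using context-specific acoustic and auditory criteria; a pause threshold like this would only be one starting point among several cues.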
Abstract:
Objectives. The sentence span task is a complex working memory span task used for estimating total working memory capacity for both processing (sentence comprehension) and storage (remembering a set of words). Several traditional models of working memory suggest that performance on these tasks relies on phonological short-term storage. However, long-term memory effects as well as the effects of expertise and strategies have challenged this view. This study uses a working memory task that aids the creation of retrieval structures in the form of stories, which have been shown to form integrated structures in long-term memory. The research question is whether sentence and story contexts boost memory performance in a complex working memory task. The hypothesis is that storage of the words in the task takes place in long-term memory. Evidence of this would be better recall for words as parts of sentences than for separate words, and, particularly, a beneficial effect for words as part of an organized story. Methods. Twenty stories consisting of five sentences each were constructed, and the stimuli in all experimental conditions were based on these sentences and sentence-final words, reordered and recombined for the other conditions. Participants read aloud sets of five sentences that either formed a story or not. In one condition they had to report all the last words at the end of the set; in another, they memorized an additional separate word with each sentence. The sentences were presented on the screen one word at a time (500 ms). After the presentation of each sentence, the participant verified a statement about the sentence. After five sentences, the participant repeated back the words in their correct positions. Experiment 1 (n=16) used immediate recall; experiment 2 (n=21) used both immediate recall and recall after a distraction interval (the operation span task). In experiment 2 a distracting mental arithmetic task was presented instead of recall in half of the trials, and an individual word was added before each sentence in the two experimental conditions in which the participants were to memorize the sentence-final words. Subjects also performed a listening span task (in experiment 1) or an operation span task (in experiment 2) to allow comparison of the estimated span and performance in the story task. Results were analysed using correlations, repeated-measures ANOVA and a chi-square goodness-of-fit test on the distribution of errors. Results and discussion. Both the relatedness of the sentences (the story condition) and the inclusion of the words in sentences helped memory. An interaction showed that the story condition had a greater effect on last words than on separate words. The beneficial effect of the story was shown in all serial positions. The effects remained in delayed recall. When the sentences formed stories, performance in verification of the statements about sentence context was better. This, as well as the differing distributions of errors in the different experimental conditions, suggests that different levels of representation are in use in the different conditions. In the story condition, these representations could take the form of an organized memory structure, a situation model. The other working memory tasks had only a few weak correlations with the story task, which could indicate that different processes are in use in the tasks. The results do not support short-term phonological storage, but are instead compatible with the words being encoded into long-term memory during the task.
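As a minimal sketch of how recall in such a span trial might be scored by serial position; the words and the strict-position scoring rule here are illustrative assumptions, not the study's actual materials or procedure.

```python
# Hypothetical sketch of strict serial-position scoring for a five-sentence span trial.
def score_trial(presented, recalled):
    """Count words recalled in their correct serial position."""
    return sum(1 for pos, word in enumerate(presented)
               if pos < len(recalled) and recalled[pos] == word)

presented = ["garden", "letter", "bridge", "candle", "harbour"]  # sentence-final words
recalled  = ["garden", "bridge", None, "candle", "harbour"]      # participant's response
print(score_trial(presented, recalled))  # -> 3
```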
Abstract:
In the field of second language (L2) acquisition, the term 'foreign accent' is often used to refer to speech characteristics that differ from the pronunciation of native speakers. Foreign accent may affect the intelligibility and perceived comprehensibility of speech, and it is also sometimes associated with negative attitudes. The degree of L2 learners' foreign accent and the speech characteristics that account for it have previously been studied through speech perception experiments and acoustic measurements. Perception experiments have shown that native listeners are easily able to identify foreign accent in speech. However, to date, no studies have been done on the assessment of foreign accent in the speech of non-native speakers of Finnish. The aim of this study is to examine how native speakers of Finnish rate the degree of foreign accentedness in the speech of Russian L2 learners of Finnish. Furthermore, phonetic analysis is used to study the characteristics of speech that affect the perceived strength of foreign accent. Altogether 96 native speakers of Finnish listened to excerpts of read-aloud and spontaneous Finnish speech from ten Russian and six Finnish female speakers. The Russian speakers were intermediate and advanced learners of Finnish and had all immigrated to Finland as adults. Among the listeners was a group of teachers of Finnish as an L2, and it was presumed that these teachers had been exposed to foreign accent in Finnish and were used to hearing it. The temporal aspects and segmental properties of speech were phonetically analysed in the speech of the Russian speakers in order to measure their effect on the perceived degree of accent. Although wide differences were observed in the use of the rating scale among the listeners, they were still quite unanimous about which speakers had the strongest foreign accent and which had the mildest. The listeners' background factors had little effect on their ratings, and the ratings of the teachers of Finnish as an L2 did not differ from those of the other listeners. However, a clear difference was noted in the ratings of the two types of stimuli used in the perception experiment: the read-aloud speech was rated as more strongly accented than the spontaneous speech. It is important to note that the assessment of foreign accent is affected by many factors and their complex interactions in the experimental setting. Further, the study found that both the temporal aspects of speech, often associated with fluency, and the number of single deviant phonetic segments contributed to the perceived degree of accentedness in the speech of the native Russian speakers.
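As a hedged illustration of the kind of temporal fluency measures referred to above, the sketch below computes speech rate and articulation rate from a syllable count and pause durations; the function and the values are hypothetical, not the study's analysis scripts.

```python
# Hypothetical sketch: simple temporal fluency measures of the kind often related
# to perceived accentedness (speech rate, articulation rate). Values are invented.
def temporal_measures(n_syllables, total_dur, pause_dur):
    """Durations in seconds; returns (speech rate, articulation rate) in syll/s."""
    speech_rate = n_syllables / total_dur                       # pauses included
    articulation_rate = n_syllables / (total_dur - pause_dur)   # pauses excluded
    return speech_rate, articulation_rate

print(temporal_measures(n_syllables=42, total_dur=18.0, pause_dur=4.5))
```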
Abstract:
Objective
To examine the psychometric properties of an internet version of a children's and young people's quality-of-life measure originally designed as a paper questionnaire.
Methods
Participants were 3,440 10- and 11-year-old children in Northern Ireland who completed the KIDSCREEN-27 online as part of a general attitudinal survey. The questionnaire was animated using cartoon characters that are familiar to most children, and the questions appeared on screen and were read aloud by actors.
Results
Exploratory principal component analysis of the online version of the questionnaire supported the existence of five components, in line with the paper version. The items loaded on the components that would be expected based on previous findings with five domains: physical well-being, psychological well-being, autonomy and parents, social support and peers, and school environment. Internal consistency reliability of the five domains was measured using Cronbach's alpha, and the results suggested that the scale scores were reliable (see the illustrative sketch after this abstract). The domain scores were similar to those reported in the literature for the paper version.
Conclusions
These results suggest that the factor structure and internal consistency reliability scores of the KIDSCREEN-27 embedded within an online survey are comparable to those reported in the literature for the paper version.
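A minimal sketch of the two analyses reported in the Results above (principal component analysis and Cronbach's alpha), written against placeholder data; the item matrix and the domain assignment below are invented, not the KIDSCREEN-27 data.

```python
# Hypothetical sketch: exploratory PCA of item responses and Cronbach's alpha
# per domain. The responses and the domain split are invented placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(3440, 27)).astype(float)   # 27 Likert items, 1-5

# Exploratory PCA: inspect how much variance the first five components explain
pca = PCA(n_components=5).fit(items)
print(pca.explained_variance_ratio_)

def cronbach_alpha(domain_items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = domain_items.shape[1]
    item_vars = domain_items.var(axis=0, ddof=1).sum()
    total_var = domain_items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

physical_wellbeing = items[:, :5]        # placeholder domain assignment
print(cronbach_alpha(physical_wellbeing))
```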
Abstract:
A dedication service program for those from the city of St. Catharines who gave their lives during World War II. The list of over 150 names was to be read aloud and an address given by the Mayor (W.J. Macdonald), with prayers and a hymn to follow.
Abstract:
For most people, reading is an automatic activity, inherent to daily life and requiring little effort. In individuals with reading-induced reflex epilepsy, the simple act of reading triggers epileptic seizures, and these people must therefore give up reading. The factors responsible for triggering epileptic activity in reflex reading epilepsy remain poorly defined. Some authors suggest that the number and the location of epileptic spikes are related to the reading route involved. Brain imaging studies conducted with populations without neurological disorders have shown that reading activates an extensive network including the frontal, temporo-parietal and occipito-temporal cortices bilaterally, with differences in activation patterns between the lexical and phonological reading routes. Most studies have used silent reading tasks, which do not allow the participants' performance to be assessed. In the first study of this thesis, a case study of a patient with reflex reading epilepsy, we determined the language tasks and stimulus characteristics that influence epileptic activity. The results confirmed that reading was the main task responsible for triggering epileptic activity in this patient. In particular, the frequency of epileptic spikes was significantly higher when the patient used grapheme-to-phoneme conversion. Electroencephalographic (EEG) recordings revealed that the epileptic spikes were localized in the left precentral gyrus, regardless of the reading route. The second study aimed to validate a read-aloud protocol using near-infrared spectroscopy (NIRS) to investigate the neural circuits underlying reading in normal readers. Twelve neurologically healthy participants read irregular words and non-words aloud during NIRS recordings. The results showed that reading both types of stimuli involved common bilateral brain regions, including the inferior frontal gyrus, the premotor and motor gyri, the associative somatosensory cortex, the middle and superior temporal gyri, the supramarginal gyrus, the angular gyrus and the visual cortex. Total hemoglobin concentrations (HbT) in the bilateral inferior frontal gyri were higher for non-word reading than for irregular-word reading. This result suggests that the inferior frontal gyrus plays a role in grapheme-to-phoneme conversion, which characterizes the phonological reading route. This study confirmed the potential of NIRS for investigating the neural correlates of the two reading routes. An important outcome of this thesis is the use of the NIRS reading protocol to investigate reading disorders. Such investigations could help to better establish the links between brain function and reading in developmental and acquired dyslexias.
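As an illustration of the kind of paired comparison behind the HbT finding (non-words vs. irregular words in an inferior-frontal channel), a sketch with invented per-participant values; this is not the study's analysis pipeline.

```python
# Hypothetical sketch: paired comparison of mean HbT (arbitrary units) in an
# inferior-frontal channel for non-word vs. irregular-word reading. Invented data.
from scipy.stats import ttest_rel

hbt_nonwords  = [1.8, 2.1, 1.6, 2.4, 1.9, 2.0, 2.2, 1.7, 2.3, 1.8, 2.1, 1.9]
hbt_irregular = [1.5, 1.9, 1.4, 2.0, 1.7, 1.8, 1.9, 1.6, 2.0, 1.5, 1.8, 1.7]

t, p = ttest_rel(hbt_nonwords, hbt_irregular)
print(f"t = {t:.2f}, p = {p:.4f}")
```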
Abstract:
Educational systems around the world, and particularly in Québec, aim to prepare students to meet the challenges of the future and to keep learning throughout their lives. In this regard, reading is an important component of a child's development and of their ability to connect with the world around them. Reading is a tool for learning, communication and creation, and it can be a source of pleasure. Most daily activities involve reading: it is needed to carry out a task, to obtain information or to be entertained. Students learn to read in order to integrate better into school and social life and to learn in different disciplinary contexts. In particular, in order to consolidate learning and establish good work habits, teachers assign students reading homework to do at home. Research shows that parental involvement in children's school life, especially in supervising homework, can have a positive influence on academic success. The present study examines how parents of first-grade students supervise their child during homework time, especially during reading. Our sample consists of seventeen parents of first-grade students. We chose semi-structured interviews to collect parents' comments and perceptions about how homework time unfolds at home. The results showed that all the parents interviewed supervise their child during homework, mothers more often than fathers, and that they establish a routine for this period. Homework supervision mostly takes place in a pleasant atmosphere. Most parents support their child by staying close by, giving advice, encouraging them and making sure they finish their homework. The majority of parents feel they have the resources needed to supervise their child during homework. During reading, parents generally listen to their child and help or correct them if they cannot read a word or make an error. Moreover, even though parents are convinced of their child's reading ability, most check comprehension by asking questions. With regard to the effects of homework, all parents believe that homework promotes their child's academic success and that their supervision has a positive effect on reading. The results obtained cannot be generalized. However, it would be interesting to pursue this work with a complementary study examining teachers' and students' perceptions of reading homework.
Abstract:
Parents play an important role in the development of young children's reading skills, and reading to one's child is a family literacy practice strongly encouraged by society. The aim of this study is to describe this parental support, particularly with respect to the comprehension strategies used between a parent and their child during read-aloud sessions. We observed 10 parents reading an alphabet book, a narrative text with a plot, a narrative text without a plot and an informational text to their five-year-old child. It turns out that the strategies used by parents and their children differ according to the type of text. Children with low scores (letter and letter-sound recognition, text recall, receptive vocabulary and morphosyntax comprehension) also use fewer comprehension strategies during read-aloud sessions than children with better scores. We also examined the scaffolding offered by parents of children with strong and weak reading skills. These two groups of parents differ in the quality and frequency of their use of comprehension strategies. Indeed, we observe that parents who guide their children in the use of comprehension strategies are more often associated with children who show strong reading skills. Finally, we also examined family literacy practices (time of exposure and access to reading, modelling by family members, parents' attitudes toward reading, and the implementation of activities promoting the child's phonological awareness). Only the implementation of activities promoting phonological awareness could be linked to children's performance.
Abstract:
Parkinson's disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to ascertain the full extent of a patient's condition, repeated assessments are necessary to adjust medical prescription. In clinical studies, symptoms are assessed using the Unified Parkinson's Disease Rating Scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient's situation at home. For these reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principles and data-driven models, for extracting clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part-III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, the recorded videos of rapid finger-tapping examinations were processed using a novel computer-vision (CV) algorithm that extracts symptom information from video-based tapping signals through motion analysis of the index finger, incorporating a face-detection module for signal calibration. This algorithm was able to discriminate between UPDRS part-III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between healthy and Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (feature) and data-driven (classification) approaches. On the one hand, the processing of audio and video recordings allows remote monitoring of speech, gait and finger-tapping examinations by clinical staff. On the other hand, the first-principles approach eases the interpretation of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision support tools in the context of PD assessment.
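The thesis's own methods (e.g. cepstral separation difference) are not reproduced here; the sketch below only illustrates the general mapping step described above, feeding precomputed features into a standard classifier of UPDRS part-III severity levels. The feature matrix, the labels and the choice of an ordinary SVM are placeholder assumptions.

```python
# Hypothetical sketch: map precomputed speech/motion features to UPDRS part-III
# severity levels with a standard classifier. Features and labels are invented.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 12))          # 80 recordings x 12 extracted features
y = rng.integers(0, 3, size=80)        # severity level 0, 1 or 2 (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated classification rate
print(scores.mean())
```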
Abstract:
Graduate Program in Education - FCT
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
'Sensing the self' relies on the ability to distinguish self-generated from external stimuli. It requires functioning mechanisms to establish feelings of agency and ownership. Agency is defined causally, where the subject's action is followed by an effect. Ownership is defined by the features of the effect, independent of the action. In our study, we manipulated these qualities separately. 13 right-handed healthy individuals performed the experiment while 76-channel EEG was recorded. Stimuli consisted of visually presented words, read aloud by the subject. The experiment consisted of six conditions: (a) subjects saw a word, read it aloud, and heard it in their own voice; (b) like (a), but the word was heard in an unfamiliar voice; (c) subjects heard a word in their own voice without speaking; (d) like (c), but the word was heard in an unfamiliar voice; (e) like (a), but subjects heard the word with a delay; (f) subjects read without hearing. ERPs and difference maps were computed for all conditions. Effects were analysed topographically. The N100 (86-172 ms) displayed significant main effects of agency and ownership. The topographies of the two effects shared little common variance, suggesting independent effects. Later effects (174-400 ms) of agency and ownership were topographically similar, suggesting common mechanisms. Replicating earlier studies, significant N100 suppression was observed, with a topography resembling the agency effect. 'Sensing the self' thus appears to recruit at least two distinct processes: an agency assessment that represents causality and an ownership assessment that compares stimulus features with memory content.
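As a simplified, single-channel illustration of extracting the N100 measure mentioned above (mean amplitude in the 86-172 ms window of a trial-averaged ERP); the sampling rate, epoch layout and data are assumed, and the actual analyses were topographic across 76 channels.

```python
# Hypothetical sketch: mean ERP amplitude in the N100 window (86-172 ms) for one
# electrode, averaged over the trials of one condition. All values are invented.
import numpy as np

fs = 500                                   # Hz (assumed sampling rate)
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 ms to +500 ms
epochs = np.random.default_rng(2).normal(size=(40, t.size))  # 40 trials x samples

erp = epochs.mean(axis=0)                  # average across trials -> ERP
win = (t >= 0.086) & (t <= 0.172)          # N100 window from the abstract
print(erp[win].mean())                     # mean amplitude in the window
```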
Abstract:
Converging evidence from eye movement experiments indicates that linguistic contexts influence reading strategies. However, the question of whether different linguistic contexts modulate eye movements during reading in the same bilingual individuals remains unresolved. We examined reading strategies in a transparent (German) and an opaque (French) language in early, highly proficient French–German bilinguals: participants read aloud isolated French and German words and pseudo-words while the First Fixation Location (FFL), its duration and its latency were measured. Since transparent linguistic contexts and pseudo-words would favour a direct grapheme-to-phoneme conversion, the reading strategy should be more local for German than for French words (FFL closer to the word beginning), and no difference is expected in pseudo-words' FFL between contexts. Our results confirm these hypotheses, providing the first evidence that the same individuals engage different reading strategies depending on language opacity and suggesting that a given brain process can be modulated by a given context.
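A small sketch of the three eye-movement measures named above (FFL, its duration and its latency) computed from one word's fixation records; the coordinate convention and the values are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch: First Fixation Location (as a proportion of word length),
# its duration, and its latency from word onset. Fixation records are invented.
def first_fixation_measures(fixations, word_x0, word_x1, word_onset):
    """fixations: list of (x_position, start_time, duration) on the word, in order."""
    x, start, dur = fixations[0]
    ffl = (x - word_x0) / (word_x1 - word_x0)   # 0 = word beginning, 1 = word end
    latency = start - word_onset
    return ffl, dur, latency

fixations = [(112.0, 253.0, 198.0), (148.0, 471.0, 160.0)]  # px, ms, ms
print(first_fixation_measures(fixations, word_x0=100.0, word_x1=180.0, word_onset=0.0))
```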
Abstract:
Speech interface technology, which includes automatic speech recognition, synthetic speech, and natural language processing, is beginning to have a significant impact on business and personal computer use. Today, powerful and inexpensive microprocessors and improved algorithms are driving commercial applications in computer command, consumer, data entry, speech-to-text, telephone, and voice verification. Robust speaker-independent recognition systems for command and navigation in personal computers are now available; telephone-based transaction and database inquiry systems using both speech synthesis and recognition are coming into use. Large-vocabulary speech interface systems for document creation and read-aloud proofing are expanding beyond niche markets. Today's applications represent a small preview of a rich future for speech interface technology that will eventually replace keyboards with microphones and loudspeakers to give easy accessibility to increasingly intelligent machines.