879 results for Multimodal texts
Abstract:
This study investigated the influence of age, familiarity, and level of exposure on the metamemorial skill of accurately predicting performance on a future test. Young adults (17 to 23 years old) and middle-aged adults (35 to 50 years old) were asked to predict their memory for text material. Participants made predictions on a familiar text and an unfamiliar text, at three levels of exposure to each. The middle-aged adults were superior to the younger adults at predicting their performance, indicating that metamemory may improve from youth to middle age. Other findings include superior prediction accuracy for unfamiliar compared with familiar material, a result that conflicts with previous findings, and an interaction between level of exposure and familiarity that appears to modify the main effects of those variables.
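The abstract does not state how prediction accuracy was scored; in the metamemory literature it is often quantified as the Goodman-Kruskal gamma correlation between item-level predictions and subsequent test performance. A minimal sketch of that measure, offered as an illustration rather than the study's actual scoring (function and variable names are hypothetical):

```python
from itertools import combinations

def goodman_kruskal_gamma(predictions, performance):
    # Gamma = (C - D) / (C + D), where C and D count concordant and
    # discordant item pairs; pairs tied on either variable are ignored.
    concordant = discordant = 0
    for (p1, a1), (p2, a2) in combinations(zip(predictions, performance), 2):
        s = (p1 - p2) * (a1 - a2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Predictions that perfectly order later performance yield gamma = 1.0
print(goodman_kruskal_gamma([1, 2, 3, 4], [0.2, 0.4, 0.6, 0.8]))  # -> 1.0
```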
Abstract:
Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and of the tumor volume is crucial. Developing a precise eye model and automatically adapting it to the patient's anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling in external beam radiation therapy of intraocular tumors.
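The abstract names a 3D statistical shape model without detailing its construction; such models are typically built by running PCA over corresponding, pre-aligned surface landmarks from a training population. A minimal sketch under those assumptions (all function names are hypothetical, not from this work):

```python
import numpy as np

def build_ssm(shapes):
    # shapes: (n_subjects, n_points, 3) corresponding, pre-aligned 3D
    # landmarks for each training eye.
    n, p, d = shapes.shape
    X = shapes.reshape(n, p * d)        # one flattened vector per shape
    mean = X.mean(axis=0)               # mean shape
    # PCA via SVD of the centered data matrix
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = S ** 2 / (n - 1)        # variance captured by each mode
    return mean.reshape(p, d), Vt.reshape(-1, p, d), variances

def synthesize(mean, modes, variances, coeffs):
    # New plausible shape: mean + sum_k b_k * sqrt(var_k) * mode_k
    shape = mean.copy()
    for k, b in enumerate(coeffs):
        shape += b * np.sqrt(variances[k]) * modes[k]
    return shape
```

Adapting such a model to a new patient then amounts to finding the mode coefficients that best fit the observed anatomy, which constrains the result to plausible eye shapes despite organ shape variability.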
Abstract:
Recent advances in the field of statistical learning have established that learners are able to track regularities in multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate auditory and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate auditory and visual input to perceive the McGurk illusion during statistical learning. We interpret these findings as support for modality-interactive accounts of statistical learning.
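The segmentation mechanism at stake here is the classic statistical-learning computation: tracking transitional probabilities between adjacent syllables and positing word boundaries where those probabilities dip. A minimal sketch of that computation (the syllable stream, threshold, and names are illustrative assumptions, not taken from the study):

```python
from collections import Counter

def transitional_probabilities(syllables):
    # P(next | current) for each adjacent syllable pair in the stream.
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    # Posit a word boundary wherever the transitional probability dips.
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three nonsense words (tu-pi-ro, go-la-bu, da-ke-mi) concatenated in varying
# order: within-word TPs are 1.0, between-word TPs are at most 2/3, so the
# stream segments at the dips.
stream = ("tu pi ro go la bu da ke mi go la bu "
          "tu pi ro da ke mi tu pi ro go la bu").split()
print(segment(stream, transitional_probabilities(stream)))
# -> ['tupiro', 'golabu', 'dakemi', 'golabu', 'tupiro', 'dakemi',
#     'tupiro', 'golabu']
```

In the study's critical condition, the statistical structure is defined over the perceived (McGurk-fused) syllables, so successful segmentation of this kind implies that the auditory and visual signals were integrated before the statistics were computed.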