2 results for 1147

in Bucknell University Digital Commons - Pennsylvania - USA


Relevance:

10.00%

Publisher:

Abstract:

SETTING: Cordoba, Spain, 1135 CE, the 29th year of the reign of ‘Ali “amir al-muslimin,” second king of the Berber Almoravid dynasty, rulers of Moorish Spain from 1071 to 1147. Cordoba, the capital of Andalus and the center of the Almoravid holdings in Spain, is a bustling cosmopolitan center, a crossroads for Europe and the Middle East, and the meeting-point of three religious traditions. Most significantly, Cordoba at this time is the hub of European intellectual activity. From the square—itself impressively large and surrounded by a massive colonnade, the regularity and ordered beauty of which typifies the Moorish taste for symmetry (so beloved of M.C. Escher)—can be seen the huge Cordoban mosque, erected in the 8th century by Khalif Abd-er-Rahman I to the glory of Allah, oft forgiving, most merciful. It is the second largest building in Islam, and the bastion of the still entrenched but soon to fade Muslim presence in western Europe.

SCENE: Three figures sit upon stone benches beneath the westernmost colonnade of the Cordoban mosque, engaged in an animated, though friendly, discussion on matters of faith and reason, knowledge and God, language and logic. The host is none other than Jehudah Halevi, and his esteemed guests are Master Peter Abelard and the venerable Rāmānuja, whose youthful voice, gleaming eye, quick hands, and general exuberance belie his obviously advanced age. It is autumn, early evening…

Relevance:

10.00%

Publisher:

Abstract:

Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, or the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects spent the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor with scores on a behavioral measure of autistic-like traits, the Social Responsiveness Scale (SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered either the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment. Again, no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor the mouth is critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.