3 results for Audio-visual library service.
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serials Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students’ usage of these tools. Regardless of the search system, students exhibited a marked inability to effectively evaluate sources and a heavy reliance on default search settings. On the quantitative benchmarks measured by this study, the EBSCO Discovery Service tool outperformed the other search systems in almost every category. This article describes these results and makes recommendations for libraries considering these tools.
Abstract:
Based on the Ricker/Witmer survey on Library Support for Science Research and Education, a brief statistical analysis of the Bucknell University community and library support for science and engineering research and education is provided. The position and responsibilities of Reference Librarian/Coordinator of Science and Engineering Resources in the Ellen Clarke Bertrand Library are detailed. Throughout the article, I describe the motivation and justification for an integrated university library collection, which serves not only the Science and Engineering faculty and students, but the entire Bucknell University community. The issues of finance and budget, public service, and information access and delivery in relation to a central university library are discussed.
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.