1 result for Speech language therapy
in Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.