2 results for Nonnative speaker
at Duke University
Abstract:
An abundance of research in the social sciences has demonstrated a persistent bias against nonnative English speakers (Giles & Billings, 2004; Gluszek & Dovidio, 2010). Yet, organizational scholars have only begun to investigate the underlying mechanisms that drive the bias against nonnative speakers and subsequently design interventions to mitigate these biases. In this dissertation, I offer an integrative model to organize past explanations for accent-based bias into a coherent framework, and posit that nonnative accents elicit social perceptions that have implications at the personal, relational, and group levels. I also seek to complement the existing emphasis on main effects of accents, which focuses on the general tendency to discriminate against those with accents, by examining moderators that shed light on the conditions under which accent-based bias is most likely to occur. Specifically, I explore the idea that people’s beliefs about the controllability of accents can moderate their evaluations of nonnative speakers, such that those who believe that accents can be controlled are more likely to demonstrate a bias against nonnative speakers. I empirically test my theoretical model in three studies in the context of entrepreneurial funding decisions. Results generally supported the proposed model. By examining the microfoundations of accent-based bias, the ideas explored in this dissertation set the stage for future research in an increasingly multilingual world.
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change in pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds, from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.