966 results for Speech in Noise
Abstract:
We have investigated how optimal coding for neural systems changes with the time available for decoding. Optimality was defined as maximization of information transmission. We estimated the parameters of Poisson neurons that optimize the Shannon transinformation under the assumption of rate coding. We observed a hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding with two, three, and more quantization levels for larger decoding times. We postulate that the presence of subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme, and we use the mammalian auditory system as an example.
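To make the quantity being optimized concrete, the sketch below (an illustration under assumed parameters, not the study's optimization) evaluates the Shannon transinformation of a Poisson rate-coding channel for a fixed set of equally likely firing-rate levels and a given decoding window T, so that a binary code can be compared with a ternary one as T grows; the rate values, priors, and window lengths are assumptions.

```python
import numpy as np
from scipy.stats import poisson

def poisson_mutual_info(rates, T, priors=None, n_max=200):
    """Transinformation I(X;Y) in bits: inputs are firing-rate levels,
    the output is the Poisson spike count observed in a window of T seconds."""
    rates = np.asarray(rates, dtype=float)
    priors = (np.full(len(rates), 1.0 / len(rates))
              if priors is None else np.asarray(priors, dtype=float))
    counts = np.arange(n_max + 1)
    # P(n | rate_i), shape (levels, n_max + 1)
    likelihood = poisson.pmf(counts[None, :], rates[:, None] * T)
    marginal = priors @ likelihood                       # P(n)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(marginal > 0, likelihood / marginal, 1.0)
        terms = np.where(likelihood > 0, likelihood * np.log2(ratio), 0.0)
    return float(priors @ terms.sum(axis=1))

# Compare a binary and a ternary rate code (levels in spikes/s) for decoding
# windows of increasing length; per the abstract, additional quantization
# levels should become worthwhile as the decoding time grows.
for T in (0.01, 0.05, 0.2, 1.0):
    i2 = poisson_mutual_info([5, 100], T)
    i3 = poisson_mutual_info([5, 50, 100], T)
    print(f"T = {T:5.2f} s   binary: {i2:.3f} bits   ternary: {i3:.3f} bits")
```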
Abstract:
Ultra-long mode-locked lasers are known to be strongly influenced by nonlinear interactions in long cavities, which result in noise-like stochastic pulses. Here, using an advanced technique for real-time measurement of both the temporal and the spatial (over round trips) intensity evolution, we reveal the existence of a wide range of generation regimes. Different kinds of coherent structures, including dark and grey solitons and rogue-like bright coherent structures, are observed, and interactions between them are revealed.
Abstract:
One of the overarching questions in the field of infant perceptual and cognitive development concerns how selective attention is organized during early development to facilitate learning. The following study examined how infants' selective attention to properties of social events (i.e., prosody of speech and facial identity) changes in real time as a function of intersensory redundancy (redundant audiovisual, nonredundant unimodal visual) and exploratory time. Intersensory redundancy refers to the spatially coordinated and temporally synchronous occurrence of information across multiple senses. Real-time macro- and micro-structural change in infants' scanning patterns of dynamic faces was also examined.
According to the Intersensory Redundancy Hypothesis, information presented redundantly and in temporal synchrony across two or more senses recruits infants' selective attention and facilitates perceptual learning of highly salient amodal properties (properties that can be perceived across several sensory modalities, such as the prosody of speech) at the expense of less salient modality-specific properties. Conversely, information presented to only one sense facilitates infants' learning of modality-specific properties (properties that are specific to a particular sensory modality, such as facial features) at the expense of amodal properties (Bahrick & Lickliter, 2000, 2002).
Infants' selective attention to and discrimination of prosody of speech and facial configuration were assessed in a modified visual paired-comparison paradigm. In redundant audiovisual stimulation, it was predicted that infants would show discrimination of prosody of speech in the early phases of exploration and of facial configuration in the later phases of exploration. Conversely, in nonredundant unimodal visual stimulation, it was predicted that infants would show discrimination of facial identity in the early phases of exploration and of prosody of speech in the later phases of exploration. Results provided support for the first prediction and indicated that, following redundant audiovisual exposure, infants showed discrimination of prosody of speech earlier in processing time than discrimination of facial identity. Data from the nonredundant unimodal visual condition provided partial support for the second prediction and indicated that infants showed discrimination of facial identity, but not prosody of speech. The dissertation study contributes to the understanding of the nature of infants' selective attention and processing of social events across exploratory time.
Abstract:
OBJECTIVES: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help listeners understand speech in noisy environments and are not available to users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. DESIGN: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy, or STD), the two processors in a pair functioned like standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other, and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided-deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. RESULTS: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing spatial separation between the speech and noise sources regardless of the strategy, but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing speech-noise spatial separation only with the MOC strategy. CONCLUSIONS: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, in both unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
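A minimal sketch of the contralateral coupling described above, under simplifying assumptions: each channel applies power-law back-end compression, and the exponent in each channel of one processor is driven by a smoothed estimate of the output energy in the corresponding channel of the other processor, so that higher contralateral energy makes the channel more linear and its output drops. The exponent range, smoothing constant, and mapping function are illustrative choices, not the authors' parameters.

```python
import numpy as np

N_CHANNELS = 12
C_MIN, C_MAX = 0.2, 0.9   # compression exponents: 0.2 = strongly compressive
                          # (fixed, STD-like), 0.9 = nearly linear
ALPHA = 0.9               # smoothing of the contralateral output-energy estimate

def compress(envelope, exponent):
    """Power-law back-end compression of per-channel envelopes in (0, 1]."""
    return np.power(np.clip(envelope, 1e-6, 1.0), exponent)

def moc_exponent(contra_energy):
    """Map contralateral output energy (0..1) to a compression exponent.
    Higher contralateral energy -> more linear channel -> lower output,
    mimicking the inhibitory effect of the contralateral MOCR."""
    return C_MIN + (C_MAX - C_MIN) * np.clip(contra_energy, 0.0, 1.0)

def process_frame(env_left, env_right, state):
    """One time frame for a linked pair of processors (MOC-like strategy).
    env_left, env_right: per-channel envelopes; state: smoothed output
    energies from the previous frame, one array per ear."""
    c_left = moc_exponent(state["right"])   # left compression set by right output
    c_right = moc_exponent(state["left"])   # and vice versa
    out_left, out_right = compress(env_left, c_left), compress(env_right, c_right)
    state["left"] = ALPHA * state["left"] + (1 - ALPHA) * out_left
    state["right"] = ALPHA * state["right"] + (1 - ALPHA) * out_right
    return out_left, out_right

# Example frame: speech-dominated envelopes on the left, noise on the right.
state = {"left": np.zeros(N_CHANNELS), "right": np.zeros(N_CHANNELS)}
out_l, out_r = process_frame(np.full(N_CHANNELS, 0.3), np.full(N_CHANNELS, 0.6), state)
```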
Abstract:
This article argues that sonic technologies such as telephones, voice recorders, and phonographs, alongside more (audio)visual ones such as flickering fluorescent lights, videos, and television sets, are crucial to the world of Twin Peaks, and constitute this world as both a communications network with portals to the unknown and an accumulation of recordings of ghosted voices and entities, perhaps finding its ultimate expression in the backwards-reprocessed speech in the Black Lodge. This lodge can be understood as a space in which there is nothing but recordings, albeit now on a cosmic, spiritual, and demonic level. Using a media-archaeological approach to these devices in the series, this paper argues that they were already operating by a media-archaeological logic, generating the world of Twin Peaks as a haunted archive of sonic and other mediations.
Abstract:
Historically, Salome was an unexceptional figure who never catalyzed John the Baptist's death. In Christian Scripture, however, she becomes the dancing seductress, a fallen daughter of Eve. Her stepfather Herod promises Salome his kingdom if she dances for him, but she follows her mother's wish to have John beheaded. In Strauss's opera, after Wilde's Symbolist-Decadent play, Salome becomes independent of Herodias' will and the mythic avatar of the femme fatale and the persecuted artist, whom Herod has killed after she kisses John's severed head. Her signature key of C# major, resolving to the C major sung by Herod and Jokanaan at her death, represents her tragic fate musically.
Abstract:
During the civil war between Caesar and Pompey, the military oath that binds the soldier to his army is often openly violated. Yet despite this offense, the commanders of the armed struggle repeatedly require the oath of their men. Admittedly, this ritual act seems ineffective given the many desertions and mutinies recorded, but military leaders use its symbolic and sacred meaning to legitimize, on the one hand, their "anti-republican" actions and, on the other, armies fighting in a context deemed impius.
Abstract:
In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children, between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, called the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, called the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or phonological awareness, as measured by the NRT; and (d) whether perception and production differ in three age-language status groups (i.e., KEB children, KEB adults, and KM children) and in three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD). In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed, using software, called Systematic Analysis of Language Transcripts (SALT). Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, but not between the KEB children and adults. Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts. Two existing models of bilingual speech perception and production—the Speech Learning Model or SLM (Flege, 1987, 1992) and the Perceptual Assimilation Model or PAM (Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001)—were considered to see if they could account for the perceptual and production patterns evident in the present study. The selected English sounds for stimuli in the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels /I, ɛ, æ/, which were used to create 30 nonsense syllables in a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds—namely, English sounds that are similar, but not identical, to L1 Korean, including /p, k, s, ʧ, ɛ/—or unfamiliar sounds—namely, English sounds that are new to L1, including /b, g, f, θ, z, ʤ, I, æ/. 
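As a concrete illustration of how a nonsense-syllable confusion matrix of this kind is commonly scored (a generic sketch with made-up counts, not data or code from the study), percent correct for each presented sound is the diagonal cell divided by its row total:

```python
import numpy as np

# Hypothetical confusion counts for a subset of the 10 consonants:
# rows = presented sound, columns = listener's response.
labels = ["p", "b", "k", "g"]
confusions = np.array([
    [18,  1,  1,  0],   # /p/ presented: 18 heard as /p/, 1 as /b/, ...
    [ 2, 16,  0,  2],
    [ 1,  0, 19,  0],
    [ 0,  3,  1, 16],
])

percent_correct = 100 * np.diag(confusions) / confusions.sum(axis=1)
for label, pc in zip(labels, percent_correct):
    print(f"/{label}/  {pc:5.1f}% correct")
print(f"overall  {100 * np.trace(confusions) / confusions.sum():.1f}% correct")
```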
The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than a familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups. Perception on the NSCM task was compared to production on the NSIT and NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations were significant for perceptual performance on the NSCM task and overall production performance on the NRT: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and NRT. Evidently these are two very different production tasks, where immediate imitation of single syllables on the NSIT results in high performance for all groups. Findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance. The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both SLM and PAM. In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than familiar ones, in keeping with SLM. Nonetheless, the second conclusion is that neither SLM nor PAM consistently and thoroughly explains the results of the present study. This is because both theories assume that the influence of L1 on the perception of L2 consonants and vowels works in the same way as for production of them. The third and fourth conclusions are two proposed arguments: that perception and production of consonants are different than for vowels, and that sound type interacts with familiarity and L2 experience. These two arguments can best explain the current findings. 
These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
TOPIC: speech production across modalities of prosthetic oral rehabilitation. OBJECTIVE: to verify whether the type of oral rehabilitation interferes with speech production. METHOD: 36 elderly adults (mean age = 68 years), divided into 3 groups, were assessed: 13 with natural teeth (A), 13 with mucosa-supported complete upper and lower dentures (B), and 10 with a mucosa-supported complete upper denture and an implant-supported lower denture (C). Denture stability was evaluated by a dentist, and speech samples were analyzed by 5 speech-language pathologists. The Percentage of Consonants Correct (PCC) was computed to determine the frequency of speech sound alterations. RESULTS: few cases of speech alteration were observed, most frequently in group C (23.08%); restricted articulation was present in all groups, reduced lip movements in two groups (A and B), and exaggerated articulation and lack of salivary control in one group each (C and B, respectively). Regarding PCC, the lowest values were observed for linguodental phones in groups B and C (highest occurrence of alteration), followed by alveolar phones; cases without alteration predominated in group A, unlike the other groups, with tongue protrusion and lisping being the most frequent alterations. There was no difference between the groups; most of group B had an unsatisfactory lower denture, and no association was found between speech alteration and unsatisfactory dentures. CONCLUSION: despite the small sample, individuals rehabilitated with complete dentures show alterations in linguodental and alveolar phones, and neither the type of denture nor its stability appears to interfere with speech production.
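For reference, the Percentage of Consonants Correct reduces to a simple ratio; a minimal sketch with hypothetical tallies (not data from the study):

```python
def pcc(correct_consonants: int, total_consonants: int) -> float:
    """Percentage of Consonants Correct: correctly produced consonants
    divided by all consonants attempted in the sample, times 100."""
    return 100.0 * correct_consonants / total_consonants

# e.g., 85 of 92 consonants produced correctly in a speech sample
print(f"PCC = {pcc(85, 92):.1f}%")
```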
Abstract:
French and world literature underwent an enormous transformation over the course of the nineteenth century. Flaubert's exquisite writing is, beyond its aesthetic importance, revealing of a new way of writing, reading, and representing man in the modern world. The care taken in constructing the text and the use of plastic images and figures of speech, while the narrator becomes invisible through the use of free indirect discourse, are characteristic of Flaubertian writing. In "Um coração simples" ("A Simple Heart"), the author presents us with a "portrait" of Félicité, a poor woman from the French countryside and a virtuous model of an evanescent ethic. By following, through the short story, Félicité's steps and misfortunes under the lens of this new narrator, we seek to highlight the conditions surrounding the emergence of a scientific-objective gaze on human subjectivity, soon transformed into the object of a nascent Psychology.
Abstract:
The concept of social suffering is characterized by understanding situations of affliction and pain as social experiences rather than individual problems. This paper analyzes the social and political nature of the suffering of an adolescent serving a socio-educational measure. Inspired by Veena Das's approach, the article draws on "flesh" and discourse to problematize the relationship between citizenship and discriminated youth segments, a relationship manifested in the ambiguities of the institutional practices present in the flow of execution of socio-educational measures. The article analyzes the contradictions between the institutional goal of preventing the recurrence of offenses by helping the adolescent become an autonomous citizen and the adolescents' narratives and bodily expressions while serving the measures. The trajectory described here leads to the recognition that the transition from detention to open-regime measures takes place under tension between the institutional discourse of reorganizing school, family, and community life and the adolescents' everyday experience, which remains marked by constant police threat and by deprivation of access to public goods. Serving socio-educational measures ends up reinforcing among adolescents the affliction of being socially regarded as suspects and fugitives and, consequently, the embodiment of a particular social place, that of a member of the "world of crime". The performance in everyday life of a "bandit style" reveals forms of response to the dominant discourse of the socio-educational system, a context that points to the paradox of the Brazilian state, which guarantees formal democracy while violating civil rights.
Abstract:
Three studies support the vicarious dissonance hypothesis that individuals change their attitudes when witnessing members of important groups engage in inconsistent behavior. Study 1, in which participants observed an actor in an induced-compliance paradigm, documented that students who identified with their college supported an issue more after hearing an ingroup member make a counterattitudinal speech in favor of that issue. In Study 2, vicarious dissonance occurred even when participants did not hear a speech, and attitude change was highest when the speaker was known to disagree with the issue. Study 3 showed that speaker choice and aversive consequences moderated vicarious dissonance, and demonstrated that vicarious discomfort (the discomfort observers imagine feeling if in an actor's place) was attenuated after participants expressed their revised attitudes.