976 results for Auditory perception.


Relevance: 60.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects, selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS involved in phonological, the AG in semantic, and the mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
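The predictive coding account invoked here can be made concrete with a standard hierarchical formulation; the notation below is a generic textbook sketch, not equations from this paper.

```latex
% Minimal predictive-coding sketch (generic notation, not the paper's).
% At level i, the prediction error is the bottom-up signal minus the
% top-down prediction generated by the level above:
\[
  \varepsilon_i = u_i - g_i(v_{i+1}),
\]
% and each representation is updated to reduce the precision-weighted
% error from below while conforming to the prediction from above:
\[
  \dot{v}_{i+1} \propto \frac{\partial g_i}{\partial v_{i+1}} \, \Pi_i \, \varepsilon_i - \Pi_{i+1} \, \varepsilon_{i+1}.
\]
```

On this reading, a phonological prediction error is expressed in MTG/STS and a semantic one in AG/IPS, which is exactly the regional dissociation the abstract reports.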

Relevance: 60.00%

Abstract:

IQ Structure, Psycholinguistic and Visual-Motor Abilities in Children with Learning Disability. TONG Fang, directed by Professor Zhu Liqi (Developmental and Educational Psychology). ABSTRACT. Objective: To analyze comprehensively the IQ structure of children with learning disability (LD) and the relationships among IQ, psycholinguistic characteristics, and visual-motor integration, and to probe the family factors that influence the IQ, psycholinguistic abilities, and behavior of LD children. Method: (1) Papers on children's learning disability published from 1985 to 2005 were retrieved by keyword from www.cqvip.com and www.wanfangdata.com, and a meta-analysis of IQ structure compared case and control groups on full-scale IQ (FIQ), verbal IQ (VIQ), and performance IQ (PIQ). (2) In a matched-control and self-controlled design, 59 children diagnosed with learning disability were tested with the WISC, the ITPA, and Beery's VMI. The WISC comprised 10 subtests, 5 summing to verbal IQ and 5 to performance IQ. The ITPA likewise comprised 10 subtests, 5 processes summing to auditory perception and 5 to visual perception; within each channel, the first 3 subtests tap the representational level and the other 2 the automatic level. The VMI yields a single score. Factors and levels were analyzed with descriptive statistics and Pearson correlation, to probe the internal interactions of the linguistic abilities of LD children and to compare group scores at different IQ levels. (3) A retrospective questionnaire completed by parents was analyzed; early developmental facts were compared with the matched group, and relationships among factors were analyzed with Kendall correlation, the KMO measure and Bartlett's test of sphericity, and Promax rotation. Results: (1) There were 319 papers related to LD, of which 36 reported IQ and 14 valid reports entered the meta-analysis. The 95% confidence interval (CI) of the FIQ difference between the difficulty and non-difficulty groups was 0.172 to 2.418. Among the 10 C-WISC-R reports, the 95% CI was 0.676 to 2.424 for FIQ, 1.196 to 2.314 for VIQ, and 0.176 to 2.176 for PIQ; comparing VIQ with PIQ, the 95% CI was -0.07 to 1.1 in the difficulty group and -0.0046 to 0.5 in the non-difficulty group. In the other 4 tests, the 95% CI of FIQ between LD and non-LD (NLD) children was -0.818 to 2.00. (2) Children's psycholinguistic abilities correlated strongly with Beery's VMI (excluding auditory reception) and with the perceptual factor of intelligence (excluding verbal expression). Auditory reception and visual closure correlated strongly with FIQ and PIQ; grammatic closure, visual association, and manual expression correlated strongly with the concept factor. The representational and automatic levels depend on the integration of auditory and visual processing: lower verbal expression (VE) led to a weaker expression process and low scores at the representational level, and lower visual sequential memory (VSM) led to a weaker memory process and affected the automatic level. Splitting the groups at IQ 90 showed that LD children below IQ 90 scored lower than those above IQ 90 on all ITPA subtests except verbal expression, indicating that IQ governs linguistic ability; a general ability deficit, however, did not influence the type of perceptual delay. The linguistic abilities of LD children interacted with one another, and the auditory and visual levels overlapped.
At the representational level, LD children showed higher decoding and lower encoding in auditory perception but lower decoding and higher encoding in visual perception; at the automatic level, they showed higher sequential memory and lower closure in audition but lower sequential memory and higher closure in vision. Nevertheless, there was no difference between the representational and automatic levels, whose relationship may be one of parallelism or evolution. (3) The major family factors were the father's education and occupation. Lower auditory perception was related to parental unconcern, and lower visual perception to premature delivery and slow writing. Threatened abortion and birth asphyxia were known to influence children's IQ and later linguistic abilities, but no dose-response relationship with the type of perceptual delay was shown. Conclusion: (1) The FIQ, VIQ, and PIQ of children with LD are lower than those of the NLD group, and there is no significant difference between VIQ and PIQ in either the LD or the NLD group. (2) The ITPA and the WISC test different objectives. Psycholinguistic abilities correlated strongly with the perceptual factor and the VMI, and some ITPA subtests related to FIQ; IQ strongly governs linguistic abilities, and the internal linguistic abilities interact with one another. (3) The family factors bearing on IQ and psycholinguistic abilities were the father's education, abnormal pregnancy, and abortion; developmental delay may announce itself in the early period.
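For readers unfamiliar with how effect-size confidence intervals of the kind reported above are obtained, the sketch below computes a standardized mean difference and its 95% CI using the usual large-sample approximation; the group statistics are hypothetical, not values from the thesis.

```python
import math

def smd_ci(mean_ld, mean_nld, sd_ld, sd_nld, n_ld, n_nld, z=1.96):
    """Standardized mean difference (Cohen's d) between an LD and a control
    group, with an approximate 95% CI (illustrative formula, not thesis code)."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n_ld - 1) * sd_ld**2 + (n_nld - 1) * sd_nld**2)
                   / (n_ld + n_nld - 2))
    d = (mean_nld - mean_ld) / sp  # positive d: controls score higher
    # Large-sample variance of d (Hedges & Olkin approximation).
    var_d = (n_ld + n_nld) / (n_ld * n_nld) + d**2 / (2 * (n_ld + n_nld))
    se = math.sqrt(var_d)
    return d, (d - z * se, d + z * se)

# Hypothetical group statistics; a CI excluding 0 marks a significant FIQ gap.
d, (lo, hi) = smd_ci(mean_ld=92, mean_nld=103, sd_ld=12, sd_nld=11,
                     n_ld=59, n_nld=59)
print(f"d = {d:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```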

Relevance: 60.00%

Abstract:

As a species of internal representation, how is mental imagery organized in the brain? Two issues bear on this question: the time course and the nature of mental imagery. On the nature of mental imagery, today's imagery debate is shaped by two opposing theories: (1) Pylyshyn's propositional theory and (2) Kosslyn's depictive representation theory. Behavioural studies indicate that imagery encodes properties of the physical world, such as the spatial and size information of the visual world. Neuroimaging and neuropsychological data indicate that sensory cortex, especially primary sensory cortex, is involved in imagery. In the visual modality, neuroimaging data further indicate that during visual imagery spatial information is mapped onto primary visual cortex, providing strong evidence for the depictive theory. In the auditory modality, behavioural studies likewise indicate that auditory imagery represents the loudness and pitch of sound; neuroimaging evidence of this kind, however, has been absent. The aim of the present study was to investigate the time course of auditory imagery processing and to provide neuroimaging evidence that imaginal auditory representations encode loudness and pitch information, using the ERP method and a cue-imagery (S1)-S2 paradigm. The results revealed that imagery effects started with an enhancement of the P2, probably indexing the top-down allocation of attention to the imagery task, and continued into a more positive-going late positive complex (LPC), probably reflecting the formation of auditory imagery. The amplitude of this LPC was inversely related to the pitch of the imagined sound but directly related to its loudness, consistent with the perception-related auditory N1 component, providing evidence that auditory imagery encodes pitch and loudness information. When the S2 differed in pitch or loudness from the previously imagined S1, behavioral performance was significantly worse and a conflict-related N2 was accordingly elicited; the high-conflict condition elicited a greater N2 amplitude than the low-conflict condition, providing further evidence that imagery is an analog of perception and can encode pitch and loudness information. The present study suggests that imagery starts with a mechanism of top-down allocation of attention to the imagery task and continues into the step of imagery formation, during which the physical features of the imagined stimulus are encoded, supporting Kosslyn's depictive representation theory.
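As a rough illustration of how component amplitudes such as the P2 or LPC are quantified in a design like this, the sketch below averages single-trial epochs into an ERP and measures mean amplitude in a latency window; the window, sampling, and data are hypothetical, not the study's.

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Average single-trial epochs into an ERP and take the mean amplitude in
    a latency window, the generic way components such as the LPC are measured.

    epochs : array (n_trials, n_samples), baseline-corrected EEG in microvolts
    times  : array (n_samples,), sample times in ms relative to S1 onset
    window : (start_ms, end_ms), latency window of the component
    """
    erp = epochs.mean(axis=0)  # average over trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()    # mean amplitude within the window

# Simulated data: 100 trials, 500 samples spanning 0-1000 ms after S1 onset.
rng = np.random.default_rng(0)
times = np.linspace(0, 1000, 500)
epochs = rng.normal(0.0, 5.0, (100, 500))
print(f"LPC mean amplitude, 500-800 ms: {mean_amplitude(epochs, times, (500, 800)):.2f} uV")
```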

Relevance: 60.00%

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
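The flavor of such network inference can be conveyed with a deliberately simplified sketch: for each recorded channel, score candidate sets of time-lagged parent channels and keep the best-scoring set, the recovered edges forming the information flow network. The BIC-scored linear-Gaussian model below is a toy stand-in for the authors' nonlinear DBN algorithm; all names and parameters are ours.

```python
import itertools
import numpy as np

def bic_score(X, target, parents, lag=1):
    """BIC of predicting channel `target` at time t from `parents` at t - lag
    under a linear-Gaussian model (a linear toy, unlike the paper's method)."""
    y = X[lag:, target]
    Z = (np.column_stack([X[:-lag, p] for p in parents])
         if parents else np.empty((len(y), 0)))
    Z = np.column_stack([np.ones(len(y)), Z])  # intercept term
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    n, k = len(y), Z.shape[1]
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * k * np.log(n)

def infer_parents(X, target, max_parents=2):
    """Exhaustively score small time-lagged parent sets for one channel and
    return the best; edges parent -> target assemble the flow network."""
    candidates = [c for c in range(X.shape[1]) if c != target]
    sets = itertools.chain.from_iterable(
        itertools.combinations(candidates, k) for k in range(max_parents + 1))
    return max(sets, key=lambda ps: bic_score(X, target, ps))

# Simulated 3-channel recording in which channel 0 drives channel 1 at lag 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
X[1:, 1] += 0.8 * X[:-1, 0]
print([infer_parents(X, t) for t in range(3)])  # expect (0,) as parent of 1
```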

Relevance: 60.00%

Abstract:

In the present study, 988 adults whose ages ranged from 15 to over 90 rated a memory of an important event from the previous week on the frequency of voluntary and involuntary retrieval, belief in its accuracy, visual imagery, auditory imagery, setting, emotional intensity, valence, narrative coherence, and centrality to the life story. Another 992 adults provided the same ratings for a memory from their confirmation day, when they were about 14 years old. The frequencies of involuntary and voluntary retrieval were similar, and both were predicted by emotional intensity and centrality to the life story. The results of the present study, which is the first to measure the frequency of voluntary and involuntary retrieval for the same events, run counter to both cognitive and clinical theories, which consistently claim that involuntary memories are infrequent compared with voluntary memories. Age and gender differences are noted.

Relevance: 60.00%

Abstract:

Multisensory stimuli can improve performance, facilitating reaction times (RTs) on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions based on probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols; existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity affect performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions based on probability summation were observed and must therefore follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that, in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation, or even behavioral costs, were obtained with auditory intensities 30-40 dB above threshold. The present study demonstrates the feasibility and suitability of behaving monkeys for investigating links between psychophysical and neurophysiologic instantiations of multisensory interactions.
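The probability-summation benchmark referred to here is conventionally tested with Miller's race-model inequality, F_AV(t) <= F_A(t) + F_V(t): wherever the multisensory RT distribution exceeds the summed unisensory distributions, statistical facilitation alone cannot explain the RSE. The sketch below implements that textbook test on simulated reaction times; it is not the paper's own analysis code.

```python
import numpy as np

def race_model_violations(rt_a, rt_v, rt_av, probs=np.arange(0.05, 1.0, 0.05)):
    """Return the time points at which the multisensory CDF exceeds the
    race-model (probability summation) bound F_A(t) + F_V(t)."""
    t = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), probs)
    cdf = lambda rts: np.searchsorted(np.sort(rts), t, side="right") / len(rts)
    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)  # race-model upper bound
    return t[cdf(rt_av) > bound]                    # violations imply integration

# Simulated reaction times in ms; a real test uses each animal's distributions.
rng = np.random.default_rng(1)
rt_a = rng.normal(350, 50, 500)   # auditory alone
rt_v = rng.normal(340, 50, 500)   # visual alone
rt_av = rng.normal(290, 40, 500)  # bimodal, faster than either alone
print(race_model_violations(rt_a, rt_v, rt_av))
```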

Relevance: 60.00%

Abstract:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses were significantly stronger, but topographically indistinguishable to human versus animal vocalizations starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.
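In electrical neuroimaging, response 'strength' and 'topography' are dissociated by computing global field power (GFP) alongside normalized scalp maps: a condition can raise GFP while leaving the map, and hence the inferred network, unchanged. The sketch below shows the standard GFP computation on simulated data; the simulation parameters are illustrative only.

```python
import numpy as np

def global_field_power(eeg):
    """Global field power: the spatial standard deviation across electrodes at
    each time point, indexing response strength independently of topography.

    eeg : array (n_electrodes, n_samples), average-referenced AEPs in microvolts
    """
    return eeg.std(axis=0)

# Two simulated 64-channel AEPs sharing one scalp map but differing in gain,
# mimicking 'stronger but topographically indistinguishable' responses.
rng = np.random.default_rng(2)
scalp_map = rng.normal(size=(64, 1))                    # fixed topography
waveform = np.sin(np.linspace(0, 3 * np.pi, 600))[None, :]
human = 1.5 * scalp_map * waveform                      # stronger response...
animal = 1.0 * scalp_map * waveform                     # ...same scalp map
print(global_field_power(human).max() / global_field_power(animal).max())  # 1.5
```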

Relevance: 60.00%

Abstract:

A musical instrument implies the presence of a sonic register that affects both the organization of sounds, silences, and noises, and the bodily disposition that is gradually forged with it. From this standpoint, the organization of music carried out with electrical and electronic technologies entails a profound modification of both aspects. The arrival of electricity implies a threefold dislocation: with respect to the transmission of sound, to the possibility of its reproduction, and to listening. These dislocations are set in relation to inventions which, from the organ of Ctesibius to Castel's ocular harpsichord, sketch a framework in which music, technique, sensibility, and the socio-economic system weave their connections. Along this route are traced what have been called counterpoints of invention, which find their most prominent examples in the figures of J.S. Bach and J. Cage.

Relevance: 60.00%

Abstract:

Inconsistencies exist between traditional objective measures such as speech recognition and localization, and subjective reports of bimodal benefit. The purpose of this study was to expand the set of objective measures of bimodal benefit to include non-traditional listening tests, and to examine possible correlations between objective measures of auditory perception and subjective satisfaction reports.

Relevance: 60.00%

Abstract:

The experiment asks whether constancy in hearing precedes or follows grouping. Listeners heard speech-like sounds comprising 8 auditory-filter-shaped noise bands whose temporal envelopes corresponded to those arising in these filters when a speech message is played. The 'context' words in the message were "next you'll get _to click on", into which a "sir" or "stir" test word was inserted. These test words were drawn from an 11-step continuum formed by amplitude modulation. Listeners identified the test words appropriately and quite consistently, even though they had the 'robotic' quality typical of this type of 8-band speech. The speech-like effects of these sounds appear to be a consequence of auditory grouping. Constancy was assessed by comparing the influence of room reflections on the test word across conditions where the context had either the same level of reflections or a much lower level. Constancy effects were obtained with these 8-band sounds, but only in 'matched' conditions, where the room reflections occupied the same bands in both the context and the test word. In the comparison 'mismatched' condition, no constancy effects were found. It would appear that this type of constancy in hearing precedes the across-channel grouping whose effects are so apparent in these sounds. This result is discussed in terms of the ubiquity of grouping across different levels of representation.
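Stimuli of this general class are built by noise vocoding: filter the speech into bands, extract each band's temporal envelope, and reimpose that envelope on matching bands of noise. The sketch below is a generic implementation using Butterworth bands, whereas the study used auditory-filter-shaped bands; all parameters are illustrative.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7500.0):
    """Noise-vocode a speech signal: the output keeps only the band-wise
    temporal envelopes of the input, giving the 'robotic' quality described
    above (generic Butterworth bands, not the study's auditory filters)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))  # temporal envelope of this band
        noise = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * noise           # envelope-modulated noise band
    return out

# Example: vocode one second of a synthetic signal sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
robotic = noise_vocode(speech, fs)
```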

Relevance: 60.00%

Abstract:

Research over the last decade has shown that auditorily cuing the location of visual targets reduces the time taken to locate and identify targets, for both free-field and virtually presented sounds. The first study conducted for this thesis confirmed these findings over an extensive region of free-field space. However, the number of sound locations that are measured and stored in the data library of most 3-D audio spatial systems is limited, so there is often a discrepancy in position between the cued and physical locations of the target. Sampling limitations in these systems also produce temporal delays in conveying the stored data to operators. To investigate the effects of spatial and temporal disparities in audio cuing of visual search, and to provide evidence to alleviate concerns that psychological research lags behind the capability to design and implement synthetic interfaces, experiments examined (a) the magnitude of spatial separation and (b) the duration of temporal delay, between auditory spatial cues and visual targets, required to alter response times to locate targets and discriminate their shape, relative to when the stimuli were spatially aligned and temporally synchronised, respectively. Participants listened to free-field sound localisation cues presented with a single, highly visible target that could appear anywhere across 360° of azimuthal space on the vertical mid-line (spatial separation), or extended to 45° above and below the vertical mid-line (temporal delay). A vertical or horizontal spatial separation of 40° between the stimuli significantly increased response times, while separations of 30° or less did not reach significance. Response times were slowed at most target locations when auditory cues occurred 770 msecs prior to the appearance of targets, but not with shorter delays (i.e., 440 msecs or less). When sounds followed the appearance of targets, the stimulus onset asynchrony that affected response times depended on target location, ranging from 440 msecs at higher elevations and rearward of participants to 1,100 msecs on the vertical mid-line. If targets appeared in the frontal field of view, no delay of acoustical stimulation affected performance. Finally, when conditions of spatial separation and temporal delay were combined, visual search times were degraded at a shorter stimulus onset asynchrony than when only the temporal relationship between the stimuli was varied, but responses to spatial separation were unaffected. The implications of these results for the development of synthetic audio spatial systems to aid visual search tasks are discussed.

Relevance: 60.00%

Abstract:

OBJECTIVE: To assess schoolchildren's auditory-perceptual and orthographic performance in identifying contrasts among the fricatives of Brazilian Portuguese, and to investigate to what extent these two types of performance are related. METHODS: Auditory-perceptual and orthographic performance data were analyzed from 20 children in the first two grades of elementary school at a public school in the municipality of Mallet (PR), Brazil. Auditory perception data were collected with the Speech Perception Assessment Instrument (PERCEFAL), using the Perceval software; orthographic data were collected through dictation of the same words that make up the PERCEFAL instrument. RESULTS: We observed greater auditory-perceptual than orthographic accuracy; a tendency toward shorter response times and lower variability for correct auditory-perceptual responses than for errors; and no correspondence between auditory-perception and spelling errors, since in perception the highest percentage of errors involved the fricatives' place of articulation, whereas in spelling the highest percentage involved voicing. CONCLUSION: Although related, auditory-perceptual and orthographic performance do not show one-to-one correspondence. In clinical practice, therefore, attention should be paid not only to the aspects that bring these two performances together, but also to the aspects that differentiate them.

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

PURPOSE: To characterize and compare, through behavioral tests, the auditory processing of schoolchildren with an interdisciplinary diagnosis of (I) learning disorder, (II) dyslexia, and (III) schoolchildren with good academic performance. METHODS: The study included 30 schoolchildren aged 8 to 16 years, of both genders, from the 2nd to 4th grades of elementary school, divided into three groups: GI, 10 schoolchildren with an interdisciplinary diagnosis of learning disorder; GII, 10 schoolchildren with an interdisciplinary diagnosis of dyslexia; and GIII, 10 schoolchildren without learning difficulties, matched by gender and age to GI and GII. Audiological and auditory processing assessments were carried out. RESULTS: GIII outperformed GI and GII on the auditory processing tests. GI showed poorer performance on the auditory abilities assessed by the dichotic digits and staggered spondaic word tests, pediatric speech intelligibility, sound localization, and verbal and nonverbal memory, while GII showed the same deficits as GI except on the pediatric speech intelligibility test. CONCLUSION: Schoolchildren with learning disorders performed worse on the auditory processing tests, and those with learning disorder showed a greater number of impaired auditory abilities than those with dyslexia, owing to reduced sustained attention. The dyslexia group showed deficits stemming from difficulty in encoding and decoding auditory stimuli.