16 results for auditory hallucinations

in the Cambridge University Engineering Department Publications Database


Relevance:

60.00%

Abstract:

IMPORTANCE: Forward models predict the sensory consequences of planned actions and permit discrimination of self- and non-self-elicited sensation; their impairment in schizophrenia is implied by an abnormality in behavioral force-matching and the flawed agency judgments characteristic of positive symptoms, including auditory hallucinations and delusions of control. OBJECTIVE: To assess attenuation of sensory processing by self-action in individuals with schizophrenia and its relation to current symptom severity. DESIGN, SETTING, AND PARTICIPANTS: Functional magnetic resonance imaging data were acquired while medicated individuals with schizophrenia (n = 19) and matched controls (n = 19) performed a factorially designed sensorimotor task in which the occurrence and relative timing of action and sensation were manipulated. The study took place at the neuroimaging research unit at the Institute of Cognitive Neuroscience, University College London, and the Maudsley Hospital. RESULTS: In controls, a region of secondary somatosensory cortex exhibited attenuated activation when sensation and action were synchronous compared with when the former occurred after an unexpected delay or alone. By contrast, reduced attenuation was observed in the schizophrenia group, suggesting that these individuals were unable to predict the sensory consequences of their own actions. Furthermore, failure to attenuate secondary somatosensory cortex processing was predicted by current hallucinatory severity. CONCLUSIONS AND RELEVANCE: Although comparably reduced attenuation has been reported in the verbal domain, this work implies that a more general physiologic deficit underlies positive symptoms of schizophrenia.
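The core contrast in this design can be made concrete in a few lines. The sketch below is purely illustrative (condition names and numbers are invented, not the study's data): attenuation is indexed as the drop in response to a synchronous, self-generated sensation relative to the same sensation after an unexpected delay.

```python
import numpy as np

# Illustrative per-subject activation estimates (e.g., fMRI betas) in
# secondary somatosensory cortex for two of the task conditions.
beta_synchronous = np.array([0.8, 0.7, 0.9])  # sensation synchronous with action
beta_delayed     = np.array([1.4, 1.3, 1.5])  # sensation after an unexpected delay

# Positive values index attenuation of self-elicited sensation; the abstract
# reports this difference is reduced in the schizophrenia group.
attenuation_index = beta_delayed - beta_synchronous
print(attenuation_index.mean())
```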

Relevance:

20.00%

Abstract:

A model of the auditory periphery assembled from analog network submodels of all the relevant anatomical structures is described. There is bidirectional coupling between networks representing the outer ear, middle ear and cochlea. A simple voltage source representation of the outer hair cells provides level-dependent basilar membrane curves. The networks are translated into efficient computational modules by means of wave digital filtering. A feedback unit regulates the average firing rate at the output of an inner hair cell module via a simplified modelling of the dynamics of the descending paths to the peripheral ear. This leads to a digital model of the entire auditory periphery with applications to both speech and hearing research.
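As a rough illustration of the feedback unit described above, the following sketch tracks the average output of an inner-hair-cell-like stage and adjusts a gain toward a target firing rate, loosely standing in for the descending efferent paths. All constants and the rate model are assumptions, not the paper's wave-digital implementation.

```python
import numpy as np

def regulate_rate(drive, target=0.5, alpha=0.01, tau=200.0):
    """Gain control toward a target average rate (all constants assumed)."""
    gain, avg, out = 1.0, 0.0, []
    for x in drive:
        rate = gain * max(x, 0.0)       # rectified, gain-scaled response
        avg += (rate - avg) / tau       # running average of the firing rate
        gain = max(gain - alpha * (avg - target), 0.0)  # feedback adjustment
        out.append(rate)
    return np.array(out)

response = regulate_rate(np.abs(np.random.randn(5000)))
```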

Relevance:

10.00%

Abstract:

Virtual assembly environment (VAE) technology has great potential to benefit manufacturing applications in industry. Usability is an important aspect of a VAE. This paper presents a usability evaluation of a developed multi-sensory VAE. The evaluation addresses three attributes: (a) efficiency of use; (b) user satisfaction; and (c) reliability. These are measured through task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole task and a Sener electronic box assembly task were used in the experiments, with sixteen participants. The outcomes showed that introducing 3D auditory and/or visual feedback could improve usability. They also indicated that the integrated feedback (visual plus auditory) offered better usability than either modality used in isolation. Most participants preferred the integrated feedback to a single modality (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability and quickly led to frustration. The possible reasons behind these outcomes are also analysed. © 2007 ACADEMY PUBLISHER.
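For concreteness, the three measures map onto simple computations. The sketch below uses invented numbers and is only meant to show how TCTs, HPERs, and questionnaire scores quantify efficiency, reliability, and satisfaction.

```python
import statistics

tcts_seconds = [41.2, 38.5, 45.0, 39.7]  # task completion times (efficiency)
errors, trials = 3, 16                   # assembly errors over all trials
satisfaction = [4, 5, 3, 4]              # questionnaire scores, e.g. 1-5 Likert

print("mean TCT:", statistics.mean(tcts_seconds), "s")
print("HPER:", errors / trials)          # human performance error rate (reliability)
print("mean satisfaction:", statistics.mean(satisfaction))
```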

Relevance:

10.00%

Abstract:

Acoustic communication in drosophilid flies is based on the production and perception of courtship songs, which facilitate mating. Despite decades of research on courtship songs and behavior in Drosophila, central auditory responses have remained uncharacterized. In this study, we report on intracellular recordings from central neurons that innervate the Drosophila antennal mechanosensory and motor center (AMMC), the first relay for auditory information in the fly brain. These neurons produce graded-potential (nonspiking) responses to sound; we compare recordings from AMMC neurons to extracellular recordings of the receptor neuron population [Johnston's organ neurons (JONs)]. We discover that, while steady-state response profiles for tonal and broadband stimuli are significantly transformed between the JON population in the antenna and AMMC neurons in the brain, transient responses to pulses present in natural stimuli (courtship song) are not. For pulse stimuli in particular, AMMC neurons simply low-pass filter the receptor population response, thus preserving low-frequency temporal features (such as the spacing of song pulses) for analysis by postsynaptic neurons. We also compare responses in two closely related Drosophila species, Drosophila melanogaster and Drosophila simulans, and find that pulse song responses are largely similar, despite differences in the spectral content of their songs. Our recordings inform how downstream circuits may read out behaviorally relevant information from central neurons in the AMMC.
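The low-pass-filtering account of the JON-to-AMMC transformation can be sketched in a few lines; the cutoff frequency, sampling rate, and simulated signals below are assumptions for illustration, not fits to the recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
pulses = (np.sin(2 * np.pi * 30 * t) > 0.99).astype(float)  # song-like pulse train
jon_response = pulses + 0.1 * np.random.randn(t.size)       # noisy receptor drive

b, a = butter(2, 100 / (fs / 2))              # 100 Hz low-pass (assumed cutoff)
ammc_response = filtfilt(b, a, jon_response)  # graded, smoothed central response
```

Because the filter only removes fast fluctuations, the slow spacing between pulses survives in `ammc_response`, which is the temporal feature the abstract argues is preserved for downstream circuits.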

Relevance:

10.00%

Abstract:

Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is however a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
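As a toy illustration of structure across time-scales (not the paper's actual statistical model), one can generate a signal as a fast carrier shaped by progressively slower positive envelopes, one per time-scale; all rates and scales below are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

fs = 16_000               # sampling rate, Hz (assumed)
n = fs                    # one second of signal

def slow_envelope(scale_s):
    # Smooth positive process; the smoothing width sets its time-scale.
    e = gaussian_filter1d(np.random.randn(n), sigma=scale_s * fs)
    return np.exp(e / max(e.std(), 1e-9))

carrier = np.random.randn(n)  # fast, sub-millisecond structure
sound = carrier * slow_envelope(0.005) * slow_envelope(0.05) * slow_envelope(0.5)
```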

Relevance:

10.00%

Abstract:

Auditory scene analysis is extremely challenging. One approach, perhaps that adopted by the brain, is to shape useful representations of sounds using prior knowledge about their statistical structure. For example, sounds with harmonic sections are common, and so time-frequency representations are efficient. Most current representations concentrate on the shorter components. Here, we propose representations for structures on longer time-scales, like the phonemes and sentences of speech. We decompose a sound into a product of processes, each with its own characteristic time-scale. This demodulation cascade relates to classical amplitude demodulation, but traditional algorithms fail to realise the representation fully. A new approach, probabilistic amplitude demodulation, is shown to outperform the established methods, and to extend easily to the representation of a full demodulation cascade.
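A minimal sketch of the classical baseline mentioned above, under assumed cutoff frequencies: extract a smooth envelope, divide it out to obtain a carrier, then demodulate the envelope again to form a two-level cascade. This shows only the traditional method, not the probabilistic approach the abstract proposes.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def demodulate(x, fs, cutoff_hz):
    """Split x into a smooth positive envelope and a fast carrier."""
    env = np.abs(hilbert(x))                     # instantaneous amplitude
    b, a = butter(2, cutoff_hz / (fs / 2))
    env = np.maximum(filtfilt(b, a, env), 1e-9)  # smooth and keep positive
    return env, x / env

fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))

env1, carrier = demodulate(x, fs, 50)  # faster, phoneme-scale envelope (assumed)
env2, _ = demodulate(env1, fs, 5)      # slower, sentence-scale envelope (assumed)
```

Repeating the envelope extraction at successively lower cutoffs is what produces the product-of-processes decomposition, with each level capturing structure at one characteristic time-scale.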