36 results for Object categories


Relevance:

20.00%

Abstract:

Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contra-laterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.

Relevance:

20.00%

Abstract:

Using a head-mounted eye tracker, we assessed spatial recognition abilities (e.g., reaction to object permutation, removal, or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n=40) obtained a score totalling a 93.7% success rate, whereas the "Normal Control" group (n=40) scored 55.6% and took longer to fixate on the displaced object. The participants with an intellectual disability thus had a more accurate perception of spatial changes than controls. Interestingly, the ID participants were more reactive to object displacement than to removal of the object. In the specific test of novelty detection, however, the scores were similar, with both groups approaching 100% detection. Analysis of the strategies used by the ID group revealed that they engaged in more systematic object checking and were more sensitive than the control group to changes in the structure of the environment. Indeed, during the familiarisation phase, the ID group explored the collection of objects more slowly, and fixed their gaze for longer on a significantly smaller number of fixation points during visual sweeping.

Relevance:

20.00%

Abstract:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses to human versus animal vocalizations were significantly stronger, but topographically indistinguishable, starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination thus transpires at times synchronous with those of face discrimination but is not functionally specialized.

Relevance:

20.00%

Abstract:

Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and precisely which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms post-stimulus onset, responses to such context-related sounds significantly differed from those to context-free sounds in both the strength and topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.

Relevance:

20.00%

Abstract:

OBJECTIVE: To identify factors associated with intent to stay in hospital employment among five categories of healthcare professionals, using an adapted version of the conceptual model of intent to stay (CMIS). DESIGN: A cross-sectional survey of Lausanne University Hospital employees performed in the fall of 2011. Multigroup structural equation modeling was used to test the adapted CMIS model across professional groups. MEASURES: Satisfaction, self-fulfillment, workload, working conditions, burnout, overall job satisfaction, institutional identification and intent to stay. PARTICIPANTS: 3364 respondents: 494 physicians, 1228 nurses, 509 laboratory technicians, 935 administrative staff and 198 psycho-social workers. RESULTS: For all professional categories, self-fulfillment increased intent to stay (all β > 0.14, P < 0.05), and burnout decreased intent to stay by weakening job satisfaction (β < -0.23 and β > 0.22, P < 0.05). Some factors were specific to particular professional categories: workload was associated with nurses' intent to stay (β = -0.15), and physicians' institutional identification mitigated the effect of burnout on intent to stay (β = -0.15 and β = 0.19). CONCLUSION: Respondents' intent to stay in a position depended on both global and profession-specific factors. Identifying these factors may help in mapping interventions and retention plans at both the hospital and professional-group level.

Relevance:

20.00%

Abstract:

The aim of this paper is to bring into consideration a way of studying culture in infancy. An emphasis is put on the role that the material object plays in early interactive processes. Regarded as a cultural artefact, the object is seen as a fundamental element within triadic mother-object-infant interactions and is believed to be a driving force for both communicative and cognitive development. In order to reconsider the importance of the object in child development and to present an approach to studying object construction, accounts in the literature on early communication development and the importance of the object are reviewed and discussed in the light of the cultural specificity of the material object.


Relevance:

20.00%

Abstract:

Multisensory processes facilitate perception of currently-presented stimuli and can likewise enhance later object recognition. Memories for objects originally encountered in a multisensory context can be more robust than those for objects encountered in an exclusively visual or auditory context [1], upturning the assumption that memory performance is best when encoding and recognition contexts remain constant [2]. Here, we used event-related potentials (ERPs) to provide the first evidence for direct links between multisensory brain activity at one point in time and subsequent object discrimination abilities. Across two experiments we found that whether an individual would show a benefit or an impairment during later object discrimination could be predicted from their brain responses to multisensory stimuli upon the initial encounter. These effects were observed despite the multisensory information being meaningless, task-irrelevant, and presented only once. We provide critical insights into the advantages associated with multisensory interactions: they are not limited to the processing of current stimuli, but likewise encompass the benefit of one's memories for object recognition in later, unisensory contexts.

Relevance:

20.00%

Abstract:

Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
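The discrimination index d' used above is computed from hit and false-alarm rates on the old/new recognition task as z(hit rate) − z(false-alarm rate). A minimal sketch, using a log-linear correction for extreme rates (an assumption for illustration; the abstract does not state which correction, if any, the authors applied):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the
    z-transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g. 45 of 50 repeated sounds correctly judged "old" and 5 of 50
# initial sounds wrongly judged "old" -> good discrimination:
print(round(d_prime(45, 5, 5, 45), 2))
```

Higher d' for sounds first encountered with a congruent image, as reported above, would correspond to a larger gap between the z-transformed hit and false-alarm rates for that condition.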

Relevance:

20.00%

Abstract:

Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether this effect generalizes across senses, and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs, and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.