841 results for object representations
Abstract:
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in processing one of these auditory features alone or both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contralaterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex AI is modulated by the position of the sound, while that within anterior areas is not.
Abstract:
Dissertation submitted to obtain the Master's degree in Computer Engineering
Abstract:
Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization, and the equivalence of effects when memory discrimination was performed in the visual vs. auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.
Abstract:
A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.
Abstract:
Online geographic information systems provide the means to extract a subset of desired spatial information from a larger remote repository. Data retrieved representing real-world geographic phenomena are then manipulated to suit the specific needs of an end-user. Often this extraction requires the derivation of representations of objects specific to a particular resolution or scale from a single original stored version. Currently, standard spatial data handling techniques cannot support the multi-resolution representation of such features in a database. In this paper a methodology to store and retrieve versions of spatial objects at different resolutions with respect to scale, using standard database primitives and SQL, is presented. The technique involves heavy fragmentation of spatial features, which allows dynamic simplification into scale-specific object representations customised to the display resolution of the end-user's device. Experimental results comparing the new approach to traditional R-Tree indexing and external object simplification reveal that the former performs notably better for mobile and WWW applications, where client-side resources are limited and retrieved data loads are kept relatively small.
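The fragmentation idea described above can be sketched with standard SQL primitives. The following is a minimal illustration, not the paper's implementation: the table layout, the per-vertex `min_scale` attribute, and the threshold query are all assumptions made for the example. Each vertex of a stored geometry carries the coarsest scale at which it is still needed, so a single stored version yields a scale-specific simplification at query time.

```python
# Hypothetical sketch of scale-specific retrieval via feature fragmentation,
# using only standard SQL primitives (here through SQLite).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fragments (
        feature_id INTEGER,     -- the spatial object this vertex belongs to
        seq        INTEGER,     -- ordering of the vertex along the geometry
        x REAL, y REAL,
        min_scale  REAL         -- coarsest scale at which this vertex is kept
    )
""")

# One polyline stored once, pre-fragmented: every vertex is tagged with the
# scale below which it can be dropped without visible error on the display.
rows = [
    (1, 0, 0.0, 0.0, 1.0),   # endpoints survive at every scale
    (1, 1, 1.0, 0.3, 0.25),  # fine detail: only needed at large scales
    (1, 2, 2.0, 0.1, 0.5),
    (1, 3, 3.0, 0.0, 1.0),
]
conn.executemany("INSERT INTO fragments VALUES (?, ?, ?, ?, ?)", rows)

def simplify(feature_id, display_scale):
    """Return the vertices of a feature simplified to the requested scale."""
    cur = conn.execute(
        "SELECT x, y FROM fragments "
        "WHERE feature_id = ? AND min_scale >= ? ORDER BY seq",
        (feature_id, display_scale),
    )
    return cur.fetchall()

print(simplify(1, 0.5))   # mid-scale: drops the finest vertex
print(simplify(1, 1.0))   # coarse scale: endpoints only
```

The client only ever receives the fragments relevant to its resolution, which is the property the paper exploits for thin clients; a real system would also index `min_scale` and spatially partition the fragments.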
Abstract:
Spatial generalization skills in school children aged 8-16 were studied with regard to unfamiliar objects that had been previously learned in a cross-modal priming and learning paradigm. We observed a developmental dissociation, with younger children recognizing objects only from previously learnt perspectives, whereas older children generalized acquired object knowledge to new viewpoints as well. Haptic and, to a lesser extent, visual priming improved spatial generalization in all but the youngest children. The data support the idea of dissociable, view-dependent and view-invariant object representations with different developmental trajectories that are subject to modulatory effects of priming. Late-developing areas in the parietal or the prefrontal cortex may account for the delayed onset of view-invariant object recognition. © 2006 Elsevier B.V. All rights reserved.
Abstract:
It has been suggested that the deleterious effect of contrast reversal on visual recognition is unique to faces, not objects. Here we show from priming, supervised category learning, and generalization that there is no such thing as general invariance of recognition of non-face objects against contrast reversal and, likewise, changes in direction of illumination. However, when recognition varies with rendering conditions, invariance may be restored, and effects of continuous learning may be reduced, by providing prior object knowledge from active sensation. Our findings suggest that the degree of contrast invariance achieved reflects functional characteristics of object representations learned in a task-dependent fashion.
Abstract:
There is evidence for the late development in humans of configural face and animal recognition. We show that the recognition of artificial three-dimensional (3D) objects from part configurations develops similarly late. We also demonstrate that the cross-modal integration of object information reinforces the development of configural recognition more than the intra-modal integration does. Multimodal object representations in the brain may therefore play a role in configural object recognition. © 2003 Elsevier B.V. All rights reserved.
Abstract:
Spatial objects may not only be perceived visually but also by touch. We report recent experiments investigating to what extent prior object knowledge acquired in either the haptic or visual sensory modality transfers to a subsequent visual learning task. Results indicate that even mental object representations learnt in one sensory modality may attain a multi-modal quality. These findings seem incompatible with picture-based reasoning schemas but leave open the possibility of modality-specific reasoning mechanisms.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering.
Abstract:
This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by the unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent, and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (~100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes during the formation of the multisensory associations. They are thus consequences of multisensory interactions that persist over time to impact memory retrieval and object discrimination.
Abstract:
SOUND OBJECTS IN TIME, SPACE AND ACTION

The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. At the cortical level, sound objects are represented by temporo-spatial activity patterns within distributed neural networks. This investigation concerns temporal, spatial and action aspects as assessed in normal subjects using electrical imaging or measurement of motor activity induced by transcranial magnetic stimulation (TMS).

Hearing the same sound again has been shown to facilitate behavioral responses (repetition priming) and to modulate neural activity (repetition suppression). In natural settings the same source is often heard again and again, with variations in spectro-temporal and spatial characteristics. I have investigated how such repeats influence response times in a living vs. non-living categorization task, and the associated spatio-temporal patterns of brain activity in humans. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory cortex as a function of the temporal history of exposure to these objects. Often-heard sounds are coded by a modulation in a bilateral network. Recently heard sounds, independently of the number of previous exposures, are coded by a modulation of a left-sided network.

With sound objects which carry spatial information, I have investigated how spatial aspects of the repeats influence neural representations. Dynamic analyses of distributed source estimations revealed an ultra-rapid discrimination of sound objects which are characterized by spatial cues.
This discrimination involved two temporo-spatially distinct cortical representations, one associated with position-independent and the other with position-linked representations within the auditory ventral/what stream.

Action-related sounds were shown to increase the excitability of motoneurons within the primary motor cortex, possibly via an input from the mirror neuron system. The role of motor representations remains unclear. I have investigated repetition priming-induced plasticity of the motor representations of action sounds with the measurement of motor activity induced by TMS pulses applied on the hand motor cortex. TMS delivered to the hand area within the primary motor cortex yielded larger motor evoked potentials (MEPs) while the subject was listening to sounds associated with manual than non-manual actions. Repetition suppression was observed at the motoneuron level, since during repeated exposure to the same manual action sound the MEPs were smaller. I discuss these results in terms of a specialized neural network involved in sound processing, characterized by repetition-induced plasticity.

Thus, neural networks which underlie sound object representations are characterized by modulations which keep track of the temporal and spatial history of the sound and, in the case of action-related sounds, also of the way in which the sound is produced.
Abstract:
Repetition of environmental sounds, like their visual counterparts, can facilitate behavior and modulate neural responses, exemplifying plasticity in how auditory objects are represented or accessed. It remains controversial whether such repetition priming/suppression involves solely plasticity based on acoustic features and/or also access to semantic features. To evaluate contributions of physical and semantic features in eliciting repetition-induced plasticity, the present functional magnetic resonance imaging (fMRI) study repeated either identical or different exemplars of the initially presented object; reasoning that identical exemplars share both physical and semantic features, whereas different exemplars share only semantic features. Participants performed a living/man-made categorization task while being scanned at 3T. Repeated stimuli of both types significantly facilitated reaction times versus initial presentations, demonstrating perceptual and semantic repetition priming. There was also repetition suppression of fMRI activity within overlapping temporal, premotor, and prefrontal regions of the auditory "what" pathway. Importantly, the magnitude of suppression effects was equivalent for both physically identical and semantically related exemplars. That the degree of repetition suppression was irrespective of whether or not both perceptual and semantic information was repeated is suggestive of a degree of acoustically independent semantic analysis in how object representations are maintained and retrieved.
Abstract:
Recent findings suggest that the visuo-spatial sketchpad (VSSP) may be divided into two sub-components processing dynamic or static visual information. This model may help to resolve the conflicting data concerning the functioning of the VSSP in schizophrenia. The present study examined patients with schizophrenia and matched controls in a new working memory paradigm involving dynamic (the Ball Flight Task - BFT) or static (the Static Pattern Task - SPT) visual stimuli. In the BFT, the responses of the patients were apparently based on the retention of the last set of segments of the perceived trajectory, whereas control subjects relied on a more global strategy. We assume that the patients' performances result from a reduced capacity for chunking visual information, since they relied mainly on the retention of the last set of segments. This assumption is confirmed by the poor performance of the patients in the static task (SPT), which requires a combination of stimulus components into object representations. We assume that the static/dynamic distinction may help us to understand the VSSP deficits in schizophrenia. This distinction also raises questions about the hypothesis that visuo-spatial working memory can simply be dissociated into visual and spatial sub-components.
Abstract:
The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. In natural settings, a sound produced by a living being or an object provides information about the identity and the location of the sound source. A sound's identity is processed along the ventral "What" pathway, which consists of regions within the superior and middle temporal cortices as well as the inferior frontal gyrus. This work concerns the creation of individual auditory object representations in narrow semantic categories and their plasticity, using electrical imaging. Discrimination of sounds from broad categories has been shown to occur along a temporal hierarchy and in different brain regions along the ventral "What" pathway. However, stimuli belonging to the same semantic category, such as faces or voices, were shown to be discriminated in specific brain areas and are thought to represent a special class of stimuli. I have investigated how cortical representations of a narrow category, here birdsongs, are modulated by training novices to recognize songs of individual bird species. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory ventral "What" pathway as a function of the newly acquired level of expertise. Correct recognition of trained items induces a sharpening within a left-lateralized semantic network starting around 200 ms, whereas untrained items' processing occurs later, in lower-level and memory-related regions. With another narrow semantic category of sounds, here heartbeats, I investigated the cortical representations of correct and incorrect recognition of sounds. Source estimations revealed differential representations partially overlapping with regions of the semantic network that is activated once participants have become experts in the task.
Incorrect recognition also induced higher activation than correct recognition in regions processing lower-level features. The discrimination of heartbeat sounds is a difficult task and requires continuous listening. I investigated whether repetition effects are modulated by participants' behavioral performance. Dynamic source estimations revealed repetition suppression in areas located outside the semantic network. Therefore, individual environmental sounds become meaningful with training. Their representations mainly involve a left-lateralized network of brain regions that are tuned with expertise, as well as other brain areas not related to semantic processing and active in early stages of processing. The activity of this semantic network arises earlier than predicted by the temporal-hierarchy model.