998 results for Cross-modal


Relevance: 60.00%

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation by means of data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions concerns how correlation between these disparate areas is achieved, so that the characteristics of the same object can be grouped together (binding problem) while the properties belonging to different, simultaneously present objects are kept segregated (segmentation problem). 
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. 
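A Wilson-Cowan oscillator of the kind used in these networks couples an excitatory and an inhibitory neural population through sigmoidal activation functions. The following is a minimal single-oscillator sketch; the parameter values and sigmoid shape are illustrative, not those of the thesis models:

```python
import numpy as np

def wilson_cowan(tau_e=1.0, tau_i=1.0, w_ee=16.0, w_ei=12.0,
                 w_ie=15.0, w_ii=3.0, p=1.25, dt=0.01, steps=5000):
    """Euler integration of one excitatory/inhibitory Wilson-Cowan pair.
    Returns the excitatory activity over time (illustrative parameters)."""
    def sigmoid(x, a=1.3, theta=4.0):
        return 1.0 / (1.0 + np.exp(-a * (x - theta)))

    e, i = 0.1, 0.05  # initial excitatory and inhibitory activities
    trace = []
    for _ in range(steps):
        # excitatory population: driven by self-excitation, inhibition, input p
        de = (-e + sigmoid(w_ee * e - w_ei * i + p)) / tau_e
        # inhibitory population: driven by excitatory input and self-inhibition
        di = (-i + sigmoid(w_ie * e - w_ii * i)) / tau_i
        e += dt * de
        i += dt * di
        trace.append(e)
    return np.array(trace)

trace = wilson_cowan()
```

Because the sigmoid output lies in (0, 1) and each Euler step is a convex combination of the current state and that output, the activity remains bounded; in the thesis models, many such units are coupled and synchronize in the γ-band.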
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). 
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models and neural networks, which can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. 
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional and cortex deactivated, and with a particular type of membrane receptors (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
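Multisensory enhancement and inverse effectiveness can be illustrated with a toy static nonlinearity: when the response sigmoid saturates, strong paired stimuli gain proportionately less than weak ones. This is only a sketch of the principle; the sigmoid and its parameters are illustrative and are not the circuit model described above:

```python
import math

def sc_response(visual, auditory, theta=0.5, slope=5.0):
    """Toy SC neuron: firing rate as a sigmoid of the summed sensory drive."""
    drive = visual + auditory
    return 1.0 / (1.0 + math.exp(-slope * (drive - theta)))

def enhancement(v, a):
    """Percent multisensory enhancement over the best unisensory response."""
    multi = sc_response(v, a)
    best = max(sc_response(v, 0.0), sc_response(0.0, a))
    return 100.0 * (multi - best) / best

weak = enhancement(0.2, 0.2)    # weak paired stimuli
strong = enhancement(0.8, 0.8)  # strong paired stimuli near saturation
```

With these illustrative values the weak pairing yields a proportionately larger enhancement than the strong pairing, reproducing the inverse-effectiveness principle stated above.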

Relevance: 60.00%

Abstract:

Synaesthetes taste touches, or see colours and shapes when they hear music or smell a scent. Forms as unusual as weekday-colour, touch-smell or pain-colour synaesthesia have also been found. The ability, termed "binding" by neuroscientists and philosophers, to couple several stimuli that are processed in different brain areas and to combine them into a unified representation, an experienced unity of consciousness, concerns every healthy person. Synaesthetes, however, are people whose brains are capable of "hyperbinding", or hypercoherent experience, since considerably more such couplings arise in them. The phenomenon of synaesthesia has been known for several centuries, but it remains a puzzle. Until now, researchers believed that such phenomena rested merely on unusually dense neuronal wiring between sensory brain regions. From current research, however, one can conclude that the cause of synaesthesia is not solely a strengthened connection between two sensory channels. According to our own studies, the sensory stimulus itself, along with its hard-wired sensory pathways, is not necessary for triggering the synaesthetic experience. A fundamental role is instead played by the stimulus's meaning for the synaesthete. For the assumption that semantics is decisive for synaesthetic perception to hold, synaesthetic associations would have to be rather flexible. And that is exactly what was found: normally very stable synaesthetic associations can, under certain conditions, be transferred to new inducers. A further investigation concerned the newly discovered swimming-style-colour synaesthesia, which emerges not only when synaesthetes swim, but also when they think about swimming. Even the names of these characteristic movements can trigger their colour sensations as soon as they appear in a fitting context. As is known from other examples in brain research, frequently used neuronal pathways become ever more strongly developed over time. So if a synaesthete frequently encounters certain stimuli and thereby has a corresponding concurrent sensation, this can over time also change his brain anatomy, so that the appropriate structural connections arise. The proposed explanation is thus consistent with previous findings. The present dissertation illustrates how uniformly and coherently perception, motor function, emotions and thought (sensory and cognitive processes) are interconnected in the phenomenon of synaesthesia. The synaesthetic non-conceptual concurrent experience accompanies the conceptual content of its inducer. Similarly, we ascribe ordinary, non-synaesthetic phenomenal properties to particular concepts. Synaesthesia simply expresses such interconnections in an impressive way and lets manifold experience become more strongly integrated.

Relevance: 60.00%

Abstract:

The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
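The searchlight logic described above, in which a classifier is tested on the local activation pattern around each voxel, can be sketched with synthetic data. This is a simplified one-dimensional stand-in for the spherical searchlight (the data, leave-one-out nearest-centroid classifier, and dimensions are all illustrative, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 40 trials x 60 voxels; labels 0/1 = leftward/rightward motion.
# Only voxels 20-29 carry direction information (purely illustrative).
labels = np.repeat([0, 1], 20)
patterns = rng.normal(size=(40, 60))
patterns[labels == 1, 20:30] += 1.5

def searchlight_accuracy(X, y, radius=2):
    """Leave-one-out nearest-centroid decoding in a sliding voxel window."""
    n_trials, n_vox = X.shape
    acc = np.zeros(n_vox)
    for v in range(n_vox):
        sl = X[:, max(0, v - radius):min(n_vox, v + radius + 1)]
        correct = 0
        for t in range(n_trials):
            train = np.delete(np.arange(n_trials), t)
            c0 = sl[train][y[train] == 0].mean(axis=0)  # class-0 centroid
            c1 = sl[train][y[train] == 1].mean(axis=0)  # class-1 centroid
            pred = int(np.linalg.norm(sl[t] - c1) < np.linalg.norm(sl[t] - c0))
            correct += int(pred == y[t])
        acc[v] = correct / n_trials
    return acc

acc = searchlight_accuracy(patterns, labels)
```

Searchlights centred on the informative voxels decode direction well above chance, while uninformative regions hover around 50%, which is the map-level contrast such studies report.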

Relevance: 60.00%

Abstract:

Human subjects overestimate the change of rising intensity sounds compared with falling intensity sounds. Rising sound intensity has therefore been proposed to be an intrinsic warning cue. In order to test this hypothesis, we presented rising, falling, and constant intensity sounds to healthy humans and gathered psychophysiological and behavioral responses. Brain activity was measured using event-related functional magnetic resonance imaging. We found that rising compared with falling sound intensity facilitates the autonomic orienting reflex and phasic alertness to auditory targets. Rising intensity sounds produced neural activity in the amygdala, which was accompanied by activity in the intraparietal sulcus, superior temporal sulcus, and temporal plane. Our results indicate that rising sound intensity is an elementary warning cue eliciting adaptive responses by recruiting attentional and physiological resources. Regions involved in cross-modal integration were activated by rising sound intensity, whereas this study found no support for a right-hemisphere phasic alertness network.

Relevance: 60.00%

Abstract:

Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008. (c) 2008 Wiley-Liss, Inc.

Relevance: 60.00%

Abstract:

A tacitly held assumption in synesthesia research is the unidirectionality of digit-color associations. This notion is based on synesthetes' report that digits evoke a color percept, but colors do not elicit any numerical impression. In a random color generation task, we found evidence for an implicit co-activation of digits by colors, a finding that constrains neurological theories concerning cross-modal associations in general and synesthesia in particular.

Relevance: 60.00%

Abstract:

This paper presents a shallow dialogue analysis model, aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
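The latent-semantic-analysis component for topic segmentation can be illustrated in miniature: project sentences into a low-rank LSA space via SVD and hypothesize a topic boundary where cohesion between adjacent sentences dips. The transcript, vocabulary, and rank below are toy assumptions, not the paper's data or parameters:

```python
import numpy as np

# Toy transcript: two topics (budget, then schedule); purely illustrative.
sentences = [
    "the budget for the project is fixed",
    "we must cut the budget costs",
    "costs exceed the project budget",
    "the meeting schedule moves to monday",
    "monday schedule suits the team",
    "the team confirmed the meeting schedule",
]

vocab = sorted({w for s in sentences for w in s.split()})
counts = np.array([[s.split().count(w) for w in vocab] for s in sentences],
                  dtype=float)

# LSA: truncated SVD of the sentence-term matrix, keeping k dimensions.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
lsa = U[:, :k] * S[:k]  # sentence coordinates in the reduced space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Cohesion between adjacent sentences; a boundary is placed at the minimum.
sims = [cos(lsa[i], lsa[i + 1]) for i in range(len(lsa) - 1)]
boundary = int(np.argmin(sims))  # index of the least cohesive gap
```

On this toy input, the cross-topic gap between the third and fourth sentences has lower LSA cohesion than its within-topic neighbours, which is the signal a segmenter thresholds.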

Relevance: 60.00%

Abstract:

Research has mainly focussed on the perceptual nature of synaesthesia. However, synaesthetic experiences are also semantically represented. Our aim was to develop a task to investigate the semantic representation of the concurrent and its relation to the inducer in grapheme-colour synaesthesia. Non-synaesthetes were tested with either a lexical-decision (i.e., word / non-word) or a semantic-classification (i.e., edibility decision) task. Targets consisted of words strongly associated with a specific colour (e.g., banana - yellow) and neutral words not associated with a specific colour (e.g., aunt). Target words were primed with colours: the prime-target relationship was either intramodal (i.e., word - word) or crossmodal (i.e., colour patch - word). Each of the four task versions consisted of three conditions: congruent (same colour for prime and target), incongruent (different colour), and unrelated (neutral target). For both tasks (i.e., lexical and semantic) and both versions of the task (i.e., intramodal and crossmodal), we expected faster reaction times (RTs) in the congruent condition than in the neutral condition and slower RTs in the incongruent condition than in the neutral condition. Stronger effects were expected in the intramodal condition due to the overlap in prime-target modality. The results partly confirmed the hypotheses. We conclude that the tasks and hypotheses can be readily adopted to investigate the nature of the representation of synaesthetic experiences.

Relevance: 60.00%

Abstract:

To investigate the stability of trace reactivation in healthy older adults, 22 older volunteers with no significant neurological history participated in a cross-modal priming task. Whilst both object relative center embedded (ORC) and object relative right branching (ORR) sentences were employed, working memory load was reduced by limiting the number of words separating the antecedent from the gap for both sentence types. Analysis of the results did not reveal any significant trace reactivation for the ORC or ORR sentences. The results did reveal, however, a positive correlation between age and semantic priming at the pre-gap position and a negative correlation between age and semantic priming at the gap position for ORC sentences. In contrast, there was no correlation between age and priming effects for the ORR sentences. These results indicate that trace reactivation may be sensitive to a variety of age-related factors, including lexical activation and working memory. The implications of these results for sentence processing in the older population are discussed.

Relevance: 60.00%

Abstract:

Spatial generalization skills in school children aged 8-16 were studied with regard to unfamiliar objects that had been previously learned in a cross-modal priming and learning paradigm. We observed a developmental dissociation with younger children recognizing objects only from previously learnt perspectives whereas older children generalized acquired object knowledge to new viewpoints as well. Haptic and - to a lesser extent - visual priming improved spatial generalization in all but the youngest children. The data supports the idea of dissociable, view-dependent and view-invariant object representations with different developmental trajectories that are subject to modulatory effects of priming. Late-developing areas in the parietal or the prefrontal cortex may account for the retarded onset of view-invariant object recognition. © 2006 Elsevier B.V. All rights reserved.

Relevance: 60.00%

Abstract:

There is evidence for the late development in humans of configural face and animal recognition. We show that the recognition of artificial three-dimensional (3D) objects from part configurations develops similarly late. We also demonstrate that the cross-modal integration of object information reinforces the development of configural recognition more than the intra-modal integration does. Multimodal object representations in the brain may therefore play a role in configural object recognition. © 2003 Elsevier B.V. All rights reserved.

Relevance: 60.00%

Abstract:

Reflective Logic and Default Logic are both generalized so as to allow universally quantified variables to cross modal scopes, whereby the Barcan formula and its converse hold. This is done by representing both the fixed-point equation for Reflective Logic and the fixed-point equation for Default Logic as necessary equivalences in the Modal Quantificational Logic Z, and then inserting universal quantifiers before the defaults. The two resulting systems, called Quantified Reflective Logic and Quantified Default Logic, are then compared by deriving metatheorems of Z that express their relationships. The main result is to show that every solution to the equivalence for Quantified Default Logic is a strongly grounded solution to the equivalence for Quantified Reflective Logic. It is further shown that Quantified Reflective Logic and Quantified Default Logic have exactly the same solutions when no default has an entailment condition.
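The Barcan formula and its converse, which this treatment of quantification across modal scopes is designed to validate, can be stated as:

```latex
\forall x\,\Box\,\varphi(x) \;\rightarrow\; \Box\,\forall x\,\varphi(x)
\quad \text{(Barcan formula)}
\qquad
\Box\,\forall x\,\varphi(x) \;\rightarrow\; \forall x\,\Box\,\varphi(x)
\quad \text{(converse Barcan formula)}
```

Together these two schemata allow the universal quantifier and the necessity operator to commute, which is what permits quantifying into the fixed-point equivalences described above.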

Relevance: 60.00%

Abstract:

Acknowledgements We thank Brian Roberts and Mike Harris for responding to our questions regarding their paper; Zoltan Dienes for advice on Bayes factors; Denise Fischer, Melanie Römer, Ioana Stanciu, Aleksandra Romanczuk, Stefano Uccelli, Nuria Martos Sánchez, and Rosa María Beño Ruiz de la Sierra for help collecting data; Eva Viviani for managing data collection in Parma. We thank Maurizio Gentilucci for letting us use his lab, and the Centro Intradipartimentale Mente e Cervello (CIMeC), University of Trento, and especially Francesco Pavani for lending us his motion tracking equipment. We thank Rachel Foster for proofreading. KKK was supported by a Ph.D. scholarship as part of a grant to VHF within the International Graduate Research Training Group on Cross-Modal Interaction in Natural and Artificial Cognitive Systems (CINACS; DFG IKG-1247) and TS by a grant (DFG – SCHE 735/3-1); both from the German Research Council.

Relevance: 60.00%

Abstract:

Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

Relevance: 60.00%

Abstract:

Colour words abound with figurative meanings, expressing much more than visual signals. Some of these figurative properties are well known; in English, for example, black is associated with EVIL and blue with DEPRESSION. Colours themselves are also described in metaphorical terms using lexis from other domains of experience, such as when we talk of deep blue, drawing on the domain of spatial position. Both metaphor and colour are of central concern to semantic theory; moreover, colour is recognised as a highly productive metaphoric field. Despite this, comparatively few works have dealt with these topics in unison, and even those few have tended to focus on Basic Colour Terms (BCTs) rather than including non-BCTs. This thesis addresses the need for an integrated study of both BCTs and non-BCTs, and provides an overview of metaphor and metonymy within the semantic area of colour. Conducted as part of the Mapping Metaphor project, this research uses the unique data source of the Historical Thesaurus of English (HT) to identify areas of meaning that share vocabulary with colour and thus point to figurative uses. The lexicographic evidence is then compared to current language use, found in the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA), to test for currency and further developments or changes in meaning. First, terms for saturation, tone and brightness are discussed. This lexis often functions as a hue modifier and is found to transfer into COLOUR from areas such as LIFE, EMOTION, TRUTH and MORALITY. The evidence for cross-modal links between COLOUR and SOUND, TOUCH and DIMENSION is then presented. Each BCT is discussed in turn, along with a selection of non-BCTs, and it is revealed how frequently hue terms engage in figurative meanings. This includes the secondary BCTs, with the only exception being orange, and a number of non-BCTs. 
All of the evidence discussed confirms that figurative uses of colour originate through a process of metonymy, although these are often extended into metaphor.