3 results for Motor response suppression

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

80.00%

Publisher:

Abstract:

The general aim of the thesis was to investigate how and to what extent the characteristics of action organization are reflected in language, and how they influence language processing and understanding. Even though a huge amount of research has been devoted to the motor effects of language, the issue is still much debated in the literature: the majority of studies have focused on low-level motor effects such as the effector-relatedness of an action, whereas only a few have started to investigate systematically how specific aspects of action organization are encoded and reflected in language. After a review of previous studies on the relationship between language comprehension and action (Chapter 1) and a critical discussion of some of them (Chapter 2), the thesis comprises three experimental chapters, each devoted to a specific aspect of action organization. Chapter 3 presents a study designed to disentangle the actual time course of the involvement of the motor system during language processing. Three kinematics experiments were designed to determine whether, and at which stage of motor planning and execution, effector-related action verbs influence actions executed with either the same or a different effector. The results demonstrate that the goal of an action can be linguistically re-activated, producing a modulation of the motor response. In Chapter 4, a second study investigates the interplay between the motor perspective (that of the agent) and the organization of action in motor chains. More specifically, this kinematics study aims at investigating in greater depth how the goal can be translated into language, using as stimuli simple sentences composed of a pronoun (I, You, He/She) and a verb. The results showed that the perspective activated by the pronoun You reflects the motor pattern of the agent combined with the chain structure of the verb. These data confirm an early involvement of the motor system in language processing and suggest that it is specifically modulated by the activation of the agent's perspective. In Chapter 5, the issue of perspective is investigated further, focusing on its role in language comprehension. In particular, this study aimed at determining how a specific perspective (induced, for example, by a personal pronoun) modulates motor behaviour during and after language processing. A classical compatibility effect, the Action-sentence Compatibility Effect (ACE), was used to this aim. Three behavioural experiments investigated how the ACE is modulated by taking a first- or third-person perspective, and showed that the ACE occurs only when a first-person perspective is activated by the sentences used as stimuli. Overall, the data from this thesis help to disentangle several aspects of how action organization is translated into language and then reactivated during language processing. This constitutes a new contribution to the field, adding previously missing information on how specific aspects such as goal and perspective are linguistically described. In addition, these studies offer a new point of view from which to understand the functional implications of the involvement of the motor system during language comprehension, specifically with respect to our social interactions.
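The ACE discussed above is, in essence, an interaction in the response-time data: responses are typically faster when the response is compatible with the action described by the sentence, and in these experiments that advantage emerged only under a first-person perspective. The following sketch is purely illustrative Python (simulated response times, hypothetical condition labels) showing one way such a compatibility effect could be quantified separately for first- and third-person sentences.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated response times (ms). The compatibility advantage is built in only
# for the first-person perspective, mirroring the pattern described above;
# these numbers are invented for illustration, not the thesis data.
def simulate(perspective, compatible, n=40):
    base = 620
    effect = -35 if (perspective == "first" and compatible) else 0
    return base + effect + rng.normal(0, 40, n)

rows = []
for perspective in ("first", "third"):
    for compatible in (True, False):
        for rt in simulate(perspective, compatible):
            rows.append({"perspective": perspective,
                         "compatible": compatible,
                         "rt": rt})
df = pd.DataFrame(rows)

# Compatibility effect = mean RT(incompatible) - mean RT(compatible), per perspective
ace = (df[~df.compatible].groupby("perspective").rt.mean()
       - df[df.compatible].groupby("perspective").rt.mean())
print(ace)  # clearly positive for "first", near zero for "third"
```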

Relevance:

30.00%

Publisher:

Abstract:

In this work I address the study of language comprehension within an "embodied" framework. First, I present behavioural evidence supporting the idea that language modulates the motor system in a specific way, both at a proximal level (sensitivity to the effector) and at a distal level (sensitivity to the goal of the action in which the single motor acts are embedded). I present two studies that share essentially the same method: we manipulated the linguistic stimuli (the kind of sentence: hand action vs. foot action vs. mouth action) and the effector with which participants had to respond (hand vs. foot vs. mouth; dominant vs. non-dominant hand). Response-time analyses showed a specific modulation depending on the kind of sentence: participants were facilitated in the task (a sentence sensibility judgment) when the effector they had to use to respond was the same as the one to which the sentences referred. That is, during language comprehension a pre-activation of the motor system seems to take place, analogous (even if less intense) to the one detectable when we actually execute the action described by the sentence. Beyond this effector-specific modulation, we also found an effect of the goal suggested by the sentence: the hand effector was pre-activated not only by hand-action-related sentences, but also by sentences describing mouth actions, consistent with the fact that to act on an object with the mouth we first have to bring it to the mouth with the hand. After reviewing the evidence on the specificity of simulation with direct reference to the body (for instance, the kind of effector activated by language), I focus on specific properties of the objects to which words refer, in particular their weight. Here the hypothesis to test was whether both the perception and the execution of lifting movements are modulated by language comprehension. We used behavioural and kinematics methods, and we manipulated the linguistic stimuli (the kind of sentence: the lifting of heavy vs. light objects). To study movement perception, we measured the correlations between the weight of the objects lifted by an actor (heavy vs. light) and the estimates provided by the participants. To study movement execution, we measured the variance of kinematic parameters (velocity, acceleration, time to the first peak of velocity) during the actual lifting of heavy vs. light objects. Both kinds of measures revealed that language had a specific effect on the motor system, at both the perceptual and the motor level. Finally, I address the issue of abstract words. Several studies within the "embodied" framework have tried to explain the meaning of abstract words, but they account only for subsets of phenomena, so their results are difficult to generalize. We tried to circumvent this problem by contrasting transitive verbs (abstract and concrete) and nouns (abstract and concrete) in different combinations. The behavioural study was conducted with both German and Italian participants, as the two languages are syntactically different. We found that response times were faster for the compatible pairs (concrete verb + concrete noun; abstract verb + abstract noun) than for the mixed ones. Interestingly, for the mixed combinations the analyses showed a modulation due to the specific language (German vs. Italian): when the concrete word preceded the abstract one, responses were faster, regardless of grammatical class. The results are discussed in the framework of current views on abstract words. They highlight the important role of developmental and social aspects of language use, and confirm theories assigning a crucial role to both sensorimotor and linguistic experience for abstract words.
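The kinematic measures mentioned above (velocity, acceleration, time to the first velocity peak) are derived from the recorded movement trajectory. The sketch below is a minimal, hypothetical Python example, with a made-up sampling rate and a synthetic position trace standing in for the motion-capture data, of how such parameters could be extracted.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.5, 1 / fs)   # time axis in seconds

# Stand-in for a recorded wrist-marker position trace (metres): a smooth
# 0.3 m displacement over 1 s, then rest. Real data would come from the
# motion-capture system used in the experiments.
position = 0.3 * (1 - np.cos(np.pi * np.clip(t, 0, 1))) / 2

velocity = np.gradient(position, 1 / fs)       # numerical derivative (m/s)
acceleration = np.gradient(velocity, 1 / fs)   # second derivative (m/s^2)

peak_idx = np.argmax(velocity)
peak_velocity = velocity[peak_idx]
time_to_peak_velocity = t[peak_idx]            # seconds from movement onset

print(f"peak velocity: {peak_velocity:.3f} m/s "
      f"at {time_to_peak_velocity:.2f} s after onset")
```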

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans), and ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus, SC) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the activity of these disparate areas is coordinated, so as to group together the characteristics of the same object (the binding problem) while keeping segregated the properties belonging to different, simultaneously present objects (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized through the simultaneous activation of the cortical areas representing its different features; and 3) groups of properties belonging to different objects are kept separated in the time domain. Chapter 1.1 and Chapter 1.2 present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); and ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise).
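As a rough illustration of the building block of these networks, the following Python sketch integrates a single Wilson-Cowan excitatory-inhibitory unit with a commonly used limit-cycle parameter set. The values are illustrative, not those of the models in Chapters 1.1 and 1.2, and the inter-unit coupling implementing the similarity and previous-knowledge rules is omitted.

```python
import numpy as np

# Wilson-Cowan sigmoid, shifted so that S(0) = 0 (as in Wilson & Cowan, 1972)
def S(x, a, theta):
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

# Illustrative parameters (a classical limit-cycle configuration, not the thesis values)
c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0     # E->E, I->E, E->I, I->I couplings
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7  # sigmoid slopes and thresholds
P, Q = 1.25, 0.0                           # external drives
tau = 8.0                                  # time constant (ms)

dt, T = 0.05, 500.0                        # Euler step and duration (ms)
n = int(T / dt)
E, I = 0.1, 0.05
trace = np.empty(n)

for k in range(n):
    dE = (-E + (1 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) / tau
    dI = (-I + (1 - I) * S(c3 * E - c4 * I + Q, a_i, th_i)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace[k] = E

# Rough frequency estimate from upward zero crossings of the demeaned E trace
x = trace[n // 2:] - trace[n // 2:].mean()
cycles = np.sum((x[:-1] < 0) & (x[1:] >= 0))
print(f"estimated oscillation frequency: {cycles / (T / 2 / 1000.0):.0f} Hz")
```

In the thesis models, many such units, one per object feature, are coupled so that units coding features of the same object lock their oscillations while units coding different objects fire out of phase.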
In Chapter 1.3 the previous models are extended to realize a semantic memory in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). The synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network realizes some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
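The association between sensory-motor features and lexical units can be conveyed by a much simpler toy version of this Hebbian scheme: binary feature vectors stand in for object representations, one-hot vectors for words, and the inter-network synapses are strengthened whenever an object and its word co-occur. The Python sketch below is only schematic (no oscillatory dynamics, made-up patterns) and is not the model of Chapter 1.3.

```python
import numpy as np

# Toy sensory-motor feature patterns (rows = objects, columns = features)
objects = np.array([
    [1, 1, 0, 0, 1, 0],   # hypothetical object 1
    [1, 0, 1, 0, 0, 1],   # hypothetical object 2
    [0, 1, 1, 1, 0, 0],   # hypothetical object 3
], dtype=float)

words = np.eye(3)          # one-hot lexical units, one per word

# Hebbian training: strengthen synapses between co-active word and feature units
lr = 0.5
W = np.zeros((words.shape[1], objects.shape[1]))   # word -> feature synapses
for obj, word in zip(objects, words):
    W += lr * np.outer(word, obj)

# Retrieval from a linguistic cue: activating word 1 reinstates its features
cue = words[0]
print(np.round(cue @ W, 2))

# Retrieval from incomplete sensory input: partial features still point to word 1,
# while objects sharing features produce weaker co-activation (semantic links)
partial = np.array([1, 1, 0, 0, 0, 0], dtype=float)
print(np.round(partial @ W.T, 2))
```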
PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to a combined multisensory stimulus is greater than that to its individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of an SC neuron to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure.

Chapter 2.1 presents a simple neural network model of this structure, able to reproduce a large number of SC behaviours such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was carried out in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation. With a single set of parameters, the model is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
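A stripped-down static sketch can convey how a saturating non-linearity yields both multisensory enhancement and inverse effectiveness, and how removing the cortical contribution collapses the multisensory response onto the best unisensory one. Everything below is illustrative: the max-like competitive rule used for the "cortex off" condition is a crude stand-in for the interneuron circuitry of the actual model, and all numbers are arbitrary.

```python
import numpy as np

def sc_response(visual, auditory, cortex_on=True):
    """Static toy model of an SC neuron's firing (arbitrary units).

    With cortical input the two modalities are summed before the saturating
    non-linearity; without it, a crude competitive rule lets only the stronger
    input through, so the multisensory response matches the best unisensory one
    (the pattern reported by Jiang et al., 2001).
    """
    drive = visual + auditory if cortex_on else max(visual, auditory)
    return 30.0 / (1.0 + np.exp(-(drive - 5.0)))   # saturating sigmoid

def enhancement(v, a, cortex_on=True):
    """Multisensory enhancement (%) relative to the best unisensory response."""
    multi = sc_response(v, a, cortex_on)
    best = max(sc_response(v, 0.0, cortex_on), sc_response(0.0, a, cortex_on))
    return 100.0 * (multi - best) / best

# Inverse effectiveness: weak paired stimuli give proportionately larger
# enhancement than strong ones, because they sit on the steep part of the sigmoid.
print("weak pair, cortex on  :", round(enhancement(3.0, 3.0)), "%")
print("strong pair, cortex on:", round(enhancement(6.0, 6.0)), "%")
print("weak pair, cortex off :", round(enhancement(3.0, 3.0, cortex_on=False)), "%")
```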