7 results for representations of the body
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The body is represented in the brain at levels that incorporate multisensory information. This thesis focused on interactions between vision and cutaneous sensations (i.e., touch and pain). Experiment 1 revealed that there are partially dissociable pathways for visual enhancement of touch (VET) depending upon whether one sees one's own body or the body of another person. This indicates that VET, a seemingly low-level effect on spatial tactile acuity, is actually sensitive to body identity. Experiments 2-4 explored the effect of viewing one's own body on pain perception. They demonstrated that viewing the body biases pain intensity judgments irrespective of actual stimulus intensity, and, more importantly, reduces the discriminative capacities of the nociceptive pathway encoding noxious stimulus intensity. The latter effect only occurs if the pain-inducing event itself is not visible, suggesting that viewing the body alone and viewing a stimulus event on the body have distinct effects on cutaneous sensations. Experiment 5 replicated an enhancement of visual remapping of touch (VRT) when viewing fearful human faces being touched, and further demonstrated that VRT does not occur for observed touch on non-human faces, even fearful ones. This suggests that the facial expressions of non-human animals may not be simulated within the somatosensory system of the human observer in the same way that the facial expressions of other humans are. Finally, Experiment 6 examined the enfacement illusion, in which synchronous visuo-tactile inputs cause another's face to be assimilated into the mental self-face representation. The strength of enfacement was not affected by the other's facial expression, supporting an asymmetric relationship between processing of facial identity and facial expressions.
Together, these studies indicate that multisensory representations of the body in the brain link low-level perceptual processes with the perception of emotional cues and body/face identity, and interact in complex ways depending upon contextual factors.
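The reduced "discriminative capacities" reported for Experiments 2-4 are the kind of effect usually quantified with signal detection theory. A minimal sketch, assuming a two-alternative intensity-discrimination design with hypothetical hit and false-alarm rates (not the thesis's actual data or analysis):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for discriminating high- vs. low-intensity stimuli:
# a drop in d' (e.g. while viewing the body) means poorer discrimination.
print(round(d_prime(0.85, 0.20), 2))  # → 1.88
```

A condition that lowers d' without shifting the response criterion would correspond to a genuine loss of discriminative capacity rather than a mere response bias.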
Abstract:
Recognizing one's body as separate from the external world plays a crucial role in detecting external events, and thus in planning adequate reactions to them. In addition, recognizing one's body as distinct from others' bodies allows remapping the experiences of others onto one's own sensory system, providing improved social understanding. In line with these assumptions, two well-known multisensory mechanisms demonstrated modulations of somatosensation when viewing both one's own and someone else's body: the Visual Enhancement of Touch (VET) and the Visual Remapping of Touch (VRT) effects. Vision of the body, in the former, and vision of the body being touched, in the latter, enhance tactile processing. The present dissertation investigated the multisensory nature of these mechanisms and their neural bases. Further experiments compared these effects when viewing one's own body or viewing another person's body. These experiments showed important differences in multisensory processing for one's own body and for other bodies, and also highlighted interactions between the VET and VRT effects. The present experimental evidence demonstrated that a multisensory representation of one's body, underpinned by a higher-order fronto-parietal network, sends rapid modulatory feedback to primary somatosensory cortex, thus functionally enhancing tactile processing. These effects were highly spatially specific and depended on current body position. In contrast, vision of another person's body can drive mental representations able to modulate tactile perception without any spatial constraint. Finally, these modulatory effects sometimes seem to interact with higher-order information, such as the emotional content of a face. This allows one's somatosensory system to adequately modulate the perception of external events on the body surface, as a function of its interaction with the emotional state expressed by another individual.
Abstract:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological conditions. These techniques are widely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other emerging fields, e.g. Brain-Computer Interfaces (BCI) and the study of cognitive processes. In this context, the use of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost, and limited portability. For these reasons, alternative low-cost techniques are an active object of research, typically based on simple recording hardware and on an intensive data-processing stage. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), in which electric potentials at the patient's scalp are recorded by high-impedance electrodes. In EEG, the potentials are generated directly by neuronal activity, while in EIT they arise from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility which currently severely limits the capabilities of these techniques. Moreover, processing the recorded data requires computationally intensive regularization techniques, which hampers applications with tight temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing.
The resulting software is accelerated using GPUs, in order to provide solutions in reasonable time and to meet the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
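The regularized inverse problem at the core of such a workflow can be illustrated with a toy Tikhonov (ridge) solve. This is a minimal sketch, not the thesis's implementation: the lead-field matrix, dimensions, regularization weight, and noise level below are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lead-field matrix: maps 5 source amplitudes to 8 electrodes.
# In a real pipeline this comes from a detailed numerical head model.
L = rng.standard_normal((8, 5))
true_sources = np.array([0.0, 1.0, 0.0, -0.5, 0.0])
y = L @ true_sources + 0.01 * rng.standard_normal(8)  # noisy measurements

# Tikhonov regularization: x_hat = argmin ||L x - y||^2 + lam * ||x||^2,
# solved via the normal equations (L^T L + lam I) x = L^T y.
lam = 1e-2
x_hat = np.linalg.solve(L.T @ L + lam * np.eye(5), L.T @ y)
print(np.round(x_hat, 2))
```

On a GPU, it is this linear-algebra core (or an iterative solver, for the much larger systems produced by realistic head models) that the acceleration targets.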
Abstract:
The question addressed by this dissertation is how the human brain builds a coherent representation of the body, and how this representation is used to recognize one's own body. Recent neuroimaging and TMS approaches have revealed hints of a distinct brain representation of the human body, as compared with other stimulus categories. Neuropsychological studies demonstrated that body-part recognition and self body-part recognition are separate processes subserved by two different, even if possibly overlapping, networks within the brain. Bodily self-recognition is one aspect of our ability to distinguish between self and others, and the self/other distinction is a crucial aspect of social behaviour. This is the reason why I conducted a series of experiments on subjects with everyday difficulties in social and emotional behaviour, such as patients with autism spectrum disorders (ASD) and patients with Parkinson's disease (PD). More specifically, I studied implicit self body/face recognition (Chapter 6) and the influence of emotional body postures on bodily self-processing in typically developing (TD) children as well as in ASD children (Chapter 7). I found that bodily self-recognition is present in TD and in ASD children, and that emotional body postures modulate the processing of one's own and others' bodies. Subsequently, I compared implicit and explicit bodily self-recognition in a neuro-degenerative pathology, namely in PD patients, and found a selective deficit in implicit but not in explicit self-recognition (Chapter 8). This finding suggests that implicit and explicit bodily self-recognition are separate processes subtended by different mechanisms that can be selectively impaired. If the bodily self is crucial for the self/other distinction, the space around the body (personal space) represents the space of interaction and communication with others. When I studied this space in autism, I found that personal space regulation is impaired in ASD children (Chapter 9).
Abstract:
A successful interaction with objects in the environment requires integrating information concerning object location with the shape, dimension, and position of body parts in space. The former information is coded in a multisensory representation of the space around the body, i.e. the peripersonal space (PPS), whereas the latter is enabled by an online, constantly updated, action-oriented multisensory representation of the body (BR) that is critical for action. One of the critical features of these representations is that neither PPS nor BR is fixed; both change dynamically with different types of experience. In a series of experiments, I studied the plastic properties of PPS and BR in humans. I developed a series of methods to measure the boundaries of the PPS representation (Chapter 4), to study its neural correlates (Chapter 3), and to assess BRs. These tasks were used to study changes in PPS and BR following tool-use (Chapter 5), multisensory stimulation (Chapter 6), amputation and prosthesis implantation (Chapter 7), or social interaction (Chapter 8). I found that changes in the function (tool-use) and the structure (amputation and prosthesis implantation) of the physical body elongate or shrink both PPS and BR. Social context and social interaction also shape the PPS representation. Such a high degree of plasticity suggests that our sense of the body in space is not given at once, but is constantly constructed and adapted through experience.
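As an illustration of what measuring a PPS boundary can look like: in common audio-tactile paradigms, tactile responses speed up once an approaching sound enters the PPS, and the boundary is estimated from where the reaction-time curve transitions. The data below are invented, and the midpoint-crossing estimate is a simplification of the psychophysical fitting a study like the one in Chapter 4 would use:

```python
import numpy as np

# Hypothetical tactile reaction times (ms) while a task-irrelevant sound
# sits at increasing distances (cm) from the body: fast near, slow far.
distance = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0])
rt = np.array([310.0, 315.0, 340.0, 370.0, 375.0, 378.0])

# Estimate the PPS boundary as the distance where RT crosses the midpoint
# between its near and far plateaus (rt is monotonically increasing here,
# as np.interp requires).
midpoint = (rt.min() + rt.max()) / 2.0            # 344.0 ms
boundary = float(np.interp(midpoint, rt, distance))
print(f"Estimated PPS boundary: {boundary:.1f} cm")  # → 42.0 cm
```

A shift of this crossing point after tool-use or social interaction would be read as an extension or contraction of the PPS.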