989 results for Visual stimuli
Abstract:
Doctoral Thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties); otherwise, interpretation of the results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience, although these methods can be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level equalizes individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted to a standardized value derived from the entire stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to obtain a minimal value of this metric between the populations, minimizing the spectral differences between image sets in a completely data-driven manner. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
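As a rough illustration, the two levels could be sketched as follows. This is a minimal numpy sketch, not the published implementation: the function names, the use of the grand mean as the luminance target, the 8-bit clipping, and the simple random search over permutations are all assumptions.

```python
# Sketch of the two-level stimulus-control procedure described above:
# (1) mean-luminance equalization, (2) permutation-based spectral matching.
import numpy as np

def equalize_mean_luminance(images, target=None):
    """Level 1: shift each image so its mean luminance equals a common target.

    `images` has shape (n_images, height, width); the target defaults to the
    grand mean over the whole stimulus battery (an assumed choice).
    """
    images = images.astype(float)
    if target is None:
        target = images.mean()
    means = images.mean(axis=(1, 2), keepdims=True)
    return np.clip(images - means + target, 0, 255)  # assumes 8-bit images

def spectral_dissimilarity(set_a, set_b):
    """Level 2: RMS difference between the mean amplitude spectra of two sets,
    taken over spatial frequencies along the x- and y-dimensions."""
    spec_a = np.abs(np.fft.fft2(set_a)).mean(axis=0)
    spec_b = np.abs(np.fft.fft2(set_b)).mean(axis=0)
    return np.sqrt(np.mean((spec_a - spec_b) ** 2))

def permute_to_minimize(images, n_a, n_iter=1000, rng=None):
    """Randomly reassign images to two sets (sizes n_a and len(images) - n_a),
    keeping the assignment with the smallest spectral dissimilarity."""
    rng = np.random.default_rng(rng)
    best_idx, best_d = None, np.inf
    for _ in range(n_iter):
        order = rng.permutation(len(images))
        d = spectral_dissimilarity(images[order[:n_a]], images[order[n_a:]])
        if d < best_d:
            best_idx, best_d = order, d
    return best_idx, best_d
```

A brute-force random search is simply the most transparent way to realize the data-driven permutation step; the original method may organize this search differently.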
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
Different visual stimuli have been shown to recruit different mental imagery strategies. However, the role of specific stimulus properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mentally processing visual stimuli characterized by different bodily contexts, in the present study we investigated whether the mental rotation of stimuli showing hands either attached to a body (hands-on-body) or not (hands-only) would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both types of stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum or palm view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only stimuli, but not of hands-on-body stimuli, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, compared with hands-only stimuli, mental rotation of hands-on-body stimuli is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence that mental transformation of hands-on-body stimuli relies preferentially on visual mechanisms, whereas that of hands-only stimuli relies on kinesthetic mechanisms.
Abstract:
Mental imagery is defined as an experience that resembles perceptual experience but occurs in the absence of physical stimulation. Previous research has shown that mental imagery improves performance in several domains, for example motor performance, yet its role in perceptual learning has not been addressed. Perceptual learning is the stable improvement of performance through the repetition of a given task. Here we focus on a specific sensory modality: vision. This thesis presents a series of empirical results showing that perceptual improvement can also occur in the absence of physical stimulation: imagining visual stimuli is sufficient for successful perceptual learning and leads to better performance with the real stimuli. Hence, the processes underlying perceptual learning are not only stimulus-driven but can also be driven by internally generated signals. Moreover, perceptual learning via mental imagery occurs not only when physical stimuli are (partially) absent, but also in conditions where the presented stimuli are uninformative with respect to the task that has to be learned.
Abstract:
Introduction: Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self. Recent evidence suggests that low-level mechanisms of multisensory and sensorimotor integration may play a fundamental role in encoding self-location and the first-person perspective (Blanke and Metzinger, 2009). Neurological patients with out-of-body experiences (OBE) suffer from abnormal self-location and first-person perspective due to damage to the temporo-parietal junction (Blanke et al., 2004). Although self-location and the first-person perspective can be studied experimentally (Lenggenhager et al., 2009), the neural underpinnings of self-location have yet to be investigated. To investigate the brain network involved in self-location and the first-person perspective we used visuo-tactile multisensory conflict, magnetic resonance (MR)-compatible robotics, and fMRI in study 1, and lesion analysis in a sample of 9 patients with OBE due to focal brain damage in study 2. Methods: Twenty-two participants saw a video showing either a person's back or an empty room being stroked (visual stimuli) while an MR-compatible robotic device stroked their back (tactile stimulation). Direction and speed of the seen stroking could either correspond (synchronous) or not (asynchronous) to those of the felt stroking. Each run comprised the four conditions of a 2x2 factorial design with Object (Body, No-Body) and Synchrony (Synchronous, Asynchronous) as main factors. Self-location was estimated using the mental ball dropping task (MBD; Lenggenhager et al., 2009). After the fMRI session participants completed a 6-item questionnaire adapted from the original questionnaire created by Botvinick and Cohen (1998) and based on questions and data obtained by Lenggenhager et al. (2007, 2009). They were also asked to complete a questionnaire to disclose the perspective they adopted during the illusion. Response times (RTs) for the MBD and the fMRI data were analyzed with a 3-way mixed-model ANOVA with the between-subjects factor Perspective (up, down) and the two within-subjects factors Object (body, no-body) and Stroking (synchronous, asynchronous). Quantitative lesion analysis was performed using MRIcron (Rorden et al., 2007). We compared the distribution of brain lesions confirmed by multimodality imaging (Knowlton, 2004) in patients with OBE with that of patients showing complex visual hallucinations involving people or faces, but without any disturbance of self-location or first-person perspective. Nine patients with OBE were investigated; the control group comprised 8 patients. Structural imaging data were available for normalization and co-registration in all patients. Normalization of each patient's lesion into the common MNI (Montreal Neurological Institute) reference space permitted simple, voxel-wise, algebraic comparisons to be made. Results: Although in the scanner all participants were lying on their backs facing upwards, analysis of perspective showed that half of the participants had the impression of looking down at the virtual human body below them, despite the cues about their actual body position (Down-group). The other participants had the impression of looking up at the virtual body above them (Up-group). Analysis of Q3 ("How strong was the feeling that the body you saw was you?") indicated stronger self-identification with the virtual body during synchronous stroking. RTs in the MBD task confirmed these subjective data (significant 3-way interaction between Perspective, Object and Stroking).
fMRI results showed eight cortical regions where the BOLD signal was significantly different during at least one of the conditions resulting from the combination of Object and Stroking, relative to baseline: right and left temporo-parietal junction, right EBA, left middle occipito-temporal gyrus, left postcentral gyrus, right medial parietal lobe, and bilateral medial occipital lobe (Fig 1). The activation patterns in the right and left temporo-parietal junction and right EBA reflected changes in self-location and perspective, as revealed by statistical analysis performed on the percentage of BOLD change with respect to baseline. Statistical lesion-overlap comparison (using nonparametric voxel-based lesion-symptom mapping) with respect to the control group revealed the right temporo-parietal junction, centered at the angular gyrus (Talairach coordinates x = 54, y = -52, z = 26; p < 0.05, FDR corrected). Conclusions: The present questionnaire and behavioural results show that, despite the noisy and constraining MR environment, our participants had predictable changes in self-location, self-identification, and first-person perspective when robotic tactile stroking was applied synchronously with the seen stroking. fMRI data in healthy participants and lesion data in patients with abnormal self-location and first-person perspective jointly revealed that the temporo-parietal cortex, especially in the right hemisphere, encodes these conscious experiences. We argue that temporo-parietal activity reflects the experience of the conscious "I" as embodied and localized within bodily space.
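The "simple, voxel-wise, algebraic comparisons" described for study 2 amount to overlaying the normalized binary lesion masks of each group and comparing them voxel by voxel. A minimal sketch, assuming binary lesion masks already normalized to MNI space and hypothetical file names; the actual analysis used MRIcron and nonparametric voxel-based lesion-symptom mapping rather than this bare subtraction.

```python
# Voxel-wise lesion-overlap comparison between the OBE and control groups.
# File names and the percentage-subtraction scheme are illustrative assumptions.
import numpy as np
import nibabel as nib

def group_overlap(mask_files):
    """Load binary lesion masks (normalized to MNI space) and return the
    voxel-wise percentage of patients with a lesion at each voxel."""
    masks = [nib.load(f).get_fdata() > 0 for f in mask_files]
    return 100.0 * np.mean(masks, axis=0)

obe_overlap = group_overlap([f"obe_{i:02d}_lesion_mni.nii.gz" for i in range(1, 10)])  # 9 OBE patients
ctl_overlap = group_overlap([f"ctl_{i:02d}_lesion_mni.nii.gz" for i in range(1, 9)])   # 8 controls

# Positive values mark voxels damaged more often in OBE patients than in controls.
subtraction_map = obe_overlap - ctl_overlap
```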
Abstract:
Multisensory experiences influence subsequent memory performance and brain responses. Studies have thus far concentrated on semantically congruent pairings, leaving unresolved the influence of stimulus pairing and memory sub-types. Here, we paired images with unique, meaningless sounds during a continuous recognition task to determine if purely episodic, single-trial multisensory experiences can incidentally impact subsequent visual object discrimination. Psychophysics and electrical neuroimaging analyses of visual evoked potentials (VEPs) compared responses to repeated images that either were or were not paired with a meaningless sound during initial encounters. Recognition accuracy was significantly impaired for images initially presented as multisensory pairs, and this impairment could not be explained in terms of differential attention or transfer of effects from encoding to retrieval. VEP modulations occurred at 100-130 ms and 270-310 ms and stemmed from topographic differences indicative of network configuration changes within the brain. Distributed source estimations localized the earlier effect to regions of the right posterior superior temporal gyrus (STG) and the later effect to regions of the middle temporal gyrus (MTG). Responses in these regions were stronger for images previously encountered as multisensory pairs. Only the later effect correlated with performance, such that greater MTG activity in response to repeated visual stimuli was linked with greater performance decrements. The present findings suggest that brain networks involved in this discrimination may critically depend on whether multisensory events facilitate or impair later visual memory performance. More generally, the data support models whereby effects of multisensory interactions persist to incidentally affect subsequent behavior as well as visual processing during its initial stages.
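The topographic differences reported here are the kind of measure that electrical neuroimaging derives from global field power (GFP) and global map dissimilarity. Below is a minimal sketch of those two quantities for group-average VEPs of shape (electrodes x time); the array names, dimensions and simulated data are illustrative assumptions, not the analyses actually run in the study.

```python
# Global field power (GFP) and global map dissimilarity (DISS) between two
# conditions' group-average VEPs; DISS indexes changes in topography and hence
# in the configuration of underlying generators.
import numpy as np

def gfp(erp):
    """Global field power: spatial standard deviation across electrodes at
    each time point. `erp` has shape (n_electrodes, n_times)."""
    return erp.std(axis=0)

def dissimilarity(erp_a, erp_b):
    """RMS difference between the two GFP-normalized, average-referenced
    topographies at each time point (0 = identical maps, 2 = inverted maps)."""
    norm_a = (erp_a - erp_a.mean(axis=0)) / gfp(erp_a)
    norm_b = (erp_b - erp_b.mean(axis=0)) / gfp(erp_b)
    return np.sqrt(np.mean((norm_a - norm_b) ** 2, axis=0))

# Example with simulated data: 64 electrodes, 500 samples (1 ms resolution).
rng = np.random.default_rng(0)
vep_multisensory_past = rng.standard_normal((64, 500))
vep_visual_only_past = rng.standard_normal((64, 500))
diss = dissimilarity(vep_multisensory_past, vep_visual_only_past)  # one value per time point
```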
Abstract:
Multisensory experiences enhance perception and facilitate memory retrieval processes, even when only unisensory information is available for accessing such memories. Using fMRI, we identified human brain regions involved in discriminating visual stimuli according to past multisensory vs. unisensory experiences. Subjects performed a completely orthogonal task, discriminating repeated from initial image presentations intermixed within a continuous recognition task. Half of the initial presentations were multisensory, and all repetitions were exclusively visual. Despite only single-trial exposures to initial image presentations, accuracy in indicating image repetitions was significantly improved by past auditory-visual multisensory experiences relative to images only encountered visually. Similarly, regions within the lateral occipital complex (areas typically associated with visual object recognition processes) were more active in response to visual stimuli with multisensory than unisensory pasts. Additional differential responses were observed in the anterior cingulate and frontal cortices. Multisensory experiences are registered by the brain even when of no immediate behavioral relevance and can be used to categorize memories. These data reveal the functional efficacy of multisensory processing.
Abstract:
We perceive our environment through multiple sensory channels. Nonetheless, research has traditionally focused on the investigation of sensory processing within single modalities. Investigating how our brain integrates multisensory information is therefore of crucial importance for understanding how organisms cope with a constantly changing, dynamic environment. During my thesis I investigated how multisensory events impact our perception and brain responses, both when auditory-visual stimuli are presented simultaneously and when multisensory events at one point in time impact later unisensory processing. In "Looming signals reveal synergistic principles of multisensory integration" (Cappe, Thelen et al., 2012) we investigated the neuronal substrates involved in motion detection in depth under multisensory vs. unisensory conditions. We showed that congruent auditory-visual looming (i.e. approaching) signals are preferentially integrated by the brain. Further, we showed that early effects under these conditions are relevant for behavior, effectively speeding up responses to these combined stimulus presentations. In "Electrical neuroimaging of memory discrimination based on single-trial multisensory learning" (Thelen et al., 2012), we investigated the behavioral impact of single encounters with meaningless auditory-visual object pairings upon subsequent visual object recognition. In addition to showing that these encounters lead to impaired recognition accuracy upon repeated visual presentations, we showed that the brain discriminates images as early as ~100 ms post-stimulus onset according to the initial encounter context. In "Single-trial multisensory memories affect later visual and auditory object recognition" (Thelen et al., in review) we addressed whether auditory object recognition is affected by single-trial multisensory memories, and whether recognition accuracy for sounds is affected by the initial encounter context in the same way as for visual objects. We found that this is in fact the case. Based on our behavioral findings, we propose that a common underlying brain network is differentially involved during the encoding and retrieval of images and sounds.
Abstract:
Recent findings suggest that the visuo-spatial sketchpad (VSSP) may be divided into two sub-components processing dynamic or static visual information. This model may be useful for clarifying the conflicting data concerning the functioning of the VSSP in schizophrenia. The present study examined patients with schizophrenia and matched controls in a new working memory paradigm involving dynamic (the Ball Flight Task - BFT) or static (the Static Pattern Task - SPT) visual stimuli. In the BFT, the responses of the patients were apparently based on the retention of the last set of segments of the perceived trajectory, whereas control subjects relied on a more global strategy. We assume that the patients' performance results from a reduced capacity for chunking visual information, since they relied mainly on the retention of the last set of segments. This assumption is confirmed by the patients' poor performance in the static task (SPT), which requires combining stimulus components into object representations. We propose that the static/dynamic distinction may help us to understand the VSSP deficits in schizophrenia. This distinction also raises questions about the hypothesis that visuo-spatial working memory can simply be dissociated into visual and spatial sub-components.
Abstract:
Background: In the course of evolution, butterflies and moths developed two different reproductive behaviors. Whereas butterflies rely on visual stimuli for mate location, moths use the "female calling plus male seduction" system, in which females release long-range sex pheromones to attract conspecific males. There are a few exceptions to this pattern, but in all known cases female moths possess sex pheromone glands, which apparently have been lost in female butterflies. In the day-flying moth family Castniidae ("butterfly-moths"), which includes some important crop pests, no pheromones have been found so far. Methodology/Principal Findings: Using a multidisciplinary approach we described the steps involved in the courtship of P. archon, showing that visual cues are the only ones used for mate location; showed that the morphology and fine structure of the antennae of this moth are strikingly similar to those of butterflies, with male sensilla apparently not suited to detect female-released long-range pheromones; showed that its females lack pheromone-producing glands; and identified three compounds as putative male sex pheromone (MSP) components of P. archon, released from the proximal halves of the male forewings and hindwings. Conclusions/Significance: This study provides evidence, for the first time in Lepidoptera, that females of a moth do not produce any pheromone to attract males, and that mate location is achieved only visually by patrolling males, which may release a pheromone at short distance, putatively a mixture of (Z,E)-farnesal, (E,E)-farnesal, and (E,Z)-2,13-octadecadienol. The outlined behavior, long thought to be unique to butterflies, is likely to be widespread in Castniidae, implying a novel, unparalleled butterfly-like reproductive behavior in moths. This also has practical implications in applied entomology, since it signifies that the monitoring/control of castniid pests should not be based on the use of female-produced pheromones, as is usually done for many moths.
Abstract:
Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention, illustrating the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition, revealing a top-down attentional component. In conclusion, our observations reveal a close link between covert attention and modulations of eye vergence during eye fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders.
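For readers unfamiliar with the measure, the vergence angle is simply the angle between the two eyes' lines of sight, which binocular eye-tracking data let you compute sample by sample. A minimal sketch under simplified geometry (screen-plane gaze points, a fixed viewing distance, and an assumed interpupillary distance; none of these details are taken from the study):

```python
# Vergence angle from binocular gaze samples, under a simplified flat-screen geometry.
import numpy as np

def vergence_angle(gaze_left, gaze_right, ipd_mm=63.0, distance_mm=600.0):
    """Angle (degrees) between the two eyes' lines of sight.

    gaze_left, gaze_right: (x, y) gaze points on the screen in mm for each eye,
    with the origin on the screen straight ahead of the midpoint between the eyes.
    ipd_mm: interpupillary distance; distance_mm: eye-to-screen distance.
    """
    eye_l = np.array([-ipd_mm / 2.0, 0.0, 0.0])
    eye_r = np.array([+ipd_mm / 2.0, 0.0, 0.0])
    point_l = np.array([gaze_left[0], gaze_left[1], distance_mm])
    point_r = np.array([gaze_right[0], gaze_right[1], distance_mm])
    v_l = (point_l - eye_l) / np.linalg.norm(point_l - eye_l)
    v_r = (point_r - eye_r) / np.linalg.norm(point_r - eye_r)
    return np.degrees(np.arccos(np.clip(np.dot(v_l, v_r), -1.0, 1.0)))

# Both eyes fixating the same screen point gives the baseline vergence angle;
# gaze points crossed nasally (left eye's point shifted right, right eye's left)
# indicate stronger convergence and yield a larger angle.
print(vergence_angle((0.0, 0.0), (0.0, 0.0)))   # baseline fixation
print(vergence_angle((2.0, 0.0), (-2.0, 0.0)))  # slightly more converged
```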
Abstract:
Vision affords us the ability to consciously see and to use this information in our behavior. While research has produced a detailed account of the function of the visual system, the neural processes that underlie conscious vision are still debated. One of the aims of the present thesis was to examine the time-course of the neuroelectrical processes that correlate with conscious vision. The second aim was to study the neural basis of unconscious vision, that is, situations where a stimulus that is not consciously perceived nevertheless influences behavior. According to current prevalent models of conscious vision, the activation of visual cortical areas is not, as such, sufficient for consciousness to emerge, although it might be sufficient for unconscious vision. Conscious vision is assumed to require reciprocal communication between cortical areas, but views differ substantially on the extent of this recurrent communication. Visual consciousness has been proposed to emerge from recurrent neural interactions within the visual system, while other models claim that more widespread cortical activation is needed for consciousness. Studies I-III compared models of conscious vision by studying event-related potentials (ERPs). ERPs represent the brain's average electrical response to stimulation. The results support the model that associates conscious vision with activity localized in the ventral visual cortex. The timing of this activity corresponds to an intermediate stage of visual processing. Earlier stages of visual processing may influence what becomes conscious, although these processes do not directly enable visual consciousness. Late processing stages, when more widespread cortical areas are activated, reflect the access to and manipulation of the contents of consciousness. Studies IV and V concentrated on unconscious vision. Using transcranial magnetic stimulation (TMS), we show that when early visual cortical processing is disturbed so that subjects fail to consciously perceive visual stimuli, they may nevertheless guess, above chance level, the location where the visual stimuli were presented. However, the results also suggest that in a similar situation, early visual cortex is necessary for both conscious and unconscious perception of chromatic information (i.e. color). Chromatic information that remains unconscious may influence behavioral responses when activity in the visual cortex is not disturbed by TMS. Our results support the view that early stimulus-driven (feedforward) activation may be sufficient for unconscious processing. In conclusion, the results of this thesis support the view that conscious vision is enabled by a series of processing stages. The processes that most closely correlate with conscious vision take place in the ventral visual cortex ~200 ms after stimulus presentation, although preceding time-periods and contributions from other cortical areas such as the parietal cortex are also indispensable. Unconscious vision relies on intact early visual activation, although the location of a visual stimulus may be unconsciously resolved even when activity in the early visual cortex is interfered with.
Abstract:
When two stimuli are presented simultaneously to an observer, the perceived temporal order does not always correspond to the actual one. In three experiments we examined how the location and spatial predictability of visual stimuli modulate the perception of temporal order. Thirty-two participants had to report the temporal order of appearance of two visual stimuli. In Experiment 1, both stimuli were presented at the same eccentricity and no perceptual asynchrony between them was found. In Experiment 2, one stimulus was presented close to the fixation point and the other, peripheral, stimulus was presented in separate blocks at two eccentricities (4.8° and 9.6°). We found that the peripheral stimulus was perceived as delayed relative to the central one, with no significant difference between the delays obtained at the two eccentricities. In Experiment 3, using three eccentricities (2.5°, 7.3° and 12.1°) for the presentation of the peripheral stimulus, we compared a condition in which its location was highly predictable with two other conditions in which its location was progressively less predictable. Here, the perception of the peripheral stimulus was also delayed relative to the central one, with this delay depending on both the eccentricity and the predictability of the stimulus. We argue that attentional deployment, manipulated by the spatial predictability of the stimulus, seems to play an important role in the temporal order perception of visual stimuli. Yet, under any condition of spatial predictability, basic sensory and attentional processes are unavoidably entangled, and both factors must jointly contribute to the perception of temporal order.
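The perceptual delays reported in this kind of temporal-order judgment (TOJ) experiment are typically read off a psychometric function as the point of subjective simultaneity (PSS). A minimal sketch of that analysis, with invented SOA values and response proportions used purely for illustration, not data from the study:

```python
# Fit a cumulative Gaussian to TOJ data and extract the PSS and JND.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """Probability of reporting 'peripheral stimulus first' at a given SOA
    (positive SOA = peripheral stimulus physically leads)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-90, -60, -30, 0, 30, 60, 90])  # ms, peripheral minus central onset
p_peripheral_first = np.array([0.05, 0.10, 0.25, 0.40, 0.65, 0.85, 0.95])  # illustrative

(pss, sigma), _ = curve_fit(psychometric, soas, p_peripheral_first, p0=(0.0, 30.0))

# A positive PSS means the peripheral stimulus must be shown earlier than the
# central one to be perceived as simultaneous, i.e. it is perceptually delayed.
print(f"PSS = {pss:.1f} ms, JND (sigma) = {sigma:.1f} ms")
```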
Abstract:
In a serial feature-positive conditional discrimination procedure, the properties of a target stimulus A are defined by the presence or absence of a feature stimulus X preceding it. In the present experiment, composite features preceded targets associated with two operant responses of different topography (right and left bar pressing); matching and non-matching-to-sample arrangements were also used. Five water-deprived Wistar rats were trained on 6 different trial types: X-R→Ar and X-L→Al, in which X and A were same-modality visual stimuli and reinforcement was contingent on pressing whichever bar, right (r) or left (l), had the light on during the feature (matching-to-sample); Y-R→Bl and Y-L→Br, in which Y and B were same-modality auditory stimuli and reinforcement was contingent on pressing the bar that had the light off during the feature (non-matching-to-sample); and A- and B- alone. After 100 training sessions, the animals were submitted to transfer tests with the targets used plus a new one (an auditory click). The average percentage of stimuli with a response was measured. Acquisition was complete only for Y-L→Br+; however, complex associations were established over the course of training. Transfer was not complete during the tests, since concurrent effects of extinction and response generalization also occurred. The results suggest the use of both simple conditioning and configurational strategies, favoring the most recent theories of conditional discrimination learning. The implications of using complex arrangements for discussing these theories are considered.