217 results for visual discrimination


Relevance:

60.00%

Publisher:

Abstract:

This study was designed to assess sex-related differences in the selection of an appropriate strategy when facing novelty. A simple visuo-spatial task was used to investigate exploratory behavior as a specific response to novelty. The exploration task was followed by a visual discrimination task, and the responses were analyzed using signal detection theory. During exploration, women selected a local searching strategy that reduced the metric distance between what is already known and what is unknown, whereas men adopted a global strategy based on an approximately uniform distribution of choices. Women's exploratory behavior suggests the notion of a secure base warranting a sense of safety, while men's behavior does not appear to be influenced by risk. This sex-related difference was interpreted as a difference in beliefs concerning the likelihood of uncertain events, which influences risk evaluation. Keywords: exploration, spontaneous strategies, sex differences, decision-making.

Relevance:

40.00%

Publisher:

Abstract:

Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization, and whether effects are equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

Relevance:

30.00%

Publisher:

Abstract:

Rats were treated postnatally (PND 5-16) with BSO (l-buthionine-(S,R)-sulfoximine) in an animal model of schizophrenia based on transient glutathione deficit. The BSO-treated rats were impaired in patrolling a maze or a homing table when adult, yet demonstrated preserved escape learning, place discrimination and reversal in a water maze task [37]. In the present work, BSO rats' performance in the water maze was assessed in conditions controlling for the available visual cues. First, in a completely curtained environment with two salient controlled cues, BSO rats showed poor accuracy compared to control rats. Second, pre-trained BSO rats were impaired in reaching the familiar spatial position when curtains partially occluded different portions of the room environment in successive sessions. The apparently preserved place learning in a classical water maze task thus appears to require the stability and the richness of visual landmarks in the surrounding environment. In other words, the accuracy of BSO rats in place and reversal learning is impaired in a minimal cue condition or when the visual panorama changes between trials. However, if the panorama remains rich and stable between trials, BSO rats are equally efficient in reaching a familiar position or in learning a new one. This suggests that the accurate performance of BSO rats in the water maze does not satisfy all the criteria for cognitive-map-based navigation relying on the integration of polymodal cues. It supports the general hypothesis of a binding deficit in BSO rats.

Relevance:

30.00%

Publisher:

Abstract:

Detection and discrimination of visuospatial input involve at least extracting, selecting and encoding relevant information, and decision-making processes allowing the selection of a response. These two operations are altered, respectively, by attentional mechanisms that change discrimination capacities, and by beliefs concerning the likelihood of uncertain events. Information processing is tuned by the attentional level, which acts like a filter on perception, while decision-making processes are weighted by the subjective probability of risk. In addition, it has been shown that anxiety can affect the detection of unexpected events through the modification of the level of arousal. Consequently, the purpose of this study was to determine whether and how decision-making and brain dynamics are affected by anxiety. To investigate these questions, the performance of women with either a high (n = 12) or a low (n = 12) STAI-T score (State-Trait Anxiety Inventory; Spielberger, 1983) was examined in a visuospatial decision-making task in which subjects had to distinguish a target visual pattern from non-target patterns. The target pattern was a schematic image of furniture arranged in such a way as to give the impression of a living room. Non-target patterns were created by either compressing or dilating the distances between objects. Target and non-target patterns were always presented in the same configuration. Preliminary behavioral results show no group difference in reaction time. In addition, visuo-spatial abilities were analyzed through signal detection theory, which quantifies perceptual decisions in the presence of uncertainty (Green and Swets, 1966). This theory treats detection of a stimulus as a decision-making process determined by the nature of the stimulus and by cognitive factors. Surprisingly, no difference was observed in the d' index (the distance between the means of the signal and noise distributions) or the c index (the response criterion, related to the likelihood ratio).
Comparison of event-related potentials (ERPs) reveals that brain dynamics differ according to anxiety: component latencies differ, with a delay in anxious subjects over posterior electrode sites. However, these differences are compensated for during later components by shorter latencies in anxious subjects compared to non-anxious ones. These inverted effects suggest that the absence of a difference in reaction time relies on a compensation of attentional level that tunes cortical activation in anxious subjects, who must work harder to maintain performance.
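The d' and c indexes mentioned above come from signal detection theory and can be computed from raw trial counts. The sketch below is illustrative, not the study's own analysis; the counts and the log-linear (+0.5) correction are assumptions:

```python
# Minimal sketch of the signal detection indexes d' (sensitivity) and
# c (response criterion) for a yes/no discrimination task. The trial
# counts below are invented purely for illustration.
from statistics import NormalDist

def sdt_indexes(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from raw trial counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)             # distance between distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

d, c = sdt_indexes(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

With symmetric hit and false-alarm counts as here, c comes out at zero (no response bias), while d' captures how well the two distributions are separated.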

Relevance:

30.00%

Publisher:

Abstract:

Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.

Relevance:

30.00%

Publisher:

Abstract:

In the Morris water maze (MWM) task, proprioceptive information is likely to have poor accuracy due to movement inertia. Hence, in this condition, dynamic visual information providing information on linear and angular acceleration would play a critical role in spatial navigation. To investigate this assumption we compared rats' spatial performance in the MWM and in the homing hole board (HB) task using 1.5 Hz stroboscopic illumination. In the MWM, rats trained under the stroboscopic condition needed more time than those trained under continuous light to reach the hidden platform. They also showed poor accuracy during the probe trial. In the HB task, in contrast, place learning remained unaffected by the stroboscopic light condition. The deficit in the MWM was thus complete, affecting both escape latency and discrimination of the reinforced area, and was task specific. This dissociation confirms that dynamic visual information is crucial to spatial navigation in the MWM, whereas spatial navigation on solid ground is mediated by multisensory integration and is thus less dependent on visual information.

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging studies analyzing neurophysiological signals are typically based on comparing averages of peri-stimulus epochs across experimental conditions. This approach can however be problematic in the case of high-level cognitive tasks, where response variability across trials is expected to be high, and in cases where subjects cannot be considered part of a group. The main goal of this thesis has been to address this issue by developing a novel approach for analyzing electroencephalography (EEG) responses at the single-trial level. This approach takes advantage of the spatial distribution of the electric field on the scalp (topography) and exploits repetitions across trials for quantifying the degree of discrimination between experimental conditions through a classification scheme. In the first part of this thesis, I developed and validated this new method (Tzovara et al., 2012a,b). Its general applicability was demonstrated with three separate datasets, two in the visual modality and one in the auditory. This development then allowed me to target two new lines of research, one in basic and one in clinical neuroscience, which represent the second and third parts of this thesis respectively. For the second part of this thesis (Tzovara et al., 2012c), I employed the developed method for assessing the timing of exploratory decision-making. Using single-trial topographic EEG activity during presentation of a choice's payoff, I could predict the subjects' subsequent decisions. This prediction was due to a topographic difference which appeared on average at ~516 ms after the presentation of the payoff and was subject-specific. These results exploit for the first time the temporal correlates of individual subjects' decisions and additionally show that the underlying neural generators start differentiating their responses as early as ~880 ms before the button press.
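The single-trial topographic classification scheme described above can be illustrated with a toy sketch: simulated trials are noisy electrode-space topographies, and each test trial is assigned to the condition whose mean training topography it correlates with best. Electrode count, trial counts, and noise level are invented for illustration; the actual method (Tzovara et al., 2012a,b) is considerably more sophisticated.

```python
# Toy sketch of single-trial classification from EEG topographies.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_train, n_test = 64, 50, 20

# Two hypothetical "ground-truth" condition topographies.
template_a = rng.standard_normal(n_electrodes)
template_b = rng.standard_normal(n_electrodes)

def simulate(template, n_trials):
    # Single trials = template + electrode-wise Gaussian noise.
    return template + 0.5 * rng.standard_normal((n_trials, n_electrodes))

train_a, train_b = simulate(template_a, n_train), simulate(template_b, n_train)
test_a, test_b = simulate(template_a, n_test), simulate(template_b, n_test)

# "Training" step: one mean topography per experimental condition.
mean_a, mean_b = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(trial):
    # Spatial correlation of the single-trial topography with each
    # condition's mean topography; pick the better-matching condition.
    r_a = np.corrcoef(trial, mean_a)[0, 1]
    r_b = np.corrcoef(trial, mean_b)[0, 1]
    return "a" if r_a > r_b else "b"

predictions = [classify(t) for t in test_a] + [classify(t) for t in test_b]
truth = ["a"] * n_test + ["b"] * n_test
accuracy = np.mean([p == t for p, t in zip(predictions, truth)])
```

Because correlation compares the spatial pattern rather than overall signal strength, this kind of classifier is insensitive to global amplitude scaling of a trial, which is one motivation for topography-based analyses.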
Finally, in the third part of this project, I focused on a clinical study with the goal of assessing the degree of intact neural function in comatose patients. Auditory EEG responses were assessed through a classical mismatch negativity paradigm during the very early phase of coma, which is currently under-investigated. By taking advantage of the decoding method developed in the first part of the thesis, I could quantify the degree of auditory discrimination at the single-patient level (Tzovara et al., in press). Our results showed for the first time that even patients who do not survive the coma can discriminate sounds at the neural level during the first hours after coma onset. Importantly, an improvement in auditory discrimination during the first 48 hours of coma was observed only in patients who subsequently awoke, and was thus predictive of awakening and survival, with 100% positive predictive value.
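The reported 100% positive predictive value follows from the standard definition PPV = TP / (TP + FP); a quick illustration with made-up counts (the study's actual patient numbers are not given here):

```python
# Positive predictive value: of the patients predicted to awaken,
# what fraction actually awoke? Counts are invented for illustration.
def ppv(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

# No false positives -> PPV of 1.0, i.e. the 100% reported above.
print(ppv(true_positives=10, false_positives=0))  # → 1.0
```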

Relevance:

30.00%

Publisher:

Abstract:

The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. 
Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and on evidence from neurophysiological and neuroimaging studies, we discuss the neural mechanisms likely to underlie this effect.

Relevance:

30.00%

Publisher:

Abstract:

Multisensory experiences influence subsequent memory performance and brain responses. Studies have thus far concentrated on semantically congruent pairings, leaving unresolved the influence of stimulus pairing and memory sub-types. Here, we paired images with unique, meaningless sounds during a continuous recognition task to determine if purely episodic, single-trial multisensory experiences can incidentally impact subsequent visual object discrimination. Psychophysics and electrical neuroimaging analyses of visual evoked potentials (VEPs) compared responses to repeated images either paired or not with a meaningless sound during initial encounters. Recognition accuracy was significantly impaired for images initially presented as multisensory pairs and could not be explained in terms of differential attention or transfer of effects from encoding to retrieval. VEP modulations occurred at 100-130 ms and 270-310 ms and stemmed from topographic differences indicative of network configuration changes within the brain. Distributed source estimations localized the earlier effect to regions of the right posterior superior temporal gyrus (STG) and the later effect to regions of the middle temporal gyrus (MTG). Responses in these regions were stronger for images previously encountered as multisensory pairs. Only the later effect correlated with performance, such that greater MTG activity in response to repeated visual stimuli was linked with greater performance decrements. The present findings suggest that brain networks involved in this discrimination may critically depend on whether multisensory events facilitate or impair later visual memory performance. More generally, the data support models whereby effects of multisensory interactions persist to incidentally affect subsequent behavior as well as visual processing during its initial stages.

Relevance:

30.00%

Publisher:

Abstract:

Evidence of multisensory interactions within low-level cortices and at early post-stimulus latencies has prompted a paradigm shift in conceptualizations of sensory organization. However, the mechanisms of these interactions and their link to behavior remain largely unknown. One behaviorally salient stimulus is a rapidly approaching (looming) object, which can indicate potential threats. Based on findings from humans and nonhuman primates suggesting there to be selective multisensory (auditory-visual) integration of looming signals, we tested whether looming sounds would selectively modulate the excitability of visual cortex. We combined transcranial magnetic stimulation (TMS) over the occipital pole and psychophysics for "neurometric" and psychometric assays of changes in low-level visual cortex excitability (i.e., phosphene induction) and perception, respectively. Across three experiments we show that structured looming sounds considerably enhance visual cortex excitability relative to other sound categories and white-noise controls. The time course of this effect showed that modulation of visual cortex excitability started to differ between looming and stationary sounds for sound portions of very short duration (80 ms) that were significantly below (by 35 ms) the perceptual discrimination threshold. Visual perceptions are thus rapidly and efficiently boosted by sounds through early, preperceptual and stimulus-selective modulation of neuronal excitability within low-level visual cortex.

Relevance:

30.00%

Publisher:

Abstract:

Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

Relevance:

30.00%

Publisher:

Abstract:

Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and the precise kinds of actions recruiting the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects that were further subdivided between sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and those that are not linked to such a context and do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning at approximately 300 ms post-stimulus, responses to such context-related sounds significantly differed from those to context-free sounds in both the strength and the topography of the electric field. This latency is >200 ms after general categorical discrimination. Additionally, such topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Patients with schizophrenia show deficits in visuospatial working memory and visual pursuit processes. It is currently unclear, however, whether both impairments are related to a common neuropathological origin. The purpose of the present study was therefore to examine the possible relations between the encoding and the discrimination of dynamic visuospatial stimuli in schizophrenia. METHOD: Sixteen outpatients with schizophrenia and 16 control subjects were asked to encode complex disc displacements presented on a screen. After a delay, participants had to identify the previously presented disc trajectory from a choice of six static linear paths, among which were five incorrect paths. The precision of visual pursuit eye movements during the initial presentation of the dynamic stimulus was assessed. The fixations and scanning time in definite regions of the six paths presented during the discrimination phase were investigated. RESULTS: In comparison with controls, patients showed poorer task performance, reduced pursuit accuracy during incorrect trials and less time scanning the correct stimulus or the incorrect paths approximating its global structure. Patients also spent less time scanning the leftmost portion of the correct path even when making a correct choice. The accuracy of visual pursuit and head movements, however, was not correlated with task performance. CONCLUSIONS: The present study provides direct support for the hypothesis that active integration of visuospatial information within working memory is deficient in schizophrenia. In contrast, a general impairment of oculomotor mechanisms involved in smooth pursuit did not appear to be directly related to lower visuospatial working memory performance in schizophrenia.

Relevance:

30.00%

Publisher:

Abstract:

Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.