66 results for experimental visual perception
Abstract:
Normal visual perception requires differentiating foreground from background objects. Differences in physical attributes sometimes determine this relationship. Often such differences must instead be inferred, as when two objects or their parts have the same luminance. Modal completion refers to such perceptual "filling-in" of object borders that is accompanied by concurrent brightness enhancement, the borders in turn being termed illusory contours (ICs). Amodal completion is filling-in without concurrent brightness enhancement. There are ongoing controversies regarding whether both completion processes use a common neural mechanism and whether perceptual filling-in is a bottom-up, feedforward process initiating at the lowest levels of the cortical visual pathway or instead commences in higher-tier regions. We previously examined modal completion (Murray et al., 2002) and provided evidence that the earliest modal IC sensitivity occurs within higher-tier object recognition areas of the lateral occipital complex (LOC), further proposing that previous observations of IC sensitivity in lower-tier regions likely reflect feedback modulation from the LOC. The present study tested these proposals, examining the commonality between modal and amodal completion mechanisms with high-density electrical mapping, spatiotemporal topographic analyses, and the local autoregressive average distributed linear inverse source estimation. We demonstrate a common initial mechanism for both types of completion (at ~140 msec) that manifested as a modulation in response strength within higher-tier visual areas, including the LOC and parietal structures. Differential mechanisms were evident only at a subsequent time period (~240 msec), with amodal completion relying on continued strong responses in these structures.
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated which factors during the initial multisensory experience are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of the initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices to enhance later discrimination performance by pairing visual objects with either tones or vibrations; this also allowed us to assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual memory traces, yet unaffected by somatosensory-visual ones. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible during the processing of incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
PURPOSE: We characterized the pupil responses that reflect rod, cone, and melanopsin function in a genetically homogeneous cohort of patients with autosomal dominant retinitis pigmentosa (adRP). METHODS: Nine patients with Gly56Arg mutation of the NR2E3 gene and 12 control subjects were studied. Pupil and subjective visual responses to red and blue light flashes over a 7 log-unit range of intensities were recorded under dark and light adaptation. The pupil responses were plotted against stimulus intensity to obtain red-light and blue-light response curves. RESULTS: In the dark-adapted blue-light stimulus condition, patients showed significantly higher threshold intensities for visual perception and for a pupil response compared to controls (P = 0.02 and P = 0.006, respectively). The rod-dependent, blue-light pupil responses decreased with disease progression. In contrast, the cone-dependent pupil responses (light-adapted red-light stimulus condition) did not differ between patients and controls. The difference in the retinal sensitivity to blue and red stimuli was the most sensitive parameter to detect photoreceptor dysfunction. Unexpectedly, the melanopsin-mediated pupil response was decreased in patients (P = 0.02). CONCLUSIONS: Pupil responses of patients with NR2E3-associated adRP demonstrated reduced retinal sensitivity to dim blue light under dark adaptation, presumably reflecting decreased rod function. Rod-dependent pupil responses were quantifiable in all patients, including those with non-recordable scotopic electroretinogram, and correlated with the extent of clinical disease. Thus, the chromatic pupil light reflex can be used to monitor photoreceptor degeneration over a larger range of disease progression compared to standard electrophysiology.
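The plotting of pupil responses against stimulus intensity to obtain response curves can be summarized, in rough terms, by fitting a saturating function and reading off a threshold-like parameter. The sketch below does this with a logistic fit on made-up constriction amplitudes; the data values, function form, and parameter names are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_i, amplitude, slope, midpoint):
    """Saturating response as a function of log stimulus intensity."""
    return amplitude / (1.0 + np.exp(-slope * (log_i - midpoint)))

# Hypothetical blue-light data: log10 intensity vs. relative pupil constriction.
log_intensity = np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0])
constriction = np.array([0.02, 0.05, 0.12, 0.28, 0.40, 0.46, 0.48])

params, _ = curve_fit(logistic, log_intensity, constriction, p0=[0.5, 1.0, -1.0])
amplitude, slope, midpoint = params
print(f"Half-maximal constriction at log intensity {midpoint:.2f}")
```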
Abstract:
Chicken pox is a very common infectious disease in children. Its corneal involvement is less serious than with measles, which may lead to blindness in numerous developing countries. However, occasional cases do occur. We report the case of a 59-year-old male patient whose left cornea was involved during a chicken pox infection at the age of 7. More recently, vision was normal at 20/20 in the right eye and reduced to visual perception in the affected left eye. Corneal sensitivity was maintained in the left eye, which, however, exhibited a central epithelial defect. A central round opacity of the left corneal stroma was believed to be the scar resulting from a previous disciform keratitis. The left central cornea was thinned, and there was neither anterior chamber flare nor new corneal vessels. This corneal condition required a corneal allograft, performed quickly because of the potential risk of perforation. Histopathological study of the corneal button showed central corneal thinning with an increase in epithelial thickness. The corneal stroma was disorganized, with irregular collagen bundles; no inflammatory cells could be observed, however. All the histopathological changes observed were those of a corneal scar.
Abstract:
This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by the unisensory components of these events when they are presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (~100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes operating during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.
Abstract:
Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self and has implicated widely distributed brain networks. Yet recent evidence suggests that low-level mechanisms such as multisensory and sensorimotor integration may play a fundamental role in self-related processing. In the present study we used visuotactile multisensory conflict, robotics, virtual reality, and fMRI to study such low-level mechanisms by experimentally inducing changes in self-location. Participants saw a video of a person's back (body) or an empty room (no-body) being stroked while an MR-compatible robotic device stroked their back. The latter tactile input was synchronous or asynchronous with respect to the seen stroking. Self-location was estimated behaviorally, confirming previous data that self-location differed only between the two body conditions. fMRI results showed bilateral activation of the temporo-parietal cortex, with a significantly higher BOLD signal increase in the synchronous/body condition relative to the other conditions. Sensorimotor cortex and the extrastriate body area were also activated. We argue that temporo-parietal activity reflects the experience of the conscious 'I' as embodied and localized within bodily space, compatible with clinical data from neurological patients with out-of-body experiences.
Abstract:
Multisensory stimuli can improve performance, speeding reaction times (RTs) on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions based on probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols; existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions based on probability summation were observed and must therefore follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that, in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation was observed, and even behavioral costs emerged, with auditory intensities 30-40 dB above threshold. The present study demonstrates the feasibility and suitability of behaving monkeys for investigating links between psychophysical and neurophysiologic instantiations of multisensory interactions.
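Testing whether an RSE exceeds probability summation is commonly done by comparing the cumulative RT distribution of the multisensory condition with the sum of the two unisensory distributions (the race-model bound). The sketch below illustrates that comparison on simulated reaction times; the data, time grid, and function names are illustrative and do not reproduce the study's procedure.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Compare the multisensory RT CDF with the probability-summation
    (race model) bound: P(RT_AV <= t) vs. P(RT_A <= t) + P(RT_V <= t)."""
    def ecdf(rts, t):
        return np.mean(rts[:, None] <= t[None, :], axis=0)
    cdf_av = ecdf(rt_av, t_grid)
    bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0, 1)
    return cdf_av - bound  # positive values indicate violation of the bound

# Hypothetical reaction times in milliseconds.
rng = np.random.default_rng(0)
rt_a = rng.normal(350, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(290, 35, 200)
t = np.linspace(200, 500, 61)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
print("Maximum violation of the race-model bound:", violation.max())
```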
Abstract:
PURPOSE: To investigate the effect of intraocular straylight (IOS) induced by white opacity filters (WOF) on threshold measurements for stimuli employed in three perimeters: standard automated perimetry (SAP), pulsar perimetry (PP) and the Moorfields motion displacement test (MDT). METHODS: Four healthy young (24-28 years old) observers were tested six times with each perimeter, each time with one of five different WOFs and once without, inducing various levels of IOS (from 10% to 200%). The increase in IOS was measured with a straylight meter. The change in sensitivity from baseline was normalized, allowing comparison of standardized (z) scores (change divided by the SD of normative values) for each instrument. RESULTS: SAP and PP thresholds were significantly affected (P < 0.001) by moderate to large increases in IOS (50%-200%). With WOF 5, the drop in sensitivity from baseline was approximately 5 dB in both SAP and PP, which represents a clinically significant loss; in contrast, the change in motion displacement (MD) threshold with the MDT was on average 1 minute of arc, which is unlikely to indicate a clinically significant loss. CONCLUSIONS: The Moorfields MDT is more robust to the effects of additional straylight than SAP or PP.
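The standardized (z) scores described in the methods amount to dividing each instrument's change from baseline by the standard deviation of its normative values, so that losses measured in dB and in minutes of arc land on a common scale. A minimal sketch with entirely made-up thresholds and normative SDs:

```python
# Entirely illustrative numbers: baseline thresholds, thresholds with a white
# opacity filter in place, and the SD of normative values for each instrument.
baseline = {"SAP": 30.0, "PP": 28.0, "MDT": 1.0}   # dB, dB, minutes of arc
with_wof = {"SAP": 25.0, "PP": 23.0, "MDT": 2.0}
normative_sd = {"SAP": 2.0, "PP": 2.2, "MDT": 0.8}

# Standardized change: (value with filter - baseline) / normative SD.
z_scores = {k: (with_wof[k] - baseline[k]) / normative_sd[k] for k in baseline}
print(z_scores)
```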
Abstract:
Multisensory experiences enhance perceptions and facilitate memory retrieval processes, even when only unisensory information is available for accessing such memories. Using fMRI, we identified human brain regions involved in discriminating visual stimuli according to past multisensory vs. unisensory experiences. Subjects performed a completely orthogonal task, discriminating repeated from initial image presentations intermixed within a continuous recognition task. Half of the initial presentations were multisensory, and all repetitions were exclusively visual. Despite only single-trial exposures to initial image presentations, accuracy in indicating image repetitions was significantly improved by past auditory-visual multisensory experiences relative to images only encountered visually. Similarly, regions within the lateral occipital complex, areas typically associated with visual object recognition processes, were more active in response to visual stimuli with multisensory than with unisensory pasts. Additional differential responses were observed in the anterior cingulate and frontal cortices. Multisensory experiences are thus registered by the brain even when of no immediate behavioral relevance and can be used to categorize memories. These data reveal the functional efficacy of multisensory processing.
Abstract:
This study describes a task that combines random searching with goal directed navigation. The testing was conducted on a circular elevated open field (80 cm in diameter), with an unmarked target area (20 cm in diameter) in the center of 1 of the 4 quadrants. Whenever the rat entered the target area, the computerized tracking system released a pellet to a random point on the open field. Rats were able to learn the task under light and in total darkness, and on a stable or a rotating arena. Visual information was important in light, but idiothetic information became crucial in darkness. Learning of a new position was quicker under light than in total darkness on a rotating arena. The place preference task should make it possible to study place cells (PCs) when the rats use an allothetic (room frame) or idiothetic (arena frame) representation of space and to compare the behavioral response with the PCs' activity.
Abstract:
The contribution of visual and nonvisual mechanisms to the spatial behavior of rats in the Morris water maze was studied with a computerized infrared tracking system, which switched off the room lights when the subject entered the inner circular area of the pool containing the escape platform. Naive rats trained under light-dark conditions (L-D) found the escape platform more slowly than rats trained in permanent light (L). After the groups were swapped, the L-pretrained rats found the same target faster under L-D conditions and eventually approached the latencies attained during L navigation. Performance of L-D-trained rats deteriorated in permanent darkness (D) but improved with continued D training. Thus L-D navigation improves gradually through procedural learning (extrapolation of the start-target azimuth into the zero-visibility zone) but remains impaired by the lack of immediate visual feedback rather than by the absence of snapshot memory of the target view.
Abstract:
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between the neural circuitry mediating the different tasks; more effective task preparation (and, by extension, smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or the spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched than when it repeated. This was the case irrespective of whether the pre-trial cue informed participants only of the upcoming task but not the sensory modality (Experiment 1) or of both the upcoming task and the sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials but not when it switched. The collective evidence supports the independence of control processes mediating task switching and modality switching, as well as the hypothesis that switch costs reflect competitive interference between neural circuits.
Abstract:
This study investigated the spatial, spectral, temporal and functional properties of functional brain connections involved in the concurrent execution of unrelated visual perception and working memory tasks. Electroencephalography data were analysed using a novel data-driven approach assessing source coherence at the whole-brain level. Three connections in the beta-band (18-24 Hz) and one in the gamma-band (30-40 Hz) were modulated by dual-task performance. Beta-coherence increased within two dorsofrontal-occipital connections in dual-task conditions compared to the single-task condition, with the highest coherence seen during low working memory load trials. In contrast, beta-coherence in a prefrontal-occipital functional connection and gamma-coherence in an inferior frontal-occipitoparietal connection were not affected by the addition of the second task and only showed elevated coherence under high working memory load. Analysis of coherence as a function of time suggested that the dorsofrontal-occipital beta-connections were relevant to working memory maintenance, while the prefrontal-occipital beta-connection and the inferior frontal-occipitoparietal gamma-connection were involved in top-down control of concurrent visual processing. This interpretation is supported by the fact that the increase in gamma-connection coherence from low to high working memory load was negatively correlated with reaction time on the perception task (i.e., associated with faster responses). Together, these results demonstrate that dual-task demands trigger non-linear changes in functional interactions between frontal-executive and occipitoparietal-perceptual cortices.
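For readers unfamiliar with the measure, the sketch below computes magnitude-squared coherence between two simulated source waveforms and averages it over the 18-24 Hz beta band. It uses Welch-based coherence from SciPy on synthetic signals, which is only a stand-in for the study's whole-brain, data-driven source-coherence analysis.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical source waveforms (e.g. a dorsofrontal and an occipital source)
# sampled at 512 Hz; in practice these would come from a distributed inverse
# solution rather than being simulated from a shared driving signal.
fs = 512
rng = np.random.default_rng(1)
shared = rng.standard_normal(fs * 10)                 # common driving signal
frontal = shared + 0.5 * rng.standard_normal(fs * 10)
occipital = np.roll(shared, 10) + 0.5 * rng.standard_normal(fs * 10)

freqs, coh = coherence(frontal, occipital, fs=fs, nperseg=fs)
beta = (freqs >= 18) & (freqs <= 24)
print("Mean beta-band (18-24 Hz) coherence:", coh[beta].mean())
```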
Abstract:
Multisensory interactions are observed in species ranging from single-cell organisms to humans. Important early work, carried out primarily in the cat superior colliculus, defined a set of critical parameters for their occurrence, foremost among them temporal synchrony and spatial alignment of bisensory inputs. Here, we assessed whether spatial alignment is also a critical parameter for the temporally earliest multisensory interactions observed in lower-level sensory cortices of the human. While multisensory interactions in humans have been demonstrated behaviorally for spatially disparate stimuli (e.g. the ventriloquist effect), it is not clear whether such effects are due to early sensory-level integration or later perceptual-level processing. In the present study, we used psychophysical and electrophysiological indices to show that auditory-somatosensory interactions in humans occur via the same early sensory mechanism both when stimuli are in spatial register and when they are not. Subjects detected multisensory events more rapidly than unisensory events. At just 50 ms post-stimulus, neural responses to the multisensory 'whole' were greater than the summed responses from the constituent unisensory 'parts'. For all spatial configurations, this effect followed from a modulation of the strength of brain responses rather than the activation of regions specifically responsive to multisensory pairs. Using the local auto-regressive average source estimation, we localized the initial auditory-somatosensory interactions to auditory association areas contralateral to the side of somatosensory stimulation. Thus, multisensory interactions can occur across wide peripersonal spatial separations remarkably early in sensory processing and in cortical regions traditionally considered unisensory.
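The comparison of the multisensory 'whole' with the summed unisensory 'parts' amounts to contrasting the response to the paired stimulus with the algebraic sum of the two unisensory responses over an early post-stimulus window. The sketch below illustrates that arithmetic on simulated evoked responses; amplitudes, latencies, and window limits are all made up rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.arange(-100, 300)          # ms relative to stimulus onset
n_trials = 100

def simulated_erp(peak):
    """Gaussian-shaped evoked response (trials x time) plus noise."""
    waveform = peak * np.exp(-((times - 60) ** 2) / (2 * 15 ** 2))
    return waveform + rng.standard_normal((n_trials, times.size))

erp_auditory = simulated_erp(1.0)
erp_somatosensory = simulated_erp(1.0)
erp_paired = simulated_erp(2.6)

# "Whole vs. sum of parts" in an early post-stimulus window (~40-80 ms).
window = (times >= 40) & (times <= 80)
whole = erp_paired[:, window].mean()
sum_of_parts = erp_auditory[:, window].mean() + erp_somatosensory[:, window].mean()
print("Superadditive difference (whole minus summed parts):", whole - sum_of_parts)
```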