Abstract:
The hallucinogenic serotonin (1A/2A) agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced-choice direction-of-motion discrimination task, we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed in schizophrenic populations and are discussed with respect to the proposition that psilocybin may provide a model for investigating clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.
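Coherence thresholds in a forced-choice task like the one described are typically estimated with an adaptive staircase. The sketch below is a generic 1-up/2-down staircase run against a simulated observer (the observer model and all parameter values are illustrative assumptions, not the study's actual procedure or data); it converges near the coherence yielding about 70.7% correct.

```python
import random

def simulate_observer(coherence, threshold=0.15):
    """Hypothetical observer: P(correct) rises with coherence (Weibull-like)."""
    p_correct = 1 - 0.5 * 2 ** (-((coherence / threshold) ** 2))
    return random.random() < p_correct

def staircase(n_trials=400, start=0.5, step=0.02):
    """1-up/2-down staircase on motion coherence; returns a threshold estimate."""
    coherence, correct_streak, last_direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if simulate_observer(coherence):
            correct_streak += 1
            if correct_streak == 2:              # two correct in a row -> harder
                correct_streak = 0
                if last_direction == +1:
                    reversals.append(coherence)  # direction change = reversal
                coherence = max(step, coherence - step)
                last_direction = -1
        else:                                    # one error -> easier
            correct_streak = 0
            if last_direction == -1:
                reversals.append(coherence)
            coherence = min(1.0, coherence + step)
            last_direction = +1
    # threshold estimate: mean of the last few reversal points
    return sum(reversals[-6:]) / len(reversals[-6:])

random.seed(1)
print(round(staircase(), 3))   # settles near the simulated observer's threshold
```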
Abstract:
Models of visual motion processing that introduce priors for low speed through Bayesian computations are sometimes treated with scepticism by empirical researchers because of the convenient way in which parameters of the Bayesian priors have been chosen. Using the effects of motion adaptation on motion perception as an illustration, we show that the Bayesian prior, far from being convenient, may be estimated on-line and therefore represents a useful tool by which visual motion processes may be optimized to extract the motion signals commonly encountered in everyday experience. The prescription for optimization, when combined with system constraints on the transmission of visual information, may lead to an exaggeration of perceptual bias through the process of adaptation. Our approach extends the Bayesian model of visual motion proposed by Weiss et al. [Weiss, Y., Simoncelli, E., & Adelson, E. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604.] in suggesting that perceptual bias reflects a compromise taken by a rational system in the face of uncertain signals and system constraints.
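When the likelihood and the low-speed prior are both taken as Gaussian, as in the Weiss et al. model, the posterior mean has a closed form, and the slow-speed bias falls out directly: noisier measurements (e.g. at low contrast) are shrunk more strongly toward zero. The parameter values below are purely illustrative.

```python
def perceived_speed(measured, sigma_like, sigma_prior):
    """Posterior mean for a Gaussian likelihood N(measured, sigma_like^2)
    combined with a zero-mean Gaussian slow-speed prior N(0, sigma_prior^2)."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_like ** 2)
    return w * measured   # shrunk toward zero: the slow-speed bias

true_speed = 8.0
# a reliable measurement is barely biased...
print(perceived_speed(true_speed, sigma_like=1.0, sigma_prior=4.0))
# ...while a noisy one is strongly biased toward slow speeds
print(perceived_speed(true_speed, sigma_like=4.0, sigma_prior=4.0))
```

The on-line estimation idea in the abstract amounts to updating `sigma_prior` from the statistics of recently encountered speeds, which shifts this weighting as the environment (or adaptation state) changes.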
Abstract:
A preliminary study by Freeman et al (1996b) suggested that when complex patterns of motion elicit impressions of two-dimensionality, odd-item-out detection improves, provided that targets can be differentiated on the basis of surface properties. Their results can be accounted for if it is supposed that observers are permitted efficient access to 3-D surface descriptions while access to 2-D motion descriptions is restricted. To test the hypothesis, a standard search technique was employed in which targets could be discriminated on the basis of slant sign. In one experiment, slant impressions were induced through the summing of deformation and translation components. In a second, they were induced through the summing of shear and translation components. Neither showed any evidence of efficient access. A third experiment explored the possibility that access to these representations may have been hindered by a lack of grouping between the stimuli. Attempts to improve grouping failed to produce convincing evidence in support of the hypothesis. An alternative explanation is that complex patterns of motion are simply not processed simultaneously. Psychophysical and physiological studies have, however, suggested that multiple mechanisms selective for complex motion do exist. Using a subthreshold summation technique I found evidence supporting the notion that complex motions are processed in parallel. Furthermore, in a spatial summation experiment, coherence thresholds were measured for displays containing different numbers of complex motion patches. Consistent with the idea that complex motion processing proceeds in parallel, increases in the number of motion patches were seen to decrease thresholds, both for expansion and rotation. Moreover, the rates of decrease were higher than those typically expected from probability summation, implying that mechanisms are available which can pool signals from spatially distinct complex motion flows.
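The probability-summation benchmark mentioned here can be made concrete. Under a high-threshold model, detecting any of N independent patches gives P = 1 - (1 - p)^N, and with a Weibull psychometric function this predicts thresholds falling only as N^(-1/beta); steeper falls imply genuine pooling. The sketch below (illustrative threshold and beta values, not the thesis data) computes that prediction numerically.

```python
def weibull_p(c, threshold, beta):
    """Weibull detection probability (guessing-free, high-threshold form)."""
    return 1 - 2 ** (-(c / threshold) ** beta)

def prob_summation_threshold(n_patches, single_threshold, beta, target=0.5):
    """Bisect for the coherence where P(detect any of N patches) = target."""
    lo, hi = 1e-6, single_threshold * 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        p_any = 1 - (1 - weibull_p(mid, single_threshold, beta)) ** n_patches
        lo, hi = (mid, hi) if p_any < target else (lo, mid)
    return 0.5 * (lo + hi)

beta = 3.0
t1 = prob_summation_threshold(1, 0.2, beta)
t4 = prob_summation_threshold(4, 0.2, beta)
# probability summation alone predicts t4/t1 = 4 ** (-1/beta)
print(round(t4 / t1, 2))   # ≈ 0.63
```

Observed threshold ratios below this value would indicate mechanisms pooling signals across patches, as the abstract concludes.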
Abstract:
γ-aminobutyric acid (GABA) is the main inhibitory transmitter in the nervous system and acts via three distinct receptor classes: A, B, and C. GABAC receptors are ionotropic receptors comprising ρ subunits. In this work, we aimed to elucidate the expression of ρ subunits in the postnatal brain, the characteristics of ρ2 homo-oligomeric receptors, and the function of GABAC receptors in the hippocampus. In situ hybridization on rat brain slices showed ρ2 mRNA expression from birth in the superficial grey layer of the superior colliculus, from the first postnatal week in the hippocampal CA1 region and the pretectal nucleus of the optic tract, and in the adult dorsal lateral geniculate nucleus. Quantitative RT-PCR revealed expression of all three ρ subunits in the hippocampus and superior colliculus from the first postnatal day. In the hippocampus, ρ2 mRNA expression clearly dominated over ρ1 and ρ3. GABAC receptor protein expression was confirmed in the adult hippocampus, superior colliculus, and dorsal lateral geniculate nucleus by immunohistochemistry. The selective distribution of ρ subunits suggests that GABAC receptors may be specifically involved in aspects of visual image motion processing in the rat brain. Although previous data had indicated a much higher expression level for ρ2 subunit transcripts than for ρ1 or ρ3 in the brain, previous work in Xenopus oocytes had suggested that rat ρ2 subunits do not form functional homo-oligomeric GABAC receptors but need ρ1 or ρ3 subunits to form hetero-oligomers. Our results demonstrated, for the first time, that HEK 293 cells transfected with ρ2 cDNA displayed currents in whole-cell patch-clamp recordings. Homomeric rat ρ2 receptors had a decreased sensitivity to, but a high affinity for, picrotoxin and a marked sensitivity to the GABAC receptor agonist CACA. Our results suggest that ρ2 subunits may contribute to brain function, also in areas not expressing other ρ subunits.
Using extracellular electrophysiological recordings, we aimed to study the effects of GABAC receptor agonists and antagonists on responses of hippocampal neurons to electrical stimulation. Activation of GABAC receptors with CACA suppressed postsynaptic excitability, and the GABAC receptor antagonist TPMPA inhibited the effects of CACA. Next, we aimed to demonstrate the activation of GABAC receptors by synaptically released GABA using intracellular recordings. GABA-mediated long-lasting depolarizing responses evoked by high-frequency stimulation were prolonged by TPMPA. For weaker stimulation, the effect of TPMPA was enhanced after GABA uptake was inhibited. Our data demonstrate that GABAC receptors can be activated by endogenous synaptic transmitter release following strong stimulation or under conditions of reduced GABA uptake. The lack of GABAC receptor activation by less intensive stimulation under control conditions suggests that these receptors are extrasynaptic and activated via spillover of synaptically released GABA. Taken together with the restricted expression pattern of GABAC receptors in the brain and their distinctive pharmacological and biophysical properties, our findings supporting an extrasynaptic localization of these receptors raise interesting possibilities for novel pharmacological therapies in the treatment of, for example, epilepsy and sleep disorders.
Abstract:
The temporal structure of neuronal spike trains in the visual cortex can provide detailed information about the stimulus and about the neuronal implementation of visual processing. Spike trains recorded from the macaque motion area MT in previous studies (Newsome et al., 1989a; Britten et al., 1992; Zohary et al., 1994) are analyzed here in the context of the dynamic random dot stimulus which was used to evoke them. If the stimulus is incoherent, the spike trains can be highly modulated and precisely locked in time to the stimulus. In contrast, the coherent motion stimulus creates little or no temporal modulation and allows us to study patterns in the spike train that may be intrinsic to the cortical circuitry in area MT. Long gaps in the spike train evoked by the preferred direction motion stimulus are found, and they appear to be symmetrical to bursts in the response to the anti-preferred direction of motion. A novel cross-correlation technique is used to establish that the gaps are correlated between pairs of neurons. Temporal modulation is also found in psychophysical experiments using a modified stimulus. A model is made that can account for the temporal modulation in terms of the computational theory of biological image motion processing. A frequency domain analysis of the stimulus reveals that it contains a repeated power spectrum that may account for psychophysical and electrophysiological observations.
Some neurons tend to fire bursts of action potentials while others avoid burst firing. Using numerical and analytical models of spike trains as Poisson processes with the addition of refractory periods and bursting, we are able to account for peaks in the power spectrum near 40 Hz without assuming the existence of an underlying oscillatory signal. A preliminary examination of the local field potential reveals that stimulus-locked oscillation appears briefly at the beginning of the trial.
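The claim that a spectral peak can arise without an underlying oscillator is easy to reproduce: a renewal process whose inter-spike intervals are exponential plus an absolute refractory period shows suppressed low-frequency power and elevated power near the mean firing rate. This sketch uses illustrative parameters (40 Hz rate, 10 ms refractory period), not the recorded MT data.

```python
import numpy as np

rng = np.random.default_rng(0)

def refractory_poisson(rate_hz, refractory_s, duration_s):
    """Renewal spike train: exponential ISIs plus an absolute refractory period."""
    # inflate the exponential rate so the overall mean rate stays near rate_hz
    exp_rate = rate_hz / (1 - rate_hz * refractory_s)
    isis = refractory_s + rng.exponential(1 / exp_rate,
                                          size=int(rate_hz * duration_s * 2))
    times = np.cumsum(isis)
    return times[times < duration_s]

duration = 200.0
spikes = refractory_poisson(rate_hz=40, refractory_s=0.010, duration_s=duration)

# bin at 1 ms and estimate the power spectrum of the mean-subtracted train
binned = np.histogram(spikes, bins=np.arange(0, duration, 0.001))[0].astype(float)
binned -= binned.mean()
power = np.abs(np.fft.rfft(binned)) ** 2
freqs = np.fft.rfftfreq(binned.size, d=0.001)

band = (freqs > 20) & (freqs < 80)   # around the firing rate
low = (freqs > 0.5) & (freqs < 5)    # low frequencies, suppressed by refractoriness
print(power[band].mean() > power[low].mean())   # peak without any oscillator
```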
Abstract:
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
Abstract:
How does the brain use eye movements to track objects that move in unpredictable directions and speeds? Saccadic eye movements rapidly foveate peripheral visual or auditory targets and smooth pursuit eye movements keep the fovea pointed toward an attended moving target. Analyses of tracking data in monkeys and humans reveal systematic deviations from predictions of the simplest model of saccade-pursuit interactions, which would use no interactions other than common target selection and recruitment of shared motoneurons. Instead, saccadic and smooth pursuit movements cooperate to cancel errors of gaze position and velocity, and thus to maximize target visibility through time. How are these two systems coordinated to promote visual localization and identification of moving targets? How are saccades calibrated to correctly foveate a target despite its continued motion during the saccade? A neural model proposes answers to such questions. The modeled interactions encompass motion processing areas MT, MST, FPA, DLPN and NRTP; saccade planning and execution areas FEF and SC; the saccadic generator in the brain stem; and the cerebellum. Simulations illustrate the model’s ability to functionally explain and quantitatively simulate anatomical, neurophysiological and behavioral data about SAC-SPEM tracking.
Abstract:
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth pursuit eye movements. In particular, the saccadic and smooth pursuit systems interact, often choosing the same target, and maximizing its visibility through time. How do multiple brain regions, including frontal cortical areas, interact to decide the choice of a target among several competing moving stimuli? How is target selection information that is created by a bias (e.g., electrical stimulation) transferred from one movement system to another? These saccade-pursuit interactions are clarified by a new computational neural model, which describes interactions among motion processing areas MT, MST, FPA, DLPN; saccade specification, selection, and planning areas LIP, FEF, SNr, SC; the saccadic generator in the brain stem; and the cerebellum. Model simulations explain a broad range of neuroanatomical and neurophysiological data. These results contrast with those of the simplest parallel model, which posits no interactions between saccades and pursuit other than common-target selection and recruitment of shared motoneurons. Actual tracking episodes in primates reveal multiple systematic deviations from the predictions of this simplest parallel model, deviations which are explained by the current model.
Abstract:
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
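The attractor/repeller scheme can be captured at the behavioural level by a simple heading-dynamics equation of the kind used in models of human walking; this is a generic sketch in that spirit (all gains and geometry are assumed values, and it stands in for the paper's neural circuitry, not reproduces it): heading is attracted toward the goal direction and repelled from the obstacle direction, with repulsion decaying with angular distance.

```python
import math

def steer(heading, goal_dir, obstacle_dir, k_goal=4.0, k_obs=2.0, c=1.5):
    """One Euler step of heading dynamics: the goal attracts, the obstacle repels.
    Repulsion decays exponentially with angular distance to the obstacle."""
    # wrapped angular differences in (-pi, pi]
    d_goal = math.atan2(math.sin(goal_dir - heading), math.cos(goal_dir - heading))
    d_obs = math.atan2(math.sin(obstacle_dir - heading), math.cos(obstacle_dir - heading))
    d_heading = k_goal * d_goal - k_obs * d_obs * math.exp(-c * abs(d_obs))
    return heading + 0.01 * d_heading

heading = 0.5                # initial heading, radians
goal, obstacle = 0.0, 0.05   # obstacle lies almost in line with the goal
for _ in range(2000):
    heading = steer(heading, goal, obstacle)
print(round(heading, 3))     # settles near the goal, deflected away from the obstacle
```

The equilibrium heading is the compromise the abstract describes: close to the goal direction, but pushed to the far side of the obstacle.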
Abstract:
We investigated whether infants from 8–22 weeks of age were sensitive to the illusory contour created by aligned line terminators. Previous reports of illusory-contour detection in infants under 4 months old could be due to infants' preference for the presence of terminators rather than their configuration. We generated preferential-looking stimuli containing sinusoidal lines whose oscillating, abutting terminators give a strong illusory contour in adult perception. Our experiments demonstrated a preference in infants 8 weeks old and above for an oscillating illusory contour compared with a stimulus containing equal terminator density and movement. Control experiments excluded local line density, or attention to alignment in general, as the basis for this result. In the youngest age group (8–10 weeks) stimulus velocity appears to be critical in determining the visibility of illusory contours, which is consistent with other data on motion processing at this age. We conclude that, by 2 months of age, the infant's visual system contains the nonlinear mechanisms necessary to extract an illusory contour from aligned terminators.
Abstract:
When viewing two superimposed, translating sets of dots moving in different directions, one overestimates the direction difference. This phenomenon of direction repulsion is thought to be driven by inhibitory interactions between directionally tuned motion detectors [1, 2]. However, there is disagreement on where this occurs — at early stages of motion processing [1, 3], or at the later, global motion-processing stage following “pooling” of these measures [4–6]. These two stages of motion processing have been identified as occurring in area V1 and the human homolog of macaque MT/V5, respectively [7, 8]. We designed experiments in which local and global predictions of repulsion are pitted against one another. Our stimuli contained a target set of dots, moving at a uniform speed, superimposed on a “mixed-speed” distractor set. Because the perceived speed of a mixed-speed stimulus is equal to the dots’ average speed [9], a global-processing account of direction repulsion predicts that the repulsion magnitude induced by a mixed-speed distractor will be indistinguishable from that induced by a single-speed distractor moving at the same mean speed. This is exactly what we found. These results provide compelling evidence that global-motion interactions play a major role in driving direction repulsion.
Abstract:
Previous research has shown that prior adaptation to a spatially circumscribed, oscillating grating results in the duration of a subsequent stimulus briefly presented within the adapted region being underestimated. There is an ongoing debate about where in the motion processing pathway the adaptation underlying this distortion of sub-second duration perception occurs. One position is that the LGN and, perhaps, early cortical processing areas are likely sites for the adaptation; an alternative suggestion is that visual area MT+ contains the neural mechanisms for sub-second timing; and a third position proposes that the effect is driven by adaptation at multiple levels of the motion processing pathway. A related issue is in what frame of reference – retinotopic or spatiotopic – adaptation-induced duration distortion occurs. We addressed these questions by having participants adapt to a unidirectional random dot kinematogram (RDK), and then measuring the perceived duration of a 600 ms test RDK positioned in either the same retinotopic or the same spatiotopic location as the adaptor. We found that, when it did occur, duration distortion of the test stimulus was direction contingent; that is, it occurred when the adaptor and test stimuli drifted in the same direction, but not when they drifted in opposite directions. Furthermore, the duration compression was evident primarily under retinotopic viewing conditions, with little evidence of duration distortion under spatiotopic viewing conditions. Our results support previous research implicating cortical mechanisms in the duration encoding of sub-second visual events, and reveal that these mechanisms encode duration within a retinotopic frame of reference.
Abstract:
The duration compression effect is a phenomenon in which prior adaptation to a spatially circumscribed dynamic stimulus results in the duration of subsequent subsecond stimuli presented in the adapted region being underestimated. There is disagreement over the frame of reference within which the duration compression phenomenon occurs. One view holds that the effect is driven by retinotopically tuned mechanisms located at early stages of visual processing; an alternative position is that the mechanisms are spatiotopic and occur at later stages of visual processing (MT+). We addressed the retinotopic-spatiotopic question by using adapting stimuli – drifting plaids – that are known to activate global-motion mechanisms in area MT. If spatiotopic mechanisms contribute to the duration compression effect, drifting plaid adaptors should be well suited to revealing them. Following adaptation, participants were tasked with estimating the duration of a 600 ms random dot stimulus, whose direction was identical to the pattern direction of the adapting plaid, presented at either the same retinotopic or the same spatiotopic location as the adaptor. Our results reveal significant duration compression in both conditions, pointing to the involvement of both retinotopically tuned and spatiotopically tuned mechanisms in the duration compression effect.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
Emerging evidence of the high variability in the cognitive skills and deficits associated with reading achievement and dysfunction promotes both a more dimensional view of the risk factors involved and an emphasis on discriminating between trajectories of impairment. Here we examined reading and component orthographic and phonological skills alongside measures of cognitive ability and auditory and visual sensory processing in a large group of primary school children between the ages of 7 and 12 years. We identified clusters of children with pseudoword or exception word reading scores at the 10th percentile or below relative to their age group, and a group with poor skills on both tasks. Compared to age-matched and reading-level controls, groups of children with more impaired exception word reading were best described by a trajectory of developmental delay, whereas readers with more impaired pseudoword reading or combined deficits corresponded more closely with a pattern of atypical development. Sensory processing deficits clustered within both of the groups with putatively atypical development: auditory discrimination deficits with poor phonological awareness skills; impairments of visual motion processing in readers with broader and more severe patterns of reading and cognitive impairment. Sensory deficits have been variably associated with developmental impairments of literacy and language; these results suggest that such deficits are also likely to cluster in children with particular patterns of reading difficulty.