866 results for visual half-field
Abstract:
Short-range correlations of two-dimensional electrons in a strong magnetic field are shown to be triangular in nature well below half-filling, but honeycomb well above half-filling. The half-filling point is thus proposed, and qualitatively confirmed by three-body correlation calculations, to be a new type of disorder point where short-range correlations change character. A wavefunction study also suggests that nodes become unbound at half-filling. Evidence for incompressibility but deformability of the half-filling state, earlier suggested by Fano, Ortolani and Tosatti, is also presented and found to agree with recent experiments.
Abstract:
Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the cost of printing color images or to help color-blind people see the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to produce a reasonable transformation. To solve, or at least reduce, these problems, we propose a new algorithm based on a probabilistic graphical model, with the assumption that the image is defined over a Markov random field. The color-to-gray procedure can thus be regarded as a labeling process that preserves the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that a perceiver can extract from a color image; they indicate the state of those properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image, and three cues are defined in this paper: color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray conversion as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model via an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.
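As a loose illustration of the underlying idea (this is not the authors' MRF formulation), the sketch below treats the gray image as a weighted channel combination and picks the weights that best preserve local color contrast, one simple stand-in for a visual cue; the function name and candidate-weight grid are assumptions for the example.

import numpy as np

def color_to_gray(img):
    """img: float array (H, W, 3) with values in [0, 1]."""
    # Candidate "labels": a coarse grid of non-negative channel weights
    # summing to one (the real method optimizes over an MRF instead).
    weights = [(r, g, 1.0 - r - g)
               for r in np.linspace(0.0, 1.0, 11)
               for g in np.linspace(0.0, 1.0 - r, 11)]
    # Color contrast between horizontal neighbors (Euclidean in RGB);
    # vertical neighbors are omitted for brevity.
    color_diff = np.linalg.norm(np.diff(img, axis=1), axis=2)
    best_w, best_err = None, np.inf
    for w in weights:
        gray_diff = np.abs(np.diff(img @ np.array(w), axis=1))
        err = np.sum((gray_diff - color_diff) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return img @ np.array(best_w)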
Abstract:
C.R. Bull, R. Zwiggelaar and J.V. Stafford, 'Imaging as a technique for assessment and control in the field', Aspects of Applied Biology 43, 197-204 (1995)
Abstract:
Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for use in estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on an assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages using synthetic optical flow maps as well as natural image sequences.
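For context, here is a minimal sketch of plain log-polar resampling, the baseline architecture this abstract generalizes (the log-dipolar variant and the Helmholtz-Hodge flow estimation are not reproduced); the function name and parameter values are illustrative.

import numpy as np

def log_polar_sample(img, n_rings=64, n_wedges=128, r_min=1.0):
    """Resample a grayscale image (H, W) onto a log-polar grid
    centered on the image midpoint."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    # Ring radii grow exponentially, mimicking foveal magnification.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]          # shape (n_rings, n_wedges)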
Abstract:
How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
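A minimal sketch of the self-normalizing choice idea (illustrative assumptions throughout, not the published model): two shunting units accumulate noisy evidence for opposite motion directions and compete until one crosses a decision threshold; all parameter values and the function name are invented for the example.

import numpy as np

def race(coherence=0.1, dt=1e-3, thresh=0.4, t_max=2.0, seed=0):
    """Return (choice, reaction_time); choice is None if no crossing."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                          # two LIP-like decision units
    for step in range(int(t_max / dt)):
        drive = np.array([0.5 + coherence, 0.5 - coherence])
        # Shunting dynamics: decay, saturating self-excitation, and
        # inhibition from the competing unit (x[::-1] is the other unit).
        dx = -x + (1 - x) * (drive + x) - x * x[::-1]
        x = np.clip(x + dt * dx + np.sqrt(dt) * rng.normal(0, 0.3, 2),
                    0, None)
        if x.max() >= thresh:
            return int(x.argmax()), (step + 1) * dt   # choice and RT
    return None, t_max                       # no decision within the trial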
Abstract:
A neural theory is proposed in which visual search is accomplished by perceptual grouping and segregation, which occur simultaneously across the visual field, and object recognition, which is restricted to a selected region of the field. The theory offers an alternative hypothesis to recently developed variations on Feature Integration Theory (Treisman and Sato, 1991) and the Guided Search Model (Wolfe, Cave, and Franzel, 1989). A neural architecture and search algorithm are specified that quantitatively explain a wide range of psychophysical search data (Wolfe, Cave, and Franzel, 1989; Cohen and Ivry, 1991; Mordkoff, Yantis, and Egeth, 1990; Treisman and Sato, 1991).
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 -> V2 and V1 -> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90° whereas opposite directions differ by 180°, and why a cortical stream V1 -> V2 -> MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design.
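As a rough illustration of motion preprocessing (a Reichardt-style correlation detector standing in for the idea, not the MOC Filter itself; the function name and offset parameter are assumptions), the following computes a direction-selective response from two successive frames.

import numpy as np

def rightward_motion_energy(prev_frame, next_frame, offset=1):
    """Correlation-based direction selectivity: content that moved
    `offset` pixels rightward between the two grayscale frames yields
    a positive opponent response."""
    shifted = np.roll(prev_frame, offset, axis=1)   # delayed + displaced
    forward = shifted * next_frame                  # preferred direction
    backward = np.roll(next_frame, offset, axis=1) * prev_frame
    return forward - backward                       # opponent output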
Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception
Abstract:
A neural network model of synchronized oscillations in visual cortex is presented to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these experiments, synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained for single bar stimuli and also across two disconnected, but collinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli using different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as nearest-neighbor and randomly coupled models.
Abstract:
A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these recent experiments, it was reported that synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but collinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as by nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony.
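A minimal sketch under stated assumptions (Kuramoto-style phase units standing in for the paper's limit-cycle oscillators; all parameter values illustrative): driven units oscillate, undriven units are quiescent, and nearest-neighbor coupling can entrain the gap between two collinear bars.

import numpy as np

def simulate(drive, k=4.0, omega=2 * np.pi, steps=4000, dt=1e-3, seed=0):
    """drive: boolean mask over a 1-D chain of units (True = stimulated).
    Returns the final phase of each unit."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, len(drive))
    for _ in range(steps):
        left, right = np.roll(phase, 1), np.roll(phase, -1)
        coupling = np.sin(left - phase) + np.sin(right - phase)
        # Driven units carry an intrinsic rhythm; undriven units have
        # none, but coupling strong enough (2k > omega) can still entrain
        # the units lying between the two bars. np.roll closes the chain
        # into a ring, which is acceptable for a sketch.
        phase += dt * (omega * drive + k * coupling)
    return phase % (2 * np.pi)

# Two collinear "bars" separated by an undriven gap:
bars = np.array([False] * 3 + [True] * 6 + [False] * 3
                + [True] * 6 + [False] * 3)
final_phases = simulate(bars)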
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep, and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
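As a generic illustration of the dimension-reduction step (a PCA-style sketch over a hypothetical subjects-by-tasks score matrix, not the study's actual analysis pipeline; names and shapes are assumptions):

import numpy as np

def extract_factors(scores, n_factors=3):
    """scores: (n_subjects, n_tasks) array of task performance."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # z-score tasks
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)               # variance per component
    loadings = vt[:n_factors].T                       # task-by-factor loadings
    factor_scores = u[:, :n_factors] * s[:n_factors]  # per-subject scores
    return loadings, factor_scores, var_explained[:n_factors]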
Abstract:
Each of our movements activates our own sensory receptors, and therefore keeping track of self-movement is a necessary part of analysing sensory input. One way in which the brain keeps track of self-movement is by monitoring an internal copy, or corollary discharge, of motor commands. This concept could explain why we perceive a stable visual world despite our frequent quick, or saccadic, eye movements: corollary discharge about each saccade would permit the visual system to ignore saccade-induced visual changes. The critical missing link has been the connection between corollary discharge and visual processing. Here we show that such a link is formed by a corollary discharge from the thalamus that targets the frontal cortex. In the thalamus, neurons in the mediodorsal nucleus relay a corollary discharge of saccades from the midbrain superior colliculus to the cortical frontal eye field. In the frontal eye field, neurons use corollary discharge to shift their visual receptive fields spatially before saccades. We tested the hypothesis that these two components, a pathway for corollary discharge and neurons with shifting receptive fields, form a circuit in which the corollary discharge drives the shift. First we showed that the known spatial and temporal properties of the corollary discharge predict the dynamic changes in spatial visual processing of cortical neurons when saccades are made. Then we moved from this correlation to causation by isolating single cortical neurons and showing that their spatial visual processing is impaired when corollary discharge from the thalamus is interrupted. Thus the visual processing of frontal neurons is spatiotemporally matched with, and functionally dependent on, corollary discharge input from the thalamus. These experiments establish the first link between corollary discharge and visual processing, delineate a brain circuit that is well suited for mediating visual stability, and provide a framework for studying corollary discharge in other sensory systems.
Abstract:
The macaque frontal eye field (FEF) is involved in the generation of saccadic eye movements and fixations. To better understand the role of the FEF, we reversibly inactivated a portion of it while a monkey made saccades and fixations in response to visual stimuli. Lidocaine was infused into one FEF and neural inactivation was monitored with a nearby microelectrode. We used two saccadic tasks. In the delay task, a target was presented and then extinguished, but the monkey was not allowed to make a saccade to its location until a cue to move was given. In the step task, the monkey was allowed to look at a target as soon as it appeared. During FEF inactivation, monkeys were severely impaired at making saccades to locations of extinguished contralateral targets in the delay task. They were similarly impaired at making saccades to locations of contralateral targets in the step task if the target was flashed for ≤100 ms, such that it was gone before the saccade was initiated. Deficits included increases in saccadic latency, increases in saccadic error, and increases in the frequency of trials in which a saccade was not made. We varied the initial fixation location and found that the impairment specifically affected contraversive saccades rather than affecting all saccades made into head-centered contralateral space. Monkeys were impaired only slightly at making saccades to contralateral targets in the step task if the target duration was 1000 ms, such that the target was present during the saccade: latency increased, but increases in saccadic error were mild and increases in the frequency of trials in which a saccade was not made were insignificant. During FEF inactivation there usually was a direct correlation between the latency and the error of saccades made in response to contralateral targets. In the delay task, FEF inactivation increased the frequency of premature saccades to ipsilateral targets. FEF inactivation had inconsistent and mild effects on saccadic peak velocity. FEF inactivation caused impairments in the ability to fixate lights steadily in contralateral space. FEF inactivation always caused an ipsiversive deviation of the eyes in darkness. In summary, our results suggest that the FEF plays major roles in (1) generating contraversive saccades to locations of extinguished or flashed targets, (2) maintaining contralateral fixations, and (3) suppressing inappropriate ipsiversive saccades.
Abstract:
Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed.
Abstract:
As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
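A minimal sketch of the remapping idea (an illustrative numpy stand-in, not the published sheet-based network; integer pixel shifts and both function names are assumptions):

import numpy as np

def remap(activity, saccade):
    """Shift an FEF-like retinotopic activity map (H, W) by a corollary
    discharge copy of the upcoming saccade vector (dy, dx), in integer
    pixels, before the eye actually moves."""
    dy, dx = saccade
    # After the saccade, a stationary target's retinal position moves by
    # -(dy, dx), so the map is shifted opposite to the saccade.
    return np.roll(np.roll(activity, -dy, axis=0), -dx, axis=1)

def to_head_centered(retinal_pos, eye_pos):
    """Combine retinal position with an eye-position signal to obtain a
    stable, head-centered coordinate for guiding pointing."""
    return np.asarray(retinal_pos) + np.asarray(eye_pos)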
Abstract:
The thermal stress in a Sn3.5Ag1Cu half-bump solder joint under 3.82×10⁸ A/m² current stressing was analyzed using a coupled-field simulation. Substantial thermal stress accumulated around the Al-to-solder interface, especially in the Ni + (Ni,Cu)₃Sn₄ layer, where a maximal stress of 138 MPa was identified. The stress gradient in the Ni layer was about 1.67×10¹³ Pa/m, resulting in a stress-migration force of 1.82×10⁻¹⁶ N, which is comparable to the electromigration force of 2.82×10⁻¹⁶ N. Dissolution of the Ni + (Ni,Cu)₃Sn₄ layer, void formation with cracks at the anode side, and extrusions at the cathode side were observed.
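As a rough consistency check (assuming the atomic volume of Ni, Ω ≈ 1.09×10⁻²⁹ m³, a value not given in the abstract), the quoted stress-migration force follows from the usual driving-force relation:

F_{\mathrm{SM}} = \Omega\,\frac{\mathrm{d}\sigma}{\mathrm{d}x} \approx \left(1.09\times10^{-29}\,\mathrm{m^{3}}\right)\left(1.67\times10^{13}\,\mathrm{Pa/m}\right) \approx 1.82\times10^{-16}\,\mathrm{N}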