867 results for Visual perception
Abstract:
Methods are presented (1) to partition or decompose a visual scene into the bodies forming it; (2) to position these bodies in three-dimensional space, by combining two scenes that make a stereoscopic pair; (3) to find the regions or zones of a visual scene that belong to its background; (4) to carry out the isolation of objects in (1) when the input has inaccuracies. Running computer programs implement the methods, and many examples illustrate their behavior. The input is a two-dimensional line-drawing of the scene, assumed to contain three-dimensional bodies possessing flat faces (polyhedra); some of them may be partially occluded. Suggestions are made for extending the work to curved objects. Some comparisons are made with human visual perception. The main conclusion is that it is possible to separate a picture or scene into the constituent objects exclusively on the basis of monocular geometric properties (on the basis of pure form); in fact, successful methods are shown.
Abstract:
A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
Abstract:
Perceptual grouping is well-known to be a fundamental process during visual perception, notably grouping across scenic regions that do not receive contrastive visual inputs. Illusory contours are a classical example of such groupings. Recent psychophysical and neurophysiological evidence has shown that the grouping process can facilitate rapid synchronization of the cells that are bound together by a grouping, even when the grouping must be completed across regions that receive no contrastive inputs. Synchronous grouping can hereby bind together different object parts that may have become desynchronized due to a variety of factors, and can enhance the efficiency of cortical transmission. Neural models of perceptual grouping have clarified how such fast synchronization may occur by using bipole grouping cells, whose predicted properties have been supported by psychophysical, anatomical, and neurophysiological experiments. These models have not, however, incorporated some of the realistic constraints on which groupings in the brain are conditioned, notably the measured spatial extent of long-range interactions in layer 2/3 of a grouping network, and realistic synaptic and axonal signaling delays within and across cells in different cortical layers. This work addresses the question: Can long-range interactions that obey the bipole constraint achieve fast synchronization under realistic anatomical and neurophysiological constraints that initially desynchronize grouping signals? Can the cells that synchronize retain their analog sensitivity to changing input amplitudes? Can the grouping process complete and synchronize illusory contours across gaps in bottom-up inputs? Our simulations show that the answer to these questions is Yes.
Abstract:
Grouping of collinear boundary contours is a fundamental process during visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers, but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property whereby grouping can occur inwardly between pairs or greater numbers of similarly oriented and co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments about perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.
Abstract:
Remembering past events - or episodic retrieval - consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies that compared imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed these issues by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute both to imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex and angular gyrus, showed a pattern common to imagery/retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks.
Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery, but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.
Abstract:
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives relating to the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated.
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information that makes up autobiographical memories. We proposed that amnesia should also result from damage to these cortical regions, particularly regions that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
We describe a form of amnesia, which we have called visual memory-deficit amnesia, that is caused by damage to areas of the visual system that store visual information. Because it is caused by a deficit in access to stored visual material and not by an impaired ability to encode or retrieve new material, it has the otherwise infrequent properties of a more severe retrograde than anterograde amnesia with no temporal gradient in the retrograde amnesia. Of the 11 cases of long-term visual memory loss found in the literature, all had amnesia extending beyond a loss of visual memory, often including a near total loss of pretraumatic episodic memory. Of the 6 cases in which both the severity of retrograde and anterograde amnesia and the temporal gradient of the retrograde amnesia were noted, 4 had a more severe retrograde amnesia with no temporal gradient and 2 had a less severe retrograde amnesia with a temporal gradient.
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
Abstract:
Each of our movements activates our own sensory receptors, and therefore keeping track of self-movement is a necessary part of analysing sensory input. One way in which the brain keeps track of self-movement is by monitoring an internal copy, or corollary discharge, of motor commands. This concept could explain why we perceive a stable visual world despite our frequent quick, or saccadic, eye movements: corollary discharge about each saccade would permit the visual system to ignore saccade-induced visual changes. The critical missing link has been the connection between corollary discharge and visual processing. Here we show that such a link is formed by a corollary discharge from the thalamus that targets the frontal cortex. In the thalamus, neurons in the mediodorsal nucleus relay a corollary discharge of saccades from the midbrain superior colliculus to the cortical frontal eye field. In the frontal eye field, neurons use corollary discharge to shift their visual receptive fields spatially before saccades. We tested the hypothesis that these two components-a pathway for corollary discharge and neurons with shifting receptive fields-form a circuit in which the corollary discharge drives the shift. First we showed that the known spatial and temporal properties of the corollary discharge predict the dynamic changes in spatial visual processing of cortical neurons when saccades are made. Then we moved from this correlation to causation by isolating single cortical neurons and showing that their spatial visual processing is impaired when corollary discharge from the thalamus is interrupted. Thus the visual processing of frontal neurons is spatiotemporally matched with, and functionally dependent on, corollary discharge input from the thalamus. 
These experiments establish the first link between corollary discharge and visual processing, delineate a brain circuit that is well suited for mediating visual stability, and provide a framework for studying corollary discharge in other sensory systems.
Abstract:
The channel-based model of duration perception postulates the existence of neural mechanisms that respond selectively to a narrow range of stimulus durations centred on their preferred duration (Heron et al., Proceedings of the Royal Society B, 279, 690–698). In principle the channel-based model could explain recent reports of adaptation-induced, visual duration compression effects (Johnston et al., Current Biology, 16, 472–479; Curran and Benton, Cognition, 122, 252–257); from this perspective duration compression is a consequence of the adapting stimuli being presented for a longer duration than the test stimuli. In the current experiment observers adapted to a sequence of moving random dot patterns at the same retinal position, each 340 ms in duration and separated by a variable (500–1000 ms) interval. Following adaptation observers judged the duration of a 600 ms test stimulus at the same location. The test stimulus moved in the same, or opposite, direction as the adaptor. Contrary to the channel-based model's prediction, test stimulus duration appeared compressed, rather than expanded, when it moved in the same direction as the adaptor. That test stimulus duration was not distorted when moving in the opposite direction further suggests that visual timing mechanisms are influenced by additional neural processing associated with the stimulus being timed.
Abstract:
This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant for the assigned tasks. The research focuses on the problem of grounding human vocabulary in the robotic agent's sensorimotor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product created and acquired through social interactions. This emphasizes the role of society as a source of linguistic input. Taking these aspects into account, an experimental scenario is set up where a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. The agent grounds the names of these objects in visual perception. Word learning is an open-ended problem. Therefore, the learning architecture of the agent will have to be able to acquire words and categories in an open-ended manner. In this work, four learning architectures were designed that can be used by robotic agents for long-term and open-ended word and category acquisition. The learning methods used in these architectures are designed for incrementally scaling up to larger sets of words and categories. A novel experimental evaluation methodology, which takes into account the open-ended nature of word learning, is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity which, in turn, depends on its sensors and perceptual capabilities. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. The results indicate that all approaches were able to incrementally acquire new words and categories. Although some of the approaches could not scale up to larger vocabularies, one approach was shown to learn up to 293 categories, with potential for learning many more.
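The open-ended acquisition idea described above can be illustrated with a toy incremental learner. This is a minimal sketch only, not one of the four architectures from the thesis; the class name, the running-centroid scheme, and the flat feature vectors are all assumptions made for illustration.

```python
import math

class IncrementalNearestCentroid:
    """Toy open-ended category learner: each taught name keeps a running
    sum of its feature vectors; classification picks the nearest centroid."""

    def __init__(self):
        self.sums = {}    # name -> summed feature vector
        self.counts = {}  # name -> number of teaching examples

    def teach(self, name, features):
        # New categories can be introduced at any time: open-ended acquisition.
        if name not in self.sums:
            self.sums[name] = [0.0] * len(features)
            self.counts[name] = 0
        self.sums[name] = [s + f for s, f in zip(self.sums[name], features)]
        self.counts[name] += 1

    def classify(self, features):
        # The usable vocabulary is bounded by discriminatory capacity:
        # categories whose centroids are too close cannot be told apart.
        if not self.sums:
            return None
        centroid = lambda n: [s / self.counts[n] for s in self.sums[n]]
        return min(self.sums, key=lambda n: math.dist(centroid(n), features))
```

A human instructor corresponds to repeated `teach` calls on shared percepts; the learner grows its category set without retraining from scratch, which is the incremental property the evaluation methodology above is designed to measure.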
Abstract:
Few models can explain Mach bands (Pessoa, 1996, Vision Research, 36, 3205-3227). Our own employs multiscale line and edge coding by simple and complex cells. Lines are modeled by Gaussian functions, edges by bipolar, Gaussian-truncated error functions. The widths of these functions are coupled to the scales of the underlying cells, and the amplitudes are determined by their responses.
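The two event profiles named above can be written out directly: a Gaussian for line events, and an error function under a Gaussian envelope for edge events. The abstract does not give the exact truncation or normalization, so this particular parameterization is an assumption; only the qualitative shapes are taken from the text.

```python
import math

def gaussian_line(x, sigma, amplitude=1.0):
    # Line event: a Gaussian whose width sigma is coupled to the scale
    # of the underlying cell; the amplitude comes from its response.
    return amplitude * math.exp(-x * x / (2.0 * sigma * sigma))

def bipolar_edge(x, sigma, amplitude=1.0):
    # Edge event: a bipolar error function truncated by a Gaussian
    # envelope, so the profile decays to zero away from the edge.
    return (amplitude * math.erf(x / (sigma * math.sqrt(2.0)))
            * math.exp(-x * x / (2.0 * sigma * sigma)))
```

The line profile peaks at the event position x = 0, while the edge profile crosses zero there with opposite-sign lobes on either side; larger sigma gives wider, coarser-scale events.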
Abstract:
Painterly rendering (non-photorealistic rendering or NPR) aims at translating photographs into paintings with discrete brush strokes, simulating certain techniques (impressionism or expressionism) and media (oil or watercolour). Recently, our research into visual perception and models of processes in the visual cortex resulted in a new rendering scheme, in which detected lines and edges at different scales are translated into brush strokes of different sizes. In order to prepare a version which is suitable for many users, including children, the design of the interface in terms of its window and menu system is very important. Discussions with artists and non-artists led to three design criteria: (1) the interface must reflect the procedures and possibilities that real painters follow and use, (2) it must be based on only one window, and (3) the menu system must be very simple, avoiding a jungle of menus and sub-menus. This paper explains the interface that has been developed.
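The core rendering idea, multiscale line and edge detections becoming brush strokes of matching sizes, can be sketched as follows. The `Stroke` type, the `(x, y, angle, scale)` tuple format for detected events, and the coarse-to-fine painting order are illustrative assumptions, not the paper's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    x: int
    y: int
    angle: float   # stroke orientation, radians
    size: float    # brush size in pixels

def strokes_from_events(events, base_size=2.0):
    """Map detected line/edge events (x, y, angle, scale) to brush strokes.

    Coarser scales yield larger strokes; sorting large-to-small lets the
    painter lay down coarse strokes first and refine with fine ones on top,
    mimicking how a real painter works from broad shapes to detail.
    """
    strokes = [Stroke(x, y, a, base_size * s) for (x, y, a, s) in events]
    strokes.sort(key=lambda st: st.size, reverse=True)
    return strokes
```

The coarse-to-fine ordering also matches design criterion (1) above: the tool's workflow should reflect the procedures that real painters follow.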