868 results for Human vision system


Relevance: 100.00%

Abstract:

In this paper we present a robust face location system, based on simulations of human vision, that automatically locates faces in static color images. Our method is divided into four stages. In the first stage we apply a Gaussian low-pass filter to remove fine image detail, which plays no role in the initial stage of human vision. In the second and third stages, our technique coarsely detects image regions that may contain faces. In the fourth stage, the presence of a face in each selected region is verified. By combining the advantages of bottom-up feature-based methods and appearance-based methods, our algorithm performs well on a wide variety of images, including those with highly complex backgrounds.
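As a rough illustration of the first stages, the sketch below (OpenCV; the kernel size, the YCrCb skin-colour thresholds, and the area cut-off are assumptions for illustration, not the paper's values) blurs the image and proposes candidate regions by a simple skin-colour rule:

```python
import cv2

def candidate_face_regions(bgr_image):
    """Hypothetical sketch of the first stages: Gaussian low-pass
    filtering followed by a coarse skin-colour region proposal."""
    # Stage 1: discard fine detail, mimicking early (coarse) vision.
    coarse = cv2.GaussianBlur(bgr_image, (15, 15), sigmaX=4.0)

    # Stages 2-3 (approximate): propose regions with a simple skin-colour
    # rule in YCrCb space; the paper's actual detector is more elaborate.
    ycrcb = cv2.cvtColor(coarse, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Stage 4 (verification) would test each bounding box for a face.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```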

Relevance: 100.00%

Abstract:

This work presents a novel approach to human action recognition that combines computer vision techniques with common-sense knowledge and reasoning capabilities. The emphasis is on how common sense can be brought to bear on vision-based human action recognition so that nonsensical errors can be corrected at the understanding stage. The proposed framework is intended for deployment in realistic environments in which humans behave rationally, that is, motivated by an aim or a reason. © 2012 Springer-Verlag.
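A minimal, hypothetical sketch of the amendment idea: the vision module outputs ranked action labels, and a common-sense table of objects required by each action vetoes labels that make no sense in the observed scene. The table entries and function names are invented for illustration and are not the paper's knowledge base:

```python
# Hypothetical illustration: amend vision-based action labels that violate
# simple common-sense constraints about the scene context.
COMMON_SENSE = {
    # action: objects that must be present for the action to make sense
    "drinking": {"cup", "bottle", "glass"},
    "typing": {"keyboard", "laptop"},
    "cooking": {"stove", "pan", "pot"},
}

def amend(ranked_actions, scene_objects):
    """Return the highest-ranked action consistent with the scene;
    ranked_actions is the classifier's output, best first."""
    for action in ranked_actions:
        required = COMMON_SENSE.get(action)
        if required is None or required & scene_objects:
            return action
    return ranked_actions[0]  # fall back to the raw top label

print(amend(["drinking", "typing"], {"keyboard", "monitor"}))  # -> "typing"
```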

Relevance: 100.00%

Abstract:

The earliest stages of human cortical visual processing can be conceived as extraction of local stimulus features. However, more complex visual functions, such as object recognition, require integration of multiple features. Recently, the neural processes underlying feature integration in the visual system have been under intensive study. A specialized mid-level stage preceding the object recognition stage has been proposed to account for the processing of contours, surfaces, shapes, and configuration. This thesis consists of four experimental, psychophysical studies on human visual feature integration.

In two studies, classification images, a recently developed psychophysical reverse-correlation method, were used. In this method, visual noise is added to near-threshold stimuli. By investigating the relationship between the random features in the noise and the observer's perceptual decision on each trial, it is possible to estimate which features of the stimuli are critical for the task. The method allows the critical features used in a psychophysical task to be visualized directly as a spatial correlation map, yielding an effective "behavioral receptive field".

Visual context is known to modulate the perception of stimulus features. Some of these interactions are quite complex, and it is not known whether they reflect early or late stages of perceptual processing. The first study investigated the mechanisms of collinear facilitation, where nearby collinear Gabor flankers increase the detectability of a central Gabor. The behavioral receptive field of the mechanism mediating detection of the central Gabor was measured with the classification image method. The results show that collinear flankers increase the extent of the behavioral receptive field for the central Gabor, in the direction of the flankers. The increased sensitivity at the ends of the receptive field suggests a low-level explanation for the facilitation.

The second study investigated how visual features are integrated into percepts of surface brightness. A novel variant of the classification image method with a brightness matching task was used. Many theories assume that perceived brightness is based on the analysis of luminance border features; here, for the first time, this assumption was directly tested. The classification images show that the perceived brightness of both an illusory Craik-O'Brien-Cornsweet stimulus and a real uniform step stimulus depends solely on the border. Moreover, the spatial tuning of the features remains almost constant when the stimulus size is changed, suggesting that brightness perception is based on the output of a single spatial frequency channel.

The third and fourth studies investigated global form integration in random-dot Glass patterns. In these patterns, a global form can be immediately perceived, even if only a small proportion of the random dots are paired into dipoles according to a geometrical rule. In the third study, discrimination of orientation structure in highly coherent concentric and Cartesian (straight) Glass patterns was measured. The results showed that global form was more efficiently discriminated in concentric patterns. The fourth study investigated how form detectability depends on the global regularity of the Glass pattern, with local structure that was either Cartesian or curved. Randomizing the local orientation deteriorated performance only with the curved pattern. The results support the idea that curved and Cartesian patterns are processed in at least partially separate neural systems.
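For readers unfamiliar with the technique, a classification image is conventionally estimated by combining the mean noise fields from the four stimulus-response categories so that noise features correlated with "signal present" responses come out positive. A minimal sketch of that standard combination, assuming trial-wise noise fields and binary responses are available:

```python
import numpy as np

def classification_image(noise_fields, signal_present, said_present):
    """Standard classification-image estimate: combine the mean noise
    fields from the four stimulus-response categories."""
    noise = np.asarray(noise_fields, dtype=float)   # (n_trials, h, w)
    s = np.asarray(signal_present, dtype=bool)      # signal shown on trial?
    r = np.asarray(said_present, dtype=bool)        # observer said "present"?

    def mean(mask):
        return noise[mask].mean(axis=0) if mask.any() else np.zeros(noise.shape[1:])

    # (signal, yes) + (no-signal, yes) - (signal, no) - (no-signal, no)
    return (mean(s & r) + mean(~s & r)) - (mean(s & ~r) + mean(~s & ~r))
```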

Relevance: 100.00%

Abstract:

Painterly rendering has been linked to computer vision, but we propose to link it to human vision, because perception and painting are two interwoven processes. Recent progress in developing computational models makes it possible to establish this link. We show that completely automatic rendering can be obtained by applying four image representations from the visual system: (1) colour constancy can be used to correct colours; (2) coarse background brightness, in combination with colour coding in cytochrome-oxidase blobs, can be used to create a background with a big brush; (3) the multi-scale line and edge representation provides a very natural way to render finer brush strokes; and (4) the multi-scale keypoint representation serves to create saliency maps for Focus-of-Attention (FoA), which can be used to render important structures. Basic processes are described, renderings are shown, and important ideas for future research are discussed.
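A toy sketch of representations (2) and (3): a heavy Gaussian blur stands in for the "big brush" background, and edges detected at two scales stand in for finer strokes. This uses OpenCV with invented parameter values; the paper's cortical line/edge and keypoint representations are far richer than Canny edges, and representations (1) and (4) are omitted:

```python
import cv2
import numpy as np

def naive_painterly(bgr_image):
    """Toy two-layer rendering: a heavily smoothed 'big brush' background
    with darker strokes painted along multi-scale edges."""
    canvas = cv2.GaussianBlur(bgr_image, (31, 31), sigmaX=10.0)  # big brush
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    for sigma, strength in [(4.0, 0.5), (1.5, 0.8)]:             # coarse to fine
        smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
        edges = cv2.Canny(smoothed, 50, 150)
        # Darken the canvas along the detected edges ("brush strokes").
        canvas[edges > 0] = (canvas[edges > 0] * strength).astype(np.uint8)
    return canvas
```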

Relevance: 100.00%

Abstract:

In RoboCup 2007: Robot Soccer World Cup XI

Relevance: 100.00%

Abstract:

In order to estimate the motion of an object, the visual system needs to combine multiple local measurements, each of which carries some degree of ambiguity. We present a model of motion perception in which measurements from different image regions are combined according to a Bayesian estimator: the estimated motion maximizes the posterior probability under a prior favoring slow and smooth velocities. Reviewing a large number of previously published phenomena, we find that the Bayesian estimator predicts a wide range of psychophysical results. This suggests that a seemingly complex set of illusions arises from a single computational strategy that is optimal under reasonable assumptions.
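Under a Gaussian likelihood on the gradient constraint and a Gaussian prior favoring slow velocities, the posterior mode for a single translational velocity has a closed form: a regularized least-squares solution. A minimal sketch of that estimator (the noise level and prior weight are placeholders; only the "slow" part of the prior appears, since a single velocity is estimated rather than a smooth field):

```python
import numpy as np

def map_velocity(Ix, Iy, It, sigma=1.0, lam=0.1):
    """MAP estimate of a single image velocity under a Gaussian
    likelihood on the gradient constraint Ix*u + Iy*v + It = 0
    and a zero-mean Gaussian prior favouring slow speeds."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one constraint per pixel
    b = -It.ravel()
    # The posterior is Gaussian; its mode solves regularized least squares:
    # (A^T A / sigma^2 + lam I) v = A^T b / sigma^2
    H = A.T @ A / sigma**2 + lam * np.eye(2)
    return np.linalg.solve(H, A.T @ b / sigma**2)    # (u, v) in pixels/frame
```

Because the prior pulls the estimate toward zero speed, low-contrast (noisier) measurements yield slower perceived motion, which is the qualitative behaviour the model uses to explain many of the reviewed illusions.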

Relevance: 100.00%

Abstract:

In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.

Relevance: 100.00%

Abstract:

Automatically extracting interesting objects from videos is a very challenging task with applications in many research areas, such as robotics, medical imaging, content-based indexing, and visual surveillance. Automated visual surveillance is a major research area in computational vision, and motion segmentation is a commonly applied technique for extracting objects of interest. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-world video sequences arise not only from objects of interest: environmental conditions such as wind, cloud cover, rain, and snow may be present, along with rapid lighting changes, poor footage quality, moving shadows, and reflections; this list is only a sample of the challenges. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, drawing inspiration from current neuro-biological, physiological, and psychological research in primate vision.
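As a baseline illustration of motion segmentation (a standard pipeline, not the thesis's biologically inspired approach), adaptive background subtraction in OpenCV already has to contend with the shadow and lighting problems listed above:

```python
import cv2

def segment_motion(video_path):
    """Minimal motion segmentation via adaptive background subtraction;
    detectShadows helps suppress the moving shadows mentioned above."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    varThreshold=16,
                                                    detectShadows=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask[mask == 127] = 0          # MOG2 marks shadow pixels as 127
        # Morphological opening removes small speckle from noise and rain.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        yield frame, mask
    cap.release()
```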

Relevance: 100.00%

Abstract:

PURPOSE: To evaluate the expression and presence of surfactant protein A (SP-A) and SP-D in the lacrimal apparatus, at the ocular surface, and in tears, in healthy and pathologic states. METHODS: Expression of mRNA for SP-A and SP-D was analyzed by RT-PCR in healthy lacrimal gland, conjunctiva, cornea, and nasolacrimal ducts, as well as in a spontaneously immortalized conjunctival epithelial cell line (HCjE; IOBA-NHC) and an SV40-transfected corneal epithelial cell line (HCE). Deposition of SP-A and SP-D was determined by Western blot, dot blot, and immunohistochemistry in healthy tissues, in tears and aqueous humor, and in sections of different corneal abnormalities (keratoconus, herpetic keratitis, and Staphylococcus aureus-based ulceration). Cell lines were stimulated with different cytokines and bacterial components and analyzed for the production of SP-A and SP-D by immunohistochemistry. RESULTS: The presence of SP-A and SP-D at the mRNA and protein levels was evidenced in healthy lacrimal gland, conjunctiva, cornea, and nasolacrimal duct samples. Moreover, both proteins were present in tears but absent in aqueous humor. Immunohistochemistry revealed production of both peptides by acinar epithelial cells of the lacrimal gland and by epithelial cells of the conjunctiva and nasolacrimal ducts, whereas goblet cells showed no reactivity. Healthy cornea showed weak reactivity on epithelial surface cells only. In contrast, SP-A and SP-D showed strong reactivity surrounding lesions in patients with herpetic keratitis and corneal ulceration, and in several infiltrating defense cells. Reactivity in corneal epithelium and endothelium was also seen in patients with keratoconus. Cell culture experiments revealed that SP-A and SP-D are produced by both epithelial cell lines, both with and without stimulation by cytokines and bacterial components. CONCLUSIONS: These results show that SP-A, in addition to SP-D, is a peptide of the tear film. Based on the known direct and indirect antimicrobial effects of collectins, the surfactant-associated proteins A and D appear to be involved in several ocular surface diseases.

Relevance: 100.00%

Abstract:

Efficient and reliable classification of visual stimuli requires that their representations reside in a low-dimensional and, therefore, computationally manageable feature space. We investigated the ability of the human visual system to derive such representations from the sensory input, a highly nontrivial task given the million or so dimensions of the visual signal at its entry point to the cortex. In a series of experiments, subjects were presented with sets of parametrically defined shapes; the points in the common high-dimensional parameter space corresponding to the individual shapes formed regular planar (two-dimensional) patterns such as a triangle, a square, etc. We then used multidimensional scaling to arrange the shapes in planar configurations dictated by their experimentally determined perceived similarities. The resulting configurations closely resembled the original arrangements of the stimuli in the parameter space. This achievement of the human visual system was replicated by a computational model derived from a theory of object representation in the brain, according to which similarities between objects, and not the geometry of each object, need to be faithfully represented.
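The analysis step can be illustrated with a short sketch: metric multidimensional scaling applied to a dissimilarity matrix recovers the planar arrangement of points embedded in a high-dimensional parameter space. Here the dissimilarities are a synthetic stand-in for the experimentally judged similarities (scikit-learn; the data are invented for illustration):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Stand-in for the stimuli: shapes at the corners of a triangle in a
# 2-D pattern, embedded in a 50-dimensional shape-parameter space.
rng = np.random.default_rng(0)
triangle_2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
embedding = rng.standard_normal((2, 50))        # plane -> 50-D parameters
shapes = triangle_2d @ embedding                # (3, 50) parameter vectors
dissim = squareform(pdist(shapes))              # proxy for judged dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
recovered = mds.fit_transform(dissim)           # a triangle again, up to rotation
print(np.round(recovered, 2))
```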

Relevance: 100.00%

Abstract:

This thesis addresses the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, the state of the art for ground environments (both indoor and outdoor) has reached a readiness level that allows highly autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, limits sensor capabilities, heavily impacts data transmission, and generally requires systems with low power consumption to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field of the mission, within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be operated remotely for long periods, but they need support from an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and used as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations such as object recovery and equipment maintenance are still challenging to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment were taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily affect perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet bus. The adopted design philosophy aims at high flexibility in terms of feasible perception applications, which should not be as limited as with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.

The performance of the proposed system, in terms of perception capabilities, was evaluated in several underwater contexts, taking advantage of the opportunities offered by the MARIS national project. Design issues such as power consumption, heat dissipation, and network capabilities were evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts including open sea waters, led to the collection of several datasets that have been publicly released to the scientific community. The vision system was integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and was exploited in the robot control loop to successfully perform underwater grasping operations.
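A generic sketch of the core perception step, dense stereo matching on a rectified image pair (OpenCV semi-global matching). All parameter values are assumptions, and the thesis's actual processing chain, including underwater colour correction and calibration, is considerably more involved:

```python
import cv2

def underwater_disparity(left_gray, right_gray):
    """Generic semi-global stereo matching on a rectified pair; underwater
    imagery usually needs contrast enhancement and colour correction first."""
    # CLAHE mitigates the low contrast typical of turbid water.
    clahe = cv2.createCLAHE(clipLimit=2.0)
    left, right = clahe.apply(left_gray), clahe.apply(right_gray)

    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # must be multiple of 16
                                    blockSize=5,
                                    P1=8 * 5 * 5,         # smoothness penalties
                                    P2=32 * 5 * 5,
                                    uniquenessRatio=10)
    # SGBM returns 16x fixed-point disparities; convert to pixels.
    return matcher.compute(left, right).astype(float) / 16.0
```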

Relevance: 100.00%

Abstract:

The perception of an object as a single entity within a visual scene requires that its features be bound together and segregated from the background and/or other objects. Here, we used magnetoencephalography (MEG) to assess the hypothesis that coherent percepts may arise from synchronized high-frequency (gamma) activity between neurons that code features of the same object. We also assessed the role of low-frequency (alpha, beta) activity in object processing. The target stimulus (i.e. the object) was a small patch of a concentric grating of 3 c/°, viewed eccentrically. The background stimulus was either a blank field or a concentric grating of 3 c/° periodicity, viewed centrally. With patterned backgrounds, the target stimulus emerged, through rotation about its own centre, as a circular subsection of the background. Data were acquired using a 275-channel whole-head MEG system and analyzed using Synthetic Aperture Magnetometry (SAM), which allows one to generate images of task-related cortical oscillatory power changes within specific frequency bands. Significant oscillatory activity across a broad range of frequencies was evident at the V1/V2 border, and subsequent analyses were based on a virtual electrode at this location. When the target was presented in isolation, we observed that: (i) contralateral stimulation yielded a sustained power increase in gamma activity; and (ii) both contra- and ipsilateral stimulation yielded near-identical transient power changes in alpha (and beta) activity. When the target was presented against a patterned background, we observed that: (i) contralateral stimulation yielded an increase in high-gamma (>55 Hz) power together with a decrease in low-gamma (40-55 Hz) power; and (ii) both contra- and ipsilateral stimulation yielded a transient decrease in alpha (and beta) activity, though the reduction tended to be greatest for contralateral stimulation. The opposing power changes across different regions of the gamma spectrum with 'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support for the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention.
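The band-limited power analysis can be illustrated with a short sketch: band-pass filtering a virtual-electrode time series and squaring its Hilbert envelope gives instantaneous power in, say, the high-gamma range. This uses SciPy; the filter order and band edges are illustrative, and the SAM beamforming that produces the virtual electrode is not shown:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(virtual_electrode, fs, band):
    """Band-limited instantaneous power for a virtual-electrode trace,
    e.g. band=(55, 90) for the high-gamma range discussed above."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, virtual_electrode)   # zero-phase band-pass
    return np.abs(hilbert(filtered)) ** 2          # squared analytic envelope

def percent_change(power, baseline_idx):
    """Task-related power change relative to a pre-stimulus baseline."""
    base = power[baseline_idx].mean()
    return 100.0 * (power - base) / base
```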

Relevance: 100.00%

Abstract:

The human visual system combines contrast information from the two eyes to produce a single cyclopean representation of the external world. This task requires both summation of congruent images and inhibition of incongruent images across the eyes. These processes were explored psychophysically using narrowband sinusoidal grating stimuli. Initial experiments focussed on binocular interactions within a single detecting mechanism, using contrast discrimination and contrast matching tasks. Consistent with previous findings, dichoptic presentation produced greater masking than monocular or binocular presentation. Four computational models were compared, two of which performed well on all data sets. Suppression between mechanisms was then investigated using orthogonal and oblique stimuli. Two distinct suppressive pathways were identified, corresponding to monocular and dichoptic presentation. Both pathways act prior to binocular summation of signals, and they differ in their strengths, tuning, and response to adaptation, consistent with recent single-cell findings in cat. Strikingly, the magnitude of dichoptic masking was found to be spatiotemporally scale invariant, whereas monocular masking depended on stimulus speed. Interocular suppression was further explored using a novel manipulation whereby stimuli were presented in dichoptic antiphase. Consistent with the predictions of a computational model, this produced weaker masking than in-phase presentation, and it allowed the bandwidths of suppression to be measured without the complicating factor of additive combination of mask and test. Finally, contrast vision in strabismic amblyopia was investigated. Although amblyopes are generally believed to have impaired binocular vision, binocular summation was shown to be intact when stimuli were normalized for interocular sensitivity differences. An alternative account of amblyopia was developed, in which signals in the affected eye are subject to attenuation and additive noise prior to binocular combination.
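One influential two-stage gain-control model of binocular contrast combination (after Meese, Georgeson & Baker, 2006), of the kind compared in this work, can be written compactly. The parameter values below are illustrative rather than fitted, and the interocular weight w is an assumption:

```python
def two_stage_response(CL, CR, m=1.28, S=0.985, p=7.99, q=6.59, Z=0.076, w=1.0):
    """Two-stage gain-control model of binocular contrast combination;
    CL, CR are left- and right-eye contrasts (in percent)."""
    # Stage 1: monocular responses, each suppressed by both eyes' contrast.
    stage1_L = CL**m / (S + CL + w * CR)
    stage1_R = CR**m / (S + CR + w * CL)
    # Binocular summation of the two stage-1 outputs...
    binoc = stage1_L + stage1_R
    # ...followed by a second, saturating gain-control stage.
    return binoc**p / (Z + binoc**q)

# Binocular presentation beats monocular presentation of the same contrast:
print(two_stage_response(10, 10) > two_stage_response(10, 0))  # True
```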

Relevance: 100.00%

Abstract:

Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms, with static output nonlinearities responsible for masking, is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers) and others leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, is most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle, but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals, as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.
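The excitatory chain described above can be caricatured in a few lines. The 0.5 weight for "half-binocular summation", the noise level, and the simple spatial pooling are assumptions made only to make the sketch concrete; the suppressive and facilitatory stages are omitted:

```python
import numpy as np

def halfwave(x):
    return np.maximum(x, 0.0)                  # half-wave rectification

def excitatory_pathway(left, right, rng=np.random.default_rng(0)):
    """Caricature of the excitatory chain: rectification, half-binocular
    summation (assumed weight 0.5), square-law transduction, full binocular
    summation, linear pooling over space, and additive internal noise."""
    L, R = halfwave(left), halfwave(right)
    half_binoc_L = L + 0.5 * R                 # preliminary, partial interocular sum
    half_binoc_R = R + 0.5 * L
    transduced = half_binoc_L**2 + half_binoc_R**2   # square law + full binocular sum
    pooled = transduced.sum()                  # linear summation over area
    return pooled + rng.normal(0.0, 0.1)       # additive noise before the decision
```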

Relevance: 100.00%

Abstract:

To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis, and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric Gaussian first-derivative filter provides the input to a Gaussian second-derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts our results on human perception of edge location and blur remarkably accurately for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
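A minimal sketch of the two-stage filtering scheme, for a 1-D luminance profile (SciPy). The per-scale response normalization used in the full model is omitted, the choice of scales is arbitrary, and the sign conventions are only one consistent possibility:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_responses(luminance, scales=(1, 2, 4, 8, 16)):
    """Two-stage derivative-of-Gaussian scheme sketched from the abstract:
    an odd-symmetric first-derivative filter, half-wave rectification,
    then an even-symmetric second-derivative filter, at each scale."""
    responses = {}
    for s in scales:
        d1 = gaussian_filter1d(luminance, sigma=s, order=1)  # stage 1 (odd)
        for polarity in (+1, -1):                            # one channel per edge polarity
            rect = np.maximum(polarity * d1, 0.0)            # half-wave rectification
            d2 = -gaussian_filter1d(rect, sigma=s, order=2)  # stage 2 (even)
            responses[(s, polarity)] = np.maximum(d2, 0.0)
    # The location and scale of the peak response across this map give
    # the model's estimate of edge position and blur.
    return responses

# Example: a blurred step edge.
x = np.linspace(-5, 5, 512)
profile = 1.0 / (1.0 + np.exp(-2.0 * x))
r = edge_responses(profile)
```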