804 results for Color-vision


Relevance:

30.00%

Publisher:

Abstract:

Inferences about leaf anatomical characteristics have largely been made by manually measuring diverse leaf regions, such as the cuticle, epidermis and parenchyma, to evaluate differences caused by environmental variables. Here we tested an approach to data acquisition and analysis in quantitative ecological leaf anatomy studies based on computer vision and pattern recognition methods. A case study was conducted on Gochnatia polymorpha (Less.) Cabrera (Asteraceae), a Neotropical savanna tree species with high phenotypic plasticity. We obtained digital images of cross-sections of its leaves developed under different light conditions (sun vs. shade), in different seasons (dry vs. wet) and in different soil types (oxisol vs. hydromorphic soil), and analyzed several visual attributes from the microscope images, such as color, texture and tissue thickness measured perpendicular to the leaf surface. The experimental results demonstrated that computational analysis is capable of distinguishing anatomical alterations in microscope images obtained from individuals growing under different environmental conditions. The methods presented here offer an alternative way to determine leaf anatomical differences. © 2013 Elsevier B.V.
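The computational analysis described above can be illustrated with a toy feature extractor. This is a minimal sketch, not the study's actual feature set: it computes mean intensity, standard deviation (a simple color statistic) and a crude texture proxy (mean absolute difference between horizontal neighbours) from a 2D intensity image.

```python
import math

def extract_features(img):
    """Simple color/texture descriptors from a 2D intensity image
    (list of rows). Illustrative stand-ins for the visual attributes
    the study measures; the paper's exact feature set is richer."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    # Crude texture proxy: mean absolute difference of horizontal neighbours.
    diffs = [abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1)]
    texture = sum(diffs) / len(diffs)
    return mean, math.sqrt(var), texture

smooth = [[100] * 8 for _ in range(8)]                                # uniform tissue
rough = [[100 if (r + c) % 2 else 40 for c in range(8)] for r in range(8)]  # alternating
print(extract_features(smooth)[2])  # 0.0  (no texture)
print(extract_features(rough)[2])   # 60.0 (strong texture)
```

Features of this kind, fed to a pattern classifier, are what allow images from different growth conditions to be told apart.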

Relevance:

30.00%

Publisher:

Abstract:

This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setup is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of distance. The device can be easily built from inexpensive components. Second, we describe a color-based face detection technique that can reduce skin-color false positives. Third, a simple head nod and shake detector is presented, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures.
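The color-based skin detection step can be sketched as a per-pixel chromaticity test. The thresholds below are illustrative assumptions, not the classifier from the paper; real detectors tune them to the lighting conditions at hand.

```python
def is_skin(r, g, b):
    """Rough skin-likelihood test in normalized rg chromaticity space.
    Thresholds are hypothetical; skin tends to cluster where red
    moderately dominates green and blue."""
    s = r + g + b
    if s == 0:
        return False
    rn, gn = r / s, g / s
    return 0.35 < rn < 0.60 and 0.25 < gn < 0.37 and r > b

print(is_skin(200, 120, 90))  # typical skin tone -> True
print(is_skin(40, 200, 40))   # saturated green -> False
```

A detector built on such a mask still fires on skin-colored backgrounds, which is why the paper combines color cues with other evidence to alleviate false positives.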

Relevance:

30.00%

Publisher:

Abstract:

Small bistratified cells (SBCs) in the primate retina carry a major blue-yellow opponent signal to the brain. We found that SBCs also carry signals from rod photoreceptors, with the same sign as the S cone input. SBCs exhibited robust responses under low scotopic conditions. Physiological and anatomical experiments indicated that this rod input arose from the AII amacrine cell-mediated rod pathway. Rod and cone signals were both present in SBCs at mesopic light levels. These findings have three implications. First, more retinal circuits may multiplex rod and cone signals than previously thought, efficiently exploiting the limited number of optic nerve fibers. Second, signals from AII amacrine cells may diverge to most or all of the approximately 20 retinal ganglion cell types in the peripheral primate retina. Third, rod input to SBCs may be the substrate for behavioral biases toward the perception of blue at mesopic light levels.

Relevance:

30.00%

Publisher:

Abstract:

Lake Malawi boasts the highest diversity of freshwater fishes in the world. Nearshore sites are categorized according to their bottom substrate, rock or sand, and these habitats host divergent assemblages of cichlid fishes. Sexual selection driven by mate choice in cichlids has led to spectacular diversification in male nuptial coloration. This suggests that the spectral radiance contrast of fish, the main determinant of visibility under water, plays a crucial role in cichlid visual communication. This study provides the first detailed description of underwater irradiance, radiance and beam attenuation at selected sites representing the two major habitats in Lake Malawi. These quantities are essential for estimating radiance contrast and, thus, the constraints imposed on fish body coloration. Irradiance spectra in the sand habitat were shifted to longer wavelengths compared with those in the rock habitat. Beam attenuation in the sand habitat was higher than in the rock habitat. The effects of water depth, bottom depth and proximity to the lake bottom on radiometric quantities are discussed. The radiance contrast of targets exhibiting diffuse and spectrally uniform reflectance depended on habitat type in deep water but not in shallow water. In deep water, the radiance contrast of such targets was maximal at long wavelengths in the sand habitat and at short wavelengths in the rock habitat. Thus, to achieve conspicuousness, the color patterns of rock- and sand-dwelling cichlids would be restricted to short and long wavelengths, respectively. This study provides a useful platform for the examination of cichlid visual communication.
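The radiance contrast in question can be computed band by band in the Weber sense, C(λ) = (L_target(λ) − L_background(λ)) / L_background(λ). The spectra below are made-up illustrations of the deep-water "sand habitat" case, not measurements from the study.

```python
def radiance_contrast(target, background):
    """Weber-style spectral radiance contrast, one value per band:
    C = (L_target - L_background) / L_background."""
    return [(t - b) / b for t, b in zip(target, background)]

# Hypothetical 5-band spectra (short -> long wavelengths).
bands_nm = [450, 500, 550, 600, 650]
background = [2.0, 2.0, 2.0, 2.0, 2.0]   # veiling space light
irradiance = [3.0, 4.0, 5.0, 8.0, 12.0]  # downwelling light, long-shifted
reflectance = 0.3                        # spectrally uniform diffuse target
target = [reflectance * e for e in irradiance]

contrast = radiance_contrast(target, background)
# In this made-up example the largest contrast magnitude falls at the
# long-wavelength end, as reported for the sand habitat in deep water.
print(contrast)
```

With irradiance shifted toward long wavelengths relative to the background space light, a spectrally flat target is most conspicuous at the red end, which is the logic behind the habitat-specific coloration constraint.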

Relevance:

30.00%

Publisher:

Abstract:

The evolution of smartphones, equipped with digital cameras, is driving a growing demand for ever more complex applications that rely on real-time computer vision algorithms. However, video signals keep growing in size, whereas the performance of single-core processors has stagnated. Consequently, new computer vision algorithms must be parallel, so that they can run on multiple processors and scale computationally. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially suited to GPU implementation.
First, we explore the application of depth-image-based rendering techniques, with color and depth information, to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is highly detrimental and should be avoided.
Second, we propose a moving-object detection system based on kernel density estimation. Such techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use due to their large computational and memory cost. The proposed system, which runs in real time on a GPU, features dynamic estimation of the kernel bandwidths, selective update of the background model, update of the positions of the foreground model's reference samples using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal.
Finally, we propose a method for approximating arbitrary functions with continuous piecewise-linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method for obtaining a quasi-optimal partition of the function's domain to minimize that error.
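The texture-filtering approximation scheme of the final contribution can be emulated in software: tabulate the function at samples and linearly interpolate between them, which is exactly what a GPU texture filtering unit does in hardware for free. This sketch uses uniform sampling for simplicity; the thesis derives a quasi-optimal non-uniform partition.

```python
import math

def make_pwl(f, a, b, n):
    """Tabulate f at n + 1 uniform samples on [a, b] and return a
    continuous piecewise-linear approximant (software emulation of
    GPU linear texture filtering between stored texels)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]

    def approx(x):
        if x <= a:
            return ys[0]
        if x >= b:
            return ys[-1]
        t = (x - a) / (b - a) * n  # fractional sample index
        i = int(t)
        frac = t - i
        return ys[i] * (1.0 - frac) + ys[i + 1] * frac

    return approx

# 64 segments already approximate sin on [0, pi] to ~3e-4 max error.
g = make_pwl(math.sin, 0.0, math.pi, 64)
err = max(abs(g(k / 100.0) - math.sin(k / 100.0)) for k in range(315))
print(err)
```

For a twice-differentiable function the error of each segment is bounded by h²·max|f″|/8, so the error analysis in terms of the number of samples drops out directly.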

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, RGB-D sensors attract a great deal of research in computer vision and robotics. Sensors of this kind, such as the Kinect, provide 3D data together with color information. However, their working range is limited to less than 10 meters, making them unsuitable for some robotics applications, such as outdoor mapping. In these environments, 3D lasers, with working ranges of 20-80 meters, are a better fit, but they do not usually provide color information. A simple 2D camera can supply color information for the point cloud, but a calibration between camera and laser is required. In this paper we present a portable calibration system for calibrating any conventional camera with a 3D laser in order to assign color information to the 3D points, so that laser precision and color information can be exploited simultaneously. Unlike techniques that rely on a three-dimensional calibration body of known dimensions, this system is highly portable because it uses small catadioptrics that can be placed easily in the environment. We apply our calibration system in a 3D mapping pipeline, including Simultaneous Localization and Mapping (SLAM), to obtain a 3D colored map that can be used in different tasks. We show that an additional problem arises: the 2D camera's measurements change when lighting conditions change, so when 3D point clouds from two different views are merged, points in a given neighborhood may carry different color information. A new method for color fusion is presented that yields correctly colored maps. The system is tested by applying it to 3D reconstruction.
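Once camera and laser are calibrated, the color-assignment step reduces to projecting each 3D point into the image and reading the pixel underneath. This is a minimal sketch with an ideal pinhole model and hypothetical intrinsics; the paper's calibration also recovers the camera-laser transform, which is omitted here by assuming the points are already in the camera frame.

```python
def project_point(pt, fx, fy, cx, cy):
    """Project a 3D point (camera frame, metres) to pixel coordinates
    with an ideal pinhole model; intrinsics are hypothetical values."""
    x, y, z = pt
    if z <= 0:
        return None  # behind the camera: no color available
    return fx * x / z + cx, fy * y / z + cy

def colorize(points, image, fx, fy, cx, cy):
    """Attach to each laser point the image color it projects onto."""
    h, w = len(image), len(image[0])
    colored = []
    for pt in points:
        uv = project_point(pt, fx, fy, cx, cy)
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= u < w and 0 <= v < h:
            colored.append((pt, image[v][u]))
    return colored

# Toy 4x4 image with one red pixel; one point on the optical axis,
# one behind the camera (dropped).
img = [[(0, 0, 0)] * 4 for _ in range(4)]
img[2][2] = (255, 0, 0)
cloud = [(0.0, 0.0, 5.0), (0.0, 0.0, -1.0)]
print(colorize(cloud, img, fx=100, fy=100, cx=2, cy=2))
```

The color-fusion problem the paper raises shows up exactly here: the same 3D point seen from two views can land on pixels with different colors.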

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR2012), Taipei, Taiwan, Aug. 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

Day of Chemistry, Invited conference, San Alberto Magno 2014

Relevance:

30.00%

Publisher:

Abstract:

Federal Transit Administration, Washington, D.C.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Animal color pattern phenotypes evolve rapidly. What influences their evolution? Because color patterns are used in communication, selection for signal efficacy, relative to the intended receiver's visual system, may explain and predict the direction of evolution. We investigated this in bowerbirds, whose color patterns consist of plumage, bower structure, and ornaments and whose visual displays are presented under predictable visual conditions. We used data on avian vision, environmental conditions, color pattern properties, and an estimate of the bowerbird phylogeny to test hypotheses about the evolutionary effects of visual processing. Different components of the color pattern evolve differently. Plumage sexual dimorphism increased and then decreased, while overall (plumage plus bower) visual contrast increased. The use of bowers allows relative crypsis of the bird but increases the efficacy of the signal as a whole. Ornaments do not elaborate existing plumage features but instead are innovations (new color schemes) that increase signal efficacy. Isolation between species could be facilitated by plumage but not ornaments, because we observed character displacement only in plumage. Bowerbird color pattern evolution is at least partially predictable from the function of the visual system and from knowledge of the different functions of the different components of the color patterns. This provides clues to how more constrained visual signaling systems may evolve.

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research in robotics has advanced the state of the art in the intervention capabilities of autonomous systems. In fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, systems for ground environments (both indoor and outdoor) have reached a readiness level that allows high-level autonomous operation. The underwater environment, by contrast, remains very difficult for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally demands systems with low power consumption to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments where human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field in a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years as observers of underwater fauna, the seabed, shipwrecks, and so on.
On the other hand, underwater operations such as object recovery and equipment maintenance remain challenging to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness than is usually available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects of this peculiar environment were taken into account at all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet bus. The adopted design philosophy aims at high flexibility in the range of feasible perception applications, which should not be as limited as with special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the modular design makes the system easier to maintain and update over time. The performance of the proposed system, in terms of perception capabilities, was evaluated in several underwater contexts, taking advantage of the opportunities offered by the MARIS national project. Design issues such as power consumption, heat dissipation and network capacity were evaluated in different scenarios.
Finally, real-world experiments conducted in multiple and variable underwater contexts, including open sea waters, led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot's control loop to successfully perform underwater grasping operations.

Relevance:

30.00%

Publisher:

Abstract:

Gamma activity in the visual cortex has been reported in numerous EEG studies of coherent and illusory figures. A dominant theme of many such findings has been that temporal synchronization in the gamma band in response to these identifiable percepts is related to perceptual binding of the common features of the stimulus. In two recent studies using magnetoencephalography (MEG) and the beamformer analysis technique, we have shown that the magnitude of induced gamma activity in visual cortex is dependent upon independent stimulus features such as spatial frequency and contrast. In particular, we showed that induced gamma activity is maximal in response to gratings of 3 cycles per degree (3 cpd) of high luminance contrast. In this work, we set out to examine stimulus contrast further by using isoluminant red/green gratings that possess color but not luminance contrast using the same cohort of subjects. We found no induced gamma activity in V1 or visual cortex in response to the isoluminant gratings in these subjects who had previously shown strong induced gamma activity in V1 for luminance contrast gratings.
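The stimuli involved can be sketched numerically: a sinusoidal grating at a given spatial frequency, and an isoluminant red/green version in which the two channels modulate in counterphase so that their summed luminance stays constant (under the idealized assumption of luminance-matched channels; in practice isoluminance is set per observer). This is an illustrative sketch, not the study's stimulus code.

```python
import math

def grating_row(width_deg, cpd, samples):
    """One row of a sinusoidal grating: modulation in [-1, 1] across
    `width_deg` degrees of visual angle at `cpd` cycles per degree."""
    return [math.sin(2 * math.pi * cpd * width_deg * i / samples)
            for i in range(samples)]

# A 3 cpd luminance grating spanning 2 degrees: 6 full cycles.
row = grating_row(width_deg=2.0, cpd=3.0, samples=600)

# Isoluminant red/green version: counterphase color modulation with a
# constant red + green sum, so there is chromatic but (ideally) no
# luminance contrast.
red = [0.5 + 0.5 * m for m in row]
green = [0.5 - 0.5 * m for m in row]
print(max(abs(r + g - 1.0) for r, g in zip(red, green)) < 1e-9)
```

Swapping the luminance modulation for this counterphase chromatic one is the manipulation that abolished the induced gamma response in the experiment described above.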