964 results for Omnidirectional vision system


Relevance:

80.00%

Publisher:

Abstract:

An overview is given of a vision system for locating, recognising and tracking multiple vehicles, using an image sequence taken by a single camera mounted on a moving vehicle. The camera motion is estimated by matching features on the ground plane from one image to the next. Vehicle detection and hypothesis generation are performed using template correlation and a 3D wire frame model of the vehicle is fitted to the image. Once detected and identified, vehicles are tracked using dynamic filtering. A separate batch mode filter obtains the 3D trajectories of nearby vehicles over an extended time. Results are shown for a motorway image sequence.
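
The abstract gives no implementation details; as a rough illustration of the template-correlation step used for vehicle hypothesis generation, the following Python sketch (using OpenCV, with hypothetical file names and an arbitrary threshold) scores candidate vehicle locations with normalized cross-correlation.

```python
# Hedged sketch of template-correlation vehicle hypothesis generation.
# File names and the 0.7 threshold are illustrative assumptions, not from the paper.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)                # current road image
template = cv2.imread("vehicle_template.png", cv2.IMREAD_GRAYSCALE)  # rear-view vehicle template

# Normalized cross-correlation of the template over the whole frame.
score_map = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)

# Keep locations above a threshold as vehicle hypotheses.
threshold = 0.7
ys, xs = np.where(score_map >= threshold)
hypotheses = [(int(x), int(y), float(score_map[y, x])) for y, x in zip(ys, xs)]

for x, y, s in hypotheses:
    print(f"candidate vehicle at ({x}, {y}), correlation {s:.2f}")
```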

Relevance:

80.00%

Publisher:

Abstract:

An overview is given of a novel vision system for locating, recognising and tracking multiple vehicles.

Relevance:

80.00%

Publisher:

Abstract:

The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occur across the whole image; at a fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse-scale motion sensors, which run concurrently at 25 Hz with latencies of around 100 ms, are detailed. The use of these sensors to drive the following real-time responses is discussed: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.
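
The paper's coarse-scale motion sensors are not specified in this abstract; a minimal sketch of the general idea (detecting new motion on a heavily subsampled image by frame differencing) might look like the following, with the subsampling factor and threshold chosen arbitrarily.

```python
# Minimal sketch: coarse-scale motion detection by differencing subsampled frames.
# Subsampling factor and threshold are illustrative, not taken from the paper.
import numpy as np

def coarse_motion(prev_frame: np.ndarray, frame: np.ndarray,
                  step: int = 8, threshold: float = 15.0):
    """Return (row, col) cells of the subsampled grid where motion was detected."""
    small_prev = prev_frame[::step, ::step].astype(np.float32)
    small_curr = frame[::step, ::step].astype(np.float32)
    diff = np.abs(small_curr - small_prev)
    return np.argwhere(diff > threshold)  # candidate moving regions

# Example with synthetic frames: a bright patch that appears between frames.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:140, 200:240] = 255
print(coarse_motion(prev, curr)[:5])
```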

Relevance:

80.00%

Publisher:

Abstract:

The authors present an active vision system which performs a surveillance task in everyday dynamic scenes. The system is based around simple, rapid motion processors and a control strategy which uses both position and velocity information. The surveillance task is defined in terms of two separate behavioral subsystems, saccade and smooth pursuit, which are demonstrated individually on the system. It is shown how these and other elementary responses to 2D motion can be built up into behavior sequences, and how judicious close cooperation between vision and control results in smooth transitions between the behaviors. These ideas are demonstrated by an implementation of a saccade to smooth pursuit surveillance system on a high-performance robotic hand/eye platform.
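
As a toy illustration of combining saccade and smooth pursuit (not the authors' controller), the sketch below issues a saccade when the target's angular position error is large and otherwise tracks the target velocity; the gain and switching threshold are assumptions made for the example.

```python
# Toy saccade / smooth-pursuit switcher (illustrative only; gain and threshold assumed).

SACCADE_THRESHOLD = 5.0   # degrees of position error that triggers a saccade
PURSUIT_GAIN = 0.8        # fraction of target velocity commanded during pursuit

def gaze_command(position_error_deg: float, target_velocity_deg_s: float):
    """Return (mode, command) for one control cycle of a single pan axis."""
    if abs(position_error_deg) > SACCADE_THRESHOLD:
        # Saccade: jump the gaze by the full position error.
        return "saccade", position_error_deg
    # Smooth pursuit: command a velocity proportional to the target velocity.
    return "pursuit", PURSUIT_GAIN * target_velocity_deg_s

print(gaze_command(12.0, 3.0))   # large error  -> ('saccade', 12.0)
print(gaze_command(1.5, 3.0))    # small error  -> ('pursuit', 2.4)
```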

Relevance:

80.00%

Publisher:

Abstract:

BRITTO, Ricardo S.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Uma arquitetura distribuída de hardware e software para controle de um robô móvel autônomo [A distributed hardware and software architecture for controlling an autonomous mobile robot]. In: SIMPÓSIO BRASILEIRO DE AUTOMAÇÃO INTELIGENTE, 8., 2007, Florianópolis. Anais... Florianópolis: SBAI, 2007.

Relevance:

80.00%

Publisher:

Abstract:

In this work, we present a hardware-software architecture for controlling the autonomous mobile robot Kapeck. The hardware of the robot is composed of a set of sensors and actuators organized on a CAN bus. Two embedded computers and eight microcontroller-based boards are used in the system. One of the computers hosts the vision system, due to the significant processing needs of this kind of system. The other computer is used to coordinate and access the CAN bus and to accomplish the other activities of the robot. The microcontroller-based boards are used with the sensors and actuators. The robot has this distributed configuration in order to exhibit good real-time behavior, where the response time and the temporal predictability of the system are important. We adopted the hybrid deliberative-reactive paradigm in the proposed architecture to reconcile the reactive behavior of the sensor-actuator network with the deliberative activities required to accomplish more complex tasks.
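
The abstract only names the hybrid deliberative-reactive paradigm; a bare-bones sketch of one common way to arbitrate the two layers (reactive behaviours overriding the deliberative plan) is given below, with all behaviour names and values invented for illustration.

```python
# Bare-bones hybrid deliberative-reactive arbitration (names and logic are illustrative).

def deliberative_command(goal, pose):
    """Planner layer: head toward the goal (placeholder)."""
    return {"v": 0.3, "w": 0.0}  # forward velocity (m/s), turn rate (rad/s)

def reactive_command(sonar_ranges_m):
    """Reactive layer: stop and turn away if an obstacle is close."""
    if min(sonar_ranges_m) < 0.4:
        return {"v": 0.0, "w": 0.5}
    return None  # nothing to react to

def arbitrate(goal, pose, sonar_ranges_m):
    """The reactive layer has priority over the deliberative plan."""
    reaction = reactive_command(sonar_ranges_m)
    return reaction if reaction is not None else deliberative_command(goal, pose)

print(arbitrate(goal=(5, 5), pose=(0, 0, 0), sonar_ranges_m=[1.2, 0.3, 0.9]))
```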

Relevance:

80.00%

Publisher:

Abstract:

This work introduces a new method for environment mapping with three-dimensional information obtained from visual information, aimed at accurate robot navigation. Many 3D mapping approaches based on occupancy grids typically require high computational effort to both build and store the map. We introduce a 2.5-D occupancy-elevation grid map, a discrete mapping approach in which each cell stores the occupancy probability, the height of the terrain at that place in the environment, and the variance of this height. This 2.5-dimensional representation allows a mobile robot to know whether a place in the environment is occupied by an obstacle and the height of this obstacle, so it can decide whether it is possible to traverse the obstacle. The sensory information necessary to construct the map is provided by a stereo vision system, which has been modeled with a robust probabilistic approach that accounts for the noise present in stereo processing. The resulting maps favor the execution of tasks such as decision making in autonomous navigation, exploration, localization and path planning. Experiments carried out with a real mobile robot demonstrate that the proposed approach yields useful maps for autonomous robot navigation.
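
A minimal sketch of the per-cell update implied by the abstract (an occupancy probability plus a height estimate with variance, fused with a noisy stereo measurement) could look like the following; the log-odds occupancy update and the 1D Gaussian fusion of heights are standard choices, not necessarily the authors' exact formulation.

```python
# Minimal 2.5-D occupancy-elevation grid cell (standard log-odds + Gaussian fusion;
# not necessarily the authors' exact formulation).
import math
from dataclasses import dataclass

@dataclass
class Cell:
    log_odds: float = 0.0      # occupancy in log-odds form (0.0 -> p = 0.5)
    height: float = 0.0        # estimated terrain/obstacle height (m)
    height_var: float = 1.0    # variance of the height estimate (m^2)

    def occupancy(self) -> float:
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

    def update(self, hit: bool, z_height: float, z_var: float) -> None:
        # Occupancy: add the sensor model's log-odds for a hit or a miss.
        self.log_odds += 0.85 if hit else -0.4     # illustrative sensor-model values
        # Height: fuse the stereo measurement with the prior (1D Kalman update).
        k = self.height_var / (self.height_var + z_var)
        self.height += k * (z_height - self.height)
        self.height_var *= (1.0 - k)

cell = Cell()
cell.update(hit=True, z_height=0.35, z_var=0.05)
print(cell.occupancy(), cell.height, cell.height_var)
```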

Relevance:

80.00%

Publisher:

Abstract:

This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computational vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from an image of the robot obtained through a camera. A colored triangular label is placed on the robot manipulator tool, and efficient heuristic rules are used to obtain the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme consists of the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
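
The core of the described control law (pose error scaled by a gain and mapped to joint increments through the Jacobian pseudoinverse) can be sketched in a few lines; the Jacobian below is a random placeholder, since the real one depends on the 5-DOF arm's kinematics.

```python
# Sketch of the pseudoinverse-Jacobian kinematic control step.
# The Jacobian here is a random placeholder; the real one comes from the arm's kinematics.
import numpy as np

def control_step(pose_desired: np.ndarray, pose_actual: np.ndarray,
                 jacobian: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Map a 6D pose error (position + orientation) to 5 joint increments."""
    error = pose_desired - pose_actual               # 6-vector pose error
    return gain * np.linalg.pinv(jacobian) @ error   # joint increments (5-vector)

J = np.random.randn(6, 5)                 # placeholder 6x5 manipulator Jacobian
pose_des = np.array([0.4, 0.1, 0.3, 0.0, 0.0, 0.10])
pose_act = np.array([0.38, 0.12, 0.28, 0.0, 0.0, 0.05])
dq = control_step(pose_des, pose_act, J)
print(dq)
```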

Relevance:

80.00%

Publisher:

Abstract:

Navigation based on visual feedback for robots working in a closed environment can be obtained by installing a camera in each robot (local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one camera or a small number of cameras covering the whole workspace can be shared by the entire team of robots, saving the cost of a large number of cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as a test platform. The proposed vision system consists of a camera, a frame grabber and a computer (PC) for image processing. The PC is responsible for the team motion control, based on the visual feedback, sending commands to the robots through a radio link. In order for the system to be able to unequivocally recognize each robot, each one has a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of all objects (robots and ball) and the orientation of the robots. A major problem was labeling the color, in real time, of each colored point of the image under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels will be clustered around a unique color class. The experimental results obtained show that the position and orientation of each robot can be determined with a precision of a few millimeters. The position and orientation updates were attained in real time, analyzing 30 frames per second.
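
A compressed sketch of the K-means colour calibration idea (clustering sampled pixels into colour classes, then labelling new pixels by their nearest cluster centre) is given below using scikit-learn; the number of classes and the synthetic pixel values are assumptions for illustration.

```python
# Sketch of K-means colour calibration: cluster sample pixels into colour classes,
# then label new pixels by their nearest cluster centre. Class count is illustrative.
import numpy as np
from sklearn.cluster import KMeans

# Pretend these RGB pixels were sampled from the robots' labels and the ball.
samples = np.array([
    [250, 30, 20], [240, 45, 35],    # "red" circles
    [20, 40, 250], [35, 30, 235],    # "blue" circles
    [250, 160, 10], [245, 150, 25],  # orange ball
], dtype=float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(samples)

# Labelling a new pixel under the current illumination:
new_pixel = np.array([[230, 50, 40]], dtype=float)
print("colour class:", int(kmeans.predict(new_pixel)[0]))
```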

Relevance:

80.00%

Publisher:

Abstract:

Mobile robots need autonomy to fulfill their tasks. Such autonomy is related to their capacity to explore and recognize their navigation environments. In this context, the present work considers techniques for the classification and extraction of features from images using artificial neural networks. These images are used in the mapping and localization system of the LACE (Automation and Evolutive Computing Laboratory) mobile robot. To this end, the robot uses a sensory system composed of ultrasound sensors and a catadioptric vision system equipped with a camera and a conical mirror. The mapping system is composed of three modules; two of them are presented in this paper: the classifier and characterizer modules. Simulation results for these modules are presented in this paper.
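
The abstract does not detail the image processing, but images from a camera looking into a conical mirror are commonly unwrapped from polar to panoramic form before feature extraction; a rough numpy sketch of that unwrapping follows, with the mirror centre and radii as assumed parameters rather than values from the paper.

```python
# Rough polar-to-panoramic unwrapping of a catadioptric (conical-mirror) image.
# Centre and radii are assumed parameters, not values from the paper.
import numpy as np

def unwrap(omni: np.ndarray, cx: int, cy: int, r_in: int, r_out: int,
           width: int = 360) -> np.ndarray:
    """Sample the annulus between r_in and r_out into a (r_out - r_in) x width panorama."""
    height = r_out - r_in
    pano = np.zeros((height, width), dtype=omni.dtype)
    for col in range(width):
        theta = 2.0 * np.pi * col / width
        for row in range(height):
            r = r_in + row
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < omni.shape[0] and 0 <= x < omni.shape[1]:
                pano[row, col] = omni[y, x]
    return pano

omni_image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in image
panorama = unwrap(omni_image, cx=320, cy=240, r_in=60, r_out=220)
print(panorama.shape)  # (160, 360)
```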

Relevance:

80.00%

Publisher:

Abstract:

Petroleum well drilling monitoring has become an important tool for detecting and preventing problems during the well drilling process. In this paper, we propose to assist the drilling process by analyzing the cutting images at the vibrating shale shaker, where different concentrations of cuttings can indicate possible problems, such as the collapse of the well borehole walls. To this end, we present an innovative computer vision system composed of a real-time cutting volume estimator based on support vector regression. As far as we know, we are the first to propose petroleum well drilling monitoring by cutting image analysis. We also applied a collection of supervised classifiers for cutting volume classification. (C) 2010 Elsevier Ltd. All rights reserved.
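
The abstract names support vector regression for volume estimation; a minimal scikit-learn sketch of training such a regressor on image features is shown below, with the feature vectors and volume labels being synthetic placeholders rather than data from the paper.

```python
# Minimal SVR sketch for cutting-volume estimation from image features.
# Feature vectors and volume labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.random((50, 8))                                   # e.g. texture/area descriptors per image
volumes = features.sum(axis=1) + 0.1 * rng.standard_normal(50)   # fake target volumes

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(features, volumes)

new_image_features = rng.random((1, 8))
print("estimated cutting volume:", float(model.predict(new_image_features)[0]))
```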

Relevance:

80.00%

Publisher:

Abstract:

Autonomous robots must be able to learn and maintain models of their environments. In this context, the present work considers techniques for the classification and extraction of features from images, used together with artificial neural networks in the mapping and localization system of the mobile robot of the Laboratory of Automation and Evolutive Computing (LACE). To do this, the robot uses a sensory system composed of ultrasound sensors and a catadioptric vision system formed by a camera and a conical mirror. The mapping system is composed of three modules. Two of them are presented in this paper: the classifier and the characterizer modules. The first module uses a hierarchical neural network to perform the classification; the second uses techniques for extracting image attributes and recognizing invariant patterns from the set of place images. The neural network of the classifier module is structured in two layers, reason and intuition, and is trained to classify each place explored by the robot into one of four predefined classes. The final result of the exploration is the construction of a topological map of the explored environment. Results obtained through simulation of both modules of the mapping system are presented in this paper. © 2008 IEEE.
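
The paper's two-layer "reason and intuition" network is specific to the authors; as a generic stand-in, the sketch below trains a small neural network to classify feature vectors extracted from place images into four predefined classes, with all data synthetic.

```python
# Generic stand-in for the place classifier: a small neural network mapping image
# feature vectors to four place classes. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
features = rng.random((200, 16))            # e.g. invariant features per place image
labels = rng.integers(0, 4, size=200)       # four predefined place classes

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(features, labels)

new_place = rng.random((1, 16))
print("predicted place class:", int(clf.predict(new_place)[0]))
```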

Relevance:

80.00%

Publisher:

Abstract:

This article describes the development of a visual stimulus generator to be used in neuroscience experiments with invertebrates such as flies. The experiment consists of the visualization of a fixed image that is displaced horizontally according to the stimulus data. The system is capable of displaying 640 x 480 pixels with 256 intensity levels at 200 frames per second (FPS) on conventional raster monitors. To double the number of possible horizontal positions from 640 to 1280, a novel technique is presented that introduces artificial inter-pixel steps. The implementation consists of using two video frame buffers, each containing a distinct view of the desired image pattern. This implementation generates a visual effect capable of doubling the horizontal positioning capabilities of the visual stimulus generator, allowing more precise and more continuous movements. (C) 2011 Elsevier Inc. All rights reserved.
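
One way to picture the inter-pixel effect (an assumption about the mechanism, not the authors' hardware implementation) is that mixing a pattern with a copy shifted by one pixel places its apparent position halfway between the two integer positions; the numpy sketch below illustrates this on a 1D intensity profile.

```python
# Illustration (an assumption about the mechanism, not the authors' hardware design):
# blending a pattern with a copy shifted by one pixel places its apparent position
# halfway between the two integer pixel positions, i.e. an artificial inter-pixel step.
import numpy as np

pattern = np.zeros(16)
pattern[8:12] = 1.0                     # a bright bar, 1D slice of the stimulus image
shifted = np.roll(pattern, 1)           # the same bar displaced by one whole pixel
blended = 0.5 * (pattern + shifted)     # mix of the two frame-buffer views

def centroid(p):
    """Apparent horizontal position of the bar (intensity centre of mass)."""
    return float(np.sum(np.arange(len(p)) * p) / np.sum(p))

print(centroid(pattern), centroid(shifted), centroid(blended))
# -> 9.5, 10.5 and 10.0: the blend appears half a pixel between the two buffers.
```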

Relevance:

80.00%

Publisher:

Abstract:

The quality of fish products is inextricably linked to the freshness of the raw material, which is modulated by appropriate handling and storage conditions, especially the storage temperature after catch. The purpose of the research presented in this thesis, which was largely conducted in the context of a research project funded by the Italian Ministry of Agricultural, Food and Forestry Policies (MIPAAF), concerned the evaluation of the freshness of farmed and wild fish species in relation to different storage conditions, on ice (0°C) or at refrigeration temperature (4°C). Several specimens of different species, bogue (Boops boops), red mullet (Mullus barbatus), sea bream (Sparus aurata) and sea bass (Dicentrarchus labrax), were examined during storage under the different temperature conditions adopted. The assessed control parameters were physical (texture, through the use of a dynamometer; visual quality, using a computer vision system (CVS)), chemical (1H-NMR footprint metabolomics) and sensory (Quality Index Method, QIM). Microbiological determinations were also carried out on hake (Merluccius merluccius). In general, the obtained results confirmed that the handling/storage temperature is a key factor in maintaining fish freshness. NMR spectroscopy proved able to quantify and evaluate the kinetics of unselected compounds during fish degradation, even a posteriori. This can be suitable for the development of new parameters related to quality and freshness. The development of physical methods, particularly the image analysis performed by the computer vision system (CVS), for the evaluation of fish degradation is very promising. Among the CVS parameters, skin colour, the presence and distribution of gill mucus, and eye shape modification showed high sensitivity for the estimation of fish quality loss as a function of the adopted storage conditions. In particular, the eye concavity index measured on the fish eye showed a high positive correlation with the total QIM score.

Relevance:

80.00%

Publisher:

Abstract:

Unlike humans, the goldfish possesses a tetrachromatic colour vision system that has been studied exceptionally well. Colours of equal brightness can be represented in a three-dimensional tetrahedron. The aim of the present work was to find out how well the goldfish can discriminate colours that appear unsaturated to humans and lie in the interior of the colour tetrahedron. A further question was whether both "white" (without UV) and xenon white (with UV) are perceived by the fish as achromatic or "neutral". To investigate this, a complex experimental setup had to be developed with which the fish could be shown monochromatic lights and lights mixed with white, all of equal brightness, as well as xenon white. Through operant conditioning, the fish learned to distinguish a training stimulus (monochromatic light of wavelength 660 nm, 599 nm, 540 nm, 498 nm or 450 nm) from a comparison stimulus (projector white). Subsequently, increasing amounts of the respective training spectral colour were mixed into the comparison stimulus in steps of ten, until the goldfish could no longer reliably choose the training stimulus. The discrimination performance of the goldfish decreased as more of the training spectral colour was mixed into the projector white, and a region emerged in the base of the tetrahedron within which the goldfish could no longer make any discrimination. To characterize this region more closely, the goldfish were shown, in transfer tests, mixed lights for which discrimination from projector white was just no longer possible. Since the goldfish could not distinguish these mixed lights from one another, it can be concluded that there is a larger region that, like white (without UV), appears "neutral" to the goldfish. If white (without UV) appears "neutral" to the goldfish, it should be similar to xenon white. The experiments showed, however, that the goldfish perceive white (without UV) and xenon white as different colours. Regarding the saturation of the spectral colours, it was found that the 540 nm spectral colour appears most saturated to the goldfish, and the 660 nm spectral colour least saturated.