979 results for Visione Robotica Calibrazione Camera Robot Hand Eye


Relevance: 30.00%

Abstract:

This thesis describes the design and development of an eye alignment/tracking system which allows self-alignment of the eye's optical axis with a measurement axis. Eye alignment is an area of research largely overlooked, yet it is a fundamental requirement in the acquisition of clinical data from the eye. New trends in the ophthalmic market, which favour portable hand-held apparatus, and the application of ophthalmic measurements in areas other than vision care have brought eye alignment under new scrutiny. Ophthalmic measurements taken with hand-held devices, without a clinician present, require alignment in an entirely new set of circumstances and therefore a novel solution. To solve this problem, the research has drawn upon eye tracking technology to monitor the eye, and a principle of self-alignment to perform alignment correction. A hand-held device naturally lends itself to the patient performing alignment, so a technique has been designed to communicate raw eye tracking data to the user in a manner which allows the user to make the necessary corrections. The proposed technique is a novel methodology in which misalignment with the eye's optical axis can be quantified, corrected and evaluated. The technique uses Purkinje image tracking to monitor the eye's movement as well as the orientation of the optical axis. The use of two sets of Purkinje images allows quantification of the eye's physical parameters needed for accurate Purkinje image tracking, negating the need for prior anatomical data. An instrument employing the methodology was subsequently prototyped and validated, allowing a sample group to achieve self-alignment of their optical axis with an imaging axis within 16.5-40.8 s, and with a rotational precision of 0.03-0.043° (95% confidence intervals). By encompassing all these factors the technique facilitates self-alignment from an unaligned position on the visual axis to an aligned position on the optical axis.
The consequence is that ophthalmic measurements, specifically pachymetric measurements, can be made in the absence of an optician, allowing the use of ophthalmic instrumentation and measurements in health professions other than vision care.
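The dual-Purkinje principle underlying such tracking can be caricatured in a few lines: a pure translation of the eye shifts the first (P1) and fourth (P4) Purkinje reflections together, while a rotation changes their separation, so the separation vector scaled by a calibration gain yields a rotation estimate. A minimal sketch, assuming the reflection positions have already been extracted from the image and that `gain` is a hypothetical per-subject constant (not a value from the thesis):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Rotation estimate from the separation of the P1 and P4 reflections.
// 'gain' is a hypothetical calibration constant (degrees of eye rotation
// per unit of image-plane separation).
Vec2 rotationFromPurkinje(const Vec2& p1, const Vec2& p4, double gain) {
    // A pure translation shifts P1 and P4 equally, so only their
    // difference carries rotation information.
    return { gain * (p4.x - p1.x), gain * (p4.y - p1.y) };
}
```

A zero P1-P4 separation then corresponds to alignment of the optical axis with the measurement axis, which is exactly the condition the user is guided towards.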

Relevance: 30.00%

Abstract:

The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends the command corresponding to the gesture to the robot. In total, five gestures made by the human hand were interpreted. The software (developed in C++) made extensive use of computer vision concepts and of the open-source OpenCV library, which directly affect the overall efficiency of mobile robot control. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, colour spaces chosen to best suit the task, and other information useful for manipulating digital images. The OpenCV library was essential to the project, providing functions/procedures for filters, image borders, image area, the geometric centre of contours, colour-space conversion, convex hull and convexity defects, plus everything needed to characterise the imaged features. During development several problems appeared, such as false positives (noise), poor performance caused by stacking filters with oversized masks, and problems arising from the choice of colour space for processing human skin tones. However, after seven versions of the control software, it was possible to minimise false positives through better use of filters combined with a well-dimensioned mask size (tuned at run time), all supported by a programming logic refined over the seven versions. At the end of this development, the software met the established requirements.
After the completion of the control software, the overall effectiveness of the successive versions, in particular version V (84.75%), version VI (93.00%) and version VII (94.67%), showed that the final program performed well in interpreting gestures, proving that it is possible to control a mobile robot through human gestures without external accessories, which gives the system better mobility and lower maintenance costs. The program's great merit is its capacity to help demystify the human/machine interaction, since it offers an easy and intuitive interface for the control of mobile robots. Another important feature is that it is not necessary to be close to the mobile robot in order to control it: the program only needs the address that the Robotino sends to it over the network or Wi-Fi.
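Among the OpenCV building blocks the abstract lists, the convex hull is the easiest to show in isolation. The project used OpenCV's own routines; the sketch below is only the underlying idea (Andrew's monotone-chain algorithm) on bare 2-D points, with no OpenCV dependency:

```cpp
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Cross product of vectors OA and OB; > 0 means a counter-clockwise turn.
double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone-chain convex hull, returned in counter-clockwise order.
std::vector<Pt> convexHull(std::vector<Pt> pts) {
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::size_t n = pts.size();
    if (n < 3) return pts;
    std::vector<Pt> hull(2 * n);
    std::size_t k = 0;
    for (std::size_t i = 0; i < n; ++i) {              // lower hull
        while (k >= 2 && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (std::size_t i = n - 1, t = k + 1; i-- > 0; ) { // upper hull
        while (k >= t && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);
    return hull;
}
```

In the gesture pipeline, the gaps between this hull and the actual hand contour (the convexity defects) are what reveal the valleys between extended fingers.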

Relevance: 30.00%

Abstract:

The aim of this thesis is to create an FPGA architecture able to extract 3D information from a pair of stereo sensors. The pipeline was built on the Zynq System-on-Chip, which allows close interaction between the hardware implemented in the FPGA and the CPU. After a preliminary study of the hardware and software tools, a base architecture was implemented for writing and reading images in the Zynq's DDR memory. Attention then moved to the FPGA implementation of stereo algorithms (rectification and stereo matching) and to building a pipeline able to produce accurate disparity maps in real time from the images acquired by a stereo camera.
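The stereo-matching stage of such a pipeline can be illustrated in software by the simplest block matcher: for a pixel on a rectified left scanline, find the horizontal shift that minimises the sum of absolute differences (SAD) against the right scanline. A toy sketch; the window size and disparity range are illustrative, not the thesis's parameters, and the caller must keep the window inside the row:

```cpp
#include <climits>
#include <cstdlib>
#include <vector>

// One-scanline SAD block matching on rectified rows: returns the disparity
// d (0..maxDisp) minimising the absolute-difference cost of a window of
// half-width 'window' centred at x in the left row against x-d in the right.
int bestDisparity(const std::vector<int>& left, const std::vector<int>& right,
                  int x, int window, int maxDisp) {
    int best = 0, bestCost = INT_MAX;
    for (int d = 0; d <= maxDisp && x - d - window >= 0; ++d) {
        int cost = 0;
        for (int k = -window; k <= window; ++k)
            cost += std::abs(left[x + k] - right[x - d + k]);
        if (cost < bestCost) { bestCost = cost; best = d; }
    }
    return best;
}
```

On an FPGA the same cost comparison is unrolled across disparities in parallel, which is why this stage maps so well to hardware.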

Relevance: 30.00%

Abstract:

SANTANA, André M.; SANTIAGO, Gutemberg S.; MEDEIROS, Adelardo A. D. Real-Time Visual SLAM Using Pre-Existing Floor Lines as Landmarks and a Single Camera. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.

Relevance: 30.00%

Abstract:

Extracting information from the surrounding environment is a key goal of modern computer science, enabling the design of robots, self-driving vehicles, recognition systems and much more. Computer vision is the branch that addresses this goal, and it is gaining ever more ground. To reach it, a stereo vision pipeline is used, whose rectification and disparity-map generation steps are the subject of this thesis. In particular, since these steps are often delegated to dedicated hardware devices (such as FPGAs), the algorithms must be portable to this kind of technology, where resources are far more limited. This thesis shows how approximation techniques can be applied to these algorithms so as to save resources while still guaranteeing excellent results.
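One concrete approximation of the kind described above is replacing floating point with fixed point, since an FPGA multiplies integers far more cheaply than it handles floating-point units. A sketch using a Q16.16 format (the particular format is an assumption for illustration, not the thesis's choice):

```cpp
#include <cmath>
#include <cstdint>

// Q16.16 fixed point: a real x is stored as round(x * 2^16). A multiply
// then needs only a 32x32-bit integer multiply and a shift, which maps
// directly onto FPGA DSP slices.
using q16_16 = int32_t;

q16_16 toFixed(double x)   { return static_cast<q16_16>(std::lround(x * 65536.0)); }
double toDouble(q16_16 x)  { return x / 65536.0; }

q16_16 mulFixed(q16_16 a, q16_16 b) {
    // Widen to 64 bits so the intermediate product does not overflow,
    // then drop the extra 16 fractional bits.
    return static_cast<q16_16>((static_cast<int64_t>(a) * b) >> 16);
}
```

The cost of the approximation is a bounded quantisation error (here about 2^-16 per operand), which is the trade-off the thesis evaluates against resource usage.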

Relevance: 30.00%

Abstract:

This presentation was both an illustrated lecture and a published paper presented at the IMPACT 9 Conference "Printmaking in the Post-Print Age", Hangzhou, China, 2015. It was an extension of the exhibition catalogue essay for the Bluecoat Gallery exhibition of the same name. In 2014 I curated an exhibition, The Negligent Eye, at the Bluecoat Gallery in Liverpool as the result of a longstanding interest in scanning and 3D printing and the role of these in changing the field of Print within Fine Art practice. In the aftermath of curating the show I have continued to reflect on this material with reference to the writings of Vilém Flusser and Hito Steyerl. The work in the exhibition came from a wide range of artists of all generations, most of whom are not explicitly located within Printmaking. Whilst some work did not use any scanning technology at all, a shared fascination with the particular translating device of the systematizing 'eye' of a scanning digital video camera, flatbed or medical scanner was expressed by all the work in the show. Through writing this paper I aim to extend my own understanding of questions which arose from the juxtapositions of work and the production of the accompanying catalogue. The show developed in dialogue with curators Bryan Biggs and Sarah-Jane Parsons of the Bluecoat Gallery, who sent a series of questions about scanning to participating artists. In reflecting upon their answers I will extend the discussions begun in the process of this research. A kind of manufactured attention deficit disorder seems to operate on us all today, pushing us to make and distribute images and information at speed. What value do ways of making which require slow looking or intensive material exploration have in this accelerated system? What model of the world is being constructed by the drive to simulated realities of ever-greater resolution, so-called high definition?
How are our perceptions of reality being altered by the world-view presented in the smooth, colourful, ever-morphing simulations that surround us? The limitations of digital technology are often a starting point for artists to reflect on our relationship to real-world fragility. I will be looking at practices where tactility or dimensionality in a form of hard copy engages with these questions, using examples from the exhibition. Artists included in the show were: Cory Arcangel, Christiane Baumgartner, Thomas Bewick, Jyll Bradley, Maurice Carlin, Helen Chadwick, Susan Collins, Conroy/Sanderson, Nicky Coutts, Elizabeth Gossling, Beatrice Haines, Juneau Projects, Laura Maloney, Bob Matthews, London Fieldworks (with the participation of Gustav Metzger), Marilène Oliver, Flora Parrott, South Atlantic Souvenirs, Imogen Stidworthy, Jo Stockham, Wolfgang Tillmans, Alessa Tinne, Michael Wegerer, Rachel Whiteread, Jane and Louise Wilson. Keywords: Scanning, Art, Technology, Copy, Materiality.

Relevance: 30.00%

Abstract:

This thesis work was carried out at Datalogic, with the aim of integrating a vision system with a laser marking system. The use of this powerful tool is, however, constrained by the physical position occupied by the object each time; for this reason the object is fixed in the desired position by means of mechanical jigs. Until now the presence of an operator was considered strictly necessary to check correct positioning through a marking simulation. To overcome this structural limitation, Datalogic decided to introduce a tool to aid and observe the process: the camera. The basic idea was to employ modern smart cameras to locate the object to be marked, making the process as automatic as possible. To achieve this, a calibration of the complete system, camera plus laser, was required. My work therefore focused on creating an executable that helps the customer perform this operation as simply as possible. An executable was created in C# that puts the two devices in communication and performs the calibration of the intrinsic and extrinsic parameters. The final result makes the camera's world reference frame coincide with that of the laser marking plane. It follows that, after the calibration process, if the camera detects an object whose centroid lies at position (10, 10), the laser, using the same coordinates, will mark exactly at the centroid of the desired object. The main difficulty encountered was the difference between the software packages that communicate with the two devices, and the creation of a communication channel with the laser, which did not previously exist in C#.
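The outcome of such a calibration can be pictured as a planar transform from the camera's world frame onto the laser marking plane; a perfect calibration reduces it to the identity, which is exactly the (10, 10)-to-(10, 10) behaviour described. A hypothetical sketch using a 2-D rigid transform (the real calibration also estimates intrinsic parameters, which are omitted here):

```cpp
#include <cmath>

struct Vec2   { double x, y; };
struct Pose2D { double theta, tx, ty; };  // extrinsic: rotation + translation

// Map a point from the camera's world frame onto the laser marking plane.
// After a perfect calibration the pose is the identity (theta = tx = ty = 0),
// so camera coordinates can be fed to the laser unchanged.
Vec2 cameraToLaser(const Pose2D& e, const Vec2& p) {
    return { std::cos(e.theta) * p.x - std::sin(e.theta) * p.y + e.tx,
             std::sin(e.theta) * p.x + std::cos(e.theta) * p.y + e.ty };
}
```

Any residual rotation or offset left by the calibration shows up directly as a marking error, which is why the executable's job is to drive this pose as close to the identity as possible.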

Relevance: 30.00%

Abstract:

This article presents DeBuPa (Detección, Búsqueda, Pateo: detection, search, kick), a small humanoid (38 cm tall) built from the parts of a Bioloid kit. The kit's CM-510 board has been replaced with an Arbotix controller board, which drives the 16 Dynamixel AX-12+ motors (to move the robot) and 2 analogue servomotors (to move the camera). In addition, a Raspberry Pi mini computer with its camera has been added so that the robot can detect and follow the ball autonomously. All these components must be coordinated to accomplish the task of detecting, following and kicking the ball, which makes communication between the Arbotix and the Raspberry Pi necessary. The tool used for this is the ROS (Robot Operating System) framework. On the Raspberry Pi, a single C++ program captures the camera image, filters and processes it to find the ball, decides which action to execute, and sends the request to the Arbotix, which commands the motors to carry out the movement. The RasPiCam CV library is used to capture the camera image, and the OpenCV libraries to filter and process it. The Arbotix, besides driving the motors, monitors the robot's balance using the Robotis Gyro sensor: if it detects a sufficiently large imbalance, it can determine that the robot has fallen and make it stand up.
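The decision step of such a detect-follow-kick loop can be sketched in a few lines. This is a toy illustration, assuming the ball centroid (`cx`, normalized 0..1 across the image width) and its apparent radius (as a fraction of image width) have already been extracted from the filtered frame; the thresholds are hypothetical, not DeBuPa's values:

```cpp
#include <string>

// Toy action selection from the filtered ball detection: search when the
// ball is lost, turn to centre it, walk towards it, and kick when it is
// centred and close. All thresholds are illustrative.
std::string chooseAction(double cx, double radius) {
    if (radius <= 0.0) return "search";        // ball not detected
    if (cx < 0.4)      return "turn_left";
    if (cx > 0.6)      return "turn_right";
    if (radius > 0.15) return "kick";          // ball centred and close
    return "walk_forward";
}
```

In the real system the chosen action becomes a ROS request to the Arbotix, which plays the corresponding motor sequence.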

Relevance: 30.00%

Abstract:

The development of robots has shown itself to be a very complex interdisciplinary research field. The predominant procedure in recent decades is based on the assumption that each robot is a fully personalized project, with hardware and software technologies embedded directly in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has also imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at both hardware and software levels, to local groups' expertise. Large advances might be achieved, for example, if physical parts of a robot could be reused in a different robot constructed with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), that aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, as well as the advances proposed for their improvement.
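In software terms, the dissociation TORP argues for is the familiar dependency-inversion idea: the robot knows its parts only through an abstract interface, so a part built on different hardware can be swapped in without touching the robot. A hypothetical sketch (the class names are illustrative; the paper's actual specification covers electrical and mechanical interfaces as well):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Abstract interface: the robot depends on this, never on a technology.
class RobotPart {
public:
    virtual ~RobotPart() = default;
    virtual std::string describe() const = 0;
};

// One concrete technology behind the interface; another vendor's gripper
// could replace it without any change to Robot.
class DummyGripper : public RobotPart {
public:
    std::string describe() const override { return "gripper (vendor A servo)"; }
};

class Robot {
    std::vector<std::unique_ptr<RobotPart>> parts_;
public:
    void addPart(std::unique_ptr<RobotPart> p) { parts_.push_back(std::move(p)); }
    std::size_t partCount() const { return parts_.size(); }
};
```

The same decoupling applied at the electrical and mechanical levels is what would let a physical part migrate between robots built by different groups.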

Relevance: 30.00%

Abstract:

Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer's preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of the devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is important in this regard. The 'Dead Leaves' target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r⁻³, each with a uniformly distributed gray level, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the 'Dead Leaves' chart. These include variations in illumination, distance, exposure time and ISO sensitivity, among others. We discuss the main differences of this method from the existing resolution measurement techniques and identify the advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High frequency residual noise in the processed image contains the same frequency content as fine texture detail, and is sometimes reported as such, thereby leading to inaccurate results.
A wavelet thresholding based denoising technique is utilized for modeling the noise present in the final captured image. This updated noise model is then used for calculating an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
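The r⁻³ radius distribution of the Dead Leaves target can be drawn by inverse-transform sampling on a truncated range [rMin, rMax]; truncation is needed because an r⁻³ density is not normalisable at 0 or infinity. A sketch, with the truncation bounds as an implementation assumption:

```cpp
#include <cmath>

// Inverse-transform sampling of a radius with density proportional to r^-3
// on [rMin, rMax]. For u uniform in [0, 1]:
//   CDF(r) = (rMin^-2 - r^-2) / (rMin^-2 - rMax^-2)
//   r(u)   = 1 / sqrt(rMin^-2 - u * (rMin^-2 - rMax^-2))
double sampleDeadLeafRadius(double rMin, double rMax, double u) {
    double a = 1.0 / (rMin * rMin);
    double b = 1.0 / (rMax * rMax);
    return 1.0 / std::sqrt(a - u * (a - b));
}
```

Filling an image with circles whose radii follow this law, at uniformly random centres and gray levels, reproduces the scale-invariant occlusion statistics that make the chart a stand-in for natural texture.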

Relevance: 30.00%

Abstract:

Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.
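The model-fitting loop described above alternates (or, in the paper's improved version, jointly optimizes) two sub-problems: matching observed points to the model surface, and updating the pose given those matches. A 1-D toy analogue of the alternating baseline, assuming the observations are just a shifted copy of the model points (the real tracker fits a full articulated pose to a depth image; this only shows the alternation structure):

```cpp
#include <cmath>
#include <vector>

// Toy 1-D alternating fit: repeat (a) assign each observation to its
// closest model point under the current shift t, then (b) re-estimate t
// as the mean residual. Converges when obs = model + t.
double fitShift(const std::vector<double>& model,
                const std::vector<double>& obs, int iters) {
    double t = 0.0;
    for (int it = 0; it < iters; ++it) {
        double sum = 0.0;
        for (double o : obs) {
            double best = model[0];
            for (double m : model)                 // correspondence step
                if (std::fabs(o - t - m) < std::fabs(o - t - best)) best = m;
            sum += o - best;                       // residual under match
        }
        t = sum / obs.size();                      // pose (shift) update
    }
    return t;
}
```

Optimizing pose and correspondences jointly, as the paper does, removes the lock-step between these two stages and widens the basin of convergence.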

Relevance: 30.00%

Abstract:

The aim of this thesis is a feasibility study for the application of cable-driven robot technology in the naval and defence sectors. The work was carried out at Calzoni, in Calderara di Reno. In particular, it analyses the possibility of replacing the traditional rigid structures used for load handling with a cable-driven robotic system able to guarantee characteristics such as modularity and easier reconfigurability. Several cable robot architectures were considered. First, each was checked against the design specifications assigned by the company. A kineto-static analysis was then carried out on the potentially suitable architectures to determine which performed best. Once the best configuration was defined, a first preliminary concept was developed.
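The defining constraint in the kineto-static analysis of cable robots is that cables can only pull: a pose is statically feasible only if the tensions that balance the load are all non-negative. A minimal planar sketch for a point mass on two cables (the thesis's architectures are more complex; this only shows the feasibility test):

```cpp
#include <cmath>

// Planar point mass suspended by two cables with unit direction vectors
// u1, u2 (pointing from the mass towards the anchors) under weight w (N).
// Solves t1*u1 + t2*u2 = (0, w); the pose is feasible only if both
// tensions are non-negative, since cables cannot push.
struct Tensions { double t1, t2; bool feasible; };

Tensions solveTensions(double u1x, double u1y,
                       double u2x, double u2y, double w) {
    double det = u1x * u2y - u1y * u2x;
    Tensions r{0.0, 0.0, false};
    if (std::fabs(det) < 1e-12) return r;   // cables parallel: singular
    r.t1 = (0.0 * u2y - w * u2x) / det;     // Cramer's rule
    r.t2 = (u1x * w - u1y * 0.0) / det;
    r.feasible = r.t1 >= 0.0 && r.t2 >= 0.0;
    return r;
}
```

Sweeping this test over the workspace is, in miniature, how a kineto-static analysis separates the feasible region of each candidate architecture.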

Relevance: 30.00%

Abstract:

Industrial robots are both versatile and highly performant, enabling the flexible automation typical of modern Smart Factories. For safety reasons, however, they must be confined inside closed fences and/or virtual safety barriers, to keep them strictly separated from human operators. This can be a limitation in scenarios where it is useful to combine human cognitive skill with the accuracy and repeatability of a robot, or simply to allow safe coexistence in a shared workspace. Collaborative robots (cobots), on the other hand, are intrinsically limited in speed and power so that they can share workspace and tasks with human operators, and feature the very intuitive hand-guiding programming method. Cobots, however, cannot compete with industrial robots in terms of performance, and are thus useful only in a limited niche, where they can actually improve productivity and/or the quality of the work thanks to their synergy with human operators. The limitations of both the purely industrial and the collaborative paradigms can be overcome by combining industrial robots with artificial vision. In particular, vision can be exploited for real-time adjustment of the pre-programmed task-based robot trajectory, by means of the visual tracking of dynamic obstacles (e.g. human operators). This strategy allows the robot to modify its motion only when necessary, thus maintaining a high level of productivity while at the same time increasing its versatility. Beyond that, vision also offers the possibility of more intuitive programming paradigms for industrial robots, such as programming by demonstration. These possibilities offered by artificial vision enable an efficacious and promising way of achieving human-robot collaboration, one that overcomes the limitations of both previous paradigms while keeping their strengths.
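A common concrete form of the real-time trajectory adjustment described above is speed scaling as a function of the tracked obstacle distance: full speed beyond a clear distance, a stop inside a protective distance, and a linear ramp in between. A sketch; the distances here are hypothetical illustration values, not figures from any safety standard:

```cpp
#include <cmath>

// Scale factor (0..1) to apply to the robot's programmed speed, given the
// current distance to the closest tracked obstacle. Inside stopDist the
// robot halts; beyond clearDist it runs at full speed; in between the
// speed ramps linearly.
double speedScale(double obstacleDist, double stopDist, double clearDist) {
    if (obstacleDist <= stopDist)  return 0.0;
    if (obstacleDist >= clearDist) return 1.0;
    return (obstacleDist - stopDist) / (clearDist - stopDist);
}
```

Because the scale only departs from 1.0 when an obstacle is actually near, the robot keeps its nominal productivity except during genuine human-robot proximity, which is the point the abstract makes.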