932 results for Vision-based


Relevance: 60.00%

Publisher:

Abstract:

Dissertation submitted for the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch.

Relevance: 60.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering, Autonomous Systems specialization area.

Relevance: 60.00%

Publisher:

Abstract:

This work aims to contribute to the development of a multi-camera vision system for determining the position and attitude of multiple objects and tracking them, to be used at the robotics unit of INESC TEC. It stems from the need for accurate external information to serve as a reference in the study, characterization and development of localization, navigation and control algorithms for several autonomous systems. Based on a characterization of the autonomous vehicles existing at the INESC TEC robotics unit and on an analysis of their operating scenarios, the requirements for the system to be developed were gathered. The theoretical foundations needed for the development of the system were studied, covering computer vision, estimation methods and data association for multiple-object tracking problems. An architecture was proposed for the overall system that addresses the identified requirements, allowing the use of multiple cameras and supporting the tracking of multiple objects, with or without markers. Components of the proposed architecture were implemented, validated and integrated into a system for validation, focusing on the localization and tracking of multiple objects carrying luminous markers based on Light-Emitting Diodes (LEDs): namely, the modules for identifying points of interest in the image, techniques for grouping the points of interest belonging to each object and matching the measurements obtained by the different cameras, a method for determining the position and attitude of the objects, and a filter for tracking multiple objects. Tests were carried out to validate and tune the implemented system, showing that the solution meets the requirements, and lines of work for the continued development of the overall system were identified.
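
As an illustration of the point-of-interest identification step mentioned above, the sketch below shows one common way to extract bright, LED-like blobs from a camera frame with OpenCV. The function name, threshold and area limits are assumptions for illustration only and are not taken from the system described.

```python
# Hypothetical sketch of LED-marker detection: bright points of interest are
# extracted from a frame by thresholding and connected-component analysis.
import cv2
import numpy as np

def detect_led_markers(frame_bgr, thresh=220, min_area=4, max_area=400):
    """Return (x, y) centroids of bright blobs that plausibly are LED markers."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    markers = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            markers.append(tuple(centroids[i]))
    return markers
```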

Relevance: 60.00%

Publisher:

Abstract:

Oceans - San Diego, 2013

Relevance: 60.00%

Publisher:

Abstract:

This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. Such systems are a relevant tool for benchmarking and assessing the quality of research in underwater robotics applications. In suitable environments, such as test tanks or clear-water conditions, a stereo vision system can provide accurate positioning at low cost and with flexible operation. In this work we present a two-step extrinsic camera parameter calibration procedure that reduces setup time and provides accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the vanishing points of lines detected in the image to obtain the global pose of the stereo rig in the reference frame. The method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from accurate 3D LIDAR modelling of the environment.
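
A minimal sketch of the homography-decomposition step, assuming OpenCV, is given below: matched image points of a planar target seen by both cameras yield a homography whose decomposition returns candidate relative rotations and translations. The function and variable names (relative_pose_from_plane, the intrinsic matrix K, the RANSAC threshold) are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def relative_pose_from_plane(pts_cam1, pts_cam2, K):
    """pts_cam1, pts_cam2: Nx2 arrays of corresponding points on a planar target;
    K: 3x3 camera intrinsic matrix (assumed here to be the same for both cameras)."""
    H, _ = cv2.findHomography(pts_cam1, pts_cam2, cv2.RANSAC, 3.0)
    # Decomposition yields up to four (R, t, n) candidates; the physically valid
    # one is normally selected afterwards with visibility/cheirality checks.
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```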

Relevance: 60.00%

Publisher:

Abstract:

Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, used in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being computationally simple, which makes them good candidates for real-time hand gesture recognition.
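
The sketch below illustrates, under assumptions, the kind of centroid-distance descriptor named in the abstract: distances from the hand-contour centroid to points sampled along the contour, resampled to a fixed length so they can feed a classifier. It uses OpenCV and NumPy; the function name and parameters are hypothetical, not the study's code.

```python
import cv2
import numpy as np

def centroid_distance_feature(binary_hand_mask, n_samples=64):
    """Fixed-length centroid-distance descriptor from a binary hand mask
    (assumes the mask contains the segmented hand)."""
    contours, _ = cv2.findContours(binary_hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = contour.reshape(-1, 2).astype(np.float64)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    # Resample to a fixed-length descriptor and normalize for scale invariance.
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    feat = dists[idx]
    return feat / feat.max()
```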

Relevance: 60.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
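
As a hedged illustration of the static-posture stage (not the authors' implementation), the snippet below trains an SVM on fixed-length hand-shape feature vectors with scikit-learn; the feature files and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical pre-extracted descriptors (n_samples, n_features) and labels.
X = np.load("hand_posture_features.npy")
y = np.load("hand_posture_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("posture accuracy:", clf.score(X_te, y_te))
```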

Relevance: 60.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
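
For the dynamic-gesture stage, the abstract describes one HMM per gesture. The sketch below shows that scheme under the assumption of the hmmlearn library: each gesture class gets its own Gaussian HMM trained on its observation sequences, and recognition picks the model with the highest log-likelihood. All names and parameters here are illustrative, not the work's implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=5):
    """sequences_by_gesture: dict gesture_name -> list of (T_i, D) observation arrays."""
    models = {}
    for name, seqs in sequences_by_gesture.items():
        X = np.concatenate(seqs)            # stacked observations
        lengths = [len(s) for s in seqs]    # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, sequence):
    # Pick the gesture whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(sequence))
```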

Relevance: 60.00%

Publisher:

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain quite accurate motion estimates. The most crucial step in vision-based estimation of the vehicle motion consists in detecting correspondences between image pairs. Here we propose the extensive use of texture analysis as a tool to ameliorate the correspondence problem in underwater images. Once a robust set of correspondences has been found, the three-dimensional motion of the vehicle can be computed with respect to the seabed. Finally, the motion estimates allow the construction of a map that could aid the navigation of the robot.
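
The paper's texture measures are not reproduced here; as a minimal stand-in, the sketch below scores candidate matches between two consecutive seafloor images with normalized cross-correlation over a local search window (OpenCV matchTemplate). Patch and search sizes are assumptions, and boundary checks are omitted for brevity.

```python
import cv2
import numpy as np

def match_patch(img_a, img_b, x, y, patch=21, search=60):
    """Find the best match in grayscale img_b for the patch centered at (x, y) in img_a."""
    h = patch // 2
    tmpl = img_a[y - h:y + h + 1, x - h:x + h + 1]
    x0, y0 = max(x - search, 0), max(y - search, 0)
    roi = img_b[y0:y + search, x0:x + search]
    score = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    # Return the matched center in img_b coordinates plus the correlation score.
    return (x0 + max_loc[0] + h, y0 + max_loc[1] + h), max_val
```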

Relevance: 60.00%

Publisher:

Abstract:

In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short-distance image acquisition capabilities of the optical mouse sensor, where partial images of the coin under analysis are compared with a set of partial reference coin images for matching. Results show that, using vision alone, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user.
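
A hypothetical sketch of the accept/reject rule implied by this matching process follows: the partial sensor image is compared against stored reference patches, and the coin is accepted if any reference correlates above a threshold. The threshold, function name and data layout are assumptions, not the paper's procedure.

```python
import cv2
import numpy as np

def is_genuine(partial_img, reference_patches, threshold=0.85):
    """partial_img: grayscale image from the mouse sensor;
    reference_patches: list of grayscale reference patches from genuine coins."""
    best = 0.0
    for ref in reference_patches:
        # Normalized cross-correlation over all alignments of ref inside partial_img.
        if ref.shape[0] <= partial_img.shape[0] and ref.shape[1] <= partial_img.shape[1]:
            score = cv2.matchTemplate(partial_img, ref, cv2.TM_CCOEFF_NORMED).max()
            best = max(best, float(score))
    return best >= threshold, best
```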

Relevance: 60.00%

Publisher:

Abstract:

In this study, the accuracy and performance of a light detection and ranging (LIDAR) sensor were evaluated using distance and reflection measurements, aiming to detect and discriminate maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile; the current system uses a combination of the two indices, height and reflection. The experiment was carried out in a maize field at growth stage 12–14, at 16 different locations selected to represent the widest possible density range of the weeds Echinochloa crus-galli (L.) P. Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), the actual heights of the plants were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured height and actual plant height (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor, permitting the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection which, in combination with other principles such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
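
The validation step described above amounts to a two-feature binary classifier. The snippet below is a sketch of that idea with scikit-learn, assuming a hypothetical CSV of per-point LIDAR height, reflection value and a vegetation label; it is not the statistical software used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data file: columns are LIDAR height, reflection value, label.
data = np.loadtxt("lidar_samples.csv", delimiter=",", skiprows=1)
X = data[:, :2]   # LIDAR height and reflection
y = data[:, 2]    # 1 = vegetation present, 0 = bare soil

model = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```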

Relevance: 60.00%

Publisher:

Abstract:

The goal of this work was the commissioning of a new machine-vision-based welding robot station and the development of the system so that the required arc-time ratio of at least 70% within the programs could be achieved. The commissioning of the station aimed to make the welding work more efficient and to ease production pressures on the panel line of part manufacturing. The welding station also improves workers' ergonomics and the overall image of the work. The purpose of the literature part was to study robotic welding and its advantages, to compare existing programming methods with the new programming approach, and to examine the accuracy of robots. The latter part of the literature review looked at machine vision, its hardware and its applications. The literature study showed that the new programming method is a clear improvement for this application. The practical part presents the machine-vision-based welding robot station from its design through commissioning to production tests, and also presents the station as part of the whole automation project. In addition, the operation of the system and the macros on which the robot's operation is based are described. Finally, similar systems around the world and their technologies were reviewed and compared with this new system. The target arc-time ratio of 70% within the programs was reached already during a short trial period. The commissioned machine-vision-based welding robot station is, to our knowledge, the first welding station in the world that does not need to be programmed in advance. The time spent on programming is minimal, because the operator programs the robot while it welds the product. The experimental part showed that the commissioned machine-vision-based welding robot station works as designed. During commissioning, many areas for improvement were identified that could make the system even more efficient.

Relevance: 60.00%

Publisher:

Abstract:

The food industry, being less profitable than other sectors, has fallen behind in the development of automation. Small and medium-sized companies in particular have been reluctant to invest in automation. Many applications that could be automated have not been implemented and are still carried out manually. Automation companies have recognized this potential in the food industry, but the processes have been too difficult to automate competitively. With the development of new technologies, however, the situation has changed and the food industry is full of opportunities.

Relevance: 60.00%

Publisher:

Abstract:

The goal of this master's thesis was to develop quality assurance equipment for tray-type paperboard packages. The literature part begins by presenting the manufacturing process of tray-type paperboard packages, and then moves on to quality issues, the most important of which are the quality deviations occurring in the manufacture of paperboard trays and machine vision. The research part first presents the paperboard tray production line developed at Lappeenranta University of Technology. Based on a list of requirements, an automatic quality control device suitable for that line is then designed, including a transfer device for the paperboard trays. The design process started with test imaging of paperboard trays using different cameras and lighting methods, on the basis of which the machine vision equipment was selected. Concept drawings of the functions were then created, from which the actual design was developed. The result of the work is a plan for building an automatic, machine-vision-based quality control device.

Relevance: 60.00%

Publisher:

Abstract:

This thesis researches automatic traffic sign inventory and condition analysis using machine vision and pattern recognition methods. Automatic traffic sign inventory and condition analysis can be used for more efficient road maintenance, for improving maintenance processes, and for enabling intelligent driving systems. Automatic traffic sign detection and classification has been researched before from the viewpoint of self-driving vehicles, driver assistance systems, and the use of signs in mapping services. Machine vision based inventory of traffic signs consists of detection, classification, localization, and condition analysis of traffic signs. The performance of the produced machine vision system is estimated with three datasets, two of which were collected for this thesis. Based on the experiments, almost all traffic signs can be detected, classified, and localized, and their condition analysed. In the future, the performance of the inventory system has to be verified in challenging conditions and the system has to be pilot tested.
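
As a hedged illustration of the classification stage (the thesis' own detectors and classifiers are not specified here), the sketch below describes candidate sign regions with OpenCV HOG features and labels them with a generic scikit-learn classifier. The window sizes, data variables and classifier choice are assumptions for illustration.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG descriptor over 64x64 candidate crops (winSize, blockSize, blockStride,
# cellSize, nbins) -- parameter values are illustrative.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def describe_candidate(roi_bgr):
    roi = cv2.resize(roi_bgr, (64, 64))
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

# Assuming hypothetical training data of cropped sign images and class labels:
# X = np.stack([describe_candidate(img) for img in sign_crops])
# clf = SVC(kernel="rbf").fit(X, labels)
# predicted_class = clf.predict([describe_candidate(new_candidate)])
```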