885 results for Visual robot control
Abstract:
Parkinson disease is mainly characterized by the degeneration of dopaminergic neurons in the central nervous system, including the retina. Several interrelated molecular mechanisms underlying Parkinson disease-associated neuronal death have been put forward for the brain, including oxidative stress and mitochondrial dysfunction. Systemic injection of the proneurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) into monkeys elicits the appearance of a parkinsonian syndrome, including morphological and functional impairments in the retina. However, the intracellular events leading to derangement of dopaminergic and other retinal neurons in MPTP-treated animal models have not so far been investigated. Here we have used a comparative proteomics approach to identify proteins differentially expressed in the retina of MPTP-treated monkeys. Proteins were solubilized from the neural retinas of control and MPTP-treated animals, labelled separately with two different cyanine fluorophores and run pairwise on 2D DIGE gels. Out of >700 protein spots resolved and quantified, 36 were found to exhibit statistically significant differences in their expression levels, of at least ±1.4-fold, in the parkinsonian monkey retina compared with controls. Most of these spots were excised from preparative 2D gels, trypsinized and subjected to MALDI-TOF MS and LC-MS/MS analyses. The data obtained were used for protein sequence database interrogation, and 15 different proteins were successfully identified, of which 13 were underexpressed and 2 overexpressed. These proteins are involved in key cellular functional pathways such as glycolysis and mitochondrial electron transport, neuronal protection against stress and survival, and phototransduction processes. These functional categories underscore that alterations in energy metabolism, neuroprotective mechanisms and signal transduction are involved in MPTP-induced neuronal degeneration in the retina, similar to the mechanisms thought to underlie neuronal death in the Parkinson's disease brain and in neurodegenerative diseases of the retina proper.
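The quantitative filter described above (spots changing by at least ±1.4-fold with statistical significance) can be illustrated with a short sketch. This is not the authors' pipeline: the spot volumes here are simulated, and a plain t-test stands in for the statistics applied by DIGE analysis software.

```python
# Minimal illustration of DIGE-style differential-expression filtering:
# keep spots with |fold change| >= 1.4 and a significant t-test.
# Simulated data; real DIGE software applies its own statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_spots = 700
control = rng.lognormal(mean=10, sigma=0.3, size=(n_spots, 4))  # 4 control gels
treated = rng.lognormal(mean=10, sigma=0.3, size=(n_spots, 4))  # 4 MPTP gels

fold = treated.mean(axis=1) / control.mean(axis=1)
# Express as signed fold change: +2 means doubled, -2 means halved.
signed_fold = np.where(fold >= 1, fold, -1 / fold)
pvals = stats.ttest_ind(treated, control, axis=1).pvalue

differential = (np.abs(signed_fold) >= 1.4) & (pvals < 0.05)
print(f"{differential.sum()} spots pass the +/-1.4-fold significance filter")
```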
Abstract:
Humans and machines have shared the same physical space for many years. For robots to share that space with us, we want them to behave like human beings: this facilitates their social integration and their interaction with humans, and gives rise to intelligent behavior. To achieve this goal, we need to understand how human behavior is generated, analyze the tasks carried out by our nervous system, and understand how those tasks relate to one another. Then, and only then, can we implement these mechanisms in robotic beings. In this study, we propose a model of competencies, based on the human neuroregulatory system, for the analysis and decomposition of behavior into functional modules. Using this model allows us to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement behavior in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and physical devices of different natures. The main result of this study has been a model of competencies that is being used to build robotic systems capable of displaying behaviors similar to humans while taking into account the specific characteristics of robots.
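The abstract does not specify how the functional modules are organized internally; purely as an illustration of decomposing behavior into separable competencies, the toy sketch below (with invented module names and a simple priority-based arbitration rule) wires two competencies into an autonomous-movement stack.

```python
# Illustrative only: a toy decomposition of behavior into functional
# modules, loosely inspired by the competency-model idea in the abstract.
# Module names and the arbitration rule are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Percept:
    obstacle_distance: float  # meters to nearest obstacle ahead

class AvoidObstacles:
    def propose(self, p: Percept):
        # Higher-priority reflex: turn away when something is close.
        return ("turn", 0.5) if p.obstacle_distance < 0.5 else None

class Wander:
    def propose(self, p: Percept):
        # Default competency: keep moving forward.
        return ("forward", 0.2)

def arbitrate(percept, modules):
    # First module with an opinion wins (subsumption-style priority).
    for m in modules:
        action = m.propose(percept)
        if action is not None:
            return action

modules = [AvoidObstacles(), Wander()]
print(arbitrate(Percept(obstacle_distance=0.3), modules))  # ('turn', 0.5)
print(arbitrate(Percept(obstacle_distance=2.0), modules))  # ('forward', 0.2)
```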
Abstract:
Grady distinguishes two main types of metaphor in order to resolve the controversies stemming from the conceptual theory of metaphor: correlation-based metaphors and resemblance metaphors. In "correlation-based metaphors", the source domain is sensory-motor while the target domain is not. By contrast, "resemblance metaphors" originate in a physical or conceptual perception common to both domains, through the association of concepts with common features. Primary metaphors are the minimal units of correlation-based metaphors; they are inherent in human nature, the result of the nature of our brain, our body and the world we inhabit. We acquire them automatically and cannot avoid them. Furthermore, as bodily experiences are universal, so are primary metaphors. In this paper, I argue that primary metaphors manifest themselves visually through scene-setting techniques such as composition, framing, camera movement or lighting. Film-makers can use the different aspects of mise-en-scène metaphorically in order to express abstract notions like evil, importance, control, relationship or confusion. Such visual manifestations, as also occurs with their verbal equivalents, frequently go unnoticed, or have been used so often that they have become clichés. But the important thing to bear in mind is that their origin lies in a primary metaphor, and it is because of this origin that these kinds of film-making strategies have been so expressively successful.
Abstract:
Depth perception is essential in many robot manipulation, visual control and navigation tasks. Time-of-Flight (ToF) cameras generate range images that provide depth measurements in real time. However, the distance parameter computed by these cameras depends strongly on the integration time configured in the sensor and on the modulation frequency used by the illumination system they incorporate. This article presents a methodology for the adaptive adjustment of the integration time, together with an experimental analysis of the behavior of a ToF camera when the modulation frequency is modified. This method has been successfully tested in visual control algorithms with an 'eye-in-hand' architecture, where the sensing system consists of a ToF camera. Moreover, the same methodology can be applied in other working scenarios.
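The abstract does not give the adjustment law itself; the sketch below shows one plausible closed-loop scheme, assuming a camera driver with hypothetical get_amplitude_image() and set_integration_time() methods. The idea is to keep the mean signal amplitude inside a usable band: too little amplitude yields noisy depth, too much saturates pixels.

```python
# Sketch of adaptive integration-time control for a ToF camera.
# The camera API (get_amplitude_image, set_integration_time) is
# hypothetical; the article's actual adjustment law is not shown here.
import numpy as np

AMP_LOW, AMP_HIGH = 200.0, 1800.0   # assumed usable amplitude band
T_MIN_US, T_MAX_US = 50, 4000       # assumed integration-time limits

def adapt_integration_time(camera, t_us):
    amp = np.mean(camera.get_amplitude_image())
    if amp < AMP_LOW:
        t_us = min(int(t_us * 1.5), T_MAX_US)   # too dark: integrate longer
    elif amp > AMP_HIGH:
        t_us = max(int(t_us * 0.66), T_MIN_US)  # near saturation: shorten
    camera.set_integration_time(t_us)
    return t_us
```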
Abstract:
This project aims to design an embedded system capable of performing image processing and guiding a hexacopter. The hexacopter will carry on board a camera, as well as the batteries and all the hardware needed to process the visual information obtained and to implement the controller required for its guidance. OpenCV is a library of image-processing primitives that makes it possible to build state-of-the-art Computer Vision algorithms. OpenCV was originally developed by Intel in 1999 to showcase the processing power of Intel microprocessors, so most of the library is optimized to run on these processors, including the MMX and SSE extensions. http://en.wikipedia.org/wiki/OpenCV It is currently widely used both by the scientific community and by industry to develop new algorithms for desktop computers and, above all, for embedded systems (mobile robots, smart cameras, inspection systems, surveillance systems, etc.). Owing to its great popularity, the library has been compiled for several traditional operating systems (Windows, Linux, Mac), for mobile devices (Android, iOS) and for embedded systems based on different types of processors (mainly ARM). - iPhone port: http://www.eosgarden.com/en/opensource/opencv-ios/overview/ - Android port: http://opencv.willowgarage.com/wiki/AndroidExperimental An example of an embedded platform is the Zedboard (http://www.zedboard.org/), which represents the state of the art in embedded devices based on the ARM Cortex architecture. The board includes a dual-core Cortex-A9 processor together with a large number of peripherals and options for connecting third-party expansion boards, which makes it possible to develop applications in very different fields of Computer Vision.
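As a concrete taste of the kind of on-board processing described, the snippet below uses OpenCV to grab frames from a camera and extract edges, the sort of lightweight pipeline that runs comfortably on an ARM board such as the Zedboard. It is shown with the Python bindings for brevity (the same calls exist in the C++ API); the camera index and Canny thresholds are placeholders.

```python
# Minimal OpenCV capture-and-process loop.
import cv2

cap = cv2.VideoCapture(0)           # placeholder camera index
if not cap.isOpened():
    raise RuntimeError("camera not available")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)    # placeholder thresholds
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```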
Abstract:
New low-cost sensors and new open, free libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and to track the fingers is proposed. This new method is based on point clouds from RGBD range images. It does not require visual markers, camera calibration, knowledge of the environment, or complex, expensive acquisition systems. Furthermore, this method has been implemented to create a human interface for moving a robot hand. The human hand is recognized and the movement of the fingers is analyzed. Afterwards, it is imitated by a Barrett hand, using communication events programmed in ROS.
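The paper's event-based ROS interface is not reproduced here; the sketch below only illustrates the general shape of such a bridge, publishing tracked finger joint angles as a sensor_msgs/JointState message to a hypothetical Barrett hand command topic.

```python
# Illustrative ROS bridge: publish tracked finger angles to a hand
# controller. The topic name is hypothetical; the paper's actual
# event-based interface is not reproduced here.
import rospy
from sensor_msgs.msg import JointState

def publish_fingers(get_angles):
    """get_angles: callable returning {joint_name: angle_rad} per frame."""
    rospy.init_node('hand_imitation_bridge')
    pub = rospy.Publisher('/barrett_hand/joint_cmd', JointState, queue_size=1)
    rate = rospy.Rate(30)  # assumed tracker frame rate
    while not rospy.is_shutdown():
        angles = get_angles()          # latest tracked finger angles
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = list(angles.keys())
        msg.position = list(angles.values())
        pub.publish(msg)
        rate.sleep()
```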
Abstract:
This paper presents a method for the fast calculation of a robot's egomotion using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). The method uses optical flow to determine whether the robot has undergone a movement. If so, visual features that do not satisfy several criteria are deleted, and then the egomotion is calculated. The proposed method thus improves the efficiency of the whole process, because not all the data are processed. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. Additionally, a study of different visual detectors and descriptors has been conducted to identify which of them are more suitable for the SLAM problem. Finally, a navigation method is described that uses the map obtained from the SLAM solution.
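A minimal version of the movement gate described above (run optical flow first, and only compute egomotion when the robot has actually moved) could look like the following, using OpenCV's pyramidal Lucas-Kanade tracker; the displacement threshold is an assumed tuning value, not taken from the paper.

```python
# Sketch of the "did the robot move?" gate: track sparse features with
# Lucas-Kanade optical flow and compare mean displacement to a threshold.
import cv2
import numpy as np

def robot_moved(prev_gray, curr_gray, thresh_px=1.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return False
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return False
    disp = np.linalg.norm((nxt[good] - pts[good]).reshape(-1, 2), axis=1)
    return float(disp.mean()) > thresh_px
```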
Abstract:
This work shows how robotics is taught using a modular robot, and the educational results obtained in the Master's Degree in Automation and Robotics at the Polytechnic School of the University of Alicante. The article describes the results obtained with the use of this modular robot, in both generic and specific competencies, in the electronics, control and programming courses of the Master's program. The learning objectives for each of these areas, their application to teaching, and the educational results obtained are presented. Among the results of the study, it is worth noting that students showed greater interest and improved their autonomous learning. To this end, the modular robot was built with tools that encourage this type of teaching and learning, such as interactive communications to monitor, change and adapt various control and power parameters of the robot.
Abstract:
This work presents the design, construction and programming of a modular robot for developing both generic and specific competencies in the electronics, control and programming courses of the Master's Degree in Automation and Robotics at the Polytechnic School of the University of Alicante. The proposed modules are described, together with the learning objectives for each of them. One of the most important aspects of this study is the potential development of creativity and autonomous learning. To this end, a Bluetooth communication module will be developed that serves to monitor, change and adapt on-line various control and power parameters of the robot. In addition, this tool has been introduced as part of the methodology in the Electromechanics and Automatic Control Systems courses of the Master's program. This report presents the different results obtained during and at the conclusion of this work.
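The abstract does not fix a wire protocol for the Bluetooth module; as an illustration of the kind of on-line tuning it enables, the sketch below assumes a simple line-based text protocol (e.g. 'SET KP 1.2') carried over a Bluetooth serial link, with pyserial on the PC side.

```python
# Hypothetical PC-side tuning client for the robot's Bluetooth module.
# Assumes a line-based protocol such as "SET KP 1.2\n"; the actual
# firmware protocol is not described in the abstract.
import serial

def set_parameter(port, name, value):
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(f"SET {name} {value}\n".encode("ascii"))
        return link.readline().decode("ascii").strip()  # e.g. "OK"

# Example: raise the proportional gain of the speed controller.
# print(set_parameter("/dev/rfcomm0", "KP", 1.2))
```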
Abstract:
During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically as the robot moves to adapt its path and correctly grasp objects, because the camera is mounted on the robot's end effector. For this reason, in this type of environment a visual recognition system must be implemented that recognizes objects and obtains their positions in the scene automatically and autonomously. Furthermore, in industrial environments, all objects manipulated by robots are made of the same material and cannot be differentiated by features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system based on a specific distributed client-server architecture has been proposed for recognizing industrial objects that lack these appearance features. Our system has been implemented to overcome recognition problems that arise when objects can only be recognized by geometric shape and the simplicity of those shapes can create ambiguity. Finally, some real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
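The abstract does not detail the client-server protocol; the skeleton below only conveys the split, with the robot-side client sending a serialized shape descriptor to a recognition server that replies with an object label. The host name, port, JSON framing and descriptor format are all assumptions for illustration.

```python
# Skeleton of a recognition client in a client-server split: the robot
# side sends a geometric descriptor, the server replies with a label.
# Host, port and the newline-delimited JSON framing are assumptions.
import json
import socket

def classify_remote(descriptor, host="recognition-server", port=5005):
    payload = json.dumps({"descriptor": descriptor}).encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload + b"\n")           # one message per line
        reply = sock.makefile().readline()      # e.g. '{"label": "flange"}'
    return json.loads(reply)["label"]
```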
Abstract:
Wireless communication and information transmission has become a reality increasingly relied upon by contemporary societies. At the professional level, the armed forces of many countries have found it advantageous to modernize their equipment in order to increase efficiency and safety in certain tasks. To that end, the Portuguese Army acquired a robot (ROVIM) whose function is to carry out reconnaissance and surveillance actions so as to obtain information safely. The objective of this dissertation is to design and build an antenna for wireless control of the ROVIM robot. The technical specifications of this antenna require two modes of operation, one with a wide beamwidth and the other with a narrow beamwidth. To achieve these objectives, two antennas were designed and built. The first is a conventional Yagi-Uda antenna, and the second is an antenna with a new structure that allows the gain and the -3 dB beamwidth to be adjusted. The first antenna serves as the base model for the second, which introduces the innovation of controlling the radiation characteristics. This control is made possible by introducing diodes, with their respective bias circuit, into the antenna structure. Initially, the antennas were designed and simulated using the CST MWS simulation program so as to operate in the 2.4 GHz band. After the antennas were built, their radiation characteristics were measured in an anechoic chamber with a network analyzer, allowing the measured results to be compared with the simulated ones.
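One small computational step implicit in comparing the measured and simulated patterns is extracting the -3 dB (half-power) beamwidth. A minimal sketch, assuming the pattern is available as sampled (angle, gain) pairs with a single main lobe:

```python
# Extract the -3 dB (half-power) beamwidth from a sampled radiation
# pattern. Assumes angles_deg/gain_dbi are measured samples, e.g.
# exported from the anechoic-chamber setup or from CST MWS, and that
# the scan covers a single main lobe.
import numpy as np

def beamwidth_3db(angles_deg, gain_dbi):
    angles_deg = np.asarray(angles_deg, dtype=float)
    gain_dbi = np.asarray(gain_dbi, dtype=float)
    level = gain_dbi.max() - 3.0
    above = angles_deg[gain_dbi >= level]   # angles inside the -3 dB contour
    return above.max() - above.min()

# Example with a synthetic main lobe centered at 0 degrees:
ang = np.linspace(-90, 90, 361)
pat = 10 - (ang / 30.0) ** 2                # toy parabolic lobe, dBi
print(f"-3 dB beamwidth: {beamwidth_3db(ang, pat):.1f} degrees")
```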
Abstract:
The objective of this thesis is to describe and implement in software a model of an autonomous rover for use in agriculture. The choice of this topic stems from the fact that the CASY laboratory of the University of Bologna was commissioned to build a robot that can help small agricultural entrepreneurs compete with larger ones. The functions the robot will have, once completed, will range from mowing grass to spraying fertilizer on fruit trees. This thesis concerns the design of the navigation system. First, the kinematic model is introduced, in particular the differential-drive configuration to which the rover belongs. Next, a control system based on static feedback linearization is developed. Once the model and the control system are complete, trajectory generation is addressed: several algorithms for following a trajectory defined by waypoints are analyzed and compared. Finally, an algorithm is presented for navigating within a field of rows of fruit trees. The only external information available in this context comes from front and side distance sensors, since a GPS would be too imprecise for the purpose. This thesis constitutes the basis for further developments of the project, in particular a supervision program that establishes which motion mode to apply, and specific programs for the various agricultural functions of the rover.
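The two ingredients named above, the differential-drive (unicycle) kinematic model and static feedback linearization, can be sketched compactly. The sketch below linearizes the dynamics of a point B placed a distance b ahead of the wheel axle, a standard construction; the gain, offset and timestep are assumed tuning values, not taken from the thesis.

```python
# Sketch: unicycle (differential-drive) kinematics with static feedback
# linearization of a point B at distance b ahead of the wheel axle.
import math

b = 0.2      # lookahead offset of the linearized point [m] (assumed)
K = 1.0      # proportional gain on the linearized dynamics (assumed)
DT = 0.02    # integration step [s] (assumed)

def step(state, target):
    x, y, th = state
    # Position of the control point B and its desired velocity.
    bx, by = x + b * math.cos(th), y + b * math.sin(th)
    ux, uy = K * (target[0] - bx), K * (target[1] - by)
    # Invert the input transformation: [dbx, dby] = T(th) [v, w].
    v = ux * math.cos(th) + uy * math.sin(th)
    w = (-ux * math.sin(th) + uy * math.cos(th)) / b
    # Integrate the unicycle model.
    return (x + DT * v * math.cos(th),
            y + DT * v * math.sin(th),
            th + DT * w)

state = (0.0, 0.0, 0.0)
for _ in range(1000):
    state = step(state, target=(1.0, 1.0))   # drive point B to a waypoint
print(tuple(round(s, 3) for s in state))
```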
Abstract:
Among the groups of oceanic microfossils, only Radiolaria occur in abundances and preservation states sufficient to provide biostratigraphic control for restricted intervals within sediments recovered in Hole 1223A. The distribution of these microfossils has been divided into four major intervals, A-D. Radiolaria distribution Interval A occupies the depth range 0-3.0 meters below seafloor (mbsf), where the abundance of specimens is very low and preservation is poor. Radiolaria distribution Interval B occupies the depth range 3.02-7.1 mbsf. Radiolaria in Interval B are locally rare to abundant and well preserved, and assemblages range in age from pure early Eocene to early Eocene admixed with late Neogene taxa. Radiolaria distribution Interval C occupies the depth range 7.1-36.99 mbsf and is characterized by sediments either barren of microfossils or containing extremely rare early Eocene specimens. Radiolaria distribution Interval D occupies the depth range 36.99-38.7 mbsf (base of the recovered sedimentary section), where early Eocene Radiolaria are present in rare to common frequencies, but opal-A to opal-CT recrystallization has degraded the preservation state. The late Neogene assemblage of Radiolaria distribution Interval B is dated at 1.55-2.0 Ma, based on occurrences of Eucyrtidium matuyamai, Lamprocyclas heteroporos, and Theocorythium trachelium trachelium. The early Eocene assemblage of Radiolaria distribution Intervals B and D is somewhat problematically assigned to the Buryella clinata Zone.