Abstract:
Image-Based Visual Servoing (IBVS) is a vision-based robotic control scheme. It uses only the visual information obtained from a camera to guide a robot from any pose to a desired one. However, IBVS requires the estimation of several parameters that cannot be obtained directly from the image. These range from the intrinsic camera parameters (which can be obtained from a prior camera calibration) to the distance along the optical axis between the camera and the visual features, i.e., the depth. This paper presents a comparative study of the performance of D-IBVS when the depth is estimated in three different ways using a low-cost RGB-D sensor such as the Kinect. The visual servoing system has been developed on ROS (Robot Operating System), a meta-operating system for robots. The experiments show that computing the depth value for each visual feature improves the system's performance.
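The role the per-feature depth Z plays in the classic IBVS control law can be sketched as follows. This is a minimal illustration using the standard point-feature interaction matrix, not code from the paper; the gain `lam` and the function names are our own:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard IBVS interaction (image Jacobian) matrix for a point
    feature at normalized image coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,     -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z,  1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command v = -lam * pinv(L) @ (s - s*).
    `features`/`desired` are lists of (x, y); `depths` holds one
    Z estimate per feature (e.g. from an RGB-D sensor)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

A per-feature depth estimate makes each 2x6 block of `L` accurate, which is why measuring Z for every feature (rather than assuming a constant) improves convergence.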
Abstract:
Traditional visual servoing systems do not address the tracking of moving objects. When these systems are used to track a moving object, depending on the object's velocity, the visual features can leave the image, causing the tracking task to fail. This occurs especially when the object and the robot are both at rest and the object then starts to move. In this work, we have employed a retina camera based on Address Event Representation (AER) in order to use events as the input to the visual servoing system. The events fired by the camera indicate pixel movement. Event-based visual information is processed only at the moment it occurs, reducing the response time of visual servoing systems when they are used to track moving objects.
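The event-driven idea can be sketched with a toy tracker that updates its estimate only when the sensor reports a pixel change, instead of processing whole frames. The event format `(x, y, timestamp, polarity)` and the smoothing update are illustrative assumptions, not the paper's implementation:

```python
class EventTracker:
    """Toy event-driven tracker: the feature estimate is refreshed only
    when an AER-style event arrives, so idle periods cost nothing."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha      # exponential-smoothing weight
        self.centroid = None    # current (x, y) estimate

    def on_event(self, x, y, timestamp, polarity):
        # timestamp/polarity kept to mirror the AER tuple; unused here
        if self.centroid is None:
            self.centroid = (float(x), float(y))
        else:
            cx, cy = self.centroid
            self.centroid = (cx + self.alpha * (x - cx),
                             cy + self.alpha * (y - cy))
        return self.centroid
```

Because `on_event` runs per event rather than per frame, latency is bounded by event arrival rather than the camera's frame period.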
Abstract:
New low-cost sensors and new free, open libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, and human detection and/or gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and track the fingers is proposed. This new method is based on point clouds from range (RGBD) images. It requires no visual markers, camera calibration, environment knowledge, or complex, expensive acquisition systems. Furthermore, the method has been implemented to create a human interface for moving a robot hand: the human hand is recognized and the movement of the fingers is analyzed, and the motion is then imitated by a Barrett hand using communication events programmed in ROS.
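A first step in such a pipeline is isolating the hand from the rest of the point cloud. One common marker-free heuristic, sketched here under the assumption that the hand is the object nearest the camera (the function name and band width are ours, not the paper's):

```python
import numpy as np

def segment_hand(points, depth_band=0.15):
    """Crude hand segmentation on an N x 3 point cloud (metres):
    keep every point whose depth (z) lies within `depth_band` of the
    closest point, assuming the hand is the nearest object."""
    z = points[:, 2]
    return points[z < z.min() + depth_band]
```

Finger tracking would then operate on this reduced cloud, which is far cheaper than processing the full range image.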
Abstract:
This paper shows how robotics is taught using a modular robot, and the educational results obtained in the Master's Degree in Automation and Robotics at the Escuela Politécnica Superior of the Universidad de Alicante. The article describes the results obtained with this modular robot, for both generic and specific competencies, in the Master's electronics, control, and programming courses. It presents the learning objectives for each of these, their application to teaching, and the educational results obtained. Notably, the results show that students displayed greater interest and engaged more in autonomous learning. To this end, the modular robot was built with tools that encourage this kind of teaching and learning, such as interactive communications for monitoring, changing, and adapting various control and power parameters of the robot.
Abstract:
This paper presents the design, construction, and programming of a modular robot for developing both generic and specific competencies in the electronics, control, and programming courses of the Master's in Automation and Robotics at the Escuela Politécnica Superior of the Universidad de Alicante. The paper describes the proposed modules and the learning objectives for each of them. One of the most important aspects of this study is the potential it offers for developing creativity and autonomous learning. To this end, a Bluetooth communication module was developed that allows various control and power parameters of the robot to be monitored, changed, and adapted online. This tool has also been introduced as part of the methodology in the Master's courses on Electromechanics and Automatic Control Systems. This report presents the results obtained during and upon completion of this work.
Abstract:
Sensing techniques are important for solving the problems of uncertainty inherent in intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force sensors, cannot obtain data relevant to the manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data are available. The approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
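A model-free deformation check of the kind described can be sketched by comparing the current surface cloud against a reference cloud captured at first contact; when the mean point displacement exceeds a threshold, an event is raised. The brute-force nearest-neighbour search, the threshold value, and the names are our illustrative assumptions:

```python
import numpy as np

def deformation_event(reference, current, threshold=0.005):
    """Model-free surface-deformation check on two N x 3 clouds (metres).
    For each current point, find the nearest reference point; report a
    deformation event when the mean displacement exceeds `threshold`."""
    # brute-force nearest neighbour: fine for small surface patches
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
    mean_disp = d.min(axis=1).mean()
    return mean_disp > threshold, mean_disp
```

A controller could subscribe to these events and relax the grip whenever the flag fires, without any material model of the object.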
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction in the two longest individual KINARM tasks.
Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
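The hierarchical task-selection strategy amounts to gating later (longer) tasks on the outcome of an earlier screening task. A minimal sketch of that control flow; the task names, scores, and callback signatures are hypothetical, not taken from the thesis:

```python
def hierarchical_assessment(tasks, run_task, is_impaired):
    """Run assessment tasks in order; stop early when a screening task
    shows no impairment, so the remaining (longer) tasks are skipped."""
    results = {}
    for name in tasks:
        score = run_task(name)       # administer the task, get its score
        results[name] = score
        if not is_impaired(name, score):
            break                    # early task looks normal: stop here
    return results
```

The time saving comes from the break: unimpaired subjects complete only the cheap screening task instead of the full battery.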
Abstract:
Wireless communication and information transmission have become an increasingly common reality in contemporary societies. At the professional level, the armed forces of many countries have found it expedient to modernize their equipment in order to increase efficiency and safety in certain tasks. To that end, the Portuguese Army acquired a robot (ROVIM) whose function is to perform reconnaissance and surveillance actions so as to gather information safely. The goal of this dissertation is to design and build an antenna for wireless control of the ROVIM robot. The technical specifications of this antenna require two operating modes, one with a wide beamwidth and another with a narrow beamwidth. To meet these goals, two antennas were designed and built: the first is a conventional Yagi-Uda antenna, and the second is an antenna with a new structure that allows adjustment of the gain and of the -3 dB beamwidth. The first antenna serves as the base model for the second, which introduces the novel control of the radiation characteristics. This control is made possible by introducing diodes, with their corresponding bias circuit, into the antenna structure. The antennas were first designed and simulated with the CST MWS simulation program to operate in the 2.4 GHz band. After construction, their radiation characteristics were measured in an anechoic chamber with a network analyzer, allowing the measured results to be compared with the simulated ones.
Abstract:
This paper describes the real-time global vision system for the robot soccer team the RoboRoos. It has a highly optimised pipeline that includes thresholding, segmenting, colour normalising, object recognition, and perspective and lens correction. It has a fast 'paint' colour calibration system that can calibrate in any face of the YUV or HSI cube. It also autonomously selects both an appropriate camera gain and colour gains from robot regions across the field to achieve colour uniformity. Camera geometry calibration is performed automatically from a selection of keypoints on the field. The system achieves a position accuracy of better than 15 mm over a 4 m × 5.5 m field, and orientation accuracy to within 1°. It processes 614 × 480 pixels at 60 Hz on a 2.0 GHz Pentium 4 microprocessor.
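The thresholding/classification stage of a pipeline like this is commonly implemented as a precomputed colour lookup table so that every pixel costs a single indexed read. A minimal sketch of that idea; the 32-level quantisation and names are our assumptions, not details from the paper:

```python
import numpy as np

def classify_pixels(frame_yuv, colour_table):
    """Colour-classification step of a global-vision pipeline: map each
    YUV pixel (uint8, H x W x 3) to a colour label via a lookup table
    indexed by the quantised Y, U, V channels (32 levels per channel)."""
    q = (frame_yuv >> 3).astype(np.intp)   # 256 levels -> 32 bins
    return colour_table[q[..., 0], q[..., 1], q[..., 2]]
```

The 'paint' calibration described in the abstract would then amount to writing labels into cells of `colour_table` as the operator brushes over sample pixels.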
Abstract:
The control and coordination of multiple mobile robots is a challenging task, particularly in environments with multiple, rapidly moving obstacles and agents. This paper describes a robust approach to multi-robot control, where robustness is gained from competency at every layer of robot control. The layers are: (i) a central coordination system (MAPS), (ii) an action system (AES), (iii) a navigation module, and (iv) a low-level dynamic motion control system. The multi-robot coordination system assigns each robot a role and a sub-goal. Each robot's action execution system then assumes the assigned role and attempts to achieve the specified sub-goal. The robot's navigation system directs the robot to specific goal locations while ensuring that the robot avoids any obstacles. The motion system maps the heading and speed information from the navigation system to force-constrained motion. This multi-robot system has been extensively tested and applied in the robot soccer domain using both centralized and distributed coordination.
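The role-assignment step performed by a central coordinator such as MAPS can be sketched as a cost minimisation over robot/role pairs. The greedy priority-order strategy and all names here are our illustration, not the paper's algorithm:

```python
def assign_roles(robots, roles, cost):
    """Greedy central coordination: walk roles in priority order and
    give each to the cheapest still-unassigned robot.
    `cost(robot, role)` might encode distance to the role's sub-goal."""
    assignment, free = {}, set(robots)
    for role in roles:
        best = min(free, key=lambda r: cost(r, role))
        assignment[best] = role
        free.remove(best)
    return assignment
```

Each robot's action execution system would then pursue the sub-goal attached to its assigned role, as described above.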
Abstract:
The GuRoo is a 1.2 m tall, 23 degree-of-freedom humanoid constructed at the University of Queensland for research into humanoid robotics. The key challenge being addressed by the GuRoo project is the development of appropriate learning strategies for control and coordination of the robot's many joints. The development of learning strategies is seen as a way to sidestep the inherent intricacy of modeling a multi-DOF biped robot. This paper outlines the approach taken to generate an appropriate control scheme for the joints of the GuRoo. The paper demonstrates the determination of local feedback control parameters using a genetic algorithm. The feedback loop is then augmented by a predictive modulator that learns a form of feed-forward control to overcome the irregular loads experienced at each joint during the gait cycle. The predictive modulator is based on the CMAC architecture. Results from tests on the GuRoo platform show that both systems provide improvements in stability and tracking of joint control.
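Genetic-algorithm tuning of feedback gains, as described above, can be sketched with a minimal real-valued GA. All operator choices and parameters here (elitism, blend crossover, Gaussian mutation, bounds) are illustrative assumptions, not those used on the GuRoo; `fitness` would in practice be a simulated tracking-error measure:

```python
import random

def ga_tune(fitness, dim=2, pop_size=20, generations=40,
            bounds=(0.0, 10.0), mut_sigma=0.5, seed=0):
    """Minimal real-valued genetic algorithm for tuning a gain vector.
    Keeps the better half each generation (elitism) and refills the
    population with blended, Gaussian-mutated children. Minimises
    `fitness`, e.g. joint tracking error."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            children.append([
                max(lo, min(hi, w * x + (1 - w) * y + rng.gauss(0, mut_sigma)))
                for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)
```

For joint control, `dim` would match the number of gains per joint (e.g. a PD pair), with the best individual's gains loaded into the local feedback loop.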