970 results for visual pose control


Relevance:

100.00%

Publisher:

Abstract:

In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an on-board camera, while the second is based on the detection and 3D reconstruction of the position of the UAV from an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (precision in the estimation and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.
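The abstract does not detail how the external camera system reconstructs the UAV's 3D position. As an illustration only, a standard linear (DLT) triangulation from two calibrated ground cameras can be sketched as below; the projection matrices, intrinsics, and test point are hypothetical, not taken from the paper.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates of the UAV."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean

def project(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: one at the origin, one with a 1 m baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])  # hypothetical UAV position
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless observations the recovered point matches the true one; with real detections, the same least-squares form absorbs pixel noise.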

Relevance:

100.00%

Publisher:

Abstract:

The purpose of the current study was to understand how visual information about an ongoing change in obstacle size is used during obstacle avoidance for both lead and trail limbs. Participants were required to walk in a dark room and to step over an obstacle edged with a special tape visible in the dark. The obstacle's dimensions were manipulated one step before obstacle clearance by increasing or decreasing its size. Two increasing and two decreasing obstacle conditions were combined with seven control static conditions. Results showed that information about the obstacle's size was acquired and used to modulate trail limb trajectory, but had no effect on lead limb trajectory. The adaptive step was influenced by the time available to acquire and process visual information. In conclusion, visual information about obstacle size acquired during lead limb crossing was used in a feedforward manner to modulate trail limb trajectory.

Relevance:

100.00%

Publisher:

Abstract:

The retina plays an essential role in the functioning of the vertebrate circadian system: it senses the ambient lighting conditions that adjust the internal clock to the external photoperiod through a non-visual circuit. This circuit is independent of the image-forming pathway and involves the retinal ganglion cells (RGCs) that project to several non-visual structures of the brain; this pathway regulates the pupillary reflex, the synchronization of daily activity rhythms, sleep, and the suppression of pineal melatonin. The retina also contains an autonomous clock that generates self-sustained daily rhythms in various biochemical and physiological functions, which gives it the capacity to keep time and anticipate, in its physiology, the light changes over the day-night cycle. This laboratory has demonstrated for the first time that chicken RGCs possess endogenous oscillators that generate daily variations in the biosynthesis of phospholipids (Guido et al., J Neurochem. 2001; Garbarino et al., J Neurosci Res. 2004a) and of the hormone melatonin, with maximum levels during the day (Garbarino et al., J Biol Chem 2004b). Moreover, primary cultures of RGCs respond to light through a biochemical phototransduction cascade similar to that of invertebrates, involving activation of the enzyme phospholipase C (PLC) (Contin et al., FASEB J 2006). These cultures were obtained at very early embryonic stages, when only the RGCs are postmitotic and mostly mature. At these stages, the cultures express ganglion-cell specification markers (pax6, brn3), the Gq protein, and the photopigments melanopsin and cryptochromes, with high homology to markers described for invertebrate rhabdomeric photoreceptors (Contin et al., 2006).
We have recently begun to investigate light perception in GUCY1* chickens, a model of blindness in which the animals lack functional cone and rod photoreceptor cells. Preliminary results suggest that the inner retina, and potentially the RGCs, of these animals retain the capacity to respond to light, regulating the pupillary reflex and synchronizing daily feeding rhythms. The convergence of oscillators and photopigments in the RGC population could contribute to the temporal control of the organism's physiology and to the regulation of non-visual functions. The objectives of this project are: a) to investigate the role of RGCs in the circadian system by studying: i- their ability to synthesize melatonin and its regulation by light and dopamine; ii- their intrinsic photoreceptive capacity, investigating the presence of photopigments and components of the phototransduction cascade, chiefly the phosphoinositide pathway and PLC activation, through molecular, biochemical, and pharmacological assays; b) to extend these studies to primary cultures of immunopurified RGCs, measuring the light response in melatonin synthesis, the levels of the second messengers Ca2+ and cyclic AMP, the induction of immediate-early genes, and the regulation of the activity of NAT, a key enzyme in melatonin synthesis; and c) to investigate light perception in GUCY1* (blind) chickens across different non-visual functions such as the pupillary reflex, the synchronization of daily feeding rhythms, melatonin synthesis, and gene expression in animals exposed to light stimulation of different intensities and wavelengths. These studies will make it possible to build the action spectrum of the light response in blind chickens in order to identify the photopigment(s) involved in this phenomenon.
This project will deepen our knowledge of the non-visual photoreceptive capacity of the inner retina, and particularly of the RGCs, of the nature of the biochemical cascade operating in them, and of the mechanisms of regeneration of the chromophore used.

Relevance:

100.00%

Publisher:

Abstract:

A prediction mechanism is necessary in human visual-motor control to compensate for delays in the sensory-motor system. A previous study discussed "proactive control" as one example of human predictive function, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

Relevance:

100.00%

Publisher:

Abstract:

Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects, as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control of Unmanned Aerial Vehicles (UAVs). The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed, employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on-board an experimental quadrotor UAV and shown not only to successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
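The core of the tau strategy is easy to sketch. With time-to-contact tau = z/zdot, holding its derivative at a constant k with 0 < k < 0.5 makes the vertical speed vanish at touchdown; differentiating tau = z/zdot and setting taudot = k gives the required acceleration a = zdot**2 * (1 - k) / z. The minimal simulation below illustrates this idea only; the gain, time step, and initial conditions are made up, not taken from the paper.

```python
def tau(z, zdot):
    """Time-to-contact; negative while descending (z > 0, zdot < 0)."""
    return z / zdot

k = 0.4            # constant tau-dot; 0 < k < 0.5 gives a soft touchdown
dt = 0.001         # integration step [s]
z, zdot = 10.0, -1.0   # initial height [m] and vertical speed [m/s]

while z > 0.01:
    t_go = tau(z, zdot)
    # From d/dt (z/zdot) = k:  a = (1 - k) * zdot / tau = zdot**2 * (1 - k) / z
    a = (1.0 - k) * zdot / t_go
    zdot += a * dt
    z += zdot * dt
# At loop exit the vehicle is essentially at the ground with near-zero speed.
```

Larger k produces a brisker approach, which matches the paper's point that the user can choose the dynamic characteristics of the descent.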

Relevance:

100.00%

Publisher:

Abstract:

Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown to not only successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.

Relevance:

100.00%

Publisher:

Abstract:

A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy, and velocity. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to produce complex machined shapes in certain non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, where image information is used in the control loop to guide the robot tool in the machining task. In addition, this controller eliminates the chaotic joint behavior which appears during tracking of the quasi-repetitive trajectories required in machining processes. Moreover, this robot tool can be coupled to a manipulator robot to form a multi-robot platform for complex manufacturing tasks. The robot tool could thus perform a machining task on a piece grasped from the workspace by a manipulator robot, and that manipulator could in turn be guided by visual information given by the robot tool, yielding an intelligent multi-robot platform controlled by a single camera.

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this thesis is to provide Unmanned Aerial Vehicles (UAVs) with an additional vision-based source of information, extracted by cameras located either on-board or on the ground, in order to allow UAVs to perform visually guided tasks, such as landing or inspection, especially in situations where GPS information is not available, where GPS-based position estimation is not accurate enough for the task at hand, or where payload restrictions do not allow the incorporation of additional sensors on-board. This thesis covers three of the main computer vision areas: visual tracking and visual pose estimation, which form the basis of the third, visual servoing, which in this work focuses on using visual information to control UAVs. In this sense, the thesis presents novel solutions for tracking objects with cameras on-board UAVs, for estimating the pose of UAVs from visual information collected by cameras located either on the ground or on-board, and for applying these techniques to different problems, such as visual tracking for autonomous aerial refuelling or vision-based landing, among others. The computer vision techniques presented here are proposed to solve some of the problems frequently found when addressing vision-based tasks in UAVs, such as obtaining robust estimations at real-time frame rates, and problems caused by vibrations or 3D motion.
All the proposed algorithms have been tested with real-image data in on-line and off-line tests. Different evaluation mechanisms have been used to analyze their performance, including simulated data, images from real-flight tests, publicly available datasets, manually generated ground-truth data, accurate position estimations using a VICON system and a robotic cell, and comparison with state-of-the-art algorithms. Results show that the proposed computer vision algorithms perform comparably to, or even better than, state-of-the-art algorithms, obtaining robust estimations at real-time frame rates. This shows that the proposed techniques are fast enough for vision-based control tasks and have low computational overhead for vision systems. The performance of the proposed vision algorithms has proven appropriate for the different applications explored: autonomous aerial refuelling, landing, and UAV state estimation.

Relevance:

100.00%

Publisher:

Abstract:

In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through an inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is in fact controlled indirectly, in open loop, and the accuracy of grip position control depends directly on the accuracy of the available kinematic model. In this report, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed in order to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm, and experimental results are shown.
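As an illustration of the kinematic idea, the sketch below servos a planar 4-DOF redundant arm by mapping a pose error (here taken directly as a Cartesian position error, standing in for the image-space difference between measured and reference poses) to joint velocities through the Jacobian pseudoinverse. The link lengths, gain, and target are hypothetical, not taken from the report.

```python
import numpy as np

# Planar 4-DOF arm; link lengths in metres (illustrative values).
L = np.array([0.3, 0.25, 0.2, 0.15])

def fk(q):
    """End-effector (x, y) of the planar chain for joint angles q."""
    angles = np.cumsum(q)          # absolute angle of each link
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def jacobian(q):
    """2x4 position Jacobian of the planar chain."""
    angles = np.cumsum(q)
    J = np.zeros((2, 4))
    for i in range(4):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(angles[i:]))
    return J

# Kinematic servo loop: drive the measured pose toward the reference pose.
q = np.array([0.4, -0.3, 0.2, 0.1])    # initial joint configuration
target = np.array([0.5, 0.4])          # reference pose (from the image)
for _ in range(200):
    err = target - fk(q)               # error in the shared (image) space
    dq = np.linalg.pinv(jacobian(q)) @ err  # pinv = minimum-norm resolution
    q += 0.1 * dq                      # small step; redundancy absorbed by pinv
```

The pseudoinverse picks the minimum-norm joint motion among the infinitely many that realize the Cartesian correction, which is the simplest way to resolve the arm's redundancy.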

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new framework based on optimal control for defining dynamic visual controllers that guide any serial-link structure. The proposed general method employs optimal control to obtain the desired behaviour in the joint space based on a specified cost function which determines how the control effort is distributed over the joints. The approach allows the development of new direct visual controllers for any mechanical joint system with redundancy. Finally, the authors show experimental results and verifications on a real robotic system for several controllers derived from the control framework.
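The role of the cost function can be illustrated with the classical weighted least-norm solution (a generic construction, not the paper's specific controller): minimizing (1/2) * qd' W qd subject to J qd = v gives qd = W^-1 J' (J W^-1 J')^-1 v, so the weight matrix W decides how the effort is distributed over the joints. The Jacobian, task velocity, and weights below are made up for illustration.

```python
import numpy as np

def weighted_pinv_solution(J, v, W):
    """Joint velocities minimizing 0.5 * qd' W qd subject to J @ qd = v."""
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, v)

# Hypothetical 2x3 task Jacobian (redundant: 3 joints, 2-D task) and task velocity.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
v = np.array([0.3, -0.1])

qd_uniform = weighted_pinv_solution(J, v, np.eye(3))
# Penalize the first joint 100x: the same task is achieved,
# but the control effort shifts to the cheaper joints.
qd_biased = weighted_pinv_solution(J, v, np.diag([100.0, 1.0, 1.0]))
```

Both solutions realize exactly the same task-space velocity; only the distribution of joint effort changes with W, which is the point the framework generalizes.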

Relevance:

90.00%

Publisher:

Abstract:

Several precision agriculture tools have been developed for specific uses. However, this specificity may hinder the adoption of precision agriculture because of increased costs and operational complexity. Using vegetation index sensors, which are traditionally developed for crop fertilization, for site-specific weed management can provide multiple uses for these sensors and optimize precision agriculture. The aim of this study was to evaluate the relationship between weed reflectance indices obtained with the GreenSeeker™ sensor and conventional parameters used to quantify weed interference. Two experiments were conducted with soybean and corn, establishing a gradient of weed interference through pre- and post-emergence herbicides. Weed quantification was evaluated by the normalized difference vegetation index (NDVI) and the red to near-infrared ratio (Red/NIR) obtained with the GreenSeeker™ sensor, by visual weed control scores, by weed dry matter, and by digital photographs, which provided the leaf area coverage proportions of weeds and straw. Weed leaf coverage obtained from digital photography was highly associated with the NDVI (r = 0.78) and the Red/NIR (r = -0.74). Weed dry matter also correlated positively with the NDVI obtained over 1 m linear (r = 0.66). The results indicate that the GreenSeeker™ sensor, originally used for crop fertilization, can also be used to obtain reflectance indices between crop rows to support decision-making in weed control programs.
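The two indices used in the study are standard: NDVI = (NIR - Red) / (NIR + Red) and the Red/NIR ratio. A minimal sketch with hypothetical reflectance values (not data from the experiments):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red)

def red_nir_ratio(nir, red):
    """Red/NIR ratio; lower values indicate denser green vegetation."""
    return red / nir

# Hypothetical reflectances: mostly straw (first) vs. a dense weed patch (second).
nir = np.array([0.30, 0.55])
red = np.array([0.20, 0.08])
idx = ndvi(nir, red)              # denser vegetation -> higher NDVI
ratio = red_nir_ratio(nir, red)   # denser vegetation -> lower Red/NIR
```

The opposite signs of the two correlations reported above (r = 0.78 for NDVI, r = -0.74 for Red/NIR) follow directly from these definitions: green canopy raises NIR reflectance and absorbs red.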

Relevance:

90.00%

Publisher:

Abstract:

Abstract taken from the author. Includes photographs and tables of the different experimental situations.

Relevance:

90.00%

Publisher:

Abstract:

In terms of evolution, the strategy of catching prey would have been an important part of survival in a constantly changing environment. A prediction mechanism would have developed to compensate for the delay in the sensory-motor system. In a previous study, "proactive control" was found, in which the motion of the hands preceded the virtual moving target. These results implied that the positive phase shift of the hand motion represents the proactive nature of the visual-motor control system, which attempts to minimize the brief error in the hand motion when the target changes position unexpectedly. In our study, a visual target moved in a circle (13 cm diameter) on a computer screen, and each subject was asked to track the target's motion with a cursor. As the frequency of the target increased, a rhythmic component appeared in the velocity of the cursor despite the velocity of the target being constant. The generation of a rhythmic component cannot be explained simply as a feedback mechanism for the phase shifts of the target and cursor in a sensory-motor system. It therefore implies that the rhythmic component was generated to predict the velocity of the target, a feed-forward mechanism in the sensory-motor system. Here, we discuss the generation of the rhythmic component and its role in the feed-forward mechanism.

Relevance:

90.00%

Publisher:

Abstract:

The frontal eye field (FEF) is known to be involved in saccade generation and visual attention control. Studies applying covert attentional orienting paradigms have shown that the right FEF is involved in attentional shifts to both the left and the right hemifield. In the current study, we aimed to examine the effects of inhibitory continuous theta-burst transcranial magnetic stimulation (cTBS) over the right FEF on overt attentional orienting, as measured by a free visual exploration paradigm. In forty-two healthy subjects, free visual exploration of naturalistic pictures was tested in three conditions: (1) after cTBS over the right FEF; (2) after cTBS over a control site (vertex); and (3) without any stimulation. The results showed that cTBS over the right FEF, but not over the vertex, triggered significant changes in the spatial distribution of the cumulative fixation duration. Compared to the group without stimulation and the group with cTBS over the vertex, cTBS over the right FEF decreased cumulative fixation duration in the left and right peripheral regions and increased it in the central region. The present study supports the view that the right FEF is involved in the bilateral control of not only covert but also overt peripheral visual attention.

Relevance:

90.00%

Publisher:

Abstract:

This article describes a new visual servo control scheme and the strategies used to carry out dynamic tasks on the Robotenis platform. The platform is basically a parallel robot equipped with a system for acquiring and processing visual information; its main feature is a completely open control architecture, planned in order to design, implement, test, and compare control strategies and algorithms (visual and actuated-joint controllers). The following sections describe a new visual control strategy specially designed to track and intercept objects in 3D space. The results are compared with a controller presented in previous works, where the end effector of the robot keeps a constant distance from the tracked object. In this work, the controller is designed to allow changes in the tracking reference. Changes in the tracking reference can be used to grasp a moving object or, as in this case, to hit a hanging Ping-Pong ball. Lyapunov stability is taken into account in the controller design.
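A tracking controller of this general kind, with a time-varying reference, can be sketched as the velocity command u = xd_dot - lam * e with error e = x - xd; the Lyapunov function V = 0.5 * e'e then gives Vdot = -lam * e'e <= 0 along the error dynamics, which is the standard argument behind such designs. The simulation below is a generic illustration; the gain, time step, and target trajectory are hypothetical, not the Robotenis controller.

```python
import numpy as np

lam = 4.0          # hypothetical gain; V = 0.5*e'e gives Vdot = -lam*e'e
dt = 0.01          # control period [s]
x = np.zeros(3)    # end-effector position [m]
t = 0.0
for _ in range(1000):
    # Time-varying tracking reference: a target drifting in x and bobbing in z.
    xd = np.array([0.1 * t, 0.0, 0.5 + 0.05 * np.sin(t)])
    xd_dot = np.array([0.1, 0.0, 0.05 * np.cos(t)])
    e = x - xd
    u = xd_dot - lam * e       # feedforward on the reference + error feedback
    x += u * dt                # kinematic (velocity-controlled) end effector
    t += dt
```

Because the reference velocity is fed forward, the steady-state error stays near zero even while the reference moves, which is what permits changing the tracking reference on the fly (for instance, switching from a standoff distance to an interception point).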