953 results for vision control
Abstract:
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored for the efficiency afforded by their architectural flexibility (parallelism, on-chip memory, etc.), their reconfigurability, and their performance in algorithm development. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable, lower-power sensors is being developed in Spain based on FPGAs. This paper presents a review of these developments, describes the FPGA technologies employed by the different research groups, and provides an overview of future research within this field.
Abstract:
The article describes an open-source toolbox for machine vision called the Machine Vision Toolbox (MVT). MVT includes more than 60 functions, covering image file reading and writing, acquisition, display, filtering, blob, point, and line feature extraction, mathematical morphology, homographies, visual Jacobians, camera calibration, and color-space conversion. MVT can be used for research into machine vision but is also versatile enough for real-time work and even control. MVT, combined with MATLAB and a modern workstation computer, is a useful and convenient environment for the investigation of machine vision algorithms. The article illustrates the use of a subset of toolbox functions on some typical problems and describes MVT operations including the simulation of a complete image-based visual servo system.
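To make the kind of pipeline MVT supports concrete, here is a minimal sketch in Python using OpenCV rather than the MATLAB toolbox itself; the file name, blur kernel, and minimum blob area are illustrative assumptions, not MVT defaults.

```python
# Hypothetical sketch of a blob-extraction pipeline analogous to what MVT
# offers in MATLAB, written with OpenCV/NumPy; "parts.png" and the area
# threshold are placeholder assumptions.
import cv2
import numpy as np

img = cv2.imread("parts.png")                      # image file reading
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # color-space conversion
blur = cv2.GaussianBlur(gray, (5, 5), 0)           # filtering
_, mask = cv2.threshold(blur, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation

# Blob (connected-component) feature extraction: area and centroid per blob.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                              # label 0 is background
    area = stats[i, cv2.CC_STAT_AREA]
    if area > 50:                                  # assumed minimum blob size
        cx, cy = centroids[i]
        print(f"blob {i}: area={area}, centroid=({cx:.1f}, {cy:.1f})")
```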
Abstract:
This work addresses the application of high-speed machine vision for closed-loop position control, or visual servoing, of a robot manipulator. It provides comprehensive coverage of all aspects of the visual servoing problem: robotics, vision, control, technology, and implementation issues. While much of the discussion is quite general, the experimental work described is based on the use of a high-speed binary vision system with a monocular "eye-in-hand" camera.
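The core computation in image-based visual servoing of this kind is mapping image-feature error to a camera velocity screw via the interaction matrix. A minimal NumPy sketch of the classic point-feature law v = -λL⁺(s - s*) follows; the gain, depths, and feature coordinates are illustrative assumptions, not values from the cited work.

```python
# Minimal image-based visual servo (IBVS) step for point features,
# v = -lambda * pinv(L) @ (s - s*). Gains, depths and feature values
# below are illustrative assumptions.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from feature error."""
    L = np.vstack([interaction_matrix(x, y, z)
                   for (x, y), z in zip(s, Z)])
    e = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Four current and desired point features (normalized coords) with depths.
s      = [(0.10, 0.12), (-0.08, 0.11), (0.09, -0.10), (-0.11, -0.09)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (0.10, -0.10), (-0.10, -0.10)]
print(ibvs_velocity(s, s_star, Z=[1.0] * 4))
```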
Abstract:
This thesis presents an approach to vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task. There are challenges involved in developing such an inspection system, such as flying in close proximity to a target while maintaining a fixed stand-off distance from it. The contributions of this thesis fall into three main areas. Firstly, an approach to vehicle dynamic modeling is evaluated in simulation and experiments. Secondly, EKF-based state estimators are demonstrated, as well as estimator-free approaches such as image-based visual servoing (IBVS), validated with motion-capture ground-truth data. Thirdly, an integrated pole inspection system comprising a VTOL platform with human-in-the-loop control (shared autonomy) is demonstrated. These contributions are comprehensively explained through a series of published papers.
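As a sketch of the EKF machinery behind such state estimators, the following Python fragment implements one generic predict/update step; the models and covariances are placeholders, not the thesis's vehicle dynamics.

```python
# Generic EKF predict/update step of the kind used for VTOL state
# estimation; f, h and the noise covariances are caller-supplied
# placeholders, not the vehicle model from the thesis.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement via the Kalman gain.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x_pred + K @ y, (np.eye(len(x)) - K @ H) @ P_pred

# Example: 1D constant-velocity model with position measurements.
dt = 0.02
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, None, np.array([0.1]),
                f=lambda x, u: F @ x, F=F,
                h=lambda x: H @ x, H=H,
                Q=1e-4 * np.eye(2), R=np.array([[1e-2]]))
```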
Abstract:
This paper discusses predictive motion control of a MiRoSoT robot. The dynamic model of the robot is deduced by taking into account the whole process: the robot, vision, control, and transmission systems. Based on the obtained dynamic model, an integrated predictive control algorithm is proposed for precise positioning while avoiding either stationary or moving obstacles. This objective is achieved automatically by introducing distance constraints into the open-loop optimization of control inputs. Simulation results demonstrate the feasibility of such a control strategy for the deduced dynamic model.
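A minimal sketch of this kind of open-loop optimization with distance constraints, assuming a point-mass model in place of the paper's deduced MiRoSoT dynamics, could look as follows; the horizon, weights, and clearance are illustrative assumptions.

```python
# Sketch of open-loop predictive control with a distance constraint for
# obstacle avoidance; the point-mass kinematics, horizon and weights are
# assumptions, not the paper's robot model.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10                            # sample time and horizon (assumed)
p0 = np.array([0.0, 0.0])                  # current robot position
goal = np.array([1.0, 0.0])
obs, d_min = np.array([0.5, 0.02]), 0.15   # obstacle position and clearance

def rollout(u):
    """Integrate assumed point-mass kinematics over the horizon."""
    v = u.reshape(N, 2)
    return p0 + dt * np.cumsum(v, axis=0)

def cost(u):
    p = rollout(u)
    return np.sum((p - goal) ** 2) + 1e-2 * np.sum(u ** 2)

cons = [{"type": "ineq",                   # ||p_k - obs|| - d_min >= 0
         "fun": lambda u: np.linalg.norm(rollout(u) - obs, axis=1) - d_min}]
res = minimize(cost, np.zeros(2 * N), constraints=cons, method="SLSQP")
print("first velocity command:", res.x[:2])
```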
Abstract:
Background/Aims: To develop and assess the psychometric validity of a Chinese-language vision health-related quality-of-life (VRQoL) measurement instrument for the Chinese visually impaired. Methods: The Low Vision Quality of Life Questionnaire (LVQOL) was translated and adapted into the Chinese-version Low Vision Quality of Life Questionnaire (CLVQOL). The CLVQOL was completed by 100 randomly selected people with low vision (primary group) and 100 people with normal vision (control group). Ninety-four participants from the primary group completed the CLVQOL a second time 2 weeks later (test-retest group). The internal consistency reliability, test-retest reliability, item-internal consistency, item-discrimination validity, construct validity, and discriminatory power of the CLVQOL were calculated. Results: The review committee agreed that the CLVQOL replicated the meaning of the LVQOL and was sensitive to cultural differences. The Cronbach's α coefficient and the split-half coefficient for the four scales and the total CLVQOL were 0.75-0.97. The test-retest reliability, as estimated by the intraclass correlation coefficient, was 0.69-0.95. Item-internal consistency was >0.4 and item-discrimination validity was generally <0.40. Varimax-rotation factor analysis of the CLVQOL identified four principal factors. The quality-of-life ratings of the four subscales and the total score of the CLVQOL in the primary group were lower than those of the control group, in both hospital-based and community-based subjects. Conclusion: The CLVQOL is a culturally specific vision-related quality-of-life measurement instrument. It satisfies conventional psychometric criteria, discriminates visually healthy populations from low-vision patients, and may be valuable in screening the local community as well as for use in clinical practice or research.
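For reference, Cronbach's α, the internal-consistency statistic reported above, can be computed from a respondents-by-items score matrix as in this short sketch; the scores below are synthetic illustration, not study data.

```python
# Cronbach's alpha for internal-consistency reliability; the response
# matrix here is randomly generated, not CLVQOL data.
import numpy as np

def cronbach_alpha(X):
    """X: (n_respondents, n_items) score matrix; returns alpha."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 25))  # synthetic 25-item responses
print(f"alpha = {cronbach_alpha(scores):.2f}")
```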
Abstract:
The ninth release of the Toolbox represents over fifteen years of development and a substantial level of maturity. This version captures a large number of changes and extensions generated over the last two years, which support my new book "Robotics, Vision & Control". The Toolbox has always provided many functions that are useful for the study and simulation of classical arm-type robotics, for example kinematics, dynamics, and trajectory generation. The Toolbox is based on a very general method of representing the kinematics and dynamics of serial-link manipulators. These parameters are encapsulated in MATLAB® objects: robot objects can be created by the user for any serial-link manipulator, and a number of examples are provided for well-known robots such as the Puma 560 and the Stanford arm, among others. The Toolbox also provides functions for manipulating and converting between datatypes such as vectors, homogeneous transformations, and unit quaternions, which are necessary to represent 3-dimensional position and orientation. This ninth release of the Toolbox has been significantly extended to support mobile robots. For ground robots the Toolbox includes standard path-planning algorithms (bug, distance transform, D*, PRM), kinodynamic planning (RRT), localization (EKF, particle filter), map building (EKF), and simultaneous localization and mapping (EKF), as well as a Simulink model of a non-holonomic vehicle. The Toolbox also includes a detailed Simulink model for a quadcopter flying robot.
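The general serial-link representation mentioned above rests on chaining per-link homogeneous transforms; a minimal NumPy sketch of forward kinematics with standard Denavit-Hartenberg parameters is shown below, using an assumed planar two-link arm rather than the Puma 560's actual parameters.

```python
# Forward kinematics for a serial-link arm by chaining standard
# Denavit-Hartenberg transforms, the representation the Toolbox builds on;
# the two-link parameters here are illustrative, not the Puma 560's.
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one standard DH link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0,        sa,       ca,      d],
        [0,         0,        0,      1],
    ])

def fkine(q, links):
    """Chain link transforms for joint angles q; links = (d, a, alpha)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, links):
        T = T @ dh(theta, d, a, alpha)
    return T

links = [(0.0, 1.0, 0.0), (0.0, 0.8, 0.0)]     # assumed planar 2R arm
print(fkine([np.pi / 4, -np.pi / 6], links))   # end-effector pose
```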
Abstract:
Here the design and operation of a novel transmission electron microscope (TEM) triboprobe instrument with real-time vision control for advanced in situ electron microscopy are demonstrated. The NanoLAB triboprobe incorporates a new high-stiffness coarse slider design for increased stability and positioning performance. This is linked with an advanced software control system which introduces both new and flexible in situ experimental functional testing modes, plus an automated vision-control feedback system. This advancement in instrumentation design unlocks the possibility of performing a range of new dynamic nanoscale materials tests, including novel friction and fatigue experiments inside the electron microscope.
Abstract:
In-situ transmission electron microscopy (TEM) has developed rapidly over the last decade. In particular, the inclusion of scanning probes in TEM holders allows both mechanical and electrical testing to be performed whilst simultaneously imaging the microstructure at high resolution. In-situ TEM nanoindentation and tensile experiments require only an axial displacement perpendicular to the test surface. Here, however, through the development of a novel in-situ TEM triboprobe with a fully programmable 3D positioning system, other surface characterisation experiments are now possible. Programmable lateral displacement control allows scratch tests to be performed at high resolution with simultaneous imaging of the changing microstructure. With the addition of repeated cyclic movements, both nanoscale fatigue and friction experiments can also now be performed. We demonstrate a range of movement profiles for a variety of applications, in particular lateral sliding wear. The developed NanoLAB TEM triboprobe also includes a new closed-loop vision control system for intuitive control during positioning and alignment. It includes an automated online calibration to ensure that the fine piezotube is controlled accurately throughout any type of test. Both the 3D programmability and the closed-loop vision feedback system are demonstrated here.
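A skeletal version of one closed-loop vision feedback step of this general kind is sketched below, using OpenCV template matching plus a proportional correction; the gain, match threshold, and pixel-to-piezo scale are assumptions, and this is not the NanoLAB implementation itself.

```python
# Skeletal closed-loop vision feedback step: locate the probe tip in the
# TEM image by template matching and issue a proportional correction.
# The pixel-to-piezo calibration, gain and threshold are placeholders.
import cv2
import numpy as np

def vision_feedback_step(frame, tip_template, target_xy,
                         px_per_volt=120.0, gain=0.4):
    """Return (dVx, dVy) piezo corrections from one image frame."""
    res = cv2.matchTemplate(frame, tip_template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)   # best-match location
    h, w = tip_template.shape[:2]
    tip = np.array([top_left[0] + w / 2, top_left[1] + h / 2])
    if score < 0.5:                              # assumed match threshold
        return np.zeros(2)                       # hold position if tip lost
    error_px = np.asarray(target_xy) - tip       # pixel error to target
    return gain * error_px / px_per_volt         # proportional correction
```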
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Master's final project submitted to obtain the degree of Master in Mechanical Engineering
Abstract:
The research described in this paper is directed toward increasing the productivity of draglines through automation. In particular, it focuses on the swing-to-dump, dump, and return-to-dig phases of the dragline operational cycle by developing a swing automation system. In typical operation the dragline boom can be in motion for up to 80% of the total cycle time. This provides considerable scope for improving cycle time through automated or partially automated boom motion control. This paper describes machine-vision-based sensor technology and control algorithms under development to solve the problem of continuous real-time bucket location and control. Incorporation of this capability into existing dragline control systems will then enable true automation of dragline swing and dump operations.
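The control side of such a system reduces, in the simplest reading, to driving a vision-derived bucket swing angle to the dump target; a hedged PD sketch follows, with gains and limits assumed for illustration rather than taken from the described dragline system.

```python
# Hedged sketch of the swing-control idea: a PD law driving the measured
# (vision-derived) bucket swing angle to the dump target. Gains, limits
# and the angle source are assumptions, not the paper's system.
def swing_command(theta, theta_dot, theta_target,
                  kp=1.2, kd=0.8, u_max=1.0):
    """Normalized swing-motor command from bucket angle feedback."""
    error = theta_target - theta
    u = kp * error - kd * theta_dot      # PD on swing angle
    return max(-u_max, min(u_max, u))    # actuator saturation
```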
Abstract:
This paper, which serves as an introduction to the mini-symposium on Real-Time Vision, Tracking and Control, provides a broad sketch of visual servoing, the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.
Abstract:
The following paper proposes a novel application of Skid-to-Turn maneuvers for fixed-wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed-wing UAVs, following the design of manned aircraft, commonly employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Whilst effective, banking an aircraft during the inspection of ground-based features hinders data collection, with body-fixed sensors angled away from the direction of turn and a panning motion induced through roll rate that can reduce data quality. By adopting Skid-to-Turn maneuvers, the aircraft can change heading whilst maintaining wings-level flight, thus allowing body-fixed sensors to maintain a downward-facing orientation. An Image-Based Visual Servo controller is developed to directly control the position of features as captured by onboard inspection sensors. This improves on the indirect approach taken by other tracking controllers, where a course over ground directly above the feature is assumed to capture it centered in the field of view. Performance of the proposed controller is compared against that of a Bank-to-Turn tracking controller driven by GPS-derived cross-track error, in a simulation environment developed to replicate the field of view of a body-fixed camera.
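A minimal sketch of the feature-centering idea behind such a Skid-to-Turn controller is given below: hold wings level and steer with yaw rate so the tracked feature stays centered in a downward-facing camera. Gains, image width, and sign conventions are assumptions, not values from the paper.

```python
# Sketch of the Skid-to-Turn feature-centering idea: command zero roll
# (wings level) and steer with yaw so the tracked feature stays centered
# in the downward camera. Gains and camera parameters are assumed.
def skid_to_turn_commands(feature_u, image_width=640,
                          k_yaw=0.004, yaw_rate_max=0.3):
    """Return (roll_cmd, yaw_rate_cmd) from the feature's pixel column."""
    error_px = feature_u - image_width / 2   # horizontal offset of feature
    yaw_rate = -k_yaw * error_px             # steer feature back to center
    yaw_rate = max(-yaw_rate_max, min(yaw_rate_max, yaw_rate))
    return 0.0, yaw_rate                     # wings level, skidding turn
```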