953 results for vision control
Abstract:
Previous research has suggested that perceptual-motor difficulties may account for obese children's lower motor competence; however, specific evidence is currently lacking. Therefore, this study examined the effect of altered visual conditions on spatiotemporal and kinematic gait parameters in obese versus normal-weight children. Thirty-two obese and normal-weight children (11.2 ± 1.5 years) walked barefoot on an instrumented walkway at constant self-selected speed under LIGHT and DARK conditions. Three-dimensional motion analysis was performed to calculate spatiotemporal parameters, as well as sagittal trunk segment and lower extremity joint angles at heel-strike and toe-off. Self-selected speed did not significantly differ between groups. In the DARK condition, all participants walked at a significantly slower speed, with decreased stride length and increased stride width. Without normal vision, obese children showed a more pronounced increase in relative double support time than the normal-weight group, resulting in a significantly greater percentage of the gait cycle spent in stance. Walking in the DARK, both groups showed greater forward tilt of the trunk and restricted hip movement. All participants had increased knee flexion at heel-strike, as well as decreased knee extension and ankle plantarflexion at toe-off in the DARK condition. The removal of normal vision affected obese children's temporal gait pattern to a larger extent than that of their normal-weight peers. The results suggest an increased dependency on vision in obese children to control locomotion. Beyond the mechanical problem of moving excess mass, a different coupling between perception and action appears to govern obese children's motor coordination and control.
Abstract:
The practice of robotics and computer vision each involve the application of computational algorithms to data. The research community has developed a very large body of algorithms but for a newcomer to the field this can be quite daunting. For more than 10 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This new book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. It is written in a light but informative style, it is easy to read and absorb, and includes over 1000 MATLAB® and Simulink® examples and figures. The book is a real walk through the fundamentals of mobile robots, navigation, localization, arm-robot kinematics, dynamics and joint level control, then camera models, image processing, feature extraction and multi-view geometry, and finally bringing it all together with an extensive discussion of visual servo systems.
Abstract:
This paper proposes a novel application of Skid-to-Turn maneuvers for fixed-wing Unmanned Aerial Vehicles (UAVs) inspecting locally linear infrastructure. Fixed-wing UAVs, following the design of manned aircraft, traditionally employ Bank-to-Turn maneuvers to change heading and thus direction of travel. Commonly overlooked is the effect these maneuvers have on downward-facing body-fixed sensors, which, as a result of bank, point away from the feature during turns. By adopting Skid-to-Turn maneuvers, the aircraft is able to change heading whilst maintaining wings-level flight, allowing body-fixed sensors to maintain a downward-facing orientation. Eliminating roll also helps to improve data quality, as sensors are no longer subjected to the swinging motion induced as they pivot about an axis perpendicular to their line of sight. Traditional tracking controllers that take the indirect approach of capturing ground-based data by flying directly overhead can also see the feature off center, due to the steady-state pitch and roll required to stay on course. An Image Based Visual Servo controller is developed to address this issue, allowing features to be tracked directly within the image plane. Performance of the proposed controller is tested against that of a Bank-to-Turn tracking controller driven by GPS-derived cross-track error, in a simulation environment developed to simulate the field of view of a body-fixed camera.
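As a rough illustration of the image-based visual servoing idea behind such controllers, the classical point-feature formulation maps an image-plane error to a camera velocity through the interaction matrix. The sketch below is the generic textbook version, not this paper's controller; the feature coordinates, depth estimate and gain are illustrative assumptions:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for a point feature at
    normalized image coordinates (x, y) observed at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(feature, target, Z, gain=0.5):
    """Camera velocity command that drives the feature toward the target
    in the image plane: v = -gain * pinv(L) @ (s - s*)."""
    e = np.asarray(feature, dtype=float) - np.asarray(target, dtype=float)
    L = interaction_matrix(*feature, Z)
    return -gain * np.linalg.pinv(L) @ e
```

With the feature already at the target the commanded velocity is zero; otherwise the pseudo-inverse maps the two-dimensional image error to a six-dimensional camera twist.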
Abstract:
The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace means collision avoidance is of primary concern in an uncooperative airspace environment. The ability to replicate a pilot's see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, collision avoidance has no direct way to guarantee a level of safety. Collision-scenario flight tests with two aircraft and a monocular-camera threat detection and tracking system were used to study the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied by simulation. The results show that whilst large angle measurement errors can significantly affect minimum ranging characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
Abstract:
This work presents a collision avoidance approach based on omnidirectional cameras that does not require the estimation of range between two platforms to resolve a collision encounter. Our method achieves minimum separation between the two vehicles involved by maximising the view angle given by the omnidirectional sensor. Only visual information is used to achieve avoidance, under a bearing-only visual servoing approach. We provide a theoretical problem formulation, as well as results from real flights using small quadrotors.
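A minimal sketch of the view-angle idea, assuming a planar encounter, a bearing to the intruder measured positive to the right, and a simple proportional steering law that pushes the bearing toward abeam (±90°), where the view angle is largest. All names and sign conventions here are illustrative, not taken from the paper:

```python
import math

def avoidance_yaw_rate(bearing, gain=1.0):
    """Yaw-rate command that drives the intruder's bearing toward
    +/-90 degrees (abeam), maximising the view angle.

    Assumes bearing-rate ~ -yaw_rate for this toy kinematic model.
    """
    # Aim for whichever abeam direction is on the intruder's side.
    target = math.copysign(math.pi / 2, bearing if bearing != 0 else 1.0)
    return -gain * (target - bearing)
```

Once the intruder sits abeam the commanded yaw rate is zero and the vehicles fly diverging paths.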
Abstract:
Background: The transmission of soil-transmitted helminths (STHs) is associated with poverty, poor hygiene behaviour, lack of clean water and inadequate waste disposal and sanitation. Periodic administration of benzimidazole drugs is the mainstay for global STH control but it does not prevent re-infection, and is unlikely to interrupt transmission as a stand-alone intervention. Findings: We reported recently on the development and successful testing in Hunan province, PR China, of a health education package to prevent STH infections in Han Chinese primary school students. We have recently commenced a new trial of the package in the ethnically diverse Xishuangbanna autonomous prefecture in Yunnan province and the approach is also being tested in West Africa, with further expansion into the Philippines in 2015. Conclusions: The work in China illustrates well the direct impact that health education can have in improving knowledge and awareness, and in changing hygiene behaviour. Further, it can provide insight into the public health outcomes of a multi-component integrated control program, where health education prevents re-infection and periodic drug treatment reduces prevalence and morbidity.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology, since the work is both arduous and dangerous. Visual servoing is a means of integrating non-contact visual sensing with machine control to augment or replace operator-based control. This article describes two of our current mining automation projects in order to demonstrate some, perhaps unusual, applications of visual servoing, and also to illustrate some very real problems with robust computer vision.
Abstract:
The mining industry presents us with a number of ideal applications for sensor-based machine control because of the unstructured environment that exists within each mine. The aim of the research presented here is to increase the productivity of existing large compliant mining machines by retrofitting them with enhanced sensing and control technology. The current research focuses on the automatic control of the swing motion cycle of a dragline and an automated roof bolting system. We have achieved:
* closed-loop swing control of a one-tenth scale model dragline;
* single degree-of-freedom closed-loop visual control of an electro-hydraulic manipulator in the lab, developed from standard components.
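At its core, closed-loop swing control of this kind reduces to feedback on the measured swing angle. A minimal PD sketch under assumed gains (the actual dragline controller is not described at this level of detail in the abstract):

```python
def swing_controller(theta, theta_ref, theta_dot, kp=2.0, kd=0.8):
    """PD law for a swing axis: command proportional to angle error,
    damped by the measured angular rate. Gains are illustrative."""
    return kp * (theta_ref - theta) - kd * theta_dot
```

The visual sensing would supply `theta` (and, by differencing, `theta_dot`) in place of a conventional resolver.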
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance maneuver, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold-based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
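The "when to start and when to cease" decision can be pictured as hysteresis thresholding on the collision object's image coordinate. The sketch below is an illustrative reconstruction of that idea, not the paper's tuned system; `u_safe` and `u_clear` are hypothetical thresholds:

```python
def avoid_decision(u, u_safe, u_clear, avoiding):
    """Hysteresis threshold on the collision object's horizontal image
    coordinate u: start avoiding when the object sits too close to the
    image centre (|u| < u_safe), cease once it has been servoed far
    enough out (|u| > u_clear). Distinct thresholds prevent chattering.
    """
    if not avoiding and abs(u) < u_safe:
        return True   # object threatens the flight path: begin avoidance
    if avoiding and abs(u) > u_clear:
        return False  # object pushed clear of the path: resume mission
    return avoiding   # otherwise keep the current mode
```

Because `u_clear > u_safe`, small measurement noise near either threshold cannot toggle the mode back and forth.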
Abstract:
This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images and without any prior knowledge of configuration is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Transferring the network to real hardware and real observation in a naive approach failed, but experiments show that the network works when replacing camera images with synthetic images.
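The temporal-difference target that a DQN regresses toward can be illustrated with an ordinary tabular Q-learning toy. The 1-D "reaching" task below is a stand-in for the three-joint reaching problem, not the authors' setup; the states, actions and rewards are invented for illustration:

```python
import numpy as np

# Toy 1-D "reaching" task: the agent steps an index toward a goal state.
N_STATES, N_ACTIONS = 11, 2   # actions: 0 = left, 1 = right
GOAL = 8

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == GOAL else -0.01   # small cost per step, reward at goal
    return s2, r, s2 == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration, as in DQN training.
            a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # Same TD target a DQN regresses toward: r + gamma * max_a' Q(s', a')
            Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
            s = s2
    return Q
```

A DQN replaces the table with a network over raw pixels; the update target and the exploration scheme are unchanged.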
Abstract:
This chapter presents a vision-based system for touch-free interaction with a display at a distance. A single camera is fixed on top of the screen, pointing towards the user. An attention mechanism allows the user to start the interaction and control a screen pointer by moving their hand in a fist pose directed at the camera. On-screen items can be chosen by a selection mechanism. Current sample applications include browsing video collections as well as viewing a gallery of 3D objects, which the user can rotate with their hand motion. We include an up-to-date review of hand tracking methods, and comment on the merits and shortcomings of previous approaches. The proposed tracker uses multiple cues (appearance, color, and motion) for robustness. As the space of possible observation models is generally too large for exhaustive online search, we select models that are suitable for the particular tracking task at hand. During a training stage, various off-the-shelf trackers are evaluated. From these data, different methods of fusing them online are investigated, including parallel and cascaded tracker evaluation. For the case of fist tracking, combining a small number of observers in a cascade results in an efficient algorithm that is used in our gesture interface. The system has been on public display at conferences, where over a hundred users have engaged with it. © 2010 Springer-Verlag Berlin Heidelberg.
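Cascaded tracker evaluation of the kind described can be sketched as running observers cheapest-first and stopping at the first sufficiently confident estimate. The interface below (each observer returning an estimate and a confidence score) is an assumption made for illustration:

```python
def cascaded_track(frame, observers, threshold=0.6):
    """Run observers in order of increasing cost; accept the first
    estimate whose confidence clears the threshold, otherwise fall back
    to the most confident estimate seen."""
    best = None
    for observe in observers:
        estimate, confidence = observe(frame)
        if best is None or confidence > best[1]:
            best = (estimate, confidence)
        if confidence >= threshold:
            return estimate, confidence   # early exit: cheap observer sufficed
    return best
```

On easy frames only the cheap observers run, which is what makes the cascade efficient enough for an interactive interface.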
Abstract:
On-site tracking in open construction sites is often difficult because of the large number of items that must be tracked. Additionally, the many occlusions and obstructions present create a highly complex tracking environment. Existing tracking methods are based mainly on radio frequency technologies, including Global Positioning Systems (GPS), Radio Frequency Identification (RFID), Bluetooth, Wireless Fidelity (Wi-Fi), Ultra-Wideband, etc. These methods require considerable pre-processing time, since tags must be manually deployed and records kept of the items they are placed on. In construction sites with numerous entities, tag installation, maintenance and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. This paper presents a novel method for open-site tracking with construction cameras based on machine vision. According to this method, video feed is collected from on-site video cameras, and the user selects the entity to be tracked. The entity is tracked in each video using 2D vision tracking. Epipolar geometry is then used to calculate the depth of the marked area and provide the 3D location of the entity. This method addresses the limitations of radio frequency methods by being unobtrusive and using inexpensive, easy-to-deploy equipment. The method has been implemented in a C++ prototype, and preliminary results indicate its effectiveness.
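The epipolar-geometry depth step corresponds to standard two-view triangulation: given each camera's projection matrix and the tracked entity's 2D position in both views, the 3D point follows by linear least squares. Below is a generic DLT (direct linear transform) sketch; the paper's exact formulation may differ:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point observed at
    image point x1 in camera P1 and x2 in camera P2, where P1 and P2
    are 3x4 projection matrices and x1, x2 are 2-vectors."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the null vector of A (smallest
    # singular vector), dehomogenized by the last coordinate.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With two calibrated construction cameras, the depth of the marked area is simply the recovered point's distance along the viewing axis.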
Abstract:
A vision-based technique for non-rigid control is presented that can be used for animation and video game applications. The user grasps a soft, squishable object in front of a camera, which can be moved and deformed in order to specify motion. Active Blobs, a non-rigid tracking technique, is used to recover the position, rotation and non-rigid deformations of the object. The resulting transformations can be applied to a texture-mapped mesh, allowing the user to control it interactively. Our use of texture-mapping hardware makes the system responsive enough for interactive animation and video game character control.
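Applying the recovered transformations to a mesh can be pictured as mapping every vertex through the estimated deformation. A toy sketch, with a 2-D affine matrix `A` and translation `t` standing in for whatever parameters the Active Blobs tracker actually recovers:

```python
import numpy as np

def apply_transform(vertices, A, t):
    """Map an (N, 2) array of mesh vertices through the recovered 2-D
    affine deformation A and translation t: v' = A @ v + t."""
    return vertices @ A.T + t
```

Running this per frame, with `A` and `t` re-estimated from the tracked object, is what lets the on-screen mesh follow the squishable prop interactively.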
Abstract:
The application of computer vision based quality control has been slowly but steadily gaining importance, mainly due to the speed with which it achieves results and to its non-destructive nature of testing. In food applications it also does not contribute to contamination. However, computer vision in quality control requires appropriate software for image analysis. Even though computer vision based quality control has several advantages, its application is limited as to the type of work to be done, particularly in the food industries. Selective applications, however, can be highly advantageous and very accurate. Computer vision based image analysis could be used in morphometric measurements of fish with the same accuracy as the existing conventional method. The method is non-destructive and non-contaminating, thus providing an advantage in seafood processing. The images can be stored in archives and retrieved at any time to carry out morphometric studies for biologists. Computer vision and subsequent image analysis could be used in measurements of various food products to assess uniformity of size. One product, namely cutlet, and product ingredients, namely coating materials such as bread crumbs and rava, were selected for the study. Computer vision based image analysis was used to measure the length, width and area of cutlets, as well as the width of coating materials like bread crumbs. Computer imaging and subsequent image analysis can be used very effectively in quality evaluation of product ingredients in food processing. Measurement of the width of coating materials could establish the uniformity of particles or the lack of it. The application of image analysis in bacteriological work was also carried out.
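Morphometric measurement from an image reduces to simple geometry on a segmented binary mask. A toy sketch, assuming a pre-computed mask and a known pixel-to-millimetre scale (the study's actual image-analysis software is not specified):

```python
import numpy as np

def measure(mask, mm_per_px):
    """Length, width and area of a segmented object from a boolean mask.

    Length/width are taken from the bounding box of the object's pixels;
    area is the pixel count scaled to square millimetres.
    """
    ys, xs = np.nonzero(mask)
    length = (xs.max() - xs.min() + 1) * mm_per_px
    width = (ys.max() - ys.min() + 1) * mm_per_px
    area = mask.sum() * mm_per_px ** 2
    return length, width, area
```

The same routine serves fish morphometrics, cutlet sizing, or coating-particle width, differing only in how the mask is obtained.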