893 results for Motion perception (Vision)


Relevance: 30.00%

Abstract:

For many years, computer vision has lured researchers with promises of a low-cost, passive, lightweight and information-rich sensor suitable for navigation purposes. The prime difficulty in vision-based navigation is that the navigation solution will continually drift with time unless external information is available, whether it be cues from the appearance of the scene, a map of features (whether built online or known a priori), or an externally-referenced sensor. Nor is position the only quantity of interest in the navigation problem: attitude (i.e. the angular orientation of a body with respect to a reference frame) is integral to a vision-based navigation solution and is often of interest in its own right (e.g. for flight control). This thesis examines vision-based attitude estimation in an aerospace environment, and two methods are proposed for constraining drift in the attitude solution: one through a novel integration of optical flow and detection of the sky horizon, and the other through a loosely-coupled integration of Visual Odometry and GPS position measurements.

In the first method, the roll angle, pitch angle and the three aircraft body rates are recovered through a novel method of tracking the horizon over time and integrating the horizon-derived attitude information with optical flow. An image-processing front-end selects several candidate lines in an image that may or may not correspond to the true horizon, and the optical flow is calculated for each candidate line. Using an Extended Kalman Filter (EKF), the previously estimated aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon in the image. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement in the EKF. To evaluate the accuracy of the algorithm, two flights were conducted: one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions, and the other using a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna 172 flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively.

In the second method for estimating attitude, a novel integrated GPS/Visual Odometry (GPS/VO) navigation filter is proposed, using a structure similar to a classic loosely-coupled GPS/INS error-state navigation filter. Under such an arrangement, the error dynamics of the system are derived and a Kalman Filter is developed for estimating the errors in position and attitude. Through analysis similar to that of the GPS/INS problem, it is shown that the proposed filter is capable of recovering the complete attitude (i.e. pitch, roll and yaw) of the platform when it is subjected to acceleration that is not parallel to the velocity, for both the monocular and stereo variants of the filter. Furthermore, it is shown that under general straight-line motion (e.g. constant velocity), only the component of attitude in the direction of motion is unobservable. Numerical simulations demonstrate the observability properties of the GPS/VO filter in both the monocular and stereo camera configurations, and the proposed filter is also tested on imagery collected from a Cessna 172 to demonstrate these observability properties on real-world data. The proposed GPS/VO filter does not require additional restrictions or assumptions such as platform-specific dynamics, map-matching, feature-tracking, visual loop-closing, a known gravity vector, or additional sensors such as an IMU or magnetic compass. Since no platform-specific dynamics are required, the proposed filter is not limited to the aerospace domain and could potentially be deployed on other platforms such as ground robots or mobile phones.
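The candidate-association step in the first method can be pictured as a standard innovation gating test: each candidate horizon line is compared against the EKF's predicted measurement and accepted only if it falls inside a chi-squared gate. The sketch below is a minimal Python illustration of that idea, assuming a two-parameter horizon measurement (image offset and angle); the function names and the gate threshold are illustrative assumptions, not the thesis' implementation.

import numpy as np

CHI2_GATE = 5.99  # 95% gate for a 2-DOF chi-squared test (assumed threshold)

def associate_horizon(candidates, z_pred, S):
    """Pick the candidate horizon line closest to the EKF prediction.

    candidates : list of (2,) arrays, e.g. [offset_px, angle_rad] per candidate line
    z_pred     : (2,) predicted horizon measurement from the propagated state
    S          : (2, 2) innovation covariance from the EKF
    Returns the accepted candidate and its index, or (None, -1) if all fail the gate.
    """
    S_inv = np.linalg.inv(S)
    best, best_idx, best_d2 = None, -1, np.inf
    for i, z in enumerate(candidates):
        nu = np.asarray(z) - z_pred          # innovation
        d2 = float(nu @ S_inv @ nu)          # squared Mahalanobis distance
        if d2 < CHI2_GATE and d2 < best_d2:
            best, best_idx, best_d2 = np.asarray(z), i, d2
    return best, best_idx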

Relevance: 30.00%

Abstract:

This paper presents the application of monocular visual SLAM on a fixed-wing small Unmanned Aerial System (sUAS), capable of simultaneously estimating aircraft pose and scene structure. We demonstrate the robustness of unconstrained vision alone in producing reliable pose estimates of a sUAS at altitude, ultimately enabling online state-estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors. We explore some of the challenges of visual SLAM from a sUAS, including dealing with planar structure, distant scenes and noisy observations. The developed techniques are applied to vision data gathered from a fast-moving fixed-wing radio-controlled aircraft flown over a 1 km × 1 km rural area at an altitude of 20-100 m. We present both raw Structure from Motion results and a SLAM solution that includes FAB-MAP-based loop closures and graph-optimised poses. Timing information is also presented to demonstrate near-online capability. We compare the accuracy of the 6-DOF pose estimates to an off-the-shelf GPS-aided INS over a 1.7 km trajectory. We also present 3D reconstructions of the observed scene structure and texture that demonstrate future applications in autonomous monitoring and surveying.
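The "graph-optimised pose" component mentioned above can be illustrated with a toy pose graph: a chain of odometry edges plus a loop-closure edge, solved as a non-linear least-squares problem. The sketch below is a generic 2-D illustration of that idea under assumed names and values, not the paper's pipeline (which detects loop closures with FAB-MAP on real sUAS imagery and optimises 6-DOF poses).

import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    # Wrap an angle to [-pi, pi).
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(pi, pj):
    # Pose of j expressed in the frame of i; poses are (x, y, theta).
    c, s = np.cos(pi[2]), np.sin(pi[2])
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pj[2] - pi[2])])

def residuals(flat, edges, n):
    poses = flat.reshape(n, 3)
    res = [poses[0]]                      # softly anchor the first pose at the origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Four poses around a unit square: three odometry edges and one loop-closure edge.
edges = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
         (1, 2, np.array([1.0, 0.0, np.pi / 2])),
         (2, 3, np.array([1.0, 0.0, np.pi / 2])),
         (3, 0, np.array([1.0, 0.0, np.pi / 2]))]   # loop closure back to the start
x0 = np.array([0.0, 0.0, 0.0,                        # drifting dead-reckoning guess
               1.1, 0.1, 1.5,
               1.2, 1.1, 3.0,
               0.1, 1.2, 4.6])
sol = least_squares(residuals, x0, args=(edges, 4))
print(sol.x.reshape(4, 3))                # optimised (x, y, theta) for each pose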

Relevance: 30.00%

Abstract:

The automotive industry has been the focus of digital human modeling (DHM) research and application for many years. In the highly competitive marketplace for personal transportation, the desire to improve the customer's experience has driven extensive research into both the physical and cognitive interaction between the vehicle and its occupants. Human models provide vehicle designers with tools to view and analyze product interactions before the first prototypes are built, potentially improving the design while reducing cost and development time. DHM research and application initially focused on the prediction and representation of static postures for driver workstation layout, including assessments of seat adjustment ranges and exterior vision. DHMs are now also used for seat design and for assessing driver reach and ingress/egress. DHMs and related simulation tools are expanding into the cognitive domain, with computational models of perception and motion, and into the dynamic domain, with models of physical responses to ride and vibration. Moreover, DHMs are now widely used to analyze the ergonomics of vehicle assembly tasks; in this case, the analysis aims to determine whether workers can be expected to complete the tasks safely and with good quality. This preface provides a review of the literature to give context for the nine new papers presented in this special issue.

Relevance: 30.00%

Abstract:

In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator, as imagery is typically captured at a very low frame rate due to energy-conservation and data-storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
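The bi-objective bundle adjustment described above can be pictured as a single residual vector that stacks the usual reprojection errors with a weighted heading error derived from the magnetometer. A minimal sketch of that combination for one camera pose is given below, assuming a simple pinhole model; the function names, the yaw extraction and the weight w_mag are illustrative assumptions rather than the paper's implementation.

import numpy as np

def project(K, R, t, X):
    # Pinhole projection of 3-D points X (N, 3) into pixel coordinates (N, 2).
    Xc = (R @ X.T + t.reshape(3, 1)).T
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def bi_objective_residuals(K, R, t, X, uv_obs, yaw_mag, w_mag=10.0):
    # Visual objective: reprojection error of all observed landmarks (pixels).
    r_vis = (project(K, R, t, X) - uv_obs).ravel()
    # Magnetometer objective: angle-wrapped difference between the heading
    # implied by the camera rotation and the magnetometer heading, weighted
    # so it is commensurate with the pixel residuals.
    yaw_cam = np.arctan2(R[1, 0], R[0, 0])
    d = yaw_cam - yaw_mag
    r_mag = np.array([w_mag * np.arctan2(np.sin(d), np.cos(d))])
    return np.concatenate([r_vis, r_mag])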

Relevance: 30.00%

Abstract:

This paper introduces a high-speed, 100 Hz, vision-based state estimator that is suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms used to estimate the state of the quadrotor. Experimental results for the position, velocity and yaw-angle estimators are presented and compared with motion-capture data. A quantitative performance comparison with the state of the art is also presented.

Relevance: 30.00%

Abstract:

This case study report describes the stages involved in the translation of research on night-time visibility into standards for the safety clothing worn by roadworkers. Vision research demonstrates that when lights are placed on the moveable joints of the body and the person moves in a dark setting, the phenomenon known as "biological motion" or "biomotion" occurs, enabling rapid and accurate recognition of the human form even though only the lights can be seen. With the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), QUT was successful in gaining Australian Research Council Linkage funding to research the biomotion effect in on-road settings using materials that feature in roadworker clothing. Although positive results were gained, the process of translating the research results into policy, practices and standards relied strongly on the supportive efforts of TMR staff engaged in the review and promulgation of national standards. The ultimate result was the incorporation of biomotion marking into AS/NZS 4602.1:2011. The experiences gained in this case provide insights into the processes involved in translating research into practice.

Relevance: 30.00%

Abstract:

In this paper we use the SeqSLAM algorithm to address the question of how little visual information, and of what quality, is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. The results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments in which low-quality imagery is directly caused by sensor limitations: in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
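The sequence-matching idea varied in this investigation can be sketched in a few lines: single-frame matches are scored by the mean absolute difference between heavily downsampled, normalised images, and a route match is accepted for the sequence whose diagonal through the difference matrix has the lowest summed cost. The SeqSLAM-style sketch below assumes a constant-velocity alignment and omits the local contrast enhancement and velocity search of the full algorithm; parameter names are illustrative.

import numpy as np

def difference_matrix(query, reference):
    # query, reference: arrays of shape (n, H, W) of downsampled, normalised frames.
    q = query.reshape(len(query), -1)
    r = reference.reshape(len(reference), -1)
    # Mean absolute pixel difference between every query/reference pair.
    return np.abs(q[:, None, :] - r[None, :, :]).mean(axis=2)

def best_sequence_match(D, L=10):
    # Return the reference index whose length-L diagonal through D has the lowest cost.
    n_q, n_r = D.shape
    assert n_q >= L, "need at least L query frames"
    best_cost, best_ref = np.inf, -1
    for r0 in range(n_r - L + 1):
        cost = sum(D[n_q - L + k, r0 + k] for k in range(L))  # constant-velocity diagonal
        if cost < best_cost:
            best_cost, best_ref = cost, r0 + L - 1
    return best_ref, best_cost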

Relevance: 30.00%

Abstract:

In this paper, we present a monocular-vision-based autonomous navigation system for Micro Aerial Vehicles (MAVs) in GPS-denied environments. The major drawback of monocular systems is that the depth scale of the scene cannot be determined without prior knowledge or additional sensors. To address this problem, we minimize a cost function consisting of a drift-free altitude measurement and an up-to-scale position estimate obtained using the visual sensor. We evaluate the scale estimator, state estimator and controller performance by comparing with ground-truth data acquired using a motion capture system. All resources, including source code, tutorial documentation and system models, are available online.
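The scale-recovery idea above reduces, in its simplest form, to finding the single scale factor that best maps the up-to-scale visual height onto the drift-free altitude measurements in a least-squares sense. The sketch below shows a closed-form fit of that one-parameter quadratic cost; the paper minimises its own cost function online, and the variable names here are illustrative.

import numpy as np

def estimate_scale(z_visual, z_altimeter):
    # z_visual: up-to-scale heights from the monocular estimator (N,).
    # z_altimeter: drift-free metric altitude measurements at the same instants (N,).
    # Minimises sum_i (s * z_visual_i - z_altimeter_i)^2 over the scale s.
    z_visual = np.asarray(z_visual, dtype=float)
    z_altimeter = np.asarray(z_altimeter, dtype=float)
    return float(z_visual @ z_altimeter) / float(z_visual @ z_visual)

# Example: visual heights in arbitrary units against altitude in metres.
s = estimate_scale([0.5, 0.8, 1.1], [1.0, 1.6, 2.2])   # -> 2.0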

Relevance: 30.00%

Abstract:

“Made by Motion” is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies in which captured motion data is used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation and project the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In this first series, the visual investigation focused on manipulating the movement data at the instant of capture, capture being the recording of three-dimensional movement as ‘seen’ by the hardware and ‘understood’ through the calibration of the software. By repositioning the capture hardware on the body, we can effectively change how the same sequence of movements is ‘seen’ by the motion capture system, thus generating a different visual result from effectively identical movement. The outcomes of the experiments clearly demonstrate the effectiveness of using motion capture hardware as a creative tool to manipulate the perception of the captured subject, in this case a sequence of dance movements. The creative work exhibited is a cross-section of the experiments conducted in practice, with the first animated work (Movement A - Control) using the motion capture hardware in its default ‘normal’ configuration. Following this are configurations with the lower body mapped to the upper body (Lb-Ub), the right arm mapped onto the left arm (Ra-La), the right leg mapped onto the left leg (Rl-Ll) and, finally, the left leg mapped onto an object that is then held in the left hand (Ll-Pf (Lh)).

Relevance: 30.00%

Abstract:

The process of translating research into policy and practice is not well understood. This paper uses a case study approach to interpret an example of translation with respect to theoretical approaches identified in the literature. The case study concerns research into “biological motion” or “biomotion”: when lights are placed on the moveable joints of the body and the person moves in a dark setting, there is immediate and accurate recognition of the human form even though only the lights can be seen. QUT was successful in gaining Australian Research Council funding, with the support of the predecessors of the Queensland Department of Transport and Main Roads (TMR), to research the biomotion effect in road worker clothing using reflective tape rather than lights, and this resulted in the incorporation of biomotion marking into AS/NZS 4602.1:2011. The most promising approach to understanding the success of this translation, SWOV’s “knowledge utilisation approach”, provided some insights but was more descriptive than predictive, offering “necessary but not sufficient” conditions for translation. In particular, the supportive efforts of TMR staff engaged in the review and promulgation of national standards were critical in this case. A model of the conclusions is presented. The experiences gained in this case should provide insights into the processes involved in effectively translating research into practice.

Relevance: 30.00%

Abstract:

This paper introduces an improved line tracker using IMU and vision data for visual servoing tasks. We utilize an Image Jacobian that relates the motion of a line feature to the corresponding camera motion, with the camera motion estimated using an IMU. We demonstrate the impact of the proposed method in challenging environments: maximum angular rates of ~160°/s and accelerations of ~6 m/s², and in cluttered outdoor scenes. Simulation results and a quantitative tracking-performance comparison with the Visual Servoing Platform (ViSP) are also presented.
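The role of the Image Jacobian described above can be sketched for the rotational case, which needs no depth information: given the gyro rates from the IMU, the expected motion of a (rho, theta) line feature is predicted and can be used to seed the tracker. The sketch below shows only the rotational block of the standard line-feature interaction matrix from the visual-servoing literature; the paper's own Jacobian, filtering and tracker differ in detail, and the names here are illustrative.

import numpy as np

def predict_line_motion(rho, theta, omega, dt):
    # Propagate a line feature (rho, theta) under camera angular velocity omega.
    # rho, theta : line parameters with u*cos(theta) + v*sin(theta) = rho,
    #              in normalised image coordinates
    # omega      : (3,) angular velocity [wx, wy, wz] in the camera frame (from the IMU)
    # dt         : time step in seconds
    L_rot = np.array([
        [(1 + rho**2) * np.sin(theta), -(1 + rho**2) * np.cos(theta), 0.0],
        [-rho * np.cos(theta),         -rho * np.sin(theta),         -1.0],
    ])  # rotational (depth-independent) block of the line interaction matrix
    drho, dtheta = (L_rot @ np.asarray(omega)) * dt
    return rho + drho, theta + dtheta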

Relevance: 30.00%

Abstract:

This work aims to contribute to the reliability and integrity of the perceptual systems of unmanned ground vehicles (UGVs). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated with sensor data that is evaluated prior to its use in a standard SIFT feature extraction and matching technique. The method is then evaluated using experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a ‘reference’ scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
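The quality gate described above can be illustrated with a frame-scoring function that runs before any feature extraction and rejects frames below a threshold. The paper's actual metric is not reproduced here; the sketch below uses image entropy purely as an illustrative stand-in for a cheap, feature-free quality score, and the threshold is an assumed tuning parameter.

import numpy as np

def image_entropy(gray):
    # Shannon entropy of an 8-bit grayscale image (drops under dense dust or smoke).
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def accept_for_matching(gray, threshold=4.0):
    # Return True if the frame looks informative enough to run SIFT extraction on.
    return image_entropy(gray) >= threshold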

Relevance: 30.00%

Abstract:

This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan and camera image pair acquired, the information at the corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
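The first step described above, propagating uncertainty up to the image, can be sketched with first-order (linearised) covariance propagation: a laser point expressed in the camera frame, whose covariance reflects the uncertain laser-camera transformation, is projected through a pinhole model and its pixel covariance follows from the projection Jacobian. The names and the pinhole model below are assumptions; the paper's full chain also includes estimating the transformation and its uncertainty.

import numpy as np

def project_with_uncertainty(X_cam, cov_X, fx, fy, cx, cy):
    # X_cam: (3,) laser point in the camera frame; cov_X: (3, 3) covariance of that point.
    x, y, z = X_cam
    uv = np.array([fx * x / z + cx, fy * y / z + cy])
    # Jacobian of the pinhole projection with respect to the 3-D point.
    J = np.array([[fx / z, 0.0,    -fx * x / z**2],
                  [0.0,    fy / z, -fy * y / z**2]])
    cov_uv = J @ cov_X @ J.T          # first-order propagation of the covariance
    return uv, cov_uv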

Relevance: 30.00%

Abstract:

This paper presents a 100 Hz monocular position-based visual servoing system to control a quadrotor flying in close proximity to vertical structures approximating a narrow, locally linear shape. Assuming the object boundaries are represented by parallel vertical lines in the image, detection and tracking are achieved using a Plücker line representation and a line tracker. The visual information is fused with IMU data in an EKF framework to provide fast and accurate state estimation. A nested control design provides position and velocity control with respect to the object. Our approach is aimed at high-performance on-board control for applications that allow only small error margins and operate without a motion capture system, as required for real-world infrastructure inspection. Simulated and ground-truthed experimental results are presented.
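The Plücker representation mentioned above stores a 3-D line as a (direction, moment) pair, which makes distance queries and frame changes convenient for a line tracker. The sketch below is generic line geometry under assumed names, not the paper's estimator.

import numpy as np

def plucker_from_points(p1, p2):
    # Plücker coordinates (d, m) of the line through 3-D points p1 and p2.
    d = np.asarray(p2, float) - np.asarray(p1, float)
    d = d / np.linalg.norm(d)          # unit direction of the line
    m = np.cross(p1, d)                # moment vector, encodes the line's position
    return d, m

def point_line_distance(p, d, m):
    # Distance from point p to the Plücker line (d, m), with d a unit vector.
    return np.linalg.norm(np.cross(p, d) - m)

# Example: a vertical pole through (1, 2, 0) along the z-axis.
d, m = plucker_from_points([1.0, 2.0, 0.0], [1.0, 2.0, 5.0])
print(point_line_distance(np.array([2.0, 2.0, 3.0]), d, m))   # -> 1.0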