199 results for camera motion
in Queensland University of Technology - ePrints Archive
Abstract:
Position estimation for planetary rovers has typically been limited to odometry based on proprioceptive measurements such as the integration of distance travelled and measurement of heading change. Here we present and compare two methods of online visual odometry suited for planetary rovers. Both methods use omnidirectional imagery to estimate the motion of the rover. One method is based on robust estimation of optical flow and subsequent integration of the flow. The second method is a full structure-from-motion solution. To make the comparison meaningful we use the same set of raw corresponding visual features for each method. The dataset is a sequence of 2000 images taken during a field experiment in the Atacama Desert, for which high-resolution GPS ground truth is available.
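The flow-integration method above amounts to dead reckoning: per-frame motion increments recovered from optical flow are chained into a trajectory. A minimal sketch with a hypothetical interface (the paper's actual pipeline works on omnidirectional imagery; the increments here are assumed inputs):

```python
import numpy as np

def integrate_flow(increments):
    """Dead-reckon a planar pose (x, y, theta) from per-frame motion
    increments (forward_distance, heading_change), as recovered from
    optical flow. Hypothetical interface, not the paper's implementation."""
    x, y, theta = 0.0, 0.0, 0.0
    trajectory = [(x, y, theta)]
    for d, dtheta in increments:
        theta += dtheta                # accumulate heading change
        x += d * np.cos(theta)         # advance along current heading
        y += d * np.sin(theta)
        trajectory.append((x, y, theta))
    return trajectory

# Drive 1 m per frame while turning 1 degree per frame for 90 frames:
path = integrate_flow([(1.0, np.deg2rad(1.0))] * 90)
```

Because errors in each increment accumulate without bound, such integrated odometry drifts over long traverses, which is why the abstract compares it against a full structure-from-motion solution with GPS ground truth.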
Abstract:
Background: Pre-participation screening is commonly used to measure and assess potential intrinsic injury risk. The single leg squat (SLS) is one such clinical screening measure used to assess lumbopelvic stability and associated intrinsic injury risk. With the addition of a decline board, the single leg decline squat (SLDS) has been shown to reduce ankle dorsiflexion restrictions and to allow greater sagittal plane movement of the hip and knee. On this basis, the SLDS has been employed in the Cricket Australia physiotherapy screening protocols as a measure of lumbopelvic control in place of the more traditional single leg flat squat (SLFS). Previous research has failed to demonstrate which squatting technique allows for a more comprehensive assessment of lumbopelvic stability. Tenuous links are drawn between kinematics and hip strength measures within the literature for the SLS. Formal evaluation of subjective screening methods has also been suggested within the literature. Purpose: This study had several focal points, namely: 1) to compare the kinematic differences between the two single leg squatting conditions, primarily the five key kinematic variables fundamental to subjectively assessing lumbopelvic stability; 2) to determine the effect of ankle dorsiflexion range of motion on squat kinematics in the two squat techniques; 3) to examine the association between key kinematics and subjective physiotherapists' assessment; and finally 4) to explore the association between key kinematics and hip strength. Methods: Nineteen (n=19) subjects performed five SLDS and five SLFS on each leg while being filmed by an 8-camera motion analysis system. Four hip strength measures (internal/external rotation and abduction/adduction) and ankle dorsiflexion range of motion were measured using a hand-held dynamometer and a goniometer respectively on 16 of these subjects. The same 16 participants were subjectively assessed by an experienced physiotherapist for lumbopelvic stability.
Paired samples t-tests were performed on the five predetermined kinematic variables to assess the differences between squat conditions. A Bonferroni correction for multiple comparisons was used, which adjusted the significance value to p = 0.005 for the paired t-tests. Linear regressions were used to assess the relationship between kinematics, ankle range of motion and hip strength measures. Bivariate correlations between hip strength measures, kinematics and pelvic obliquity were employed to investigate any possible relationships. Results: 1) Significant kinematic differences between squats were observed in dominant (D) and non-dominant (ND) end-of-range hip external rotation (ND p < 0.001; D p = 0.004) and hip adduction kinematics (ND p < 0.001; D p < 0.001). For the mean angle, significant differences were observed only in the non-dominant leg, in hip adduction (p = 0.001) and hip external rotation (p < 0.001); 2) Significant linear relationships were observed between clinical measures of ankle dorsiflexion and sagittal plane kinematics, namely SLFS dominant ankle (p = 0.006; R² = 0.429), SLFS non-dominant knee (p = 0.015; R² = 0.352) and SLFS non-dominant ankle (p = 0.027; R² = 0.305) kinematics. Only the dominant ankle (p = 0.020; R² = 0.331) was found to have a relationship with the decline squat. 3) Strength measures had tenuous associations with the subjective assessments of lumbopelvic stability, with no significant relationships being observed. 4) For the non-dominant leg, external rotation strength and abduction strength were found to be significantly correlated with hip rotation kinematics (Newtons: r = 0.458, p = 0.049; normalised for bodyweight: r = 0.469, p = 0.043) and pelvic obliquity (normalised for bodyweight: r = 0.498, p = 0.030) respectively, for the SLFS only. No significant relationships were observed in the dominant leg for either squat condition.
Some elements of the hip strength screening protocols had linear relationships with kinematics of the lower limb, particularly the sagittal plane movements of the knee and ankle. Discussion: The key finding of this study was that kinematic differences can occur at the hip without significant kinematic differences at the knee as a result of the introduction of a decline board. Further observations reinforce the role of limited ankle dorsiflexion range of motion in sagittal plane movement of the hip and knee and, in turn, multiplanar kinematics of the lower limb. The kinematic differences between conditions have clinical implications for screening protocols that employ frontal plane movement of the knee as a guide for femoral adduction and rotation. Subjects who returned stronger hip strength measurements also appeared to squat deeper, as characterised by differences in sagittal plane kinematics of the knee and ankle. Despite the aforementioned findings, the relationship between hip strength and lower limb kinematics remains largely tenuous in the assessment of lumbopelvic stability using the SLS. The association between kinematics and the subjective measures of lumbopelvic stability also remains tenuous between and within SLS screening protocols. More functional measures of hip strength are needed to further investigate these relationships. Conclusion: The type of SLS (flat or decline) should be taken into account when screening for lumbopelvic stability. Changes to lower limb kinematics, especially around the hip and pelvis, were observed with the introduction of a decline board despite no difference in frontal plane knee movements. Differences in passive ankle dorsiflexion range of motion yielded variations in knee and ankle kinematics during a self-selected single leg squatting task.
Removing posterior ankle restraints and using the knee as a guide to changes at the hip may result in inaccurate screening of lumbopelvic stability. The relationship between sagittal plane lower limb kinematics and hip strength suggests that self-selected squat depth may be a useful predictor of lumbopelvic stability. Further research in this area is required.
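The statistical procedure described in this abstract (paired t-tests with a Bonferroni-adjusted threshold of p = 0.005, i.e. 0.05 divided across five variables on two legs) can be sketched in Python with fabricated paired measurements:

```python
import numpy as np
from scipy import stats

n_comparisons = 10                    # five kinematic variables x two legs
alpha = 0.05 / n_comparisons          # Bonferroni-adjusted threshold, p = 0.005

# Illustrative (fabricated) paired angles for one variable, n = 19 subjects:
slfs = np.linspace(8.0, 12.0, 19)                  # flat-squat condition
slds = slfs + 1.5 + 0.1 * np.sin(np.arange(19))    # decline-squat condition

# Paired t-test on the within-subject differences between conditions:
t_stat, p_value = stats.ttest_rel(slds, slfs)
significant = p_value < alpha
```

The Bonferroni correction simply divides the family-wise error rate by the number of comparisons, which is why the abstract reports an adjusted significance value of 0.005.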
Abstract:
We propose and evaluate a novel methodology to identify the rolling shutter parameters of a real camera. We also present a model for the geometric distortion introduced when a moving camera with a rolling shutter views a scene. Unlike previous work, this model allows for arbitrary camera motion, including accelerations; is exact rather than a linearization; and allows for arbitrary camera projection models, for example fisheye or panoramic. We show the significance of the errors introduced by a rolling shutter for typical robot vision problems such as structure from motion, visual odometry and pose estimation.
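The root cause of rolling shutter distortion is that image rows are not exposed simultaneously: rows are read out sequentially over the frame, so each row sees the camera at a slightly different pose. A simplified timing model (the paper's full model additionally handles arbitrary motion and projection models) is:

```python
def row_capture_time(t_frame, row, n_rows, readout_time):
    """Capture time of an image row under a rolling shutter.
    Rows are read out sequentially, so row v is exposed
    readout_time * v / n_rows after the frame start.
    Simplified linear-readout model for illustration."""
    return t_frame + readout_time * row / n_rows

# A 480-row frame starting at t = 0.0 s with a 30 ms readout:
t_top = row_capture_time(0.0, 0, 480, 0.030)
t_mid = row_capture_time(0.0, 240, 480, 0.030)
t_bottom = row_capture_time(0.0, 479, 480, 0.030)
```

Under camera motion, a feature observed at the bottom of the image is therefore seen up to ~30 ms later than one at the top, which skews geometry-based estimates such as structure from motion unless the per-row timing is modelled.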
Abstract:
This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV) or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore the applications of the unified theory for central catadioptric cameras for attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft height and to measure the UAV's motion. Finally, we present a visual tracking system based on Fuzzy controllers working in both a UAV and a camera pan and tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, which include comparison of the estimated data with the inertial values measured onboard the helicopter platform and the validation of the tracking schemes on real flights.
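An appearance-based visual compass of the kind mentioned above is commonly implemented by circularly cross-correlating two 1-D panoramic intensity signatures: the column shift that maximises the correlation gives the relative yaw. A minimal FFT-based sketch (illustrative technique, not the paper's exact formulation):

```python
import numpy as np

def visual_compass(ref, cur):
    """Estimate relative heading (degrees) between two 1-D panoramic
    appearance signatures via circular cross-correlation.
    Illustrative appearance-based compass, assumed inputs."""
    ref = np.asarray(ref, dtype=float)
    cur = np.asarray(cur, dtype=float)
    # FFT-based circular correlation; argmax gives the column shift
    # by which `cur` is rotated relative to `ref`.
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(corr))
    # One column of a 360-column panorama corresponds to 1 degree of yaw.
    return shift * 360.0 / len(ref)

panorama = np.sin(np.linspace(0, 2 * np.pi, 360, endpoint=False))
rotated = np.roll(panorama, 45)   # simulate a 45-degree yaw
heading = visual_compass(panorama, rotated)
```

The FFT makes the correlation O(N log N), which is what makes this practical as an online heading estimator on a flying platform.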
Abstract:
This paper demonstrates the application of a robust form of pose estimation and scene reconstruction using data from camera images. We demonstrate results that suggest the ability of the algorithm to rival RANSAC-based pose estimation polished by bundle adjustment in terms of solution robustness, speed and accuracy, even when given poor initialisations. Our simulated results show the behaviour of the algorithm in a number of novel simulated scenarios reflective of real-world cases, demonstrating its ability to handle large observation noise and difficult reconstruction scenes. These results have a number of implications for the vision and robotics community, and show that the application of visual motion estimation on robotic platforms in an online fashion is approaching real-world feasibility.
Abstract:
Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately-rigid points on the human body (i.e. the torso and hip). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real-world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to solve the twofold ambiguity over bone direction using a k-nearest neighbour kernel density estimator.
Abstract:
Camera trapping is a scientific survey technique that involves the placement of heat- and motion-sensing automatically triggered cameras in the ecosystem to record images of animals for the purpose of studying wildlife. As technology continues to advance in sophistication, the use of camera trapping is becoming more widespread and is a crucial tool in the study of, and attempts to preserve, various species of animals, particularly those that are internationally endangered. However, whatever their value as an ecological device, camera traps also create a new risk of incidentally and accidentally capturing images of humans who venture into the area under surveillance. This article examines the current legal position in Australia in relation to such unintended invasions of privacy. It considers the current patchwork of statute and common laws that may provide a remedy in such circumstances. It also discusses the position that may prevail should the recommendations of either the Australian Law Reform Commission and/or New South Wales Law Reform Commission be adopted and a statutory cause of action protecting personal privacy be enacted.
Abstract:
Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This model is used to construct a control policy for navigation to a goal region in a terrain map built using an on-board RGB-D camera. The terrain includes flat ground, small rocks, and non-traversable rocks. We report the results of 200 simulated and 35 experimental trials that validate the approach and demonstrate the value of considering control uncertainty in maintaining platform safety.
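Learning an action-outcome model with a Gaussian process, as described above, gives both a predicted outcome and a calibrated uncertainty for each candidate control action. A minimal numpy sketch of GP regression with an RBF kernel (fabricated data and assumed hyperparameters, not the paper's learned model):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length_scale=0.2, noise=1e-4):
    """Posterior mean and standard deviation of a zero-mean GP with an
    RBF kernel -- a minimal sketch of a learned action-outcome model."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = k(x_test, x_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    var = k(x_test, x_test).diagonal() - np.einsum(
        'ij,jk,ik->i', K_s, K_inv, K_s)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Hypothetical training data: commanded vs. achieved forward distance (m)
# on loose terrain (fabricated numbers for illustration):
commanded = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
achieved = np.array([0.09, 0.17, 0.28, 0.33, 0.41])

mean, std = gp_predict(commanded, achieved, np.array([0.25]))
```

A planner can then penalise actions whose predicted outcome has high posterior variance, which is one way "considering control uncertainty" translates into safer trajectories.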
Abstract:
This paper deals with constrained image-based visual servoing of circular and conical spiral motion about an unknown object approximated by a single image point feature. Effective visual control of such trajectories has many applications for small unmanned aerial vehicles, including surveillance and inspection, forced landing (homing), and collision avoidance. A spherical camera model is used to derive a novel visual-predictive controller (VPC) using stability-based design methods for general nonlinear model-predictive control. In particular, a quasi-infinite horizon visual-predictive control scheme is derived. A terminal region, which is used as a constraint in the controller structure, can be used to guide appropriate reference image features for spiral tracking with respect to nominal stability and feasibility. Robustness properties are also discussed with respect to parameter uncertainty and additive noise. A comparison with competing visual-predictive control schemes is made, and some experimental results using a small quadrotor platform are given.
Abstract:
This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images and without any prior knowledge of configuration is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Transferring the network naively to real hardware and real observations failed, but experiments show that the network works when camera images are replaced with synthetic images.
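At the core of DQN training is the Bellman backup: each transition's target value is the reward plus the discounted maximum Q-value of the next state, with bootstrapping disabled on terminal transitions. A generic sketch of that target computation (not the paper's exact training setup):

```python
import numpy as np

def dqn_target(reward, next_q_values, done, gamma=0.99):
    """DQN training target: r + gamma * max_a' Q(s', a'), with the
    bootstrap term zeroed on terminal transitions. The next_q_values
    would come from a target network in a full implementation."""
    return reward + gamma * np.max(next_q_values) * (not done)

# Target-network Q-values for the next state over, say, three joint actions:
target = dqn_target(reward=1.0, next_q_values=np.array([0.2, 0.5, 0.1]),
                    done=False)
terminal = dqn_target(reward=1.0, next_q_values=np.array([0.2, 0.5, 0.1]),
                      done=True)
```

The Q-network itself is then regressed toward these targets by gradient descent on the squared error between predicted and target Q-values for the actions actually taken.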