980 results for position estimation
Abstract:
This paper presents preliminary results toward a strategy for predicting Zenith Tropospheric Delay (ZTD) and relative ZTD (rZTD) between Continuously Operating Reference Stations (CORS) in near real-time. It is anticipated that the predicted ZTD or rZTD can improve network-based Real-Time Kinematic (RTK) performance over long inter-station distances, ultimately enabling a cost-effective method of delivering precise positioning services to sparsely populated regional areas such as Queensland. The research first investigates two ZTD solutions: 1) the post-processed IGS ZTD solution, and 2) a near real-time ZTD solution obtained with the GNSS processing software package (Bernese) deployed for this project. The predictability of the near real-time Bernese solution is analyzed and compared against the post-processed IGS solution, which serves as the benchmark. The predictability analyses were conducted with prediction intervals of 15, 30, 45, and 60 minutes to determine the error as a function of timeliness; the predictability of ZTD and rZTD is characterized by using the previously estimated ZTD as the prediction for the current epoch. The results show that both the ZTD and rZTD prediction errors are random in nature: the standard deviation (STD) grows from a few millimeters to around a centimeter as the prediction interval increases from 15 to 60 minutes. Additionally, rZTD predictability shows very little dependence on baseline length for baselines of up to 1000 kilometers. Finally, comparison of the near real-time Bernese solution with the IGS solution shows a slight degradation in prediction accuracy: the less accurate near real-time solution has an STD error of 1 cm for prediction delays of up to 50 minutes, although occasional larger errors of up to 10 cm are observed.
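For illustration, here is a minimal sketch of the persistence predictor described above (the previously estimated ZTD reused as the forecast for the current epoch) and the STD of the resulting errors per prediction interval. The 5-minute epoch spacing and the synthetic random-walk series are assumptions for the demo, not values from the paper.

```python
import numpy as np

def persistence_prediction_errors(ztd, lag_epochs):
    """Use the ZTD estimated `lag_epochs` earlier as the prediction for
    the current epoch, and return the resulting prediction errors."""
    predicted = ztd[:-lag_epochs]   # ZTD(t - lag) reused as the forecast
    actual = ztd[lag_epochs:]       # ZTD(t)
    return actual - predicted

# Synthetic random-walk ZTD series, one value per assumed 5-minute epoch (metres).
rng = np.random.default_rng(0)
ztd = 2.4 + np.cumsum(rng.normal(0.0, 0.001, 288))
for minutes, lag in [(15, 3), (30, 6), (45, 9), (60, 12)]:
    err = persistence_prediction_errors(ztd, lag)
    print(f"{minutes:2d} min latency: STD = {np.std(err) * 1000.0:.1f} mm")
```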
Abstract:
The development of an automated system for the quality assessment of aerodrome ground lighting (AGL), in accordance with the associated standards and recommendations, is presented. The system is composed of an image sensor placed inside the cockpit of an aircraft to record images of the AGL during a normal descent to an aerodrome. A model-based methodology is used to find the optimum match between a template of the AGL and the actual image data, in order to calculate the position and orientation of the camera at the instant each image was acquired. The camera position and orientation data are used, along with the pixel grey level of each imaged luminaire, to estimate a value for the luminous intensity of that luminaire. This can then be compared with the expected brightness to ensure the luminaire is operating to the required standards, yielding a metric for the quality of the AGL pattern. Experiments on real image data are presented to demonstrate the application and effectiveness of the system.
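The intensity-estimation step can be illustrated with a simple inverse-square radiometric model; the function below and its calibration constant `k_cal` are hypothetical stand-ins for the paper's full camera model, which the abstract does not specify.

```python
import numpy as np

def estimate_intensity(grey_level, camera_pos, luminaire_pos, k_cal):
    """Invert a simple inverse-square radiometric model:
    grey_level ~= k_cal * I / d**2, hence I ~= grey_level * d**2 / k_cal.
    `k_cal` is an assumed calibration constant standing in for the full
    sensor/optics model."""
    d = np.linalg.norm(np.asarray(camera_pos) - np.asarray(luminaire_pos))
    return grey_level * d**2 / k_cal

# A luminaire ~830 m away imaged at grey level 142, with an assumed k_cal:
print(estimate_intensity(142.0, (0.0, 0.0, 60.0), (820.0, 150.0, 0.0), 4.1e-3))
```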
Abstract:
This paper proposes an online sensorless rotor position estimation technique for switched reluctance motors (SRMs) that uses just one current sensor. This is achieved by first decoupling the excitation current from the bus current. Two phase-shifted pulse-width-modulation signals are injected into the relevant lower transistors of the asymmetrical half-bridge converter for short intervals during each current fundamental cycle. Analog-to-digital converters are triggered at the midpoints of the pauses between the two pulses to separate the bus current for excitation-current recognition. The rotor position is then estimated from the excitation current: by a current-rise-time method in the current-chopping-control mode at low speed, and by a current-gradient method in the voltage-pulse-control mode at high speed. The proposed scheme requires only a bus current sensor and a minor change to the converter circuit, without the need for individual phase current sensors or additional detection devices, yielding a more compact and cost-effective drive. The performance of the sensorless SRM drive is fully investigated, and simulations and experiments on a 750-W, three-phase, 12/8-pole SRM verify the effectiveness of the proposed scheme.
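A minimal sketch of the current-rise-time idea at low speed: neglecting resistance and back-EMF, the chopping-current rise time reflects the position-dependent incremental inductance, which can be inverted through a stored magnetization characteristic. The lookup table values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

def position_from_rise_time(dt_rise, v_bus, di_band, theta_grid, L_grid):
    """Estimate rotor angle from the chopping-current rise time.
    With di/dt ~= V / L at low speed, the incremental inductance is
    L ~= v_bus * dt_rise / di_band; the angle follows by inverting a
    stored (assumed) L(theta) characteristic on its rising segment."""
    L_est = v_bus * dt_rise / di_band
    return np.interp(L_est, L_grid, theta_grid)

# Assumed 12/8-pole SRM data: inductance rising from 2 mH to 18 mH over
# 0..22.5 mechanical degrees (one rotor period is 45 degrees).
theta = np.linspace(0.0, 22.5, 46)
L = np.linspace(2e-3, 18e-3, 46)
print(position_from_rise_time(dt_rise=35e-6, v_bus=310.0, di_band=1.0,
                              theta_grid=theta, L_grid=L))
```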
Abstract:
We investigated the relative importance of vision and proprioception in estimating target and hand locations in a dynamic environment. Subjects performed a position estimation task in which a target moved horizontally on a screen at a constant velocity and then disappeared. They were asked to estimate the position of the invisible target under two conditions: passively observing and manually tracking. The tracking trials included three visual conditions with a cursor representing the hand position: always visible, disappearing simultaneously with target disappearance, and always invisible. The target’s invisible displacement was systematically underestimated during passive observation. In active conditions, tracking with the visible cursor significantly decreased the extent of underestimation. Tracking of the invisible target became much more accurate under this condition and was not affected by cursor disappearance. In a second experiment, subjects were asked to judge the position of their unseen hand instead of the target during tracking movements. Invisible hand displacements were also underestimated when compared with the actual displacement. Continuous or brief presentation of the cursor reduced the extent of underestimation. These results suggest that vision–proprioception interactions are critical for representing exact target–hand spatial relationships, and that such sensorimotor representation of hand kinematics serves a cognitive function in predicting target position. We propose a hypothesis that the central nervous system can utilize information derived from proprioception and/or efference copy for sensorimotor prediction of dynamic target and hand positions, but that effective use of this information for conscious estimation requires that it be presented in a form that corresponds to that used for the estimations.
Abstract:
The main objective of this thesis is to provide Unmanned Aerial Vehicles (UAVs) with an additional vision-based source of information, extracted by cameras located either on-board or on the ground, in order to allow UAVs to perform visually guided tasks such as landing or inspection, especially in situations where GPS information is not available, where GPS-based position estimation is not accurate enough for the task at hand, or where payload restrictions do not allow the incorporation of additional on-board sensors.
This thesis covers three of the main computer vision areas: visual tracking and visual pose estimation, which together form the basis of the third, visual servoing, which in this work focuses on using visual information to control UAVs. In this sense, the thesis presents novel solutions to the problem of tracking objects with cameras on board UAVs, to estimating the pose of UAVs from visual information collected by cameras located either on the ground or on-board, and to applying these techniques to different problems, such as visual tracking for aerial refuelling or vision-based landing, among others. The computer vision techniques presented in this thesis address problems frequently encountered in vision-based UAV tasks, such as obtaining robust estimations at real-time frame rates, and problems caused by vibrations or 3D motion. All the proposed algorithms have been tested with real-image data in on-line and off-line tests. Different evaluation mechanisms have been used to analyze their performance, including simulated data, images from real-flight tests, publicly available datasets, manually generated ground-truth data, accurate position estimates from a VICON system and a robotic cell, and comparison with state-of-the-art algorithms. Results show that the proposed computer vision algorithms perform comparably to, or even better than, state-of-the-art algorithms, obtaining robust estimations at real-time frame rates. This demonstrates that the proposed techniques are fast enough for vision-based control tasks, with low computational overheads, and that their performance is appropriate for the different applications explored: aerial refuelling, landing, and state estimation.
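As a generic illustration of vision-based UAV pose estimation from a ground camera (not the thesis's own algorithms), the sketch below recovers the vehicle pose from known airframe points via OpenCV's PnP solver; all marker coordinates, pixel detections, and intrinsics are assumed values.

```python
import numpy as np
import cv2

# Assumed 3D positions of four markers on the UAV airframe (body frame, metres).
object_pts = np.array([[-0.3, -0.3, 0.0], [0.3, -0.3, 0.0],
                       [0.3, 0.3, 0.0], [-0.3, 0.3, 0.0]], dtype=np.float64)
# Their (assumed) detected pixel locations in the ground camera image.
image_pts = np.array([[412.0, 310.0], [508.0, 306.0],
                      [512.0, 402.0], [408.0, 398.0]], dtype=np.float64)
# Assumed pinhole intrinsics with no lens distortion.
K = np.array([[800.0, 0.0, 480.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation: body frame -> camera frame
    print(tvec.ravel())          # UAV origin expressed in the camera frame
```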
Abstract:
With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. The generation of audible noise and associated vibration modes is characteristic of all electric motors, and it is especially problematic in low-speed sensorless rotary actuation applications that use the high-frequency voltage injection technique. This dissertation is aimed at optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time. A neural-network-based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created from noise measurements taken in a purpose-built chamber. The noise model is then integrated into the high-frequency-injection sensorless control scheme so that appropriate tradeoffs and mitigation techniques can be devised, improving position estimation and control performance while keeping the noise below a specified level. Genetic algorithms were used to incorporate the noise optimization parameters into the control algorithm. A novel wavelet-based filtering approach is proposed for the sensorless control algorithm at low speed. This filter can extract the position information at low injection voltages where conventional filters fail, and can therefore be used in practice to reduce the injected voltage, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and improve position estimation performance. The results obtained represent original contributions that can help in choosing optimal parameters for sensorless control algorithms in many practical applications.
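The wavelet-filtering idea can be sketched as follows: decompose the measured current, keep only the detail level whose dyadic band contains the injected carrier, and reconstruct. This is a minimal illustration using PyWavelets with assumed sampling and carrier frequencies, not the dissertation's exact filter design.

```python
import numpy as np
import pywt

def extract_carrier_component(current, fs, f_carrier, wavelet="db8"):
    """Keep only the wavelet detail level whose dyadic frequency band
    contains the injected carrier, discarding the fundamental current
    (approximation) and higher-frequency PWM noise."""
    max_level = pywt.dwt_max_level(len(current), pywt.Wavelet(wavelet).dec_len)
    coeffs = pywt.wavedec(current, wavelet, level=max_level)
    # Detail level j spans roughly [fs / 2**(j+1), fs / 2**j].
    keep = next(j for j in range(1, max_level + 1)
                if fs / 2**(j + 1) <= f_carrier <= fs / 2**j)
    filtered = [c if i == len(coeffs) - keep else np.zeros_like(c)
                for i, c in enumerate(coeffs)]
    return pywt.waverec(filtered, wavelet)[:len(current)]

# Synthetic phase current: 20 Hz fundamental plus a weak 1 kHz carrier.
fs, t = 10_000, np.arange(0.0, 0.2, 1e-4)
i_meas = 5.0 * np.sin(2 * np.pi * 20 * t) + 0.05 * np.sin(2 * np.pi * 1000 * t)
carrier = extract_carrier_component(i_meas, fs, 1000.0)
```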
Abstract:
In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP on the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses with the exoskeleton forward kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy remains adequate for RAR.
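The combination of marker poses and forward kinematics reduces to chaining homogeneous transforms; here is a minimal sketch under assumed frame names (the fixed marker-to-GH offset and the Euler convention are illustrative choices, not the paper's formulation).

```python
import numpy as np

def gh_pose_in_lab(T_lab_base, T_base_cam_fk, T_cam_marker, T_marker_gh):
    """Chain homogeneous transforms: exoskeleton forward kinematics gives
    the camera mount pose (T_base_cam_fk), the MOCAP gives the marker
    pose in the camera frame, and an assumed fixed offset relates the
    marker cluster to the glenohumeral joint frame."""
    return T_lab_base @ T_base_cam_fk @ T_cam_marker @ T_marker_gh

def zyx_euler_from_rotation(R):
    """Extract intrinsic Z-Y-X Euler angles (one common joint-angle
    convention) from a rotation matrix, in degrees."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

# Identity example: with all frames aligned, all angles are zero.
I4 = np.eye(4)
T = gh_pose_in_lab(I4, I4, I4, I4)
print(zyx_euler_from_rotation(T[:3, :3]))
```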
Abstract:
This paper studies receiver autonomous integrity monitoring (RAIM) algorithms and the performance benefits of RTK solutions with multiple constellations. The proposed method is referred to as multi-constellation RAIM (McRAIM). The McRAIM algorithms take advantage of the ambiguity-invariant character to assist fast identification of multiple satellite faults in the context of multiple constellations, and then detect faulty satellites in the follow-up ambiguity search and position estimation processes. The concept of a Virtual Galileo Constellation (VGC) is used to generate useful dual-constellation data sets for performance analysis. Experimental results from a 24-h data set demonstrate that, with GPS and VGC constellations, McRAIM can significantly enhance the detection and exclusion probabilities for two simultaneously faulty satellites in RTK solutions.
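For context, the classic snapshot RAIM test that McRAIM builds upon can be sketched as below; this is the generic residual-based chi-square detector, not the McRAIM algorithm itself, and the geometry and fault magnitude in the example are fabricated for the demo.

```python
import numpy as np
from scipy.stats import chi2

def raim_detect(G, y, sigma, p_fa=1e-3):
    """Snapshot residual-based RAIM: least-squares solve, then compare
    the normalized sum of squared residuals against a chi-square
    threshold with (m - n) degrees of freedom."""
    m, n = G.shape
    x, *_ = np.linalg.lstsq(G, y, rcond=None)
    r = y - G @ x
    test_stat = float(r @ r) / sigma**2
    threshold = chi2.ppf(1.0 - p_fa, df=m - n)
    return x, test_stat, bool(test_stat > threshold)

# Fabricated 10-satellite geometry with a 20 m fault on one pseudorange:
rng = np.random.default_rng(1)
u = rng.normal(size=(10, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
G = np.hstack([u, np.ones((10, 1))])          # unit vectors + clock column
y = rng.normal(0.0, 0.5, 10); y[3] += 20.0    # biased residual vector
x, T, alarm = raim_detect(G, y, sigma=0.5)
print(T, alarm)                               # large statistic -> alarm
```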
Abstract:
In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a large-scale continuously observing network. To describe these problems, a general linear equation system is presented that unifies the various geometry-free, geometry-based, and geometry-constrained TCAR models, along with the state-transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions using single- and multiple-epoch measurements. The various ambiguity resolution (AR) solutions differ in their float ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational models and statistical assumptions. TCAR performance benefits outlined in data analyses from the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
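As one concrete instance of geometry-free rounding, the Melbourne-Wübbena combination estimates a wide-lane ambiguity directly from phase and code of a frequency pair; the simulated noise-free observations below are assumptions for the demo, not data from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def mw_widelane_ambiguity(phi_i, phi_j, p_i, p_j, f_i, f_j):
    """Melbourne-Wubbena rounding for the (i, j) wide-lane ambiguity:
    carrier phases in cycles, code ranges in metres. Geometry, clocks,
    and first-order ionosphere cancel in this combination."""
    lam_wl = C / (f_i - f_j)                        # wide-lane wavelength, m
    p_nl = (f_i * p_i + f_j * p_j) / (f_i + f_j)    # narrow-lane code, m
    float_amb = (phi_i - phi_j) - p_nl / lam_wl
    return int(np.round(float_amb)), float_amb

# Simulated noise-free GPS L2/L5 observations for one satellite:
f2, f5 = 1227.60e6, 1176.45e6
rho = 2.2e7                            # geometric range, m
n2, n5 = 11_234_567, 10_765_432        # integer ambiguities, cycles
phi2, phi5 = rho * f2 / C + n2, rho * f5 / C + n5
n_wl, n_float = mw_widelane_ambiguity(phi2, phi5, rho, rho, f2, f5)
print(n_wl == n2 - n5)                 # True: wide-lane integer recovered
```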
Abstract:
Position estimation for planetary rovers has typically been limited to odometry based on proprioceptive measurements, such as the integration of distance traveled and measurement of heading change. Here we present and compare two methods of online visual odometry suited for planetary rovers. Both methods use omnidirectional imagery to estimate the motion of the rover. One method is based on robust estimation of optical flow and subsequent integration of the flow. The second method is a full structure-from-motion solution. To make the comparison meaningful, we use the same set of raw corresponding visual features for each method. The dataset is a sequence of 2000 images taken during a field experiment in the Atacama desert, for which high-resolution GPS ground truth is available.
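A minimal sketch of the optical-flow front end shared by such pipelines: pyramidal Lucas-Kanade tracking between consecutive frames. This is a generic illustration with assumed parameters, not the paper's robust-estimation method for omnidirectional imagery.

```python
import cv2

def track_flow(prev_gray, curr_gray, prev_pts):
    """Pyramidal Lucas-Kanade tracking between consecutive grayscale
    frames; returns the matched point pairs that tracked successfully."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

# Usage sketch: detect corners once, then track frame to frame and
# integrate the resulting flow to estimate rover motion.
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 8)
# p0, p1 = track_flow(prev_gray, curr_gray, prev_pts)
```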
Abstract:
This paper presents a shared autonomy control scheme for a quadcopter suited to inspection of vertical infrastructure: tall man-made structures such as streetlights, electricity poles, or the exterior surfaces of buildings. Current approaches to inspecting such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with the ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot into difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF), for instance up/down, left/right, and toward/away with respect to the infrastructure. We therefore avoid the problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadrotor with respect to a vertical pole; robot performance is evaluated by comparison with motion capture data.
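The task-frame command mapping can be illustrated with a simple sketch: operator inputs drive the lateral and vertical axes directly, while the toward/away axis is regulated to a stand-off distance by a proportional term. The gains and limits are hypothetical, and this is not the paper's published control law.

```python
import numpy as np

def shared_autonomy_velocity(d_meas, d_ref, op_cmd, kp=0.8, v_max=0.5):
    """Map operator commands in the pole-relative task frame to a body
    velocity setpoint. op_cmd = (lateral, vertical) in m/s; the
    toward/away axis holds an assumed stand-off distance d_ref (m)."""
    v_toward = np.clip(kp * (d_meas - d_ref), -v_max, v_max)
    lateral, vertical = op_cmd
    return np.array([v_toward,
                     np.clip(lateral, -v_max, v_max),
                     np.clip(vertical, -v_max, v_max)])

# 0.6 m too far from the pole, operator sliding left and slightly down:
print(shared_autonomy_velocity(d_meas=2.6, d_ref=2.0, op_cmd=(0.2, -0.1)))
```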
Abstract:
In this paper, we propose a vision-based mobile robot localization strategy. Local scale-invariant features are used as natural landmarks in an unstructured and unmodified environment. The local character of the features we use proves to be robust to occlusion and outliers, and their invariance to viewpoint change makes them suitable landmarks for mobile robot localization. Scale-invariant features detected during the first exploration are indexed into a location database. Indexing and voting allow efficient recognition for global localization. The localization result is verified by the epipolar geometry between the representative view in the database and the view to be localized, decreasing the probability of false localization. The localization system can recover the pose of the camera mounted on the robot by essential matrix decomposition, from which the position of the robot is easily computed. Both calibrated and uncalibrated cases are discussed, and relative position estimation based on a calibrated camera proves to be the better choice. Experimental results show that our approach is effective and reliable in the presence of illumination changes, similarity transformations, and extraneous features. © 2004 IEEE.
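The calibrated relative-pose step can be sketched with OpenCV's essential-matrix routines; the function names `pts_db` and `pts_query` denote assumed Nx2 float arrays of matched feature coordinates, and the RANSAC threshold is an illustrative choice.

```python
import cv2

def relative_pose_from_matches(pts_db, pts_query, K):
    """Recover the relative camera pose between the stored database view
    and the current view from matched feature points, via essential-
    matrix estimation and decomposition (calibrated case). The
    translation t is recovered only up to scale."""
    E, inliers = cv2.findEssentialMat(pts_db, pts_query, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_db, pts_query, K, mask=inliers)
    return R, t, inliers
```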