5 results for Global localization problem
in Digital Commons - Michigan Tech
Abstract:
This dissertation investigates high-performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimation in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle). TNs are equipped with omni-directional antennas and communicate with BNs to allow the BNs to localize them; thus, the proposed localization relies on cooperation between BNs and TNs. First, a LOS localization method is proposed, based on semi-distributed multi-node TOA-DOA fusion. The proposed technique is applicable to mobile ad-hoc networks (MANETs). We assume LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in the coordinates of the reference BN. Each BN can independently localize the TNs located in its coverage area. In addition, a TN might be localized by multiple BNs; high-performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare it with global positioning system (GPS)-aided TOA and DOA fusion, which are also applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate-scale MANETs, while GPS-aided TOA fusion is suitable for large-scale MANETs. Because the TOA and DOA of TNs are usually estimated periodically by the BNs, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance. This integration is compared with an extended KF (EKF) applied to the multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the integration is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF when the EKF converges. However, the EKF may diverge while the integration of KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, its computational complexity is much lower than that of the EKF. In wireless environments, LOS might be obstructed, which degrades localization reliability. The antenna array installed at each BN is therefore exploited to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it onto the wireless channel's K-factor; the larger the K-factor, the more likely the channel is LOS. The K-factor is then used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification, and the latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs cooperate to identify NLOS scenarios, to determine and localize shared reflectors, and to localize NLOS TNs. In NLOS scenarios with three or more shared reflectors, the reflectors are localized via DOA fusion, and a TN is then localized via TOA fusion based on the localized shared reflectors.
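The core fusion step can be illustrated with a short sketch. This is a minimal illustration under assumed conditions, not the dissertation's algorithm: each BN converts its range/bearing (TOA/DOA) measurement of a TN into a position estimate, and estimates from multiple BNs are combined by inverse-covariance weighting. The node positions, measurement values, and noise covariances below are invented for the example.

```python
import numpy as np

def toa_doa_estimate(bn_pos, rng, bearing):
    """Single-BN estimate: project the measured range along the
    measured bearing from the BN's known 2-D position."""
    return bn_pos + rng * np.array([np.cos(bearing), np.sin(bearing)])

def fuse_estimates(estimates, covariances):
    """Inverse-covariance (information-weighted) fusion of per-BN
    position estimates of the same TN."""
    info = [np.linalg.inv(P) for P in covariances]
    fused_cov = np.linalg.inv(sum(info))
    fused = fused_cov @ sum(W @ x for W, x in zip(info, estimates))
    return fused, fused_cov

# Two BNs, each with a noisy range/bearing fix on the same TN near (50, 50)
bn1, bn2 = np.array([0.0, 0.0]), np.array([100.0, 0.0])
x1 = toa_doa_estimate(bn1, 70.9, np.deg2rad(45.2))
x2 = toa_doa_estimate(bn2, 71.1, np.deg2rad(134.8))
P1 = P2 = np.diag([4.0, 4.0])   # assumed 2 m std dev per axis
tn_pos, tn_cov = fuse_estimates([x1, x2], [P1, P2])
print(tn_pos)
```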
Abstract:
More than 1 billion people lack access to clean water and proper sanitation. As part of efforts to solve this problem, there is a growing shift from public to private water management, led by The World Bank and the International Monetary Fund (IMF). This shift has inspired much related research: researchers have assessed the perceptions of water privatization held by consumers, government officials, and multinational company agents. This thesis presents the results of a study of nongovernmental organization (NGO) staff perceptions of water privatization. Although NGOs are important actors in sustainable water-related development through water provision, we have little understanding of their perceptions of water privatization and how it impacts their activities. My goal was to fill this gap. I sampled international and national development NGOs with water, sanitation, and hygiene (WASH) foci, and conducted 28 interviews between January and June of 2011 with staff in key positions, including water policy analysts, program officers, and project coordinators. Their perceptions of water privatization were mixed. I also found that local water privatization in most cases does not influence NGO decisions to conduct projects in a region, and that development NGO staff base their beliefs about water privatization on a mix of personal experience and media coverage. My findings have important implications for the WASH sector as we work to solve the worsening global water access crisis.
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many studies have addressed the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have matured into a detection and surveillance paradigm for many real-world applications. Individual sensors are small, so they can be deployed in areas with limited space and make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks face a few physical limitations that can prevent them from performing at their full potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors transmit at the same time, and the limited communication range of individual sensors means the network may not have a 1-hop communication topology, making routing a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inference and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended with network topology manipulation so that it can be applied to tracking under different network topologies. Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks; the results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was also simulated under real-world conditions: 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanned a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm ran on the ultrasonic sensors to track the location of elderly residents.
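The two-tier strategy can be sketched briefly. The following is an illustration of the general idea rather than the thesis's implementation: tier one forms a rough global estimate from binary detections alone, and tier two wakes a dynamic cluster of sensors around that estimate for detailed computation. The network size, detection radius, and cluster radius below are made up for the example.

```python
import numpy as np

def coarse_estimate(sensor_pos, binary_hits):
    """Tier 1: rough global inference from binary data only --
    the centroid of the sensors currently reporting a detection."""
    hits = sensor_pos[binary_hits]
    return hits.mean(axis=0) if len(hits) else None

def dynamic_cluster(sensor_pos, center, radius):
    """Tier 2: select only the sensors near the coarse estimate
    for detailed (e.g. range-based) computations."""
    d = np.linalg.norm(sensor_pos - center, axis=1)
    return np.flatnonzero(d <= radius)

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 30, size=(60, 2))   # randomly deployed network
target = np.array([12.0, 18.0])
hits = np.linalg.norm(sensors - target, axis=1) < 5.0   # binary detections

center = coarse_estimate(sensors, hits)
if center is not None:
    cluster = dynamic_cluster(sensors, center, radius=6.0)
    print(center, cluster)
```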
Abstract:
Because it is very toxic and accumulates in organisms, particularly in fish, mercury is a very important pollutant and one of the most studied. Concern over the toxicity and human health risks of mercury has prompted efforts to regulate anthropogenic emissions. As the mercury pollution problem grows increasingly serious, we ask how serious it will become in the future and how future climate change will affect mercury concentrations in the atmosphere. We therefore investigate the impact of climate change on atmospheric mercury concentrations, focusing on a comparison between model results for the year 2000 and the year 2050. The GEOS-Chem model shows that the concentrations of all three mercury tracers, elemental mercury (Hg(0)), divalent mercury (Hg(II)), and primary particulate mercury (Hg(P)), differ between 2000 and 2050 in most regions of the world. The model results indicate that climate change from 2000 to 2050 would decrease the Hg(0) surface concentration in most of the world; the driving factors of these changes are natural emissions (ocean and vegetation) and the transformation reactions between Hg(0) and Hg(II). Climate change from 2000 to 2050 would increase the Hg(II) surface concentration in most mid-latitude continental regions while decreasing it in most high-latitude regions; the driving factors here are the change in deposition (mainly wet deposition) from 2000 to 2050 and the transformation reactions between Hg(0) and Hg(II). Climate change would likewise increase the Hg(P) concentration in most mid-latitude areas while decreasing it in most high-latitude regions; for the Hg(P) changes, the major driving factor is the change in deposition (mainly wet deposition) from 2000 to 2050.
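The year-by-year comparison at the heart of this study amounts to differencing two model runs. As a sketch only (the file names, variable name, and dimension names below are hypothetical, assuming the GEOS-Chem output has been converted to NetCDF), the 2050-minus-2000 surface difference could be computed with xarray:

```python
import xarray as xr

# Hypothetical inputs: GEOS-Chem output as NetCDF, with a 4-D Hg(0)
# mixing-ratio field named "SpeciesConc_Hg0" on (time, lev, lat, lon).
ds_2000 = xr.open_dataset("hg_2000.nc")
ds_2050 = xr.open_dataset("hg_2050.nc")

# Annual-mean surface concentration for each run (lev=0 assumed surface)
hg0_2000 = ds_2000["SpeciesConc_Hg0"].isel(lev=0).mean(dim="time")
hg0_2050 = ds_2050["SpeciesConc_Hg0"].isel(lev=0).mean(dim="time")

# 2050 minus 2000: negative values mark regions where climate change
# alone would decrease surface Hg(0)
delta = hg0_2050 - hg0_2000
print(float(delta.mean()), float(delta.min()), float(delta.max()))
```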
Abstract:
A camera maps 3-dimensional (3D) world space to a 2-dimensional (2D) image space. In the process it loses depth information, i.e., the distance from the camera focal point to the imaged objects, and it is impossible to recover this information from a single image. However, by using two or more images from different viewing angles this information can be recovered, which in turn can be used to obtain the pose (position and orientation) of the camera. Using this pose, a 3D reconstruction of the imaged objects can be computed. Numerous algorithms have been proposed and implemented to solve this problem; they are commonly called Structure from Motion (SfM). State-of-the-art SfM techniques have been shown to give promising results. However, unlike a Global Positioning System (GPS) or an Inertial Measurement Unit (IMU), which directly give position and orientation respectively, a camera system estimates the pose only after running SfM. This makes the pose obtained from a camera highly sensitive to the captured images and to effects such as low lighting, poor focus, or improper viewing angles. In some applications, for example an Unmanned Aerial Vehicle (UAV) inspecting a bridge or a robot mapping an environment using Simultaneous Localization and Mapping (SLAM), it is often difficult to capture images under ideal conditions. This report examines the use of SfM methods in such applications and the role of combining multiple sensors, viz. sensor fusion, to achieve more accurate and usable position and reconstruction information. The project investigates the role of sensor fusion in accurately estimating the pose of a camera for the application of 3D scene reconstruction. The first set of experiments is conducted in a motion-capture room; these results are taken as ground truth in order to evaluate the strengths and weaknesses of each sensor and to map their coordinate systems. A number of scenarios where SfM fails are then targeted: the pose estimates obtained from SfM are replaced by those obtained from other sensors, and the 3D reconstruction is completed. Quantitative and qualitative comparisons are made between the 3D reconstruction obtained using only a camera and that obtained using the camera along with a LIDAR and/or an IMU. Additionally, the project addresses the performance issues of handling large data sets of high-resolution images by implementing the system on the Superior high-performance computing cluster at Michigan Technological University.
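The SfM step whose failures motivate the sensor fusion can be sketched in a few lines with OpenCV. This is a minimal two-view illustration under stated assumptions, not the report's pipeline: pts1 and pts2 are assumed to be matched keypoints between the two images (N x 2 float arrays) and K the camera intrinsic matrix. The substitution the report investigates would replace the recovered R and t with a pose derived from an IMU or LIDAR before triangulating.

```python
import numpy as np
import cv2

def two_view_pose(pts1, pts2, K):
    """Relative camera pose (rotation R, unit translation t) from
    matched 2-D points in two views, via the essential matrix."""
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

def triangulate(pts1, pts2, K, R, t):
    """3-D reconstruction of the matches given a pose -- whether that
    pose came from SfM or from another sensor."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])                          # second camera pose
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X_h[:3] / X_h[3]).T   # homogeneous -> Euclidean, N x 3
```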