862 results for Path Planning Under Uncertainty
Abstract:
In this paper, we study a problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS), by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or were worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem as a function of the number of source nodes, and the hop count bound. To the best of our knowledge, the average case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met. (C) 2014 Elsevier B.V. All rights reserved.
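The pruning step described above can be sketched as follows; this is a minimal reading of the algorithm, assuming a hop-count shortest path computation rooted at the BS and a simple greedy removal order. The graph, node names, and the order in which relays are tried are illustrative assumptions, not the authors' exact procedure.

import networkx as nx

def prune_relays(G, bs, sources, relays, hop_bound):
    # Greedy sketch: starting from all candidate relays, drop relays one at a
    # time as long as every source still reaches the BS within hop_bound hops
    # along hop-count shortest paths.
    def feasible(active):
        H = G.subgraph(set(sources) | active | {bs})
        dist = nx.single_source_shortest_path_length(H, bs)
        return all(dist.get(s, float("inf")) <= hop_bound for s in sources)

    active = set(relays)
    if not feasible(active):
        raise ValueError("hop bound infeasible even with all candidate relays")
    for r in list(relays):              # removal order is a heuristic choice
        if feasible(active - {r}):
            active.discard(r)
    return active

# Toy instance: s1 and s2 can reach the BS through r1 directly, or s1 via r2-r3.
G = nx.Graph([("s1", "r1"), ("s2", "r1"), ("r1", "bs"),
              ("s1", "r2"), ("r2", "r3"), ("r3", "bs")])
print(prune_relays(G, "bs", ["s1", "s2"], ["r1", "r2", "r3"], hop_bound=2))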
Abstract:
Unmanned vehicle path following by pursuing a virtual target moving along the path is considered. Limitations of pure pursuit guidance while following the virtual target on curved paths are analyzed. Trajectory shaping guidance is proposed as an alternative guidance scheme for a general curvature path. It is proven that, under certain tenable assumptions, trajectory shaping guidance yields a path identical to that of the virtual target. Linear analysis shows that convergence to the path under trajectory shaping guidance is twice as fast as under pure pursuit. Simulations highlight a significant improvement in position errors when trajectory shaping guidance is used. Comparative simulation studies agree with the analytic findings and show better performance than pure pursuit and a nonlinear guidance methodology from the literature. Experimental validation supports the analytic and simulation studies, as the guidance laws are implemented on a radio-controlled car in a laboratory environment.
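As a rough illustration of the pure pursuit baseline discussed above, the following kinematic sketch steers a unicycle vehicle toward a virtual target moving along a circular path. The speeds, gain, and path are illustrative assumptions, and the trajectory shaping law itself is not reproduced here.

import numpy as np

dt, v, k = 0.01, 1.0, 4.0                  # time step, vehicle speed, heading gain
x, y, th = 0.0, -2.0, 0.0                  # vehicle position and heading
for step in range(3000):
    t = step * dt
    tx, ty = np.cos(0.2 * t), np.sin(0.2 * t)        # virtual target on a unit circle
    los = np.arctan2(ty - y, tx - x)                  # line-of-sight angle to the target
    err = (los - th + np.pi) % (2 * np.pi) - np.pi    # wrapped heading error
    th += k * err * dt                                # pure pursuit: turn toward the target
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
print("distance from the circular path:", abs(np.hypot(x, y) - 1.0))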
Abstract:
In this work, the hypothesis testing problem of spectrum sensing in a cognitive radio is formulated as a Goodness-of-fit test against the general class of noise distributions used in most communications-related applications. A simple, general, and powerful spectrum sensing technique based on the number of weighted zero-crossings in the observations is proposed. For the cases of uniform and exponential weights, an expression for computing the near-optimal detection threshold that meets a given false alarm probability constraint is obtained. The proposed detector is shown to be robust to two commonly encountered types of noise uncertainties, namely, the noise model uncertainty, where the PDF of the noise process is not completely known, and the noise parameter uncertainty, where the parameters associated with the noise PDF are either partially or completely unknown. Simulation results validate our analysis, and illustrate the performance benefits of the proposed technique relative to existing methods, especially in the low SNR regime and in the presence of noise uncertainties.
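A minimal sketch of the uniform-weight case is given below: under H0, i.i.d. noise with a symmetric, zero-median PDF changes sign between consecutive samples with probability 1/2, so the crossing count is approximately Binomial(n-1, 1/2) and a Gaussian quantile gives a threshold for a target false alarm probability. The exact weighting and threshold expression in the paper may differ.

import numpy as np
from scipy.stats import norm

def zero_crossing_detector(x, pfa=0.05):
    # Count sign changes; a deterministic signal component suppresses crossings,
    # so declare a detection when the count falls below the lower-tail threshold.
    n = len(x)
    crossings = np.sum(np.sign(x[1:]) != np.sign(x[:-1]))
    mean, std = 0.5 * (n - 1), 0.5 * np.sqrt(n - 1)
    threshold = mean + std * norm.ppf(pfa)
    return crossings < threshold

rng = np.random.default_rng(0)
noise_only = rng.standard_normal(1000)
noisy_tone = np.sin(0.05 * np.arange(1000)) + 0.5 * rng.standard_normal(1000)
print(zero_crossing_detector(noise_only), zero_crossing_detector(noisy_tone))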
Abstract:
In the context of wireless sensor networks, we are motivated by the design of a tree network spanning a set of source nodes that generate packets, a set of additional relay nodes that only forward packets from the sources, and a data sink. We assume that the paths from the sources to the sink have bounded hop count, that the nodes use the IEEE 802.15.4 CSMA/CA for medium access control, and that there are no hidden terminals. In this setting, starting with a set of simple fixed point equations, we derive explicit conditions on the packet generation rates at the sources, so that the tree network approximately provides certain quality of service (QoS) such as end-to-end delivery probability and mean delay. The structures of our conditions provide insight on the dependence of the network performance on the arrival rate vector, and the topological properties of the tree network. Our numerical experiments suggest that our approximations are able to capture a significant part of the QoS aware throughput region (of a tree network), that is adequate for many sensor network applications. Furthermore, for the special case of equal arrival rates, default backoff parameters, and for a range of values of target QoS, we show that among all path-length-bounded trees (spanning a given set of sources and the data sink) that meet the conditions derived in the paper, a shortest path tree achieves the maximum throughput. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
In this article, we analyze how to evaluate fishery resource management under "ecological uncertainty". In this context, an efficient policy consists of applying a different exploitation rule depending on the state of the resource, and we could say that the stock is always in transition, jumping from one steady state to another. First, we propose a method for calibrating the growth path of the resource so that the observed dynamics of the resource and of the captures are matched. Second, we apply the proposed calibration procedure to two different fishing grounds: the European Anchovy (Division VIII) and the Southern Stock of Hake. Our results show that the role played by uncertainty is essential to the conclusions. For the European Anchovy fishery (Division VIII) we find, in contrast with Del Valle et al. (2001), that this is not an overexploited fishing ground. However, we show that the Southern Stock of Hake is in a dangerous situation. In both cases our results are in accordance with ICES advice.
Abstract:
This paper analyzes the use of artificial neural networks (ANNs) for predicting the received power/path loss in both outdoor and indoor links. The approach followed is a combined use of ANNs and ray-tracing, the latter allowing the identification and parameterization of the so-called dominant path. A complete description of the process for creating and training an ANN-based model is presented, with special emphasis on the training process. More specifically, we discuss various techniques for arriving at valid predictions, focusing on an optimal selection of the training set. A quantitative analysis based on results from two narrowband measurement campaigns, one outdoors and the other indoors, is also presented.
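To make the pipeline concrete, here is a minimal sketch assuming hypothetical dominant-path features (path length, number of wall interactions, diffraction angle) and a synthetic path-loss target; in the paper these would come from the ray tracer and the two measurement campaigns, and the network architecture and training-set selection are exactly what the authors tune.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Columns: path length [m], wall interactions, diffraction angle [deg] (all hypothetical).
X = rng.uniform([5, 0, 0], [200, 6, 90], size=(300, 3))
y = 40 + 25 * np.log10(X[:, 0]) + 3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 2, 300)

model = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(20,),
                                                     max_iter=5000, random_state=0))
model.fit(X[:240], y[:240])        # how the training set is chosen is the paper's focus
mae = np.mean(np.abs(model.predict(X[240:]) - y[240:]))
print(f"held-out MAE: {mae:.1f} dB")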
Abstract:
We study quantum state tomography, entanglement detection and channel noise reconstruction of propagating quantum microwaves via dual-path methods. The presented schemes make use of the following key elements: propagation channels, beam splitters, linear amplifiers and field quadrature detectors. Remarkably, our methods are tolerant to the ubiquitous noise added to the signals by phase-insensitive microwave amplifiers. Furthermore, we analyse our techniques with numerical examples and experimental data, and compare them with the scheme developed in Eichler et al (2011 Phys. Rev. Lett. 106 220503; 2011 Phys. Rev. Lett. 107 113601), based on a single path. Our methods provide key toolbox components that may pave the way towards quantum microwave teleportation and communication protocols.
Abstract:
Over the years, Nigeria has witnessed different governments with different policy measures. Against the negative consequences of past policies, the structural adjustment programme (SAP) was initiated in 1986. Its aim is to effectively alter and restructure the consumption patterns of the economy, as well as to eliminate price distortions and the heavy dependence on oil and on imports of consumer goods and services. Within the period of implementation, there has been a decreasing trend in yearly fish catch landings and sizes, but the reverse in shrimping. There is also a gradual shift from fishing to shrimping, evident from the vessels purchased, with an 83.3% increase in shrimpers from 1985 to 1989. Decreasing fish catch sizes and quantities, aggravated by the present high cost of fishing and coupled with the favourable export market for Nigerian shrimp, tend to drive the shift. This economic situation is the result of the supply-side measures of SAP through the devaluation of the Naira. There is also an overconcentration of vessels in inshore waters, as the majority of the vessels are old and of low power, hence incapable of fishing in the deep sea. The Rotterdam price being paid for automotive gas oil (AGO) by fishing industries is observed to be discriminatory and unhealthy for the growth of the industry, as it is exceedingly high and unstable, thus affecting planning of fishing operations. Fuel alone takes 43% of the total cost of operation. The overall consequence is that fishing days are lost and overhead costs are therefore higher. It was concluded that, for healthy growth and sustainable resources of the marine fishery under the structural adjustment programme, licensing of new fishing vessels should be stopped immediately, and the demand side of SAP should be employed by subsidizing high-powered fishing vessels which can operate effectively in the deep sea.
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
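One way to write such a prediction density, following the interacting-Gaussian-process construction (the exact interaction potential used in the thesis may differ), is

p\big(f^R, f^{1:n} \mid z_{1:t}\big) \;\propto\; \psi\big(f^R, f^{1:n}\big)\, p\big(f^R \mid z^R_{1:t}\big) \prod_{i=1}^{n} p\big(f^i \mid z^i_{1:t}\big),

where each p(f^i | z^i_{1:t}) is an independent Gaussian process posterior over agent i's future trajectory, p(f^R | z^R_{1:t}) is the robot's, and \psi is an interaction potential that decays whenever two trajectories pass close to each other, encoding joint collision avoidance. Navigation then follows a statistic (for example, the mode) of this joint distribution.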
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m^2, while a state of the art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m^2. For inclusive validation purposes, we also show that either our non-interacting planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
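In its simplest form, the OUQ bound referred to above can be stated as follows (a generic statement, not the thesis's exact formulation): if all that is known about the distribution \mu of the uncertain inputs X is that it belongs to an information set \mathcal{A}, then the tightest upper bound on the expected quantity of interest q is

U(q) \;=\; \sup_{\mu \in \mathcal{A}} \; \mathbb{E}_{\mu}\big[q(X)\big],

an optimization over probability measures, which is generally non-convex.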
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
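For reference, the classical Hoeffding bound that these relaxations are compared against states that for independent random variables X_i \in [a_i, b_i] with S = X_1 + \dots + X_n,

\Pr\big(S - \mathbb{E}[S] \ge t\big) \;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right), \qquad t > 0.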
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods, in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
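For concreteness, the stationary Fokker-Planck equation referred to above, for a diffusion dx = a(x)\,dt + B(x)\,dW, requires the stationary density p(x) to satisfy (a standard statement; the thesis's specific approximating families are not reproduced here)

0 \;=\; -\sum_i \frac{\partial}{\partial x_i}\big[a_i(x)\,p(x)\big] \;+\; \frac{1}{2}\sum_{i,j}\frac{\partial^2}{\partial x_i \partial x_j}\Big[\big(B(x)B(x)^{\mathsf T}\big)_{ij}\,p(x)\Big].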
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
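The underlying asymptotic formula is the standard multidimensional Laplace approximation (stated generically, not in the thesis's specific reliability notation): for a smooth g with a unique interior minimizer x^* and Hessian H(x^*) = \nabla^2 g(x^*) \succ 0,

\int_{\mathbb{R}^d} e^{-N g(x)}\,dx \;\approx\; e^{-N g(x^*)} \left(\frac{2\pi}{N}\right)^{d/2} \det\!\big(H(x^*)\big)^{-1/2} \quad \text{as } N \to \infty,

so evaluating the multidimensional integral reduces to minimizing g, which is what makes the method cheap.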
Abstract:
The Hamilton Jacobi Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
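For reference, the reduction mentioned above takes the following standard form (the notation is illustrative): for dynamics dx = f(x)\,dt + G(x)u\,dt + B(x)\,d\omega and cost rate q(x) + \tfrac{1}{2}u^{\mathsf T}Ru, minimizing over u gives u^* = -R^{-1}G^{\mathsf T}\nabla V, and the HJB becomes the second-order nonlinear PDE

-\partial_t V \;=\; q + f^{\mathsf T}\nabla V \;-\; \tfrac{1}{2}\,\nabla V^{\mathsf T} G R^{-1} G^{\mathsf T} \nabla V \;+\; \tfrac{1}{2}\,\mathrm{tr}\!\big(BB^{\mathsf T}\nabla^2 V\big).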
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
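In the form popularized in the path-integral and linearly-solvable control literature (the thesis may use a slightly different normalization), the structural assumption is \lambda\,G R^{-1} G^{\mathsf T} = BB^{\mathsf T} =: \Sigma for some \lambda > 0; the logarithmic transformation V = -\lambda \log \Psi then cancels the quadratic term of the PDE above and leaves an equation that is linear in the desirability \Psi:

\partial_t \Psi \;=\; \frac{q}{\lambda}\,\Psi \;-\; f^{\mathsf T}\nabla\Psi \;-\; \tfrac{1}{2}\,\mathrm{tr}\!\big(\Sigma\,\nabla^2\Psi\big).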
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
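The composition at "essentially zero cost" exploits superposition, a standard consequence of linearity (stated here generically rather than in the thesis's notation): if the \Psi_i solve the linear HJB above with terminal conditions \Psi_i(x,T) = e^{-\phi_i(x)/\lambda}, then for any weights w_i \ge 0,

\Psi(x,t) \;=\; \sum_i w_i\,\Psi_i(x,t)

solves the same linear PDE with terminal cost \phi(x) = -\lambda \log \sum_i w_i e^{-\phi_i(x)/\lambda}, so pre-computed primitives can be combined without re-solving the PDE.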
Abstract:
An extensive range of conventional, vane-type, passive vortex generators (VGs) are in use for successful applications of flow separation control. In most cases, the VG height is designed with the same thickness as the local boundary layer at the VG position. However, in some applications, these conventional VGs may produce excess residual drag. The so-called low-profile VGs can reduce the parasitic drag associated with this kind of passive control device. As suggested by many authors, low-profile VGs can provide enough momentum transfer over a region several times their own height for effective flow-separation control with much lower drag. The main objective of this work is to study the variation of the path and the development of the primary vortex generated by a rectangular VG mounted on a flat plate with five different device heights h = δ, h1 = 0.8δ, h2 = 0.6δ, h3 = 0.4δ and h4 = 0.2δ, where δ is the local boundary layer thickness. For this purpose, computational simulations have been carried out at Reynolds number Re = 1350, based on the height of the conventional VG h = 0.25 m, with the angle of attack of the vane to the oncoming flow β = 18.5°. The results show that the VG scaling significantly affects the vortex trajectory and the peak vorticity generated by the primary vortex.
Abstract:
This report presents the findings and recommendations of a strategic planning mission to reevaluate the feasibility of WorldFish implementing a fish value chain research program in Uganda under the CGIAR Research Program on Livestock and Fish (L&F). The over-arching goal of L&F is to increase productivity of small-scale livestock and fish systems so as to increase availability and affordability of meat, milk and fish for poor consumers and, in doing so, to reduce poverty through greater participation by the poor along animal source food value chains. This will be achieved by making a small number of carefully selected animal source food value chains function better, for example by identifying and addressing key constraints and opportunities (from production to consumption), improving institutional arrangements and capacities, and supporting the establishment of enabling pro-poor policy and institutional environments.