32 results for Path Planning Under Uncertainty
Abstract:
We present a seabed profile estimation and following method for close-proximity inspection of 3D underwater structures using autonomous underwater vehicles (AUVs). The presented method is used to determine a path allowing the AUV to pass its sensors over all points of the target structure, a task known as coverage path planning. Our profile-following method goes beyond traditional seabed following at a safe altitude and exploits the hovering capabilities of recent AUV designs. A range sonar is used to incrementally construct a local probabilistic map representation of the environment, and estimates of the local profile are obtained via linear regression. Two behavior-based controllers use these estimates to perform horizontal and vertical profile following. We build upon these tools to address coverage path planning for 3D underwater structures using a (potentially inaccurate) prior map and following cross-section profiles of the target structure. The feasibility of the proposed method is demonstrated using the GIRONA 500 AUV, both in simulation using synthetic and real-world bathymetric data and in pool trials.
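The local-profile estimate described above can be sketched as an ordinary least-squares line fit over recent sonar hits. This is a minimal illustration of the idea, not the paper's implementation; the point format (horizontal distance, depth) and the sample data are assumptions.

```python
# Minimal sketch: estimate a local seabed profile (slope and intercept) from
# range-sonar hits via ordinary least squares. Data are illustrative only.

def fit_local_profile(points):
    """Least-squares line fit: returns (slope, intercept) for (x, depth) pairs."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    var_x = sum((p[0] - mean_x) ** 2 for p in points)
    cov_xy = sum((p[0] - mean_x) * (p[1] - mean_y) for p in points)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Sonar returns along a sloping seabed section (x in metres, depth in metres).
hits = [(0.0, 10.0), (1.0, 10.5), (2.0, 11.0), (3.0, 11.5)]
slope, intercept = fit_local_profile(hits)
print(slope, intercept)  # 0.5 10.0
```

A profile-following controller would then servo the vehicle's altitude and heading against the fitted line rather than against raw range returns.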
Abstract:
New economic and enterprise needs have increased the interest in, and utility of, grouping methods based on the theory of uncertainty. A fuzzy grouping (clustering) process is a key phase of knowledge acquisition and of reducing the complexity of dealing with different groups of objects. Here, we consider some elements of the theory of affinities and of uncertain pretopology that form a significant support tool for a fuzzy clustering process. A Galois lattice is introduced in order to provide a clearer view of the results. We performed a homogeneous grouping of the economic regions of the Russian Federation and Ukraine. The results give a broad panorama of the regional economic situation of the two countries, as well as key guidelines for decision-making. The mathematical method is highly sensitive to any changes in the regional economies. We thus provide an alternative method for grouping under uncertainty.
Abstract:
We study the relation between the number of firms and price-cost margins under price competition with uncertainty about competitors' costs. We present results of an experiment in which two, three and four identical firms repeatedly interact in this environment. In line with the theoretical prediction, market prices decrease with the number of firms, but on average stay above marginal costs. Pricing is less aggressive in duopolies than in triopolies and tetrapolies. However, independently from the number of firms, pricing is more aggressive than in the theoretical equilibrium. Both the absolute and the relative surpluses increase with the number of firms. Total surplus is close to the equilibrium level, since enhanced consumer surplus through lower prices is counteracted by occasional displacements of the most efficient firm in production.
Abstract:
This paper studies optimal monetary policy in a framework that explicitly accounts for policymakers' uncertainty about the channels of transmission of oil prices into the economy. More specifically, I examine the robust response to the real price of oil that US monetary authorities would have been recommended to implement over the period 1970–2009, had they used the approach proposed by Cogley and Sargent (2005b) to incorporate model uncertainty and learning into policy decisions. In this context, I investigate the extent to which the regulators' changing beliefs over different models of the economy play a role in the policy selection process. The main conclusion of this work is that, in the specific environment under analysis, one of the underlying models dominates the optimal interest rate response to oil prices. This result persists even when alternative assumptions on the model priors change the pattern of the relative posterior probabilities, and can thus be attributed to the presence of model uncertainty itself.
Abstract:
This paper addresses the issue of policy evaluation in a context in which policymakers are uncertain about the effects of oil prices on economic performance. I consider models of the economy inspired by Solow (1980), Blanchard and Gali (2007), Kim and Loungani (1992) and Hamilton (1983, 2005), which incorporate different assumptions on the channels through which oil prices have an impact on economic activity. I first study the characteristics of the model space and I analyze the likelihood of the different specifications. I show that the existence of plausible alternative representations of the economy forces the policymaker to face the problem of model uncertainty. Then, I use the Bayesian approach proposed by Brock, Durlauf and West (2003, 2007) and the minimax approach developed by Hansen and Sargent (2008) to integrate this form of uncertainty into policy evaluation. I find that, in the environment under analysis, the standard Taylor rule is outperformed under a number of criteria by alternative simple rules in which policymakers introduce persistence in the policy instrument and respond to changes in the real price of oil.
Abstract:
In this paper, we study the determinants of political myopia in a rational model of electoral accountability in which the key elements are informational frictions and uncertainty. We build a framework where political ability is ex ante unknown and policy choices are not perfectly observable. On the one hand, elections improve accountability and allow voters to retain well-performing incumbents. On the other, politicians invest too little in costly policies with future returns in an attempt to signal high ability and increase their reelection probability. Contrary to the conventional wisdom, uncertainty reduces political myopia and may, under some conditions, increase social welfare. We use the model to study how political rewards can be set so as to maximise social welfare, and the desirability of imposing a one-term limit on governments. The predictions of our theory are consistent with a number of stylised facts and with a new empirical observation documented in this paper: aggregate uncertainty, measured by economic volatility, is associated with better fiscal discipline in a panel of 20 OECD countries.
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault-tolerant control (FTC). In this paper the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and on refining the uncertainty space of interval parameters. In comparison with other approaches based on adaptive observers, the major advantage of the presented method is that isolation is fast even when taking into account uncertainty in parameters, measurements and model errors, and without requiring a monotonicity assumption. In order to illustrate the proposed approach, a case study of a nonlinear dynamic system is presented.
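The interval-refinement idea behind this kind of method can be illustrated with a toy one-parameter model: bisect the parameter's uncertainty interval and prune any sub-interval whose predicted output cannot match the measurement. The model y = a·u, the bounds, and the tolerance below are assumptions for illustration, not the paper's system.

```python
# Toy sketch of uncertainty-space refinement for interval parameters:
# split the parameter interval and keep only sub-intervals consistent
# with a measurement of the model y = a * u (u > 0 assumed).

def refine(interval, u, y_meas, tol, depth=10):
    """Bisect the parameter interval, pruning parts inconsistent with y_meas."""
    lo, hi = interval
    # Predicted output interval for a in [lo, hi] is [lo*u, hi*u].
    if hi * u < y_meas - tol or lo * u > y_meas + tol:
        return []                      # fully inconsistent: prune
    if depth == 0 or hi - lo < 1e-3:
        return [(lo, hi)]              # small enough: keep
    mid = (lo + hi) / 2
    return (refine((lo, mid), u, y_meas, tol, depth - 1)
            + refine((mid, hi), u, y_meas, tol, depth - 1))

kept = refine((0.5, 2.0), u=1.0, y_meas=1.5, tol=0.1)
lo = min(iv[0] for iv in kept)
hi = max(iv[1] for iv in kept)
print(lo, hi)  # consistent parameter values cluster near the band [1.4, 1.6]
```

A fault is isolated quickly when the refined interval for one candidate fault model stays consistent with the measurements while the intervals of the other candidates are pruned to empty.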
Abstract:
The availability of induced pluripotent stem cells (iPSCs) has created extraordinary opportunities for modeling and perhaps treating human disease. However, all reprogramming protocols used to date involve the use of products of animal origin. Here, we set out to develop a protocol to generate and maintain human iPSCs that would be entirely devoid of xenobiotics. We first developed a xeno-free cell culture medium that supported the long-term propagation of human embryonic stem cells (hESCs) to a similar extent as conventional media containing products of animal origin or a commercially available xeno-free medium. We also derived primary cultures of human dermal fibroblasts under strict xeno-free conditions (XF-HFF), and we show that they can be used both as the cell source for iPSC generation and as autologous feeder cells to support their growth. We also replaced other reagents of animal origin (trypsin, gelatin, Matrigel) with their recombinant equivalents. Finally, we used vesicular stomatitis virus G-pseudotyped retroviral particles expressing a polycistronic construct encoding Oct4, Sox2, Klf4, and GFP to reprogram XF-HFF cells under xeno-free conditions. A total of 10 xeno-free human iPSC lines were generated, which could be continuously passaged in xeno-free conditions and maintained characteristics indistinguishable from hESCs, including colony morphology and growth behavior, expression of pluripotency-associated markers, and pluripotent differentiation ability in vitro and in teratoma assays. Overall, the results presented here demonstrate that human iPSCs can be generated and maintained under strict xeno-free conditions and provide a path to good manufacturing practice (GMP) applicability that should facilitate the clinical translation of iPSC-based therapies.
Abstract:
We compare behavior in modified dictator games with and without role uncertainty. Subjects choose between a selfish action, a costly surplus-creating action (altruistic behavior) and a costly surplus-destroying action (spiteful behavior). While costly surplus-creating actions are the most frequent under role uncertainty (64%), selfish actions become the most frequent without role uncertainty (69%). Also, the frequency of surplus-destroying choices is negligible with role uncertainty (1%) but not so without it (11%). A classification of subjects into four different types of interdependent preferences (Selfish, Social Welfare maximizing, Inequity Averse and Competitive) shows that the use of role uncertainty overestimates the prevalence of Social Welfare maximizing preferences in the subject population (from 74% with role uncertainty to 21% without it) and underestimates Selfish and Inequity Averse preferences. An additional treatment, in which subjects undertake an understanding test before participating in the experiment with role uncertainty, shows that the vast majority of subjects (93%) correctly understand the payoff mechanism with role uncertainty, and yet surplus-creating actions were the most frequent. Our results warn against the use of role uncertainty in experiments that aim to measure the prevalence of interdependent preferences.
Abstract:
To understand whether retailers should consider consumer returns when merchandising, we study how the optimal assortment of a price-taking retailer is influenced by its return policy. The retailer selects its assortment from an exogenous set of horizontally differentiated products. Consumers make purchase and keep/return decisions in nested multinomial logit fashion. Our main finding is that the optimal assortment has a counterintuitive structure for relatively strict return policies: it is optimal to offer a mix of the most popular and most eccentric products when the refund amount is sufficiently low, which can be viewed as a form of risk sharing between the retailer and consumers. In contrast, if the refund is sufficiently high, or when returns are disallowed, the optimal assortment is composed of only the most popular products (a common finding in the literature). We provide preliminary empirical evidence for one of the key drivers of our results: more eccentric products have a higher probability of return conditional on purchase. In light of our analytical findings and managerial insights, we conclude that retailers should take their return policies into account when merchandising.
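The driver named above, that eccentric products are returned more often conditional on purchase, can be illustrated with a toy expected-profit calculation per offered product under two refund levels. All prices, probabilities, and the salvage value below are made-up assumptions, and this simple formula is not the paper's nested-logit model.

```python
# Toy illustration: expected profit of one offered product under a return
# policy with refund r. An "eccentric" product sells less often but is
# returned more often conditional on purchase. Numbers are illustrative.

def expected_profit(price, cost, salvage, refund, p_buy, p_return):
    """Expected profit from one consumer visit for a single offered product."""
    keep = p_buy * (1 - p_return) * (price - cost)
    ret = p_buy * p_return * (price - refund - cost + salvage)
    return keep + ret

popular   = dict(price=10.0, cost=4.0, salvage=2.0, p_buy=0.5, p_return=0.1)
eccentric = dict(price=10.0, cost=4.0, salvage=2.0, p_buy=0.3, p_return=0.5)

for refund in (2.0, 10.0):   # strict (low-refund) vs. full-refund policy
    pop = expected_profit(refund=refund, **popular)
    ecc = expected_profit(refund=refund, **eccentric)
    print(refund, round(pop, 2), round(ecc, 2))
```

With these numbers, raising the refund from 2.0 to 10.0 barely dents the popular product's expected profit but erodes most of the eccentric product's, which matches the intuition that high-return-rate products are attractive to stock only when the consumer bears part of the return risk.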
Abstract:
Individual-specific uncertainty may increase the chances of reform being enacted and sustained. Reform may be more likely to be enacted because a majority of agents might end up losing little from reform and a minority gaining a lot. Under certainty, reform would therefore be rejected, but it may be enacted under uncertainty because those who end up losing believe that they might be among the winners. Reform may be more likely to be sustained because, in a realistic setting, reform will increase the incentives of agents to move into those economic activities that benefit. Agents who respond to these incentives will vote to sustain reform in future elections, even if they would have rejected reform under certainty. These points are made using the trade model of Fernandez and Rodrik (AER, 1991).
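The enactment argument above has a simple arithmetic core: a reform with one big winner and many small losers fails under certainty but passes when each voter only knows the probability of being the winner. The payoff numbers below are made up for illustration and are not taken from the paper.

```python
# Numeric illustration of the individual-specific-uncertainty logic:
# a majority rejects reform when winners are known, yet everyone votes
# for it when identities are unknown. Payoffs are illustrative only.

payoffs = [5.0, -1.0, -1.0]          # one winner, two losers (status quo = 0)

# Under certainty each voter knows their own payoff.
votes_certain = sum(1 for p in payoffs if p > 0)
passes_certain = votes_certain > len(payoffs) / 2

# Under uncertainty every voter faces the same expected payoff.
expected = sum(payoffs) / len(payoffs)
passes_uncertain = expected > 0       # all voters decide identically

print(passes_certain, passes_uncertain)  # False True
```

Note that total surplus is positive (5 − 1 − 1 = 3), so uncertainty here lets an efficient reform pass that certainty would block.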
Abstract:
We study elections in which one party (the strong party) controls a source of political unrest; e.g., this party could instigate riots if it lost the election. We show that the strong party is more likely to win the election when there is less information about its ability to cause unrest. This is because when the weak party is better informed, it can more reliably prevent political unrest by implementing a "centrist" policy. When there is uncertainty over the credibility of the threat, "posturing" by the strong party leads to platform divergence.
Abstract:
A geometrical treatment of the path integral for gauge theories with first-class constraints linear in the momenta is performed. The equivalence of reduced, Polyakov, Faddeev-Popov, and Faddeev path-integral quantization of gauge theories is established. In the process of carrying this out we find a modified version of the original Faddeev-Popov formula which is derived under much more general conditions than the usual one. Throughout this paper we emphasize the fact that we only make use of the information contained in the action for the system, and of the natural geometrical structures derived from it.
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas over the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a 'blind' planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
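The blind-search procedure can be sketched as a breadth-first search with a width-1 novelty pruning rule: a state is expanded only if it makes at least one atom true for the first time in the search. This is a simplified illustration in the spirit of that idea, on a made-up toy domain, not the planner from the paper.

```python
# Sketch of width-1 pruned blind search: breadth-first search that keeps
# only states containing at least one atom never seen before. The toy
# domain (atoms "at-i" on a line of positions) is illustrative only.
from collections import deque

def iw1(initial, goal_atom, successors):
    """BFS that prunes states making no new atom true (width-1 novelty)."""
    seen_atoms = set(initial)
    frontier = deque([(initial, [])])
    while frontier:
        state, plan = frontier.popleft()
        if goal_atom in state:
            return plan
        for action, nxt in successors(state):
            novel = set(nxt) - seen_atoms
            if novel:                      # width-1 novelty test
                seen_atoms |= novel
                frontier.append((nxt, plan + [action]))
    return None                            # no plan within width 1

def successors(state):
    # Single action: moving right increments the position atom.
    i = int(next(a for a in state if a.startswith("at-")).split("-")[1])
    return [("right", frozenset({f"at-{i + 1}"}))]

plan = iw1(frozenset({"at-0"}), "at-3", successors)
print(plan)  # ['right', 'right', 'right']
```

Because every expanded state must contribute a new atom, the number of expansions is bounded by the number of atoms, which is what makes this kind of blind search run in low polynomial time on width-1 problems.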
Abstract:
Annualising work hours (AH) is a means of achieving flexibility in the use of human resources to face the seasonal nature of demand. In Corominas et al. (1), two MILP models are used to solve the problem of planning staff working hours over an annual horizon. The costs due to overtime and to the employment of temporary workers are minimised, and the distribution of working time over the course of the year for each worker, as well as the distribution of working time provided by temporary workers, is regularised. In the aforementioned paper, the following is assumed: (i) the holiday weeks are fixed a priori, and (ii) workers from different categories who are able to perform a specific type of task all have the same efficiency; moreover, the values of the binary variables (and others) in the second model are fixed to those in the first model (thus, in the second model these intervene as constants and not as variables, resulting in an LP model). In the present paper, these assumptions are relaxed and a more general problem is solved. The computational experiment leads to the conclusion that MILP is a technique suited to dealing with the problem.