3 results for Pareto optimality
in DRUM (Digital Repository at the University of Maryland)
Abstract:
Unmanned aerial vehicles (UAVs) frequently operate in partially or entirely unknown environments. As the vehicle traverses the environment and detects new obstacles, rapid path replanning is essential to avoid collisions. This thesis presents a new algorithm called Hierarchical D* Lite (HD*), which combines the incremental algorithm D* Lite with a novel hierarchical path planning approach to replan paths sufficiently fast for real-time operation. Unlike current hierarchical planning algorithms, HD* does not require map corrections before planning a new path. Directional cost scale factors, path smoothing, and Catmull-Rom splines are used to ensure the resulting paths are feasible. HD* sacrifices optimality for real-time performance. Its computation time and path quality depend on the map size, obstacle density, sensor range, and any restrictions on planning time. For the most complex scenarios tested, HD* found paths within 10% of optimal in under 35 milliseconds.
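The abstract names Catmull-Rom splines as the smoothing step applied to planned waypoints. As a rough illustration of that standard technique (not the thesis's actual implementation, whose parameters and endpoint handling are not given here), a uniform Catmull-Rom curve can be sampled through a 2-D waypoint list like this:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment at t in [0, 1].

    The curve passes through p1 (at t=0) and p2 (at t=1); p0 and p3
    only shape the tangents at those endpoints.
    """
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def smooth_path(waypoints, samples_per_segment=8):
    """Sample a Catmull-Rom curve through a list of 2-D waypoints."""
    # Duplicate the endpoints so the first and last segments also
    # have four control points.
    pts = [waypoints[0]] + list(waypoints) + [waypoints[-1]]
    path = []
    for i in range(len(pts) - 3):
        for k in range(samples_per_segment):
            path.append(catmull_rom(pts[i], pts[i + 1], pts[i + 2],
                                    pts[i + 3], k / samples_per_segment))
    path.append(waypoints[-1])
    return path
```

Because the spline interpolates (rather than merely approximates) its control points, the smoothed path still visits every planner waypoint, which is why it pairs naturally with a grid-based planner such as D* Lite.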
Abstract:
This dissertation provides a novel theory of securitization based on intermediaries minimizing the moral hazard that insiders can misuse assets held on-balance sheet. The model predicts how intermediaries finance different assets. Under deposit funding, the moral hazard is greatest for low-risk assets that yield sizable returns in bad states of nature; under securitization, it is greatest for high-risk assets that require high guarantees and large reserves. Intermediaries thus securitize low-risk assets. In an extension, I identify a novel channel through which government bailouts exacerbate the moral hazard and reduce total investment irrespective of the funding mode. This adverse effect is stronger under deposit funding, implying that intermediaries finance more risky assets off-balance sheet. The dissertation discusses the implications of different forms of guarantees. With explicit guarantees, banks securitize assets with either low information-intensity or low risk. By contrast, with implicit guarantees, banks only securitize assets with high information-intensity and low risk. Two extensions to the benchmark static and dynamic models are discussed. First, an extension to the static model studies the optimality of tranching versus securitization with guarantees. Tranching eliminates agency costs but worsens adverse selection, while securitization with guarantees does the opposite. When the quality of underlying assets in a certain security market is sufficiently heterogeneous, and when the highest quality assets are perceived to be sufficiently safe, securitization with guarantees dominates tranching. Second, in an extension to the dynamic setting, the moral hazard of misusing assets held on-balance sheet naturally gives rise to the moral hazard of weak ex-post monitoring in securitization. The use of guarantees reduces the dependence of banks' ex-post payoffs on monitoring efforts, thereby weakening monitoring incentives. 
The incentive to monitor under securitization with implicit guarantees is the weakest among all funding modes, as implicit guarantees allow banks to renege on their monitoring promises without being declared bankrupt and punished.
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
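The abstract evaluates methods by plotting a Pareto curve over (cost, accuracy) pairs. As a generic illustration of that evaluation idea (not the dissertation's evaluation code), the Pareto frontier of a set of operating points, where cost is minimized and accuracy is maximized, can be extracted in one sorted pass:

```python
def pareto_frontier(points):
    """Return the Pareto-optimal subset of (cost, accuracy) pairs.

    A point is kept if no other point has both lower-or-equal cost
    and strictly higher accuracy: cost is minimized, accuracy is
    maximized.
    """
    frontier = []
    best_acc = float("-inf")
    # Sort by ascending cost, breaking cost ties by descending accuracy,
    # then keep each point that beats every cheaper point on accuracy.
    for cost, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if acc > best_acc:
            frontier.append((cost, acc))
            best_acc = acc
    return frontier
```

For hypothetical operating points `[(1, 0.60), (3, 0.85), (2, 0.90), (2.5, 0.92)]`, the point `(3, 0.85)` is dominated by `(2, 0.90)` (cheaper and more accurate) and drops off the frontier; comparing methods by their frontiers captures the cost-accuracy trade-off the abstract describes.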