884 results for Optimal control problem
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, or the "optimal" solution V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as empirical studies and practical experience suggest that models giving tighter bounds also lead to better controls (better in the sense that they generate more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly-coupled dynamic programming.
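For context on the kind of heuristic the abstract compares against, the following is a minimal sketch of the standard DLP formulation (maximize expected fare revenue over static allocations subject to leg capacities and mean demands), solved with scipy; the incidence matrix, fares, capacities, and demand figures are made-up placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# DLP for network revenue management (sketch):
#   max  sum_j fare_j * y_j
#   s.t. A @ y <= capacity    (seat capacity on each flight leg)
#        0 <= y <= E[demand]  (expected demand per itinerary/fare product)
# All numbers below are illustrative placeholders.
fares = np.array([300.0, 150.0, 500.0])      # revenue per accepted booking
mean_demand = np.array([40.0, 80.0, 25.0])   # expected demand per product
A = np.array([[1, 1, 0],                     # product-to-leg incidence matrix
              [0, 1, 1]])
capacity = np.array([100.0, 90.0])

# linprog minimizes, so negate the fares to maximize revenue.
res = linprog(c=-fares,
              A_ub=A, b_ub=capacity,
              bounds=list(zip(np.zeros_like(mean_demand), mean_demand)),
              method="highs")

print("DLP upper bound on expected revenue:", -res.fun)
print("Partitioned allocations y*:", res.x)
```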
Abstract:
In this paper, we focus on the problem created by asymmetric information about the enforcer's (agent's) costs associated with enforcement expenditure. This adverse selection problem affects optimal law enforcement because a low-cost enforcer may conceal its information by imitating a high-cost enforcer, and must then be given compensation to be induced to reveal its true costs. The government faces a trade-off between minimizing the enforcer's compensation and maximizing the net surplus of harmful acts. As a consequence, the probability of apprehension and punishment is usually reduced, leading to more offenses being committed. We show that asymmetry of information does not affect law enforcement as long as raising public funds is costless. Taking into account the cost of raising public funds makes it possible to establish a positive correlation between the asymmetry of information between the government and enforcers and the crime rate.
Abstract:
Background. New recommendations for rabies postexposure prophylaxis (PEP) were published by the Centers for Disease Control and Prevention and the World Health Organization in 2010. In view of these new recommendations, we investigated the adequacy of rabies PEP among patients consulting our travel clinic. Methods. A retrospective analysis of the files of all patients who consulted for rabies PEP at the Travel Clinic of the University Hospital in Lausanne, Switzerland, between January 2005 and August 2011 was conducted. Results. A total of 110 patients who received rabies PEP were identified. The median age of the patients was 34 years (range, 2-79 years), and 53% were women. Ninety subjects were potentially exposed to rabies while travelling abroad. Shortcomings in the management of these patients were (1) late initiation of rabies PEP in travelers who waited to seek medical care until returning to Switzerland, (2) administration of human rabies immunoglobulin (HRIG) to only 7 of 50 travelers (14%) who sought care abroad and for whom HRIG was indicated, and (3) antibody levels <0.5 IU/mL in 6 of 90 patients (6.7%) after 4 doses of vaccine. Conclusions. Patients do not always receive optimal rabies PEP under real-life conditions. A significant proportion of patients did not develop adequate antibody levels after 4 doses of vaccine. These data indicate that the measurement of antibody levels on day 21 of the Essen PEP regimen is useful in order to verify an adequate immune response.
Abstract:
One of the typical regulation problems in the field of industrial automation is controlling the linear infeed speed of wire onto a spool: as more material accumulates, the same rotational speed of the spool produces a notably higher linear infeed speed, and this mismatch must be compensated automatically to keep the infeed speed constant. This speed-regulation problem is very common, and difficult to control, in industries that wind some kind of material, such as cabling, wire, paper, sheet metal, tubes, etc. The two main challenges and objectives are, first, regulating the rotational speed of the spool to obtain a constant linear speed of the incoming wire, and second, guiding the wire feed onto the spool to achieve a uniform distribution of each layer of wire. The development consists of the automation and control of an automatic winder through the configuration and programming of PLCs, servomotors and encoders. Finally, a practical assembly is mounted on a test bench to verify and simulate its correct operation, which must solve these speed-regulation problems. As final conclusions, the objectives were achieved, together with a methodology for regulating the rotational speed of spools using pulse-driven servomotor drives, and in terms of knowledge I have mastered the applications of this type of drive in mechanical constructions.
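The compensation described above follows from the relation v = ω·r between linear infeed speed, angular speed, and the current coil radius. Below is a minimal sketch of that setpoint calculation; the radius-growth model (one wire diameter per completed layer) and the parameter names are illustrative assumptions, not taken from the project.

```python
import math

def spool_speed_setpoint(v_target_m_s: float, core_radius_m: float,
                         wire_diameter_m: float, completed_layers: int) -> float:
    """Return the spool angular-speed setpoint (rad/s) that keeps the linear
    infeed speed constant as wound layers increase the effective radius.
    Assumes the radius grows by one wire diameter per completed layer."""
    current_radius = core_radius_m + completed_layers * wire_diameter_m
    return v_target_m_s / current_radius  # from v = omega * r

# Example: 2 m/s infeed, 50 mm core, 1 mm wire, after 10 completed layers.
omega = spool_speed_setpoint(2.0, 0.050, 0.001, 10)
print(f"setpoint: {omega:.2f} rad/s ({omega * 60 / (2 * math.pi):.1f} rpm)")
```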
Abstract:
Development of the mathematical models needed to optimally control the microgrid at the laboratories of the Institut de Recerca en Energia de Catalunya. The algorithms will be implemented to simulate its behaviour and will then be programmed directly onto the elements of the microgrid to verify their correct operation.
Abstract:
Blood pressure is poorly controlled in most European countries, and the control rate is even lower in high-risk patients such as those with chronic kidney disease, diabetes or previous coronary heart disease. Several factors have been associated with poor control, some involving the characteristics of the patients themselves, such as socioeconomic factors or unsuitable lifestyles, others related to the hypertension itself or to associated comorbidity, but there are also factors directly associated with antihypertensive therapy, mainly involving adherence problems, therapeutic inertia and therapeutic strategies unsuited to difficult-to-control hypertensive patients. It is common knowledge that only 30% of hypertensive patients can be controlled using monotherapy; all the rest require a combination of two or more antihypertensive drugs, and this can be a barrier to good adherence and long-term persistence in patients who also often need other drugs, such as antidiabetic agents, statins or antiplatelet agents. The fixed combinations of three antihypertensive agents currently available can facilitate long-term control of these patients in clinical practice. If well tolerated, a long-term therapeutic regimen that includes a diuretic, an ACE inhibitor or an angiotensin receptor blocker, and a calcium channel blocker is the recommended optimal triple therapy.
Abstract:
Some methadone maintenance treatment (MMT) programs prescribe inadequate daily methadone doses. Patients complain of withdrawal symptoms and continue illicit opioid use, yet practitioners are reluctant to increase doses above certain arbitrary thresholds. Serum methadone levels (SMLs) may guide practitioners' dosing decisions, especially for those patients who have low SMLs despite higher methadone doses. Such variation is due in part to the complexities of methadone metabolism. The medication itself is a racemic (50:50) mixture of 2 enantiomers: an active "R" form and an essentially inactive "S" form. Methadone is metabolized primarily in the liver, by up to five cytochrome P450 isoforms, and individual differences in enzyme activity help explain the wide range of active R-enantiomer concentrations in patients given identical doses of racemic methadone. Most clinical research studies have used methadone doses of less than 100 mg/day [d] and have not reported corresponding SMLs. New research suggests that doses ranging from 120 mg/d to more than 700 mg/d, with correspondingly higher SMLs, may be optimal for many patients. Each patient presents a unique clinical challenge, and there is no way of prescribing a single best methadone dose to achieve a specific blood level as a "gold standard" for all patients. Clinical signs and patient-reported symptoms of abstinence syndrome, and continuing illicit opioid use, are effective indicators of dose inadequacy. There does not appear to be a maximum daily dose limit when determining what is adequately "enough" methadone in MMT.
Abstract:
This paper analyzes the issue of the interiority of the optimal population growth rate in a two-period overlapping generations model with endogenous fertility. Using Cobb-Douglas utility and production functions, we show that the introduction of a cost of raising children allows for the possibility of the existence of an interior global maximum in the planner's problem, contrary to the exogenous fertility case.
Abstract:
Drug use is a preventable behavior; drug addiction is a treatable disease; and a balanced approach of proven and promising prevention, treatment and enforcement is required to protect Iowans from drugs now and in the future. Drug abuse itself is a two-faceted problem, affected by both the available supply of and the demand for illegal drugs and other substances of abuse. Any strategy dealing with both the supply of and demand for drugs of abuse must be three-fold and involve these coordinated components: 1) Prevention strategies to discourage the initial human demand for drugs, 2) Treatment for those who already abuse or are addicted to drugs, in order to halt their drug-seeking behavior, and 3) Law enforcement actions to decrease the supply of illegal drugs and bring to treatment those who otherwise would not seek help. It is with these three approaches in mind that the Governor's Office of Drug Control Policy presents the 2012 Iowa Drug Control Strategy.
Mark J. Schouten, Director, Governor's Office of Drug Control Policy
Abstract:
Executive Summary
The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model addressing some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose yields a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
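As an illustration of the comparison described in this abstract, here is a minimal sketch of the two checks: a two-sample Kolmogorov-Smirnov test and a pointwise comparison of absolute Lorenz curves (cumulative expected shortfalls). The return series are random placeholders, not the thesis data.

```python
import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz(returns: np.ndarray) -> np.ndarray:
    """Absolute Lorenz curve: cumulative sums of sorted returns divided by n,
    i.e. expected shortfalls accumulated up to each empirical quantile."""
    return np.cumsum(np.sort(returns)) / len(returns)

rng = np.random.default_rng(0)
aggregated = rng.normal(0.08, 0.10, 1000)       # placeholder realized returns
single_measure = rng.normal(0.05, 0.12, 1000)   # placeholder comparison returns

# Are the two return distributions different at all?
stat, p_value = ks_2samp(aggregated, single_measure)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")

# Second-order stochastic dominance check (equal sample sizes): the dominating
# distribution's absolute Lorenz curve must lie weakly above the other pointwise.
dominates = np.all(absolute_lorenz(aggregated) >= absolute_lorenz(single_measure))
print("aggregated portfolio SSD-dominates single-measure portfolio:", dominates)
```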
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
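A minimal sketch of the residual-based control idea described above (iterate the pressure solve only while the pressure-equation residual exceeds a tolerance, starting from the previous timestep's solution) is given below; the solver interface, parameter names, and the toy fixed-point "solver" are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adaptive_pressure_solve(solve_msfv_iteration, residual_norm,
                            p_prev, tol=1e-3, max_iters=50):
    """Iterate an (i-)MSFV-style pressure solve only while the pressure-equation
    residual exceeds `tol`. `solve_msfv_iteration(p)` returns an improved pressure
    field; `residual_norm(p)` returns the residual norm. Starting from the previous
    timestep's pressure keeps the number of extra iterations small."""
    p = p_prev.copy()
    iters = 0
    while residual_norm(p) > tol and iters < max_iters:
        p = solve_msfv_iteration(p)
        iters += 1
    return p, iters

# Toy usage with a stand-in "solver" (damped fixed-point iteration on a 1-D field).
target = np.linspace(0.0, 1.0, 5)
p0 = np.zeros(5)
p, n = adaptive_pressure_solve(lambda p: p + 0.5 * (target - p),
                               lambda p: np.linalg.norm(target - p),
                               p0)
print(f"converged in {n} iterations, pressure = {p.round(3)}")
```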
Abstract:
The problem of searchability in decentralized complex networks is of great importance in computer science, economy, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: starlike configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
Abstract:
We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
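As an illustration of the graph-theoretic reduction described above, the following is a minimal sketch of computing a minimum cost arborescence on a toy directed graph with networkx (Edmonds' algorithm); the node names and edge costs are made-up placeholders, not data from the study.

```python
import networkx as nx

# Toy directed graph: nodes are hypothetical locations, edge weights are
# dispersal costs (placeholder values).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("source", "A", 1.0), ("source", "B", 4.0),
    ("A", "B", 2.0), ("A", "C", 6.0),
    ("B", "C", 3.0),
])

# Minimum cost (spanning) arborescence via Edmonds' algorithm.
arb = nx.minimum_spanning_arborescence(G, attr="weight")
print(sorted(arb.edges(data="weight")))
# A most-parsimonious dispersal scenario would then be read off these edges.
```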
Abstract:
Stream degradation due to steep stream gradients and large deposits of loess soil is a serious problem in western Iowa. One solution to this problem is to construct grade stabilization structures at critical points along the length of the stream. Iowa Highway Research Board project HR-236, "Pottawattamie County Evaluation of Control Structures for Stabilizing Degrading Stream Channels", was initiated in order to study the effectiveness of such structures in preventing stream degradation. This report describes the construction and 4-year performance of a gabion drop structure constructed along Keg Creek during the winter of 1982-83.