992 results for optimal stopping rule


Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVES: To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING: We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS: Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly because of poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Among 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION: Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.

Relevance:

90.00%

Publisher:

Abstract:

This paper examines four equivalent methods of optimal monetary policymaking: committing to the social loss function, using discretion with the central bank's long-run loss function, using discretion with its short-run loss function, and following monetary policy rules. All four lead to optimal economic performance. The same performance emerges from these different policymaking methods because the central bank in effect follows the same (or closely similar) policy rules. These objectives (the social loss function and the central bank's long-run and short-run loss functions) together with the monetary policy rules constitute a complete regime for optimal policymaking. The central bank's long-run and short-run loss functions that produce the optimal policy under discretion differ from the social loss function. Moreover, the optimal policy rule emerges from the optimization of each of these different central bank loss functions.
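
For intuition, here is a minimal numerical sketch (not the paper's model) of the equivalence between loss-function minimization and a policy rule: a central bank facing an assumed static Phillips curve minimizes a quadratic loss in inflation and the output gap, and the first-order condition delivers a linear rule in the cost-push shock. The parameters kappa and lam are arbitrary illustrations.

```python
import numpy as np

# Illustrative static model (assumed, not from the paper): Phillips curve
# pi = kappa * y + u, quadratic loss in inflation and the output gap.
kappa, lam = 0.3, 0.5

def loss(y, u):
    pi = kappa * y + u              # inflation implied by the Phillips curve
    return pi**2 + lam * y**2       # central-bank loss

def rule(u):
    # First-order condition of the loss: the optimal output gap is a linear
    # "policy rule" in the cost-push shock u.
    return -kappa / (kappa**2 + lam) * u

for u in (-1.0, 0.0, 0.5):
    ys = np.linspace(-3, 3, 60001)
    y_numeric = ys[np.argmin(loss(ys, u))]     # brute-force loss minimization
    print(f"u={u:+.1f}  rule y={rule(u):+.4f}  numeric y={y_numeric:+.4f}")
```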

Relevance:

90.00%

Publisher:

Abstract:

Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. Indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e., a lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity with the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
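
As a point of reference, here is a minimal sketch of a Bayesian sequential run of the kind compared in the paper, assuming a yes–no task, a logistic psychometric function, a grid posterior over the threshold, and a dynamic stopping rule based on the posterior standard deviation. The functional form, grid, and criterion values are illustrative assumptions rather than the authors' exact procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(x, thr, slope=3.0, gamma=0.02, lam=0.02):
    """Assumed logistic psychometric function: P('yes') at stimulus level x."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-slope * (x - thr)))

true_thr = 0.7
grid = np.linspace(-3, 3, 301)                 # candidate threshold values
posterior = np.ones_like(grid) / grid.size     # flat prior

max_trials, sd_criterion = 300, 0.10           # illustrative stopping criterion
for trial in range(1, max_trials + 1):
    x = grid[np.argmax(posterior)]             # place the trial at the posterior mode
    resp = rng.random() < psi(x, true_thr)     # simulate the observer
    like = psi(x, grid) if resp else 1 - psi(x, grid)
    posterior *= like
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    if sd < sd_criterion:                      # dynamic stopping rule
        break

print(f"stopped after {trial} trials, threshold estimate {mean:.3f} (true {true_thr})")
```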

Relevance:

80.00%

Publisher:

Abstract:

Threatened species often exist in a small number of isolated subpopulations. Given limitations on conservation spending, managers must choose from strategies that range from managing just one subpopulation and risking all other subpopulations to managing all subpopulations equally and poorly, thereby risking the loss of all subpopulations. We took an economic approach to this problem in an effort to discover a simple rule of thumb for optimally allocating conservation effort among subpopulations. This rule was derived by maximizing the expected number of extant subpopulations remaining given that n subpopulations are actually managed. We also derived a spatiotemporally optimized strategy through stochastic dynamic programming. The rule of thumb suggested that more subpopulations should be managed if the budget increases or if the cost of reducing local extinction probabilities decreases. The rule performed well against the exact optimal strategy obtained from the stochastic dynamic program and much better than other simple strategies (e.g., always manage one extant subpopulation or half of the remaining subpopulations). We applied our approach to the allocation of funds in 2 contrasting case studies: reduction of poaching of Sumatran tigers (Panthera tigris sumatrae) and habitat acquisition for San Joaquin kit foxes (Vulpes macrotis mutica). For our estimated annual budget for Sumatran tiger management, the mean time to extinction was about 32 years. For our estimated annual management budget for kit foxes in the San Joaquin Valley, the mean time to extinction was approximately 24 years. Our framework allows managers to deal with the important question of how to allocate scarce conservation resources among subpopulations of any threatened species. © 2008 Society for Conservation Biology.
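
The flavour of such a rule of thumb can be reproduced with a toy calculation: split a fixed budget equally among n managed subpopulations, assume local extinction probability falls with per-subpopulation spending, and choose the n that maximizes the expected number of extant subpopulations. The exponential dose-response form and all numbers below are assumptions made for illustration, not values from the paper.

```python
import numpy as np

N_total = 10     # subpopulations
budget = 5.0     # total annual budget (arbitrary units)
p0 = 0.40        # unmanaged local extinction probability per period
alpha = 1.2      # assumed effectiveness of spending at reducing extinction risk

def expected_extant(n):
    spend = budget / n                       # split the budget equally among n units
    p_managed = p0 * np.exp(-alpha * spend)  # assumed dose-response form
    return n * (1 - p_managed) + (N_total - n) * (1 - p0)

best_n = max(range(1, N_total + 1), key=expected_extant)
for n in range(1, N_total + 1):
    tag = "  <- best" if n == best_n else ""
    print(f"manage {n:2d}: E[extant subpopulations] = {expected_extant(n):.2f}{tag}")
```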

Relevance:

80.00%

Publisher:

Abstract:

For a multiarmed bandit problem with exponential discounting, the optimal allocation rule is given by a dynamic allocation index defined for each arm on its state space. The index for an arm equals the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm and the possibility of resolving those uncertainties by selecting that arm. The learning component of the index is thus defined as the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the Bernoulli and normal cases, and similar results are obtained in other cases.
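
Below is a sketch of how such a dynamic allocation (Gittins-type) index can be computed numerically for a Bernoulli arm with a Beta posterior, using the standard calibration against a known "standard" arm and a truncated backward induction. The discount factor, truncation depth, and tolerance are illustrative choices.

```python
from functools import lru_cache

BETA = 0.9    # discount factor (illustrative)
DEPTH = 60    # truncation depth for the backward induction

def continuation_value(a, b, lam):
    """Value of the Beta(a, b) Bernoulli arm when one may retire at any time
    to a standard arm paying lam per period (worth lam / (1 - BETA))."""
    retire = lam / (1 - BETA)

    @lru_cache(maxsize=None)
    def V(a_, b_, d):
        if d == DEPTH:
            return retire
        p = a_ / (a_ + b_)          # posterior mean of the success probability
        pull = p * (1 + BETA * V(a_ + 1, b_, d + 1)) + (1 - p) * BETA * V(a_, b_ + 1, d + 1)
        return max(retire, pull)

    return V(a, b, 0)

def allocation_index(a, b, lo=0.0, hi=1.0, tol=1e-4):
    """Bisect on lam until pulling and retiring are (nearly) indifferent."""
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if continuation_value(a, b, lam) > lam / (1 - BETA) + 1e-9:
            lo = lam    # pulling is still strictly better, so the index is higher
        else:
            hi = lam
    return 0.5 * (lo + hi)

# Two arms with the same expected immediate reward (0.5): the less-sampled,
# more uncertain arm gets the larger index, i.e. a larger learning component.
print(allocation_index(1, 1), allocation_index(10, 10))
```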

Relevance:

80.00%

Publisher:

Abstract:

In this study, we investigate the qualitative and quantitative effects of an R&D subsidy for a clean technology and a Pigouvian tax on a dirty technology on environmental R&D when it is uncertain how long the research will take to complete. The model is formulated as an optimal stopping problem in which the number of successes required to complete the R&D project is finite and learning about the probability of success is incorporated. We show that the optimal R&D subsidy with the consideration of learning is higher than that without it. We also find that an R&D subsidy performs better than a Pigouvian tax unless suppliers have sufficient incentives to continue cost-reduction efforts after the new technology successfully replaces the old one. Moreover, using a two-project model, we show that a uniform subsidy is better than a selective subsidy.
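
A rough sketch of this kind of optimal stopping problem with learning, assuming the project needs K more successes, each attempt has a fixed cost, the unknown success probability carries a Beta posterior, and the firm may abandon at any time; a per-attempt subsidy lowers the effective research cost. The payoff, cost, discount factor, and truncation below are invented for illustration, not the paper's calibration.

```python
from functools import lru_cache

R, c, disc, T = 10.0, 0.8, 0.95, 80   # payoff, cost per attempt, discount, attempt cap (assumed)

def make_value(subsidy=0.0):
    @lru_cache(maxsize=None)
    def V(k, a, b):
        """Value with k successes still needed and a Beta(a, b) posterior."""
        if k == 0:
            return R                   # project completed
        if a + b - 2 >= T:             # attempt budget exhausted: abandon
            return 0.0
        p = a / (a + b)                # posterior mean success probability
        cont = -(c - subsidy) + disc * (p * V(k - 1, a + 1, b)
                                        + (1 - p) * V(k, a, b + 1))
        return max(0.0, cont)          # stop (abandon) or continue the R&D
    return V

print("value without subsidy:", round(make_value(0.0)(3, 1, 1), 3))
print("value with subsidy   :", round(make_value(0.2)(3, 1, 1), 3))
```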

Relevance:

80.00%

Publisher:

Abstract:

This thesis consists of an introduction to the topic of the optimal use of taxes and government expenditure and three chapters analysing these themes in more depth. Chapter 2 analyses to what extent a given amount of subsidies affects the labour supply of parents. The municipal supplement to the Finnish home care allowance provides exogenous variation in parents' labour supply decisions. This kind of subsidy, tied to staying at home instead of working, is found to have a fairly large effect on parents' labour supply decisions. Chapter 3 studies theoretically when it is optimal to provide private goods publicly. In the model, the government sets income taxes optimally and provides a private good if it is beneficial to do so. The analysis results in an optimal provision rule according to which the good should be provided when it lowers the participation threshold into the labour force. Chapter 4 investigates what happened to prices and demand when the value added tax on hairdressing services in Finland was cut from 22 per cent to 8 per cent. The pass-through to prices was about half of full pass-through, and no clear indication of increased demand for the services or of an improved employment situation in the sector is found.
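
A small worked computation of what the quoted VAT rates imply: a cut from 22 per cent to 8 per cent corresponds, under full pass-through, to a consumer-price drop of about 11.5 per cent, so pass-through of roughly one half corresponds to a drop of about 5.7 per cent. The 100-euro pre-cut price is an arbitrary example.

```python
old_vat, new_vat = 0.22, 0.08
p_before = 100.0                             # illustrative consumer price incl. VAT
net = p_before / (1 + old_vat)               # producer (net-of-VAT) price

full = net * (1 + new_vat)                   # price under full pass-through
half = p_before - 0.5 * (p_before - full)    # price under 50% pass-through

print(f"full pass-through: {full:.2f}  ({100 * (1 - full / p_before):.1f}% drop)")
print(f"half pass-through: {half:.2f}  ({100 * (1 - half / p_before):.1f}% drop)")
```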

Relevance:

80.00%

Publisher:

Abstract:

In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points (APs) to connect to. In current systems, clients associate with the access point with the strongest signal-to-noise ratio. However, such an association mechanism can lead to unequal load sharing, resulting in diminished system performance. In this paper, we first provide a numerical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm for a small topology consisting of two access points. Using the value iteration algorithm, we determine the optimal association rule for the two-AP topology. Next, utilizing the insights obtained from the optimal association rule for the two-AP case, we propose a near-optimal heuristic that we call RAT. We test the efficacy of RAT by considering more realistic arrival patterns and a larger topology. Our results show that RAT performs very well in these scenarios as well. Moreover, RAT lends itself to a fairly simple implementation.
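
A toy illustration of the value-iteration approach, using a deliberately simplified two-AP association MDP (state = clients per AP, one arrival per step, independent departures, squared load as a congestion proxy). This is an assumption-laden stand-in, not the paper's formulation or the RAT heuristic.

```python
from math import comb

N_MAX, q, gamma = 6, 0.25, 0.95   # max clients per AP, departure prob., discount (all assumed)

def binom_pmf(n, k):
    return comb(n, k) * q**k * (1 - q)**(n - k)

def cost(n1, n2):
    return n1**2 + n2**2          # crude congestion proxy

states = [(i, j) for i in range(N_MAX + 1) for j in range(N_MAX + 1)]
V = {s: 0.0 for s in states}

def q_value(n1, n2, action, V):
    # Associate the arriving client with AP `action`, capping loads at N_MAX.
    m1 = min(n1 + (action == 1), N_MAX)
    m2 = min(n2 + (action == 2), N_MAX)
    exp_next = 0.0
    for k1 in range(m1 + 1):          # departures from AP 1
        for k2 in range(m2 + 1):      # departures from AP 2
            exp_next += binom_pmf(m1, k1) * binom_pmf(m2, k2) * V[(m1 - k1, m2 - k2)]
    return cost(m1, m2) + gamma * exp_next

for _ in range(300):                  # value iteration sweeps
    V = {s: min(q_value(*s, a, V) for a in (1, 2)) for s in states}

policy = {s: min((1, 2), key=lambda a: q_value(*s, a, V)) for s in states}
print(policy[(2, 5)], policy[(5, 2)])  # the optimal rule sends the client to the lighter AP
```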

Relevance:

80.00%

Publisher:

Abstract:

The problem of spurious patterns in neural associative memory models is discussed. Some suggestions from the literature to solve this problem are reviewed and their inadequacies are pointed out. A solution based on the notion of neural self-interaction with a suitably chosen magnitude is presented for the Hebb learning rule. For an optimal learning rule based on linear programming, asymmetric dilution of synaptic connections is presented as another solution to the problem of spurious patterns. With varying percentages of asymmetric dilution it is demonstrated numerically that this optimal learning rule leads to near-total suppression of spurious patterns. For practical use of neural associative memory networks, a combination of the two solutions with the optimal learning rule is recommended as the best proposition.
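
A minimal sketch of the Hebb-rule setting described above: a Hopfield-style network whose weight matrix keeps a self-interaction (diagonal) term of adjustable magnitude. Network size, pattern count, the self-interaction value, and the recall demo are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P, self_weight = 100, 5, 0.5          # network size, stored patterns, assumed self-interaction
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N            # Hebb learning rule
np.fill_diagonal(W, self_weight)         # keep a self-interaction of chosen magnitude

def recall(cue, steps=20):
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                    # break ties deterministically
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1                          # corrupt 15% of the bits
overlap = recall(cue) @ patterns[0] / N
print(f"overlap with the stored pattern after recall: {overlap:.2f}")
```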

Relevance:

80.00%

Publisher:

Abstract:

In this paper we analyze the valuation of options stemming from the flexibility in an Integrated Gasification Combined Cycle (IGCC) power plant. First, as a base case, we use the opportunity to invest in a Natural Gas Combined Cycle (NGCC) power plant, deriving the optimal investment rule as a function of fuel price and the remaining life of the right to invest. Additionally, the analytical solution for a perpetual option is obtained. Second, the valuation of an operating IGCC power plant is studied, with switching costs between states and a choice of the best operation mode. The valuation of this plant serves as a base to obtain the value of the option to delay an investment of this type. Finally, we derive the value of an opportunity to invest in either an NGCC or an IGCC power plant, that is, to choose between an inflexible and a flexible technology, respectively. Numerical computations involve the use of one- and two-dimensional binomial lattices that support a mean-reverting process for the fuel prices. Basic parameter values refer to an actual IGCC power plant currently in operation.
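
For the lattice mechanics, here is a standard Cox-Ross-Rubinstein binomial sketch of an option to delay an irreversible investment, with the project's present value as the underlying, the investment cost as the strike, and an assumed cash-flow yield that makes waiting costly. The paper's lattices support mean-reverting fuel prices, which this sketch does not, and all parameter values are arbitrary.

```python
import numpy as np

V0, K = 100.0, 90.0                                  # project present value, investment cost (assumed)
r, delta, sigma, T, n = 0.05, 0.04, 0.25, 5.0, 500   # rate, cash-flow yield, volatility, years, steps

dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1 / u
p = (np.exp((r - delta) * dt) - d) / (u - d)   # risk-neutral up probability
disc = np.exp(-r * dt)

j = np.arange(n + 1)
option = np.maximum(V0 * u**j * d**(n - j) - K, 0.0)   # payoffs at the final step

# Backward induction with the early-exercise (invest-now) check at every node.
for step in range(n - 1, -1, -1):
    j = np.arange(step + 1)
    values = V0 * u**j * d**(step - j)
    cont = disc * (p * option[1:step + 2] + (1 - p) * option[:step + 1])
    option = np.maximum(cont, values - K)

print(f"value of the option to invest: {option[0]:.2f}")
```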

Relevance:

80.00%

Publisher:

Abstract:

Optimal management in a multi-cohort Beverton-Holt model with any number of age classes and imperfect selectivity is equivalent to finding the optimal fish lifespan through suitably chosen fallow cycles. The optimal policy differs in two main ways from the optimal lifespan rule with perfect selectivity. First, weight gain is valued in terms of the whole population structure. Second, the cost of waiting is the interest rate adjusted for the increase in the pulse length. This point is especially relevant for assessing the role of selectivity: imperfect selectivity reduces both the optimal lifespan and the optimal pulse length. We illustrate our theoretical findings with a numerical example. Results obtained using global numerical methods recover the optimal pulse length predicted by the optimal lifespan rule.
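
A rough simulation sketch of the pulse-fishing trade-off: an age-structured population with Beverton-Holt recruitment is harvested every k years (fallow in between), and the discounted yield is compared across pulse lengths k. The demographic parameters, selectivity pattern, and discount rate are invented for illustration and are not calibrated to the paper's example.

```python
import numpy as np

A, s = 8, 0.7                        # age classes and annual survival (assumed)
w = np.linspace(0.2, 2.0, A)         # weight at age
alpha, beta = 5.0, 0.01              # Beverton-Holt recruitment parameters (assumed)
sel = np.arange(A) >= 2              # gear selects the third age class and older
r_disc = 0.05                        # discount rate

def discounted_yield(k, years=200):
    n = np.full(A, 10.0)             # initial numbers at age
    total = 0.0
    for t in range(years):
        if t % k == k - 1:           # pulse year: harvest every selected fish
            total += np.sum(w * n * sel) / (1 + r_disc) ** t
            n = n * (~sel)
        ssb = np.sum(w * n)                      # post-harvest spawning biomass
        recruits = alpha * ssb / (1 + beta * ssb)
        n = np.roll(n * s, 1)        # survivors move up one age class
        n[0] = recruits              # (fish beyond the oldest class are dropped)
    return total

for k in range(1, 9):
    print(f"pulse every {k} yr: discounted yield = {discounted_yield(k):8.1f}")
```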

Relevance:

80.00%

Publisher:

Abstract:

Recent advances in processor speeds, mobile communications and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities – using both traditional communication infrastructure and the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem, because of the dynamic underlying topology, the scarcity of network resources and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead, a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding of messages in such networks. In order to design effective forwarding algorithms for mobile opportunistic networks, we start by building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect. The term path explosion refers to the fact that the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and is a key observation that underlies subsequent results. Our second contribution is the study of forwarding algorithms. For this, we rely on trace-driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms. We make the surprising observation that most algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon. While the performance of most algorithms we studied was roughly the same, these algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing costs. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as Delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the cost reduction observed is even greater. Our results so far rely on a key assumption: unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritizing messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes, combined with forwarding. Our main result is that one achieves higher performance by assigning higher priorities to young messages in the network. We again interpret this result in light of the path explosion effect.
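
A toy sketch of the delegation idea, framed as an optimal-stopping-style rule: a custodian forwards a copy only when it meets a node whose quality (for example, its contact rate with the destination) exceeds the best quality recorded so far, rather than forwarding to every node that beats a fixed threshold. The i.i.d.-quality encounter model below is an assumption and is not meant to reproduce the O(√N) result, which concerns the full network-wide process.

```python
import numpy as np

rng = np.random.default_rng(2)

def count_forwards(qualities, own_quality, rule):
    forwards, best = 0, own_quality
    for quality in qualities:                    # nodes met over time
        if rule == "delegation" and quality > best:
            forwards += 1
            best = quality                       # raise the bar after each hand-off
        elif rule == "naive" and quality > own_quality:
            forwards += 1                        # forward to anyone better than me
    return forwards

N, trials = 1000, 200
deleg = naive = 0
for _ in range(trials):
    qualities = rng.random(N)                    # assumed i.i.d. node qualities
    own = rng.random()
    deleg += count_forwards(qualities, own, "delegation")
    naive += count_forwards(qualities, own, "naive")

print(f"average copies: delegation {deleg / trials:.1f} vs naive {naive / trials:.1f}")
```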

Relevance:

80.00%

Publisher:

Abstract:

Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting, we propose a dynamic factor model of mortality that fits and forecasts mortality rates parsimoniously. We compare the forecasting quality of this model with that of existing models and find that the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but that their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be improved further by applying an appropriate stopping rule for the number of static and dynamic factors.
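
A minimal sketch of a one-factor model for log mortality rates, extracted by principal components and forecast with a random walk with drift, in the spirit of the factor approach above though far simpler than the paper's model; the mortality surface is synthetic so the example runs end to end.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic log mortality surface (ages x years), generated only so the
# example runs: an age profile plus a common downward trend plus noise.
ages, years = 20, 40
age_profile = np.linspace(-6.0, -1.0, ages)
trend = -0.015 * np.arange(years)
log_m = age_profile[:, None] + trend[None, :] + 0.02 * rng.standard_normal((ages, years))

# Static factor extraction: first principal component of the centered surface.
mean_age = log_m.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(log_m - mean_age, full_matrices=False)
loadings = U[:, 0] * S[0]                        # age loadings
factor = Vt[0]                                   # period factor k_t

# Dynamic part: forecast k_t as a random walk with drift.
drift = np.diff(factor).mean()
h = 10
factor_fc = factor[-1] + drift * np.arange(1, h + 1)

forecast_log_m = mean_age + loadings[:, None] * factor_fc[None, :]
print("forecast log death rate, oldest age, 10 years ahead:",
      round(float(forecast_log_m[-1, -1]), 3))
```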

Relevance:

80.00%

Publisher:

Abstract:

This thesis consists of three essays related to mechanism design and auctions. In the first essay I study the design of efficient Bayesian mechanisms in environments where agents' utility functions depend on the chosen alternative even when they do not participate in the mechanism. In addition to an allocation rule and a payment rule, the planner can issue threats in order to induce agents to participate in the mechanism and to maximize his own surplus; the planner can presume a type for an agent who does not participate. I prove that the solution to the design problem can be found through a max-min choice of the presumed types and threats. I apply this to the design of an efficient multi-unit auction when ownership of the good by one buyer imposes negative externalities on the other buyers. The second essay considers the fair-return (juste retour) rule used by the European Space Agency (ESA). It guarantees each member state a return proportional to its contribution, in the form of contracts awarded to firms from that state. The fair-return rule conflicts with the principle of free competition, since contracts are not necessarily awarded to the firms that submit the lowest bids. This has raised debate about the use of the rule: large states with strong national space programs see its strict application as an obstacle to competitiveness and cost-effectiveness. A priori this rule seems more costly to the agency than traditional auctions. We prove, on the contrary, that an appropriate implementation of the fair-return rule can make it less costly than traditional open-competition auctions. We consider both the complete-information case, where firms' technology levels are common knowledge, and the incomplete-information case, where firms privately observe their production costs. Finally, in the third essay I derive an optimal procurement mechanism in an environment where a buyer of heterogeneous items faces potential suppliers from different groups and is constrained to choose a list of winners that is compatible with quotas assigned to the different groups. The optimal allocation rule assigns priority levels to suppliers on the basis of the individual costs they report to the decision maker. The way these priority levels are determined is subjective but known to all before the procurement auction takes place. The reported costs induce scores for each potential list of winners. The items are then purchased from the list with the best score, provided that score does not exceed the buyer's value. I also show that, in general, it is not optimal to purchase the items through separate auctions.