951 results for exceedance probabilities
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and the demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, scales to industrial problems, and can provide significant improvements over standard benchmarks.
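The randomized linear programming idea can be sketched compactly: sample demand realizations, solve a capacity-allocation LP for each sample, and average the duals of the leg-capacity constraints to obtain bid prices. The sketch below is a minimal illustration of that idea on a hypothetical two-leg network; the fares, capacities, and Poisson demand are assumptions, and the paper's overbooking and product-specific no-show features are omitted.

```python
# Minimal sketch of a randomized linear program (RLP) for bid prices.
# The two-leg network, fares, and demand distributions are illustrative
# assumptions; overbooking and product-specific no-shows are omitted.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

fares = np.array([100.0, 150.0, 220.0])        # products: leg 1, leg 2, connecting
A = np.array([[1, 0, 1],                        # leg-1 usage per product
              [0, 1, 1]])                       # leg-2 usage per product
capacity = np.array([80.0, 80.0])
mean_demand = np.array([60.0, 50.0, 40.0])

def solve_lp(demand):
    """Max sum(fare*x) s.t. A x <= capacity, 0 <= x <= demand, as a minimization."""
    res = linprog(c=-fares, A_ub=A, b_ub=capacity,
                  bounds=list(zip(np.zeros(3), demand)), method="highs")
    # Assumes SciPy's HiGHS sign convention: res.ineqlin.marginals are the
    # duals of A_ub x <= b_ub and are <= 0 for this minimization, so bid
    # prices are taken as their negatives.
    return -res.ineqlin.marginals

# Deterministic LP (DLP) benchmark: plug in expected demand.
dlp_bid_prices = solve_lp(mean_demand)

# RLP: average the duals over sampled demand realizations (Poisson assumed here).
samples = rng.poisson(mean_demand, size=(500, 3)).astype(float)
rlp_bid_prices = np.mean([solve_lp(d) for d in samples], axis=0)

print("DLP bid prices:", dlp_bid_prices)
print("RLP bid prices:", rlp_bid_prices)
```

Under a bid price control policy, a request for product j would then be accepted only if fares[j] covers the sum of the bid prices of the legs it uses, i.e. A[:, j] @ bid_prices.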
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
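As a concrete illustration of a deterministic interval rule of the general kind described above, the sketch below rewards the agent when the realization falls inside the reported interval and charges a penalty proportional to the interval's width; the payoff form and parameter values are assumptions for illustration, not the specific rule proposed in the paper.

```python
# Illustrative deterministic interval scoring rule (not necessarily the
# paper's specific rule): reward coverage of the realization, penalize width.
def interval_score(lower: float, upper: float, x: float,
                   reward: float = 1.0, width_penalty: float = 0.1) -> float:
    """Payoff for reporting the interval [lower, upper] when x realizes."""
    if lower > upper:
        raise ValueError("lower must not exceed upper")
    covered = lower <= x <= upper
    return (reward if covered else 0.0) - width_penalty * (upper - lower)

# A wide interval almost surely covers x but pays a width penalty; a narrow
# interval is cheap but risks missing x, so the chosen interval reveals both
# the location and the dispersion of the agent's belief.
print(interval_score(2.0, 8.0, x=5.0))   # covered, wide
print(interval_score(4.5, 5.5, x=5.0))   # covered, narrow
print(interval_score(4.5, 5.5, x=9.0))   # missed
```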
Abstract:
This paper explores biases in the elicitation of utilities under risk and the contribution that generalizations of expected utility can make to the resolution of these biases. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was most consistent with our data. The main improvement of prospect theory over expected utility was in comparisons between a riskless and a risky prospect (riskless-risk methods). We observed no improvement over expected utility in comparisons between two risky prospects (risk-risk methods). An explanation for why we found no improvement of prospect theory over expected utility in risk-risk methods may be that there was less overweighting of small probabilities in our study than has commonly been observed.
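For readers unfamiliar with the functional forms involved, the sketch below implements the standard Tversky-Kahneman (1992) prospect theory specification (a power value function with loss aversion and an inverse-S probability weighting function) using their commonly cited parameter estimates. It illustrates the overweighting of small probabilities mentioned above; it is not the specification or the parameter values fitted in the paper.

```python
# Sketch of the Tversky-Kahneman (1992) prospect theory specification with
# their commonly cited parameter estimates (illustrative only; these are not
# the values fitted in the paper).
def value(x: float, alpha: float = 0.88, beta: float = 0.88,
          lam: float = 2.25) -> float:
    """Power value function with loss aversion, defined over gains and losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pt_value_binary_gain(p: float, x: float) -> float:
    """Prospect theory value of a simple prospect paying x > 0 with probability p."""
    return weight(p) * value(x)

# Overweighting of small probabilities: w(0.01) is well above 0.01.
print(weight(0.01))
print(pt_value_binary_gain(0.01, 100.0))
```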
Abstract:
An important problem in descriptive and prescriptive research in decision making is to identify regions of rationality, i.e., the areas for which heuristics are and are not effective. To map the contours of such regions, we derive probabilities that heuristics identify the best of m alternatives (m > 2) characterized by k attributes or cues (k > 1). The heuristics include a single variable (lexicographic), variations of elimination-by-aspects, equal weighting, hybrids of the preceding, and models exploiting dominance. We use twenty simulated and four empirical datasets for illustration. We further provide an overview by regressing heuristic performance on factors characterizing environments. Overall, sensible heuristics generally yield similar choices in many environments. However, selection of the appropriate heuristic can be important in some regions (e.g., if there is low inter-correlation among attributes/cues). Since our work assumes a hit or miss decision criterion, we conclude by outlining extensions for exploring the effects of different loss functions.
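The flavor of such performance comparisons can be reproduced in a few lines: the sketch below simulates choice sets with independent cues and a linear criterion (both assumptions for illustration, not the paper's datasets) and scores how often a lexicographic rule and equal weighting pick the best of m alternatives.

```python
# Sketch: how often do a lexicographic rule and equal weighting pick the
# best of m alternatives? The simulated environment (independent cues and
# a linear "true" criterion) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
m, k, trials = 5, 3, 5000          # alternatives, cues, simulated choice sets
true_weights = np.array([0.6, 0.3, 0.1])

hits_lex, hits_ew = 0, 0
for _ in range(trials):
    cues = rng.normal(size=(m, k))
    criterion = cues @ true_weights + 0.3 * rng.normal(size=m)
    best = np.argmax(criterion)
    lex_choice = np.argmax(cues[:, 0])       # decide on the single most valid cue
    ew_choice = np.argmax(cues.sum(axis=1))  # unit-weight all cues equally
    hits_lex += lex_choice == best
    hits_ew += ew_choice == best

print(f"lexicographic hit rate: {hits_lex / trials:.3f}")
print(f"equal weighting hit rate: {hits_ew / trials:.3f}")
```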
Abstract:
Unemployment rates in developed countries have recently reached levels not seen in a generation, and workers of all ages are facing increasing probabilities of losing their jobs and considerable losses in accumulated assets. These events likely increase the reliance that most older workers will have on public social insurance programs, exactly at a time that public finances are suffering from a large drop in contributions. Our paper explicitly accounts for employment uncertainty and unexpected wealth shocks, something that has been relatively overlooked in the literature but that has grown in importance in recent years. Using administrative and household-level data, we empirically characterize a life-cycle model of retirement and claiming decisions in terms of the employment, wage, health, and mortality uncertainty faced by individuals. Our benchmark model explains with great accuracy the strikingly high proportion of individuals who claim benefits exactly at the Early Retirement Age, while still explaining the increased claiming hazard at the Normal Retirement Age. We also discuss some policy experiments and their interplay with employment uncertainty. Additionally, we analyze the effects of negative wealth shocks on the labor supply and claiming decisions of older Americans. Our results can explain why early claiming has remained very high in recent years even as the early retirement penalties have increased substantially compared with previous periods, and why labor force participation has remained quite high for older workers even in the midst of the worst employment crisis in decades.
Abstract:
This paper explores three aspects of strategic uncertainty: its relation to risk, predictability of behavior, and subjective beliefs of players. In a laboratory experiment we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities for successful coordination in a wide range of games. For many games, success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities for successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success. In games with a high coordination requirement, most subjects overestimate this probability. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.
Abstract:
We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation in which every player knows only his own realized payoff history (but not his payoff function).
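A minimal sketch of this adaptive procedure (conditional regret matching) for a two-player, two-action game is given below. The "chicken" payoffs, the inertia constant mu, and the horizon are illustrative assumptions; the empirical joint distribution of play is the object that, by the theorem, approaches the set of correlated equilibria.

```python
# Sketch of the regret-matching procedure for a 2-player, 2-action game
# ("chicken" payoffs, mu, and T are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)

# payoffs[i][a0, a1] is player i's payoff when players choose (a0, a1).
payoffs = [np.array([[6.0, 2.0], [7.0, 0.0]]),   # player 0
           np.array([[6.0, 7.0], [2.0, 0.0]])]   # player 1
n_actions, T, mu = 2, 20000, 20.0                # mu: inertia constant (assumed)

# cum_diff[i][j, k]: cumulative payoff gain player i would have had by
# playing k in every past period in which it actually played j.
cum_diff = [np.zeros((n_actions, n_actions)) for _ in range(2)]
last = [rng.integers(n_actions) for _ in range(2)]
joint_counts = np.zeros((n_actions, n_actions))

for t in range(1, T + 1):
    actions = []
    for i in range(2):
        j = last[i]
        regret = np.maximum(cum_diff[i][j] / t, 0.0)   # average regrets vs. action j
        probs = regret / mu                            # switch with prob regret/mu
        probs[j] = 0.0
        probs[j] = max(1.0 - probs.sum(), 0.0)         # stay with the leftover mass
        actions.append(rng.choice(n_actions, p=probs / probs.sum()))
    joint_counts[actions[0], actions[1]] += 1.0
    for i in range(2):
        opp = actions[1 - i]
        played = actions[i]
        for k in range(n_actions):
            u_k = payoffs[i][(k, opp) if i == 0 else (opp, k)]
            u_j = payoffs[i][(played, opp) if i == 0 else (opp, played)]
            cum_diff[i][played, k] += u_k - u_j
        last[i] = actions[i]

# Empirical joint distribution of play; by the theorem it approaches the
# set of correlated equilibria of the stage game.
print(joint_counts / T)
```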
Abstract:
Consider the density of the solution $X(t,x)$ of a stochastic heat equation with small noise at a fixed $t\in [0,T]$, $x \in [0,1]$. In the paper we study the asymptotics of this density as the noise is vanishing. A kind of Taylor expansion in powers of the noise parameter is obtained. The coefficients and the residue of the expansion are explicitly calculated. In order to obtain this result, some type of exponential estimates of tail probabilities of the difference between the approximating process and the limit one is proved. Also a suitable local integration by parts formula is developed.
Abstract:
OBJECTIVES: We studied the influence of noninjecting and injecting drug use on mortality, dropout rate, and the course of antiretroviral therapy (ART), in the Swiss HIV Cohort Study (SHCS). METHODS: Cohort participants, registered prior to April 2007 and with at least one drug use questionnaire completed until May 2013, were categorized according to their self-reported drug use behaviour. The probabilities of death and dropout were separately analysed using multivariable competing risks proportional hazards regression models with mutual correction for the other endpoint. Furthermore, we describe the influence of drug use on the course of ART. RESULTS: A total of 6529 participants (including 31% women) were followed during 31 215 person-years; 5.1% participants died; 10.5% were lost to follow-up. Among persons with homosexual or heterosexual HIV transmission, noninjecting drug use was associated with higher all-cause mortality [subhazard rate (SHR) 1.73; 95% confidence interval (CI) 1.07-2.83], compared with no drug use. Also, mortality was increased among former injecting drug users (IDUs) who reported noninjecting drug use (SHR 2.34; 95% CI 1.49-3.69). Noninjecting drug use was associated with higher dropout rates. The mean proportion of time with suppressed viral replication was 82.2% in all participants, irrespective of ART status, and 91.2% in those on ART. Drug use lowered adherence, and increased rates of ART change and ART interruptions. Virological failure on ART was more frequent in participants who reported concomitant drug injections while on opiate substitution, and in current IDUs, but not among noninjecting drug users. CONCLUSIONS: Noninjecting drug use and injecting drug use are modifiable risks for death, and they lower retention in a cohort and complicate ART.
Abstract:
Kahneman and Tversky asserted a fundamental asymmetry between gains and losses, namely a reflection effect which occurs when an individual prefers a sure gain of $pz to an uncertain gain of $z with probability p, while preferring an uncertain loss of $z with probability p to a certain loss of $pz. We focus on this class of choices (actuarially fair), and explore the extent to which the reflection effect, understood as occurring at a range of wealth levels, is compatible with single-self preferences. We decompose the reflection effect into two components: a probability switch effect, which is compatible with single-self preferences, and a translation effect, which is not. To argue the first point, we analyze two classes of single-self, nonexpected utility preferences, which we label homothetic and weakly homothetic. In both cases, we characterize the switch effect as well as the dependence of risk attitudes on wealth. We also discuss two types of utility functions of a form reminiscent of expected utility but with distorted probabilities. Type I always distorts the probability of the worst outcome downwards, yielding attraction to small risks for all probabilities. Type II distorts low probabilities upwards and high probabilities downwards, implying risk aversion when the probability of the worst outcome is low. By combining homothetic or weakly homothetic preferences with Type I or Type II distortion functions, we present four explicit examples: all four display a switch effect and, hence, a form of reflection effect consistent with single-self preferences.
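The choice pattern in the opening sentence can be reproduced numerically with a standard reference-dependent specification: the sketch below uses a Tversky-Kahneman style value function and probability weighting (with their commonly cited 1992 parameter estimates, and a single weighting function for gains and losses for simplicity) to show a preference for the sure gain $pz over the gamble, together with the opposite preference on the loss side. This only illustrates the reflection effect itself; it is not the paper's homothetic preferences or Type I/Type II distortion functions.

```python
# Illustration of the reflection effect with a Tversky-Kahneman style
# specification (parameters are the commonly cited 1992 estimates; a single
# weighting function is used for gains and losses for simplicity; this is
# not the paper's homothetic / Type I-II construction).
def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

p, z = 0.5, 100.0
sure_gain, risky_gain = value(p * z), weight(p) * value(z)
sure_loss, risky_loss = value(-p * z), weight(p) * value(-z)

print(sure_gain > risky_gain)   # True: sure gain of $pz preferred to ($z, p)
print(risky_loss > sure_loss)   # True: loss of ($z, p) preferred to sure loss of $pz
```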
Abstract:
BACKGROUND: Low-molecular-weight heparin (LMWH) appears to be safe and effective for treating pulmonary embolism (PE), but its cost-effectiveness has not been assessed. METHODS: We built a Markov state-transition model to evaluate the medical and economic outcomes of a 6-day course with fixed-dose LMWH or adjusted-dose unfractionated heparin (UFH) in a hypothetical cohort of 60-year-old patients with acute submassive PE. Probabilities for clinical outcomes were obtained from a meta-analysis of clinical trials. Cost estimates were derived from Medicare reimbursement data and other sources. The base-case analysis used an inpatient setting, whereas secondary analyses examined early discharge and outpatient treatment with LMWH. Using a societal perspective, strategies were compared based on lifetime costs, quality-adjusted life-years (QALYs), and the incremental cost-effectiveness ratio. RESULTS: Inpatient treatment costs were higher for LMWH treatment than for UFH ($13,001 vs $12,780), but LMWH yielded a greater number of QALYs than did UFH (7.677 QALYs vs 7.493 QALYs). The incremental costs of $221 and the corresponding incremental effectiveness of 0.184 QALYs resulted in an incremental cost-effectiveness ratio of $1,209/QALY. Our results were highly robust in sensitivity analyses. LMWH became cost-saving if the daily pharmacy costs for LMWH were < $51, if ≥ 8% of patients were eligible for early discharge, or if ≥ 5% of patients could be treated entirely as outpatients. CONCLUSION: For inpatient treatment of PE, the use of LMWH is cost-effective compared to UFH. Early discharge or outpatient treatment in suitable patients with PE would lead to substantial cost savings.
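The headline cost-effectiveness figure is simple arithmetic on the reported results; the sketch below recomputes it from the rounded values given in the abstract (the small gap relative to the published $1,209/QALY is consistent with the model's unrounded outputs).

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) arithmetic,
# using the rounded figures reported in the abstract.
cost_lmwh, cost_ufh = 13_001.0, 12_780.0     # lifetime costs (USD)
qaly_lmwh, qaly_ufh = 7.677, 7.493           # quality-adjusted life-years

delta_cost = cost_lmwh - cost_ufh            # 221
delta_qaly = qaly_lmwh - qaly_ufh            # 0.184
icer = delta_cost / delta_qaly               # about $1,201/QALY from these rounded
                                             # inputs; the published $1,209/QALY
                                             # presumably uses unrounded outputs.
print(f"incremental cost: ${delta_cost:,.0f}")
print(f"incremental effectiveness: {delta_qaly:.3f} QALYs")
print(f"ICER: ${icer:,.0f}/QALY")
```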
Abstract:
Diverse sources of GABAergic inhibition are a major feature of cortical networks, but distinct inhibitory input systems have not been systematically characterized in the thalamus. Here, we contrasted the properties of two independent GABAergic pathways in the posterior thalamic nucleus of rat, one input from the reticular thalamic nucleus (nRT), and one "extrareticular" input from the anterior pretectal nucleus (APT). The vast majority of nRT-thalamic terminals formed single synapses per postsynaptic target and innervated thin distal dendrites of relay cells. In contrast, single APT-thalamic terminals formed synaptic contacts exclusively via multiple, closely spaced synapses on thick relay cell dendrites. Quantal analysis demonstrated that the two inputs displayed comparable quantal amplitudes, release probabilities, and multiple release sites. The morphological and physiological data together indicated multiple, single-site contacts for nRT and multisite contacts for APT axons. The contrasting synaptic arrangements of the two pathways were paralleled by different short-term plasticities. The multisite APT-thalamic pathway showed larger charge transfer during 50-100 Hz stimulation compared with the nRT pathway and a greater persistent inhibition accruing during stimulation trains. Our results demonstrate that the two inhibitory systems are morpho-functionally distinct and suggest that multisite GABAergic terminals are tailored for maintained synaptic inhibition even at high presynaptic firing rates. These data explain the efficacy of extrareticular inhibition in timing relay cell activity in sensory and motor thalamic nuclei. Finally, based on the classic nomenclature and the difference between reticular and extrareticular terminals, we define a novel, multisite GABAergic terminal type (F3) in the thalamus.
Abstract:
Protein-protein interactions encode the wiring diagram of cellular signaling pathways and their deregulations underlie a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided using a novel combined knowledge and physics based strategy. Non-natural sidechains are useful to increase peptide ligand binding affinity. Our results obtained on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free-energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
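As a generic illustration of how a combined knowledge-based and physics-based score can be turned into rotamer probabilities, the sketch below applies Boltzmann weighting to a mixed energy. The energy values, the mixing weight, and the temperature are placeholders; this is not the specific scoring scheme implemented by the authors or distributed at swisssidechain.ch.

```python
# Generic illustration: turning combined knowledge-based and physics-based
# scores into rotamer probabilities via Boltzmann weighting. The energies,
# mixing weight, and temperature below are placeholders, not the authors'
# actual scoring scheme.
import math

def rotamer_probabilities(physics_energies, statistical_energies,
                          mix_weight=0.5, kT=0.593):  # kT in kcal/mol at ~298 K
    """Return normalized Boltzmann weights for a set of rotamers."""
    combined = [e_phys + mix_weight * e_stat
                for e_phys, e_stat in zip(physics_energies, statistical_energies)]
    weights = [math.exp(-e / kT) for e in combined]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical rotamers of a non-natural sidechain:
probs = rotamer_probabilities(physics_energies=[0.0, 1.2, 2.5],
                              statistical_energies=[0.3, 0.0, 1.8])
print(probs)  # lower combined energy -> higher probability
```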
Abstract:
This paper reports an analysis of the evolution of equity in access to health care in Spain over the period 1987-2001, a time span covering the development of the modern Spanish National Health System. Our measures of access are the probabilities of visiting a doctor, using emergency services and being hospitalised. For these three measures we obtain indices of horizontal inequity from microeconometric models of utilization that exploit the individual information in the Spanish National Health Surveys of 1987 and 2001. We find that by 2001 the system has improved in the sense that differences in income no longer lead to different access given the same level of need. However, the tenure of private health insurance leads to differences in access given the same level of need, and its contribution to inequity has increased over time, both because insurance is more concentrated among the rich and because the elasticity of utilization for the three services has increased too.
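A minimal sketch of the kind of horizontal inequity index typically built from such microeconometric utilization models is shown below: the concentration index of actual use minus the concentration index of need-predicted use, both computed over the income rank. The data are synthetic and the "need-predicted" use is a placeholder, not the paper's econometric model or estimates.

```python
# Minimal sketch of a horizontal inequity (HI) index: concentration index
# (CI) of actual use minus CI of need-predicted use, over the income rank.
# Synthetic data and a placeholder need prediction, for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
income = rng.lognormal(mean=3.0, sigma=0.5, size=n)
need = rng.gamma(shape=2.0, scale=1.0, size=n)            # proxy for health need
visits = rng.poisson(0.8 * need + 0.002 * income)         # actual doctor visits
need_predicted = 0.8 * need                               # placeholder prediction

def concentration_index(y, income):
    """CI via the covariance formula: 2*cov(y, fractional income rank)/mean(y)."""
    rank = (np.argsort(np.argsort(income)) + 0.5) / len(income)
    return 2.0 * np.cov(y, rank, bias=True)[0, 1] / y.mean()

hi_index = (concentration_index(visits, income)
            - concentration_index(need_predicted, income))
print(f"HI index: {hi_index:.4f}")  # > 0 suggests pro-rich inequity given need
```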
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids this problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
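The classical problem the abstract points to can be seen directly in a profile likelihood: for a binomial with unknown population size N and success probability p, the likelihood is nearly flat along a ridge where N*p is roughly constant, which is what makes joint estimation hard. The sketch below uses simulated data as an assumption for illustration; it is not the paper's estimation heuristic.

```python
# Sketch of the classical problem: estimating a binomial's unknown population
# size N and success probability p jointly. Simulated data for illustration;
# the near-flat ridge of the likelihood along N*p ~ constant is what makes
# the estimation challenging.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
true_N, true_p = 200, 0.15
counts = rng.binomial(true_N, true_p, size=30)   # observed purchases per period

def neg_log_likelihood(N, p):
    return -binom.logpmf(counts, N, p).sum()

# Profile the likelihood over candidate market sizes N.
candidates = np.arange(counts.max(), 1000)
profile = []
for N in candidates:
    p_hat = counts.mean() / N                    # MLE of p for a given N
    profile.append(neg_log_likelihood(N, p_hat))
profile = np.array(profile)

best = candidates[profile.argmin()]
print(f"profiled MLE of N: {best}, implied p: {counts.mean() / best:.3f}")
# The profile is typically very flat for large N, so widely different (N, p)
# pairs with similar N*p fit the data almost equally well.
```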