938 results for optimal-stocking model
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
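As a rough illustration of the choice model in this abstract: under a multinomial logit, the probability that an arriving customer buys fare class j depends on which classes are currently open. The Python sketch below renders this under assumed utilities; the class names, values, and the no-purchase utility v0 are hypothetical, not taken from the paper.

```python
import math

def purchase_probs(v, open_set, v0=0.0):
    """Multinomial-logit purchase probabilities for the fare classes in
    `open_set`, given class utilities v[j]; v0 is the no-purchase utility.
    (Illustrative parameterization; the paper's may differ.)"""
    denom = math.exp(v0) + sum(math.exp(v[j]) for j in open_set)
    probs = {j: math.exp(v[j]) / denom for j in open_set}
    probs["no_purchase"] = math.exp(v0) / denom
    return probs

# Example: closing the cheapest class shifts some demand upward ("buy-up")
v = {"Y": 1.0, "M": 1.5, "Q": 2.0}   # hypothetical utilities per class
print(purchase_probs(v, {"Y", "M"}))
print(purchase_probs(v, {"Y", "M", "Q"}))
```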
Abstract:
We study the standard economic model of unilateral accidents, in its simplest form, assuming that the injurers have limited assets. We identify a second-best optimal rule that selects as due care the minimum of first-best care and a level of care that takes into account the wealth of the injurer. We show that such a rule in fact maximizes the precautionary effort by a potential injurer. The idea is counterintuitive: being softer on an injurer, in terms of the required level of care, actually improves the incentives to take care when he is potentially insolvent. We extend the basic result to an entire population of potentially insolvent injurers, and find that the optimal general standards of care do depend on wealth and the distribution of income. We also show the conditions for the result that higher income levels in a given society call for higher levels of care for accidents.
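In symbols, the rule described above can be written as follows, where x^{fb} denotes first-best care, w the injurer's wealth, and \hat{x}(w) a wealth-dependent care level; the notation is ours, not the paper's.

```latex
x^{\mathrm{sb}}(w) \;=\; \min\bigl\{\, x^{\mathrm{fb}},\; \hat{x}(w) \,\bigr\}
```

Capping the standard at \hat{x}(w) keeps compliance worthwhile for an injurer who could not pay for harm beyond his assets, which is why softening the standard can raise precautionary effort.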
Abstract:
Most cases of cost overruns in public procurement are related to important changes in the initial project design. This paper deals with the problem of design specification in public procurement and provides a rationale for design misspecification. We propose a model in which the sponsor decides how much to invest in design specification and competitively awards the project to a contractor. After the project has been awarded, the sponsor engages in bilateral renegotiation with the contractor in order to accommodate changes in the initial project's design that new information makes desirable. When procurement takes place in the presence of horizontally differentiated contractors, the design's specification level is seen to affect the resulting degree of competition. The paper highlights this interaction between market competition and design specification and shows that the sponsor's optimal strategy, when facing an imperfectly competitive market supply, is to underinvest in design specification so as to make significant cost overruns likely. Since no such misspecification occurs in a perfectly competitive market, cost overruns are seen to arise as a consequence of lack of competition in the procurement market.
Abstract:
I present an optimisation model that links paternal investment, male display and female choice. Although devised for sticklebacks, it readily applies to other fish with male guarding behaviour. It relies on a few basic assumptions on the ways hatching success depends on paternal investment and clutch size, and male survival on paternal investment and signaling. Paternal investment is here a state-dependent decision, and the signal a condition-dependent handicap by which males inform females of how much they are willing to invest. A series of predictions is derived on female and male breeding strategies, including optimal levels of signaling and paternal investment as functions of clutch size, own condition, and residual reproductive value, as well as alternative strategies such as egg kleptoparasitism. Some predictions already have empirical support, for which the present model provides new interpretations. Others might readily be tested, e.g. by simple clutch-size manipulations.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
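The selection principle in this abstract, pick the candidate that minimizes empirical risk plus an estimated class complexity on held-out data, can be sketched as follows. Here polynomial-degree classes and a simple linear penalty stand in for the paper's model classes and its empirically computed covering complexities; both substitutions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.sin(3 * X) + 0.3 * rng.standard_normal(200)

# Split the sample: first half to fit a candidate from each class,
# second half to score candidates (mirroring the paper's two-part split)
X1, y1, X2, y2 = X[:100], y[:100], X[100:], y[100:]

best = None
for degree in range(1, 12):                  # model classes F_1, F_2, ...
    coef = np.polyfit(X1, y1, degree)        # candidate rule from class F_d
    risk = np.mean((np.polyval(coef, X2) - y2) ** 2)   # empirical risk
    penalty = degree / len(X2)               # crude stand-in for the
                                             # empirically estimated complexity
    score = risk + penalty
    if best is None or score < best[0]:
        best = (score, degree, coef)
print("selected degree:", best[1])
```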
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or 'centralized'), whereas it is largely homogeneous (or 'decentralized') for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
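A crude way to see the centralized-versus-decentralized trade-off is to compare how much traffic the busiest node must handle: it saturates, and the network collapses, once the arrival rate times its traffic share exceeds its service rate. The sketch below uses networkx, shortest-path routing, and uniform origin-destination pairs; these are our simplifying assumptions, not the paper's exact process.

```python
import itertools
import networkx as nx

def collapse_threshold(G, mu=1.0):
    """Estimate the network-wide arrival rate at which the busiest node
    saturates, under uniform origin-destination pairs and shortest-path
    routing (a crude proxy for the congestion threshold)."""
    nodes = list(G)
    pairs = list(itertools.permutations(nodes, 2))
    load = {v: 0 for v in nodes}
    for s, t in pairs:
        for v in nx.shortest_path(G, s, t):
            load[v] += 1              # node v handles this problem once
    max_share = max(load.values()) / len(pairs)
    return mu / max_share             # stable while rate * max_share < mu

n = 12
star = nx.star_graph(n - 1)           # polarized: one hub, n-1 leaves
ring = nx.cycle_graph(n)              # homogeneous benchmark
print("star threshold ~", collapse_threshold(star))
print("ring threshold ~", collapse_threshold(ring))
```

The star's hub lies on nearly every route, so its collapse threshold is low even though paths are short; the ring spreads load evenly and tolerates higher arrival rates, in line with the abstract's conclusion.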
Abstract:
This paper extends the optimal law enforcement literature to organized crime. We model the criminal organization as a vertical structure where the principal extracts some rents from the agents through extortion. Depending on the principal's information set, threats may or may not be credible. As long as threats are credible, the principal is able to fully extract rents. In that case, the results obtained by applying the standard theory of optimal law enforcement are robust: we argue for a tougher policy. However, when threats are not credible, the principal is not able to fully extract rents and there is violence. Moreover, we show that it is not necessarily true that a tougher law enforcement policy should be chosen in the presence of organized crime.
Abstract:
Aware of the importance of developing new alternatives to improve the performance of companies, our purpose in this paper is to develop a medium-term production planning model that deals with the concepts of Partnership and Reverse Logistics. Our model takes advantage of the synergies of integration: a global production planning model that generates the optimal production and purchasing schedule for all the companies in a logistics chain. In the second part of the paper we incorporate product returns into the first model proposed, and analyze the implications they have for this model. We use examples with different configurations of supply chains, varying the number of production plants, distribution centers and recovery plants. To solve the model we have combined optimization and simulation procedures.
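For flavor, here is a minimal one-period linear-programming toy in the spirit of the integrated problem: two production plants plus a recovery plant whose remanufactured returns substitute for new production. All costs, capacities, and the demand figure are invented for illustration; the paper's actual model is multi-period and solved with combined optimization and simulation.

```python
from scipy.optimize import linprog

# Decision variables: x1, x2 = production at plants 1 and 2,
# r = units remanufactured from returned products (all numbers hypothetical)
cost = [4.0, 5.0, 2.0]            # unit costs: plant 1, plant 2, recovery
demand, returns = 120, 30
A_ub = [[-1, -1, -1],             # meet demand: x1 + x2 + r >= demand
        [1, 0, 0],                # plant 1 capacity
        [0, 1, 0],                # plant 2 capacity
        [0, 0, 1]]                # cannot remanufacture more than returns
b_ub = [-demand, 80, 70, returns]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)             # optimal plan uses cheap remanufacturing first
```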
Abstract:
AIMS: While successful termination by pacing of organized atrial tachycardias has been observed in patients, single-site rapid pacing has not yet led to conclusive results for the termination of atrial fibrillation (AF). The purpose of this study was to evaluate a novel atrial septal pacing algorithm for the termination of AF in a biophysical model of the human atria. METHODS AND RESULTS: Sustained AF was generated in a model based on human magnetic resonance images and membrane kinetics. Rapid pacing was applied from the septal area following a dual-stage scheme: (i) rapid pacing for 10-30 s at pacing intervals of 62-70% of the AF cycle length (AFCL); (ii) slow pacing for 1.5 s at 180% AFCL, initiated by a single stimulus at 130% AFCL. AF termination success rates were computed. A mean success rate for AF termination of 10.2% was obtained for rapid septal pacing only. The addition of the slow pacing phase increased this rate to 20.2%. At the optimal pacing cycle length (64% AFCL), up to 29% AF termination was observed. CONCLUSION: The proposed septal pacing algorithm could suppress AF reentries in a more robust way than classical single-site rapid pacing. Experimental studies are now needed to determine whether similar termination mechanisms and rates can be observed in animals or humans, and in which types of AF this pacing strategy might be most effective.
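The dual-stage scheme reduces to simple arithmetic on the measured AF cycle length. A small helper, with parameter names of our own choosing, makes the timing explicit; the percentages and durations are those quoted in the abstract.

```python
def pacing_schedule(afcl_ms, rapid_frac=0.64, rapid_dur_s=20.0):
    """Dual-stage septal pacing intervals derived from the measured AF
    cycle length (AFCL). rapid_frac=0.64 is the optimal fraction reported;
    the allowed range is 0.62-0.70, with 10-30 s of rapid pacing."""
    return {
        "rapid_interval_ms": rapid_frac * afcl_ms,   # stage (i): 62-70% AFCL
        "rapid_duration_s": rapid_dur_s,             # 10-30 s
        "transition_stimulus_ms": 1.30 * afcl_ms,    # single beat at 130% AFCL
        "slow_interval_ms": 1.80 * afcl_ms,          # stage (ii): 180% AFCL
        "slow_duration_s": 1.5,
    }

print(pacing_schedule(afcl_ms=160))
```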
Identification of optimal structural connectivity using functional connectivity and neural modeling.
Abstract:
The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement of the fitting of the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
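The iterative-fitting idea, adjust structural weights in the direction that reduces the simulated-versus-empirical FC mismatch, can be sketched as below. The additive update rule and the `simulate_fc` placeholder for the dynamic mean-field simulation are our assumptions; the paper's algorithm additionally adds specific anatomical links, notably cross-hemispheric ones.

```python
import numpy as np

def refine_sc(sc, fc_emp, simulate_fc, iters=50, step=0.05):
    """Iteratively reweight structural links so the model's simulated FC
    approaches the empirical FC. `simulate_fc(sc)` stands in for running
    the dynamic mean-field model to a simulated FC matrix."""
    for _ in range(iters):
        fc_sim = simulate_fc(sc)
        err = fc_emp - fc_sim                     # where simulated FC falls short
        sc = np.clip(sc + step * err, 0, None)    # strengthen/weaken links, keep >= 0
        np.fill_diagonal(sc, 0)                   # no self-connections
    return sc
```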
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's K agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
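The agreement measure and the 'regular' versus 'random' designs are easy to make concrete. The snippet below shows Cohen's K for binary presence/absence maps and the two non-stratified designs on a flattened grid; the stratified designs, which require environmental strata, are omitted, and the demo data are random.

```python
import numpy as np

def cohens_kappa(a, b):
    """Agreement between two binary presence/absence maps, corrected for chance."""
    po = np.mean(a == b)                                  # observed agreement
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    return (po - pe) / (1 - pe)

def regular_sample(n_cells, n_samples):
    """'Regular' (systematic) design: evenly spaced cell indices."""
    return np.linspace(0, n_cells - 1, n_samples).astype(int)

def random_sample(n_cells, n_samples, rng):
    """'Random' design: cells drawn without replacement."""
    return rng.choice(n_cells, n_samples, replace=False)

rng = np.random.default_rng(1)
truth = rng.random(1000) < 0.3        # known virtual-species distribution (demo)
pred = rng.random(1000) < 0.3         # a prediction to score (demo)
print(cohens_kappa(truth, pred))
```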
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers for the first time multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto-optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, in this paper a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA) is proposed. Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
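At the core of any SPEA-style MOEA is the Pareto-dominance test used to maintain the external archive of nondominated solutions. A minimal version, assuming all objectives are minimized, is sketched below; it illustrates the concept only and is not the paper's algorithm.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated(points):
    """Extract the Pareto-optimal subset of a set of objective vectors;
    SPEA-style algorithms maintain such an external archive."""
    pts = np.asarray(points)
    keep = [i for i, p in enumerate(pts)
            if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]
    return pts[keep]

print(nondominated([[1, 5], [2, 2], [3, 1], [4, 4]]))  # [4, 4] is dominated
```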
Abstract:
The population density of an organism is one of the main aspects of its environment, and should therefore strongly influence its adaptive strategy. The r/K theory, based on the logistic model, was developed to formalize this influence. K-selection is classically thought to favour large body sizes. This prediction, however, cannot be directly derived from the logistic model: some auxiliary hypotheses are therefore implicit. These are to be made explicit if the theory is to be tested. An alternative approach, based on the Euler-Lotka equation, shows that density itself is irrelevant, but that the relative effect of density on adult and juvenile features is crucial. For instance, increasing population density will select for a smaller body size if the density affects mainly juvenile growth and/or survival. In this case, density should indeed favour large body sizes. The theory appears nevertheless inconsistent, since a probable consequence of increasing body size will be a decrease in the carrying capacity.
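For reference, the continuous-time Euler-Lotka equation invoked above, in standard notation (l(x): survival to age x; m(x): fecundity at age x; r: population growth rate):

```latex
1 \;=\; \int_{0}^{\infty} e^{-r x}\, l(x)\, m(x)\, dx
```

The abstract's point is that density matters only through how it shifts l(x) and m(x) at juvenile versus adult ages, not through density per se.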
Abstract:
This paper provides, from a theoretical and quantitative point of view, an explanation of why taxes on capital returns are high (around 35%) by analyzing the optimal fiscal policy in an economy with intergenerational redistribution. For this purpose, the government is modeled explicitly and can choose (and commit to) an optimal tax policy in order to maximize society's welfare. In an infinitely lived economy with heterogeneous agents, the long-run optimal capital tax is zero. If heterogeneity is due to the existence of overlapping generations, this result in general is no longer true. I provide sufficient conditions for zero capital and labor taxes, and show that a general class of preferences, commonly used in the macro and public finance literature, violates these conditions. For a version of the model, calibrated to the US economy, the main results are: first, if the government is restricted to a set of instruments, the observed fiscal policy cannot be disregarded as suboptimal, and capital taxes are positive and quantitatively relevant. Second, if the government can use age-specific taxes for each generation, then the age profile of capital taxes implies subsidizing the asset returns of the younger generations and taxing at higher rates the asset returns of the older ones.
Abstract:
This paper analyzes the issue of the interiority of the optimal population growth rate in a two-period overlapping generations model with endogenous fertility. Using Cobb-Douglas utility and production functions, we show that the introduction of a cost of raising children allows for the existence of an interior global maximum in the planner's problem, contrary to the exogenous fertility case.
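One way to write a planner's problem of the kind described, with Cobb-Douglas technology in per-worker form and a per-child rearing cost \phi > 0; the notation and the exact form of the resource constraint are our assumptions, not necessarily the paper's:

```latex
\max_{\{c^{y}_{t},\, c^{o}_{t},\, n_{t},\, k_{t+1}\}}
  \;\sum_{t=0}^{\infty} \beta^{t}\, U\!\left(c^{y}_{t},\, c^{o}_{t+1},\, n_{t}\right)
\quad \text{s.t.} \quad
k_{t}^{\alpha} \;=\; c^{y}_{t} + \phi\, n_{t} + n_{t}\, k_{t+1} + \frac{c^{o}_{t}}{n_{t-1}}
```

Without the cost term \phi n_t, the objective can be monotone in fertility, pushing the optimal population growth rate to a corner; its presence is what makes an interior global maximum possible, which is the abstract's point.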