992 results for Multistage stochastic linear programs
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
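For concreteness, here is a minimal sketch of steps (i) and (ii) for a two-class M/M/1 queue with a shared exponential server: under non-idling policies the mean waiting times (W1, W2) satisfy Kleinrock's conservation law plus per-class priority lower bounds, and minimizing a linear holding cost over this region is an LP solved at a vertex, recovering the c-mu rule. All rates and costs below are assumed for illustration, not taken from the paper.

```python
# Achievable region of (W1, W2) for a two-class M/M/1 queue (assumed data).
from scipy.optimize import linprog

lam1, lam2, mu = 0.3, 0.4, 1.0            # assumed arrival and service rates
c1, c2 = 3.0, 1.0                         # assumed holding costs per customer
rho1, rho2 = lam1 / mu, lam2 / mu
rho = rho1 + rho2
W0 = (lam1 + lam2) / mu**2                # mean residual work: sum_i lam_i * E[S_i^2] / 2

res = linprog(
    c=[c1 * lam1, c2 * lam2],             # cost = sum_i c_i * lam_i * W_i
    A_eq=[[rho1, rho2]],                  # Kleinrock's conservation law
    b_eq=[rho * W0 / (1 - rho)],
    bounds=[(W0 / (1 - rho1), None),      # each W_i at least its value under
            (W0 / (1 - rho2), None)],     # absolute priority to class i
)
print("optimal (W1, W2):", res.x)         # vertex: priority to the higher-c class
```

The optimum lands on the vertex where the more expensive class gets priority, which is exactly what the achievable-region argument predicts.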
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the largest index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability, Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL) (which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996)): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
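For comparison with the extended indices above, the classical Smith rule for the linear-cost case reduces to sorting classes by holding cost rate per unit of mean processing time; the job data below are assumed.

```python
# Classical Smith (WSPT) rule for linear holding costs (assumed job data).
jobs = [                                   # (name, cost rate c, mean processing time p)
    ("A", 4.0, 2.0),
    ("B", 1.0, 0.5),
    ("C", 3.0, 1.0),
]
# Smith's index of a class is c / p; serve in nonincreasing index order.
schedule = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)
print([name for name, _, _ in schedule])   # -> ['C', 'A', 'B']
```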
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
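To make the CDLP concrete, here is the standard CDLP bound (not the paper's new compact formulation) on a small single-segment MNL instance, enumerating all offer sets explicitly; revenues, capacities, and preference weights are assumed.

```python
# Standard CDLP for one MNL segment, two products, each consuming one unit
# of its own resource (all numbers assumed).  t_S = time offer set S is used.
from itertools import chain, combinations
from scipy.optimize import linprog

T, lam = 100.0, 1.0                        # horizon length and arrival rate
r = {1: 100.0, 2: 60.0}                    # product revenues
cap = {1: 30.0, 2: 40.0}                   # resource capacities
w = {1: 0.6, 2: 1.0}                       # MNL preference weights

products = [1, 2]
offer_sets = [set(s) for s in chain.from_iterable(
    combinations(products, k) for k in range(len(products) + 1))]

def p(j, S):                               # MNL purchase probability of j from S
    return w[j] / (1.0 + sum(w[k] for k in S)) if j in S else 0.0

obj = [-lam * sum(r[j] * p(j, S) for j in S) for S in offer_sets]  # maximize
A_ub = [[lam * p(j, S) for S in offer_sets] for j in products]     # capacities
A_ub.append([1.0] * len(offer_sets))                               # total time
b_ub = [cap[1], cap[2], T]

res = linprog(c=obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(offer_sets))
print("CDLP upper bound on expected revenue:", -res.fun)
```

With more products the number of columns grows as 2^n, which is precisely why compact reformulations such as the one studied above matter.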
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: We show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
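As a concrete illustration of where the vertices of a threshold polygon come from (using a plain M/M/1 admission-control queue rather than Chen and Yao's full input-output model), this sketch computes the (throughput, WIP) performance pair of each threshold policy K; the rates are assumed.

```python
# (throughput, WIP) pairs of threshold policies in an M/M/1 queue where
# arrivals are admitted only when fewer than K jobs are present (assumed rates).
lam, mu = 0.8, 1.0

def threshold_performance(K):
    """Stationary throughput and mean WIP of the M/M/1/K queue."""
    rho = lam / mu
    unnorm = [rho**n for n in range(K + 1)]       # p_n proportional to rho^n
    Z = sum(unnorm)
    p = [q / Z for q in unnorm]
    throughput = lam * (1.0 - p[K])               # rate of admitted arrivals
    wip = sum(n * pn for n, pn in enumerate(p))   # mean number in system
    return throughput, wip

for K in range(6):                                # successive vertices
    th, wip = threshold_performance(K)
    print(f"K={K}: throughput={th:.4f}, WIP={wip:.4f}")
```

Plotting these pairs traces the concave frontier behind item (d): each extra unit of WIP buys less additional throughput.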
Abstract:
We develop a mathematical programming approach for the classical PSPACE-hard restless bandit problem in stochastic optimization. We introduce a hierarchy of n (where n is the number of bandits) increasingly stronger linear programming relaxations, the last of which is exact and corresponds to the (exponential size) formulation of the problem as a Markov decision chain, while the other relaxations provide bounds and are efficiently computed. We also propose a priority-index heuristic scheduling policy from the solution to the first-order relaxation, where the indices are defined in terms of optimal dual variables. In this way we propose a policy and a suboptimality guarantee. We report results of computational experiments that suggest that the proposed heuristic policy is nearly optimal. Moreover, the second-order relaxation is found to provide strong bounds on the optimal value.
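A minimal sketch of the first-order relaxation under the time-average criterion: one occupation-measure LP per bandit, coupled only through the relaxed requirement that m bandits be active on average. The two-state instance below is assumed and identical across bandits.

```python
# First-order LP relaxation of a restless bandit problem (assumed instance).
import numpy as np
from scipy.optimize import linprog

n, m = 3, 1                                # bandits; average number active
S, A = 2, 2                                # states {0,1}; actions {passive, active}
P = np.array([[[0.9, 0.1], [0.4, 0.6]],    # P[a][s][s'] under the passive action
              [[0.3, 0.7], [0.1, 0.9]]])   # ... and under the active action
R = np.array([[0.0, 0.0],                  # R[s][a]: reward 1 only when active
              [0.0, 1.0]])                 # in the good state (assumed)

idx = lambda i, s, a: i * S * A + s * A + a
nvar = n * S * A
c = np.zeros(nvar)
A_eq, b_eq = [], []
for i in range(n):
    row = np.zeros(nvar)                   # x_i is a probability measure
    for s in range(S):
        for a in range(A):
            row[idx(i, s, a)] = 1.0
            c[idx(i, s, a)] = -R[s][a]     # linprog minimizes, so negate
    A_eq.append(row); b_eq.append(1.0)
    for s in range(S):                     # stationarity (flow balance) at s
        row = np.zeros(nvar)
        for a in range(A):
            row[idx(i, s, a)] += 1.0
            for s2 in range(S):
                row[idx(i, s2, a)] -= P[a][s2][s]
        A_eq.append(row); b_eq.append(0.0)
row = np.zeros(nvar)                       # coupling: m active on average
for i in range(n):
    for s in range(S):
        row[idx(i, s, 1)] = 1.0
A_eq.append(row); b_eq.append(float(m))

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * nvar)
print("first-order relaxation bound:", -res.fun)
```

The dual variable of the coupling constraint is the quantity from which the priority indices mentioned above are extracted.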
Abstract:
The semiclassical Einstein-Langevin equations which describe the dynamics of stochastic perturbations of the metric induced by quantum stress-energy fluctuations of matter fields in a given state are considered on the background of the ground state of semiclassical gravity, namely, Minkowski spacetime and a scalar field in its vacuum state. The relevant equations are explicitly derived for massless and massive fields arbitrarily coupled to the curvature. In doing so, some semiclassical results, such as the expectation value of the stress-energy tensor to linear order in the metric perturbations and particle creation effects, are obtained. We then solve the equations and compute the two-point correlation functions for the linearized Einstein tensor and for the metric perturbations. In the conformal field case, explicit results are obtained. These results hint that gravitational fluctuations in stochastic semiclassical gravity have a non-perturbative behavior in some characteristic correlation lengths.
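Schematically, following the standard form used in the stochastic gravity literature (conventions and index placement here are illustrative), the Einstein-Langevin equation supplements the semiclassical Einstein equation with a Gaussian stochastic source whose correlator is the noise kernel of the stress-energy fluctuations:

```latex
G_{ab}[g+h] \;=\; 8\pi G \left( \langle \hat{T}_{ab} \rangle[g+h] \;+\; \xi_{ab} \right),
\qquad
\langle \xi_{ab}(x) \rangle = 0,
\qquad
\langle \xi_{ab}(x)\,\xi_{cd}(y) \rangle = N_{abcd}(x,y),
```

where h is the metric perturbation around the background g and N_{abcd} is the symmetrized two-point function of the quantum stress-energy fluctuations.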
Abstract:
In inflationary cosmological models driven by an inflaton field, the origin of the primordial inhomogeneities responsible for large-scale structure formation is the quantum fluctuations of the inflaton field. These are usually calculated using the standard theory of cosmological perturbations, where both the gravitational and the inflaton fields are linearly perturbed and quantized. The correlation functions for the primordial metric fluctuations and their power spectrum are then computed. Here we introduce an alternative procedure for calculating the metric correlations based on the Einstein-Langevin equation which emerges in the framework of stochastic semiclassical gravity. We show that the correlation functions for the metric perturbations that follow from the Einstein-Langevin formalism coincide with those obtained with the usual quantization procedures when the scalar field perturbations are linearized. This method is explicitly applied to a simple model of chaotic inflation consisting of a Robertson-Walker background, which undergoes a quasi-de Sitter expansion, minimally coupled to a free massive quantum scalar field. The technique based on the Einstein-Langevin equation can, however, deal naturally with the perturbations of the scalar field even beyond the linear approximation, as is actually required in inflationary models which are not driven by an inflaton field, such as Starobinsky's trace-anomaly driven inflation, or when calculating corrections due to nonlinear quantum effects in the usual inflaton-driven models.
Abstract:
We show that a magnetic dipole in a shear flow under the action of an oscillating magnetic field displays stochastic resonance in the linear response regime. To this end, we compute the classical quantifiers of stochastic resonance, i.e., the signal to noise ratio, the escape time distribution, and the mean first passage time. We also discuss the limitations and role of the linear response theory in its applications to the theory of stochastic resonance.
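As a generic illustration of one of these quantifiers (for an overdamped double-well potential, not the dipole-in-shear-flow model itself), the following sketch estimates the mean first passage time by Euler-Maruyama simulation; all parameters are assumed.

```python
# Mean first passage time over the barrier of V(x) = -x^2/2 + x^4/4
# (assumed generic bistable system with noise intensity D).
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_runs = 0.15, 1e-3, 50             # noise intensity, time step, samples

def first_passage_time():
    """Time to reach the right well (+1) starting from the left well (-1)."""
    x, t = -1.0, 0.0
    while x < 1.0:
        x += (x - x**3) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        t += dt
    return t

mfpt = np.mean([first_passage_time() for _ in range(n_runs)])
print(f"estimated MFPT: {mfpt:.1f}  (Kramers scaling ~ exp(DeltaV/D), DeltaV = 1/4)")
```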
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
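A minimal Gillespie (SSA) sketch for a single self-repressing gene, with a Hill-type production propensity and first-order degradation; the propensities and parameters are assumed, not those of the networks studied above.

```python
# Gillespie algorithm for one self-regulated gene: protein copy number n
# represses its own production (assumed rates).
import numpy as np

rng = np.random.default_rng(1)
k_prod, K, k_deg = 10.0, 5.0, 1.0          # max production, repression scale, decay

def ssa(t_end=1000.0):
    n, t, samples = 0, 0.0, []
    while t < t_end:
        a_prod = k_prod * K / (K + n)      # self-repressed production propensity
        a_deg = k_deg * n                  # degradation propensity
        a0 = a_prod + a_deg
        t += rng.exponential(1.0 / a0)     # exponential time to next reaction
        if rng.random() < a_prod / a0:     # pick which reaction fires
            n += 1
        else:
            n -= 1
        samples.append(n)
    return samples

traj = ssa()
print("steady-state mean copy number ~", np.mean(traj[len(traj) // 2:]))
```

Long runs of this chain sample the steady-state distribution whose efficient numerical computation is the subject of the paper.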
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major and as yet largely unresolved challenge. To address this problem, we have developed an upscaling procedure based on a Bayesian sequential simulation approach. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this upscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
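A heavily simplified one-dimensional sketch of the sequential simulation idea (the actual procedure is Bayesian and couples regional-scale ERT with local-scale downhole logs): visit the nodes in random order and draw each value from a Gaussian conditioned, via simple kriging, on the data and the previously simulated nodes. The covariance model and data below are assumed.

```python
# 1-D sequential Gaussian simulation conditioned on three "borehole" values
# (assumed zero-mean, unit-sill exponential covariance model).
import numpy as np

rng = np.random.default_rng(4)
grid = np.linspace(0.0, 10.0, 101)
cov = lambda h: np.exp(-np.abs(h) / 2.0)     # exponential covariance, range ~2

known_x = [1.0, 5.0, 9.0]                    # data locations (assumed)
known_v = [0.5, -1.0, 1.2]                   # log-conductivity data (assumed)

xs, vs = list(known_x), list(known_v)
field = np.empty(len(grid))
for i in rng.permutation(len(grid)):         # random visiting order
    x0 = grid[i]
    X, V = np.array(xs), np.array(vs)
    C = cov(X[:, None] - X[None, :]) + 1e-8 * np.eye(len(X))  # small nugget
    c0 = cov(X - x0)
    w = np.linalg.solve(C, c0)               # simple-kriging weights
    mean, var = w @ V, max(1.0 - w @ c0, 0.0)
    field[i] = mean + np.sqrt(var) * rng.standard_normal()
    xs.append(x0); vs.append(field[i])       # condition later nodes on it
print(f"simulated field: mean {field.mean():.3f}, std {field.std():.3f}")
```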
Abstract:
The purpose of this research was to conduct a repeated cross-sectional study of class teachers who were studying in their 4th year or had graduated from the Faculty of Education, University of Turku, between 2000 and 2004. Specifically, seven research questions were addressed to target the main purpose of the study: How do class teacher education master's degree senior students and graduates rate the importance, effectiveness, and quality of the training they have received at the Faculty of Education? Are there significant differences between overall ratings of importance, effectiveness, and quality of training by year of graduation, sex, and age (for graduates) and by sex and age (for senior students)? Is there a significant relationship between respondents' overall ratings of importance and effectiveness and their overall ratings of the quality of the training and preparation they have received? Are there significant differences between graduates and senior students regarding the importance, effectiveness, and quality of teacher education programs? And what do teachers [graduates] believe about how increasing work experience has changed their opinions of their preservice training? Moreover, the following concepts related to instructional activities were studied: critical thinking skills, communication skills, attention to ethics, curriculum and instruction (planning), role of teacher and teaching knowledge, assessment skills, attention to continuous professional development, subject matter knowledge, knowledge of the learning environment, and use of educational technology. The researcher also examined the influence of moderator variables, e.g. year of graduation, sex, and age, on the dependent and independent variables. This study consisted of two questionnaires (a structured Likert-scale questionnaire and an open-ended questionnaire). The population in study 1 comprised all senior students and 2000-2004 class teacher education master's degree graduates of the departments of Teacher Education, Faculty of Education, University of Turku. Of the 1020 students and graduates, the researcher was able to find current addresses for 675 of the subjects, and of the 675 contacted, 439 (66.2 percent) responded to the survey. The population in study 2 was all class teachers who had graduated from the University of Turku and now work in basic schools (59 schools) in South-West Finland; 257 teachers answered the open-ended web-based questions. SPSS was used to produce standard deviations, analysis of variance, Pearson product-moment correlations (r), t-tests, ANOVA with Bonferroni post-hoc tests, and polynomial contrast tests to analyze linear trends. An alpha level of .05 was used to determine statistical significance. The results of the study showed that a majority of the respondents (graduates and students) rated the overall importance, effectiveness, and quality of the teacher education programs as important, effective, and good. Generally speaking, there were only a few significant differences between the cohorts and groups related to the background variables (gender, age). The different cohorts rated the quality of the programs very similarly, but some differences between the cohorts were found in the importance and effectiveness ratings. Graduates of 2001 and 2002 rated the importance of the program significantly higher than 2000 graduates. The effectiveness of the programs was rated significantly higher by 2001 and 2003 graduates than by the other groups.
In spite of these individual differences between cohorts, there were no linear trends across the year cohorts in any measure. In respondents' ratings of the effectiveness of teacher education programs there was a significant difference between males and females: females rated it higher than males. There were no significant differences between males' and females' ratings of the importance and quality of the programs. In the ratings there was only one difference between age groups: older graduates (35 years or older) rated the importance of the teacher training significantly higher than 25-35-year-old graduates. In graduates' ratings there were positive but relatively low correlations between all variables related to the importance, effectiveness, and quality of the teacher education programs. Generally speaking, students' ratings of the importance, effectiveness, and quality of the teacher education program were very positive. There was only one significant difference related to the background variables: females rated the effectiveness of the program higher. The comparison of students' and graduates' perceptions of the importance, effectiveness, and quality of teacher education programs showed no significant differences between graduates and students in the overall ratings. However, there were differences in some individual variables. Students gave higher ratings for the importance of "Continuous Professional Development", the effectiveness of "Critical Thinking Skills" and "Using Educational Technology", and the quality of "Advice received from the advisor". Graduates gave higher ratings for the importance of "Knowledge of Learning Environment" and the effectiveness of "Continuous Professional Development". According to the qualitative data of study 2, some graduates stated that their perceptions of the importance, effectiveness, and quality of the training they received during their studies had not changed. They pointed out that the teacher education programs had provided them with basic theoretical/formal knowledge and some training in practical routines. However, a majority of the teachers seem to hold somewhat critical opinions of the teacher education. These teachers were not satisfied with the teacher education programs because, they argued, the programs failed to meet their practical demands in the everyday situations of the classroom, e.g. coping with students' learning difficulties, multiprofessional communication with parents and other professional groups (psychologists and social workers), and classroom management problems. Participants also called for more practice-oriented knowledge of subject matter, evaluation methods, and teachers' rights and responsibilities. Therefore, they (54.1% of participants) suggested that teacher education departments should provide more practice-based courses and programs, as well as closer collaboration between regular schools and teacher education departments, in order to fill the gap between theory and practice.
Abstract:
The Practical Stochastic Model is a simple and robust method to describe coupled chemical reactions. The connection between this stochastic method and a deterministic method was initially established to understand how the parameters and variables that describe the concentration in both methods were related. It was necessary to define two main concepts to make this connection: the filling of compartments, or dilutions, and the rate of reaction enhancement. The parameters, variables, and the time of the stochastic methods were scaled with the size of the compartment and were compared with a deterministic method. The deterministic approach was employed as an initial reference to achieve a consistent stochastic result. Finally, an independent robust stochastic method was obtained. This method could be compared with the Stochastic Simulation Algorithm developed by Gillespie (1977). The Practical Stochastic Model produced absolute values that were essential to describe non-linear chemical reactions with a simple structure, and allowed for a correct description of the chemical kinetics.
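Not the Practical Stochastic Model itself, but a standard illustration of the compartment-size scaling it builds on: for A + B -> C with deterministic rate constant k, the stochastic propensity in a compartment of size V is (k/V)·nA·nB, and the simulated concentrations approach the deterministic rate-equation solution as V grows. Parameters are assumed.

```python
# SSA for A + B -> C at several compartment sizes V versus the exact
# deterministic solution (assumed rate and initial concentrations).
import numpy as np

rng = np.random.default_rng(2)
k, a0, b0, t_end = 1.0, 1.0, 0.8, 2.0

def ssa_concentration(V):
    nA, nB, t = int(a0 * V), int(b0 * V), 0.0
    while t < t_end and nA > 0 and nB > 0:
        prop = k / V * nA * nB             # bimolecular propensity scales as 1/V
        t += rng.exponential(1.0 / prop)
        if t < t_end:
            nA -= 1; nB -= 1               # one A + B -> C event
    return nA / V                          # concentration of A at t_end

# Exact deterministic solution for a0 != b0:
# a(t) = (a0 - b0) / (1 - (b0 / a0) * exp(-k * (a0 - b0) * t))
det = (a0 - b0) / (1 - (b0 / a0) * np.exp(-k * (a0 - b0) * t_end))
for V in (10, 100, 1000):
    est = np.mean([ssa_concentration(V) for _ in range(50)])
    print(f"V={V:5d}: SSA mean a(t_end)={est:.4f}  vs deterministic {det:.4f}")
```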
Abstract:
Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates.
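A sketch of fitting the linear-quadratic calibration curve Y = c + alpha*D + beta*D^2 (dicentric yield per cell versus dose). Both packages fit this same functional form, typically by maximum likelihood on Poisson-distributed dicentric counts; ordinary least squares is used below for brevity, and the example data are assumed.

```python
# Least-squares fit of the linear-quadratic dose-response curve (assumed data).
import numpy as np

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])              # dose in Gy
dic_yield = np.array([0.001, 0.02, 0.07, 0.25, 0.52, 0.90])  # dicentrics per cell

X = np.column_stack([np.ones_like(dose), dose, dose**2])     # [1, D, D^2] design
coef, *_ = np.linalg.lstsq(X, dic_yield, rcond=None)
c0, alpha, beta = coef
print(f"Y = {c0:.4f} + {alpha:.4f}*D + {beta:.4f}*D^2")
```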
Abstract:
The aim of this thesis is to price options on equity index futures with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset. It also captures the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula. They are accurate to four digits. The American option premiums also have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
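Not the thesis's stochastic-dynamic-programming scheme, but a minimal dynamic-programming baseline of the kind it is benchmarked against: backward induction on a binomial tree for an American call on a futures contract, whose risk-neutral drift is zero as in Black's model. All parameters are assumed.

```python
# Binomial-tree dynamic program for an American call on a futures contract.
import numpy as np

F0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 0.5, 500  # assumed inputs

dt = T / n
u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
p = (1.0 - d) / (u - d)                    # zero drift: p*u + (1-p)*d = 1
disc = np.exp(-r * dt)

V = np.maximum(F0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1) - K, 0.0)
for step in range(n - 1, -1, -1):          # backward induction through the tree
    F = F0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
    V = disc * (p * V[:-1] + (1 - p) * V[1:])
    V = np.maximum(V, F - K)               # check early exercise at each node
print(f"American call on futures: {V[0]:.4f}")
```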
Abstract:
The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) versus unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results on aggregation of weak GARCH and on continuous-time GARCH modeling can be recovered in our framework.
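A sketch of the ARMA link the SR-SARV model preserves: for a GARCH(1,1) process the squared innovations follow an ARMA(1,1) whose autoregressive root is alpha + beta, so sample autocorrelations of the squared series decay geometrically at that rate. Parameters are assumed.

```python
# Simulate GARCH(1,1) and check the geometric ACF decay of squared innovations.
import numpy as np

rng = np.random.default_rng(3)
omega, alpha, beta, n = 0.1, 0.1, 0.85, 200_000

eps2 = np.empty(n)
sig2 = omega / (1 - alpha - beta)          # start at the unconditional variance
for t in range(n):
    e = np.sqrt(sig2) * rng.standard_normal()
    eps2[t] = e * e
    sig2 = omega + alpha * eps2[t] + beta * sig2   # GARCH(1,1) recursion

x = eps2 - eps2.mean()
acf = [x[:-k] @ x[k:] / (x @ x) for k in (1, 2, 3)]
print("ACF ratios (theory: alpha + beta = 0.95):",
      acf[1] / acf[0], acf[2] / acf[1])
```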