959 results for Linear program model


Relevance:

30.00%

Publisher:

Abstract:

We present a new unifying framework for investigating throughput-WIP (work-in-process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing-returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
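The core reduction described here, a linear objective optimized over the convex hull of threshold-policy performance pairs, can be sketched with a toy LP. The vertex coordinates and the WIP cost below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical vertices of a "threshold polygon": each vertex is the
# (throughput, WIP) performance pair of one threshold policy.
vertices = np.array([
    [0.0, 0.0],
    [0.5, 0.6],
    [0.8, 1.5],
    [0.9, 3.0],
])

# Maximize throughput - c * WIP over the convex hull, written as an LP
# in the convex-combination weights (linprog minimizes, hence the sign flip).
c_wip = 0.3
obj = -(vertices[:, 0] - c_wip * vertices[:, 1])
res = linprog(obj, A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, 1)] * 4)

# A linear objective over a simplex attains its optimum at a vertex,
# i.e. at the performance pair of a single threshold policy.
best = vertices[np.argmax(-obj)]
```

The fact that the LP optimum always lands on a vertex is the toy analogue of the paper's structural argument for the optimality of threshold policies.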

Relevance:

30.00%

Publisher:

Abstract:

Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
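As a sketch of the choice model underlying the control problem, multinomial-logit purchase probabilities for an offered fare set can be computed as follows. The utilities are hypothetical, and the no-purchase utility is normalized to zero (a common convention, assumed here):

```python
import math

def mnl_probs(v, offered):
    """Multinomial-logit purchase probabilities for an offered fare set.

    v       : dict fare_class -> deterministic utility (no-purchase
              utility normalized to 0).
    offered : set of open fare classes.
    Returns a dict of purchase probabilities; key None is no-purchase.
    """
    denom = 1.0 + sum(math.exp(v[j]) for j in offered)
    probs = {j: math.exp(v[j]) / denom for j in offered}
    probs[None] = 1.0 / denom
    return probs

# Illustrative utilities for three fare classes:
v = {"Y": 1.0, "M": 0.5, "K": -0.2}
p = mnl_probs(v, {"Y", "M"})
```

Opening an additional class enlarges the denominator, so every previously open class's purchase probability falls; this dependence of probabilities on the offer set is exactly what makes the control problem choice-based.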

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
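The power QALY model weights duration non-linearly. A minimal sketch follows; the exact functional form (utility times the increment of cumulative time raised to the power 0.65) is our assumption for illustration, with the exponent taken from the abstract:

```python
def power_qaly(profile, r=0.65):
    """QALY value of a health profile under a power QALY model.

    profile: list of (utility, duration_in_years) episodes.
    Each episode contributes u * (t1**r - t0**r) along cumulative time,
    so duration is weighted non-linearly with exponent r; r = 1 recovers
    the standard linear QALY model.
    """
    total, t0 = 0.0, 0.0
    for u, dt in profile:
        t1 = t0 + dt
        total += u * (t1 ** r - t0 ** r)
        t0 = t1
    return total

# Hypothetical profile: 5 years at utility 0.8, then 5 years at 0.5.
profile = [(0.8, 5.0), (0.5, 5.0)]
linear = power_qaly(profile, r=1.0)    # linear model: 0.8*5 + 0.5*5
powered = power_qaly(profile, r=0.65)  # power model with r = 0.65
```

With r < 1 later years carry less weight, which is one way the power model corrects the bias the paper attributes to linear TTO-based QALY calculations.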

Relevance:

30.00%

Publisher:

Abstract:

The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and both are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.

Relevance:

30.00%

Publisher:

Abstract:

Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from 'as if' linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and it allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, heuristics require knowledge, and thus maps, of when and which heuristic to employ.
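The kind of simulation the paper alludes to, pitting a simple heuristic against a linear environment, can be sketched as follows. The take-the-best lexicographic heuristic and the non-compensatory binary-cue environment below are standard illustrations, not the paper's actual setup:

```python
import random

random.seed(0)

def take_the_best(order, a, b):
    """Lexicographic heuristic: decide on the first cue that discriminates."""
    for i in order:
        if a[i] != b[i]:
            return 0 if a[i] > b[i] else 1
    return random.randrange(2)  # guess when no cue discriminates

# Environment: the criterion is a weighted sum of binary cues with
# non-compensatory weights (4 > 2 + 1), a setting where take-the-best
# is known to match the linear model on discriminating pairs.
weights = [4, 2, 1]
def criterion(c):
    return sum(w * x for w, x in zip(weights, c))

pairs = [([random.randrange(2) for _ in range(3)],
          [random.randrange(2) for _ in range(3)]) for _ in range(200)]

hits = sum(
    take_the_best(range(3), a, b) == (0 if criterion(a) >= criterion(b) else 1)
    for a, b in pairs)
accuracy = hits / len(pairs)
```

In this environment the heuristic ranks options exactly as the linear rule does whenever any cue discriminates, illustrating why heuristic effectiveness depends on the match between rule and task structure.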

Relevance:

30.00%

Publisher:

Abstract:

A polarizable quantum mechanics and molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities directly comparable with experimental results. By separating the discrete local field into two distinct contributions, we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects present even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect from the induced dipoles in the solvent due to the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile both in the gas phase and in the liquid phase. For all the properties we find that the pure solvent effect increases the properties, whereas the induced electric field decreases the properties. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric-field-induced second-harmonic generation (EFISH) for liquid water and acetonitrile. We find in general good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger, since the orientational effect arising from the static electric field is not accurately described.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The SMILING project, a multicentric project funded by the European Union, aims to develop a new gait and balance training program to prevent falls in older persons. The program includes the "SMILING shoe", an innovative device that generates mechanical perturbation while walking by changing the soles' inclination. Induced perturbations challenge subjects' balance and force them to react to avoid falls. By specifically training the complex motor reactions used to maintain balance when walking on irregular ground, the program will improve subjects' ability to react in situations of unsteadiness and reduce their risk of falling.

Methods: The program will be evaluated in a multicentric, cross-over randomized controlled trial. Overall, 112 subjects (aged ≥65 years, ≥1 falls, POMA score 22-26/28) will be enrolled. Subjects will be randomised into 2 groups: group A begins the training with active "SMILING shoes", group B with inactive dummy shoes. After 4 weeks of training, groups A and B will exchange the shoes. Supervised training sessions (30 minutes twice a week for 8 weeks) include walking tasks of progressive difficulty. To avoid a learning effect, "SMILING shoes" perturbations will be generated in a non-linear and chaotic way. Gait performance, fear of falling, and acceptability of the program will be assessed.

Conclusion: The SMILING program is an innovative intervention for falls prevention in older persons based on gait and balance training using chaotic perturbations. Because of the easy use of the "SMILING shoes", this program could be used in various settings, such as geriatric clinics or at home.

Relevance:

30.00%

Publisher:

Abstract:

Phosphorylation of transcription factors is a rapid and reversible process linking cell signaling and control of gene expression; therefore, understanding how it controls transcription factor functions is one of the challenges of functional genomics. We performed such an analysis for the forkhead transcription factor FOXC2, mutated in the human hereditary disease lymphedema-distichiasis and important for the development of venous and lymphatic valves and lymphatic collecting vessels. We found that FOXC2 is phosphorylated in a cell-cycle-dependent manner on eight evolutionarily conserved serine/threonine residues, seven of which are clustered within a 70 amino acid domain. Surprisingly, the mutation of phosphorylation sites or a complete deletion of the domain did not affect the transcriptional activity of FOXC2 in a synthetic reporter assay. However, overexpression of the wild type or phosphorylation-deficient mutant resulted in overlapping but distinct gene expression profiles, suggesting that binding of FOXC2 to individual sites under physiological conditions is affected by phosphorylation. To gain direct insight into the role of FOXC2 phosphorylation, we performed comparative genome-wide location analysis (ChIP-chip) of wild-type and phosphorylation-deficient FOXC2 in primary lymphatic endothelial cells. The effect of loss of phosphorylation on FOXC2 binding to genomic sites ranged from no effect to nearly complete inhibition of binding, suggesting a mechanism for how the FOXC2 transcriptional program can be differentially regulated depending on FOXC2 phosphorylation status. Based on these results, we propose an extension to the enhanceosome model, where a network of genomic context-dependent DNA-protein and protein-protein interactions not only distinguishes a functional site from a nonphysiological site, but also determines whether binding to the functional site can be regulated by phosphorylation. Moreover, our results indicate that FOXC2 may have different roles in quiescent versus proliferating lymphatic endothelial cells in vivo.

Relevance:

30.00%

Publisher:

Abstract:

Protein-ligand docking has made important progress during the last decade and has become a powerful tool for drug development, opening the way to virtual high-throughput screening and in silico structure-based ligand design. Despite the flattering picture that has been drawn, recent publications have shown that the docking problem is far from being solved, and that more developments are still needed to achieve high successful prediction rates and accuracy. Introducing an accurate description of the solvation effect upon binding is thought to be essential to achieve this goal. In particular, EADock uses the Generalized Born Molecular Volume 2 (GBMV2) solvent model, which has been shown to accurately reproduce the desolvation energies calculated by solving the Poisson equation. Here, the implementation of the Fast Analytical Continuum Treatment of Solvation (FACTS) as an implicit solvation model in small-molecule docking calculations has been assessed using the EADock docking program. Our results strongly support the use of FACTS for docking. The success rates of EADock/FACTS and EADock/GBMV2 are similar, i.e. around 75% for local docking and 65% for blind docking. However, these results come at a much lower computational cost: FACTS is 10 times faster than GBMV2 in calculating the total electrostatic energy, and allows a speed-up of EADock by a factor of 4. This study also supports the EADock development strategy of relying on the CHARMM package for energy calculations, which enables straightforward implementation and testing of the latest developments in the field of Molecular Modeling.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The aim of our study was to assess the feasibility of minimally invasive digestive anastomosis using a modular flexible magnetic anastomotic device made up of a set of two flexible chains of magnetic elements. The assembly possesses a non-deployed linear configuration which allows it to be introduced through a dedicated small-sized applicator into the bowel where it takes the deployed form. A centering suture allows the mating between the two parts to be controlled in order to include the viscerotomy between the two magnetic rings and the connected viscera. METHODS AND PROCEDURES: Eight pigs were involved in a 2-week survival experimental study. In five colorectal anastomoses, the proximal device was inserted by a percutaneous endoscopic technique, and the colon was divided below the magnet. The distal magnet was delivered transanally to connect with the proximal magnet. In three jejunojejunostomies, the first magnetic chain was injected in its linear configuration through a small enterotomy. Once delivered, the device self-assembled into a ring shape. A second magnet was injected more distally through the same port. The centering sutures were tied together extracorporeally and, using a knot pusher, magnets were connected. Ex vivo strain testing to determine the compression force delivered by the magnetic device, burst pressure of the anastomosis, and histology were performed. RESULTS: Mean operative time including endoscopy was 69.2 ± 21.9 min, and average time to full patency was 5 days for colorectal anastomosis. Operative times for jejunojejunostomies were 125, 80, and 35 min, respectively. The postoperative period was uneventful. Burst pressure of all anastomoses was ≥ 110 mmHg. Mean strain force to detach the devices was 6.1 ± 0.98 and 12.88 ± 1.34 N in colorectal and jejunojejunal connections, respectively. Pathology showed a mild-to-moderate inflammation score. 
CONCLUSIONS: The modular magnetic system showed enormous potential to create minimally invasive digestive anastomoses, and may represent an alternative to stapled anastomoses, being easy to deliver, effective, and low cost.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is twofold. First, we study the determinants of economic growth among a wide set of potential variables for the Spanish provinces (NUTS3). Among others, we include various types of private, public and human capital in the group of growth factors. We also analyse whether Spanish provinces have converged in economic terms in recent decades. The second objective is to obtain cross-section and panel-data parameter estimates that are robust to model specification. For this purpose, we use a Bayesian Model Averaging (BMA) approach. Bayesian methodology constructs parameter estimates as a weighted average of linear regression estimates for every possible combination of included variables. The weight of each regression estimate is given by the posterior probability of each model.
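The BMA mechanics described here can be sketched on toy data. The BIC approximation to posterior model probabilities used below is a common simple choice; the paper's exact prior structure may differ, and the data are simulated:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on regressor x0 only; x1 is irrelevant.
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

def fit_ols(Xs, y):
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(((y - Xs @ beta) ** 2).sum())
    return beta, rss

# Enumerate every subset of regressors, fit OLS, and weight each model
# by exp(-BIC/2), normalized: a BIC approximation to P(model | data).
models, bics = [], []
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(2), k) for k in range(3)):
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, rss = fit_ols(Xs, y)
    bic = n * np.log(rss / n) + Xs.shape[1] * np.log(n)
    models.append((subset, beta))
    bics.append(bic)

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()  # posterior model probabilities

# BMA estimate of the coefficient on x0: average across models,
# counting zero in models that exclude x0.
bma_b0 = sum(
    wi * (b[list(s).index(0) + 1] if 0 in s else 0.0)
    for wi, (s, b) in zip(w, models))
```

With two candidate regressors there are only four models; with the paper's wide set of growth determinants the same weighted average runs over 2^K models, which is why practical BMA implementations sample the model space rather than enumerate it.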

Relevance:

30.00%

Publisher:

Abstract:

In this paper we extend the linear reforms introduced by Pfähler (1984) to the case of dual taxes. We study the relative effect that dual linear cuts of a dual tax have on the distribution of inequality (a symmetric analysis can be carried out for tax increases). We also introduce measures of the degree of progressivity of dual taxes and show that they are connected with the Lorenz dominance criterion. Additionally, we study the tax-burden elasticity of each of the proposed reforms. Finally, using a microsimulation model and a large database containing information on the Spanish personal income tax (IRPF) for 2004, we 1) compare the effect that different reforms would have on the Spanish dual tax, and 2) study what redistribution of wealth the dual reform of the IRPF (Law 35/2006) entailed with respect to the previous tax.

Relevance:

30.00%

Publisher:

Abstract:

We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
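A linear two-step estimator in this spirit can be illustrated on simulated MA(1) data. This is a Hannan-Rissanen-style sketch (long AR fit to proxy the innovations, then one OLS regression); the paper's own iteration may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an invertible MA(1): y_t = e_t + theta * e_{t-1}.
theta, n = 0.5, 5000
e = rng.normal(size=n + 1)
y = e[1:] + theta * e[:-1]

# Step 1: fit a long AR(p) by OLS and take its residuals as
# proxies for the unobserved innovations e_t.
p = 20
Z = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
phi, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
ehat = y[p:] - Z @ phi

# Step 2: OLS of y_t on the lagged innovation proxy ehat_{t-1}
# gives a closed-form, purely linear estimate of theta.
theta_hat = float(ehat[:-1] @ y[p + 1:] / (ehat[:-1] @ ehat[:-1]))
```

Both steps are ordinary least squares, so the whole procedure sidesteps the nonlinear optimization that exact MA(1) estimation normally requires, which is the selling point the abstract describes.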


Relevance:

30.00%

Publisher:

Abstract:

Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods to model this type of variable have been proposed that take into account the ceiling effect: the tobit models, the Censored Least Absolute Deviations (CLAD) models or the two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect.

Methods: Two different data sets were used in order to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), in order to model the EQ-5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit model and the OLS model. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different datasets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimates of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared.

Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those from the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimates of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model.

Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
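A minimal tobit-versus-OLS illustration of the ceiling effect follows, with right-censoring at 1 and the tobit log-likelihood maximized numerically. The data are simulated and the parameter values are assumptions for illustration, not values from the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Latent outcome y* = a + b*x + e, observed y = min(y*, 1): a ceiling at 1,
# as with utility indices where many respondents report perfect health.
n, a, b, s = 1000, 0.6, 0.3, 0.25
x = rng.normal(size=n)
y = np.minimum(a + b * x + rng.normal(scale=s, size=n), 1.0)
cens = y >= 1.0  # respondents at the best possible score

def negll(params):
    """Negative log-likelihood of a tobit model censored from above at 1."""
    a_, b_, log_s = params
    s_ = np.exp(log_s)
    mu = a_ + b_ * x
    ll_unc = norm.logpdf(y[~cens], mu[~cens], s_).sum()  # uncensored part
    ll_cen = norm.logsf((1.0 - mu[cens]) / s_).sum()     # log P(y* >= 1)
    return -(ll_unc + ll_cen)

res = minimize(negll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000})
b_tobit = res.x[1]

# OLS on the censored outcome: the slope is attenuated toward zero,
# matching the smaller linear-model coefficients reported above.
b_ols = float(np.cov(x, y)[0, 1] / np.var(x))
```

Because the tobit likelihood models the pile-up at the ceiling explicitly, its slope estimate stays close to the latent coefficient while the OLS slope shrinks, which is the qualitative pattern the study reports for its regression coefficients.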