963 results for non-linear programming
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation should be computed for a particular generated model. SVM is specifically designed to tackle classification problems in high-dimensional spaces in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response depending on the combination of parameter values.
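A minimal sketch of the workflow described above, under illustrative assumptions (the toy cost function, threshold and sample sizes are made up, and the SVM settings are generic scikit-learn choices rather than the paper's):

```python
# Classify sampled parameter vectors as "good" or "bad" fits and use the SVM
# decision function to flag candidates near the boundary for further forward runs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cost(theta):
    # Hypothetical stand-in for an expensive forward-model misfit.
    return np.sum((theta - 0.3) ** 2, axis=1)

# Initial design: forward model evaluated on a small random sample.
X_train = rng.uniform(0.0, 1.0, size=(200, 4))        # 4 model parameters
y_train = (cost(X_train) < 0.5).astype(int)           # 1 = "good fitting" region

# Non-parametric, non-linear classifier in parameter space.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# Large pool of candidate models: run the forward model only where the
# classifier is uncertain, i.e. close to the decision boundary.
X_pool = rng.uniform(0.0, 1.0, size=(10_000, 4))
margin = np.abs(clf.decision_function(X_pool))
candidates = X_pool[np.argsort(margin)[:100]]         # 100 most ambiguous models
print("next forward simulations:", candidates.shape)
```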
Abstract:
This paper introduces an approach that uses TURF analysis to design a product line through a binary linear programming model. This makes the search for a solution more efficient than the algorithms used to date. Furthermore, the proposed technique allows the model to be refined to overcome the main drawbacks that TURF analysis presents in practice.
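A schematic version of the kind of binary linear program involved, stated under assumed notation (the paper's model refines this basic reach-maximising TURF formulation): a_{ij} = 1 if respondent j would buy product i, x_i selects product i for the line, and y_j records that respondent j is reached.

```latex
\begin{align*}
\max\ & \sum_{j=1}^{m} y_j \\
\text{s.t.}\ & y_j \le \sum_{i=1}^{n} a_{ij} x_i, \quad j = 1,\dots,m, \\
             & \sum_{i=1}^{n} x_i = k, \\
             & x_i,\, y_j \in \{0,1\},
\end{align*}
```

where k is the number of products allowed in the line.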
Abstract:
The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate a science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space, while no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis provides an illustration of the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, which are both useful features for the Chernobyl fallout study.
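For orientation, the generic structure of the BME posterior in the standard formalism (notation assumed here rather than taken from this paper): a maximum-entropy prior f_G is built from general knowledge (e.g., mean and covariance functions), and the hard and soft data are then assimilated at the estimation point p_k.

```latex
% Schematic BME posterior at estimation point p_k, with chi_hard the exact
% measurements, f_S the soft-data distribution and A a normalising constant:
\begin{equation*}
f_K(\chi_k) \;=\; A^{-1} \int f_S(\chi_{\text{soft}})\,
   f_G(\chi_{\text{hard}}, \chi_{\text{soft}}, \chi_k)\, d\chi_{\text{soft}} .
\end{equation*}
% Estimates and their uncertainty (posterior mean, mode, variance) are read off
% this distribution without assuming Gaussianity; with hard data only and
% Gaussian general knowledge it reduces to the kriging estimator and variance.
```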
Abstract:
Introduction: Imatinib, a first-line drug for chronic myeloid leukaemia (CML), has been increasingly proposed for therapeutic drug monitoring (TDM), as trough concentrations >=1000 ng/ml (Cmin) have been associated with improved molecular and complete cytogenetic response (CCyR). The pharmacological monitoring project of EUTOS (European Treatment and Outcome Study) was launched to retrospectively validate the correlation between Cmin and response in a large population of patients followed by central TDM in Bordeaux. Methods: 1898 CML patients with first TDM 0-9 years after imatinib initiation, providing cytogenetic data along with demographic and comedication (37%) information, were included. Individual Cmin, estimated by non-linear regression (NONMEM), was adjusted to the initial standard dose (400 mg/day) and stratified at 1000 ng/ml. Kaplan-Meier estimates of overall cumulative CCyR rates (stratified by sex, age, comedication and Cmin) were compared using the asymptotic logrank k-sample test for interval-censored data. Differences in Cmin were assessed by the Wilcoxon test. Results: There were no significant differences in overall cumulative CCyR rates between Cmin strata, sex and comedication with P-glycoprotein inhibitors/inducers or CYP3A4 inhibitors (p > 0.05). Lower rates were observed in 113 young patients <30 years (p = 0.037; 1-year rates: 43% vs 60% in older patients), as well as in 29 patients with CYP3A4 inducers (p = 0.001, 1-year rates: 40% vs 66% without). Higher rates were observed in 108 patients on organic-cation-transporter-1 (hOCT-1) inhibitors (p = 0.034, 1-year rates: 83% vs 56% without). Considering 1-year CCyR rates, a trend towards better response for Cmin above 1000 ng/ml was observed: 64% (95%CI: 60-69%) vs 59% (95%CI: 56-61%). Median Cmin (400 mg/day) was significantly reduced in male patients (732 vs 899 ng/ml, p < 0.001), young patients <30 years (734 vs 802 ng/ml, p = 0.037) and under CYP3A4 inducers (758 vs 859 ng/ml, p = 0.022). Under hOCT-1 inhibitors, Cmin was increased (939 vs 827 ng/ml, p = 0.038). Conclusion: Based on observational TDM data, the impact of imatinib Cmin >1000 ng/ml on CCyR was not salient. Young CML patients (<30 years) and patients taking CYP3A4 inducers probably need close monitoring and possibly higher imatinib doses, due to lower Cmin along with lower CCyR rates. Patients taking hOCT-1 inhibitors, in contrast, seem to have improved CCyR response rates. The precise role for imatinib TDM remains to be established prospectively.
Abstract:
This comprehensive study aimed to understand the reflections of, and contrasts between, personal time and medical therapy protocol time in the life of a young woman with breast cancer. Framed as a situational study and grounded in Beth's life story of falling ill and dying of cancer at age 34, data collection employed interviews, observation and medical record analysis. An analytic-synthetic chart built from the chronology of Beth's clinical progression, treatment phases and temporal perception of events allowed us to identify a linear medical therapy protocol time, defined by the sequencing of diagnosis and treatment. Beth's experienced time, on the other hand, was marked by simultaneous and non-linear events that generated suffering caused by the disease. This understanding highlights the need for healthcare professionals to take into account the time experienced by the patient, thereby giving the indispensable cancer therapy protocol a personal character.
Abstract:
We discuss some practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We discuss practical issues arising in the application of the algorithm, describe a Fortran program for implementing it that is available through the internet, and work through these issues in a battery of six examples.
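A compact sketch of the PEA fixed-point iteration for the textbook stochastic growth model, under assumed parameter values; a log-linear regression stands in for the general non-linear least-squares step, and the Fortran program mentioned above is far more general than this illustration.

```python
# PEA sketch: parameterize the conditional expectation in the Euler equation,
# simulate the model, regress the realized integrand on the state, and iterate.
import numpy as np

alpha, beta, gamma, delta, rho, sigma = 0.33, 0.95, 2.0, 0.10, 0.90, 0.02
T, lam = 20_000, 0.5                      # simulation length, dampening weight
rng = np.random.default_rng(1)

# Parameterize E_t[.] as exp(nu0 + nu1*ln k_t + nu2*ln theta_t).
nu = np.array([0.0, 0.0, 0.0])
kss = (alpha * beta / (1 - beta * (1 - delta))) ** (1 / (1 - alpha))  # steady state
lntheta = np.zeros(T)
eps = rng.normal(0.0, sigma, T)
for t in range(1, T):
    lntheta[t] = rho * lntheta[t - 1] + eps[t]
theta = np.exp(lntheta)

for it in range(200):
    k = np.empty(T + 1); k[0] = kss
    c = np.empty(T)
    for t in range(T):
        psi = np.exp(nu[0] + nu[1] * np.log(k[t]) + nu[2] * lntheta[t])
        c[t] = (beta * psi) ** (-1.0 / gamma)          # consumption from the Euler equation
        c[t] = min(c[t], 0.99 * (theta[t] * k[t] ** alpha + (1 - delta) * k[t]))
        k[t + 1] = theta[t] * k[t] ** alpha + (1 - delta) * k[t] - c[t]
    # Realized integrand of the conditional expectation, dated t+1.
    e = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:T] ** (alpha - 1) + 1 - delta)
    # Log-linear regression in place of the full non-linear least-squares step.
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), lntheta[:T - 1]])
    nu_hat, *_ = np.linalg.lstsq(X, np.log(e), rcond=None)
    if np.max(np.abs(nu_hat - nu)) < 1e-6:
        break
    nu = (1 - lam) * nu + lam * nu_hat     # dampened update of the expectation coefficients

print("converged expectation coefficients:", nu)
```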
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
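The classical illustration of the approach, recalled here for orientation (not necessarily the worked example of the paper): in a two-class M/G/1 queue under non-preemptive work-conserving policies, the vector of mean waiting times is confined to a line segment, and optimising a linear cost over that segment yields a priority rule.

```latex
% Kleinrock's conservation law for a two-class M/G/1 queue under any
% non-preemptive work-conserving policy:
\begin{equation*}
\rho_1 W_1 + \rho_2 W_2 \;=\; \frac{\rho\, W_0}{1 - \rho},
\qquad
W_0 = \tfrac{1}{2}\sum_{i=1}^{2} \lambda_i\, \mathbb{E}[S_i^2],
\quad \rho = \rho_1 + \rho_2 .
\end{equation*}
% The achievable region of (W_1, W_2) is the segment of this line between the
% points attained by the two static priority rules; minimising
% c_1 lambda_1 W_1 + c_2 lambda_2 W_2 over it recovers the c-mu priority rule.
```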
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and the demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, able to scale to industrial problems and can provide significant improvements over standard benchmarks.
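A stripped-down sketch of the randomized-LP idea under illustrative data (fares, capacities, show-up probabilities and Poisson demands are assumptions): sample a demand realization, solve a small LP in which expected show-ups consume leg capacity, and average the leg duals across samples to obtain bid prices. The paper's formulation additionally prices overbooking and denied-boarding costs explicitly.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

fares = np.array([400.0, 150.0, 300.0])          # product fares (illustrative)
show_up = np.array([0.95, 0.80, 0.90])           # product-specific show-up probabilities
A = np.array([[1, 1, 0],                         # leg-product incidence matrix
              [0, 1, 1]], dtype=float)
cap = np.array([100.0, 120.0])                   # physical leg capacities
mean_demand = np.array([60.0, 90.0, 70.0])

bid_prices = np.zeros(len(cap))
K = 200                                          # number of demand samples
for _ in range(K):
    d = rng.poisson(mean_demand).astype(float)   # sampled demand realization
    # max fares @ x  s.t.  A (show_up * x) <= cap,  0 <= x <= d
    res = linprog(c=-fares,
                  A_ub=A * show_up,              # expected show-ups consume capacity
                  b_ub=cap,
                  bounds=list(zip(np.zeros(3), d)),
                  method="highs")
    bid_prices += -res.ineqlin.marginals / K     # average leg duals across samples

print("approximate bid prices per leg:", bid_prices)
# A booking request would then be accepted only if its fare exceeds the sum of
# the bid prices on the legs it uses.
```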
Abstract:
In this paper, we study how access pricing affects network competition when subscription demand is elastic and each network uses non-linear prices and can apply termination-based price discrimination. In the case of a fixed per-minute termination charge, we find that a reduction of the termination charge below cost has two opposing effects: it softens competition but helps to internalize network externalities. The former reduces mobile penetration while the latter boosts it. We find that firms always prefer a termination charge below cost for either motive, while the regulator prefers termination below cost only when this boosts penetration. Next, we consider the retail benchmarking approach (Jeon and Hurkens, 2008) that determines termination charges as a function of retail prices and show that this approach allows the regulator to increase penetration without distorting call volumes.
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the largest index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
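As a point of reference, Smith's rule for the linear case and a first-order reading of the extended-class indices described above (schematic, assumed notation; the paper derives the exact indices via the greedoid-polytope linear program).

```latex
% Smith's rule for linear holding costs: serve classes in nonincreasing order of
\begin{equation*}
\nu_i \;=\; \frac{c_i}{\mathbb{E}[S_i]}
\qquad \text{(holding cost rate per unit of expected processing time).}
\end{equation*}
% A first-order reading of the extended-class index for class i with n jobs
% present and convex holding cost rate h_i(.) is the marginal cost-rate
% reduction from completing one job, per unit of expected work:
\begin{equation*}
\nu_i(n) \;\approx\; \frac{h_i(n) - h_i(n-1)}{\mathbb{E}[S_i]} .
\end{equation*}
```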
Abstract:
This paper provides a method to estimate time-varying coefficient structural VARs which are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
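A generic state-space representation of a time-varying coefficient structural VAR, given here for orientation only (standard notation, not necessarily the paper's specification).

```latex
% Structural measurement equation and random-walk law of motion for the
% time-varying parameters; theta_t stacks the free elements of (C_t, B_{j,t}, A_t).
\begin{align*}
A_t\, y_t &= C_t + B_{1,t}\, y_{t-1} + \cdots + B_{p,t}\, y_{t-p} + \Sigma_t\, \varepsilon_t,
  \qquad \varepsilon_t \sim N(0, I), \\
\theta_t &= \theta_{t-1} + \eta_t, \qquad \eta_t \sim N(0, Q).
\end{align*}
% Identification restrictions (possibly non-recursive and overidentifying)
% constrain A_t; the linear and non-linear restrictions handled by the
% procedure above are restrictions on theta_t.
```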
Abstract:
We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers select preemptively customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
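For orientation, a minimal statement of the classical $c \mu$ rule referenced above (the no-feedback special case; the notation is standard rather than taken from the paper).

```latex
% With holding cost rate c_i and service rate mu_i for class i, whenever a
% server becomes free it preemptively serves a customer of a class
\begin{equation*}
i^{*} \in \arg\max_{i \,:\, N_i(t) > 0} \; c_i\, \mu_i ,
\end{equation*}
% where N_i(t) is the number of class-i customers present. Klimov's indices
% generalize c_i mu_i to account for Bernoulli feedback between classes.
```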
Abstract:
In order to have references for discussing mathematical menus in political science, I review the most common types of mathematical formulae used in physics and chemistry, as well as some mathematical advances in economics. Several issues appear relevant: variables should be well defined and measurable; the relationships between variables may be non-linear; the direction of causality should be clearly identified and not assumed on a priori grounds. On these bases, theoretically-driven equations on political matters can be validated by empirical tests and can predict observable phenomena.
Abstract:
This study explored the links between having older siblings who get drunk, satisfaction with the parent-adolescent relationship, parental monitoring, and adolescents' risky drinking. Regression models were estimated on a nationally representative sample of 3725 8th to 10th graders in Switzerland (mean age 15.0, SD = 0.93) who indicated having older siblings. Results showed that both parental factors and older siblings' drinking behaviour shape younger siblings' frequency of risky drinking. Parental monitoring showed a linear dose-response relationship, and siblings' influence had an additive effect. There was a non-linear interaction effect between the parent-adolescent relationship and older siblings' drunkenness. The findings suggest that, beyond avoiding an increasingly unsatisfactory relationship with their children, parental monitoring appears to be important in preventing risky drinking by younger children, even when an older sibling drinks in this way. However, a satisfying relationship with parents does not seem to be sufficient to counterbalance older siblings' influence.
Abstract:
We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm, based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.
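For reference, the standard definition of indexability and of the Whittle index that the adaptive-greedy algorithm outputs when the PCL-indexability test is positive (standard notation, assumed here).

```latex
% Whittle's relaxation for a single restless bandit pays a subsidy W each time
% the passive action is taken. Let P(W) be the set of states in which the
% passive action is optimal in the subsidised single-project problem. The
% project is indexable if P(W) grows monotonically from the empty set to the
% whole state space as W increases; the Whittle index of state x is then
\begin{equation*}
W(x) \;=\; \inf\,\{\, W \in \mathbb{R} \;:\; x \in P(W) \,\},
\end{equation*}
% i.e. the smallest subsidy for which resting is optimal in state x.
```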