59 results for Linear programming models
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of a parameter space that will automatically contain points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
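A minimal sketch of the stochastic search idea: draw a random finite set of candidate parameters, score each by its residual sum of squares, and keep the best. The sampling box, candidate count, and data below are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ beta_true + noise
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Generate a random finite subset of the parameter space; with enough
# draws, some candidates land very close to the true parameter.
n_candidates = 10_000
candidates = rng.uniform(-5.0, 5.0, size=(n_candidates, p))  # assumed search box

# Score each candidate by its residual sum of squares and keep the best.
rss = ((y[:, None] - X @ candidates.T) ** 2).sum(axis=0)
beta_hat = candidates[np.argmin(rss)]
print("best candidate:", beta_hat)
```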
Abstract:
This paper deals with “The Enchanted Journey,” a daily event tour booked by Bollywood-film fans. During the tour, the participants visit original sites of famous Bollywood films at various locations in Switzerland; moreover, the tour includes stops for lunch and shopping. Each day, up to five buses operate the tour. For operational reasons, however, two or more buses cannot stay at the same location simultaneously. Further operative constraints include time windows for all activities and precedence constraints between some activities. The planning problem is how to compute a feasible schedule for each bus. We implement a two-step hierarchical approach: in the first step, we minimize the total waiting time; in the second step, we minimize the total travel time of all buses. We present a basic formulation of this problem as a mixed-integer linear program. We enhance this basic formulation with symmetry-breaking constraints, which reduce the search space without loss of generality. We report on computational results obtained with the Gurobi solver. Our numerical results show that all relevant problem instances can be solved using the basic formulation within reasonable CPU time, and that the symmetry-breaking constraints reduce that CPU time considerably.
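The symmetry-breaking idea can be illustrated with a toy model: when buses are interchangeable, ordering their departure times removes permuted duplicates of the same schedule from the search space. The sketch below uses the open-source PuLP modeller (the paper used Gurobi); the headway value, variable names, and objective are assumptions.

```python
import pulp

n_buses = 5
prob = pulp.LpProblem("bus_tour_sketch", pulp.LpMinimize)

# Departure time (minutes after the earliest allowed start) of each bus.
start = [pulp.LpVariable(f"start_{b}", lowBound=0, upBound=480)
         for b in range(n_buses)]

# Two buses must not reach the first location together: enforce a minimum
# headway (assumed 15 minutes) between consecutive departures.
headway = 15
for b in range(n_buses - 1):
    prob += start[b] + headway <= start[b + 1]

# Because the buses are identical, this ordering also breaks the symmetry:
# every permutation of bus indices maps to the same ordered schedule, so the
# solver explores each distinct schedule only once.
prob += pulp.lpSum(start)  # toy objective: depart as early as possible
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(s) for s in start])
```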
Abstract:
The counterfactual decomposition technique popularized by Blinder (1973, Journal of Human Resources, 436–455) and Oaxaca (1973, International Economic Review, 693–709) is widely used to study mean outcome differences between groups. For example, the technique is often used to analyze wage gaps by sex or race. This article summarizes the technique and addresses several complications, such as the identification of effects of categorical predictors in the detailed decomposition or the estimation of standard errors. A new command called oaxaca is introduced, and examples illustrating its usage are given.
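A minimal numpy sketch of the twofold decomposition on simulated data (the Stata command oaxaca itself is not shown). Group B's coefficients serve as the reference; since OLS with an intercept passes through the group means, the explained and unexplained parts sum exactly to the mean gap.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    # OLS coefficients via least squares (design X includes an intercept column).
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulated wage data for two groups with different endowments and returns.
n = 5_000
x_a = rng.normal(12.0, 2.0, n)                    # e.g. years of schooling, group A
x_b = rng.normal(11.0, 2.0, n)                    # group B
y_a = 1.0 + 0.10 * x_a + rng.normal(0, 0.3, n)    # log wages, group A
y_b = 0.8 + 0.08 * x_b + rng.normal(0, 0.3, n)    # log wages, group B

Xa = np.column_stack([np.ones(n), x_a])
Xb = np.column_stack([np.ones(n), x_b])
beta_a, beta_b = ols(Xa, y_a), ols(Xb, y_b)
xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)

gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ beta_b       # endowment (characteristics) part
unexplained = xbar_a @ (beta_a - beta_b)     # coefficient part
print(f"gap={gap:.4f} explained={explained:.4f} unexplained={unexplained:.4f}")
```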
Abstract:
Index tracking has become one of the most common strategies in asset management. The index-tracking problem consists of constructing a portfolio that replicates the future performance of an index by including only a subset of the index constituents in the portfolio. Finding the most representative subset is challenging when the number of stocks in the index is large. We introduce a new three-stage approach that at first identifies promising subsets by employing data-mining techniques, then determines the stock weights in the subsets using mixed-binary linear programming, and finally evaluates the subsets based on cross validation. The best subset is returned as the tracking portfolio. Our approach outperforms state-of-the-art methods in terms of out-of-sample performance and running times.
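A hedged sketch of the second stage as a mixed-binary linear program, using PuLP on simulated returns: selection binaries cap the number of constituents, and auxiliary variables linearize the absolute tracking deviation. The cardinality cap, bounds, and mean-absolute-deviation objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import pulp

rng = np.random.default_rng(2)
T, N, K = 60, 10, 4                      # periods, candidate stocks, cardinality cap
R = rng.normal(0.0, 0.02, size=(T, N))   # simulated stock returns
r_index = R.mean(axis=1)                 # toy index: equally weighted average

prob = pulp.LpProblem("index_tracking", pulp.LpMinimize)
w = [pulp.LpVariable(f"w_{i}", lowBound=0, upBound=1) for i in range(N)]
y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(N)]
d = [pulp.LpVariable(f"d_{t}", lowBound=0) for t in range(T)]  # |deviation| in t

prob += pulp.lpSum(d)                    # total absolute tracking deviation
prob += pulp.lpSum(w) == 1               # fully invested
prob += pulp.lpSum(y) <= K               # at most K stocks in the portfolio
for i in range(N):
    prob += w[i] <= y[i]                 # a stock gets weight only if selected
for t in range(T):
    dev = pulp.lpSum(R[t, i] * w[i] for i in range(N)) - r_index[t]
    prob += d[t] >= dev                  # d_t >= |portfolio return - index return|
    prob += d[t] >= -dev

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f"w_{i}": pulp.value(w[i]) for i in range(N) if pulp.value(y[i]) > 0.5})
```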
Abstract:
BACKGROUND: Despite recent algorithmic and conceptual progress, the stoichiometric network analysis of large metabolic models remains a computationally challenging problem. RESULTS: SNA is an interactive, high-performance toolbox for analysing the possible steady-state behaviour of metabolic networks by computing the generating and elementary vectors of their flux and conversion cones. It also supports analysing the steady states by linear programming. The toolbox is implemented mainly in Mathematica and returns numerically exact results. It is available under an open source license from: http://bioinformatics.org/project/?group_id=546. CONCLUSION: Thanks to its performance and modular design, SNA is demonstrably useful in analysing genome-scale metabolic networks. Further, the integration into Mathematica provides a very flexible environment for the subsequent analysis and interpretation of the results.
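A minimal sketch of steady-state analysis by linear programming, of the kind SNA supports, using scipy rather than the Mathematica toolbox itself: maximize a flux subject to the steady-state condition S v = 0 and capacity bounds. The tiny stoichiometric matrix is illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  -> A (v0),  A -> B (v1),  B -> (v2)
# Rows = metabolites (A, B), columns = reactions.
S = np.array([
    [1.0, -1.0, 0.0],   # A: produced by v0, consumed by v1
    [0.0, 1.0, -1.0],   # B: produced by v1, consumed by v2
])
bounds = [(0, 10)] * 3          # irreversible reactions with capacity 10

# Maximise the output flux v2 subject to S v = 0 (linprog minimises,
# so the objective is negated).
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)
```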
Abstract:
In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand of packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
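The general-precedence idea can be sketched on a single unit: one binary variable orders each job pair, and big-M constraints enforce sequence-dependent changeover times on continuous start-time variables. The PuLP model below is an illustrative fragment under assumed data, not the paper's full make-and-pack formulation.

```python
import pulp

jobs = [0, 1, 2]
proc = {0: 3.0, 1: 2.0, 2: 4.0}                     # processing times (assumed)
chg = {(i, j): 1.0 if i < j else 2.0                # changeover time i -> j (assumed)
       for i in jobs for j in jobs if i != j}
M = sum(proc.values()) + sum(chg.values())          # safe big-M upper bound

prob = pulp.LpProblem("general_precedence", pulp.LpMinimize)
s = {i: pulp.LpVariable(f"s_{i}", lowBound=0) for i in jobs}      # start times
cmax = pulp.LpVariable("makespan", lowBound=0)
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")          # i before j?
     for i in jobs for j in jobs if i < j}

prob += cmax                                        # minimise the makespan
for i in jobs:
    prob += cmax >= s[i] + proc[i]
for (i, j), xij in x.items():
    # If i precedes j (x=1), j starts after i plus the changeover; otherwise
    # the mirrored constraint with changeover j -> i is active.
    prob += s[j] >= s[i] + proc[i] + chg[(i, j)] - M * (1 - xij)
    prob += s[i] >= s[j] + proc[j] + chg[(j, i)] - M * xij

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: pulp.value(s[i]) for i in jobs}, "makespan:", pulp.value(cmax))
```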
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially across the different approaches to risk estimation. In some cases the ratio ranged from below one to above one, which translates into inconsistent results on which technique carries the potentially higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the different approaches investigated. Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
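The first step (organ-specific linear risk models applied to mean organ doses) reduces to a weighted sum of doses. The sketch below shows only that arithmetic; the coefficients and doses are placeholders, NOT actual ICRP or BEIR VII values.

```python
# Mean organ doses from a whole-body dose distribution (hypothetical, in Gy).
mean_dose_gy = {"lung": 1.8, "contralateral_breast": 0.9, "thyroid": 0.1}

# Linear risk coefficients per Gy (placeholders, not real ICRP/BEIR VII values).
risk_per_gy = {"lung": 0.002, "contralateral_breast": 0.003, "thyroid": 0.001}

# Linear model: total excess risk = sum over organs of dose * coefficient.
total_risk = sum(mean_dose_gy[o] * risk_per_gy[o] for o in mean_dose_gy)
print(f"estimated excess lifetime risk: {100 * total_risk:.2f}%")
```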
Abstract:
Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course composition process are discussed. Course composition needs a clear understanding of the dependencies between the learning objects. Therefore, we discuss the metadata for object relationships proposed in different standardization projects, especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata we construct adjacency matrices and graphs. We show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models, which can be applied to support the instructor in formulating his specifications for selecting objects or which allow a computer agent to automatically select learning objects. Such decision models could also be helpful for a learner navigating through a library of learning objects. We also sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
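A minimal sketch of a Gozinto-type computation on a prerequisite graph: with A[i, j] = 1 when object i is a direct prerequisite of object j, the series I + A + A^2 + ... (equivalently the inverse of I - A for an acyclic graph) counts all prerequisite paths, so its nonzero off-diagonal entries mark direct and indirect prerequisites. The four-object example is illustrative.

```python
import numpy as np

A = np.array([            # edges: 0 -> 1 -> 3 and 0 -> 2 -> 3
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
n = A.shape[0]

# For an acyclic graph A is nilpotent, so inv(I - A) = I + A + A^2 + ...
total = np.linalg.inv(np.eye(n) - A)
reachable = (total - np.eye(n)) > 0.5   # (in)direct prerequisite relation

for j in range(n):
    print(f"object {j} requires:", np.flatnonzero(reachable[:, j]).tolist())
```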
Abstract:
OBJECTIVE: To compare and evaluate longitudinally the dental arch relationships from 4.5 to 13.5 years of age with the Bauru-BCLP yardstick in a large sample of patients with bilateral cleft lip and palate (BCLP). DESIGN: Retrospective longitudinal intercenter outcome study. PATIENTS: Dental casts of 204 consecutive patients with complete BCLP were evaluated at 6, 9, and 12 years of age. All models were identified only by random identification numbers. SETTING: Three cleft palate centers with different treatment protocols. MAIN OUTCOME MEASURES: Dental arch relationships were categorized with the Bauru-BCLP yardstick. Increments for each interval (from 6 to 9 years, 6 to 12 years, and 9 to 12 years) were analyzed by logistic and linear regression models. RESULTS: There were no significant differences in outcome measures between the centers at age 9 or at age 12. At age 6, center B showed significantly better results (p=.027), but this difference diminished as the yardstick score for this group increased over time (linear regression analysis); the difference from the reference category (center C, boys) was 10.4% (p=.041) for the 6-to-12-year interval and 12.9% (p=.009) for the 9-to-12-year interval. CONCLUSIONS: Despite different treatment protocols, dental arch relationships in the three centers were comparable in final scores at ages 9 and 12 years. Delaying hard palate closure and employing infant orthopedics did not appear to be advantageous in the long run. Premaxillary osteotomy as employed in center B appeared to be associated with less favorable development of the dental arch relationship between 9 and 12 years.
Abstract:
Purpose: To compare changes in the largest cross-sectional area (CSA) of the median nerve in wrists undergoing surgical decompression with changes in wrists undergoing nonsurgical treatment of carpal tunnel syndrome (CTS). Methods: This study was a prospective cohort study in 55 consecutive patients with 78 wrists with established CTS, including 60 wrists treated with surgical decompression and 18 wrists with nonsurgical treatment. A sonographic examination was scheduled before and 4 months after initiation of treatment. We compared changes in CSA of the median nerve between wrists with surgical treatment and wrists with nonsurgical treatment using linear regression models. Results: Decreases in CSA of the median nerve were more pronounced in wrists with CTS release than in wrists undergoing nonsurgical treatment (difference in means, 1.0 mm²; 95% confidence interval, 0.3–1.8 mm²). Results were robust to the adjustment for age, gender, and neurological severity at baseline. Among wrists with CTS release, those with postoperative CSA of 10 mm² or less tended to have better clinical outcomes than those with postoperative CSA of greater than 10 mm² (p=.055). Postoperative sonographic workup in the 3 patients with unfavorable outcome or recurrence identified likely causes for treatment failure in 2 patients. Conclusions: In this observational study, surgical decompression was associated with a greater decrease in median nerve CSA than was nonsurgical treatment. Smaller postoperative CSAs may be associated with better clinical outcomes. Additional randomized trials are necessary to determine the optimal treatment strategy in different subgroups of patients with CTS. Type of study/level of evidence: Therapeutic III.
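The adjusted comparison can be sketched as an ordinary least-squares regression of the CSA change on a treatment indicator plus baseline covariates, using statsmodels. The simulated data and variable names below are assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 78  # wrists
df = pd.DataFrame({
    "surgery": rng.integers(0, 2, n),          # 1 = surgical decompression
    "age": rng.normal(55, 12, n),
    "severity": rng.normal(0, 1, n),           # baseline neurological severity
})
# Simulate a larger CSA decrease (more negative change) under surgery.
df["csa_change"] = -1.0 * df["surgery"] + 0.02 * df["age"] + rng.normal(0, 1, n)

# The coefficient on `surgery` estimates the between-group difference in
# means, adjusted for age and baseline severity.
fit = smf.ols("csa_change ~ surgery + age + severity", data=df).fit()
print(fit.params["surgery"], fit.conf_int().loc["surgery"].tolist())
```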
Abstract:
Objective: To compare clinical outcomes after laparoscopic cholecystectomy (LC) for acute cholecystitis performed at various time-points after hospital admission. Background: Symptomatic gallstones represent an important public health problem, with LC the treatment of choice. LC is increasingly offered for acute cholecystitis; however, the optimal time-point for LC in this setting remains a matter of debate. Methods: Analysis was based on the prospective database of the Swiss Association of Laparoscopic and Thoracoscopic Surgery and included patients undergoing emergency LC for acute cholecystitis between 1995 and 2006, grouped according to the time-point of LC since hospital admission (admission day (d0), d1, d2, d3, d4/5, d ≥6). Linear and generalized linear regression models assessed the effect of timing of LC on intra- or postoperative complications, conversion and reoperation rates, and length of postoperative hospital stay. Results: Of 4113 patients, 52.8% were female, and the median age was 59.8 years. Delaying LC resulted in significantly higher conversion rates (from 11.9% at d0 to 27.9% at d ≥6 days after admission, P < 0.001), surgical postoperative complications (5.7% to 13%, P < 0.001) and reoperation rates (0.9% to 3%, P = 0.007), with a significantly longer postoperative hospital stay (P < 0.001). Conclusions: Delaying LC for acute cholecystitis has no advantages, resulting in significantly increased conversion and reoperation rates, more postoperative complications, and a longer postoperative hospital stay. This investigation—one of the largest in the literature—provides compelling evidence that acute cholecystitis merits surgery within 48 hours of hospital admission if the impact on the patient and the health care system is to be minimized.
Abstract:
A growing body of literature addresses possible health effects of mobile phone use in children and adolescents by relying on the study participants' retrospective reconstruction of mobile phone use. In this study, we used data from the international case-control study CEFALO to compare self-reported with objectively operator-recorded mobile phone use. The aim of the study was to assess predictors of the level of mobile phone use as well as factors that are associated with overestimating one's own mobile phone use. For the cumulative number and duration of calls, as well as for the time since first subscription, we calculated the ratio of self-reported to operator-recorded mobile phone use. We used multiple linear regression models to assess possible predictors of the average number and duration of calls per day, and logistic regression models to assess possible predictors of overestimation. The cumulative number and duration of calls as well as the time since first subscription of mobile phones were overestimated on average by the study participants. The likelihood of overestimating the number and duration of calls was not significantly different for controls compared to cases (OR=1.1, 95%-CI: 0.5 to 2.5 and OR=1.9, 95%-CI: 0.85 to 4.3, respectively). However, the likelihood of overestimating was associated with other health-related factors such as age and sex. As a consequence, such factors act as confounders in studies relying solely on self-reported mobile phone use and have to be considered in the analysis.
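The logistic model for overestimation can be sketched on simulated data: regress a binary over-reporting indicator on case status, age, and sex, then exponentiate the coefficients to read off odds ratios. Variable names and values below are illustrative assumptions, not CEFALO records.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1_000
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),           # 1 = case, 0 = control
    "age": rng.integers(10, 25, n),
    "male": rng.integers(0, 2, n),
    "reported": rng.lognormal(1.2, 0.6, n),  # self-reported calls per day
    "recorded": rng.lognormal(1.0, 0.6, n),  # operator-recorded calls per day
})
# Overestimation: ratio of self-reported to operator-recorded use above 1.
df["overestimate"] = (df["reported"] / df["recorded"] > 1).astype(int)

fit = smf.logit("overestimate ~ case + age + male", data=df).fit(disp=False)
print(np.exp(fit.params))                    # odds ratios per predictor
```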