887 results for network cost models
Abstract:
Background Folate deficiency leads to DNA damage and inadequate repair, caused by a decreased synthesis of thymidylate and purines. We analyzed the relationship between dietary folate intake and the risk of several cancers. Patients and methods The study is based on a network of case-control studies conducted in Italy and Switzerland in 1991-2009. The odds ratios (ORs) for dietary folate intake were estimated by multiple logistic regression models, adjusted for major identified confounding factors. Results For a few cancer sites, we found a significant inverse relation, with ORs for an increment of 100 μg/day of dietary folate of 0.65 for oropharyngeal (1467 cases), 0.58 for esophageal (505 cases), 0.83 for colorectal (2390 cases), 0.72 for pancreatic (326 cases), 0.67 for laryngeal (851 cases) and 0.87 for breast (3034 cases) cancers. The risk estimates were below unity, although not significantly, for cancers of the endometrium (OR = 0.87, 454 cases), ovary (OR = 0.86, 1031 cases), prostate (OR = 0.91, 1468 cases) and kidney (OR = 0.88, 767 cases), and 1.00 for stomach cancer (230 cases). No material heterogeneity was found in strata of sex, age, smoking and alcohol drinking. Conclusions Our data support a real inverse association of dietary folate intake with the risk of several common cancers.
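Odds ratios reported per fixed increment, as in this abstract, combine multiplicatively for larger intakes. A minimal sketch of that arithmetic (the helper name is hypothetical; the 0.83 figure is the abstract's colorectal OR per 100 μg/day):

```python
def odds_ratio_for_increment(or_per_unit, increment, unit=100.0):
    """OR for an arbitrary intake increment, given the OR estimated
    per `unit` (here μg/day of dietary folate)."""
    return or_per_unit ** (increment / unit)

# Colorectal cancer: OR = 0.83 per 100 μg/day, so 200 μg/day implies 0.83**2
print(round(odds_ratio_for_increment(0.83, 200), 3))  # 0.689
```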
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, CDLP is difficult to solve since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that make the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numerical study showing the performance of these cycle-based cuts.
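The tree condition can be checked mechanically if one reads "tree structure" as "the pairwise-overlap graph of the segment consideration sets is acyclic"; that reading is an assumption here, and the paper's formal definition may be more specific. A sketch under that assumption (names are illustrative):

```python
from itertools import combinations

def overlap_graph_is_forest(consideration_sets):
    """True if the graph linking segments whose consideration sets
    intersect contains no cycle, i.e. is a forest. Uses union-find
    with path halving to detect a cycle-closing overlap edge."""
    parent = list(range(len(consideration_sets)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in combinations(range(len(consideration_sets)), 2):
        if consideration_sets[a] & consideration_sets[b]:
            ra, rb = find(a), find(b)
            if ra == rb:  # this overlap edge would close a cycle
                return False
            parent[ra] = rb
    return True

# Three segments overlapping pairwise form a cycle (a gap is possible);
# a chain of overlaps is a tree.
print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 1}]))  # False
print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 4}]))  # True
```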
Abstract:
Diabetes has been associated with the risk of a few cancer sites, though quantification of this association in various populations remains open to discussion. We analyzed the relation between diabetes and the risk of various cancers in an integrated series of case-control studies conducted in Italy and Switzerland between 1991 and 2009. The studies included 1,468 oral and pharyngeal, 505 esophageal, 230 gastric, 2,390 colorectal, 185 liver, 326 pancreatic, 852 laryngeal, 3,034 breast, 607 endometrial, 1,031 ovarian, 1,294 prostate, and 767 renal cell cancer cases and 12,060 hospital controls. The multivariate odds ratios (OR) for subjects with diabetes as compared to those without, adjusted for major identified confounding factors for the cancers considered through logistic regression models, were significantly elevated for cancers of the oral cavity/pharynx (OR = 1.58), esophagus (OR = 2.52), colorectum (OR = 1.23), liver (OR = 3.52), pancreas (OR = 3.32), postmenopausal breast (OR = 1.76), and endometrium (OR = 1.70). For cancers of the oral cavity, esophagus, colorectum, liver, and postmenopausal breast, the excess risk persisted over 10 years since diagnosis of diabetes. Our data confirm and further quantify the association of diabetes with colorectal, liver, pancreatic, postmenopausal breast, and endometrial cancer, and suggest for the first time that diabetes may also increase the risk of oral/pharyngeal and esophageal cancer. [Table: see text]
Abstract:
Using a suitable Hull and White type formula, we develop a methodology to obtain a second order approximation to the implied volatility for very short maturities. Using this approximation we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
Abstract:
This paper analyzes the flow of intermediate inputs across sectors by adopting a network perspective on sectoral interactions. I apply these tools to show how fluctuations in aggregate economic activity can be obtained from independent shocks to individual sectors. First, I characterize the network structure of input trade in the U.S. On the demand side, a typical sector relies on a small number of key inputs, and sectors are homogeneous in this respect. However, in their role as input suppliers, sectors do differ: many specialized input suppliers coexist alongside general purpose sectors functioning as hubs to the economy. I then develop a model of intersectoral linkages that can reproduce these connectivity features. In a standard multisector setup, I use this model to provide analytical expressions linking aggregate volatility to the network structure of input trade. I show that the presence of sectoral hubs, by coupling production decisions across sectors, leads to fluctuations in aggregates.
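The hub versus specialized-supplier distinction can be illustrated by counting how many other sectors each sector supplies (its out-degree in the input-use network). A toy sketch; the sector names, data, and threshold are all hypothetical, not taken from the paper:

```python
# Hypothetical input-use network: supplier -> set of buying sectors.
input_use = {
    "chemicals": {"plastics", "pharma", "agriculture", "textiles"},
    "wholesale": {"retail", "food", "construction", "machinery"},
    "leather":   {"footwear"},  # a specialized input supplier
}

def hubs(network, min_buyers=3):
    """Sectors supplying at least `min_buyers` other sectors: the
    general-purpose 'hubs' in this illustrative classification."""
    return sorted(s for s, buyers in network.items() if len(buyers) >= min_buyers)

print(hubs(input_use))  # ['chemicals', 'wholesale']
```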
Abstract:
Iowa Department of Transportation Fiscal Year 2007 Report of Savings by Using Video Conferencing Through the Iowa Communications Network to the Iowa General Assembly, pursuant to Chapter II 84, Acts and Joint Resolutions Enacted at the 1994 Regular Session of the 75th General Assembly of the State of Iowa, Code section 8D.10, Report of Savings by State Agencies. Iowa Code section 8D.10 requires that certain state agencies prepare an annual report to the General Assembly certifying the identified savings associated with that state agency’s use of the Iowa Communications Network (ICN). This report covers estimated cost savings related to video conferencing via ICN for the Iowa Department of Transportation (DOT). In FY 2007, the DOT conducted two sessions utilizing ICN’s video conferencing system. Both sessions included DOT employees in Ames with non-DOT participants at remote ICN sites. Because cost savings are calculated on the basis of DOT staff savings, and only the public participants attended from the ICN sites, no cost savings were realized from these conferences.
Abstract:
We offer a formulation that locates hubs on a network in a competitive environment; that is, customer capture is sought, which happens whenever the location of a new hub results in a reduction of the current cost (time, distance) needed by the traffic that goes from the specified origin to the specified destination. The formulation presented here reduces the number of variables and constraints as compared to existing covering models. This model is suited for both air passenger and cargo transportation. In this model, each origin-destination flow can go through either one or two hubs, and each demand point can be assigned to more than one hub, depending on the different destinations of its traffic. Links ("spokes") have no capacity limit. Computational experience is provided.
Abstract:
BACKGROUND: Low-molecular-weight heparin (LMWH) appears to be safe and effective for treating pulmonary embolism (PE), but its cost-effectiveness has not been assessed. METHODS: We built a Markov state-transition model to evaluate the medical and economic outcomes of a 6-day course of fixed-dose LMWH or adjusted-dose unfractionated heparin (UFH) in a hypothetical cohort of 60-year-old patients with acute submassive PE. Probabilities for clinical outcomes were obtained from a meta-analysis of clinical trials. Cost estimates were derived from Medicare reimbursement data and other sources. The base-case analysis used an inpatient setting, whereas secondary analyses examined early discharge and outpatient treatment with LMWH. Using a societal perspective, strategies were compared based on lifetime costs, quality-adjusted life-years (QALYs), and the incremental cost-effectiveness ratio. RESULTS: Inpatient treatment costs were higher for LMWH than for UFH ($13,001 vs $12,780), but LMWH yielded a greater number of QALYs than did UFH (7.677 vs 7.493 QALYs). The incremental cost of $221 and the corresponding incremental effectiveness of 0.184 QALYs resulted in an incremental cost-effectiveness ratio of $1,209/QALY. Our results were highly robust in sensitivity analyses. LMWH became cost-saving if the daily pharmacy costs for LMWH were <$51, if ≥8% of patients were eligible for early discharge, or if ≥5% of patients could be treated entirely as outpatients. CONCLUSION: For inpatient treatment of PE, the use of LMWH is cost-effective compared to UFH. Early discharge or outpatient treatment in suitable patients with PE would lead to substantial cost savings.
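The reported ratio follows from the standard ICER definition; a minimal sketch using the abstract's figures. Note the rounded inputs reproduce the published $1,209/QALY only approximately (they give about $1,201/QALY), presumably because the study used unrounded values:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Figures from the abstract: LMWH vs UFH, inpatient base case.
ratio = icer(13001, 12780, 7.677, 7.493)  # $221 / 0.184 QALYs
print(round(ratio))  # 1201
```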
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
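The linear versus power comparison can be sketched as follows, assuming the power model enters duration as T^r with the abstract's optimal r = 0.65; the paper's exact functional form may differ, so treat this as an illustration rather than the authors' specification:

```python
def qalys_linear(utility, years):
    """Linear QALY model: value is utility times duration."""
    return utility * years

def qalys_power(utility, years, r=0.65):
    """Power QALY model sketch: duration enters as years**r, so later
    years add progressively less value than under the linear model."""
    return utility * years ** r

u, t = 0.8, 10.0
print(qalys_linear(u, t))           # 8.0
print(round(qalys_power(u, t), 3))  # 0.8 * 10**0.65, well below 8.0
```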
Abstract:
The principal aim of this paper is to estimate a stochastic frontier cost function and an inefficiency effects model in the analysis of the primary health care services purchased by the public authority and supplied by 180 providers in 1996 in Catalonia. The evidence from our sample does not support the premise that contracting out has helped improve purchasing cost efficiency in primary care. Inefficient purchasing cost was observed in the component of this purchasing cost explicitly included in the contract between purchaser and provider. There are no observable incentives for the contracted-out primary health care teams to minimise prescription costs, which are not explicitly included in the present contracting system.
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, or the "optimal" solution V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be applied in practice to make accept/deny decisions for booking requests. Recently Adelman [1] and Topaloglu [18] have proposed alternate upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints, which project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
Abstract:
Iowa Code § 8D.10 requires that certain state agencies prepare an annual report to the General Assembly certifying the identified savings associated with that state agency’s use of the Iowa Communications Network (ICN). This report covers estimated cost savings related to video conferencing via ICN for the Iowa Department of Transportation (DOT). In FY 2008, the DOT did not conduct any sessions utilizing ICN’s video conferencing system. Therefore, no cost savings were calculated for this report.
Abstract:
BACKGROUND: Consumption of red meat has been related to increased risk of several cancers. Cooking methods could modify the magnitude of this association, as production of chemicals depends on the temperature and duration of cooking. METHODS: We analyzed data from a network of case-control studies conducted in Italy and Switzerland between 1991 and 2009. The studies included 1465 oral and pharyngeal, 198 nasopharyngeal, 851 laryngeal, 505 esophageal, 230 stomach, 1463 colon, 927 rectal, 326 pancreatic, 3034 breast, 454 endometrial, 1031 ovarian, 1294 prostate and 767 renal cancer cases. Controls included 11 656 patients admitted for acute, non-neoplastic conditions. Odds ratios (ORs) and confidence intervals (CIs) were estimated by multiple logistic regression models, adjusted for known confounding factors. RESULTS: Daily intake of red meat was significantly associated with the risk of cancer of the oral cavity and pharynx (OR for increase of 50 g/day = 1.38; 95% CI: 1.26-1.52), nasopharynx (OR = 1.29; 95% CI: 1.04-1.60), larynx (OR = 1.46; 95% CI: 1.30-1.64), esophagus (OR = 1.46; 95% CI: 1.23-1.72), colon (OR = 1.17; 95% CI: 1.08-1.26), rectum (OR = 1.22; 95% CI: 1.11-1.33), pancreas (OR = 1.51; 95% CI: 1.25-1.82), breast (OR = 1.12; 95% CI: 1.04-1.19), endometrium (OR = 1.30; 95% CI: 1.10-1.55) and ovary (OR = 1.29; 95% CI: 1.16-1.43). Fried meat was associated with a higher risk of cancer of the oral cavity and pharynx (OR = 2.80; 95% CI: 2.02-3.89) and esophagus (OR = 4.52; 95% CI: 2.50-8.18). Risk of prostate cancer increased for meat cooked by roasting/grilling (OR = 1.31; 95% CI: 1.12-1.54). No heterogeneity according to cooking methods emerged for other cancers. Nonetheless, significant associations with boiled/stewed meat also emerged for cancer of the nasopharynx (OR = 1.97; 95% CI: 1.30-3.00) and stomach (OR = 1.86; 95% CI: 1.20-2.87).
CONCLUSIONS: Our analysis confirmed red meat consumption as a risk factor for several cancer sites, with a limited impact of cooking methods. These findings thus call for limiting red meat consumption in Western populations.
Abstract:
Aim The imperfect detection of species may lead to erroneous conclusions about species-environment relationships. Accuracy in species detection usually requires temporal replication at sampling sites, a time-consuming and costly monitoring scheme. Here, we applied a lower-cost alternative based on a double-sampling approach to incorporate the reliability of species detection into regression-based species distribution modelling. Location Doñana National Park (south-western Spain). Methods Using species-specific monthly detection probabilities, we estimated the detection reliability as the probability of having detected the species given the species-specific survey time. Such reliability estimates were used to account explicitly for data uncertainty by weighting each absence. We illustrated how this novel framework can be used to evaluate four competing hypotheses as to what constitutes the primary environmental control of amphibian distribution: breeding habitat, aestivating habitat, spatial distribution of surrounding habitats and/or major ecosystem zonation. The study was conducted on six pond-breeding amphibian species during a 4-year period. Results Non-detections should not be considered equivalent to real absences, as their reliability varied considerably. The occurrence of Hyla meridionalis and Triturus pygmaeus was related to a particular major ecosystem of the study area, where suitable habitat for these species seemed to be widely available. Characteristics of the breeding habitat (area and hydroperiod) were of high importance for the occurrence of Pelobates cultripes and Pleurodeles waltl. Terrestrial characteristics were the most important predictors of the occurrence of Discoglossus galganoi and Lissotriton boscai, along with the spatial distribution of breeding habitats for the last species. Main conclusions We did not find a single best-supported hypothesis valid for all species, which stresses the importance of multiscale and multifactor approaches.
More importantly, this study shows that estimating the reliability of non-detection records, an exercise that had been previously seen as a naïve goal in species distribution modelling, is feasible and could be promoted in future studies, at least in comparable systems.
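The reliability weighting of non-detections can be sketched as follows, under the assumption that monthly survey visits are independent so the chance of at least one detection is 1 − (1 − p)^t; the helper name and example probabilities are hypothetical, not from the study:

```python
def absence_weight(monthly_detect_prob, months_surveyed):
    """Probability the species would have been detected at least once if
    present, usable as a weight that down-weights unreliable
    non-detection records in a distribution model."""
    return 1.0 - (1.0 - monthly_detect_prob) ** months_surveyed

# A conspicuous species surveyed for 6 months: its absence is reliable.
print(round(absence_weight(0.6, 6), 3))  # 0.996
# An elusive species surveyed for 2 months: its absence says little.
print(round(absence_weight(0.1, 2), 3))  # 0.19
```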