989 results for Sample average approximation
Abstract:
Call centers are key elements of almost any large organization. The workforce management problem has received much attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider a staffing problem for call centers subject to probabilistic constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and define a sample average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the special case in which all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample average problem. Given an initial staffing level, we increase the number of agents for the periods in which the constraints are violated, and decrease the number of agents for the periods in which the constraints remain satisfied after the reduction. Numerical experiments are conducted on several low-occupancy call center models, in which the algorithms yield good solutions, i.e., most of the probabilistic constraints are satisfied, and the staffing cannot be reduced in any given period without introducing constraint violations. An advantage of these algorithms over other methods is their ease of implementation.
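The increase-then-decrease staffing heuristic can be sketched in a few lines. Everything below is a hypothetical toy: `qos_met` stands in for the thesis's call-center simulation, and the scenario loads are invented; only the sample-average feasibility check and the adjustment logic follow the abstract.

```python
import random

def qos_met(staff, scenario_load):
    # Hypothetical QoS indicator: the service-level target is met when
    # staffing covers the scenario's load (stand-in for a simulation run).
    return staff >= scenario_load

def saa_feasible(staff, scenarios, delta=0.05):
    """Sample average approximation of the chance constraint
    P(QoS met) >= 1 - delta."""
    hits = sum(qos_met(staff, s) for s in scenarios)
    return hits / len(scenarios) >= 1 - delta

def adjust_staffing(staff, scenarios, delta=0.05):
    """Increase staff until the SAA chance constraint holds, then
    decrease while it still holds after the reduction."""
    while not saa_feasible(staff, scenarios, delta):
        staff += 1
    while staff > 0 and saa_feasible(staff - 1, scenarios, delta):
        staff -= 1
    return staff

random.seed(7)
scenarios = [random.randint(5, 15) for _ in range(1000)]
staff = adjust_staffing(0, scenarios, delta=0.05)
```

The returned staffing level is feasible for the sampled scenarios, and removing one more agent would violate the chance constraint, mirroring the stopping rule described in the abstract.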
Abstract:
We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than “standard” confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and stochastically constrained problems. We conclude with a small simulation study illustrating the numerical behavior of the proposed bounds.
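A minimal sketch of the SAA bounding idea, on a toy problem of my choosing (minimizing E[(x - xi)^2] with standard Gaussian xi, so the true optimal value is Var xi = 1). The replication counts are arbitrary, and the paper's actual estimators are more refined:

```python
import random
import statistics

def saa_opt_value(sample):
    # For F(x, xi) = (x - xi)^2 the SAA minimizer is the sample mean,
    # and the SAA optimal value is the (biased) sample variance.
    m = statistics.fmean(sample)
    return m, statistics.fmean((xi - m) ** 2 for xi in sample)

random.seed(1)
draw = lambda n: [random.gauss(0.0, 1.0) for _ in range(n)]

# Lower bound: average SAA optimal value over M replications
# (E[SAA optimum] <= true optimal value).
M, N = 50, 200
reps = [saa_opt_value(draw(N)) for _ in range(M)]
lower = statistics.fmean(v for _, v in reps)

# Upper bound: evaluate one candidate solution on a fresh, larger sample
# (any fixed x gives E[F(x, xi)] >= true optimal value).
x_hat = reps[0][0]
fresh = draw(5000)
upper = statistics.fmean((xi - x_hat) ** 2 for xi in fresh)
```

Both estimates land near the true optimal value of 1; in expectation `lower` sits below it and `upper` above it, which is the bracketing property the paper builds on.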
Abstract:
I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440-464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221-230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov-Smirnov test for discrete data.
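mgof itself is a Stata command; as a language-neutral illustration, here is a sketch of the Monte Carlo exact test it describes: the p-value is the share of simulated multinomial tables whose Pearson X² is at least the observed one. The counts and replication number are invented:

```python
import random

def pearson_x2(observed, expected):
    """Pearson's X2 statistic for a one-way table."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def mc_exact_gof(observed, probs, reps=2000, seed=42):
    """Monte Carlo p-value for a multinomial goodness-of-fit test."""
    rng = random.Random(seed)
    n = sum(observed)
    expected = [n * p for p in probs]
    x2_obs = pearson_x2(observed, expected)
    hits = 0
    for _ in range(reps):
        counts = [0] * len(probs)
        for _ in range(n):          # draw one multinomial table
            u, acc = rng.random(), 0.0
            for j, p in enumerate(probs):
                acc += p
                if u < acc:
                    counts[j] += 1
                    break
            else:                   # guard against float round-off
                counts[-1] += 1
        if pearson_x2(counts, expected) >= x2_obs:
            hits += 1
    return (hits + 1) / (reps + 1)  # add-one p-value, never exactly zero

p_uniform = mc_exact_gof([18, 22, 20], [1/3, 1/3, 1/3])
p_skewed = mc_exact_gof([40, 15, 5], [1/3, 1/3, 1/3])
```

The near-uniform table yields a large p-value while the skewed table is firmly rejected, the qualitative behavior one expects from the exact test.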
Abstract:
Linear alkylbenzenes (LABs), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels of them present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify, and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by the analysis of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of the commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu.
The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a 'soft' ionization technique, chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percent of the total sample weight, as well as in mg/g. The percent of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full-scan and single-ion-monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method that was developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes, and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study. However, further investigation of these compounds will be the subject of a future study.
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
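For context only, here is the textbook normal-approximation sample-size formula for a two-sided comparison of two proportions; this is not the compliance-adjusted formula derived in the paper, and the event rates below are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Standard per-arm sample size for a two-sided two-proportion test
    (normal approximation; not the paper's compliance-adjusted formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power term
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical control rate 60% vs. treatment rate 75%.
n = n_per_arm(0.6, 0.75)
```

Raising the requested power increases the required sample size, as expected.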
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e., sample average) score over an archive of forecast-observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e., the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
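The empirical (sample-average) decomposition mentioned above can be illustrated with the Brier score. This is the standard Murphy decomposition, with bins taken at exact forecast values; the eight forecast-outcome pairs are invented:

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition of the empirical Brier score:
    score = reliability - resolution + uncertainty."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)            # group outcomes by forecast value
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    reliability = sum(len(os) * (f - sum(os) / len(os)) ** 2
                      for f, os in bins.items()) / n
    resolution = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
                     for os in bins.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

forecasts = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8, 0.8]
outcomes = [0, 0, 1, 1, 1, 1, 0, 1]
rel, res, unc = brier_decomposition(forecasts, outcomes)
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

With exact-value binning the identity `brier == rel - res + unc` holds to machine precision; the abstract's point is that these empirical `rel` and `res` terms are biased estimators of their theoretical counterparts.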
Abstract:
Background: The overall objectives of this dissertation are to examine the geographic variation and socio-demographic disparities (by age, race, and gender) in the utilization and survival of newly FDA-approved chemotherapy agents (Oxaliplatin-containing regimens), as well as to determine the cost-effectiveness of Oxaliplatin, in a large nationwide and population-based cohort of Medicare patients with resected stage-III colon cancer. Methods: A retrospective cohort of 7,654 Medicare patients was identified from the Surveillance, Epidemiology and End Results-Medicare linked database. Multiple logistic regression was performed to examine the relationship between receipt of Oxaliplatin-containing chemotherapy and geographic region while adjusting for other patient characteristics. A Cox proportional hazards model was used to estimate the effect of Oxaliplatin-containing chemotherapy on the survival variation across regions using 2004-2005 data. Propensity score adjustments were also made to control for potential bias related to non-random allocation of the treatment group. We used the Kaplan-Meier sample average (KMSA) estimator to calculate the cost of disease from cancer-specific surgery to death, loss to follow-up, or censorship. Results: Only 51% of the stage-III patients received adjuvant chemotherapy within three to six months of colon-cancer-specific surgery. Patients in rural regions were approximately 30% less likely to receive Oxaliplatin chemotherapy than those residing in a big metro region (OR=0.69, p=0.033). The hazard ratio for patients residing in a metro region was comparable to that for those residing in a big metro region (HR: 1.05, 95% CI: 0.49-2.28). Patients who received Oxaliplatin chemotherapy were 33% less likely to die than those who received 5-FU-only chemotherapy (adjusted HR=0.67, 95% CI: 0.41-1.11). KMSA-adjusted mean payments were almost 2.5 times higher in the Oxaliplatin-containing group than in the 5-FU-only group ($45,378 versus $17,856).
When compared to the no-chemotherapy group, the ICER of the 5-FU-based regimen was $12,767 per LYG, and the ICER of Oxaliplatin chemotherapy was $60,863 per LYG. Oxaliplatin was found to be economically dominated by 5-FU-only chemotherapy in this study population. Conclusion: Chemotherapy use varies across geographic regions. We also observed considerable survival differences across geographic regions; the difference remained even after adjusting for socio-demographic characteristics. The cost-effectiveness of Oxaliplatin in Medicare patients may be over-estimated in clinical trials. Our study found 5-FU-only chemotherapy cost-effective in adjuvant settings in patients with stage-III colon cancer.
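The ICER arithmetic underlying such comparisons is simple: incremental cost divided by incremental effect. In the sketch below, only the two KMSA-adjusted mean payments ($17,856 and $45,378) come from the abstract; the no-chemotherapy cost and all life-year figures are hypothetical, so the resulting ratios do not reproduce the paper's $12,767 and $60,863 per LYG:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per life-year gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# (cost in $, effect in life-years); life-years and the no-chemo cost
# are hypothetical illustration values.
no_chemo = (10_000, 4.0)
fu_only = (17_856, 4.6)
oxali = (45_378, 5.0)

icer_fu = icer(fu_only[0], no_chemo[0], fu_only[1], no_chemo[1])
icer_ox = icer(oxali[0], fu_only[0], oxali[1], fu_only[1])
```

Even with invented effects, the ordering matches the abstract's qualitative conclusion: the incremental cost per life-year of Oxaliplatin over 5-FU far exceeds that of 5-FU over no chemotherapy.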
Abstract:
Biweekly sediment trap samples and concurrent hydrographic measurements collected between March 2005 and October 2008 from the Cariaco Basin, Venezuela, are used to assess the relationship between [CO3]2- and the area densities (ρA) of two species of planktonic foraminifera (Globigerinoides ruber (pink) and Globigerinoides sacculifer). Calcification temperatures were calculated for each sample using species-appropriate oxygen isotope (δ18O) temperature equations, which were then compared to monthly temperature profiles taken at the study site in order to determine calcification depth. Ambient [CO3]2- was determined for these calcification depths using alkalinity, pH, temperature, salinity, and nutrient concentration measurements taken during monthly hydrographic cruises. ρA, which is representative of calcification efficiency, is determined by dividing individual foraminiferal shell weights (±0.43 µg) by their associated silhouette areas and taking the sample average. The results of this study show a strong correlation between ρA and ambient [CO3]2- for both G. ruber and G. sacculifer (R² = 0.89 and 0.86, respectively), confirming that [CO3]2- has a pronounced effect on the calcification of these species. Though ρA for both species reveals a highly significant (p < 0.001) relationship with ambient [CO3]2-, linear regression reveals that the extent to which [CO3]2- influences foraminiferal calcification is species specific. Hierarchical regression analyses indicate that other environmental parameters (temperature and [PO4]3-) do not confound the use of G. ruber and G. sacculifer ρA as predictors for [CO3]2-. This study suggests that G. ruber and G. sacculifer ρA can be used as reliable proxies for past surface ocean [CO3]2-.
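The ρA computation described above (individual shell weight over silhouette area, averaged over the sample) is a one-liner; the weights and areas below are invented for illustration:

```python
def area_density(weights_ug, areas_um2):
    """Sample-average area density: each shell's weight divided by its
    silhouette area, then averaged over the sample."""
    if len(weights_ug) != len(areas_um2):
        raise ValueError("one area per shell weight required")
    ratios = [w / a for w, a in zip(weights_ug, areas_um2)]
    return sum(ratios) / len(ratios)

# Hypothetical shells: weights in micrograms, silhouette areas in square
# micrometers (values chosen only to illustrate the calculation).
rho_a = area_density([8.2, 7.9, 9.1], [60_000, 58_000, 66_000])
```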
Abstract:
This research employs econometric analysis on a cross-section of American electricity companies in order to study the cost implications associated with unbundling the operations of integrated companies into vertically and/or horizontally separated companies. Focusing on the representative sample average firm, we find that complete horizontal and vertical disintegration, resulting in the creation of separate nuclear, conventional, and hydroelectric generation companies as well as a separate firm distributing power to final consumers, results in a statistically significant 13.5 percent increase in costs. Maintaining a horizontally integrated generator producing nuclear, conventional, and hydroelectric generation while imposing vertical separation by creating a stand-alone distribution company results in a lower but still substantial and statistically significant cost penalty, amounting to an 8.1 percent increase in costs relative to a fully integrated structure. As these results imply that a vertically separated but horizontally integrated generation firm would need to reduce the costs of generation by 11 percent just to recoup the cost increases associated with vertical separation, even the costs associated with vertical unbundling alone are quite substantial. Our paper is also the first academic paper we are aware of that systematically considers the impact of generation mix on vertical, horizontal, and overall scope economies. As a result, we are able to demonstrate that the estimated cost of unbundling in the electricity sector is substantially influenced by generation mix. Thus, for example, we find evidence of strong vertical integration economies between nuclear and conventional generation, but little evidence for vertical integration benefits between hydro generation and the distribution of power.
In contrast, we find strong evidence suggesting the presence of substantial horizontal integration economies associated with the joint production of hydro generation with nuclear and/or conventional fossil fuel generation. These results are significant because they indicate that the cost of unbundling the electricity sector will differ substantially in different systems, meaning that a blanket regulatory policy with regard to the appropriateness of vertical and horizontal unbundling is likely to be inappropriate.
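The relationship between the 8.1 percent total-cost penalty and the 11 percent required generation saving can be reverse-engineered with one line of arithmetic, under an assumed generation share of total cost (the share is hypothetical; it is not stated in the abstract):

```python
# If vertical separation raises total costs by 8.1%, and generation
# accounts for an assumed ~74% of total cost, the generator must cut
# its own costs by roughly 0.081 / 0.74, about 11%, to offset the penalty.
vertical_penalty = 0.081
generation_share = 0.74  # hypothetical share of total cost
required_generation_saving = vertical_penalty / generation_share
```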
Abstract:
Master's dissertation, Cognitive Neurosciences and Neuropsychology, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016
Abstract:
We investigate the X-ray properties of the Parkes sample of flat-spectrum radio sources using data from the ROSAT All-Sky Survey and archival pointed PSPC observations. In total, 163 of the 323 sources are detected. For the remaining 160 sources, 2-sigma upper limits to the X-ray flux are derived. We present power-law photon indices in the 0.1-2.4 keV energy band for 115 sources, which were determined either with a hardness-ratio technique or from direct fits to pointed PSPC data if a sufficient number of photons were available. The average photon index is <Gamma> = 1.95(-0.12)(+0.13) for flat-spectrum radio-loud quasars, <Gamma> = 1.70(-0.24)(+0.23) for galaxies, and <Gamma> = 2.40(-0.31)(+0.12) for BL Lac objects. The soft X-ray photon index is correlated with redshift and with radio spectral index, in the sense that sources at high redshift and/or with flat (or inverted) radio spectra have flatter X-ray spectra on average. The results are in accord with orientation-dependent unification schemes for radio-loud active galactic nuclei. Webster et al. discovered many sources with unusually red optical continua among the quasars of this sample, and interpreted this result in terms of extinction by dust. Although the X-ray spectra in general do not show excess absorption, we find that low-redshift optically red quasars have significantly lower soft X-ray luminosities on average than objects with blue optical continua. The difference disappears for higher redshifts, as is expected for intrinsic absorption by cold gas associated with the dust. In addition, the scatter in log(f(x)/f(o)) is consistent with the observed optical extinction, contrary to previous claims based on optically or X-ray selected samples.
Although alternative explanations for the red optical continua cannot be excluded with the present X-ray data, we note that the observed X-ray properties are consistent with the idea that dust plays an important role in some of the radio-loud quasars with red optical continua.
Abstract:
We calculate the density profiles and density correlation functions of the one-dimensional Bose gas in a harmonic trap, using the exact finite-temperature solutions for the uniform case, and applying a local density approximation. The results are valid for a trapping potential that is slowly varying relative to a correlation length. They allow a direct experimental test of the transition from the weak-coupling Gross-Pitaevskii regime to the strong-coupling, fermionic Tonks-Girardeau regime. We also calculate the average two-particle correlation which characterizes the bulk properties of the sample, and find that it can be well approximated by the value of the local pair correlation in the trap center.
Abstract:
This paper describes a comparison of adaptations of the QuEChERS (quick, easy, cheap, effective, rugged, and safe) approach for the determination of 14 organochlorine pesticide (OCP) residues in strawberry jam by concurrent use of gas chromatography (GC) coupled to an electron capture detector (ECD) and GC tandem mass spectrometry (GC-MS/MS). Three versions were tested based on the original QuEChERS method. The results were good (an overall average of 89% recovery with 15% RSD) using the ultrasonic bath at five spiked levels. Performance characteristics, such as accuracy, precision, linear range, and limits of detection (LOD) and quantification (LOQ), were determined for each pesticide. The LOD ranged from 0.8 to 8.9 µg kg-1; the LOQ was in the range of 2.5-29.8 µg kg-1; and the calibration curves were linear (r² > 0.9970) over the whole range of the explored concentrations (5-100 µg kg-1). The LODs of these pesticides were much lower than the maximum residue levels (MRLs) allowed in Europe for strawberries. The method was successfully applied to the quantification of OCPs in commercially available jams. The OCPs detected were at levels below the LOD.
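LOD/LOQ figures like those reported above are conventionally derived from the calibration line. Below is a generic sketch using the common 3.3·s/slope and 10·s/slope rules, with s the residual standard deviation of the fit; the calibration data are invented, and the paper may have used a different convention:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def lod_loq(xs, ys):
    """Detection/quantification limits as 3.3*s/slope and 10*s/slope,
    with s the residual standard deviation of the calibration line."""
    slope, intercept = fit_line(xs, ys)
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    s = (sum(r ** 2 for r in residuals) / (len(xs) - 2)) ** 0.5
    return 3.3 * s / slope, 10 * s / slope

# Hypothetical calibration: concentrations in ug/kg vs. detector response.
conc = [5, 10, 25, 50, 100]
resp = [51, 98, 251, 502, 997]
lod, loq = lod_loq(conc, resp)
```

By construction the LOQ is 10/3.3 times the LOD, so a method's LOQ range (2.5-29.8 µg kg-1 here) always sits about threefold above its LOD range (0.8-8.9 µg kg-1), as in the abstract.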