980 results for Sample average approximation


Relevance:

100.00%

Publisher:

Abstract:

Call centers are key elements of almost any large organization. The workforce management problem has received a great deal of attention in the literature. A typical formulation is based on performance measures over an infinite horizon, and the staffing problem is usually solved by combining optimization and simulation methods. In this thesis, we consider a staffing problem for call centers subject to probabilistic (chance) constraints. We introduce a formulation that requires the quality-of-service (QoS) constraints to be satisfied with high probability, and we define a sample average approximation of this problem in a multi-skill setting. We establish the convergence of the solution of the approximate problem to that of the original problem as the sample size grows. For the special case where all agents have all skills (a single agent group), we design three simulation-based optimization methods for the sample average problem. Given an initial staffing level, we increase the number of agents in the periods where the constraints are violated, and we decrease the number of agents in the periods where the constraints remain satisfied after the reduction. Numerical experiments are conducted on several call center models with low occupancy, in which the algorithms yield good solutions, i.e., most of the probabilistic constraints are satisfied and the staffing cannot be reduced in any given period without introducing constraint violations. An advantage of these algorithms, compared with other methods, is their ease of implementation.
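A rough Python sketch of the add/remove heuristic described above, using a deliberately crude one-line stand-in for the call-center simulator; the QoS target, the toy demand parameter, and the stopping rule are illustrative assumptions rather than the thesis's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_qos(staffing, scenarios, demand=20.0):
    # toy stand-in for the call-center simulator: in each simulated run and
    # period, the QoS target is met with a probability that grows with staffing
    p_meet = 1.0 - np.exp(-np.asarray(staffing) / demand)
    return scenarios < p_meet                                   # (runs x periods) booleans

def saa_staffing(staffing, scenarios, target=0.95, max_iter=200):
    """Greedy add/remove heuristic on the sample-average (SAA) staffing problem."""
    staffing = np.array(staffing, dtype=int)
    for _ in range(max_iter):
        p_hat = simulate_qos(staffing, scenarios).mean(axis=0)  # empirical QoS probabilities
        violated = p_hat < target
        if violated.any():
            staffing[violated] += 1                             # add agents where constraints fail
            continue
        reduced = False
        for t in range(staffing.size):                          # try removing one agent per period
            if staffing[t] == 0:
                continue
            trial = staffing.copy()
            trial[t] -= 1
            if (simulate_qos(trial, scenarios).mean(axis=0) >= target).all():
                staffing, reduced = trial, True                 # keep the reduction if still feasible
        if not reduced:
            break
    return staffing

scenarios = rng.random((500, 12))        # 500 simulated days, 12 staffing periods
print(saa_staffing(np.full(12, 40), scenarios))
```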

Relevance:

100.00%

Publisher:

Abstract:

We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than “standard” confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and stochastically constrained problems. We conclude with a small simulation study illustrating the numerical behavior of the proposed bounds.
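A minimal Python sketch of this kind of construction on a toy problem (not the paper's specific bounds): the lower bound averages the optimal values of several independent SAA replications, and the upper bound evaluates one candidate solution on a fresh sample; both use standard normal-approximation confidence adjustments.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, xi):
    # toy objective: the true problem is min_x E[(x - xi)^2], optimum at x = E[xi]
    return (x - xi) ** 2

def saa_solve(sample):
    # the SAA of this toy problem has a closed-form solution (the sample mean);
    # a real application would call an optimizer here
    x_hat = sample.mean()
    return x_hat, F(x_hat, sample).mean()

# Lower bound: average of M independent SAA optimal values (valid in expectation
# for a minimization problem), minus a normal-approximation margin.
M, N = 20, 200
vals = np.array([saa_solve(rng.normal(size=N))[1] for _ in range(M)])
lb = vals.mean() - 1.645 * vals.std(ddof=1) / np.sqrt(M)

# Upper bound: evaluate one candidate solution on a fresh, larger sample.
x_cand, _ = saa_solve(rng.normal(size=N))
out = F(x_cand, rng.normal(size=10_000))
ub = out.mean() + 1.645 * out.std(ddof=1) / np.sqrt(out.size)

print(f"approx. 90% bracket for the optimal value: [{lb:.3f}, {ub:.3f}]")
```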

Relevance:

90.00%

Publisher:

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, it is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results. © 2010 Elsevier Inc. All rights reserved.
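The following Python sketch conveys the flavour of such estimators on a toy system: a Robbins-Monro-style recursion updates running estimates of the mean and variance of the state at a fixed time from simulated observations, without ever using the parameter distribution explicitly. With step size 1/n the recursion reduces to the plain sample average mentioned in the abstract; the paper's actual algorithms and assumptions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_path(t, theta):
    # toy random dynamical system: dx/dt = -theta * x, x(0) = 1,
    # with theta drawn from a distribution treated as unknown by the estimator
    return np.exp(-theta * t)

t = 1.0
mean_est, sq_est = 0.0, 0.0
for n in range(1, 10_001):
    x = sample_path(t, theta=rng.uniform(0.5, 1.5))   # one simulated observation of X(t)
    a_n = 1.0 / n                                     # stochastic-approximation step size
    mean_est += a_n * (x - mean_est)                  # running estimate of E[X(t)]
    sq_est += a_n * (x * x - sq_est)                  # running estimate of E[X(t)^2]

var_est = sq_est - mean_est ** 2
print(f"estimated mean {mean_est:.4f}, variance {var_est:.4f} at t = {t}")
```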

Relevance:

90.00%

Publisher:

Abstract:

I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
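The Monte Carlo exact-test idea can be sketched outside Stata as well; the snippet below (Python, not mgof syntax) computes a simulated p-value for a discrete goodness-of-fit hypothesis using Pearson's X² statistic, with the add-one correction commonly used for Monte Carlo p-values.

```python
import numpy as np

def monte_carlo_gof(counts, probs, reps=10_000, seed=0):
    """Monte Carlo goodness-of-fit p-value for a discrete distribution,
    using Pearson's X^2 as the test statistic."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n = counts.sum()
    expected = n * np.asarray(probs)
    x2_obs = ((counts - expected) ** 2 / expected).sum()
    # simulate the statistic's null distribution from multinomial draws
    sims = rng.multinomial(n, probs, size=reps)
    x2_sim = ((sims - expected) ** 2 / expected).sum(axis=1)
    return (1 + (x2_sim >= x2_obs).sum()) / (reps + 1)   # add-one Monte Carlo p-value

print(monte_carlo_gof([18, 22, 60], [0.25, 0.25, 0.5]))
```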

Relevance:

80.00%

Publisher:

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
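As a loose illustration of empirical surrogate-risk minimization with a reject option, the Python sketch below minimizes the sample average of the ordinary hinge loss for a linear classifier and then abstains when the score falls in a small band around zero. The particular convex loss φ studied in the paper, and the principled link between the rejection cost and the threshold, are not reproduced; tau and the training hyperparameters are illustrative assumptions.

```python
import numpy as np

def fit_linear(X, y, lam=0.1, lr=0.05, epochs=300):
    """Minimize the sample average of a hinge surrogate max(0, 1 - y*f(x)),
    plus an L2 penalty, for a linear score f(x) = w.x, by subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        active = (margins < 1.0).astype(float)                # samples where the hinge is nonzero
        grad = -(X * (y * active)[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def predict_with_reject(X, w, tau=0.3):
    # classify by the sign of the score, abstain (label 0) when the score lies
    # within tau of zero; tau is a hypothetical threshold tied to the reject cost
    scores = X @ w
    labels = np.sign(scores)
    labels[np.abs(scores) < tau] = 0
    return labels

# toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1.0, -1.0)
w = fit_linear(X, y)
print(predict_with_reject(X[:5], w))
```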

Relevance:

80.00%

Publisher:

Abstract:

Linear alkylbenzenes (LAB), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by the analyses of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu. The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a 'soft ionization' technique, chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percentage of the total sample weight as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full-scan and single-ion-monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-qualitatively. The GC/MS method developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study.
However, further investigation of these compounds will be the subject of a future study.

Relevance:

80.00%

Publisher:

Abstract:

A score test that incorporates patient non-compliance while respecting randomization is developed for binary clinical trial data. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power over the intention-to-treat analysis is possible by adjusting for patient non-compliance. Sample size formulae are derived, and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.

Relevance:

80.00%

Publisher:

Abstract:

Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation value of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, however, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
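For the Brier score, the empirical decomposition in question is the familiar binned (Murphy-type) one; a short Python sketch, with the number of bins as an assumption:

```python
import numpy as np

def brier_decomposition(p, y, bins=10):
    """Empirical (binned) decomposition of the Brier score into reliability,
    resolution and uncertainty terms: Brier ~= rel - res + unc."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    ybar = y.mean()
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    rel = res = 0.0
    for k in range(bins):
        mask = idx == k
        if not mask.any():
            continue
        w = mask.mean()                    # fraction of forecasts falling in bin k
        pk, ok = p[mask].mean(), y[mask].mean()
        rel += w * (pk - ok) ** 2          # reliability (calibration) term
        res += w * (ok - ybar) ** 2        # resolution term
    unc = ybar * (1.0 - ybar)              # uncertainty of the observations
    return rel, res, unc
```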

Relevance:

80.00%

Publisher:

Abstract:

Background: The overall objectives of this dissertation are to examine the geographic variation and socio-demographic disparities (by age, race and gender) in the utilization and survival of newly FDA-approved chemotherapy agents (Oxaliplatin-containing regimens), as well as to determine the cost-effectiveness of Oxaliplatin in a large nationwide, population-based cohort of Medicare patients with resected stage-III colon cancer. Methods: A retrospective cohort of 7,654 Medicare patients was identified from the Surveillance, Epidemiology and End Results–Medicare linked database. Multiple logistic regression was performed to examine the relationship between receipt of Oxaliplatin-containing chemotherapy and geographic region while adjusting for other patient characteristics. A Cox proportional hazards model was used to estimate the effect of Oxaliplatin-containing chemotherapy on the survival variation across regions using 2004-2005 data. Propensity score adjustments were also made to control for potential bias related to non-random allocation of the treatment group. We used the Kaplan-Meier sample average (KMSA) estimator to calculate the cost of disease from cancer-specific surgery to death, loss to follow-up, or censoring. Results: Only 51% of the stage-III patients received adjuvant chemotherapy within three to six months of colon-cancer specific surgery. Patients in rural regions were approximately 30% less likely to receive Oxaliplatin chemotherapy than those residing in a big metro region (OR=0.69, p=0.033). The hazard ratio for patients residing in a metro region was comparable to those residing in a big metro region (HR: 1.05, 95% CI: 0.49-2.28). Patients who received Oxaliplatin chemotherapy were 33% less likely to die than those who received 5-FU only chemotherapy (adjusted HR=0.67, 95% CI: 0.41-1.11). KMSA-adjusted mean payments were almost 2.5 times higher in the Oxaliplatin-containing group than in the 5-FU only group ($45,378 versus $17,856). When compared to the no-chemotherapy group, the ICER of the 5-FU based regimen was $12,767 per LYG, and the ICER of Oxaliplatin chemotherapy was $60,863 per LYG. Oxaliplatin was found to be economically dominated by 5-FU only chemotherapy in this study population. Conclusion: Chemotherapy use varies across geographic regions. We also observed considerable survival differences across geographic regions; the difference remained even after adjusting for socio-demographic characteristics. The cost-effectiveness of Oxaliplatin in Medicare patients may be over-estimated in clinical trials. Our study found 5-FU only chemotherapy to be cost-effective in the adjuvant setting in patients with stage-III colon cancer.
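The incremental cost-effectiveness ratios quoted above follow the usual definition, ICER = (difference in mean cost) / (difference in mean effectiveness). A toy Python illustration with entirely hypothetical per-patient figures (not the study's data):

```python
# Entirely hypothetical per-patient means, for illustration only.
cost = {"none": 6_000.0, "5fu": 18_000.0, "oxali": 45_000.0}   # mean cost (USD)
lyg  = {"none": 4.00,    "5fu": 4.90,     "oxali": 5.35}       # mean life-years gained

def icer(new, base):
    """Incremental cost-effectiveness ratio of strategy `new` over strategy `base`."""
    return (cost[new] - cost[base]) / (lyg[new] - lyg[base])

print(f"5-FU vs no chemotherapy:        ${icer('5fu', 'none'):,.0f} per LYG")
print(f"Oxaliplatin vs no chemotherapy: ${icer('oxali', 'none'):,.0f} per LYG")
```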

Relevance:

80.00%

Publisher:

Abstract:

Biweekly sediment trap samples and concurrent hydrographic measurements collected between March 2005 and October 2008 from the Cariaco Basin, Venezuela, are used to assess the relationship between [CO3]2- and the area densities (ρA) of two species of planktonic foraminifera (Globigerinoides ruber (pink) and Globigerinoides sacculifer). Calcification temperatures were calculated for each sample using species-appropriate oxygen isotope (δ18O) temperature equations that were then compared to monthly temperature profiles taken at the study site in order to determine calcification depth. Ambient [CO3]2- was determined for these calcification depths using alkalinity, pH, temperature, salinity, and nutrient concentration measurements taken during monthly hydrographic cruises. The ρA, which is representative of calcification efficiency, is determined by dividing individual foraminiferal shell weights (±0.43 µg) by their associated silhouette areas and taking the sample average. The results of this study show a strong correlation between ρA and ambient [CO3]2- for both G. ruber and G. sacculifer (R² = 0.89 and 0.86, respectively), confirming that [CO3]2- has a pronounced effect on the calcification of these species. Though the ρA for both species reveal a highly significant (p < 0.001) relationship with ambient [CO3]2-, linear regression reveals that the extent to which [CO3]2- influences foraminiferal calcification is species specific. Hierarchical regression analyses indicate that other environmental parameters (temperature and [PO4]3-) do not confound the use of G. ruber and G. sacculifer ρA as a predictor for [CO3]2-. This study suggests that G. ruber and G. sacculifer ρA can be used as reliable proxies for past surface ocean [CO3]2-.
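A compact Python sketch of the two quantitative steps described: the sample-average area density ρA (mean of individual shell weight over silhouette area) and an ordinary least-squares fit of ρA against ambient [CO3]2-. The numerical arrays are hypothetical placeholders, not the Cariaco Basin data.

```python
import numpy as np

def area_density(weights_ug, areas_um2):
    """Sample-average area density: mean of individual weight / silhouette area."""
    return np.mean(np.asarray(weights_ug, float) / np.asarray(areas_um2, float))

# Hypothetical values for a handful of trap samples (for illustration only):
rho_a = np.array([1.21e-4, 1.34e-4, 1.08e-4, 1.45e-4, 1.27e-4])   # µg/µm^2
co3   = np.array([215.0,   238.0,   198.0,   255.0,   226.0])     # µmol/kg

slope, intercept = np.polyfit(co3, rho_a, 1)                 # least-squares fit
pred = slope * co3 + intercept
r2 = 1 - ((rho_a - pred) ** 2).sum() / ((rho_a - rho_a.mean()) ** 2).sum()
print(f"rho_A = {slope:.3e} * [CO3]2- + {intercept:.3e}  (R^2 = {r2:.2f})")
```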

Relevance:

80.00%

Publisher:

Abstract:

This research employs econometric analysis on a cross section of American electricity companies in order to study the cost implications associated with unbundling the operations of integrated companies into vertically and/or horizontally separated companies. Focusing on the representative sample average firm, we find that complete horizontal and vertical disintegration, resulting in the creation of separate nuclear, conventional, and hydro electric generation companies as well as a separate firm distributing power to final consumers, results in a statistically significant 13.5 percent increase in costs. Maintaining a horizontally integrated generator producing nuclear, conventional, and hydro electric generation while imposing vertical separation by creating a stand-alone distribution company results in a lower but still substantial and statistically significant cost penalty, amounting to an 8.1% increase in costs relative to a fully integrated structure. As these results imply that a vertically separated but horizontally integrated generation firm would need to reduce the costs of generation by 11% just to recoup the cost increases associated with vertical separation, even the costs associated with vertical unbundling alone are quite substantial. Our paper is also the first academic paper we are aware of that systematically considers the impact of generation mix on vertical, horizontal, and overall scope economies. As a result, we are able to demonstrate that the estimated cost of unbundling in the electricity sector is substantially influenced by generation mix. Thus, for example, we find evidence of strong vertical integration economies between nuclear and conventional generation, but little evidence for vertical integration benefits between hydro generation and the distribution of power. In contrast, we find strong evidence suggesting the presence of substantial horizontal integration economies associated with the joint production of hydro generation with nuclear and/or conventional fossil fuel generation. These results are significant because they indicate that the cost of unbundling the electricity sector will differ substantially in different systems, meaning that a blanket regulatory policy with regard to the appropriateness of vertical and horizontal unbundling is likely to be inappropriate.

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation, Cognitive Neurosciences and Neuropsychology, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016

Relevance:

40.00%

Publisher:

Abstract:

An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method for training a pattern classifier using unlabeled samples, is described. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision-making and estimation. It uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated, using the gradient, in a recursive fashion. The induced possibility of occurrence of each class is useful for estimation and is computed using 1) the membership of the new sample in that class and 2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure is defined in terms of the induced possibility distribution to measure the extent of learning. The method is illustrated with relevant examples.
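A heavily simplified Python sketch of two of the ingredients named above: an exponential class-membership function and a recursive (running-average) update of a class's induced possibility from the membership of each new unlabeled sample. The parameterization and update form are illustrative assumptions; the paper's exact estimation scheme and entropy measure are not reproduced.

```python
import numpy as np

def exp_membership(x, center, spread):
    # exponential membership function assumed for each class
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * spread ** 2))

def update_possibility(prev_avg, membership, n):
    # recursive running average: combine the new sample's membership with the
    # previously computed average possibility of occurrence of the class
    return prev_avg + (membership - prev_avg) / n

# toy usage with a hypothetical 2-D class prototype
center, spread = np.array([0.0, 0.0]), 1.0
poss = 0.0
for n, x in enumerate(np.random.default_rng(0).normal(size=(50, 2)), start=1):
    poss = update_possibility(poss, exp_membership(x, center, spread), n)
print(poss)
```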

Relevance:

40.00%

Publisher:

Abstract:

We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance in this setting and converges to a feasible point.
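A schematic Python sketch of one update of such a scheme: a TD critic with linear function approximation for the Lagrangian (penalized) single-stage cost, a policy-gradient actor step, and a slow projected ascent step for the Lagrange multiplier. The feature vectors, step sizes and the exact update form are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def constrained_ac_step(theta, v, lam, avg_cost,
                        cost, constraint_cost, alpha,
                        phi_s, phi_s_next, grad_log_pi,
                        a_v=0.05, a_theta=0.005, a_lam=0.0005):
    """One schematic Lagrangian actor-critic update for an average-cost MDP
    with an inequality constraint E[constraint_cost] <= alpha."""
    penalized = cost + lam * constraint_cost                  # single-stage Lagrangian cost
    td = penalized - avg_cost + v @ phi_s_next - v @ phi_s    # average-cost TD error
    avg_cost += a_v * (penalized - avg_cost)                  # running average-cost estimate
    v = v + a_v * td * phi_s                                  # critic update (fast step size)
    theta = theta - a_theta * td * grad_log_pi                # actor update (slower step size)
    lam = max(0.0, lam + a_lam * (constraint_cost - alpha))   # multiplier ascent (slowest)
    return theta, v, lam, avg_cost

# toy call with random placeholder quantities
rng = np.random.default_rng(0)
theta, v, lam, avg_cost = np.zeros(4), np.zeros(4), 0.0, 0.0
theta, v, lam, avg_cost = constrained_ac_step(
    theta, v, lam, avg_cost, cost=1.2, constraint_cost=0.8, alpha=0.5,
    phi_s=rng.normal(size=4), phi_s_next=rng.normal(size=4),
    grad_log_pi=rng.normal(size=4))
```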