878 results for "Uncertainty in generation"


Relevance: 100.00%

Abstract:

With standard assumptions on preferences and a fully fledged econometric model, we compute the welfare costs of macroeconomic uncertainty for the post-war U.S. using the Beveridge-Nelson decomposition. Welfare costs are about 0.9% of per-capita consumption ($175.00), and marginal welfare costs are about twice as large.
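
As a rough illustration of the decomposition step only, the sketch below splits a simulated log-consumption series into its Beveridge-Nelson trend and cycle under the simplifying assumption that consumption growth follows an AR(1); the helper `beveridge_nelson_ar1` and all numbers are hypothetical and stand in for the paper's fully fledged econometric model.

```python
import numpy as np

def beveridge_nelson_ar1(y):
    """Beveridge-Nelson trend/cycle split of a log-level series y,
    assuming its first difference follows an AR(1) (illustrative only)."""
    dy = np.diff(y)
    mu = dy.mean()
    x = dy[:-1] - mu
    z = dy[1:] - mu
    phi = (x @ z) / (x @ x)          # OLS estimate of the AR(1) coefficient
    # For an AR(1) in differences, the sum of expected future deviations
    # from mean growth is phi/(1-phi) times the current deviation.
    cycle = -(phi / (1.0 - phi)) * (dy - mu)
    trend = y[1:] - cycle            # permanent (stochastic-trend) component
    return trend, cycle, phi

# Toy example: random walk with drift plus AR(1) persistence in growth
rng = np.random.default_rng(0)
n, phi_true = 300, 0.4
eps = rng.normal(0, 0.01, n)
dy = np.empty(n); dy[0] = 0.005
for t in range(1, n):
    dy[t] = 0.005 + phi_true * (dy[t - 1] - 0.005) + eps[t]
y = np.cumsum(dy)

trend, cycle, phi_hat = beveridge_nelson_ar1(y)
print(f"estimated AR(1) coefficient: {phi_hat:.2f}, cycle std: {cycle.std():.4f}")
```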

Relevance: 100.00%

Abstract:

This paper presents a method for calculating the power flow in distribution networks that accounts for uncertainties in the distribution system. Active and reactive power are treated as uncertain variables and modeled probabilistically through probability distribution functions. Uncertainty about which feeder each user is connected to is also considered. A Monte Carlo simulation is used to generate the possible load scenarios of the users. The results of the power flow under uncertainty are the mean values and standard deviations of the variables of interest (voltages at all nodes, active and reactive power flows, etc.), giving the user valuable information about how the network will behave under uncertainty, rather than the traditional fixed values at a single point in time. The method is tested using real data from a primary feeder system, and results are presented considering uncertainty in demand and in the connections. To demonstrate the usefulness of the approach, the results are then used in a probabilistic risk analysis to identify potential undervoltage problems in distribution systems.
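
The sketch below illustrates the general idea on a hypothetical four-node radial feeder: node loads are drawn from normal distributions, an approximate voltage-drop calculation stands in for a full power-flow solver, and the Monte Carlo output is summarized as per-node means, standard deviations, and undervoltage probabilities. All impedances, load statistics, and limits are assumed values, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 4-section radial feeder: per-section resistance/reactance (ohm)
R = np.array([0.20, 0.15, 0.25, 0.30])
X = np.array([0.10, 0.08, 0.12, 0.15])
# Assumed mean and std of each node's active/reactive demand (MW, Mvar)
P_mu = np.array([1.2, 1.5, 0.9, 0.8]); P_sd = 0.25 * P_mu
Q_mu, Q_sd = 0.4 * P_mu, 0.4 * P_sd
V0 = 13.8e3 / np.sqrt(3)          # source phase voltage (V)

def feeder_voltages(P_mw, Q_mvar):
    """Approximate voltage profile of a radial feeder using the
    (P*R + Q*X)/V drop formula; each section carries all downstream load."""
    P, Q = P_mw * 1e6, Q_mvar * 1e6
    v, volts = V0, []
    for i in range(len(R)):
        down_P, down_Q = P[i:].sum(), Q[i:].sum()   # load fed through section i
        v = v - (down_P * R[i] + down_Q * X[i]) / v
        volts.append(v)
    return np.array(volts)

# Monte Carlo over load scenarios
scenarios = np.array([
    feeder_voltages(rng.normal(P_mu, P_sd), rng.normal(Q_mu, Q_sd))
    for _ in range(2000)
])
mean_v, std_v = scenarios.mean(axis=0), scenarios.std(axis=0)
undervoltage_prob = (scenarios < 0.95 * V0).mean(axis=0)  # per-node risk
print("mean node voltages (V):", np.round(mean_v, 1))
print("std  node voltages (V):", np.round(std_v, 1))
print("P(undervoltage):       ", np.round(undervoltage_prob, 3))
```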

Relevance: 100.00%

Abstract:

This paper presents two mathematical models and a methodology to solve a transmission network expansion planning problem considering uncertainty in demand. The first model analyzes the uncertainty in the system as a whole, i.e., uncertainty in the total demand of the power system; the second analyzes the uncertainty at each load bus individually. The solution methodology finds the optimal transmission network expansion plan that allows the power system to operate adequately in an environment with uncertainty. The models are solved using a specialized genetic algorithm. Results obtained for several well-known test systems from the literature show that cheaper plans can be found while still satisfying the uncertain demand.
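
A minimal sketch of the kind of genetic algorithm such a problem calls for is shown below: a candidate expansion plan is a binary vector over candidate lines, and fitness combines investment cost with a penalty for demand scenarios the expanded capacity cannot cover. The crude aggregate capacity check, the line data, and the GA settings are all illustrative assumptions, not the paper's specialized algorithm or its test systems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate lines: investment cost and added transfer capacity (MW)
line_cost = np.array([40., 25., 60., 35., 50., 20.])
line_cap  = np.array([120., 70., 180., 100., 150., 60.])
existing_cap = 300.0
demand_mu, demand_sd = 520.0, 40.0            # uncertain total demand (MW)
scenarios = rng.normal(demand_mu, demand_sd, 200)

def fitness(plan):
    """Investment cost plus a heavy penalty for demand scenarios the
    expanded network cannot supply (aggregate capacity check only)."""
    cap = existing_cap + (plan * line_cap).sum()
    unmet = np.clip(scenarios - cap, 0, None).mean()
    return (plan * line_cost).sum() + 1000.0 * unmet

def genetic_algorithm(pop_size=40, generations=100, mut_rate=0.1):
    pop = rng.integers(0, 2, size=(pop_size, len(line_cost)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]  # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, len(a))                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(len(child)) < mut_rate         # bit-flip mutation
            child[flip] ^= 1
            kids.append(child)
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmin()], scores.min()

best_plan, best_cost = genetic_algorithm()
print("lines to build:", best_plan, "objective:", round(best_cost, 1))
```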

Relevance: 100.00%

Abstract:

Includes bibliography

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

We consider model selection uncertainty in linear regression. We study theoretically and by simulation the approach of Buckland and co-workers, who proposed estimating a parameter common to all models under study by taking a weighted average over the models, using weights obtained from information criteria or the bootstrap. This approach is compared with the usual approach in which the 'best' model is used, and with Bayesian model averaging. The weighted predictor behaves similarly to model averaging, with generally more realistic mean-squared errors than the usual model-selection-based estimator.
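
A minimal sketch of information-criterion weighting in this spirit: each candidate linear model receives an Akaike weight, and a prediction is averaged across models with those weights. The toy data and the helper `aic_weighted_average` are illustrative, not the estimators studied in the paper.

```python
import numpy as np

def aic_weighted_average(y, X_list, x_new_list):
    """Akaike-weight model averaging of a prediction across candidate
    linear models (illustrative sketch of the weighted-average idea)."""
    n = len(y)
    aics, preds = [], []
    for X, x_new in zip(X_list, x_new_list):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = (resid @ resid) / n                 # ML variance estimate
        k = X.shape[1] + 1                           # coefficients + variance
        aics.append(n * np.log(sigma2) + 2 * k)
        preds.append(x_new @ beta)
    aics = np.array(aics)
    w = np.exp(-0.5 * (aics - aics.min()))
    w /= w.sum()                                     # Akaike weights
    return float(w @ np.array(preds)), w

# Toy data: the true model uses x1 only
rng = np.random.default_rng(3)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=1.0, size=n)
ones = np.ones(n)
X_list = [np.column_stack([ones, x1]),
          np.column_stack([ones, x1, x2])]
x_new_list = [np.array([1.0, 0.5]), np.array([1.0, 0.5, -0.3])]

pred, weights = aic_weighted_average(y, X_list, x_new_list)
print("model weights:", np.round(weights, 3), "averaged prediction:", round(pred, 3))
```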

Relevance: 100.00%

Abstract:

Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
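
As a small, purely illustrative example of why the hierarchical view matters, the sketch below simulates observations from sites nested within a region and contrasts raw site means with partially pooled (empirical-Bayes) estimates under a normal-normal hierarchy; the Bayesian and non-Bayesian machinery discussed in the article goes well beyond this.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a two-level (hierarchical) data set: sites nested in a region.
n_sites, n_obs = 12, 20
mu, tau = 5.0, 1.0            # regional mean and between-site sd (process level)
sigma = 2.0                   # within-site observation sd (measurement level)
site_means = rng.normal(mu, tau, n_sites)
data = rng.normal(site_means[:, None], sigma, (n_sites, n_obs))

# Empirical-Bayes estimates for the normal-normal hierarchy:
ybar = data.mean(axis=1)                       # raw site means
mu_hat = ybar.mean()
var_between = max(ybar.var(ddof=1) - sigma**2 / n_obs, 1e-9)
shrink = var_between / (var_between + sigma**2 / n_obs)
site_est = mu_hat + shrink * (ybar - mu_hat)   # partially pooled site estimates

print(f"shrinkage factor: {shrink:.2f}")
print("raw vs. partially pooled estimate, site 0:",
      round(ybar[0], 2), round(site_est[0], 2))
```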

Relevance: 100.00%

Abstract:

Analytical methods accounting for imperfect detection are often used to facilitate reliable inference in population and community ecology. We contend that similar approaches are needed in disease ecology because these complicated systems are inherently difficult to observe without error. For example, wildlife disease studies often designate individuals, populations, or spatial units to states (e.g., susceptible, infected, post-infected), but the uncertainty associated with these state assignments remains largely ignored or unaccounted for. We demonstrate how recent developments incorporating observation error through repeated sampling extend quite naturally to hierarchical spatial models of disease effects, prevalence, and dynamics in natural systems. A highly pathogenic strain of avian influenza virus in migratory waterfowl and a pathogenic fungus recently implicated in the global loss of amphibian biodiversity are used as motivating examples. Both show that relatively simple modifications to study designs can greatly improve our understanding of complex spatio-temporal disease dynamics by rigorously accounting for uncertainty at each level of the hierarchy.
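
The sketch below shows the core idea of accounting for imperfect detection through repeated sampling: a single-season occupancy model is fit by maximum likelihood to simulated detection histories, and the naive proportion of sites with detections is compared with the estimated occupancy. It is a generic illustration, not the hierarchical spatial disease models used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, expit

rng = np.random.default_rng(11)

# Simulate detection/non-detection data: n_sites surveyed J times each.
n_sites, J = 150, 4
psi_true, p_true = 0.6, 0.4            # occupancy and per-visit detection prob.
occupied = rng.random(n_sites) < psi_true
y = rng.binomial(J, p_true * occupied)     # detections (always 0 when unoccupied)

def neg_log_lik(theta):
    """Single-season occupancy model: detections at occupied sites are
    Binomial(J, p); all-zero histories may be unoccupied or simply missed."""
    psi, p = expit(theta)
    lik_detected = psi * comb(J, y) * p**y * (1 - p)**(J - y)
    lik_all_zero = psi * (1 - p)**J + (1 - psi)
    lik = np.where(y > 0, lik_detected, lik_all_zero)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
naive = (y > 0).mean()                     # ignores imperfect detection
print(f"naive occupancy {naive:.2f} vs. estimated psi {psi_hat:.2f}, p {p_hat:.2f}")
```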

Relevance: 100.00%

Abstract:

In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. Many standards are available in every area of noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard concerning the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the estimation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones placed on the surface, their spatial positions, and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
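
As a simple numerical illustration of the array-comparison problem (not the statistical treatment developed in the thesis), the sketch below computes a sound power level from surface-averaged sound pressure levels in the ISO 3744 spirit and shows how the result scatters when only a 10-microphone subset of a denser 40-position set is used; all SPL readings and the geometry are assumed.

```python
import numpy as np

rng = np.random.default_rng(2024)

def sound_power_level(spl_db, surface_m2):
    """Sound power level from SPLs sampled on an enveloping surface:
    energy-average the SPLs, then add 10*log10(S/S0) with S0 = 1 m^2."""
    lp_mean = 10 * np.log10(np.mean(10 ** (np.asarray(spl_db) / 10)))
    return lp_mean + 10 * np.log10(surface_m2 / 1.0)

# Hypothetical dense measurement: 40 positions on a hemisphere of radius 2 m
surface = 2 * np.pi * 2.0**2
spl_dense = rng.normal(78.0, 1.5, 40)      # assumed SPL readings (dB)

# Compare sparse arrays: repeatedly pick 10 of the 40 positions and see how
# much the resulting sound power level scatters (array-dependent uncertainty).
lw_subsets = np.array([
    sound_power_level(rng.choice(spl_dense, 10, replace=False), surface)
    for _ in range(1000)
])
print(f"Lw (all 40 mics): {sound_power_level(spl_dense, surface):.2f} dB")
print(f"Lw (10-mic arrays): mean {lw_subsets.mean():.2f} dB, "
      f"std {lw_subsets.std():.2f} dB")
```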

Relevance: 100.00%

Abstract:

To check the effectiveness of campaigns preventing drug abuse or indicating local effects of efforts against drug trafficking, it is beneficial to know consumed amounts of substances in a high spatial and temporal resolution. The analysis of drugs of abuse in wastewater (WW) has the potential to provide this information. In this study, the reliability of WW drug consumption estimates is assessed and a novel method presented to calculate the total uncertainty in observed WW cocaine (COC) and benzoylecgonine (BE) loads. Specifically, uncertainties resulting from discharge measurements, chemical analysis and the applied sampling scheme were addressed and three approaches presented. These consist of (i) a generic model-based procedure to investigate the influence of the sampling scheme on the uncertainty of observed or expected drug loads, (ii) a comparative analysis of two analytical methods (high performance liquid chromatography-tandem mass spectrometry and gas chromatography-mass spectrometry), including an extended cross-validation by influent profiling over several days, and (iii) monitoring COC and BE concentrations in WW of the largest Swiss sewage treatment plants. In addition, the COC and BE loads observed in the sewage treatment plant of the city of Berne were used to back-calculate the COC consumption. The estimated mean daily consumed amount was 107 ± 21 g of pure COC, corresponding to 321 g of street-grade COC.
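
A minimal sketch of the back-calculation with Monte Carlo uncertainty propagation: a daily benzoylecgonine load with measurement uncertainty is divided by an assumed excretion fraction and scaled by the cocaine/benzoylecgonine molar-mass ratio. The load, its uncertainty, and the excretion fraction are illustrative values, not the calibrated figures behind the 107 ± 21 g estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000                                  # Monte Carlo draws

# Measured daily benzoylecgonine (BE) load in the influent, with its combined
# sampling/flow/analysis uncertainty (values assumed for illustration).
be_load_g = rng.normal(100.0, 12.0, n)       # g/day
excreted_as_be = rng.normal(0.35, 0.05, n)   # fraction of a COC dose excreted as BE (assumed)
mw_ratio = 303.35 / 289.33                   # molar mass cocaine / benzoylecgonine

# Back-calculation: consumed cocaine = BE load / excretion fraction * MW ratio
coc_consumed = be_load_g / excreted_as_be * mw_ratio

mean, sd = coc_consumed.mean(), coc_consumed.std()
lo, hi = np.percentile(coc_consumed, [2.5, 97.5])
print(f"pure cocaine consumed: {mean:.0f} ± {sd:.0f} g/day "
      f"(95% interval {lo:.0f}-{hi:.0f} g/day)")
```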

Relevance: 100.00%

Abstract:

Backcalculation is the primary method used to reconstruct past human immunodeficiency virus (HIV) infection rates, to estimate current prevalence of HIV infection, and to project future incidence of acquired immunodeficiency syndrome (AIDS). The method is very sensitive to uncertainty about the incubation period. We estimate incubation distributions from three sets of cohort data and find that the estimates for the cohorts are substantially different. Backcalculations employing the different estimates produce equally good fits to reported AIDS counts but quite different estimates of cumulative infections. These results suggest that the incubation distribution is likely to differ for different populations and that the differences are large enough to have a big impact on the resulting estimates of HIV infection rates. This seriously limits the usefulness of backcalculation for populations (such as intravenous drug users, heterosexuals, and women) that lack precise information on incubation times.
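
The sketch below shows the back-calculation mechanism and its sensitivity to the incubation distribution on toy data: reported AIDS counts are modeled as past infections convolved with a gamma incubation distribution, infections are recovered by nonnegative least squares, and two plausible mean incubation times give noticeably different cumulative-infection estimates. Real back-calculation uses smoothed, penalized estimators and surveillance data, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import gamma

# Annual AIDS case counts (toy numbers, not real surveillance data)
aids = np.array([2, 4, 7, 12, 20, 31, 45, 62, 80, 101, 124, 150], dtype=float)
T = len(aids)

def backcalculate(mean_incubation_yrs, shape=2.0):
    """Back-calculation: expected AIDS counts are past infections convolved
    with the incubation distribution; infections are recovered here by a
    plain nonnegative least-squares fit (no smoothing, purely illustrative)."""
    edges = np.arange(T + 1, dtype=float)            # year boundaries
    cdf = gamma.cdf(edges, a=shape, scale=mean_incubation_yrs / shape)
    f = np.diff(cdf)                                 # P(incubation lands in year d)
    A = np.zeros((T, T))
    for t in range(T):
        for s in range(t + 1):
            A[t, s] = f[t - s]                       # infected in year s, diagnosed in year t
    infections, _ = nnls(A, aids)
    return infections

for mean_inc in (8.0, 10.0):                         # two plausible incubation means
    est = backcalculate(mean_inc)
    print(f"mean incubation {mean_inc:.0f} yr -> "
          f"estimated cumulative infections {est.sum():.0f}")
```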

Relevance: 100.00%

Abstract:

Genome-wide association studies (GWAS) are used to discover genes underlying complex, heritable disorders for which less powerful study designs have failed in the past. The number of GWAS has skyrocketed recently, with findings reported in top journals and the mainstream media. Microarrays are the genotype-calling technology of choice in GWAS, as they permit exploration of more than a million single nucleotide polymorphisms (SNPs) simultaneously. The starting point for the statistical analyses used in GWAS to determine association between loci and disease are genotype calls (AA, AB, or BB). However, the raw data, microarray probe intensities, are heavily processed before arriving at these calls. Various sophisticated statistical procedures have been proposed for transforming raw data into genotype calls. We find that variability in microarray output quality across different SNPs, different arrays, and different sample batches has a substantial influence on the accuracy of genotype calls made by existing algorithms. By failing to account for these sources of variability, GWAS run the risk of adversely affecting the quality of reported findings. In this paper we present solutions based on a multi-level mixed model. A software implementation of the method described in this paper is available as free and open source code in the crlmm R/BioConductor package.
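
As a toy illustration of how array-quality variability propagates into genotype calls (this is not the crlmm algorithm or its multi-level mixed model), the sketch below clusters simulated log-ratio intensities for one SNP with a three-component Gaussian mixture and uses the posterior probability of the assigned cluster as a call-confidence score; samples from a noisier "batch" end up with visibly lower confidence.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)

# Simulate probe intensities for one SNP across 300 samples (toy data).
# True genotypes AA / AB / BB give log-ratios M = log2(A/B) near +2 / 0 / -2,
# with batch-dependent noise standing in for array-quality variability.
true_geno = rng.choice([0, 1, 2], size=300, p=[0.36, 0.48, 0.16])
centers = np.array([2.0, 0.0, -2.0])
noise_sd = np.where(rng.random(300) < 0.2, 0.8, 0.3)   # a "bad batch" of arrays
M = centers[true_geno] + rng.normal(0, noise_sd)

# Cluster the log-ratios with a 3-component Gaussian mixture and use the
# posterior probability of the assigned cluster as a call-confidence score.
gmm = GaussianMixture(n_components=3, random_state=0).fit(M.reshape(-1, 1))
post = gmm.predict_proba(M.reshape(-1, 1))
calls = post.argmax(axis=1)
confidence = post.max(axis=1)

low_conf = confidence < 0.95
print(f"{low_conf.sum()} of {len(M)} calls fall below 0.95 posterior confidence")
print(f"mean confidence, noisy arrays: {confidence[noise_sd > 0.5].mean():.3f}, "
      f"clean arrays: {confidence[noise_sd < 0.5].mean():.3f}")
```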