878 results for Uncertainty in generation
Abstract:
One of the main claims of the nonparametric model of random uncertainty introduced by Soize (2000) [3] is its ability to account for model uncertainty. The present paper investigates this claim by examining the statistics of natural frequencies, total energy and underlying dispersion equation yielded by the nonparametric approach for two simple systems: a thin plate in bending and a one-dimensional finite periodic mass-spring chain. Results for the plate show that the average modal density and the underlying dispersion equation of the structure are gradually and systematically altered with increasing uncertainty. The findings for the mass-spring chain corroborate those for the plate and show that the remote coupling of nonadjacent degrees of freedom induced by the approach suppresses the phenomenon of mode localization. This remote coupling also leads to an instantaneous response of all points in the chain when one mass is excited. In the light of these results, it is argued that the nonparametric approach can deal with a certain type of model uncertainty, in this case the presence of unknown terms of higher or lower order in the governing differential equation, but that certain expectations about the system, such as the average modal density, may conflict with these results. © 2012 Elsevier Ltd.
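For reference, the nominal (uncertainty-free) dispersion relation of such a one-dimensional mass-spring chain is the standard textbook result below, shown only as the baseline against which the reported alterations can be read, not as a result of the paper; masses m, spring stiffness k and lattice spacing a are assumed.

```latex
% Nominal dispersion relation of an ideal 1-D mass-spring chain
% (textbook baseline, not a result of the paper):
\omega(q) \;=\; 2\sqrt{\frac{k}{m}}\,\left|\sin\!\left(\frac{q a}{2}\right)\right|,
\qquad 0 \le q \le \frac{\pi}{a}.
```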
Abstract:
Coupled hydrology and water quality models are an important tool today, used in the understanding and management of surface water and watershed areas. Such problems are generally subject to substantial uncertainty in parameters, process understanding, and data. Component models, drawing on different data, concepts, and structures, are affected differently by each of these uncertain elements. This paper proposes a framework wherein the response of component models to their respective uncertain elements can be quantified and assessed, using a hydrological model and a water quality model as two exemplars. The resulting assessments can be used to identify model coupling strategies that permit more appropriate use and calibration of individual models, and a better overall coupled model response. One key finding was that an approximate balance of water quality and hydrological model responses can be obtained using both the QUAL2E and Mike11 water quality models. The balance point, however, does not support a particularly narrow response surface (or stringent calibration criteria) with respect to the water quality calibration data, at least in the case examined here. Additionally, it is clear from the results presented that the structural source of uncertainty is at least as significant as parameter-based uncertainties in areal models. © 2012 John Wiley & Sons, Ltd.
Abstract:
We consider the problem of matching model and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques which account for the geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs yet have worst-case exponential complexity in terms of the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications by demonstrating an upper bound on the complexity of the matching problem, and by offering insight into the nature of the matching problem itself. These insights prove useful in the solution to the matching problem in higher dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method, and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
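As a rough illustration of reasoning in the space of feasible transformation parameters (a simplified sketch, not the authors' algorithm, which handles richer transformations), the code below assumes a translation-only transformation and bounded box uncertainty on 2-D data features: each candidate (model, data) pair then constrains the feasible translation to an axis-aligned box, and a set of pairs is geometrically consistent exactly when those boxes share a common point.

```python
# Simplified illustration (not the paper's algorithm): translation-only
# matching with bounded box uncertainty on the data features.  Each
# candidate (model, data) pair constrains the feasible translation to an
# axis-aligned box; a set of pairs is consistent iff the boxes intersect.

def translation_box(model_pt, data_pt, eps):
    """Feasible translations mapping model_pt to within eps of data_pt."""
    (mx, my), (dx, dy) = model_pt, data_pt
    return (dx - mx - eps, dx - mx + eps, dy - my - eps, dy - my + eps)

def consistent(pairs, eps):
    """True if all (model, data) pairs admit a common translation."""
    lo_x, hi_x = float("-inf"), float("inf")
    lo_y, hi_y = float("-inf"), float("inf")
    for m, d in pairs:
        x0, x1, y0, y1 = translation_box(m, d, eps)
        lo_x, hi_x = max(lo_x, x0), min(hi_x, x1)
        lo_y, hi_y = max(lo_y, y0), min(hi_y, y1)
    return lo_x <= hi_x and lo_y <= hi_y

# Two pairs that agree on a translation of roughly (1, 1)
pairs = [((0.0, 0.0), (1.05, 0.95)), ((2.0, 1.0), (2.95, 2.02))]
print(consistent(pairs, eps=0.1))  # True
```

In this simplified setting the largest consistent set corresponds to a point of the parameter space covered by the most boxes, which can be found by sweeping over the box boundaries in polynomial time.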
Abstract:
This paper studies a problem of dynamic pricing faced by a retailer with limited inventory, uncertain about the demand rate model, aiming to maximize expected discounted revenue over an infinite time horizon. The retailer doubts his demand model, which is generated from historical data, and views it as an approximation. Uncertainty in the demand rate model is represented by a notion of generalized relative entropy process, and the robust pricing problem is formulated as a two-player zero-sum stochastic differential game. The pricing policy is obtained through the Hamilton-Jacobi-Isaacs (HJI) equation. The existence and uniqueness of the solution of the HJI equation are shown, and a verification theorem is proved to confirm that this solution is indeed the value function of the pricing problem. The results are illustrated by an example with an exponential nominal demand rate.
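For orientation, a schematic discounted HJI (Isaacs) equation for a max-min problem of this type is sketched below, with value function V, a discount rate, instantaneous revenue r(x,u) for the price control u, an adversarial model perturbation v penalised through a relative entropy term, and a controlled generator acting on V; this is a generic illustrative form only, not the paper's exact equation.

```latex
% Schematic HJI equation for a discounted robust (max-min) control problem;
% generic illustrative form, not the paper's exact Hamiltonian.
\rho\, V(x) \;=\; \sup_{u}\,\inf_{v}\,
\Big\{\, r(x,u) \;+\; \theta\,\mathcal{R}(v) \;+\; \mathcal{L}^{u,v} V(x) \,\Big\}.
```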
Abstract:
Economic and environmental load dispatch aims to determine the amount of electricity generated from power plants to meet load demand while minimizing fossil fuel costs and air pollution emissions subject to operational and licensing requirements. These two scheduling problems are commonly formulated with non-smooth cost functions that account for various effects and constraints, such as the valve-point effect, power balance and ramp rate limits. The expected increase in plug-in electric vehicles is likely to have a significant impact on the power system, owing to high charging power consumption and significant uncertainty in charging times. In this paper, multiple electric vehicle charging profiles are comparatively integrated into a 24-hour load demand in an economic and environmental dispatch model. Self-learning teaching-learning based optimization (TLBO) is employed to solve the non-convex non-linear dispatch problems. Numerical results on well-known benchmark functions, as well as on test systems with different scales of generation units, show the significance of the new scheduling method.
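For readers unfamiliar with TLBO, a minimal sketch of the basic teacher and learner phases is given below for a box-constrained minimisation problem; the population size, iteration count and toy objective are placeholders, and the paper's self-learning variant adds mechanisms not shown here.

```python
import numpy as np

def tlbo(objective, lower, upper, pop_size=30, iters=100, seed=0):
    """Basic teaching-learning-based optimization for box-constrained minimisation."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(iters):
        # Teacher phase: move learners towards the current best solution
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)  # teaching factor, 1 or 2
        cand = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean),
                       lower, upper)
        cand_fit = np.array([objective(x) for x in cand])
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]

        # Learner phase: each learner interacts with a randomly chosen peer
        for i in range(pop_size):
            j = int(rng.integers(pop_size))
            if j == i:
                continue
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand_i = np.clip(pop[i] + rng.random(dim) * step, lower, upper)
            f_i = objective(cand_i)
            if f_i < fit[i]:
                pop[i], fit[i] = cand_i, f_i

    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Toy quadratic standing in for a (much more complex) dispatch cost function
x_best, f_best = tlbo(lambda x: float(np.sum((x - 1.0) ** 2)),
                      lower=np.array([-5.0, -5.0]), upper=np.array([5.0, 5.0]))
```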
Abstract:
Increasing installed capacities of wind power in an effort to achieve sustainable power systems for future generations pose problems for system operators. Volatility in generation volumes due to the adoption of stochastic wind power is increasing. Storage has been shown to act as a buffer for these stochastic energy sources, facilitating the integration of renewable energy into a historically inflexible power system. This paper examines peak and off-peak benefits realised by installing a short-term discharge storage unit in a system with a high penetration of wind power in 2020. A fully representative unit commitment and economic dispatch model is used to analyse two scenarios, one ‘with storage’ and one ‘without storage’. Key findings of this preliminary study show that wind curtailment can be reduced in the storage scenario, with a larger reduction in peak-time ramping of gas generators also realised.
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next generation all-sky surveys and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 x 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
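A minimal sketch of this classification set-up is shown below; the random arrays stand in for the flattened 20 x 20 pixel stamps and their labels, and the forest size and random seeds are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Placeholder data: flattened 20 x 20 pixel stamps and real/bogus labels
n_candidates = 32000
X = np.random.rand(n_candidates, 20 * 20)
y = np.random.randint(0, 2, n_candidates)  # 1 = real, 0 = bogus

# Hold out 25 per cent of the labelled candidates for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Missed detection rate at a fixed 1 per cent false positive rate
scores = clf.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)
missed_detection_rate = 1.0 - np.interp(0.01, fpr, tpr)
```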
Abstract:
Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering.
Abstract:
Previous research has shown that often there is clear inertia in individual decision making, that is, a tendency for decision makers to choose a status quo option. I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. I find that inertia is quite large and that both mechanisms are equally important.
Abstract:
Crop production is inherently sensitive to fluctuations in weather and climate and is expected to be impacted by climate change. To understand how this impact may vary across the globe, many studies have been conducted to determine the change in yield of several crops in response to expected changes in climate. Changes in climate are typically derived from only one, or at most a few, General Circulation Models (GCMs). This study examines the uncertainty introduced to a crop impact assessment when 14 GCMs are used to determine future climate. The General Large Area Model for annual crops (GLAM) was applied over a global domain to simulate the productivity of soybean and spring wheat under baseline climate conditions and under climate conditions consistent with the 2050s under the A1B SRES emissions scenario as simulated by 14 GCMs. Baseline yield simulations were evaluated against global country-level yield statistics to determine the model's ability to capture observed variability in production. The impact of climate change varied between crops, regions, and by GCM. The spread in yield projections due to GCM varied between no change and a reduction of 50%. Without adaptation, yield response was linearly related to the magnitude of local temperature change. Therefore, impacts were greatest for countries at northernmost latitudes where warming is predicted to be greatest. However, these countries also exhibited the greatest potential for adaptation to offset yield losses by shifting the crop growing season to a cooler part of the year and/or switching crop variety to take advantage of an extended growing season. The relative magnitude of impacts as simulated by each GCM was not consistent across countries and between crops. It is important, therefore, for crop impact assessments to fully account for GCM uncertainty in estimating future climates and to be explicit about assumptions regarding adaptation.
Abstract:
We consider the impact of data revisions on the forecast performance of a SETAR regime-switching model of U.S. output growth. The impact of data uncertainty in real-time forecasting will affect a model's forecast performance both through its effect on the model parameter estimates and through the forecast being conditioned on data measured with error. We find that benchmark revisions do affect the performance of the non-linear model of the growth rate, and that the performance relative to a linear comparator deteriorates in real time compared to a pseudo out-of-sample forecasting exercise.
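For reference, a generic two-regime SETAR specification for the growth rate y_t takes the form below, with threshold r, delay d and regime-specific autoregressive parameters; the lag order and estimates used in the paper are not reproduced here.

```latex
% Generic two-regime SETAR(2; p) model with threshold r and delay d
% (illustrative form; not the paper's estimated specification).
y_t \;=\;
\begin{cases}
\phi_{0}^{(1)} + \sum_{i=1}^{p} \phi_{i}^{(1)} y_{t-i} + \varepsilon_t^{(1)},
  & y_{t-d} \le r,\\[4pt]
\phi_{0}^{(2)} + \sum_{i=1}^{p} \phi_{i}^{(2)} y_{t-i} + \varepsilon_t^{(2)},
  & y_{t-d} > r.
\end{cases}
```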
Abstract:
With a rapidly increasing fraction of electricity generation being sourced from wind, extreme wind power generation events, such as prolonged periods of low (or high) generation and ramps in generation, are a growing concern for the efficient and secure operation of national power systems. As extreme events occur infrequently, long and reliable meteorological records are required to accurately estimate their characteristics. Recent publications have begun to investigate the use of global meteorological “reanalysis” data sets for power system applications, many of which focus on long-term average statistics such as monthly-mean generation. Here we demonstrate that reanalysis data can also be used to estimate the frequency of relatively short-lived extreme events (including ramping on sub-daily time scales). Verification against 328 surface observation stations across the United Kingdom suggests that near-surface wind variability over spatiotemporal scales greater than around 300 km and 6 h can be faithfully reproduced using reanalysis, with no need for costly dynamical downscaling. A case study is presented in which a state-of-the-art, 33 year reanalysis data set (MERRA, from NASA-GMAO), is used to construct an hourly time series of nationally-aggregated wind power generation in Great Britain (GB), assuming a fixed, modern distribution of wind farms. The resultant generation estimates are highly correlated with recorded data from National Grid in the recent period, both for instantaneous hourly values and for variability over time intervals greater than around 6 h. This 33 year time series is then used to quantify the frequency with which different extreme GB-wide wind power generation events occur, as well as their seasonal and inter-annual variability. Several novel insights into the nature of extreme wind power generation events are described, including (i) that the number of prolonged low or high generation events is well approximated by a Poisson-like random process, and (ii) that, whilst in general there is large seasonal variability, the magnitude of the most extreme ramps is similar in both summer and winter. An up-to-date version of the GB case study data as well as the underlying model are freely available for download from our website: http://www.met.reading.ac.uk/~energymet/data/Cannon2014/.
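As a minimal sketch of how such extreme-event statistics can be extracted from an hourly generation series, the function below counts prolonged periods in which the nationally aggregated capacity factor stays below a threshold; the threshold, minimum duration and synthetic data are illustrative assumptions, not the definitions used in the paper.

```python
import numpy as np

def prolonged_low_events(capacity_factor, threshold=0.05, min_hours=24):
    """Count runs of at least min_hours consecutive hours below threshold."""
    events, run = 0, 0
    for value in capacity_factor:
        run = run + 1 if value < threshold else 0
        if run == min_hours:  # count each qualifying run exactly once
            events += 1
    return events

# Synthetic hourly series standing in for a 33 year aggregated record
hours = 33 * 365 * 24
cf = np.clip(np.random.beta(2, 5, size=hours), 0.0, 1.0)
print(prolonged_low_events(cf, threshold=0.05, min_hours=24))
```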
Abstract:
Debate over the late Quaternary megafaunal extinctions has focussed on whether human colonisation or climatic changes were more important drivers of extinction, with few extinctions being unambiguously attributable to either. Most analyses have been geographically or taxonomically restricted and the few quantitative global analyses have been limited by coarse temporal resolution or overly simplified climate reconstructions or proxies. We present a global analysis of the causes of these extinctions which uses high-resolution climate reconstructions and explicitly investigates the sensitivity of our results to uncertainty in the palaeological record. Our results show that human colonisation was the dominant driver of megafaunal extinction across the world but that climatic factors were also important. We identify the geographic regions where future research is likely to have the most impact, with our models reliably predicting extinctions across most of the world, with the notable exception of mainland Asia where we fail to explain the apparently low rate of extinction found in the fossil record. Our results are highly robust to uncertainties in the palaeological record, and our main conclusions are unlikely to change qualitatively following minor improvements or changes in the dates of extinctions and human colonisation.
Abstract:
While there is an extensive and still growing body of literature on women in academia and the challenges they encounter in career progression, there is little research on their experience specifically within a business school setting. In this study, we attempt to address this gap and examine the experiences and career development of female academics in a business school and how these are impacted by downsizing programmes. To this end, an exploratory case study is conducted. The findings of this study show that female business school academics experience numerous challenges in terms of promotion and development, networking, and the multiple and conflicting demands placed upon them. As a result, the lack of visibility seems to be a pertinent issue in terms of their career progression. Our data also demonstrate that, paradoxically, during periods of downsizing women become more visible and thus vulnerable to layoffs as a consequence of the challenges and pressures created in their environment during this process. In this paper, we argue that this heightened visibility, and being subject to possible layoffs, further reproduces inequality regimes in academia.
Abstract:
Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for random effects have been little investigated before. In some applications with such problems, an expectation method has been used for simplicity. This method, however, does not incorporate the extra information about uncertainty in the design matrix. A closed-form solution for this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components in the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas it was highly underestimated by the expectation method. By introducing heuristic search and optimization methods, the algorithm can possibly be developed to infer the 'model-based' best design matrix and the corresponding best estimates.