910 results for power generation forecasting
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
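The single-response models this abstract refers to typically reduce to first-order kinetics with an Arrhenius temperature dependence. A minimal sketch of that idea follows; the rate constant, activation energy and temperatures in the example are invented for illustration, not taken from the paper:

```python
import math

def first_order_yield(k_ref, ea, t_ref, temp, time):
    """Fraction of precursor converted after `time` (min) at temperature
    `temp` (K), with a first-order rate constant rescaled from a reference
    temperature `t_ref` via the Arrhenius equation."""
    r = 8.314  # gas constant, J/(mol K)
    k = k_ref * math.exp(-ea / r * (1.0 / temp - 1.0 / t_ref))
    return 1.0 - math.exp(-k * time)

# Illustrative numbers only: k_ref = 0.01 / min at 100 °C, Ea = 100 kJ/mol.
y_100 = first_order_yield(0.01, 100_000, 373.15, 373.15, 60)
y_120 = first_order_yield(0.01, 100_000, 373.15, 393.15, 60)
```

Multi-response models chain many such steps with shared intermediates, which is what makes the parameter estimation problem hard.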
Abstract:
Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective care, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, capturing knowledge from the syntactic, semantic and pragmatic levels through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes is best tackled by analyzing stakeholders, which we treat as social agents, together with their goals and patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which that behavior will occur.
To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CPs generated from SAM together with norms enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, of the ultimate power of decision making in exceptional circumstances.
Abstract:
In this paper, we examine the temporal stability of the evidence for two commodity futures pricing theories. We investigate whether the forecast power of commodity futures can be attributed to the extent to which they exhibit seasonality and we also consider whether there are time varying parameters or structural breaks in these pricing relationships. Compared to previous studies, we find stronger evidence of seasonality in the basis, which supports the theory of storage. The power of the basis to forecast subsequent price changes is also strengthened, while results on the presence of a risk premium are inconclusive. In addition, we show that the forecasting power of commodity futures cannot be attributed to the extent to which they exhibit seasonality. We find that in most cases where structural breaks occur, only changes in the intercepts and not the slopes are detected, illustrating that the forecast power of the basis is stable over different economic environments.
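The forecast-power test described in this abstract comes down to regressing subsequent price changes on the current basis and checking the slope. A hand-rolled sketch on synthetic data follows; the true coefficient of 0.5, the noise level and the sample size are invented for illustration:

```python
import random
import statistics

random.seed(0)

def ols(x, y):
    """Plain OLS for the forecast regression  dp(t+1) = a + b * basis(t) + e.
    A significant slope b is the evidence of forecast power in the basis."""
    xbar, ybar = statistics.mean(x), statistics.mean(y)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return ybar - b * xbar, b

# Synthetic data where the basis genuinely forecasts the price change.
basis = [random.gauss(0, 1) for _ in range(200)]
price_change = [0.5 * b_t + random.gauss(0, 0.1) for b_t in basis]
intercept, slope = ols(basis, price_change)
```

The structural-break finding in the abstract corresponds to letting the intercept (and optionally the slope) shift across subsamples and testing whether the shifts are significant.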
Abstract:
Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons for each being the data-generation process (DGP). We examine incorrect choice between these models for forecasting for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis, to evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models for both DGPs, and investigate autocorrelated errors, so both models can better approximate the converse DGP. The outcomes are surprisingly different from established results.
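One of the Monte Carlo exercises can be sketched in miniature: take the DS process (a random walk with drift) as the DGP, forecast one step ahead with the correctly specified DS model and with a linear trend fitted to the first half of each sample, and compare mean-square forecast errors. Sample sizes, the drift, and the replication count are invented for illustration:

```python
import random
import statistics

random.seed(0)

def simulate_ds(n, drift=0.2, sigma=1.0):
    """Difference-stationary DGP: a random walk with drift."""
    y = [0.0]
    for _ in range(n):
        y.append(y[-1] + drift + random.gauss(0, sigma))
    return y

def msfe_pair(y, drift=0.2):
    """One-step-ahead squared forecast errors over the second half of the
    sample: DS model (known drift) vs. a TS trend line fitted by OLS to
    the first half and extrapolated."""
    h = len(y) // 2
    t = range(h)
    tbar = (h - 1) / 2
    ybar = statistics.mean(y[:h])
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
            sum((ti - tbar) ** 2 for ti in t)
    intercept = ybar - slope * tbar
    ds = [(y[i + 1] - (y[i] + drift)) ** 2 for i in range(h, len(y) - 1)]
    ts = [(y[i + 1] - (intercept + slope * (i + 1))) ** 2
          for i in range(h, len(y) - 1)]
    return statistics.mean(ds), statistics.mean(ts)

pairs = [msfe_pair(simulate_ds(200)) for _ in range(200)]
ds_msfe = statistics.mean(p[0] for p in pairs)
ts_msfe = statistics.mean(p[1] for p in pairs)
```

Under this DGP the DS forecast error is just the innovation, so its MSFE sits near the innovation variance, while the extrapolated trend drifts away from the random walk.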
Abstract:
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data.
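A two-regime SETAR data-generating process of the kind used in such simulations can be sketched as follows, together with a linear AR(1) fitted to the simulated series; the regime coefficients, threshold and sample size are invented for illustration:

```python
import random

random.seed(1)

def simulate_setar(n, phi_low=0.9, phi_high=0.2, threshold=0.0, sigma=1.0):
    """Two-regime SETAR DGP: the autoregressive coefficient switches
    according to whether the previous value is below or above the
    threshold (hence 'self-exciting')."""
    y = [0.0]
    for _ in range(n):
        phi = phi_low if y[-1] <= threshold else phi_high
        y.append(phi * y[-1] + random.gauss(0, sigma))
    return y

def fit_ar1(y):
    """OLS slope of a linear AR(1), the misspecified competitor."""
    num = sum(a * b for a, b in zip(y[:-1], y[1:]))
    den = sum(a * a for a in y[:-1])
    return num / den

series = simulate_setar(2000)
phi_hat = fit_ar1(series)
```

The fitted linear coefficient is a variance-weighted blend of the two regime coefficients, which is why point forecasts from the linear model can look competitive even when the DGP is non-linear.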
Abstract:
We evaluate the predictive power of leading indicators for output growth at horizons up to 1 year. We use the MIDAS regression approach as this allows us to combine multiple individual leading indicators in a parsimonious way and to directly exploit the information content of the monthly series to predict quarterly output growth. When we use real-time vintage data, the indicators are found to have significant predictive ability, and this is further enhanced by the use of monthly data on the quarter at the time the forecast is made.
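The parsimony of MIDAS comes from tying the many monthly lag coefficients to a low-dimensional weight function; a common choice is the exponential Almon polynomial. A sketch follows, with parameter values invented for illustration:

```python
import math

def exp_almon_weights(n_lags, theta1=0.1, theta2=-0.05):
    """Exponential Almon lag weights used in MIDAS regressions:
    w(k) proportional to exp(theta1*k + theta2*k^2), normalised to sum
    to one, so any number of monthly lags costs only two parameters."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

def midas_aggregate(monthly, weights):
    """Collapse the most recent monthly observations (newest last) into a
    single regressor for the quarterly regression."""
    return sum(w * x for w, x in zip(weights, reversed(monthly)))

w = exp_almon_weights(6)
```

The two theta parameters are then estimated jointly with the regression coefficients, rather than estimating one free coefficient per monthly lag.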
Abstract:
This paper examines the predictability of real estate asset returns using a number of time series techniques. A vector autoregressive model, which incorporates financial spreads, is able to improve upon the out-of-sample forecasting performance of univariate time series models at a short forecasting horizon. However, as the forecasting horizon increases, the explanatory power of such models is reduced, so that returns on real estate assets are best forecast using the long-term mean of the series. In the case of indirect property returns, such short-term forecasts can be turned into a trading rule that can generate excess returns over a buy-and-hold strategy gross of transaction costs, although none of the trading rules developed could cover the associated transaction costs. It is therefore concluded that such forecastability is entirely consistent with stock market efficiency.
Abstract:
Flash floods pose a significant danger for life and property. Unfortunately, in arid and semiarid environments the runoff generation shows a complex non-linear behavior with a strong spatial and temporal non-uniformity. As a result, the predictions made by physically-based simulations in semiarid areas are subject to great uncertainty, and a failure in the predictive behavior of existing models is common. Thus better descriptions of physical processes at the watershed scale need to be incorporated into the hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations originated through concurrent hydrological processes within a storm event was significant on the watershed scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations.
The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt the flood forecasting of a stratiform event with highly different behavior.
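The GLUE inference referred to above can be sketched in a few lines: sample parameter sets from a prior, score each simulation with a likelihood measure such as Nash-Sutcliffe efficiency, and keep only the "behavioural" sets. The toy one-parameter model, prior range and threshold below are illustrative, not taken from the paper:

```python
import random

random.seed(0)

def glue(observed, simulate, prior_samples, threshold=0.3):
    """GLUE sketch: keep behavioural parameter sets whose Nash-Sutcliffe
    efficiency (NSE) exceeds the threshold, weighted by that likelihood."""
    mean_obs = sum(observed) / len(observed)
    denom = sum((o - mean_obs) ** 2 for o in observed)
    kept = []
    for theta in prior_samples:
        sim = simulate(theta)
        nse = 1 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if nse > threshold:
            kept.append((theta, nse))
    total = sum(l for _, l in kept)
    return [(theta, l / total) for theta, l in kept]

# Toy "hydrograph": runoff proportional to a single unknown parameter.
observed = [2.0 * t for t in range(1, 11)]
posterior = glue(observed,
                 lambda th: [th * t for t in range(1, 11)],
                 [random.uniform(1.0, 3.0) for _ in range(200)])
```

Conditioning on nested hydrographs, as the abstract describes, simply means computing the likelihood measure against several gauges at once.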
Abstract:
Wind generation's contribution to supporting peak electricity demand is one of the key questions in wind integration studies. Differently from conventional units, the available outputs of different wind farms cannot be approximated as being statistically independent, and hence near-zero wind output is possible across an entire power system. This paper will review the risk model structures currently used to assess wind's capacity value, along with discussion of the resulting data requirements. A central theme is the benefits from performing statistical estimation of the joint distribution for demand and available wind capacity, focusing attention on uncertainties due to limited histories of wind and demand data; examination of Great Britain data from the last 25 years shows that the data requirements are greater than generally thought. A discussion is therefore presented into how analysis of the types of weather system which have historically driven extreme electricity demands can help to deliver robust insights into wind's contribution to supporting demand, even in the face of such data limitations. The role of the form of the probability distribution for available conventional capacity in driving wind capacity credit results is also discussed.
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform equally or better than non-linear models with or without the frequently used wind-to-power transformation.
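The power-to-wind transformation described above amounts to inverting the monotone section of the manufacturer's power curve, while flagging the censored cases at zero and nominal power. A sketch with an entirely made-up curve:

```python
import bisect

# Hypothetical manufacturer's power curve: wind speed (m/s) -> power (kW).
# A real curve would come from the turbine's data sheet.
SPEEDS = [0, 3, 5, 7, 9, 11, 13, 25]
POWERS = [0, 0, 150, 600, 1300, 1900, 2000, 2000]  # 2000 kW nominal

def power_to_wind(p):
    """Map observed power back to an equivalent wind speed via the inverse
    power curve (linear interpolation on the monotone section). Values at
    the limits are censored: only known to be at or beyond the bound."""
    if p <= 0:
        return SPEEDS[1]    # left-censored at the cut-in speed
    if p >= POWERS[-1]:
        return SPEEDS[-2]   # right-censored at the rated speed
    i = bisect.bisect_left(POWERS, p)
    p0, p1 = POWERS[i - 1], POWERS[i]
    s0, s1 = SPEEDS[i - 1], SPEEDS[i]
    return s0 + (p - p0) / (p1 - p0) * (s1 - s0)
```

A censored linear regression (e.g. a Tobit-style likelihood) would then be fitted on these transformed values, treating the flagged points as bounds rather than exact observations, which is the combination the study finds competitive with non-linear models.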
Abstract:
Thermal generation is a vital component of mature and reliable electricity markets. As the share of renewable electricity in such markets grows, so too do the challenges associated with its variability. Proposed solutions to these challenges typically focus on alternatives to primary generation, such as energy storage, demand side management, or increased interconnection. Less attention is given to the demands placed on conventional thermal generation or its potential for increased flexibility. However, for the foreseeable future, conventional plants will have to operate alongside new renewables and have an essential role in accommodating increasing supply-side variability. This paper explores the role that conventional generation has to play in managing variability through the sub-system case study of Northern Ireland, identifying the significance of specific plant characteristics for reliable system operation. Particular attention is given to the challenges of wind ramping and the need to avoid excessive wind curtailment. Potential for conflict is identified with the role for conventional plant in addressing these two challenges. Market specific strategies for using the existing fleet of generation to reduce the impact of renewable resource variability are proposed, and wider lessons from the approach taken are identified.
Abstract:
The different parameters used for the photoactivation process produce changes in the degree of conversion (DC%) and temperature rise (TR) of composite resins. Thus, the purpose of this study was to evaluate the DC (%) and TR of a microhybrid composite resin photoactivated by a new-generation LED. For the KBr pellet technique, the composite resin was placed into a metallic mould (1-mm thickness and 4-mm diameter) and photoactivated as follows: continuous LED LCU with different power density values (50-1000 mW/cm(2)). The measurements for the DC (%) were made in an FTIR Spectrometer Bomen (model MB-102, Quebec, Canada). The spectroscopy (FTIR) spectra for both uncured and cured samples were analyzed using an accessory for diffuse reflectance. The measurements were recorded in absorbance under the following conditions: 32 scans, 4-cm(-1) resolution, and a 300 to 4000-cm(-1) wavelength range. The percentage of unreacted carbon-carbon double bonds (% C=C) was determined from the ratio of the absorbance intensities of aliphatic C=C (peak at 1638 cm(-1)) against an internal standard before and after the curing of the specimen: aromatic C-C (peak at 1608 cm(-1)). For the TR, the samples were made in a metallic mould (2-mm thickness and 4-mm diameter) and photoactivated for 5, 10, and 20 s. The thermocouple was attached to a multimeter to allow the temperature readings. The DC (%) and TR were calculated by the standard technique and submitted to ANOVA and Tukey's test (p < 0.05). The degree of conversion values varied from 35.0 (+/- 1.3) to 45.0 (+/- 2.4) for 5 s, 45.0 (+/- 1.3) to 55.0 (+/- 2.4) for 10 s, and 47.0 (+/- 1.3) to 52.0 (+/- 2.4) for 20 s. For the TR, the values ranged from 0.3 (+/- 0.01) to 5.4 (+/- 0.11) degrees C for 5 s, from 0.5 (+/- 0.02) to 9.3 (+/- 0.28) degrees C for 10 s, and from 1.0 (+/- 0.06) to 15.0 (+/- 0.95) degrees C for 20 s.
The power densities and irradiation times showed a significant effect on the degree of conversion and temperature rise.
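The "standard technique" for the degree of conversion is the drop in the aliphatic C=C absorbance, normalised against the aromatic internal standard, between uncured and cured spectra. A sketch follows; the absorbance values in the example are invented, chosen only to land inside the 35-55% range the abstract reports:

```python
def degree_of_conversion(aliphatic_cured, aromatic_cured,
                         aliphatic_uncured, aromatic_uncured):
    """FTIR ratio technique: aliphatic C=C absorbance (1638 cm-1) is
    normalised against the aromatic internal standard (1608 cm-1) before
    and after curing; DC% is the percentage drop in that ratio."""
    r_cured = aliphatic_cured / aromatic_cured
    r_uncured = aliphatic_uncured / aromatic_uncured
    return (1.0 - r_cured / r_uncured) * 100.0

# Hypothetical absorbances: the normalised C=C peak drops from 1.00 to 0.55.
dc = degree_of_conversion(0.55, 1.0, 1.0, 1.0)
```

The internal standard cancels out thickness and sampling differences between the two spectra, which is why the raw 1638 cm(-1) intensities are never compared directly.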
Abstract:
Electricité de France (EDF) is a leading player in the European energy market, being both the largest electricity producer in Europe and the world's leading nuclear plant operator. EDF is also the largest electricity producer and supplier in France. However, Europe, EDF's core market, is currently underperforming: the European sovereign debt crisis is significantly lowering the growth prospects of an energy market that has already reached maturity. As a consequence, European energy companies are now looking at international markets and especially the BRIC economies, where economic growth potential remains high. Among them, Brazil is expected to keep its strong economic and electricity demand growth prospects for the coming decades. Though Brazil has not been considered a strategic priority for EDF since the Light reversal in 2006, the current economic situation has led the Group to reconsider its position toward the country. EDF's current presence in Brazil is limited to its stake in UTE Norte Fluminense, a thermal plant located in the state of Rio de Janeiro. This report investigates the possibility and feasibility of expanding EDF's activities in Brazil and the added value this could bring to the Brazilian power market. Considering that the status quo would not allow EDF to take full advantage of Brazil's future growth, this work identifies the various options currently open to EDF: market exit, status quo, EDF alone, or a local partner. For that purpose, this study collects and analyses the latest energy market data as well as generation companies' information necessary to give a relevant overview of the current Brazilian power sector and to present EDF's strategic options for the country.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The paper describes a novel neural model for electrical load forecasting in transformers. The network acts as an identifier of structural features of the forecasting process, so that output parameters can be estimated and generalized from a set of input parameters. The model was trained and assessed on load data extracted from a Brazilian electric utility, taking into account time, current, voltage, and active power in the three phases of the system. The results obtained in the simulations show that the developed technique can be used as an alternative tool for the planning of electric power systems.
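The abstract does not specify the network architecture, so the sketch below is only a generic stand-in: a one-hidden-layer network trained by plain gradient descent on a toy daily-load curve. The single input feature, the sinusoidal "load" target, and all hyperparameters are invented for illustration:

```python
import math
import random

random.seed(2)

def train_mlp(samples, n_hidden=4, lr=0.1, epochs=2000):
    """Minimal one-hidden-layer tanh network, trained sample-by-sample
    with gradient descent on squared error. Returns a predict function."""
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
          for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in samples:
            h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            out = sum(wi * hi for wi, hi in zip(w2, h)) + b2
            err = out - y
            # backpropagate the squared-error gradient
            for j in range(n_hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return sum(wi * hi for wi, hi in zip(w2, h)) + b2
    return predict

# Toy "load" samples: one hour-of-day feature; a real model would also use
# current, voltage and three-phase active power, as the paper lists.
data = [((t / 24.0,), 0.5 + 0.4 * math.sin(2 * math.pi * t / 24))
        for t in range(24)]
model = train_mlp(data)
```

The point of the sketch is only the shape of the approach (features in, load estimate out, supervised training on historical utility data), not the paper's actual model.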