969 results for forecasting model


Relevance:

70.00%

Publisher:

Abstract:

Electricity price forecasting has become an important area of research in the aftermath of the worldwide deregulation of the power industry, which launched competitive electricity markets that now embrace all market participants, including generation and retail companies, transmission network providers, and market managers. Driven by the needs of the market, a variety of approaches to forecasting day-ahead electricity prices have been proposed over the last decades. Most of the existing approaches, however, are reasonably effective for prices in the normal range but disregard price spike events, which are caused by a number of complex factors and occur during periods of market stress. In early research, price spikes were truncated before the forecasting model was applied, to reduce the influence of such observations on the estimation of the model parameters; otherwise, very large forecast errors would be generated on price spike occasions. Electricity price spikes, however, are significant for energy market participants seeking to stay competitive. Accurate price spike forecasting is important for generation companies, to bid strategically into the market and to manage their assets optimally; for retail companies, since they cannot pass the spikes on to final customers; and for market managers, to provide better management and planning for the energy market. This doctoral thesis aims at deriving a methodology able to accurately predict not only day-ahead electricity prices within the normal range but also the price spikes. The Finnish day-ahead energy market of Nord Pool Spot is selected as the case market, and its structure is studied in detail. It is almost universally agreed in the forecasting literature that no single method is best in every situation. Since real-world problems are often complex in nature, no single model can capture all patterns equally well.
Therefore, a hybrid methodology that enhances the modeling capabilities appears to be a productive strategy for practical use in electricity price prediction. The price forecasting methodology is realized as a hybrid model applied to price forecasting in the Finnish day-ahead energy market. An iterative search procedure within the methodology is developed to tune the model parameters and to select the optimal input set of explanatory variables. The numerical studies show that the proposed methodology is more accurate than all other examined methods recently applied to case studies of energy markets in different countries. The obtained results provide extensive and useful information for participants of the day-ahead energy market, who have limited and uncertain information for price prediction when setting up an optimal short-term operation portfolio. Although the focus of this work is primarily on the Finnish price area of Nord Pool Spot, the results suggest that the same methodology is likely to give good results when forecasting prices on the energy markets of other countries.
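As an illustration of the input-selection step of such an iterative search, the sketch below runs a greedy forward search that repeatedly adds the explanatory variable giving the largest drop in validation error. The least-squares stand-in model, the MAPE criterion, and all variable names are illustrative assumptions, not the thesis's actual hybrid model.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, a common price-forecast metric."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def greedy_input_selection(X_tr, y_tr, X_va, y_va, names):
    """Greedy forward selection: repeatedly add the explanatory variable
    that most reduces the validation MAPE of a least-squares fit."""
    selected, remaining, best_err = [], list(range(X_tr.shape[1])), np.inf
    improved = True
    while improved and remaining:
        improved, best_j = False, None
        for j in remaining:
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
            err = mape(y_va, X_va[:, cols] @ coef)
            if err < best_err:
                best_err, best_j, improved = err, j, True
        if improved:
            selected.append(best_j)
            remaining.remove(best_j)
    return [names[j] for j in selected], best_err
```

In practice the candidate set would hold lagged prices, load and capacity variables, and the inner model would be the tuned hybrid forecaster rather than a plain least-squares fit.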

Relevance:

70.00%

Publisher:

Abstract:

The objective of this Master’s thesis is to develop a model that estimates net working capital (NWC) monthly over a one-year period. The study is conducted as constructive research built around a case study; the estimation model is designed for the needs of a case company that operates in project business. The net working capital components are linked together by an automatic model and estimated individually, including advanced components of NWC such as POC (percentage-of-completion) receivables. The net working capital estimation model of this study contains three parts: an output template, an input template, and a calculation model. The output template picks up estimate values automatically from the input template and the calculation model. Estimate values of the more stable NWC components are entered manually into the input template. The calculation model obtains estimate values for the major components automatically from the company's systems, using historical data and existing plans. A precondition for the estimation to work is that sales are estimated over the full one-year period, because sales are linked to all NWC components.
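The sales linkage described above can be sketched as a simple percent-of-sales roll-forward: sales-driven components follow historical component-to-sales ratios, while stable components are entered manually. Component names, ratios, and figures below are illustrative, not the case company's actual model.

```python
def estimate_nwc(sales_forecast, sales_ratios, manual_components):
    """Monthly NWC estimate over the forecast horizon.
    sales_forecast: list of monthly sales estimates
    sales_ratios: {component: ratio-to-sales}; liabilities such as
        payables carry a negative sign so they reduce NWC
    manual_components: {component: list of monthly manual estimates}"""
    nwc = []
    for month, sales in enumerate(sales_forecast):
        automatic = sum(ratio * sales for ratio in sales_ratios.values())
        manual = sum(values[month] for values in manual_components.values())
        nwc.append(automatic + manual)
    return nwc
```

The automatic part scales with the sales estimate each month, which is why the one-year sales forecast is a precondition for the whole calculation.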

Relevance:

70.00%

Publisher:

Abstract:

The cement industry ranks second in energy consumption among industries in India. It is one of the major emitters of CO2, owing to fossil fuel combustion and the calcination process. As the huge amount of CO2 emissions causes severe environmental problems, the efficient and effective utilization of energy is a major concern in the Indian cement industry. The main objective of the research work is to assess the energy consumption and energy conservation of the Indian cement industry and to predict future trends in cement production and in the reduction of CO2 emissions. To achieve this objective, a detailed energy and exergy analysis of a typical cement plant in Kerala was carried out. Data on fuel usage, electricity consumption, and the amount of clinker and cement production were also collected from a few selected cement industries in India for the period 2001 - 2010, and the CO2 emissions were estimated. A complete decomposition method was used to analyse the change in CO2 emissions during the period 2001 - 2010, categorising the cement industries according to their specific thermal energy consumption. A basic forecasting model for the cement production trend was developed using the system dynamics approach, and the model was validated with the data collected from the selected cement industries. The cement production and CO2 emissions from the industries were also predicted with 2010 as the base year. A sensitivity analysis of the forecasting model was conducted and found satisfactory. The model was then extended to the total cement production in India to predict cement production and CO2 emissions for the next 21 years under three different scenarios. The parameters that influence CO2 emissions, such as population and GDP growth rate, cement demand and production, clinker consumption, and energy utilization, are incorporated in these scenarios. The existing growth rates of population and cement production in the year 2010 were used in the baseline scenario.
In scenario-1 (S1) the population growth rate was assumed to decrease gradually and reach zero by the year 2030, while in scenario-2 (S2) a faster decline was assumed, such that zero growth is achieved in the year 2020. The mitigation strategies for reducing CO2 emissions from cement production were identified and analyzed in the energy management scenario. The energy and exergy analysis of the raw mill of the cement plant revealed that the exergy utilization was worse than the energy utilization. The energy analysis of the kiln system showed that around 38% of the heat energy is wasted through the exhaust gases of the preheater and the cooler of the kiln system. This could be recovered by a waste heat recovery system. A secondary insulation shell was also recommended for the kiln in order to prevent heat loss and enhance the efficiency of the plant. The decomposition analysis of the change in CO2 emissions during 2001 - 2010 showed that the activity effect was the main driver of CO2 emissions in the cement industries, since it depends directly on the economic growth of the country. The forecasting model showed that CO2 emissions reductions of 15.22% and 29.44% can be achieved by the year 2030 in scenario-1 (S1) and scenario-2 (S2), respectively. In analysing the energy management scenario, it was assumed that 25% of the electrical energy supplied to the cement plants is replaced by renewable energy. The analysis revealed that the recovery of waste heat and the use of renewable energy could reduce CO2 emissions by 7.1% in the baseline scenario, 10.9% in scenario-1 (S1), and 11.16% in scenario-2 (S2) in 2030. A combined scenario, considering population stabilization by the year 2020, a 25% contribution from renewable energy sources in the cement industry, and 38% of thermal energy recovered from waste heat streams, shows that CO2 emissions from the Indian cement industry could be reduced by nearly 37% in the year 2030.
This would remove a substantial greenhouse gas load from the environment. The cement industry will remain one of the critical sectors for India in meeting its CO2 emissions reduction target. India’s cement production will continue to grow in the near future owing to its GDP growth. Population control, improvements in plant efficiency, and the use of renewable energy are the important options for mitigating CO2 emissions from the Indian cement industries.
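A two-factor complete decomposition of an emissions change C = A × I (activity × emission intensity), with the interaction term shared equally between the two effects so that they sum exactly to the observed change, can be sketched as follows; the example figures are made up, not from the study.

```python
def complete_decomposition(a0, a1, i0, i1):
    """Complete decomposition of the change in emissions C = A * I,
    where A is activity (e.g. cement output, Mt) and I is emission
    intensity (e.g. t CO2 per t cement). The interaction term da*di
    is split equally, so activity + intensity effects equal dC exactly."""
    da, di = a1 - a0, i1 - i0
    activity_effect = da * i0 + 0.5 * da * di
    intensity_effect = a0 * di + 0.5 * da * di
    return activity_effect, intensity_effect
```

With output growing from 100 to 150 Mt while intensity falls from 0.9 to 0.8 t/t, the activity effect (+42.5) dominates the intensity effect (-12.5), mirroring the finding that the activity effect was the main driver of emissions.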

Relevance:

70.00%

Publisher:

Abstract:

Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined, the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form, and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
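The transform-and-back-transform step can be illustrated with the centred log-ratio (clr) transform, one of Aitchison's transformations; this is a minimal sketch, and the paper's exact choice of transform and forecasting model is not reproduced here.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: maps a composition (positive parts
    summing to 1) into unconstrained real space."""
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

def clr_inverse(z):
    """Back-transform: exponentiate and re-close so the parts sum to 1,
    honouring the unit sum constraint."""
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

In the transformed space, ordinary multivariate methods (drift extrapolation, or an SVD-based structure as in Lee-Carter) can be applied; back-transforming any forecast returns a valid death-density composition.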

Relevance:

70.00%

Publisher:

Abstract:

An operational dust forecasting model is developed by including the Met Office Hadley Centre climate model dust parameterization scheme within a Met Office regional numerical weather prediction (NWP) model. The model includes parameterizations for dust uplift, dust transport, and dust deposition in six discrete size bins and provides diagnostics such as the aerosol optical depth. The results are compared against surface and satellite remote sensing measurements, and against in situ measurements from the Facility for Airborne Atmospheric Measurements, for a case study in which a strong dust event was forecast. Comparisons are also performed against satellite and surface instrumentation for the entire month of August. The case study shows that this Saharan dust NWP model can provide very good guidance on dust events as much as 42 h ahead. The analysis of the monthly data suggests that the mean and variability of the dust model are also well represented.

Relevance:

70.00%

Publisher:

Abstract:

Despite the success of studies integrating remotely sensed data and flood modelling, and the need to provide near-real-time data routinely on a global scale and to set up online data archives, there is to date a lack of spatially and temporally distributed hydraulic parameters to support ongoing modelling efforts. The objective of this project is therefore to provide a global evaluation and benchmark data set of floodplain water stages, with uncertainties, and their assimilation in a large-scale flood model using space-borne radar imagery. An algorithm is developed for the automated retrieval of water stages with uncertainties from a sequence of radar imagery, and the data are assimilated in a flood model using the Tewkesbury 2007 flood event as a feasibility study. The retrieval method that we employ is based on possibility theory, an extension of fuzzy sets that encompasses probability theory. We first identify the main sources of uncertainty in the retrieval of water stages from radar imagery, for which we define physically meaningful ranges of parameter values. Possibilities of values are then computed for each parameter using a triangular ‘membership’ function. This procedure allows the computation of possible values of water stages at maximum flood extent at many different locations along a river. At a later stage in the project these data are used in the assimilation, calibration, or validation of a flood model. The application is subsequently extended to the global scale using wide-swath radar imagery and a simple global flood forecasting model, thereby providing improved river discharge estimates to update the latter.
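The triangular 'membership' function used to assign possibilities to parameter values can be sketched as follows; the parameter bounds in the example are illustrative only.

```python
def triangular_possibility(x, lo, mode, hi):
    """Possibility of value x under a triangular membership function:
    1 at the most plausible value (mode), falling linearly to 0 at the
    physically meaningful bounds lo and hi, and 0 outside them."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mode:
        return (x - lo) / (mode - lo)
    return (hi - x) / (hi - mode)
```

Per-parameter possibilities are commonly combined with `min()` (the standard fuzzy conjunction) to obtain a joint possibility for a candidate water stage; the project's exact combination rule may differ.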

Relevance:

70.00%

Publisher:

Abstract:

The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge is provided as the end product of the flood forecast. Results show that the association of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
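One way a probabilistic discharge product is summarized is as an exceedance probability over the ensemble members; this is a generic sketch with an illustrative threshold, not necessarily the paper's exact post-processing.

```python
def exceedance_probability(ensemble_discharges, threshold):
    """Fraction of ensemble members whose forecast discharge exceeds a
    flood-warning threshold (e.g. in m3/s). Each member comes from the
    hydrological model driven by one ensemble weather forecast."""
    members = list(ensemble_discharges)
    return sum(q > threshold for q in members) / len(members)
```

Issuing an early warning when this probability crosses a chosen level trades off lead time against the false-alarm rate that a single deterministic forecast suffers from.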

Relevance:

70.00%

Publisher:

Abstract:

This model connects the radar reflectivity data directly with the hydrological variable, runoff. The catchment is discretized into pixels (4 km × 4 km) with the same resolution as the CAPPI. Careful discretization is carried out so that every catchment grid pixel corresponds precisely to a CAPPI grid cell. The basin is assumed to be a linear, time-invariant system. The forecast technique takes advantage of the spatial and temporal resolutions obtained by the radar. The method uses only the measurements of the reflectivity factor distribution observed over the catchment area, without using the reflectivity - rainfall rate transformation given by the conventional Z-R relationships. The reflectivity values in each catchment pixel are translated to a gauging station by using a transfer function. This transfer function represents the travel time of the surface water flowing through the pixels in the drainage direction and ending at the gauging station. The parameters used to compute the transfer function are the concentration time and the physiographic catchment characteristics. -from Authors
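The translation of per-pixel reflectivity to the gauging station can be sketched as a lagged superposition, since the basin is treated as a linear, time-invariant system; this is a toy sketch with integer lags, whereas a real transfer function derived from concentration time and physiography would be a smoother response.

```python
import numpy as np

def route_to_gauge(reflectivity, travel_steps, n_out):
    """Lag each pixel's reflectivity series by its travel time to the
    gauging station and superpose the contributions.
    reflectivity: (T, P) array, one column per catchment pixel
    travel_steps: length-P integer travel times, in time steps
    n_out: length of the output series at the gauge"""
    T, P = reflectivity.shape
    out = np.zeros(n_out)
    for p in range(P):
        for t in range(T):
            arrival = t + travel_steps[p]
            if arrival < n_out:
                out[arrival] += reflectivity[t, p]
    return out
```

Pixels farther up the drainage network simply contribute later, which is all that linearity and time invariance require of the routing step.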

Relevance:

70.00%

Publisher:

Abstract:

Representing the transport and fate of an oil slick at the sea surface is a formidable task. With an accurate numerical representation of oil evolution and movement in seawater, the possibility to assess and reduce the oil-spill pollution risk can be greatly improved. Wind blowing on the sea surface generates ocean waves, which give rise to transport of pollutants by wave-induced velocities known as Stokes’ drift velocities. The Stokes’ drift transport associated with a random gravity wave field is a function of the wave energy spectrum that statistically describes the field fully and that can be provided by a numerical wave model. Therefore, in order to perform an accurate numerical simulation of the oil motion in seawater, a coupling of the oil-spill model with a wave forecasting model is needed. In this thesis work, the coupling of the MEDSLIK-II oil-spill numerical model with the SWAN wind-wave numerical model has been performed and tested. In order to improve the knowledge of the wind-wave model and its numerical performance, a preliminary sensitivity study of different SWAN model configurations has been carried out. The SWAN model results have been compared with the ISPRA directional buoys located at Venezia, Ancona, and Monopoli, and the best model settings have been identified. High-resolution currents provided by a relocatable model (SURF) have then been used to force both the wave and the oil-spill models, and the coupling with the SWAN model has been tested. The trajectories of four drifters have been simulated using either JONSWAP parametric spectra or the SWAN directional-frequency energy output spectra, and the results have been compared with the real paths traveled by the drifters.
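The link between the wave energy spectrum and the wave-induced transport can be illustrated with the standard deep-water Stokes drift integral u_s(z) = ∫ 2ωk E(ω) e^{2kz} dω with k = ω²/g; the discretization and spectral values below are illustrative, not MEDSLIK-II's or SWAN's actual numerics.

```python
import numpy as np

def stokes_drift(omega, E, z=0.0, g=9.81):
    """Stokes drift speed at depth z (z <= 0) from a 1-D frequency
    energy spectrum E(omega), using the deep-water dispersion relation
    k = omega**2 / g and trapezoidal integration over the grid."""
    k = omega**2 / g
    integrand = 2.0 * omega * k * E * np.exp(2.0 * k * z)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega)))
```

For a spectrum narrowly concentrated around a single frequency this recovers the monochromatic result u_s(0) ≈ 2ω₀k₀m₀, where m₀ is the total spectral energy; a coupled model would evaluate this from the SWAN output spectrum at each grid point and time step.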

Relevance:

70.00%

Publisher:

Abstract:

Cost-efficient operation while satisfying the performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation, given the need to minimize SLA violations, energy consumption, and resource wasting. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and were able to show that our approach is significantly better.
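One simple way to rank candidate VM-to-host allocations against the three objectives is a weighted-sum scalarization; this is a generic sketch, not necessarily the framework's actual optimizer, and the weights and figures are illustrative.

```python
def best_allocation(candidates, weights=(0.5, 0.3, 0.2)):
    """Pick the candidate allocation minimizing a weighted sum of the
    three objectives: expected SLA violations, energy consumption, and
    resource wasting (all lower is better, pre-normalized to comparable
    scales). candidates: [(name, violations, energy, waste), ...]."""
    w_v, w_e, w_w = weights
    def score(candidate):
        _, violations, energy, waste = candidate
        return w_v * violations + w_e * energy + w_w * waste
    return min(candidates, key=score)[0]
```

Adjusting the weights moves the chosen point along the trade-off frontier, e.g. tolerating a few more violations in exchange for consolidating VMs onto fewer, less wasteful hosts.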

Relevance:

70.00%

Publisher:

Abstract:

Simulating surface wind over complex terrain is a challenge in regional climate modelling. This study therefore aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme, and the way WRF is nested into the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, but sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser.
The results demonstrate the important role of horizontal resolution, where the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior set-up when dynamical downscaling aims at reproducing real wind fields.

Relevance:

70.00%

Publisher:

Abstract:

As the formative agents of cloud droplets, aerosols play an undeniably important role in the development of clouds and precipitation. Few meteorological models have been developed or adapted to simulate aerosols and their contribution to cloud and precipitation processes. The Weather Research and Forecasting model (WRF) has recently been coupled with an atmospheric chemistry suite and is jointly referred to as WRF-Chem, allowing atmospheric chemistry and meteorology to influence each other’s evolution within a mesoscale modeling framework. Provided that the model physics are robust, this framework allows the feedbacks between aerosol chemistry, cloud physics, and dynamics to be investigated. This study focuses on the effects of aerosols on meteorology, specifically, the interaction of aerosol chemical species with microphysical processes represented within the framework of the WRF-Chem. Aerosols are represented by eight size bins using the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional parameterization, which is linked to the Purdue Lin bulk microphysics scheme. The aim of this study is to examine the sensitivity of deep convective precipitation modeled by the 2D WRF-Chem to varying aerosol number concentration and aerosol type. A systematic study has been performed regarding the effects of aerosols on parameters such as total precipitation, updraft/downdraft speed, distribution of hydrometeor species, and organizational features, within idealized maritime and continental thermodynamic environments. Initial results were obtained using WRFv3.0.1, and a second series of tests were run using WRFv3.2 after several changes to the activation, autoconversion, and Lin et al. microphysics schemes added by the WRF community, as well as the implementation of prescribed vertical levels by the author. The results of WRFv3.2 runs contrasted starkly with WRFv3.0.1 runs. 
The WRFv3.0.1 runs produced a propagating system resembling a developing squall line, whereas the WRFv3.2 runs did not. The responses of total precipitation, updraft/downdraft speeds, and system organization to increasing aerosol concentrations were opposite between runs with different versions of WRF. Results of the WRFv3.2 runs, however, were in better agreement, in the timing and magnitude of vertical velocity and hydrometeor content, with a WRFv3.0.1 run using single-moment Lin et al. microphysics than with the WRFv3.0.1 runs with chemistry. One result consistent throughout all simulations was an inhibition of warm-rain processes due to enhanced aerosol concentrations, which resulted in a delay of precipitation onset ranging from 2-3 minutes in WRFv3.2 runs up to 15 minutes in WRFv3.0.1 runs. This result was not observed in a previous study by Ntelekos et al. (2009) using the WRF-Chem, perhaps owing to their use of coarser horizontal and vertical resolution in their experiment. The changes to microphysical processes such as activation and autoconversion from WRFv3.0.1 to WRFv3.2, along with changes in the packing of vertical levels, had more impact than the varying aerosol concentrations, even though the range of aerosol concentrations tested was greater than that observed in field studies. In order to take full advantage of the aerosol input now offered by the chemistry module in WRF, the author recommends that a fully double-moment microphysics scheme be linked, rather than the limited double-moment Lin et al. scheme that currently exists. With this modification, the WRF-Chem will be a powerful tool for studying aerosol-cloud interactions and will allow comparison of results with other studies using more modern and complex microphysical parameterizations.

Relevance:

60.00%

Publisher:

Abstract:

This paper proposes a particle swarm optimization (PSO) approach to support electricity producers in multiperiod optimal contract allocation. The producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. The expected return and its variance are estimated from a forecasted scenario interval determined by a price range forecasting model developed by the authors; a confidence level is associated with each forecasted scenario interval. The proposed model makes use of contracts with physical (spot and forward) and financial (options) settlement. PSO performance was evaluated by comparing it with a genetic algorithm-based approach. The model can be used by producers in deregulated electricity markets but can easily be adapted to load-serving entities and retailers, as well as to the use of other types of contracts.
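A utility function of this kind can be sketched as a mean-variance trade-off evaluated over forecast price scenarios; the quantities, prices, and risk-aversion coefficient below are illustrative, not from the paper.

```python
from statistics import mean, pvariance

def contract_utility(q_forward, p_forward, q_spot, spot_scenarios, lam=0.01):
    """U = E[R] - lam * Var[R] for a producer selling q_forward MWh at a
    fixed forward price and q_spot MWh at the uncertain spot price,
    with returns evaluated over the forecasted price scenarios."""
    returns = [q_forward * p_forward + q_spot * s for s in spot_scenarios]
    return mean(returns) - lam * pvariance(returns)
```

Shifting volume from spot to forward sales leaves the expected return unchanged when the forward price equals the scenario mean, but removes variance, so a risk-averse producer (larger lam) prefers the contracted position.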

Relevance:

60.00%

Publisher:

Abstract:

This paper addresses the optimal involvement in derivatives electricity markets of a power producer hedging against pool price volatility. To this end, a swarm intelligence meta-heuristic optimization technique is proposed as a long-term risk management tool. The tool investigates the long-term risk-hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward contracts) and financial (options contracts) settlement. The producer's risk preference is formulated as a utility function (U) expressing the trade-off between the expectation and the variance of the return, both of which are based on a forecasted scenario interval determined by a long-term price range forecasting model. This model also makes use of particle swarm optimization (PSO) to find the parameters that yield the best forecasting results. Since the price estimation depends on load forecasting, this work also presents a regressive long-term load forecast model that likewise uses PSO to find the best parameters. The performance of the PSO technique has been evaluated by comparison with a genetic algorithm (GA) based approach. A case study is presented, and the results are discussed using real price and load historical data from the mainland Spanish electricity market, demonstrating the effectiveness of the methodology in handling this type of problem. Finally, conclusions are duly drawn.
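For reference, a minimal particle swarm optimizer of the kind referred to can be sketched as follows; this is a generic textbook variant with inertia, cognitive, and social terms, not the authors' exact scheme, and all coefficients are illustrative.

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO maximizing f over box bounds [(lo, hi), ...].
    Each particle keeps a velocity, its personal best, and is drawn
    toward the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the setting described above, `f` would be the utility U of a candidate contract portfolio (or the forecast-fit criterion when tuning model parameters), and `bounds` the feasible contract volumes or parameter ranges.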