Abstract:
Due to the variability of wind power, accurate and timely forecasts of wind generation are essential to enhance the flexibility and reliability of real-time power system operation and control. Special events such as ramps and spikes are hard to predict with traditional methods that use only recently measured data. In this paper, a new Gaussian Process model with hybrid training data, drawn from both recent local measurements and a historical dataset, is proposed and applied to make short-term predictions from 10 minutes to one hour ahead. A key idea is that historical data with similar patterns are properly selected and embedded in the Gaussian Process model to make predictions. The results of the proposed algorithm are compared to those of the standard Gaussian Process model and the persistence model. It is shown that the proposed method reduces not only the magnitude error but also the phase error.
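As a rough illustration of the hybrid-training idea, the sketch below selects the historical day most similar to the recent window (plain Euclidean distance; the paper's actual similarity criterion is not given here) and pools both into one Gaussian Process training set. All data and parameter choices are hypothetical, and scikit-learn's GaussianProcessRegressor stands in for the authors' own implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 10-minute wind-power series: 14 days x 144 samples/day.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 14 * 144)) + 0.1 * rng.standard_normal(14 * 144)

LAGS = 6  # one hour of 10-minute readings as the input pattern

def to_supervised(x, lags=LAGS):
    """Turn a series into (lag-vector, next-value) training pairs."""
    X = np.lib.stride_tricks.sliding_window_view(x[:-1], lags)
    return X, x[lags:]

recent = series[-144:]                      # the most recent day
X_rec, y_rec = to_supervised(recent)

# Pattern selection: the historical day closest to the recent one
# (plain Euclidean distance; the paper's criterion may differ).
past_days = series[:-144].reshape(-1, 144)
best_day = past_days[np.argmin(np.linalg.norm(past_days - recent, axis=1))]
X_hist, y_hist = to_supervised(best_day)

# Hybrid training set: recent pairs plus similar historical pairs.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(np.vstack([X_rec, X_hist]), np.concatenate([y_rec, y_hist]))

mean, std = gp.predict(recent[-LAGS:].reshape(1, -1), return_std=True)
print(f"next 10-min forecast: {mean[0]:.3f} +/- {std[0]:.3f}")
```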
Abstract:
Diabetic retinopathy (DR) is the leading cause of blindness in the working-age population in the United States. The vision-threatening processes of neuroglial and vascular dysfunction in DR occur in concert, driven by hyperglycemia and propelled by a pathway of inflammation, ischemia, vasodegeneration, and breakdown of the blood-retinal barrier. Currently, no therapies exist for normalizing the vasculature in DR. Here we show that a single intravitreal dose of adeno-associated virus serotype 2 encoding a more stable, soluble, and potent form of angiopoietin 1 (AAV2.COMP-Ang1) can ameliorate the structural and functional hallmarks of DR in Ins2Akita mice, with sustained effects observed through six months. In early DR, AAV2.COMP-Ang1 restored leukocyte-endothelial interaction, retinal oxygenation, vascular density, vascular marker expression, vessel permeability, retinal thickness, inner retinal cellularity, and retinal neurophysiological response to levels comparable to non-diabetic controls. In late DR, AAV2.COMP-Ang1 enhanced the therapeutic benefit of intravitreally delivered endothelial colony-forming cells by promoting their integration into the vasculature, thereby stemming further visual decline. AAV2.COMP-Ang1 single-dose gene therapy can prevent neurovascular pathology, support vascular regeneration, and stabilize vision in DR.
Abstract:
This study implements a hydrodynamic operational model that can ultimately be used for projecting the dispersal patterns of oil spills, sewage and other pollution, and that can also be used in wave forecasting. A two-layer nested model was created using MOHID Water, a powerful ocean modelling software package. The first layer (the "father") is used to impose the boundary conditions for the second layer (the "son"). This was repeated for two different dominant wind regimes, easterly and westerly winds respectively. Qualitative comparisons were made between measured tidal data and the tidal output, and between observed sea surface temperature and the model's results. The results from both simulations were analysed and compared to the historical literature. The comparison was done at the surface layer, at 100 m depth and at 800 m depth. In the surface layer, the first simulation generated an upwelling event near Cape St. Vincent and along the Algarve. The second simulation generated a non-upwelling event in which the surface flow was reversed and the warm water mass lay along the Algarve coastline, eventually turning clockwise around Cape St. Vincent. At 100 m depth, in both simulations, velocity vortices were observed near Cape St. Vincent travelling northward and southward at various instances. At 800 m depth, a strong oceanic flow was observed moving north-westward along the continental shelf.
Abstract:
A number of media outlets now issue medium-range (~7 day) weather forecasts on a regular basis. It is therefore logical that aerobiologists should attempt to produce medium-range forecasts for allergenic pollen covering the same time period as the weather forecasts. The objective of this study is to construct a medium-range (< 7 day) forecast model for grass pollen in north London. The forecast models were produced using regression analysis based on grass pollen and meteorological data from 1990–1999 and tested on data from 2000 and 2002. The modelling process was improved by dividing the grass pollen season into three periods: the pre-peak, peak and post-peak periods of grass pollen release. The forecast consisted of five regression models: two simple linear regression models predicting the start and end dates of the peak period, and three multiple regression models forecasting daily average grass pollen counts in the pre-peak, peak and post-peak periods. Overall, the forecast models achieved 62% accuracy in 2000 and 47% in 2002, reflecting the fact that the 2002 grass pollen season was of a higher magnitude than any of the other seasons included in the analysis. This study has the potential to make a notable contribution to the field of aerobiology. Winter averages of the North Atlantic Oscillation were used to predict certain characteristics of the grass pollen season, which represents an important advance in aerobiological work. The ability to predict allergenic pollen counts five to seven days ahead will benefit allergy sufferers. Furthermore, medium-range forecasts for allergenic pollen will be of assistance to the medical profession, including allergists planning treatment and physicians scheduling clinical trials.
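A minimal sketch of what one of the three daily-count models might look like: a multiple regression of pollen counts on meteorological predictors for a single sub-period. The predictors, coefficients and data below are synthetic stand-ins, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-ins for the 1990-1999 training data: daily grass-pollen
# counts and meteorological predictors for one sub-period (the study fits
# separate models for the pre-peak, peak and post-peak periods).
rng = np.random.default_rng(1)
n_days = 300
temp = 15 + 5 * rng.standard_normal(n_days)       # daily mean temperature (C)
rain = rng.exponential(2.0, n_days)               # daily rainfall (mm)
wind = 3 + rng.standard_normal(n_days)            # daily mean wind speed (m/s)
pollen = np.maximum(0, 4 * temp - 6 * rain + 2 * wind
                    + 10 * rng.standard_normal(n_days))

X = np.column_stack([temp, rain, wind])
model = LinearRegression().fit(X, pollen)         # one of the five models

# Forecast a day up to 7 days ahead from forecast meteorology.
forecast_weather = np.array([[18.0, 0.5, 3.2]])
print(f"predicted count: {model.predict(forecast_weather)[0]:.0f} grains/m^3")
```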
Abstract:
The meteorological and chemical transport model WRF-Chem was implemented to forecast PM10 concentrations over Poland. WRF-Chem version 3.5 was configured with three one-way nested domains, using GFS meteorological data and the TNO MACC II emissions. Forecasts with a 48 h lead time were run for a winter and a summer period of 2014. WRF-Chem in general captures the variability in observed PM10 concentrations, but underestimates some peak concentrations during wintertime. The peaks coincide either with stable nighttime atmospheric conditions in the lower part of the planetary boundary layer (PBL) or with days of very low surface temperatures. Such episodes lead to increased combustion for residential heating, for which hard coal is the main fuel in Poland. This suggests that a key to improving model performance for the peak concentrations is to focus on the simulation of PBL processes and on a high-resolution distribution of emissions in WRF-Chem.
Abstract:
This paper applies Gaussian estimation methods to continuous-time models for modelling overseas visitors to the UK. Continuous-time modelling is widely used in economics and finance, but not in tourism forecasting. Using monthly data for 1986–2010, various continuous-time models are estimated and compared to autoregressive integrated moving average (ARIMA) and autoregressive fractionally integrated moving average (ARFIMA) models. Dynamic forecasts are obtained over different periods. The empirical results show that the ARIMA model performs very well, but that the constant elasticity of variance (CEV) continuous-time model has the lowest root mean squared error (RMSE) over a short period.
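The ARIMA benchmark side of such a comparison can be sketched as below; the series, the (1, 1, 1) order and the 12-month horizon are illustrative assumptions, and the Gaussian estimation of the CEV continuous-time model is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly visitor series standing in for the 1986-2010 UK data.
rng = np.random.default_rng(2)
y = 100 + np.cumsum(0.5 + rng.standard_normal(300))   # trending monthly series

train, test = y[:-12], y[-12:]                        # hold out one year

# ARIMA benchmark (order chosen for illustration, not taken from the paper).
fit = ARIMA(train, order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=12)                     # dynamic 12-step forecast

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"12-month-ahead RMSE: {rmse:.2f}")
```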
Abstract:
Wind speed forecasting has become an increasingly important field of research supporting the electricity industry, mainly due to the growing use of distributed energy sources, largely based on renewables. This type of electricity generation is highly dependent on the variability of weather conditions, particularly of wind speed. Therefore, accurate wind power forecasting models are required for the operation and planning of wind plants and power systems. A Support Vector Machines (SVM) model for short-term wind speed forecasting is proposed, and its performance is evaluated and compared with several artificial neural network (ANN) based approaches. A case study based on a real database covering three years of data, predicting wind speed at 5-minute intervals, is presented.
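A minimal sketch of the SVM approach on synthetic 5-minute data, framing one-step-ahead forecasting as regression on lagged wind speeds; the kernel, hyperparameters and lag window are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical 5-minute wind-speed series (the study's 3-year database
# is not public here).
rng = np.random.default_rng(3)
speed = 8 + 2 * np.sin(np.linspace(0, 50, 2000)) + 0.5 * rng.standard_normal(2000)

LAGS = 12  # past hour of 5-minute readings as features

X = np.lib.stride_tricks.sliding_window_view(speed[:-1], LAGS)
y = speed[LAGS:]

X_train, X_test = X[:-500], X[-500:]
y_train, y_test = y[:-500], y[-500:]

# Scaling matters for SVR; an RBF kernel is a common default, though the
# paper's exact kernel and hyperparameters are not given here.
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svm.fit(X_train, y_train)

mae = np.mean(np.abs(svm.predict(X_test) - y_test))
print(f"5-min-ahead MAE: {mae:.3f} m/s")
```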
Abstract:
Forecasting future sales is one of the most important issues underlying all strategic and planning decisions in the effective operation of retail businesses. For profitable retail businesses, accurate demand forecasting is crucial in organizing and planning production, purchasing, transportation and the labor force. Retail sales series belong to a special type of time series that typically contains trend and seasonal patterns, presenting challenges in developing effective forecasting models. This work compares the forecasting performance of state space models and ARIMA models. The forecasting performance is demonstrated through a case study of retail sales of five different categories of women's footwear: Boots, Booties, Flats, Sandals and Shoes. For both methodologies, the model with the minimum value of Akaike's Information Criterion (AIC) in the in-sample period was selected from all admissible models for further out-of-sample evaluation. Both one-step and multiple-step forecasts were produced. The results show that, when an automatic algorithm is used, the overall out-of-sample forecasting performance of the state space and ARIMA models, evaluated via RMSE, MAE and MAPE, is quite similar for both one-step and multi-step forecasts. We also conclude that the state space and ARIMA models produce coverage probabilities that are close to the nominal rates for both one-step and multi-step forecasts.
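A compact sketch of the comparison using statsmodels, with an ETS state space candidate and a seasonal ARIMA candidate on synthetic monthly sales. The specific orders shown are illustrative; the study selects the minimum-AIC model from all admissible ones within each family.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Hypothetical monthly sales for one footwear category, with trend and
# yearly seasonality.
rng = np.random.default_rng(4)
t = np.arange(120)
sales = 200 + 0.5 * t + 30 * np.sin(2 * np.pi * t / 12) + 5 * rng.standard_normal(120)

train, test = sales[:-12], sales[-12:]

# One state space (ETS) candidate and one ARIMA candidate; in the study,
# the minimum-AIC model in-sample is kept from each family.
ets = ETSModel(train, error="add", trend="add",
               seasonal="add", seasonal_periods=12).fit(disp=False)
arima = ARIMA(train, order=(1, 1, 1),
              seasonal_order=(0, 1, 1, 12)).fit()

for name, model in [("ETS", ets), ("ARIMA", arima)]:
    fc = model.forecast(12)                       # multi-step forecast
    rmse = np.sqrt(np.mean((fc - test) ** 2))
    print(f"{name}: AIC={model.aic:.1f}  RMSE={rmse:.2f}")
```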
Abstract:
The aim of this work project is to find a model that can accurately forecast the daily Value-at-Risk of the PSI-20 Index, independently of market conditions, in order to expand the empirical literature on the Portuguese stock market. Hence, two subsamples, representing more and less volatile periods, were modeled through unconditional and conditional volatility models (since volatility is what drives returns). All models were evaluated through Kupiec's and Christoffersen's tests, comparing forecasts with actual results. Using an out-of-sample period of 204 observations, it was found that a GARCH(1,1) model is accurate for our purposes.
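A minimal sketch of the evaluation pipeline with the arch package: fit a GARCH(1,1), turn the one-step volatility forecast into a 1% VaR, and apply Kupiec's unconditional-coverage test. The data, the single in-sample fit and the zero-mean simplification are assumptions, not the thesis's exact setup.

```python
import numpy as np
from scipy import stats
from arch import arch_model

# Hypothetical daily percent returns standing in for the PSI-20 sample.
rng = np.random.default_rng(5)
returns = rng.standard_normal(1204)

train, test = returns[:-204], returns[-204:]      # 204-day out-of-sample

# GARCH(1,1) fitted once in-sample (a rolling refit is also common);
# the small mean term is ignored here for brevity.
res = arch_model(train, vol="Garch", p=1, q=1).fit(disp="off")
sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
var_99 = stats.norm.ppf(0.01) * sigma             # one-day 1% VaR threshold

# Kupiec unconditional-coverage test on the violation frequency.
T, p = len(test), 0.01
hits = int((test < var_99).sum())
pi = hits / T
lr_uc = -2 * (np.log((1 - p) ** (T - hits) * p ** hits)
              - np.log((1 - pi) ** (T - hits) * pi ** hits))
print(f"violations: {hits}/{T}, Kupiec p-value: {1 - stats.chi2.cdf(lr_uc, 1):.3f}")
```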
Abstract:
The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedure was found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower costs of misclassification in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also addresses the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
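The notion of an empirical cut-off point can be sketched as a grid search minimizing asymmetric misclassification costs on the training sample; the probabilities, labels and the 20:1 cost ratio below are hypothetical.

```python
import numpy as np

# Hypothetical training-sample outputs: predicted bankruptcy probabilities
# and true outcomes (1 = bankrupt). Assumed cost ratio: a Type I error
# (missed bankruptcy) costs 20x a Type II error (false alarm).
rng = np.random.default_rng(6)
y = rng.binomial(1, 0.1, 1000)
prob = np.clip(0.1 + 0.5 * y + 0.2 * rng.standard_normal(1000), 0, 1)

C1, C2 = 20.0, 1.0   # assumed costs of Type I and Type II errors

def total_cost(cutoff):
    pred = (prob >= cutoff).astype(int)
    type1 = np.sum((y == 1) & (pred == 0))   # bankrupt classified as healthy
    type2 = np.sum((y == 0) & (pred == 1))   # healthy classified as bankrupt
    return C1 * type1 + C2 * type2

# Empirical cut-off: the candidate minimizing cost on the training sample.
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmin([total_cost(c) for c in grid])]
print(f"empirical cut-off: {best:.2f}, cost: {total_cost(best):.0f}")
```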
Abstract:
For the past 20 years, researchers have applied the Kalman filter to modeling and forecasting the term structure of interest rates. Despite its impressive performance in in-sample fitting of yield curves, little research has focused on out-of-sample forecasting of yield curves using the Kalman filter. The goal of this thesis is to develop a unified dynamic model based on Diebold and Li (2006) and Nelson and Siegel's (1987) three-factor model, and to estimate this dynamic model using the Kalman filter. We compare both the in-sample and out-of-sample performance of our dynamic methods with various other models in the literature. We find that our dynamic model dominates existing models in medium- and long-horizon yield curve predictions. However, the dynamic model should be used with caution when forecasting short-maturity yields.
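For reference, the Nelson-Siegel curve underlying the Diebold-Li dynamic model, with the level, slope and curvature factors β₁ₜ, β₂ₜ, β₃ₜ forming the state vector that the Kalman filter tracks through a VAR(1)-type transition, as in the state-space formulation:

```latex
y_t(\tau) = \beta_{1t}
          + \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau}
          + \beta_{3t}\!\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right)
          + \varepsilon_t(\tau),
\qquad
\beta_t - \mu = A\,(\beta_{t-1} - \mu) + \eta_t
```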
Abstract:
We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first uses a standard Wald-type statistic. The second assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets for the endogenous variables are derived by a projection technique. The latter has two advantages: first, the validity of the confidence set is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show that they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases in transfers from Moroccan expatriates.
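The projection technique rests on a simple set-theoretic fact: coverage of the parameter set carries over jointly to every transformation of it, which is why the derived intervals are simultaneous and unaffected by model nonlinearities:

```latex
\Pr[\theta \in C] \ge 1 - \alpha
\;\Longrightarrow\;
\Pr\!\big[g(\theta) \in g(C)\ \text{for all}\ g\big] \ge 1 - \alpha,
\qquad
g(C) \subseteq \Big[\,\min_{\theta \in C} g(\theta),\ \max_{\theta \in C} g(\theta)\Big]
```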
Abstract:
We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite-sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
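A minimal sketch of the principal-components estimator of the factor space on synthetic data with a known factor structure (the dimensions and noise level are arbitrary choices, not the DSGE calibration):

```python
import numpy as np

# Hypothetical panel with a true factor structure: T observations of N
# series driven by k common factors, mimicking the DSGE-generated data.
rng = np.random.default_rng(7)
T, N, k = 200, 100, 3
F = rng.standard_normal((T, k))            # true factors
L = rng.standard_normal((N, k))            # loadings
X = F @ L.T + 0.5 * rng.standard_normal((T, N))

# Principal-components estimate of the factor space: the first k left
# singular vectors of the standardized data matrix.
Xs = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = U[:, :k] * np.sqrt(T)              # normalization F'F / T = I

# The estimates span the true factors only up to rotation, so check the
# R^2 of regressing each true factor on the estimated ones.
beta, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
resid = F - F_hat @ beta
r2 = 1 - resid.var(0) / F.var(0)
print("R^2 of true factors on PC estimates:", np.round(r2, 3))
```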
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models, as well as the use of computers and information systems in this area. Although much is available in research journals, the practice of PPC lags behind and makes little use of published research. The practice of PPC in SMEs lags behind for many reasons, which need to be explored. This research work deals with the effect on firm performance of identified variables such as the forecasting, planning and control methods adopted, the demographics of the key person, the standardization practices followed, and the effects of training, learning and IT usage. A model and framework have been developed based on the literature. Empirical testing of the model was done after collecting data using a questionnaire administered to selected respondents from Small and Medium Enterprises (SMEs) in India. The final data included 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formed and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model. A confirmatory factor analysis was used to study the structural relationship between firm performance and the dependent variables.
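The high/low classification step might look like the sketch below; the factor-score predictors and their effects are invented for illustration and do not reflect the study's final factor structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical factor scores standing in for the survey's 382 SME responses;
# the column meanings (forecasting, planning, control, IT usage) are
# illustrative, not the study's extracted factors.
rng = np.random.default_rng(8)
X = rng.standard_normal((382, 4))
# Assume high-performing firms (1) are more likely when practice scores are high.
logit = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] + 0.4 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression classifying high vs low performers, as in the study.
clf = LogisticRegression().fit(X, y)
print("coefficients:", np.round(clf.coef_[0], 2))
print("in-sample accuracy:", round(clf.score(X, y), 3))
```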
Abstract:
Severe local storms, including tornadoes, damaging hail and wind gusts, frequently occur over the eastern and northeastern states of India during the pre-monsoon season (March-May). Forecasting thunderstorms is one of the most difficult tasks in weather prediction, due to their rather small spatial and temporal extent and the inherent non-linearity of their dynamics and physics. In this paper, sensitivity experiments are conducted with the WRF-NMM model to test the impact of convective parameterization schemes on the simulation of the severe thunderstorms that occurred over Kolkata on 20 May 2006 and 21 May 2007, and the model results are validated against observations. In addition, a simulation without a convective parameterization scheme was performed for each case to determine whether the model could simulate the convection explicitly. A statistical analysis based on mean absolute error, root mean square error and correlation coefficient is performed for comparisons between the simulated and observed data with the different convective schemes. This study shows that the prediction of thunderstorm-affected parameters is sensitive to the convective scheme. The Grell-Devenyi cloud ensemble convective scheme simulated the thunderstorm activity well, in terms of timing, intensity and the region of occurrence of the events, as compared to the other convective schemes and also the explicit scheme.
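The verification statistics used to rank the schemes are straightforward to compute; a small sketch with made-up simulated and observed values follows (the study's actual station data are not reproduced here).

```python
import numpy as np

def verify(simulated, observed):
    """MAE, RMSE and correlation between simulated and observed values,
    as used to compare the convective schemes."""
    err = simulated - observed
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    corr = np.corrcoef(simulated, observed)[0, 1]
    return mae, rmse, corr

# Hypothetical example: simulated vs observed surface temperature (deg C)
# for one scheme; the study repeats this per scheme and per storm case.
sim = np.array([30.1, 31.5, 33.0, 29.8, 28.5])
obs = np.array([30.5, 32.0, 32.4, 29.0, 29.1])
print("MAE=%.2f  RMSE=%.2f  r=%.3f" % verify(sim, obs))
```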