853 results for Runoff forecasting
Abstract:
We evaluate a number of real estate sentiment indices to ascertain the current and forward-looking information content that may be useful for forecasting demand and supply activity. Analyzing the dynamic relationships within a Vector Auto-Regression (VAR) framework and using quarterly US data over 1988-2010, we test the efficacy of several sentiment measures by comparing them with other coincident economic indicators. Overall, our analysis suggests that real estate sentiment conveys valuable information that can help predict changes in real estate returns. These findings have important implications for investment decisions from both consumers' and institutional investors' perspectives.
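The VAR exercise described above can be sketched in a few lines of Python; this is a minimal illustration assuming a hypothetical quarterly CSV with columns sentiment_index and re_returns, not the authors' data or exact specification.

```python
# Minimal VAR sketch for sentiment vs. real estate returns (illustrative only;
# the file name and column names below are assumptions, not the paper's data).
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("sentiment_returns_quarterly.csv", index_col=0, parse_dates=True)
data = df[["sentiment_index", "re_returns"]]   # hypothetical column names

fit = VAR(data).fit(maxlags=4, ic="aic")       # lag order chosen by AIC
print(fit.summary())

# Does sentiment Granger-cause returns?
print(fit.test_causality("re_returns", ["sentiment_index"], kind="f").summary())

# Four-quarter-ahead forecast from the most recent lags
print(fit.forecast(data.values[-fit.k_ar:], steps=4))
```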
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, thus providing an Earth Observation (EO) based flood forecasting system. However, the operational applicability of such a system for floods developed over river networks requires further testing. One promising family of assimilation techniques in this field is the ensemble Kalman filter (EnKF). These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-SkyMed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially based filter localization substantially moderates the development of spurious correlations in the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically meaningful for floods over a river network. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second-order effect in flood inundation models applied to gradually varied flow in large rivers. The study is not conclusive as to whether, in an operational situation, simultaneous estimation of friction and bathymetry improves the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
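As a rough illustration of the kind of localized ensemble Kalman update discussed above, the sketch below applies a Gaspari-Cohn taper to the state-observation covariances before computing the gain. It uses a perturbed-observation EnKF rather than the ETKF of the study, and the observation operator, observation error variance and distance metric are placeholders (the distance could be the along-network metric the paper proposes).

```python
# Sketch of a localized ensemble Kalman update for assimilating SAR-derived
# water level observations (WLOs). Perturbed-observation EnKF for brevity,
# not the study's ETKF; H, obs_err_var and dist are illustrative placeholders.
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn localization taper for normalized distance r = d / c."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros(r.shape)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    taper[m1] = (1 - 5/3*r[m1]**2 + 5/8*r[m1]**3
                 + 1/2*r[m1]**4 - 1/4*r[m1]**5)
    taper[m2] = (4 - 5*r[m2] + 5/3*r[m2]**2 + 5/8*r[m2]**3
                 - 1/2*r[m2]**4 + 1/12*r[m2]**5 - 2/(3*r[m2]))
    return np.clip(taper, 0.0, 1.0)

def localized_enkf_update(X, y, H, obs_err_var, dist, loc_radius, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) WLOs;
    H: (n_obs, n_state) observation operator; dist: (n_state, n_obs) distances."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)

    Pxy = (A @ HA.T) / (n_ens - 1)                        # state-obs covariance
    Pyy = (HA @ HA.T) / (n_ens - 1) + obs_err_var * np.eye(len(y))

    Pxy *= gaspari_cohn(dist / loc_radius)                # suppress spurious correlations
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain

    # perturbed observations, one realization per ensemble member
    Y = y[:, None] + np.sqrt(obs_err_var) * rng.standard_normal((len(y), n_ens))
    return X + K @ (Y - HX)
```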
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
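To make the source of McICA noise concrete, the sketch below generates random cloudy subcolumns under the maximum-random overlap assumption; in McICA each spectral integration point sees only one such subcolumn, which is what introduces the conditional random error. The cloud-fraction profile and subcolumn count are invented for illustration, and the article's specific noise-reduction methods are not reproduced here.

```python
# Illustrative maximum-random overlap subcolumn generator (toy profile values).
import numpy as np

def subcolumns_max_random(cloud_frac, n_sub, rng):
    """Generate n_sub binary cloud subcolumns for one grid column, assuming
    maximum-random overlap of the layer cloud fractions."""
    cloud_frac = np.asarray(cloud_frac, dtype=float)
    n_lev = len(cloud_frac)
    x = np.empty((n_lev, n_sub))
    x[0] = rng.random(n_sub)
    for k in range(1, n_lev):
        r = rng.random(n_sub)
        cloudy_above = x[k - 1] > 1.0 - cloud_frac[k - 1]
        # maximum overlap through contiguous cloud, random overlap otherwise
        x[k] = np.where(cloudy_above, x[k - 1], r * (1.0 - cloud_frac[k - 1]))
    return x > (1.0 - cloud_frac[:, None])   # True where the subcolumn is cloudy

rng = np.random.default_rng(0)
cf = np.array([0.0, 0.3, 0.5, 0.5, 0.1])     # hypothetical layer cloud fractions
cols = subcolumns_max_random(cf, n_sub=112, rng=rng)   # e.g. one column per g-point
print(cols.mean(axis=1))   # sampled layer fractions approach cf as n_sub grows
```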
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high-resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In more remote areas, it is possible to model flooding on a larger scale using a lower-resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high- and low-resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful when intersected with the DTM, as water level observations (WLOs) can then be estimated at various points along the flood boundary of the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high-resolution spatial data and involves the assimilation of WLOs from a real sequence of high-resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
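The intersection of a SAR-derived flood extent with a DTM can be illustrated with a few lines of array code: water levels are read from the DTM where flooded pixels border dry ground. The arrays and threshold below are toy values, not data from the studies described.

```python
# Minimal sketch of deriving water level observations (WLOs) by intersecting a
# flood extent mask with a DTM; only NumPy/SciPy array operations are assumed.
import numpy as np
from scipy import ndimage

def wlos_from_flood_extent(dtm, flood_mask):
    """dtm: 2-D array of ground heights (m); flood_mask: boolean array, True = flooded.
    Returns row/column indices and DTM heights at the flood boundary."""
    # boundary = flooded pixels that touch at least one dry pixel
    touches_dry = ndimage.binary_dilation(~flood_mask)
    boundary = flood_mask & touches_dry
    rows, cols = np.where(boundary)
    return rows, cols, dtm[rows, cols]

# toy example: a small sloping DTM with a flooded low-lying area
dtm = np.linspace(10.0, 12.0, 100).reshape(10, 10)
flood = dtm < 10.8
r, c, wlo = wlos_from_flood_extent(dtm, flood)
print(np.round(wlo, 2))   # candidate water levels along the flood edge
```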
Abstract:
Objectives: In this study a prototype of a new health forecasting alert system is developed, aligned with the approach used in the Met Office's (MO) National Severe Weather Warning Service (NSWWS). The aim is to improve the information available to responders in the health and social care system by linking temperatures more directly to mortality risk, and to develop a system more coherent with other weather alerts. The prototype is compared with the current system in the Cold Weather and Heatwave plans via a case-study approach to verify its potential advantages and shortcomings. Method: The prototype health forecasting alert system introduces an "impact vs likelihood matrix" for the health impacts of hot and cold temperatures, similar to those used operationally for other weather hazards as part of the NSWWS. The impact axis of this matrix is based on existing epidemiological evidence, which shows an increasing relative risk of death at extremes of outdoor temperature beyond a threshold that can be identified epidemiologically. The likelihood axis is based on a probability measure associated with the temperature forecast. The new method is tested for two case studies (one during summer 2013, one during winter 2013) and compared with the performance of the current alert system. Conclusions: The prototype shows some clear improvements over the current alert system. It allows for a much greater degree of flexibility, provides more detailed regional information about the health risks associated with periods of extreme temperatures, and is more coherent with other weather alerts, which may make it easier for front-line responders to use. It will require validation and engagement with stakeholders before it can be considered for use.
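A hedged sketch of the "impact vs likelihood matrix" idea follows; the alert colours, relative-risk bands and probability split are placeholders for illustration and are not the prototype's actual categories or thresholds.

```python
# Illustrative impact-vs-likelihood lookup in the style of NSWWS-type warnings.
# All categories, bands and thresholds below are hypothetical examples.
ALERT_MATRIX = {
    ("low", "low"): "green",      ("low", "high"): "yellow",
    ("medium", "low"): "yellow",  ("medium", "high"): "amber",
    ("high", "low"): "amber",     ("high", "high"): "red",
}

def alert_level(relative_risk, prob_exceed, rr_bands=(1.05, 1.2), p_split=0.6):
    """Map an epidemiological relative risk (impact) and a forecast probability of
    exceeding a temperature threshold (likelihood) to an alert colour."""
    if relative_risk < rr_bands[0]:
        impact = "low"
    elif relative_risk < rr_bands[1]:
        impact = "medium"
    else:
        impact = "high"
    likelihood = "high" if prob_exceed >= p_split else "low"
    return ALERT_MATRIX[(impact, likelihood)]

print(alert_level(relative_risk=1.25, prob_exceed=0.8))   # -> "red"
```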
Abstract:
Annual losses of cocoa in Ghana to mirids are significant. Therefore, accurate timing of insecticide application is critical to enhancing yields. However, cocoa farmers often lack information on the expected mirid population for each season that would enable them to optimise pesticide use. This study assessed farmers' knowledge and perceptions of mirid control and their willingness to use forecasting systems informing them of expected mirid peaks and the timing of pesticide application. A total of 280 farmers were interviewed in the Eastern and Ashanti regions of Ghana with a structured questionnaire of open- and closed-ended questions. Most farmers (87%) considered mirids the most important insect pest of cocoa, with 47% of them attributing 30-40% of annual crop loss to mirid damage. There was wide variation in the timing of insecticide application because farmers used different sources of information to guide the start of application. The majority of farmers (56%) did not have access to information on the type, frequency and timing of insecticides to use; however, respondents who were members of farmer groups had better access to such information. Extension officers were the preferred channel for information transfer, with 72% of farmers preferring them to other available methods of communication. Almost all respondents (99%) saw the need for a comprehensive forecasting system to help farmers manage cocoa mirids. The importance of accurately timing mirid control based on forecast information supplied to farmer groups and extension officers is discussed.
Abstract:
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which performs best at forecasting outcomes. In line with the existing literature, we bias-correct the opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally, we note that the forecast horizon matters: whereas prediction-market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing election forecasts from polls and prediction markets.
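The accuracy/bias/precision comparison described above amounts to summarizing forecast errors per source; the sketch below does this for made-up vote-share numbers, purely to show the metrics involved, and is not the paper's dataset or methodology.

```python
# Toy comparison of forecast sources via bias, RMSE and error spread.
import numpy as np

def forecast_scores(forecast, outcome):
    err = np.asarray(forecast) - np.asarray(outcome)
    return {"bias": err.mean(),                  # systematic over/under-prediction
            "rmse": np.sqrt((err ** 2).mean()),  # accuracy
            "sd": err.std(ddof=1)}               # precision (spread of errors)

polls   = [0.52, 0.48, 0.55, 0.47]   # hypothetical vote-share forecasts
markets = [0.53, 0.47, 0.56, 0.49]
actual  = [0.54, 0.46, 0.57, 0.50]
print("polls:  ", forecast_scores(polls, actual))
print("markets:", forecast_scores(markets, actual))
```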
Abstract:
Ecological forecasting is difficult but essential, because reactive management results in corrective actions that are often too late to avert significant environmental damage. Here, we appraise different forecasting methods with a particular focus on the modelling of species populations. We show how simple extrapolation of current trends in state is often inadequate because environmental drivers change in intensity over time and new drivers emerge. However, statistical models incorporating relationships with drivers simply offset the prediction problem, requiring us to forecast how the drivers will themselves change over time. Some authors approach this problem by focusing in detail on a single driver, whilst others use 'storyline' scenarios, which consider projected changes in a wide range of different drivers. We explain why both approaches are problematic and identify a compromise: modelling key drivers and their interactions, along with possible response options, to help inform environmental management. We also highlight the crucial role of validating forecasts against independent data. Although these issues are relevant for all types of ecological forecasting, we provide examples based on forecasts for populations of UK butterflies. We show that a high goodness of fit for models calibrated on historical data is not sufficient for good forecasting. Long-term biological recording schemes, rather than experiments, will often provide the data for ecological forecasting and validation, because these schemes allow capture of landscape-scale land-use effects and their interactions with other drivers.
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
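The model-free decomposition and a HAR-with-jumps regression of the kind referred to above can be sketched as follows, using realized variance and bipower variation to separate the jump component; the exact HAR specification and data handling in the paper may differ.

```python
# Sketch of realized-variance decomposition and a HAR-RV-J style regression.
# Series construction and regressors are illustrative, not the paper's exact setup.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def realized_measures(intraday_returns):
    """Daily realized variance, bipower variation and jump component."""
    r = np.asarray(intraday_returns)
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    jump = max(rv - bv, 0.0)          # discontinuous (jump) component
    return rv, bv, jump

def har_with_jumps(rv, jumps):
    """Regress next-day RV on daily/weekly/monthly RV averages plus the jump term.
    rv and jumps are daily series, e.g. built by applying realized_measures per day."""
    df = pd.DataFrame({"rv": rv, "jump": jumps})
    df["rv_w"] = df["rv"].rolling(5).mean()      # weekly average
    df["rv_m"] = df["rv"].rolling(22).mean()     # monthly average
    df["target"] = df["rv"].shift(-1)            # next-day realized variance
    df = df.dropna()
    X = sm.add_constant(df[["rv", "rv_w", "rv_m", "jump"]])
    return sm.OLS(df["target"], X).fit()
```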
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using their meteorological output to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in recent years in, for example, modelling capabilities, data and resources have made it possible to produce global-scale flood forecasting systems. In this paper, the current state of operational large-scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large-scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium range and to disseminate forecasts and, in some cases, early warning products in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement for multi-hazard early warning systems for disaster risk reduction.
Abstract:
During the eruption of Eyjafjallajökull in April and May 2010, the London Volcanic Ash Advisory Centre demonstrated the importance of infrared (IR) satellite imagery for monitoring volcanic ash and validating the Met Office operational model, NAME. This model is used to forecast ash dispersion and forms much of the basis of the advice given to civil aviation. NAME requires a source term describing the properties of the eruption plume at the volcanic source. Elements of the source term are often highly uncertain, and significant effort has therefore been invested in using satellite observations of ash clouds to constrain them. This paper presents a data insertion method, in which satellite observations of downwind ash clouds are used to create effective 'virtual sources' far from the vent. Uncertainty in the model output is known to increase over the duration of a model run, as inaccuracies in the source term, meteorological data and the parameterizations of the modelled processes accumulate. This new technique, in which the dispersion model (DM) is 'reinitialized' part-way through a run, could go some way towards addressing this.
Abstract:
On 23 November 1981, a strong cold front swept across the U.K., producing tornadoes from the west coast to the east coast. An extensive campaign to collect tornado reports by the Tornado and Storm Research Organisation (TORRO) resulted in 104 reports, the largest U.K. outbreak. The front was simulated with a convection-permitting numerical model down to 200-m horizontal grid spacing to better understand its evolution and meteorological environment. The event was typical of tornadoes in the U.K., with convective available potential energy (CAPE) less than 150 J kg⁻¹, 0-1-km wind shear of 10-20 m s⁻¹, and a narrow cold-frontal rainband forming precipitation cores and gaps. A line of cyclonic absolute vorticity existed along the front, with maxima as large as 0.04 s⁻¹. Some hook-shaped misovortices bore kinematic similarity to supercells. The narrow swath along which the line was tornadic was bounded on the equatorward side by weak vorticity along the line and on the poleward side by zero CAPE, enclosing a region in which the environment was otherwise favorable for tornadogenesis. To determine whether the 104 tornado reports were plausible, possible duplicate reports were first eliminated, leaving between 58 and 90 distinct tornadoes. Second, the number of possible parent misovortices that may have spawned tornadoes was estimated from model output. Between 22 and 44 of the plausible tornado reports fell within the 200-m grid-spacing domain, whereas the model simulation was used to estimate 30 possible parent misovortices within this domain. These results suggest that 90 reports were plausible.
Abstract:
The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and assessed against the standard convection scheme, whose only stochasticity comes from simple random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
Abstract:
Ghana has faced the macroeconomic problem of inflation for a long period of time, and the problem has to some extent slowed economic growth in the country. Inflation is one of the major economic challenges facing most countries in the world, especially those in Africa, including Ghana. Forecasting inflation rates in Ghana is therefore very important for the government in designing economic strategies and effective monetary policies to combat any unexpectedly high inflation. This paper studies a seasonal autoregressive integrated moving average (SARIMA) model to forecast inflation rates in Ghana. Using monthly inflation data from July 1991 to December 2009, we find that an ARIMA(1,1,1)(0,0,1)12 model represents the behaviour of Ghana's inflation rate well. Based on the selected model, we forecast inflation rates for seven (7) months outside the sample period (i.e., from January 2010 to July 2010). The observed inflation rates from January to April, published by the Ghana Statistical Service, fall within the 95% confidence intervals obtained from the designed model. The forecasts show a decreasing pattern and a turning point in Ghana's inflation in the month of July.
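Fitting the reported ARIMA(1,1,1)(0,0,1)12 specification and producing a seven-month forecast with 95% intervals can be sketched as below; the file name and series layout are assumptions for illustration, not the paper's data source.

```python
# Minimal SARIMA sketch for a monthly inflation series (illustrative file/column names).
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

infl = pd.read_csv("ghana_inflation_monthly.csv",
                   index_col=0, parse_dates=True)["inflation"]

model = SARIMAX(infl, order=(1, 1, 1), seasonal_order=(0, 0, 1, 12))
fit = model.fit(disp=False)
print(fit.summary())

# Seven-month out-of-sample forecast with 95% intervals, as reported in the abstract
fc = fit.get_forecast(steps=7)
print(fc.predicted_mean)
print(fc.conf_int(alpha=0.05))
```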