148 results for Spatial conditional autoregressive model
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
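For orientation, the four specifications under comparison take the following conventional forms, where W denotes the spatial weight matrix and the WX term carries spillovers from neighbouring units (standard notation assumed for illustration, not reproduced from the paper):

```latex
\begin{aligned}
\text{Linear:} \quad & y = X\beta + \varepsilon \\
\text{SLX:}    \quad & y = X\beta + WX\theta + \varepsilon \\
\text{SDM:}    \quad & y = \rho W y + X\beta + WX\theta + \varepsilon \\
\text{SDEM:}   \quad & y = X\beta + WX\theta + u, \qquad u = \lambda W u + \varepsilon
\end{aligned}
```

The spatial Durbin error model thus retains the observable spillover term WXθ while placing the unobserved spatial dependence in the disturbance through λ.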
Abstract:
In many lower-income countries, the establishment of marine protected areas (MPAs) involves significant opportunity costs for artisanal fishers, reflected in changes in how they allocate their labor in response to the MPA. The resource economics literature rarely addresses such labor allocation decisions of artisanal fishers and how, in turn, these contribute to the impact of MPAs on fish stocks, yield, and income. This paper develops a spatial bio-economic model of a fishery adjacent to a village of people who allocate their labor between fishing and on-shore wage opportunities to establish a spatial Nash equilibrium at a steady-state fish stock in response to various locations for no-take zone MPAs and managed-access MPAs. Villagers’ fishing location decisions are based on distance costs, fishing returns, and wages. Here, the MPA location determines its impact on fish stocks, fish yield, and villager income due to distance costs, congestion, and fish dispersal. Incorporating wage labor opportunities into the framework allows examination of the MPA’s impact on rural incomes, with results showing that win-wins between yield and stocks occur in very different MPA locations than do win-wins between income and stocks. Similarly, villagers in a high-wage setting face a lower burden from MPAs than do those in low-wage settings. Motivated by issues of central importance in Tanzania and Costa Rica, we impose various policies on this fishery – location-specific no-take zones, increasing on-shore wages, and restricting MPA access to a subset of villagers – to analyze the impact of an MPA on fish stocks and rural incomes in such settings.
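As a toy illustration of the labor-allocation margin described above, the sketch below has a single villager compare the net payoff of each fishing patch (returns less distance costs) with the on-shore wage. All names and numbers are hypothetical, and the congestion and fish-dispersal feedbacks that drive the paper's spatial Nash equilibrium are deliberately omitted.

```python
# Hypothetical sketch of one villager's labor choice (not the paper's model).

def best_option(patches, wage):
    """Pick the fishing patch, or on-shore wage work, with the highest payoff.
    Each patch offers a gross fishing return net of a travel (distance) cost;
    patches inside a no-take MPA are unavailable."""
    best_name, best_payoff = "wage_work", wage
    for name, (gross_return, distance_cost, in_mpa) in patches.items():
        if in_mpa:                      # no-take zone: fishing prohibited
            continue
        payoff = gross_return - distance_cost
        if payoff > best_payoff:
            best_name, best_payoff = name, payoff
    return best_name, best_payoff

patches = {
    "near_reef": (12.0, 2.0, True),     # closed as a no-take MPA
    "mid_reef":  (10.0, 3.5, False),
    "far_reef":  (11.0, 6.0, False),
}
print(best_option(patches, wage=5.0))   # -> ('mid_reef', 6.5)
```

Raising the wage above 6.5 in this toy example pulls the villager out of fishing entirely, which is the mechanism behind the abstract's point that high-wage settings bear a lower burden from MPAs.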
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
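Schematically, a bivariate linear mixed model of this kind treats the observed and predicted values at each site as sums of components at the three scales listed; the notation below is an illustrative assumption, not the authors' exact formulation:

```latex
z_i(\mathbf{s}) \;=\;
  \underbrace{\mathbf{x}(\mathbf{s})^{\top}\boldsymbol{\tau}_i}_{\text{spatial trend}}
  \;+\; \underbrace{\eta_i(\mathbf{s})}_{\text{coarse-scale, cross-correlated}}
  \;+\; \underbrace{\varepsilon_i(\mathbf{s})}_{\text{fine-scale, independent}},
  \qquad i = 1~(\text{observed}),\; 2~(\text{predicted}),
```

with the variance and cross-covariance parameters of η and ε estimated by residual maximum likelihood, so that the correlation between model and data can be read off separately at each scale.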
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both describe the data better than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
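A minimal sketch of the Markov switching side of such a comparison, using statsmodels on synthetic returns (all parameters are illustrative; the paper's indices and specification will differ):

```python
# Two-state Markov switching model with regime-dependent mean and variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic returns: a calm regime followed by a volatile regime.
returns = np.concatenate([rng.normal(0.005, 0.01, 200),
                          rng.normal(-0.002, 0.04, 100)])

model = sm.tsa.MarkovRegression(returns, k_regimes=2,
                                trend="c", switching_variance=True)
res = model.fit()
print(res.summary())
# Smoothed probability of being in the second (volatile) regime:
print(res.smoothed_marginal_probabilities[:, 1])
```

A threshold autoregressive alternative would instead switch regimes deterministically when an observable variable crosses an estimated threshold, which is why the two models can capture non-stationary features differently.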
Abstract:
We introduce a modified conditional logit model that takes account of uncertainty associated with mis-reporting in revealed preference experiments estimating willingness-to-pay (WTP). Like Hausman et al. [Journal of Econometrics (1998), Vol. 87, pp. 239-269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non-pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, that take account of mis-reporting, are substantially revised downwards. We find a significant proportion of respondents mis-reporting in favour of the non-pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.
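One conventional way to formalize mis-reporting in a conditional logit, in the spirit of Hausman et al. (1998), is to treat the reported choice as a mixture over true choices; the notation below is assumed for illustration and is not the authors' exact likelihood:

```latex
\Pr(\text{report } j \mid x) \;=\; \sum_{k} m_{kj}\,
  \frac{\exp(x_k^{\top}\beta)}{\sum_{l}\exp(x_l^{\top}\beta)},
  \qquad \sum_{j} m_{kj} = 1,
```

where m_{kj} is the probability that a respondent whose true choice is k reports j. In a Bayesian treatment, priors are placed on β and on the mis-reporting probabilities, and WTP estimates follow from the posterior of β.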
Abstract:
Traditionally, siting and sizing decisions for parks and reserves reflected ecological characteristics but typically failed to consider ecological costs created from displaced resource collection, welfare costs on nearby rural people, and enforcement costs. Using a spatial game-theoretic model that incorporates the interaction of socioeconomic and ecological settings, we show how incorporating more recent mandates that include rural welfare and surrounding landscapes can result in very different optimal sizing decisions. The model informs our discussion of recent forest management in Tanzania, reserve sizing and siting decisions, estimating reserve effectiveness, and determining patterns of avoided forest degradation in Reduced Emissions from Deforestation and Forest Degradation programs.
Abstract:
Linear models of market performance may be misspecified if the market is subdivided into distinct regimes exhibiting different behaviour. Price movements in the US Real Estate Investment Trusts and UK Property Companies Markets are explored using a Threshold Autoregressive (TAR) model with regimes defined by the real rate of interest. In both US and UK markets, distinctive behaviour emerges, with the TAR model offering better predictive power than a more conventional linear autoregressive model. The research points to the possibility of developing trading rules to exploit the systematically different behaviour across regimes.
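A minimal sketch of estimating such a threshold by grid search, with the AR(1) dynamics of each regime set by whether a threshold variable (here standing in for the real interest rate) was above or below the candidate threshold in the previous period; data and parameters are synthetic:

```python
import numpy as np

def regime_ssr(y, mask):
    """OLS of y_t on (1, y_{t-1}) using only observations t with mask[t] True."""
    t = np.where(mask)[0]
    t = t[t >= 1]                            # need a lagged value
    X = np.column_stack([np.ones(len(t)), y[t - 1]])
    coefs, *_ = np.linalg.lstsq(X, y[t], rcond=None)
    resid = y[t] - X @ coefs
    return float(resid @ resid)

def fit_tar(y, z, candidates):
    """Choose the threshold c on z minimizing the pooled SSR of the two
    regime-specific AR(1) fits (the regime of obs t is set by z[t-1])."""
    def total_ssr(c):
        below = np.r_[False, z[:-1] <= c]    # regime indicator for each t
        return regime_ssr(y, below) + regime_ssr(y, ~below)
    return min(candidates, key=total_ssr)

rng = np.random.default_rng(1)
z = rng.normal(2.0, 1.0, 400)                # stand-in threshold variable
y = np.zeros(400)
for t in range(1, 400):                      # true threshold at z = 2.0
    phi = 0.9 if z[t - 1] <= 2.0 else 0.2
    y[t] = phi * y[t - 1] + rng.normal()

print(fit_tar(y, z, np.quantile(z, np.linspace(0.15, 0.85, 29))))
```

The estimated threshold splits the sample into the two regimes whose systematically different dynamics a trading rule would then try to exploit.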
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and the western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skilful predictions of the internal variability of SSTs in the subpolar gyre region over the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and of methods to combine different predictions.
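A minimal sketch of the "trend with AR1" benchmark for a single grid box (all series are synthetic; the variable names and the CO2 pathway are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2006)
co2 = 280 + 0.006 * (years - 1900) ** 2       # stand-in equivalent-CO2 series
sst = 14 + 0.01 * (co2 - 280) + rng.normal(0, 0.15, len(years))

# 1) Forced trend: regress the grid-box SST on equivalent CO2 concentration.
A = np.column_stack([np.ones_like(co2), co2])
beta, *_ = np.linalg.lstsq(A, sst, rcond=None)
resid = sst - A @ beta

# 2) Internal variability: AR(1) coefficient of the residuals.
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

# Decadal forecast = extrapolated trend + exponentially damped AR(1) anomaly.
future_co2 = co2[-1] + 2.0 * np.arange(1, 11)  # assumed forcing pathway
trend_part = beta[0] + beta[1] * future_co2
anomaly_part = resid[-1] * phi ** np.arange(1, 11)
print(trend_part + anomaly_part)
```

The constructed analogue model replaces step 2 by matching the current anomaly field to a linear combination of historical anomaly fields and carrying that combination forward.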
Abstract:
Following a workshop exercise, two models, an individual-based landscape model (IBLM) and a non-spatial life-history model, were used to assess the impact of a fictitious insecticide on populations of skylarks in the UK. The chosen population endpoints were abundance, population growth rate, and the chances of population persistence. Both models used the same life-history descriptors and toxicity profiles as the basis for their parameter inputs. The models differed in that exposure was a pre-determined parameter in the life-history model but an emergent property of the IBLM, and the IBLM required a landscape structure as an input. The model outputs were qualitatively similar between the two models. Under conditions dominated by winter wheat, both models predicted a population decline that was worsened by the use of the insecticide. Under broader habitat conditions, population declines were only predicted for the scenarios where the insecticide was added. Inputs to the models are very different, with the IBLM requiring a large volume of data in order to achieve the flexibility of being able to integrate a range of environmental and behavioural factors. The life-history model has very few explicit data inputs, but some of these relied on extensive prior modelling needing additional data, as described in Roelofs et al. (2005, this volume). Both models have strengths and weaknesses; hence the ideal approach is to combine the use of both simple and comprehensive modelling tools.
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
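A minimal sketch of a low-frequency periodogram estimate of the Hurst exponent, illustrating the sensitivity the abstract describes (synthetic AR(1) input; the deseasonalizing and filtering steps the authors emphasize are assumed already done):

```python
import numpy as np

def hurst_lowfreq(x, frac=0.1):
    """Regress log periodogram on log frequency over the lowest `frac`
    of frequencies; for a power law S(f) ~ f^(1-2H), slope = 1 - 2H."""
    x = x - x.mean()
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:]            # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2 / n
    m = max(int(frac * len(freqs)), 8)
    slope = np.polyfit(np.log(freqs[:m]), np.log(power[:m]), 1)[0]
    return (1.0 - slope) / 2.0

rng = np.random.default_rng(3)
e = rng.normal(size=4096)
ar1 = np.zeros(4096)
for t in range(1, 4096):
    ar1[t] = 0.6 * ar1[t - 1] + e[t]
print(hurst_lowfreq(ar1))   # near 0.5: AR(1) has no long memory at low f
```

Widening `frac` to include higher frequencies lets the short-memory part of the spectrum contaminate the slope, which is exactly the kind of spurious result the abstract warns about.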
Abstract:
We use Hasbrouck's (1991) vector autoregressive model for prices and trades to empirically test and assess the role played by the waiting time between consecutive transactions in the process of price formation. We find that as the time duration between transactions decreases, the price impact of trades, the speed of price adjustment to trade‐related information, and the positive autocorrelation of signed trades all increase. This suggests that times when markets are most active are times when there is an increased presence of informed traders; we interpret such markets as having reduced liquidity.
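A minimal sketch of the underlying bivariate VAR of price changes and signed trades (synthetic data; the duration dependence that is the paper's contribution is omitted):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 1000
signed_trade = rng.choice([-1.0, 1.0], size=n)   # +1 = buy, -1 = sell
price_change = 0.05 * signed_trade + rng.normal(0, 0.02, n)

data = np.column_stack([price_change, signed_trade])
res = VAR(data).fit(maxlags=5, ic="aic")
irf = res.irf(10)
# Response of the price change (variable 0) to a signed-trade shock
# (variable 1): the trade's price impact traced over ten steps.
print(irf.irfs[:, 0, 1])
```

In the paper's extension, the VAR coefficients are allowed to vary with the time elapsed between transactions, so the same impulse response can be traced separately for fast and slow markets.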
Abstract:
In this paper we investigate the price discovery process in single-name credit spreads obtained from bond, credit default swap (CDS), equity and equity option prices. We analyse short-term price discovery by modelling daily changes in credit spreads in the four markets with a vector autoregressive model (VAR). We also look at price discovery in the long run with a vector error correction model (VECM). We find that in the short term the option market clearly leads the other markets in the sub-prime crisis (2007-2009). During the less severe sovereign debt crisis (2009-2012) and the pre-crisis period, options are still important but CDSs become more prominent. In the long run, deviations from the equilibrium relationship with the option market still lead to adjustments in the credit spreads observed or implied from other markets. However, options no longer dominate price discovery in any of the periods considered. Our findings have implications for traders, credit risk managers and financial regulators.
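A minimal sketch of the long-run (VECM) leg of such an analysis for two cointegrated spread series (synthetic data; the paper uses four markets and formal price-discovery measures):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(5)
n = 600
common = np.cumsum(rng.normal(0, 1.0, n))        # shared efficient spread
option_spread = common + rng.normal(0, 0.3, n)   # tracks the common factor
bond_spread = np.empty(n)
bond_spread[0] = common[0]
for t in range(1, n):                            # error-corrects toward options
    bond_spread[t] = (bond_spread[t - 1]
                      + 0.2 * (option_spread[t - 1] - bond_spread[t - 1])
                      + rng.normal(0, 0.3))

data = np.column_stack([option_spread, bond_spread])
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="n").fit()
print(res.alpha)   # adjustment loadings: the market doing the adjusting
                   # contributes less to price discovery
```

A small loading for the option spread and a large one for the bond spread would reproduce, in miniature, the finding that other markets adjust toward the option market's information.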
Abstract:
The EU FP7 Project MEGAPOLI: "Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation" (http://megapoli.info) brings together leading European research groups, state-of-the-art scientific tools and key players from non-European countries to investigate the interactions among megacities, air quality and climate. MEGAPOLI bridges the spatial and temporal scales that connect local emissions, air quality and weather with global atmospheric chemistry and climate. The paper discusses the proposed concept of multi-scale integrated modelling of megacity impacts on air quality and climate, and vice versa. This requires considering different spatial and temporal dimensions: time scales from seconds and hours (to understand the interaction mechanisms) up to years and decades (to consider climate effects); spatial resolutions, with model down- and up-scaling from street scale to global scale; and two-way interactions between meteorological and chemical processes.
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
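A minimal sketch of a HAR-type volatility regression and the resulting quantile forecast (synthetic realized variances; the fixed normal quantile is one of the distributional choices the paper compares):

```python
import numpy as np

rng = np.random.default_rng(6)
rv = np.exp(rng.normal(-9.5, 0.5, 1500))    # synthetic daily realized variance

# HAR regressors: yesterday's RV plus weekly and monthly averages.
rows, targets = [], []
for t in range(22, len(rv)):
    rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
    targets.append(rv[t])
X, y = np.array(rows), np.array(targets)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead RV forecast from the latest daily/weekly/monthly averages.
x_new = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
rv_forecast = x_new @ beta

# 1% quantile forecast under a normality assumption; the empirical
# distribution of standardized returns is the paper's alternative.
print(-2.326 * np.sqrt(rv_forecast))
```

Swapping the fixed normal quantile -2.326 for an empirical quantile of past standardized returns is what lets the forecasts adapt when the return distribution is fat-tailed or shifting.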
Abstract:
This paper examines the predictability of real estate asset returns using a number of time series techniques. A vector autoregressive model, which incorporates financial spreads, is able to improve upon the out-of-sample forecasting performance of univariate time series models at a short forecasting horizon. However, as the forecasting horizon increases, the explanatory power of such models is reduced, so that returns on real estate assets are best forecast using the long-term mean of the series. In the case of indirect property returns, such short-term forecasts can be turned into a trading rule that can generate excess returns over a buy-and-hold strategy gross of transactions costs, although none of the trading rules developed could cover the associated transactions costs. It is therefore concluded that such forecastability is entirely consistent with stock market efficiency.
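A minimal sketch of the horse race described above: a VAR of returns and a financial spread against the long-term-mean forecast, compared out of sample by RMSE (synthetic data; names and dynamics are illustrative assumptions):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n = 400
spread = np.zeros(n)
returns = np.zeros(n)
for t in range(1, n):
    spread[t] = 0.9 * spread[t - 1] + rng.normal(0, 0.2)
    returns[t] = 0.002 + 0.3 * spread[t - 1] + rng.normal(0, 0.03)

data = np.column_stack([returns, spread])
train, test = data[:-12], data[-12:, 0]          # hold out the last 12 periods

res = VAR(train).fit(maxlags=4, ic="aic")
var_fc = res.forecast(train[-res.k_ar:], steps=12)[:, 0]
mean_fc = np.full(12, train[:, 0].mean())

for name, fc in [("VAR", var_fc), ("long-term mean", mean_fc)]:
    print(name, np.sqrt(np.mean((fc - test) ** 2)))
```

With a persistent spread the VAR tends to win at the first few steps and lose its edge as the horizon grows, mirroring the pattern the abstract reports.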