958 results for empirical models
Abstract:
The final warming date of the polar vortex is a key component of Southern Hemisphere stratospheric and tropospheric variability in spring and summer. We examine the effect of external forcings on Southern Hemisphere final warming date, and the sensitivity of any projected changes to model representation of the stratosphere. Final warming date is calculated using a temperature-based diagnostic for ensembles of high- and low-top CMIP5 models, under the CMIP5 historical, RCP4.5, and RCP8.5 forcing scenarios. The final warming date in the models is generally too late in comparison with those from reanalyses: around two weeks too late in the low-top ensemble, and around one week too late in the high-top ensemble. Ensemble Empirical Mode Decomposition (EEMD) is used to analyse past and future change in final warming date. Both the low- and high-top ensembles show the characteristic behaviour expected in response to changes in greenhouse gas and stratospheric ozone concentrations. In both ensembles, under both scenarios, an increase in final warming date is seen between 1850 and 2100, with the latest dates occurring in the early twenty-first century, associated with the minimum in stratospheric ozone concentrations in this period. However, this response is more pronounced in the high-top ensemble. The high-top models show a delay in final warming date in RCP8.5 that is not produced by the low-top models, which are shown to be less responsive to greenhouse gas forcing. This suggests that it may be necessary to use stratosphere-resolving models to accurately predict Southern Hemisphere surface climate change.
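As a minimal, hedged sketch of the EEMD step described above, the snippet below decomposes a yearly final-warming-date series into intrinsic mode functions; it assumes the PyEMD package (`EMD-signal` on PyPI) and uses a placeholder series rather than any CMIP5 output.

```python
# Minimal EEMD decomposition of a yearly final-warming-date series.
# Assumes the PyEMD package (pip install EMD-signal); the data are placeholders.
import numpy as np
from PyEMD import EEMD

years = np.arange(1850, 2101)
# Hypothetical series: slow trend plus interannual noise (stand-in for model output).
fwd = 330 + 0.02 * (years - 1850) + np.random.normal(0, 5, years.size)

eemd = EEMD(trials=200)           # number of noise-added ensemble members
imfs = eemd.eemd(fwd, years)      # rows: intrinsic mode functions, fast to slow

# Treat the slowest modes as an estimate of the externally forced change in FWD.
trend = imfs[-2:].sum(axis=0)
print(imfs.shape, trend[:5])
```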
Abstract:
This paper investigates the application and use of development viability models in the formation of planning policies in the UK. Particular attention is paid to three key areas: the assumed development scheme in development viability models, the use of forecasts, and the debate concerning Threshold Land Value. The empirical section reports on the results of an interview survey involving the main producers of development viability models and appraisals. It is concluded that, although development viability models have intrinsic limitations associated with model composition and input uncertainties, the most significant limitations are related to the ways that they have been adapted for use in the planning system. In addition, it is suggested that the contested nature of Threshold Land Value is an example of calculative practices providing a façade of technocratic rationality in the planning system.
Abstract:
Developing models to predict the effects of social and economic change on agricultural landscapes is an important challenge. Model development often involves making decisions about which aspects of the system require detailed description and which are reasonably insensitive to the assumptions. However, important components of the system are often left out because parameter estimates are unavailable. In particular, measurements of the relative influence of different objectives, such as risk and environmental management, on farmer decision making have proven difficult to quantify. We describe a model that can make predictions of land use on the basis of profit alone or with the inclusion of explicit additional objectives. Importantly, our model is specifically designed to use parameter estimates for additional objectives obtained via farmer interviews. By statistically comparing the outputs of this model with a large farm-level land-use data set, we show that cropping patterns in the United Kingdom contain a significant contribution from farmers’ preference for objectives other than profit. In particular, we found that risk aversion had an effect on the accuracy of model predictions, whereas preference for a particular number of crops grown was less important. While nonprofit objectives have frequently been identified as factors in farmers’ decision making, our results take this analysis further by demonstrating the relationship between these preferences and actual cropping patterns.
Abstract:
We compare a number of models of post-war US output growth in terms of the degree and pattern of non-linearity they impart to the conditional mean, where we condition on either the previous period's growth rate or the previous two periods' growth rates. The conditional means are estimated non-parametrically using a nearest-neighbour technique on data simulated from the models. In this way, we condense the complex, dynamic responses that may be present into graphical displays of the implied conditional mean.
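A hedged sketch of the nearest-neighbour conditional-mean idea follows: growth rates are simulated from a simple AR(1) stand-in (not any of the paper's models), and E[y_t | y_{t-1}] is estimated non-parametrically with k-nearest neighbours from scikit-learn.

```python
# Nearest-neighbour estimate of the conditional mean E[y_t | y_{t-1}]
# from data simulated by a stand-in AR(1) model (illustrative only).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
T = 5000
y = np.zeros(T)
for t in range(1, T):                      # simple AR(1) stand-in for an output-growth model
    y[t] = 0.3 * y[t - 1] + rng.normal(0, 1)

X, target = y[:-1].reshape(-1, 1), y[1:]   # condition on the previous period's growth
knn = KNeighborsRegressor(n_neighbors=50).fit(X, target)

grid = np.linspace(y.min(), y.max(), 100).reshape(-1, 1)
cond_mean = knn.predict(grid)              # implied conditional-mean curve for plotting
```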
Abstract:
In this paper we discuss the current state-of-the-art in estimating, evaluating, and selecting among non-linear forecasting models for economic and financial time series. We review theoretical and empirical issues, including predictive density, interval and point evaluation and model selection, loss functions, data-mining, and aggregation. In addition, we argue that although the evidence in favor of constructing forecasts using non-linear models is rather sparse, there is reason to be optimistic. However, much remains to be done. Finally, we outline a variety of topics for future research, and discuss a number of areas which have received considerable attention in the recent literature, but where many questions remain.
Abstract:
Vintage-based vector autoregressive models of a single macroeconomic variable are shown to be a useful vehicle for obtaining forecasts of different maturities of future and past observations, including estimates of post-revision values. The forecasting performance of models which include information on annual revisions is superior to that of models which only include the first two data releases. However, the empirical results indicate that a model which reflects the seasonal nature of data releases more closely does not offer much improvement over an unrestricted vintage-based model which includes three rounds of annual revisions.
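As a minimal sketch of a vintage-based VAR, the snippet below fits a vector autoregression across successive data releases of one variable using statsmodels; the real-time data matrix and its column names are hypothetical.

```python
# Fit a VAR across data vintages of one macro variable (illustrative data).
# Each column is a different maturity/vintage of the same observation.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 200
truth = rng.normal(0.5, 1.0, n)                       # hypothetical "post-revision" values
vintages = pd.DataFrame({
    "release_1": truth + rng.normal(0, 0.5, n),       # first release, noisiest
    "release_2": truth + rng.normal(0, 0.3, n),
    "annual_rev": truth + rng.normal(0, 0.1, n),      # after one round of annual revision
})

results = VAR(vintages).fit(maxlags=2)
forecast = results.forecast(vintages.values[-2:], steps=4)   # joint forecast of all vintages
print(results.aic, forecast.shape)
```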
Abstract:
The mixing of floes of different thickness caused by repeated deformation of the ice cover is modeled as diffusion, and the mass balance equation for sea ice accounting for mass diffusion is developed. The effect of deformational diffusion on the ice thickness balance is shown to reach 1% of the divergence effect, which describes ridging and lead formation. This means that with the same accuracy the mass balance equation can be written in terms of mean velocity rather than mean mass-weighted velocity, which one should correctly use for a multicomponent fluid such as sea ice with components identified by floe thickness. Mixing (diffusion) of sea ice also occurs because of turbulent variations in wind and ocean drags that are unresolved in models. Estimates of the importance of turbulent mass diffusion on the dynamic redistribution of ice thickness are determined using empirical data for the turbulent diffusivity. For long-time-scale prediction (≫5 days), where unresolved atmospheric motion may have a length scale on the order of the Arctic basin and the time scale is larger than the synoptic time scale of atmospheric events, turbulent mass diffusion can exceed 10% of the divergence effect. However, for short-time-scale prediction, for example, 5 days, the unresolved scales are on the order of 100 km, and turbulent diffusion is about 0.1% of the divergence effect. Because inertial effects are small in the dynamics of the sea ice pack, diffusive momentum transfer can be disregarded.
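As an illustrative sketch only (not the paper's derivation), a thickness balance with a deformational or turbulent diffusion term can be written in a generic advection-diffusion form, where h is mean ice thickness, u the mean ice velocity, D an empirical diffusivity, and S_h the thermodynamic source:

```latex
\frac{\partial h}{\partial t} + \nabla \cdot \left( \mathbf{u}\, h \right)
  = \nabla \cdot \left( D \, \nabla h \right) + S_h
```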
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide better descriptions of the data than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
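A hedged sketch of the two-regime Markov switching comparison is given below, assuming statsmodels' MarkovRegression with switching variance; the return series is illustrative and does not reproduce the real estate index data or the paper's exact specification.

```python
# Two-regime Markov switching model with switching mean and variance
# (illustrative data standing in for a real estate value-index return series).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
calm = rng.normal(0.01, 0.02, 300)
stress = rng.normal(-0.02, 0.06, 100)
returns = np.concatenate([calm, stress, calm[:100]])

mod = sm.tsa.MarkovRegression(returns, k_regimes=2, switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])   # probability of each regime over time
```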
Abstract:
We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
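The stand-in below is not a full geoadditive GAM: it uses a negative binomial GLM with a spline term (via patsy's bs) simply to illustrate the overdispersion-plus-nonlinearity ingredients of the Geo-NB-GAM; the regional variables and data are hypothetical.

```python
# Simplified stand-in for the Geo-NB-GAM: a negative binomial GLM with a
# spline term for agglomeration density (hypothetical regional data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 249
density = rng.lognormal(mean=1.0, sigma=0.8, size=n)        # economic density proxy
gdp = rng.normal(10, 2, n)
mu = np.exp(0.5 + 0.8 * np.log1p(density) - 0.02 * density + 0.05 * gdp)
investments = rng.negative_binomial(n=2, p=2 / (2 + mu))    # overdispersed counts

df = pd.DataFrame({"investments": investments, "density": density, "gdp": gdp})
model = smf.glm("investments ~ bs(density, df=4) + gdp", data=df,
                family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```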
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
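A minimal sketch of the regression-based empirical forecast idea follows: CO2-equivalent concentration as the lead predictor plus an ENSO index, with a Gaussian predictive distribution built from the residual spread. All predictor values and names are placeholders, not the system's actual inputs.

```python
# Empirical seasonal forecast sketch: multiple linear regression with CO2-eq
# as the lead predictor and an ENSO index, yielding a simple probabilistic forecast.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
years = np.arange(1961, 2014)
co2eq = 330 + 1.8 * (years - 1961)                 # placeholder CO2-equivalent series
nino34 = rng.normal(0, 1, years.size)              # placeholder ENSO index
temp = 0.01 * co2eq + 0.3 * nino34 + rng.normal(0, 0.4, years.size)

X = np.column_stack([np.ones_like(co2eq), co2eq, nino34])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid_sd = np.std(temp - X @ beta, ddof=X.shape[1])

x_new = np.array([1.0, 480.0, 1.2])                # hypothetical forecast-year predictors
mean_forecast = x_new @ beta

# Tercile probabilities from the Gaussian predictive distribution.
terciles = np.quantile(temp, [1 / 3, 2 / 3])
p_below = norm.cdf(terciles[0], mean_forecast, resid_sd)
p_above = 1 - norm.cdf(terciles[1], mean_forecast, resid_sd)
print(mean_forecast, p_below, 1 - p_below - p_above, p_above)
```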
Abstract:
The contraction of a species’ distribution range, which results from the extirpation of local populations, generally precedes its extinction. Therefore, understanding drivers of range contraction is important for conservation and management. Although there are many processes that can potentially lead to local extirpation and range contraction, three main null models have been proposed: demographic, contagion, and refuge. The first two models postulate that the probability of local extirpation for a given area depends on its relative position within the range; but these models generate distinct spatial predictions because they assume either a ubiquitous (demographic) or a clinal (contagion) distribution of threats. The third model (refuge) postulates that extirpations are determined by the intensity of human impacts, leading to heterogeneous spatial predictions potentially compatible with those made by the other two null models. A few previous studies have explored the generality of some of these null models, but we present here the first comprehensive evaluation of all three models. Using descriptive indices and regression analyses, we contrast the predictions made by each of the null models using empirical spatial data describing range contraction in 386 terrestrial vertebrates (mammals, birds, amphibians, and reptiles) distributed across the world. Observed contraction patterns do not consistently conform to the predictions of any of the three models, suggesting that these may not be adequate null models to evaluate range contraction dynamics among terrestrial vertebrates. Instead, our results support alternative null models that account for both relative position and intensity of human impacts. These new models provide a better multifactorial baseline to describe range contraction patterns in vertebrates. This general baseline can be used to explore how additional factors influence contraction, and ultimately extinction, for particular areas or species, as well as to predict future changes in light of current and new threats.
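A hedged illustration of contrasting the null models with regression: extirpation (0/1) per grid cell is modelled as a function of relative position within the range and human-impact intensity. The variables and data below are hypothetical placeholders, not the study's.

```python
# Contrast range-contraction null models: does extirpation depend on relative
# position within the range (demographic/contagion) or human impact (refuge)?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_cells = 2000
rel_position = rng.uniform(0, 1, n_cells)      # 0 = range core, 1 = range edge
human_impact = rng.uniform(0, 1, n_cells)      # e.g. a human footprint index
logit = -1.0 + 1.5 * human_impact + 0.5 * rel_position
extirpated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([rel_position, human_impact]))
fit = sm.Logit(extirpated, X).fit(disp=0)
print(fit.params)          # positive effects of both terms support a combined model
```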
Abstract:
Liquidity is a fundamentally important facet of investments, but there is no single measure that quantifies it perfectly. Instead, a range of measures are necessary to capture different dimensions of liquidity such as the breadth and depth of markets, the costs of transacting, the speed with which transactions can occur and the resilience of prices to trading activity. This article considers how different dimensions have been measured in financial markets and for various forms of real estate investment. The purpose of this exercise is to establish the range of liquidity measures that could be used for real estate investments before considering which measures and questions have been investigated so far. Most measures reviewed here are applicable to public real estate, but not all can be applied to private real estate assets or funds. Use of a broader range of liquidity measures could help real estate researchers tackle issues such as quantification of illiquidity premiums for the real estate asset class or different types of real estate, and how liquidity differences might be incorporated into portfolio allocation models.
Abstract:
Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models, the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
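A simplified, non-phylogenetic sketch of why the OU model can be spuriously favoured: simulate a plain Brownian path, fit both BM and OU by maximum likelihood, and compare them with a likelihood ratio test. A proper comparative analysis would use the phylogenetic covariance structure, which is omitted here.

```python
# Spurious support for OU: simulate Brownian motion, then compare BM vs OU fits
# on the sampled path with a likelihood ratio test (non-phylogenetic illustration).
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(6)
dt, n = 1.0, 50                                     # short series, as in many empirical datasets
x = np.cumsum(rng.normal(0, 1.0, n)) * np.sqrt(dt)  # true model: Brownian motion

def bm_negll(log_sigma):
    sigma = np.exp(log_sigma)
    return -norm.logpdf(np.diff(x), 0.0, sigma * np.sqrt(dt)).sum()

def ou_negll(params):
    log_sigma, log_theta = params
    sigma, theta = np.exp(log_sigma), np.exp(log_theta)
    mean = x[:-1] * np.exp(-theta * dt)             # OU pulled toward an optimum at 0
    var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    return -norm.logpdf(x[1:], mean, np.sqrt(var)).sum()

bm = minimize(lambda p: bm_negll(p[0]), x0=[0.0])
ou = minimize(ou_negll, x0=[0.0, -2.0])
lr = 2 * (bm.fun - ou.fun)                          # OU nests BM as theta -> 0
print("LR =", lr, "p =", chi2.sf(lr, df=1))         # a small p would (wrongly) favour OU
```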
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data obtained from the social research area. In the first approach, we use normal models with joint regression modelling for the mean and for the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included in the regression modelling for the mean and for the variance as explanatory variables, while in the second case, the variance at level 1 of the hierarchical model depends on the individuals (age of the individuals), and at level 2 of the hierarchical model, the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that convergence of the Gibbs sampling chain used in the Markov Chain Monte Carlo method for the joint modelling of the mean and variance heterogeneity is achieved quickly.
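A compact sketch of joint modelling of the mean and the variance heterogeneity, fitted here by maximum likelihood rather than the paper's Bayesian/Gibbs implementation: both the mean and the log-variance are linear in covariates. The data and covariate names are hypothetical.

```python
# Joint regression for the mean and the (log-)variance of a normal response,
# fitted by maximum likelihood; illustrative stand-in for the Bayesian version.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 500
age = rng.uniform(18, 60, n)
stratum = rng.integers(1, 7, n)                       # socioeconomic stratum 1..6
height = 150 + 0.1 * age + 2.0 * stratum + rng.normal(0, 2 + 0.5 * stratum)

Xm = np.column_stack([np.ones(n), age, stratum])      # mean covariates
Xv = np.column_stack([np.ones(n), stratum])           # variance covariates

def negll(params):
    beta, gamma = params[:3], params[3:]
    mu = Xm @ beta
    sd = np.exp(0.5 * Xv @ gamma)                     # log-variance linear in covariates
    return -norm.logpdf(height, mu, sd).sum()

x0 = np.concatenate([[height.mean(), 0.0, 0.0], [np.log(height.var()), 0.0]])
fit = minimize(negll, x0=x0, method="BFGS")
print(fit.x[:3], fit.x[3:])                           # mean and variance coefficients
```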
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969, Estimation in multi-stage surveys, J. Amer. Statist. Assoc. 64, 830–840) or from the finite population mixed model recently proposed by Stanek and Singer (2004, Predicting random effects from finite population clustered samples with response error, J. Amer. Statist. Assoc. 99, 1119–1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and, when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar.
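A minimal sketch of the method-of-moments step for a balanced one-way (cluster) design: ANOVA-type estimators of the between- and within-cluster variance components feed a shrinkage-type empirical predictor of the cluster means. This is an illustration under simplifying assumptions, not the paper's finite population predictor.

```python
# Method-of-moments variance components for a balanced one-way design and the
# resulting shrinkage (empirical) predictor of cluster means (illustration only).
import numpy as np

rng = np.random.default_rng(8)
k, m = 30, 10                                   # clusters, units sampled per cluster
cluster_effects = rng.normal(0, 2.0, k)
y = cluster_effects[:, None] + rng.normal(0, 3.0, (k, m))

grand_mean = y.mean()
cluster_means = y.mean(axis=1)
msb = m * ((cluster_means - grand_mean) ** 2).sum() / (k - 1)   # between-cluster MS
msw = ((y - cluster_means[:, None]) ** 2).sum() / (k * (m - 1)) # within-cluster MS

sigma2_e = msw                                  # method-of-moments estimates
sigma2_u = max((msb - msw) / m, 0.0)

shrink = sigma2_u / (sigma2_u + sigma2_e / m)   # shrinkage toward the grand mean
predictor = grand_mean + shrink * (cluster_means - grand_mean)
print(sigma2_u, sigma2_e, predictor[:3])
```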