81 results for Estimation of skill level


Relevance:

100.00%

Publisher:

Abstract:

The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matter from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.


This study analyzes organic adoption decisions using a rich set of time-to-organic durations collected from avocado small-holders in Michoacán, Mexico. We derive robust, intra-sample predictions about the profiles of entry and exit within the conventional-versus-organic complex, and we explore the sensitivity of these predictions to the choice of functional form. The dynamic nature of the sample allows us to make retrospective predictions, and we establish, precisely, the profile of organic entry had the respondents been availed of optimal amounts of adoption-restraining resources. A fundamental problem in the dynamic adoption literature, hitherto unrecognized, is discussed and consequent extensions are suggested.


Under increasing greenhouse gas concentrations, ocean heat uptake moderates the rate of climate change, and thermal expansion makes a substantial contribution to sea level rise. In this paper we quantify the differences in projections among atmosphere-ocean general circulation models of the Coupled Model Intercomparison Project (CMIP) in terms of transient climate response, ocean heat uptake efficiency and expansion efficiency of heat. The CMIP3 and CMIP5 ensembles have statistically indistinguishable distributions in these parameters. The ocean heat uptake efficiency varies by a factor of two across the models, explaining about 50% of the spread in ocean heat uptake in CMIP5 models with CO2 increasing at 1%/year. It correlates with the ocean global-mean vertical profiles both of temperature and of temperature change, and comparison with observations suggests the models may overestimate ocean heat uptake and underestimate surface warming, because their stratification is too weak. The models agree on the location of maxima of shallow ocean heat uptake (above 700 m) in the Southern Ocean and the North Atlantic, and on deep ocean heat uptake (below 2000 m) in areas of the Southern Ocean, in some places amounting to 40% of the top-to-bottom integral in the CMIP3 SRES A1B scenario. The Southern Ocean dominates global ocean heat uptake; consequently the eddy-induced thickness diffusivity parameter, which is particularly influential in the Southern Ocean, correlates with the ocean heat uptake efficiency. The thermal expansion produced by ocean heat uptake is 0.12 m YJ⁻¹, with an uncertainty of about 10% (1 YJ = 10²⁴ J).
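
The expansion efficiency quoted above converts directly between heat uptake and thermosteric sea-level rise; a minimal worked example (the heat uptake value here is hypothetical, chosen only for illustration):

```python
# Converting ocean heat uptake into a thermal-expansion contribution to sea
# level, using the mean expansion efficiency of heat from the abstract:
# 0.12 m per YJ, where 1 YJ = 1e24 J. The 0.2 YJ input is a made-up example.
EXPANSION_EFFICIENCY_M_PER_YJ = 0.12

def thermosteric_rise_m(heat_uptake_yj: float) -> float:
    """Sea-level rise (m) from thermal expansion for a given heat uptake (YJ)."""
    return EXPANSION_EFFICIENCY_M_PER_YJ * heat_uptake_yj

print(thermosteric_rise_m(0.2))  # 0.024 m, i.e. about 2.4 cm
```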


Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
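
The evidence-estimation idea can be illustrated with a minimal ABC rejection sketch. Everything below (priors, tolerance, summary statistic, sample sizes) is an illustrative choice, not the authors' algorithm: the acceptance rate under each model's prior gives a crude estimate of that model's evidence for the observed summary, and the ratio of acceptance rates approximates a Bayes factor.

```python
import random

random.seed(1)
obs_mean = 1.0   # "observed" summary statistic (hypothetical data)
eps = 0.05       # ABC tolerance
n_draws = 5000   # prior draws per model
n_data = 50      # observations per simulated data set

def simulate_mean(draw):
    """Mean of one simulated data set: the summary statistic used here."""
    return sum(draw() for _ in range(n_data)) / n_data

def abc_evidence(prior, simulator):
    """Acceptance rate under the prior: a crude estimate of the evidence
    p(|S(sim) - S(obs)| < eps) for this model."""
    hits = 0
    for _ in range(n_draws):
        theta = prior()
        if abs(simulate_mean(lambda: simulator(theta)) - obs_mean) < eps:
            hits += 1
    return hits / n_draws

# Model 1: exponential data, rate ~ Uniform(0.5, 2)
ev1 = abc_evidence(lambda: random.uniform(0.5, 2.0),
                   lambda lam: random.expovariate(lam))
# Model 2: unit-variance normal data, mean ~ Uniform(-2, 2)
ev2 = abc_evidence(lambda: random.uniform(-2.0, 2.0),
                   lambda mu: random.gauss(mu, 1.0))

print("approximate Bayes factor (M1 vs M2):", ev1 / max(ev2, 1e-12))
```

In the paper the evidence of each model is estimated with MCMC- or SMC-based likelihood-free samplers rather than plain rejection; rejection sampling merely shows the principle.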


A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.


We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum-efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler. The algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk market participation in the Ethiopian highlands.


There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
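
The genetic-algorithm minimisation over a multimodal cost surface can be sketched generically. The toy cost function, bounds, and GA settings below are invented for illustration; the real cost compares parametrised GWD against the ‘observed’ GWD.

```python
import math
import random

random.seed(0)

def cost(p):
    """Toy two-parameter cost with multiple local minima, standing in for the
    mismatch between parametrised and 'observed' drag profiles."""
    x, y = p
    return (x - 3.0) ** 2 + (y - 0.5) ** 2 + 0.3 * math.sin(5.0 * x) ** 2

def ga_minimize(cost, bounds, pop_size=40, generations=60, mut=0.1):
    """Truncation-selection GA: keep the best half of the population, refill
    it with averaging crossover plus Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0.0, mut) for ai, bi in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = ga_minimize(cost, bounds=[(0.0, 6.0), (-2.0, 2.0)])
print(best, cost(best))
```

A population-based search like this tolerates the multiple minima and ill-conditioning described above, where gradient descent from a single starting point could stall in the wrong basin.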


A statistical–dynamical regionalization approach is developed to assess possible changes in wind storm impacts. The method is applied to North Rhine-Westphalia (Western Germany) using the FOOT3DK mesoscale model for dynamical downscaling and ECHAM5/OM1 global circulation model climate projections. The method first classifies typical weather developments within the reanalysis period using a K-means cluster algorithm. Most historical wind storms are associated with four weather developments (primary storm-clusters). Mesoscale simulations are performed for representative elements of all clusters to derive a regional wind climatology. Additionally, 28 historical storms affecting Western Germany are simulated. Empirical functions are estimated to relate wind gust fields to insured losses. Transient ECHAM5/OM1 simulations show an enhanced frequency of primary storm-clusters and storms for 2060–2100 compared to 1960–2000. Accordingly, wind gusts increase over Western Germany, reaching locally +5% for 98th wind gust percentiles (A2 scenario). Consequently, storm losses are expected to increase substantially (+8% for the A1B scenario, +19% for the A2 scenario). Regional patterns show larger changes over north-eastern parts of North Rhine-Westphalia than for western parts. For storms with return periods above 20 yr, loss expectations for Germany may increase by a factor of 2. These results demonstrate the method's ability to assess future changes in loss potentials in regional terms.
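
The classification step above uses K-means. A minimal K-means (Lloyd's algorithm) sketch on synthetic two-dimensional points, standing in for the weather-development descriptors (the data here are invented, not the ECHAM5/OM1 fields):

```python
import math
import random

random.seed(2)

def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm: assign each point to its nearest centre,
    then move each centre to the mean of its cluster."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two synthetic "weather regimes" centred on (0, 0) and (5, 5):
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)] + \
      [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(100)]
centers, clusters = kmeans(pts, k=2)
print(sorted(centers))
```

In the study, each resulting cluster is then represented by one element chosen for the mesoscale downscaling runs.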


We explored the potential for using Pediastrum (Meyen), a genus of green alga commonly found in palaeoecological studies, as a proxy for lake-level change in tropical South America. The study site, Laguna La Gaiba (LLG) (17°45′S, 57°40′W), is a broad, shallow lake located along the course of the Paraguay River in the Pantanal, a 135,000 km² tropical wetland located mostly in western Brazil, but extending into eastern Bolivia. Fourteen surface sediment samples were taken from LLG across a range of lake depths (2–5.2 m) and analyzed for Pediastrum. We found seven species, of which P. musteri (Tell et Mataloni), P. argentiniense (Bourr. et Tell), and P. cf. angulosum (Ehrenb.) ex Menegh. were identified as potential indicators of lake level. Results of the modern dataset were applied to 31 fossil Pediastrum assemblages spanning the early Holocene (12.0 kyr BP) to present to infer past lake level changes qualitatively. Early Holocene (12.0–9.8 kyr BP) assemblages do not show a clear signal, though abundance of P. simplex (Meyen) suggests relatively high lake levels. Absence of P. musteri, characteristic of deep, open water, and abundance of macrophyte-associated taxa indicate lake levels were lowest from 9.8 to 3.0 kyr BP. A shift to wetter conditions began at 4.4 kyr BP, indicated by the appearance of P. musteri, though inferred lake levels did not reach modern values until 1.4 kyr BP. The Pediastrum-inferred mid-Holocene lowstand is consistent with lower precipitation, previously inferred using pollen from this site, and is also in agreement with evidence for widespread drought in the South American tropics during the middle Holocene. An inference for steadily increasing lake level from 4.4 kyr BP to present is consistent with diatom-inferred water level rise at Lake Titicaca, and demonstrates coherence with the broad pattern of increasing monsoon strength from the late Holocene until present in tropical South America.


We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
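
The notion of shift-tolerant similarity can be illustrated with a much simpler stand-in (this is not the paper's graph-based algorithm): each reading in one profile is matched against the closest value within ±w slots of the other profile.

```python
def shift_tolerant_distance(a, b, w=1):
    """Mean absolute error where each slot of `a` may match any slot of `b`
    within a window of +/- w positions. Illustrative sketch only."""
    assert len(a) == len(b)
    n = len(a)
    total = 0.0
    for i, x in enumerate(a):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        total += min(abs(x - y) for y in b[lo:hi])
    return total / n

peaky = [0, 0, 5, 0, 0, 0]
shifted = [0, 0, 0, 5, 0, 0]   # same peak, one half-hour slot later
print(shift_tolerant_distance(peaky, shifted, w=1))  # 0.0: the shift is forgiven
print(shift_tolerant_distance(peaky, shifted, w=0))  # plain pointwise MAE, ~1.67
```

With w = 0 the measure collapses to the ordinary pointwise error, which penalises the displaced peak twice: once where it was predicted and once where it occurred.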


As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
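
A minimal sketch of the restricted-permutation idea for the special case where each forecast value may move at most one slot (i.e. the permutation is a product of disjoint adjacent swaps): the best permutation under that restriction is found by dynamic programming. The paper's measure is more general (arbitrary restriction width and metric); this is illustration only.

```python
def adjusted_mae(forecast, actual):
    """MAE after the best permutation of `forecast` that moves each value at
    most one slot, found by DP over 'keep in place' vs 'swap adjacent pair'."""
    assert len(forecast) == len(actual)
    n = len(forecast)
    c = lambda i, j: abs(forecast[i] - actual[j])  # pointwise metric (L1 here)
    best = [0.0] * (n + 1)      # best[i]: minimal cost over the first i slots
    for i in range(1, n + 1):
        best[i] = best[i - 1] + c(i - 1, i - 1)            # keep slot in place
        if i >= 2:                                          # or swap slots i-2, i-1
            best[i] = min(best[i],
                          best[i - 2] + c(i - 1, i - 2) + c(i - 2, i - 1))
    return best[n] / n

actual   = [0, 5, 0, 0]
forecast = [5, 0, 0, 0]        # peak predicted one slot early
print(adjusted_mae(forecast, actual))  # 0.0: no double penalty
# Plain pointwise MAE would be (5 + 5) / 4 = 2.5 for the same forecast.
```

Widening the restriction trades leniency towards displaced features against the risk of rewarding genuinely wrong forecasts, which is the trade-off the paper discusses.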


Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performances with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the non-linear (NL) split-window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm, as well as the standard deviation of its differences from drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
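
The correction step can be shown schematically: retrieve SST with a split-window combination of two brightness temperatures, then subtract the algorithmic error simulated from forecast profiles with a radiative transfer model. The coefficients and numbers below are entirely hypothetical; the operational CMS NL algorithm has its own (different) form and coefficients.

```python
def split_window_sst(t11, t12, a=1.02, b=2.1, c=-0.3):
    """Toy linear split-window retrieval from 10.8/12.0 micron brightness
    temperatures (K). Coefficients a, b, c are invented for illustration."""
    return a * t11 + b * (t11 - t12) + c

def corrected_sst(t11, t12, simulated_error):
    """Subtract the RT-simulated algorithmic error for the local atmosphere."""
    return split_window_sst(t11, t12) - simulated_error

raw = split_window_sst(290.0, 288.5)           # hypothetical moist-tropics scene
fixed = corrected_sst(290.0, 288.5, -0.4)      # a simulated 0.4 K cold bias removed
print(raw, fixed)
```

The key point is that the correction term varies with the atmospheric state (here passed in as `simulated_error`), which is exactly what a fixed set of retrieval coefficients cannot capture.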