994 results for Robust estimates


Relevance: 60.00%

Abstract:

This paper proposes and applies an alternative demographic procedure for extending a demand system to allow for the effect of household size and composition changes, along with price changes, on expenditure allocation. The demographic procedure is applied to two recent demand functional forms to obtain their estimable demographic extensions. The estimation on pooled time series of Australian Household Expenditure Surveys yields sensible and robust estimates of the equivalence scale, and of its variation with relative prices. Further evidence on the usefulness of this procedure is provided by using it to evaluate the nature and magnitude of the inequality bias of relative price changes in Australia over a period from the late 1980s to the early part of the new millennium.

Relevance: 60.00%

Abstract:

Current bio-kinematic encoders use velocity, acceleration and angular information to encode human exercises. However, in exercise physiology there is a need to distinguish between the shape of the trajectory and its execution dynamics. In this paper we propose such a two-component model and explore how best to compute these components of an action. In particular, we show how a new spatial indexing scheme, derived directly from the underlying differential geometry of curves, provides robust estimates of the shape and dynamics compared to standard temporal indexing schemes.
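The shape/dynamics split described in this abstract can be illustrated with a minimal sketch: resampling a trajectory by arc length gives a spatial (shape) index that is independent of execution speed, while the speed along the path captures the dynamics. This is an illustrative decomposition, not the authors' encoder.

```python
import numpy as np

def shape_and_dynamics(points, times, n_samples=100):
    """Split a sampled trajectory into shape (resampled uniformly by arc
    length, a spatial index) and dynamics (speed along the path over time)."""
    P = np.asarray(points, dtype=float)
    t = np.asarray(times, dtype=float)
    seg = np.linalg.norm(np.diff(P, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    s_grid = np.linspace(0.0, s[-1], n_samples)        # uniform spatial grid
    shape = np.column_stack(
        [np.interp(s_grid, s, P[:, d]) for d in range(P.shape[1])]
    )
    speed = np.gradient(s, t)                          # execution dynamics
    return shape, speed
```

Two executions of the same movement at different speeds yield the same `shape` array but different `speed` profiles, which is exactly the separation the two-component model requires.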

Relevance: 60.00%

Abstract:

There is renewed interest in robust estimates of food demand elasticities at a disaggregated level not only to analyse the impact of changing food preferences on the agricultural sector, but also to establish the likely impact of pricing incentives on households. Using data drawn from two national Household Expenditure Surveys covering the periods 1998/1999 and 2003/2004, and adopting an Almost Ideal Demand System approach that addresses the zero observations problem, this paper estimates a food demand system for 15 food categories for Australia. The categories cover the standard food items that Australian households demand routinely. Own-price, cross-price and expenditure elasticity estimates of the Marshallian and Hicksian types have been derived for all categories. The parameter estimates obtained in this study represent the first integrated set of food demand elasticities based on a highly disaggregated food demand system for Australia, and all accord with economic intuition.
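The elasticities reported in this kind of study follow directly from the estimated AIDS parameters. The sketch below uses the standard linear-approximate (LA/AIDS) elasticity formulas as an illustration; the paper's exact formulas, which also handle the zero-observations problem, may differ.

```python
def aids_marshallian_elasticity(i, j, gamma, beta, w):
    """Approximate Marshallian price elasticity in the LA/AIDS model:
    e_ij = -delta_ij + (gamma_ij - beta_i * w_j) / w_i,
    where gamma are price coefficients, beta expenditure coefficients,
    and w the budget shares."""
    delta = 1.0 if i == j else 0.0
    return -delta + (gamma[i][j] - beta[i] * w[j]) / w[i]

def aids_expenditure_elasticity(i, beta, w):
    """Expenditure elasticity: eta_i = 1 + beta_i / w_i."""
    return 1.0 + beta[i] / w[i]
```

With all `gamma` and `beta` equal to zero the model collapses to unitary elasticities (own-price -1, expenditure 1), a useful sanity check on any implementation.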

Relevance: 60.00%

Abstract:

© 2015 The Association for the Study of Animal Behaviour. Broad sense repeatability, which refers to the extent to which individual differences in trait scores are maintained over time, is of increasing interest to researchers studying behavioural or physiological traits. Broad sense repeatability is most often inferred from the statistic R (the intraclass correlation, or narrow sense repeatability). However, R ignores change over time, despite the inherent longitudinal nature of the data (repeated measures over time). Here, we begin by showing that most studies ignore time-related change when estimating broad sense repeatability, and estimate R with low statistical power. Given this problem, we (1) outline how and why ignoring time-related change in scores (that occurs for whatever reason) can seriously affect estimates of the broad sense repeatability of behavioural or physiological traits, (2) discuss conditions in which various indices of R can or cannot provide reliable estimates of broad sense repeatability, and (3) provide suggestions for experimental designs for future studies. Finally, given that we already have abundant evidence that many labile traits are 'repeatable' in that broad sense (i.e. R > 0), we suggest a shift in focus towards obtaining robust estimates of the repeatability of behavioural and physiological traits. Given how labile these traits are, this will require greater experimental (and/or statistical) control and larger sample sizes in order to detect and quantify change over time (if present).
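The statistic R discussed above is the intraclass correlation from a one-way ANOVA on repeated measures. A minimal estimator, assuming a balanced design (the same number of measures per individual), looks like this:

```python
import numpy as np

def intraclass_correlation(scores):
    """One-way ANOVA estimate of R (narrow sense repeatability).
    scores: 2-D array, rows = individuals, columns = repeated measures.
    Assumes a balanced design; note this estimator ignores any
    time-related change, which is exactly the issue the paper raises."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ind_means = scores.mean(axis=1)
    ms_between = k * ((ind_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - ind_means[:, None]) ** 2).sum() / (n * (k - 1))
    s2_ind = (ms_between - ms_within) / k        # among-individual variance
    return s2_ind / (s2_ind + ms_within)
```

Perfectly consistent individuals give R = 1; if all variation is within individuals, R falls towards (or below) zero, which is why R > 0 is read as evidence of repeatability.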

Relevance: 60.00%

Abstract:

This study examined the characteristics of stock portfolios optimized under the mean-variance criterion and constructed from robust estimates of risk and return. The motivation is the typical distribution of financial assets (which exhibits outliers and more kurtosis than the normal distribution). The portfolios were compared on the following properties: stability, variability and the Sharpe ratios they achieved. The overall result shows that portfolios obtained from robust estimates of risk and return improve in stability and variability; however, this improvement is insufficient to distinguish their Sharpe ratios from those of portfolios obtained via maximum likelihood estimates of risk and return.
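The kind of comparison the study makes can be sketched with a minimum-variance portfolio formed from either classical or robustified moment estimates. The 3-MAD clipping below is a crude stand-in for the (unspecified) robust estimators used in the study, purely for illustration.

```python
import numpy as np

def min_variance_weights(returns, robust=False):
    """Minimum-variance portfolio: w = S^{-1}1 / (1' S^{-1} 1).
    With robust=True, returns are clipped at 3 median absolute deviations
    before estimating the covariance -- an illustrative robustification,
    not the estimator used in the study."""
    r = np.asarray(returns, dtype=float)
    if robust:
        med = np.median(r, axis=0)
        mad = np.median(np.abs(r - med), axis=0) * 1.4826  # consistent with sd
        r = np.clip(r, med - 3 * mad, med + 3 * mad)
    cov = np.cov(r, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```

Running both variants on heavy-tailed return series and comparing weight stability across resamples reproduces, in miniature, the stability/variability comparison described above.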

Relevance: 60.00%

Abstract:

Eucalyptus breeding is typically conducted by selection in open-pollinated progenies. As mating is controlled only on the female side of the cross, knowledge of outcrossing versus selfing rates is essential for maintaining adequate levels of genetic variability for continuous gains. The outcrossing rate in an open-pollinated breeding population of Eucalyptus urophylla was estimated with two PCR-based dominant marker technologies, RAPD and AFLP, using 11 open-pollinated progeny arrays of 24 individuals. Estimated outcrossing rates indicate predominant outcrossing and suggest maintenance of adequate genetic variability within families. The multilocus outcrossing rate t(m) estimated from RAPD markers (0.93 ± 0.027), although in the same range, was higher (α > 0.01) than the estimate based on AFLP (0.89 ± 0.033). Both estimates were of similar magnitude to those estimated for natural populations using isozymes. The estimated Wright's fixation index was lower than expected based on t(m), possibly resulting from selection against selfed seedlings when sampling plants for the study. An empirical analysis suggests that 18 is the minimum number of dominant marker loci necessary to achieve robust estimates of t(m). This study demonstrates the usefulness of dominant markers, both RAPD and AFLP, for estimating the outcrossing rate in breeding and natural populations of forest trees. We anticipate an increasing use of such PCR-based technologies in mating-system studies, in view of their high throughput and the universality of the reagents, particularly for species where isozyme systems have not yet been optimized.
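The comparison between the outcrossing rate t(m) and Wright's fixation index rests on the standard inbreeding-equilibrium relationship F = (1 - t)/(1 + t); an observed F below this expectation is what suggests selection against selfed seedlings. A small helper makes the relationship concrete:

```python
def F_from_outcrossing(t):
    """Equilibrium fixation index implied by outcrossing rate t:
    F = (1 - t) / (1 + t)."""
    return (1.0 - t) / (1.0 + t)

def outcrossing_from_F(F):
    """Inverse relationship: t = (1 - F) / (1 + F)."""
    return (1.0 - F) / (1.0 + F)
```

For t(m) = 0.93 as estimated from RAPD, the equilibrium expectation is F ≈ 0.036; a sampled F noticeably below this would point to selfed seedlings being under-represented.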

Relevance: 60.00%

Abstract:

In this paper we extend semiparametric mixed linear models with normal errors to elliptical errors, in order to permit distributions with heavier and lighter tails than the normal one. Penalized likelihood equations are applied to derive the maximum penalized likelihood estimates (MPLEs), which appear to be robust against outlying observations in the sense of the Mahalanobis distance. A reweighted iterative process based on the back-fitting method is proposed for parameter estimation, and the local influence curvatures are derived under some usual perturbation schemes to study the sensitivity of the MPLEs. Two motivating examples preliminarily analyzed under normal errors are reanalyzed considering some appropriate elliptical errors. The local influence approach is used to compare the sensitivity of the model estimates.
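Robustness "in the sense of the Mahalanobis distance" can be illustrated with the Student-t member of the elliptical family, whose estimating equations downweight each observation by w_i = (ν + p)/(ν + d_i²), with d_i the Mahalanobis distance. This is a standard illustration of elliptical-error downweighting, not the paper's exact reweighted back-fitting scheme.

```python
import numpy as np

def t_weights(X, mu, Sigma, nu=4):
    """Observation weights implied by multivariate Student-t errors:
    w_i = (nu + p) / (nu + d_i^2), where d_i^2 is the squared
    Mahalanobis distance of row i from mu under scale matrix Sigma.
    Distant (outlying) rows receive small weights."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    p = X.shape[1]
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return (nu + p) / (nu + d2)
```

Used inside an iteratively reweighted fit, these weights shrink the influence of outliers, which is the mechanism behind the robustness the abstract describes.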

Relevance: 60.00%

Abstract:

Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. Therefore, a better understanding of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts is required. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany, T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba City, Japan, TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best track data, leading to a better representation of the TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes have an effect on the projected loss potentials. However, these state-of-the-art models still yield contradicting results, and therefore they are not yet suitable to provide robust estimates of losses, due to uncertainties in simulated hurricane intensity, location and frequency.
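A common way to "scale" simulated intensities to best-track observations is empirical quantile mapping, sketched below. This is an illustrative bias-correction technique; the paper's exact scaling procedure may differ.

```python
import numpy as np

def quantile_map(simulated, observed):
    """Map each simulated intensity onto the observed (best-track)
    distribution by matching empirical quantiles: each simulated value
    is replaced by the observed value at the same quantile rank."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    ranks = np.argsort(np.argsort(sim))          # 0-based ranks of sim values
    q = (ranks + 0.5) / sim.size                 # plotting-position quantiles
    return np.quantile(obs, q)
```

The mapping preserves the ordering of simulated storms while stretching the intensity distribution to match observations, which corrects the resolution-driven underestimation of intense storms.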

Relevance: 60.00%

Abstract:

FEAST is a recently developed eigenvalue algorithm which computes selected interior eigenvalues of real symmetric matrices. It uses projections based on contour integrals of the resolvent. A weakness is that the existing algorithm relies on accurate reasoned estimates of the number of eigenvalues within the contour. Examining the singular values of the projections on moderately sized, randomly generated test problems motivates orthogonalization-based improvements to the algorithm. The singular value distributions provide experimentally robust estimates of the number of eigenvalues within the contour. The algorithm is modified to handle both Hermitian and general complex matrices. The original algorithm (based on circular contours and Gauss-Legendre quadrature) is extended to contours and quadrature schemes that are recursively subdividable. The accuracy of different quadrature schemes for various contours is investigated. A general complex recursive algorithm is implemented on rectangular and diamond contours.
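The idea of counting eigenvalues inside a contour from the singular values of a resolvent-based projection can be sketched directly: approximate the spectral projector by Gauss-Legendre quadrature of the resolvent on a circle, apply it to a random block, and count the large singular values. This is a sketch of the principle, not the FEAST implementation.

```python
import numpy as np

def count_eigs_in_circle(A, center, radius, m=24, block=8, tol=1e-3, seed=0):
    """Estimate how many eigenvalues of Hermitian A lie inside the circle
    |z - center| = radius: quadrature of (1/2*pi*i) * contour integral of
    (zI - A)^{-1} applied to a random block, then a count of singular
    values above a relative tolerance. block must exceed the true count."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, block))
    x, w = np.polynomial.legendre.leggauss(m)    # nodes/weights on [-1, 1]
    Q = np.zeros((n, block), dtype=complex)
    for xj, wj in zip(x, w):
        theta = np.pi * (xj + 1.0)               # map [-1, 1] -> [0, 2*pi]
        z = center + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta) * np.pi   # dz/dx of the map
        Q += wj * dz * np.linalg.solve(z * np.eye(n) - A, Y)
    Q /= 2j * np.pi
    s = np.linalg.svd(Q, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

Singular values of the projected block split cleanly into an O(1) cluster (one per eigenvalue inside the contour) and a near-zero tail, which is the gap the abstract exploits.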

Relevance: 60.00%

Abstract:

The global ocean is a significant sink for anthropogenic carbon (Cant), absorbing roughly a third of human CO2 emitted over the industrial period. Robust estimates of the magnitude and variability of the storage and distribution of Cant in the ocean are therefore important for understanding the human impact on climate. In this synthesis we review observational and model-based estimates of the storage and transport of Cant in the ocean. We pay particular attention to the uncertainties and potential biases inherent in different inference schemes. On a global scale, three data-based estimates of the distribution and inventory of Cant are now available. While the inventories are found to agree within their uncertainty, there are considerable differences in the spatial distribution. We also present a review of the progress made in the application of inverse and data assimilation techniques which combine ocean interior estimates of Cant with numerical ocean circulation models. Such methods are especially useful for estimating the air–sea flux and interior transport of Cant, quantities that are otherwise difficult to observe directly. However, the results are found to be highly dependent on modeled circulation, with the spread due to different ocean models at least as large as that from the different observational methods used to estimate Cant. Our review also highlights the importance of repeat measurements of hydrographic and biogeochemical parameters to estimate the storage of Cant on decadal timescales in the presence of the variability in circulation that is neglected by other approaches. Data-based Cant estimates provide important constraints on forward ocean models, which exhibit both broad similarities and regional errors relative to the observational fields. A compilation of inventories of Cant gives us a "best" estimate of the global ocean inventory of anthropogenic carbon in 2010 of 155 ± 31 PgC (±20% uncertainty). 
This estimate includes a broad range of values, suggesting that a combination of approaches is necessary in order to achieve a robust quantification of the ocean sink of anthropogenic CO2.

Relevance: 60.00%

Abstract:

Since multi-site reconstructions are less affected by site-specific climatic effects and artefacts, regional palaeotemperature reconstructions based on a number of sites can provide more robust estimates of centennial- to millennial-scale temperature trends than individual, site-specific records. Furthermore, reconstructions based on multiple records are necessary for developing continuous climate records over time scales longer than covered by individual sequences. Here, we present a procedure for developing such reconstructions based on relatively short (centuries to millennia), discontinuously sampled records as are typically developed when using biotic proxies in lake sediments for temperature reconstruction. The approach includes an altitudinal correction of temperatures, an interpolation of individual records to equal time intervals, a stacking procedure for sections of the interval of interest that have the same records available, as well as a splicing procedure to link the individual stacked records into a continuous reconstruction. Variations in the final, stacked and spliced reconstruction are driven by variations in the individual records, whereas the absolute temperature values are determined by the stacked segment based on the largest number of records. With numerical simulations based on the NGRIP δ18O record, we demonstrate that the interpolation and stacking procedure provides an approximation of a smoothed palaeoclimate record if based on a sufficient number of discontinuously sampled records. Finally, we provide an example of a stacked and spliced palaeotemperature reconstruction 15000–90 calibrated 14C yr BP based on six chironomid records from the northern and central Swiss Alps and eastern France to discuss the potential and limitations of this approach.
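The interpolate-and-stack step described above can be sketched in a few lines: each record is interpolated onto a common time grid and the overlapping records are averaged. The altitude correction and the splicing of stacked segments are omitted here for brevity.

```python
import numpy as np

def stack_records(records, t_grid):
    """Interpolate each (time, temperature) record onto a common grid and
    average across records where they overlap. records: list of (t, y)
    pairs with t strictly increasing; values outside a record's coverage
    are set to NaN and ignored in the average."""
    stacked = []
    for t, y in records:
        yi = np.interp(t_grid, t, y, left=np.nan, right=np.nan)
        stacked.append(yi)
    return np.nanmean(np.array(stacked), axis=0)
```

As the text notes, variations in the stacked curve come from the individual records, while the absolute level is anchored by the segment with the most records; splicing then joins segments with different record availability into one continuous series.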

Relevance: 60.00%

Abstract:

For swine dysentery, which is caused by Brachyspira hyodysenteriae infection and is an economically important disease in intensive pig production systems worldwide, a perfect or error-free diagnostic test ("gold standard") is not available. In the absence of a gold standard, Bayesian latent class modelling is a well-established methodology for robust diagnostic test evaluation. In contrast to risk factor studies in food animals, where adjustment for within-group correlations is both usual and required for good statistical practice, diagnostic test evaluation studies rarely take such clustering aspects into account, which can result in misleading results. The aim of the present study was to estimate the test accuracies of a PCR originally designed for use as a confirmatory test, displaying a high diagnostic specificity, and of cultural examination for B. hyodysenteriae. This estimation was conducted based on results of 239 samples from 103 herds originating from routine diagnostic sampling. Using Bayesian latent class modelling comprising a hierarchical beta-binomial approach (which allowed prevalence to vary across individual herds as a herd-level random effect), robust estimates for the sensitivities of PCR and culture, as well as for the specificity of PCR, were obtained. The estimated diagnostic sensitivities (95% CI) of PCR and culture were 73.2% (62.3; 82.9) and 88.6% (74.9; 99.3), respectively. The estimated specificity of the PCR was 96.2% (90.9; 99.8). For test evaluation studies, a Bayesian latent class approach is well suited for addressing the considerable complexities of population structure in food animals.
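With sensitivity and specificity estimates like those above, the link between true and apparent prevalence is a useful sanity check. The classical Rogan-Gladen correction below is an illustration of that relationship, not the paper's Bayesian latent class model.

```python
def apparent_prevalence(true_prev, se, sp):
    """P(test positive) = pi * Se + (1 - pi) * (1 - Sp)."""
    return true_prev * se + (1.0 - true_prev) * (1.0 - sp)

def rogan_gladen(apparent, se, sp):
    """Invert the relationship above to recover a true-prevalence
    estimate from an apparent prevalence (Rogan-Gladen estimator).
    Requires Se + Sp > 1."""
    return (apparent + sp - 1.0) / (se + sp - 1.0)
```

Plugging in the PCR estimates (Se = 0.732, Sp = 0.962) shows, for example, how imperfect specificity inflates the apparent prevalence in low-prevalence herds.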

Relevance: 60.00%

Abstract:

Background: Reliable information on causes of death is a fundamental component of health development strategies, yet globally only about one-third of countries have access to such information. For countries currently without adequate mortality reporting systems there are useful models other than resource-intensive population-wide medical certification. Sample-based mortality surveillance is one such approach. This paper provides methods for addressing appropriate sample size considerations in relation to mortality surveillance, with particular reference to situations in which prior information on mortality is lacking.

Methods: The feasibility of model-based approaches for predicting the expected mortality structure and cause composition is demonstrated for populations in which only limited empirical data is available. An algorithm approach is then provided to derive the minimum person-years of observation needed to generate robust estimates for the rarest cause of interest in three hypothetical populations, each representing different levels of health development.

Results: Modelled life expectancies at birth and cause of death structures were within expected ranges based on published estimates for countries at comparable levels of health development. Total person-years of observation required in each population could be more than halved by limiting the set of age, sex, and cause groups regarded as 'of interest'.

Discussion: The methods proposed are consistent with the philosophy of establishing priorities across broad clusters of causes for which the public health response implications are similar. The examples provided illustrate the options available when considering the design of mortality surveillance for population health monitoring purposes.
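The core of a person-years calculation for the rarest cause of interest can be sketched with standard Poisson logic: to estimate a death rate within a chosen relative precision, roughly (z / relative precision)² expected deaths must be observed. This is an illustrative simplification, not the paper's algorithm.

```python
def person_years_required(rate, rel_precision=0.2, z=1.96):
    """Minimum person-years of observation so a Poisson death rate is
    estimated within +/- rel_precision at the z-level of confidence.
    The Poisson CV of a count D is 1/sqrt(D), so we need
    D >= (z / rel_precision)^2 expected deaths, hence PY = D / rate."""
    deaths_needed = (z / rel_precision) ** 2
    return deaths_needed / rate
```

The result scales inversely with the rate of the rarest cause, which is why restricting the set of age, sex and cause groups 'of interest' (as the Results note) can more than halve the required person-years.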

Relevance: 60.00%

Abstract:

This paper demonstrates that the conventional approach of using official liberalisation dates as the only candidate breakdates can lead to inaccurate conclusions about the effect of the underlying liberalisation policies. It also proposes an alternative paradigm for obtaining more robust estimates of volatility changes around official liberalisation dates and/or other important market events. Focusing on five East Asian emerging markets, all of which liberalised their financial markets in the late, and using recent advances in the econometrics of structural change, it shows that (i) the detected breakdates in the volatility of stock market returns can differ dramatically from official liberalisation dates and (ii) using official liberalisation dates as breakdates can readily entail inaccurate inference. In contrast, data-driven techniques for the detection of multiple structural changes reveal a richer and inevitably more accurate pattern of volatility evolution than a focus on official liberalisation dates alone.
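The data-driven detection of a volatility break can be illustrated with the simplest case: choosing the single breakdate that minimises the sum of squared deviations of squared returns from their segment means. This least-squares sketch is far simpler than the multiple-break econometrics the paper uses, but shows the principle of letting the data pick the breakdate.

```python
import numpy as np

def variance_break(returns, trim=20):
    """Least-squares estimate of a single break in variance: pick the
    split point that minimises the summed squared deviation of r_t^2
    from its segment means, trimming the first/last `trim` observations."""
    r2 = np.asarray(returns, dtype=float) ** 2
    n = len(r2)
    best_k, best_sse = None, np.inf
    for k in range(trim, n - trim):
        sse = (((r2[:k] - r2[:k].mean()) ** 2).sum()
               + ((r2[k:] - r2[k:].mean()) ** 2).sum())
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

Comparing the detected breakdate with an officially announced liberalisation date is exactly the kind of check the paper argues for.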

Relevance: 60.00%

Abstract:

The main pelagic fishery resources of economic interest in Peru are anchoveta (Engraulis ringens), jack mackerel (Trachurus murphyi) and chub mackerel (Scomber japonicus) [3]. To assess them, acoustic survey cruises are carried out in which echo-abundance information and species length compositions are combined to obtain biomass and abundance values. For non-target species (such as jack mackerel), however, these values are unreliable because of the distance between the biometric and acoustic sampling points. To address this problem, the present work proposed the use of empirical models (of GAM and GLM type) integrating environmental variables and landings-monitoring data, in order to generate relative and absolute indices for anchoveta and jack mackerel over the period 1996-2013 within the 200 nm area off the Peruvian coast. The results highlighted the importance of verification hauls for obtaining robust biomass estimates. Likewise, it was observed that, for anchoveta, the empirical models did produce a good relative and absolute index, improving on the use of echo-abundance alone. For jack mackerel, however, the final calibrated model yielded only a better relative index. It is further recommended that length and mean-weight data from landings be obtained for jack mackerel in order to improve biomass and abundance estimates.
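The index-standardization idea behind such GLMs can be sketched with a log-linear model solved by least squares: year effects give the relative abundance index once covariate effects are accounted for. This is a simplified stand-in for the GAM/GLM standardization described above; the names and structure are illustrative.

```python
import numpy as np

def annual_index(years, log_density, covars):
    """Relative abundance index from a log-linear model:
    log density = year effect + covariate effects, fit by least squares.
    years: length-n array of year labels; covars: (n, k) covariate
    matrix (k may be 0). Returns (unique years, index relative to the
    first year)."""
    years = np.asarray(years)
    uniq = np.unique(years)
    Y = (years[:, None] == uniq[None, :]).astype(float)   # year dummies
    X = np.hstack([Y, np.asarray(covars, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(log_density, dtype=float),
                               rcond=None)
    idx = np.exp(coef[:len(uniq)])                        # back-transform
    return uniq, idx / idx[0]
```

With real survey data, the year-effect index plays the role of the calibrated relative index the abstract describes, while converting it to an absolute index requires the verification hauls noted above.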