994 results for Robust estimates


Relevance:

60.00%

Publisher:

Abstract:

Management of coastal species of small cetaceans is often impeded by a lack of robust estimates of their abundance. In the austral summers of 1997−98, 1998−99, and 1999−2000 we conducted line-transect surveys of Hector's dolphin (Cephalorhynchus hectori) abundance off the north, east, and south coasts of the South Island of New Zealand. Survey methods were modified for the use of a 15-m sailing catamaran, which was equipped with a collapsible sighting platform giving observers an eye height of 6 m. Eighty-six percent of 2061 km of survey effort was allocated to inshore waters (within 4 nautical miles [nmi], or 7.4 km, of shore), and the remainder to offshore waters (4−10 nmi, or 7.4–18.5 km, from shore). Transects were placed at 45° to the shore and spaced 1, 2, 4, or 8 nmi apart according to pre-existing data on dolphin density. Survey effort within strata was uniform. Detection functions for sheltered waters and open coasts were fitted separately for each survey. The attraction of dolphins to the survey vessel and the fraction of dolphins missed on the trackline were assessed with simultaneous boat and helicopter surveys in January 1999. Hector's dolphin abundance in the coastal zone to 4 nmi offshore was estimated at 1880 individuals (CV = 15.7%, log-normal 95% CI = 1384−2554). These surveys are the first line-transect surveys for cetaceans in New Zealand's coastal waters.
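
A rough sketch of the conventional distance-sampling estimator underlying surveys of this kind: density is sightings per unit of area effectively searched, scaled up to the stratum area. All numbers below are hypothetical, a fixed effective strip width stands in for the fitted detection functions, and g(0) plays the role of the trackline-detection fraction assessed with the helicopter experiment.

```python
def line_transect_abundance(n_sightings, mean_group_size, effort_km,
                            esw_km, area_km2, g0=1.0):
    """Conventional line-transect (distance sampling) abundance estimator.

    density = n * E[s] / (2 * L * ESW * g0); abundance = density * area.
    ESW is the effective strip half-width implied by the detection
    function; g0 is the probability of detecting a group on the trackline.
    """
    density = (n_sightings * mean_group_size) / (2.0 * effort_km * esw_km * g0)
    return density * area_km2

# Hypothetical inputs, for illustration only (not the survey's actual data):
n_hat = line_transect_abundance(n_sightings=180, mean_group_size=2.5,
                                effort_km=1772, esw_km=0.25,
                                area_km2=4000, g0=0.9)
print(f"Estimated abundance: {n_hat:.0f} dolphins")
```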

Relevance:

60.00%

Publisher:

Abstract:

Value-at-risk (VaR) forecasting generally relies on a parametric density function for portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram-Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms estimates generated by both constant and time-varying higher-moment models.
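
As a rough illustration of the approach, the sketch below builds a Gram-Charlier density from a mean, volatility, skewness and excess kurtosis, and numerically inverts its CDF to obtain the VaR quantile. The moment values are invented; in the paper's setting they would be time-varying forecasts. Note also that the GCE density can turn negative for extreme moment combinations, which practical implementations must guard against.

```python
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import brentq

def gce_density(z, skew, ex_kurt):
    """Gram-Charlier expansion of the standard normal density, built from
    the probabilists' Hermite polynomials He3 and He4."""
    he3 = z**3 - 3 * z
    he4 = z**4 - 6 * z**2 + 3
    return norm.pdf(z) * (1 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)

def gce_var(mu, sigma, skew, ex_kurt, alpha=0.01):
    """VaR from the GCE: numerically invert the GCE cdf for the alpha
    quantile of standardized returns, then rescale; loss is positive."""
    cdf = lambda q: quad(gce_density, -10.0, q, args=(skew, ex_kurt))[0]
    z_alpha = brentq(lambda q: cdf(q) - alpha, -10.0, 0.0)
    return -(mu + sigma * z_alpha)

# Illustrative conditional moments (in practice forecast by a time-series
# model, so that skewness and kurtosis vary over time):
print(gce_var(mu=0.0005, sigma=0.012, skew=-0.4, ex_kurt=1.5))
```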

Relevance:

60.00%

Publisher:

Abstract:

Soil carbon stores are a major component of the annual returns that EU governments are required to submit to the Intergovernmental Panel on Climate Change. Peat accounts for a high proportion of soil carbon because of the relatively high carbon density of peat and organic-rich soils. It has therefore become increasingly important to measure and model soil carbon stores and changes in peat stocks to facilitate the management of carbon changes over time. The approach investigated in this research evaluates the use of airborne geophysical (radiometric) data to estimate peat thickness, using the attenuation of bedrock-geology radioactivity by superficial peat cover. Remotely sensed radiometric data are validated with ground peat-depth measurements combined with non-invasive geophysical surveys. Two field-based case studies exemplify and validate the results. Variography and kriging are used to predict peat thickness from point measurements of peat depth and airborne radiometric data, and provide an estimate of uncertainty in the predictions. Cokriging, by assessing the degree of spatial correlation between recent remotely sensed geophysical monitoring and previous peat-depth models, is used to examine changes in peat stocks over time. The significance of the coregionalisation is that the spatial cross-correlation between the remote and ground-based data can be used to update the model of peat depth. By integrating remotely sensed data with ground geophysics, the need for extensive ground-based monitoring and invasive peat-depth measurements is reduced. The overall goal is to provide robust estimates of peat thickness that improve estimates of carbon stocks. More broadly, the research supports reducing the need for damaging on-site peat-thickness measurement and increasing the use of remotely sensed data for carbon stock estimation.
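
To make the geostatistical machinery concrete, here is a minimal ordinary-kriging sketch with an exponential variogram, predicting peat depth at an unsampled location from nearby point measurements. The variogram parameters and depth values are invented for illustration; the study additionally used cokriging with the radiometric data, which this sketch does not attempt.

```python
import numpy as np

def exp_variogram(h, nugget, sill, vrange):
    """Exponential variogram model gamma(h)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_kriging(coords, values, target, nugget, sill, vrange):
    """Ordinary kriging prediction at a single target location.

    Solves the usual OK system with a Lagrange multiplier enforcing that
    the weights sum to one; returns the prediction and kriging variance.
    """
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    G = exp_variogram(d, nugget, sill, vrange)
    np.fill_diagonal(G, 0.0)                 # gamma(0) = 0 by definition
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = G
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=1),
                          nugget, sill, vrange)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ values, b[:n] @ w + mu        # prediction, kriging variance

# Toy peat depths (m) at four x-y positions; all values invented.
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
depth = np.array([1.2, 2.1, 0.8, 1.9])
pred, kvar = ordinary_kriging(pts, depth, np.array([50.0, 50.0]),
                              nugget=0.05, sill=0.6, vrange=300.0)
print(f"predicted depth {pred:.2f} m, kriging variance {kvar:.3f}")
```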

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: "Cumulative meta-analysis" describes a statistical procedure to calculate, retrospectively, summary estimates from the results of similar trials every time the results of a further trial in the series had become available. In the early 1990 s, comparisons of cumulative meta-analyses of treatments for myocardial infarction with advice promulgated through medical textbooks showed that research had continued long after robust estimates of treatment effects had accumulated, and that medical textbooks had overlooked strong, existing evidence from trials. Cumulative meta-analyses have subsequently been used to assess what could have been known had new studies been informed by systematic reviews of relevant existing evidence and how waste might have been reduced.

METHODS AND FINDINGS: We used a systematic approach to identify and summarise the findings of cumulative meta-analyses of studies of the effects of clinical interventions published from 1992 to 2012. Searches were done in PubMed, MEDLINE, EMBASE, the Cochrane Methodology Register and Science Citation Index. A total of 50 eligible reports were identified, including more than 1,500 cumulative meta-analyses. A variety of themes are illustrated with specific examples. The studies showed that initially positive results became null or negative in meta-analyses as more trials were done; that early null or negative results were overturned; that stable results (beneficial, harmful and neutral) would have been seen had a meta-analysis been done before a new trial; and that additional trials had been much too small to resolve the remaining uncertainties.

CONCLUSIONS: This large, unique collection of cumulative meta-analyses highlights how a review of the existing evidence might have helped researchers, practitioners, patients and funders make more informed decisions and choices about new trials over decades of research. This would have led to earlier uptake of effective interventions in practice, less exposure of trial participants to less effective treatments, and reduced waste resulting from unjustified research.
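
A cumulative meta-analysis of the kind described above is straightforward to compute. The sketch below pools a hypothetical series of trial results by fixed-effect inverse-variance weighting, re-estimating the summary effect after each successive trial; real applications would typically also consider random-effects models.

```python
import numpy as np

def cumulative_meta_analysis(effects, variances):
    """Fixed-effect (inverse-variance) cumulative meta-analysis.

    After each successive trial, pool all trials so far and return the
    running summary estimate with its 95% confidence interval.
    """
    effects, variances = np.asarray(effects), np.asarray(variances)
    out = []
    for k in range(1, len(effects) + 1):
        w = 1.0 / variances[:k]
        pooled = np.sum(w * effects[:k]) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        out.append((pooled, pooled - 1.96 * se, pooled + 1.96 * se))
    return out

# Hypothetical log odds ratios (negative = treatment benefit) and variances:
results = cumulative_meta_analysis([-0.60, -0.20, -0.45, -0.30],
                                   [0.25, 0.16, 0.09, 0.04])
for k, (est, lo, hi) in enumerate(results, start=1):
    print(f"after trial {k}: {est:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")
```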

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: The measurement of calcitonin in washout fluid from thyroid nodule aspirates (FNA-calcitonin) has been reported to be accurate in detecting medullary thyroid carcinoma (MTC). The results of these studies have been promising, and the most recent version of the ATA guidelines states for the first time that "FNA findings that are inconclusive or suggestive of MTC should have calcitonin measured in the FNA washout fluid." Here we aimed to systematically review published data on this topic to provide more robust estimates. RESEARCH DESIGN AND METHODS: A comprehensive computer literature search of medical databases was conducted using the terms "calcitonin" AND "washout". The search was updated until April 2015. RESULTS: Twelve relevant studies, published between 2007 and 2014, were found. Overall, 413 thyroid nodules or neck lymph nodes underwent FNA-calcitonin; 95 were MTC lesions, and 93 (97.9%) of these were correctly detected by the measurement regardless of the cytologic report. CONCLUSIONS: The present study shows that the above ATA recommendation is well supported. Almost all MTC lesions are correctly detected by FNA-calcitonin, and this technique should be used to avoid false-negative or inconclusive results from cytology. The routine determination of serum calcitonin in patients undergoing FNA should improve the selection of patients at risk for MTC, guiding the use of FNA-calcitonin on the same FNA sample and providing useful information to the cytopathologist for morphological assessment and the application of tailored ancillary tests.

Relevance:

60.00%

Publisher:

Abstract:

This thesis comprises three articles, one published and two in preparation. Its central topic is the treatment of representative outliers in two important aspects of surveys: small area estimation and imputation in the presence of item nonresponse. Regarding small areas, robust estimators under unit-level models have been studied. Sinha & Rao (2009) propose a robust version of the empirical best linear unbiased predictor (EBLUP) of small area means. Their robust estimator is of the plug-in type and, in light of the work of Chambers (1986), may be biased in some situations. Chambers et al. (2014) propose a bias-corrected estimator. In addition, estimators of the mean squared error have been associated with these point estimators: Sinha & Rao (2009) propose a parametric bootstrap procedure, and analytical methods are proposed in Chambers et al. (2014), but the theoretical validity of the latter has not been established and their empirical performance is not fully satisfactory. Here, we examine two new approaches for obtaining a robust version of the EBLUP: the first builds on the work of Chambers (1986), and the second is based on the concept of conditional bias as a measure of the influence of a population unit. Both classes of robust small area estimators also include a bias-correction term. Unlike the estimator of Chambers et al. (2014), which uses only the information available in the domain of interest, both use the information available in all domains. In some situations a non-negligible bias is possible for the Sinha & Rao (2009) estimator, whereas the proposed estimators exhibit little bias for an appropriate choice of the influence function and the robustness tuning constant. Monte Carlo simulations compare the proposed estimators with those of Sinha & Rao (2009) and Chambers et al. (2014); the results show that the latter two can be substantially biased, while the proposed estimators perform better in terms of both bias and mean squared error. We also propose a new bootstrap procedure for estimating the mean squared error of robust small area estimators. Unlike existing procedures, we formally establish the asymptotic validity of the proposed bootstrap method. Moreover, the proposed method is semi-parametric: it does not rest on distributional assumptions for the errors or the random effects, which makes it particularly attractive and more widely applicable. We examine the performance of our bootstrap procedure through Monte Carlo simulations; the results show that it performs well and, in particular, better than all the competitors studied. An application of the proposed method is illustrated by analysing the real outlier-containing data of Battese, Harter & Fuller (1988).
Regarding imputation in the presence of item nonresponse, certain forms of single imputation have been studied. Deterministic regression imputation within classes, which includes ratio imputation and mean imputation, is often used in surveys. These imputation methods can lead to biased imputed estimators if the imputation model or the nonresponse model is misspecified. Doubly robust estimators have been developed in recent years; they are unbiased if at least one of the imputation or nonresponse models is correctly specified. In the presence of outliers, however, doubly robust imputed estimators can be very unstable. Using the concept of conditional bias, we propose an outlier-robust version of the doubly robust estimator. Simulation results show that the proposed estimator performs well for an appropriate choice of the robustness tuning constant.
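
The robust small area estimators discussed in the thesis are considerably more involved, but the role of the influence function and the robustness tuning constant can be illustrated with a simple Huber M-estimator of location fitted by iteratively reweighted least squares; all numbers below are invented.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber influence function: identity near zero, clipped at +/- c."""
    return np.clip(r, -c, c)

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    """M-estimator of location via iteratively reweighted least squares.

    The robustness (tuning) constant c trades efficiency under normality
    against protection from outliers; c -> infinity recovers the mean.
    """
    y = np.asarray(y, dtype=float)
    mu = np.median(y)
    scale = 1.4826 * np.median(np.abs(y - mu))        # MAD scale estimate
    for _ in range(max_iter):
        r = (y - mu) / scale
        w = np.divide(huber_psi(r, c), r, out=np.ones_like(r), where=r != 0)
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

y = np.array([3.1, 2.8, 3.3, 2.9, 3.0, 14.0])         # one gross outlier
print(np.mean(y), huber_location(y))                  # ~4.85 vs ~3.0
```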

Relevance:

60.00%

Publisher:

Abstract:

This paper measures the market risk of the TES (Colombian treasury bond) portfolio of a given Colombian bank, forecasting value-at-risk (VaR) with several multivariate volatility models (EWMA, orthogonal GARCH and robust GARCH) as well as VaR models based on the normal and Student-t distributions. Their efficiency is evaluated with the backtesting methodologies proposed by Candelon et al. (2011), based on the generalized method of moments, together with the independence and conditional coverage tests of Christoffersen and Pelletier (2004) and of Berkowitz, Christoffersen and Pelletier (2010). The results show that the best VaR specification for measuring the market risk of Colombian banks' TES portfolios is the one built from EWMA volatilities and the normal distribution, since it satisfies the unconditional coverage, independence and conditional coverage hypotheses, as well as the requirements set out in Basel II and in current Colombian regulation.
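
As a minimal sketch of the winning specification, the code below computes one-day normal VaR from an EWMA variance recursion (the RiskMetrics-style λ = 0.94 is assumed) and applies Kupiec's unconditional-coverage likelihood-ratio test to the violation series. The returns are simulated; the paper's backtests (the GMM-based tests of Candelon et al. and the conditional-coverage tests) are more elaborate than the single test shown here.

```python
import numpy as np
from scipy.stats import norm, chi2

def ewma_var(returns, lam=0.94, alpha=0.01):
    """One-day-ahead normal VaR from an EWMA variance recursion.

    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2, seeded with the
    sample variance. Returns VaR as a positive loss for each day.
    """
    r = np.asarray(returns)
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return -norm.ppf(alpha) * np.sqrt(sigma2)

def kupiec_test(returns, var, alpha=0.01):
    """Kupiec unconditional-coverage likelihood-ratio test on violations."""
    x = int(np.sum(returns < -var))                    # violation count
    T = len(returns)
    pi = x / T
    ll_null = (T - x) * np.log(1 - alpha) + x * np.log(alpha)
    ll_alt = (T - x) * np.log(1 - pi) + (x * np.log(pi) if x else 0.0)
    lr = -2.0 * (ll_null - ll_alt)
    return x, 1.0 - chi2.cdf(lr, df=1)                 # count, p-value

rng = np.random.default_rng(0)
rets = 0.01 * rng.standard_t(df=5, size=1000)          # fat-tailed toy returns
violations, p_value = kupiec_test(rets, ewma_var(rets))
print(violations, p_value)
```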

Relevance:

60.00%

Publisher:

Abstract:

Global hydrological models (GHMs) simulate the land-surface hydrological dynamics of continental-scale river basins. Here we describe one such GHM, the Macro-scale Probability-Distributed Moisture model, version 09 (Mac-PDM.09). The model has undergone a number of revisions since it was last applied in the hydrological literature, and this paper provides a detailed description of the latest version. The main revisions are: (1) the model can be run for n repetitions, which provides more robust estimates of extreme hydrological behaviour; (2) the model can use a gridded field of the coefficient of variation (CV) of daily rainfall for the stochastic disaggregation of monthly precipitation to daily precipitation; and (3) the model can now be forced with daily as well as monthly input climate data. We demonstrate the effect of each of these three revisions on simulated runoff relative to the model before the revisions were applied. Importantly, we show that when Mac-PDM.09 is forced with monthly input data, it produces a negative runoff bias relative to daily forcing in regions of the globe where day-to-day variability in relative humidity is high. The runoff bias can be up to −80% for a small selection of catchments, although the absolute magnitude of the bias may be small. We therefore recommend that future applications of Mac-PDM.09 using monthly climate forcing acknowledge this bias as a limitation of the model. The performance of Mac-PDM.09 is evaluated by validating simulated runoff against observed runoff for 50 catchments. We also present a sensitivity analysis demonstrating that simulated runoff is considerably more sensitive to the method of PE calculation than to perturbations in the soil moisture and field capacity parameters.
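
Revision (2) concerns the stochastic disaggregation of monthly precipitation into daily amounts using a gridded CV of daily rainfall. The sketch below shows one generic way such a disaggregation can work, drawing wet-day amounts from a gamma distribution whose CV matches the input and rescaling to conserve the monthly total; the wet-day fraction and all numbers are assumptions, and this is not the published Mac-PDM.09 scheme.

```python
import numpy as np

def disaggregate_month(monthly_total, cv, n_days=30, wet_fraction=0.4,
                       rng=None):
    """Stochastically split a monthly rainfall total into daily amounts.

    Wet days get gamma-distributed amounts whose coefficient of variation
    matches `cv` (gamma shape = 1 / cv**2); amounts are rescaled so the
    daily series sums exactly to the monthly total. A generic sketch, not
    the published Mac-PDM.09 scheme.
    """
    rng = rng or np.random.default_rng()
    wet = rng.random(n_days) < wet_fraction
    amounts = np.where(wet, rng.gamma(1.0 / cv**2, 1.0, n_days), 0.0)
    if amounts.sum() == 0.0:                 # guarantee at least one wet day
        amounts[rng.integers(n_days)] = 1.0
    return amounts * monthly_total / amounts.sum()

daily = disaggregate_month(monthly_total=120.0, cv=1.2,
                           rng=np.random.default_rng(42))
print(daily.round(1), daily.sum())           # daily series sums to 120.0
```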

Relevance:

60.00%

Publisher:

Abstract:

Varroa destructor is a parasitic mite of the Eastern honeybee Apis cerana. Fifty years ago, two distinct evolutionary lineages (Korean and Japanese) invaded the Western honeybee Apis mellifera. This haplo-diploid parasite species reproduces mainly through brother–sister matings, a system that strongly favours the fixation of new mutations. In a worldwide sample of 225 individuals from 21 locations, collected on Western honeybees and analysed at 19 microsatellite loci, a series of de novo mutations was observed. Using historical data on the invasion, this original biological system was exploited to compare three mutation models with allele-size constraints for microsatellite markers: the stepwise (SMM) and generalized (GSM) mutation models, and a model in which the mutation rate increases exponentially with microsatellite length (ESM). Posterior probabilities of the three models were estimated for each locus individually using reversible-jump Markov chain Monte Carlo. The relative support for each model varies widely among loci, but the GSM is the only model that always receives at least 9% support, whatever the locus. The analysis also provides robust estimates of the mutation parameters for each locus and of the divergence time of the two invasive lineages (67,000 generations, with a 90% credibility interval of 35,000–174,000). With an average of 10 generations per year, this divergence time fits the last post-glacial Korea–Japan land separation.
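
To make the competing mutation models concrete, the sketch below simulates single mutation events under the SMM (steps of exactly one repeat) and the GSM (geometrically distributed step sizes), with allele-size constraints enforced by clipping; the geometric parameter and size bounds are invented for illustration.

```python
import numpy as np

def mutate(allele, model="GSM", p_geom=0.7, size_min=5, size_max=60,
           rng=None):
    """One microsatellite mutation event under the SMM or GSM.

    The SMM changes the repeat number by exactly one; the GSM draws the
    step size from a geometric distribution (mean 1 / p_geom), so
    multi-repeat jumps can occur. Allele sizes are kept inside
    [size_min, size_max] by clipping, a crude form of size constraint.
    """
    rng = rng or np.random.default_rng()
    step = 1 if model == "SMM" else int(rng.geometric(p_geom))
    new = allele + step * int(rng.choice([-1, 1]))
    return int(np.clip(new, size_min, size_max))

rng = np.random.default_rng(1)
allele, trajectory = 30, [30]
for _ in range(20):                          # twenty successive mutations
    allele = mutate(allele, model="GSM", rng=rng)
    trajectory.append(allele)
print(trajectory)
```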

Relevance:

60.00%

Publisher:

Abstract:

Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debate requires robust estimates of trends, the spatial distribution of poaching, and the drivers of poaching. We present an analysis of the trends and drivers of an indicator of poaching across elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass-encounter data reported mainly by anti-poaching patrols. The data analysed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002 to 2009. Analysing these observational data poses a serious challenge to traditional statistical methods because of the opportunistic, non-random nature of patrols and the heterogeneity across sites. Adopting a Bayesian hierarchical modelling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at the country level were poor governance and low levels of human development, and at the site level, forest cover and the area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008; the results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific, evidence-based decision making in the CITES process.
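
The PIKE index itself is a simple proportion, computed per site and year; the sketch below shows the calculation on hypothetical carcass counts. The paper's contribution lies in the Bayesian hierarchical model fitted to such site-by-year proportions, not in the index arithmetic.

```python
import numpy as np

def pike(illegal_carcasses, total_carcasses):
    """PIKE: proportion of illegally killed elephants, per site and year.

    PIKE = (carcasses judged illegally killed) / (all carcasses found).
    Dividing by all carcasses found controls, crudely, for patrol effort
    and carcass encounter rates that vary across sites and years.
    """
    illegal = np.asarray(illegal_carcasses, dtype=float)
    total = np.asarray(total_carcasses, dtype=float)
    return np.divide(illegal, total, out=np.full_like(illegal, np.nan),
                     where=total > 0)

# Hypothetical (illegal, total) carcass counts for three site-years:
print(pike([12, 30, 8], [40, 55, 10]))       # -> [0.3   0.545 0.8  ]
```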

Relevance:

60.00%

Publisher:

Abstract:

The issue of diversification in direct real estate investment portfolios has been widely studied in academic and practitioner literature. Most work, however, has been done using either partially aggregated data or data for small samples of individual properties. This paper reports results from tests of both risk reduction and diversification that use the records of 10,000+ UK properties tracked by Investment Property Databank. It provides, for the first time, robust estimates of the diversification gains attainable given the returns, risks and cross‐correlations across the individual properties available to fund managers. The results quantify the number of assets and amount of money needed to construct both ‘balanced’ and ‘specialist’ property portfolios by direct investment. Target numbers will vary according to the objectives of investors and the degree to which tracking error is tolerated. The top‐level results are consistent with previous work, showing that a large measure of risk reduction can be achieved with portfolios of 30–50 properties, but full diversification of specific risk can only be achieved in very large portfolios. However, the paper extends previous work by demonstrating on a single, large dataset the implications of different methods of calculating risk reduction, and also by showing more disaggregated results relevant to the construction of specialist, sector‐focussed funds.
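
The shape of the risk-reduction curve reported above follows from the standard result for an equally weighted portfolio: idiosyncratic variance shrinks as 1/n while the average covariance sets an undiversifiable floor. A minimal sketch, with an assumed average volatility and average pairwise correlation chosen purely for illustration:

```python
import numpy as np

def portfolio_risk(n, avg_sd=0.10, avg_corr=0.25):
    """Standard deviation of an equally weighted portfolio of n assets
    sharing an average volatility and average pairwise correlation.

    var_p = sd^2 / n + (1 - 1/n) * corr * sd^2, so risk falls toward the
    undiversifiable floor sd * sqrt(corr) as n grows.
    """
    var_p = avg_sd**2 / n + (1.0 - 1.0 / n) * avg_corr * avg_sd**2
    return np.sqrt(var_p)

for n in (1, 10, 30, 50, 200, 1000):
    print(f"{n:5d} properties: portfolio risk {portfolio_risk(n):.4f}")
print(f"undiversifiable floor: {0.10 * np.sqrt(0.25):.4f}")
```

With these illustrative inputs, most of the achievable risk reduction indeed arrives by 30–50 assets, while the last few percent require very large portfolios, consistent with the paper's findings.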

Relevance:

60.00%

Publisher:

Abstract:

We present a model of market participation in which the presence of non-negligible fixed costs leads to random censoring of the traditional double-hurdle model. Fixed costs arise when household resources must be devoted a priori to the decision to participate in the market. These costs, usually of time, are manifested in non-negligible minimum efficient supplies and a supply correspondence that requires modification of the traditional Tobit regression. The costs also complicate econometric estimation of household behavior. These complications are overcome by application of the Gibbs sampler, and the algorithm thus derived provides robust estimates of the fixed-costs double-hurdle model. The model and procedures are demonstrated in an application to milk-market participation in the Ethiopian highlands.
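
A stylized simulation helps make the censoring mechanism concrete: observed supply is zero unless the household both chooses to participate and clears the minimum efficient supply implied by fixed costs. The parameter values below are invented, and the sketch simulates the data-generating process only; it does not implement the paper's Gibbs sampler.

```python
import numpy as np

def simulate_double_hurdle(n, gamma=0.5, beta=1.0, min_supply=0.8,
                           rng=None):
    """Simulate a double-hurdle participation process with fixed-cost
    censoring (a stylized sketch, not the paper's estimated model).

    Hurdle 1: latent participation d* = gamma + u > 0.
    Hurdle 2: latent supply y* = beta + v must also clear the minimum
    efficient supply implied by fixed costs; otherwise observed supply
    is censored to zero.
    """
    rng = rng or np.random.default_rng()
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    d_star, y_star = gamma + u, beta + v
    return np.where((d_star > 0) & (y_star > min_supply), y_star, 0.0)

y = simulate_double_hurdle(10_000, rng=np.random.default_rng(7))
print(f"participation rate {np.mean(y > 0):.1%}, "
      f"mean supply among participants {y[y > 0].mean():.2f}")
```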

Relevance:

60.00%

Publisher:

Abstract:

African societies depend on rainfall for agriculture and other water-dependent activities, yet rainfall is extremely variable in both space and time, and recurring water shocks, such as drought, can have considerable social and economic impacts. To help improve knowledge of the rainfall climate, we have constructed a 30-year (1983–2012), temporally consistent rainfall dataset for Africa known as TARCAT (TAMSAT African Rainfall Climatology And Time-series), using archived Meteosat thermal infra-red (TIR) imagery calibrated against rain gauge records collated from numerous African agencies. TARCAT has been produced at the 10-day (dekad) timescale at a spatial resolution of 0.0375°. An intercomparison of TARCAT from 1983 to 2010 with six long-term precipitation datasets indicates that TARCAT replicates spatial and seasonal rainfall patterns and interannual variability well, with correlation coefficients of 0.85 and 0.70 against the Climatic Research Unit (CRU) and Global Precipitation Climatology Centre (GPCC) gridded-gauge analyses, respectively, for the interannual variability of Africa-wide mean monthly rainfall. The design of the algorithm for drought monitoring leads TARCAT to underestimate Africa-wide mean annual rainfall by −0.37 mm day⁻¹ (21%) on average compared with the other datasets. Because the TARCAT rainfall estimates are historically calibrated across large, climatically homogeneous regions, the data can provide users with robust estimates of climate-related risk, even in regions where gauge records are inconsistent in time.
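
The calibration step can be pictured with a toy example. TAMSAT-style estimates relate cold-cloud duration (the time a pixel stays below a temperature threshold in the TIR imagery) to gauge rainfall; the sketch below fits a simple linear calibration on invented dekadal values, standing in for the regional and seasonal calibrations actually used.

```python
import numpy as np

def calibrate_ccd(ccd_hours, gauge_mm):
    """Fit a linear calibration rain = a + b * CCD by least squares.

    Cold-cloud duration (CCD) is the time a pixel stays colder than a
    temperature threshold in the TIR imagery; relating it to co-located
    gauge totals is the essence of the calibration, here reduced to a
    single linear fit rather than regional, seasonally varying ones.
    """
    b, a = np.polyfit(ccd_hours, gauge_mm, deg=1)
    return a, b

# Invented dekadal CCD (hours) and co-located gauge rainfall (mm):
ccd = np.array([0.0, 5.0, 12.0, 20.0, 31.0, 44.0])
rain = np.array([0.0, 4.0, 11.0, 19.0, 30.0, 41.0])
a, b = calibrate_ccd(ccd, rain)
print(f"rain = {a:.2f} + {b:.2f} * CCD  (mm per dekad)")
```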

Relevance:

60.00%

Publisher:

Abstract:

In this paper we extend partial linear models with normal errors to Student-t errors. Penalized likelihood equations are applied to derive the maximum likelihood estimates, which appear to be robust against outlying observations in the sense of the Mahalanobis distance. In order to study the sensitivity of the penalized estimates under some usual perturbation schemes in the model or data, the local influence curvatures are derived and some diagnostic graphics are proposed. A motivating example, preliminarily analyzed under normal errors, is reanalyzed under Student-t errors. The local influence approach is used to compare the sensitivity of the model estimates.
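
The robustness mechanism at work can be seen in a stripped-down setting: when errors are Student-t, EM-type estimation downweights observations with large standardized residuals. The sketch below applies those weights to a purely linear model (the paper's partial linear models add a penalized nonparametric component); the data are simulated.

```python
import numpy as np

def t_regression(X, y, nu=4.0, n_iter=50):
    """Linear regression with Student-t errors fitted by EM/IRLS.

    E-step weights w_i = (nu + 1) / (nu + r_i^2), with r_i the
    standardized residual, shrink the influence of outlying observations;
    this is the robustness mechanism, shown here for a purely linear
    predictor (the paper's models add a nonparametric component).
    """
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2 / sigma2
        w = (nu + 1.0) / (nu + r2)                 # EM weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta, sigma2

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.uniform(0, 10, 200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)
y[:5] += 25.0                                      # gross outliers
print(t_regression(X, y)[0])                       # approx. [1, 2]
```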

Relevance:

60.00%

Publisher:

Abstract:

The Grubbs measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution; however, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions are an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation, and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study comparing the precision of several thermocouples.
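
For orientation, the classical two-device Grubbs model admits simple moment estimators: the covariance between the devices estimates the variance of the true item values, and each device's excess variance over that estimates its error variance. The sketch below demonstrates this on simulated data; the paper itself develops likelihood-based EM estimation under scale mixtures of normals, which this does not reproduce.

```python
import numpy as np

def grubbs_two_devices(y1, y2):
    """Classical moment estimators for the two-device Grubbs model.

    Model: y_k = x + alpha_k + e_k, where x is the item's true value and
    e_k the device error. Cov(y1, y2) estimates Var(x), and each device's
    excess variance over that estimates its error variance. (A moment
    version; the paper develops likelihood-based EM estimation.)
    """
    c = np.cov(y1, y2)
    var_x = c[0, 1]
    return {"var_item": var_x,
            "var_err_device1": c[0, 0] - var_x,
            "var_err_device2": c[1, 1] - var_x,
            "bias_difference": float(np.mean(y1) - np.mean(y2))}

rng = np.random.default_rng(5)
x = rng.normal(100.0, 5.0, 500)                    # true item values
y1 = x + rng.normal(0.0, 1.0, 500)                 # device 1
y2 = x + 0.5 + rng.normal(0.0, 2.0, 500)           # device 2: biased, noisier
print(grubbs_two_devices(y1, y2))
```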