733 results for Smoothed bootstrap


Relevance: 10.00%

Abstract:

1. Jerdon's courser Rhinoptilus bitorquatus is a nocturnally active cursorial bird that is only known to occur in a small area of scrub jungle in Andhra Pradesh, India, and is listed as critically endangered by the IUCN. Information on its habitat requirements is needed urgently to underpin conservation measures. We quantified the habitat features that correlated with the use of different areas of scrub jungle by Jerdon's coursers, and developed a model to map potentially suitable habitat over large areas from satellite imagery and facilitate the design of surveys of Jerdon's courser distribution.

2. We used 11 arrays of 5-m long tracking strips consisting of smoothed fine soil to detect the footprints of Jerdon's coursers, and measured tracking rates (tracking events per strip night). We counted the number of bushes and trees, and described other attributes of vegetation and substrate in a 10-m square plot centred on each strip. We obtained reflectance data from Landsat 7 satellite imagery for the pixel within which each strip lay.

3. We used logistic regression models to describe the relationship between tracking rate by Jerdon's coursers and characteristics of the habitat around the strips, using ground-based survey data and satellite imagery.

4. Jerdon's coursers were most likely to occur where the density of large (>2 m tall) bushes was in the range 300-700 ha^-1 and where the density of smaller bushes was less than 1000 ha^-1. This habitat was detectable using satellite imagery.

5. Synthesis and applications. The occurrence of Jerdon's courser is strongly correlated with the density of bushes and trees, which is in turn affected by grazing with domestic livestock, woodcutting and mechanical clearance of bushes to create pasture, orchards and farmland. It is likely that there is an optimal level of grazing and woodcutting that would maintain or create suitable conditions for the species. Knowledge of the species' distribution is incomplete and there is considerable pressure from human use of apparently suitable habitats. Hence, distribution mapping is a high conservation priority. A two-step procedure is proposed, involving the use of ground surveys of bush density to calibrate satellite image-based mapping of potential habitat. These maps could then be used to select priority areas for Jerdon's courser surveys. The use of tracking strips to study habitat selection and distribution has potential in studies of other scarce and secretive species.
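The modelling step (point 3) can be sketched in a few lines. The example below is a hedged illustration with invented data, not the paper's: a plain gradient-descent logistic regression of tracking events on large-bush density, with a quadratic term so that predicted suitability can peak at intermediate densities, as the paper reports.

```python
import math
import random

# Hypothetical sketch, NOT the paper's data or fitted model: logistic
# regression of tracking events (0/1) on large-bush density (per ha),
# with a quadratic term so suitability can peak near ~500 bushes/ha.
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated tracking strips: (large-bush density, tracking event 0/1).
data = []
for _ in range(300):
    d = random.uniform(0.0, 1500.0)
    p_true = sigmoid(4.0 - abs(d - 500.0) / 100.0)  # peaked suitability
    data.append((d, 1 if random.random() < p_true else 0))

def features(d):
    x = d / 1000.0 - 0.75  # centre and scale for better conditioning
    return [1.0, x, x * x]

# Batch gradient descent on the logistic log-loss.
w = [0.0, 0.0, 0.0]
for _ in range(1500):
    grad = [0.0, 0.0, 0.0]
    for d, y in data:
        x = features(d)
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
        for j in range(3):
            grad[j] += err * x[j]
    for j in range(3):
        w[j] -= grad[j] / len(data)

def predict(d):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features(d))))

# Mid-density scrub should score above very dense scrub.
print(round(predict(500.0), 3), round(predict(1400.0), 3))
```

In the paper itself the same logistic framework is driven by ground-survey covariates or satellite reflectances rather than a single simulated density variable.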

Relevance: 10.00%

Abstract:

Objectives: This study reports the cost-effectiveness of a preventive intervention, consisting of counseling and specific support for the mother-infant relationship, targeted at women at high risk of developing postnatal depression. Methods: A prospective economic evaluation was conducted alongside a pragmatic randomized controlled trial in which women considered at high risk of developing postnatal depression were allocated randomly to the preventive intervention (n = 74) or to routine primary care (n = 77). The primary outcome measure was the duration of postnatal depression experienced during the first 18 months postpartum. Data on health and social care use by women and their infants up to 18 months postpartum were collected, using a combination of prospective diaries and face-to-face interviews, and then were combined with unit costs (£, year 2000 prices) to obtain a net cost per mother-infant dyad. The nonparametric bootstrap method was used to present cost-effectiveness acceptability curves and net benefit statistics at alternative willingness to pay thresholds held by decision makers for preventing 1 month of postnatal depression. Results: Women in the preventive intervention group were depressed for an average of 2.21 months (9.57 weeks) during the study period, whereas women in the routine primary care group were depressed for an average of 2.70 months (11.71 weeks). The mean health and social care costs were estimated at £2,396.9 per mother-infant dyad in the preventive intervention group and £2,277.5 per mother-infant dyad in the routine primary care group, providing a mean cost difference of £119.5 (bootstrap 95 percent confidence interval [CI], -£535.4 to £784.9). At a willingness to pay threshold of £1,000 per month of postnatal depression avoided, the probability that the preventive intervention is cost-effective is .71 and the mean net benefit is £383.4 (bootstrap 95 percent CI, -£863.3 to £1,581.5).
Conclusions: The preventive intervention is likely to be cost-effective even at relatively low willingness to pay thresholds for preventing 1 month of postnatal depression during the first 18 months postpartum. Given the negative impact of postnatal depression on later child development, further research is required that investigates the longer-term cost-effectiveness of the preventive intervention in high risk women.
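The nonparametric bootstrap step described above can be sketched as follows. All data here are synthetic stand-ins (the group means loosely echo the abstract; dispersions and distributions are invented): it computes one point of a cost-effectiveness acceptability curve and a percentile interval for the incremental net benefit.

```python
import random

# Hedged sketch with synthetic data, NOT the trial's records: nonparametric
# bootstrap of the incremental net monetary benefit at a willingness-to-pay
# (WTP) threshold, as used for cost-effectiveness acceptability curves.
random.seed(1)

def simulate_group(n, mean_cost, sd_cost, mean_months, sd_months):
    # Each woman: (health/social care cost, months depressed). Invented spread.
    return [(random.gauss(mean_cost, sd_cost),
             random.gauss(mean_months, sd_months)) for _ in range(n)]

intervention = simulate_group(74, 2396.9, 900.0, 2.21, 1.5)
control = simulate_group(77, 2277.5, 900.0, 2.70, 1.5)

def mean(xs):
    return sum(xs) / len(xs)

def net_benefit(group_i, group_c, wtp):
    # Incremental net benefit: WTP * months of depression avoided - extra cost.
    d_cost = mean([c for c, _ in group_i]) - mean([c for c, _ in group_c])
    months_avoided = mean([m for _, m in group_c]) - mean([m for _, m in group_i])
    return wtp * months_avoided - d_cost

def resample(xs):
    return [random.choice(xs) for _ in xs]

wtp = 1000.0
boots = sorted(net_benefit(resample(intervention), resample(control), wtp)
               for _ in range(2000))
prob_cost_effective = sum(b > 0 for b in boots) / len(boots)  # one CEAC point
ci = (boots[50], boots[1949])  # ~95% percentile interval
print(round(prob_cost_effective, 2), [round(b, 1) for b in ci])
```

Sweeping `wtp` over a grid of thresholds and plotting `prob_cost_effective` against it traces out the full acceptability curve.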

Relevance: 10.00%

Abstract:

The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
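The bootstrap (sequential importance resampling) particle filter at the core of the method is easy to state. Below is a minimal 1D toy version on a simulated random walk — an illustration of the recursive estimate-weight-resample loop, not the authors' 3D tree tracker.

```python
import math
import random

# Toy bootstrap particle filter: track a 1D random-walk state observed in
# Gaussian noise. Parameters and data are invented for illustration.
random.seed(2)

PROC_SD, OBS_SD, N = 0.5, 1.0, 500

# Simulate a trajectory and noisy observations of it.
truth, obs = [], []
x = 0.0
for _ in range(50):
    x += random.gauss(0.0, PROC_SD)
    truth.append(x)
    obs.append(x + random.gauss(0.0, OBS_SD))

def gauss_pdf(err, sd):
    return math.exp(-0.5 * (err / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

particles = [0.0] * N
estimates = []
for z in obs:
    # 1. Propagate each particle through the prior (motion) model.
    particles = [p + random.gauss(0.0, PROC_SD) for p in particles]
    # 2. Weight particles by the observation likelihood.
    weights = [gauss_pdf(z - p, OBS_SD) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Posterior-mean estimate, then multinomial resampling
    #    (the "bootstrap" step that resets all weights to be equal).
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    particles = random.choices(particles, weights=weights, k=N)

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth))
print(round(rmse, 3))
```

In the paper the state is a connected dendrite segment rather than a scalar, and the likelihood comes from image intensity, but the recursion is the same.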

Relevance: 10.00%

Abstract:

The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effect of the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves on classical modelling techniques and highlights the impact of resource heterogeneity and network latency on the application performance.

Relevance: 10.00%

Abstract:

The performance benefit when using grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effects of synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which, in turn, is accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical queueing network performance models able to accurately analyze grid architectures and applications. Thanks to the model results, we introduce a new allocation policy based on a combination of task partitioning and task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
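A small simulation illustrates the two effects these models capture: the synchronization barrier (a job finishes only when its slowest task does) and the payoff of task replication (a task finishes when its first copy does). The heavy-tailed service-time distribution is an assumption chosen for illustration, not the papers' measured workload.

```python
import random

# Illustrative simulation (invented workload): heterogeneity of task times
# inflates job completion time through the synchronization barrier;
# replicating each task damps that effect.
random.seed(3)

def task_time():
    # Heterogeneous grid nodes: heavy-tailed (lognormal) service times.
    return random.lognormvariate(0.0, 1.0)

def job_time(n_tasks, replicas):
    # A task completes at the min over its replicas;
    # the job completes at the max over its tasks (the barrier).
    return max(min(task_time() for _ in range(replicas))
               for _ in range(n_tasks))

def mean(xs):
    return sum(xs) / len(xs)

runs = 2000
no_rep = mean([job_time(16, 1) for _ in range(runs)])
two_rep = mean([job_time(16, 2) for _ in range(runs)])
print(round(no_rep, 2), round(two_rep, 2))
```

Replication trades extra resource consumption for a shorter barrier wait, which is exactly the trade-off the queueing models are used to quantify.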

Relevance: 10.00%

Abstract:

The “case for real estate” in the mixed-asset portfolio is a topic of continuing interest to practitioners and academics. The argument is typically made by comparing efficient frontiers of portfolios that include real estate with those that exclude it. However, most investors will have held inefficient portfolios, so when analysing real estate’s place in the mixed-asset portfolio it seems illogical to compare the risk-adjusted performance of efficient portfolios that few, if any, investors would actually have held. The approach adopted here, therefore, is to compare the risk-adjusted performance of a number of mixed-asset portfolios without real estate (which may or may not be efficient) with a very large number of mixed-asset portfolios that include real estate (which again may or may not be efficient), to see how often there is an increase in risk-adjusted performance, significant or otherwise, using appraisal-based and de-smoothed annual data from 1952-2003. So, to the question of how often the addition of private real estate increases risk-adjusted performance relative to mixed-asset portfolios without real estate, the answer is almost all the time; significant increases, however, are harder to find. Additionally, a significant increase in risk-adjusted performance can come either from reductions in portfolio risk or from increases in return, depending on the investors’ initial portfolio structure. In other words, simply adding real estate to a mixed-asset portfolio is not enough to ensure significant increases in performance: the results depend on the percentage added and on the proper reallocation of the initial portfolio mix in the expanded portfolio.

Relevance: 10.00%

Abstract:

There is a substantial literature which suggests that appraisals are smoothed and lag the true level of prices. This study combines a qualitative interview survey of the leading fund manager/owners in the UK and their appraisers with an empirical study of the number of appraisals which change each month within the IPD Monthly Index. The paper concentrates on how the appraisal process operates for commercial property performance measurement purposes. The survey interviews suggest that periodic appraisal services are consolidating in fewer firms and, within these major firms, appraisers adopt different approaches to changing appraisals on a period-by-period basis, with some wanting hard transaction evidence while others act on 'softer' signals. The survey also indicates a seasonal effect, with greater effort and information being applied to annual and quarterly appraisals than monthly ones. The analysis of the appraisals within the Investment Property Databank Monthly Index confirms this effect, with around 5% more appraisals being moved at each quarter day than in the other months. January and August have significantly fewer appraisal changes than other months.

Relevance: 10.00%

Abstract:

The phylogenetics of Sternbergia (Amaryllidaceae) were studied using DNA sequences of the plastid ndhF and matK genes and nuclear internal transcribed spacer (ITS) ribosomal region for 38, 37 and 32 ingroup and outgroup accessions, respectively. All members of Sternbergia were represented by at least one accession, except S. minoica and S. schubertii, with additional taxa from Narcissus and Pancratium serving as principal outgroups. Sternbergia was resolved and supported as sister to Narcissus and composed of two primary subclades: S. colchiciflora sister to S. vernalis, S. candida and S. clusiana, with this clade in turn sister to S. lutea and its allies in both Bayesian and bootstrap analyses. A clear relationship between the two vernal flowering members of the genus was recovered, supporting the hypothesis of a single origin of vernal flowering in Sternbergia. However, in the S. lutea complex, the DNA markers examined did not offer sufficient resolving power to separate taxa, providing some support for the idea that S. sicula and S. greuteriana are conspecific with S. lutea.
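Bootstrap support values like those referred to above come from resampling alignment columns with replacement and re-running the tree inference on each pseudo-alignment. A toy sketch with an invented three-taxon alignment and a simple distance criterion standing in for the full model-based analysis:

```python
import random

# Toy phylogenetic bootstrap: resample alignment columns with replacement
# and ask how often taxa A and B are recovered as the closest pair.
# The alignment is invented; real analyses use full tree inference per replicate.
random.seed(5)

alignment = {
    "A": "ACGTACGTACGTACGTACGT",
    "B": "ACGTACGAACGTACGTACGT",  # nearly identical to A
    "C": "TTGTACGTTCGAACGATCGT",  # more distant
}
n_cols = len(alignment["A"])

def distance(s1, s2, cols):
    # Count mismatches over the (resampled) column indices.
    return sum(s1[i] != s2[i] for i in cols)

reps, support = 1000, 0
for _ in range(reps):
    cols = [random.randrange(n_cols) for _ in range(n_cols)]  # resampled columns
    d_ab = distance(alignment["A"], alignment["B"], cols)
    d_ac = distance(alignment["A"], alignment["C"], cols)
    d_bc = distance(alignment["B"], alignment["C"], cols)
    support += d_ab < min(d_ac, d_bc)  # is (A,B) still the closest pair?

bs_support = support / reps
print(round(bs_support, 2))
```

A grouping recovered in nearly all replicates gets high bootstrap support; the abstract's point is that within the S. lutea complex the markers carry too little signal for any grouping to reach such support.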

Relevance: 10.00%

Abstract:

There is a substantial literature which suggests that appraisals are smoothed and lag the true level of prices. This study combines a qualitative interview survey of the leading fund manager/owners in the UK and their appraisers with an empirical study of the number of appraisals which change each month within the IPD Monthly Index. The paper concentrates on how the appraisal process operates for commercial real estate performance measurement purposes. The survey interviews suggest that periodic appraisal services are consolidating in fewer firms and, within these major firms, appraisers adopt different approaches to changing appraisals on a period-by-period basis, with some wanting hard transaction evidence while others act on ‘softer’ signals. The survey also indicates a seasonal effect, with greater effort and information being applied to annual and quarterly appraisals than monthly ones. The analysis of the appraisals within the IPD Monthly Index confirms this effect, with around 5% more appraisals being moved at each quarter day than in the other months. More November appraisals change than expected, which suggests that the increased information flows for the December year-end appraisals feed through into earlier appraisals, especially as client/appraiser draft appraisal meetings for the December appraisals, a regular occurrence in the UK, can occur in November. January shows significantly less activity than other months, a seasonal effect after the exertions of the December appraisals.

Relevance: 10.00%

Abstract:

A novel approach is presented for the evaluation of circulation type classifications (CTCs) in terms of their capability to predict surface climate variations. The approach is analogous to that for probabilistic meteorological forecasts and is based on the Brier skill score. This score is shown to take a particularly simple form in the context of CTCs and to quantify the resolution of a climate variable by the classifications. The sampling uncertainty of the skill can be estimated by means of nonparametric bootstrap resampling. The evaluation approach is applied for a systematic intercomparison of 71 CTCs (objective and manual, from COST Action 733) with respect to their ability to resolve daily precipitation in the Alpine region. For essentially all CTCs, the Brier skill score is found to be higher for weak and moderate compared to intense precipitation, for winter compared to summer, and over the north and west of the Alps compared to the south and east. Moreover, CTCs with a higher number of types exhibit better skill than CTCs with few types. Among CTCs with comparable type number, the best automatic classifications are found to outperform the best manual classifications. It is not possible to single out one ‘best’ classification for Alpine precipitation, but there is a small group showing particularly high skill.
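The scoring idea can be sketched in a few lines: forecast the event probability conditional on circulation type, score with the Brier skill score against the climatological base rate (which here measures exactly the "resolution" of the classification), and bootstrap-resample days for sampling uncertainty. Types, probabilities and sample sizes below are invented, not COST 733 data.

```python
import random

# Hedged sketch with synthetic data: Brier skill score of a circulation type
# classification (CTC) for a binary event (e.g. precipitation above a
# threshold), plus a bootstrap percentile interval for sampling uncertainty.
random.seed(6)

N_TYPES, N_DAYS = 8, 2000
days = []
for _ in range(N_DAYS):
    t = random.randrange(N_TYPES)
    p_event = 0.1 + 0.8 * t / (N_TYPES - 1)  # event frequency differs by type
    days.append((t, 1 if random.random() < p_event else 0))

def brier_skill(sample):
    base = sum(e for _, e in sample) / len(sample)  # climatological base rate
    counts, events = {}, {}
    for t, e in sample:
        counts[t] = counts.get(t, 0) + 1
        events[t] = events.get(t, 0) + e
    # CTC "forecast": event frequency conditional on the circulation type.
    freq = {t: events[t] / counts[t] for t in counts}
    bs = sum((freq[t] - e) ** 2 for t, e in sample) / len(sample)
    bs_ref = sum((base - e) ** 2 for _, e in sample) / len(sample)
    return 1.0 - bs / bs_ref  # fraction of Brier score resolved by the types

score = brier_skill(days)
boots = sorted(brier_skill([random.choice(days) for _ in days])
               for _ in range(500))
print(round(score, 2), (round(boots[12], 2), round(boots[487], 2)))
```

A classification whose types discriminate wet from dry days strongly scores well; one whose types carry no information about the event scores near zero, which is how the 71 CTCs can be ranked on a common scale.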

Relevance: 10.00%

Abstract:

This study uses a bootstrap methodology to explicitly distinguish between skill and luck for 80 Real Estate Investment Trust Mutual Funds in the period January 1995 to May 2008. The methodology successfully captures non-normality in the idiosyncratic risk of the funds. Using unconditional, beta conditional and alpha-beta conditional estimation models, the results indicate that all but one fund demonstrates poor skill. Tests of robustness show that this finding is largely invariant to REIT market conditions and maturity.
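The skill-versus-luck logic can be sketched as a residual bootstrap under the null of zero alpha, in the spirit of the study (the returns, single-factor structure and sample size below are invented): estimate a fund's alpha, rebuild many luck-only fund histories from its own residuals, and see where the observed alpha sits in that null distribution.

```python
import random
import statistics

# Hedged sketch, NOT the study's data: bootstrap a fund's alpha under the
# null of zero skill to separate skill from luck.
random.seed(7)

T = 160  # roughly the number of months in Jan 1995 - May 2008
market = [random.gauss(0.008, 0.04) for _ in range(T)]
fund = [0.001 + 0.9 * m + random.gauss(0.0, 0.02) for m in market]

def ols_alpha_beta(y, x):
    mx, my = statistics.mean(x), statistics.mean(y)
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

alpha, beta = ols_alpha_beta(fund, market)
residuals = [yi - alpha - beta * xi for xi, yi in zip(market, fund)]

def boot_alpha():
    # Luck-only world: same beta and residuals, alpha forced to zero.
    y_null = [beta * xi + random.choice(residuals) for xi in market]
    return ols_alpha_beta(y_null, market)[0]

null_alphas = sorted(boot_alpha() for _ in range(1000))
p_luck = sum(a >= alpha for a in null_alphas) / len(null_alphas)
print(round(alpha, 4), round(p_luck, 3))
```

A small `p_luck` means the observed alpha is unlikely to be luck alone; the study's conclusion is that, for almost every REIT fund, it is not small.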

Relevance: 10.00%

Abstract:

The case for property has typically rested on the application of modern portfolio theory (MPT), in that property has been shown to offer increased diversification benefits within a multi-asset portfolio without hurting portfolio returns, especially for lower-risk portfolios. However, this view is based upon the use of historic, usually appraisal-based, data for property. Recent research suggests strongly that such data significantly underestimate the risk characteristics of property, because appraisals explicitly or implicitly smooth out much of the real volatility in property returns. This paper examines the portfolio diversification effects of including property in a multi-asset portfolio, using UK appraisal-based (smoothed) data and several derived de-smoothed series. Having considered the effects of de-smoothing, we then consider the inclusion of a further low-risk asset (cash) in order to investigate further whether property's place in a low-risk portfolio is maintained. The conclusions of this study are that the previously supposed benefits of including property have been overstated. Although property may still have a place in a 'balanced' institutional portfolio, the case for property needs to be reassessed and not be based simplistically on the application of MPT.
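De-smoothing of appraisal series is commonly done with a Geltner-style reverse filter, r_t = (r*_t - a·r*_{t-1}) / (1 - a), where r* is the smoothed (appraisal) return and a the smoothing parameter. The series and the choice a = 0.6 below are illustrative only; the paper's exact de-smoothing procedure may differ.

```python
import statistics

# Geltner-style de-smoothing (illustrative): unwind the appraisal filter
# r*_t = a * r*_{t-1} + (1 - a) * r_t to recover a "true" return series.
def desmooth(smoothed, a=0.6):
    out = []
    prev = smoothed[0]
    for r in smoothed[1:]:
        out.append((r - a * prev) / (1.0 - a))
        prev = r
    return out

# A short, invented appraisal return series.
appraisal = [0.02, 0.025, 0.018, -0.005, -0.02, 0.01, 0.03, 0.022]
true_like = desmooth(appraisal)

# De-smoothing recovers volatility that appraisals average away.
print(round(statistics.stdev(appraisal), 4), round(statistics.stdev(true_like), 4))
```

The higher volatility of the de-smoothed series is precisely why appraisal-based data make property look like a lower-risk asset than it is, and why the MPT case for property weakens once de-smoothed data are used.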

Relevance: 10.00%

Abstract:

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies.
However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
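Question (1) above is typically addressed with a mean-squared-error skill score, MSSS = 1 - MSE_init / MSE_uninit, which is positive when initialization improves on the uninitialized projection. A toy calculation with invented anomaly series (not CMIP5 output):

```python
# Invented illustration of a deterministic metric for initialized hindcasts:
# MSSS = 1 - MSE_init / MSE_uninit. Positive values mean the initialized
# hindcast beats the uninitialized climate change projection.

def mse(forecast, observed):
    return sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed)

obs           = [0.10, 0.15, 0.05, 0.20, 0.25, 0.18, 0.30, 0.28]
initialized   = [0.12, 0.13, 0.08, 0.18, 0.22, 0.20, 0.27, 0.30]
uninitialized = [0.05, 0.05, 0.10, 0.10, 0.15, 0.15, 0.20, 0.20]

msss = 1.0 - mse(initialized, obs) / mse(uninitialized, obs)
print(round(msss, 2))  # → 0.91
```

In practice the score is computed per grid point or smoothed region, after adjusting mean and conditional biases as the framework recommends, and its sampling uncertainty matters given the short hindcast records noted in point (3).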

Relevance: 10.00%

Abstract:

We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur at higher frequencies; the additional contribution of REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role: the joint distribution of all time points (serial correlation, stratigraphic order) also determines spectrum estimation.
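The core periodogram idea for unevenly spaced series can be sketched as least-squares fitting of a sinusoid at each trial frequency (the Lomb-Scargle periodogram is equivalent to this up to normalization). REDFITmc2's additions — WOSA averaging, bias correction, the AR(1) null and the Monte Carlo perturbation of time points for timescale errors — are omitted here; data are synthetic.

```python
import math
import random

# Least-squares periodogram for unevenly spaced data (Lomb-Scargle-like
# sketch, not REDFITmc2). Timescale errors would be handled by perturbing
# `times` inside a Monte Carlo loop around this estimator.
random.seed(8)

# Uneven sampling of a sinusoid (true frequency 0.1 per time unit) plus noise.
times = sorted(random.uniform(0.0, 200.0) for _ in range(300))
f_true = 0.1
values = [math.sin(2.0 * math.pi * f_true * t) + random.gauss(0.0, 0.5)
          for t in times]

def power(freq):
    # Fit a*cos + b*sin at this trial frequency by solving the 2x2 normal
    # equations; return squared amplitude as a power proxy.
    c = [math.cos(2.0 * math.pi * freq * t) for t in times]
    s = [math.sin(2.0 * math.pi * freq * t) for t in times]
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    cc, ss, cs = dot(c, c), dot(s, s), dot(c, s)
    cy, sy = dot(c, values), dot(s, values)
    det = cc * ss - cs * cs
    a = (cy * ss - sy * cs) / det
    b = (sy * cc - cy * cs) / det
    return a * a + b * b

freqs = [0.01 * k for k in range(1, 26)]  # trial grid: 0.01 .. 0.25
best = max(freqs, key=power)
print(round(best, 2))
```

The abstract's point about frequency uncertainty corresponds to how the location of this peak wanders when the time points themselves are uncertain.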

Relevance: 10.00%

Abstract:

We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
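The "random model" baseline can be sketched directly: score the model against observations with a normalised squared-error metric, then compare against scores obtained by bootstrap-resampling the observations themselves. All values below are invented, not from the benchmark datasets.

```python
import random

# Illustrative sketch of the benchmark baseline: a model should beat both
# the observation mean and a "random" model built by bootstrap resampling
# of the observations. Data are invented.
random.seed(9)

obs = [random.gauss(5.0, 2.0) for _ in range(200)]   # e.g. site NPP values
model = [o + random.gauss(0.0, 1.0) for o in obs]    # a model with modest error

def nmse(pred, ref):
    # Normalised mean squared error: 0 = perfect,
    # 1 ~ no better than predicting the observation mean everywhere.
    mean_ref = sum(ref) / len(ref)
    num = sum((p - r) ** 2 for p, r in zip(pred, ref))
    den = sum((r - mean_ref) ** 2 for r in ref)
    return num / den

model_score = nmse(model, obs)
random_scores = [nmse([random.choice(obs) for _ in obs], obs) for _ in range(200)]
mean_random = sum(random_scores) / len(random_scores)
print(round(model_score, 2), round(mean_random, 2))
```

The bootstrap baseline has the right distribution of values but no spatial/temporal pairing with the observations, so any model carrying real information scores below it; that gap is what the benchmark metrics quantify per process.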