821 results for Small sample asymptotics
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Effects of roads on wildlife and its habitat have been measured using metrics such as nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) the Integral Road Effect (IRE), which measures the summed effect of the points of a road at a fixed point in the forest; and (2) the Average Value of the Infinitesimal Road Effect (AVIRE), which measures the average effect of the roads at this point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of the ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices at a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared, through the Akaike Information Criterion (AIC), a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE), models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Compared to the other road effect indices, AVIRE performed best in explaining the abundance of forest specialist species, whereas nearest road distance performed best for generalist species. AVIRE and habitat together were included in the best model for both small mammal groups; that is, higher abundance of specialist and generalist small mammals occurred where the average road effect was lower (less AVIRE) and habitat was more abundant.
Moreover, unlike the other road effect indices (except mesh size), AVIRE was not significantly correlated with the habitat cover of specialists or generalists, which allows the effect of roads to be separated from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful for describing other spatial ecological phenomena, such as edge effects in habitat fragments. (C) 2012 Elsevier B.V. All rights reserved.
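The line-integral definition of IRE, and AVIRE as its quotient by road length, can be sketched numerically. The sketch below is illustrative only: the infinitesimal road effect is assumed here to be a hypothetical exponential decay with distance, and the road is a simple polyline; neither matches the paper's actual calibration or its ArcGIS workflow.

```python
import math

def road_effect_indices(point, road, effect, n_steps=400):
    """Approximate the Integral Road Effect (IRE), i.e. the line integral of
    the infinitesimal road effect along a polyline road, by midpoint
    quadrature, and AVIRE = IRE / total road length."""
    px, py = point
    ire, length = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(road, road[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        length += seg
        h = seg / n_steps
        for i in range(n_steps):
            t = (i + 0.5) / n_steps                 # midpoint of sub-segment
            mx, my = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            ire += effect(math.hypot(mx - px, my - py)) * h
    return ire, ire / length

# Hypothetical infinitesimal road effect: exponential decay over ~100 m
effect = lambda d: math.exp(-d / 100.0)
ire, avire = road_effect_indices((0.0, 50.0), [(-200.0, 0.0), (200.0, 0.0)], effect)
```

The midpoint rule converges to the true line integral as the number of sub-segments grows; in the paper this computation is coupled to GIS road layers rather than hand-coded coordinates.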
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as had been studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour concerning the latent trait distribution. Also, they developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP. They showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. 
Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R. J. Patz and B. W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that of [3] in terms of parameter recovery, particularly under the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is presented together with the development of model-fit assessment tools. The results are compared with the ones obtained by Azevedo et al. and indicate that the hierarchical approach makes MCMC algorithms easier to implement, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
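Henze's (1986) stochastic representation, on which the proposed single-step MHWGS algorithm rests, can be illustrated directly: a standard skew-normal variate is a weighted combination of a half-normal and an independent normal. This is a minimal sampling sketch, not the authors' estimation algorithm.

```python
import math
import random

def rskew_normal(n, delta, seed=0):
    """Draw n standard skew-normal variates via Henze's (1986) stochastic
    representation: Z = delta*|U0| + sqrt(1 - delta^2)*U1 with U0, U1 iid
    N(0, 1); the shape parameter is lambda = delta / sqrt(1 - delta^2)."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - delta * delta)
    return [delta * abs(rng.gauss(0.0, 1.0)) + s * rng.gauss(0.0, 1.0)
            for _ in range(n)]

draws = rskew_normal(200_000, delta=0.8)
sample_mean = sum(draws) / len(draws)
# Theoretical mean of a standard skew-normal: delta * sqrt(2 / pi)
```

Conditioning on the half-normal component yields the hierarchical (normal given truncated-normal) structure that allows a Gibbs-style update with a single Metropolis-Hastings step.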
Abstract:
This Ph.D. Thesis has been carried out in the framework of a long-term, large project devoted to describing the main photometric, chemical, evolutionary and integrated properties of a representative sample of Large and Small Magellanic Cloud (LMC and SMC, respectively) clusters. The globular cluster system of these two irregular galaxies provides a rich resource for investigating stellar and chemical evolution and for obtaining a detailed view of the star formation history and chemical enrichment of the Clouds. The results discussed here are based on the analysis of high-resolution photometric and spectroscopic datasets obtained by using the latest generation of imagers and spectrographs. The principal aims of this project are summarized as follows: • The study of the AGB and RGB sequences in a sample of MC clusters, through the analysis of a wide near-infrared photometric database, including 33 Magellanic globulars observed in three observing runs with the near-infrared camera SOFI@NTT (ESO, La Silla). • The study of the chemical properties of a sample of MC clusters, by using optical and near-infrared high-resolution spectra. Three observing runs were awarded to our group to observe 9 LMC clusters (with ages between 100 Myr and 13 Gyr) with the optical high-resolution spectrograph FLAMES@VLT (ESO, Paranal) and 4 very young (<30 Myr) clusters (3 in the LMC and 1 in the SMC) with the near-infrared high-resolution spectrograph CRIRES@VLT. • The study of the photometric properties of the main evolutionary sequences in optical Color-Magnitude Diagrams (CMDs) obtained by using HST archive data, with the final aim of dating several clusters via the comparison between the observed CMDs and theoretical isochrones. The determination of the age of a stellar population requires an accurate measure of the Main Sequence (MS) Turn-Off (TO) luminosity and knowledge of the distance modulus, reddening and overall metallicity. 
For this purpose, we limited the age study to the clusters already observed with high-resolution spectroscopy, so as to date only clusters with accurate estimates of the overall metallicity.
Abstract:
The behaviour of a polymer depends strongly on the length and time scales as well as on the temperature at which it is probed. In this work, I describe investigations of polymer surfaces using scanning probe microscopy with heatable probes. With these probes, surfaces can be heated within seconds down to microseconds. I introduce experiments for the local and fast determination of glass transition and melting temperatures. I developed a method which allows the determination of glass transition and melting temperatures on films with thicknesses below 100 nm: a background measurement on the substrate was performed, and the resulting curve was subtracted from the measurement on the polymer film. The differential measurement on polystyrene films with thicknesses between 35 nm and 160 nm showed characteristic signals at 95 ± 1 °C, in accordance with the glass transition of polystyrene. Pressing heated probes into polymer films causes plastic deformation. Nanometer-sized deformations are currently investigated in novel concepts for high-density data storage. A suitable medium for such a storage system has to be easily indentable on one hand, but on the other hand it also has to be very stable towards surface-induced wear. For developing such a medium I investigated a new approach: a comparably soft material, namely polystyrene, was protected with a thin but very hard layer made of plasma-polymerized norbornene. The resulting bilayered media were tested for surface stability and deformability. I showed that the bilayered material combines the deformability of polystyrene with the surface stability of the plasma polymer, and that the material therefore is a very good storage medium. In addition, we investigated the glass transition temperature of polystyrene at timescales of 10 µs and found it to be approx. 220 °C. 
The increase of this characteristic temperature of the polymer results from the short time at which the polymer was probed and reflects the well-known time-temperature superposition principle. Heatable probes were also used for the characterization of silver-azide-filled nanocapsules. The use of heatable probes allowed determining the decomposition temperature of the capsules from a few nanograms of material. The measured decomposition temperatures ranged from 180 °C to 225 °C, in accordance with literature values. The investigation of small amounts of sample was necessary due to the limited availability of the material. Furthermore, investigating larger amounts of the capsules using conventional thermal gravimetric analysis could lead to contamination or even damage of the instrument. Besides the analysis of material parameters, I used the heatable probes for the local thermal decomposition of pentacene precursor material in order to form nanoscale conductive structures. Here, the thickness of the precursor layer was important for complete thermal decomposition. Another aspect of my work was the investigation of a redox-active polymer, poly-10-(4-vinylbenzyl)-10H-phenothiazine (PVBPT), for data storage. Data is stored by changing the local conductivity of the material by applying a voltage between tip and surface. The generated structures were stable for more than 16 h. It was shown that the presence of water is essential for successful patterning.
Abstract:
Objective To examine the presence and extent of small study effects in clinical osteoarthritis research. Design Meta-epidemiological study. Data sources 13 meta-analyses including 153 randomised trials (41 605 patients) that compared therapeutic interventions with placebo or non-intervention control in patients with osteoarthritis of the hip or knee and used patients’ reported pain as an outcome. Methods We compared estimated benefits of treatment between large trials (at least 100 patients per arm) and small trials, explored funnel plots supplemented with lines of predicted effects and contours of significance, and used three approaches to estimate treatment effects: meta-analyses including all trials irrespective of sample size, meta-analyses restricted to large trials, and treatment effects predicted for large trials. Results On average, treatment effects were more beneficial in small than in large trials (difference in effect sizes −0.21, 95% confidence interval −0.34 to −0.08, P=0.001). Depending on criteria used, six to eight funnel plots indicated small study effects. In six of 13 meta-analyses, the overall pooled estimate suggested a clinically relevant, significant benefit of treatment, whereas analyses restricted to large trials and predicted effects in large trials yielded smaller non-significant estimates. Conclusions Small study effects can often distort results of meta-analyses. The influence of small trials on estimated treatment effects should be routinely assessed.
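The mechanism behind the comparison (small trials receive little weight once large trials are present, yet can still shift a pooled estimate) can be illustrated with fixed-effect, inverse-variance pooling on made-up numbers, not the study's data:

```python
def pooled_effect(effects, ses):
    """Fixed-effect (inverse-variance) meta-analysis: the pooled estimate
    weights each trial by 1/SE^2, so large trials dominate the weighting."""
    w = [1.0 / s ** 2 for s in ses]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, (1.0 / sum(w)) ** 0.5

# Made-up effect sizes (negative favours treatment): three small trials with
# exaggerated benefits, two large trials with modest benefits
effects = [-0.60, -0.50, -0.70, -0.15, -0.10]
ses     = [0.30, 0.35, 0.28, 0.08, 0.07]     # smaller SE = larger trial
all_trials, _ = pooled_effect(effects, ses)
large_only, _ = pooled_effect(effects[3:], ses[3:])
```

Here the pooled estimate over all trials is more beneficial than the estimate restricted to large trials, the pattern the meta-epidemiological study quantifies across 13 meta-analyses.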
Abstract:
Tajikistan is judged to be highly vulnerable to risk, including food insecurity risks and climate change risks. By some vulnerability measures it is the most vulnerable among all 28 countries in the World Bank’s Europe and Central Asia Region – ECA (World Bank 2009). The rural population, with its relatively high incidence of poverty, is particularly vulnerable. The Pilot Program for Climate Resilience (PPCR) in Tajikistan (2011) provided an opportunity to conduct a farm-level survey with the objective of assessing various dimensions of the rural population’s vulnerability to risk and their perception of constraints to farming operations and livelihoods. Accordingly, the survey is referred to as the 2011 PPCR survey. The rural population in Tajikistan is highly agrarian, with about 50% of family income deriving from agriculture (see Figure 4.1; also LSMS 2007 – own calculations). Tajikistan’s agriculture basically consists of two groups of producers: small household plots – the successors of Soviet “private agriculture” – and dehkan (or “peasant”) farms – new family farming structures that began to be created under relevant legislation passed after 1992 (Lerman and Sedik, 2008). The household plots manage 20% of arable land and produce 65% of gross agricultural output (GAO). Dehkan farms manage 65% of arable land and produce close to 30% of GAO. The remaining 15% of arable land is held in agricultural enterprises – the rapidly shrinking sector of corporate farms that succeeded the Soviet kolkhozes and sovkhozes and today produces less than 10% of GAO (TajStat 2011). The survey conducted in May 2011 focused on dehkan farms, as budgetary constraints precluded the inclusion of household plots. A total of 142 dehkan farms were surveyed in face-to-face interviews. They were sampled from 17 districts across all four regions – Sughd, Khatlon, RRP, and GBAO. 
The districts were selected so as to represent different agro-climatic zones, different vulnerability zones (based on the World Bank (2011) vulnerability assessment), and different food-insecurity zones (based on WFP/IPC assessments). Within each district, 3-4 jamoats were chosen at random and 2-3 farms were selected in each jamoat from lists provided by jamoat administration so as to maximize the variability by farm characteristics. The sample design by region/district is presented in Table A, which also shows the agro-climatic zone and the food security phase for each district. The sample districts are superimposed on a map of food security phases based on IPC April 2011.
Abstract:
OBJECTIVES: To determine sample sizes in studies on diagnostic accuracy and the proportion of studies that report calculations of sample size. DESIGN: Literature survey. DATA SOURCES: All issues of eight leading journals published in 2002. METHODS: Sample sizes, number of subgroup analyses, and how often studies reported calculations of sample size were extracted. RESULTS: 43 of 8999 articles were non-screening studies on diagnostic accuracy. The median sample size was 118 (interquartile range 71-350) and the median prevalence of the target condition was 43% (27-61%). The median number of patients with the target condition--needed to calculate a test's sensitivity--was 49 (28-91). The median number of patients without the target condition--needed to determine a test's specificity--was 76 (27-209). Two of the 43 studies (5%) reported a priori calculations of sample size. Twenty articles (47%) reported results for patient subgroups. The number of subgroups ranged from two to 19 (median four). No studies reported that sample size was calculated on the basis of preplanned analyses of subgroups. CONCLUSION: Few studies on diagnostic accuracy report considerations of sample size. The number of participants in most studies on diagnostic accuracy is probably too small to analyse variability of measures of accuracy across patient subgroups.
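An a priori calculation of the kind the survey found lacking can be sketched with the usual normal approximation for a single proportion: choose the number of patients with the target condition so that sensitivity is estimated within a given confidence-interval half-width, then inflate by the expected prevalence. The inputs below (90% sensitivity, ±5 percentage points, 43% prevalence, the survey's median) are illustrative.

```python
import math

def diagnostic_sample_size(sens, half_width, prevalence, z=1.96):
    """Number of cases needed to estimate sensitivity to within
    +/- half_width (normal approximation to the binomial), and the total
    sample size after inflating by the prevalence of the target condition."""
    n_cases = math.ceil(z ** 2 * sens * (1 - sens) / half_width ** 2)
    n_total = math.ceil(n_cases / prevalence)
    return n_cases, n_total

# Illustrative inputs: expected sensitivity 90%, CI half-width 5 percentage
# points, prevalence 43% (the survey's median prevalence)
cases, total = diagnostic_sample_size(0.90, 0.05, 0.43)
```

The same calculation applied to specificity uses patients without the target condition, inflated by 1 minus the prevalence; subgroup analyses multiply these requirements further.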
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of parameter standard errors. We extend this approach to data where the number of observations correlated with a given one grows with sample size, and show that parameter estimates are consistent and asymptotically normal with a slower convergence rate than for independent data, and that an information sandwich variance estimator is consistent. We present the two problems that motivated this work: the modelling of patterns of HIV genetic variation and the behavior of clustered-data estimators when clusters are large.
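The information sandwich (cluster-robust) variance estimator can be sketched in the simplest case: a no-intercept linear model fitted as if observations were independent, with the "meat" built from per-cluster score contributions. The data below are hypothetical, and this is a sketch of the general estimator, not of the paper's growing-cluster asymptotics.

```python
def sandwich_slope(clusters):
    """Slope of a no-intercept linear model y = beta*x fitted as if all
    observations were independent, with a cluster-robust (information
    sandwich) standard error: bread * meat * bread, where the bread is the
    inverse of sum(x^2) and the meat sums squared per-cluster scores."""
    sxx = sum(x * x for c in clusters for x, _ in c)          # bread term
    sxy = sum(x * y for c in clusters for x, y in c)
    beta = sxy / sxx
    meat = sum(sum(x * (y - beta * x) for x, y in c) ** 2     # cluster scores
               for c in clusters)
    return beta, (meat / sxx ** 2) ** 0.5

# Hypothetical clustered data: correlation within clusters, not across them
data = [[(1.0, 2.1), (2.0, 4.3)], [(1.0, 1.7), (3.0, 5.6)], [(2.0, 4.4)]]
beta, se = sandwich_slope(data)
```

Summing scores within a cluster before squaring is what makes the estimator robust to within-cluster correlation; with one observation per cluster it reduces to the ordinary heteroscedasticity-robust estimator.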
Abstract:
We investigate compositionally monotonous, but energetically diverse, tephra samples from Pacaya to see if fossil bubbles in pyroclasts could reflect eruptive style. Bubble size distributions (BSDs) were determined for four ash- to lapilli-sized tephra samples using an adapted version of the stereology conversion of Sahagian and Proussevitch (1998). Eruptions range from very weak to very energetic. Hundreds of ESEM BSE images were processed with the ImageJ software to obtain a robust and statistically sound data set of vesicles (a minimum of 700 bubbles per sample). Qualitative textural analysis and major element chemical compositions were also carried out. There is higher vesicularity for explosive pyroclasts and an inverse correlation between bubble number density (NV) and explosivity.
Abstract:
Recently, screening tests for monitoring the prevalence of transmissible spongiform encephalopathies specifically in sheep and goats became available. Although most countries require comprehensive test validation prior to approval, little is known about their performance under normal operating conditions. Switzerland was one of the first countries to implement 2 of these tests, an enzyme-linked immunosorbent assay (ELISA) and a Western blot, in a 1-year active surveillance program. Slaughtered animals (n = 32,777) were analyzed in either of the 2 tests with immunohistochemistry for confirmation of initial reactive results, and fallen stock samples (n = 3,193) were subjected to both screening tests and immunohistochemistry in parallel. Initial reactive and false-positive rates were recorded over time. Both tests revealed an excellent diagnostic specificity (>99.5%). However, initial reactive rates were elevated at the beginning of the program but dropped to levels below 1% with routine and enhanced staff training. Only those in the ELISA increased again in the second half of the program and correlated with the degree of tissue autolysis in the fallen stock samples. It is noteworthy that the Western blot missed 1 of the 3 atypical scrapie cases in the fallen stock, indicating potential differences in the diagnostic sensitivities between the 2 screening tests. However, an estimation of the diagnostic sensitivity for both tests on field samples remained difficult due to the low disease prevalence. Taken together, these results highlight the importance of staff training, sample quality, and interlaboratory comparison trials when such screening tests are implemented in the field.
Abstract:
Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
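The flavour of the extended models can be conveyed with a toy version: integer transaction units, a per-stock fixed cost, and mean absolute deviation (MAD) as the risk measure, solved here by brute-force enumeration rather than the linear programming needed for realistic instances. All numbers are hypothetical and unrelated to the Swissquote data set.

```python
from itertools import product

def best_small_investor_portfolio(prices, scenarios, budget, fixed_cost, max_mad):
    """Brute-force toy of the extended models: choose integer share counts
    maximising expected portfolio return over equally likely scenarios,
    subject to a budget that includes a per-stock fixed transaction cost
    and a cap on the mean absolute deviation (MAD) of portfolio return."""
    best_units, best_mean = None, float("-inf")
    max_units = [int(budget // p) for p in prices]
    for units in product(*(range(m + 1) for m in max_units)):
        cost = sum(u * p for u, p in zip(units, prices))
        cost += fixed_cost * sum(1 for u in units if u > 0)
        if cost > budget:
            continue                                   # budget violated
        rets = [sum(u * p * r for u, p, r in zip(units, prices, sc))
                for sc in scenarios]                   # return per scenario
        mean = sum(rets) / len(rets)
        mad = sum(abs(r - mean) for r in rets) / len(rets)
        if mad <= max_mad and mean > best_mean:
            best_units, best_mean = units, mean
    return best_units, best_mean

# Hypothetical two-stock universe with three equally likely return scenarios
units, exp_ret = best_small_investor_portfolio(
    prices=[50.0, 20.0],
    scenarios=[[0.05, 0.02], [-0.02, 0.01], [0.03, 0.00]],
    budget=200.0, fixed_cost=5.0, max_mad=2.0)
```

The fixed cost makes it expensive to hold many positions with small amounts of capital, which is exactly why models designed for institutional investors can mislead small investors.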
Abstract:
Sample size calculations are advocated by the Consolidated Standards of Reporting Trials (CONSORT) group to justify sample sizes in randomized controlled trials (RCTs). This study aimed to analyse the reporting of sample size calculations in trials published as RCTs in orthodontic speciality journals. The performance of sample size calculations was assessed and calculations verified where possible. Related aspects, including number of authors; parallel, split-mouth, or other design; single- or multi-centre study; region of publication; type of data analysis (intention-to-treat or per-protocol basis); and number of participants recruited and lost to follow-up, were considered. Of 139 RCTs identified, complete sample size calculations were reported in 41 studies (29.5 per cent). Parallel designs were typically adopted (n = 113; 81 per cent), with 80 per cent (n = 111) involving two arms and 16 per cent having three arms. Data analysis was conducted on an intention-to-treat (ITT) basis in a small minority of studies (n = 18; 13 per cent). According to the calculations presented, overall, a median of 46 participants was required to demonstrate sufficient power to highlight meaningful differences (typically at a power of 80 per cent). The median number of participants recruited was 60, with a median of 4 participants lost to follow-up. Our findings indicate good agreement between projected numbers required and those verified (median discrepancy: 5.3 per cent), although only a minority of trials (29.5 per cent) could be examined. Although sample size calculations are often reported in trials published as RCTs in orthodontic speciality journals, presentation is suboptimal and in need of significant improvement.
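A standard sample size calculation of the kind CONSORT asks for, for a two-arm parallel trial comparing means under a normal approximation, looks as follows; the inputs (outcome SD of 1, target difference of half an SD, 80 per cent power) are illustrative, not drawn from the reviewed trials.

```python
import math
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm parallel RCT comparing means,
    normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma/delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Hypothetical inputs: outcome SD of 1, meaningful difference of 0.5 SD
n = n_per_arm(sigma=1.0, delta=0.5)
```

Reporting sigma, delta, alpha, and power alongside the resulting n is what allows a reported calculation to be verified, as this review attempted for the 41 trials that presented one.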