954 results for pooled estimates
Abstract:
Aim Estimates of geographic range size derived from natural history museum specimens are probably biased for many species. We aim to determine how bias in these estimates relates to range size. Location We conducted computer simulations based on herbarium specimen records from localities ranging from the southern United States to northern Argentina. Methods We used theory on the sampling distribution of the mean and variance to develop working hypotheses about how range size, defined as area of occupancy (AOO), was related to the inter-specific distribution of: (1) mean collection effort per area across the range of a species (MC); (2) variance in collection effort per area across the range of a species (VC); and (3) proportional bias in AOO estimates (PBias: the difference between the expected value of the estimate of AOO and true AOO, divided by true AOO). We tested predictions from these hypotheses using computer simulations based on a dataset of more than 29,000 herbarium specimen records documenting occurrences of 377 plant species in the tribe Bignonieae (Bignoniaceae). Results The working hypotheses predicted that the mean of the inter-specific distribution of MC, VC and PBias were independent of AOO, but that the respective variance and skewness decreased with increasing AOO. Computer simulations supported all but one prediction: the variance of the inter-specific distribution of VC did not decrease with increasing AOO. Main conclusions Our results suggest that, despite an invariant mean, the dispersion and symmetry of the inter-specific distribution of PBias decreases as AOO increases. As AOO increased, range size was less severely underestimated for a large proportion of simulated species. However, as AOO increased, range size estimates having extremely low bias were less common.
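The abstract's definition of proportional bias can be illustrated with a minimal simulation. Everything below (the grid of occupied cells, Poisson collection effort, the detection rule) is an illustrative assumption, not the authors' actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the authors' simulation design): a species truly
# occupies `true_aoo` grid cells; collection effort per cell is Poisson, and a
# cell enters the AOO estimate only if at least one specimen was collected there.
true_aoo = 100      # true area of occupancy, in grid cells
effort = 0.5        # mean collections per occupied cell
n_reps = 2000

est = np.array([np.count_nonzero(rng.poisson(effort, size=true_aoo))
                for _ in range(n_reps)])

# PBias as defined in the abstract: (E[AOO estimate] - true AOO) / true AOO.
pbias = (est.mean() - true_aoo) / true_aoo
```

Since a cell goes undetected with probability exp(-effort), the expected PBias here is exp(-0.5) - 1, roughly -0.61: AOO is severely underestimated under sparse collecting, and raising `effort` drives PBias toward zero.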
Abstract:
We present an analytic description of numerical results for the Landau-gauge SU(2) gluon propagator D(p(2)), obtained from lattice simulations (in the scaling region) for the largest lattice sizes to date, in d = 2, 3 and 4 space-time dimensions. Fits to the gluon data in 3d and in 4d show very good agreement with the tree-level prediction of the refined Gribov-Zwanziger (RGZ) framework, supporting a massive behavior for D(p(2)) in the infrared limit. In particular, we investigate the propagator's pole structure and provide estimates of the dynamical mass scales that can be associated with dimension-two condensates in the theory. In the 2d case, fitting the data requires a noninteger power of the momentum p in the numerator of the expression for D(p(2)). In this case, an infinite-volume-limit extrapolation gives D(0) = 0. Our analysis suggests that this result is related to a particular symmetry in the complex-pole structure of the propagator and not to purely imaginary poles, as would be expected in the original Gribov-Zwanziger scenario.
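The kind of fit described above can be sketched as follows. The rational form, the parameter names, and the synthetic "lattice" data are illustrative stand-ins for the RGZ tree-level expression and the actual gluon data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit gluon-propagator data to a rational form of the kind used
# in refined Gribov-Zwanziger (RGZ) tree-level fits,
#   D(p^2) = C (p^2 + s) / (p^4 + u2 p^2 + t2),
# where the fit parameters relate to dimension-two condensate mass scales.
# The parameter values and synthetic data below are illustrative only.
def d_rgz(p2, C, s, u2, t2):
    return C * (p2 + s) / (p2**2 + u2 * p2 + t2)

p2 = np.linspace(0.01, 16.0, 200)            # momenta squared (synthetic grid)
true = dict(C=0.8, s=2.5, u2=0.5, t2=0.7)    # "true" parameters to recover
data = d_rgz(p2, **true)                     # noiseless synthetic data

popt, _ = curve_fit(d_rgz, p2, data, p0=[1.0, 1.0, 1.0, 1.0])
```

For the parameters chosen here, u2**2 < 4*t2, so the denominator has a pair of complex-conjugate poles rather than real ones, the kind of pole structure the abstract's analysis is concerned with.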
Abstract:
Background Statistical methods for estimating usual intake require at least two short-term dietary measurements in a subsample of the target population. However, the percentage of individuals with a second dietary measurement (replication rate) may influence the precision of estimates, such as percentiles and proportions of individuals below cut-offs of intake. Objective To investigate the precision of usual food intake estimates using different replication rates and different sample sizes. Participants/setting Adolescents participating in the continuous National Health and Nutrition Examination Survey 2007-2008 (n=1,304) who completed two 24-hour recalls. Statistical analyses performed The National Cancer Institute method was used to estimate the usual intake of dark green vegetables in the original sample comprising 1,304 adolescents with a replication rate of 100%. A bootstrap with 100 replications was performed to estimate CIs for percentiles and proportions of individuals below cut-offs of intake. Using the same bootstrap replications, four sets of data sets were sampled with different replication rates (80%, 60%, 40%, and 20%). For each data set created, the National Cancer Institute method was applied and percentiles, CIs, and proportions of individuals below cut-offs were calculated. Precision was checked by comparing each CI obtained from the data sets with different replication rates with the CI obtained from the original data set. Further, we sampled 1,000, 750, 500, and 250 individuals from the original data set and performed the same analytical procedures. Results Percentiles of intake and percentages of individuals below the cut-off points were similar across replication rates and sample sizes, but the CIs widened as the replication rate decreased. The widest CIs were observed at replication rates of 40% and 20%. Conclusions The precision of the usual intake estimates decreased when low replication rates were used.
However, even with different sample sizes, replication rates >40% may not lead to an important loss of precision. J Acad Nutr Diet. 2012;112:1015-1020.
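The bootstrap comparison described above can be sketched in a simplified form. This is not the National Cancer Institute method (which models within-person variance from the repeated recalls); it only illustrates how the CI for a percentile widens when less data carries information, using hypothetical intake values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical intake data for n = 1,304 subjects (the study's sample size);
# the gamma shape/scale values are illustrative, not estimated from NHANES.
n = 1304
intake = rng.gamma(shape=2.0, scale=10.0, size=n)

def bootstrap_ci(x, q, n_boot=100, level=0.95):
    """Percentile-bootstrap confidence interval for the q-th percentile of x."""
    stats = [np.percentile(rng.choice(x, size=x.size, replace=True), q)
             for _ in range(n_boot)]
    lo = np.percentile(stats, (1 - level) / 2 * 100)
    hi = np.percentile(stats, (1 + level) / 2 * 100)
    return lo, hi

# Full sample versus a 20% subsample (mimicking, loosely, the information
# loss that a low replication rate causes).
full = bootstrap_ci(intake, 50)
sub = bootstrap_ci(rng.choice(intake, size=n // 5, replace=False), 50)
```

The CI from the 20% subsample comes out wider than the full-sample CI, mirroring the abstract's finding that precision degrades at low replication rates.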
Abstract:
We investigated the association between diet and head and neck cancer (HNC) risk using data from the International Head and Neck Cancer Epidemiology (INHANCE) consortium. The INHANCE pooled data included 22 case-control studies with 14,520 cases and 22,737 controls. Center-specific quartiles among the controls were used for food groups, and frequencies per week were used for single food items. A dietary pattern score combining high fruit and vegetable intake and low red meat intake was created. Odds ratios (OR) and 95% confidence intervals (CI) for the dietary items on the risk of HNC were estimated with a two-stage random-effects logistic regression model. An inverse association was observed for higher-frequency intake of fruit (4th vs. 1st quartile OR = 0.52, 95% CI = 0.43-0.62, p (trend) < 0.01) and vegetables (OR = 0.66, 95% CI = 0.49-0.90, p (trend) = 0.01). Intake of red meat (OR = 1.40, 95% CI = 1.13-1.74, p (trend) = 0.13) and processed meat (OR = 1.37, 95% CI = 1.14-1.65, p (trend) < 0.01) was positively associated with HNC risk. Higher dietary pattern scores, reflecting high fruit/vegetable and low red meat intake, were associated with reduced HNC risk (per score increment OR = 0.90, 95% CI = 0.84-0.97).
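The second stage of a two-stage pooled analysis of this kind can be sketched as follows: study-specific odds ratios are combined by DerSimonian-Laird random-effects inverse-variance weighting. The four ORs and CIs below are hypothetical, not INHANCE results, and this is a generic pooling sketch rather than the consortium's exact model:

```python
import numpy as np

# Hypothetical study-specific odds ratios with 95% CIs.
or_est = np.array([0.55, 0.48, 0.70, 0.60])
ci_lo  = np.array([0.40, 0.30, 0.50, 0.45])
ci_hi  = np.array([0.76, 0.77, 0.98, 0.80])

y = np.log(or_est)                                   # study log-ORs
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)    # SEs back-calculated from CIs
w = 1 / se**2                                        # fixed-effect weights

y_fe = np.sum(w * y) / np.sum(w)                     # fixed-effect pooled log-OR
q = np.sum(w * (y - y_fe) ** 2)                      # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)              # DL between-study variance

w_re = 1 / (se**2 + tau2)                            # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
pooled_or = np.exp(y_re)
ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
```

When the studies are homogeneous (Q below its degrees of freedom), tau2 truncates to zero and the random-effects estimate coincides with the fixed-effect one.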
Abstract:
In order to assess the contribution of different parenteral routes as risk exposures to the hepatitis C virus (HCV), samples from nine surveys or cross-sectional studies conducted in two Brazilian inland regions were pooled, for a total of 3,910 subjects. Heterogeneity among the study results for the different risk factors was tested and the results were shown to be homogeneous. Anti-HCV antibodies were observed in 241 individuals, of whom 146 (3.7%, 95% CI = 3.2-4.4) had HCV exposure confirmed by immunoblot analysis or PCR testing. After adjustment for relevant variables, an association between confirmed HCV exposure and injection drug use, tattooing, and advanced age was observed. In a second logistic model that included exposures not assessed in all nine studies, a smaller sample was analyzed, revealing an independent association of HCV with a past history of surgery and with men who have sex with men, in addition to repeated injection drug use. Overall, these analyses corroborate the finding that injection drug use is the main risk factor for HCV exposure and spread, in addition to other parenteral routes. J. Med. Virol. 84:756-762, 2012. (C) 2012 Wiley Periodicals, Inc.
Abstract:
In this paper the influence of a secondary variable, as a function of its correlation with the primary variable, on collocated cokriging is examined. For this study, five exhaustive data sets were generated by computer, from which samples with 60 and 104 data points were drawn using stratified random sampling. These exhaustive data sets started from a pair of primary and secondary variables showing good correlation; successive sets were then generated by adding white noise in such a way that the correlation became progressively poorer. Using these samples, it was possible to determine how primary and secondary information is used to estimate an unsampled location according to the correlation level.
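A minimal sketch of how such progressively noisier secondary variables can be generated; the Gaussian fields and noise levels are illustrative assumptions, and no cokriging is performed here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Start from a secondary variable well correlated with the primary, then add
# increasing amounts of white noise so the sample correlation degrades.
n = 104                                          # one of the study's sample sizes
primary = rng.normal(size=n)
secondary = primary + 0.3 * rng.normal(size=n)   # initially well correlated

rs = []
for noise_sd in (0.0, 0.5, 1.0, 2.0):
    noisy = secondary + noise_sd * rng.normal(size=n)
    rs.append(float(np.corrcoef(primary, noisy)[0, 1]))
```

Each extra increment of noise variance lowers the primary-secondary correlation, which is exactly the knob the study turns to see how collocated cokriging weighs the secondary information.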
Abstract:
Chlorophyll determination with a portable chlorophyll meter can indicate the period of highest N demand of plants and whether sidedressing is required. Defining the optimal timing of N application to common bean is thus fundamental to increase N use efficiency, increase yields, and reduce the cost of fertilization. The objectives of this study were to evaluate the efficiency of the N sufficiency index (NSI), calculated from the relative chlorophyll index (RCI) in leaves measured with a portable chlorophyll meter, as an indicator of the timing of N sidedressing fertilization, and to verify which NSI value (90 or 95 %) is the most appropriate to indicate the moment of N fertilization of the common bean cultivar Perola. The experiment was carried out in the rainy and dry growing seasons of the 2009/10 agricultural year on a dystroferric Red Nitosol in Botucatu, São Paulo State, Brazil. The experiment was arranged in a randomized complete block design with five treatments, consisting of N managements (M1: 200 kg ha-1 N (40 kg at sowing + 80 kg 15 days after emergence (DAE) + 80 kg 30 DAE); M2: 100 kg ha-1 N (20 kg at sowing + 40 kg 15 DAE + 40 kg 30 DAE); M3: 20 kg ha-1 N at sowing + 30 kg ha-1 when chlorophyll meter readings indicated NSI < 95 %; M4: 20 kg ha-1 N at sowing + 30 kg ha-1 N when chlorophyll meter readings indicated NSI < 90 %; and M5: control (without N application)) and four replications. The variables RCI, aboveground dry matter, total leaf N concentration, production components, grain yield, relative yield, and N use efficiency were evaluated. The RCI correlated with leaf N concentrations. By monitoring the RCI with the chlorophyll meter, the period of N sidedressing of common bean could be defined, improving N use efficiency and avoiding unnecessary N supply. An NSI of 90 % of the reference area was more efficient at defining the moment of N sidedressing of common bean and at increasing N use efficiency.
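The decision rule evaluated above can be written compactly: the NSI is the plot's RCI as a percentage of a well-fertilized reference area, and sidedressing is applied when it falls below the chosen threshold. The function names and the example meter readings are hypothetical; the 90 and 95 % thresholds are the values compared in the study:

```python
# Sketch of the NSI-based sidedressing rule; readings are illustrative.
def nsi(rci_plot: float, rci_reference: float) -> float:
    """N sufficiency index (%) relative to the well-fertilized reference area."""
    return 100.0 * rci_plot / rci_reference

def needs_sidedressing(rci_plot: float, rci_reference: float,
                       threshold: float = 90.0) -> bool:
    """Apply the 30 kg ha-1 N sidedressing when NSI drops below the threshold."""
    return nsi(rci_plot, rci_reference) < threshold

print(needs_sidedressing(38.0, 45.0))   # NSI ~ 84.4 -> sidedress
print(needs_sidedressing(43.0, 45.0))   # NSI ~ 95.6 -> wait
```

Lowering the threshold from 95 to 90 % simply delays the trigger, which is how the study's M3 and M4 managements differ.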
Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
Abstract:
We obtain explicit formulas for the eigenvalues of integral operators generated by continuous dot product kernels defined on the sphere, expressed via the usual gamma function. Using them, we present both a procedure to derive sharp bounds for the eigenvalues and a description of their asymptotic behavior near 0. We illustrate our results with examples, among them the integral operator generated by a Gaussian kernel. Finally, we sketch complex versions of our results to cover the cases in which the sphere sits in a Hermitian space.
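Eigenvalue formulas of this kind ultimately rest on the classical Funk-Hecke theorem. As a hedged sketch (stated here for the real sphere S^{m-1} with m >= 3, using Gegenbauer polynomials; this is the standard background result, not the paper's own formulas), an integral operator with kernel f(<x,y>) acts on each spherical harmonic Y_n of degree n as:

```latex
\int_{S^{m-1}} f(\langle x, y \rangle)\, Y_n(y)\, d\sigma(y) = \lambda_n\, Y_n(x),
\qquad
\lambda_n = \bigl| S^{m-2} \bigr| \int_{-1}^{1} f(t)\,
  \frac{C_n^{\nu}(t)}{C_n^{\nu}(1)}\, (1 - t^2)^{\frac{m-3}{2}}\, dt,
\qquad \nu = \frac{m-2}{2}.
```

Each eigenvalue occurs with the multiplicity of the degree-n harmonics; for an analytic profile f, evaluating these integrals term by term on a power series is presumably where the gamma-function expressions mentioned in the abstract arise.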
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions, and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function of, among other things, net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results consistent with theoretical expectations. To tackle non-normality and heteroskedasticity in the error term, which render the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations.
Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models lies in the fact that they make it possible to account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulty are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model, and the recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
Abstract:
In this Thesis we have presented our work on the analysis of galaxy clusters through their X-ray emission and the gravitational lensing effect that they induce. Our research was mainly aimed at verifying, and possibly explaining, the observed mismatch between the galaxy cluster mass distributions estimated through two of the most promising techniques, namely X-ray and gravitational lensing analyses. Moreover, it is well established that combined, multi-wavelength analyses are extremely effective in addressing and explaining open issues in astronomy; however, in order to follow this approach, it is crucial to test the reliability and the limitations of the individual analysis techniques. In this Thesis we therefore also assessed the impact of some factors that could affect both the X-ray and the strong lensing analyses.
Abstract:
In this thesis, we consider the problem of solving large, sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods; new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.
Abstract:
There is conflicting evidence as to whether Parkinson's disease (PD) is associated with impaired recognition memory and which of its underlying processes, recollection or familiarity, is more affected by the disease. The present study explored the contribution of recollection and familiarity to verbal recognition memory performance in 14 nondemented PD patients and a healthy control group with two different methods: (i) the word-frequency mirror effect, and (ii) Remember/Know judgments. Overall, the recognition memory of patients was intact. The word-frequency mirror effect was observed both in patients and controls: hit rates were higher and false alarm rates were lower for low-frequency compared to high-frequency words. However, Remember/Know judgments indicated normal recollection but impaired familiarity. Our findings suggest that patients with mild to moderate PD are selectively impaired in familiarity, whereas recollection and overall recognition memory are intact.
Abstract:
Objective To examine the presence and extent of small study effects in clinical osteoarthritis research. Design Meta-epidemiological study. Data sources 13 meta-analyses including 153 randomised trials (41 605 patients) that compared therapeutic interventions with placebo or non-intervention control in patients with osteoarthritis of the hip or knee and used patients’ reported pain as an outcome. Methods We compared estimated benefits of treatment between large trials (at least 100 patients per arm) and small trials, explored funnel plots supplemented with lines of predicted effects and contours of significance, and used three approaches to estimate treatment effects: meta-analyses including all trials irrespective of sample size, meta-analyses restricted to large trials, and treatment effects predicted for large trials. Results On average, treatment effects were more beneficial in small than in large trials (difference in effect sizes −0.21, 95% confidence interval −0.34 to −0.08, P=0.001). Depending on criteria used, six to eight funnel plots indicated small study effects. In six of 13 meta-analyses, the overall pooled estimate suggested a clinically relevant, significant benefit of treatment, whereas analyses restricted to large trials and predicted effects in large trials yielded smaller non-significant estimates. Conclusions Small study effects can often distort results of meta-analyses. The influence of small trials on estimated treatment effects should be routinely assessed.
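The contrast between pooling all trials and restricting to large trials can be sketched with a fixed-effect inverse-variance meta-analysis. The effect sizes, standard errors and arm sizes below are hypothetical, not the osteoarthritis data:

```python
import numpy as np

# Hypothetical trial-level standardized mean differences (negative = benefit),
# their standard errors, and patients per arm.
effect = np.array([-0.60, -0.45, -0.50, -0.15, -0.12])
se     = np.array([ 0.25,  0.22,  0.20,  0.08,  0.07])
n_arm  = np.array([   30,    40,    50,   150,   200])

def pool(es, s):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    w = 1 / s**2
    return np.sum(w * es) / np.sum(w), np.sqrt(1 / np.sum(w))

all_est, _ = pool(effect, se)                 # all trials
large = n_arm >= 100                          # the study's "large trial" cut-off
large_est, _ = pool(effect[large], se[large]) # restricted to large trials
# The three small trials drag the overall estimate toward a larger apparent benefit.
```

In this toy example the all-trials estimate is more beneficial than the large-trials-only estimate, which is the small-study effect the meta-epidemiological study quantifies.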
Abstract:
We performed a pooled analysis of three trials comparing titanium-nitride-oxide-coated bioactive stents (BAS) with paclitaxel-eluting stents (PES) in 1,774 patients. All patients were followed for 12 months. The primary outcomes of interest were recurrent myocardial infarction (MI), death and target lesion revascularization (TLR). Secondary endpoints were stent thrombosis (ST) and major adverse cardiac events (MACE), including MI, death and TLR. There were 922 patients in the BAS group and 852 in the PES group. BAS significantly reduced the risk of recurrent MI (2.7% vs. 5.6%; risk ratio 0.50, 95% CI 0.31-0.81; p = 0.004) and MACE (8.9% vs. 12.6%; risk ratio 0.71, 95% CI 0.54-0.94; p = 0.02) during the 12 months of follow-up. In contrast, the differences between BAS and PES were not statistically significant with respect to TLR (risk ratio 0.98, 95% CI 0.68-1.41), death (risk ratio 0.96, 95% CI 0.61-1.51) and definite ST (risk ratio 0.28, 95% CI 0.05-1.47). In conclusion, the results of this analysis suggest that BAS is as effective as PES with respect to TLR and improves clinical outcomes by reducing MI and MACE compared with PES.
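The calculation behind figures like "2.7% vs. 5.6%; risk ratio 0.50, 95% CI 0.31-0.81" can be sketched as a standard risk ratio with a Wald interval on the log scale. The event counts below are hypothetical round numbers chosen to roughly match the reported percentages, not the trial data:

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a 95% Wald confidence interval."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts giving ~2.7% (25/922) vs ~5.6% (48/852) event rates.
rr, lo, hi = risk_ratio(25, 922, 48, 852)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The interval excludes 1, so a difference like this would be statistically significant, in contrast to the TLR, death and ST comparisons above, whose CIs all span 1.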