835 results for sampling methodology
Abstract:
Exposure assessment is an important step in the risk assessment process and has evolved more quickly than perhaps any other aspect of the four-step risk paradigm (hazard identification, exposure assessment, dose-response analysis, and risk characterization). Nevertheless, some epidemiological studies have associated adverse health effects with chemical exposure despite inadequate or absent exposure quantification. In addition to the metric used, whether measurements truly represent exposure depends on the sampling strategy, the random collection of measurements, and the similarity between the measured and unmeasured exposure groups. Two environmental monitoring methodologies for occupational formaldehyde exposure were used to assess the influence of metric selection on exposure assessment and, consequently, on the risk assessment process.
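The abstract does not name the metrics compared, so as a hedged illustration of why metric choice matters, the sketch below computes two common occupational exposure metrics from the same set of task-based measurements: an 8-hour time-weighted average (TWA) and the peak concentration. The function names and sample values are hypothetical, not taken from the study.

```python
# Minimal sketch: two exposure metrics computed from the same task-based
# measurements. Values are hypothetical, for illustration only.

def time_weighted_average(concentrations, durations_h, shift_h=8.0):
    """8-h TWA: sum(C_i * t_i) / shift length; unsampled time counts as zero."""
    exposure = sum(c * t for c, t in zip(concentrations, durations_h))
    return exposure / shift_h

def peak_concentration(concentrations):
    """Highest short-term measurement of the shift."""
    return max(concentrations)

# Hypothetical formaldehyde task measurements (ppm) and task durations (h)
conc = [0.15, 0.90, 0.05, 0.40]
dur = [3.0, 1.0, 2.5, 1.5]

print(f"8-h TWA: {time_weighted_average(conc, dur):.3f} ppm")
print(f"Peak:    {peak_concentration(conc):.2f} ppm")
```

The same shift can comply with a TWA limit while exceeding a short-term or ceiling limit, which is one way metric selection propagates into risk characterization.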
Abstract:
INTRODUCTION: Previous cross-sectional studies have shown a high prevalence of chronic disease and disability among the elderly. Given Brazil's rapid aging process and the obvious consequences of the growing number of old people with chronic diseases and associated disabilities for the provision of health services, a study was needed that would overcome the limitations of cross-sectional data and shed some light on the main factors determining whether a person will live longer and free of disabling diseases, the so-called successful aging. The methodology of the first follow-up study of elderly residents in Brazil is presented. METHOD: The profile of the initial cohort is compared with previous cross-sectional data, and an in-depth analysis of nonresponse is carried out in order to assess the validity of future longitudinal analyses. The EPIDOSO (Epidemiologia do Idoso) Study conducted a two-year follow-up of 1,667 elderly people (65+) living in São Paulo. The study consisted of two waves, each comprising household, clinical, and biochemical surveys. RESULTS AND CONCLUSIONS: In general, the initial cohort showed a profile similar to previous cross-sectional samples in São Paulo: a majority of women, mostly widows, living in multigenerational households, and a high prevalence of chronic illnesses, psychiatric disturbances, and physical disabilities. Despite all the difficulties inherent in follow-up studies, there was a fairly low rate of nonresponse to the household survey after two years, which did not materially affect the representativeness of the cohort at the final household assessment, making unbiased longitudinal analysis possible. Concerning the clinical and blood-sampling surveys, respondents tended to be younger and less disabled than nonrespondents, limiting the use of the clinical and laboratory data to longitudinal analyses of a healthier cohort. It is worth mentioning that gender, education, family support, and socioeconomic status were not important determinants of nonresponse, as is often the case.
Abstract:
Bearing in mind the potential adverse health effects of ultrafine particles, it is of paramount importance to perform effective monitoring of nanosized particles in several microenvironments, including ambient air, indoor air, and occupational environments. In fact, effective and accurate monitoring is the first step toward obtaining a set of data that can then be used in subsequent evaluations such as risk assessment and epidemiologic studies, and hence in proposing good working practices, such as containment measures, to reduce occupational exposure. This paper presents a useful methodology for monitoring ultrafine particles/nanoparticles in several microenvironments, using online analyzers as well as sampling systems that allow further characterization of the collected nanoparticles. The methodology was validated in three case studies presented in the paper, which assess the monitoring of nanosized particles in the outdoor atmosphere, during cooking operations, and in a welding workshop.
Abstract:
We are launching a long-term study to characterize biodiversity at different elevations on several Azorean islands. Our aim is to use the Azores as a model archipelago to answer the fundamental question of what generates and maintains the global spatial heterogeneity of diversity on islands, and to understand the dynamics of change over time. An extensive, standardized sampling protocol was applied in most of the remnant forest fragments of five Azorean islands. Fieldwork followed the BRYOLAT methodology for the collection of bryophytes, ferns and other vascular plant species; a modified version of the BALA protocol was used for arthropods. A total of 70 plots (10 m x 10 m) have already been established on five islands (Flores, Pico, São Jorge, Terceira and São Miguel), all following an elevation step of 200 m, resulting in 24 stations examined on Pico, 12 on Terceira, 10 on Flores, 12 on São Miguel and 12 on São Jorge. The first results of the vascular plant inventory include 138 species, comprising taxa from Lycopodiophyta (N=2), Pteridophyta (N=27), Pinophyta (N=2) and Magnoliophyta (N=107). In this contribution we also present the main research questions for the next six years within Horizon 2020.
Abstract:
Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They allow one, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, there are many approaches that allow one to derive probability statements relating to a population proportion, but questions of how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - ought actually to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here addresses methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts, such as the (net) value of sample information, the (expected) value of sample information and the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples. The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
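To make the underlying Bayesian sampling idea concrete, here is a minimal sketch (the textbook Beta-Binomial model, not the paper's own decision-theoretic procedure): after inspecting n items from a seizure and finding k positive, the posterior for the proportion θ under a Beta(a, b) prior is Beta(a+k, b+n−k), from which one can read off the probability that θ exceeds a limiting value. The prior, counts and threshold below are hypothetical.

```python
# Minimal sketch of Bayesian inference on a seizure proportion.
# Prior, counts and threshold are hypothetical illustration values.
from scipy.stats import beta

a, b = 1.0, 1.0          # uniform Beta(1, 1) prior on the proportion theta
n, k = 30, 28            # items inspected / items found to contain the substance
threshold = 0.9          # proportion of interest (e.g., a limiting value)

posterior = beta(a + k, b + n - k)
p_exceeds = posterior.sf(threshold)  # P(theta > threshold | data)

print(f"Posterior mean of theta: {posterior.mean():.3f}")
print(f"P(theta > {threshold}): {p_exceeds:.3f}")
```

The decision-theoretic layer the paper adds sits on top of such a posterior: it attaches losses to decisions about θ or about continuing to sample, and chooses the option with the smallest expected loss.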
Abstract:
Two trends that presently exist in relation to the concept of Paleontology are analyzed, pointing out some of the aspects that influence it negatively. Various reflections are offered, based on examples, on some of the principal points of paleontological method, such as the influence of spot sampling, the meaning of size-frequency distributions, and subjectivity in the identification of fossils. Topics that have a marked repercussion on diverse aspects of Paleontology are discussed.
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required, owing to the low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital, and a hypothetical screen for very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and verification-biased sampling, each with and without constraints on the size of the population to be screened. RESULTS: For sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce the sample size by up to 60% in comparison with simple random sampling. For PSI prevalence levels below 1%, the minimum sample size required was still over 5,000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
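For context, here is a minimal sketch of the simple-random-sampling baseline such designs are compared against: the standard (Buderer-type) sample size needed to estimate sensitivity to a given confidence-interval half-width, which grows rapidly as prevalence falls. This is a textbook formula, not the authors' VBS optimization.

```python
# Minimal sketch: sample size to estimate sensitivity under simple random
# sampling (Buderer-type formula), shown only as the baseline that
# verification-biased designs try to improve on.
import math

def n_for_sensitivity(sens, prevalence, half_width, z=1.96):
    """Subjects needed so the 95% CI for sensitivity has the given half-width."""
    n_cases = z**2 * sens * (1.0 - sens) / half_width**2
    return math.ceil(n_cases / prevalence)

# Hypothetical values echoing the ranges in the abstract
print(n_for_sensitivity(sens=0.5, prevalence=0.02, half_width=0.1))  # 4802
print(n_for_sensitivity(sens=0.5, prevalence=0.20, half_width=0.1))  # 481
```

The division by prevalence is what drives the huge samples for rare PSIs; verification-biased sampling attacks exactly that term by concentrating verification on screen-positives.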
Abstract:
BACKGROUND: Many publications report the prevalence of chronic kidney disease (CKD) in the general population. Comparisons across studies are hampered because CKD prevalence estimates are influenced by study population characteristics and laboratory methods. METHODS: For this systematic review, two researchers independently searched PubMed, MEDLINE and EMBASE to identify all original research articles published between 1 January 2003 and 1 November 2014 that reported the prevalence of CKD in the European adult general population. Data on study methodology and reporting of CKD prevalence results were independently extracted by two researchers. RESULTS: We identified 82 eligible publications and included 48 publications of individual studies in the data extraction. There was considerable variation in population sample selection. The majority of studies did not report the sampling frame used, and response rates ranged from 10% to 87%. With regard to the assessment of kidney function, 67% of studies used a Jaffe assay, whereas 13% used an enzymatic assay for creatinine determination. Isotope dilution mass spectrometry calibration was used in 29%. The CKD-EPI (52%) and MDRD (75%) equations were most often used to estimate glomerular filtration rate (GFR). CKD was defined as estimated GFR (eGFR) <60 mL/min/1.73 m² in 92% of studies. Urinary markers of CKD were assessed in 60% of the studies. CKD prevalence was reported by sex and age strata in 54% and 50% of the studies, respectively. In publications whose primary objective was to report CKD prevalence, 39% reported a 95% confidence interval. CONCLUSIONS: The findings from this systematic review show considerable variation in methods for sampling the general population and assessing kidney function across studies reporting CKD prevalence. These results are used to provide recommendations to help optimize both the design and the reporting of future CKD prevalence studies, which will enhance the comparability of study results.
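As a point of reference for the equations named above, here is a hedged sketch of the 4-variable IDMS-traceable MDRD equation with its commonly published coefficients (the CKD-EPI equation is piecewise and omitted here); this is for illustration, not clinical use.

```python
# Hedged sketch of the 4-variable IDMS-traceable MDRD equation, with the
# commonly published coefficients; illustration only, not clinical use.
def egfr_mdrd(scr_mg_dl, age_years, female, black):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    egfr = 175.0 * scr_mg_dl**-1.154 * age_years**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical subject: 60-year-old woman, creatinine 1.1 mg/dL
print(f"eGFR: {egfr_mdrd(1.1, 60, female=True, black=False):.1f} mL/min/1.73 m^2")
# The reviewed studies mostly defined CKD by the screen eGFR < 60.
```

Because the Jaffe and enzymatic creatinine assays, IDMS calibration, and the choice of equation all shift this input-output chain, prevalence estimates built on eGFR < 60 are not directly comparable across studies, which is the review's central point.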
Abstract:
Cloud cover is conventionally estimated from satellite images as the observed fraction of cloudy pixels. Active instruments such as radar and lidar observe in narrow transects that sample only a small percentage of the area over which the cloud fraction is estimated. As a consequence, the fraction estimate has an associated sampling uncertainty, which usually remains unspecified. This paper extends a Bayesian method of cloud fraction estimation, which also provides an analytical estimate of the sampling error. This method is applied to test the sensitivity of this error to sampling characteristics, such as the number of observed transects and the variability of the underlying cloud field. The dependence of the uncertainty on these characteristics is investigated using synthetic data simulated to have properties closely resembling observations from the spaceborne lidar of the NASA LITE mission. Results suggest that the variance of the cloud fraction estimate is greatest for medium cloud cover and least when conditions are mostly cloudy or clear. However, there is a bias in the estimation, which is greatest around 25% and 75% cloud cover. The sampling uncertainty is also affected by the mean lengths of clouds and of clear intervals; shorter lengths decrease uncertainty, primarily because there are more cloud observations in a transect of a given length. Uncertainty also falls with an increasing number of transects. Therefore, a sampling strategy aimed at minimizing the uncertainty in transect-derived cloud fraction will have to take into account both the cloud and clear-sky length distributions as well as the cloud fraction of the observed field. These conclusions have implications for the design of future satellite missions. This paper describes the first integrated methodology for the analytical assessment of sampling uncertainty in cloud fraction observations from forthcoming spaceborne radar and lidar missions such as NASA's CALIPSO and CloudSat.
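As a simplified illustration of the Bayesian estimate described above (assuming independent pixels; the paper's method refines this by accounting for cloud and clear-interval length distributions): with k cloudy pixels out of n observed along transects and a uniform prior, the posterior for the cloud fraction is Beta(k+1, n−k+1), whose spread gives an analytical sampling error.

```python
# Simplified sketch: Bayesian cloud fraction from transect pixels, assuming
# independent pixels (the paper's method also uses cloud/clear interval
# lengths, which this sketch ignores). Counts are hypothetical.
from scipy.stats import beta

n, k = 400, 180                      # transect pixels observed / cloudy
posterior = beta(k + 1, n - k + 1)   # uniform Beta(1, 1) prior

mean_cf = posterior.mean()           # point estimate of cloud fraction
std_cf = posterior.std()             # analytical sampling uncertainty
lo, hi = posterior.interval(0.95)    # 95% credible interval

print(f"Cloud fraction: {mean_cf:.3f} +/- {std_cf:.3f} "
      f"(95% interval {lo:.3f}-{hi:.3f})")
```

The Beta variance is largest near a fraction of 0.5 and shrinks toward 0 and 1, which matches the qualitative result quoted in the abstract for medium versus mostly cloudy or clear conditions.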
Abstract:
Imputation is commonly used to compensate for item non-response in sample surveys. If we treat the imputed values as if they were true values and then compute variance estimates using standard methods, such as the jackknife, we can seriously underestimate the true variances. We propose a modified jackknife variance estimator that is defined for any without-replacement unequal-probability sampling design in the presence of imputation and a non-negligible sampling fraction. Mean, ratio and random-imputation methods are considered. The practical advantage of the proposed method is its breadth of applicability.
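To illustrate why the naive jackknife fails and how an adjustment helps, below is a minimal sketch for a much simpler case than the paper's (simple random sampling with mean imputation, in the spirit of the Rao-Shao adjusted jackknife): when a respondent is deleted, the imputed values are updated to the respondent mean computed without that unit, so the replicates reflect imputation variability.

```python
# Minimal sketch: naive vs adjusted delete-1 jackknife under mean imputation
# in a simple random sample (not the paper's unequal-probability estimator).
# Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 200
y = rng.normal(50.0, 10.0, size=n)         # true values
resp = rng.random(n) < 0.7                 # ~70% item response

y_imp = y.copy()
y_imp[~resp] = y[resp].mean()              # mean imputation
theta_hat = y_imp.mean()                   # completed-data estimator

def jackknife_var(adjusted):
    reps = []
    for j in range(n):
        keep = np.ones(n, dtype=bool)
        keep[j] = False
        yk = y_imp[keep].copy()
        if adjusted and resp[j]:
            # re-impute with the respondent mean excluding deleted unit j
            yk[~resp[keep]] = y[resp & keep].mean()
        reps.append(yk.mean())
    reps = np.asarray(reps)
    return (n - 1) / n * np.sum((reps - theta_hat) ** 2)

print(f"naive jackknife variance:    {jackknife_var(False):.4f}")
print(f"adjusted jackknife variance: {jackknife_var(True):.4f}")
```

The naive replicates treat the imputed values as fixed truth and so understate the variance; the adjusted version is larger, reflecting the extra uncertainty that imputation introduces.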
Abstract:
Background: Postprandial lipid metabolism in humans has received much attention during the last two decades. Although fasting lipid and lipoprotein parameters reflect body homeostasis to some extent, the transient lipid and lipoprotein accumulation that occurs in the circulation after a fat-containing meal highlights the individual capacity to handle an acute fat input. An exacerbated postprandial accumulation of triglyceride-rich lipoproteins in the circulation has been associated with increased cardiovascular risk. Methods: The large number of studies published in this field raises the question of the methodology used for such postprandial studies, which is reviewed here. Results: Drawing on our experience, the present review reports and discusses the numerous methodological issues involved, to serve as a basis for further work. These aspects include the aims of the postprandial tests, the size and nutrient composition of the test meals and background diets, pre-test conditions, the characteristics of the subjects involved, the timing of sampling, suitable markers of postprandial lipid metabolism, and calculations. Conclusion: In conclusion, we stress the need for standardization of postprandial tests.
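One of the "calculations" such reviews typically cover is the incremental area under the curve (iAUC) of postprandial triglycerides above the fasting baseline. The sketch below computes a trapezoidal iAUC; the sampling times and concentrations are hypothetical, not taken from the review.

```python
# Minimal sketch: incremental area under the curve (iAUC) above the fasting
# baseline, a common summary of the postprandial triglyceride response.
# Times and concentrations are hypothetical illustration values.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])    # hours after the meal
tg = np.array([1.1, 1.5, 2.0, 2.4, 2.1, 1.6, 1.2])   # triglycerides, mmol/L

increments = np.clip(tg - tg[0], 0.0, None)           # ignore dips below baseline
# trapezoidal rule over the (possibly unevenly spaced) sampling times
iauc = float(np.sum(0.5 * (increments[1:] + increments[:-1]) * np.diff(t)))

print(f"iAUC: {iauc:.2f} mmol/L x h")
```

Note that the result depends directly on the sampling schedule and on whether dips below baseline are clipped or netted off, which is exactly why the review argues for standardization of timing and calculations.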
Abstract:
Contrails, and especially their evolution into cirrus-like clouds, are thought to have very important effects on local and global radiation budgets, yet they are generally not well represented in global climate models. The lack of contrail parameterisations is due to the limited availability of in situ contrail measurements, which are difficult to obtain. Here we present a methodology for the successful sampling and interpretation of contrail microphysical and radiative data, using both in situ and remote sensing instrumentation on board the FAAM BAe146 UK research aircraft, as part of the COntrails Spreading Into Cirrus (COSIC) study.
Abstract:
Weeds tend to aggregate in patches within fields, and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties, and we have validated it in a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at scales >50 m. Knowing how the variance was partitioned across the spatial scales, we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
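For readers unfamiliar with the variogram step, here is a minimal sketch of an empirical semivariogram from irregularly spaced samples, γ(h) being the mean of half the squared differences over point pairs binned by separation distance h. It is a generic illustration with simulated data, not the authors' nested REML analysis.

```python
# Minimal sketch: empirical semivariogram from irregularly spaced samples.
# gamma(h) = mean of 0.5 * (z_i - z_j)^2 over point pairs in distance bin h.
# Generic illustration with simulated data, not the paper's REML analysis.
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0, 200, size=(150, 2))                 # sample locations (m)
z = np.sin(xy[:, 0] / 40) + 0.3 * rng.normal(size=150)  # e.g., log weed counts

# all pairwise distances and half squared differences, each pair counted once
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)
d, sq = d[iu], sq[iu]

bins = np.arange(0, 120, 20)                            # lag bins (m)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d >= lo) & (d < hi)
    print(f"lag {lo:3.0f}-{hi:3.0f} m: gamma = {sq[mask].mean():.3f} (n={mask.sum()})")
```

A semivariance that keeps rising out to long lags signals the kind of long-range structure that, in the case study, only emerged at scales beyond 50 m.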
Abstract:
An efficient and robust method to measure vitamin D (25-hydroxyvitamin D3 (25(OH)D3) and 25-hydroxyvitamin D2) in dried blood spots (DBS) has been developed and applied in the pan-European, multi-centre, internet-based, personalised nutrition intervention study Food4Me. The method includes calibration with blood containing endogenous 25(OH)D3, spotted as DBS and corrected for haematocrit content. The methodology was validated following international standards. The performance characteristics did not reach those of the current gold standard, liquid chromatography-MS/MS in plasma, for all parameters, but were found to be very suitable for status-level determination under field conditions. DBS sample quality was very high, and 3778 measurements of 25(OH)D3 were obtained from 1465 participants. The study centre and the season within the study centre were very good predictors of 25(OH)D3 levels (P<0·001 in each case). Seasonal effects were modelled by fitting a sine function with a minimum 25(OH)D3 level on 20 January and a maximum on 21 July. The seasonal amplitude varied from centre to centre; the largest difference between winter and summer levels was found in Germany and the smallest in Poland. The model was cross-validated to determine the consistency of the predictions and the performance of the DBS method. The Pearson correlation between the measured and predicted values was r = 0·65, and the sd of their differences was 21·2 nmol/l; this includes the analytical variation and the within-subject biological variation. Overall, DBS obtained by unsupervised sampling of the participants at home proved a viable methodology for obtaining vitamin D status information in a large nutritional study.
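A minimal sketch of the seasonal model described: with the minimum fixed on 20 January (so the phase is known, and a minimum on day 20 implies a maximum near 21 July), only the mean level and amplitude remain to be fitted, which reduces to ordinary least squares on a cosine regressor. The data below are simulated, not Food4Me measurements.

```python
# Minimal sketch: fit a seasonal sine with its minimum fixed on 20 January
# (day 20 of the year), so only mean level and amplitude are estimated.
# Simulated data; an illustration, not the Food4Me analysis.
import numpy as np

rng = np.random.default_rng(1)
day = rng.integers(1, 366, size=500)                 # sampling day of year
phase = np.cos(2 * np.pi * (day - 20) / 365.25)      # equals 1 at the minimum
true_mean, true_amp = 55.0, 15.0                     # nmol/L (hypothetical)
y = true_mean - true_amp * phase + rng.normal(0, 10, 500)

# ordinary least squares: y = b0 + b1 * phase, with amplitude = -b1
X = np.column_stack([np.ones_like(phase), phase])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"mean level: {b0:.1f} nmol/L, seasonal amplitude: {-b1:.1f} nmol/L")
```

Fitting such a model per centre, as the abstract describes, yields centre-specific amplitudes, which is how the winter-summer difference could be compared between, for example, Germany and Poland.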