908 results for stratified sampling
Abstract:
This paper evaluates whether the Swiss monitoring programme for foreign substances in animal products fulfils basic epidemiological quality requirements, and identifies possible sources of bias in the selection of samples. Sampling was analysed over a four-year period (2002-05). The sampling frame in 37 participating abattoirs covered 51% of all slaughtered pigs, 73% of calves, 68% of beef cattle and 36% of cows. The analysis revealed that some sub-populations, as defined by region of origin, were statistically over-represented while others were under-represented. Although the programme is in accordance with European Union requirements, it contained relevant bias. Patterns of under-sampled regions characterized by differences in management type were identified, which could lead to an underestimate of the number of contaminated animals within the programme. Although the current sampling was stratified and partially risk-based, its efficiency could be improved by adopting a more targeted approach.
Abstract:
Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method, identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P < 0.001) and Actinomyces naeslundii (P < 0.001), analysis by general linear model multivariate regression failed to identify subject or sample-location factors as explanatory for the microbiologic results. A trend of difference in bacterial load by tooth type was found for Prevotella nigrescens (P < 0.01). At a cutoff level of ≥1.0 × 10⁵, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or higher-order polynomial interpolations, which do not fulfil all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to a lack of integral conservation. Re-sampling algorithms using higher-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
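The abstract does not reproduce the Hermitian-curve scheme itself, so the following is an illustrative sketch only: a generic piecewise cubic Hermite interpolant with a single hand-tuned parameter (here called `tension`, an assumed name) that trades smoothness against overshoot, the same trade-off the abstract's single parameter controls. Note that this sketch does not implement the integral-conservation step the paper requires.

```python
import numpy as np

def hermite_resample(x, y, x_new, tension=0.0):
    """Piecewise cubic Hermite interpolation of samples (x, y) at x_new.

    `tension` damps the finite-difference tangents: 0.0 gives smooth
    Catmull-Rom-like curves; 1.0 gives flat tangents, so every output
    value stays between its two bracketing samples (no overshoot).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xq = np.atleast_1d(np.asarray(x_new, dtype=float))
    m = np.gradient(y, x) * (1.0 - tension)  # damped tangents
    out = np.empty_like(xq)
    for k, xk in enumerate(xq):
        i = int(np.clip(np.searchsorted(x, xk) - 1, 0, len(x) - 2))
        h = x[i + 1] - x[i]
        t = (xk - x[i]) / h
        # cubic Hermite basis functions
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out[k] = (h00 * y[i] + h10 * h * m[i]
                  + h01 * y[i + 1] + h11 * h * m[i + 1])
    return out
```

With `tension=1.0` each segment reduces to a convex combination of the bracketing samples, so a step-like profile cannot overshoot; with `tension=0.0` overshoot of the kind the abstract quantifies for polynomial schemes (about 10%-20%) can appear near sharp edges.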
Abstract:
Proteins are linear chain molecules made of amino acids; they become functional only when they fold to their native states. This dissertation aims to model the solvent (environment) effect and to develop and implement enhanced sampling methods that enable a reliable study of the protein folding problem in silico. We have developed an enhanced solvation model based on the solution to the Poisson-Boltzmann equation in order to describe the solvent effect. Following the quantum mechanical Polarizable Continuum Model (PCM), we decomposed the net solvation free energy into three physical terms: polarization, dispersion and cavitation. All the terms were implemented, analyzed and parametrized individually to obtain a high level of accuracy. In order to describe the thermodynamics of proteins, their conformational space needs to be sampled thoroughly. Simulations of proteins are hampered by slow relaxation due to their rugged free-energy landscape, with the barriers between minima being higher than the thermal energy at physiological temperatures. To overcome this problem a number of approaches have been proposed, of which the replica exchange method (REM) is the most popular. In this dissertation we describe a new variant of the canonical replica exchange method in the context of molecular dynamics simulation. The advantage of this new method is its easily tunable, high acceptance rate for replica exchange. We call our method Microcanonical Replica Exchange Molecular Dynamics (MREMD). We describe the theoretical framework, comment on its actual implementation, and present its application to the Trp-cage mini-protein in implicit solvent. We have been able to correctly predict the folding thermodynamics of this protein using our approach.
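The abstract does not spell out the MREMD exchange rule, so for background only, here is the standard Metropolis swap criterion of canonical (temperature) replica exchange, the baseline method the dissertation builds on; the microcanonical variant's actual criterion may differ.

```python
import math
import random

def swap_accepted(E_i, E_j, T_i, T_j, k_B=1.0, rng=random.random):
    """Metropolis criterion for exchanging configurations between two
    replicas with potential energies E_i, E_j at temperatures T_i, T_j.

    Accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]),
    where beta = 1 / (k_B * T).
    """
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    # always accept energetically favourable swaps; otherwise accept
    # with the Boltzmann-weighted probability exp(delta)
    return delta >= 0.0 or rng() < math.exp(delta)
```

When the colder replica holds the higher energy, the swap is always accepted; the tunability the abstract highlights concerns raising this acceptance rate, which in plain canonical REM drops quickly as the temperature gap between neighbouring replicas grows.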
Abstract:
Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm² to 100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied, but the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
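The paper fits five model types with non-linear regression; as a minimal sketch, here is only the power model S = c·A^z, fitted via the common log-log least-squares shortcut (which is not identical to non-linear regression on the original scale, and is not necessarily what the authors used). The example data follow an exact power law and are hypothetical.

```python
import numpy as np

def fit_power_sar(area, richness):
    """Fit the power-law species-area relationship S = c * A**z by
    ordinary least squares on log-transformed data; returns (c, z)."""
    log_a = np.log(np.asarray(area, dtype=float))
    log_s = np.log(np.asarray(richness, dtype=float))
    z, log_c = np.polyfit(log_a, log_s, 1)  # slope = z, intercept = ln c
    return float(np.exp(log_c)), float(z)

# Hypothetical richness counts following S = 3 * A**0.25 exactly,
# over the 1 cm² to 100 m² range used in the study:
areas = [1e-4, 1e-2, 1.0, 100.0]   # plot areas in m²
species = [3 * a**0.25 for a in areas]
c, z = fit_power_sar(areas, species)
```

On noise-free power-law data the fit recovers c and z exactly; with real count data, the log transform down-weights the large-plot residuals, which is one reason model comparisons are usually done with non-linear regression as in the paper.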
Impact of Orthorectification and Spatial Sampling on Maximum NDVI Composite Data in Mountain Regions
Abstract:
Determination of somatic cell count (SCC) is used worldwide in dairy practice to describe the hygienic status of the milk and the udder health of cows. When SCC is tested at the quarter level to detect single quarters with high SCC, for practical reasons mostly foremilk samples taken after prestimulation (i.e. cleaning of the udder) are used. However, SCC usually differs between milk fractions. Therefore, the goal of this study was to investigate the use of foremilk samples for the estimation of total quarter SCC. A total of 378 milkings in 19 dairy cows were performed with a special milking device to drain quarter milk separately. Foremilk samples were taken after udder stimulation and before cluster attachment. SCC was measured in foremilk samples and in total quarter milk. Total quarter milk SCC could not be predicted precisely from foremilk SCC measurements. At relatively high foremilk SCC levels (>300 × 10³ cells/ml), foremilk SCC was higher than that of total quarter milk. Between 50 × 10³ and 300 × 10³ cells/ml, foremilk and total quarter SCC did not differ considerably. Most interestingly, if foremilk SCC was lower than 50 × 10³ cells/ml, total quarter SCC was higher than foremilk SCC. In addition, individual cows showed dramatic variations in foremilk SCC that were not well related to total quarter milk SCC. In conclusion, foremilk samples are useful for detecting high quarter milk SCC to recognize possibly infected quarters, but only if precise cell counts are not required. Foremilk samples can be deceptive if very low cell numbers are to be detected.
Abstract:
We report on previously unknown early archaeological sites in the Bolivian lowlands, demonstrating for the first time early and middle Holocene human presence in western Amazonia. Multidisciplinary research in forest islands situated in seasonally-inundated savannahs has revealed stratified shell middens produced by human foragers as early as 10,000 years ago, making them the oldest archaeological sites in the region. The absence of stone resources and partial burial by recent alluvial sediments has meant that these kinds of deposits have, until now, remained unidentified. We conducted core sampling, archaeological excavations and an interdisciplinary study of the stratigraphy and recovered materials from three shell midden mounds. Based on multiple lines of evidence, including radiocarbon dating, sedimentary proxies (elements, steroids and black carbon), micromorphology and faunal analysis, we demonstrate the anthropogenic origin and antiquity of these sites. In a tropical and geomorphologically active landscape often considered challenging both for early human occupation and for the preservation of hunter-gatherer sites, the newly discovered shell middens provide evidence for early to middle Holocene occupation and illustrate the potential for identifying and interpreting early open-air archaeological sites in western Amazonia. The existence of early hunter-gatherer sites in the Bolivian lowlands sheds new light on the region’s past and offers a new context within which the late Holocene “Earthmovers” of the Llanos de Moxos could have emerged.
Abstract:
Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes.
Abstract:
In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection of drug residues through random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100% depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions for conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
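The stochastic model itself is not given in the abstract, so the following is only a toy illustration of the comparison it describes: a Monte Carlo estimate of positives found per sampling round under random versus risk-based sampling. The `targeting_gain` factor (how much prior information raises prevalence among targeted herds) is a hypothetical parameter, not a value from the paper.

```python
import random

def expected_positives(n_samples, prevalence, targeting_gain,
                       trials=2000, seed=1):
    """Monte Carlo estimate of positives found per round under random
    sampling (population prevalence) versus risk-based sampling, where
    targeting is assumed to multiply the prevalence among sampled herds
    by `targeting_gain` (a hypothetical factor)."""
    rng = random.Random(seed)
    p_random = prevalence
    p_risk = min(1.0, prevalence * targeting_gain)

    def mean_hits(p):
        # average number of positive samples over many simulated rounds
        return sum(sum(rng.random() < p for _ in range(n_samples))
                   for _ in range(trials)) / trials

    return mean_hits(p_random), mean_hits(p_risk)
```

With a 1% prevalence of contaminated herds and an assumed two-fold targeting gain over 100 samples, risk-based sampling finds roughly twice as many positives per round, the kind of efficiency increase (up to 100%, prevalence-dependent) the model reports; a wrong prior assumption shrinks or reverses the gain, matching the sensitivity analysis in the abstract.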
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.