33 results for "sampling methods"


Relevance: 30.00%

Abstract:

BACKGROUND: The objective of the present study was to compare three different sampling and questionnaire administration methods used in the international KIDSCREEN study in terms of participation, response rates, and external validity. METHODS: Children and adolescents aged 8-18 years were surveyed in 13 European countries using either telephone sampling and mail administration, random sampling of school listings followed by classroom or mail administration, or multistage random sampling of communities and households with self-administration of the survey materials at home. Cooperation, completion, and response rates were compared across countries and survey methods. Data on non-respondents were collected in 8 countries. The population fraction (PF, respondents in each sex-age or educational-level category, divided by the population in the same category from Eurostat census data) and population fraction ratio (PFR, ratio of PF) and their corresponding 95% confidence intervals were used to analyze differences by country between the KIDSCREEN samples and a reference Eurostat population. RESULTS: Response rates by country ranged from 18.9% to 91.2%. Response rates were highest in the school-based surveys (69.0%-91.2%). Sample proportions by age and gender were similar to the reference Eurostat population in most countries, although boys and adolescents were slightly underrepresented (PFR <1). Parents in lower educational categories were less likely to participate (PFR <1 in 5 countries). Parents in higher educational categories were overrepresented when the school and household sampling strategies were used (PFR = 1.78-2.97). CONCLUSION: School-based sampling achieved the highest overall response rates but also produced slightly more biased samples than the other methods. The results suggest that the samples were sufficiently representative to provide reference population values for the KIDSCREEN instrument.
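The population-fraction comparison described above can be sketched in a few lines. This is a minimal illustration with hypothetical counts (not KIDSCREEN data), using a log-normal approximation for the PFR confidence interval, analogous to a risk-ratio CI.

```python
import math

def population_fraction(sample_n: int, census_n: int) -> float:
    """PF: respondents in a category divided by the census population there."""
    return sample_n / census_n

def pfr_with_ci(pf_group: float, pf_ref: float, n_group: int, n_ref: int):
    """Ratio of two population fractions with an approximate 95% CI
    (log-normal approximation; an assumption of this sketch)."""
    pfr = pf_group / pf_ref
    se_log = math.sqrt(1 / n_group + 1 / n_ref)
    lo = pfr * math.exp(-1.96 * se_log)
    hi = pfr * math.exp(1.96 * se_log)
    return pfr, (lo, hi)

# Hypothetical example: boys slightly underrepresented (PFR < 1).
pf_boys = population_fraction(480, 100_000)   # 480 boy respondents
pf_girls = population_fraction(520, 100_000)  # 520 girl respondents
pfr, (lo, hi) = pfr_with_ci(pf_boys, pf_girls, 480, 520)
print(f"PFR = {pfr:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

A PFR whose confidence interval excludes 1 would flag the kind of under- or overrepresentation reported in the abstract.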

Relevance: 30.00%

Abstract:

Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P < 0.001) and Actinomyces naeslundii (P < 0.001), analysis by general linear model multivariate regression failed to identify subject or sample location factors as explanatory to microbiologic results. A trend of difference in bacterial load by tooth type was found for Prevotella nigrescens (P < 0.01). At a cutoff level of ≥1.0 × 10^5, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.

Relevance: 30.00%

Abstract:

To evaluate strategies used to select cases and controls and how reported odds ratios are interpreted, the authors examined 150 case-control studies published in leading general medicine, epidemiology, and clinical specialist journals from 2001 to 2007. Most of the studies (125/150; 83%) were based on incident cases; among these, the source population was mostly dynamic (102/125; 82%). A minority (23/125; 18%) sampled from a fixed cohort. Among studies with incident cases, 105 (84%) could interpret the odds ratio as a rate ratio. Fifty-seven (46% of 125) required the source population to be stable for such interpretation, while the remaining 48 (38% of 125) did not need any assumptions because of matching on time or concurrent sampling. Another 17 (14% of 125) studies with incident cases could interpret the odds ratio as a risk ratio, with 16 of them requiring the rare disease assumption for this interpretation. The rare disease assumption was discussed in 4 studies but was not relevant to any of them. No investigators mentioned the need for a stable population. The authors conclude that in current case-control research, a stable exposure distribution is much more frequently needed to interpret odds ratios than the rare disease assumption. At present, investigators conducting case-control studies rarely discuss what their odds ratios estimate.
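The distinction the abstract draws can be made concrete with a hypothetical 2×2 table (illustrative counts, not from the reviewed studies): the same odds ratio estimates different quantities depending on how controls were sampled.

```python
# Hypothetical case-control counts (exposed/unexposed x cases/controls).
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 200, 700

# Odds ratio from the 2x2 table: (a/c) / (b/d) = a*d / (b*c).
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Interpretation depends on the control-sampling design, as the review found:
# - concurrent (density) sampling from a dynamic population -> OR estimates
#   the rate ratio with no rare-disease assumption;
# - sampling from a fixed cohort at baseline -> OR estimates the risk ratio;
# - sampling from non-cases at the end of follow-up -> OR approximates the
#   risk ratio only if the disease is rare.
print(f"OR = {odds_ratio:.2f}")
```

The numeric value is the same in all three designs; only its interpretation, and the assumptions it requires, changes.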

Relevance: 30.00%

Abstract:

INTRODUCTION: The simple bedside method for sampling undiluted distal pulmonary edema fluid through a normal suction catheter (s-Cath) has been experimentally and clinically validated. However, there are no data comparing non-bronchoscopic bronchoalveolar lavage (mini-BAL) and s-Cath for assessing lung inflammation in acute hypoxaemic respiratory failure. We designed a prospective study in two groups of patients, those with acute lung injury (ALI)/acute respiratory distress syndrome (ARDS) and those with acute cardiogenic lung edema (ACLE), to investigate the clinical feasibility of these techniques and to evaluate inflammation in both groups using undiluted sampling obtained by s-Cath. To test the interchangeability of the two methods in the same patient for studying the inflammation response, we further compared mini-BAL and s-Cath for agreement of protein concentration and percentage of polymorphonuclear cells (PMNs). METHODS: Mini-BAL and s-Cath sampling was performed in 30 mechanically ventilated patients, 21 with ALI/ARDS and 9 with ACLE. To analyse agreement between the two sampling techniques, we considered only simultaneously collected mini-BAL and s-Cath paired samples. The protein concentration and PMN count comparisons were performed on undiluted samples. Bland-Altman plots were used for assessing the mean bias and the limits of agreement between the two sampling techniques; comparison between groups was performed by using the non-parametric Mann-Whitney U test; continuous variables were compared by using the Student t-test, Wilcoxon signed rank test, analysis of variance or Student-Newman-Keuls test; and categorical variables were compared by using chi-square analysis or Fisher exact test. RESULTS: Using protein content and PMN percentage as parameters, we identified substantial variations between the two sampling techniques.
When the protein concentration in the lung was high, the s-Cath was a more sensitive method; by contrast, as inflammation increased, both methods provided similar estimates of neutrophil percentages in the lung. The patients with ACLE showed an increased PMN count, suggesting that hydrostatic lung edema can be associated with a concomitant inflammatory process. CONCLUSIONS: There are significant differences between the s-Cath and mini-BAL sampling techniques, indicating that these procedures cannot be used interchangeably for studying the lung inflammatory response in patients with acute hypoxaemic lung injury.
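The Bland-Altman quantities used above (mean bias and 95% limits of agreement between paired measurements) reduce to a short computation. The paired values below are hypothetical, for illustration only.

```python
import statistics

# Hypothetical paired protein concentrations (same patient, both techniques).
mini_bal = [1.2, 2.5, 3.1, 4.0, 2.2, 3.8]
s_cath   = [1.5, 2.9, 3.0, 5.1, 2.6, 4.4]

# Bland-Altman: work on the per-pair differences.
diffs = [a - b for a, b in zip(s_cath, mini_bal)]
bias = statistics.mean(diffs)                  # mean bias
sd = statistics.stdev(diffs)                   # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```

Wide limits of agreement relative to the clinically relevant range are what justify the abstract's conclusion that the two techniques are not interchangeable.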

Relevance: 30.00%

Abstract:

BACKGROUND: Enquiries among patients on the one hand and experimental and observational studies on the other suggest an influence of stress on inflammatory bowel diseases (IBD). However, since this influence remains hypothetical, further research is essential. We aimed to devise recommendations for future investigations in IBD by means of scrutinizing previously applied methodology. METHODS: We critically reviewed prospective clinical studies on the effect of psychological stress on IBD. Eligible studies were searched by means of the PubMed electronic library and through checking the bibliographies of located sources. RESULTS: We identified 20 publications resulting from 18 different studies. Sample sizes ranged between 10 and 155 participants. Study designs in terms of patient assessment, control variables, and applied psychometric instruments varied substantially across studies. Methodological strengths and weaknesses were irregularly dispersed. Thirteen studies reported significant relationships between stress and adverse outcomes. CONCLUSIONS: Study designs, including accuracy of outcome assessment and repeated sampling of outcomes (i.e. symptoms, clinical, and endoscopic), depended upon conditions like sample size, participants' compliance, and available resources. Meeting additional criteria of sound methodology, like taking into account covariates of the disease and its course, is strongly recommended to possibly improve study designs in future IBD research.

Relevance: 30.00%

Abstract:

High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical as it determines directly the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with four steps: (i) sampling at irregularly spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layer, historical flood and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics, and ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential in paleoclimatological reconstructions based on lake sediments.
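The CRS model named above has a compact closed form: if A(0) is the whole-core inventory of unsupported 210Pb and A(x) the inventory below depth x, the age of that horizon is t(x) = (1/λ) · ln(A(0)/A(x)). A minimal sketch, with illustrative inventory values rather than data from the Laguna Negra or El Ocho cores:

```python
import math

PB210_HALF_LIFE = 22.3                      # years
LAMBDA = math.log(2) / PB210_HALF_LIFE      # 210Pb decay constant, 1/yr

def crs_age(total_inventory: float, inventory_below_depth: float) -> float:
    """Age of a depth horizon (years before coring) under the CRS model:
    t(x) = (1/lambda) * ln(A(0) / A(x))."""
    return math.log(total_inventory / inventory_below_depth) / LAMBDA

total = 100.0                               # Bq/m^2, illustrative inventory
for below in (80.0, 40.0, 10.0):
    print(f"A(x) = {below:5.1f} -> age = {crs_age(total, below):6.1f} yr")
```

Note the sanity check built into the formula: when half the inventory lies below a horizon, its CRS age is exactly one 210Pb half-life, and ages (and their error bars) grow rapidly as A(x) approaches zero at the bottom of the profile, which is why independent markers such as the 137Cs AD 1964 peak are used as constraints.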

Relevance: 30.00%

Abstract:

In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.

Relevance: 30.00%

Abstract:

We present a technique for online compression of ECG signals using the Golomb-Rice encoding algorithm. This is facilitated by a novel time-encoding asynchronous analog-to-digital converter targeted at low-power, implantable, long-term biomedical sensing applications. In contrast to capturing the actual signal (voltage) values, the asynchronous time encoder captures and encodes the times at which predefined changes occur in the signal, thereby minimizing the sensor's energy use and the number of bits stored, since unnecessary samples are never captured. The time encoder transforms the ECG signal into pure time information with a geometric distribution, so the Golomb-Rice encoding algorithm can be used to further compress the data. An overall online compression ratio of about 6 is achievable without the computations usually associated with most compression methods.
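Golomb-Rice coding itself is simple enough to sketch: with a power-of-two parameter M = 2^k, each non-negative integer n is written as a unary quotient (n // M) followed by k binary remainder bits, which is near-optimal for geometrically distributed values such as the inter-event times the time encoder produces. This is a generic textbook implementation, not the paper's hardware encoder.

```python
def golomb_rice_encode(n: int, k: int) -> str:
    """Golomb-Rice codeword for n >= 0 with Rice parameter k (k >= 1)."""
    m = 1 << k
    quotient, remainder = n // m, n % m
    # Unary quotient ("1"*q then "0"), then k-bit binary remainder.
    return "1" * quotient + "0" + format(remainder, f"0{k}b")

def golomb_rice_decode(bits: str, k: int) -> int:
    """Inverse of the encoder, for a single codeword."""
    quotient = bits.index("0")              # length of the unary run
    remainder = int(bits[quotient + 1:quotient + 1 + k], 2)
    return quotient * (1 << k) + remainder

# Small intervals (the common case for a geometric source) get short codes.
for n in (0, 3, 9):
    cw = golomb_rice_encode(n, k=2)
    assert golomb_rice_decode(cw, k=2) == n
    print(n, "->", cw)
```

Because code length grows linearly with n, the scheme needs no tables or arithmetic beyond shifts and comparisons, which is what makes it attractive for the low-power setting the abstract describes.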

Relevance: 30.00%

Abstract:

Many techniques based on data drawn by the ranked set sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of the normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
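For readers unfamiliar with the scheme, one RSS cycle under the perfect-ranking assumption being tested can be sketched as follows: for set size m, draw m sets of m units, rank each set, and measure only the i-th order statistic of the i-th set. This generic sketch uses true values for ranking (i.e., perfect ranking by construction).

```python
import random

def rss_cycle(population_draw, m: int):
    """One ranked set sampling cycle of m measured units, assuming
    ranking is perfect (units are ranked by their true values)."""
    sample = []
    for i in range(m):
        candidates = sorted(population_draw() for _ in range(m))
        sample.append(candidates[i])   # measure only the i-th order statistic
    return sample

random.seed(1)
cycle = rss_cycle(lambda: random.gauss(0.0, 1.0), m=3)
print(cycle)  # 3 measured values, one per rank
```

In practice the ranking is done by eye or by a cheap auxiliary variable, so it can be imperfect; that is exactly the deviation the proposed test is designed to detect.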

Relevance: 30.00%

Abstract:

BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (f_acq, 1/f_dur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VE^acq.dur = (1 − POR) × 100) best estimates the true VE (VEacq.dur = (1 − f_acq × f_dur) × 100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VE^acq.dur first approximates VEacq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses it takes longer after the last dose until VE^acq.dur approximates VEacq.dur.
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
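The POR-based estimator discussed above is a one-line computation once cross-sectional carriage counts are in hand. The counts below are hypothetical, for illustration only, not trial data.

```python
# Hypothetical cross-sectional carriage counts from a single swab round.
vacc_carriers, vacc_total = 60, 500
ctrl_carriers, ctrl_total = 120, 500

# Prevalence odds ratio of carriage, vaccinees vs. controls.
odds_vacc = vacc_carriers / (vacc_total - vacc_carriers)
odds_ctrl = ctrl_carriers / (ctrl_total - ctrl_carriers)
por = odds_vacc / odds_ctrl

# Combined efficacy against acquisition and duration:
# VE^acq.dur = (1 - POR) * 100, estimating (1 - f_acq * f_dur) * 100,
# where f_acq scales the acquisition rate and 1/f_dur the clearance rate.
ve_acq_dur = (1 - por) * 100
print(f"POR = {por:.3f}, VE^acq.dur = {ve_acq_dur:.1f}%")
```

The modelling result is that this estimate only approximates the true VE once enough time has elapsed after vaccination, on the order of several mean carriage durations, which is why the timing of the swab round matters.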

Relevance: 30.00%

Abstract:

We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for an undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].

Relevance: 30.00%

Abstract:

Three methods for distortion-free enhancement of electro-optic sampling measurements of terahertz signals are tested. In the first part of this two-paper series [J. Opt. Soc. Am. B 31, 904–910 (2014)], the theoretical framework for describing the signal enhancement was presented and discussed. As the applied optical bias is decreased, individual signal traces become enhanced but distorted. Here we experimentally show that nonlinear signal components that distort the terahertz electric field measurement can be removed by subtracting traces recorded with opposite optical bias values. In all three methods tested, we observe up to an order of magnitude increase in distortion-free signal enhancement, in agreement with the theory, making possible measurements of small terahertz-induced transient birefringence signals with increased signal-to-noise ratio.
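The opposite-bias subtraction can be illustrated with a toy scalar model (not the papers' full Jones matrix treatment): near the zero transmission point of crossed polarizers the detected intensity goes roughly as (b + s)^2 for optical bias b and THz-induced signal s, so each trace contains a distorting s^2 term, and subtracting traces at ±b leaves only the linear 4·b·s term.

```python
import math

def detected_intensity(bias: float, signal: float) -> float:
    """Toy crossed-polarizer response near zero transmission (assumed model)."""
    return (bias + signal) ** 2           # enhanced but distorted trace

bias = 0.05
trace = [0.02 * math.sin(2 * math.pi * t / 20) for t in range(40)]

plus  = [detected_intensity(+bias, s) for s in trace]   # trace at +bias
minus = [detected_intensity(-bias, s) for s in trace]   # trace at -bias
# (b+s)^2 - (-b+s)^2 = 4*b*s: the s^2 distortion cancels on subtraction.
recovered = [(p - m) / (4 * bias) for p, m in zip(plus, minus)]

max_err = max(abs(r - s) for r, s in zip(recovered, trace))
print(f"max reconstruction error: {max_err:.2e}")
```

In this idealized model the cancellation is exact; in the experiment, residual distortion and noise determine how small the bias, and hence how large the enhancement, can usefully be made.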

Relevance: 30.00%

Abstract:

Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced at values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
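A minimal EIF simulation shows the sampling primitive the theory is built on: each spike yields one sample, its analog value being the preceding ISI. Parameter values here are generic textbook choices, not those of the paper.

```python
import math, random

def eif_isis(i_input: float, t_max_ms: float = 2000.0) -> list:
    """Euler simulation of a noisy EIF neuron; returns interspike intervals."""
    dt, tau, v_rest = 0.1, 20.0, -65.0      # ms, ms, mV
    delta_t, v_thresh = 2.0, -50.0          # slope factor and threshold, mV
    v_spike, v_reset = 0.0, -65.0           # spike cutoff and reset, mV
    sigma = 2.0                             # noise amplitude, mV
    random.seed(0)
    v, t, last_spike, isis = v_rest, 0.0, 0.0, []
    while t < t_max_ms:
        # Leak plus the exponential sodium current that boosts the EIF
        # near threshold, plus the external input.
        dv = (-(v - v_rest) + delta_t * math.exp((v - v_thresh) / delta_t)
              + i_input) / tau
        v += dv * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:                    # spike: record one ISI sample
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    return isis

isis = eif_isis(i_input=16.0)
print(f"{len(isis)} spikes, mean ISI = {sum(isis) / len(isis):.1f} ms")
```

In the theory, the empirical distribution of these ISIs approximates a target distribution set by the input current, so the spike train itself can feed sampling-based algorithms such as MCMC.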

Relevance: 30.00%

Abstract:

BACKGROUND The objective of the study was to evaluate the implications of different classifications of rheumatic heart disease on estimated prevalence, and to systematically assess the importance of incidental findings from echocardiographic screening among schoolchildren in Peru. METHODS We performed a cluster randomized observational survey using portable echocardiography among schoolchildren aged 5 to 16 years from randomly selected public and private schools in Arequipa, Peru. Rheumatic heart disease was defined according to the modified World Health Organization (WHO) criteria and the World Heart Federation (WHF) criteria. FINDINGS Among 1395 eligible students from 40 classes and 20 schools, 1023 (73%) participated in the present survey. The median age of the children was 11 years (interquartile range [IQR] 8-13 years) and 50% were girls. Prevalence of possible, probable and definite rheumatic heart disease according to the modified WHO criteria amounted to 19.7/1000 children and ranged from 10.2/1000 among children 5 to 8 years of age to 39.8/1000 among children 13 to 16 years of age; the prevalence of borderline/definite rheumatic heart disease according to the WHF criteria was 3.9/1000 children. 21 children (2.1%) were found to have congenital heart disease, 8 of whom were referred for percutaneous or surgical intervention. CONCLUSIONS Prevalence of RHD in Peru was considerably lower than in endemic regions of sub-Saharan Africa, southeast Asia, and Oceania, and was paralleled by a comparable number of undetected cases of congenital heart disease. Strategies to address collateral findings from echocardiographic screening are necessary in the setup of active surveillance programs for RHD. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT02353663.

Relevance: 30.00%

Abstract:

During acts of physical aggression, offenders frequently come into contact with clothes of the victim, thereby leaving traces of DNA-bearing biological material on the garments. Since tape-lifting and swabbing, the currently established methods for non-destructive trace DNA sampling from clothing, both have shortcomings in collection efficiency and handling, we devised a new collection method for these challenging samples. Testing two readily available electrostatic devices for their potential to sample biological material from garments made of different fabrics, we found one of them, the electrostatic dust print lifter (DPL), to perform comparably to well-established sampling with wet cotton swabs. In simulated aggression scenarios, we had the same success rate for the establishment of single aggressor profiles, suitable for database submission, with both the DPL and wet swabbing. However, we lost a substantial amount of information with electrostatic sampling, since almost no mixed aggressor-victim profiles suitable for database entry could be established, compared to conventional swabbing. This study serves as a proof of principle for electrostatic DNA sampling from items of clothing. The technique still requires optimization before it might be used in real casework, but we are confident that in the future it could be an efficient and convenient addition to the toolbox of forensic practitioners.