957 results for root sampling methods

Relevance: 30.00%

BACKGROUND: Enquiries among patients on the one hand and experimental and observational studies on the other suggest an influence of stress on inflammatory bowel diseases (IBD). However, since this influence remains hypothetical, further research is essential. We aimed to devise recommendations for future investigations in IBD by scrutinizing previously applied methodology. METHODS: We critically reviewed prospective clinical studies on the effect of psychological stress on IBD. Eligible studies were identified through the PubMed electronic library and by checking the bibliographies of located sources. RESULTS: We identified 20 publications resulting from 18 different studies. Sample sizes ranged between 10 and 155 participants. Study designs in terms of patient assessment, control variables, and applied psychometric instruments varied substantially across studies. Methodological strengths and weaknesses were irregularly dispersed. Thirteen studies reported significant relationships between stress and adverse outcomes. CONCLUSIONS: Study designs, including accuracy of outcome assessment and repeated sampling of outcomes (i.e., symptomatic, clinical, and endoscopic), depended upon conditions such as sample size, participants' compliance, and available resources. Meeting additional criteria of sound methodology, such as taking into account covariates of the disease and its course, is strongly recommended to improve study designs in future IBD research.

BACKGROUND: Elevated plasma fibrinogen levels have prospectively been associated with an increased risk of coronary artery disease in different populations. Plasma fibrinogen is a measure of systemic inflammation crucially involved in atherosclerosis. The vagus nerve curtails inflammation via a cholinergic anti-inflammatory pathway. We hypothesized that lower vagal control of the heart relates to higher plasma fibrinogen levels. METHODS: Study participants were 559 employees (age 17-63 years; 89% men) of an airplane manufacturing plant in southern Germany. All subjects underwent medical examination, blood sampling, and 24-hour ambulatory heart rate recording while kept on their work routine. The root mean square of successive differences in RR intervals during the night period (nighttime RMSSD) was computed as the heart rate variability index of vagal function. RESULTS: After controlling for demographic, lifestyle, and medical factors, nighttime RMSSD explained 1.7% (P = 0.001), 0.8% (P = 0.033), and 7.8% (P = 0.007) of the variance in fibrinogen levels in all subjects, men, and women, respectively. Nighttime RMSSD and fibrinogen levels were more strongly correlated in women than in men. In all workers, men, and women, respectively, there was a mean +/- SEM increase of 0.41 +/- 0.13 mg/dL, 0.28 +/- 0.13 mg/dL, and 1.16 +/- 0.41 mg/dL fibrinogen for each millisecond decrease in nighttime RMSSD. CONCLUSIONS: Reduced vagal outflow to the heart correlated with elevated plasma fibrinogen levels independent of established cardiovascular risk factors. This relationship appeared stronger in women than in men. Such an autonomic mechanism might contribute to the atherosclerotic process and its thrombotic complications.
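
The RMSSD index used in this study has a simple definition; as a minimal illustration (a generic sketch, not the study's analysis code), it can be computed from a series of RR intervals as follows:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD) between adjacent
    RR intervals, a standard time-domain index of vagal heart rate control."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR series in milliseconds; the study used the night period
# of 24-hour ambulatory recordings.
rr = [812, 830, 795, 805, 840, 825]
night_rmssd = rmssd(rr)
```

Per the results above, each 1 ms decrease in this index corresponded to a mean fibrinogen increase of 0.41 mg/dL across all workers.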

High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based on the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with four steps: (i) sampling at irregularly spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964, and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly little known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
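
Of the numerical models compared, the CRS model has a compact closed form: the age at depth z is t(z) = (1/λ) ln(A0/A(z)), where A0 is the total excess 210Pb inventory and A(z) the inventory below depth z. The sketch below illustrates the standard CRS equations (an illustration only, not the authors' code):

```python
import math

PB210_HALFLIFE_YR = 22.3                      # half-life of 210Pb in years
LAMBDA = math.log(2) / PB210_HALFLIFE_YR      # decay constant (1/yr)

def crs_ages(excess_pb210, layer_masses):
    """Constant-rate-of-supply (CRS) ages at the base of each sediment layer.

    excess_pb210 : unsupported 210Pb activity per unit dry mass per layer,
                   listed top to bottom (supported 226Ra already subtracted).
    layer_masses : dry mass per unit area of each layer (same order).
    Returns ages in years before coring: t(z) = ln(A0 / A(z)) / LAMBDA.
    """
    inventories = [a * m for a, m in zip(excess_pb210, layer_masses)]
    total = sum(inventories)
    ages, below = [], total
    for inv in inventories:
        below -= inv  # inventory remaining below the base of this layer
        ages.append(math.log(total / below) / LAMBDA if below > 0 else float("inf"))
    return ages
```

The bottom layer, where the remaining inventory approaches zero, is where age errors blow up — which is why the abstract stresses constraining the models with independent chronomarkers such as the 137Cs AD 1964 peak.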

In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.

Purpose The sedimentation sign (SedSign) has been shown to discriminate well between selected patients with and without lumbar spinal stenosis (LSS). The purpose of this study was to compare the pressure values associated with LSS versus non-LSS and to discuss whether a positive SedSign may be related to increased epidural pressure at the level of the stenosis. Methods We measured the intraoperative epidural pressure in five patients without LSS and a negative SedSign, and in five patients with LSS and a positive SedSign, using a Codman catheter in the prone position under radioscopy. Results Patients with a negative SedSign had a median epidural pressure of 9 mmHg independent of the measurement location. Breath- and pulse-synchronous waves accounted for 1–3 mmHg. In patients with monosegmental LSS and a positive SedSign, the epidural pressure above and below the stenosis was similar (median 8–9 mmHg). At the level of the stenosis the median epidural pressure was 22 mmHg. A breath- and pulse-synchronous wave was present cranial to the stenosis, but absent below. These findings were independent of the cross-sectional area of the spinal canal at the level of the stenosis. Conclusions Patients with LSS have an increased epidural pressure at the level of the stenosis and altered pressure wave characteristics below. We argue that the absence of sedimentation of lumbar nerve roots to the dorsal part of the dural sac in the supine position may be due to tethering of affected nerve roots at the level of the stenosis.

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests. The more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized at less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
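
As a point of reference for the TDT-based tests discussed above, the classic TDT reduces to a McNemar-type statistic on allele transmissions from heterozygous parents (a textbook sketch, not the thesis's own simulation code):

```python
def tdt_statistic(transmitted, untransmitted):
    """Transmission Disequilibrium Test statistic.

    transmitted / untransmitted: counts of how often heterozygous parents
    passed (or did not pass) a given marker allele to affected offspring.
    Under no linkage, the statistic is chi-square with 1 degree of freedom.
    Because each parent serves as its own control, the test is robust to
    population admixture -- consistent with the error-rate results above.
    """
    b, c = transmitted, untransmitted
    return (b - c) ** 2 / (b + c)

# 60 transmissions vs 40 non-transmissions: statistic 4.0, exceeding the
# 5% chi-square critical value of 3.84.
chi2 = tdt_statistic(60, 40)
```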

We present a technique for online compression of ECG signals using the Golomb-Rice encoding algorithm. This is facilitated by a novel time-encoding asynchronous analog-to-digital converter targeted at low-power, implantable, long-term biomedical sensing applications. In contrast to capturing the actual signal (voltage) values, the asynchronous time encoder captures and encodes the times at which predefined changes occur in the signal, thereby minimizing the sensor's energy use and the number of bits stored, since unnecessary samples are never captured. The time encoder transforms the ECG signal into pure time information with a geometric distribution, so the Golomb-Rice encoding algorithm can be used to further compress the data. An overall online compression ratio of about 6:1 is achievable without the computations usually associated with most compression methods.
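
Golomb-Rice coding suits geometrically distributed data because it spends few bits on small, frequent values: each value is split into a unary-coded quotient and a fixed-width remainder. A minimal encoder sketch (illustrative only, not the paper's implementation):

```python
def rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with parameter k (M = 2**k):
    a unary quotient (q ones plus a terminating zero) followed by the
    k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Encode a run of hypothetical inter-event time intervals with k = 2
intervals = [3, 1, 6, 2]
bitstream = "".join(rice_encode(v, 2) for v in intervals)
```

Small intervals cost only k + 1 bits, so when the time encoder's output is indeed geometric, the expected code length stays close to the source entropy.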

Background and aims Fine root decomposition contributes significantly to element cycling in terrestrial ecosystems. However, studies on root decomposition rates and on the factors that potentially influence them are fewer than those on leaf litter decomposition. To study the effects of region and land use intensity on fine root decomposition, we established a large-scale study in three German regions with different climate regimes and soil properties. Methods In 150 forest and 150 grassland sites we deployed litterbags (100 μm mesh size) with standardized litter consisting of fine roots from European beech in forests and from a lowland mesophilous hay meadow in grasslands. In the central study region, we compared decomposition rates of this standardized litter with root litter collected on-site to separate the effect of litter quality from environmental factors. Results Standardized herbaceous roots in grassland soils decomposed on average significantly faster (24 ± 6 % mass loss after 12 months, mean ± SD) than beech roots in forest soils (12 ± 4 %; p < 0.001). Fine root decomposition varied among the three study regions. Land use intensity, in particular N addition, decreased fine root decomposition in grasslands. The initial lignin:N ratio explained 15 % of the variance in grasslands and 11 % in forests. Soil moisture, soil temperature, and C:N ratios of soils together explained 34 % of the variance in fine root mass loss in grasslands, and 24 % in forests. Conclusions Grasslands, which have higher fine root biomass and root turnover than forests, also have higher rates of root decomposition. Our results further show that at the regional scale fine root decomposition is influenced by environmental variables such as soil moisture, soil temperature, and soil nutrient content. Additional variation is explained by root litter quality.
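
Assuming the common single-exponential litter decay model m_t = m_0·e^(−kt) (an assumption of this sketch; the study reports mass loss directly, not k), the 12-month losses above translate into annual decomposition constants as follows:

```python
import math

def decay_constant(mass_loss_fraction, years=1.0):
    """Annual decomposition constant k from litterbag mass loss, assuming
    the standard single-exponential model m_t = m_0 * exp(-k * t), so that
    k = -ln(1 - loss) / t."""
    return -math.log(1.0 - mass_loss_fraction) / years

k_grass = decay_constant(0.24)   # herbaceous roots in grassland soils: 24 % loss
k_forest = decay_constant(0.12)  # beech roots in forest soils: 12 % loss
```

This gives roughly k ≈ 0.27 yr⁻¹ for grasslands versus k ≈ 0.13 yr⁻¹ for forests, restating the roughly twofold difference in decomposition rate reported above.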

Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
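
For readers unfamiliar with the scheme, the sketch below simulates a balanced ranked set sample under the perfect-ranking assumption that the proposed test is designed to check (an illustration, not the article's simulation code):

```python
import random

def ranked_set_sample(draw, set_size, cycles):
    """Balanced ranked set sample under perfect ranking.

    In each cycle, for each rank r = 1..set_size, a fresh set of `set_size`
    units is drawn and ranked (here by true value, i.e. perfect ranking);
    only the r-th order statistic is then measured. Judgement-ranking
    errors would perturb the sort below.
    """
    sample = []
    for _ in range(cycles):
        for r in range(1, set_size + 1):
            units = sorted(draw() for _ in range(set_size))
            sample.append(units[r - 1])  # measure only the r-th ranked unit
    return sample

# A balanced RSS of 3 ranks x 10 cycles = 30 measured units from N(0, 1)
rss = ranked_set_sample(lambda: random.gauss(0.0, 1.0), set_size=3, cycles=10)
```

Tests for perfect ranking ask whether the measured values in rank stratum r really behave like r-th order statistics; imperfect judgement ranking mixes the strata.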

OBJECTIVES To assess the diagnostic value of panoramic views (2D) of patients with impacted maxillary canines by a group of trained orthodontists and oral surgeons, and to quantify the subjective need and reasons for further three-dimensional (3D) imaging. MATERIALS AND METHODS The study comprised 60 patients with panoramic radiographs (2D) and cone beam computed tomography (CBCT) scans (3D), and a total of 72 impacted canines. Data from a standardized questionnaire were compared within (intragroup) and between (intergroup) a group of orthodontists and oral surgeons to assess possible correlations and differences. Furthermore, the questionnaire data were compared with the findings from the CBCT scans to estimate the correlation within and between the two specialties. Finally, the need and reasons for further 3D imaging were analysed for both groups. RESULTS When comparing questionnaire data with the analysis of the respective CBCT scans, orthodontists showed probability (Pr) values ranging from 0.443 to 0.943. Oral surgeons exhibited Pr values from 0.191 to 0.946. Statistically significant differences were found for the labiopalatal location of the impacted maxillary canine (P = 0.04), indicating a higher correlation in the orthodontist group. The most frequent reason mentioned for the need of further 3D analysis was the labiopalatal location of the impacted canines. Oral surgeons were more in favour of performing further 3D imaging (P = 0.04). CONCLUSIONS Orthodontists were more likely to diagnose the exact labiopalatal position of impacted maxillary canines when using panoramic views only. Generally, oral surgeons more often indicated the need for further 3D imaging.

BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (f_acq, 1/f_dur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VE^acq.dur = (1 − POR) × 100) best estimates the true VE (VEacq.dur = (1 − f_acq × f_dur) × 100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VE^acq.dur first approximates VEacq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses, it takes longer after the last dose until VE^acq.dur approximates VEacq.dur.
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
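
The estimators compared in the study have simple closed forms. A sketch with made-up counts (illustrative only, not the authors' transmission-model code):

```python
def prevalence_odds_ratio(vacc_carriers, vacc_n, ctrl_carriers, ctrl_n):
    """Prevalence odds ratio (POR) from cross-sectional carriage counts
    in the vaccine and control arms of a trial."""
    odds_vacc = vacc_carriers / (vacc_n - vacc_carriers)
    odds_ctrl = ctrl_carriers / (ctrl_n - ctrl_carriers)
    return odds_vacc / odds_ctrl

def ve_estimate(por):
    """Estimated combined efficacy: VE^acq.dur = (1 - POR) x 100."""
    return (1.0 - por) * 100.0

def ve_true(f_acq, f_dur):
    """True combined efficacy of a leaky vaccine whose effects on
    acquisition and duration are f_acq and f_dur:
    VEacq.dur = (1 - f_acq * f_dur) x 100."""
    return (1.0 - f_acq * f_dur) * 100.0

# Hypothetical counts: 30/200 vaccinees vs 60/200 controls carrying
por = prevalence_odds_ratio(30, 200, 60, 200)
```

The study's point is that how well ve_estimate(por) approximates ve_true(f_acq, f_dur) depends on when after vaccination the cross-sectional sample is taken.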

INTRODUCTION Recent meta-analyses of the outcome of apical surgery using modern techniques, including microsurgical principles and high-power magnification, have yielded higher rates of healing. However, this information is mainly based on 1- to 2-year follow-up data. The present prospective study was designed to re-examine a large sample of teeth treated with apical surgery after 5 years. METHODS Patients were recalled 5 years after apical surgery, and treated teeth were classified as healed or not healed based on clinical and radiographic examination, the latter performed independently by 3 observers. Two different methods of root-end preparation and filling (the primary study parameters) were compared (mineral trioxide aggregate [MTA] vs adhesive resin composite [COMP]) without randomization. RESULTS A total of 271 patients and teeth from a 1-year follow-up sample of 339 could be re-examined after 5 years (dropout rate = 20.1%). The overall rate of healed cases was 84.5%, with a significant difference (P = .0003) between MTA (92.5%) and COMP (76.6%). The evaluation of secondary study parameters yielded no significant difference in healing outcome across subcategories (ie, sex, age, type of tooth treated, post/screw, type of surgery). CONCLUSIONS The results from this prospective nonrandomized clinical study with a 5-year follow-up of 271 teeth indicate that MTA exhibited a higher healing rate than COMP in the longitudinal prognosis of root-end sealing.

OBJECTIVE To investigate the lethal activity of photoactivated disinfection (PAD) on Enterococcus faecalis (ATCC 29212) and mixed populations of aerobic or anaerobic bacteria in infected root canals using a diode laser after the application of a photosensitizer (PS). MATERIALS AND METHODS First, the bactericidal activity of a low-power diode laser (200 mW) against E. faecalis ATCC 29212 pre-treated with a PS (toluidine blue) for 2 min was examined after different irradiation times (30 s, 60 s and 90 s). The bactericidal activity in the presence of human serum or human serum albumin (HSA) was also examined. Second, root canals were infected with E. faecalis or with mixed aerobic or anaerobic microbial populations for 3 days and then irrigated with 1.5% sodium hypochlorite and exposed to PAD for 60 s. RESULTS Photosensitization followed by laser irradiation for 60 s was sufficient to kill E. faecalis. Bacteria suspended in human serum (25% v/v) were totally eradicated after 30 s of irradiation. The addition of HSA (25 mg/ml or 50 mg/ml) to bacterial suspensions increased the antimicrobial efficacy of PAD at an irradiation time of 30 s, but not at longer irradiation times. The bactericidal effect of sodium hypochlorite was only enhanced by PAD during the early stages of treatment. PAD did not enhance the activity of sodium hypochlorite against a mixture of anaerobic bacteria. CONCLUSIONS The bactericidal activity of PAD appears to be enhanced by serum proteins in vitro, but is limited to bacteria present within the root canal.

We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for an undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
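
The subtraction scheme can be illustrated with a toy quadratic detection model (an illustration of the principle only, not the paper's Jones-matrix treatment): near the crossed-polarizer null the detected intensity varies as (θ + s)², so traces recorded at opposite biases ±θ differ only in their cross term.

```python
def detected_intensity(theta, s):
    """Toy model: near the zero-transmission point the detected intensity
    goes as (theta + s)**2 = theta**2 + 2*theta*s + s**2, i.e. a bias
    background, the linear heterodyne term, and a quadratic signal
    distortion. Real detectors add scattered light and noise terms."""
    return (theta + s) ** 2

def distortion_free_signal(theta, s):
    """Subtracting the two traces measured at opposite optical biases
    cancels both the theta**2 background and the s**2 distortion,
    leaving 4*theta*s: an enhanced but undistorted copy of the
    THz-induced signal."""
    return detected_intensity(theta, s) - detected_intensity(-theta, s)
```

In this picture, decreasing θ enhances the relative signal (2s/θ in a single trace) but also increases the relative distortion, which is why single-trace enhancement distorts while the subtracted pair does not.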

Three methods for distortion-free enhancement of electro-optic sampling measurements of terahertz signals are tested. In the first part of this two-paper series [J. Opt. Soc. Am. B 31, 904–910 (2014)], the theoretical framework for describing the signal enhancement was presented and discussed. As the applied optical bias is decreased, individual signal traces become enhanced but distorted. Here we experimentally show that nonlinear signal components that distort the terahertz electric field measurement can be removed by subtracting traces recorded with opposite optical bias values. In all three methods tested, we observe up to an order-of-magnitude increase in distortion-free signal enhancement, in agreement with theory, making possible measurements of small terahertz-induced transient birefringence signals with increased signal-to-noise ratio.