10 results for sampling techniques
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The analysis of samplings from periodontal pockets is important in the diagnosis and therapy of periodontitis. In this study, three different sampling techniques were compared to determine whether one method yielded samples suitable for the reproducible and simultaneous determination of bacterial load, cytokines, neutrophil elastase, and arginine-specific gingipains (Rgps). Rgps are an important virulence factor of Porphyromonas gingivalis, the exact concentration of which in gingival crevicular fluid (GCF) has not been quantified.
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
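The heuristic weighting in the first technique combines several sampling strategies so that no single low-probability technique produces outliers. As a minimal illustration of the general idea behind such combinations (the standard multiple-importance-sampling balance heuristic, not the authors' specific heuristic; function names are hypothetical):

```python
def balance_heuristic(pdfs, i):
    """Balance-heuristic weight for a sample drawn with technique i.

    pdfs: pdf values p_k(x) of the same point x under each sampling
    technique. Weighting by relative pdf suppresses the outliers a
    single low-pdf technique would otherwise produce.
    """
    return pdfs[i] / sum(pdfs)

def mis_estimate(samples):
    """Combine one sample per technique into an MIS estimate.

    samples: list of (f_value, pdfs, i) tuples, where f_value is the
    integrand at the sampled point, pdfs the per-technique pdf values,
    and i the technique the sample was drawn from.
    """
    total = 0.0
    for f, pdfs, i in samples:
        w = balance_heuristic(pdfs, i)
        total += w * f / pdfs[i]
    return total
```

With two techniques of equal density, each sample simply receives half the weight, and the combined estimator remains unbiased.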
Abstract:
In a surfactant-depletion model of lung injury, tidal recruitment of atelectasis and changes in shunt fraction lead to large Pao2 oscillations. We investigated the effect of these oscillations on conventional arterial blood gas (ABG) results using different sampling techniques in ventilated rabbits. In each rabbit, 5 different ventilator settings were studied, 2 before saline lavage injury and 3 after lavage injury. Ventilator settings were altered according to 5 different goals for the amplitude and mean value of brachiocephalic Pao2 oscillations, as guided by a fast responding intraarterial probe. ABG collection was timed to obtain the sample at the peak or trough of the Pao2 oscillations, or over several respiratory cycles. Before lung injury, oscillations were small and sample timing did not influence Pao2. After saline lavage, when Po2 fluctuations measured by the indwelling arterial Po2 probe confirmed tidal recruitment, Pao2 by ABG was significantly higher at peak (295 +/- 130 mm Hg) compared with trough (74 +/- 15 mm Hg) or mean (125 +/- 75 mm Hg). In early, mild lung injury after saline lavage, Pao2 can vary markedly during the respiratory cycle. When atelectasis is recruited with each breath, interpretation of changes in shunt fraction, based on conventional ABG analysis, should account for potentially large respiratory variations in arterial Po2.
Abstract:
INTRODUCTION: The simple bedside method for sampling undiluted distal pulmonary edema fluid through a normal suction catheter (s-Cath) has been experimentally and clinically validated. However, there are no data comparing non-bronchoscopic bronchoalveolar lavage (mini-BAL) and s-Cath for assessing lung inflammation in acute hypoxaemic respiratory failure. We designed a prospective study in two groups of patients, those with acute lung injury (ALI)/acute respiratory distress syndrome (ARDS) and those with acute cardiogenic lung edema (ACLE), to investigate the clinical feasibility of these techniques and to evaluate inflammation in both groups using undiluted sampling obtained by s-Cath. To test the interchangeability of the two methods in the same patient for studying the inflammatory response, we further compared mini-BAL and s-Cath for agreement of protein concentration and percentage of polymorphonuclear cells (PMNs). METHODS: Mini-BAL and s-Cath sampling was assessed in 30 mechanically ventilated patients, 21 with ALI/ARDS and 9 with ACLE. To analyse agreement between the two sampling techniques, we considered only simultaneously collected mini-BAL and s-Cath paired samples. The protein concentration and polymorphonuclear cell (PMN) count comparisons were performed using undiluted sampling. Bland-Altman plots were used for assessing the mean bias and the limits of agreement between the two sampling techniques; comparison between groups was performed using the non-parametric Mann-Whitney U test; continuous variables were compared using the Student t-test, Wilcoxon signed rank test, analysis of variance or Student-Newman-Keuls test; and categorical variables were compared using chi-square analysis or the Fisher exact test. RESULTS: Using protein content and PMN percentage as parameters, we identified substantial variations between the two sampling techniques. When the protein concentration in the lung was high, the s-Cath was the more sensitive method; by contrast, as inflammation increased, both methods provided similar estimates of neutrophil percentages in the lung. The patients with ACLE showed an increased PMN count, suggesting that hydrostatic lung edema can be associated with a concomitant inflammatory process. CONCLUSIONS: There are significant differences between the s-Cath and mini-BAL sampling techniques, indicating that these procedures cannot be used interchangeably for studying the lung inflammatory response in patients with acute hypoxaemic lung injury.
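The Bland-Altman agreement analysis used above can be sketched in a few lines. A minimal Python version computing the mean bias and 95% limits of agreement between two paired measurement methods (the conventional 1.96·SD limits are assumed here; this is an illustration, not the study's exact computation):

```python
def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired methods.

    a, b: lists of simultaneous measurements of the same quantity by
    two methods (e.g. mini-BAL vs. s-Cath protein concentrations).
    Returns (bias, lower_limit, upper_limit).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n                                  # mean difference
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If most paired differences fall inside the limits and the bias is clinically negligible, the methods may be treated as interchangeable; here the study found they were not.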
Abstract:
We present results from an intercomparison program of CO2, δ(O2/N2) and δ13CO2 measurements from atmospheric flask samples. Flask samples are collected on a bi-weekly basis at the High Altitude Research Station Jungfraujoch in Switzerland for three European laboratories: the University of Bern, Switzerland, the University of Groningen, the Netherlands, and the Max Planck Institute for Biogeochemistry in Jena, Germany. Almost 4 years of measurements of CO2, δ(O2/N2) and δ13CO2 are compared in this paper to assess the measurement compatibility of the three laboratories. While the average difference for the CO2 measurements between the laboratories in Bern and Jena meets the compatibility goal defined by the World Meteorological Organization, the standard deviation of the average differences between all laboratories is not within the required goal. However, the annual trends and seasonal cycles obtained agree within their estimated uncertainties. For δ(O2/N2), significant differences are observed between the three laboratories. The comparison for δ13CO2 yields the least compatible results, and the required goals are not met between the three laboratories. Our study shows the importance of regular intercomparison exercises to identify potential biases between laboratories and the need to improve the quality of atmospheric measurements.
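The pairwise laboratory comparison can be sketched as follows. This is an illustrative computation, not the authors' processing chain, and the 0.1 ppm CO2 threshold in the usage example is an assumed compatibility goal for the sake of the demonstration:

```python
from itertools import combinations
from statistics import mean, stdev

def compatibility(measurements, goal):
    """Check pairwise average differences against a compatibility goal.

    measurements: dict mapping laboratory name -> list of values
    measured on the same co-located flask samples (same order).
    Returns {(lab_a, lab_b): (mean_diff, sd_of_diffs, within_goal)}.
    """
    report = {}
    for a, b in combinations(sorted(measurements), 2):
        diffs = [x - y for x, y in zip(measurements[a], measurements[b])]
        report[(a, b)] = (mean(diffs), stdev(diffs),
                          abs(mean(diffs)) <= goal)
    return report

# Usage with hypothetical numbers: two labs differing by 0.05 ppm,
# checked against an assumed 0.1 ppm goal.
rep = compatibility({"Bern": [400.0, 401.0],
                     "Jena": [400.05, 401.05]}, goal=0.1)
```

As in the paper, a pair can meet the goal on the mean difference while the scatter of the differences still reveals compatibility problems, which is why both statistics are reported.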
Abstract:
Time-based localization techniques such as multilateration are favoured for positioning with wide-band signals. Applying the same techniques to narrow-band signals such as GSM is not trivial: the process requires synchronization accuracy and timestamp resolution, both in the nanosecond range. We propose approaches to deal with both challenges. On the one hand, we introduce a method to eliminate the negative effect of synchronization offset on time measurements. On the other hand, we obtain timestamps with nanosecond accuracy by using timing information from the signal processing chain. In a set of experiments ranging from suburban to indoor environments, we show that our proposed approaches improve the localization accuracy of TDOA approaches by several factors. We even demonstrate errors as small as 10 meters in outdoor settings with narrow-band signals.
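For readers unfamiliar with TDOA positioning, the underlying estimation problem can be sketched as follows: given receiver positions and measured time differences of arrival relative to a reference receiver, find the emitter position minimizing the TDOA residuals. A dependency-free grid search stands in for a proper multilateration solver here; this is an illustration, not the paper's method:

```python
import math

C = 299_792_458.0  # propagation speed (speed of light), m/s

def tdoa_residual(p, receivers, tdoas):
    """Sum of squared residuals between measured and predicted TDOAs.

    receivers[0] is the reference receiver; tdoas[k] is the measured
    arrival-time difference between receiver k+1 and the reference.
    """
    d0 = math.dist(p, receivers[0])
    r = 0.0
    for k, rx in enumerate(receivers[1:]):
        predicted = (math.dist(p, rx) - d0) / C
        r += (predicted - tdoas[k]) ** 2
    return r

def locate(receivers, tdoas, area, step=10.0):
    """Brute-force search over a rectangle (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = area
    best, best_r = None, float("inf")
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            r = tdoa_residual((x, y), receivers, tdoas)
            if r < best_r:
                best, best_r = (x, y), r
            y += step
        x += step
    return best
```

The sketch also makes the paper's point tangible: at c ≈ 3·10^8 m/s, a 10 m position error corresponds to roughly 33 ns of timing error, which is why nanosecond-scale synchronization and timestamp resolution are required.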
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of the normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
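One cycle of the RSS scheme itself can be sketched as follows. The default `ranker` ranks by the observed value, i.e. perfect ranking, which is exactly the assumption the proposed test is designed to check; in practice ranking is done by a cheap surrogate (visual judgement, a concomitant variable) and may be imperfect. Illustrative code, not from the article:

```python
import random

def ranked_set_sample(population, k, ranker=None):
    """Draw one cycle of a ranked set sample with set size k.

    For each i in 1..k, draw k units at random, rank them (by the
    `ranker` key; the default ranks on the value itself, i.e. perfect
    ranking), and quantify only the i-th ranked unit of the i-th set.
    """
    ranker = ranker or (lambda x: x)
    sample = []
    for i in range(k):
        group = random.sample(population, k)  # one set of k units
        group.sort(key=ranker)                # judgment ranking step
        sample.append(group[i])               # measure only rank i
    return sample
```

Replacing the default `ranker` with a noisy surrogate (e.g. the value plus random error) simulates imperfect ranking, the situation the article's test is built to detect.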
Abstract:
We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for an undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
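The core subtraction idea can be illustrated with a toy model: each measured waveform contains a linear (heterodyne) term proportional to the sign of the optical bias plus an even-order distortion term that is identical at both biases, so subtracting the two waveforms isolates the undistorted signal. A minimal sketch of this cancellation under that assumed model, not the paper's Jones-matrix treatment:

```python
def subtract_biased(m_plus, m_minus):
    """Recover the linear signal from measurements at opposite biases.

    m_plus, m_minus: waveforms recorded at optical bias +b and -b.
    The even-order distortion terms are common to both measurements
    and cancel on subtraction; the linear term doubles and is halved.
    """
    return [(p - m) / 2.0 for p, m in zip(m_plus, m_minus)]

# Toy model: measurement = (bias sign) * signal + signal**2 distortion.
signal = [0.0, 0.5, -0.3]
m_plus = [s + s * s for s in signal]    # bias +1
m_minus = [-s + s * s for s in signal]  # bias -1
recovered = subtract_biased(m_plus, m_minus)
```

As the abstract notes, this is what makes the scheme attractive in practice: the two measured waveforms are simply subtracted, with no elaborate reconstruction procedure.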
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
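The "a posteriori" selection loop can be caricatured in one dimension: filter the noisy input with several candidate kernels, estimate each filter's per-pixel error, and keep the local minimizer. The error estimate below (squared deviation from the input as a bias proxy, plus per-sample variance reduced by the window size) is a deliberately crude stand-in for the published estimators:

```python
def box_filter(img, radius):
    """Simple 1D box filter with shrinking windows at the borders."""
    out = []
    for i in range(len(img)):
        window = img[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def select_filter(img, variance, radii=(0, 1, 2, 4)):
    """Per-pixel 'a posteriori' filter selection (toy version).

    img: noisy per-pixel estimates; variance: per-pixel sample
    variance of those estimates. For each pixel, estimate the MSE of
    each candidate box filter as bias proxy + reduced variance and
    keep the output of the minimizing filter.
    """
    candidates = [box_filter(img, r) for r in radii]
    result = []
    for i in range(len(img)):
        best = min(
            range(len(radii)),
            key=lambda k: (candidates[k][i] - img[i]) ** 2   # bias proxy
                          + variance[i] / (2 * radii[k] + 1)  # variance term
        )
        result.append(candidates[best][i])
    return result
```

On smooth, high-variance regions the estimate favors wide filters; near edges the bias proxy grows and narrow filters win, which is the basic trade-off the surveyed methods negotiate with far more sophisticated error estimators.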
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.