91 results for Interval sampling
Abstract:
The impact of initial sample distribution on separation and focusing of analytes in a pH 3–11 gradient formed by 101 biprotic carrier ampholytes under concomitant electroosmotic displacement was studied by dynamic high-resolution computer simulation. Data obtained with application of the analytes mixed with the carrier ampholytes (as is customarily done), as a short zone within the initial carrier ampholyte zone, sandwiched between zones of carrier ampholytes, or introduced before or after the initial carrier ampholyte zone were compared. With sampling as a short zone within or adjacent to the carrier ampholytes, separation and focusing of analytes is shown to proceed as a cationic, anionic, or mixed process, and separation of the analytes is predicted to be much faster than the separation of the carrier components. Thus, after the initial separation, analytes continue to separate and eventually reach their focusing locations. This differs from the double-peak approach to equilibrium that takes place when analytes and carrier ampholytes are applied as a homogeneous mixture. Simulation data reveal that sample application between two zones of carrier ampholytes results in the formation of a pH gradient disturbance, as the concentration of the carrier ampholytes within the fluid element initially occupied by the sample will be lower compared to the other parts of the gradient. Consequently, the properties of this region are sample matrix dependent, the pH gradient is flatter, and the region is likely to represent a conductance gap (hot spot). Simulation data suggest that placing the sample at the anodic side or at the anodic end of the initial carrier ampholyte zone is the favorable configuration for capillary isoelectric focusing with electroosmotic zone mobilization.
Abstract:
Tree-rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. As the scientific community employing tree-ring parameters grows, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging in climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and growth trends by more than 200%. Our findings challenge paradigms in which a subset of samples is considered representative of the entire population. The only two sampling strategies meeting the requirements for all types of investigations are (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees.
This result argues for the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in tree-ring-based quantifications of forest growth and increase the comparability of datasets across individual studies, investigators, laboratories, and geographical boundaries.
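The dominant-trees-only bias described above can be illustrated with a toy simulation. The sketch below is not the study's analysis; the population sizes, growth rates, and two-group structure are invented for illustration. It contrasts a "sample only the fastest growers" design with fully randomized selection, the latter being one of the two designs the abstract endorses.

```python
import random

random.seed(42)

# Hypothetical stand: a minority of dominant trees grows much faster
# than the suppressed majority (all values in mm/yr, invented).
population = [random.gauss(3.0, 0.5) for _ in range(200)]   # dominant trees
population += [random.gauss(1.0, 0.3) for _ in range(800)]  # suppressed trees

def mean(xs):
    return sum(xs) / len(xs)

true_mean = mean(population)

# Design 1: sample only the 50 fastest-growing (dominant) trees.
dominant_sample = sorted(population, reverse=True)[:50]

# Design 2: fully randomized selection of 50 trees.
random_sample = random.sample(population, 50)

bias_dominant = (mean(dominant_sample) / true_mean - 1) * 100
bias_random = (mean(random_sample) / true_mean - 1) * 100
print(f"dominant-only bias: {bias_dominant:+.0f}%")
print(f"random-sample bias: {bias_random:+.0f}%")
```

With these invented numbers the dominant-only design overstates mean growth by well over 100%, while the randomized design stays close to the population value regardless of sample size, mirroring the abstract's conclusion that bias is independent of the number of samples.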
Abstract:
PURPOSE Therapeutic drug monitoring of patients receiving once daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produce comparable results, their performance has never been checked against full PK profiles. We performed a PK study to compare both methods and to determine the best time-points to estimate AUC0-24 and peak concentrations (Cmax). METHODS We obtained full PK profiles in 14 patients receiving once daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS For tobramycin and gentamicin, AUC0-24 and Cmax could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after start of the infusion. The two-point and the Bayesian method produced similar results. For amikacin, AUC0-24 could be reliably estimated by both methods. Cmax was underestimated by 10-20% by the two-point method and by up to 30%, with large variation, by the Bayesian method. CONCLUSIONS The ideal time-points for therapeutic drug monitoring of once daily administered aminoglycosides are 1 h after start of a 30-min infusion for the first time-point and 8-10 h after start of the infusion for the second time-point. Duration of the infusion and accurate registration of the time-points of blood drawing are essential for obtaining precise predictions.
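The abstract does not give the two-point algorithm itself, but a standard version assumes mono-exponential (one-compartment) elimination between the two sampling times. The sketch below is such a textbook sketch, not the study's algorithm: the concentrations, the back-extrapolation to the end of the 30-min infusion, and the triangular approximation of the infusion phase are all illustrative assumptions.

```python
import math

def two_point_pk(c1, t1, c2, t2, t_inf_end=0.5, t_end=24.0):
    """Estimate ke, Cmax, and AUC0-24 from two post-infusion levels,
    assuming mono-exponential elimination. c1 is measured at t1 (h),
    c2 at t2 (h); all times from the start of a 30-min infusion."""
    ke = math.log(c1 / c2) / (t2 - t1)            # elimination rate constant (1/h)
    cmax = c1 * math.exp(ke * (t1 - t_inf_end))   # back-extrapolate to end of infusion
    # AUC from end of infusion to 24 h for C(t) = Cmax * exp(-ke*(t - t_inf_end)),
    # plus a triangular approximation of the rising infusion phase.
    auc_elim = cmax / ke * (1 - math.exp(-ke * (t_end - t_inf_end)))
    auc_inf = 0.5 * cmax * t_inf_end
    return ke, cmax, auc_inf + auc_elim

# Illustrative levels at the recommended time-points: 1 h and 9 h.
ke, cmax, auc = two_point_pk(c1=18.0, t1=1.0, c2=2.0, t2=9.0)
print(f"ke = {ke:.3f} 1/h, Cmax = {cmax:.1f} mg/L, AUC0-24 = {auc:.0f} mg*h/L")
```

The choice of 1 h and 8-10 h matters here: the first point must lie past the distribution phase for the mono-exponential assumption to hold, and the second must still be well above the assay's quantification limit.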
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
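For readers unfamiliar with the scheme being tested, the RSS procedure itself can be sketched in a few lines. This is a generic illustration with perfect ranking (units ranked by their true values), not the article's proposed test; the distribution, set size, and cycle count are arbitrary choices.

```python
import random

random.seed(1)

def ranked_set_sample(draw, set_size, cycles):
    """Balanced RSS: in each cycle, for each rank r, draw `set_size`
    units, rank them (here by true value, i.e. perfect ranking), and
    measure only the r-th ranked unit."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            units = sorted(draw() for _ in range(set_size))
            sample.append(units[r])   # keep only the r-th order statistic
    return sample

draw = lambda: random.gauss(0.0, 1.0)
rss = ranked_set_sample(draw, set_size=3, cycles=200)  # 600 measured units
srs = [draw() for _ in range(600)]                      # simple random sample

mean = lambda xs: sum(xs) / len(xs)
print(f"RSS mean {mean(rss):+.3f}, SRS mean {mean(srs):+.3f}")
```

Under perfect ranking the RSS mean is unbiased and typically more precise than a simple random sample of the same size; when ranking is imperfect (e.g. ranking on a noisy concomitant variable) that efficiency gain erodes, which is why tests of the perfect-ranking assumption are needed.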
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms for computing probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount) are provided. It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
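As a point of reference for what these algorithms improve upon, finite-horizon ruin of a classical compound Poisson risk process can be estimated by crude Monte Carlo, as sketched below. All parameter values are invented, and this baseline deliberately omits the article's contribution: without an importance sampling change of measure, the relative error of crude simulation blows up as the initial reserve x grows and ruin becomes rare.

```python
import random

random.seed(7)

def ruin_prob(x, c, lam, claim_mean, horizon, n_paths):
    """Crude Monte Carlo estimate of P(ruin before `horizon`) for the
    classical risk process U(t) = x + c*t - S(t), where S(t) is a
    compound Poisson sum of exponential claims."""
    ruins = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += random.expovariate(lam)                    # next claim arrival
            if t > horizon:
                break                                       # survived the horizon
            claims += random.expovariate(1.0 / claim_mean)  # claim size
            if x + c * t - claims < 0:                      # reserve goes negative
                ruins += 1
                break
    return ruins / n_paths

p = ruin_prob(x=5.0, c=1.2, lam=1.0, claim_mean=1.0, horizon=50.0, n_paths=20000)
print(f"estimated ruin probability ~ {p:.3f}")
```

With a 20% safety loading and a small reserve, ruin is common enough for crude simulation to work; the bounded-relative-error and logarithmically efficient estimators in the article are designed precisely for the regime where it does not.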
Abstract:
BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (f_acq, 1/f_dur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VÊ_acq.dur = (1 − POR) × 100) best estimates the true VE (VE_acq.dur = (1 − f_acq × f_dur) × 100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VÊ_acq.dur first approximates VE_acq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses, it takes longer after the last dose until VÊ_acq.dur approximates VE_acq.dur.
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
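The POR- and PR-based estimators named in the abstract are straightforward to compute from a cross-sectional swab survey. The sketch below applies the stated definitions VE = (1 − POR) × 100 and VE = (1 − PR) × 100; the carriage counts are invented for illustration and do not come from the study.

```python
def vaccine_efficacy_from_carriage(vacc_pos, vacc_n, ctrl_pos, ctrl_n):
    """Estimate carriage VE from cross-sectional carriage data:
    POR-based VE = (1 - POR) * 100, PR-based VE = (1 - PR) * 100."""
    p_v = vacc_pos / vacc_n   # carriage prevalence, vaccine arm
    p_c = ctrl_pos / ctrl_n   # carriage prevalence, control arm
    por = (p_v / (1 - p_v)) / (p_c / (1 - p_c))  # prevalence odds ratio
    pr = p_v / p_c                               # prevalence ratio
    return (1 - por) * 100, (1 - pr) * 100

# Hypothetical survey: 60/1000 carriers in the vaccine arm, 100/1000 in controls.
ve_por, ve_pr = vaccine_efficacy_from_carriage(60, 1000, 100, 1000)
print(f"VE (POR-based) = {ve_por:.1f}%, VE (PR-based) = {ve_pr:.1f}%")
```

The two estimators diverge whenever carriage is not rare (the odds ratio exceeds the ratio of prevalences), which is one reason the abstract's finding, that the POR-based estimate best tracks the true combined VE for leaky vaccines, is not obvious a priori.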
Abstract:
We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for an undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
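The opposite-bias subtraction can be illustrated with a toy numerical model. Near the zero-transmission point of a crossed-polarizer geometry, the detected intensity is approximately I(θ) ∝ (θ + S)², so a single trace contains the distortion term S² alongside the heterodyne term 2θS. The waveform S(t), the bias value, and the quadratic detection model below are illustrative assumptions, not the paper's full Jones-matrix treatment.

```python
import math

theta = 0.05  # optical bias (radians), hypothetical value

# Hypothetical THz-induced signal: a chirp-free oscillating pulse.
ts = [i * 0.05 for i in range(200)]
S = [0.02 * math.exp(-((t - 5.0)) ** 2) * math.cos(4 * (t - 5.0)) for t in ts]

# Traces recorded at opposite optical biases; each is individually
# distorted by the S^2 term.
I_plus = [(theta + s) ** 2 for s in S]
I_minus = [(-theta + s) ** 2 for s in S]

# Subtracting the two traces cancels both theta^2 and S^2 exactly,
# leaving the undistorted heterodyne term 4*theta*S.
recovered = [(ip - im) / (4 * theta) for ip, im in zip(I_plus, I_minus)]

max_err = max(abs(r - s) for r, s in zip(recovered, S))
print(f"max reconstruction error: {max_err:.2e}")
```

In this quadratic model the cancellation is algebraically exact, which reflects the abstract's point that no elaborate reconstruction procedure is needed: the two measured waveforms are simply subtracted.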
Abstract:
Three methods for distortion-free enhancement of electro-optic sampling measurements of terahertz signals are tested. In the first part of this two-paper series [J. Opt. Soc. Am. B 31, 904–910 (2014)], the theoretical framework for describing the signal enhancement was presented and discussed. As the applied optical bias is decreased, individual signal traces become enhanced but distorted. Here we experimentally show that nonlinear signal components that distort the terahertz electric field measurement can be removed by subtracting traces recorded with opposite optical bias values. In all three methods tested, we observe up to an order of magnitude increase in distortion-free signal enhancement, in agreement with the theory, making possible measurements of small terahertz-induced transient birefringence signals with increased signal-to-noise ratio.
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
Abstract:
The QT interval, an electrocardiographic measure reflecting myocardial repolarization, is a heritable trait. QT prolongation is a risk factor for ventricular arrhythmias and sudden cardiac death (SCD) and could indicate the presence of the potentially lethal Mendelian long-QT syndrome (LQTS). Using a genome-wide association and replication study in up to 100,000 individuals, we identified 35 common variant loci associated with QT interval that collectively explain ∼8-10% of QT-interval variation and highlight the importance of calcium regulation in myocardial repolarization. Rare variant analysis of 6 new QT interval-associated loci in 298 unrelated probands with LQTS identified coding variants not found in controls but of uncertain causality and therefore requiring validation. Several newly identified loci encode proteins that physically interact with other recognized repolarization proteins. Our integration of common variant association, expression and orthogonal protein-protein interaction screens provides new insights into cardiac electrophysiology and identifies new candidate genes for ventricular arrhythmias, LQTS and SCD.
Abstract:
One of the earliest accounts of duration perception by Karl von Vierordt implied a common process underlying the timing of intervals in the sub-second and the second range. To date, there are two major explanatory approaches for the timing of brief intervals: the Common Timing Hypothesis and the Distinct Timing Hypothesis. While the common timing hypothesis also proceeds from a unitary timing process, the distinct timing hypothesis suggests two dissociable, independent mechanisms for the timing of intervals in the sub-second and the second range, respectively. In the present paper, we introduce confirmatory factor analysis (CFA) to elucidate the internal structure of interval timing in the sub-second and the second range. Our results indicate that the assumption of two mechanisms underlying the processing of intervals in the second and the sub-second range might be more appropriate than the assumption of a unitary timing mechanism. In contrast to the basic assumption of the distinct timing hypothesis, however, these two timing mechanisms are closely associated with each other and share 77% of common variance. This finding suggests either a strong functional relationship between the two timing mechanisms or a hierarchically organized internal structure. Findings are discussed in the light of existing psychophysical and neurophysiological data.
Abstract:
The present study was designed to investigate the influences of type of psychophysical task (two-alternative forced-choice [2AFC] and reminder tasks), type of interval (filled vs. empty), sensory modality (auditory vs. visual), and base duration (ranging from 100 through 1,000 ms) on performance on duration discrimination. All of these factors were systematically varied in an experiment comprising 192 participants. This approach allowed for obtaining information not only on the general (main) effect of each factor alone, but also on the functional interplay and mutual interactions of some or all of these factors combined. Temporal sensitivity was markedly higher for auditory than for visual intervals, as well as for the reminder relative to the 2AFC task. With regard to base duration, discrimination performance deteriorated with decreasing base durations for intervals below 400 ms, whereas longer intervals were not affected. No indication emerged that overall performance on duration discrimination was influenced by the type of interval, and only two significant interactions were apparent: Base Duration × Type of Interval and Base Duration × Sensory Modality. With filled intervals, the deteriorating effect of base duration was limited to very brief base durations, not exceeding 100 ms, whereas with empty intervals, temporal discriminability was also affected for the 200-ms base duration. Similarly, the performance decrement observed with visual relative to auditory intervals increased with decreasing base durations. These findings suggest that type of task, sensory modality, and base duration represent largely independent sources of variance for performance on duration discrimination that can be accounted for by distinct nontemporal mechanisms.
Abstract:
Laying hens in loose housing systems have access to group-nests which provide space for several hens at a time to lay their eggs. They are thus rather large, and the trend in the industry is to further increase the size of these nests. Though practicality is important for the producer, group-nests should also cater to the egg-laying behaviour of hens to promote good welfare. One of the factors playing a role in the attractiveness of a nest is the amount of enclosure: hens prefer more enclosure when given a choice between different nest types. The aim of this study was to investigate whether hens prefer smaller group-nests to lay their eggs, given that these may seem more enclosed than larger nests. The relative preference of groups of laying hens for two nest sizes – 0.43 m² vs. 0.86 m² – was tested in a free-access choice test. The experiment was conducted in two consecutive trials with 100 hens each. They were housed from 18 to 36 weeks of age in five groups of 20 animals and had access to two commercial group-nests differing in internal size only. We counted eggs daily as a measure of nest preference. At 28 and 36 weeks of age, videos were taken of the pens and inside the nests on one day during the first 5 h of lights-on. The nest videos were used to record the number of hens per nest and their behaviour with a 10-min scan sampling interval. The pen videos were observed continuously to count the total number of nest visits per nest and to calculate the duration of nest visits of five focal hens per pen. We found a relative preference for the small nest, as more eggs, fewer nest visits per egg, and longer nest visit durations were recorded for that nest. In addition, more hens – including more sitting hens – were in the small nests during the main egg-laying period, while the number of standing hens did not differ. These observations indicate that even though both nests may have been explored to a similar extent, the hens preferred the small nest for egg-laying.