869 results for Interval sampling
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in two special cases, the normal and exponential distributions, indicate that the proposed test performs well in comparison with its leading competitors.
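For context, the RSS scheme draws samples by ranking small judgment sets and measuring only one order statistic per set. A minimal Python sketch of the scheme under perfect ranking (illustrative only; ranking here uses the measured values themselves, and the proposed test is not shown):

```python
import random

def ranked_set_sample(population_draw, set_size, cycles, rng=None):
    """Draw a ranked set sample: in each cycle, form `set_size` independent
    sets of `set_size` units, rank each set (perfect ranking, since we rank
    by the values themselves), and keep the i-th order statistic from the
    i-th set. Returns cycles * set_size measurements."""
    rng = rng or random.Random(0)
    sample = []
    for _ in range(cycles):
        for i in range(set_size):
            judgment_set = sorted(population_draw(rng) for _ in range(set_size))
            sample.append(judgment_set[i])  # i-th smallest of the i-th set
    return sample
```

Imperfect ranking, the situation the proposed test is designed to detect, would correspond to ranking each judgment set by a noisy concomitant rather than by the true values.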
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms are provided for computing the probabilities of ruin by creeping (i.e., induced by the diffusion term) and by jumping (i.e., by a claim amount). It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
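As a baseline for the quantities involved, here is a plain (crude) Monte Carlo estimator of the finite-horizon ruin probability for the classical compound Poisson risk process without the diffusion term. This is not the paper's importance sampling algorithm (the tilted measures are what achieve bounded relative error or logarithmic efficiency), and all parameter choices are illustrative:

```python
import random

def ruin_prob_mc(x, t, lam, premium_rate, claim_sampler, n_paths=10_000, seed=1):
    """Crude Monte Carlo estimate of P(ruin before time t) for the surplus
    process X_s = x + premium_rate * s - (claims up to s), with claims
    arriving at Poisson rate lam."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        surplus, clock = x, 0.0
        while True:
            gap = rng.expovariate(lam)     # exponential inter-arrival time
            clock += gap
            if clock > t:
                break                      # horizon reached without ruin
            surplus += premium_rate * gap  # premiums earned since last claim
            surplus -= claim_sampler(rng)  # claim amount
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# Illustrative example: exponential claims with mean 1, 20% premium loading.
p = ruin_prob_mc(x=10.0, t=50.0, lam=1.0, premium_rate=1.2,
                 claim_sampler=lambda r: r.expovariate(1.0))
```

Crude Monte Carlo of this kind degenerates as x grows, because ruin becomes a rare event; that is precisely the regime the article's importance sampling schemes are built for.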
Abstract:
BACKGROUND Pathogenic bacteria are often asymptomatically carried in the nasopharynx. Bacterial carriage can be reduced by vaccination and has been used as an alternative endpoint to clinical disease in randomised controlled trials (RCTs). Vaccine efficacy (VE) is usually calculated as 1 minus a measure of effect. Estimates of vaccine efficacy from cross-sectional carriage data collected in RCTs are usually based on prevalence odds ratios (PORs) and prevalence ratios (PRs), but it is unclear when these should be measured. METHODS We developed dynamic compartmental transmission models simulating RCTs of a vaccine against a carried pathogen to investigate how VE can best be estimated from cross-sectional carriage data, at which time carriage should optimally be assessed, and to which factors this timing is most sensitive. In the models, vaccine could change carriage acquisition and clearance rates (leaky vaccine); values for these effects were explicitly defined (f_acq, 1/f_dur). POR and PR were calculated from model outputs. Models differed in infection source: other participants or external sources unaffected by the trial. Simulations using multiple vaccine doses were compared to empirical data. RESULTS The combined VE against acquisition and duration calculated using POR (VEˆ_acq.dur = (1 − POR) × 100) best estimates the true VE (VE_acq.dur = (1 − f_acq × f_dur) × 100) for leaky vaccines in most scenarios. The mean duration of carriage was the most important factor determining the time until VEˆ_acq.dur first approximates VE_acq.dur: if the mean duration of carriage is 1-1.5 months, up to 4 months are needed; if the mean duration is 2-3 months, up to 8 months are needed. Minor differences were seen between models with different infection sources. In RCTs with shorter intervals between vaccine doses, it takes longer after the last dose until VEˆ_acq.dur approximates VE_acq.dur.
CONCLUSION The timing of sample collection should be considered when interpreting vaccine efficacy against bacterial carriage measured in RCTs.
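The POR- and PR-based estimators named in this abstract reduce to simple arithmetic on a cross-sectional 2×2 carriage table. A minimal sketch (the trial counts are made up for illustration):

```python
def ve_from_carriage(vacc_carriers, vacc_n, ctrl_carriers, ctrl_n):
    """Vaccine efficacy estimates from cross-sectional carriage counts:
    (1 - POR) * 100 using the prevalence odds ratio, and
    (1 - PR)  * 100 using the prevalence ratio."""
    # POR = odds of carriage in vaccinees / odds of carriage in controls
    por = (vacc_carriers * (ctrl_n - ctrl_carriers)) / \
          (ctrl_carriers * (vacc_n - vacc_carriers))
    # PR = carriage prevalence in vaccinees / prevalence in controls
    pr = (vacc_carriers / vacc_n) / (ctrl_carriers / ctrl_n)
    return (1 - por) * 100, (1 - pr) * 100

# Hypothetical counts: 30/200 vaccinated carriers vs 60/200 control carriers.
ve_por, ve_pr = ve_from_carriage(30, 200, 60, 200)
```

Per the abstract's result, the POR-based value is the one that best tracks the true combined efficacy (1 − f_acq × f_dur) × 100 for a leaky vaccine, provided the sample is drawn late enough after the last dose.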
Abstract:
We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero transmission point in a crossed polarizer detection geometry. In contrast to other techniques for undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for reconstructing the true signal, as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and discuss them also in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration is given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
Abstract:
Three methods for distortion-free enhancement of electro-optic sampling measurements of terahertz signals are tested. In the first part of this two-paper series [J. Opt. Soc. Am. B 31, 904–910 (2014)], the theoretical framework for describing the signal enhancement was presented and discussed. As the applied optical bias is decreased, individual signal traces become enhanced but distorted. Here we experimentally show that nonlinear signal components that distort the terahertz electric field measurement can be removed by subtracting traces recorded with opposite optical bias values. In all three methods tested, we observe up to an order of magnitude increase in distortion-free signal enhancement, in agreement with the theory, making possible measurements of small terahertz-induced transient birefringence signals with increased signal-to-noise ratio.
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
Abstract:
The QT interval, an electrocardiographic measure reflecting myocardial repolarization, is a heritable trait. QT prolongation is a risk factor for ventricular arrhythmias and sudden cardiac death (SCD) and could indicate the presence of the potentially lethal Mendelian long-QT syndrome (LQTS). Using a genome-wide association and replication study in up to 100,000 individuals, we identified 35 common variant loci associated with QT interval that collectively explain ∼8–10% of QT-interval variation and highlight the importance of calcium regulation in myocardial repolarization. Rare variant analysis of 6 new QT interval-associated loci in 298 unrelated probands with LQTS identified coding variants not found in controls but of uncertain causality and therefore requiring validation. Several newly identified loci encode proteins that physically interact with other recognized repolarization proteins. Our integration of common variant association, expression and orthogonal protein-protein interaction screens provides new insights into cardiac electrophysiology and identifies new candidate genes for ventricular arrhythmias, LQTS and SCD.
Abstract:
One of the earliest accounts of duration perception by Karl von Vierordt implied a common process underlying the timing of intervals in the sub-second and the second range. To date, there are two major explanatory approaches for the timing of brief intervals: the Common Timing Hypothesis and the Distinct Timing Hypothesis. While the common timing hypothesis also proceeds from a unitary timing process, the distinct timing hypothesis suggests two dissociable, independent mechanisms for the timing of intervals in the sub-second and the second range, respectively. In the present paper, we introduce confirmatory factor analysis (CFA) to elucidate the internal structure of interval timing in the sub-second and the second range. Our results indicate that the assumption of two mechanisms underlying the processing of intervals in the second and the sub-second range might be more appropriate than the assumption of a unitary timing mechanism. In contrast to the basic assumption of the distinct timing hypothesis, however, these two timing mechanisms are closely associated with each other and share 77% of common variance. This finding suggests either a strong functional relationship between the two timing mechanisms or a hierarchically organized internal structure. Findings are discussed in the light of existing psychophysical and neurophysiological data.
Abstract:
The present study was designed to investigate the influences of type of psychophysical task (two-alternative forced-choice [2AFC] and reminder tasks), type of interval (filled vs. empty), sensory modality (auditory vs. visual), and base duration (ranging from 100 through 1,000 ms) on performance on duration discrimination. All of these factors were systematically varied in an experiment comprising 192 participants. This approach allowed for obtaining information not only on the general (main) effect of each factor alone, but also on the functional interplay and mutual interactions of some or all of these factors combined. Temporal sensitivity was markedly higher for auditory than for visual intervals, as well as for the reminder relative to the 2AFC task. With regard to base duration, discrimination performance deteriorated with decreasing base durations for intervals below 400 ms, whereas longer intervals were not affected. No indication emerged that overall performance on duration discrimination was influenced by the type of interval, and only two significant interactions were apparent: Base Duration × Type of Interval and Base Duration × Sensory Modality. With filled intervals, the deteriorating effect of base duration was limited to very brief base durations, not exceeding 100 ms, whereas with empty intervals, temporal discriminability was also affected for the 200-ms base duration. Similarly, the performance decrement observed with visual relative to auditory intervals increased with decreasing base durations. These findings suggest that type of task, sensory modality, and base duration represent largely independent sources of variance for performance on duration discrimination that can be accounted for by distinct nontemporal mechanisms.
Abstract:
Laying hens in loose housing systems have access to group-nests, which provide space for several hens at a time to lay their eggs. These nests are thus rather large, and the trend in the industry is to increase their size further. Though practicality is important for the producer, group-nests should also cater to the egg-laying behaviour of hens to promote good welfare. One of the factors playing a role in the attractiveness of a nest is the amount of enclosure: hens prefer more enclosure when given a choice between different nest types. The aim of this study was to investigate whether hens prefer smaller group-nests for laying their eggs, given that smaller nests may seem more enclosed than larger ones. The relative preference of groups of laying hens for two nest sizes – 0.43 m² vs. 0.86 m² – was tested in a free-access choice test. The experiment was conducted in two consecutive trials with 100 hens each. The hens were housed from 18 to 36 weeks of age in five groups of 20 animals and had access to two commercial group-nests differing in internal size only. We counted eggs daily as a measure of nest preference. At 28 and 36 weeks of age, videos were taken of the pens and inside the nests on one day during the first 5 h of lights-on. The nest videos were used to record the number of hens per nest and their behaviour at a 10-min scan sampling interval. The pen videos were observed continuously to count the total number of nest visits per nest and to calculate the duration of nest visits for five focal hens per pen. We found a relative preference for the small nest, as more eggs, fewer nest visits per egg, and longer nest visit durations were recorded for that nest. In addition, more hens – including more sitting hens – were in the small nests during the main egg-laying period, while the number of standing hens did not differ. These observations indicate that even though both nests may have been explored to a similar extent, the hens preferred the small nest for egg-laying.
Abstract:
Intestinal dendritic cells (DCs) are believed to sample and present commensal bacteria to the gut-associated immune system to maintain immune homeostasis. How antigen sampling pathways handle intestinal pathogens remains elusive. We present a murine colitogenic Salmonella infection model that is highly dependent on DCs. Conditional DC depletion experiments revealed that the intestinal virulence of the S. Typhimurium SL1344 ΔinvG mutant, which lacks a functional type 3 secretion system-1, critically required DCs for invasion across the epithelium. The DC dependency was limited to the early phase of infection, when bacteria colocalized with CD11c(+)CX3CR1(+) mucosal DCs. At later stages, the bacteria became associated with other (CD11c(-)CX3CR1(-)) lamina propria cells, DC depletion no longer attenuated the pathology, and a MyD88-dependent mucosal inflammation was initiated. Using bone marrow chimeric mice, we showed that MyD88 signaling within hematopoietic cells distinct from DCs was required and sufficient for induction of the colitis. Moreover, MyD88-deficient DCs still supported transepithelial uptake of the bacteria and the induction of MyD88-dependent colitis. These results establish that pathogen sampling by DCs is a discrete, MyD88-independent step during the initiation of a mucosal innate immune response to bacterial infection in vivo.
Abstract:
We propose a nonparametric variance estimator when ranked set sampling (RSS) and judgment post stratification (JPS) are applied by measuring a concomitant variable. Our proposed estimator is obtained by conditioning on observed concomitant values and using nonparametric kernel regression.
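As a rough illustration of the conditioning idea in this abstract, here is a generic law-of-total-variance sketch built on Nadaraya-Watson kernel regression. This is not the authors' exact RSS/JPS estimator; the Gaussian kernel and the bandwidth are arbitrary illustrative choices:

```python
import math

def nw_regression(x0, xs, ys, h):
    """Nadaraya-Watson kernel estimate of E[Y | X = x0],
    with a Gaussian kernel of bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

def variance_via_conditioning(xs, ys, h=0.5):
    """Variance estimate via Var(Y) = E[Var(Y|X)] + Var(E[Y|X]),
    with both conditional moments replaced by kernel-regression fits
    on the observed concomitant values xs."""
    m = [nw_regression(x, xs, ys, h) for x in xs]           # E[Y | X = x_i]
    within = sum((y - mi) ** 2 for y, mi in zip(ys, m)) / len(ys)
    m_bar = sum(m) / len(m)
    between = sum((mi - m_bar) ** 2 for mi in m) / len(m)
    return within + between
```

The appeal of conditioning on a concomitant is that the regression step can reuse the ranking information collected during RSS or JPS without requiring extra measurements of the study variable.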
Abstract:
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and its input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be used by any algorithmic procedure based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
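A minimal Euler-Maruyama sketch of an EIF neuron that collects ISIs as the "samples" described above. All parameter values and the constant-current drive are illustrative assumptions, not taken from the paper:

```python
import math
import random

def eif_isi_samples(drive, n_spikes, dt=1e-4, tau=0.02, v_rest=-0.065,
                    v_thresh=-0.050, delta_t=0.002, v_spike=-0.030,
                    noise=0.002, seed=0):
    """Simulate an exponential integrate-and-fire (EIF) neuron with
    Euler-Maruyama steps and return its interspike intervals (ISIs).
    `drive(t)` is the input current expressed in volts (R * I)."""
    rng = random.Random(seed)
    v, t, t_last, isis = v_rest, 0.0, 0.0, []
    while len(isis) < n_spikes:
        dv = (-(v - v_rest)                                   # leak
              + delta_t * math.exp((v - v_thresh) / delta_t)  # spike-initiating exponential current
              + drive(t)) / tau
        v += dv * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:            # spike detected: record ISI, reset membrane
            isis.append(t - t_last)
            t_last, v = t, v_rest
    return isis
```

Raising `v_rest` towards `v_thresh` mimics the depolarized baseline of an UP state, the regime in which the paper's ISI-to-input relationship is reported to be most accurate.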
Abstract:
OBJECTIVES Respondent-driven sampling (RDS) is a new data collection methodology used to estimate characteristics of hard-to-reach groups, such as the HIV prevalence in drug users. Many national public health systems and international organizations rely on RDS data. However, RDS reporting quality and available reporting guidelines are inadequate. We carried out a systematic review of RDS studies and present Strengthening the Reporting of Observational Studies in Epidemiology for RDS Studies (STROBE-RDS), a checklist of essential items to present in RDS publications, justified by an explanation and elaboration document. STUDY DESIGN AND SETTING We searched the MEDLINE (1970-2013), EMBASE (1974-2013), and Global Health (1910-2013) databases to assess the number and geographical distribution of published RDS studies. STROBE-RDS was developed based on STROBE guidelines, following Guidance for Developers of Health Research Reporting Guidelines. RESULTS RDS has been used in over 460 studies from 69 countries, including the USA (151 studies), China (70), and India (32). STROBE-RDS includes modifications to 12 of the 22 items on the STROBE checklist. The two key areas that required modification concerned the selection of participants and statistical analysis of the sample. CONCLUSION STROBE-RDS seeks to enhance the transparency and utility of research using RDS. If widely adopted, STROBE-RDS should improve global infectious diseases public health decision making.
Abstract:
Mobile pastoralists contribute substantially to the gross domestic product in Chad, but little information is available regarding their demography. The population of the Lake Chad area is increasing, resulting in competition for scarce land and water resources. For the first time, the density of people and animals from mobile and sedentary populations was assessed using randomly defined sampling areas. Four sampling rounds were conducted over two years in the same areas to show population density dynamics. We identified 42 villages of sedentary communities in the sampling zones; 11 (in 2010) and 16 (in 2011) mobile pastoralist camps at the beginning of the dry season; and 34 (in 2011) and 30 (in 2012) camps at the end of the dry season. A mean of 64.0 people per km² (95% confidence interval, 20.3-107.8) was estimated to live in sedentary villages. In the mobile communities, we found 5.9 people per km² at the beginning and 17.5 people per km² at the end of the dry season. Per km², we recorded on average 21.0 cattle and 31.6 small ruminants in the sedentary villages and 66.1 cattle and 102.5 small ruminants in the mobile communities, amounting to a mean of 86.6 tropical livestock units during the dry season. These numbers exceed, by up to five times, the published carrying capacities for similar Sahelian zones. Our results underline the need for a new institutional framework. Improved land use management must equally consider the needs of mobile communities and sedentary populations.