915 results for Optimal Sampling Time


Relevance: 30.00%

Abstract:

Campylobacteriosis has been the most common food-associated notifiable infectious disease in Switzerland since 1995. Contact with and ingestion of raw or undercooked broiler meat are considered the dominant risk factors for infection. In this study, we investigated the temporal relationship between the disease incidence in humans and the prevalence of Campylobacter in broilers in Switzerland from 2008 to 2012. We used a time-series approach to describe the pattern of the disease, incorporating seasonal effects and autocorrelation. The analysis shows that the prevalence of Campylobacter in broilers, with a 2-week lag, has a significant impact on disease incidence in humans. Therefore, Campylobacter cases in humans can be partly explained by contagion through broiler meat. We also found a strong autoregressive effect in human illness, and a significant increase of illness during the Christmas and New Year's holidays. In a final analysis, we corrected for the sampling error of prevalence in broilers, and the results led to similar conclusions.
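
The modelling idea described above (human incidence regressed on broiler prevalence lagged by two weeks, plus an autoregressive term and seasonal terms) can be pictured with the minimal sketch below. It uses simulated weekly data and ordinary least squares with harmonic seasonal terms; the variable names, data, and estimator choice are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch (not the paper's model): weekly human campylobacteriosis
# incidence regressed on 2-week-lagged broiler prevalence, a first-order
# autoregressive term, and harmonic seasonal terms. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = 260                                     # five years of hypothetical weekly data
t = np.arange(weeks)
prevalence = 0.3 + 0.2 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 0.05, weeks)
incidence = (50 + 40 * np.roll(prevalence, 2)   # effect of prevalence two weeks earlier
             + 10 * np.sin(2 * np.pi * t / 52)
             + rng.normal(0, 5, weeks))

df = pd.DataFrame({"incidence": incidence, "prevalence": prevalence})
df["prev_lag2"] = df["prevalence"].shift(2)     # broiler prevalence, 2-week lag
df["inc_lag1"] = df["incidence"].shift(1)       # autoregressive term
df["sin52"] = np.sin(2 * np.pi * t / 52)        # harmonic terms for annual seasonality
df["cos52"] = np.cos(2 * np.pi * t / 52)
df = df.dropna()

X = sm.add_constant(df[["prev_lag2", "inc_lag1", "sin52", "cos52"]])
fit = sm.OLS(df["incidence"], X).fit()
print(fit.params)
```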

Relevance: 30.00%

Abstract:

An experiment was conducted to determine the effect of grazing versus zero-grazing on energy expenditure (EE), feeding behaviour and physical activity in dairy cows at different stages of lactation. Fourteen Holstein cows were subjected to two treatments in a repeated crossover design with three experimental series (S1, S2, and S3) reflecting increasing days in milk (DIM). At the beginning of each series, cows were on average at 38, 94 and 171 (standard deviation (SD) 10.8) DIM, respectively. Each series consisted of two periods, each containing a 7-d adaptation and a 7-d collection period. Cows either grazed on pasture for 16–18.5 h per day or were kept in a freestall barn with ad libitum access to herbage harvested from the same paddock. Herbage intake was estimated using the double alkane technique. On each day of the collection period, EE of one cow in the barn and of one cow on pasture was determined for 6 h using the 13C bicarbonate dilution technique, with blood samples collected either manually in the barn or by an automatic sampling system on pasture. Furthermore, during each collection period the physical activity and feeding behaviour of the cows were recorded over 3 d using pedometers and behaviour recorders. Milk yield decreased with increasing DIM (P<0.001) but was similar for both treatments. Herbage intake was lower (P<0.01) for grazing cows (16.8 kg dry matter (DM)/d) compared to zero-grazing cows (18.9 kg DM/d). The lowest (P<0.001) intake was observed in S1, and similar intakes were observed in S2 and S3. Within the 6-h measurement period, grazing cows expended 19% more (P<0.001) energy (319 versus 269 kJ/kg metabolic body size (BW0.75)) than zero-grazing cows, and differences in EE did not change with increasing DIM. Grazing cows spent proportionally more (P<0.001) time walking and less time standing (P<0.001) and lying (P<0.05) than zero-grazing cows. The proportion of time spent eating was greater (P<0.001) and that of time spent ruminating lower (P<0.05) for grazing cows compared to zero-grazing cows. In conclusion, the lower feed intake along with unchanged milk production indicates that grazing cows mobilized body reserves to cover additional energy requirements, which were at least partly caused by greater physical activity. However, changes in the cows' behaviour between the considered stages of lactation were too small, so that differences in EE between treatments remained similar with increasing DIM.

Relevance: 30.00%

Abstract:

Since no single experimental or modeling technique provides data that allow a description of transport processes in clays and clay minerals at all relevant scales, several complementary approaches have to be combined to understand and explain the interplay between transport-relevant phenomena. In this paper, molecular dynamics (MD) simulations were used to investigate the mobility of water in the interlayer of montmorillonite (Mt) and to estimate the influence of mineral surfaces and interlayer ions on water diffusion. Random walk (RW) simulations based on a simplified representation of the pore space in Mt were used to estimate and understand the effect of the arrangement of Mt particles on the meso- to macroscopic diffusivity of water. These theoretical calculations were complemented with quasielastic neutron scattering (QENS) measurements of aqueous diffusion in Mt with two pseudo-layers of water, performed at four significantly different energy resolutions (i.e. observation times). The size of the interlayer and the size of the Mt particles are two characteristic dimensions which determine the time-dependent behavior of water diffusion in Mt. MD simulations show that at very short time scales water dynamics has the characteristic features of an oscillatory motion in the cage formed by neighbors in the first coordination shell. At longer time scales, the interaction of water with the surface determines the water dynamics, and the effect of confinement on the overall water mobility within the interlayer becomes evident. At time scales corresponding to an average water displacement equivalent to the average size of Mt particles, the effects of tortuosity are observed in the meso- to macroscopic pore-scale simulations. Consistent with the picture obtained in the simulations, the QENS data can be described by (local) 3D diffusion at short observation times, whereas at sufficiently long observation times a 2D diffusive motion is clearly observed. The effects of tortuosity measured in macroscopic tracer diffusion experiments are in qualitative agreement with the RW simulations. By using experimental data to calibrate molecular and mesoscopic theoretical models, a consistent description of water mobility in clay minerals from the molecular to the macroscopic scale can be achieved. In turn, simulations help in choosing optimal conditions for the experimental measurements and the data interpretation.
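
The meso-scale idea, estimating how the arrangement of obstacles (tortuosity) reduces the effective diffusivity relative to unobstructed diffusion, can be illustrated with a toy random-walk calculation. The sketch below is not the paper's RW model of Mt particle stacks; the lattice geometry, obstacle fraction, and step rules are assumptions chosen purely for illustration.

```python
# Toy random-walk estimate of how obstacles (a crude stand-in for clay
# particle arrangement) reduce the effective diffusion coefficient.
import numpy as np

rng = np.random.default_rng(1)
L = 200                          # periodic square lattice size
obstacle_fraction = 0.30         # assumed fraction of blocked sites
blocked = rng.random((L, L)) < obstacle_fraction

steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
n_walkers, n_steps = 2000, 5000

free_sites = np.argwhere(~blocked)                 # walkers start on free sites
start = free_sites[rng.integers(len(free_sites), size=n_walkers)]
pos = start.copy()
unwrapped = start.astype(float)

for _ in range(n_steps):
    trial = pos + steps[rng.integers(4, size=n_walkers)]
    wrapped = trial % L
    ok = ~blocked[wrapped[:, 0], wrapped[:, 1]]    # reject moves into obstacles
    unwrapped[ok] += (trial - pos)[ok]
    pos[ok] = wrapped[ok]

msd = np.mean(np.sum((unwrapped - start) ** 2, axis=1))
D_eff = msd / (4 * n_steps)                        # 2D: MSD = 4 D t (lattice units)
print(f"effective D relative to obstacle-free walk: {D_eff / 0.25:.2f}")
```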

Relevance: 30.00%

Abstract:

A large-deviations-type approximation to the probability of ruin within a finite time for the compound Poisson risk process perturbed by diffusion is derived. This approximation is based on the saddlepoint method and generalizes the approximation for the non-perturbed risk process by Barndorff-Nielsen and Schmidli (Scand Actuar J 1995(2):169–186, 1995). An importance sampling approximation to this probability of ruin is also provided. Numerical illustrations assess the accuracy of the saddlepoint approximation using importance sampling as a benchmark. The relative deviations between the saddlepoint approximation and importance sampling are very small, even for extremely small probabilities of ruin. The saddlepoint approximation is, however, substantially faster to compute.
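
The quantity being approximated, the finite-time ruin probability of a diffusion-perturbed compound Poisson surplus process, can also be estimated by plain Monte Carlo, which is the crude baseline that saddlepoint and importance-sampling methods improve upon. The sketch below is only that crude baseline with arbitrary parameters; it is neither the paper's saddlepoint approximation nor its importance-sampling scheme.

```python
# Crude Monte Carlo estimate of the finite-time ruin probability for
#   U(t) = u + c*t - S(t) + sigma*W(t),  ruin = {U(t) < 0 for some t <= T},
# with S(t) a compound Poisson sum of exponential claims. For very small
# ruin probabilities this is inefficient; that is what importance sampling fixes.
import numpy as np

rng = np.random.default_rng(2)
u, c, lam, mu, sigma, T = 10.0, 1.2, 1.0, 1.0, 0.5, 50.0   # arbitrary parameters
dt, n_paths = 0.05, 20_000
n_steps = int(T / dt)

x = np.full(n_paths, u)                    # current surplus of each simulated path
alive = np.ones(n_paths, dtype=bool)       # True while the path has not been ruined
for _ in range(n_steps):
    n_claims = rng.poisson(lam * dt, n_paths)
    claims = np.zeros(n_paths)
    has = n_claims > 0
    claims[has] = rng.gamma(n_claims[has], mu)   # sum of exponential(mu) claims
    x = x + c * dt - claims + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    alive &= x >= 0                        # ruin is checked on the time grid only

print("estimated finite-time ruin probability:", 1 - alive.mean())
```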

Relevance: 30.00%

Abstract:

Pollen stratigraphy of a core 270 cm long from Lake Dalgoto at 2310 m in the Northern Pirin Mountains, southern Bulgaria, was treated by optimal partitioning and compared to a broken-stick model to reveal statistically significant pollen zones. The vegetational reconstructions presented here are based on pollen percentages and pollen influx, on comparisons of modern and fossil pollen spectra, and on macrofossil dates from other sites in the mountains. During the Younger Dryas (11000–10200 14C yr BP), an open xerophytic herb vegetation with Artemisia and Chenopodiaceae was widely developed around the lake. Deciduous trees growing at lower elevations contributed to the pollen rain deposited at the higher-elevation sampling sites. Specifically, from 10200 to 8500 yr BP, Quercus, Ulmus, Tilia and Betula expanded rapidly at low and intermediate elevations, and between 8500 and 6500 yr BP they extended to higher elevations close to the upper forest limit, which was formed by Betula pendula at about 1900 m. Coniferous species were limited in the region at this time. After 6500 yr BP, the expansion of conifers (Pinus peuce, P. sylvestris, P. mugo, Abies alba) at high elevations forced the deciduous trees downward. Between 6500 and 3000 yr BP, the forest limit at 2200 m was formed by P. peuce, and A. alba had its maximum vertical range up to 1900 m. Later the abundance and vertical range of P. peuce and A. alba were reduced. After 3000 yr BP, Picea expanded.
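
The zonation step compares the variance explained by successive optimal partitions of the core with the expectation under a broken-stick model: partitions that explain more variance than the corresponding broken-stick piece are taken as statistically significant. The snippet below only computes the standard broken-stick expectations (a textbook formula, not taken from this paper).

```python
# Broken-stick expectations: expected proportion of a unit stick taken by the
# k-th largest of n pieces when the stick is broken at random into n pieces.
# Pollen zonation compares observed variance reductions against these values.
def broken_stick(n):
    """Return the expected proportions for k = 1..n."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

for k, e in enumerate(broken_stick(5), start=1):
    print(f"piece {k}: expected proportion {e:.3f}")
```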

Relevance: 30.00%

Abstract:

Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context, both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, where S stands for the number of 'fences' used in the problem; each fence consists of a set of observations that all originate from different targets. For a dimension of S ˃ the MTT problem becomes NP-hard. As of now, no algorithm exists that can solve an NP-hard problem in an optimal manner within a reasonable (polynomial) computation time. However, there are algorithms that can approximate the solution with a realistic computational effort. To this end, an Elitist Genetic Algorithm is implemented to approximately solve the S ˃ MTT problem in an efficient manner. Its complexity is studied, and it is found that an approximate solution can be obtained in polynomial time. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
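
An elitist genetic algorithm carries the best individuals of each generation over unchanged while the rest of the population is produced by selection, crossover, and mutation. The sketch below applies that generic scheme to a toy observation-to-track assignment problem; the permutation encoding, operators, and parameters are illustrative assumptions, not the thesis implementation.

```python
# Generic elitist genetic algorithm on a toy assignment problem: find the
# permutation (observation-to-track pairing) with minimal total cost.
import numpy as np

rng = np.random.default_rng(3)
n = 12
cost = rng.random((n, n))                       # toy observation-to-track cost matrix

def fitness(perm):
    return -cost[np.arange(n), perm].sum()      # higher is better (lower total cost)

def crossover(p1, p2):
    """Order crossover: copy a slice from p1, fill remaining genes from p2."""
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, dtype=int)
    child[i:j] = p1[i:j]
    fill = iter(g for g in p2 if g not in child[i:j])
    for k in range(n):
        if child[k] < 0:
            child[k] = next(fill)
    return child

def mutate(perm, rate=0.2):
    if rng.random() < rate:                     # swap mutation
        i, j = rng.choice(n, 2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop_size, n_elite, generations = 60, 4, 200
pop = [rng.permutation(n) for _ in range(pop_size)]
for _ in range(generations):
    pop.sort(key=fitness, reverse=True)
    new_pop = pop[:n_elite]                     # elitism: best individuals pass unchanged
    while len(new_pop) < pop_size:
        i, j = rng.integers(0, pop_size // 2, 2)    # parents drawn from the better half
        new_pop.append(mutate(crossover(pop[i], pop[j])))
    pop = new_pop

best = max(pop, key=fitness)
print("best assignment cost:", -fitness(best))
```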

Relevance: 30.00%

Abstract:

PURPOSE In the present case series, the authors report on seven cases of erosively worn dentitions (98 posterior teeth) which were treated with direct resin composite. MATERIALS AND METHODS In all cases, both arches were restored using the so-called stamp technique. All patients were treated with standardized materials and protocols. Prior to treatment, a waxup was made on die-cast models to build up the loss of occlusion as well as to ensure the optimal future anatomy and function of the eroded teeth to be restored. During treatment, teeth were restored using templates of silicone (ie, two "stamps," one on the vestibular and one on the oral aspect of each tooth), which were filled with resin composite in order to transfer the planned future restoration (ie, in the shape of the waxup) from the extraoral to the intraoral situation. Baseline examinations were performed in all patients after treatment, and photographs as well as radiographs were taken. To evaluate the outcome, the modified United States Public Health Service (USPHS) criteria were used. RESULTS The patients were re-assessed after a mean observation time of 40 months (40.8 ± 7.2 months). The overall outcome of the restorations was good, and almost exclusively "Alpha" scores were given. Only the marginal integrity and the anatomical form received a "Charlie" score (10.2%) in two cases. CONCLUSION Direct resin composite restorations made with the stamp technique are a valuable treatment option for restoring erosively worn dentitions.

Relevance: 30.00%

Abstract:

BACKGROUND Although surgery represents the cornerstone treatment of endometrial cancer at initial diagnosis, scarce data are available in the recurrent setting. The purpose of this study was to review the outcome of surgery in these patients. METHODS Medical records of all patients undergoing surgery for recurrent endometrial cancer at NCI Milano between January 2003 and January 2014 were reviewed. Survival was determined from the time of surgery for recurrence to last follow-up. Survival was estimated using Kaplan-Meier methods. Differences in survival were analyzed using the log-rank test. Fisher's exact test was used to compare optimal versus suboptimal cytoreduction against possible predictive factors. RESULTS Sixty-four patients were identified. Median age was 66 years. Recurrences were multiple in 38% of the cases. Optimal cytoreduction was achieved in 65.6%. Median OR time was 165 min, median postoperative hemoglobin drop was 2.4 g/dl, and median length of hospital stay was 5.5 days. Eleven patients developed postoperative complications, but only four required surgical management. Estimated 5-year progression-free survival (PFS) was 42% and 19% in optimally and suboptimally cytoreduced patients, respectively. At multivariate analysis, only residual disease was associated with PFS. Estimated 5-year overall survival (OS) was 60% and 30% in optimally and suboptimally cytoreduced patients, respectively. At multivariate analysis, residual disease and histotype were associated with OS. At multivariate analysis, only performance status was associated with optimal cytoreduction. CONCLUSIONS Secondary cytoreduction in endometrial cancer is associated with long PFS and OS. The only factors associated with improved long-term outcome are the absence of residual disease at the end of surgical resection and histotype.

Relevance: 30.00%

Abstract:

In situ and simultaneous measurement of the three most abundant isotopologues of methane using mid-infrared laser absorption spectroscopy is demonstrated. A field-deployable, autonomous platform is realized by coupling a compact quantum cascade laser absorption spectrometer (QCLAS) to a preconcentration unit, called a trace gas extractor (TREX). This unit enhances CH4 mole fractions by a factor of up to 500 above ambient levels and quantitatively separates interfering trace gases such as N2O and CO2. The analytical precision of the QCLAS isotope measurement on the preconcentrated (750 ppm, parts per million, µmol mol−1) methane is 0.1 and 0.5 ‰ for δ13C- and δD-CH4 at 10 min averaging time. Based on repeated measurements of compressed air during a 2-week intercomparison campaign, the repeatability of the TREX–QCLAS was determined to be 0.19 and 1.9 ‰ for δ13C- and δD-CH4, respectively. In this intercomparison campaign the new in situ technique was compared to isotope-ratio mass spectrometry (IRMS) based on glass flask and bag sampling, and to real-time CH4 isotope analysis by two commercially available laser spectrometers. Both laser-based analyzers were limited to methane mole fraction and δ13C-CH4 analysis, and only one of them, a cavity ring-down spectrometer, was capable of delivering meaningful data for the isotopic composition. After correcting for scale offsets, the average differences between TREX–QCLAS data and bag/flask sampling–IRMS values are within the extended WMO compatibility goals of 0.2 and 5 ‰ for δ13C- and δD-CH4, respectively. This also demonstrates the potential to improve interlaboratory compatibility based on the analysis of a reference air sample with an accurately determined isotopic composition.
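
The δ values quoted above express isotope ratios relative to a reference: δ = (R_sample/R_reference − 1) × 1000 ‰. The small sketch below only illustrates that standard conversion with made-up sample ratios; it is not data from the study.

```python
# Standard delta notation used for the methane isotope results:
#   delta = (R_sample / R_reference - 1) * 1000   (in per mil).
# Sample ratios below are invented for illustration only.
VPDB_13C = 0.0112372      # 13C/12C of the VPDB reference (commonly used value)
VSMOW_D = 0.00015576      # D/H of the VSMOW reference (commonly used value)

def delta_permil(r_sample, r_reference):
    return (r_sample / r_reference - 1.0) * 1000.0

print("d13C:", round(delta_permil(0.010700, VPDB_13C), 1), "permil")
print("dD:  ", round(delta_permil(0.000142, VSMOW_D), 1), "permil")
```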

Relevance: 30.00%

Abstract:

Kydland and Prescott (1977) develop a simple model of monetary policy making, where the central bank needs some commitment technique to achieve optimal monetary policy over time. Although not their main focus, they illustrate the difference between consistent and optimal policy in a sequential-decision one-period world. We employ the analytical method developed in Yuan and Miller (2005), whereby the government appoints a central bank with consistent targets or delegates consistent targets to the central bank. Thus, the central bank's welfare function differs from the social welfare function, which causes consistent policy to prove optimal.
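
For readers unfamiliar with the distinction, the gap between consistent and optimal policy is usually illustrated with the textbook Kydland–Prescott/Barro–Gordon loss-function setup sketched below; this is a standard illustration, not the specific model used in the paper.

```latex
% Textbook illustration (not the paper's exact model): society minimizes a
% quadratic loss over inflation \pi and output y, subject to an
% expectations-augmented Phillips curve.
\begin{align}
  L &= \tfrac{1}{2}\,\pi^2 + \tfrac{\lambda}{2}\,(y - k\bar{y})^2, \qquad k > 1,\\
  y &= \bar{y} + (\pi - \pi^e).
\end{align}
% Consistent (discretionary) policy: minimize L taking \pi^e as given,
% then impose rational expectations \pi^e = \pi:
\begin{equation}
  \pi^{\text{consistent}} = \lambda (k-1)\,\bar{y} > 0 .
\end{equation}
% Optimal policy under commitment achieves the same output with
% \pi^{\text{optimal}} = 0, so discretion carries a pure inflation bias.
```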

Relevance: 30.00%

Abstract:

This paper considers the contracting approach to central banking in the context of a simple common agency model. The recent literature on optimal contracts suggests that the political principal of the central bank can design appropriate incentive schemes that remedy the time-inconsistency problems in monetary policy. The effectiveness of such contracts, however, requires a central banker who attaches a positive weight to the incentive scheme. As a result, delegating monetary policy under such circumstances gives rise to the possibility that the central banker may respond to incentive schemes offered by other potential principals. We introduce common agency considerations into the design of optimal central banker contracts. We introduce two principals - society (government) and an interest group whose objectives conflict with society's - and we examine under what circumstances the government-offered or the interest-group-offered contract dominates. Our results largely depend on the type of bias that the interest group contract incorporates. In particular, when the interest group contract incorporates an inflationary bias, the outcome depends on the principals' relative concern about the incentive schemes' costs. When the interest group contract incorporates an expansionary bias, however, it always dominates the government contract. A corollary of our results is that central banker contracts aiming to remove the expansionary bias of policymakers should be written explicitly in terms of the perceived bias.
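
The "optimal contract" idea referenced here is commonly illustrated with a Walsh-type linear inflation contract: a transfer that penalizes inflation is added to the central banker's objective and removes the inflation bias. The sketch below reuses the textbook framework from the note above and is only an illustration; the paper's common-agency setup adds a second, competing principal.

```latex
% Walsh-type linear inflation contract in the textbook framework above
% (illustrative only; not the paper's common-agency model).
\begin{align}
  L^{cb} &= \tfrac{1}{2}\,\pi^2 + \tfrac{\lambda}{2}\,(y - k\bar{y})^2 - t(\pi),
  \qquad t(\pi) = t_0 - c\,\pi, \\
  \pi^{\text{discretion}} &= \lambda(k-1)\,\bar{y} - c .
\end{align}
% Choosing c = \lambda(k-1)\bar{y} makes consistent policy deliver \pi = 0,
% i.e. the contract reproduces the optimal (commitment) outcome.
```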

Relevance: 30.00%

Abstract:

This paper proposes asymptotically optimal tests for an unstable parameter process under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
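
As one concrete member of the "unstable parameter" family mentioned above, the sketch below implements an ordinary likelihood-ratio (Chow-type) test for a single mean shift at a known date in a Gaussian model. This is a deliberately simple special case included only to fix ideas; the paper's adaptive and semiparametric tests are far more general, and the data and break date here are made up.

```python
# Chow-type likelihood-ratio test for a single break at a known date:
# compare the maximized Gaussian log-likelihood of one regime vs. two.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, tau = 200, 120                                   # sample size, hypothetical break date
y = np.concatenate([rng.normal(0.0, 1.0, tau),
                    rng.normal(0.5, 1.0, n - tau)]) # mean shifts by 0.5 after tau

def gauss_loglik(x):
    """Maximized Gaussian log-likelihood with MLE mean and variance."""
    return -0.5 * len(x) * (np.log(2 * np.pi * x.var()) + 1)

lr = 2 * (gauss_loglik(y[:tau]) + gauss_loglik(y[tau:]) - gauss_loglik(y))
# Under H0 (no break) the statistic is asymptotically chi-squared with 2 df
# here, since both mean and variance are allowed to shift at tau.
p_value = stats.chi2.sf(lr, df=2)
print(f"LR = {lr:.2f}, p-value = {p_value:.4f}")
```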

Relevance: 30.00%

Abstract:

When conducting a randomized comparative clinical trial, ethical, scientific or economic considerations often motivate the use of interim decision rules after successive groups of patients have been treated. These decisions may pertain to the comparative efficacy or safety of the treatments under study, cost considerations, the desire to accelerate the drug evaluation process, or the likelihood of therapeutic benefit for future patients. At the time of each interim decision, an important question is whether patient enrollment should continue or be terminated, either because of a high probability that one treatment is superior to the other, or a low probability that the experimental treatment will ultimately prove to be superior. The use of frequentist group sequential decision rules has become routine in the conduct of phase III clinical trials. In this dissertation, we present a new Bayesian decision-theoretic approach to the problem of designing a randomized group sequential clinical trial, focusing on two-arm trials with time-to-failure outcomes. Forward simulation is used to obtain optimal decision boundaries for each of a set of possible models. At each interim analysis, we use Bayesian model selection to adaptively choose the model having the largest posterior probability of being correct, and we then make the interim decision based on the boundaries that are optimal under the chosen model. We provide a simulation study to compare this method, which we call Bayesian Doubly Optimal Group Sequential (BDOGS), to corresponding frequentist designs using either O'Brien-Fleming (OF) or Pocock boundaries, as obtained from EaSt 2000. Our simulation results show that, over a wide variety of different cases, BDOGS either performs at least as well as both OF and Pocock, or on average provides a much smaller trial.
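
The interim machinery can be pictured, in a much simplified form, as computing at each look the posterior probability that the experimental arm has the lower hazard and stopping when that probability crosses a boundary. The sketch below uses exponential event times with conjugate Gamma priors and arbitrary fixed boundaries; it is not the BDOGS procedure, which optimizes the boundaries by forward simulation and adds Bayesian model selection over candidate models.

```python
# Highly simplified Bayesian group sequential sketch: exponential event times,
# Gamma priors on the hazard rates, interim looks after each patient group,
# stop for superiority/futility on posterior probability thresholds.
# The boundaries below are arbitrary, not forward-simulated optimal ones.
import numpy as np

rng = np.random.default_rng(5)
a0, b0 = 1.0, 1.0                      # Gamma prior (shape, rate) for each hazard
lam_c, lam_e = 0.10, 0.06              # true hazards (experimental arm is better)
group, looks = 40, 5                   # patients per arm per look, number of looks
upper, lower = 0.975, 0.05             # superiority / futility boundaries (arbitrary)

events = {"C": 0.0, "E": 0.0}
exposure = {"C": 0.0, "E": 0.0}
for look in range(1, looks + 1):
    for arm, lam in (("C", lam_c), ("E", lam_e)):
        t = rng.exponential(1 / lam, group)
        events[arm] += np.sum(t <= 24.0)              # events before 24 months
        exposure[arm] += np.minimum(t, 24.0).sum()    # administrative censoring at 24
    # Gamma posteriors for the two hazards (conjugate for exponential data)
    post_c = rng.gamma(a0 + events["C"], 1 / (b0 + exposure["C"]), 10_000)
    post_e = rng.gamma(a0 + events["E"], 1 / (b0 + exposure["E"]), 10_000)
    p_better = np.mean(post_e < post_c)
    print(f"look {look}: P(experimental hazard lower) = {p_better:.3f}")
    if p_better > upper or p_better < lower:
        print("stop:", "superiority" if p_better > upper else "futility")
        break
```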

Relevance: 30.00%

Abstract:

The purpose of this study was to assess the accuracy and precision of airborne volatile organic compound (VOC) concentrations measured using passive air samplers (3M 3500 organic vapor monitors) over extended sampling durations (9 and 15 days). A total of forty-five organic vapor monitor samples were collected at a State of Texas air monitoring site during two different sampling periods (July/August and November 2008). The results of this study indicate that, for most of the tested compounds, there was no significant difference between long-term (9- or 15-day) sample concentrations and the means of parallel consecutive short-term (3-day) sample concentrations. Biases of 9- or 15-day measurements versus consecutive 3-day measurements showed considerable variability. Those compounds that had percent bias values of <10% are suggested as acceptable for long-term sampling (9 and 15 days). Of the twenty-one compounds examined, 10 are classified as acceptable for long-term sampling; these include m,p-xylene, 1,2,4-trimethylbenzene, n-hexane, ethylbenzene, benzene, toluene, o-xylene, d-limonene, dimethylpentane and methyl tert-butyl ether. The ratio of sampling-procedure variability to within-day variability was approximately 1.89 for both sampling periods for the 3-day vs. 9-day comparisons and approximately 2.19 for both sampling periods for the 3-day vs. 15-day comparisons. Considerably higher concentrations of most VOCs were measured during the November sampling period than during the July/August period. These differences may be a result of varying meteorological conditions during the two periods, e.g., differences in wind direction and wind speed. Further studies are suggested to evaluate the accuracy and precision of 3M 3500 organic vapor monitors over extended sampling durations.
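
The acceptance criterion is a simple percent-bias comparison between one long-duration measurement and the mean of the consecutive short-duration measurements covering the same interval. The sketch below shows that calculation with made-up concentrations, not the study's data.

```python
# Percent bias of a long-term passive-sampler result against the mean of
# consecutive short-term samples covering the same period. The study's
# acceptance threshold was |bias| < 10%; values below are invented.
def percent_bias(long_term, short_terms):
    reference = sum(short_terms) / len(short_terms)
    return 100.0 * (long_term - reference) / reference

# e.g. one hypothetical 9-day benzene result vs. three 3-day results (ug/m3)
print(f"{percent_bias(1.32, [1.20, 1.35, 1.30]):+.1f}%")
```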

Relevance: 30.00%

Abstract:

Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive studies have focused on estimating the unbiased distribution from left-truncated samples. In many applications, however, the exact date of disease onset is not observed. For example, in an HIV infection study, the exact HIV infection time is not observable; it is only known that infection occurred between two observable dates. Meeting these challenges motivated our study. We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling. We then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobserved exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
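
The core computational step, maximizing a parametric likelihood that conditions on left truncation and accommodates right censoring, can be sketched as below for a Weibull model with exactly observed onsets; the uncertain-onset extension in the paper additionally integrates over the possible onset interval. The distribution choice, simulated data, and optimizer settings are assumptions for illustration only.

```python
# Maximum likelihood for left-truncated, right-censored Weibull data: each
# subject contributes f(t)^d * S(t)^(1-d) / S(a), where a is the truncation
# (entry) time and d the event indicator. Simplified sketch with exactly
# observed onsets.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, shape_true, scale_true = 500, 1.5, 10.0

t_full = scale_true * rng.weibull(shape_true, n)      # latent event times
a = rng.uniform(0, 8, n)                              # truncation (entry) times
keep = t_full > a                                     # only subjects surviving past entry
t, a = t_full[keep], a[keep]
c = a + rng.uniform(2, 15, keep.sum())                # censoring times after entry
d = (t <= c).astype(float)                            # event indicator
t_obs = np.minimum(t, c)

def neg_loglik(par):
    shape, scale = np.exp(par)                        # enforce positivity
    z_t, z_a = (t_obs / scale) ** shape, (a / scale) ** shape
    log_f = np.log(shape / scale) + (shape - 1) * np.log(t_obs / scale) - z_t
    log_S = -z_t
    # +z_a is -log S(a), the left-truncation correction
    return -np.sum(d * log_f + (1 - d) * log_S + z_a)

fit = minimize(neg_loglik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
print("estimated shape, scale:", np.exp(fit.x))
```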