891 results for sampling error
Abstract:
This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the error normality assumption is very restrictive, and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically via the posterior densities of the coefficients.
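As a rough illustration of the kind of construction this abstract refers to (and not the authors' implementation), the following Python sketch draws symmetric regression errors from a truncated stick-breaking approximation to a Dirichlet process scale mixture of normals, embedded in a toy measurement-error regression; every name and hyperparameter here is an assumption.

```python
# Hedged sketch: DP scale mixture of zero-mean normals as a flexible symmetric error law
# inside a simple measurement-error regression. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking weights approximating a DP(alpha, G0) prior."""
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

def sample_symmetric_errors(n, alpha=1.0, K=50, rng=rng):
    """Draw errors from a DP scale mixture of zero-mean normals (symmetric, unknown kurtosis)."""
    w = stick_breaking_weights(alpha, K, rng)
    scales = rng.gamma(2.0, 1.0, size=K)          # base measure G0 over component scales
    comp = rng.choice(K, size=n, p=w / w.sum())   # renormalize the truncated weights
    return rng.normal(0.0, scales[comp])

# Measurement-error regression: true covariate x, observed w = x + u, response y = b0 + b1*x + e
n, b0, b1 = 200, 1.0, 2.0
x = rng.normal(0.0, 1.0, n)
w_obs = x + rng.normal(0.0, 0.3, n)               # covariate observed with error
y = b0 + b1 * x + sample_symmetric_errors(n)      # flexible symmetric regression errors
```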
Abstract:
Convex combinations of long memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing standard methods with their combined versions, the latter achieve lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
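A minimal sketch of the combination idea, under assumed inputs (the estimates, their variances, and the low-rate bias would come from the estimators and from the correction of Souza and Smith (2002), none of which is reproduced here):

```python
# Hedged sketch: combine a full-rate long-memory estimate with a bias-corrected lower-rate
# estimate using the variance-minimizing convex weight. All numbers are made up.
import numpy as np

def combine_estimates(d_full, d_low, bias_low, var_full, var_low, cov=0.0):
    """Convex combination w*d_full + (1-w)*(d_low - bias_low) with variance-minimizing w."""
    d_low_corrected = d_low - bias_low                      # preliminary bias correction
    denom = var_full + var_low - 2.0 * cov
    w = (var_low - cov) / denom if denom > 0 else 0.5       # optimal convex weight
    w = float(np.clip(w, 0.0, 1.0))
    return w * d_full + (1.0 - w) * d_low_corrected, w

# Example with hypothetical semi-parametric estimates of d from the same series
d_hat, w = combine_estimates(d_full=0.32, d_low=0.27, bias_low=-0.04,
                             var_full=0.010, var_low=0.016, cov=0.004)
```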
Abstract:
OBJECTIVES To investigate the feasibility and validity of sampling blood from the carpal pad in hospitalised healthy and diabetic dogs. METHODS The carpal pad was compared to the ear as a sampling site in 60 dogs (30 healthy and 30 diabetic dogs). RESULTS Lancing the pads was very well tolerated. The average glucose concentrations in blood samples obtained from the ears and carpal pads exhibited a strong positive correlation (r = 0.938) and there were no significant differences between them (P = 0.914). In addition, 98.3% of the values obtained were clinically acceptable when assessed by the error grid analysis. CLINICAL SIGNIFICANCE The carpal pad is a good alternative sampling site for home monitoring, especially in animals with a soft and/or light-coloured pad.
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms for computing probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount) are provided. It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
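To make the importance-sampling idea concrete, here is a hedged toy sketch for the simplest sub-case mentioned (a compound Poisson risk process without the diffusion term), using exponential tilting of the claim sizes; the parameter names and the tilting choice are assumptions, not the paper's algorithms:

```python
# Hedged sketch: finite-horizon ruin probability of R(t) = x + c*t - sum of claims,
# estimated with Exp(mu) claim sizes tilted to Exp(mu - theta).
import numpy as np

rng = np.random.default_rng(1)

def ruin_prob_is(x, c, lam, mu, theta, t_horizon, n_paths=5_000):
    """Estimate P(ruin before t_horizon) with exponentially tilted claim sizes."""
    assert 0.0 < theta < mu
    estimates = np.empty(n_paths)
    for i in range(n_paths):
        t, claims_sum, log_lr, ruined = 0.0, 0.0, 0.0, False
        while True:
            t += rng.exponential(1.0 / lam)                  # next claim arrival time
            if t > t_horizon:
                break
            z = rng.exponential(1.0 / (mu - theta))          # claim size under the tilted law
            log_lr += np.log(mu / (mu - theta)) - theta * z  # log of f(z)/g(z) for this claim
            claims_sum += z
            if x + c * t - claims_sum < 0.0:                 # surplus drops below zero: ruin
                ruined = True
                break
        estimates[i] = np.exp(log_lr) if ruined else 0.0
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_paths)

# Example call with made-up parameters (positive safety loading: c > lam / mu)
p_hat, std_err = ruin_prob_is(x=25.0, c=1.5, lam=1.0, mu=1.0, theta=0.4, t_horizon=50.0)
```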
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
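As a toy illustration of the "a posteriori" style of method surveyed here (not any specific published algorithm), the sketch below allocates extra Monte Carlo samples where the estimated variance of the per-pixel mean is largest; `render_sample` is a hypothetical stand-in for a renderer:

```python
# Hedged sketch: two-pass adaptive sampling driven by per-pixel variance estimates.
import numpy as np

rng = np.random.default_rng(2)

def render_sample(x, y):
    """Stand-in for one Monte Carlo radiance sample at pixel (x, y)."""
    return np.sin(0.1 * x) * np.cos(0.1 * y) + rng.normal(0.0, 0.3)

def adaptive_render(w, h, init_spp=4, extra_budget=4 * 64 * 64):
    sums = np.zeros((h, w)); sq_sums = np.zeros((h, w)); counts = np.zeros((h, w))
    for _ in range(init_spp):                       # uniform initial pass
        for y in range(h):
            for x in range(w):
                s = render_sample(x, y)
                sums[y, x] += s; sq_sums[y, x] += s * s; counts[y, x] += 1
    var = np.maximum(sq_sums / counts - (sums / counts) ** 2, 0.0)
    err = var / counts                              # estimated variance of the pixel mean
    alloc = np.floor(extra_budget * err / err.sum()).astype(int)
    for y in range(h):                              # adaptive pass: more samples where err is high
        for x in range(w):
            for _ in range(alloc[y, x]):
                sums[y, x] += render_sample(x, y); counts[y, x] += 1
    return sums / counts                            # trivial "reconstruction": per-pixel mean

image = adaptive_render(64, 64)
```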
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
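The error-estimation step can be illustrated with a small cross-validation-style sketch (a toy, not a method from the course): samples are split into two half-buffers, each is filtered with several Gaussian widths, and the width whose filtered halves agree best is chosen per pixel; `scipy` is assumed to be available and all names are illustrative:

```python
# Hedged sketch: per-pixel selection among candidate reconstruction filters using the
# discrepancy between two independent half-buffers as an error proxy.
import numpy as np
from scipy.ndimage import gaussian_filter

def select_filter(half_a, half_b, widths=(0.5, 1.0, 2.0, 4.0)):
    """half_a, half_b: images averaged from two independent halves of the sample budget."""
    errors, outputs = [], []
    for s in widths:
        fa, fb = gaussian_filter(half_a, s), gaussian_filter(half_b, s)
        errors.append((fa - fb) ** 2)               # cross-half discrepancy per pixel
        outputs.append(0.5 * (fa + fb))
    errors, outputs = np.stack(errors), np.stack(outputs)
    best = np.argmin(errors, axis=0)                # best filter width index per pixel
    return np.take_along_axis(outputs, best[None], axis=0)[0]

# Example: a smooth ground truth corrupted by independent noise in each half-buffer
rng = np.random.default_rng(5)
truth = np.sin(np.linspace(0, 3 * np.pi, 128))[None, :] * np.ones((128, 1))
denoised = select_filter(truth + 0.2 * rng.standard_normal(truth.shape),
                         truth + 0.2 * rng.standard_normal(truth.shape))
```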
Abstract:
The Interstellar Boundary Explorer (IBEX) has been directly observing neutral atoms from the local interstellar medium for the last six years (2009–2014). This paper ties together the 14 studies in this Astrophysical Journal Supplement Series Special Issue, which collectively describe the IBEX interstellar neutral results from this epoch and provide a number of other relevant theoretical and observational results. Interstellar neutrals interact with each other and with the ionized portion of the interstellar population in the “pristine” interstellar medium ahead of the heliosphere. Then, in the heliosphere’s close vicinity, the interstellar medium begins to interact with escaping heliospheric neutrals. In this study, we compare the results from two major analysis approaches led by IBEX groups in New Hampshire and Warsaw. We also directly address the question of the distance upstream to the pristine interstellar medium and adjust both sets of results to a common distance of ~1000 AU. The two analysis approaches are quite different, but yield fully consistent measurements of the interstellar He flow properties, further validating our findings. While detailed error bars are given for both approaches, we recommend that for most purposes, the community use “working values” of ~25.4 km s⁻¹, ~75.7° ecliptic inflow longitude, ~−5.1° ecliptic inflow latitude, and ~7500 K temperature at ~1000 AU upstream. Finally, we briefly address future opportunities for even better interstellar neutral observations to be provided by the Interstellar Mapping and Acceleration Probe mission, which was recommended as the next major Heliophysics mission by the NRC’s 2013 Decadal Survey.
Abstract:
The youngest ice marginal zone between the White Sea and the Ural mountains is the W-E trending belt of moraines called the Varsh-Indiga-Markhida-Harbei-Halmer-Sopkay, here called the Markhida line. Glacial elements show that it was deposited by the Kara Ice Sheet and, in the west, by the Barents Ice Sheet. The Markhida moraine overlies Eemian marine sediments and is therefore of Weichselian age. Distal to the moraine are Eemian marine sediments and three Palaeolithic sites, not covered by till, with many C-14 dates in the range 16-37 ka, proving that the moraine represents the maximum ice sheet extension during the Weichselian. The Late Weichselian ice limit of M. G. Grosswald is about 400 km (near the Urals more than 700 km) too far south. Shorelines of ice-dammed Lake Komi, probably dammed by the ice sheet ending at the Markhida line, predate 37 ka. We conclude that the Markhida line is of Middle/Early Weichselian age, implying that no ice sheet reached this part of northern Russia during the Late Weichselian. This age is supported by a series of C-14 and OSL dates inside the Markhida line, all of >45 ka. Two moraine loops protrude south of the Markhida line: the Laya-Adzva and Rogavaya moraines. These moraines are covered by Lake Komi sediments, and many C-14 dates on mammoth bones inside the moraines are 26-37 ka. The morphology indicates that the moraines are of Weichselian age, but a Saalian age cannot be excluded. No post-glacial emerged marine shorelines are found along the Barents Sea coast north of the Markhida line.
Abstract:
A late Quaternary pollen record from northern Sakhalin Island (51.34°N, 142.14°E, 15 m a.s.l.) spanning the last 43.7 ka was used to reconstruct regional climate dynamics and vegetation distribution by using the modern analogue technique (MAT). The long-term trends of the reconstructed mean annual temperature (TANN) and precipitation (PANN), and total tree cover are generally in line with key palaeoclimate records from the North Atlantic region and the Asian monsoon domain. TANN largely follows the fluctuations in solar summer insolation at 55°N. During Marine Isotope Stage (MIS) 3, TANN and PANN were on average 0.2 °C and 700 mm, respectively, thus very similar to late Holocene/modern conditions. Full glacial climate deterioration (TANN = -3.3 °C, PANN = 550 mm) was relatively weak as suggested by the MAT-inferred average climate parameters and tree cover densities. However, error ranges of the climate reconstructions during this interval are relatively large and the last glacial environments in northern Sakhalin could be much colder and drier than suggested by the weighted average values. An anti-phase relationship between mean temperature of the coldest (MTCO) and warmest (MTWA) month is documented during the last glacial period, i.e. MIS 2 and 3, suggesting more continental climate due to sea levels that were lower than present. Warmest and wettest climate conditions have prevailed since the end of the last glaciation with an optimum (TANN = 1.5 °C, PANN = 800 mm) in the middle Holocene interval (ca 8.7-5.2 cal. ka BP). This lags behind the solar insolation peak during the early Holocene. We propose that this is due to continuous Holocene sea level transgression and regional influence of the Tsushima Warm Current, which reached maximum intensity during the middle Holocene. Several short-term climate oscillations are suggested by our reconstruction results and correspond to Northern Hemisphere Heinrich and Dansgaard-Oeschger events, the Bølling-Allerød and the Younger Dryas. The most prominent fluctuation is registered during Heinrich 4 event, which is marked by noticeably colder and drier conditions and the spread of herbaceous taxa.
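For readers unfamiliar with the modern analogue technique, the following generic sketch (assumed data layout, not the authors' exact implementation) reconstructs climate for a fossil pollen spectrum as the inverse-distance-weighted mean climate of its k closest modern analogues under the squared-chord distance:

```python
# Hedged sketch of a generic MAT reconstruction; arrays and values are illustrative.
import numpy as np

def squared_chord(fossil, modern):
    """Squared-chord distances between one fossil spectrum and rows of `modern` (proportions)."""
    return np.sum((np.sqrt(fossil) - np.sqrt(modern)) ** 2, axis=1)

def mat_reconstruct(fossil_spectrum, modern_spectra, modern_climate, k=5):
    """Inverse-distance-weighted mean climate of the k best modern analogues."""
    d = squared_chord(fossil_spectrum, modern_spectra)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-9)
    return np.average(modern_climate[idx], weights=w, axis=0)

# Example: 4 taxa, 3 modern samples with known (TANN, PANN), one fossil spectrum
modern = np.array([[0.5, 0.3, 0.1, 0.1],
                   [0.2, 0.4, 0.3, 0.1],
                   [0.1, 0.1, 0.4, 0.4]])
climate = np.array([[1.5, 800.0], [0.2, 700.0], [-3.3, 550.0]])   # (TANN in °C, PANN in mm)
fossil = np.array([0.3, 0.35, 0.25, 0.1])
tann, pann = mat_reconstruct(fossil, modern, climate, k=2)
```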
Abstract:
In this work we establish some results in sampling theory for U-invariant subspaces of a separable Hilbert space H, also called atomic subspaces. These spaces are a generalization of the well-known shift-invariant subspaces in L2(R); here the space L2(R) is replaced by H, and the shift operator by U. Having as data the samples of some related operators, we derive frame expansions allowing the recovery of the elements in the atomic subspace A_a. Moreover, we include a frame perturbation-type result for the case where the samples are affected by a jitter error.
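A generic frame-expansion recovery formula of the type alluded to, written here only as a hedged sketch (the notation, the number of sampling operators s, and the generator a are assumptions, not necessarily the paper's):

```latex
% Generic form of a sampling/frame expansion in a U-invariant (atomic) subspace:
% the samples (L_j f)(n), taken through operators L_j related to U, recover f via
% a dual frame {U^n s_j} of the subspace A_a.
\[
  f \;=\; \sum_{j=1}^{s} \sum_{n \in \mathbb{Z}} \big(L_j f\big)(n)\, U^{n} s_j
  \qquad \text{for all } f \in \mathcal{A}_a ,
\]
% generalizing the shift-invariant case in $L^2(\mathbb{R})$, where $U$ is translation by one
% and $\mathcal{A}_a = \overline{\operatorname{span}}\{\varphi(\cdot - n) : n \in \mathbb{Z}\}$.
```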
Abstract:
The aim of this study was to determine the most informative sampling time(s) providing a precise prediction of tacrolimus area under the concentration-time curve (AUC). Fifty-four concentration-time profiles of tacrolimus from 31 adult liver transplant recipients were analyzed. Each profile contained 5 tacrolimus whole-blood concentrations (predose and 1, 2, 4, and 6 or 8 hours postdose), measured using liquid chromatography-tandem mass spectrometry. The concentration at 6 hours was interpolated for each profile, and 54 values of AUC(0-6) were calculated using the trapezoidal rule. The best sampling times were then determined using limited sampling strategies and sensitivity analysis. Linear mixed-effects modeling was performed to estimate regression coefficients of equations incorporating each concentration-time point (C0, C1, C2, C4, interpolated C5, and interpolated C6) as a predictor of AUC(0-6). Predictive performance was evaluated by assessment of the mean error (ME) and root mean square error (RMSE). Limited sampling strategy (LSS) equations with C2, C4, and C5 provided similar results for prediction of AUC(0-6) (R² = 0.869, 0.844, and 0.832, respectively). These 3 time points were superior to C0 in the prediction of AUC. The ME was similar for all time points; the RMSE was smallest for C2, C4, and C5. The highest sensitivity index was determined to be 4.9 hours postdose at steady state, suggesting that this time point provides the most information about the AUC(0-12). The results from limited sampling strategies and sensitivity analysis supported the use of a single blood sample at 5 hours postdose as a predictor of both AUC(0-6) and AUC(0-12). A jackknife procedure was used to evaluate the predictive performance of the model, and this demonstrated that collecting a sample at 5 hours after dosing could be considered as the optimal sampling time for predicting AUC(0-6).
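As a small illustration of the two computational steps involved (assumed data, not the study's model), the sketch below computes AUC(0-6) by the trapezoidal rule from a sparse profile and fits a single-point limited sampling equation AUC(0-6) ≈ a + b·C5 by least squares:

```python
# Hedged sketch: trapezoidal AUC from sparse concentration-time points and a one-point
# LSS regression. Profiles and units are hypothetical.
import numpy as np

def auc_trapezoid(times_h, conc):
    """AUC by the linear trapezoidal rule over the sampled time points."""
    t, c = np.asarray(times_h, float), np.asarray(conc, float)
    return np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)

# Hypothetical profiles: concentrations at 0, 1, 2, 4, 5, 6 h post-dose (ng/mL)
profiles = np.array([[5.1, 14.2, 18.9, 12.3, 10.8, 9.6],
                     [6.0, 16.0, 20.5, 13.1, 11.5, 10.2],
                     [4.2, 11.9, 15.8, 10.4,  9.1,  8.0]])
times = [0, 1, 2, 4, 5, 6]

auc06 = np.array([auc_trapezoid(times, p) for p in profiles])
c5 = profiles[:, 4]                                  # 5-hour concentration
b, a = np.polyfit(c5, auc06, 1)                      # LSS equation: AUC(0-6) ≈ a + b*C5
pred = a + b * c5
rmse = np.sqrt(np.mean((pred - auc06) ** 2))         # predictive performance of the equation
```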
Abstract:
Two direct sampling correlator-type receivers for differential chaos shift keying (DCSK) communication systems under frequency non-selective fading channels are proposed. These receivers operate on the same hardware platform with different architectures. In the first scheme, namely the sum-delay-sum (SDS) receiver, the sum of all samples in a chip period is correlated with its delayed version. The correlation value obtained in each bit period is then compared with a fixed threshold to decide the binary value of the recovered bit at the output. In the second scheme, namely the delay-sum-sum (DSS) receiver, the correlation of all samples with their delayed versions is calculated in each chip period. The sum of the correlation values in each bit period is then compared with the threshold to recover the data. The conventional DCSK transmitter, the frequency non-selective Rayleigh fading channel, and the two proposed receivers are mathematically modelled in the discrete-time domain. The authors evaluate the bit error rate performance of the receivers by means of both theoretical analysis and numerical simulation. The performance comparison shows that the two proposed receivers perform well under the studied channel; performance improves as the number of paths increases, and the DSS receiver outperforms the SDS one.
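A toy discrete-time sketch of the two decision statistics, under an assumed DCSK framing (M reference chips followed by M information chips per bit, N direct samples per chip, antipodal modulation); this is not the authors' receiver model:

```python
# Hedged sketch: sum-delay-sum (SDS) vs delay-sum-sum (DSS) decision statistics.
import numpy as np

def sds_statistic(rx, M, N):
    """SDS: sum the N samples of each chip first, then correlate with the chip sums
    delayed by the reference half (M chips)."""
    chip_sums = rx.reshape(2 * M, N).sum(axis=1)
    return np.sum(chip_sums[M:] * chip_sums[:M])

def dss_statistic(rx, M, N):
    """DSS: correlate sample-by-sample with the delayed samples within each chip,
    then sum the per-chip correlations over the bit period."""
    samples = rx.reshape(2 * M, N)
    return np.sum(samples[M:] * samples[:M])

# Example: one bit through an AWGN stand-in for the fading channel, threshold at zero
rng = np.random.default_rng(3)
M, N = 32, 4
ref = rng.standard_normal((M, N))                    # stand-in for a chaotic reference
bit = 1
tx = np.vstack([ref, ref if bit else -ref]).ravel()  # reference half + data half
rx = tx + 0.5 * rng.standard_normal(tx.size)
bit_hat_sds = int(sds_statistic(rx, M, N) > 0)
bit_hat_dss = int(dss_statistic(rx, M, N) > 0)
```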
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which could lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
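To illustrate one of the load estimation methods named above, here is a hedged sketch of a flow-weighted ratio estimator applied to sub-sampled concentration data (variable names, units, and the synthetic data are assumptions, not the study's implementation):

```python
# Hedged sketch: ratio estimator for an annual nitrate-N load from sparse concentration
# samples and continuous daily flow.
import numpy as np

def ratio_estimator_load(daily_flow, sample_days, sample_conc):
    """Annual load (kg) from daily flow (m^3/day) and concentrations (mg/L) on sampled days."""
    q = np.asarray(daily_flow, float)
    qs = q[sample_days]                              # flow on the sampled days
    loads_sampled = sample_conc * qs * 1e-3          # mg/L * m^3/day -> g/day, * 1e-3 -> kg/day
    # ratio estimator: (mean sampled daily load / mean sampled flow) * total annual flow
    return loads_sampled.mean() * (q.mean() / qs.mean()) * q.size

# Hypothetical year: synthetic daily flow and a two-week fixed-interval sampling schedule
rng = np.random.default_rng(4)
flow = 50.0 + 40.0 * np.abs(np.sin(np.linspace(0, 2 * np.pi, 365))) + rng.gamma(2.0, 5.0, 365)
days = np.arange(0, 365, 14)                         # two-week sampling frequency
conc = 8.0 + 0.05 * flow[days] + rng.normal(0, 1.0, days.size)   # mg/L, loosely flow-related
annual_load_kg = ratio_estimator_load(flow, days, conc)
```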