914 results for Error Probability


Relevance: 30.00%

Abstract:

Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distribution is a class of asymmetric thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of the skew-normal/independent distribution as a robust alternative to the null-intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
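As a rough illustration of why the asymmetric family can outperform a symmetric normal fit (a sketch only, not the paper's Bayesian null-intercept model; the sample size and shape parameter below are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Right-skewed "latent" sample (shape parameter a=4 is arbitrary)
data = stats.skewnorm.rvs(a=4, loc=0, scale=1, size=500, random_state=rng)

# Maximum-likelihood fits of the symmetric and the asymmetric family
norm_params = stats.norm.fit(data)
skew_params = stats.skewnorm.fit(data)

ll_norm = stats.norm.logpdf(data, *norm_params).sum()
ll_skew = stats.skewnorm.logpdf(data, *skew_params).sum()
```

Because the skew-normal family contains the normal as the special case a=0, its maximized log-likelihood `ll_skew` is at least as large as `ll_norm`, and clearly larger when the data are genuinely skewed.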

Relevance: 30.00%

Abstract:

The aim of the present study was to determine the classification error probabilities, as lean or obese, in hypercaloric diet-induced obesity, which depend on the variable used to characterize animal obesity. In addition, the misclassification probabilities in animals submitted to a normocaloric diet were also evaluated. Male Wistar rats were randomly distributed into two groups: normal diet (ND; n=31; 3.5 kcal/g) and hypercaloric diet (HD; n=31; 4.6 kcal/g). The ND group received commercial Labina rat feed, and the HD animals received a cycle of five hypercaloric diets over a 14-week period. The variables analysed were body weight, body composition, body weight to length ratio, Lee index, body mass index and misclassification probability. A 5% significance level was used. The hypercaloric pellet-diet cycle promoted increases in body weight, carcass fat, body weight to length ratio and Lee index. The total misclassification probabilities ranged from 19.21% to 40.91%. In conclusion, the results of this experiment show that misclassification probabilities occur when dietary manipulation is used to promote obesity in animals. This misjudgement ranges from 19.49% to 40.52% in the hypercaloric diet and from 18.94% to 41.30% in the normocaloric diet.
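A minimal sketch of how such misclassification probabilities arise from overlapping group distributions; the means, SDs and cutoff below are invented for illustration and are not the study's data:

```python
from scipy import stats

# All numbers invented for illustration (grams; not the study's measurements)
nd_mean, nd_sd = 450.0, 40.0   # normal-diet body weights
hd_mean, hd_sd = 520.0, 55.0   # hypercaloric-diet body weights
cutoff = 485.0                 # decision threshold: above it, call "obese"

p_lean_called_obese = stats.norm.sf(cutoff, nd_mean, nd_sd)
p_obese_called_lean = stats.norm.cdf(cutoff, hd_mean, hd_sd)

# Equal group sizes, matching the n=31 vs n=31 design
total_misclass = 0.5 * (p_lean_called_obese + p_obese_called_lean)
```

The total error is the average of the two tail areas on either side of the cutoff; the more the two groups overlap, the larger it is, which is why the choice of obesity variable matters.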

Relevance: 30.00%

Abstract:

To estimate the heritability of the probability that yearling heifers would become pregnant, we analyzed the records of 11,487 Nellore animals that participated in breeding seasons at three farms in the Brazilian states of São Paulo and Mato Grosso do Sul. All heifers were exposed to a bull at the age of about 14 mo. The probability of pregnancy was analyzed as a categorical trait, with a value of 1 (success) assigned to heifers that were diagnosed pregnant by rectal palpation about 60 d after the end of the 90-d breeding season and a value of 0 (failure) assigned to those that were not pregnant at that time. The estimate of heritability, obtained by Method R, was 0.57 with a standard error of 0.01. EPDs were predicted using a maximum a posteriori threshold method and expressed as deviations from a 50% probability. The range in EPD was -24.50% to 24.55%, with a mean of 0.78% and a SD of 7.46%. We conclude that EPDs for probability of pregnancy can be used to select heifers with a higher probability of being fertile. However, they are mainly recommended for the selection of bulls for the production of precocious daughters, because the accuracy of prediction is higher for bulls, depending on their number of daughters.
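The mapping between the underlying liability scale of a threshold model and an EPD expressed as a deviation from 50% probability can be sketched as follows; the probit link is an assumption consistent with the threshold method named above, and the numbers are illustrative:

```python
from scipy.stats import norm

def epd_from_liability(u):
    """Deviation (in %) from a 50% pregnancy probability at liability u.

    Assumes a probit link: probability = Phi(u), so u = 0 maps to 50%.
    """
    return 100.0 * (norm.cdf(u) - 0.5)
```

Under this link, a liability of zero gives a 0% deviation, and a liability of about 0.66 corresponds to roughly the +24.5% upper end of the EPD range reported above.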

Relevance: 30.00%

Abstract:

A free-space optical (FSO) laser communication system with perfect fast tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast tracking or with imperfect fast tracking, the fading probability density function (pdf) is also affected by pointing error. In this thesis, the overall fading pdfs of FSO communication systems with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors. The calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and log-normal (LN) fading pdf models; they fit the simulations better than both the GG and LN models under different receiver aperture sizes in all the studied cases. Further, the analytical method is extended to FSO communication systems with a diverging beam. It is shown that the gamma pdf model remains valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a diverging angle. Large-scale numerical wave-optics simulations confirm that the analytically calculated fading pdfs closely fit the overall fading pdfs for both focused and diverged beam cases. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile, and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with heuristic fading pdf models proposed since the 1970s. Although some of the previously proposed models fit experimental and simulation data closely, these close fits hold only under particular conditions. Only the analytical method fits the directly simulated fading pdfs accurately under different turbulence strengths, propagation distances, receiver aperture sizes and pointing errors.
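The gamma-gamma model used as a comparison baseline above can be written out directly. This is the standard form of the GG irradiance pdf (with illustrative alpha and beta values, not values from the thesis), together with a numerical check that it integrates to one:

```python
import numpy as np
from scipy.special import kv, gammaln

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf at normalized intensity I > 0."""
    # Normalizing constant computed in log space for stability
    log_c = ((alpha + beta) / 2.0) * np.log(alpha * beta) + np.log(2.0) \
            - gammaln(alpha) - gammaln(beta)
    return np.exp(log_c) * I ** ((alpha + beta) / 2.0 - 1.0) \
           * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

# Sanity check: the pdf should integrate to ~1 over the intensity axis
I_grid = np.linspace(1e-6, 20.0, 200001)
p = gamma_gamma_pdf(I_grid, alpha=4.0, beta=2.0)
total = float(np.sum((p[1:] + p[:-1]) * np.diff(I_grid)) / 2.0)
```

Here alpha and beta parameterize the large- and small-scale turbulence eddies; kv is the modified Bessel function of the second kind.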

Relevance: 30.00%

Abstract:

Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on coastal inundation mapping. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. Each of 1,000 simulated error surfaces was added to the original DEM, and the result was reclassified using a hydrologically correct bathtub method. The probability of inundation for a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated as the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, the results showed that at a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach.
The probabilistic approach provides visually intuitive maps that convey uncertainties inherent to spatial data and analysis.
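A toy version of the Monte Carlo step described above, with plain iid Gaussian errors standing in for the study's regression-kriging / sequential Gaussian simulation model, and an invented 2x2 DEM:

```python
import numpy as np

rng = np.random.default_rng(42)
dem = np.array([[0.5, 1.2],
                [2.0, 0.9]])   # invented cell elevations (m)
water_level = 1.0              # e.g. a 1 m SLR scenario
error_sd = 0.15                # assumed vertical DEM error (m)
n_sim = 1000

flood_counts = np.zeros_like(dem)
for _ in range(n_sim):
    # Perturb the DEM with one realization of the error model
    perturbed = dem + rng.normal(0.0, error_sd, size=dem.shape)
    flood_counts += perturbed <= water_level

p_inundation = flood_counts / n_sim   # per-cell probability of inundation
```

Cells far below the water level come out with probability near 1, cells far above near 0, and cells within an error SD or two of the threshold get intermediate probabilities; the real study additionally enforces hydrological connectivity and spatially correlated errors.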

Relevance: 30.00%

Abstract:

We study the evolution of a finite-size population formed by mutationally isolated lineages of error-prone replicators in a two-peak fitness landscape. Computer simulations are performed to gain a stochastic description of the system dynamics. More specifically, for different population sizes, we compute the probability of each lineage being selected in terms of their mutation rates and the amplification factors of the fittest phenotypes. We interpret the results as the compromise between the characteristic time a lineage takes to reach its fittest phenotype by crossing the neutral valley and the selective value of the sequences that form the lineages. A main conclusion is drawn: for finite population sizes, the survival probability of the lineage that arrives first at the fittest phenotype rises significantly.
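A minimal Wright-Fisher sketch in the spirit of these simulations: once one lineage reaches the fitter phenotype first, its survival is a stochastic fixation problem in a finite population. The population size, selection coefficient and trial count below are illustrative, not the paper's parameters:

```python
import numpy as np

def fixation_frequency(N=100, s=0.1, trials=500, seed=1):
    """Fraction of trials in which one initial copy of a (1+s)-fitter
    phenotype fixes in a Wright-Fisher population of size N."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        n = 1  # one copy of the first-arriving fitter phenotype
        while 0 < n < N:
            # Selection: fitter copies are sampled in proportion to (1+s)
            p = n * (1.0 + s) / (n * (1.0 + s) + (N - n))
            n = rng.binomial(N, p)
        fixed += (n == N)
    return fixed / trials
```

Even with a 10% fitness advantage, most single copies are lost to drift; the diffusion approximation puts the fixation probability near 1 - exp(-2s), about 18% here, which is why arriving first matters so much in finite populations.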

Relevance: 30.00%

Abstract:

The Escherichia coli dnaQ gene encodes the proofreading 3' exonuclease (epsilon subunit) of DNA polymerase III holoenzyme and is a critical determinant of chromosomal replication fidelity. We constructed by site-specific mutagenesis a mutant, dnaQ926, by changing two conserved amino acid residues (Asp-12-->Ala and Glu-14-->Ala) in the Exo I motif, which, by analogy to other proofreading exonucleases, is essential for the catalytic activity. When residing on a plasmid, dnaQ926 confers a strong, dominant mutator phenotype, suggesting that the protein, although deficient in exonuclease activity, still binds to the polymerase subunit (alpha subunit or dnaE gene product). When dnaQ926 was transferred to the chromosome, replacing the wild-type gene, the cells became inviable. However, viable dnaQ926 strains could be obtained if they contained one of the dnaE alleles previously characterized in our laboratory as antimutator alleles or if it carried a multicopy plasmid containing the E. coli mutL+ gene. These results suggest that loss of proofreading exonuclease activity in dnaQ926 is lethal due to excessive error rates (error catastrophe). Error catastrophe results from both the loss of proofreading and the subsequent saturation of DNA mismatch repair. The probability of lethality by excessive mutation is supported by calculations estimating the number of inactivating mutations in essential genes per chromosome replication.

Relevance: 30.00%

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve the precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are >50%, it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
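The repeated-visits logic behind the ZIB model reduces to a simple calculation: with per-visit detection probability p, a present species is missed on all k visits with probability (1-p)^k, which is what lets the model separate true absence from non-detection. A sketch with illustrative numbers:

```python
def miss_probability(p_detect, visits):
    """Probability a present species is never recorded across the visits,
    assuming independent visits with equal detection probability."""
    return (1.0 - p_detect) ** visits
```

With a 50% per-visit detection rate, three visits cut the false-negative rate from 50% to 12.5%, and six visits to about 1.6%, matching the abstract's observation that more visits buy precision when detection is poor.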

Relevance: 30.00%

Abstract:

We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
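For a generalized linear regression model with a Gaussian weight prior, the weight-uncertainty term of the Bayesian error bar is phi(x)^T A^{-1} phi(x) with A = alpha*I + beta*Phi^T Phi. The sketch below (with an assumed Gaussian-basis design and invented data clusters, not the paper's setup) shows that term shrinking in the neighbourhood of the data:

```python
import numpy as np

def design(x):
    """Gaussian basis functions on a fixed grid (an assumed design)."""
    centres = np.linspace(-1.0, 1.0, 9)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.2) ** 2)

alpha, beta = 1.0, 25.0   # prior and noise precisions (illustrative)
x_train = np.array([-0.5, -0.45, -0.4, 0.4, 0.45, 0.5])  # two dense clusters
Phi = design(x_train)
A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi

def weight_variance(x):
    """Contribution of weight uncertainty to the output variance at x."""
    phi = design(x)
    return (phi @ np.linalg.solve(A, phi.T)).item()
```

Evaluating `weight_variance` inside a data cluster (e.g. at x = 0.45) gives a much smaller value than in the empty region around x = 0, illustrating the reduction of error bars near the data and their growth where data density is low.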

Relevance: 30.00%

Abstract:

An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
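The elementary step of the probability propagation decoding algorithm mentioned above can be sketched at a single parity check using the standard tanh rule for log-likelihood ratios (LLRs); the LLR values in the checks below are illustrative:

```python
import math

def check_message(other_llrs):
    """LLR a parity check sends to one bit, given the other bits' LLRs.

    Standard tanh rule: tanh(L_out/2) = prod_i tanh(L_i/2).
    """
    prod = 1.0
    for llr in other_llrs:
        prod *= math.tanh(llr / 2.0)
    return 2.0 * math.atanh(prod)
```

Two confident neighbours produce a confident extrinsic message, a single uncertain neighbour (LLR near 0) wipes the message out, and an odd number of negative-LLR neighbours flips its sign; iterating such messages over the code graph is the decoding algorithm that emerges from the cactus analysis.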