153 results for Random error
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analyses of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and used as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
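As a rough illustration of the quantities involved (not the paper's exact estimator), the sketch below computes per-study Youden indices from hypothetical 2x2 counts and pools them with simple sample-size weights; the paper's Mantel-Haenszel estimator adjusts for study effect differently.

```python
import numpy as np

# Hypothetical 2x2 counts per study: (TP, FN, TN, FP)
studies = np.array([
    [45,  5, 80, 20],
    [30, 10, 60, 15],
    [55,  8, 90, 12],
])

tp, fn, tn, fp = studies.T
se = tp / (tp + fn)          # sensitivity per study
sp = tn / (tn + fp)          # specificity per study
youden = se + sp - 1.0       # Youden's index J = Se + Sp - 1

# Sample-size weights: one simple way to pool across studies;
# the paper's Mantel-Haenszel estimator may weight differently.
w = studies.sum(axis=1)
j_pooled = np.average(youden, weights=w)
print(f"per-study J: {np.round(youden, 3)}, pooled J ~ {j_pooled:.3f}")
```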
Abstract:
The node-density effect is an artifact of phylogeny reconstruction that can cause branch lengths to be underestimated in areas of the tree with fewer taxa. Webster, Payne, and Pagel (2003, Science 301:478) introduced a statistical procedure (the "delta" test) to detect this artifact, and here we report the results of computer simulations that examine the test's performance. In a sample of 50,000 random data sets, we find that the delta test detects the artifact in 94.4% of cases in which it is present. When the artifact is not present (n = 10,000 simulated data sets), the test shows a type I error rate of approximately 1.69%, incorrectly reporting the artifact in 169 data sets. Three measures of tree shape or "balance" failed to predict the size of the node-density effect. This may reflect the relative homogeneity of our randomly generated topologies, but it emphasizes that nearly any topology can suffer from the artifact, the effect not being confined to highly unevenly sampled or otherwise imbalanced trees. The ability to screen phylogenies for the node-density artifact is important for phylogenetic inference and for researchers using phylogenetic trees to infer evolutionary processes, including their use in molecular clock dating. [Delta test; molecular clock; molecular evolution; node-density effect; phylogenetic reconstruction; speciation; simulation.]
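For intuition, the reported type I error rate is simple arithmetic on the simulation counts; the sketch below reproduces it and adds a normal-approximation confidence interval (the interval is our addition, not from the abstract).

```python
import math

# Reported simulation outcome: 169 false detections in 10,000 null data sets.
false_pos, n_null = 169, 10_000
p_hat = false_pos / n_null                      # = 0.0169, the type I error rate

# Normal-approximation 95% CI for the rate (adequate at this sample size).
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_null)
print(f"type I error ~ {p_hat:.4f} (95% CI {p_hat - half:.4f} to {p_hat + half:.4f})")
```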
Abstract:
Accelerated failure time models with a shared random component are described, and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and baseline hazard function are considered and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
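A minimal sketch of the kind of model described, assuming a log-normal accelerated failure time form with a normally distributed shared random effect per transplant centre; the covariate, effect sizes, and distributions are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_centres, per_centre = 10, 50

# Hypothetical design: one binary covariate with effect beta, plus
# a shared random effect b per centre and residual scale sigma_e.
beta, sigma_b, sigma_e = 0.5, 0.3, 1.0
b = rng.normal(0.0, sigma_b, n_centres)

centre = np.repeat(np.arange(n_centres), per_centre)
x = rng.integers(0, 2, centre.size)

# Log-normal AFT: log(T) = beta * x + b[centre] + sigma_e * eps
log_t = beta * x + b[centre] + sigma_e * rng.standard_normal(centre.size)
t = np.exp(log_t)
print(f"median survival: x=0 -> {np.median(t[x == 0]):.2f}, "
      f"x=1 -> {np.median(t[x == 1]):.2f}")
```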
Abstract:
The absorption cross-sections of Cl2O6 and Cl2O4 have been obtained using a fast flow reactor with a diode array spectrometer (DAS) detection system. The absorption cross-sections at the wavelengths of maximum absorption (λ_max) determined in this study are: Cl2O6, (1.47 +/- 0.15) x 10^-17 cm^2 molecule^-1 at λ_max = 276 nm and T = 298 K; and Cl2O4, (9.0 +/- 2.0) x 10^-19 cm^2 molecule^-1 at λ_max = 234 nm and T = 298 K. Errors quoted are two standard deviations together with estimates of the systematic error. The shapes of the absorption spectra were obtained over the wavelength range 200-450 nm for Cl2O6 and 200-350 nm for Cl2O4, were normalized to the absolute cross-sections obtained at λ_max for each oxide, and are presented at 1 nm intervals. These data are discussed in relation to previous measurements. The reaction of O with OClO has been investigated with the objective of observing transient spectroscopic absorptions. A transient absorption was seen, and the possibility is explored of identifying the species with the elusive sym-ClO3 or ClO4, both of which have been characterized in matrices but not in the gas phase. The photolysis of OClO was also re-examined, with emphasis placed on the products of reaction. UV absorptions attributable to one of the isomers of the ClO dimer, chloryl chloride (ClClO2), were observed; some Cl2O4 was also found at long photolysis times, when much of the ClClO2 had itself been photolysed. We suggest that reports of Cl2O6 formation in previous studies could be a consequence of a mistaken identification. At low temperatures, the photolysis of OClO leads to the formation of Cl2O3 as a result of the addition of the ClO primary product to OClO. ClClO2 also appears to be one product of the reaction between O3 and OClO, especially when the reaction occurs under explosive conditions. We studied the kinetics of the non-explosive process using a stopped-flow technique, and suggest a value for the room-temperature rate coefficient of (4.6 +/- 0.9) x 10^-19 cm^3 molecule^-1 s^-1 (the quoted limit is the 2σ random error). The photochemical and thermal decomposition of Cl2O6 is described in this paper. For photolysis at λ = 254 nm, the removal of Cl2O6 is not accompanied by the build-up of any other strong absorber. The implications of the results are either that the photolysis of Cl2O6 produces Cl2 directly, or that the initial photofragments are converted rapidly to Cl2. In the thermal decomposition of Cl2O6, Cl2O4 was shown to be a product of the reaction, although not necessarily the major one. The kinetics of decomposition were investigated using the stopped-flow technique. At the relatively high [OClO] present in the system, the decay kinetics obeyed a first-order law, with a limiting first-order rate coefficient of 0.002 s^-1. (C) 2004 Elsevier B.V. All rights reserved.
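The first-order decay law underlying the stopped-flow analysis can be sketched as follows; the synthetic trace and noise level are hypothetical, with only the limiting rate coefficient (0.002 s^-1) taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
k_true = 0.002                                   # s^-1, reported limiting rate coefficient

# Synthetic stopped-flow trace: exponential decay plus small measurement noise.
t = np.linspace(0, 1500, 30)                     # s
signal = np.exp(-k_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# First-order law => ln(signal) is linear in t with slope -k.
slope, _ = np.polyfit(t, np.log(signal), 1)
print(f"recovered k ~ {-slope:.4f} s^-1 (true {k_true})")
```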
Abstract:
In real-world environments it is usually difficult to specify the quality of a preventive maintenance (PM) action precisely. This uncertainty makes it problematic to optimise maintenance policy. This problem is tackled in this paper by assuming that the quality of a PM action is a random variable following a probability distribution. Two frequently studied PM models, a failure rate PM model and an age reduction PM model, are investigated. The optimal PM policies are presented and optimised. Numerical examples are also given.
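A hedged sketch of an age-reduction PM policy with random repair quality: each PM resets the virtual age by a random Beta-distributed factor, failures follow a Weibull hazard with minimal repair, and the PM interval is chosen by grid search. The hazard, costs, and quality distribution are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, eta = 2.5, 100.0                     # Weibull hazard shape/scale (hypothetical)
c_pm, c_f = 1.0, 10.0                      # PM cost and minimal-repair failure cost

H = lambda t: (t / eta) ** beta            # cumulative hazard

def cost_rate(T, n_cycles=2_000):
    """Long-run cost per unit time for an age-reduction PM policy where each
    PM resets virtual age v to (1 - q) * (v + T), with random quality q."""
    v, cost, time = 0.0, 0.0, 0.0
    q = rng.beta(8, 2, n_cycles)           # mostly good, occasionally poor PM
    for qi in q:
        cost += c_pm + c_f * (H(v + T) - H(v))   # expected failure cost per cycle
        time += T
        v = (1 - qi) * (v + T)             # random age reduction at PM
    return cost / time

grid = np.arange(20, 201, 10.0)
rates = [cost_rate(T) for T in grid]
print(f"optimal PM interval ~ {grid[int(np.argmin(rates))]:.0f}")
```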
Abstract:
Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses that claim by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effect of participants' exposure to the task was examined in Experiment 2 by providing increasing amounts of practice. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process that is sensitive to additional demands on central resources (Experiment 1) and unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use.
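RNG performance in such studies is typically scored with sequence-randomness indices. One common choice (not necessarily the measure used in this study) is Evans' RNG index, sketched below on a hypothetical response sequence.

```python
import math
import random
from collections import Counter

def rng_index(seq):
    """Evans' RNG index: ratio of digram to unigram repetition.
    0 = maximally random digram use, 1 = completely stereotyped."""
    pairs = Counter(zip(seq, seq[1:]))
    singles = Counter(seq[:-1])
    num = sum(n * math.log(n) for n in pairs.values() if n > 1)
    den = sum(n * math.log(n) for n in singles.values() if n > 1)
    return num / den if den else 0.0

# Hypothetical 100-response sequence of digits 0-9 from a participant.
random.seed(3)
responses = [random.randrange(10) for _ in range(100)]
print(f"RNG index ~ {rng_index(responses):.3f}")
```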
Abstract:
The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, whereby the "alpha peak" in a frequency range of 8-14 Hz is the most prominent one for relaxed states of wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdős–Rényi graph). We derive a model of the dendritic field potential (DFP) for the neurons leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change of the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks. (c) 2007 Elsevier B.V. All rights reserved.
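A minimal sketch in the spirit of the described model, assuming discrete-time leaky integrator units coupled through a sparse random weight matrix, with the summed activity standing in for the dendritic field potential; the parameters and saturating nonlinearity are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 0.02                          # nodes and ER edge probability (illustrative)
W = (rng.random((n, n)) < p) * rng.normal(0, 1.0, (n, n))
np.fill_diagonal(W, 0.0)

leak, dt, steps = 0.9, 1.0, 4096
x = np.zeros(n)
dfp = np.empty(steps)
for t in range(steps):
    # Leaky integration plus saturating network input and noise.
    x = leak * x + np.tanh(W @ x) + 0.1 * rng.standard_normal(n)
    dfp[t] = x.sum()                      # crude "EEG": summed activity proxy for the DFP

# Power spectrum of the simulated EEG.
spec = np.abs(np.fft.rfft(dfp - dfp.mean())) ** 2
freq = np.fft.rfftfreq(steps, d=dt)
print(f"peak frequency bin: {freq[1:][np.argmax(spec[1:])]:.4f} (arbitrary units)")
```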
Abstract:
The convergence speed of the standard Least Mean Square (LMS) adaptive array may be degraded in mobile communication environments. Various conventional variable step size LMS algorithms have been proposed to enhance the convergence speed while maintaining a low steady-state error. In this paper, a new variable step size LMS algorithm is proposed, based on the concept of the accumulated instantaneous error. In the proposed algorithm, the accumulated instantaneous error is used to vary the step size parameter of the standard LMS algorithm. Simulation results show that the proposed algorithm is simpler and yields better performance than conventional variable step size LMS algorithms.
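A hedged sketch of the idea, assuming the accumulated instantaneous error is tracked as an exponentially weighted sum of squared errors that scales the step size between fixed bounds; the exact update rule in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(5)
N, taps = 2000, 4
h_true = np.array([1.0, -0.5, 0.25, 0.1])       # unknown system (hypothetical)

x = rng.standard_normal(N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(taps)
mu_min, mu_max = 1e-4, 0.1
acc, lam, gamma = 0.0, 0.97, 0.05               # accumulated-error control (illustrative)

for n in range(taps, N):
    xn = x[n - taps + 1:n + 1][::-1]            # tap-delay input vector
    e = d[n] - w @ xn                           # instantaneous error
    acc = lam * acc + (1 - lam) * e * e         # accumulated instantaneous error
    mu = np.clip(gamma * acc, mu_min, mu_max)   # large error -> large step, and vice versa
    w = w + mu * e * xn                         # LMS weight update

print("estimated taps:", np.round(w, 3))
```

The step size grows while the accumulated error is large (fast convergence) and shrinks toward mu_min as the filter converges (low steady-state error), which is the trade-off the abstract describes.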
Abstract:
Exact error estimates for evaluating multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal, and it has an unimprovable rate of convergence. The problem of the existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and classes satisfying Hölder-type conditions. The aim of the paper is to analyze the performance of two optimal classes of algorithms, deterministic and randomized, for computing multi-dimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
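For reference, a plain randomized (Monte Carlo) estimator achieves the dimension-independent O(N^-1/2) convergence rate; the sketch below shows this on a smooth product integrand with a known integral (the integrand is our choice, and smoother classes admit better randomized rates, as the paper discusses).

```python
import numpy as np

rng = np.random.default_rng(6)
dim = 5

# Smooth test integrand over [0,1]^5 whose integral is exactly 1.
f = lambda x: np.prod(2.0 * x, axis=1)

for n in (10**3, 10**4, 10**5):
    pts = rng.random((n, dim))               # uniform sample of the unit cube
    est = f(pts).mean()                      # plain Monte Carlo estimate
    print(f"N={n:>6}: estimate {est:.4f}, |error| {abs(est - 1):.4f}")
```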
Using simulation to determine the sensitivity of error sources for software effort estimation models
Abstract:
This paper investigates random number generators in stochastic iteration algorithms that require infinite uniform sequences. We take a simple model of the general transport equation and solve it using a linear congruential generator, the Mersenne twister, the mother-of-all generators, and a true random number generator based on quantum effects. With this simple model we show that, for reasonably contractive operators, sequences that are not theoretically infinite-uniform also perform well. Finally, we demonstrate the power of stochastic iteration for the solution of the light transport problem.
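The flavour of the experiment can be reproduced with a minimal linear congruential generator driving a toy contractive fixed-point problem; the transport model below is a deliberately simplified stand-in, not the paper's equation.

```python
# Minimal linear congruential generator (Numerical Recipes constants).
class LCG:
    def __init__(self, seed=12345):
        self.state = seed
    def next(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

# Toy transport-style fixed point: f = 1 + 0.5 * f, exact solution f = 2.
# Stochastic iteration: keep "scattering" with probability 0.5, tally 1 per event.
rng = LCG()
n_paths, total = 100_000, 0.0
for _ in range(n_paths):
    score = 1.0
    while rng.next() < 0.5:      # scattering continues the random walk
        score += 1.0
    total += score
print(f"estimate {total / n_paths:.3f} (exact 2.0)")
```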
Abstract:
Urban surveillance footage can be of poor quality, partly due to the low quality of the camera and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change detection mask is desirable, e.g., for accurately tracking identity when faced with multiple interacting individuals, and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
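Since the abstract does not define the image combing estimator, the sketch below substitutes a standard MRF technique, iterated conditional modes with an Ising-style prior, to show how a noisy change mask can be cleaned into a better segmentation; all data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic low-quality change mask: true change region plus salt-and-pepper noise.
true = np.zeros((64, 64), dtype=int)
true[20:44, 20:44] = 1
noisy = np.where(rng.random(true.shape) < 0.1, 1 - true, true)

# Iterated conditional modes on an Ising-style MRF: each pixel takes the label
# minimizing data disagreement (lam) plus disagreement with its 4 neighbours (beta).
lam, beta = 1.0, 0.8
labels = noisy.copy()
for _ in range(5):
    for i in range(1, 63):
        for j in range(1, 63):
            nb = (labels[i-1, j], labels[i+1, j], labels[i, j-1], labels[i, j+1])
            e0 = lam * (noisy[i, j] != 0) + beta * sum(n != 0 for n in nb)
            e1 = lam * (noisy[i, j] != 1) + beta * sum(n != 1 for n in nb)
            labels[i, j] = int(e1 < e0)

print(f"noisy error: {(noisy != true).mean():.3f}, "
      f"MRF-cleaned error: {(labels != true).mean():.3f}")
```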