27 results for Error probability
in University of Queensland eSpace - Australia
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior arises naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and a Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product-moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within- and between-sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations, and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
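For intuition about the bias this paper addresses, the sketch below simulates sub-population means measured with sampling error and shows how the naive Pearson correlation is attenuated, together with the classical disattenuation correction. This is illustrative only and is not the paper's maximum likelihood estimator; all values and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# True sub-population means for two measurements are correlated (rho = 0.8);
# what we observe are estimates of those means, contaminated by sampling error.
n_subpop, rho_true, noise_sd = 200, 0.8, 0.7
cov = [[1.0, rho_true], [rho_true, 1.0]]
true_means = rng.multivariate_normal([0.0, 0.0], cov, size=n_subpop)
observed = true_means + rng.normal(scale=noise_sd, size=true_means.shape)

naive = np.corrcoef(observed.T)[0, 1]        # attenuated Pearson estimate
reliability = 1.0 / (1.0 + noise_sd**2)      # var(true) / var(observed) for each margin
corrected = naive / reliability              # classical disattenuation (equal reliabilities)

print(f"true {rho_true:.2f}   naive {naive:.2f}   corrected {corrected:.2f}")
```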
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, which permits estimation of the rate of false-negative errors and correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. With three repeated visits the method eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve the precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are greater than 50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
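As a rough illustration of the repeated-visit idea, here is a minimal sketch of a zero-inflated binomial likelihood fitted by maximum likelihood to simulated detection histories. It omits the habitat covariates (the logistic-regression part) described in the abstract, and the function and parameter names (zib_negloglik, psi, p) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def zib_negloglik(params, detections, visits):
    """Negative log-likelihood of a zero-inflated binomial occupancy model.

    detections[i] = number of visits at site i on which the species was seen,
    visits        = number of repeat visits per site (same for all sites here),
    psi           = probability the site is occupied,
    p             = per-visit detection probability given occupancy.
    """
    psi = 1.0 / (1.0 + np.exp(-params[0]))   # logit-scale parameters keep
    p = 1.0 / (1.0 + np.exp(-params[1]))     # probabilities inside (0, 1)
    lik = psi * binom.pmf(detections, visits, p)
    lik = lik + (1.0 - psi) * (detections == 0)   # zero inflation: unoccupied sites
    return -np.sum(np.log(lik + 1e-300))          # small floor guards against log(0)

# Simulated survey: 200 sites, 3 repeat visits, true psi = 0.6, true p = 0.5.
rng = np.random.default_rng(1)
occupied = rng.random(200) < 0.6
detections = rng.binomial(3, 0.5, size=200) * occupied

fit = minimize(zib_negloglik, x0=[0.0, 0.0], args=(detections, 3))
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi_hat = {psi_hat:.2f}   p_hat = {p_hat:.2f}")
```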
Abstract:
Two experiments were conducted on the nature of expert perception in the sport of squash. In the first experiment, ten expert and fifteen novice players attempted to predict the direction and force of squash strokes from either a film display (occluded at variable time periods before and after the opposing player had struck the ball) or a matched point-light display (containing only the basic kinematic features of the opponent's movement pattern). Experts outperformed the novices under both display conditions, and the same basic time windows that characterised expert and novice pick-up of information in the film task also persisted in the point-light task. This suggests that the experts' perceptual advantage is directly related to their superior pick-up of essential kinematic information. In the second experiment, the vision of six expert and six less skilled players was occluded by remotely triggered liquid-crystal spectacles at quasi-random intervals during simulated match play. Players were required to complete their current stroke even when the display was occluded and their prediction performance was assessed with respect to whether they moved to the correct half of the court to match the direction and depth of the opponent's stroke. Consistent with experiment 1, experts were found to be superior in their advance pick-up of both directional and depth information when the display was occluded during the opponent's hitting action. However, experts also remained better than chance, and clearly superior to less skilled players, in their prediction performance under conditions where occlusion occurred before any significant pre-contact preparatory movement by the opposing player was visible. This additional source of expert superiority is attributable to their superior attunement to the information contained in the situational probabilities and sequential dependences within their opponent's pattern of play.
Abstract:
The phenomenon of probability backflow, previously quantified for a free nonrelativistic particle, is considered for a free particle obeying Dirac's equation. It is known that probability backflow can occur in the opposite direction to the momentum; that is to say, there exist positive-energy states in which the particle certainly has a positive momentum in a given direction, but for which the component of the probability flux vector in that direction is negative. It is shown that the maximum possible amount of probability that can flow backwards, over a given time interval of duration T, depends on the dimensionless parameter epsilon = sqrt(4h / (m c^2 T)), where m is the mass of the particle and c is the speed of light. At epsilon = 0, the nonrelativistic value of approximately 0.039 for this maximum is recovered. Numerical studies suggest that the maximum decreases monotonically as epsilon increases from 0, and show that, unlike in the nonrelativistic case, it depends on the values of m, h, and T.
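To give a feel for the scale of this parameter, the snippet below evaluates epsilon = sqrt(4h / (m c^2 T)) for an electron over a few illustrative time intervals. The T values are arbitrary, and whether the original uses Planck's constant h or the reduced constant hbar is not clear from this extracted text; the sketch uses h.

```python
# Illustrative only: evaluates the dimensionless parameter from the abstract
# for an electron and a few hypothetical time intervals T.
import math

h = 6.62607015e-34      # Planck constant, J s
m = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

for T in (1e-15, 1e-18, 1e-21):             # seconds
    eps = math.sqrt(4 * h / (m * c**2 * T))
    print(f"T = {T:.0e} s  ->  epsilon = {eps:.3g}")
```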
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
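A minimal sketch of the multiplicative scrambled-response setting: the analyst observes only z = y*s, where the scrambling variable s has a known distribution. The simple moment-based adjustment below is not the Bayesian MCMC or maximum-likelihood estimators compared in the paper (nor the Singh, Joarder & King estimator); all data and names are simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# True (unobserved) linear model: y = 2 + 3*x + noise.
n = 5000
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=n)

# Respondents report z = y * s, with s drawn from a known scrambling
# distribution that is independent of x and y (here uniform on [1, 3], E[s] = 2).
s = rng.uniform(1.0, 3.0, size=n)
z = y * s                                            # only z and x are observed

# Because E[z | x] = E[s] * E[y | x], the naive OLS fit of z on x targets
# E[s] * beta; dividing by E[s] recovers the coefficients (a crude moment
# argument; the paper's likelihood-based estimators are more efficient).
X = np.column_stack([np.ones(n), x])
beta_naive = np.linalg.lstsq(X, z, rcond=None)[0]
beta_adjusted = beta_naive / 2.0                     # divide by E[s]
print("naive:", beta_naive.round(2), "adjusted:", beta_adjusted.round(2))
```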
Abstract:
In his study of the 'time of arrival' problem in the nonrelativistic quantum mechanics of a single particle, Allcock [1] noted that the direction of the probability flux vector is not necessarily the same as that of the mean momentum of a wave packet, even when the packet is composed entirely of plane waves with a common direction of momentum. Packets can be constructed, for example for a particle moving under a constant force, in which probability flows for a finite time in the opposite direction to the momentum. A similar phenomenon occurs for the Dirac electron. The maximum amount of probability backflow which can occur over a given time interval can be calculated in each case.
Abstract:
The aim of this study was to investigate the frequency of axillary metastasis in women with tubular carcinoma (TC) of the breast. Women who underwent axillary dissection for TC in the Western Sydney area (1984-1995) were identified retrospectively through a search of computerized records. A centralized pathology review was performed and tumours were classified as pure tubular (22) or mixed tubular (nine), on the basis of the invasive component containing 90 per cent or more, or 75-90 per cent tubule formation respectively. A Medline search of the literature was undertaken to compile a collective series (20 studies with a total of 680 patients) to address the frequency of nodal involvement in TC. A quantitative meta-analysis was used to combine the results of these studies. The overall frequency of nodal metastasis was five of 31 (16 per cent); one of 22 pure tubular and four of nine mixed tumours (P = 0.019). None of the tumours with a diameter of 10 mm or less (n = 16) had nodal metastasis compared with five of 15 larger tumours (P = 0.018). The meta-analysis of 680 women showed an overall frequency of nodal metastasis in TC of 13.8 (95 per cent confidence interval 9.3-18.3) per cent. The frequency of nodal involvement was 6.6 (1.7-11.4) per cent in pure TC (n = 244) and 25.0 (12.5-37.6) per cent in mixed TC (n = 149). A case may be made for observing the clinically negative axilla in women with a small TC (10 mm or less in diameter).
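The abstract does not state which pooling method its meta-analysis used; one common choice is fixed-effect inverse-variance pooling of the study proportions, sketched below with made-up study counts (not the 20 studies analysed in the paper).

```python
import numpy as np

# Hypothetical study-level data: (nodal metastases, patients) per study.
studies = [(1, 22), (4, 9), (3, 40), (2, 35), (6, 60)]

p = np.array([e / n for e, n in studies])
n = np.array([n for _, n in studies])
var = p * (1 - p) / n
var = np.where(var == 0, 0.25 / n, var)      # crude floor if a study has 0 or all events

w = 1.0 / var                                # inverse-variance weights (fixed effect)
p_pooled = np.sum(w * p) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
lo, hi = p_pooled - 1.96 * se, p_pooled + 1.96 * se
print(f"pooled = {100 * p_pooled:.1f}%  (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```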
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality. To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of those estimates, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent; both of these conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
Abstract:
Background and aim of the study: Results of valve re-replacement (reoperation) in 898 patients undergoing aortic valve replacement with cryopreserved homograft valves between 1975 and 1998 are reported. The study aim was to provide estimates of the unconditional probability of valve reoperation and the cumulative incidence function (actual risk) of reoperation. Methods: Valves were implanted by subcoronary insertion (n = 500), inclusion cylinder (n = 46), and aortic root replacement (n = 352). The probability of reoperation was estimated by adopting a mixture model framework within which estimates were adjusted for two risk factors: patient age at initial replacement, and implantation technique. Results: For a patient aged 50 years, the probability of reoperation in his/her lifetime was estimated as 44% and 56% for non-root and root replacement techniques, respectively. For a patient aged 70 years, the estimated probability of reoperation was 16% and 25%, respectively. Given that a reoperation is required, patients with non-root replacement have a higher hazard rate than those with root replacement (hazard ratio = 1.4), indicating that non-root replacement patients who require reoperation tend to undergo it earlier in their remaining lifetime than root replacement patients. Conclusion: Younger patient age and root versus non-root replacement are risk factors for reoperation. Valve durability is much lower in younger patients, while root replacement patients appear more likely to live longer and hence are more likely to require reoperation.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous-emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
Abstract:
We present a scheme which offers a significant reduction in the resources required to implement linear optics quantum computing. The scheme is a variation of the proposal of Knill, Laflamme and Milburn, and makes use of an incremental approach to the error encoding to boost the probability of success.
Abstract:
We theoretically study the Hilbert space structure of two neighboring P-donor electrons in silicon-based quantum computer architectures. To use electron spins as qubits, a crucial condition is the isolation of the electron spins from their environment, including the electronic orbital degrees of freedom. We provide detailed electronic structure calculations of both the single donor electron wave function and the two-electron pair wave function. We adopted a molecular orbital method for the two-electron problem, forming a basis with the calculated single donor electron orbitals. Our two-electron basis contains many singlet and triplet orbital excited states, in addition to the two simple ground state singlet and triplet orbitals usually used in the Heitler-London approximation to describe the two-electron donor pair wave function. We determined the excitation spectrum of the two-donor system, and studied its dependence on strain, lattice position, and interdonor separation. This allows us to determine how isolated the ground state singlet and triplet orbitals are from the rest of the excited state Hilbert space. In addition to calculating the energy spectrum, we are also able to evaluate the exchange coupling between the two donor electrons, and the double occupancy probability that both electrons will reside on the same P donor. These two quantities are very important for logical operations in solid-state quantum computing devices, as a large exchange coupling achieves faster gating times, while the magnitude of the double occupancy probability can affect the error rate.
Abstract:
Acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move generation procedure was compared against a random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm proved to have faster convergence and superior escape from local minima. The algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
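The paper's exact control law and move-generation procedure are not described in the abstract; the sketch below shows one plausible reading of the idea, assuming a prescribed (decreasing) acceptance-probability schedule that the temperature is nudged to follow, with the move size tied to the temperature. The objective (Rastrigin) and all names are illustrative.

```python
import numpy as np

def rastrigin(x):
    """Standard multiminima test function (global minimum 0 at the origin)."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def accept_controlled_anneal(f, x0, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    temp = 1.0
    accepted = 0
    for i in range(1, n_iter + 1):
        step = max(temp, 1e-3)                     # adaptive move: shrinks with temperature
        cand = x + rng.normal(scale=step, size=x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            accepted += 1
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        if i % 200 == 0:
            observed = accepted / 200
            accepted = 0
            target = 0.5 * (1 - i / n_iter)        # scheduled acceptance probability
            # Control law: nudge the temperature so the observed acceptance
            # probability tracks the prescribed schedule.
            temp *= 1.1 if observed < target else 0.9
    return best_x, best_f

best_x, best_f = accept_controlled_anneal(rastrigin, x0=[3.0, -2.5])
print(best_x.round(3), round(best_f, 4))
```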
Abstract:
The small-sample performance of Granger causality tests under different model dimensions, degrees of cointegration, directions of causality, and system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
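As a concrete illustration of the MWALD idea (a Wald test in a lag-augmented, i.e. modified, VAR estimated by least squares), here is a minimal bivariate sketch. It is a single-equation simplification, not the paper's LR or WALD tests on error-correction models; the helper name mwald_test and the simulated data are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Simulate a bivariate system in which x Granger-causes y but not vice versa.
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def mwald_test(y, x, k=1):
    """Lag-augmented (MWALD-style) Wald test of 'x does not Granger-cause y'.

    Regresses y on k+1 lags of y and x by OLS and tests whether the first k
    lag coefficients of x are jointly zero; the extra lag is estimated but
    not tested, which is what gives the statistic its robustness.
    """
    p = k + 1                                   # augmented lag length
    T = len(y) - p
    Z = np.column_stack(
        [np.ones(T)]
        + [y[p - j - 1 : len(y) - j - 1] for j in range(p)]
        + [x[p - j - 1 : len(x) - j - 1] for j in range(p)]
    )
    target = y[p:]
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    resid = target - Z @ beta
    sigma2 = resid @ resid / (T - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    idx = np.arange(1 + p, 1 + p + k)           # first k lags of x (skip the extra lag)
    wald = beta[idx] @ np.linalg.solve(cov[np.ix_(idx, idx)], beta[idx])
    return wald, 1 - chi2.cdf(wald, df=k)

print("x -> y:", mwald_test(y, x))   # should reject non-causality
print("y -> x:", mwald_test(x, y))   # should not reject
```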