946 results for incremental computation
Abstract:
We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel, [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen, [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung, [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang, [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.
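The gate-teleportation trick underlying such derivations can be captured in one identity; the following is an illustrative sketch in standard Pauli byproduct notation, not a reproduction of the paper's derivation. Teleportation delivers the input state up to a known byproduct, $X^{a}Z^{b}\lvert\psi\rangle$, where $a, b$ are the measurement outcomes. To obtain $U\lvert\psi\rangle$ instead, one absorbs $U$ into the byproduct:

\[
  U\,X^{a}Z^{b}\,\lvert\psi\rangle \;=\; \bigl(U X^{a} Z^{b} U^{\dagger}\bigr)\, U\lvert\psi\rangle .
\]

When $U$ is a Clifford gate, the conjugated byproduct $U X^{a} Z^{b} U^{\dagger}$ is again a Pauli operator, so it can be tracked and undone by feedforward of the outcomes $(a, b)$; this is what allows measurements alone to drive the computation.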
Abstract:
Recently, there have been several suggestions that weak Kerr nonlinearity can be used for generation of macroscopic superpositions and entanglement and for linear optics quantum computation. However, it is not immediately clear that this approach can overcome decoherence effects. Our numerical study shows that nonlinearity of weak strength could be useful for macroscopic entanglement generation and quantum gate operations in the presence of decoherence. We suggest specific values for real experiments based on our analysis. Our discussion shows that the generation of macroscopic entanglement using this approach is within the reach of current technology.
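For orientation, the mechanism behind such weak-nonlinearity proposals can be stated with a generic textbook relation (not this paper's specific model). A cross-Kerr interaction $H = \hbar\chi\, a^{\dagger}a\, b^{\dagger}b$ imprints a photon-number-dependent phase on a probe coherent state:

\[
  \lvert n\rangle_{a}\,\lvert\alpha\rangle_{b} \;\longrightarrow\; \lvert n\rangle_{a}\,\lvert \alpha e^{i n \chi t}\rangle_{b} ,
\]

and since $\lvert\langle\alpha\vert\alpha e^{i\theta}\rangle\rvert^{2} \approx e^{-\lvert\alpha\rvert^{2}\theta^{2}}$ for small $\theta$, even a weak conditional phase $\theta = \chi t \ll 1$ becomes distinguishable once $\lvert\alpha\rvert\theta \gtrsim 1$, which is why a strong probe field can compensate for a weak nonlinearity.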
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This work aims to develop an incremental expectation-maximization (EM) based learning approach on a mixture-of-experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study of all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidity information. A comparative study of the incremental learning and batch-mode learning algorithms is presented. The performance of the learning algorithms is compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and on the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS once the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
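As a concrete illustration of the evaluation metrics defined above, here is a minimal Python sketch, with hypothetical prediction values rather than the study's data, computing MAD and Prop(MAD < 1 day):

    import numpy as np

    def mad(predicted, actual):
        # Mean absolute difference between predicted and actual LOS, in days.
        return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(actual))))

    def prop_mad_below(predicted, actual, threshold=1.0):
        # Proportion of predictions whose absolute error is under `threshold` days.
        errors = np.abs(np.asarray(predicted) - np.asarray(actual))
        return float(np.mean(errors < threshold))

    # Hypothetical values, for illustration only.
    predicted = [2.5, 4.0, 1.2, 3.8]
    actual = [3.0, 2.0, 1.0, 4.5]
    print(mad(predicted, actual))             # mean |error| in days
    print(prop_mad_below(predicted, actual))  # fraction with |error| < 1 day

An on-line learner would update these metrics after each new admission, whereas a batch learner reports them only after retraining on the full data set.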
Abstract:
This article is a short introduction to and review of the cluster-state model of quantum computation, in which coherent quantum information processing is accomplished via a sequence of single-qubit measurements applied to a fixed quantum state known as a cluster state. We also discuss a few novel properties of the model, including a proof that the cluster state cannot occur as the exact ground state of any naturally occurring physical system, and a proof that measurements on any quantum state which is linearly prepared in one dimension can be efficiently simulated on a classical computer, so such states are not candidates for use as a substrate for quantum computation.
Abstract:
In this paper we perform a detailed numerical investigation of the fault-tolerant threshold for optical cluster-state quantum computation. Our noise model allows both photon loss and depolarizing noise, the latter as a general proxy for all types of local noise other than photon loss. We obtain a threshold region of allowed pairs of values for the two types of noise. Roughly speaking, our results show that scalable optical quantum computing is possible in the combined presence of both noise types, provided that the loss probability is less than 3 × 10^-3 and the depolarization probability is less than 10^-4. Our fault-tolerant protocol involves a number of innovations, including a method for syndrome extraction known as telecorrection, whereby repeated syndrome measurements are guaranteed to agree. This paper is an extended version of Dawson et al.
Abstract:
Quantum computers hold great promise for solving interesting computational problems, but it remains a challenge to find efficient quantum circuits that can perform these complicated tasks. Here we show that finding optimal quantum circuits is essentially equivalent to finding the shortest path between two points in a certain curved geometry. By recasting the problem of finding quantum circuits as a geometric problem, we open up the possibility of using the mathematical techniques of Riemannian geometry to suggest new quantum algorithms or to prove limitations on the power of quantum computers.
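Paraphrasing the geometric idea in symbols (a sketch, not the paper's exact definitions): the cost of synthesizing a unitary $U$ on $n$ qubits is identified with the length of the shortest curve from the identity $I$ to $U$,

\[
  d(I,U) \;=\; \inf_{H(\cdot)} \int_{0}^{1} F\bigl(H(t)\bigr)\,dt ,
  \qquad \dot{U}(t) = -\,i\,H(t)\,U(t), \quad U(0)=I, \;\; U(1)=U ,
\]

where $F$ is a Riemannian norm on Hamiltonians chosen to heavily penalize many-body interaction terms. The number of elementary gates required to implement $U$ is then polynomially related to $d(I,U)$, so geodesic lower bounds translate into circuit-size lower bounds.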
Abstract:
We present here a new approach to scalable quantum computing - a 'qubus computer' - which realizes qubit measurement and quantum gates through the interaction of qubits with a quantum communication bus mode. The qubits could be 'static' matter qubits or 'flying' optical qubits, but the scheme we focus on here is particularly suited to matter qubits. There is no requirement for direct interaction between the qubits. Universal two-qubit quantum gates may be effected by schemes which involve measurement of the bus mode, or by schemes where the bus disentangles automatically and no measurement is needed. In effect, the approach integrates qubit degrees of freedom for computation with quantum continuous variables for communication and interaction.
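One way a bus can 'disentangle automatically' is through a closed loop of qubit-conditioned displacements; the following standard displacement-operator identity is offered as an illustrative sketch, not necessarily the authors' exact construction. Using $D(\alpha)D(\beta) = e^{i\,\mathrm{Im}(\alpha\beta^{*})}\,D(\alpha+\beta)$, a four-step loop conditioned on two qubits gives

\[
  D\bigl(\sigma_{z}^{(1)}\alpha\bigr)\, D\bigl(\sigma_{z}^{(2)}\beta\bigr)\, D\bigl(-\sigma_{z}^{(1)}\alpha\bigr)\, D\bigl(-\sigma_{z}^{(2)}\beta\bigr)
  \;=\; e^{\,2i\,\mathrm{Im}(\alpha\beta^{*})\,\sigma_{z}^{(1)}\sigma_{z}^{(2)}} ,
\]

an entangling two-qubit phase gate after which the bus mode returns exactly to its initial state, so no measurement of the bus is required.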
Abstract:
We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.
Abstract:
We investigate decoherence effects in the recently suggested quantum-computation scheme using weak nonlinearities, strong probe coherent fields, detection, and feedforward methods. It is shown that in the weak-nonlinearity-based quantum gates, decoherence in nonlinear media can be made arbitrarily small simply by using arbitrarily strong probe fields, provided photon-number-resolving detection is used. In contrast, we find that homodyne detection with feedforward is not appropriate for this scheme, because in that case decoherence increases rapidly as the probe field gets larger.
Abstract:
Aims: Technological advances in cardiac imaging have led to dramatic increases in test utilization and consumption of a growing proportion of cardiovascular healthcare costs. The opportunity costs of strategies favouring exercise echocardiography or SPECT imaging have been incompletely evaluated. Methods and results: We examined the prognosis and cost-effectiveness of exercise echocardiography (n = 4884) vs. SPECT (n = 4637) imaging in stable, intermediate-risk chest pain patients. Ischaemia extent was defined as the number of vascular territories with echocardiographic wall motion or SPECT perfusion abnormalities. Cox proportional hazards models were employed to assess time to cardiac death or myocardial infarction (MI). Total cardiovascular costs were summed (discounted and inflation-corrected) throughout follow-up. A cost-effectiveness ratio was calculated as the incremental cost per life-year saved (LYS). In higher-risk patients (≥ 2% annual event risk), SPECT ischaemia was associated with earlier and greater utilization of coronary revascularization (P < 0.0001), resulting in an incremental cost-effectiveness ratio of $32 381/LYS. Conclusion: Health care policies aimed at allocating limited resources can be effectively guided by applying clinical and economic outcomes evidence. A strategy aimed at cost-effective testing would support using echocardiography in low-risk patients with suspected coronary disease, whereas higher-risk patients benefit from referral to SPECT imaging.
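The incremental cost-effectiveness ratio reported above follows the standard definition (incremental cost divided by incremental effectiveness); the Python sketch below uses hypothetical numbers, not the study's data:

    def icer(cost_new, cost_ref, ly_new, ly_ref):
        # Incremental cost-effectiveness ratio: extra cost per life-year saved (LYS).
        return (cost_new - cost_ref) / (ly_new - ly_ref)

    # Hypothetical example: a strategy that costs $4000 more per patient and adds
    # 0.12 life-years yields an ICER of about $33 333 per LYS.
    print(icer(cost_new=14_000, cost_ref=10_000, ly_new=9.12, ly_ref=9.00))

A ratio below a willingness-to-pay threshold (commonly cited near $50 000/LYS) is conventionally read as cost-effective.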
Abstract:
In this article, we propose a framework, namely Prediction-Learning-Distillation (PLD), for interactive document classification and distilling misclassified documents. Whenever a user points out misclassified documents, the PLD learns from the mistakes and identifies the same mistakes in all other classified documents. The PLD then applies this learning to future classifications. If the classifier fails to accept relevant documents or to reject irrelevant documents for certain categories, PLD assigns those documents as new positive/negative training instances. The classifier can then remedy its weaknesses by learning from these new training instances. Our experimental results demonstrate that the proposed algorithm can learn from user-identified misclassified documents and then successfully distill the remaining ones.
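A rough sketch of the feedback loop described above, with hypothetical names (not the paper's implementation): user-flagged misclassifications are folded back into the training set as new positive/negative instances and the classifier is refitted.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    # A simple text classifier standing in for the PLD's underlying model.
    classifier = make_pipeline(TfidfVectorizer(), SGDClassifier(loss="log_loss"))

    def pld_update(classifier, train_docs, train_labels, flagged_docs, corrected_labels):
        # Fold user-identified misclassified documents back into the training
        # set as new positive/negative instances, then retrain.
        train_docs.extend(flagged_docs)
        train_labels.extend(corrected_labels)
        classifier.fit(train_docs, train_labels)
        return classifier

The 'distillation' step would then re-run the updated classifier over the already-classified corpus to find and correct the same mistakes elsewhere.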
Abstract:
Background. Stress myocardial contrast echo (MCE) is technically challenging with exercise (Ex) because of cardiac movement and the short duration of hyperemia. Vasodilators overcome these limitations but are less potent for inducing abnormal wall motion (WM). We sought to determine whether a combined dipyridamole (DI; 0.56 mg/kg i.v. over 4 min) and Ex stress protocol would enable MCE to provide incremental benefit to WM analysis for detection of CAD. Methods. Standard echo images were followed by real-time MCE at rest and following stress in 85 pts, 70 undergoing quantitative coronary angiography and 15 low-risk pts. WMA from standard and LV opacification images, and then myocardial perfusion, were assessed sequentially in a blinded fashion. A subgroup of 13 pts also underwent Ex alone, to assess the contribution of DI to quantitative myocardial flow reserve (MFR). Results. Significant (>50%) stenoses were present in 43 pts, involving 69 territories. Addition of MCE improved SE sensitivity for detection of CAD (91% versus 74%, P = 0.02) and gave better appreciation of disease extent (87% versus 65% of territories, P = 0.003), with a non-significant reduction in specificity. In 55 territories subtended by a significant stenosis but with no resting WM abnormality, the ability to identify ischemia was also significantly increased by MCE (82% versus 60%, P = 0.002). MFR was lower with Ex alone than with DI-Ex stress (2.4 ± 1.6 versus 4.0 ± 1.9, P = 0.05), suggesting that prolongation of hyperemia with DI may be essential to the results. Conclusions. Dipyridamole-exercise MCE adds significant incremental benefit to standard SE, with improved diagnostic sensitivity and more accurate estimation of the extent of CAD.